Home Forums HDP on Linux – Installation YARN and MapReduce Memory Configuration Settings on AWS


    #52877

    Rodulfo
    Participant

    I'm having a lot of trouble getting a single-node Hadoop install to work on AWS. I'm using an m1.large instance (2 CPU cores, 7.5 GB RAM, 1 disk). I've already tried the HDP 2.1 and HDP 2.0 stacks; with the default configuration the smoke tests pass, but Pig scripts don't run. I once solved a similar problem by changing the YARN memory settings, but this time that approach hasn't worked. Unfortunately, I didn't write down the values I used when it did work.

    According to the YARN utility script recommended in:

    http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.1/bk_installing_manually_book/content/rpm-chap1-11.html
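
    For reference, I ran the script roughly like this (yarn-utils.py is the script name used in that doc; -c is cores, -m is memory in GB, -d is disks, and -k is whether HBase is installed):

    python yarn-utils.py -c 2 -m 7 -d 1 -k True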

    I get the following results:

    Using cores=2 memory=7GB disks=1 hbase=True
    Profile: cores=2 memory=4096MB reserved=3GB usableMem=4GB disks=1
    Num Container=3
    Container Ram=1024MB
    Used Ram=3GB
    Unused Ram=3GB
    yarn.scheduler.minimum-allocation-mb=1024
    yarn.scheduler.maximum-allocation-mb=3072
    yarn.nodemanager.resource.memory-mb=3072
    mapreduce.map.memory.mb=1024
    mapreduce.map.java.opts=-Xmx819m
    mapreduce.reduce.memory.mb=2048
    mapreduce.reduce.java.opts=-Xmx1638m
    yarn.app.mapreduce.am.resource.mb=2048
    yarn.app.mapreduce.am.command-opts=-Xmx1638m
    mapreduce.task.io.sort.mb=409
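
    For reference, this is how those values map onto the config files: the yarn.* properties belong in yarn-site.xml and the mapreduce.* ones in mapred-site.xml (the hand-edited sketch below is only illustrative; on an Ambari-managed cluster you would set them through the UI instead):

    <!-- yarn-site.xml: 3 GB total for containers, 1 GB minimum allocation -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>3072</value>
    </property>
    <property>
      <name>yarn.scheduler.minimum-allocation-mb</name>
      <value>1024</value>
    </property>

    <!-- mapred-site.xml: 1 GB map containers with a heap of about 0.8x the container size -->
    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>1024</value>
    </property>
    <property>
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx819m</value>
    </property>
    <!-- the reduce and AM properties from the script output follow the same pattern -->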

    I made those changes, but then the MapReduce and other smoke tests fail and the jobs still don't run. I also tried several other configurations (8 containers of 512 MB, etc.) with no luck.
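
    For example, the 8-container attempt (keeping the ratios the script uses: heaps at about 0.8x the container size, reducers at twice the mapper size) came out roughly like this, though my exact values may have differed:

    yarn.scheduler.minimum-allocation-mb=512
    yarn.nodemanager.resource.memory-mb=4096
    mapreduce.map.memory.mb=512
    mapreduce.map.java.opts=-Xmx409m
    mapreduce.reduce.memory.mb=1024
    mapreduce.reduce.java.opts=-Xmx819m
    yarn.app.mapreduce.am.resource.mb=1024
    yarn.app.mapreduce.am.command-opts=-Xmx819m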

    Can anyone suggest a working YARN and MapReduce memory configuration for a single node on an AWS m1.large?

    I'd greatly appreciate any help.
    Rodulfo
