HDP on Linux – Installation Forum
YARN and MapReduce Memory Configuration Settings on AWS
I'm having a lot of trouble getting a single-node Hadoop installation to work on AWS. I'm using an m1.large instance (2 CPU cores, 7.5 GB RAM, 1 disk). I have already tried the HDP 2.1 and HDP 2.0 stacks; with the default configuration the smoke tests pass, but Pig scripts don't run. I once solved a similar problem by changing the YARN memory settings, but this time the same approach doesn't work. Unfortunately, I didn't write down the values I used when it did work.
According to the YARN utility script recommended in:
I get the following results:
Using cores=2 memory=7GB disks=1 hbase=True
Profile: cores=2 memory=4096MB reserved=3GB usableMem=4GB disks=1
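For context, the sizing heuristic behind that utility script can be sketched roughly as follows (this is my reading of the HDP tuning guide, not the script itself; the reserved-memory and minimum-container-size values are assumptions):

```python
# Sketch of the YARN sizing heuristic from the HDP tuning guide (an
# approximation, not the actual hdp-configuration-utils.py source).
from math import ceil

def yarn_settings(cores, ram_mb, disks, min_container_mb):
    # Number of containers: bounded by CPU, by disk spindles, and by RAM.
    containers = int(min(2 * cores, ceil(1.8 * disks), ram_mb // min_container_mb))
    ram_per_container = max(min_container_mb, ram_mb // containers)
    return {
        "yarn.nodemanager.resource.memory-mb": containers * ram_per_container,
        "yarn.scheduler.minimum-allocation-mb": ram_per_container,
        "yarn.scheduler.maximum-allocation-mb": containers * ram_per_container,
        "mapreduce.map.memory.mb": ram_per_container,
        "mapreduce.reduce.memory.mb": 2 * ram_per_container,
        "yarn.app.mapreduce.am.resource.mb": 2 * ram_per_container,
    }

# m1.large: 7.5 GB total; reserving ~3 GB for the OS and HBase leaves ~4 GB
# usable, and for 4-8 GB machines the guide suggests 512 MB minimum containers.
settings = yarn_settings(cores=2, ram_mb=4096, disks=1, min_container_mb=512)
```

With these inputs the heuristic arrives at 2 containers of 2048 MB each, which matches the `memory=4096MB` line in the script output above.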
I applied those changes, but then MapReduce and other smoke tests fail and the jobs still don't run. I also tried several other configurations (8 containers of 512 MB, etc.) with no luck.
Can anybody suggest a correct YARN and MapReduce memory configuration for a single node on an AWS m1.large?
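For reference, here is the kind of configuration I mean, expressed as the standard YARN/MapReduce properties (the values shown are only a sketch based on 4 × 1024 MB containers, which I assume might leave room for the ApplicationMaster and task containers to run concurrently on a single node; they are not known-good values):

```xml
<!-- yarn-site.xml (sketch; values are assumptions, not a verified fix) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>
</property>

<!-- mapred-site.xml (heap sizes set to ~80% of container size) -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx819m</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1638m</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>2048</value>
</property>
```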
I'd highly appreciate any help.
The forum ‘HDP on Linux – Installation’ is closed to new topics and replies.