HDP on Linux – Installation Forum
Reg: Multinode and streaming job performance
Hi all,
I have a set of 22 documents in the form of text files, each about 20 MB in size, loaded into HDFS. When I run a Hadoop Streaming map/reduce job on them from the command line, it takes 4 minutes 31 seconds to process the 22 text files. How can I make the map/reduce processing as fast as possible, so that these files finish in 5-10 seconds?
What changes do I need to make in Ambari/Hadoop?
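For context, the streaming invocation is along these lines (the input/output paths and the mapper/reducer scripts below are placeholders, not the exact ones in use; the jar path is the usual HDP 2.x location):

# placeholder streaming job; per-job settings can be passed as -D options
hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
    -D mapreduce.job.reduces=2 \
    -input /data/input/docs \
    -output /data/output/result \
    -mapper mapper.py \
    -reducer reducer.py \
    -file mapper.py -file reducer.py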
My current setup:
- Cores: 2
- Memory allocated to YARN: 2 GB; HDFS capacity: 400 GB
- Default virtual memory for a map task: 341 MB
- Default virtual memory for a reduce task: 683 MB
- Map-side sort buffer memory: 136 MB
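As far as I understand, those Ambari values correspond to the standard mapred-site.xml properties below (this mapping is my assumption, worth double-checking):

<property>
    <name>mapreduce.map.memory.mb</name>
    <value>341</value>   <!-- default virtual memory per map task -->
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>683</value>   <!-- default virtual memory per reduce task -->
</property>
<property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>136</value>   <!-- map-side sort buffer -->
</property>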
Also, while a job is running, HBase errors out with the RegionServer going down, and the Hive Metastore status service check times out.
On the multinode cluster, the same job fails with a container error.
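To dig into the container failure, the full application logs can be pulled with the standard YARN CLI (the application ID below is a placeholder for the one printed when the job is submitted; log aggregation, yarn.log-aggregation-enable, must be on):

yarn logs -applicationId application_1234567890123_0001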
Thanks & regards,
Bodla Dharani Kumar,