HDP 2.0 map/reduce
Hi all,
I have a set of 22 documents in text form loaded into HDFS. When I run a MapReduce streaming job on them from the command line, processing the 22 text files takes 4 minutes 31 seconds. How can I speed up the MapReduce job so that these files finish processing in 5-10 seconds?
What changes do I need to make in Ambari/Hadoop?
I have allocated 2 GB of memory to YARN and 400 GB to HDFS.
Default virtual memory for a job map task = 341 MB
Default virtual memory for a job reduce task = 683 MB
Map-side sort buffer memory = 136 MB
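As far as I can tell, those Ambari settings correspond to the following Hadoop properties (the property-name mapping is my assumption from the Ambari labels; values are what I currently see):

```xml
<!-- mapred-site.xml: assumed mapping of the Ambari memory settings above -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>341</value> <!-- memory per map task -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>683</value> <!-- memory per reduce task -->
</property>
<property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>136</value> <!-- map-side sort buffer -->
</property>
```

Please correct me if these are not the right properties to tune.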
Also, when a job is running, I get an HBase error (the RegionServer goes down) and the Hive Metastore status service check times out.
Thanks & regards,
Bodla Dharani Kumar,