Reg: Multinode and streaming job performance


    Dharanikumar Bodla
    Participant

    Hi to all,
    Good morning,

    I have a set of 22 documents in the form of text files, 20 MB in size, loaded into HDFS. When I run a Hadoop Streaming map/reduce job on them from the command line, it takes 4 minutes 31 seconds to stream the 22 text files. How can I make the map/reduce process run as fast as possible, so that these text files finish in 5-10 seconds?
    What changes do I need to make in Ambari for Hadoop?
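    For reference, the job I am running is along these lines. The mapper and reducer below are a minimal word-count sketch rather than my exact scripts, and the streaming jar path is an assumption that can differ between HDP versions:

        #!/usr/bin/env python
        # mapper.py -- minimal Hadoop Streaming mapper (word-count sketch).
        # Reads text lines on stdin and emits "word<TAB>1" pairs on stdout.
        import sys

        for line in sys.stdin:
            for word in line.split():
                print("%s\t%d" % (word, 1))

        #!/usr/bin/env python
        # reducer.py -- minimal Hadoop Streaming reducer (a separate file).
        # Streaming delivers input sorted by key, so counts per word arrive contiguously.
        import sys

        current_word, count = None, 0
        for line in sys.stdin:
            word, value = line.rstrip("\n").split("\t", 1)
            if word == current_word:
                count += int(value)
            else:
                if current_word is not None:
                    print("%s\t%d" % (current_word, count))
                current_word, count = word, int(value)
        if current_word is not None:
            print("%s\t%d" % (current_word, count))

        # Submitted roughly like this (jar path from memory; it may differ on your HDP build):
        #   hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
        #       -input /user/me/docs -output /user/me/out \
        #       -mapper mapper.py -reducer reducer.py \
        #       -file mapper.py -file reducer.py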
    My current settings are:
    • cores = 2
    • 2 GB of memory allocated to YARN, and 400 GB for HDFS
    • default virtual memory for a job map task = 341 MB
    • default virtual memory for a job reduce task = 683 MB
    • map-side sort buffer memory = 136 MB
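    As a rough sanity check (my own arithmetic and assumptions, not values reported by the cluster), I tried to estimate how many map containers can actually run at once with these settings:

        # Rough capacity arithmetic for the settings listed above.
        # Numbers are copied from my Ambari defaults; the interpretation is my assumption.
        yarn_memory_mb = 2048   # memory handed to YARN on the node
        map_task_mb    = 341    # default virtual memory per map task
        cores          = 2

        maps_by_memory = yarn_memory_mb // map_task_mb   # 6 map containers fit in memory
        parallel_maps  = min(maps_by_memory, cores)      # but only ~2 truly run on 2 cores

        num_files = 22          # each small text file becomes at least one map task
        waves = -(-num_files // parallel_maps)           # ceil(22 / 2) = 11 waves of maps
        print(maps_by_memory, parallel_maps, waves)

    If that reading is right, the 22 map tasks run in about 11 waves, and every wave pays container startup cost, so I suspect most of the 4 minutes 31 seconds is overhead rather than processing. Is that the correct way to interpret it?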
    Also, when running a job, HBase fails with a RegionServer going down, and the Hive Metastore status service check times out. On the multinode cluster, when I run the same job, it fails with a container error.

    Thanks & regards,
    Bodla Dharani Kumar,
