MapReduce Forum

MapReduce Race Condition — Big Job

  • #51197
    Upen K
    Participant

    We are running a huge M/R job on HDP 2.0.6. The cluster has about 100 nodes, and the job is big enough to consume all the containers in the cluster. When the reduce phase begins before the map phase has finished, we hit a situation where reducers are running and waiting for output from unfinished mappers, but the scheduler doesn't preempt reduce slots so the mappers can finish. With the pending mappers unable to complete, the job is now deadlocked. Looks like a serious bug. Does anyone have a solution to this problem?

    Thanks
    U

  • #51198
    Kevin Risden
    Participant

    You need to increase mapreduce.job.reduce.slowstart.completedmaps so that the reducers don’t start until a higher percentage of the mappers complete.
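    For reference, the default for that property in Hadoop 2.x is 0.05, i.e. reducers start launching once only 5% of the maps have finished, which is why they grab containers so early on a full cluster. You can raise it per job in the driver; a minimal sketch (the 0.95 value and job name are just illustrations):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.mapreduce.Job;

        // Delay reducer launch until 95% of map tasks have completed
        // (the default, 0.05, launches reducers after only 5% of maps).
        Configuration conf = new Configuration();
        conf.setFloat("mapreduce.job.reduce.slowstart.completedmaps", 0.95f);
        Job job = Job.getInstance(conf, "big-job");

    If the driver uses ToolRunner, the same property can also be passed on the command line with -D mapreduce.job.reduce.slowstart.completedmaps=0.95.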

    #51199
    Sheetal Dolas
    Moderator

    Set your slow-start threshold higher, e.g.:
    mapreduce.job.reduce.slowstart.completedmaps=0.8

    This way your reducers won't start until the given percentage of mappers have finished.

    Additionally, you should avoid jobs that need a very high number of mappers. Adjust your split sizes to do the same work with fewer mappers.
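    For example, with the new-API FileInputFormat you can raise the minimum split size so each mapper reads more data (a sketch; the 512 MB figure is only an illustration, tune it against your HDFS block size and data volume):

        import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

        // FileInputFormat computes split size as max(minSize, min(maxSize, blockSize)),
        // so a minimum above the HDFS block size yields larger splits and fewer map
        // tasks. 'job' is the org.apache.hadoop.mapreduce.Job instance from the driver.
        FileInputFormat.setMinInputSplitSize(job, 512L * 1024 * 1024); // 512 MB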

    #51201
    Upen K
    Participant

    mapreduce.job.reduce.slowstart.completedmaps=0.8 is a hack, not a permanent solution. If the job is very big, even 0.8 may not serve the purpose.

    Reducing the number of mappers by increasing the split size will lead to more spills to local disk.

    This is a basic M/R feature: if reducers are waiting on mappers to finish, the scheduler should preempt some of the reducers. We are running a much bigger job on another cluster with Cloudera CDH3u6 (very old), and it runs fine.
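    For what it's worth, the Hadoop 2.x MR ApplicationMaster does ship reducer-preemption logic for exactly this hung-map situation. If I remember the knobs correctly, these two properties (shown with what I believe are the defaults) control how far reducers ramp up before all maps finish and how many of them can be preempted back, so they are worth checking against your HDP release:

        yarn.app.mapreduce.am.job.reduce.rampup.limit=0.5
        yarn.app.mapreduce.am.job.reduce.preemption.limit=0.5

    If that preemption is not kicking in on this cluster, that may be the real bug to chase rather than tuning slow start.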
