HDP on Windows – Installation Forum

Pig Map Reduce Error

  • #47898
    Manish Sharma
    Participant

    Hi All,

    I am working on processing MongoDB data with Pig MapReduce, using Pig Latin. I have successfully imported the MongoDB data into the Hadoop cluster, but when I try to run a group-by operation I get the following error.

    =============================================================================================================================

    2014-01-31 19:53:22,013 [Thread-19] INFO org.apache.hadoop.mapred.ReduceTask – attempt_local1860096372_0003_r_0000 Need another 1 map output(s) where 0 is already in progress
    2014-01-31 19:53:22,013 [Thread-19] INFO org.apache.hadoop.mapred.ReduceTask – attempt_local1860096372_0003_r_0000 Scheduled 0 outputs (0 slow hosts and 0 dup hosts)
    2014-01-31 19:53:26,115 [communication thread] INFO org.apache.hadoop.mapred.LocalJobRunner – reduce > copy >
    2014-01-31 19:53:32,116 [communication thread] INFO org.apache.hadoop.mapred.LocalJobRunner – reduce > copy >
    2014-01-31 19:53:35,117 [communication thread] INFO org.apache.hadoop.mapred.LocalJobRunner – reduce > copy >
    2014-01-31 19:53:41,118 [communication thread] INFO org.apache.hadoop.mapred.LocalJobRunner – reduce > copy >

    =============================================================================================================================
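    For reference, the group-by step is along the lines of the sketch below. This is only a minimal Pig Latin sketch; the HDFS path, field names and delimiter are placeholders, not the actual ones from my job:

    -- load the data previously imported from MongoDB into HDFS (placeholder path and schema)
    raw = LOAD '/user/manish/mongo_import' USING PigStorage('\t')
          AS (id:chararray, category:chararray);
    -- the GROUP BY is what forces the reduce phase shown hanging in the log above
    grouped = GROUP raw BY category;
    counts = FOREACH grouped GENERATE group, COUNT(raw) AS n;
    DUMP counts;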

    Could anyone please tell me why this is happening?

    Thanks
    MANISH


  • Replies
  • #48237
    Robert Molina
    Moderator

    Hi Manish,
    Does the job finish? Do the jobtracker logs or the task attempt logs have any information?

    Regards,
    Robert

  • #48239
    Manish Sharma
    Participant

    Hi,

    I made a couple of changes and after that it worked for me. The changes are:

    <property>
      <name>mapred.job.tracker</name>
      <value>hdfs://localhost:9000</value>
      <final>false</final> <!-- changed from true to false -->
    </property>

    <property>
      <name>fs.default.name</name>
      <!-- cluster variant -->
      <value>hdfs://MASTER:8020</value>
      <final>false</final> <!-- changed from true to false -->
    </property>

    After these changes it worked for me.
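    For context: a property marked <final>true</final> in the site configuration cannot be overridden by job-level settings, while <final>false</final> lets a client such as Pig supply those values when it submits a job. As a rough sketch (reusing the values from the snippet above), such an override from inside a Pig script would look like:

    -- per-script property overrides; these only take effect when the
    -- corresponding site properties are not marked final
    set fs.default.name 'hdfs://MASTER:8020';
    set mapred.job.tracker 'hdfs://localhost:9000';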

    Thanks
    MANISH

