Hadoop streaming does not work properly with WebHCat


This topic contains 1 reply, has 1 voice, and was last updated by  Tracy Li 1 year, 10 months ago.

  • Creator
  • #38305

    Tracy Li

    Hi Horton,

    I ran into an issue running Hadoop streaming through the WebHCat REST API. Could you please help me confirm whether this is a WebHCat bug?

    It works fine when I run the hadoop command directly:
    hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming- -input /user/hue/streaming/zbp07detail.txt -output /user/hue/streaming/output -mapper ./map.py -reducer ./reduce.py -file ./map.py -file ./reduce.
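    For context, the actual map.py/reduce.py scripts are not shown in this thread, but a streaming mapper/reducer pair follows a simple stdin/stdout contract: the mapper emits tab-separated key/value lines, and the reducer receives them sorted by key. A minimal hypothetical pair (word count, purely illustrative) might look like:

    ```python
    #!/usr/bin/env python
    # Hypothetical map.py / reduce.py pair for Hadoop streaming (word count).
    # The real scripts from this thread are not shown; this only illustrates
    # the stdin/stdout contract that streaming mappers and reducers follow.
    import sys


    def map_lines(lines):
        """Mapper: emit one 'word<TAB>1' line per word in the input."""
        for line in lines:
            for word in line.strip().split():
                yield "%s\t1" % word


    def reduce_lines(lines):
        """Reducer: input arrives sorted by key, so counts can be summed
        in a single pass that watches for key changes."""
        current, count = None, 0
        for line in lines:
            word, _, value = line.strip().partition("\t")
            if word == current:
                count += int(value)
            else:
                if current is not None:
                    yield "%s\t%d" % (current, count)
                current, count = word, int(value)
        if current is not None:
            yield "%s\t%d" % (current, count)


    if __name__ == "__main__":
        # As map.py, pipe stdin through map_lines; a separate reduce.py
        # would pipe stdin through reduce_lines instead.
        for out in map_lines(sys.stdin):
            print(out)
    ```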

    Then I uploaded those two files to HDFS under /user/hue/mapreduce and ran the REST call as follows:
    curl.exe -s -d user.name=hue -d input=/user/hue/mapreduce/inputdata.txt -d output=/user/hue/mapreduce/result/ -d mapper=map.py -d reducer=reduce.py -d file=/user/hue/mapreduce/map.py -d file=/user/hue/mapreduce/recude.py ''

    It doesn’t work as expected. Here is the error log I captured from the MapReduce job log:
    templeton: copy hdfs://sandbox.hortonworks.com:8020/apps/webhcat/hadoop-streaming.jar => hadoop-streaming.jar
    templeton: starting [/usr/bin/hadoop, jar, hadoop-streaming.jar, -Dmapreduce.job.credentials.binary=/hadoop/yarn/usercache/hue/appcache/application_1380215570194_0022/container_1380215570194_0022_01_000002/container_tokens, -input, /user/hue/mapreduce/inputdata.txt, -output, /user/hue/mapreduce/result/, -mapper, map.py, -reducer, reduce.py, -file/user/hue/mapreduce/map.py, -file/user/hue/mapreduce/recude.py]
    With environment variables: HADOOP_USER_NAME=hue

    Try -help for more information
    Streaming Command Failed!
    templeton: job failed with exit code 1

    Log Type: stdout
    Log Length: 2450
    Usage: $HADOOP_PREFIX/bin/hadoop jar hadoop-streaming.jar [options]
    -input DFS input file(s) for the Map step.
    -output DFS output directory for the Reduce step.
    -mapper Optional. Command to be run as mapper.
    -combiner Optional. Command to be run as combiner.
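    Comparing the templeton "starting" line above with a working invocation suggests the likely culprit: -input, -output, -mapper, and -reducer each appear as two separate argv tokens (flag, then value), but the file arguments arrive fused into single tokens like "-file/user/hue/mapreduce/map.py". The streaming jar would not recognize such a token as an option, which matches the usage dump and exit code 1. A small sketch of that argv (copied from the log) makes the difference visible:

    ```python
    # argv copied from the templeton "starting" log line above.
    # The paired flags occupy two list entries each, but the -file flags
    # arrive fused with their values into single tokens, with no space.
    argv = [
        "-input", "/user/hue/mapreduce/inputdata.txt",
        "-output", "/user/hue/mapreduce/result/",
        "-mapper", "map.py",
        "-reducer", "reduce.py",
        "-file/user/hue/mapreduce/map.py",     # flag fused with value
        "-file/user/hue/mapreduce/recude.py",  # 'recude.py' verbatim from the log
    ]

    # A standalone "-file" flag never appears in the logged argv:
    print("-file" in argv)                            # False
    print(any(a.startswith("-file/") for a in argv))  # True
    ```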



  • Author
  • #38307

    Tracy Li

    BTW, I think the -file parameter is not supported by the WebHCat REST API. Could anybody please help me figure this out and let me know?
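    If I am reading the WebHCat docs correctly, the mapreduce/streaming endpoint also accepts a repeated arg parameter for extra program arguments, which might serve as a workaround if the file parameter is mishandled. A sketch of that shape is below; the endpoint URL is a placeholder (the real one is elided as '' in the call above, though 50111 is the default WebHCat port), and whether arg is appended correctly for streaming jobs is an assumption to verify:

    ```shell
    #!/bin/sh
    # Hypothetical shape of the WebHCat streaming call using repeated
    # `arg` parameters instead of `file`. Placeholder host/port; this
    # script only builds and echoes the command rather than executing it.
    WEBHCAT_URL="http://sandbox.hortonworks.com:50111/templeton/v1/mapreduce/streaming"

    CMD="curl -s \
      -d user.name=hue \
      -d input=/user/hue/mapreduce/inputdata.txt \
      -d output=/user/hue/mapreduce/result/ \
      -d mapper=map.py \
      -d reducer=reduce.py \
      -d arg=-file -d arg=/user/hue/mapreduce/map.py \
      -d arg=-file -d arg=/user/hue/mapreduce/reduce.py \
      $WEBHCAT_URL"

    # Echo rather than execute, since this is only a shape check:
    echo "$CMD"
    ```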
