MapReduce Forum

hadoop streaming failed with error code 1

  • #44142
    Anupam Gupta
    Participant

    Hi,

    I am trying to write my first MapReduce program, which doubles the integers from 1 to 100.

    library(rmr2)  # provides to.dfs(), mapreduce(), and from.dfs()

    ints = to.dfs(1:100)
    calc = mapreduce(input = ints,
                     map = function(k, v) cbind(v, 2 * v))
    from.dfs(calc)

    However, I get an error message:
    13/11/19 16:02:36 INFO streaming.StreamJob: killJob…
    Streaming Command Failed!
    Error in mr(map = map, reduce = reduce, combine = combine, in.folder = if (is.list(input)) { :
    hadoop streaming failed with error code 1

    I am using CentOS and a 2-node cluster, with RStudio running on the master node.

    However, after reassigning the environment variables (HADOOP_CMD, HADOOP_CONF, HADOOP_STREAMING, etc.) inside R over a PuTTY session, I am able to run the program from the R command line. When I run the same program in RStudio, I still get the error above.
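
    The reassignment in the PuTTY R session looks roughly like this (the paths below are only placeholders, not the actual ones on my cluster):

    # set the Hadoop-related variables inside the R session so rmr2 can find them
    Sys.setenv(HADOOP_CMD = "/usr/bin/hadoop")                                       # example path to the hadoop binary
    Sys.setenv(HADOOP_STREAMING = "/usr/lib/hadoop-mapreduce/hadoop-streaming.jar")  # example path to the streaming jar
    Sys.setenv(HADOOP_CONF = "/etc/hadoop/conf")                                     # example config directory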

    Please help.

    Thanks,
    Vedant


  • #44231
    abdelrahman
    Moderator

    Hi Anupam,

    Have you tried sourcing the environment variables before starting R?
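
    RStudio (especially RStudio Server) may not inherit variables that are only exported in your login shell, because it does not start R through that shell. One option, just as a sketch (the file location and paths are examples, not a confirmed fix), is to set them in a startup file that R always reads, such as Rprofile.site:

    # e.g. <R_HOME>/etc/Rprofile.site -- exact location depends on your R install (example only)
    Sys.setenv(HADOOP_CMD       = "/usr/bin/hadoop",                                 # example path
               HADOOP_STREAMING = "/usr/lib/hadoop-mapreduce/hadoop-streaming.jar",  # example path
               HADOOP_CONF      = "/etc/hadoop/conf")                                # example path

    That way every new R session, including the ones RStudio spawns, picks them up.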

    Thanks
    -Abdelrahman

    #46186
    Anupam Gupta
    Participant

    Yes, I have set the following environment variables:
    HADOOP_HOME
    HADOOP_CMD
    JAVA_HOME
    HADOOP_MAPRED_HOME
    HADOOP_STREAMING
    HADOOP_CONF
    LD_LIBRARY_PATH

    But I am still getting the same error message.
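
    As a quick check of whether the RStudio session actually sees them, something like this run inside RStudio should show their values (an empty string means the variable is not set in that session):

    # print the Hadoop-related variables as the RStudio R session sees them
    Sys.getenv(c("HADOOP_CMD", "HADOOP_STREAMING", "HADOOP_CONF", "HADOOP_HOME"))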

    Thanks,
    Anupam

    #48792
    Participant

    I did the RHadoop tutorial correctly for the first time, and now I'm getting the same error.

