
YARN Forum

Unable to run distributed shell on Yarn

  • #36571

    I am trying to run the distributed shell example on a YARN cluster:

    public void realClusterTest() throws Exception {
        System.setProperty("HADOOP_USER_NAME", "hdfs");
        String[] args = {
        };
        LOG.info("Initializing DS Client");
        Client client = new Client(new Configuration());
        boolean initSuccess = client.init(args);
        Assert.assertTrue(initSuccess);
        LOG.info("Running DS Client");
        boolean result = client.run();
        LOG.info("Client run completed. Result=" + result);
    }

    but it fails with:

    2013-09-17 11:45:28,338 INFO [main] distributedshell.Client ( - Got application report from ASM for, appId=11, clientToAMToken=null, appDiagnostics=Application application_1379338026167_0011 failed 2 times due to AM Container for appattempt_1379338026167_0011_000002 exited with exitCode: 1 due to: Exception from container-launch:
    at org.apache.hadoop.util.Shell.runCommand(
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(

    .Failing this attempt.. Failing the application., appMasterHost=N/A, appQueue=default, appMasterRpcPort=0, appStartTime=1379407525237, yarnAppState=FAILED, distributedFinalState=FAILED,, appUser=hdfs

    Here is what I see in server logs:

    2013-09-17 08:45:26,870 WARN nodemanager.DefaultContainerExecutor ( - Exception from container-launch with container ID: container_1379338026167_0011_02_000001 and exit code: 1
    at org.apache.hadoop.util.Shell.runCommand(
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(

    The question is how can I get more details to identify what is going wrong.

    PS: we are using HDP 2.0.5
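    (For reference: the extra detail being asked for here usually lives in the aggregated container logs, which the yarn CLI can fetch. A minimal sketch, assuming log aggregation is enabled and using the application id from the report above:)

```shell
# Fetch the aggregated container logs for the failed application.
# APP_ID comes from the application report above; log aggregation
# (yarn.log-aggregation-enable=true) must be on for this to return anything.
APP_ID="application_1379338026167_0011"
LOG_CMD="yarn logs -applicationId $APP_ID"
echo "$LOG_CMD"   # run this on a cluster node as the submitting user
```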

  • Author
  • #36606

    Looks like this is a general problem. We tried running Pig against our cluster and it fails with exactly the same exception.

    Here is the record regarding this in jobhistory UI. Server logs are nearly the same.
    Application ID: application_1379338026167_0045
    User: hdfs
    Name: PigLatin:DefaultJobName
    Type: MAPREDUCE
    Queue: default
    Start time: Tue, 17 Sep 2013 11:01:55 GMT
    Finish time: Tue, 17 Sep 2013 11:02:11 GMT
    State: FAILED
    Final status: FAILED


    Just in case it matters, I see that Hive runs its tasks without problems.


    Are you able to find anything in the AM container’s logs? You can get to those logs from the ResourceManager’s per-application page.

    Please also share what the NodeManager’s logs are showing before the above exception.



    Thanks a lot for your reply.

    Because of this forum’s limitations on file uploads and message size, I am putting all the requested info here:

    Please let me know if more info is needed.


    Unfortunately I am not able to get much from the logs.

    A few more questions:
    – The NM did say that logs were aggregated. Can you check the per-node file at /app-logs/hdfs/logs/application_1379338026167_0125/ on HDFS?
    – If you can’t find anything on HDFS, can you also check your local log dir /hadoop/yarn for the specific container?
    – Are basic MR jobs working? For example, you can run the standard MR examples. If those are also failing, it could point to a setup issue.

    We are working on better debugging for these AM crash failures, but I’d like to help you in any way possible.
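    (The standard-examples check mentioned above can be sketched as follows; the jar path is an assumption for an HDP 2.0.x layout and should be adjusted to your install:)

```shell
# Smoke-test the cluster with a stock MapReduce example job (pi estimator).
# If this also fails with the same container-launch error, the problem is
# cluster setup rather than the distributed shell client.
EXAMPLES_JAR="/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar"
PI_CMD="hadoop jar $EXAMPLES_JAR pi 2 10"
echo "$PI_CMD"   # run on a cluster node
```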


    Vinod, at least the MR tasks initiated by Hive run fine.


    I got to that directory. It contains two files, which are probably archived, but I was unable to open them. Below is the content from one of them:

    [binary data elided – the files are not plain text; the only readable fragments are container markers such as data:BCFile.indexgz, data:TFile.indexgz and data:TFile.metanone]
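    (Those files are unreadable because aggregated logs are stored in Hadoop’s binary TFile container format – hence the data:TFile.index markers – so the supported way to read them is the yarn CLI rather than opening the files directly. A sketch, assuming the application id from the HDFS path mentioned earlier:)

```shell
# Decode the aggregated TFile logs with the yarn CLI instead of cat-ing
# the raw HDFS files; APP_ID is taken from the /app-logs path above.
APP_ID="application_1379338026167_0125"
DECODE_CMD="yarn logs -applicationId $APP_ID"
echo "$DECODE_CMD"   # run as the user who submitted the application
```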


    Hi, any updates on this? I am having a similar issue with a job that ran fine under Hadoop 1.2.0.

    Wang Wei

    You need to set HADOOP_MAPRED_HOME in your environment variables. If that does not fix it, you need to set yarn.application.classpath or mapreduce.application.classpath directly.

    If that still does not fix it, you need to check the AM’s launch script (*container*.sh).
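    (For reference, a sketch of what the classpath property mentioned above might look like in mapred-site.xml; the exact entries depend on your distribution’s layout, so treat these values as placeholders:)

```xml
<!-- mapred-site.xml: sketch only; adjust the paths to your install -->
<property>
  <name>mapreduce.application.classpath</name>
  <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
</property>
```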

    Shane Jarvie


    I ran into a similar issue when running Java code that accessed HBase. For me, the issue was with the environment, as described below.

    My solution ended up being to provide the location of the HBase configuration files, like so:

    Configuration conf = new Configuration();
    conf.addResource(new Path("/etc/hbase/conf/hbase-site.xml"));
    HTable table = new HTable(conf, HBASE_TABLE_MAIN);

    You may need to do something similar with the files in the Hadoop conf directory.

    Hope that helps

The forum ‘YARN’ is closed to new topics and replies.