HDP on Windows – Other Forum

Retrying connect to server: 0.0.0.0/0.0.0.0:8030

  • #53372
    Tony Huang
    Participant

We have set up an HDP 2.0 Hadoop cluster with 1 master node and 1 slave node.
Running a simple MapReduce program is OK.
The MapReduce code runs fine if the MR ApplicationMaster is started on the master node,
but it hangs if the MR ApplicationMaster is started on the slave node.

    The logs are as below:
    2014-05-11 20:10:59,795 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8030
    2014-05-11 20:11:01,838 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-05-11 20:11:03,851 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-05-11 20:11:05,894 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-05-11 20:11:07,922 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-05-11 20:11:09,966 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-05-11 20:11:11,994 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-05-11 20:11:14,022 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-05-11 20:11:16,065 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-05-11 20:11:18,093 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

    …….
Does anyone have a clue about this?


  • #53373

    How have you installed the cluster? Manually or using Ambari?

    In any case, this seems like a misconfiguration. Can you set "yarn.resourcemanager.hostname" to match your hostname and restart your cluster?
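For reference, a minimal yarn-site.xml fragment for this setting might look like the following (the hostname here is a placeholder; substitute the actual master node's hostname):

```xml
<property>
  <name>yarn.resourcemanager.hostname</name>
  <!-- placeholder: replace with the real ResourceManager hostname -->
  <value>master-node</value>
</property>
```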

    #53524
    Tony Huang
    Participant

    yarn.resourcemanager.hostname is set correctly in yarn-site.xml after installation.
    I installed HDP 2.0 manually on each node.

    #53525

    Okay, do you have yarn.resourcemanager.scheduler.address explicitly set to 0.0.0.0:8030?
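For context: in Hadoop 2.x, yarn-default.xml derives the scheduler address from the hostname property, so it falls back to 0.0.0.0:8030 only when that property resolves to the default. An explicit override in yarn-site.xml (hostname is a placeholder) would look like:

```xml
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <!-- default is ${yarn.resourcemanager.hostname}:8030 -->
  <value>master-node:8030</value>
</property>
```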

    #53632
    Tony Huang
    Participant

    We use Windows Server 2008 R2 + HDP 2.0 for Windows.
    I did not set yarn.resourcemanager.scheduler.address.

    After explicitly setting yarn.resourcemanager.scheduler.address in yarn-site.xml, the result is the same.
    Is it possible that this is a bug in HDP 2.0 for Windows? How can it be resolved?

    #53784
    Tony Huang
    Participant

    The root cause is a code issue.
    Original code:
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", hdfsUri);
    conf.set("mapreduce.framework.name", "yarn");
    conf.set("yarn.resourcemanager.address", yarnip + ":" + 8032);

    Add one line:
    conf.set("yarn.resourcemanager.scheduler.address", yarnip + ":" + 8030);
    Then it is okay.

    By default, 0.0.0.0:8030 is used for the scheduler. The slave has no scheduler, so the MR ApplicationMaster cannot work if it is started on the slave node.
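A self-contained sketch of the fix above, using a plain HashMap as a stand-in for Hadoop's Configuration so it runs without Hadoop on the classpath (yarnip, hdfsUri, and the hostnames are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class YarnClientConfig {
    // Build the minimal client-side configuration, including the
    // scheduler address that would otherwise default to 0.0.0.0:8030.
    static Map<String, String> buildConf(String hdfsUri, String yarnip) {
        Map<String, String> conf = new HashMap<>();
        conf.put("fs.defaultFS", hdfsUri);
        conf.put("mapreduce.framework.name", "yarn");
        conf.put("yarn.resourcemanager.address", yarnip + ":" + 8032);
        // The one-line fix: point the ApplicationMaster at the real
        // scheduler host instead of the 0.0.0.0:8030 fallback.
        conf.put("yarn.resourcemanager.scheduler.address", yarnip + ":" + 8030);
        return conf;
    }

    public static void main(String[] args) {
        Map<String, String> conf =
            buildConf("hdfs://master-node:8020", "master-node");
        System.out.println(conf.get("yarn.resourcemanager.scheduler.address"));
    }
}
```

With a real Hadoop client, the same four `conf.set(...)` calls would go on an `org.apache.hadoop.conf.Configuration` object before submitting the job.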

