Home Forums HDP on Windows – Non Installation issues Retrying connect to server: 0.0.0.0/0.0.0.0:8030

This topic contains 5 replies, has 2 voices, and was last updated by Tony Huang 6 months, 2 weeks ago.

  • Creator
    Topic
  • #53372

    Tony Huang
    Participant

We have set up an HDP 2.0 Hadoop cluster with 1 master node and 1 slave node.
Running a simple MapReduce program works.
MapReduce jobs run fine when the MR ApplicationMaster is started on the master node,
but they hang when the MR ApplicationMaster is started on the slave node.

    The logs are as below:
    2014-05-11 20:10:59,795 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8030
    2014-05-11 20:11:01,838 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-05-11 20:11:03,851 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-05-11 20:11:05,894 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-05-11 20:11:07,922 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-05-11 20:11:09,966 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-05-11 20:11:11,994 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-05-11 20:11:14,022 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-05-11 20:11:16,065 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
    2014-05-11 20:11:18,093 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

    …….
Does anyone have a clue about this?

Viewing 5 replies - 1 through 5 (of 5 total)


  • Author
    Replies
  • #53784

    Tony Huang
    Participant

The root cause is a code issue.
Original code:
Configuration conf = new Configuration();
conf.set("fs.defaultFS", hdfsUri);
conf.set("mapreduce.framework.name", "yarn");
conf.set("yarn.resourcemanager.address", yarnip + ":" + 8032);

Add one line:
conf.set("yarn.resourcemanager.scheduler.address", yarnip + ":" + 8030);
Then it works.

By default, 0.0.0.0:8030 is used as the scheduler address. The slave node runs no scheduler, so the MR ApplicationMaster cannot work when started on the slave node.
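As an alternative to setting the address in client code, the same property can be set cluster-wide in yarn-site.xml on every node; a minimal sketch, assuming the ResourceManager runs on a host named rm-host (a placeholder, substitute your own hostname):

```xml
<!-- yarn-site.xml: rm-host is a placeholder for the actual ResourceManager hostname -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>rm-host</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>rm-host:8030</value>
</property>
```

With this in place on the slave node, an ApplicationMaster launched there should resolve the scheduler at rm-host:8030 instead of falling back to 0.0.0.0:8030.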

    #53632

    Tony Huang
    Participant

We use Windows Server 2008 R2 + HDP 2.0 for Windows.
I did not set yarn.resourcemanager.scheduler.address.

Later I explicitly set yarn.resourcemanager.scheduler.address in yarn-site.xml, but the result is the same.
Is it possible that this is a bug in HDP 2.0 for Windows? How can it be resolved?

    #53525

    Okay, do you have yarn.resourcemanager.scheduler.address explicitly set to 0.0.0.0:8030?

    #53524

    Tony Huang
    Participant

yarn.resourcemanager.hostname is set correctly in yarn-site.xml after installation.
I installed HDP 2.0 manually on each node.

    #53373

    How have you installed the cluster? Manually or using Ambari?

    In any case, this seems like a misconfiguration. Can you set "yarn.resourcemanager.hostname" to match your hostname and restart your cluster?
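For reference, the entry in yarn-site.xml would look something like this (rm-host.example.com is a placeholder hostname):

```xml
<!-- yarn-site.xml: set once, then restart the cluster -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>rm-host.example.com</value>
</property>
```

YARN derives the default ResourceManager addresses (including the scheduler address on port 8030) from this hostname, so setting it correctly usually avoids the 0.0.0.0 fallback.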
