HDP on Linux – Installation: HDP Cluster Failed to Connect Worker Node (DataNode, TaskTracker)

This topic contains 1 reply, has 2 voices, and was last updated by  Larry Liu 1 year, 5 months ago.

  • Topic #18746

    Hello,
    I am running into an error while configuring a two-node HDP cluster (master IP: 192.168.4.134, slave IP: 192.168.4.135).
    The following cluster properties were used to set up the master node.
    HDP_LOG_DIR=e:\hdp\logs
    HDP_DATA_DIR=e:\hdp\data
    NAMENODE_HOST=hdpmaster
    SECONDARY_NAMENODE_HOST=hdpmaster
    JOBTRACKER_HOST=hdpmaster
    HIVE_SERVER_HOST=hdpmaster
    OOZIE_SERVER_HOST=hdpmaster
    TEMPLETON_HOST=hdpmaster
    SLAVE_HOSTS=w2k8entr2
    DB_FLAVOR=derby
    DB_HOSTNAME=hdpmaster
    HIVE_DB_NAME=hive
    HIVE_DB_USERNAME=hive
    HIVE_DB_PASSWORD=hive
    OOZIE_DB_NAME=oozie
    OOZIE_DB_USERNAME=oozie
    OOZIE_DB_PASSWORD=oozie
    My intention is to make w2k8entr2 the datanode; see SLAVE_HOSTS=w2k8entr2 above.
    On the slave, I changed all of the node-name properties to w2k8entr2.
    Then, from the Hadoop command line on the master node, I ran `hadoop datanode` and got the following errors.

    Would you please help me out?
    Thanks,
    Mahabub

    INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
    SHUTDOWN_MSG: Shutting down DataNode at hdpmaster/192.168.4.134
    INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG: host = hdpmaster/192.168.4.134
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.1.0-SNAPSHOT
    STARTUP_MSG: build = git@github.com:hortonworks/hadoop-monarch.git on branch (no branch) -r 1cae347546c2c217eb92fccebcfd95708d5ff848; compiled by ‘jenkins’ on Sun Feb 24 23:01:41 Coordinated Universal Time 2013
    WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
    INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
    INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
    INFO org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads…
    INFO org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.BindException: Problem binding to /192.168.4.135:50010 : Cannot assign requested address: bind
    at org.apache.hadoop.ipc.Server.bind(Server.java:228)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:409)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.&lt;init&gt;(DataNode.java:304)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1587)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1526)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1544)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1670)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1687)
    Caused by: java.net.BindException: Cannot assign reque
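
    For context, this BindException can be reproduced outside Hadoop. The log shows the DataNode starting on hdpmaster (192.168.4.134) while trying to bind to the slave's address (192.168.4.135), and binding a socket to an IP that is not assigned to any local interface fails with exactly this error. A minimal Python sketch (203.0.113.5 is a documentation-only address, used here purely as a stand-in for "another machine's IP"):

```python
import errno
import socket

# Binding to an IP that is not assigned to any local interface fails,
# which is what happens when `hadoop datanode` runs on hdpmaster
# (192.168.4.134) but the DataNode is configured with the slave's
# address (192.168.4.135).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("203.0.113.5", 50010))  # 50010 is the default DataNode port
except OSError as e:
    # This is the Python-side analogue of Java's
    # "Cannot assign requested address" BindException.
    print(e.errno == errno.EADDRNOTAVAIL)
finally:
    s.close()
```

    In other words, the fix is not on the master at all: the DataNode process has to be started on the host that actually owns the configured address.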


  • Reply #18838

    Larry Liu
    Moderator

    Hi Mahabubur,

    It seems you are installing HDP on Windows. From the error message (the DataNode on hdpmaster is trying to bind to the slave's address, 192.168.4.135), it appears that the datanode is not actually running on the slave node.

    Can you check whether all services are running on both machines?

    Thanks,
    Larry
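
    One quick way to do that check is to probe whether each daemon's TCP port answers on each host. The sketch below is a generic helper, not part of HDP; it only assumes the default DataNode port (50010) and the two hostnames from the question:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections and DNS failures alike
        return False

# Hypothetical check for this cluster: if the DataNode is running on the
# slave (w2k8entr2), it should be listening on 50010 there.
for host in ("hdpmaster", "w2k8entr2"):
    print(host, port_open(host, 50010))
```

    If the slave reports False, start (or troubleshoot) the DataNode service on w2k8entr2 before retrying from the master.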
