HDFS NameNode not running, port conflict

This topic contains 8 replies, has 4 voices, and was last updated by  Sungho Kim 2 months, 3 weeks ago.

  • Creator
    Topic
  • #46163

    Timothee Gautheron
    Participant

    Hi there!

    I finished my HDP installation with Ambari earlier; I'm running Ambari 1.4.2 and the installation itself went fine. But some services didn't start during the wizard, HDFS in particular (the NameNode is not running). So I used the dashboard to start it manually, but it failed again.
    In the NameNode logs, I found these lines:

    2013-12-23 17:44:09,784 INFO http.HttpServer (HttpServer.java:start(690)) - HttpServer.start() threw a non Bind IOException
    java.net.BindException: Port in use: <host>:50070

    I thought it was the classic problem, but netstat showed that nothing was using :50070.
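    For reference, this is roughly how I checked (an lsof equivalent works too; 50070 is the default NameNode web UI port):

    netstat -tlnp | grep 50070    # list listening TCP sockets with the owning PID/name
    lsof -i :50070                # alternative: show any process bound to the port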

    I tried changing the port to 50071 through the configs panel of the dashboard. Again, the same “port already in use” problem.

    Do you have any idea how to solve this problem?

    Thanks =)



  • Author
    Replies
  • #58384

    Sungho Kim
    Participant

    Hi there,
    I have the same problem.
    If anybody has a solution, please let me know.
    How can I remove the “dfs.namenode.http-address” property in hdfs-site.xml?

    #48759

    Jana
    Participant

    Hi Timothee,

    I have the same port issue. Can you please tell us exactly how you solved it?

    Thanks

    #47412

    Timothee Gautheron
    Participant

    Okay! Last bump in this thread: I resolved my problem!

    I had to tune my hosts file a little to get it to work; a virtual environment wasn't really the best place to deploy Ambari, I guess, but now it works!
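    For anyone hitting the same thing, the checks below are roughly what sorted it out for me. The point is that the name in dfs.namenode.http-address must resolve to an IP the NameNode host actually owns (the hostname here is from my setup):

    hostname -f                            # should print the FQDN used in the config
    getent hosts c6-76911.cldad03.local    # should return the host's real IP, not 127.0.0.1
    ip addr                                # confirm that IP is bound to a local interface

    My understanding is that when the name resolves to an address that is not local, the bind fails and Hadoop reports it as “Port in use”, which is what made this so confusing.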

    Everything is in the green and that’s a really good start.

    Thank you again for your time on this matter.

    #47252

    Timothee Gautheron
    Participant

    Thanks for the follow up Robert.

    I checked my hdfs-site.xml with xmllint (double-checked it); it is well formed.

    This is the part I removed to get the NameNode to start:
    <property>
      <name>dfs.namenode.http-address</name>
      <value>c6-76911.cldad03.local:50070</value>
    </property>

    In /etc/hosts I have this line:
    10.200.89.164 c6-76911.cldad03.local c6-76911

    #47191

    Robert Molina
    Moderator

    Hi Timothee,
    1. dfs.namenode.http-address is present in hdfs-site.xml because the NameNode web UI should always be up when the service is up. The only thing I can think of is that the hdfs-site.xml placed on the NameNode machine might not be well formed.

    2. As far as the syntax goes, it is standard for that property to hold an address and port.

    Can you post the property that you removed? Also, can you verify that hdfs-site.xml is well-formed XML? You can use the xmllint command to open the file and verify it.
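    Something along these lines should work (the path is the usual HDP config location; adjust if yours differs):

    xmllint --noout /etc/hadoop/conf/hdfs-site.xml    # silent on success; parse errors are printed with line numbers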

    Regards,
    Robert

    #47171

    Timothee Gautheron
    Participant

    OK, I have not resolved my problem yet; however, I have more information about it.

    Here is what I've got: hdfs-site.xml is the culprit. How and why, I do not know.

    I was focused on the “port in use” problem, so I checked hdfs-site.xml and removed the “dfs.namenode.http-address” definition locally on my NameNode. Then, from a shell on the NameNode, I successfully started the service; it fell back to the hdfs-default.xml definitions.
    But from Ambari, the “start service” button pushes a single set of conf files across the cluster, and so it fails again with the port-in-use problem.
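    For the record, the manual start looked roughly like this (the script path is from my HDP install and may differ on yours):

    su -l hdfs -c "/usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode"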

    So I have two questions about this problem:
    1/ I have always used a single set of conf files for a Hadoop cluster; why do some parts of the conf (concerning this very master) now have to be cut out on a master node?

    2/ In the advanced configs of the HDFS service in Ambari, I must specify a host:port (which will be included in hdfs-site.xml); can I work around that?

    Thank you for your time !

    #46654

    Timothee Gautheron
    Participant

    Hi! Sorry for the delay.

    So, no other NameNode process is running; I checked at the time. And no process is using these ports. The firewall lets the traffic through and SELinux is off!

    On another instance of the same cluster (VMs), I managed to install and run Hadoop 2 (from the Apache repository), so I'm pretty sure Ambari is the problem here. I will investigate the problem again this week; I really want to understand what's going on and evaluate Ambari.

    Thanks for your previous reply and hints.

    #46360

    Robert Molina
    Moderator

    Hi Timothee,
    That’s interesting. Can you verify that another NameNode process isn’t already running? You can just do a jps or ps to check whether any processes are running as the hdfs user. Also, you may want to do the usual of turning off SELinux and any firewalls to help isolate the issue.
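    For example, something like the following (assuming a RHEL/CentOS-style box, which is what HDP usually runs on):

    sudo -u hdfs jps              # JVM processes owned by hdfs; look for a NameNode entry
    ps -ef | grep -i namenode     # catch any stray NameNode process
    getenforce                    # SELinux mode; set to Permissive/Disabled while testing
    service iptables stop         # temporarily stop the firewall to rule it out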

    Regards,
    Robert
