HDFS Forum

NameNode not running, port conflict

  • #46163
    Timothee Gautheron

    Hi there !

    I finished my installation of HDP with Ambari earlier today. I'm running Ambari 1.4.2 and the installation itself went fine, but some services didn't start during the wizard, especially HDFS (the NameNode is not running). So I used the dashboard to start it manually, but it failed again.
    In the NameNode logs I found this line:

    2013-12-23 17:44:09,784 INFO http.HttpServer (HttpServer.java:start(690)) - HttpServer.start() threw a non Bind IOException
    java.net.BindException: Port in use: <host>:50070

    I thought of the classic cause, but a netstat command was telling me that nothing is listening on :50070.
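
    For reference, the kind of check I mean looks something like this (assuming net-tools and/or iproute2 are installed on the node):

        netstat -tlnp | grep 50070    # classic net-tools check for listeners on the NameNode web port
        ss -tlnp | grep 50070         # equivalent check with iproute2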

    I tried to change the port in the configs panel of the dashboard, to use 50071 instead. Again, the same problem with a port already in use.

    Do you guys have any idea how to solve this problem?

    Thanks =)


  • Author
  • #46360
    Robert Molina

    Hi Timothee,
    That’s interesting. Can you verify that no other NameNode process is already running? You can do a jps or ps to check whether any processes are running as the hdfs user. Also, you may want to do the usual step of turning off SELinux and any firewalls to help isolate the issue.
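
    Concretely, something along these lines should do it (a rough sketch; assumes a JDK with jps on the PATH and an iptables-based firewall, as on RHEL/CentOS 6):

        sudo -u hdfs jps              # list JVM processes owned by the hdfs user
        ps -ef | grep -i namenode     # any NameNode process at all?
        getenforce                    # current SELinux mode
        sudo setenforce 0             # temporarily switch SELinux to permissive
        sudo service iptables stop    # stop the firewall while testing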


    Timothee Gautheron

    Hi ! Sorry for the delay.

    So, no other NameNode process is running; I checked at the time. And no process is using these ports. The firewall lets the traffic through and SELinux is off!

    On another instance of the same cluster (VMs), I managed to install and run Hadoop 2 (from the Apache repository). So I'm pretty sure Ambari is the problem here. I will investigate the problem again this week; I really want to understand what's going on and evaluate Ambari.

    Thanks for your previous reply and hints.

    Timothee Gautheron

    OK, I have not resolved my problem yet, but I have more information about it.

    This is what I found: hdfs-site.xml is the culprit. How and why, I do not know.

    I was focused on the “port in use” problem, so I checked hdfs-site.xml and removed the “dfs.namenode.http-address” definition locally on my NameNode. Then, from a shell on the NameNode, I successfully started the service; it fell back to the hdfs-default.xml definition for that property.
    But from Ambari, the “start service” button pushes a single conf file across the cluster, and so it fails again with the port-in-use problem.
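
    For context, the property block I removed normally looks something like this in hdfs-site.xml (the host and port below are illustrative placeholders, not my real values):

        <property>
          <name>dfs.namenode.http-address</name>
          <value>namenode.example.com:50070</value>
        </property>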

    So I see two sides to this problem:
    1/ I have always used a single set of conf files for a Hadoop cluster; why now, on a master node, do some parts of the conf (concerning this very master) have to be cut out?

    2/ In the advanced configs of the HDFS service in Ambari, I must specify a host:port (which will be included in hdfs-site.xml). Can I work around that?

    Thank you for your time !

    Robert Molina

    Hi Timothee,
    1. dfs.namenode.http-address is a property that belongs in hdfs-site.xml, since the NameNode web UI should always be up when the service is up. The only thing I can think of is that the hdfs-site.xml file being placed on the NameNode machine might not be well formed.

    2. As far as the syntax goes, that is standard; the property takes an address and a port.

    Can you post the property that you removed? Also, can you verify the hdfs-site.xml is well-formed XML? You can use the xmllint command to try to open the file and verify that it is well formed.
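
    For example, something along these lines (assuming libxml2’s xmllint is installed; the path below is the usual HDP location, adjust it if your conf dir differs):

        xmllint --noout /etc/hadoop/conf/hdfs-site.xml   # prints nothing if the file is well formed
        echo $?                                          # 0 on success, non-zero on a parse error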


    Timothee Gautheron

    Thanks for the follow up Robert.

    I checked my hdfs-site.xml with xmllint (double-checked it); it is well formed.

    This is the part I removed to get the NameNode to start:

    In /etc/hosts I have this line: c6-76911.cldad03.local c6-76911
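
    (For comparison, a hosts entry for a node usually lists the IP first, then the FQDN, then the short name; the address below is a made-up example, not my real one:)

        192.168.56.101   c6-76911.cldad03.local   c6-76911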

    Timothee Gautheron

    Okay! Last bump in this thread: I resolved my problem!

    I had to tune my hosts file a little to get it to work. A virtual environment wasn't really the best tool to deploy Ambari with, I guess, but now it works!

    Everything is in the green and that’s a really good start.

    Thank you again for your time on this matter.


    Hi Timothee,

    I have the same port issue. Can you please tell us how exactly you solved it?


    Sungho Kim

    Hi there
    I have the same problem.
    If anybody has a solution, please let me know.
    How can I remove the “dfs.namenode.http-address” property from hdfs-site.xml?

    stéphane verdy

    Hi there
    I have the same problem.
    If anybody has a solution, please let me know. Timothee, for example, can you please tell us how exactly you solved this issue?

    stéphane verdy

    Hi there,

    I found an error in /etc/hosts: the IP address was wrong. I corrected this error and now it works fine 😉

    No need to tune my hosts file… just correct it!

    Orlando Cassano

    Hi Stéphane,

    I’m getting the same issue, but my /etc/hosts is valid and the machine's IP address did not change.
    So what should I do? Any ideas?

    Any help would be greatly appreciated.

    Thank you.


    I had the same problem on Red Hat 6 and I solved it. In my case, a MySQL server was running on the same host as the NameNode, with localhost as the hostname for the MySQL connection. By default, if you connect to the MySQL server via localhost, MySQL uses a socket, not a port, and it ended up locking the socket ::1:50070, so after the NameNode start we got the “Port in use” error.

    My workaround: comment out the ::1 line in the /etc/hosts file. That fixed it for me. Thanks Timothee for the idea. My /etc/hosts now looks like this:

        hdp-name.nbki.msk   hdp-name localhost localhost.localdomain localhost4 localhost4.localdomain4
        # ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
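
    (A quick way to make the same edit, as a sketch; back up /etc/hosts first:)

        sudo cp /etc/hosts /etc/hosts.bak
        sudo sed -i 's/^::1/# ::1/' /etc/hosts   # comment out the IPv6 loopback line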


