
HDFS Forum

NameNode is not starting in Ambari 1.6.0

  • #57782
    Vishal Dhavale

    After formatting the NameNode, it is not starting. When I type the 'jps' command it shows the NameNode running, but on Ambari it does not. It gives the following error:

    Fail: Execution of 'ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/ --config /etc/hadoop/conf start namenode' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/

    and /var/lib/ambari-agent/data/output-1310.txt is as follows:

    2014-07-18 23:20:17,263 - Execute['/bin/echo 0 > /selinux/enforce'] {'only_if': 'test -f /selinux/enforce'}
    2014-07-18 23:20:17,277 - Skipping Execute['/bin/echo 0 > /selinux/enforce'] due to only_if
    2014-07-18 23:20:17,278 - Execute['mkdir -p /usr/lib/hadoop/lib/native/Linux-i386-32; ln -sf /usr/lib/ /usr/lib/hadoop/lib/native/Linux-i386-32/'] {}
    2014-07-18 23:20:17,297 - Execute['mkdir -p /usr/lib/hadoop/lib/native/Linux-amd64-64; ln -sf /usr/lib64/ /usr/lib/hadoop/lib/native/Linux-amd64-64/'] {}
    2014-07-18 23:20:17,313 - Directory['/etc/hadoop/conf.empty'] {'owner': 'root', 'group': 'root', 'recursive': True}
    2014-07-18 23:20:17,314 - Link['/etc/hadoop/conf'] {'to': '/etc/hadoop/conf.empty'}
    2014-07-18 23:20:17,314 - Directory['/var/log/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True}
    2014-07-18 23:20:17,315 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True}
    2014-07-18 23:20:17,315 - Directory['/tmp'] {'owner': 'hdfs', 'recursive': True}
    2014-07-18 23:20:17,327 - File['/etc/hadoop/conf/'] {'content': Template(''), 'owner': 'hdfs'}
    2014-07-18 23:20:17,329 - File['/etc/hadoop/conf/'] {'content': Template(''), 'owner': 'hdfs'}
    2014-07-18 23:20:17,331 - File['/etc/hadoop/conf/health_check'] {'content': Template('health_check-v2.j2'), 'owner': 'hdfs'}
    2014-07-18 23:20:17,331 - File['/etc/hadoop/conf/'] {'content': '…', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
    2014-07-18 23:20:17,336 - File['/etc/hadoop/conf/'] {'content': Template(''), 'owner': 'hdfs'}
    2014-07-18 23:20:17,337 - XmlConfig['core-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': …}
    2014-07-18 23:20:17,342 - Generating config: /etc/hadoop/conf/core-site.xml
    2014-07-18 23:20:17,342 - File['/etc/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(…), 'group': 'hadoop', 'mode': None}
    2014-07-18 23:20:17,343 - Writing File['/etc/hadoop/conf/core-site.xml'] because contents don't match
    2014-07-18 23:20:17,344 - File['/etc/hadoop/conf/'] {'content': StaticFile(''), 'mode': 0755}
    2014-07-18 23:20:17,344 - File['/etc/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
    2014-07-18 23:20:17,347 - File['/etc/snmp/snmpd.conf'] {'content': Template('snmpd.conf.j2')}
    2014-07-18 23:20:17,348 - Execute['service snmpd start'] {}
    2014-07-18 23:20:17,377 - Execute['chkconfig snmpd on'] {}
    2014-07-18 23:20:17,578 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
    2014-07-18 23:20:17,579 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': …}
    2014-07-18 23:20:17,585 - Generating config: /etc/hadoop/conf/hdfs-site.xml
    2014-07-18 23:20:17,585 - File['/etc/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(…), 'group': 'hadoop', 'mode': None}
    2014-07-18 23:20:17,586 - Writing File['/etc/hadoop/conf/hdfs-site.xml'] because contents don't match
    2014-07-18 23:20:17,588 - File['/etc/hadoop/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
    2014-07-18 23:20:17,589 - Directory['/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
    2014-07-18 23:20:17,589 - File['/tmp/'] {'content': StaticFile(''), 'mode': 0755}
    2014-07-18 23:20:17,590 - Execute['/tmp/ hdfs /etc/hadoop/conf /var/run/hadoop/hdfs/namenode/formatted/ /hadoop/hdfs/namenode'] {'path': ['/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'not_if': 'test -d /var/run/hadoop/hdfs/namenode/formatted/'}
    2014-07-18 23:20:17,602 - Skipping Execute['/tmp/ hdfs /etc/hadoop/conf /var/run/hadoop/hdfs/namenode/formatted/ /hadoop/hdfs/namenode'] due to not_if
    2014-07-18 23:20:17,603 - Execute['mkdir -p /var/run/hadoop/hdfs/namenode/formatted/'] {}
    2014-07-18 23:20:17,621 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
    2014-07-18 23:20:17,625 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
    2014-07-18 23:20:17,625 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
    2014-07-18 23:20:17,626 - File['/var/run/hadoop/hdfs/'] {'action': ['delete'], 'not_if': 'ls /var/run/hadoop/hdfs/ >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/` >/dev/null 2>&1', 'ignore_failures': True}
    2014-07-18 23:20:17,649 - Deleting File['/var/run/hadoop/hdfs/']
    2014-07-18 23:20:17,650 - Execute['ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/ --config /etc/hadoop/conf start namenode'] {'not_if': 'ls /var/run/hadoop/hdfs/ >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/` >/dev/null 2>&1', 'user': 'hdfs'}
    2014-07-18 23:20:21,766 - Error while executing command 'start':
    Traceback (most recent call last):
    File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/", line 105, in execute
    File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/", line 39, in start
    File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/", line 45, in namenode
    File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/", line 63, in service
    File "/usr/lib/python2.6/site-packages/resource_management/core/", line 148, in __init__
    File "/usr/lib/python2.6/site-packages/resource_management/core/", line 149, in run
    self.run_action(resource, action)
    File "/usr/lib/python2.6/site-packages/resource_management/core/", line 115, in run_action
    File "/usr/lib/python2.6/site-packages/resource_management/core/providers/", line 239, in action_run
    raise ex
    Fail: Execution of 'ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/ --config /etc/hadoop/conf start namenode' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/
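
    The not_if guard on that final Execute shows how Ambari decides whether the NameNode is already running: the pid file must exist and the pid inside it must belong to a live process. An equivalent shell check, as a minimal sketch; the pid file name is elided in the log above, so the one used here is hypothetical:

    # Ambari-style liveness guard: pid file exists AND its pid is a running process
    PIDFILE=/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid  # hypothetical name; elided in the log
    ls "$PIDFILE" >/dev/null 2>&1 && ps -p "$(cat "$PIDFILE")" >/dev/null 2>&1 && echo "NameNode already running"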

  • #57785
    Ramesh Babu

    Are you formatting the NameNode manually? How are you linking it in Ambari? If you are using Ambari, it formats the NameNode itself. What procedure are you following?
    Execute the following command on the command line:
    export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/ --config /etc/hadoop/conf start namenode
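
    If it fails again, it helps to reproduce the start exactly as Ambari runs it (as the hdfs service user) and then read the daemon log. A minimal sketch; the daemon script name is elided in the output above, so hadoop-daemon.sh is an assumption:

    # Start the NameNode the way Ambari does, as the hdfs user
    # (hadoop-daemon.sh is an assumption; the script name is elided in the log)
    sudo -u hdfs bash -c 'ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode'
    # Then inspect the newest NameNode log for the real error
    ls -t /var/log/hadoop/hdfs/*namenode*.log | head -1 | xargs tail -n 100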

    Vishal Dhavale

    Hi Ramesh,
    I formatted the NameNode manually and executed that command. When I type jps at the command prompt it shows the NameNode is working, but the Ambari dashboard shows the NameNode is not started, and on the YARN side the NodeManager is also not starting.
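
    One quick way to see why jps and Ambari disagree is to compare the live process and its owner with the pid files the agent checks. A minimal sketch, assuming the default pid directory shown in the log:

    # Which user owns the running NameNode process?
    ps -ef | grep -i '[n]amenode'
    # What pid files does the Ambari agent see? (file names are elided in the log)
    ls -l /var/run/hadoop/hdfs/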

    Christian Nunez

    Hi Vishal,
    I have the same problem. Did you manage to solve it?
    You would help me a lot.

    Mohammed Ansari

    My MapReduce job freezes, and jps shows the NameNode has started.

    I think I should format the NameNode manually too. How do I do it?
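
    For reference, a manual format is normally run as the hdfs service user, as sketched below; note that formatting erases all HDFS metadata, so it is only safe on a fresh cluster:

    # WARNING: this destroys existing HDFS metadata; only run it on a new cluster
    sudo -u hdfs hdfs namenode -format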

    Iolar P


    Thanks, this is important to me.


    Robert Molina

    Hi Vishal,
    You mentioned that the NameNode process is up; can you check which user that process belongs to? Also, for Ambari, can you check the agent logs to verify whether there are any errors? If Ambari shows the process as not running when it is actually running, most likely the pid file under /var/run/hadoop/hdfs/ does not match the current process, or the agent is not able to read that file.
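
    A hedged sketch of those checks, assuming the default Ambari agent log location:

    # Look for recent errors in the Ambari agent log
    grep -i error /var/log/ambari-agent/ambari-agent.log | tail -n 20
    # Compare the pid file (name elided in the log above) with the live process
    cat /var/run/hadoop/hdfs/*.pid
    ps -ef | grep -i '[n]amenode'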

    Kind Regards,

    Prem Kumar

    For this you need to set the ulimit soft and hard limits for nproc and open files to more than 10000; check the Ambari cluster setup document for reference.
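
    A minimal sketch of such limits in /etc/security/limits.d/hdfs.conf, the same file Ambari manages in the log above; the values are illustrative, not from this thread:

    # Raise soft and hard limits for the hdfs user (example values)
    hdfs   -   nproc    32768
    hdfs   -   nofile   65536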

    Vishal Dhavale

    Hi Robert,
    The agent was not able to read the pid file under /var/run/hadoop/hdfs/; now it's working fine. Thanks.
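
    For anyone hitting the same symptom, a hedged sketch of the usual fix is to make the pid directory owned and readable by the service user; the ownership values below are typical HDP defaults, not taken from this thread:

    # Give the hdfs service user ownership of its pid directory
    chown -R hdfs:hadoop /var/run/hadoop/hdfs
    chmod 755 /var/run/hadoop/hdfs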

