HDFS NameNode is not starting in Ambari 1.6.0

This topic contains 3 replies, has 3 voices, and was last updated by Christian Nunez 2 weeks, 1 day ago.

    #57782

    Vishal Dhavale
    Participant

    Hi,
    After formatting the NameNode, it is not starting. When I type the 'jps' command it shows the NameNode running, but in Ambari it is not. It gives the following error:

    Fail: Execution of 'ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-agent6.poc.com.out

    and /var/lib/ambari-agent/data/output-1310.txt is as follows:

    2014-07-18 23:20:17,263 - Execute['/bin/echo 0 > /selinux/enforce'] {'only_if': 'test -f /selinux/enforce'}
    2014-07-18 23:20:17,277 - Skipping Execute['/bin/echo 0 > /selinux/enforce'] due to only_if
    2014-07-18 23:20:17,278 - Execute['mkdir -p /usr/lib/hadoop/lib/native/Linux-i386-32; ln -sf /usr/lib/libsnappy.so /usr/lib/hadoop/lib/native/Linux-i386-32/libsnappy.so'] {}
    2014-07-18 23:20:17,297 - Execute['mkdir -p /usr/lib/hadoop/lib/native/Linux-amd64-64; ln -sf /usr/lib64/libsnappy.so /usr/lib/hadoop/lib/native/Linux-amd64-64/libsnappy.so'] {}
    2014-07-18 23:20:17,313 - Directory['/etc/hadoop/conf.empty'] {'owner': 'root', 'group': 'root', 'recursive': True}
    2014-07-18 23:20:17,314 - Link['/etc/hadoop/conf'] {'to': '/etc/hadoop/conf.empty'}
    2014-07-18 23:20:17,314 - Directory['/var/log/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True}
    2014-07-18 23:20:17,315 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True}
    2014-07-18 23:20:17,315 - Directory['/tmp'] {'owner': 'hdfs', 'recursive': True}
    2014-07-18 23:20:17,327 - File['/etc/hadoop/conf/hadoop-env.sh'] {'content': Template('hadoop-env.sh.j2'), 'owner': 'hdfs'}
    2014-07-18 23:20:17,329 - File['/etc/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
    2014-07-18 23:20:17,331 - File['/etc/hadoop/conf/health_check'] {'content': Template('health_check-v2.j2'), 'owner': 'hdfs'}
    2014-07-18 23:20:17,331 - File['/etc/hadoop/conf/log4j.properties'] {'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
    2014-07-18 23:20:17,336 - File['/etc/hadoop/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
    2014-07-18 23:20:17,337 - XmlConfig['core-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
    2014-07-18 23:20:17,342 - Generating config: /etc/hadoop/conf/core-site.xml
    2014-07-18 23:20:17,342 - File['/etc/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
    2014-07-18 23:20:17,343 - Writing File['/etc/hadoop/conf/core-site.xml'] because contents don't match
    2014-07-18 23:20:17,344 - File['/etc/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
    2014-07-18 23:20:17,344 - File['/etc/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
    2014-07-18 23:20:17,347 - File['/etc/snmp/snmpd.conf'] {'content': Template('snmpd.conf.j2')}
    2014-07-18 23:20:17,348 - Execute['service snmpd start'] {}
    2014-07-18 23:20:17,377 - Execute['chkconfig snmpd on'] {}
    2014-07-18 23:20:17,578 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
    2014-07-18 23:20:17,579 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
    2014-07-18 23:20:17,585 - Generating config: /etc/hadoop/conf/hdfs-site.xml
    2014-07-18 23:20:17,585 - File['/etc/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
    2014-07-18 23:20:17,586 - Writing File['/etc/hadoop/conf/hdfs-site.xml'] because contents don't match
    2014-07-18 23:20:17,588 - File['/etc/hadoop/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
    2014-07-18 23:20:17,589 - Directory['/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
    2014-07-18 23:20:17,589 - File['/tmp/checkForFormat.sh'] {'content': StaticFile('checkForFormat.sh'), 'mode': 0755}
    2014-07-18 23:20:17,590 - Execute['/tmp/checkForFormat.sh hdfs /etc/hadoop/conf /var/run/hadoop/hdfs/namenode/formatted/ /hadoop/hdfs/namenode'] {'path': ['/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'not_if': 'test -d /var/run/hadoop/hdfs/namenode/formatted/'}
    2014-07-18 23:20:17,602 - Skipping Execute['/tmp/checkForFormat.sh hdfs /etc/hadoop/conf /var/run/hadoop/hdfs/namenode/formatted/ /hadoop/hdfs/namenode'] due to not_if
    2014-07-18 23:20:17,603 - Execute['mkdir -p /var/run/hadoop/hdfs/namenode/formatted/'] {}
    2014-07-18 23:20:17,621 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
    2014-07-18 23:20:17,625 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
    2014-07-18 23:20:17,625 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
    2014-07-18 23:20:17,626 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ls /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid` >/dev/null 2>&1', 'ignore_failures': True}
    2014-07-18 23:20:17,649 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid']
    2014-07-18 23:20:17,650 - Execute['ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode'] {'not_if': 'ls /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid` >/dev/null 2>&1', 'user': 'hdfs'}
    2014-07-18 23:20:21,766 - Error while executing command 'start':
    Traceback (most recent call last):
    File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 105, in execute
    method(env)
    File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/namenode.py", line 39, in start
    namenode(action="start")
    File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/hdfs_namenode.py", line 45, in namenode
    create_log_dir=True
    File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/utils.py", line 63, in service
    not_if=service_is_up
    File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
    File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
    self.run_action(resource, action)
    File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
    provider_action()
    File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 239, in action_run
    raise ex
    Fail: Execution of 'ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-agent6.poc.com.out
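
    The 'returned 1' status only says that hadoop-daemon.sh failed; the actual cause is normally written to the .out file named above and to the matching NameNode log. A minimal check, assuming the default hadoop-daemon.sh log file naming:

    # last lines of the startup output and of the NameNode log
    tail -n 50 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-agent6.poc.com.out
    tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-agent6.poc.com.log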


    Replies

    #59943

    Christian Nunez
    Participant

    Hi Vishal,
    I have the same problem. Did you manage to solve it?
    You would help me a lot.
    Thanks,
    Christian

    #57787

    Vishal Dhavale
    Participant

    Hi Ramesh,
    I formatted the NameNode manually and executed that command. When I type jps at the command prompt it shows the NameNode is working, but the Ambari dashboard shows the NameNode is not started, and the YARN NodeManager is not starting either.
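
    A note on why jps and Ambari can disagree (this is an assumption based on the not_if condition in the log above): Ambari considers the NameNode up only when /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid exists and its pid belongs to a live process, so a manually started NameNode with a stale or missing pid file is reported as stopped. A quick way to compare the two views:

    # pid recorded for Ambari's check, if any
    cat /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid
    # is that pid a live process?
    ps -p "$(cat /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid)" -o pid,args
    # what jps sees
    jps | grep -i namenode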

    #57785

    Ramesh Babu
    Participant

    Are you formatting the NameNode manually? How are you linking it in Ambari? If you are using Ambari, it formats the NameNode itself. What procedure are you following?
    Execute the following command on the command line:
    export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode
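
    Note that Ambari runs this command as the hdfs user ('user': 'hdfs' in the log above), so a start that succeeds as root can still fail when Ambari drives it, for example on directory permissions. A closer reproduction of the Ambari invocation, followed by tailing the log (paths as in the log above):

    sudo -u hdfs bash -c 'ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode'
    tail -f /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log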
