HDP on Linux – Installation Forum

Ambari Name Node fails to start: 255 instead of one of [0]

  • #28512
    Tim Benninghoff
    Participant

    First, some context. While attempting an installation of a Hadoop cluster using Ambari, I struggled to get SSH to stop blocking communication within the cluster. Even though I got that sorted, I figured I had so irreparably screwed up the installation that I wanted to start over, so I used the advice in this thread to reset the cluster: http://hortonworks.com/community/forums/topic/delete-cluster/
    Now I get a nearly flawless installation, except that the NameNode service refuses to start. When I attempt to start the NameNode through Ambari, I get the following error:
    err: /Stage[2]/Hdp-hadoop::Namenode/Hdp-hadoop::Namenode::Create_app_directories[create_app_directories]/Hdp-hadoop::Hdfs::Directory[/mapred]/Hdp-hadoop::Exec-hadoop[fs -mkdir /mapred]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -mkdir /mapred]/Exec[hadoop --config /etc/hadoop/conf fs -mkdir /mapred]/returns: change from notrun to 0 failed: hadoop --config /etc/hadoop/conf fs -mkdir /mapred returned 255 instead of one of [0] at /var/lib/ambari-agent/puppet/modules/hdp/manifests/init.pp:340
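
    For reference, the failing step can be reproduced by hand by running the same command Puppet runs. A sketch, assuming the stock HDP layout where HDFS runs as the hdfs user (the service account may differ on your install):
    sudo -u hdfs hadoop --config /etc/hadoop/conf fs -mkdir /mapred
    Running it manually shows the full client-side error instead of just the 255 exit code.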

  • #28559
    Tim Benninghoff
    Participant

    Perhaps I don’t have the SSH issue resolved. Looking further upstream in the errors, I see a notice like this at the very beginning:
    notice: /Stage[2]/Hdp-hadoop::Namenode/Hdp-hadoop::Namenode::Create_app_directories[create_app_directories]/Hdp-hadoop::Hdfs::Directory[/mapred]/Hdp-hadoop::Exec-hadoop[fs -mkdir /mapred]/Hdp::Exec[hadoop --config /etc/hadoop/conf fs -mkdir /mapred]/Exec[hadoop --config /etc/hadoop/conf fs -mkdir /mapred]/returns: mkdir: Call to ..com\xx.xx.xx.xx:8020 failed on connection exception: java.net.ConnectException: Connection refused
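
    A ‘Connection refused’ on port 8020 generally means nothing is listening there, i.e. the NameNode process itself isn’t up. One way to check, assuming net-tools is installed on the node:
    sudo netstat -tlnp | grep 8020
    If that prints nothing, the mkdir can’t succeed no matter how SSH is configured.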

    I cleared up an issue with SSHing from the name node to itself via its FQDN, but that didn’t help. I’ve also noticed that the /etc/hadoop/conf/ directory ends up with a number of files owned by the hadoop and mapreduce users, but no such users exist on my Name Node.
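
    One quick way to confirm the ownership problem is to list the directory with numeric IDs and check whether the accounts exist (the account names below are the usual HDP ones and may differ on your install):
    ls -ln /etc/hadoop/conf
    getent passwd hadoop mapreduce hdfs mapred
    Files owned by nonexistent accounts show bare numeric UIDs/GIDs in the ls output, and getent prints nothing for names that have no account on the node.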

    Now I’m back to considering whether to reset the cluster again or to trash the name node and rebuild it from the OS up.

    #28640
    tedr
    Moderator

    Hi Tim,

    It does indeed sound like there are still some problems with SSH from the namenode to itself. Is the part you xx’d out in your trace an IP address or a host name? If it’s an IP address, what are the contents of your /etc/hosts file? These questions assume that you’re installing on a Linux box.

    Thanks,
    Ted.

    #28766
    Tim Benninghoff
    Participant

    Hi Ted,

    The part I xx’d out is the IP of the name node. For example:
    ‘Call to namenode.domain.com\192.168.1.100:8020 failed…’ etc.
    In my /etc/hosts file on the name node I have an entry for 127.0.0.1. I’ve commented out the ::1 entry, and I’ve also added an entry for the name node’s actual IP that looks something like:
    xx.xx.xx.xx NameNode.domain.com NameNode.domain NameNode
    I do not have any entries in the hosts file for any other nodes.
    I’ve got DNS and Reverse DNS set up in the domain so that all IPs and names for all of the nodes resolve properly.
    I’ve also verified that I can password-less SSH from the namenode to all other nodes in the intended cluster.
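
    For what it’s worth, both directions of resolution can be spot-checked from the name node itself (namenode.domain.com and 192.168.1.100 are the example values from above):
    hostname -f
    getent hosts namenode.domain.com
    getent hosts 192.168.1.100
    hostname -f should print the FQDN, and the forward and reverse getent lookups should agree with each other.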

    #28795
    Yi Zhang
    Moderator

    Hi Tim,

    If you go to the Ambari Hosts tab and try to start the namenode from there, does it start?

    Thanks,
    Yi

    #28801
    Tim Benninghoff
    Participant

    Hi Yi,

    Unfortunately, no, it does not start. And that’s where I’ve been trying to start it from.

    #28802
    abdelrahman
    Moderator

    Hi Tim,

    Check the namenode logs in /var/log/hadoop/hdfs for any errors. It is possible that the namenode needs to be formatted first if this is a first-time install. To format the namenode from the command line, please run:
    hadoop namenode -format
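
    On HDP the HDFS daemons typically run as the hdfs user, so the format would usually be run as that user, e.g. (your service account may differ):
    sudo -u hdfs hadoop namenode -format
    Be aware that formatting wipes the namenode’s metadata, so it is only safe on a first-time install.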

    Hope this helps.

    Thanks
    -Abdelrahman

    #29031
    Tim Benninghoff
    Participant

    I ended up just giving up on this install. I’m going to try again with a new namenode machine from scratch and see what happens.

    #29036
    tedr
    Moderator

    Hi Tim,

    Sorry that you couldn’t get the installation going; let us know how the next attempt goes.

    Thanks,
    Ted.
