HDP on Linux – Installation Forum

HDFS start failed

  • #11774
    Jinsong Yin

    I have got an error: HDFS start failed. I have uploaded the output of the check script (Jinsong.check.sh.log) and the hmc.log (Jinsong.hmc.log); any advice is appreciated.

    thanks a lot!
    Jinsong Yin


  • #11835
    Sasha J

    It looks like your problem is coming from timeouts during the installation:

    [timedoutnodes] => Array
    [0] => datanode2test.localdomain
    [1] => datanode1test.localdomain
    [2] => secondarynamenodetest.localdomain

    Please run the following commands:
    1. on all nodes: yum erase hmc puppet
    2. on HMC node: yum install hmc
    3. on HMC node: service hmc start

    Then connect to the HMC service through the UI and rerun your installation.
    Make sure you have all the RPMs installed on the nodes during the process (a sketch of this sequence follows below).
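
    For reference, a minimal sketch of that sequence, assuming root access on each node; the rpm query is just a sanity check that the old packages are really gone before reinstalling:

        # on every node: remove HMC and the puppet agent
        yum -y erase hmc puppet
        rpm -qa | grep -e hmc -e puppet     # should print nothing

        # on the HMC node only: reinstall and restart the service
        yum -y install hmc
        service hmc start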

    Thank you!

    Lindsay Weir

    I am also running into the same problem with version 2.0. I have uploaded a tar file with the check.sh, hmc.log and puppet log files (file called hw.tar). Passwordless SSH works, NTP is running, puppet kick tests work from the HMC node to all the nodes, and the local hosts file resolves each node correctly. I have uninstalled and rerun the installation several times and it still fails at the same place.
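
    For reference, a rough sketch of re-running most of those pre-flight checks from the HMC node; the node list below is a placeholder and the NTP service name (ntpd) is an assumption:

        # hypothetical list of cluster nodes – replace with your own FQDNs
        NODES="datanode1test.localdomain datanode2test.localdomain secondarynamenodetest.localdomain"
        for n in $NODES; do
            ssh -o BatchMode=yes root@$n hostname   # passwordless SSH works
            ssh root@$n "service ntpd status"       # NTP daemon is running
            getent hosts $n                         # node resolves via /etc/hosts
        done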




    Ted

    Hi Lindsay,

    Since you are working with HDP 2.0 you should post this question in that forum. But, looking at the logs you posted, it looks like you need to disable SELinux on all of the machines in the cluster and turn off iptables on hmc11. Then retry the installation.

    Let me know if this helps.
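
    For reference, a sketch of those two changes on each node (standard RHEL/CentOS 6 commands; a reboot is needed for the SELinux change to fully take effect):

        # disable SELinux for the running system and across reboots
        setenforce 0
        sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

        # stop iptables now and keep it off after reboot
        service iptables stop
        chkconfig iptables off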

    Lindsay Weir

    iptables are turned off on all nodes.

    [root@hmc11 ~]# service iptables status
    iptables: Firewall is not running.

    Thanks for the SELinux tip; I corrected this on all the nodes, rebooted them all, and tried again.

    [root@hmc11 ~]# more /etc/sysconfig/selinux

    # This file controls the state of SELinux on the system.
    # SELINUX= can take one of these three values:
    # enforcing - SELinux security policy is enforced.
    # permissive - SELinux prints warnings instead of enforcing.
    # disabled - No SELinux policy is loaded.
    # SELINUXTYPE= can take one of these two values:
    # targeted - Targeted processes are protected,
    # mls - Multi Level Security protection.

    [root@hmc11 ~]# getenforce

    I uninstalled:

    yum -y erase puppet hmc
    yum -y install hmc
    service hmc start

    and then ran it again but it fails in the same place. I have uploaded the check.sh and logs again in the hw2.tar upload.

    Thanks again



    Ted

    Hi Lindsay,

    Could you also send along the namenode logs?


    Lindsay Weir

    Which are they? Thx


    Ted

    Hi Lindsay,

    The NameNode logs are usually in /var/log/hadoop/hdfs; they have ‘namenode’ in the filename and end with ‘.log’,
    something like: hadoop-<user>-namenode-<hostname>.log
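
    For example, something along these lines (run on the NameNode host) would bundle them up for upload; the path is taken from the note above and may differ on your cluster:

        cd /var/log/hadoop/hdfs
        ls hadoop-*-namenode-*.log                  # confirm the log files exist
        tar cvf hdfs_logs.tar hadoop-*-namenode-*.log*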


    Lindsay Weir

    Thanks Ted. I have uploaded the logs – hdfs_logs.tar



    Ted

    Hi Lindsay,

    From the NameNode logs, it looks like your NameNode did not get formatted during installation. You can correct this by:

    * run the installation until it fails
    * don’t do the uninstall that HMC recommends
    * instead close the browser
    * on the console/terminal of the box where you are running the installer, issue the following commands as root:
        su hdfs
        hadoop namenode -format
        exit                     # return to root for the remaining commands
        yum erase hmc puppet
        yum install hmc
        service hmc start
    * now do the installation as usual

    I hope that this gets you to a good install; a condensed version of these commands is sketched below.
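
    For reference, a condensed, non-interactive sketch of the same steps, assuming the hdfs user exists and the hadoop command is on its PATH:

        # format HDFS as the hdfs user (it may prompt for confirmation if a
        # name directory already exists), then reinstall HMC and restart it
        su - hdfs -c "hadoop namenode -format"
        yum -y erase hmc puppet
        yum -y install hmc
        service hmc start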

    Lindsay Weir

    Thanks Ted. I did make progress, but it now fails at the Oozie start phase. hw.tar has been uploaded with the logs.




    Ted

    The only thing I can suggest at this point is to retry the install without doing the uninstall. I’ve had times during an install where one of the items failed but passed on a retry. Also, since you are using HDP 2.0, I’ve really helped you here more than I should have; this question should have been posted in the HDP 2.0 alpha feedback section of this forum.


