
HDFS Forum

HDFS fails to start

  • #11715
    Rob Styles


Once I’ve added proxy settings to /etc/init.d/hmc, the cluster prepares itself and the cluster install completes successfully. It then fails when starting the hdfs service.

Looking at the deployment logs I find that when hdfs starts the secondary namenode (su - hdfs -c '/usr/lib/hadoop/sbin/ --config /etc/hadoop/conf start secondarynamenode') it attempts to create /hdfs and fails (mkdir: cannot create directory `/hdfs': Permission denied).

Checking /etc/hadoop/conf/hdfs-site.xml I find that many of the property values have not been set – including the dir properties, which I guess is why it’s trying to create directories under the root.
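A quick way to spot those unset values is a grep over the config file. This is a sketch: the path is the stock HDP conf location, and it assumes the HMC-generated layout where an empty <value></value> sits on its own line.

```shell
# List <property> names whose <value> is empty in hdfs-site.xml.
# Assumes each empty <value></value> is on its own line, one line
# or two below its <name> line (the usual generated layout).
CONF=/etc/hadoop/conf/hdfs-site.xml
grep -B2 '<value></value>' "$CONF" \
  | grep '<name>' \
  | sed -e 's/.*<name>//' -e 's|</name>.*||'
```

Each line of output is a property the installer never filled in.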

    I’ve not worked out how to take this one further yet so I’d really welcome any suggestions.

    I have the deploy log json saved if that’s of any use.


    Update 11:44 – I have entries like this:

Wed Oct 31 10:36:33 +0000 2012 /Stage[1]/Manifestloader/Exec[puppet_apply]/returns (notice): Wed Oct 31 10:15:06 +0000 2012 Scope(Hdp2::Configfile[/etc/hbase/conf//hbase-site.xml]) (warning): Could not look up qualified variable '::hdp-hbase::params::hbase_hdfs_root_dir'; class ::hdp-hbase::params has not been evaluated

    in the puppet_agent.log. The entries (at a first glance) seem to match up with the missing values in the config files.
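To gauge how widespread this is, the warnings in that log can be collected in one pass. A sketch; the file name is taken from the post above, but its location on disk may vary on your install.

```shell
# Count the "class not evaluated" warnings and list which variables
# they name; each distinct variable should line up with a missing
# config value. LOG path is an assumption -- adjust to your node.
LOG=puppet_agent.log
grep -o "Could not look up qualified variable '[^']*'" "$LOG" \
  | sort | uniq -c
```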

/etc/hadoop/conf is a symlink into /etc/alternatives, which points back to conf.empty. Not sure if that’s how it should have been left.
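The symlink chain can be confirmed like this. A sketch: readlink -f works anywhere; the "hadoop-conf" alternatives name is an assumption and may differ on your install (check /etc/alternatives for the real name).

```shell
# Resolve the full chain: /etc/hadoop/conf -> /etc/alternatives/... -> conf dir.
readlink -f /etc/hadoop/conf

# On CentOS, the alternatives tool lists the registered candidates.
# "hadoop-conf" is an assumed alternative name, not confirmed by the thread.
alternatives --display hadoop-conf
```

Landing on conf.empty would be consistent with the unset property values: the daemons are reading a config directory the installer never populated.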


  • #11728
    Jeff Sposetti

    Hi Rob,

When you did your install, a couple of questions…

    1) Did you use HMC to perform the install?
    2) If yes, number of nodes? And did you accept the recommended master component layout in the wizard or did you customize? Meaning: did you move around NameNode or SecondaryNameNode, etc?


    Rob Styles

    Hi Jeff,

Yes, I am using the hmc installed from the hdp2 instructions.

The cluster has 5 nodes, all with an identical install of CentOS-6.2-x86_64-minimal.iso and a full yum update to the latest versions of everything. They all have a hosts file referencing the other machines correctly, and all have password-less ssh keys set up and known_hosts configured. i.e. I did the pre-reqs 😉
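For what it’s worth, the ssh prereq can be spot-checked with a loop like this. The hostnames are placeholders for whatever is in your hosts file, not the actual node names from the cluster.

```shell
# Verify passwordless ssh to every node. BatchMode makes ssh fail fast
# instead of prompting for a password when key auth is not really set up.
for node in node01 node02 node03 node04 node06; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "$node" true; then
    echo "$node: ok"
  else
    echo "$node: ssh FAILED"
  fi
done
```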

On one install I accepted the recommended master component layout, and on a subsequent attempt I moved the hive master onto the same machine the mysql instance is running on.

    I didn’t move the name node or secondary name node either time.

    The entries I’m getting in the puppet_agent.log (an example in the previous post) look like a ruby class evaluation issue – forum posts about classes not being evaluated seem to point to a missing include. That would seem odd as the hdp2 hmc worked fine installing on my VM.

    Any help getting it up and running is greatly appreciated.


    Jeff Sposetti

    Ok. Next level of questions related to your mysql choices…

    1) Did you leave the defaults, which has HMC perform the MySQL install for Hive?
2) Or did you designate a host with an already existing instance of MySQL (that you had installed)? And that MySQL host is in your cluster, so you tried to target Hive Metastore to install on that server?

    Rob Styles

I pre-installed mysql on one of the host machines (06, there is no 05 right now) before installing hmc on 01. I set up mysql with a hive db and a hive user that can log on from any machine with a password.
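That pre-install step amounts to something like the following. A sketch only: the database name, user name and password here are illustrative, not the values actually used on the cluster.

```shell
# Create the Hive metastore database and a user that can log on from
# any host ('%'). Names and password below are placeholders.
mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS hive;
CREATE USER 'hive'@'%' IDENTIFIED BY 'hivepassword';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
FLUSH PRIVILEGES;
SQL
```

The '%' host wildcard is what makes the user reachable "from any machine", which is what the Hive Metastore needs when it runs on a different node than MySQL.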

I gave all those details to hmc on the hive page. When I got to the customize services tabs it opened straight to the hive tab, so I thought it was mandatory (little round red marker on the tab). The other red marker was on nagios, asking for an email address for nagios alerts.


    Jeff Sposetti

We are tracking a potential issue where, if you modify the default recommended master layout, configs do not get pushed to all nodes during install (i.e. you see some property values not set). That will result in services on those nodes not functioning. Can you try to re-run your install, but accept (i.e. do not modify) the default layout, to see if that gets you past the issue?
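One way to test the configs-not-pushed-everywhere theory before re-running the install is to compare the pushed file on every node. A sketch; the hostnames are placeholders for your cluster's hosts-file entries.

```shell
# If the install pushed configs everywhere, every node reports the same
# hash; a differing hash (or a missing file) flags a skipped node.
for node in node01 node02 node03 node04 node06; do
  ssh "$node" md5sum /etc/hadoop/conf/hdfs-site.xml
done
```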

    Rob Styles

    Hi Jeff,

    I’ve had to put the cluster back to my original hadoop install for now – I need to get some work done!

    Hopefully I’ll be able to take another look at HDP2 in a week or so.

    thanks for the help


The topic ‘HDFS fails to start’ is closed to new replies.
