HDP on Linux – Installation Forum

Name Node storage directory

  • #43414

    How many NameNode storage directories should there be?
    The installer (HDP 2.0 through Ambari) chose 5 directories, and the NameNode was failing to start.
    When I removed 4 of them and kept just one directory, it started fine.

    Why does the installer (HDP 2.0) choose 5 directories as NameNode storage dirs?
    Ideally how many should there be? Currently I have only 1 directory as the NameNode storage – is this OK?
    I have enough storage space in this directory.

    I am trying to see whether this is related to the heap-size spike and safe-mode issue I posted about earlier.
    Please advise.
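
    For reference, the directories in question are given as a single comma-separated value of dfs.namenode.name.dir in hdfs-site.xml, and the NameNode writes a full copy of its metadata to each listed directory. A minimal sketch – the paths below are hypothetical examples, not the ones Ambari actually chose:

        <property>
          <name>dfs.namenode.name.dir</name>
          <!-- Multiple directories, comma-separated; each receives a full copy of the NameNode metadata -->
          <value>/hadoop/hdfs/namenode1,/hadoop/hdfs/namenode2,/hadoop/hdfs/namenode3</value>
        </property>

    With a single directory the property is simply:

        <property>
          <name>dfs.namenode.name.dir</name>
          <value>/hadoop/hdfs/namenode</value>
        </property>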


  • #43890


    There can be as many as you like, for redundancy purposes.
    In most installs I see 3 – are you speaking about dfs.namenode.storage.dir in hdfs-site.xml?
    I have just done a fresh install and it only set up one – which I specified.
    I think your issues were due to available disk space – which was resolved in your earlier post.




    Thanks Dave, your info was very helpful.

    I am talking about dfs.namenode.name.dir in hdfs-site.xml, which I believe is the NameNode storage directory.
    For some reason, when this had 5 directories by default during the install, the NameNode failed to start; when I changed it to just 1 directory, it started fine.
    So I just wanted to understand.

    I have another question, about the RegionServer for HBase. I have a 2-node cluster: HBase and a RegionServer run on server1 (datanode1), but I do not have a RegionServer on the second DataNode, server2. Architecture diagrams show that a RegionServer should be installed on every DataNode?
    What is the impact if I do not have a RegionServer on datanode2? Can I add one through Ambari, and can I do it while services are running?
    Are there any configuration files I need to modify when I add a RegionServer to a DataNode?
    I opened a separate thread for this, but please respond in either one.

    Thanks again!
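
    As an illustration: when a RegionServer is added outside of Ambari, the new node's hostname is listed in HBase's conf/regionservers file (one hostname per line, read by the start scripts), and the node needs an hbase-site.xml that points at the cluster's ZooKeeper quorum and HBase root directory. A minimal hbase-site.xml sketch – the hostnames and port below are hypothetical:

        <property>
          <name>hbase.zookeeper.quorum</name>
          <value>server1.example.com</value>
        </property>
        <property>
          <name>hbase.rootdir</name>
          <value>hdfs://server1.example.com:8020/apps/hbase/data</value>
        </property>

    When the component is added through Ambari instead, Ambari distributes the cluster configuration to the new host, so these files generally do not need to be edited by hand.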
