
HDP on Linux – Installation Forum

Few Questions regarding the section "Starting HDP Services"

  • #6115
    Hee Min Yoo

Hi, I am referring to this page.

I am using VMware Workstation 6 with CentOS 5.8 (Final).
The setup is single-node pseudo-distributed mode.
I wanted to install with the HMC installer, but I ran into a lot of trouble and switched to gsInstaller.

The installation went well, and everything passed the smoke tests.

After a reboot, I tried to start the services again using the commands from the guide,

and saw that it was not going well…

For example, this command:
su -l hdfs -c "/usr/sbin/ --config /etc/hadoop start namenode"

shouldn't this be:
su -l hdfs -c "{$yourhadoophome}/bin/ --config /etc/hadoop/conf.empty start namenode"

It seems the hadoop-daemon script wants the directory where the actual conf files live, not two levels above it (in this case /etc/hadoop).

I am not sure if this is because I am running pseudo-distributed, but I was not able to start the services with the command given in the guide.

So my question is: did I set up the path wrong in my initial configuration? I thought I was following the guide, and I kept my folders and configuration at the defaults to keep everything simple.

If you can clarify this point, that would be great.
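A quick way to check which directory `--config` should point at: it is the directory that directly contains the `*-site.xml` files. The sketch below builds an illustrative layout in a temp directory (not the poster's actual paths) and shows a sanity check you can run before starting a daemon.

```shell
# Sketch: hadoop-daemon.sh's --config flag expects the directory that
# directly contains the *-site.xml files, not a parent of it.
# Paths here are illustrative stand-ins for /etc/hadoop/conf.empty.
CONF_DIR=$(mktemp -d)/conf.empty
mkdir -p "$CONF_DIR"
touch "$CONF_DIR/core-site.xml" "$CONF_DIR/hdfs-site.xml"

# Quick check before starting a daemon:
if [ -f "$CONF_DIR/core-site.xml" ]; then
    echo "OK: $CONF_DIR looks like a usable --config directory"
else
    echo "WARN: $CONF_DIR has no core-site.xml; daemons will fall back to defaults"
fi
```

If the check warns, you are likely pointing `--config` one level too high (e.g. at /etc/hadoop instead of the conf directory inside it).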


  • Author
  • #6116

Sorry to hear that you could not use HMC for installation. We would appreciate it if you could give us some feedback on the issues you faced installing with HMC.

The NameNode can be started with the following command:
    su -l hdfs -c "/usr/lib/hadoop/bin/ --config /etc/hadoop/conf start namenode"

    Thanks for pointing this out, we will correct the documents.
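The same pattern applies to the other daemons, so it can help to wrap it in a small helper. This is just a sketch: it assumes the daemon script is hadoop-daemon.sh (as the question suggests), and it only prints the command so you can review it before running it as root.

```shell
# Sketch: compose the start command from the reply above so that user,
# config dir, and daemon name stay in one place.
# Assumption: the script is hadoop-daemon.sh; adjust for your install.
start_hdp_daemon() {
    user=$1 conf=$2 daemon=$3
    # Print rather than execute, so the command can be reviewed first:
    echo su -l "$user" -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config $conf start $daemon"
}

start_hdp_daemon hdfs /etc/hadoop/conf namenode
```

Swapping `echo` for `eval` (or removing it) would actually run the command, once you have confirmed the paths match your installation.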

    Hee Min Yoo

Yep, I will try to post that, but from what I remember it was a Puppet issue.

I have never used Puppet, so I could not debug it (something like "puppet cannot be kicked").

I will try to collect logs and so on when I reinstall with HMC.

Anyhow, I think the fixed version works great. I just wanted to clarify that the only reason I used conf.empty was that the setup put all the configuration values in that folder, so it became my configuration folder, and conf is symlinked to conf.empty.
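For readers unfamiliar with that layout: when conf is a symlink to conf.empty, both names resolve to the same files, so `--config /etc/hadoop/conf` and `--config /etc/hadoop/conf.empty` are equivalent. A minimal reproduction in a temp directory (not touching /etc/hadoop):

```shell
# Sketch of the layout described above: conf is a symlink to conf.empty,
# so the same configuration files are reachable through either name.
DEMO=$(mktemp -d)
mkdir "$DEMO/conf.empty"
touch "$DEMO/conf.empty/core-site.xml"
ln -s conf.empty "$DEMO/conf"

ls -l "$DEMO/conf/core-site.xml"   # reachable through the symlink
readlink "$DEMO/conf"              # prints: conf.empty
```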


    Sasha J

Hi Hee,

When you attempted the single-node install with the graphical HMC installer, did you use the defaults for heap sizes? If so, you may not have had enough memory to run everything on a single VM.

You can try increasing the memory on the VM, or decreasing the heaps to fit everything in memory; otherwise the VM may enter a state where it is paging virtually all the time, which can cause problems.

With gsInstaller, you can alternatively choose not to install everything on your VM, and make sure that at least the core components work (i.e. HDFS, MapReduce, ZooKeeper, HBase, HCatalog, Hive, Templeton).

Let us know how it goes or if you need further assistance.
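On the heap suggestion: daemon heap sizes are commonly set in hadoop-env.sh. The sketch below is an assumption-laden illustration (file path and values are examples, not the poster's setup); it works on a throwaway copy so the change can be reviewed before touching a real config.

```shell
# Sketch: shrinking the daemon heap for a small single-node VM.
# Assumption: heap settings live in hadoop-env.sh (the usual place);
# the 1024 -> 256 values are examples only.
SRC=$(mktemp)   # stand-in for /etc/hadoop/conf/hadoop-env.sh
printf 'export HADOOP_HEAPSIZE=1024\n' > "$SRC"

# Drop the heap from 1024 MB to 256 MB (GNU sed in-place edit):
sed -i 's/HADOOP_HEAPSIZE=1024/HADOOP_HEAPSIZE=256/' "$SRC"
grep HADOOP_HEAPSIZE "$SRC"
```

After a change like this, the affected daemons need to be restarted for the new heap size to take effect.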


