Few Questions regarding the section "Starting HDP Services"


This topic contains 3 replies, has 3 voices, and was last updated by Sasha J 2 years, 9 months ago.

  • Creator
  • #6115

    Hee Min Yoo

    Hi, I am referring to this page:


    I am using VMware Workstation 6 with CentOS 5.8 (final).
    The setup is single-node pseudo-distributed mode.
    I wanted to install with the HMC installer, but I ran into a lot of trouble and switched to gsInstaller.

    The installation went well, and all the smoke tests passed.

    After a reboot, I tried to start the services again using the commands from


    and saw that it was not going well…

    For example, take this command:
    su -l hdfs -c "/usr/sbin/hadoop-daemon.sh --config /etc/hadoop start namenode"

    Shouldn't this be:
    su -l hdfs -c "{$yourhadoophome}/bin/hadoop-daemon.sh --config /etc/hadoop/conf.empty start namenode"

    It seemed like the hadoop-daemon.sh script wants the directory that actually contains the conf files, not a directory two levels above it (in this case /etc/hadoop).
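    One way to sanity-check a candidate for --config is to verify it directly contains the usual Hadoop conf files before handing it to hadoop-daemon.sh. A minimal sketch (the helper name and the paths checked are assumptions matching the poster's layout, not something from the guide):

```shell
# Minimal sketch: a --config directory must directly contain the
# Hadoop conf files (core-site.xml etc.), not hold them in a subdir.
valid_conf_dir() {
  [ -f "$1/core-site.xml" ]
}

# /etc/hadoop itself fails this check on the poster's box;
# /etc/hadoop/conf.empty (where gsInstaller wrote the files) passes.
for d in /etc/hadoop /etc/hadoop/conf.empty; do
  if valid_conf_dir "$d"; then
    echo "usable for --config: $d"
  else
    echo "not a conf dir: $d"
  fi
done
```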

    I am not sure if this is because I am using pseudo-distributed mode, but I was not able to do this with the command given in the guide.

    So my question is: did I set up the path wrong in the initial configuration? I thought I was following the guide, and I kept my folders and configuration at the defaults to keep everything simple.

    If you can clarify this point, that would be great.


Viewing 3 replies - 1 through 3 (of 3 total)


  • Author
  • #6229

    Sasha J

    Hi Hee,

    When you attempted the single-node install with the graphical HMC installer, did you use the defaults for heap sizes? If so, you may not have had enough memory to run everything on a single VM.

    You can try to increase the memory on the VM, or decrease the heaps to fit them all in memory. Otherwise your VM may enter a state where it is paging virtually all the time, and that can cause issues.

    With gsInstaller, you can alternatively choose not to install everything on your VM, and ensure that at least the core components work (i.e. HDFS, MapReduce, ZooKeeper, HBase, HCatalog, Hive, Templeton).

    Let us know how it goes or if you need further assistance.
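    For reference, heap trimming of this kind is usually done in hadoop-env.sh. A sketch of what that might look like; the 256/128 MB figures are illustrative assumptions for a small VM, not recommended values:

```shell
# Hypothetical hadoop-env.sh fragment for a memory-constrained
# single-node VM. Default daemon heaps are often 1000 MB each;
# the values below are assumptions, tune them for your workload.
export HADOOP_HEAPSIZE=256                                   # MB, baseline for all daemons
export HADOOP_NAMENODE_OPTS="-Xmx256m ${HADOOP_NAMENODE_OPTS}"
export HADOOP_DATANODE_OPTS="-Xmx128m ${HADOOP_DATANODE_OPTS}"
```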



    Hee Min Yoo

    Yep, I will try to post that, but roughly I remember it was a Puppet issue.

    I have never used Puppet, so I cannot debug it (something like "puppet could not be kicked").

    I will try to get the logs etc. when I reinstall with HMC.

    Anyhow, I think the fixed version works great. I just wanted to clarify that the only reason I used conf.empty was that, in my setup, the installer put all the values in that folder, so it became my configuration folder, and conf is a symlink to conf.empty.
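    That conf → conf.empty arrangement can be reproduced in miniature under a temp directory (safe to run anywhere; the layout is inferred from the post, not from gsInstaller documentation), which shows why either path works as a --config argument:

```shell
# Reproduce the described layout under a temp dir: the live config
# lives in conf.empty, and conf is a symlink pointing at it.
root=$(mktemp -d)
mkdir -p "$root/etc/hadoop/conf.empty"
touch "$root/etc/hadoop/conf.empty/core-site.xml"
ln -s "$root/etc/hadoop/conf.empty" "$root/etc/hadoop/conf"

# The same file is reachable through either name:
ls "$root/etc/hadoop/conf/core-site.xml"
```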



    Sorry to hear that you could not use HMC for installation. We would appreciate it if you could give us some feedback on the issues you faced when installing with HMC.

    The namenode can be started using the following command:
    su -l hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode"

    Thanks for pointing this out, we will correct the documents.
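    After running that command, one way to confirm the daemon stayed up is to check the pid file hadoop-daemon.sh writes. A sketch only: the pid-dir default and file-name pattern below follow common Hadoop 1.x conventions ($HADOOP_PID_DIR, default /tmp, file hadoop-<user>-<command>.pid) and may differ on a given install:

```shell
# Sketch: report whether a daemon recorded in a hadoop-daemon.sh
# pid file is still alive. File locations are assumptions; check
# hadoop-env.sh for the actual HADOOP_PID_DIR on your system.
daemon_running() {
  local pid_file="$1"
  [ -f "$pid_file" ] && kill -0 "$(cat "$pid_file")" 2>/dev/null
}

if daemon_running "${HADOOP_PID_DIR:-/tmp}/hadoop-hdfs-namenode.pid"; then
  echo "namenode is up"
fi
```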
