Few Questions regarding the section "Starting HDP Services"

This topic contains 3 replies, has 3 voices, and was last updated by Sasha J 1 year, 10 months ago.

  • Topic #6115

    Hee Min Yoo
    Member

    Hi, I am referring to this page:

    http://docs.hortonworks.com/CURRENT/index.htm#Deploying_Hortonworks_Data_Platform/Using_gsInstaller/Configuring_Local_Mirror_Repository.htm

    I am using VMware Workstation 6 with CentOS 5.8 (Final).
    The setup is single-node, pseudo-distributed mode.
    I originally wanted to install with the HMC installer, but I ran into a lot of trouble and switched to gsInstaller.

    The installation went well and passed all the smoke tests.

    After a reboot, I tried to start the services again using the commands from

    http://docs.hortonworks.com/CURRENT/index.htm#Deploying_Hortonworks_Data_Platform/Using_gsInstaller/Configuring_Local_Mirror_Repository.htm

    and found that they did not work as expected.

    For example, this command:
    su -l hdfs -c "/usr/sbin/hadoop-daemon.sh --config /etc/hadoop start namenode"

    Shouldn't this be:
    su -l hdfs -c "{$yourhadoophome}/bin/hadoop-daemon.sh --config /etc/hadoop/conf.empty start namenode"

    It seems the hadoop-daemon.sh script wants the directory where the actual conf files live, not a directory two levels above them (in this case /etc/hadoop).
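
    For reference, here is a quick way to check which directory --config should point at (this assumes the usual /etc/hadoop layout, where conf may be a symlink as it is on my machine):

    ls -ld /etc/hadoop/conf    # often a symlink, e.g. conf -> conf.empty
    ls /etc/hadoop/conf        # should list core-site.xml, hdfs-site.xml, etc.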

    I am not sure whether this is because I am running in pseudo-distributed mode, but I was not able to start the namenode with the command given in the guide.
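
    For what it is worth, this is how I checked whether the namenode actually came up (jps ships with the JDK; the log path is my guess based on the default HDP log directory, so adjust it for your install):

    jps    # NameNode should appear among the running JVMs
    tail -n 50 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log    # look for startup or bind errors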

    So my question is: did I set up the path incorrectly in the initial configuration? I thought I was following the guide, and I kept my folders and configuration at the defaults to keep everything simple.

    If you can clear up this point, that would be great.

    Thanks
    Min



  • Reply #6229

    Sasha J
    Moderator

    Hi Hee,

    When you attempted the single-node install with the graphical HMC installer, did you use all the defaults for heap sizes? If so, you may not have had enough memory to run everything on a single VM.

    You can try to increase the memory on the VM, or decrease the heaps to fit them all in memory; otherwise your VM may end up paging virtually all the time, and that could cause issues.
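
    For example, you could set something like this in hadoop-env.sh (a sketch only; /etc/hadoop/conf/hadoop-env.sh assumes the default layout, and 256 MB is just an illustrative value to try):

    # in /etc/hadoop/conf/hadoop-env.sh
    export HADOOP_HEAPSIZE=256    # daemon heap in MB; the stock default is 1000
    export HADOOP_NAMENODE_OPTS="-Xmx256m ${HADOOP_NAMENODE_OPTS}"    # cap the NameNode heap as well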

    With gsInstaller, you can alternatively choose not to install everything on your VM and ensure that at least the core components work (i.e., HDFS, MapReduce, ZooKeeper, HBase, HCatalog, Hive, Templeton).

    Let us know how it goes or if you need further assistance.

    Sasha

  • Reply #6117

    Hee Min Yoo
    Member

    Yep, I will try to post that, but from what I remember it was a Puppet issue.

    I have never used Puppet, so I could not debug it (the error was something like "puppet cannot be kicked").

    I will try to collect logs, etc., when I reinstall with HMC.

    Anyhow, I think the fixed version works great. I just wanted to clarify that the only reason I used conf.empty is that the setup put all the configuration values in that folder, so it became my configuration folder, and conf is linked to conf.empty.

    Thanks

  • Reply #6116

    Sorry to hear that you could not use HMC for the installation. We would appreciate it if you could give us some feedback on the issues you faced installing with HMC.

    The NameNode can be started using the following command:
    su -l hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode"
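
    By the same pattern, the other core daemons can be started like this (a sketch, assuming the stock HDP service accounts hdfs and mapred and the default install paths):

    su -l hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode"
    su -l hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start secondarynamenode"
    su -l mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start jobtracker"
    su -l mapred -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start tasktracker"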

    Thanks for pointing this out; we will correct the documentation.
