HDP on Linux – Installation Forum

A few questions regarding the section "Starting HDP Services"

  • #6115
    Hee Min Yoo
    Member

Hi, I am referring to this page:
    http://docs.hortonworks.com/CURRENT/index.htm#Deploying_Hortonworks_Data_Platform/Using_gsInstaller/Configuring_Local_Mirror_Repository.htm

I am using VMware Workstation 6 with CentOS 5.8 (Final).
The setup is single-node pseudo-distributed mode.
I wanted to install with the HMC installer, but I ran into a lot of trouble and switched to gsInstaller.

The installation went well and passed all the smoke tests.

After a reboot, I tried to start the services again using the commands from
    http://docs.hortonworks.com/CURRENT/index.htm#Deploying_Hortonworks_Data_Platform/Using_gsInstaller/Configuring_Local_Mirror_Repository.htm

and found that it was not working…

For example, this command:
su -l hdfs -c "/usr/sbin/hadoop-daemon.sh --config /etc/hadoop start namenode"

Shouldn't this be:
su -l hdfs -c "{$yourhadoophome}/bin/hadoop-daemon.sh --config /etc/hadoop/conf.empty start namenode"

It seems like the hadoop-daemon.sh script wants the directory that directly contains the actual conf files, not a directory above it (in this case /etc/hadoop).
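Just to illustrate what I mean, here is roughly what I see on my VM (the paths and file listings are from my own setup, so they may differ elsewhere):

# hadoop-daemon.sh expects --config to point at the directory that directly
# holds the *-site.xml files, not the directory above it
ls /etc/hadoop/conf.empty
# core-site.xml  hdfs-site.xml  mapred-site.xml  hadoop-env.sh  ...
ls /etc/hadoop
# conf  conf.empty  ...   <- no site files at this level, so --config /etc/hadoop does not work for me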

I am not sure if this is because I am using pseudo-distributed mode, but I was not able to start the NameNode with the command given in the guide.

So my question is: did I set up the path wrong in the configuration initially? I thought I was following the guide, and I kept my folders and configuration at the defaults to keep everything simple.

If you could clear up this point, that would be great.

    Thanks
    Min


  • #6116

Sorry to hear that you could not use HMC for installation. We would appreciate it if you could give us some feedback on the issues you faced while installing with HMC.

The NameNode can be started using the following command:
    su -l hdfs -c "/usr/lib/hadoop/bin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode"

Thanks for pointing this out; we will correct the documentation.

    #6117
    Hee Min Yoo
    Member

Yep, I will try to post that. Roughly, I remember it was a Puppet issue.

I have never used Puppet, so I could not debug it (something like "puppet could not be kicked").

I will try to collect the logs and other details when I reinstall with HMC.

Anyhow, I think the corrected command works great. I just wanted to clarify that the only reason I used conf.empty was that in my setup the installer put all the values in that folder, so it became my configuration folder, and conf is linked to conf.empty.
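To show what I mean about the link (the output below is from my VM, so yours may look different):

ls -ld /etc/hadoop/conf
# lrwxrwxrwx 1 root root ... /etc/hadoop/conf -> /etc/hadoop/conf.empty
# so --config /etc/hadoop/conf and --config /etc/hadoop/conf.empty resolve to the same files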

    Thanks

    #6229
    Sasha J
    Moderator

    Hi Hee,

When you attempted the single-node install with the graphical HMC installer, did you use all the defaults for heap sizes? If so, you may not have had enough memory to run everything on a single VM.

You can try increasing the memory on the VM, or decrease the heaps to fit everything in memory; otherwise the VM may enter a state where it is paging constantly, and that can cause issues.
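For example, you can check how much memory the VM really has and trim the daemon heaps in hadoop-env.sh. The 256 MB values below are only an illustration of where the settings live, not a recommendation:

free -m                                      # total and free RAM on the VM
grep -i heap /etc/hadoop/conf/hadoop-env.sh  # see the current heap settings
# in hadoop-env.sh, something like:
export HADOOP_HEAPSIZE=256                   # default heap (MB) for each Hadoop daemon
export HADOOP_NAMENODE_OPTS="-Xmx256m ${HADOOP_NAMENODE_OPTS}"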

With gsInstaller, you can alternatively choose not to install everything on your VM and make sure that at least the core components work (i.e. HDFS, MapReduce, ZooKeeper, HBase, HCatalog, Hive, Templeton); a quick check like the one below can confirm the basics are responding.
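As a rough sanity check for a couple of the core pieces once they are up (assuming the default hdfs and mapred service users; adjust for your setup), something like this should respond without errors:

su -l hdfs -c "hadoop dfs -ls /"        # HDFS answers a listing request
su -l mapred -c "hadoop job -list"      # the JobTracker answers a job listing request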

Let us know how it goes, or if you need further assistance.

    Sasha

