Installing single node HDP on VMware

This topic contains 17 replies, has 2 voices, and was last updated by Max 1 year, 8 months ago.

  • Creator
    Topic
  • #8078

    Max
    Member

    Is there a recommended CentOS configuration and version, and/or is one available to download as a VMware virtual machine?

    I installed a single node HDP-1.0.1.14 on a virtual CentOS 6.3 machine (VMware). I managed to get a clean install. However, when I restart the services I get the following errors:

    Existing PID file found during start.
    Tomcat appears to still be running with PID 2521. Start aborted.

    ERROR: Oozie start aborted
    .
    .
    .
    .
    Failed to start /usr/sbin/gmond for cluster HDPHBaseMaster
    Failed to start /usr/sbin/gmetad

    Any assistance is greatly appreciated.

    Thanks,
    Max.

    ps. Here is the full log:

    [root@localhost gsInstaller]# sh startHDP.sh

    **************** Starting Hdfs Components Like Namenode, Secondary Namenode and Data nodes ***************
    **************** Starting Name Node ***************
    starting namenode, logging to /usr/hdp/disk0/data/HDP/hadoop/log_dir/hdfs/hadoop-hdfs-namenode-localhost.localdomain.out
    2012-08-09 10:23:37,508 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = localhost.localdomain/127.0.0.1
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.3.14
    STARTUP_MSG: build = -r ; compiled by ‘jenkins’ on Fri Jul 27 04:53:12 PDT 2012
    ************************************************************/
    2012-08-09 10:23:37,827 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
    2012-08-09 10:23:37,897 INFO org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink ganglia started
    2012-08-09 10:23:38,075 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.

    **************** Starting Nagios and Snmpd Services ***************
    Stopping snmpd: [ OK ]
    Starting snmpd: [ OK ]
    Stopping nagios: [ OK ]
    Starting nagios: [ OK ]

    **************** Starting the Ganglia Services ***************
    ==================================
    Shutting down hdp-gmond…
    ==================================

    =============================
    Starting hdp-gmond…
    =============================
    Failed to start /usr/sbin/gmond for cluster HDPHBaseMaster

    ==================================
    Shutting down hdp-gmetad…
    ==================================

    =============================
    Starting hdp-gmetad…
    =============================
    Started /usr/bin/rrdcached with PID 11064
    Failed to start /usr/sbin/gmetad

    Stopping httpd: [ OK ]
    Starting httpd: [Thu Aug 09 10:29:25 2012] [warn] The Alias directive in /etc/httpd/conf.d/hdp_mon_nagios_addons.conf at line 1 will probably never match because it overlaps an earlier Alias.
    [ OK ]

    **************** Service Associated With Ip Ports ***************
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
    tcp 0 0 127.0.0.1:45926 0.0.0.0:* LISTEN 8919/java
    tcp 0 0 127.0.0.1:199 0.0.0.0:* LISTEN 10926/snmpd
    tcp 0 0 127.0.0.1:51111 0.0.0.0:* LISTEN 8604/java
    tcp 0 0 0.0.0.0:8649 0.0.0.0:* LISTEN 2329/gmond
    tcp 0 0 0.0.0.0:8010 0.0.0.0:* LISTEN 5901/java
    tcp 0 0 127.0.0.1:50090 0.0.0.0:* LISTEN 5540/java
    tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 2255/mysqld
    tcp 0 0 0.0.0.0:8651 0.0.0.0:* LISTEN 1686/gmetad
    tcp 0 0 0.0.0.0:50060 0.0.0.0:* LISTEN 8919/java
    tcp 0 0 0.0.0.0:8652 0.0.0.0:* LISTEN 1686/gmetad
    tcp 0 0 127.0.0.1:50030 0.0.0.0:* LISTEN 8171/java
    tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1613/rpcbind
    tcp 0 0 127.0.0.1:8020 0.0.0.0:* LISTEN 5087/java
    tcp 0 0 127.0.0.1:50070 0.0.0.0:* LISTEN 5087/java
    tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 2130/sshd
    tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1683/cupsd
    tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 2304/postmaster
    tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2409/master
    tcp 0 0 0.0.0.0:50010 0.0.0.0:* LISTEN 5901/java
    tcp 0 0 0.0.0.0:9083 0.0.0.0:* LISTEN 9263/java
    tcp 0 0 0.0.0.0:50075 0.0.0.0:* LISTEN 5901/java
    tcp 0 0 127.0.0.1:50300 0.0.0.0:* LISTEN 8171/java
    tcp 0 0 0.0.0.0:50111 0.0.0.0:* LISTEN 10722/java
    tcp 0 0 0.0.0.0:46882 0.0.0.0:* LISTEN 1849/rpc.statd
    tcp 0 0 :::2181 :::* LISTEN 9711/java
    tcp 0 0 ::ffff:127.0.0.1:8005 :::* LISTEN 2521/java
    tcp 0 0 ::ffff:127.0.0.1:60010 :::* LISTEN 10194/java
    tcp 0 0 :::37102 :::* LISTEN 9711/java
    tcp 0 0 :::51151 :::* LISTEN 1849/rpc.statd
    tcp 0 0 :::111 :::* LISTEN 1613/rpcbind
    tcp 0 0 :::80 :::* LISTEN 11103/httpd
    tcp 0 0 ::ffff:127.0.0.1:60020 :::* LISTEN 9799/java
    tcp 0 0 :::22 :::* LISTEN 2130/sshd
    tcp 0 0 ::1:631 :::* LISTEN 1683/cupsd
    tcp 0 0 :::11000 :::* LISTEN 2521/java
    tcp 0 0 :::60030 :::* LISTEN 9799/java
    tcp 0 0 ::ffff:127.0.0.1:60000 :::* LISTEN 10194/java

    **************** Java Process ***************
    10194 hbase -XX:OnOutOfMemoryError=kill
    10722 2001 -Dproc_jar
    2521 oozie -Djava.util.logging.config.file=/var/lib/oozie/oozie-server/conf/logging.properties
    5087 hdfs -Dproc_namenode
    5540 hdfs -Dproc_secondarynamenode
    5901 hdfs -Dproc_datanode
    8171 mapred -Dproc_jobtracker
    8604 mapred -Dproc_historyserver
    8919 mapred -Dproc_tasktracker
    9263 hive -Dproc_jar
    9711 2005 -Dzookeeper.log.dir=/usr/hdp/disk0/data/HDP/zk_log_dir
    9799 hbase -XX:OnOutOfMemoryError=kill


  • Author
    Replies
  • #8159

    Max
    Member

    I got it. In my case, simply browsing to http://localhost.localdomain/hmc/html is all that is needed.

    Max.

    #8157

    Max
    Member

    I went through the installation, and now when I browse to http://node1/hmc/html I get nothing. Any ideas?

    Thanks,
    Max.
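
    A quick way to narrow this down, as a minimal sketch (it assumes Apache httpd serves the HMC page under /hmc/html, as elsewhere in this thread), is to check whether httpd is up at all and which host name actually answers:

    service httpd status
    netstat -tlnp | grep ':80 '
    curl -I http://localhost.localdomain/hmc/html
    curl -I http://node1/hmc/html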

    #8099

    Max
    Member

    You have been a great help! Thank you Sasha.

    Now hostname = node1 and hostname -f = node1.localdomain

    In preparing for installation:

    Step 4 – Disable SELinux. I am planning on doing that with: echo 0 > /selinux/enforce. Is that okay?

    Step 5 – Enable NTP. Since I am installing on a single node, do we need this?

    Regards,
    Max.
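
    For reference, a minimal sketch of both steps on CentOS 6 (based on standard CentOS administration rather than the HDP documentation): echo 0 > /selinux/enforce (or setenforce 0) only disables SELinux until the next reboot, so a persistent change also means editing /etc/selinux/config. NTP mainly keeps clocks in sync across nodes, so on a single node it is optional but harmless to enable:

    # disable SELinux for the running system
    setenforce 0
    # make the change persistent across reboots
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
    # enable and start NTP (optional on a single node)
    chkconfig ntpd on
    service ntpd start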

    #8098

    Sasha J
    Moderator

    Oh, you should also edit the /etc/sysconfig/network file and put node1 in there…
    Sorry, I forgot to mention this before…

    Thank you!
    Sasha
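
    A sketch of what that edit usually looks like on CentOS 6 (the HOSTNAME value below is an assumption matching the node name used in this thread):

    # /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=node1

    # apply it to the running system without waiting for a reboot
    hostname node1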

    #8096

    Max
    Member

    Got it :) Thanks!

    I added that. The first time, the content of the file was:
    192.168.61.128 node1
    127.0.0.1 localhost.localdomain localhost
    ::1 localhost6.localdomain6 localhost6

    and then I changed it to:
    192.168.61.128 node1.localdomain node1
    127.0.0.1 localhost.localdomain localhost
    ::1 localhost6.localdomain6 localhost6

    I have rebooted the system each time, and I still get “localhost.localdomain” when I run hostname. Is this make-or-break for getting the single node installation to work? If so, I found how to change the hostname (http://www.howtogeek.com/wiki/Change_the_Hostname_on_a_Redhat_Linux_Machine), but there is only one entry in that file and no IP address. Any ideas?

    Thanks,
    Max.

    #8094

    Sasha J
    Moderator

    No, do not replace the whole file, just add this line to it.
    NEVER remove the 127.0.0.1 line from there; it will kill your system.

    Thank you!
    Sasha

    #8093

    Max
    Member

    Thank you, Sasha, for your help!

    I will follow your recommendation and use HMC to do a fresh install.

    Just to be clear :) are you suggesting replacing the content of the /etc/hosts file with “192.168.61.128 node1” prior to the HMC install?

    Thanks again,
    Max.

    #8091

    Sasha J
    Moderator

    OK,
    at least you should name your node somehow (like “node1”) and make sure you put this name, along with the IP address, in your /etc/hosts file:
    192.168.61.128 node1

    Your “hostname” command should return “node1” after this change.

    For the HMC instructions, go to http://hortonworks.com/download/
    Then click on the HDP 1.0 Installation Instructions link (it is in the shape of a button)…

    Then follow it.

    And, of course, ask questions if you have any.
    But make sure the node name resolves normally, and do not use localhost.

    Thank you!
    Sasha
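
    A minimal way to verify that the change took effect (a sketch; the name and IP are the ones used in this thread):

    # after adding "192.168.61.128 node1" to /etc/hosts
    hostname
    hostname -f
    getent hosts node1
    ping -c 1 node1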

    #8088

    Max
    Member

    The install process created a lot of files in /tmp.

    Here is the result of what you asked for:

    hostname = localhost.localdomain

    hostname -f = localhost.localdomain

    ifconfig =
    eth5 Link encap:Ethernet HWaddr 00:0C:29:91:7A:FB
    inet addr:192.168.61.128 Bcast:192.168.61.255 Mask:255.255.255.0
    inet6 addr: fe80::20c:29ff:fe91:7afb/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:37277 errors:0 dropped:0 overruns:0 frame:0
    TX packets:35753 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:28281058 (26.9 MiB) TX bytes:6080039 (5.7 MiB)

    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:16436 Metric:1
    RX packets:3419817 errors:0 dropped:0 overruns:0 frame:0
    TX packets:3419817 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:625793967 (596.8 MiB) TX bytes:625793967 (596.8 MiB)

    cat /etc/hosts =
    127.0.0.1 localhost.localdomain localhost
    ::1 localhost6.localdomain6 localhost6

    Would you suggest I start over again?

    For a single node installation using HMC, would you please provide step-by-step instructions for what needs to be done, as the HMC installation link you provided does not make that distinction?

    Thanks,
    Max.

    #8087

    Sasha J
    Moderator

    Oh, yes…
    That script assumes HMC installation…
    Did it create anything in /tmp?
    In any case, take a look at HMC; it is the recommended and only supported way to install the cluster,
    and try to stop/start the services one more time.
    Ganglia problems may be related to name resolution.
    Could you give me some more details on this?
    I need to see output from the following commands:
    hostname
    hostname -f
    ifconfig
    cat /etc/hosts

    Thank you!
    Sasha
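
    For reference, the stop/start itself is just the gsInstaller scripts already used earlier in this thread; a sketch, with the directory being wherever gsInstaller was unpacked:

    cd /path/to/gsInstaller
    sh stopHDP.sh
    sh startHDP.sh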

    #8086

    Max
    Member

    Okay. Now I am really confused. I used what was provided in your documentation on how to do a single node installation:

    http://docs.hortonworks.com/CURRENT/index.htm#Deploying_Hortonworks_Data_Platform/Using_gsInstaller/Deploying_Single_Node_Cluster/Instructions_Single_Node.htm#XREF_66958_Configure_HDP

    So – what to do now?

    Thanks,
    Max.

    #8084

    Max
    Member

    Sasha,

    Sorry to say, but the script the above is asking me to run does not run. Can you provide a list of components I need to install and configure before running the debug script, as I don’t have a /var/db/hmc directory?

    Here is what I get back:
    [root@localhost ~]# sh debugHDP.sh
    Error: unable to open database “/var/db/hmc/data/data.db”: unable to open database file
    Error: unable to open database “/var/db/hmc/data/data.db”: unable to open database file
    Error: unable to open database “/var/db/hmc/data/data.db”: unable to open database file
    Error: unable to open database “/var/db/hmc/data/data.db”: unable to open database file
    Error: unable to open database “/var/db/hmc/data/data.db”: unable to open database file
    grep: /var/log/hmc/hmc.log: No such file or directory
    Resulting file is: /tmp/…out
    Please, upload it to Hortonworks Support FTP site.

    Thanks,
    Max.

    #8083

    Sasha J
    Moderator

    Try to restart the services first.
    The start script may clean PID files automatically.

    There is one critical thing:
    The gsInstaller you use is NOT the recommended way to install the cluster; HMC is the preferable way and is fully supported.

    Please, check it out:

    http://hortonworks.com/download/thankyou_hdp1a/

    Thank you!
    Sasha

    #8082

    Max
    Member

    Furthermore,

    I shut down the services using stopHDP.sh and ran the following:
    [root@localhost /]# cd /usr/hdp/disk0/data/HDP
    [root@localhost HDP]# find -name ‘*.pid’
    ./hadoop/pid_dir/mapred/hadoop-mapred-jobtracker.pid
    ./hadoop/pid_dir/mapred/hadoop-mapred-tasktracker.pid
    ./hadoop/pid_dir/mapred/hadoop-mapred-historyserver.pid
    ./hadoop/pid_dir/hdfs/hadoop-hdfs-datanode.pid
    ./hadoop/pid_dir/hdfs/hadoop-hdfs-namenode.pid
    ./hadoop/pid_dir/hdfs/hadoop-hdfs-secondarynamenode.pid
    ./templeton_pid_dir/templeton.pid
    [root@localhost HDP]#

    There are other PID files, but the one for Oozie does not exist. Are you suggesting deleting all the PID files listed above after a shutdown?

    Thanks again,
    Max.
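
    If the goal is only to clear stale PID files, a safer variant than deleting them all is to remove just the ones whose process is no longer alive; a sketch over the files listed above:

    cd /usr/hdp/disk0/data/HDP
    for f in $(find . -name '*.pid'); do
        pid=$(cat "$f")
        if kill -0 "$pid" 2>/dev/null; then
            echo "still running: $pid ($f)"   # leave this PID file alone
        else
            echo "stale, removing $f"
            rm -f "$f"
        fi
    done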

    #8081

    Sasha J
    Moderator

    Max,
    please, do as described here:

    http://hortonworks.com/community/forums/topic/hmc-installation-support-help-us-help-you/

    Upload all the results and let me know of the file name.

    Thank you!
    Sasha

    #8080

    Max
    Member

    Thank you for the quick reply, Sasha.

    I ran the following and am still getting the Ganglia error:

    [root@localhost /]# /etc/init.d/hdp-gmond restart
    ==================================
    Shutting down hdp-gmond…
    ==================================

    =============================
    Starting hdp-gmond…
    =============================
    Failed to start /usr/sbin/gmond for cluster HDPHBaseMaster

    Regards,
    Max.

    #8079

    Sasha J
    Moderator

    Max,
    sometimes, not all services are stopped cleanly…
    This is a known reported issue; our engineering team is working on fixing it.
    For Oozie, check whether the process is indeed running and start it if needed. You may need to delete an obsolete PID file.

    For Ganglia, use “/etc/init.d/hdp-gmond restart”

    This should fix the Ganglia monitors’ status.

    Thank you!
    Sasha
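
    For the Oozie part, a minimal check along those lines (a sketch; PID 2521 is the one reported in the startup error above, and whether to stop it or clean up depends on what ps shows):

    ps -fp 2521          # is this really the leftover Oozie/Tomcat process?
    kill 2521            # stop it if it should not be running
    # if ps shows no such process, the PID file it left behind is stale and can be deleted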
