Installing single node HDP on VMware

This topic contains 17 replies, has 2 voices, and was last updated by Max 2 years, 4 months ago.


    Max
    Member

    Is there a recommended CentOS configuration and version, and/or is there a pre-built VMware virtual machine available for download?

    I installed single-node HDP-1.0.1.14 on a virtual CentOS 6.3 machine (VMware). The installation itself completed cleanly. However, when I restart the services I get the following errors:

    Existing PID file found during start.
    Tomcat appears to still be running with PID 2521. Start aborted.

    ERROR: Oozie start aborted
    .
    .
    .
    .
    Failed to start /usr/sbin/gmond for cluster HDPHBaseMaster
    Failed to start /usr/sbin/gmetad
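
    The Oozie message suggests that the previous Tomcat instance never shut down, so the start script finds a stale PID file. Before re-running startHDP.sh I have been clearing it manually, roughly as below; the PID-file path is my assumption based on the default oozie-server layout, so it may differ on other installs:

    # confirm the old Oozie Tomcat (PID 2521 from the error) is still running
    ps -fp 2521
    # if it is, stop it, then remove the stale PID file so the next start is not aborted
    kill 2521
    # assumed default PID-file location under the oozie-server directory
    rm -f /var/lib/oozie/oozie-server/temp/oozie.pid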

    Any assistance is greatly appreciated.

    Thanks,
    Max.

    P.S. Here is the full log:

    [root@localhost gsInstaller]# sh startHDP.sh

    **************** Starting Hdfs Components Like Namenode, Secondary Namenode and Data nodes ***************
    **************** Starting Name Node ***************
    starting namenode, logging to /usr/hdp/disk0/data/HDP/hadoop/log_dir/hdfs/hadoop-hdfs-namenode-localhost.localdomain.out
    2012-08-09 10:23:37,508 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = localhost.localdomain/127.0.0.1
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.3.14
    STARTUP_MSG: build = -r ; compiled by 'jenkins' on Fri Jul 27 04:53:12 PDT 2012
    ************************************************************/
    2012-08-09 10:23:37,827 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
    2012-08-09 10:23:37,897 INFO org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink ganglia started
    2012-08-09 10:23:38,075 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.

    **************** Starting Nagios and Snmpd Services ***************
    Stopping snmpd: [ OK ]
    Starting snmpd: [ OK ]
    Stopping nagios: [ OK ]
    Starting nagios: [ OK ]

    **************** Starting the Ganglia Services ***************
    ==================================
    Shutting down hdp-gmond…
    ==================================

    =============================
    Starting hdp-gmond…
    =============================
    Failed to start /usr/sbin/gmond for cluster HDPHBaseMaster

    ==================================
    Shutting down hdp-gmetad…
    ==================================

    =============================
    Starting hdp-gmetad…
    =============================
    Started /usr/bin/rrdcached with PID 11064
    Failed to start /usr/sbin/gmetad

    Stopping httpd: [ OK ]
    Starting httpd: [Thu Aug 09 10:29:25 2012] [warn] The Alias directive in /etc/httpd/conf.d/hdp_mon_nagios_addons.conf at line 1 will probably never match because it overlaps an earlier Alias.
    [ OK ]

    **************** Service Associated With Ip Ports ***************
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
    tcp 0 0 127.0.0.1:45926 0.0.0.0:* LISTEN 8919/java
    tcp 0 0 127.0.0.1:199 0.0.0.0:* LISTEN 10926/snmpd
    tcp 0 0 127.0.0.1:51111 0.0.0.0:* LISTEN 8604/java
    tcp 0 0 0.0.0.0:8649 0.0.0.0:* LISTEN 2329/gmond
    tcp 0 0 0.0.0.0:8010 0.0.0.0:* LISTEN 5901/java
    tcp 0 0 127.0.0.1:50090 0.0.0.0:* LISTEN 5540/java
    tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 2255/mysqld
    tcp 0 0 0.0.0.0:8651 0.0.0.0:* LISTEN 1686/gmetad
    tcp 0 0 0.0.0.0:50060 0.0.0.0:* LISTEN 8919/java
    tcp 0 0 0.0.0.0:8652 0.0.0.0:* LISTEN 1686/gmetad
    tcp 0 0 127.0.0.1:50030 0.0.0.0:* LISTEN 8171/java
    tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1613/rpcbind
    tcp 0 0 127.0.0.1:8020 0.0.0.0:* LISTEN 5087/java
    tcp 0 0 127.0.0.1:50070 0.0.0.0:* LISTEN 5087/java
    tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 2130/sshd
    tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1683/cupsd
    tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 2304/postmaster
    tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2409/master
    tcp 0 0 0.0.0.0:50010 0.0.0.0:* LISTEN 5901/java
    tcp 0 0 0.0.0.0:9083 0.0.0.0:* LISTEN 9263/java
    tcp 0 0 0.0.0.0:50075 0.0.0.0:* LISTEN 5901/java
    tcp 0 0 127.0.0.1:50300 0.0.0.0:* LISTEN 8171/java
    tcp 0 0 0.0.0.0:50111 0.0.0.0:* LISTEN 10722/java
    tcp 0 0 0.0.0.0:46882 0.0.0.0:* LISTEN 1849/rpc.statd
    tcp 0 0 :::2181 :::* LISTEN 9711/java
    tcp 0 0 ::ffff:127.0.0.1:8005 :::* LISTEN 2521/java
    tcp 0 0 ::ffff:127.0.0.1:60010 :::* LISTEN 10194/java
    tcp 0 0 :::37102 :::* LISTEN 9711/java
    tcp 0 0 :::51151 :::* LISTEN 1849/rpc.statd
    tcp 0 0 :::111 :::* LISTEN 1613/rpcbind
    tcp 0 0 :::80 :::* LISTEN 11103/httpd
    tcp 0 0 ::ffff:127.0.0.1:60020 :::* LISTEN 9799/java
    tcp 0 0 :::22 :::* LISTEN 2130/sshd
    tcp 0 0 ::1:631 :::* LISTEN 1683/cupsd
    tcp 0 0 :::11000 :::* LISTEN 2521/java
    tcp 0 0 :::60030 :::* LISTEN 9799/java
    tcp 0 0 ::ffff:127.0.0.1:60000 :::* LISTEN 10194/java

    **************** Java Process ***************
    10194 hbase -XX:OnOutOfMemoryError=kill
    10722 2001 -Dproc_jar
    2521 oozie -Djava.util.logging.config.file=/var/lib/oozie/oozie-server/conf/logging.properties
    5087 hdfs -Dproc_namenode
    5540 hdfs -Dproc_secondarynamenode
    5901 hdfs -Dproc_datanode
    8171 mapred -Dproc_jobtracker
    8604 mapred -Dproc_historyserver
    8919 mapred -Dproc_tasktracker
    9263 hive -Dproc_jar
    9711 2005 -Dzookeeper.log.dir=/usr/hdp/disk0/data/HDP/zk_log_dir
    9799 hbase -XX:OnOutOfMemoryError=kill
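
    From the port listing above, PID 2521 (the old Oozie Tomcat) is still listening on 8005 and 11000, which matches the stale PID file error. To dig into the Ganglia failures I have been running the daemons in the foreground with debug output; note that the gmond config path below is only a guess based on the HDP layout:

    # reproduce the port listing and confirm the old Tomcat is still bound
    netstat -tlnp | grep 2521
    # run gmond in the foreground with debugging to see why the HDPHBaseMaster instance fails
    # (config path is an assumption; adjust to wherever gsInstaller put gmond.conf)
    /usr/sbin/gmond -d 2 -c /etc/ganglia/hdp/HDPHBaseMaster/gmond.conf
    # same idea for gmetad
    /usr/sbin/gmetad -d 2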
