
The legacy Hortonworks Forum is now closed. You can view a read-only version of the former site by clicking here. The site will be taken offline on January 31, 2016.

HDP on Linux – Installation Forum

Ganglia not working on HDP 1.2

  • #20007
    Pal J

    I have successfully installed HDP on a VM with CentOS 6.4 (64-bit) as the OS. After successfully configuring the cluster via the Ambari portal, all services were started (showing a green dot). (Note: both host and cluster are on the same VM.)
    On the Dashboard tab/page, "Cluster Metrics" were blank with the message "No Data. There was no data available. Possible reasons include inaccessible Ganglia Service." Clicked on the Services tab to check the "Ganglia" service; the service was started, with the below messages:

    Ganglia Collector [gmond] process down alert for HBase Master
    Ganglia Collector [gmond] process down alert for slaves
    Ganglia Collector [gmond] process down alert for NameNode
    Ganglia Collector [gmond] process down alert for JobTracker

    Checked the Ganglia services using the following commands:
    "service gmetad status" – the result was "gmetad (pid 6133) is running…"
    "service hdp-gmetad status" – the result was "Checking status of hdp-gmetad…"
    "service gmond status" – the result was "gmond stopped"
    "service hdp-gmond status" – the result was "Failed to find running /usr/sbin/gmond for cluster HDPSlaves"

    Clicked on the "Hosts" tab; "Ganglia Monitor / Ganglia" was not started (red dot). Tried to start it via Actions → Start, but I was not successful.

    The issue seems to be that Ganglia Monitor (gmond) is not able to start on cluster HDPSlaves.
    Can you please advise what could be the cause of this issue and how to troubleshoot it?

  • #20139

    Hi Pal,

    Thanks for trying out Hortonworks Data Platform.

    What sometimes blocks the HDP Ganglia services from coming up is that Ganglia, when installed, puts its own hooks into system startup. The fix for this is to kill all of the running gmond and gmetad processes, then start Ganglia from within Ambari.
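    The cleanup described above could be sketched like this (a hedged sketch: service and binary names assume a default HDP 1.2 install on CentOS 6; every command is guarded so it is safe to run even where a service is absent):

```shell
# Stop/kill any gmond and gmetad started by the OS init scripts,
# so Ambari can manage their lifecycle itself.
for svc in gmond gmetad hdp-gmond hdp-gmetad; do
  service "$svc" stop 2>/dev/null || true
done
# Kill any stragglers (binary paths as reported by hdp-gmond/hdp-gmetad)
pkill -f /usr/sbin/gmond  2>/dev/null || true
pkill -f /usr/sbin/gmetad 2>/dev/null || true
# Optional: keep Ganglia out of system startup so only Ambari starts it
chkconfig gmond  off 2>/dev/null || true
chkconfig gmetad off 2>/dev/null || true
echo "cleanup done - now start Ganglia from the Ambari UI"
```

    After this, use the Ambari Services tab to start Ganglia rather than the init scripts, so the two do not fight over the same processes.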


    Pal J

    Hi Ted,
    I killed the "gmond" and "gmetad" services using the stop command and tried to start "Ganglia" in Ambari; initially I was not successful. I changed data_source from the default "my cluster" to "localhost.localdomain" in /etc/ganglia/gmetad.conf and was then able to start "Ganglia" in Ambari. But there were still 4 errors in "Alerts and Health Check", one of them being "Ganglia Collector [gmond] process down alert for slaves".
    When I checked the status of "service gmond" it was stopped, and the status of "service hdp-gmond" was "Failed to find running /usr/sbin/gmond for cluster HDPSlaves". The status of "gmetad" was OK.

    Can you please advise what to try next…

    Pal J

    Hi Ted,
    Forgot to add the below in the previous update:
    On the "Hosts" page, "Ganglia Monitor / Ganglia" was stopped and I was not successful in starting it.


    Seth Lyubich

    Hi Pal,

    Some debugging information in the post below might be useful:

    Hope this helps,

    Pal J

    Hi Seth,
    Thanks for the link. I also tried disabling IPv6, still no luck. The Ganglia service in the Services tab shows green, but when I run /etc/init.d/hdp-gmond start I get the message "Failed to start /usr/sbin/gmond for cluster HDPSlaves".
    I am new to HDP; please let me know which logs to check and whether I have to turn on any debug flags.


    Seth Lyubich

    Hi Pal,

    There are several things suggested in the post from my last comment. Here is a summary that you can try:

    Make sure that time is synchronized across your cluster.
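    A hedged sketch of that check (ntpd/ntpdate are the usual CentOS 6 era tools; pool.ntp.org is just an example server, adjust for your environment):

```shell
# Compare clocks: run on every host and eyeball the difference.
date -u
# Is ntpd running? (guarded so this is safe where the service is absent)
service ntpd status 2>/dev/null || true
# One-off resync, then keep ntpd running across reboots:
# ntpdate -u pool.ntp.org && service ntpd start && chkconfig ntpd on
echo "time check done"
```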

    The directory /var/lib/ganglia/rrds should contain subdirectories with rrd files.

    If you don’t have data there, you might have an issue with the rrd tool. You can try the following:
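    A hedged check for the rrd data described above (the path is the HDP 1.2 default; each Ganglia cluster – HDPSlaves, HDPNameNode, HDPJobTracker, HDPHBaseMaster – should have a subdirectory of *.rrd files):

```shell
# Report whether any .rrd files exist under the Ganglia data directory.
RRD_DIR=/var/lib/ganglia/rrds
if [ -d "$RRD_DIR" ] && find "$RRD_DIR" -name '*.rrd' 2>/dev/null | grep -q .; then
  rrd_status="rrd files present under $RRD_DIR"
else
  rrd_status="no rrd files under $RRD_DIR - check rrdcached/gmetad"
fi
echo "$rrd_status"
```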

    Make sure the rrdcached process is running. Usually the rrd tool gets started with the hdp-gmetad service:

    # service hdp-gmetad start

    Starting hdp-gmetad…
    /usr/bin/rrdcached already running with PID 24053
    /usr/sbin/gmetad already running with PID 24083

    If you have hdp-gmetad and hdp-gmond running, make sure that the corresponding ports are listening:

    netstat -anp | grep '8660\|8661\|8662\|8663'

    Make sure the sockets are using IPv4 in the output of the command above.
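    That inspection could be automated along these lines (a hedged sketch; ports 8660–8663 are the HDP 1.2 defaults quoted above, and netstat output formats vary slightly between distributions):

```shell
# Count Ganglia port bindings that are tcp6 (IPv6) instead of tcp (IPv4).
ipv6_binds=$(netstat -anp 2>/dev/null \
  | grep -E ':(8660|8661|8662|8663)' \
  | grep -c '^tcp6')
if [ "${ipv6_binds:-0}" -gt 0 ]; then
  echo "found $ipv6_binds IPv6 bindings - consider disabling IPv6 for gmond/gmetad"
else
  echo "no IPv6 bindings on the Ganglia ports"
fi
```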

    Finally, check that the sockets are listening on the expected configured hosts and ports (not on localhost):

    grep -A4 8660 /etc/ganglia/hdp/gmetad.conf

    You should see something like the output below. Make sure the data_source entries do not point to localhost.

    [root@ambari1 hdp]# grep -A4 8660 /etc/ganglia/hdp/gmetad.conf
    data_source "HDPSlaves" ambari1:8660
    data_source "HDPNameNode" ambari1:8661
    data_source "HDPJobTracker" ambari1:8662
    data_source "HDPHBaseMaster" ambari1:8663

    One more thing you can check is rrd tool packages. This is what I have on my machine:

    [root@ambari1 hdp]# rpm -qa | grep rrd
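    A hedged version of that package check with a fallback message (exact package names and versions vary by CentOS 6 / HDP repository, so none are asserted here):

```shell
# List installed rrdtool-related packages, or say that none were found.
rrd_pkgs=$(rpm -qa 2>/dev/null | grep -i rrd || true)
if [ -n "$rrd_pkgs" ]; then
  echo "$rrd_pkgs"
else
  echo "no rrdtool packages found - (re)install rrdtool"
fi
```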

    Hope this helps,


    Pal J

    Hi Seth
    Sorry for the late response. It took me a while to do a new clean install. I installed HDP 1.2 on CentOS 6.4 and all services, including Ganglia, are working.


    Seth Lyubich

    Hi Pal,

    Thanks for letting us know that your issue is resolved.


    Ardavan Moinzadeh

    Seth, could you explain what this error means? I get the same error that Pal faced:

    "service hdp-gmond status" – the result was "Failed to find running /usr/sbin/gmond for cluster HDPSlaves"
    Thank you!

The forum ‘HDP on Linux – Installation’ is closed to new topics and replies.
