
HDP on Linux – Installation Forum

Ganglia Monitor component remains down after installation of HDP2

  • #46178
    Vinay Sudhakaran
    Participant

    Hi,

    I’ve been able to successfully install all the components of HDP2, except the Ganglia Monitor process, on a single-node Linux VM running CentOS 6.5.
    Summary:
    Hostname: localhost.localdomain
    IP Address: <My IP Address>
    OS: centos6 (x86_64)
    CPU: 2
    Disk: Data Unavailable
    Memory: 5.71GB
    Load Avg:
    Agent Heartbeat: less than a minute ago

    Ganglia Server: Started
    Ganglia Monitors: 0/1 Ganglia Monitors Live

    The Alerts and Health checks display:
    Ganglia Monitor process for [Slaves, Resource Manager, NameNode, HistoryServer, HBase Master]
    Connection refused

    Any idea why the Ganglia Monitor process is not starting? Any recommendations to get this working would be helpful.
    I edited gmond.conf and gmetad.conf to add my cluster name, and also changed the hostname to localhost.localdomain, but to no avail.
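
    (A couple of quick checks for whether gmond is running and listening at all; the hdp-gmond init script name is an assumption based on the HDP Ganglia packaging, not something confirmed here:)

    # Any gmond processes at all?
    ps -ef | grep '[g]mond'

    # Which TCP/UDP ports do they hold open?
    netstat -tulnp | grep gmond

    # Status via the HDP init script (name assumed)
    service hdp-gmond status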

    Regards,
    VS

  • Replies
  • #46179
    Jeff Sposetti
    Moderator

    Hi,

    1) What is in your /etc/hosts file?
    2) What does “hostname -f” return?
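
    For reference, on a single-node box Ambari generally expects “hostname -f” to return a name that resolves via /etc/hosts; a typical layout looks something like this (the address and hostname below are purely illustrative):

    127.0.0.1      localhost localhost.localdomain
    ::1            localhost6.localdomain6 localhost6
    192.168.1.10   hdp-node1.example.com hdp-node1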

    Thanks!

    #46180
    Vinay Sudhakaran
    Participant

    [root@localhost ~]# hostname -f
    localhost.localdomain
    [root@localhost ~]# cat /etc/hosts
    127.0.0.1 localhost.localdomain localhost
    ::1 localhost6.localdomain6 localhost6

    While installing HDP2, I had changed the Ganglia user from ‘nobody’ to ‘vinay’.
    I updated that information in gmond.conf and gmetad.conf, and since then Ganglia has been working. I can see the system metrics at http://localhost.localdomain/ganglia, but in the HDP dashboard the Ganglia server and monitor components don’t start. Hence, I continue to get this:

    The Alerts and Health checks display:
    Ganglia Monitor process for [Slaves, Resource Manager, NameNode, HistoryServer, HBase Master]
    Connection refused
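
    For anyone making the same user change, these are the stanzas it touches (the directive names are standard Ganglia 3.x; the values are just from my setup):

    /* gmond.conf */
    globals {
        setuid = yes
        user = vinay
    }

    # gmetad.conf
    setuid_username "vinay"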

    Thanks,
    VS

    #46190
    Vinay Sudhakaran
    Participant

    Hi,

    I played around a bit with the hdp-gmond and hdp-gmetad conf files and was able to resolve the issue by commenting out the following lines in the /etc/ganglia/hdp/HDP[*]/conf.d/gmond.master.conf files:

    /* The gmond cluster master must additionally provide an XML
     * description of the cluster to the gmetad that will query it.
     */
    /*
    tcp_accept_channel {
        bind = localhost.localdomain
        port = 8664
    }
    */

    Now I have all the services up and Ganglia reporting metrics for them on a single-node VM.
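
    In case it helps, the restart sequence after editing those files (the hdp-gmond/hdp-gmetad init script names are assumed from the conf file naming above):

    # Restart the HDP-managed Ganglia daemons so they pick up the edited conf
    service hdp-gmond restart
    service hdp-gmetad restart

    # Confirm gmond stayed up after the restart
    ps -ef | grep '[g]mond'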

    #46195
    Jeff Sposetti
    Moderator

    Thanks Vinay. I think you only need to comment out the “bind =” line, and that will work too.
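
    That is, keep the channel but drop only the bind, something like:

    tcp_accept_channel {
        /* bind = localhost.localdomain */
        port = 8664
    }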

    #50697
    Victor Chugunov
    Participant

    I have a similar configuration and similar problems:
    Ganglia Server: Started
    Ganglia Monitors: 0/1 Ganglia Monitors Live

    The Alerts and Health checks display:
    Ganglia Monitor process for [Slaves, Resource Manager, NameNode, HistoryServer, HBase Master]
    Connection refused

    The suggested solution (modifying the /etc/ganglia/hdp/HDP[*]/conf.d/gmond.master.conf files) doesn’t work for me, and those files are overwritten each time the Ganglia Monitor is restarted. On the main dashboard page, four metrics widgets (CPU Usage, Cluster Load, Memory Usage, Network Usage) show no data, only the message “No Data. There was no data available. Possible reasons include inaccessible Ganglia service.” ambari-server.log contains these statements:
    20:41:34,233 ERROR [pool-1-thread-14] JMXPropertyProvider:487 – Caught exception getting JMX metrics : Connection refused
    20:41:34,239 ERROR [pool-1-thread-8] JMXPropertyProvider:487 – Caught exception getting JMX metrics : Connection refused
    20:41:40,442 ERROR [pool-1-thread-23] JMXPropertyProvider:487 – Caught exception getting JMX metrics : Connection refused
    20:41:40,447 ERROR [pool-1-thread-20] JMXPropertyProvider:487 – Caught exception getting JMX metrics : Connection refused
    20:41:46,652 ERROR [pool-1-thread-15] JMXPropertyProvider:487 – Caught exception getting JMX metrics : Connection refused
    20:41:46,657 ERROR [pool-1-thread-27] JMXPropertyProvider:487 – Caught exception getting JMX metrics : Connection refused
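
    (These errors come from Ambari polling the Hadoop daemons’ HTTP /jmx endpoints, so one sanity check is whether those endpoints respond at all; 50070 and 8088 are the usual HDP 2.0 defaults and may differ on your cluster:)

    # NameNode metrics endpoint
    curl -s http://localhost:50070/jmx | head

    # ResourceManager metrics endpoint
    curl -s http://localhost:8088/jmx | head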

    Any idea why the Ganglia Monitor process is not starting? Any recommendations to get this working would be helpful.
    Thanks
    Victor

    #64492
    Vinay Sudhakaran
    Participant

    Hi Victor,
    Were you able to resolve the problem with a more permanent fix?

    Thanks,
    VS

    #64502
    Victor Chugunov
    Participant

    Hi Vinay,
    I tried to make it work, but couldn’t. I’ve killed and reinstalled the cluster.
    Victor
