Poor Ambari Performance, Ganglia?
I’ve got a 20-node (15 data nodes) HDP 2.0.6 cluster running in AWS EC2, and lately I’ve been noticing pretty poor performance from Ambari. Services are taking much, much longer than before to start and stop. The Ambari node is an m1.medium and the data nodes are m3.large instances. I double-checked CPU, memory, and I/O on all of the nodes, and none of them approaches full utilization or shows any sign of a bottleneck.
Thinking the problem might be the Ambari server itself, I checked the ambari-server log and found this error repeated constantly (the full stack trace is actually 40-50 lines long):
18:00:25,296 ERROR [qtp141562188-3705] GangliaPropertyProvider:530 - Caught exception getting Ganglia metrics : spec=http://<ambari server private DNS name>/cgi-bin/rrd.py
java.net.SocketTimeoutException: Read timed out
On the Ambari server I ran some netstat checks and saw a huge number of UDP packet receive errors. Looking further, I see a huge Recv-Q on one socket, somewhere in the neighborhood of 170k-200k, and it never seems to drain.
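For reference, the checks I ran were along these lines. This is a minimal sketch using the Linux /proc interface (which is where netstat gets its numbers); exact counter names can vary slightly by kernel version:

```shell
# UDP error counters straight from the kernel (what "netstat -su" summarizes).
# InErrors / RcvbufErrors climbing over time means datagrams are being dropped
# because the receiving process is not draining its socket buffer.
grep '^Udp:' /proc/net/snmp

# Per-socket view (what "netstat -anu" shows). The rx_queue field (the hex
# value before the colon in column 5) is the Recv-Q; a large value that never
# shrinks on the Ganglia collector's socket is the smoking gun.
cat /proc/net/udp
```

Sampling the `Udp:` line twice a few seconds apart and diffing the InErrors column is an easy way to confirm the drops are ongoing rather than historical.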
Any ideas? My theory is that because the Ambari server is trying to both send and receive on port 8660, there may be some kind of Ganglia contention going on. If that’s the case, how would I stop the Ambari server from sending stats to itself and configure it to only receive Ganglia data from the other nodes in the cluster? Or am I way off base with that thought?
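If that theory holds, my best guess would be something like the following in the server’s gmond.conf. This is untested and hypothetical, and I’m not sure Ambari wouldn’t just overwrite a hand-edit, so treat it as a sketch of the idea rather than a known fix:

```
/* Hypothetical gmond.conf change on the Ambari host (untested):
   mute = yes  -> this gmond stops broadcasting its own metrics
   deaf = no   -> it keeps listening for metrics from other nodes */
globals {
  daemonize = yes
  mute = yes
  deaf = no
}
```

The deaf/mute flags are standard gmond globals, so the question is really whether setting them on the Ambari host would relieve the UDP backlog or whether the bottleneck is elsewhere (e.g. gmetad or rrd.py).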