The issue has been fixed. It turns out that when HDFS was restarted, the active and standby states of the NameNodes were switched. I shut down the active NameNode via Ambari, and they switched back. Everything seems to be back to normal, but the HA configuration may need to be looked into further, since the failover did not go smoothly.
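The steps above can be sketched with `hdfs haadmin`, which reports and (if needed) switches the active NameNode. This is a sketch assuming the NameNode IDs are `nn1` and `nn2` as defined by `dfs.ha.namenodes.<nameservice>` in hdfs-site.xml; yours may differ:

```shell
# Check which NameNode currently holds the active role
hdfs haadmin -getServiceState nn1   # prints "active" or "standby"
hdfs haadmin -getServiceState nn2

# If the wrong node ended up active, trigger a graceful failover.
# Note: with automatic failover (ZKFC) enabled, haadmin refuses a
# manual failover unless --forcemanual is given, which is best avoided.
hdfs haadmin -failover nn1 nn2      # make nn2 the active NameNode
```

Restarting the active NameNode from Ambari, as described above, achieves the same switch by letting the standby take over.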
Most services crashed on HDP 2.0 with HA
Running HDP 2.0 with HA enabled.
I added a parameter to core-site.xml via Ambari and, honestly, don’t even remember whether I hit Save. I restarted HDFS through Ambari for the change to take effect. The result was pretty much everything crashing.
I can access Hue, but both the file browser and running Hive scripts show the error:
“Operation category READ is not supported in state standby (error 403)”
Running the Hive script from the shell leads to the same error. The “hadoop fs -ls” command still works from the shell. I restarted all the services via Ambari, but nothing changed. Ambari shows that all the hosts are fine.
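A quick way to check whether the requests are hitting a standby NameNode (which is what the “state standby” error indicates) is to query each NameNode’s HA state. This is a sketch assuming the NameNode IDs `nn1` and `nn2` from `dfs.ha.namenodes.<nameservice>`; substitute your own IDs:

```shell
# "Operation category READ is not supported in state standby" means the
# request reached a standby NameNode; report the state of each one
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```

If the node that Hue and Hive are configured to talk to reports `standby`, that would explain the error even while `hadoop fs -ls` (which can use the client-side failover proxy) still works.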