Most services crashed on HDP 2.0 with HA

This topic contains 2 replies, has 2 voices, and was last updated by  Nipuna Perera 3 months, 1 week ago.

  • Creator
  • #46112

    Running HDP 2.0 with HA enabled.

    I added a parameter to core-site.xml via Ambari (honestly, I don’t even remember whether I hit save), then restarted HDFS through Ambari for the change to take effect. After that, pretty much everything crashed.

    I can access Hue, but both the file browser and running Hive scripts show the error:
    “Operation category READ is not supported in state standby (error 403)”

    Running a Hive script from the shell leads to the same error. The “hadoop fs -ls” command still works from the shell. I restarted all the services via Ambari, but nothing changed. Ambari shows that all the hosts are fine.
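In case it helps anyone hitting the same symptom: the “Operation category READ is not supported in state standby” message generally means the client is talking to the standby NameNode rather than the active one. A quick way to check which NameNode is actually active is the `hdfs haadmin` tool. A minimal sketch, assuming the service IDs are `nn1` and `nn2` (the real IDs are listed under `dfs.ha.namenodes.<nameservice>` in hdfs-site.xml):

```shell
# Query the HA state of each NameNode.
# nn1/nn2 are assumed service IDs -- check
# dfs.ha.namenodes.<nameservice> in hdfs-site.xml for the real ones.
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```

Each command prints either “active” or “standby”; if clients are configured to hit a NameNode that reports “standby”, that would explain the 403.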

Viewing 2 replies - 1 through 2 (of 2 total)


  • Author
  • #57359

    Nipuna Perera

    I am also facing the same issue.

    There is no specific error; the NameNodes suddenly switch roles.
    Error in the HDFS logs:
    2014-07-17 01:58:53,381 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(6769)) – Get corrupt file blocks returned error: Operation category READ is not supported in state standby

    Versions:
    HDFS (Apache Hadoop Distributed File System), Hortonworks distribution.

    Please help me to resolve this.


    The issue has been fixed. It turns out that when HDFS was restarted, the active and standby states of the NameNodes were switched. I shut down the active NameNode via Ambari, and they switched again. Everything seems to be back to normal, but the HA configuration may need to be looked into further, since the failover did not go smoothly.
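For reference, instead of shutting down the active NameNode via Ambari to force the roles to swap, a graceful failover can be triggered from the shell with `hdfs haadmin`. A sketch, again assuming `nn1`/`nn2` are the service IDs from hdfs-site.xml:

```shell
# Gracefully hand the active role from nn1 to nn2
# (nn1/nn2 are assumed service IDs from hdfs-site.xml).
hdfs haadmin -failover nn1 nn2

# Verify the new roles afterwards.
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```

This goes through the configured failover controller rather than killing the active NameNode outright, which is gentler on in-flight operations.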
