
HBase Forum

HBase Error – This server is in the failed servers list

  • #40574

    Hi,
    I am using Hortonworks HDP 2.1 beta on CentOS/RHEL 6.2 and am trying to run a simple HBase Java program.
    According to the jps command, all the services are running fine.
    [root@localhost ~]# jps
    28508 HRegionServer
    2423 SecondaryNameNode
    3389 NodeManager
    3570 JobHistoryServer
    32362 Jps
    2328 NameNode
    18379 QuorumPeerMain
    2671 DataNode
    4219 org.eclipse.equinox.launcher_1.2.0.v20110502.jar
    28379 HMaster
    3138 ResourceManager

    Below is the Java program.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    // Wrapper class (the name is arbitrary) so the snippet compiles standalone.
    public class HBaseMasterCheck {
        public static void main(String[] args) {
            // Loads hbase-site.xml from the classpath, or defaults if it is absent.
            Configuration conf = HBaseConfiguration.create();
            try {
                HBaseAdmin hbase = new HBaseAdmin(conf);
                boolean flag = hbase.isMasterRunning();
                System.out.println("ok: " + flag);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    When I run the program, it shows the output below.
    13/10/10 11:17:15 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=60000 watcher=hconnection-0x6564dbd5
    13/10/10 11:17:15 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6564dbd5 connecting to ZooKeeper ensemble=127.0.0.1:2181
    13/10/10 11:17:15 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost.localdomain/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
    13/10/10 11:17:15 INFO zookeeper.ClientCnxn: Socket connection established to localhost.localdomain/127.0.0.1:2181, initiating session
    13/10/10 11:17:15 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost.localdomain/127.0.0.1:2181, sessionid = 0x141a029bd47004e, negotiated timeout = 40000
    13/10/10 11:17:16 INFO client.HConnectionManager$HConnectionImplementation: getMaster attempt 1 of 35 failed; retrying after sleep of 100, exception=com.google.protobuf.ServiceException: java.io.IOException: Could not set up IO Streams
    13/10/10 11:17:16 INFO client.HConnectionManager$HConnectionImplementation: getMaster attempt 2 of 35 failed; retrying after sleep of 200, exception=com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: localhost.localdomain/127.0.0.1:60000
    Here is the hbase-site.xml:

    <property>
      <name>hbase.rootdir</name>
      <value>hdfs://127.0.0.1:8020/apps/hbase</value>
    </property>
    <property>
      <name>hbase.master.info.bindAddress</name>
      <value>127.0.0.1</value>
    </property>
    <property>
      <name>hbase.zookeeper.quorum</name>
      <value>127.0.0.1</value>
    </property>
    <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
    </property>

    Here is the /etc/hosts file
    127.0.0.1 localhost.localdomain localhost
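
    A minimal variant that sets the ZooKeeper quorum explicitly, instead of relying on hbase-site.xml being on the classpath, can help rule out a missing or stale client configuration. The class name is arbitrary; the quorum address and client port are the ones from the configuration above.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class ExplicitQuorumCheck {
        public static void main(String[] args) throws Exception {
            // Start from the default HBase client configuration.
            Configuration conf = HBaseConfiguration.create();
            // Point the client at the same ZooKeeper quorum the cluster uses
            // (values copied from the hbase-site.xml shown above).
            conf.set("hbase.zookeeper.quorum", "127.0.0.1");
            conf.set("hbase.zookeeper.property.clientPort", "2181");
            HBaseAdmin admin = new HBaseAdmin(conf);
            System.out.println("master running: " + admin.isMasterRunning());
            admin.close();
        }
    }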

    Please let me know why the error is happening.
    Thanks,
    Aparna

  • #41322
    abdelrahman
    Moderator

    Hi Aparna,

    Let us validate that the HBase Master is up and running. From the command line, please run:
    netstat -all | egrep "6000|60010"
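
    If netstat is not conclusive, a plain TCP probe from the client machine checks the same thing. A minimal Java sketch, assuming the default master RPC port 60000; the class name is arbitrary, and the host and port should be adjusted to your setup:

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class MasterPortProbe {
        public static void main(String[] args) throws Exception {
            // Try to open a TCP connection to the HBase Master RPC port.
            // A timeout or "connection refused" here means the client cannot
            // reach the master at this address, independent of HBase itself.
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress("localhost.localdomain", 60000), 5000);
                System.out.println("Master RPC port is reachable");
            }
        }
    }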

    The HBase master address is stored in the ZooKeeper instance, and it should match the address shown above. To find the HBase master address from the command line, please run:

    # hbase org.jruby.Main /usr/lib/hbase/bin/get-active-master.rb
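
    Once an RPC connection does succeed, the same information is also available through the client API. A hedged sketch, assuming an HBase 0.96/0.98-era client (as shipped with HDP 2.x) where ClusterStatus.getMaster() returns the active master's ServerName; note that this only works after the connection problem is fixed, so the jruby script above is the better check while the error persists. The class name is illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.ClusterStatus;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class PrintActiveMaster {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HBaseAdmin admin = new HBaseAdmin(conf);
            // ClusterStatus carries the ServerName of the active master as
            // registered in ZooKeeper; compare it with the netstat output above.
            ClusterStatus status = admin.getClusterStatus();
            ServerName master = status.getMaster();
            System.out.println("Active master: " + master.getHostname() + ":" + master.getPort());
            admin.close();
        }
    }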

    Thanks
    -Rahman

    #44835
    Xiandong Su
    Member

    I am having the exact same issue. I executed the egrep command Rahman mentioned, but I could not interpret the result from the printout. It did show an established TCP connection between sandbox.hortonworks.c:60000 and sandbox.hortonworks.c:40993. Running the command to find the HBase master address seems to have failed with an SLF4J warning (class path contains multiple SLF4J bindings).

    I am using Hortonworks Sandbox 2.0

    Thanks

    Sean

    #57324
    Cruise Tang
    Participant

    I encountered the exact same issue as you.
    I want to connect to HBase through the native Java API. After I launched my program, the console produced this output:

    14/07/16 17:33:30 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=10.24.23.48:2181 sessionTimeout=90000 watcher=hconnection-0x1dd63a8, quorum=10.24.23.48:2181, baseZNode=/hbase
    14/07/16 17:33:30 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1dd63a8 connecting to ZooKeeper ensemble=10.24.23.48:2181
    14/07/16 17:33:30 INFO zookeeper.ClientCnxn: Opening socket connection to server 10.24.23.48/10.24.23.48:2181. Will not attempt to authenticate using SASL (unknown error)
    14/07/16 17:33:30 INFO zookeeper.ClientCnxn: Socket connection established to 10.24.23.48/10.24.23.48:2181, initiating session
    14/07/16 17:33:30 INFO zookeeper.ClientCnxn: Session establishment complete on server 10.24.23.48/10.24.23.48:2181, sessionid = 0x1473e744f5b000a, negotiated timeout = 90000
    14/07/16 17:33:30 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=10.24.23.48:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x1dd63a8, quorum=10.24.23.48:2181, baseZNode=/hbase
    14/07/16 17:33:30 INFO zookeeper.RecoverableZooKeeper: Process identifier=catalogtracker-on-hconnection-0x1dd63a8 connecting to ZooKeeper ensemble=10.24.23.48:2181
    14/07/16 17:33:30 INFO zookeeper.ClientCnxn: Opening socket connection to server 10.24.23.48/10.24.23.48:2181. Will not attempt to authenticate using SASL (unknown error)
    14/07/16 17:33:30 INFO zookeeper.ClientCnxn: Socket connection established to 10.24.23.48/10.24.23.48:2181, initiating session
    14/07/16 17:33:30 INFO zookeeper.ClientCnxn: Session establishment complete on server 10.24.23.48/10.24.23.48:2181, sessionid = 0x1473e744f5b000b, negotiated timeout = 90000

    After a few minutes, some exceptions were thrown:

    14/07/16 17:43:13 INFO zookeeper.ZooKeeper: Session: 0x1473e744f5b000b closed
    14/07/16 17:43:13 INFO zookeeper.ClientCnxn: EventThread shut down
    org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=35, exceptions:
    Wed Jul 16 17:33:32 CST 2014, org.apache.hadoop.hbase.client.RpcRetryingCaller@1d43178, java.net.ConnectException: Connection refused: no further information
    Wed Jul 16 17:33:32 CST 2014, org.apache.hadoop.hbase.client.RpcRetryingCaller@1d43178, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: node01/10.24.23.48:60020
    Wed Jul 16 17:33:32 CST 2014, org.apache.hadoop.hbase.client.RpcRetryingCaller@1d43178, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: node01/10.24.23.48:60020
    Wed Jul 16 17:33:33 CST 2014, org.apache.hadoop.hbase.client.RpcRetryingCaller@1d43178, org.apache.hadoop.hbase.ipc.RpcClient$FailedServerException: This server is in the failed servers list: node01/10.24.23.48:60020
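
    While debugging, the long retry loop can be shortened so the underlying ConnectException surfaces within seconds instead of minutes. A minimal sketch using the standard client settings hbase.client.retries.number and hbase.rpc.timeout; the helper class name and the specific values are illustrative, not recommendations.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class FailFastConf {
        public static Configuration create() {
            Configuration conf = HBaseConfiguration.create();
            // Fail after a few attempts instead of the default 35 retries,
            // so the real cause shows up quickly in the stack trace.
            conf.set("hbase.client.retries.number", "3");
            conf.set("hbase.rpc.timeout", "10000");   // milliseconds
            conf.set("zookeeper.recovery.retry", "1");
            return conf;
        }
    }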

    I wonder how you finally solved it. Thanks a lot!

