HBase Forum

org.apache.hadoop.hbase.client.RpcRetryingCaller in hbase/phoenix

  • #52554
    Very Test

    Following the PDF provided by HDP (bk_installing_manually_book-20140422.pdf), I got an exception when running:

    ./psql.py localhost /usr/share/doc/phoenix- /usr/share/doc/phoenix- /usr/share/doc/phoenix-
    Tue Apr 29 17:26:44 UTC 2014, org.apache.hadoop.hbase.client.RpcRetryingCaller@65072974, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): java.io.IOException: Table Namespace Manager not ready yet, try again later
    	at org.apache.hadoop.hbase.master.HMaster.getNamespaceDescriptor(HMaster.java:3205)
    	at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1730)
    	at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1860)
    	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:38221)
    	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
    	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
    	at org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
    	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    	at java.lang.Thread.run(Thread.java:724)

    What does that mean? The other checks in this PDF are correct.



  • #54434
    Neeraj Garg


    We’ve been facing the same issue with HBase 0.98, using the HDP 2.1 distribution on an 8-node cluster.

    Could somebody quickly post the resolution?

    Thank you in advance,

    Andrew Grande

    Hi, there was probably a typo in the Phoenix core jar path; make sure the symlink points to a valid file. After you do this, check the HBase RegionServer startup logs (/usr/lib/hbase/logs). If the issue is still there, try copying the file physically into /usr/lib/hbase/lib instead of symlinking it.
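
    For anyone following along, here is a rough sketch of those checks as shell commands. The Phoenix jar name and its source directory below are placeholders rather than paths from the HDP docs, so adjust them to whatever your install actually ships; only /usr/lib/hbase/lib and /usr/lib/hbase/logs come from the steps above.

    # Verify the symlink in the HBase lib dir resolves to a real file (a broken link shows up here)
    ls -l /usr/lib/hbase/lib/phoenix*.jar

    # Recreate the symlink if it points at a missing file (source path and jar name are placeholders)
    ln -sf /usr/lib/phoenix/phoenix-core.jar /usr/lib/hbase/lib/phoenix-core.jar

    # Scan the RegionServer startup logs for Phoenix classpath or coprocessor errors
    grep -i phoenix /usr/lib/hbase/logs/*regionserver*.log | tail -n 20

    # If the symlink still doesn't work, copy the jar physically and restart the RegionServer
    cp /usr/lib/phoenix/phoenix-core.jar /usr/lib/hbase/lib/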

    We’re fixing the docs to reflect changes as we speak.

