
HBase Forum

HBase 0.96 – Dynamic loading of an endpoint on a table

  • #46551
    Fanilo Andrianasolo

    Recently I have been testing coprocessors on HBase 0.96, included with HDP 2.0 and the latest Sandbox. I’ve successfully deployed an observer on a table via the shell.
    For the endpoint, I have been editing the implementation of ColumnAggregationEndpoint for my needs. I then tried loading it the same way as the observer: by disabling the table, altering the table descriptor, and re-enabling the table.
    This is where things go wrong: my region doesn’t open. The table is then stuck neither enabled nor disabled, I can’t drop the table or assign the region, the web UI shows the region state as FAILED_OPEN, and hbase hbck -fix etc… doesn’t help.
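    For reference, the disable/alter/enable sequence described above looks roughly like this in the HBase shell (the table name, JAR path, class name, and priority below are placeholders, not the actual values used):

    ```
    hbase> disable 'test2'
    hbase> alter 'test2', METHOD => 'table_att',
             'coprocessor' => 'hdfs:///user/me/my-endpoint.jar|com.example.endpoint.MyEndpoint|1001|'
    hbase> enable 'test2'
    ```

    The coprocessor attribute value is the pipe-separated form `jar-path|class-name|priority|args`; the JAR must be readable by every region server hosting the table.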

    I just replicated the issue in the Sandbox and captured the region server log:
    2014-01-09 06:48:12,258 ERROR [RS_OPEN_REGION-sandbox:60020-1] handler.OpenRegionHandler: Failed open of region=test2,,1389276767014.52c72a06a6cebac705c78bfd2e05825f., starting to roll back the global memstore size.
    java.lang.IllegalStateException: Could not instantiate a region instance.
    Caused by: java.lang.LinkageError: loader constraint violation in interface itable initialization: when resolving method "[…].endpoint.MyEndpoint.getService()Lcom/google/protobuf/Service;" the class loader (instance of org/apache/hadoop/hbase/util/CoprocessorClassLoader) of the current class, […]/endpoint/MyEndpoint, and the class loader (instance of sun/misc/Launcher$AppClassLoader) for interface org/apache/hadoop/hbase/coprocessor/CoprocessorService have different Class objects for the type com/google/protobuf/Service used in the signature
    2014-01-09 06:48:12,259 DEBUG [RS_OPEN_REGION-sandbox:60020-1] zookeeper.ZKAssign: regionserver:60020-0x143777166470004 Transitioning 52c72a06a6cebac705c78bfd2e05825f from RS_ZK_REGION_OPENING to RS_ZK_REGION_FAILED_OPEN
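    The LinkageError in the stack trace above is the classic two-class-loaders problem: MyEndpoint (loaded by HBase’s CoprocessorClassLoader) and the CoprocessorService interface (loaded by the system class loader) each resolve com.google.protobuf.Service through a different loader, and the JVM treats those as two distinct types with the same name. A minimal, stdlib-only sketch of the underlying rule (class names here are illustrative, not from HBase):

    ```java
    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;

    public class LoaderDemo {
        // A child loader that defines LoaderDemo itself instead of delegating
        // to its parent, mimicking a coprocessor jar that bundles its own copy
        // of a class the host already has.
        static class IsolatingLoader extends ClassLoader {
            @Override
            public Class<?> loadClass(String name) throws ClassNotFoundException {
                if (name.equals("LoaderDemo")) {
                    try (InputStream in = getClass().getResourceAsStream("/LoaderDemo.class")) {
                        ByteArrayOutputStream out = new ByteArrayOutputStream();
                        byte[] buf = new byte[4096];
                        int n;
                        while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
                        byte[] bytes = out.toByteArray();
                        // Define the class again in THIS loader.
                        return defineClass(name, bytes, 0, bytes.length);
                    } catch (Exception e) {
                        throw new ClassNotFoundException(name, e);
                    }
                }
                return super.loadClass(name); // everything else delegates normally
            }
        }

        public static void main(String[] args) throws Exception {
            Class<?> a = LoaderDemo.class;                               // app class loader
            Class<?> b = new IsolatingLoader().loadClass("LoaderDemo"); // child loader
            System.out.println(a == b);                                  // false
            System.out.println(a.getName().equals(b.getName()));         // true
        }
    }
    ```

    In the JVM, class identity is (loader, name), not name alone; bundling protobuf inside the coprocessor jar gives the endpoint a second copy of Service, and the itable check fails with exactly the constraint violation shown in the log.
    
    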

    So I roughly see where the bug comes from… but:
    1/ How do I delete the table (without restarting the master)? Do I need to manually delete some znodes first?
    2/ Is there a better-documented or simpler protobuf endpoint implementation that can be dynamically loaded onto a table? I understand things around coprocessors are still moving.
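    For question 1/, one possible path (a sketch, not a verified procedure, and risky on a live cluster): first unset the bad coprocessor attribute so the region can open without loading the jar, then clear the stuck region-in-transition znode if needed. The znode paths below assume the default /hbase chroot; the region hash is the one from the log above.

    ```
    # Remove the bad coprocessor attribute from the table descriptor
    hbase> alter 'test2', METHOD => 'table_att_unset', NAME => 'coprocessor$1'

    # If the region stays stuck, inspect (and, as a last resort, delete)
    # its region-in-transition znode from the ZooKeeper CLI
    $ hbase zkcli
    [zk] ls /hbase/region-in-transition
    [zk] delete /hbase/region-in-transition/52c72a06a6cebac705c78bfd2e05825f
    ```

    After that, retrying `assign` or `enable` from the HBase shell may let the master recover the region without a master restart.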

    Thanks!

  • Author
  • #46581
    Fanilo Andrianasolo

    EDIT: the problem with my endpoint is solved. I was building the endpoint jar with its dependencies bundled and dynamically loading it onto the table, and as the log says, that produces conflicting copies of the dependency classes. Built without bundled dependencies, everything works fine.
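    If the build uses Maven, a sketch of that fix is to mark the HBase and protobuf artifacts as provided and drop the with-dependencies assembly, so the coprocessor jar contains only the endpoint classes. The version numbers below are the ones shipped with HBase 0.96 / HDP 2.0 and are assumptions here:

    ```xml
    <!-- Server-side classes come from the region server's classpath, not the jar. -->
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-server</artifactId>
      <version>0.96.0-hadoop2</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>com.google.protobuf</groupId>
      <artifactId>protobuf-java</artifactId>
      <version>2.5.0</version>
      <scope>provided</scope>
    </dependency>
    ```

    With provided scope the classes are available at compile time but excluded from the packaged jar, so the region server resolves com.google.protobuf.Service through a single class loader.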

    As for the regions stuck in FAILED_OPEN because of this jar: I can’t reassign them or alter them. Sometimes I manage to enable them again and then drop them, but I have no idea why. I’m open to any ideas :)

The forum ‘HBase’ is closed to new topics and replies.
