
HBase Forum

KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server

  • #54359
    Gwenael Le Barzic
    Participant

    Hello!

    I'm writing this topic because we are encountering a problem in Hive when we try to create an HBase-backed table.

    We are running an HDP 2.0.6 cluster with HBase 0.96 and Hive 0.12.
    Our cluster is secured with Kerberos 5.

    Here is the content of my file createHiveHbase.sql:
    USE <MyBDD>;
    SET mapreduce.job.queuename=<MyQ>;

    CREATE TABLE IF NOT EXISTS pagecounts_hbase (rowkey STRING, pageviews STRING, bytes STRING)
    STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,f:c1,f:c2')
    TBLPROPERTIES ('hbase.table.name' = 'pagecounts');

    I then run the following command:
    hive -v -f createHiveHbase.sql
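    Before running the Hive script, it can help to confirm that ZooKeeper is reachable and that the HBase root znode actually exists. A sketch, using the placeholder hostnames from this post and the default HDP location of zkCli.sh (both are assumptions, not verified paths):

    ```shell
    # List the root znodes to see whether HBase registered under /hbase
    # or under a custom parent such as /hbase-secure (set by
    # zookeeper.znode.parent in hbase-site.xml).
    /usr/lib/zookeeper/bin/zkCli.sh -server <FQDN_SERVER_1>:2181 ls /

    # Then inspect the parent HBase actually uses; meta-region-server
    # should appear as a child once the master is up.
    /usr/lib/zookeeper/bin/zkCli.sh -server <FQDN_SERVER_1>:2181 ls /hbase-secure
    ```

    If `meta-region-server` only exists under one parent, any client configured with the other parent will fail exactly as in the stack trace below.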

    It gives me the following error (the message continues in the second post of this topic):
    CREATE TABLE IF NOT EXISTS pagecounts_hbase (rowkey STRING, pageviews STRING, bytes STRING)
    STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,f:c1,f:c2')
    TBLPROPERTIES ('hbase.table.name' = 'pagecounts')
    FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.io.IOException: Attempt to start meta tracker failed.
    at org.apache.hadoop.hbase.catalog.CatalogTracker.start(CatalogTracker.java:199)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getCatalogTracker(HBaseAdmin.java:221)
    at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:269)
    at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:285)
    at org.apache.hadoop.hive.hbase.HBaseStorageHandler.preCreateTable(HBaseStorageHandler.java:161)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:478)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:471)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
    at $Proxy7.createTable(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:596)
    at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3677)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:252)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1437)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1215)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1043)

  • #54360
    Gwenael Le Barzic
    Participant

    Here is the end of the error message:
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:911)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:348)
    at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:446)
    at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:456)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:737)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
    Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:199)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:425)
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
    at org.apache.hadoop.hbase.catalog.CatalogTracker.start(CatalogTracker.java:195)
    ... 35 more

    Seeing the following line:
    Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server

    I went to check my zoo.cfg and my hbase-site.xml.
    Here is the content of these two files.

    zoo.cfg:
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/data/1/zookeeper
    clientPort=2181
    server.1=<FQDN_SERVER_1>:2888:3888
    server.2=<FQDN_SERVER_2>:2888:3888
    server.3=<FQDN_SERVER_3>:2888:3888
    authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
    jaasLoginRenew=3600000
    kerberos.removeHostFromPrincipal=true
    kerberos.removeRealmFromPrincipal=true
    maxClientCnxns=300
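    Given the ConnectionLoss error, a quick way to probe each quorum member on the client port is ZooKeeper's "four-letter word" commands. A sketch, assuming `nc` (netcat) is available and substituting the real FQDNs for the placeholders used above:

    ```shell
    # Ask each ZooKeeper server in the quorum whether it is healthy.
    # "ruok" should print "imok" if the server is up and serving.
    for zk in <FQDN_SERVER_1> <FQDN_SERVER_2> <FQDN_SERVER_3>; do
      echo "== $zk =="
      echo ruok | nc "$zk" 2181
    done
    ```

    If one member does not answer, client sessions can drop with exactly this KeeperErrorCode.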

    I will put the content of the hbase-site.xml in the next post of this topic.

    Best regards.

    Gwenael Le Barzic

    #54361
    Gwenael Le Barzic
    Participant

    Here is the content of the hbase-site.xml (shown here as plain text for readability):
    hbase.regionserver.kerberos.principal=hbase/_HOST@<KDC_REALM>
    hbase.hregion.majorcompaction=86400000
    hfile.block.cache.size=0.40
    hbase.superuser=hbase
    hbase.tmp.dir=/var/log/hbase
    hbase.regionserver.handler.count=60
    hbase.hregion.memstore.flush.size=67108864
    hbase.cluster.distributed=true
    hbase.hstore.compactionThreshold=3
    hbase.security.authentication=simple
    hbase.client.scanner.caching=100
    hbase.regionserver.global.memstore.lowerLimit=0.38
    hbase.zookeeper.useMulti=true
    hbase.master.keytab.file=/etc/security/keytabs/hbase.service.keytab
    hbase.rpc.engine=org.apache.hadoop.hbase.ipc.WritableRpcEngine
    hbase.hstore.blockingStoreFiles=7
    hbase.regionserver.keytab.file=/etc/security/keytabs/hbase.service.keytab
    dfs.support.append=true
    hbase.zookeeper.quorum=<FQDN_SERVER_1>,<FQDN_SERVER_2>,<FQDN_SERVER_3>
    hbase.hregion.memstore.mslab.enabled=true
    hbase.hregion.max.filesize=21474836480
    hbase.security.authorization=false
    hbase.client.keyvalue.maxsize=10485760
    zookeeper.znode.parent=/hbase-secure
    hbase.master.kerberos.principal=hbase/_HOST@DEV.HDF
    hbase.defaults.for.version.skip=true
    hbase.coprocessor.master.classes=org.apache.hadoop.hbase.security.access.AccessController
    hbase.regionserver.global.memstore.upperLimit=0.4
    hbase.hregion.memstore.block.multiplier=4
    zookeeper.session.timeout=60000
    hbase.bulkload.staging.dir=/apps/hbase/staging
    hbase.zookeeper.property.clientPort=2181
    hbase.rootdir=hdfs://<FQDN_SERVERNAMENODE>:8020/apps/hbase/data
    hbase.coprocessor.region.classes=org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint,org.apache.hadoop.hbase.security.access.AccessController

    We checked some other similar cases and tried executing the hive command with the following property set:
    hive -v -hiveconf hbase.zookeeper.quorum=<FQDN_SERVER_1>,<FQDN_SERVER_2>,<FQDN_SERVER_3> -f createHiveHbase.sql

    With the ZooKeeper quorum added, we no longer get an error, but the command just hangs forever.
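    One detail stands out: the stack trace complains about /hbase/meta-region-server, while hbase-site.xml sets zookeeper.znode.parent=/hbase-secure. That suggests the Hive client is not picking up hbase-site.xml at all and is falling back to the default /hbase parent. A sketch of passing the znode parent explicitly alongside the quorum (the hostnames are the placeholders from this post; this is one plausible fix, not a confirmed one):

    ```shell
    # Point the embedded HBase client at the secure znode parent used by
    # this cluster instead of the default /hbase.
    hive -v \
      -hiveconf hbase.zookeeper.quorum=<FQDN_SERVER_1>,<FQDN_SERVER_2>,<FQDN_SERVER_3> \
      -hiveconf zookeeper.znode.parent=/hbase-secure \
      -f createHiveHbase.sql
    ```

    A cleaner long-term fix, if this is indeed the cause, would be making the HBase configuration directory (e.g. /etc/hbase/conf) visible on Hive's classpath so these values are read automatically.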

    Could you help me, please?

    Best regards.

    Gwenael Le Barzic

    #54362
    Gwenael Le Barzic
    Participant

    One last piece of information for today.

    One of my colleagues checked other docs and tried following the documentation described here:
    http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.3.2/bk_user-guide/content/user-guide-hbase-import-2.html

    If we follow these steps, creating the file simple.ddl and then executing it with hcat, it works:
    CREATE TABLE
    simple_hcat_load_table (id STRING, c1 STRING, c2 STRING)
    STORED BY 'org.apache.hcatalog.hbase.HBaseHCatStorageHandler'
    TBLPROPERTIES (
    'hbase.table.name' = 'simple_hcat_load_table',
    'hbase.columns.mapping' = 'd:c1,d:c2',
    'hcat.hbase.output.bulkMode' = 'true'
    );

    hcat -f simple.ddl

    What do you think of this, guys?

    Best regards.

    Gwenael Le Barzic

The forum ‘HBase’ is closed to new topics and replies.
