
HDP on Linux – Installation Forum

HBase Master can not Start

  • #57398
    坤霖 李
    Participant

    I have successfully installed HDP 2.1 on CentOS 6.5. I can start every HDP component service except the HBase Master.
    When I try to start the service, I get the following error:

      Traceback (most recent call last):
      File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HBASE/package/scripts/hbase_master.py", line 71, in <module>
      HbaseMaster().execute()
      File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 105, in execute
      method(env)
      File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HBASE/package/scripts/hbase_master.py", line 43, in start
      self.configure(env) # for security
      File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HBASE/package/scripts/hbase_master.py", line 38, in configure
      hbase(name='master')
      File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HBASE/package/scripts/hbase.py", line 40, in hbase
      params.HdfsDirectory(None, action="create")
      File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 149, in __init__
      self.env.run()
      File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 150, in run
      self.run_action(resource, action)
      File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 116, in run_action
      provider_action()
      File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_directory.py", line 105, in action_create
      not_if=format("su - {hdp_hdfs_user} -c 'hadoop fs -ls {dir_list_str}'")
      File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 149, in __init__
      self.env.run()
      File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 150, in run
      self.run_action(resource, action)
      File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 116, in run_action
      provider_action()
      File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 236, in action_run
      wait_for_finish=self.resource.wait_for_finish, timeout=self.resource.timeout)
      File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 36, in checked_call
      return _call(command, logoutput, True, cwd, env, preexec_fn, user, wait_for_finish, timeout)
      File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 90, in _call
      err_msg = ("Execution of '%s' returned %d. %s") % (command[-1], code, out)
      UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 133: ordinal not in range(128)

    I tried adding '# encoding: utf-8' to all of the .py files, but it still fails. Does anyone know what is wrong with the HBase Master?
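    For context, the failure can be sketched in a few lines (a minimal reproduction in modern Python, assuming the same implicit ASCII decode that Python 2's str/unicode mixing performed). It also shows why the '# encoding: utf-8' header cannot help: that header only governs string literals in the source file, not bytes arriving from the shell at runtime.

```python
# Minimal reproduction sketch (assumption: the shell output contains byte
# 0xe9, as in the traceback above). The bytes arrive at runtime, so the
# source-encoding header has no effect on them.
out = b"ls: caf\xe9: No such file or directory"  # raw shell output with 0xe9

try:
    out.decode("ascii")  # what Python 2's implicit str/unicode mixing did
except UnicodeDecodeError as exc:
    print(exc.reason)  # prints: ordinal not in range(128)
```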

  • #57405
    Jeff Sposetti
    Moderator

    What’s the output of “echo $LANG” from a command prompt?

    #57488
    坤霖 李
    Participant

    Thanks for Jeff's reply. When I run 'echo $LANG', I get 'zh_TW.big5'.

    #57489
    坤霖 李
    Participant

    Thanks for the reminder, Jeff. After I changed the locale to 'LANG=zh_TW.UTF-8', it works.
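    For anyone hitting the same thing, the change can be sketched as below (assuming a Bash shell; zh_TW.UTF-8 is the locale from this thread, and any *.UTF-8 locale avoids the ASCII decode):

```shell
# Show the current locale, then switch this session to a UTF-8 locale.
echo "$LANG"              # e.g. zh_TW.big5 before the change
export LANG=zh_TW.UTF-8   # takes effect for this shell and its children
echo "$LANG"
```

    To make the change survive a new login shell, the same export line would also need to go into the shell's startup file.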

    #57490
    坤霖 李
    Participant

    But then I got some error messages and the HBase Master stopped.
    Error message:

      2014-07-21 10:01:22,804 WARN [master:vm202:60000] hdfs.DFSClient: DFS Read
      org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-61822306-172.16.1.200-1405493185923:blk_1073741825_1001 file=/apps/hbase/data/hbase.version
      at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:880)
      at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:560)
      at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:790)
      at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:837)
      at java.io.DataInputStream.read(DataInputStream.java:149)
      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
      at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:482)
      at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:569)
      at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:456)
      at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:147)
      at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128)
      at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:792)
      at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:609)
      at java.lang.Thread.run(Thread.java:745)
      2014-07-21 10:01:22,805 FATAL [master:vm202:60000] master.HMaster: Unhandled exception. Starting shutdown.
      org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-61822306-172.16.1.200-1405493185923:blk_1073741825_1001 file=/apps/hbase/data/hbase.version
      at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:880)
      at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:560)
      at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:790)
      at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:837)
      at java.io.DataInputStream.read(DataInputStream.java:149)
      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
      at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:482)
      at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:569)
      at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:456)
      at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:147)
      at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128)
      at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:792)
      at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:609)
      at java.lang.Thread.run(Thread.java:745)
      2014-07-21 10:01:22,805 INFO [master:vm202:60000] master.HMaster: Aborting
      2014-07-21 10:01:22,806 DEBUG
    #57492
    坤霖 李
    Participant

    I deleted the HDFS data folder '/hadoop/hdfs' and restarted the HBase Master service. It is working now.

    #57562

    It looks like the é character appears in the command. Was there a folder or file name that happened to contain that character?
    The command returned a non-zero exit code, and the Python script tried to print the error message, but that raised an exception in shell.py while concatenating a non-ASCII byte into the rest of the error message.
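    The hardening this explanation implies can be sketched as follows (a hypothetical rewrite for illustration, not the actual shell.py code): decode the subprocess output explicitly with a lossy fallback, so a stray byte such as 0xe9 can never raise while the error message is being built.

```python
# Hypothetical hardened message formatting (not the real shell.py code):
# decode the raw output explicitly so a stray 0xe9 byte cannot raise.
def format_error(command, code, out_bytes):
    out = out_bytes.decode("utf-8", errors="replace")  # never raises
    return "Execution of '%s' returned %d. %s" % (command, code, out)

# A lone 0xe9 is invalid UTF-8, so it becomes U+FFFD instead of crashing.
msg = format_error("hadoop fs -ls /caf\xe9", 1, b"caf\xe9: No such file")
print(msg)
```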

