
HBase Forum

HBase CellCounter Error

  • #57751
    varun kumar kalluri
    Participant

Hi,
I have HBase (hbase-0.94.6.1.3.3.0-58) installed with Ambari. I am trying to run the CellCounter job to test my HBase service, but I am seeing the following errors:
HADOOP_CLASSPATH=`hbase classpath` hadoop jar /usr/lib/hbase/hbase-0.94.6.1.3.3.0-58-security.jar CellCounter ambarismoketest /user/k22751/CellCounter12
14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-58--1, built on 11/18/2013 01:24 GMT
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:host.name=euca-192-168-216-107.eucalyptus.internal.devlab.dev
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_45
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.7.0_45/jre

    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/lib/hadoop/libexec/../lib/native/Linux-amd64-64
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-358.18.1.el6.x86_64
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:user.name=k22751
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:user.home=/export/home/k22751
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:user.dir=/root
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=euca-x.x.x.x:2181,euca-x.x.x.x.eucalyptus.internal.devlab.dev:2181,x.x.x.xv:2181 sessionTimeout=60000 watcher=hconnection
    14/07/23 17:37:30 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 2723@euca-x.x.x.x.eucalyptus.internal.devlab.dev
    14/07/23 17:37:30 INFO zookeeper.ClientCnxn: Opening socket connection to server euca-x.x.x.x.eucalyptus.internal.devlab.dev/x.x.x.x:2181. Will not attempt to authenticate using SASL (unknown error)
    14/07/23 17:37:30 INFO zookeeper.ClientCnxn: Socket connection established to euca-x-x-x-x.eucalyptus.internal.devlab.dev/x.x.x.x:2181, initiating session
    14/07/23 17:37:30 INFO zookeeper.ClientCnxn: Session establishment complete on server euca-x-x-x-x.eucalyptus.internal.devlab.dev/x.x.x.x:2181, sessionid = 0x3474068d3090031, negotiated timeout = 40000
    14/07/23 17:37:31 INFO mapred.JobClient: Running job: job_201407181049_0026
    14/07/23 17:37:32 INFO mapred.JobClient: map 0% reduce 0%
    14/07/23 17:37:45 INFO mapred.JobClient: Task Id : attempt_201407181049_0026_m_000002_0, Status : FAILED
    Error initializing attempt_201407181049_0026_m_000002_0:
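
For reference, the invocation above is usually written with explicit shell command substitution so the HBase jars land on the MapReduce client's classpath. A minimal sketch, reusing only the jar path, table name and output directory from the post:

    # Put the HBase client jars on Hadoop's classpath
    export HADOOP_CLASSPATH=$(hbase classpath)
    # Run the CellCounter driver bundled in the HBase jar against table 'ambarismoketest',
    # writing the job output under /user/k22751/CellCounter12 in HDFS
    hadoop jar /usr/lib/hbase/hbase-0.94.6.1.3.3.0-58-security.jar CellCounter \
        ambarismoketest /user/k22751/CellCounter12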

  • #57752
    varun kumar kalluri
    Participant

    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/lib/hadoop/libexec/../lib/native/Linux-amd64-64
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-358.18.1.el6.x86_64
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:user.name=k22751
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:user.home=/export/home/k22751
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Client environment:user.dir=/root
    14/07/23 17:37:30 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=euca-192-168-216-110.eucalyptus.internal.devlab.dev:2181,euca-192-168-216-108.eucalyptus.internal.devlab.dev:2181,euca-192-168-216-116.eucalyptus.internal.devlab.dev:2181 sessionTimeout=60000 watcher=hconnection
    14/07/23 17:37:30 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 2723@euca-192-168-216-107.eucalyptus.internal.devlab.dev
    14/07/23 17:37:30 INFO zookeeper.ClientCnxn: Opening socket connection to server euca-192-168-216-116.eucalyptus.internal.devlab.dev/192.168.216.56:2181. Will not attempt to authenticate using SASL (unknown error)
    14/07/23 17:37:30 INFO zookeeper.ClientCnxn: Socket connection established to euca-192-168-216-116.eucalyptus.internal.devlab.dev/192.168.216.56:2181, initiating session
    14/07/23 17:37:30 INFO zookeeper.ClientCnxn: Session establishment complete on server euca-192-168-216-116.eucalyptus.internal.devlab.dev/192.168.216.56:2181, sessionid = 0x3474068d3090031, negotiated timeout = 40000
    14/07/23 17:37:31 INFO mapred.JobClient: Running job: job_201407181049_0026
    14/07/23 17:37:32 INFO mapred.JobClient: map 0% reduce 0%
    14/07/23 17:37:45 INFO mapred.JobClient: Task Id : attempt_201407181049_0026_m_000002_0, Status : FAILED
    Error initializing attempt_201407181049_0026_m_000002_0:
    java.io.FileNotFoundException: /mnt1/hadoop/mapred/taskTracker/k22751/jobcache/job_201407181049_0026/jars/org/apache/hadoop/hbase/util/SizeBasedThrottler.class (No space left on device)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
    at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
    at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
    at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
    at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
    at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:211)
    at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1340)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
    at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1315)
    at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1230)
    at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2641)
    at java.lang.Thread.run(Thread.java:744)

    14/07/23 17:37:45 WARN mapred.JobClient: Error reading task outputhttp://euca-192-168-216-107.eucalyptus.internal.devlab.dev:50060/tasklog?plaintext=true&attemptid=attempt_201407181049_0026_m_000002_0&filter=stdout
    14/07/23 17:37:45 WARN mapred.JobClient: Error reading task outputhttp://euca-192-168-216-107.eucalyptus.internal.devlab.dev:50060/tasklog?plaintext=true&attemptid=attempt_201407181049_0026_m_000002_0&filter=stderr
    14/07/23 17:37:46 INFO mapred.JobClient: Task Id : attempt_201407181049_0026_r_000002_0, Status : FAILED
    Error initializing attempt_201407181049_0026_r_000002_0:
    java.io.FileNotFoundException: /mnt1/hadoop/mapred/taskTracker/k22751/jobcache/job_201407181049_0026/jars/org/apache/hadoop/hbase/util/SizeBasedThrottler.class (No space left on device)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
    at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
    at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
    at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
    at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
    at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:211)
    at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1340)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
    at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1315)
    at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1230)
    at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2641)
    at java.lang.Thread.run(Thread.java:744)

    #57753
    Enis Soztutar
    Moderator

It seems that you have run out of disk space. From the exception trace:
(No space left on device)
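
The FileNotFoundException in the log above points at the TaskTracker's local job cache on /mnt1, and the task-output URLs name euca-192-168-216-107 as the node that reported the failed attempts. A quick way to verify this diagnosis, assuming shell access to that TaskTracker, is a sketch like:

    # On the TaskTracker that logged the failure (euca-192-168-216-107 per the task-output URLs),
    # check the filesystem that backs the local job cache named in the stack trace
    df -h /mnt1/hadoop/mapred/taskTracker
    # and see how much of that filesystem the task-tracker cache itself is holding
    du -sh /mnt1/hadoop/mapred/taskTracker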

    #57754
    varun kumar kalluri
    Participant

I have enough space:
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vg00-rootvol    29G  3.9G   23G  15% /
tmpfs                      7.8G     0  7.8G   0% /dev/shm
/dev/sda1                  243M   29M  202M  13% /boot
/dev/sdb                    60G  183M   60G   1% /mnt1

Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vg00-rootvol    29G  3.0G   24G  12% /
tmpfs                      7.8G     0  7.8G   0% /dev/shm
/dev/sda1                  243M   29M  202M  13% /boot
/dev/sdb                    60G  3.9G   57G   7% /mnt1

Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vg00-rootvol    29G  3.0G   24G  11% /
tmpfs                      7.8G     0  7.8G   0% /dev/shm
/dev/sda1                  243M   29M  202M  13% /boot
/dev/sdb                    60G  3.9G   57G   7% /mnt1

Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vg00-rootvol    29G  4.7G   23G  18% /
tmpfs                      7.8G     0  7.8G   0% /dev/shm
/dev/sda1                  243M   29M  202M  13% /boot
/dev/sdb                    60G  3.9G   57G   7% /mnt1

    Thanks,
    varun
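
The df output above shows plenty of free blocks on /mnt1, yet the TaskTracker still reported "No space left on device", and the thread ends unresolved. One possibility worth ruling out (not confirmed in the thread) is inode exhaustion: the kernel raises the same error when a filesystem runs out of inodes even though df -h shows free space, and the check has to run on every node listed in mapred.local.dir, not just the node where df was run. A hedged sketch, assuming /mnt1/hadoop/mapred is the configured local dir as the stack trace suggests:

    # Block usage can look fine while inodes are exhausted; check both, on every TaskTracker node
    df -h /mnt1
    df -i /mnt1
    # Listing the largest jobcache entries shows whether old job directories are
    # accumulating on the mount (the failing path sits under .../taskTracker/<user>/jobcache/)
    du -xsh /mnt1/hadoop/mapred/taskTracker/*/jobcache/* 2>/dev/null | sort -h | tail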

