
HBase Forum

RS Shutdown

    #12019
    Laurentiu
    Member

    Hi,
    Can you help me troubleshoot this error? Could it be related to the issues reported earlier?

    2012-11-12 16:56:40,020 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream XX.XX.XX.104:50010 java.net.SocketException: Too many open files
    2012-11-12 16:56:40,020 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-7735610519379550708_27883
    2012-11-12 16:56:40,020 INFO org.apache.hadoop.hdfs.DFSClient: Excluding datanode XX.XX.XX.104:50010
    2012-11-12 16:56:40,021 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream 10.70.21.103:50010 java.net.SocketException: Too many open files
    2012-11-12 16:56:40,021 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_8200187085947614532_27883
    2012-11-12 16:56:40,021 INFO org.apache.hadoop.hdfs.DFSClient: Excluding datanode XX.XX.XX.103:50010
    2012-11-12 16:56:40,022 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream 10.70.21.105:50010 java.net.SocketException: Too many open files
    2012-11-12 16:56:40,022 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_4829735829789842460_27883
    2012-11-12 16:56:40,023 INFO org.apache.hadoop.hdfs.DFSClient: Excluding datanode XX.XX.XX.105:50010
    2012-11-12 16:56:40,024 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream 10.70.21.102:50010 java.net.SocketException: Too many open files
    2012-11-12 16:56:40,024 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-3500893986757226670_27883
    2012-11-12 16:56:40,024 INFO org.apache.hadoop.hdfs.DFSClient: Excluding datanode XX.XX.XX.102:50010
    2012-11-12 16:56:40,024 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3418)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2609)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2849)

    2012-11-12 16:56:40,024 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block blk_-3500893986757226670_27883 bad datanode[0] nodes == null
    2012-11-12 16:56:40,024 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/apps/hbase/data/usertable/6dd2fdb4c038eeea8562515f89f083a8/.tmp/89dc1d0bd52540a2b3fff561140f43b4" - Aborting...
    2012-11-12 16:56:40,026 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server XXXXXXXXXX,60020,1352231477670: Replay of HLog required. Forcing server shutdown
    org.apache.hadoop.hbase.DroppedSnapshotException: region: usertable,user6089489121697188913,1352756918315.6dd2fdb4c038eeea8562515f89f083a8.
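
    For reference, the "Too many open files" SocketException means the RegionServer process has exhausted its file-descriptor limit, so the DFSClient cannot open new sockets to the DataNodes; each one gets excluded in turn and the block allocation fails. One way to confirm this on the affected node is to compare the process's descriptor count against the limit it is actually running with (a sketch; <pid> is a placeholder for the RegionServer's process id):

        # Descriptors currently held by the process
        ls /proc/<pid>/fd | wc -l
        # Limit the running process was started with
        grep "open files" /proc/<pid>/limits
        # Limit a fresh shell for this user would get
        ulimit -n

    If the first number is at or near the "open files" soft limit, the descriptor limit is the culprit rather than HDFS itself.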

    #12020
    Laurentiu
    Member

    In the HBase Master log the following is reported:
    2012-11-12 16:56:40,020 ERROR org.apache.hadoop.hbase.master.HMaster: Region server hdp-nod-chi-03.trustwave.com,60020,1352231477670 reported a fatal error:
    ABORTING region server XXXXXXXXXXXXX,60020,1352231477670: Replay of HLog required. Forcing server shutdown
    Cause:
    org.apache.hadoop.hbase.DroppedSnapshotException: region: usertable,user6089489121697188913,1352756918315.6dd2fdb4c038eeea8562515f89f083a8.
    at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1288)
    at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1172)
    at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1114)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:400)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:374)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.run(MemStoreFlusher.java:243)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: java.net.SocketException: Too many open files
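
    The chain of events: the memstore flush writes a new HFile through the DFSClient, which needs fresh sockets to the DataNodes; with no descriptors left the write fails, the flush is dropped as a DroppedSnapshotException, and the RegionServer aborts so that the HLog can be replayed to recover the unflushed edits. The usual remedy is to raise the nofile limit for the daemon users, e.g. in /etc/security/limits.conf (a sketch; the user names hbase and hdfs and the value 32768 are assumptions, not a tuned recommendation):

        # /etc/security/limits.conf - raise soft and hard open-file limits
        hbase  -  nofile  32768
        hdfs   -  nofile  32768

    The new limits only take effect for processes started after the next login (pam_limits must be enabled), so the DataNodes and RegionServers need a restart afterwards. For HBase workloads, dfs.datanode.max.xcievers in hdfs-site.xml is commonly raised alongside the descriptor limit.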

    #12032
    tedr
    Member

    Hi Laurentiu,

    This looks like the same thing that is going on in your other post: http://hortonworks.com/community/forums/topic/too-many-open-files-rs-logs/

    Ted.

    #12037
    Laurentiu
    Member

    I have re-run the tests after making the fixes outlined in the other posts, and I have not seen the issue again.

    #12069
    tedr
    Member

    Hi Laurentiu,

    Thanks for letting us know.

    Ted

