HBase RS Shutdown

This topic contains 4 replies, has 2 voices, and was last updated by tedr 1 year, 11 months ago.

  • Creator
    Topic
  • #12019

    Laurentiu
    Member

    Hi,
    Can you help me troubleshoot this error? Could it be related to the issues reported earlier?

    2012-11-12 16:56:40,020 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream XX.XX.XX.104:50010 java.net.SocketException: Too many open files
    2012-11-12 16:56:40,020 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-7735610519379550708_27883
    2012-11-12 16:56:40,020 INFO org.apache.hadoop.hdfs.DFSClient: Excluding datanode XX.XX.XX.104:50010
    2012-11-12 16:56:40,021 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream XX.XX.XX.103:50010 java.net.SocketException: Too many open files
    2012-11-12 16:56:40,021 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_8200187085947614532_27883
    2012-11-12 16:56:40,021 INFO org.apache.hadoop.hdfs.DFSClient: Excluding datanode XX.XX.XX.103:50010
    2012-11-12 16:56:40,022 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream XX.XX.XX.105:50010 java.net.SocketException: Too many open files
    2012-11-12 16:56:40,022 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_4829735829789842460_27883
    2012-11-12 16:56:40,023 INFO org.apache.hadoop.hdfs.DFSClient: Excluding datanode XX.XX.XX.105:50010
    2012-11-12 16:56:40,024 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream XX.XX.XX.102:50010 java.net.SocketException: Too many open files
    2012-11-12 16:56:40,024 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-3500893986757226670_27883
    2012-11-12 16:56:40,024 INFO org.apache.hadoop.hdfs.DFSClient: Excluding datanode XX.XX.XX.102:50010
    2012-11-12 16:56:40,024 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3418)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2609)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2849)

    2012-11-12 16:56:40,024 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block blk_-3500893986757226670_27883 bad datanode[0] nodes == null
    2012-11-12 16:56:40,024 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/apps/hbase/data/usertable/6dd2fdb4c038eeea8562515f89f083a8/.tmp/89dc1d0bd52540a2b3fff561140f43b4" - Aborting...
    2012-11-12 16:56:40,026 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server XXXXXXXXXX,60020,1352231477670: Replay of HLog required. Forcing server shutdown
    org.apache.hadoop.hbase.DroppedSnapshotException: region: usertable,user6089489121697188913,1352756918315.6dd2fdb4c038eeea8562515f89f083a8.


  • Author
    Replies
  • #12069

    tedr
    Member

    Hi Laurentiu,

    Thanks for letting us know.

    Ted

  • #12037

    Laurentiu
    Member

    I have re-run the tests after making the fixes outlined in the other posts, and I have not seen the issue again.

  • #12032

    tedr
    Member

    Hi Laurentiu,

    This looks like the same thing going on in your other post: http://hortonworks.com/community/forums/topic/too-many-open-files-rs-logs/

    Ted.

  • #12020

    Laurentiu
    Member

    In the HBase Master log, the following is reported:
    2012-11-12 16:56:40,020 ERROR org.apache.hadoop.hbase.master.HMaster: Region server ^@^@XXXXXXXXXXXXX,60020,1352231477670 reported a fatal error:
    ABORTING region server XXXXXXXXXXXXX,60020,1352231477670: Replay of HLog required. Forcing server shutdown
    Cause:
    org.apache.hadoop.hbase.DroppedSnapshotException: region: usertable,user6089489121697188913,1352756918315.6dd2fdb4c038eeea8562515f89f083a8.
    at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1288)
    at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1172)
    at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1114)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:400)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:374)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.run(MemStoreFlusher.java:243)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: java.net.SocketException: Too many open files
