HBase Forum

Long GC causes regionserver to die

  • #48534
    Sun Ww
    Participant

    Hi,
    I am using HBase 0.94.2 with MSLAB enabled and the Concurrent Mark-Sweep GC.
    Recently a regionserver died because of GC. This is the GC log:

    2014-02-04T22:16:23.847+0800: 4883193.102: [GC 4883193.102: [ParNew: 439859K->52033K(471872K), 0.0446910 secs] 11657413K->11269587K(16724800K), 0.0449160 secs] [Times: user=0.63 sys=0.00, real=0.05 secs]
    2014-02-04T22:16:53.376+0800: 4883222.631: [GC 4883222.631: [ParNew: 471489K->20023K(471872K), 42.7967180 secs] 11689043K->11259608K(16724800K), 42.7969920 secs] [Times: user=349.40 sys=40.95, real=42.79 secs]
    2014-02-04T22:17:37.333+0800: 4883266.589: [GC 4883266.589: [ParNew: 439479K->36584K(471872K), 0.0400790 secs] 11679064K->11276169K(16724800K), 0.0402220 secs] [Times: user=0.62 sys=0.00, real=0.04 secs]

    It seems like a Young Generation GC took 42 seconds to free roughly 440 MB.
    Is that too long?
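For reference, the numbers in the 42-second line above can be decoded with a small sketch. This is an illustrative parser, not anything from HBase itself; the regex assumes the JDK 6/7-style CMS log format shown in the post:

```python
import re

# Matches the ParNew portion of a CMS GC log line, e.g.:
#   [ParNew: 471489K->20023K(471872K), 42.7967180 secs]
PARNEW_RE = re.compile(r"\[ParNew: (\d+)K->(\d+)K\(\d+K\), ([\d.]+) secs\]")

def parse_parnew(line):
    """Return (freed_mb, pause_secs) for a ParNew log line, or None."""
    m = PARNEW_RE.search(line)
    if m is None:
        return None
    before_kb, after_kb = int(m.group(1)), int(m.group(2))
    pause_secs = float(m.group(3))
    return (before_kb - after_kb) / 1024.0, pause_secs

# The second log line from the post (the long pause):
line = ("2014-02-04T22:16:53.376+0800: 4883222.631: [GC 4883222.631: "
        "[ParNew: 471489K->20023K(471872K), 42.7967180 secs] "
        "11689043K->11259608K(16724800K), 42.7969920 secs]")

freed_mb, pause = parse_parnew(line)
print(f"freed {freed_mb:.0f} MB in {pause:.1f} s")  # → freed 441 MB in 42.8 s
```

So the collection freed about 441 MB of young-generation space in 42.8 seconds of real time, which is indeed far longer than the surrounding collections (~0.04 s each).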

    Any suggestion will be appreciated.
    Thank you.


  • #48585
    Nick Dimiduk
    Moderator

    Hi Sun Ww,

    Are you using an HDP release? My guess is not. This forum is best served when you’re running HDP. If not, you’re better off contacting the hbase user mailing list.

    That said, you’ll need to provide more information about your configuration and what the RS was doing at the time. Do you have RS logs from that time window? Are you using the default memstore and blockcache size? How many regions are being served? How large are they? What’s the workload (is it particularly read or write heavy)?

    #48628
    Sun Ww
    Participant

    OK, I will try it.
    Thank you.

