HDFS Forum

Block Pool Used too much disk

  • #50617
    Shawn Du
    Participant

    Hi,
    Please see the output of the cluster summary below.

    Cluster Summary

    Security is OFF
    69023 files and directories, 68580 blocks = 137603 total.
    Heap Memory used 79.92 MB is 45% of Commited Heap Memory 176.75 MB. Max Heap Memory is 1.74 GB.
    Non Heap Memory used 36.25 MB is 70% of Commited Non Heap Memory 51.50 MB. Max Non Heap Memory is 130 MB.
    Configured Capacity : 1.56 TB
    DFS Used : 1.21 TB
    Non DFS Used : 16.22 GB
    DFS Remaining : 336.78 GB
    DFS Used% : 77.87%
    DFS Remaining% : 21.11%
    Block Pool Used : 1.21 TB
    Block Pool Used% : 77.87%

    You can see that most of the disk is taken up by the block pool.
    This causes disk-full errors when running MapReduce jobs.

    What is the root cause of this issue, and how can I prevent it?

    Thanks.


  • #50618
    Shawn Du
    Participant

    By the way: The cluster has one NameNode and 7 DataNodes.

    #50628
    Robert Molina
    Moderator

    Hi Shawn,
    “Block Pool Used” refers to the set of blocks that belong to this namespace. Because you have a single NameNode/namespace, DFS Used is the same as Block Pool Used.

    Regards,
    Robert

    #50693
    Shawn Du
    Participant

    Robert,

    Thanks for your reply.
    In my understanding, when an HDFS client deletes a file, it deletes the metadata in the NameNode, but it seems the data on the DataNodes is not deleted. The cluster runs many jobs every day: each job first copies files from the local file system to HDFS, then runs the MapReduce, and deletes the source data after the job succeeds. I guess that data is not actually deleted on the DataNodes right away. I met the same problem before; that time I restarted HDFS and saw the NameNode send many notifications telling the DataNodes to delete the useless blocks. My questions: how can I force HDFS to delete a file immediately on both the NameNode and the DataNodes, and how can I force a sync manually between the NameNode and the DataNodes?
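
    For illustration, this is roughly how the jobs delete their HDFS input today — a minimal sketch using the standard Hadoop FileSystem Java API (the path /tmp/job-input is a made-up example). As far as I understand, delete() removes the file from the NameNode's namespace right away, but the DataNodes only reclaim the blocks later, after the NameNode sends them invalidation commands in reply to their heartbeats:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class DeleteJobInput {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml from the classpath
                FileSystem fs = FileSystem.get(conf);
                Path src = new Path("/tmp/job-input");      // made-up example path
                // true = recursive; the FileSystem API deletes directly and does not move the data to .Trash
                boolean deleted = fs.delete(src, true);
                System.out.println("removed from namespace: " + deleted);
                fs.close();
            }
        }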

    Thanks.

    #50694
    Shawn Du
    Participant

    By the way, my Hadoop version is 2.2.0.

    #50695
    Shawn Du
    Participant

    The default value of dfs.datanode.scan.period.hours is 21*24 hours, i.e. 21 days. I think that is a bit too long for my case. It is a pity that this configuration does not appear in the documentation; I got it from the source code. I have now set it to 120 hours, i.e. 5 days, and hope that works.
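
    For reference, this is what the override looks like in hdfs-site.xml on each DataNode (just a sketch of the setting mentioned above; I assume the DataNodes have to be restarted to pick it up):

        <property>
          <name>dfs.datanode.scan.period.hours</name>
          <value>120</value>  <!-- 120 hours = 5 days; the compiled-in default is 21*24 = 504 -->
        </property>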

    Thanks.

