Block Pool Used too much disk

This topic contains 5 replies, has 2 voices, and was last updated by  Shawn Du 4 months ago.

  • Creator
    Topic
  • #50617

    Shawn Du
    Participant

    Hi,
    see the output of the cluster summary:

    Cluster Summary

    Security is OFF
    69023 files and directories, 68580 blocks = 137603 total.
    Heap Memory used 79.92 MB is 45% of Committed Heap Memory 176.75 MB. Max Heap Memory is 1.74 GB.
    Non Heap Memory used 36.25 MB is 70% of Committed Non Heap Memory 51.50 MB. Max Non Heap Memory is 130 MB.
    Configured Capacity : 1.56 TB
    DFS Used : 1.21 TB
    Non DFS Used : 16.22 GB
    DFS Remaining : 336.78 GB
    DFS Used% : 77.87%
    DFS Remaining% : 21.11%
    Block Pool Used : 1.21 TB
    Block Pool Used% : 77.87%

    You can see that most of the disk is taken by the block pool.
    This causes disk-full errors when running map-reduce jobs.

    What is the root cause of this issue, and how can I prevent it?

    Thanks.


  • Author
    Replies
  • #50695

    Shawn Du
    Participant

    The default value of dfs.datanode.scan.period.hours is 21*24 hours, i.e. 21 days. I think that is a little too large for my case. It is a pity that this configuration doesn't appear in the documentation; I got it from the source code. I have now set it to 120, i.e. 5 days, and hope it works.
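
    For reference, this is how I set it in hdfs-site.xml (the value is in hours, so 120 means 5 days); as far as I know, each DataNode has to be restarted to pick up the change:

        <property>
          <name>dfs.datanode.scan.period.hours</name>
          <value>120</value>
        </property>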

    Thanks.

    #50694

    Shawn Du
    Participant

    By the way, my Hadoop version is 2.2.0.

    #50693

    Shawn Du
    Participant

    Robert,

    Thanks for your reply.

    In my understanding, when an HDFS client deletes a file, it deletes the metadata on the NameNode, but it seems the data is not deleted on the DataNodes right away. The cluster runs many jobs every day: each job first copies files from the local file system into HDFS, then runs the map-reduce, and deletes the source data after the job succeeds. I guess that data never actually gets deleted on the DataNodes. I met the same problem before; that time I restarted HDFS and saw the NameNode send many notifications telling the DataNodes to delete the useless blocks.

    My questions: how can I force HDFS to delete a file immediately on both the NameNode and the DataNodes? And how can I force a sync between the NameNode and the DataNodes manually?

    Thanks.
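
    One thing I still have to check (this is just a guess on my part, and the path below is only a placeholder): if fs.trash.interval is enabled in core-site.xml, deleted files are first moved to the user's .Trash directory and the blocks are only reclaimed after the interval expires. A delete that skips the trash should free the blocks immediately:

        hadoop fs -rm -r -skipTrash /path/to/job/input

        # or empty the current user's trash right away
        hadoop fs -expunge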

    #50628

    Robert Molina
    Moderator

    Hi Shawn,
    “Block Pool Used” is the set of blocks that belong to this namespace. Because you have a single NameNode/namespace, DFS Used is the same as Block Pool Used.
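
    You can also see capacity and usage broken down per DataNode from the command line:

        hdfs dfsadmin -report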

    Regards,
    Robert

    #50618

    Shawn Du
    Participant

    By the way, the cluster has one NameNode and 7 DataNodes.
