October 22, 2014

HDFS Metadata Directories Explained

HDFS metadata represents the structure of HDFS directories and files in a tree. It also includes the various attributes of directories and files, such as ownership, permissions, quotas, and replication factor. In this blog post, I’ll describe how HDFS persists its metadata in Hadoop 2 by exploring the underlying local storage directories and files. All examples shown are from testing a build of the soon-to-be-released Apache Hadoop 2.6.0.

WARNING: Do not attempt to modify metadata directories or files. Unexpected modifications can cause HDFS downtime, or even permanent data loss. This information is provided for educational purposes only.

Persistence of HDFS metadata broadly breaks down into 2 categories of files:

  • fsimage – An fsimage file contains the complete state of the file system at a point in time. Every file system modification is assigned a unique, monotonically increasing transaction ID. An fsimage file represents the file system state after all modifications up to a specific transaction ID.
  • edits – An edits file is a log that lists each file system change (file creation, deletion or modification) that was made after the most recent fsimage.

Checkpointing is the process of merging the content of the most recent fsimage with all edits applied after that fsimage, in order to create a new fsimage. Checkpointing is triggered automatically by configuration policies or manually by HDFS administration commands.
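The fsimage/edits relationship can be sketched with a toy model. Here the namespace is just a Python dict and each edit is a dict carrying a transaction ID; the real on-disk formats are binary and far more involved, so treat this purely as an illustration of the merge:

```python
# Toy model of checkpointing: replay edits newer than the fsimage's
# transaction ID onto the image state, producing a new fsimage.
# Illustration only; real HDFS images and edit logs are binary formats.

def load_fsimage(fsimage):
    """An fsimage is a (last_txid, namespace_state) snapshot."""
    return fsimage["txid"], dict(fsimage["state"])

def apply_edit(state, edit):
    """Apply one logged modification (create/delete) to the namespace."""
    if edit["op"] == "create":
        state[edit["path"]] = edit.get("attrs", {})
    elif edit["op"] == "delete":
        state.pop(edit["path"], None)
    return state

def checkpoint(fsimage, edits):
    """Merge edits with txids greater than the image's into a new fsimage."""
    last_txid, state = load_fsimage(fsimage)
    for edit in sorted(edits, key=lambda e: e["txid"]):
        if edit["txid"] > last_txid:
            state = apply_edit(state, edit)
            last_txid = edit["txid"]
    return {"txid": last_txid, "state": state}
```

For example, an fsimage at transaction ID 30 merged with edits 31 and 32 produces a new fsimage at transaction ID 32; the old fsimage and the now-redundant edit segments become candidates for cleanup.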


Here is an example of an HDFS metadata directory taken from a NameNode. This shows the output of running the tree command on the metadata directory, which is configured by setting dfs.namenode.name.dir in hdfs-site.xml.

├── current
│   ├── edits_0000000000000000001-0000000000000000007
│   ├── edits_0000000000000000008-0000000000000000015
│   ├── edits_0000000000000000016-0000000000000000022
│   ├── edits_0000000000000000023-0000000000000000029
│   ├── edits_0000000000000000030-0000000000000000030
│   ├── edits_0000000000000000031-0000000000000000031
│   ├── edits_inprogress_0000000000000000032
│   ├── fsimage_0000000000000000030
│   ├── fsimage_0000000000000000030.md5
│   ├── fsimage_0000000000000000031
│   ├── fsimage_0000000000000000031.md5
│   └── seen_txid
└── in_use.lock

In this example, the same directory has been used for both fsimage and edits. Alternatively, configuration options are available that allow separating fsimage and edits into different directories. Each file within this directory serves a specific purpose in the overall scheme of metadata persistence:

  • VERSION – Text file that contains:
    • layoutVersion – The version of the HDFS metadata format. When we add new features that require changing the metadata format, we change this number. An HDFS upgrade is required when the current HDFS software uses a layout version newer than what is currently tracked here.
    • namespaceID/clusterID/blockpoolID – These are unique identifiers of an HDFS cluster. The identifiers are used to prevent DataNodes from registering accidentally with an incorrect NameNode that is part of a different cluster. These identifiers also are particularly important in a federated deployment. Within a federated deployment, there are multiple NameNodes working independently. Each NameNode serves a unique portion of the namespace (namespaceID) and manages a unique set of blocks (blockpoolID). The clusterID ties the whole cluster together as a single logical unit. It’s the same across all nodes in the cluster.
    • storageType – This is either NAME_NODE or JOURNAL_NODE. Metadata on a JournalNode in an HA deployment is discussed later.
    • cTime – Creation time of file system state. This field is updated during HDFS upgrades.
  • edits_<start transaction ID>-<end transaction ID> – These are finalized (unmodifiable) edit log segments. Each of these files contains all of the edit log transactions in the range defined by the file name’s start and end transaction IDs. In an HA deployment, the standby can only read up through the finalized log segments. It will not be up-to-date with the current edit log in progress (described next). However, when an HA failover happens, the failover finalizes the current log segment so that it’s completely caught up before switching to active.
  • edits_inprogress_<start transaction ID> – This is the current edit log in progress. All transactions starting from that transaction ID are in this file, and all new incoming transactions get appended to this file. HDFS pre-allocates space in this file in 1 MB chunks for efficiency, and then fills it with incoming transactions. You’ll probably see this file’s size as a multiple of 1 MB. When HDFS finalizes the log segment, it truncates the unused portion of the space that doesn’t contain any transactions, so the finalized file’s space will shrink down.
  • fsimage_<end transaction ID> – This contains the complete metadata image up through that transaction ID. Each fsimage file also has a corresponding .md5 file containing an MD5 checksum, which HDFS uses to guard against disk corruption.
  • seen_txid – This contains the last transaction ID of the last checkpoint (merge of edits into a fsimage) or edit log roll (finalization of current edits_inprogress and creation of a new one). Note that this is not the last transaction ID accepted by the NameNode. The file is not updated on every transaction, only on a checkpoint or an edit log roll. The purpose of this file is to try to identify if edits are missing during startup. It’s possible to configure the NameNode to use separate directories for fsimage and edits files. If the edits directory accidentally gets deleted, then all transactions since the last checkpoint would go away, and the NameNode would start up using just fsimage at an old state. To guard against this, NameNode startup also checks seen_txid to verify that it can load transactions at least up through that number. It aborts startup if it can’t.
  • in_use.lock – This is a lock file held by the NameNode process, used to prevent multiple NameNode processes from starting up and concurrently modifying the directory.
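To make the file-name conventions concrete, here is a small sketch that parses a listing like the one above, picks the newest fsimage, selects the finalized edits segments that come after it, and applies the seen_txid safety check. The parsing and recovery logic are simplified assumptions for illustration, not the actual NameNode implementation:

```python
import re

# Sketch of NameNode startup planning: choose the newest fsimage, find the
# finalized edits segments needed past it, and verify against seen_txid.
FSIMAGE_RE = re.compile(r"fsimage_(\d+)$")
EDITS_RE = re.compile(r"edits_(\d+)-(\d+)$")

def plan_startup(filenames, seen_txid):
    images = [int(m.group(1)) for f in filenames if (m := FSIMAGE_RE.match(f))]
    if not images:
        raise RuntimeError("no fsimage found")
    latest_image = max(images)
    segments = sorted((int(m.group(1)), int(m.group(2)))
                      for f in filenames if (m := EDITS_RE.match(f)))
    # Only segments containing transactions newer than the image are needed.
    needed = [(s, e) for (s, e) in segments if e > latest_image]
    last_txid = needed[-1][1] if needed else latest_image
    if last_txid < seen_txid:
        # seen_txid promises at least this many transactions exist; if the
        # edits directory was lost, startup must abort rather than silently
        # roll back the namespace.
        raise RuntimeError(f"can only load up to txid {last_txid}, "
                           f"but seen_txid is {seen_txid}")
    return latest_image, needed
```

Note that the regular expressions deliberately skip the edits_inprogress and .md5 files, mirroring the fact that recovery starts from an fsimage plus finalized segments.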


In an HA deployment, edits are logged to a separate set of daemons called JournalNodes. A JournalNode’s metadata directory is configured by setting dfs.journalnode.edits.dir. The JournalNode will contain a VERSION file, multiple edits_<start transaction ID>-<end transaction ID> files and an edits_inprogress_<start transaction ID> file, just like the NameNode. The JournalNode will not have fsimage files or seen_txid. In addition, it contains several other files relevant to the HA implementation. These files help prevent a split-brain scenario, in which multiple NameNodes could think they are active and all try to write edits.

  • committed-txid – Tracks last transaction ID committed by a NameNode.
  • last-promised-epoch – This file contains the “epoch,” which is a monotonically increasing number. When a new writer (a new NameNode) starts as active, it increments the epoch and presents it in calls to the JournalNode. This scheme is the NameNode’s way of claiming that it is active, and that requests from another NameNode presenting a lower epoch must be ignored.
  • last-writer-epoch – Similar to the above, but this contains the epoch number associated with the writer who last actually wrote a transaction. (This was a bug fix for an edge case not handled by last-promised-epoch alone.)
  • paxos – Directory containing temporary files used in implementation of the Paxos distributed consensus protocol. This directory often will appear as empty.
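The epoch-based fencing that last-promised-epoch and last-writer-epoch support can be sketched as a toy model (a simplified illustration of the idea, not the real JournalNode RPC protocol):

```python
# Minimal sketch of epoch fencing on a JournalNode: writes carrying an
# epoch lower than the highest promised epoch are rejected, fencing out
# a NameNode that wrongly still believes it is active.

class JournalNodeSketch:
    def __init__(self):
        self.last_promised_epoch = 0   # persisted in last-promised-epoch
        self.last_writer_epoch = 0     # persisted in last-writer-epoch
        self.edits = []

    def new_epoch(self, epoch):
        """A NameNode becoming active must claim a strictly higher epoch."""
        if epoch <= self.last_promised_epoch:
            raise PermissionError(f"epoch {epoch} <= promised "
                                  f"{self.last_promised_epoch}")
        self.last_promised_epoch = epoch

    def journal(self, epoch, txn):
        """Accept a transaction only from the highest-epoch writer."""
        if epoch < self.last_promised_epoch:
            raise PermissionError(f"stale writer with epoch {epoch}")
        self.last_writer_epoch = epoch
        self.edits.append(txn)
```

If NameNode A holds epoch 1 and NameNode B takes over by claiming epoch 2, any later write from A is rejected, which is exactly the split-brain protection described above.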


Although DataNodes do not contain metadata about the directories and files stored in an HDFS cluster, they do contain a small amount of metadata about the DataNode itself and its relationship to a cluster. This shows the output of running the tree command on the DataNode’s directory, configured by setting dfs.datanode.data.dir in hdfs-site.xml.

├── current
│   ├── BP-1079595417-
│   │   ├── current
│   │   │   ├── VERSION
│   │   │   ├── finalized
│   │   │   │   └── subdir0
│   │   │   │       └── subdir1
│   │   │   │           ├── blk_1073741825
│   │   │   │           └── blk_1073741825_1001.meta
│   │   │   ├── lazyPersist
│   │   │   └── rbw
│   │   ├── dncp_block_verification.log.curr
│   │   ├── dncp_block_verification.log.prev
│   │   └── tmp
└── in_use.lock

The purpose of these files is:

  • BP-<random integer>-<NameNode IP address>-<creation time> – The naming convention on this directory is significant and constitutes a form of cluster metadata. The name is a block pool ID. “BP” stands for “block pool,” the abstraction that collects a set of blocks belonging to a single namespace. In the case of a federated deployment, there will be multiple “BP” sub-directories, one for each block pool. The remaining components form a unique ID: a random integer, followed by the IP address of the NameNode that created the block pool, followed by the creation time.
  • VERSION – Much like the NameNode and JournalNode, this is a text file containing multiple properties, such as layoutVersion, clusterId and cTime, all discussed earlier. There is a VERSION file tracked for the entire DataNode as well as a separate VERSION file in each block pool sub-directory. In addition to the properties already discussed earlier, the DataNode’s VERSION files also contain:
    • storageType – In this case, the storageType field is set to DATA_NODE.
    • blockpoolID – This repeats the block pool ID information encoded into the sub-directory name.
  • finalized/rbw – Both finalized and rbw contain a directory structure for block storage. This holds numerous block files, which contain HDFS file data and the corresponding .meta files, which contain checksum information. “Rbw” stands for “replica being written”. This area contains blocks that are still being written to by an HDFS client. The finalized sub-directory contains blocks that are not being written to by a client and have been completed.
  • lazyPersist – HDFS is incorporating a new feature to support writing transient data to memory, followed by lazy persistence to disk in the background. If this feature is in use, then a lazyPersist sub-directory is present and used for lazy persistence of in-memory blocks to disk. We’ll cover this exciting new feature in greater detail in a future blog post.
  • dncp_block_verification.log – This file tracks the last time each block was verified by checking its contents against its checksum. The last verification time is significant for deciding how to prioritize subsequent verification work. The DataNode orders its background block verification work in ascending order of last verification time. This file is rolled periodically, so it’s typical to see a .curr file (current) and a .prev file (previous).
  • in_use.lock – This is a lock file held by the DataNode process, used to prevent multiple DataNode processes from starting up and concurrently modifying the directory.
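The role of the .meta files can be illustrated with a simplified checksum check. A real DataNode stores CRC32C checksums over 512-byte chunks in a binary .meta format; the sketch below assumes a made-up representation (a plain list with one CRC-32 value per 512-byte chunk) purely to show the verification idea:

```python
import zlib

CHUNK_SIZE = 512  # HDFS also defaults to one checksum per 512 bytes of data

def compute_chunk_crcs(block_data):
    """One CRC-32 per chunk of block data (hypothetical .meta contents)."""
    return [zlib.crc32(block_data[i:i + CHUNK_SIZE])
            for i in range(0, len(block_data), CHUNK_SIZE)]

def verify_block(block_data, stored_crcs):
    """Re-compute chunk checksums and compare, as a block scanner would."""
    return compute_chunk_crcs(block_data) == stored_crcs
```

Flipping even a single byte in the block file changes one chunk’s checksum, which is how readers and the background block scanner detect disk corruption.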


Various HDFS commands impact the metadata directories

  • hdfs namenode – NameNode startup automatically saves a new checkpoint. As stated earlier, checkpointing is the process of merging any outstanding edit logs with the latest fsimage, saving the full state to a new fsimage file, and rolling edits. Rolling edits means finalizing the current edits_inprogress and starting a new one.
  • hdfs dfsadmin -safemode enter followed by hdfs dfsadmin -saveNamespace – This saves a new checkpoint (much like restarting the NameNode) while the NameNode process remains running. Note that the NameNode must be in safe mode, so all attempted write activity would fail while this is running.
  • hdfs dfsadmin -rollEdits – This manually rolls edits. Safe mode is not required. This can be useful if a standby NameNode is lagging behind the active and you want it to get caught up more quickly. (The standby NameNode can only read finalized edit log segments, not the current in-progress edits file.)
  • hdfs dfsadmin -fetchImage – Downloads the latest fsimage from the NameNode. This can be helpful for a remote backup type of scenario.

Configuration Properties

Several configuration properties in hdfs-site.xml control the behavior of HDFS metadata directories.

  • dfs.namenode.name.dir – Determines where on the local filesystem the DFS name node should store the name table (fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.
  • dfs.namenode.edits.dir – Determines where on the local filesystem the DFS name node should store the transaction (edits) file. If this is a comma-delimited list of directories then the transaction file is replicated in all of the directories, for redundancy. Default value is the same as dfs.namenode.name.dir.
  • dfs.namenode.checkpoint.period – The number of seconds between two periodic checkpoints.
  • dfs.namenode.checkpoint.txns – The standby will create a checkpoint of the namespace every ‘dfs.namenode.checkpoint.txns’ transactions, regardless of whether ‘dfs.namenode.checkpoint.period’ has expired.
  • dfs.namenode.checkpoint.check.period – How frequently to query for the number of uncheckpointed transactions.
  • dfs.namenode.num.checkpoints.retained – The number of image checkpoint files that will be retained in storage directories. All edit logs necessary to recover an up-to-date namespace from the oldest retained checkpoint will also be retained.
  • dfs.namenode.num.extra.edits.retained – The number of extra transactions which should be retained beyond what is minimally necessary for a NN restart. This can be useful for audit purposes or for an HA setup where a remote Standby Node may have been offline for some time and need to have a longer backlog of retained edits in order to start again.
  • dfs.namenode.edit.log.autoroll.multiplier.threshold – Determines when an active namenode will roll its own edit log. The actual threshold (in number of edits) is determined by multiplying this value by dfs.namenode.checkpoint.txns. This prevents extremely large edit files from accumulating on the active namenode, which can cause timeouts during namenode startup and pose an administrative hassle. This behavior is intended as a failsafe for when the standby fails to roll the edit log by the normal checkpoint threshold.
  • dfs.namenode.edit.log.autoroll.check.interval.ms – How often an active namenode will check if it needs to roll its edit log, in milliseconds.
  • dfs.datanode.data.dir – Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored. Heterogeneous storage allows specifying that each directory resides on a different type of storage: DISK, SSD, ARCHIVE or RAM_DISK.
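As a small illustration of how two of these properties are interpreted, the sketch below splits a comma-delimited dfs.namenode.name.dir value and computes the edit log autoroll threshold described above. The property names come from the list above; the parsing code itself is a hypothetical sketch, not NameNode code:

```python
# Sketch: interpreting two of the hdfs-site.xml properties listed above.

def name_dirs(conf):
    """dfs.namenode.name.dir may list several directories; the name table
    is replicated to each one for redundancy."""
    return [d.strip() for d in conf["dfs.namenode.name.dir"].split(",")]

def autoroll_threshold(conf):
    """The active NameNode rolls its own edit log after roughly
    multiplier * dfs.namenode.checkpoint.txns outstanding transactions."""
    return (float(conf["dfs.namenode.edit.log.autoroll.multiplier.threshold"])
            * int(conf["dfs.namenode.checkpoint.txns"]))
```

With a multiplier of 2.0 and a checkpoint threshold of 1,000,000 transactions, for example, the active NameNode would roll its own edit log at roughly 2,000,000 outstanding edits if the standby has failed to trigger a checkpoint.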


We briefly discussed how HDFS persists its metadata in Hadoop 2 by exploring the underlying local storage directories and files, the relevant configuration properties that drive specific behaviors, and the HDFS commands that inspect the metadata directories, initiate a checkpoint, and create an fsimage.

In a future blog, we’ll explore lazy persistence, a scheme to persist in-memory data to disk, in more detail.



Akhilesh says:

Thumbs up. Neatly done blog. Thanks for writing it.

Chris Nauroth says:

Thank you, Akhilesh!

Chris Warburton says:

Hi Chris… I am working with your libhdfs and I have a few questions… Do you reckon you could email me so we could discuss a couple of things? I’m re-writing certain things and wanted to know what you think and if you want the code afterwards?

Chris Nauroth says:

Hi Chris,

Thank you for looking into libhdfs and volunteering to contribute some code. My recommendation is to file an Apache jira to track proposed changes and post patches. Alternatively, here is a search of existing unresolved libhdfs issues:

If any of the existing issues look like what you’re doing, then you could participate on those.

By tracking in Apache jira, the whole Apache community can participate in any design and code decisions.

Thanks again!

Arun Nallusamy says:

This is awesome write-up, it gives us more clarity on hdfs items. thanks for sharing and can we get more like this? Thank you

Chris Nauroth says:

Thank you for the positive feedback, Arun! We’ll aim to continue with more posts like this on HDFS internals.

Bhaskar says:

Excellent Post! Looking forward to the post on lazy persistence in HDFS 2.6.0.

srikanth says:

Great stuff . Really ! Difficult to find such info even in books these days.

However, I got puzzled on this line DATANODE-“finalized/rbw – Both finalized and rbw contain a directory structure for block storage. This holds numerous block files, which contain HDFS file data and the corresponding .meta files, which contain checksum ”

As for as I understand, the namenode has the meta data which essentially contains the information about which blocks constitute a given file and the mappings of those blocks with the datanode hosts , in which they are present.

Now, coming back to the example quoted in Datanode section
├── blk_1073741825
│ │ │ │ └── blk_1073741825_1001.meta

I want to know what information does this meta file have ? Is it like a typical meta data in a linux FS. Also, what is the difference between blk_1073741825_1001.meta and the entries in the namenode meta data stored in fsimage or in edits file.

Can you please help me in clearing this confusion ? Thank you !

Chris Nauroth says:

Hello Srikanth.

You are correct that the NameNode holds the majority of the file system metadata. It contains all metadata associated with the inodes (directories and files) that make up the file system hierarchy. This includes permissions, block size, replication factor, and many other pieces of information.

The DataNode does not store this kind of metadata, but it does store some metadata for each block that it holds. For each block file, there is a corresponding .meta file that has the same base name. The primary purpose of this metadata file is to hold the CRC calculated for the contents of the block. This becomes important later when an HDFS client or the background block scanner needs to verify the checksum to make sure the data hasn’t become corrupted.

Shubham Agarwal says:

I installed HBase on my system. I checked Hbase directory in HDFS (using ./bin/hadoop fs -ls /hbase) but the hbase.version and file were not present. So what can be done to resolve it.

Ankita says:

I got a question like: What is the default size of the file used by Master node to store the metadata of chunks?

I know the default block size of the cluster is 64/128mb.
Also, is the metadata of chunks stored by the namenode or the datanode? Above it says the block metadata is stored in the Datanode. And if so, what is the size of the file used to store it?

What is the file size for metadata storage in NameNode?
What are the file sizes of fsimage and edits?

nidhin says:

Great Post..Thanks Chris

Fernando Paul says:

Hi Chris, great write up – wondering if you know if HDFS has a file path maximum length limitation, similar to the way most operating systems’ file system’s do?

Thanks again for the write up.

Chris Nauroth says:

Hello Fernando,

I’m glad the article was helpful!

HDFS enforces a maximum length per path component. For example, given the path /foo/bar/baz, the limit is enforced separately on “foo”, “bar”, “baz”. An individual component may not exceed the limit, but the sum total of all of the path components may exceed that limit. By default, the enforced limit is 255 bytes in UTF-8 encoding. This can be tuned with configuration property dfs.namenode.fs-limits.max-component-length in hdfs-site.xml. For more details, please see the documentation of that configuration property here:
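That per-component check can be sketched in a few lines, using the default limit of 255 bytes in UTF-8 (the helper below is an illustration of the rule, not HDFS code):

```python
# Sketch: enforce a maximum length per path component, as the NameNode
# does via dfs.namenode.fs-limits.max-component-length (default 255 bytes).

MAX_COMPONENT_BYTES = 255

def check_path(path, limit=MAX_COMPONENT_BYTES):
    """Raise if any single component of the path exceeds the byte limit.
    The total path length is not limited, only each component."""
    for component in path.strip("/").split("/"):
        if len(component.encode("utf-8")) > limit:
            raise ValueError(f"component too long: {component[:20]}...")
    return True
```

Note that the limit is in bytes of the UTF-8 encoding, so multi-byte characters reduce the number of characters allowed in a component.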

Thank you,

Vipin koul says:

Thanks a lot for the wonderful article, helped to clarify many of the things. I am having hadoop 2.7, and it seems some of the directory structure has changed. Do you have any write for the latest version?


Prasanth says:

Thanks for the awesome article! Does anyone know how to setup retention policies for the data blocks in dfs data directory in the data nodes using hdfs-site.xml? Do we have a specific property that can be used?

Mitul Kava says:

It’s an awesome article, very helpful.
In my case I am using Hadoop only for processing, and after the process completes I want to clean up those DataNode metadata files. Is it possible to delete those files, and if yes, how can I achieve that? After some time my disk becomes full.

Smarak Das says:

Hello Chris,
Wonderfully written. Helps to understand the key concepts of HDFS. I have 01 query:
(a) If we have an HA NameNode configuration & say, 03 Journal Node…the Edits to Namespace on Active NameNode are applied to all Journal Nodes simultaneously via the “EditLog***” file on the Journal Nodes. If so, does the NameNode checks for consistency in the EditLog*** files across the Journal Nodes to ensure they are in sync before applying the EditLog*** file to the locally stored FSImage file (As Journal Node doesn’t host FSImage) to achieve a Checkpoint.

Udayakumar Balakrishnan says:

Amazing Article Chris !!

I have a doubt regarding the epoch number value in Namenode.

“”When a new writer (a new NameNode) starts as active, it increments the epoch and presents it in calls to the JournalNode.This scheme is the NameNode’s way of claiming that it is active and requests from another NameNode, presenting a lower epoch, must be ignored.””

In the above lines, it is said that “Namenode will present epoch to Journal nodes. We know that in journal nodes, epoch numbers were present in local filesystem.
Now, Could you please tell us where the Namenode stores these epoch ??

Murali Krishna says:

Nice article, clear details.

Vikram Chouhan says:

Very good article. After long googling, found this one very useful.
