The Apache Hadoop Distributed File System (HDFS) has been the de facto file system for big data. It is easy to forget just how scalable and robust HDFS is in the real world. Our customers run clusters with thousands of nodes; these clusters store over 100 petabytes of data and serve thousands of concurrent clients.
True to its big data roots, HDFS works best when most of the files are large, tens to hundreds of megabytes. HDFS suffers from the well-known small-files limitation and struggles once it stores more than about 400 million files. There is increasing demand for an HDFS-like storage system that can scale to billions of small files.
Ozone is a distributed key-value store that can manage both small and large files alike. While HDFS provides POSIX-like semantics, Ozone looks and behaves like an Object Store.
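To make the object-store model concrete, the toy sketch below shows how a flat key-value namespace differs from a POSIX directory tree: keys are addressed as volume/bucket/key, and any "directories" inside a key name are just part of the string. The class and methods here are hypothetical illustrations, not Ozone's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Toy in-memory model of flat object-store addressing (illustrative only,
// not Ozone's real client API).
public class ToyObjectStore {
    // A single flat map from the full key name to its value; there is no
    // directory hierarchy to maintain, only string keys.
    private final Map<String, byte[]> store = new HashMap<>();

    public void put(String volume, String bucket, String key, byte[] value) {
        store.put(volume + "/" + bucket + "/" + key, value);
    }

    public byte[] get(String volume, String bucket, String key) {
        return store.get(volume + "/" + bucket + "/" + key);
    }

    public static void main(String[] args) {
        ToyObjectStore s = new ToyObjectStore();
        // The slashes inside the key are just characters in its name.
        s.put("vol1", "logs", "2018/04/app.log", "hello".getBytes());
        System.out.println(new String(s.get("vol1", "logs", "2018/04/app.log")));
    }
}
```

Because lookups never walk a directory tree, a flat namespace like this avoids much of the per-file metadata pressure that limits HDFS with small files.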
Ozone is being designed and implemented by a team of engineers and architects with significant experience managing large Apache Hadoop clusters. This experience has given us insight into what HDFS does well and what could be done differently, and these lessons have influenced the design and evolution of Ozone.
An Alpha release of Ozone is available on the Apache Ozone website.
The design for Ozone was guided by the following principles.
2.1. Strongly Consistent
Strong consistency simplifies application design. Ozone is designed to provide strict serializability.
2.2. Architectural Simplicity
A simple architecture is easier to reason about and easier to debug when things go wrong. We have tried to keep the Ozone architecture simple even at the cost of some potential scalability. However, Ozone is no slouch when it comes to scale: we designed it to store over 100 billion objects in a single cluster.
2.3. Layered Architecture
To achieve the scale required of modern storage systems, Ozone has a layered architecture: it separates namespace management from the block and node management layer, allowing the two to scale independently.
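The separation described above can be sketched as two independent mappings: a namespace layer that resolves a key name to opaque block IDs, and a block layer that resolves block IDs to datanode locations. The types and names below are hypothetical, a minimal illustration of the layering rather than Ozone's real internals.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of a layered store: because the namespace map and the
// block map are independent, each layer can be scaled on its own axis.
public class LayeredStoreSketch {
    // Namespace layer: key name -> list of block IDs (hypothetical).
    static final Map<String, List<Long>> namespace = new HashMap<>();
    // Block layer: block ID -> datanode addresses holding replicas.
    static final Map<Long, List<String>> blocks = new HashMap<>();

    static void createKey(String name, long blockId, List<String> nodes) {
        namespace.computeIfAbsent(name, k -> new ArrayList<>()).add(blockId);
        blocks.put(blockId, nodes);
    }

    // Reading a key resolves it in two steps, one per layer.
    static List<String> locate(String name) {
        long firstBlock = namespace.get(name).get(0);
        return blocks.get(firstBlock);
    }

    public static void main(String[] args) {
        createKey("vol1/bucket1/key1", 42L, List.of("dn1:9866", "dn2:9866"));
        System.out.println(locate("vol1/bucket1/key1"));
    }
}
```

In this arrangement, growing the number of keys stresses only the namespace map, while growing raw storage capacity stresses only the block map, which is the independence the layering is meant to buy.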
2.4. Painless Recovery
A key strength of HDFS is that it can effectively recover from catastrophic events like cluster-wide power loss without losing data and without expensive recovery steps. Rack and node losses are relatively minor blips. Ozone will be similarly robust in the face of failures.
2.5. Open Source in Apache
We believe the Apache Open Source community is critical to the success of Ozone. All Ozone design and development is being done in the Apache Hadoop community.
2.6. Interoperability with Hadoop Ecosystem
Ozone should be usable by the existing Apache Hadoop ecosystem and related applications like Apache Hive, Apache Spark and traditional MapReduce jobs. Hence Ozone supports:
An Alpha release of Ozone is available for evaluation from the Apache Ozone web site at https://hadoop.apache.org/ozone/. The community is working hard on numerous features which will be available in a future Beta release, namely:
The Apache Hadoop community has proposed multiple ways to scale HDFS in the past, e.g.
Ozone design borrows ideas from all of these proposals. Numerous active and past developers have contributed ideas and code to the Ozone project.