October 08, 2018

Introducing Apache Hadoop Ozone: An Object Store for Apache Hadoop

1. Introduction

The Apache Hadoop Distributed File System (HDFS) has been the de facto file system for big data. It is easy to forget just how scalable and robust HDFS is in the real world. Our customers run clusters with thousands of nodes; these clusters store over 100 petabytes of data serving thousands of concurrent clients.

True to its big data roots, HDFS works best when most files are large – tens to hundreds of megabytes. HDFS suffers from its well-known small-files limitation and struggles beyond roughly 400 million files. There is growing demand for an HDFS-like storage system that can scale to billions of small files.
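The small-files limit comes from the NameNode keeping the entire namespace in JVM heap. A back-of-envelope estimate makes the pressure concrete; the ~150 bytes per namespace object used below is a commonly cited rule of thumb, not an exact figure from this article:

```python
# Rough estimate of NameNode heap consumed by namespace metadata.
# Assumption: ~150 bytes of heap per namespace object (file or block),
# a widely used rule of thumb, not a measured constant.

BYTES_PER_OBJECT = 150

def namenode_heap_gb(num_files, blocks_per_file=1):
    # Each file contributes one file object plus its block objects.
    objects = num_files * (1 + blocks_per_file)
    return objects * BYTES_PER_OBJECT / 1024**3

# 400 million small files, one block each:
print(round(namenode_heap_gb(400_000_000), 1))  # ~111.8 GiB of heap for metadata alone
```

At billions of files this grows past what a single JVM heap can comfortably hold, which is the scaling wall Ozone is designed to avoid.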

Ozone is a distributed key-value store that can manage both small and large files alike. While HDFS provides POSIX-like semantics, Ozone looks and behaves like an Object Store.
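The difference between POSIX-like semantics and an object store can be sketched in a few lines. Ozone addresses data as volume/bucket/key; the toy class below illustrates that flat addressing model only – the real Ozone client API is Java and lives in the Apache Hadoop codebase:

```python
# Toy sketch of an object store's flat addressing (volume/bucket/key),
# as opposed to a POSIX directory tree. Illustrative only.

class ObjectStore:
    def __init__(self):
        self._data = {}  # flat map: (volume, bucket, key) -> bytes

    def put(self, volume, bucket, key, value):
        # No mkdir, no directory tree: a key like "logs/2018/app.log"
        # is just a string in one flat namespace per bucket.
        self._data[(volume, bucket, key)] = value

    def get(self, volume, bucket, key):
        return self._data[(volume, bucket, key)]

store = ObjectStore()
store.put("vol1", "bucket1", "logs/2018/10/08/app.log", b"hello")
print(store.get("vol1", "bucket1", "logs/2018/10/08/app.log"))  # b'hello'
```

Because keys are opaque strings rather than paths through a tree, the metadata per object is small and uniform, which is what lets an object store handle small and large files alike.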

Ozone is being designed and implemented by a team of engineers and architects with significant experience managing large Apache Hadoop clusters. That experience has given us insight into what HDFS does well and what could be done differently, and these lessons have shaped the design and evolution of Ozone.

An Alpha release of Ozone is available on the Apache Ozone website at https://hadoop.apache.org/ozone/.

2. Design Tenets

The design for Ozone was guided by the following principles.

2.1. Strongly Consistent

Strong consistency simplifies application design. Ozone is designed to provide strict serializability.

2.2. Architectural Simplicity

A simple architecture is easier to reason about and easier to debug when things go wrong. We have tried to keep the Ozone architecture simple, even at the cost of some potential scalability. However, Ozone is no slouch when it comes to scale: we designed it to store over 100 billion objects in a single cluster.

2.3. Layered Architecture

To achieve the scale demanded of modern storage systems, Ozone has a layered architecture. It separates namespace management from the block and node management layer, which allows each to scale independently.
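The separation described above can be sketched in miniature. In Ozone these roles belong to the Ozone Manager (namespace) and the Storage Container Manager (blocks and nodes); the class names and the round-robin placement below are illustrative, not the real implementation:

```python
# Sketch of Ozone's layering: a namespace layer maps keys to block IDs,
# and an independent block layer maps block IDs to datanodes.
# Class names and placement policy are illustrative only.

import itertools

class BlockLayer:
    """Block and node management: allocates blocks, tracks replica locations."""
    def __init__(self, nodes):
        self._ids = itertools.count()
        self._nodes = nodes
        self.locations = {}  # block_id -> node

    def allocate_block(self):
        block_id = next(self._ids)
        # Trivial round-robin placement; real placement considers
        # capacity, topology, and pipeline state.
        self.locations[block_id] = self._nodes[block_id % len(self._nodes)]
        return block_id

class NamespaceLayer:
    """Namespace management: maps keys to the blocks holding their data."""
    def __init__(self, block_layer):
        self._block_layer = block_layer
        self._keys = {}  # key -> [block_id, ...]

    def create_key(self, key, num_blocks):
        self._keys[key] = [self._block_layer.allocate_block()
                           for _ in range(num_blocks)]
        return self._keys[key]

blocks = BlockLayer(nodes=["dn1", "dn2", "dn3"])
ns = NamespaceLayer(blocks)
print(ns.create_key("/vol1/bucket1/file", num_blocks=2))  # [0, 1]
```

Because the namespace layer never tracks node membership and the block layer never sees key names, each layer can be scaled (or replaced) without touching the other – the property the tenet above is after.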

2.4. Painless Recovery

A key strength of HDFS is that it can effectively recover from catastrophic events like cluster-wide power loss without losing data and without expensive recovery steps. Rack and node losses are relatively minor blips. Ozone will be similarly robust in the face of failures.

2.5. Open Source in Apache

We believe the Apache Open Source community is critical to the success of Ozone. All Ozone design and development is being done in the Apache Hadoop community.

2.6. Interoperability with Hadoop Ecosystem

Ozone should be usable by the existing Apache Hadoop ecosystem and related applications such as Apache Hive, Apache Spark, and traditional MapReduce jobs. Hence Ozone supports:

  1. Hadoop Compatible FileSystem API (aka OzoneFS). This allows Hive, Spark, etc. to use Ozone as a storage layer with zero modifications.
  2. Data Locality. Data locality was key to the original HDFS/MapReduce architecture by allowing compute tasks to be scheduled on the same nodes as the data. Ozone will also support data locality for applications that choose to use it.
  3. Side-by-side deployment with HDFS. Ozone can be installed in an existing Hadoop cluster and can share storage disks with HDFS.
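The first item above – a path-style FileSystem API over a flat object store – is the key to zero-modification compatibility. The toy adapter below shows one way such a translation can work; it is a sketch of the general technique, not the real OzoneFS implementation:

```python
# Sketch of how a Hadoop-compatible FileSystem adapter (like OzoneFS)
# can translate path-style calls into flat object-store operations,
# so path-based applications need no changes. Toy code, illustrative only.

class FlatStore:
    def __init__(self):
        self.objects = {}  # key -> bytes

class PathAdapter:
    """Exposes create/open/list-style calls over a flat key space."""
    def __init__(self, store):
        self._store = store

    def create(self, path, data):
        # A path is stored verbatim as a key; no directories are created.
        self._store.objects[path.lstrip("/")] = data

    def open(self, path):
        return self._store.objects[path.lstrip("/")]

    def list_status(self, dir_path):
        # "Directories" are simulated by key-prefix matching.
        prefix = dir_path.strip("/") + "/"
        return sorted(k for k in self._store.objects if k.startswith(prefix))

fs = PathAdapter(FlatStore())
fs.create("/data/part-0000", b"a")
fs.create("/data/part-0001", b"b")
print(fs.list_status("/data"))  # ['data/part-0000', 'data/part-0001']
```

An application that only calls create, open, and list through the adapter cannot tell it is talking to a flat key-value store, which is exactly why Hive and Spark can run unmodified.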

3. Current Status

An Alpha release of Ozone is available for evaluation from the Apache Ozone web site at https://hadoop.apache.org/ozone/. The community is working hard on numerous features which will be available in a future Beta release, namely:

  1. Security: Kerberos & Delegation Token
  2. High Availability
  3. Amazon S3-compatible REST API
  4. Rack-aware data placement

4. Credits

The Apache Hadoop community has proposed multiple ways to scale HDFS in the past, for example:

  1. HDFS-5477 – Block manager as a service
  2. HDFS-8286 – Scaling out the namespace using a KV store
  3. HDFS-5389 – A Namenode that keeps only a part of the namespace in memory
  4. Block Collection/Mega-block abstraction

Ozone design borrows ideas from all of these proposals. Numerous active and past developers have contributed ideas and code to the Ozone project.

5. Further Reading

  1. Apache Hadoop Ozone web site – https://hadoop.apache.org/ozone/
  2. Try out Ozone – https://www.katacoda.com/elek/scenarios/ozone101
