Evolving Apache Hadoop YARN to Provide Resource and Workload Management for Services

Services leverage YARN’s extended capabilities

The Journey

Almost two years ago to the day, the Apache Hadoop community voted to make YARN a sub-project of Apache Hadoop; the GA release followed last fall, nearly a year ago.

Since then, it has become plainly obvious that Apache Hadoop 2.x, powered by YARN as its architectural center, is the best platform for workloads such as Apache Hadoop MapReduce, Apache Pig, and Apache Hive, which were designed to process data on Apache Hadoop HDFS. Furthermore, YARN has been embraced wholesale by other open-source communities providing data-processing frameworks such as Apache Giraph, Apache Tez, Apache Spark, Apache Flink, and many others.

Equally exciting, YARN has already evolved beyond just data-processing applications to long-running services with support from Apache Helix and applications like Apache Storm, Apache HBase, Apache Accumulo, and many others running on YARN via Apache Slider.


Last, but not least, the most exciting part of the journey in the past year, personally, has been how quickly key vendors like HP, Microsoft, and SAS are moving to embrace YARN to run their existing products and services natively on Hadoop.

Resource Management and Workload Management

I’ll take a moment to clarify a couple of concepts. Fundamentally, as a cluster operating system, YARN provides two distinct capabilities to applications: resource management and workload management. Resource management refers to resource allocation and the attendant resource isolation across a cluster of many thousands of nodes and tens of thousands of applications, including aspects such as tracking node failures and the availability of resources on individual machines. Workload management, by contrast, refers to the mechanics of deciding whom to allocate resources to (applications, users, queues), SLAs for allocation (e.g. via preemption), and so on.

Clearly, the first step in this space was to make it easy for existing systems to slide onto YARN, unmodified, via Apache Slider. Slider provides a simple way to run any existing system on YARN by merely providing details such as the required resources (CPU/memory per container, number of containers, software, start/stop commands, etc.). It is being leveraged now to run myriad systems like HBase, Accumulo, and Storm on YARN, thereby using YARN’s resource management capabilities to let them share resources with data-processing workloads like MapReduce, Tez, and Spark. Apache Helix provides similar capabilities for a more diverse set of use-cases and, furthermore, provides facilities such as sharding.
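To give a flavor of what “merely providing details” looks like, Slider applications declare their resource needs in a resources.json file. The sketch below is illustrative for a hypothetical HBase deployment; the exact keys and schema string should be checked against the Slider documentation for your version:

```json
{
  "schema": "http://example.org/specification/v2.0.0",
  "components": {
    "HBASE_MASTER": {
      "yarn.role.priority": "1",
      "yarn.component.instances": "1",
      "yarn.memory": "1024",
      "yarn.vcores": "1"
    },
    "HBASE_REGIONSERVER": {
      "yarn.role.priority": "2",
      "yarn.component.instances": "4",
      "yarn.memory": "2048",
      "yarn.vcores": "1"
    }
  }
}
```

Slider translates each component entry into container requests against YARN, so the packaged system never needs YARN-specific code of its own.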

As we spent time with key partners like Teradata, HP, and SAS to learn more about their systems and analytical products, it became clear that there were opportunities to provide deeper integration than merely allocating resources to these systems in a somewhat static manner (e.g. running the DW or the Analytics Server on 50 servers out of the 1,000 managed by YARN). The idea was that we could provide facilities in YARN to let these external systems leverage not just the resource management capabilities of YARN, but also its workload management features.

Container Delegation

So, how do we go about providing this functionality to services?

The answer lies in a new capability being added to YARN called container delegation.

Currently, the Standard Container Model works as follows: a YARN application (via the ApplicationMaster, i.e. the AM) negotiates a container from the ResourceManager and then launches the container by interacting with the NodeManager to utilize the allocated resources (CPU, memory, etc.) on the specific machine.
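The two-step flow can be made concrete with a toy model. This is an illustrative sketch only; in real YARN the AM uses the Java AMRMClient/NMClient APIs, and the ResourceManager's scheduler is far more sophisticated than this first-fit loop:

```python
# Toy model of YARN's Standard Container Model: the AM negotiates a
# container from the RM, then launches it via the NM on the granted node.
from dataclasses import dataclass

@dataclass
class Container:
    container_id: int
    node: str
    memory_mb: int
    vcores: int

class ResourceManager:
    """Grants containers against per-node free capacity (first fit)."""
    def __init__(self, nodes):
        self.free = dict(nodes)          # node -> (memory_mb, vcores)
        self.next_id = 0

    def allocate(self, memory_mb, vcores):
        for node, (mem, cpu) in self.free.items():
            if mem >= memory_mb and cpu >= vcores:
                self.free[node] = (mem - memory_mb, cpu - vcores)
                self.next_id += 1
                return Container(self.next_id, node, memory_mb, vcores)
        return None                      # no capacity; a real AM would retry

class NodeManager:
    """Launches the container's process on its node (simplified)."""
    def __init__(self):
        self.running = {}

    def launch(self, container, command):
        self.running[container.container_id] = command

# An ApplicationMaster negotiates with the RM, then launches via the NM.
rm = ResourceManager({"node1": (8192, 8)})
nm = NodeManager()
c = rm.allocate(memory_mb=1024, vcores=1)
nm.launch(c, "run-my-task.sh")
```

The key point for what follows: in this standard model the application itself consumes the allocation by launching a process inside it.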


YARN Standard Container Model

Now, we are adding a new model we call the Delegated Container Model for YARN.

Preface: it helps to understand the distinction between a Service that is already running (in YARN containers or externally) such as a Database and an application that needs to utilize the service e.g. a SQL query.

In the Delegated Container Model, the SQL query negotiates containers from the ResourceManager (via the ApplicationMaster or the client) and, rather than launching the container to utilize the allocated resources itself, it delegates the resources allocated to the container to the Service, which in turn gets additional resources on the machine from the NodeManager to be used on behalf of the query. This contrasts with the Standard Container Model, in which the application would have launched the container itself to use the allocated resources.
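The difference can again be sketched as a toy model. Note this is purely illustrative; YARN-1488 was still under development when this was written, so the method names below are hypothetical and model only the resource accounting, not a real API:

```python
# Toy model of the proposed Delegated Container Model (YARN-1488):
# a query's allocation is handed to an already-running service container
# instead of being launched as a new process.
from dataclasses import dataclass

@dataclass
class ServiceContainer:
    memory_mb: int
    vcores: int

    def accept_delegation(self, memory_mb, vcores):
        # The NodeManager would expand this container's resource limits.
        self.memory_mb += memory_mb
        self.vcores += vcores

    def release_delegation(self, memory_mb, vcores):
        # On release, limits shrink back toward pre-delegation values.
        self.memory_mb -= memory_mb
        self.vcores -= vcores

# A SQL query's AM negotiates a (2048 MB, 2 vcore) container from the RM,
# then delegates it to the running database service rather than launching it.
service = ServiceContainer(memory_mb=4096, vcores=4)
service.accept_delegation(2048, 2)   # service works on the query's behalf
expanded = (service.memory_mb, service.vcores)
service.release_delegation(2048, 2)  # query done; resources return to YARN
```

The service's footprint grows for exactly as long as the query runs, which is what lets YARN's scheduler account for the query's resource usage even though the service executes it.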


YARN Delegated Container Model – Container Delegation

From an implementation perspective, on delegation the NodeManager expands the target container (i.e. the Service) by modifying its cgroups to allow for more CPU, memory, etc.

When the container is no longer needed, the Service can release the container back to YARN, so the resources can be used by other applications. The Service can no longer use the resources of the delegated container, since the NodeManager enforces the smaller pre-delegation resource limits.


YARN Delegated Container Model – Delegation Complete, Container Released

As a result, the YARN Container Delegation Model opens up YARN to provide facilities for deeper integration and allows services to rely on workload management provided by YARN. Thus, these services are now liberated from having to implement workload management and resource management capabilities for their own applications, thereby simplifying them.

Equally important, this allows management of all workloads, Hadoop (Hive, Pig, Spark) and external alike, through a single resource management interface: YARN. This radically simplifies operability and provides a homogeneous way to reason and report about all workloads in the datacenter.

With this ability, there are a number of applications including Apache Hadoop HDFS that can take advantage of YARN. For example, the In-Memory Storage Tier being developed for HDFS can now be managed through YARN by allowing individual applications to negotiate memory from YARN, and then delegating to HDFS to be used on their behalf to cache datasets in RAM.

For more details, please follow the development of this feature in the open at the Apache Software Foundation through YARN-1488.

Summary

The YARN Container Delegation Model is a novel feature that allows external services to take advantage of both the resource management and workload management capabilities of YARN, thereby radically simplifying how enterprises manage heterogeneous workloads and services in their datacenters. Furthermore, it demonstrates how the Apache Hadoop YARN community is focused on an ever-broadening set of use-cases, providing utility not just to projects in the Apache Hadoop ecosystem, but also to existing non-Hadoop products and services that want to take advantage of Hadoop resources.

Many thanks to everyone in the Apache Hadoop community who has participated in the discussions and contributed their time and effort, including Alejandro Abdelnur, Vinod K. V., Henry Robinson, Bikas Saha, Hitesh Shah, Sandy Ryza, Siddharth Seth, and many others.
