Part of the core Hadoop project, YARN is the architectural center of Hadoop. It allows multiple data processing engines, such as interactive SQL, real-time streaming, data science, and batch processing, to handle data stored in a single platform, unlocking an entirely new approach to analytics. It is the foundation of the new generation of Hadoop and is enabling organizations everywhere to realize a modern data architecture.
YARN is the prerequisite for Enterprise Hadoop, providing resource management and a central platform to deliver consistent operations, security, and data governance tools across Hadoop clusters.
YARN also extends the power of Hadoop to incumbent and new technologies found within the data center so that they can take advantage of cost-effective, linear-scale storage and processing. It provides ISVs and developers a consistent framework for writing data access applications that run IN Hadoop.
Hortonworks Focus for YARN
YARN is the central point of investment for Hortonworks within the Apache community. In fact, YARN was originally proposed (MR-279) and architected by one of our founders, Arun Murthy. Our engineers have been working within the Hadoop community to deliver and improve YARN for years. It has matured to become the solid, reliable architectural center of Hadoop and is a foundational component.
While relied upon by thousands, YARN can always be improved, especially with new engines emerging to interact with Hadoop data. To this end, Hortonworks has laid out the following investment themes for this foundational technology.
- Scheduling and Isolation
- Applications on YARN
Recent Progress in YARN
- Capacity scheduler preemption
- Security for Timeline Server
- Resource Manager application submit/kill REST API
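The application submit/kill REST API mentioned above is exposed over HTTP by the ResourceManager. The sketch below only builds the kill request; the host, port, and application ID are placeholders, and actually sending the request requires a running ResourceManager with its REST API enabled:

```python
import json

def build_kill_request(rm_host, app_id):
    """Build the URL and JSON body for the ResourceManager's
    'kill application' REST call.

    The ResourceManager exposes an application's state at
    /ws/v1/cluster/apps/{app-id}/state; issuing a PUT with the body
    {"state": "KILLED"} asks the RM to kill that application.
    """
    url = f"http://{rm_host}/ws/v1/cluster/apps/{app_id}/state"
    body = json.dumps({"state": "KILLED"})
    return url, body

# Placeholder host and application ID -- substitute your own cluster's values.
url, body = build_kill_request("rm.example.com:8088",
                               "application_1400000000000_0001")
print(url)
print(body)
# Actually sending it would look like (needs the 'requests' library
# and a live cluster):
#   requests.put(url, data=body, headers={"Content-Type": "application/json"})
```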
What YARN Does
As Hadoop's architectural center, YARN enhances a Hadoop compute cluster in the following ways:
- Multi-tenant data processing improves an enterprise’s return on its Hadoop investments.
How YARN Works
YARN’s original purpose was to split the two major responsibilities of the JobTracker/TaskTracker model, resource management and job scheduling/monitoring, into separate entities:
- a global ResourceManager
- a per-application ApplicationMaster
- a per-node slave NodeManager
- per-application Containers running on the NodeManagers
The ResourceManager and the NodeManager formed the new generic system for managing applications in a distributed manner. The ResourceManager is the ultimate authority that arbitrates resources among all applications in the system. The ApplicationMaster is a framework-specific entity that negotiates resources from the ResourceManager and works with the NodeManager(s) to execute and monitor the component tasks.
The ResourceManager has a scheduler, which is responsible for allocating resources to the various applications running in the cluster, according to constraints such as queue capacities and user limits. The scheduler schedules based on the resource requirements of each application.
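With the default Capacity Scheduler, those queue capacities and user limits are expressed in capacity-scheduler.xml. A minimal two-queue sketch follows; the queue names and percentages are illustrative, not recommendations:

```xml
<configuration>
  <!-- Two top-level queues under root; their capacities must sum to 100. -->
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,analytics</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>70</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.analytics.capacity</name>
    <value>30</value>
  </property>
  <!-- Cap any single user at half of the analytics queue's capacity. -->
  <property>
    <name>yarn.scheduler.capacity.root.analytics.user-limit-factor</name>
    <value>0.5</value>
  </property>
</configuration>
```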
Each ApplicationMaster has responsibility for negotiating appropriate resource containers from the scheduler, tracking their status, and monitoring their progress. From the system perspective, the ApplicationMaster runs as a normal container.
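That negotiation loop can be sketched schematically. The toy classes below are illustrative only, not the real YARN client APIs (which are Java: AMRMClient and NMClient); they just model an ApplicationMaster asking a scheduler for containers, running work in them, and releasing them:

```python
# Schematic ApplicationMaster lifecycle -- a toy model, not real YARN code.

class ToyResourceManager:
    """Hands out numbered containers up to a fixed cluster capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.allocated = 0

    def allocate(self, n):
        # The scheduler may grant fewer containers than requested.
        granted = min(n, self.capacity - self.allocated)
        self.allocated += granted
        return [f"container_{self.allocated - granted + i + 1}"
                for i in range(granted)]

    def release(self, containers):
        self.allocated -= len(containers)

def run_application_master(rm, tasks_needed):
    """Negotiate containers, 'run' a task in each, then release them."""
    held = []
    while len(held) < tasks_needed:
        # Ask only for what is still missing.
        held += rm.allocate(tasks_needed - len(held))
        if len(held) < tasks_needed:
            break  # a real AM would wait and re-request on the next heartbeat
    completed = [f"ran task on {c}" for c in held]
    rm.release(held)
    return completed

rm = ToyResourceManager(capacity=3)
print(run_application_master(rm, tasks_needed=2))
# → ['ran task on container_1', 'ran task on container_2']
```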
The NodeManager is the per-machine slave, which is responsible for launching the applications’ containers, monitoring their resource usage (CPU, memory, disk, network), and reporting that usage to the ResourceManager.
Try YARN with Sandbox
Hortonworks Sandbox is a self-contained virtual machine with HDP running alongside a set of hands-on, step-by-step Hadoop tutorials.