We hosted a webinar on YARN a couple of weeks ago (see the slides and playback here). As you might expect, there were a lot of great questions, and here is a set of answers to them.
Our next YARN-oriented Office Hours will be held online on September 11th at 2pm PST. Join us on Meetup!
Who is using YARN and what benefits have they received from it?
One great public example of YARN in production is at Yahoo!, who outlined some performance gains in a keynote address at Hadoop Summit this year. The video can be found here.
Yahoo uses YARN for three use cases: stream processing, iterative processing, and shared storage. With Storm on YARN, they stream data into a cluster and execute analytics over 5-second windows. This cluster is only 320 nodes, but it processes 133,000 events per second and runs 12,000 threads. Their shared data cluster uses 1,900 nodes to store 2PB of data.
In all, Yahoo has over 30,000 nodes running YARN across over 365PB of data. They calculate that they run about 400,000 jobs per day, consuming about 10 million hours of compute time. They also estimate a 60% to 150% improvement in node usage per day.
In an application such as HBase, if the containers are dynamically allocated, how does a client of the service figure out which node(s) to connect to?
HBase registers its configuration parameters with ZooKeeper. Clients bootstrap by looking up the current locations of critical services in ZooKeeper. This architecture enables dynamic scale-out with zero dependency on a fixed HBase service address. Like HBase, applications are recommended to use ZooKeeper or HDFS to discover service configuration.
For applications that rely on static configuration, YARN supports container allocation on specific hosts.
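As a concrete illustration of this bootstrap pattern, here is a minimal, hedged sketch in Python. The in-memory registry below is a toy stand-in for ZooKeeper (a real service would register an ephemeral znode via the ZooKeeper client API), and all paths, hostnames, and ports are invented for the example:

```python
# Toy stand-in for ZooKeeper-based service discovery.
# In a real deployment this registry would be ZooKeeper, and the
# registrations would be ephemeral znodes that disappear if the
# service's container dies. All names here are illustrative.

class ServiceRegistry:
    """Maps a path such as /services/hbase/master to a (host, port)."""
    def __init__(self):
        self._nodes = {}

    def register(self, path, endpoint):
        # The service registers its dynamically assigned endpoint at startup.
        self._nodes[path] = endpoint

    def unregister(self, path):
        self._nodes.pop(path, None)

    def lookup(self, path):
        # Clients bootstrap by reading the current endpoint, never a
        # hard-coded address, so the service can move between nodes.
        return self._nodes.get(path)

registry = ServiceRegistry()

# Container 1: the service comes up on whatever node YARN chose.
registry.register("/services/hbase/master", ("node17.example.com", 60000))
assert registry.lookup("/services/hbase/master") == ("node17.example.com", 60000)

# The service is rescheduled onto a different node; clients simply re-resolve.
registry.unregister("/services/hbase/master")
registry.register("/services/hbase/master", ("node03.example.com", 60000))
assert registry.lookup("/services/hbase/master") == ("node03.example.com", 60000)
```

The key point is that clients depend only on the well-known registry path, never on the address of the node the service happens to be running on.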
How are apps advised to manage the split-brain problem (e.g. if an Application Master loses contact with the Resource Manager and the Resource Manager starts a second Application Master)?
In a large cluster, split brain is a real concern and must be factored into the AM design. At the framework level, the Resource Manager, upon spawning a new Application Master (AM), will actively terminate the old AM and the containers allocated to it. However, this action won't take effect until each NodeManager re-establishes a connection with the Resource Manager, leaving the possibility of two concurrent AMs for a period of time. It is the responsibility of the AM to account for this possibility and ensure job integrity. The MapReduce AM uses a locking scheme on its checkpoint file and output directory to guarantee job consistency.
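The locking idea can be sketched as follows. This is a toy illustration, not the actual MapReduce AM code: it uses a local filesystem's atomic exclusive-create as a stand-in for the HDFS-based lock, and the attempt IDs are invented:

```python
# Toy sketch of split-brain-safe output commit, assuming a filesystem
# with atomic exclusive create (os.O_EXCL). The real MapReduce AM uses
# an analogous lock/rename scheme on HDFS; names here are illustrative.
import os
import tempfile

def try_commit(job_dir, am_attempt_id):
    """Attempt to commit job output; exactly one caller wins."""
    lock_path = os.path.join(job_dir, "_COMMIT_LOCK")
    try:
        # O_CREAT | O_EXCL fails if the lock file already exists, so even
        # if two AM attempts run concurrently, only one commit succeeds.
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # another AM attempt already committed (or is committing)
    with os.fdopen(fd, "w") as f:
        f.write(am_attempt_id)  # record the winner for debugging
    return True

job_dir = tempfile.mkdtemp()
assert try_commit(job_dir, "appattempt_0001") is True   # first AM wins
assert try_commit(job_dir, "appattempt_0002") is False  # stale AM loses
```

However the lock is implemented, the design goal is the same: a stale AM must not be able to corrupt or double-commit the job's output.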
When is YARN going to be production?
YARN is the distributed data processing framework for Hadoop 2. Hadoop 2.1.1-Beta was released on 8/25, and Hadoop 2.2.0 GA is anticipated in Fall 2013. Hortonworks Data Platform Version 2.0 Beta packages the beta release along with the broad set of projects necessary for a Hadoop implementation, including simple provisioning with Apache Ambari.
Are there any examples of AuxiliaryService other than MR’s auxiliary service?
mapreduce.shuffle is the only auxiliary service available in Hadoop 2.
Can the launched task itself be shipped via localResources (i.e., can the AM transport the executable binary as a local resource)?
Yes. The AM binary can be specified as a local resource in the AM's container launch context. Likewise, a task's executable can be specified as a local resource for the container in which the task is to be launched.
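To illustrate the mechanics, here is a hedged, toy sketch of resource localization. It mimics YARN's behavior of identifying a local resource by path, size, and timestamp and rejecting a resource that changed after it was registered; the function and file names are invented, and real code would use the LocalResource API against HDFS:

```python
# Toy sketch of LocalResource localization. In YARN, a local resource is
# registered with its size and timestamp, and the NodeManager re-checks
# both before staging it into the container's working directory, so a
# file that changed after submission is rejected. Names are illustrative.
import os
import shutil
import tempfile

def localize(src, expected_size, expected_mtime, container_dir):
    st = os.stat(src)
    if st.st_size != expected_size or int(st.st_mtime) != expected_mtime:
        raise RuntimeError("resource changed since it was registered")
    dst = os.path.join(container_dir, os.path.basename(src))
    shutil.copy2(src, dst)  # stage the binary into the container work dir
    return dst

staging = tempfile.mkdtemp()
container_dir = tempfile.mkdtemp()
binary = os.path.join(staging, "my-task-binary")
with open(binary, "wb") as f:
    f.write(b"toy executable payload")

st = os.stat(binary)
local_copy = localize(binary, st.st_size, int(st.st_mtime), container_dir)
assert os.path.exists(local_copy)
```

The container then launches the staged copy from its own working directory, so the AM never needs the binary pre-installed on any particular node.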
When multiple containers are allocated, does the RM prefer to spread containers evenly over all nodes?
In the absence of a specific resource request, the Resource Manager distributes containers across the cluster. However, it is common for an AM to request containers that are co-located with the data it needs to process; this is the default behavior of the MapReduce AM.
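The locality preference can be sketched as a simple placement function. This is an illustrative toy, not the YARN scheduler; the node names are invented, and a real request would carry node and rack lists on a ResourceRequest:

```python
# Toy sketch of locality-aware container placement: prefer nodes that
# hold the split's data, fall back to any node with capacity. This
# illustrates the request pattern, not YARN's actual scheduler.

def place_container(data_nodes, free_slots):
    """data_nodes: nodes holding replicas of the input split.
    free_slots: dict of node -> available container slots (mutated)."""
    # First preference: a node that already stores the data (node-local).
    for node in data_nodes:
        if free_slots.get(node, 0) > 0:
            free_slots[node] -= 1
            return node, "node-local"
    # Fallback: any node with capacity (rack-local tiers omitted here).
    for node, slots in free_slots.items():
        if slots > 0:
            free_slots[node] -= 1
            return node, "remote"
    return None, "wait"  # nothing free; keep the request outstanding

free = {"n1": 0, "n2": 1, "n3": 2}
assert place_container(["n1", "n2"], free) == ("n2", "node-local")
# n1 is full and n2 is now exhausted, so the next request falls back.
assert place_container(["n1"], free) == ("n3", "remote")
```

Real schedulers add a rack-local tier between node-local and off-switch placement, but the preference ordering is the same idea.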
How does the client communicate directly with an AM? Are there examples of this?
The client cannot communicate directly with the AM unless both have been written to support a common communication protocol. Given that an AM can be launched anywhere on the cluster, the client discovers its location via the ApplicationReport exposed by the YARN client APIs. Using this, the client initiates the appropriate communication protocol with the AM. An example of this is implemented in MapReduce, where the JobClient communicates directly with the MapReduce ApplicationMaster.
Are the AMRM and AMNM protocols thread-safe (can multiple threads call into the protocol APIs simultaneously)?
Yes. AMRM & AMNM protocols are thread-safe.
If enough container slots are available, will my AM see them allocated on the first attempt?
Contingent on availability, the Resource Manager will make a best-effort attempt to allocate all the requested resources to the AM on the first attempt. However, this behavior isn't guaranteed and should be accounted for in the design of the Application Master.
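An AM can handle partial grants with a heartbeat loop that keeps its remaining ask outstanding until it is fully satisfied. Below is a hedged, self-contained toy of that loop; the grant sequence is simulated rather than coming from a real Resource Manager:

```python
# Toy sketch of an AM's allocate loop: keep re-requesting until all
# containers have been granted, since the RM may satisfy a request
# over several heartbeats. The "RM" here is just a list of grants.

def run_allocation_loop(needed, grants_per_heartbeat):
    """grants_per_heartbeat: container counts the RM returns on
    successive heartbeats (each may be zero or a partial grant)."""
    allocated = 0
    heartbeats = 0
    for granted in grants_per_heartbeat:
        heartbeats += 1
        allocated += granted          # accept whatever was granted
        if allocated >= needed:       # stop asking once fully satisfied
            break
        # otherwise the ask for (needed - allocated) stays outstanding
    return allocated, heartbeats

# The RM grants 4, then 0, then 6 containers across three heartbeats.
assert run_allocation_loop(10, [4, 0, 6]) == (10, 3)
# A single generous heartbeat can also satisfy everything at once.
assert run_allocation_loop(3, [3]) == (3, 1)
```

The design point is that the AM should treat every allocation as potentially partial and make progress with whatever containers it has been given so far.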
Is there a way for an auxiliary service to specify LocalResources (e.g. so it can run native binaries that are available in HDFS)?
An auxiliary service is not launched as part of a container, so local resources are not relevant to it. Given that its configuration and deployment is an administrative task, administrators are expected to install all the resources the auxiliary service needs and make them available on the NodeManager's classpath.
Does the resource manager have failover or would that be a single point of failure?
YARN currently does not support automated Resource Manager failover; restarts are manual. The Resource Manager checkpoints its state and recovers from it upon restart. Additionally, a Resource Manager failure has zero impact on executing applications.
Automated Resource Manager failover is on the roadmap for early 2014.
Is YARN part of the training modules Hortonworks currently offers?
Hortonworks is working on Hadoop 2 training modules. Courses for Hadoop 2 will be available in Fall 2013.
Can someone already running the old HDFS upgrade to the new HDFS while keeping their existing data?
Yes. Hadoop 2 supports an in-place upgrade from HDFS 1.0 to HDFS 2.0 that preserves existing data.
Are there any pain points when upgrading from Hadoop 1 to Hadoop 2?
The Apache Hadoop community has spent significant effort ensuring backwards compatibility and a frictionless upgrade experience. At this point, over 50,000 Hadoop nodes at Yahoo have been upgraded from Hadoop 1.0 to Hadoop 2, yielding a 50% improvement in cluster utilization and efficiency. It is anticipated that 99% of current applications will require no code changes. For a full review of the upgrade's impact on existing applications, please refer to our recent blog post.
What is the expected release date for YARN and Tez?
Hadoop 2.2.0 is expected to GA in Fall 2013. Tez GA is planned for Q1 2014. Hortonworks Data Platform Version 2.0 Beta packages the beta release of Hadoop, including YARN, along with the broad set of projects necessary for a Hadoop implementation, including simple provisioning with Apache Ambari.
Do you see YARN as a cluster operating system?
Yes. YARN is the modern cluster operating system. It enables organizations to pool physical resources across a datacenter and dynamically allocate that capacity across a multitude of workloads in a secure manner.
Once the Beta is installed, what else does a developer need to do to create a complete development environment to write a new application master?
YARN Clients and Application Masters are Java applications. Application developers typically use their favorite Java IDE along with a Hadoop 2 environment.
Given the repetitive nature of the process for requesting and allocating containers, has there been any thought of developing a workflow interface or GUI to automate the process?
Hadoop 2 does not include a workflow interface or GUI for automating YARN resource negotiations. As YARN adoption accelerates with the General Availability of Hadoop 2, common patterns of usage will surface. The community plans to follow these adoption patterns closely and provide extensions to further streamline the development process.
Is there an implementation of MPI on YARN that one can try out?
Yes. Both OpenMPI and MPICH2 are available on YARN.
Is HOYA fully qualified on YARN? That is, will HBase under HOYA work exactly the same as HBase did pre-YARN?
HOYA is currently in development. Upon General Availability, HOYA will add the ability to dynamically scale an HBase cluster to meet variable query loads. The core HBase master and region server functionality is not impacted by HOYA.
Architecturally how does YARN compare with Mesos?
Conceptually YARN and Mesos address similar requirements. They enable organizations to pool and share horizontal compute resources across a multitude of workloads. YARN was architected specifically as an evolution of Hadoop 1.x. YARN thus tightly integrates with HDFS, MapReduce and Hadoop security.
How does an application interact with persistent storage, in this case HDFS? Can applications break out of HDFS and use their own form of persistent storage as well?
YARN does not change the storage interaction model for applications. Applications can interact with any available storage resource, including HDFS. For high availability and node resiliency, a scalable cluster filesystem such as HDFS is recommended.
Can you speak to how realistic it would be to run a traditional RDBMS on YARN + HDFS 2?
Like HBase, an RDBMS can be bootstrapped on YARN to serve real-time transactional queries. Leveraging YARN, enterprises can dynamically scale their RDBMS clusters to meet variable throughput needs.
How does a “secured” cluster play into this (e.g. a Kerberized cluster)? Does that all stay intact and working for YARN?
Yes, Kerberized clusters are supported. Hadoop 2 and YARN provide strong authentication using Kerberos and delegation tokens.
Thanks for all the interest in Hadoop 2 – it should be an exciting Fall for next generation Hadoop development. You can download HDP 2.0 Beta here to get going.