A Framework for YARN-based, Long-running Applications In Hadoop
Apache™ Hadoop continues to attract new engines to run within the data platform, as organizations want to efficiently store their data in a single repository and interact with it simultaneously in different ways. They want SQL, streaming, machine learning, along with traditional batch processing…all in the same cluster. Many of these applications must be “always-on” or “long-running” services that are ready to process data whenever it comes in.
Slider is a framework for deployment and management of these long-running data access applications in Hadoop.
Slider leverages YARN’s resource management capabilities to deploy those applications, to manage their lifecycles, and to scale them up or down, even while the application is running. Slider “slides” existing long-running services (like Apache HBase, Apache Accumulo and Apache Storm) onto YARN, so that they have enough resources to handle changing amounts of data, without tying up more processing resources than they need.
Apache Slider allows users to create and run different versions of heterogeneous long-running applications in Hadoop with YARN. Each application instance can be configured differently, with its operational life cycle managed individually. On demand, Slider can expand or shrink application instances while they are running. In the case of container failure, Slider transparently leverages YARN facilities to manage application recovery. All of this is available on Linux or Windows platforms.
These Apache Slider features provide three key benefits to enterprises running Hadoop:
- Turnkey YARN enablement: enables long-running applications to take advantage of YARN’s benefits without code changes
- Hadoop integration: applications running with Apache Slider cooperate with the Enterprise Hadoop ecosystem in an integrated way, leveraging Hadoop’s data and processing resources as well as its security, governance, and operations capabilities
- Lifecycle management: automatically makes applications manageable through Apache Ambari without any additional work
Apache Slider views any application as a set of components, where each component is a daemon or executable with its own configuration, scripts, and data files. Components may have one or more instances. Slider manages applications by managing their component instances.
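The component model described above is expressed in a JSON resource descriptor that the Slider client submits to YARN. The sketch below is a minimal, hedged example in the style of Slider’s `resources.json`; the component names (`HBASE_MASTER`, `HBASE_REGIONSERVER`) assume an HBase application package, and the exact keys and values should be checked against the Slider documentation for your version:

```json
{
  "schema": "http://example.org/specification/v2.0.0",
  "metadata": {},
  "global": {},
  "components": {
    "slider-appmaster": {},
    "HBASE_MASTER": {
      "yarn.role.priority": "1",
      "yarn.component.instances": "1",
      "yarn.memory": "1024"
    },
    "HBASE_REGIONSERVER": {
      "yarn.role.priority": "2",
      "yarn.component.instances": "3",
      "yarn.memory": "1024",
      "yarn.vcores": "1"
    }
  }
}
```

Each entry under `components` maps one component to its desired instance count and per-container resource requests, which is exactly the granularity at which Slider manages the application.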
To manage application component instances, Slider launches a YARN application master for each application instance. After launching the application master, Slider can allocate or de-allocate resources and stop or start an application instance. This can happen at the application admin’s request through the Slider client, or in response to YARN’s resource-scheduling preemption.
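The admin-driven operations above map onto the Slider client’s command line. The following is a sketch of a typical session, assuming an application instance named `hbase1` built from an HBase package and the two JSON descriptors (`appConfig.json` and `resources.json`); command options can vary between Slider releases, so treat this as illustrative rather than exact:

```shell
# Create and launch an application instance on YARN.
slider create hbase1 --template appConfig.json --resources resources.json

# Grow or shrink one component while the application keeps running;
# "flex" changes the desired instance count and YARN does the rest.
slider flex hbase1 --component HBASE_REGIONSERVER 5

# Stop the instance and later restart it; instance state is persisted.
slider stop hbase1
slider start hbase1

# Query the running instance's status.
slider status hbase1
```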
At Hortonworks, we are helping to lead further Slider development within the community and completely in the open. We are working on extending Slider to both support new applications and to reinforce its support for those already enabled.
Areas of ongoing work include:

- Topologies: support for complex application topologies
- Dynamic scaling: dynamic scaling of application and component instances
- Application packaging tools: support for Docker as a packaging mechanism
- Application lifecycle management: support for application upgrades, backup and recovery, and relocation