Apache Ambari is the only 100% open source management and provisioning tool for Apache Hadoop. Recent innovation in Apache Ambari has focused on evolving it into a pluggable management platform that can automate cluster provisioning, deploy third-party software, and provide custom operational and developer views to end users.
Join us Thursday, March 26 at 10am PT for an online technical workshop covering the three key Apache Ambari integration points: Stacks, Views, and Blueprints, with working examples of each.
Ambari Blueprints provide a repeatable model for consistent cluster provisioning and a way to automate it (for ad hoc cluster creation, whether on bare metal or in the cloud). They also enable portable, cohesive cluster definitions for sharing best practices on component layout and configuration.
We will demo sample Blueprints and show how they simplify installing multi-node clusters consistently, as well as automating configuration updates. To help you try out the Ambari REST APIs and to get you started, we have created an API Explorer view.
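As a sketch of the kind of Blueprint the demo covers, a minimal definition with two host groups might look like the following (the blueprint name, stack version, and component selections here are illustrative, not taken from the workshop materials):

```json
{
  "Blueprints": {
    "blueprint_name": "demo-blueprint",
    "stack_name": "HDP",
    "stack_version": "2.2"
  },
  "host_groups": [
    {
      "name": "master",
      "components": [
        { "name": "NAMENODE" },
        { "name": "RESOURCEMANAGER" },
        { "name": "ZOOKEEPER_SERVER" }
      ],
      "cardinality": "1"
    },
    {
      "name": "workers",
      "components": [
        { "name": "DATANODE" },
        { "name": "NODEMANAGER" }
      ],
      "cardinality": "3"
    }
  ]
}
```

A blueprint like this is registered by POSTing it to the Ambari REST API (`/api/v1/blueprints/demo-blueprint`); a separate cluster-creation template then maps concrete hosts to the `master` and `workers` host groups, which is what makes the same layout repeatable across environments.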
Stacks wrap services of all shapes and sizes with a consistent definition and lifecycle-control layer. With this wrapper in place, Ambari can rationalize operations over a broad set of services. To the Hadoop operator, this means that regardless of differences across services, each service can be managed and monitored with a consistent approach (e.g., install/start/stop/configure/status). This also provides a natural extension point for operators and the community to create their own custom stack definitions to “plug in” new services that can co-exist with Hadoop.
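Concretely, a custom service is described to Ambari by a `metainfo.xml` descriptor in the stack definition. A minimal sketch (the service name, component name, and script path below are illustrative) might look like:

```xml
<metainfo>
  <schemaVersion>2.0</schemaVersion>
  <services>
    <service>
      <name>MYSERVICE</name>
      <displayName>My Service</displayName>
      <comment>Example third-party service plugged into the stack</comment>
      <version>1.0.0</version>
      <components>
        <component>
          <name>MYSERVICE_MASTER</name>
          <category>MASTER</category>
          <cardinality>1</cardinality>
          <commandScript>
            <script>scripts/master.py</script>
            <scriptType>PYTHON</scriptType>
            <timeout>600</timeout>
          </commandScript>
        </component>
      </components>
    </service>
  </services>
</metainfo>
```

The referenced command script is what gives the service the consistent install/start/stop/configure/status lifecycle that Ambari expects.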
We will show how any Unix service can easily be wrapped in an Ambari service to allow easy deployment and monitoring via Ambari.
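In Ambari itself, that wrapper is a Python command script run by the Ambari Agent. As a self-contained illustration of the idea (this is a sketch of the lifecycle contract, not the actual Ambari API), wrapping an arbitrary Unix service behind consistent lifecycle methods can look like:

```python
import subprocess

class UnixServiceWrapper:
    """Illustrative sketch: expose any Unix service through the uniform
    install/start/stop/status lifecycle that an Ambari service wrapper
    provides. Real Ambari command scripts use its management library
    instead of calling an init script directly."""

    def __init__(self, name, service_cmd):
        self.name = name
        self.service_cmd = service_cmd  # e.g. an init script for the service

    def _run(self, action):
        # Delegate each lifecycle action to the underlying service command
        # and surface its exit code to the management layer.
        result = subprocess.run([self.service_cmd, action],
                                capture_output=True, text=True)
        return result.returncode

    def install(self):
        return self._run("install")

    def start(self):
        return self._run("start")

    def stop(self):
        return self._run("stop")

    def status(self):
        # By Unix convention, exit code 0 from "status" means running.
        return self._run("status") == 0
```

Because every service answers the same four calls, the operator's workflow (and Ambari's monitoring) stays identical no matter what is behind the wrapper.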
Real-world examples highlighting the utility of Ambari services for:
Views customize interaction and experience for operators and users. They enable the community and operators to develop new ways to visualize operations, troubleshoot issues, and interact with Hadoop, and they provide a framework for offering those experiences to specific sets of users. Through the pluggable UI framework, operators can control which users get certain capabilities and customize how those users interact with Hadoop.
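A view is packaged as a plugin described by a `view.xml` descriptor. As a rough sketch (the names and version below are illustrative), a simple view descriptor might look like:

```xml
<view>
  <name>API_EXPLORER</name>
  <label>API Explorer</label>
  <version>0.1.0</version>
  <instance>
    <name>INSTANCE_1</name>
  </instance>
</view>
```

Packaging views this way is what lets operators deploy, instantiate, and restrict them per set of users rather than shipping a one-size-fits-all UI.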
We will demonstrate: