September 11, 2014

Deploy a Hortonworks Data Platform With the StackIQ Command Line Interface

StackIQ, a Hortonworks technology partner, offers a comprehensive software suite that automates the deployment, provisioning, and management of Big Infrastructure. In his second guest blog, Anoop Rajendra (@anoop_r), a Senior Software Developer at StackIQ, gives instructions for using the StackIQ Command Line Interface (CLI) to deploy a Hortonworks Data Platform (HDP) cluster.

In a previous blog post, we discussed how StackIQ’s Cluster Manager automates the installation and configuration of an Apache Ambari server. That post then went on to illustrate the installation of a Hortonworks Data Platform (HDP) cluster through the Ambari web interface. This is a follow-up post that walks you through deploying an HDP cluster using StackIQ’s command line interface (CLI).

The Apache Ambari web interface is very user-friendly, but to help with automation of the cluster installation, a CLI is needed. To that end, the Ambari server provides a REST API to help deploy and configure HDP services and components. This REST API uses HTTP calls – GET, POST, PUT, and DELETE – to inspect, create, configure, and remove components and services in the Ambari server.
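To make the REST mechanics concrete, here is a minimal sketch of how such a call is put together. The helper and the server address are hypothetical (this post's `rocks` commands do this work for you); it only assembles the method, URL, and JSON body — a real client would send them over HTTP with basic auth and, for modifying calls, Ambari's `X-Requested-By` header.

```python
import json

# Hypothetical Ambari server address -- substitute your own frontend.
AMBARI = "http://ambari.example.com:8080/api/v1"

def ambari_call(resource, command="GET", body=None):
    """Assemble the pieces of an Ambari REST call: HTTP method, URL, and
    JSON payload. A real client would send these with urllib/requests,
    HTTP basic auth, and the "X-Requested-By" header Ambari requires on
    modifying (POST/PUT/DELETE) calls."""
    payload = json.dumps(body) if body is not None else None
    return command, "%s/%s" % (AMBARI, resource), payload

# The two examples used later in this post, expressed as raw REST calls:
print(ambari_call("clusters"))
print(ambari_call("clusters/dev", "POST",
                  {"Clusters": {"version": "HDP-2.1"}}))
```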

StackIQ’s Cluster Manager provides a CLI, with the moniker “rocks”, that allows system administrators to manage large cluster installations with ease. The CLI has a complete view of the entire cluster, the hardware resources, and the software configuration.

We’ve extended the StackIQ CLI to talk to the Ambari server using the REST API. This allows us to use the familiar “rocks” commands to add HDP clusters, hosts, services, components, and configure each of the services to fit the needs of the users.

There are two advantages to using the StackIQ CLI to deploy HDP:

  1. As mentioned earlier, a CLI allows for easy scripting—hence easy automation.
  2. The StackIQ CLI is completely plugged into the StackIQ database, and can transfer knowledge about the cluster to the Ambari server.
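The scripting advantage is easy to see in a sketch like the following. The host names are hypothetical, and the loop is shown in dry-run form — it only prints the commands it would run; drop the `echo` to execute them for real.

```shell
# Dry-run sketch of scripted host registration (hypothetical host names).
# Because every step is a plain shell command, bootstrapping a rack of
# backend nodes is a one-line loop; remove "echo" to actually run it.
for host in compute-0-0 compute-0-1 compute-0-2; do
  echo rocks add ambari host "$host"
done
```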

The StackIQ commands that talk to the Ambari server are listed below.

  • run ambari api [resource=string] [command=GET|PUT|POST|DELETE] [body=JSON]

This command makes a generic API call to the Ambari server. This is the basis to all the Ambari-specific commands listed below.

For example, if we need to list all the clusters in an Ambari server, run:
# rocks run ambari api resource=clusters command=GET

To create an HDP cluster called “dev”, run:
# rocks run ambari api resource=clusters/dev command=POST body='{"Clusters": {"version": "HDP-2.1"}}'

  • add ambari host {host}

Add hosts to the Ambari server. This bootstraps the host, runs the ambari-agent on the host, and registers the host with the Ambari server.

  • add ambari cluster {cluster} [version=string]

Add an HDP cluster to the Ambari server, with the given name and version number.

  • add ambari cluster host {hosts} [cluster=string]

Add hosts to the HDP cluster. Each host must be added to the Ambari server using the “rocks add ambari host” command before this command is run.

  • add ambari config [body=JSON] [cluster=string] [config=string]

Send service specific configuration to the Ambari server as a JSON string.

  • add ambari service [cluster=string] [service=string]

Add an HDP service (e.g., HDFS, MapReduce, or YARN) to the HDP cluster.

  • add ambari host component {host} [cluster=string] [component=string]

Add an HDP component (e.g., the HDFS Namenode or the MapReduce History Server) to a host in the HDP cluster.

  • report ambari config [cluster=string]

Print the desired HDP cluster configuration.

  • start ambari cluster {cluster}

Start the HDP cluster through Ambari.

  • start ambari host component {hosts} [cluster=string] [component=string]

Start a single host component in the HDP cluster.

  • start ambari service [cluster=string] [service=string]

Start a single service in the HDP cluster.

  • start ambari service component [cluster=string] [component=string] [service=string]

Start all instances of a service component in an HDP cluster.

  • stop ambari cluster {cluster}

Stop all services in an HDP cluster.

  • stop ambari host component {hosts} [cluster=string] [component=string]

Stop a single instance of a service component running on the specified host.

  • stop ambari service [cluster=string] [service=string]

Stop a single service in the HDP cluster.

  • stop ambari service component [cluster=string] [component=string] [service=string]

Stop all instances of the specified component in an HDP cluster.

  • sync ambari config [cluster=string]

Synchronize the configuration of an Ambari cluster to the configuration specified in the StackIQ database. This command gets the output of the “rocks report ambari config” command, and applies it to the Ambari server.

  • sync ambari hosts {cluster}

Synchronize the hosts and host components to the HDP cluster. The database maintains a mapping of hosts to service components. For example, the StackIQ database contains data mapping the “HDFS datanode” and “YARN Resource Manager” components to “compute-0-1”.

This command adds the host to the HDP cluster, and creates those components on the host.

  • sync ambari services {cluster}

Synchronize all the services to the HDP cluster. The list of services (e.g., HDFS, MapReduce, YARN, and Nagios) that the admin wants to deploy is gleaned from the StackIQ database. This command gets the list of services from the database and creates them in the HDP cluster.

  • create ambari cluster {cluster}

This is a meta-command that runs some or all of the commands listed above.
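As a rough sketch (our reading of the commands above, not StackIQ’s actual implementation), the manual workflow that “create ambari cluster” rolls into one step looks something like the sequence below. All names — the cluster “dev”, host “compute-0-0”, and component “NAMENODE” — are hypothetical, and the commands are echoed rather than executed.

```shell
# Dry-run sketch of the per-command workflow that "create ambari cluster"
# automates. All names are hypothetical; remove the "echo" in the wrapper
# to execute the rocks commands for real.
run() { echo "rocks $*"; }

run add ambari cluster dev version=HDP-2.1
run add ambari host compute-0-0
run add ambari cluster host compute-0-0 cluster=dev
run add ambari service cluster=dev service=HDFS
run add ambari host component compute-0-0 cluster=dev component=NAMENODE
run sync ambari config cluster=dev
run start ambari cluster dev
```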

Deploying an HDP Cluster on a Newly Installed StackIQ Cluster

The “create ambari cluster” command can be used for the initial deployment of HDP on a newly installed StackIQ cluster.

Simply map the necessary HDP components to the specific backend nodes that you want the components to run on. This mapping can be done using the “rocks set host attr” command.

For example, if we want to deploy a namenode on compute-0-2, we can run:
# rocks set host attr compute-0-2 attr=hdp.hdfs.namenode value=true

We repeat the above command for all the components that we want to deploy. At a minimum, HDFS, ZooKeeper, Ganglia, and Nagios service components must be mapped to compute nodes. Once the host-to-service component mapping is satisfactory, we can run the “rocks create ambari cluster” command to deploy HDP.
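The mapping step can itself be scripted, as in the sketch below. Only the `hdp.hdfs.namenode` attribute appears in this post; the other attribute names are guesses at the same naming scheme and should be verified against your StackIQ installation before use.

```shell
# Dry-run sketch: map service components to backend nodes in bulk.
# Only hdp.hdfs.namenode is taken from this post; the other attribute
# names are assumed to follow the same scheme -- verify before running,
# and remove "echo" to apply the mappings for real.
while read -r host attr; do
  echo rocks set host attr "$host" attr="$attr" value=true
done <<'EOF'
compute-0-2 hdp.hdfs.namenode
compute-0-0 hdp.hdfs.datanode
compute-0-1 hdp.zookeeper.zookeeper_server
EOF
```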

This command gets a list of all hosts in the StackIQ cluster, checks to see which hosts are to be used in the HDP cluster, adds them, creates the required services, maps the service components to the hosts in the cluster, installs them, configures them, and then starts these services.

The command also supports NameNode HA. If the namenode service component is mapped to two hosts, and the secondary namenode service component is not specified, the command installs HDFS in High-Availability NameNode mode. This allows the standby namenode to take over namenode operations, without losing any data, if the primary namenode fails.
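The HA decision described above can be sketched as a few lines of logic. This is our reading of the behavior, not StackIQ’s actual code, and the host names are hypothetical.

```python
def hdfs_mode(namenode_hosts, secondary_namenode_hosts):
    """Sketch of the HA decision described above (our reading of the post,
    not StackIQ's implementation): two namenode hosts and no explicit
    secondary namenode means HDFS is installed in High-Availability mode."""
    if len(namenode_hosts) == 2 and not secondary_namenode_hosts:
        return "ha"
    return "standard"

print(hdfs_mode(["compute-0-1", "compute-0-2"], []))  # -> ha
print(hdfs_mode(["compute-0-1"], ["compute-0-2"]))    # -> standard
```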

In the next blog post, we’ll tell you more about the host-to-service mapping, and using spreadsheets to manage it all. In the meantime, give this a try and let us know what you think. Stay tuned for more.

The StackIQ Team
@StackIQ


Comments

  • Just FYI – Apache Ambari has a CLI (and a REST client). These features above (among a few others) can be done using the CLI shell as well, and the whole process can be automated.
