Get Started with Cascading on Hortonworks Data Platform 2.1

Implementing WordCount with Cascading on HDP 2.1 Sandbox

This tutorial will enable you, as a Java developer, to do the following:

  • Get acquainted with Hortonworks Data Platform 2.1 on the Hortonworks Sandbox, a single-node cluster
  • Get acquainted with the Java Cascading SDK
  • Examine the WordCount program in Java
  • Build the single unit of execution, the jar file, using the Gradle build tool
  • Deploy the jar file onto the Sandbox
  • Examine the resulting MapReduce jobs
  • View the output stored as an HDFS file

To start this tutorial, you must do two things: First, download the Sandbox and follow the installation instructions. Second, download the Cascading SDK.

The example WordCount is derived from part 2 of the Cascading Impatient Series.
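Before building anything, it helps to see the shape of the program you will run. Below is a condensed sketch of what part 2's Main class does, assuming the Cascading 2.x Hadoop API that the Impatient series targets; treat it as an outline and consult the sources you clone below for the authoritative version.

    import java.util.Properties;

    import cascading.flow.Flow;
    import cascading.flow.FlowDef;
    import cascading.flow.hadoop.HadoopFlowConnector;
    import cascading.operation.aggregator.Count;
    import cascading.operation.regex.RegexSplitGenerator;
    import cascading.pipe.Each;
    import cascading.pipe.Every;
    import cascading.pipe.GroupBy;
    import cascading.pipe.Pipe;
    import cascading.property.AppProps;
    import cascading.scheme.hadoop.TextDelimited;
    import cascading.tap.Tap;
    import cascading.tap.hadoop.Hfs;
    import cascading.tuple.Fields;

    public class Main {
      public static void main(String[] args) {
        String docPath = args[0]; // input, e.g. data/rain.txt
        String wcPath  = args[1]; // output, e.g. output/wc

        Properties properties = new Properties();
        AppProps.setApplicationJarClass(properties, Main.class);
        HadoopFlowConnector flowConnector = new HadoopFlowConnector(properties);

        // source and sink taps over HDFS, tab-delimited text with a header line
        Tap docTap = new Hfs(new TextDelimited(true, "\t"), docPath);
        Tap wcTap  = new Hfs(new TextDelimited(true, "\t"), wcPath);

        // split each "text" line into a stream of "token" tuples
        Fields token = new Fields("token");
        Fields text  = new Fields("text");
        RegexSplitGenerator splitter =
            new RegexSplitGenerator(token, "[ \\[\\]\\(\\),.]");
        Pipe docPipe = new Each("token", text, splitter, Fields.RESULTS);

        // group by token and count each group -- the reduce side of the job
        Pipe wcPipe = new Pipe("wc", docPipe);
        wcPipe = new GroupBy(wcPipe, token);
        wcPipe = new Every(wcPipe, Fields.ALL, new Count(), Fields.ALL);

        // wire the taps and pipes into a flow, then run it
        FlowDef flowDef = FlowDef.flowDef()
            .setName("wc")
            .addSource(docPipe, docTap)
            .addTailSink(wcPipe, wcTap);

        Flow wcFlow = flowConnector.connect(flowDef);
        wcFlow.complete();
      }
    }

Note that the whole program is a declarative pipe assembly: when wcFlow.complete() runs, Cascading translates the taps and pipes into MapReduce jobs for you.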

Downloading and installing the HDP 2.1 Sandbox

  1. Download and install HDP 2.1 Sandbox.
  2. Familiarize yourself with navigating the Linux virtual host through a shell window.
  3. Log in to your Linux Sandbox and create a user named cascade (for example, useradd cascade).

Cloning and Building the Cascading Example

      1. Download and install Gradle 1.1 on the Linux Sandbox.
      2. On the Sandbox, cd /home/cascade
      3. git clone git://github.com/Cascading/Impatient.git
      4. cd /home/cascade/Impatient/part2
      5. gradle clean jar (this builds the impatient.jar file, which is your WordCount unit of execution)

Deploying and running the Cascading Java application

Now you’re ready to deploy and run your impatient.jar file on the cluster.

      1. su cascade
      2. cd /home/cascade/Impatient/part2
      3. hadoop fs -mkdir -p /user/cascade/data/
        hadoop fs -copyFromLocal data/rain.txt /user/cascade/data/
      4. hadoop jar ./build/libs/impatient.jar data/rain.txt output/wc

This command will produce the following output:

[Screenshot: console output from the WordCount run]
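The two arguments after the jar name are handed straight to the program: data/rain.txt names the source tap and output/wc the sink tap, both resolved relative to /user/cascade on HDFS (see the Main sketch above).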

Tracking the MapReduce Jobs on Sandbox

Once the job is submitted (or running), you can track its progress from the Sandbox Hue’s Job Browser. By default, the browser displays jobs run by the user hue, so filter by the user cascade instead.
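Cascading compiles the flow into ordinary MapReduce jobs behind the scenes; for a simple flow like this one, the GroupBy/Every pair should surface as a single map/reduce job in the list.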

[Screenshot: Hue Job Browser filtered by the user cascade]

Double-click any link to see the job details.

[Screenshot: MapReduce job details in Hue]

Viewing the WordCount Output

When the job is finished, the word counts are written to an HDFS file named part-00000 under the output/wc directory. Use the Sandbox Hue’s File Browser to navigate to that directory and view the file’s contents.

[Screenshot: contents of part-00000 in Hue’s File Browser]
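You can also print the results directly from the shell with hadoop fs -cat output/wc/part-00000 (the path is relative to /user/cascade on HDFS).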

Above and Beyond

For the adventurous, you can try the entire Impatient series; you have already cloned its sources from GitHub. Beyond the Impatient series, there are other tutorials and case examples to play with.

Have Fun!

