Introduction

In this tutorial, we will learn to store data files in HDFS using the Ambari HDFS Files View. We will then write Pig Latin scripts to process, analyze, and manipulate a data file of baseball statistics. Let’s build our own Pig Latin scripts now.

Prerequisites

Outline

What is Pig?

Pig is a high-level scripting language that is used with Apache Hadoop. Pig excels at describing data analysis problems as data flows. Pig is complete, in that you can do all the required data manipulations in Apache Hadoop with Pig. In addition, through the User Defined Functions (UDF) facility in Pig, you can have Pig invoke code in many languages such as JRuby, Jython, and Java. Conversely, you can execute Pig scripts from other languages. The result is that you can use Pig as a component to build larger and more complex applications that tackle real business problems.

A good example of a Pig application is the ETL transaction model, which describes how a process extracts data from a source, transforms it according to a rule set, and then loads it into a datastore. Pig can ingest data from files, streams, or other sources using User Defined Functions (UDFs). Once it has the data, it can perform selections, iterations, and other transforms over it. Again, the UDF feature allows passing the data to more complex algorithms for the transform. Finally, Pig can store the results in the Hadoop Distributed File System (HDFS).
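
As a rough sketch of how the UDF facility gets used, a Java UDF is registered from its jar and then invoked by its fully qualified class name. Everything below (myudfs.jar, the myudfs.ToUpper class, and players.csv) is a hypothetical placeholder, not part of this tutorial:

-- minimal sketch; the jar, class, and input file are hypothetical placeholders
REGISTER myudfs.jar;
players = LOAD 'players.csv' USING PigStorage(',') AS (playerID:chararray, name:chararray);
shouted = FOREACH players GENERATE playerID, myudfs.ToUpper(name);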

Pig scripts are translated into a series of MapReduce jobs that are run on the Apache Hadoop cluster. As part of the translation, the Pig interpreter performs optimizations to speed up execution on Apache Hadoop. We are going to write a Pig script that will do our data analysis task.
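
If you want to see that translation for yourself, Pig's EXPLAIN operator prints the logical, physical, and MapReduce plans generated for a relation. For example, once a relation such as the max_runs relation we define later exists in your script:

-- prints the plans Pig generates to compute the relation
EXPLAIN max_runs;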

Our Data Processing Task

We are going to read in a baseball statistics file and compute the highest number of runs scored by a player in each year. The file contains all the statistics from 1871 through 2011 and holds over 90,000 rows. Once we have the highest runs, we will extend the script to translate a player ID field into the first and last names of the players.

Step 1: Download The Data

The data file we are using comes from the site www.seanlahman.com. You can download it as a zip archive of CSV files:

lahman591-csv.zip

Once you have the file, you will need to unzip it into a directory. We will be uploading just the Master.csv and Batting.csv files.

Step 2: Upload The Data Files

We start by selecting the HDFS Files view from the Off-canvas menu at the top. The HDFS Files view allows us to browse the Hortonworks Data Platform (HDP) file store, which is separate from the local file system. For the Hortonworks Sandbox, it is part of the file system of the Hortonworks Sandbox VM.

Navigate to /user/maria_dev and click on the Upload button to select the files we want to upload into the Hortonworks Sandbox environment.

Click on the Browse button to open a dialog box. Navigate to where you stored Batting.csv on your local disk, select it, and click Upload. Do the same for Master.csv. When you are done, you will see the two files in your directory.

Step 3: Create Pig Script

Now that we have our data files, we can start writing our Pig script. Click on the Pig button from the Off-canvas menu.

3.1 Explore the Pig User Interface

We see the Pig user interface in our browser window. On the left we can choose between our saved Pig scripts, UDFs, and the Pig jobs executed in the past. To the right of this menu bar we see our saved Pig scripts.

3.2 Create a New Script

To get started, click the “New Script” button at the top right and fill in a name for your script. If you leave the “Script HDFS Location” field empty, it will be filled in automatically.

After clicking on “Create”, a new page opens. At the center is the composition area where we will write our script. At the top right of the composition area are buttons to Execute, Explain, and perform a Syntax check on the current script.

On the left are buttons to save, copy, or delete the script, and at the very bottom we can add an argument.

3.3 Create a Script to Load Batting.csv Data

The first thing we need to do is load the data. We use the LOAD statement for this. The PigStorage function does the actual reading, and we pass it a comma as the field delimiter. Our code is:

batting = LOAD 'Batting.csv' USING PigStorage(',');
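
As an optional alternative sketch, you can declare field names and types at load time instead of referring to columns by position later. The names and types below are assumptions based on the Lahman Batting.csv layout, so verify them against the file's header row before relying on them:

-- hedged alternative; column names and types are assumed from the Lahman layout
-- note: the header row would load as nulls and would still need filtering out
batting = LOAD 'Batting.csv' USING PigStorage(',')
          AS (playerID:chararray, yearID:int, stint:int, teamID:chararray,
              lgID:chararray, G:int, G_batting:int, AB:int, R:int);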

3.4 Create a Script to Filter Out Data

To filter out the first row of the data, which contains the column headers, we add the line below. It works because the comparison casts the second field ($1) to an integer; for the header row that cast fails and yields null, and a null comparison is never true, so the header is dropped:

raw_runs = FILTER batting BY $1 > 0;

3.5 Implement a Script to Name the Fields

The next thing we want to do is name the fields. We will use a FOREACH statement to iterate through the batting data object. We can use the Pig Helper at the bottom of the composition area to provide us with a template: click on Pig Helper, select Data processing functions, and then click on the FOREACH template. We can then replace each placeholder element by hitting the tab key.

So the FOREACH statement will iterate through the batting data object, and GENERATE pulls out selected fields and assigns them names. The new data object we are creating is named runs. Our code will now be:

runs = FOREACH raw_runs GENERATE $0 AS playerID, $1 AS year, $8 AS runs;
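
To confirm the names took effect, Pig's DESCRIBE operator prints a relation's schema. This is an optional check, not part of the tutorial script:

-- optional check: prints the schema of runs (playerID, year, runs)
DESCRIBE runs;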

3.6 Use Script to Filter The Data (all runs for each year)

The next line of code is a GROUP statement that groups the elements in runs by the year field, so the grp_data object will be indexed by year. In the next statement, as we iterate through grp_data, we will go through it year by year. Type in the code:

grp_data = GROUP runs BY (year);
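
Conceptually, each tuple of grp_data now pairs one year with a bag of all the runs tuples for that year. The sketch below is illustrative only; the player IDs and values are made-up placeholders:

-- illustrative shape of one grp_data tuple (placeholder values, not real output):
--   (1950, {(someplayer01, 1950, 95), (otherplayer02, 1950, 88), ...})
-- optional check: prints the schema of the grouped relation
DESCRIBE grp_data;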

3.7 Compose a Script to Search for Max Runs Per Year

In the next FOREACH statement, we are going to find the maximum runs for each year. Within each group, group is Pig's built-in name for the grouping key (here, the year), and runs.runs is the bag of runs values that MAX operates on. The code for this is:

max_runs = FOREACH grp_data GENERATE group AS grp, MAX(runs.runs) AS max_runs;
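
To spot-check the result without processing the whole file, Pig's ILLUSTRATE operator runs a small sample of the data through every step leading up to a relation. Again, this is an optional check rather than part of the tutorial script:

-- optional check: traces a sampled subset through the steps up to max_runs
ILLUSTRATE max_runs;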

3.8 Build a Script to Join Year, PlayerID and Max Runs

Now that we have the maximum runs, we need to join this with the runs data object so we can pick up the player ID. The result will be a dataset with Year, PlayerID, and Max Runs. At the end we DUMP the data to the output.

-- join on (year, runs) so only the record(s) matching each year's maximum survive
join_max_run = JOIN max_runs BY ($0, max_runs), runs BY (year, runs);
-- after the join, $0 is the year, $1 the maximum runs, and $2 the playerID
join_data = FOREACH join_max_run GENERATE $0 AS year, $2 AS playerID, $1 AS runs;
DUMP join_data;

Let’s take a look at our script. The first thing to notice is that we never address single rows of data: on the left of the equals sign we name a new relation, and on the right we describe what to do with each row, assuming it applies to all the rows. We also have powerful operators like GROUP and JOIN to organize rows by a key and to build new data objects.

3.9 Save and Execute The Script

At this point we can save our script. Let’s execute our code by clicking on the execute button at the top right of the composition area, which opens a new page.

As the jobs run, we will get status boxes where we can see the logs, error messages, the output of our script, and our code at the bottom.

If you scroll down to “Logs…” and click on the link, you can see the log file of your jobs. We should always check the logs to verify that the script executed correctly.

Code Recap

So we have created a simple Pig script that reads in some comma-separated data.
Once we have that set of records in Pig, we pull out the playerID, year, and runs fields from each row.
We then group them by year with one statement, GROUP.
Then we find the maximum runs for each year.
This is finally joined back to the playerID, and we produce our final dataset.

As mentioned before, Pig operates on data flows: we consider each group of rows together and specify how we operate on them as a group. As the datasets get larger and/or gain fields, our Pig script will remain pretty much the same, because it concentrates on how we want to manipulate the data.

Full Pig Latin Script for Exercise

batting = LOAD 'Batting.csv' USING PigStorage(',');
raw_runs = FILTER batting BY $1 > 0;
runs = FOREACH raw_runs GENERATE $0 AS playerID, $1 AS year, $8 AS runs;
grp_data = GROUP runs BY (year);
max_runs = FOREACH grp_data GENERATE group AS grp, MAX(runs.runs) AS max_runs;
join_max_run = JOIN max_runs BY ($0, max_runs), runs BY (year, runs);
join_data = FOREACH join_max_run GENERATE $0 AS year, $2 AS playerID, $1 AS runs;
DUMP join_data;
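
The introduction mentioned extending the script to translate the player ID into first and last names using Master.csv. A hedged sketch of that extension follows; every column position in Master.csv, including playerID at $0 and the name fields at $13 and $14, is an assumption about the Lahman layout, so check the file's header row before running it:

-- hedged sketch: map playerID to names via Master.csv
-- the Master.csv column positions ($0, $13, $14) are assumptions; verify them
master = LOAD 'Master.csv' USING PigStorage(',');
names = FOREACH master GENERATE $0 AS playerID, $13 AS nameFirst, $14 AS nameLast;
named_runs = JOIN join_data BY playerID, names BY playerID;
final_data = FOREACH named_runs GENERATE $0 AS year, $4 AS nameFirst, $5 AS nameLast, $2 AS runs;
DUMP final_data;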

Tutorial Q&A and Reporting Issues

If you need help or have questions about this tutorial, please first check Hortonworks Community Connection (HCC) for existing answers using the Find Answers button. If you don’t find your answer there, you can post a new HCC question for this tutorial using the Ask Questions button.


Tutorial Name: How To Process Data with Apache Pig
HCC Tutorial Tag: tutorial-150 and HDP-2.4

If the tutorial has multiple labs, please indicate which lab your question corresponds to, and provide any feedback related to that lab.

All Hortonworks, partner, and community tutorials are posted in the Hortonworks GitHub repository and can be contributed to via the Hortonworks Tutorial Collaboration Guide. If you are certain there is an issue or bug with the tutorial, please create an issue in the repository and we will do our best to resolve it!