Version 1.0 for HDP 2.5, January 25, 2017
Hortonworks Data Platform (HDP) can be used to refine and analyze data from heating, ventilation, and air conditioning (HVAC) systems to maintain optimal office building temperatures and minimize expenses.
The demo video Enable Predictive Analytics with Hadoop illustrates what you’ll be building in this tutorial.
A sensor is a device that measures a physical quantity and transforms it into a digital signal. Sensors are always on, capturing data at a low cost, and powering the “Internet of Things.”
In this tutorial, we will focus on sensor data from building operations. Specifically, we will refine and analyze data from the heating, ventilation, and air conditioning (HVAC) systems of 20 large buildings in different countries around the world, in two labs:
Lab 1 – Upload and Refine Data with Hive
Lab 2 – Visualize HVAC Data Via Zeppelin
You will learn to refine HVAC data into a useful format. You will gain practical experience creating Hive tables that run on ORC files for fast and efficient data processing, and writing Hive scripts that enrich the data to reveal whether a building's temperature is in a cold, normal, or hot state. Additionally, you will learn to write Hive queries that determine which particular buildings are associated with these temperature states.
Read through the Introduction
Download and Extract the Sensor Data Files
1. Download the sample sensor data contained in a compressed (.zip) folder here:
2. Save the SensorFiles.zip file to your computer, then extract the files. You should see a SensorFiles folder that contains the following files:
HVAC.csv – contains the targeted building temperatures, along with the actual (measured) building temperatures. The building temperature data was obtained using Apache Flume. Flume can be used as a log aggregator, collecting log data from many diverse sources and moving it to a centralized data store. In this case, Flume was used to capture the sensor log data, which we can now load into the Hadoop Distributed File System (HDFS). For more details on Flume, refer to Tutorial 13: Refining and Visualizing Sentiment Data
building.csv – contains the “building” database table. Apache Sqoop can be used to transfer this type of data from a structured database into HDFS.
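Before uploading, it can help to preview how rows like these are parsed. The sketch below reads a tiny inline sample with Python's csv module; the column names and values shown are assumptions about the file layout, so check them against your extracted HVAC.csv.

```python
import csv
import io

# Hypothetical sample rows; the column names (Date, Time, TargetTemp,
# ActualTemp, System, SystemAge, BuildingID) are assumptions about the layout.
sample = """Date,Time,TargetTemp,ActualTemp,System,SystemAge,BuildingID
6/1/13,0:00:01,66,58,13,20,4
6/2/13,1:00:01,69,68,3,20,17
"""

def read_hvac(text):
    """Parse HVAC rows and compute target-minus-actual per building."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        rows.append((row["BuildingID"],
                     int(row["TargetTemp"]) - int(row["ActualTemp"])))
    return rows

print(read_hvac(sample))  # each pair: (building id, target - actual)
```

This same target-minus-actual difference is what the Hive refinement script computes later in the lab.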
1. Log in to Ambari at
2. Use the dropdown menu in the top right corner to access the Hive view:
Navigate to the Upload Table tab and upload the two CSV files contained within the SensorFiles folder.
When uploading the two tables, we’ll need to change a few settings. For HVAC.csv:
1. Change the table name to hvac_raw.
2. Change the name of the Date column (DATE is a reserved keyword in Hive).
3. TargetTemp's type is STRING; change it to a numeric type such as INT so we can compute temperature differences later.
4. ActualTemp's type is STRING; change it to the same numeric type.
As the table uploads, you’ll notice a progress window appear.
1. For building.csv, change the table name to building_raw.
Now that both tables are loaded, we want to get better performance in Hive, so we’re going to create new tables that utilize the highly efficient ORC file format. This will allow for faster queries when our datasets are much larger.
1. Create a new table hvac that is stored as an ORC file:
CREATE TABLE hvac STORED AS ORC AS SELECT * FROM hvac_raw;
2. Use a similar Hive query to create the buildings table:
CREATE TABLE buildings STORED AS ORC AS SELECT * FROM building_raw;
We will use two Hive scripts to refine the sensor data. The knowledge we seek to acquire from this data: whether the actual temperature was more than five degrees different from the target temperature.
1. Create a new worksheet in the Hive view and paste the following Hive query into the editor:
CREATE TABLE hvac_temperatures AS
SELECT *,
  targettemp - actualtemp AS temp_diff,
  IF((targettemp - actualtemp) > 5, 'COLD', IF((targettemp - actualtemp) < -5, 'HOT', 'NORMAL')) AS temprange,
  IF((targettemp - actualtemp) > 5, '1', IF((targettemp - actualtemp) < -5, '1', 0)) AS extremetemp
FROM hvac;
What does this query do? It creates a new table hvac_temperatures, copies the data from the hvac table, and adds the computed temperature difference plus two new attributes.
2. Use Execute to create the new table.
What are the two new attributes?
The data in the temprange column indicates whether the actual temperature was COLD (more than five degrees below the target), NORMAL (within five degrees of the target), or HOT (more than five degrees above the target).
If the temperature is outside of the normal range, extremetemp is assigned a value of 1; otherwise its value is 0.
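The nested IF() logic above is easy to misread, so here is a minimal Python sketch of the same per-row classification; the function name classify is just for illustration and is not part of the tutorial's Hive scripts.

```python
# Mirror of the Hive IF() expressions: classify one reading by the
# difference between the target and actual temperatures.
def classify(targettemp: int, actualtemp: int):
    temp_diff = targettemp - actualtemp
    if temp_diff > 5:
        temprange = "COLD"    # actual temperature well below the target
    elif temp_diff < -5:
        temprange = "HOT"     # actual temperature well above the target
    else:
        temprange = "NORMAL"
    extremetemp = 1 if temprange != "NORMAL" else 0
    return temp_diff, temprange, extremetemp

print(classify(70, 58))  # → (12, 'COLD', 1)
print(classify(65, 67))  # → (-2, 'NORMAL', 0)
```

Note that both COLD and HOT readings set extremetemp to 1, which is what makes the COUNT-based charts in Lab 2 meaningful.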
3. Let’s combine the hvac_temperatures and buildings data sets. Create a new worksheet in the Hive view and use the following query to create a new table hvac_building that contains data from the hvac_temperatures table and the buildings table:
create table if not exists hvac_building as select h.*, b.country, b.hvacproduct, b.buildingage, b.buildingmgr from buildings b join hvac_temperatures h on b.buildingid = h.buildingid;
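If you want to see what this join produces without a running cluster, the sketch below replays it on an in-memory SQLite database; the sample rows (building 1 in Finland with an AC1000 unit) are invented for illustration, but the join semantics match the Hive query.

```python
import sqlite3

# In-memory SQLite stands in for Hive here; the JOIN works the same way.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE buildings (buildingid INT, country TEXT, hvacproduct TEXT,
                        buildingage INT, buildingmgr TEXT);
CREATE TABLE hvac_temperatures (buildingid INT, temprange TEXT, extremetemp INT);
INSERT INTO buildings VALUES (1, 'Finland', 'AC1000', 12, 'M1');
INSERT INTO hvac_temperatures VALUES (1, 'COLD', 1), (1, 'NORMAL', 0);
""")

# Same shape as the tutorial's query: sensor rows enriched with building info.
rows = con.execute("""
    SELECT h.*, b.country, b.hvacproduct, b.buildingage, b.buildingmgr
    FROM buildings b JOIN hvac_temperatures h ON b.buildingid = h.buildingid
""").fetchall()
for r in rows:
    print(r)
```

Each sensor reading comes back tagged with its building's country, HVAC product, age, and manager, which is exactly the enrichment hvac_building provides.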
Which tables does hvac_building’s data come from?
Use Execute to run the query:
After you’ve successfully executed the query, use the database explorer to load a sample of the data from the new hvac_building table.
The aggregated data shows us the buildings that are associated with a certain temprange.
We’ve successfully refined the data into a useful format. We learned to create Hive tables that run on ORC files for fast and efficient data processing, and to write Hive scripts that enrich our data to reveal whether the temperature is in a cold, normal, or hot state. Additionally, we used the data to gain insight into which particular buildings are associated with these temperature states. Our next step is to use different reporting tools to analyze the results.
You will learn to gain more insights on your data by visualizing certain key attributes using Apache Zeppelin. You will learn to visualize a graph that shows the countries that have the most extreme temperatures and the number of NORMAL events compared to HOT and COLD events. You will also learn to see which HVAC units result in the most extreme temperature readings.
Apache Zeppelin is a data reporting tool on Hadoop. Zeppelin integrates with Hadoop by using tools called interpreters. It has direct connections to Apache Spark and Hive in your cluster and allows you to create visualizations and analyze your data on the fly.
Apache Zeppelin gives you the power to connect right to your Hadoop cluster to quickly obtain results from the data inside of Hadoop without having to export data to any other sources.
It’s also important to note that Zeppelin contains several interpreters that can be utilized to obtain data in a variety of ways.
One of the default interpreters included with Zeppelin is for Apache Spark. With the popularity of Apache Spark rising, you can simply write Spark scripts to execute directly on Apache Zeppelin to obtain results from your data in a matter of seconds.
1. Access Zeppelin at localhost:9995
From here we’re going to need to create a new Zeppelin notebook. Notebooks are how Zeppelin differentiates one report from another.
2. Hover over Notebook and use the dropdown menu to create a new note.
3. Name the note HVAC Analysis Report, then click Create Note.
We will use the Hive interpreter to run Hive queries and visualize the results in Zeppelin.
4. To access the Hive interpreter for this note, we must insert %jdbc(hive) at the top of the note. Everything afterwards will be interpreted as a Hive query.
5. Type the following query into the note, then run it by clicking the Run arrow or by using the shortcut Shift+Enter.
%jdbc(hive) select country, extremetemp, temprange from hvac_building
Now that the query has run, let’s visualize the data with a chart.
6. Select the bar graph chart button located just under the query.
7. Click settings to open up more advanced settings for creating the chart.
8. Here you will experiment with different values and columns to customize the data illustrated in your visualization. For this chart:
- Drag temprange into the Groups box.
- Place extremetemp in the Values box and change it to COUNT.
- Make sure country is the only field under Keys.
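The chart settings above amount to a grouped count: for every (country, temprange) pair, count the matching rows. The sketch below reproduces that aggregation in plain Python over a handful of invented sample pairs, so the countries and counts here are illustrative only.

```python
from collections import Counter

# Hypothetical (country, temprange) pairs as returned by the Zeppelin query;
# Zeppelin's Keys/Groups/COUNT settings perform the equivalent aggregation.
rows = [
    ("Finland", "COLD"), ("Finland", "COLD"), ("Finland", "NORMAL"),
    ("France", "HOT"), ("France", "NORMAL"),
]

counts = Counter(rows)  # maps (country, temprange) -> number of events
for (country, temprange), n in sorted(counts.items()):
    print(country, temprange, n)
```

Grouping by temprange within each country key is what lets the bar chart stack COLD, NORMAL, and HOT event counts side by side per country.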
You’ve just customized your chart’s settings to portray each country’s temperature events, from cold through normal to hot, using Apache Zeppelin.
Is it possible to figure out which buildings might need HVAC upgrades, and which do not? Let’s determine that answer in the steps ahead…
9. Paste the following query into the blank Zeppelin note following the chart we made previously.
%jdbc(hive) select hvacproduct, extremetemp from hvac_building
10. Use Shift+Enter to run the note.
11. Arrange the fields according to the following image so we can recreate the chart below.
- Make sure hvacproduct is in the Keys box.
- Make sure extremetemp is in the Values box and that it is set to COUNT.
The resulting chart shows which HVAC units are associated with the most extremetemp readings. Thus we can make a more informed decision when purchasing new HVAC systems.
We’ve successfully gained more insights on our data by visualizing certain key attributes using Apache Zeppelin. We learned to visualize a graph that shows the countries with the most extreme temperatures and the number of NORMAL events compared to HOT and COLD events. We also learned to see which HVAC units result in the most extreme temperature readings.