
Storm in Trucking IoT on HDF

Running the Demo


Let’s walk through the demo and get an understanding of the data pipeline before we dive deeper into Storm internals.


Environment Setup

SSH into your Hortonworks DataFlow (HDF) environment and clone the demo project:

git clone
cd trucking-iot-demo-storm-on-scala

Generate Sensor Data

The demo application leverages a robust data simulator, which generates two types of events and publishes them to Kafka topics as pipe-delimited CSV strings.

EnrichedTruckData: Data simulated by sensors onboard each truck. For the purposes of this demo, this data has been pre-enriched with data from a weather service.

1488767711734|26|1|Edgar Orendain|107|Springfield to Kansas City Via Columbia|38.95940879245423|-92.21923828125|65|Speeding|1|0|1|60

EnrichedTruckData fields
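
The fields diagram does not render here, but a rough Scala sketch of the record can be inferred from the sample above. These field names are assumptions drawn from the sample values, not the project’s authoritative definitions:

```scala
// Field names inferred from the sample record above -- assumptions, not
// the demo project's authoritative definitions.
case class EnrichedTruckData(
  eventTime: Long,      // 1488767711734
  truckId: Int,         // 26
  driverId: Int,        // 1
  driverName: String,   // Edgar Orendain
  routeId: Int,         // 107
  routeName: String,    // Springfield to Kansas City Via Columbia
  latitude: Double,     // 38.95940879245423
  longitude: Double,    // -92.21923828125
  speed: Int,           // 65
  eventType: String,    // Speeding
  foggy: Int,           // 1  (weather enrichment)
  rainy: Int,           // 0  (weather enrichment)
  windy: Int,           // 1  (weather enrichment)
  congestionLevel: Int  // 60
)
```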

TrafficData: Data simulated from an online traffic service, which reports on traffic congestion on any particular trucking route.


TrafficData fields
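
No sample record renders here either. As an illustration only, a minimal Scala sketch of what this record might look like, with assumed fields (event timestamp, route ID, congestion level) and a parser matching the simulator’s pipe-delimited CSV format:

```scala
// A hypothetical shape for TrafficData, assuming three fields: the event
// timestamp, the route being reported on, and a congestion level.
case class TrafficData(eventTime: Long, routeId: Int, congestionLevel: Int)

object TrafficData {
  // Parse a pipe-delimited string, e.g. "1488767711734|107|60"
  def fromCSV(str: String): TrafficData = {
    val Array(eventTime, routeId, congestion) = str.split('|')
    TrafficData(eventTime.toLong, routeId.toInt, congestion.toInt)
  }
}
```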

Start the data generator by executing the appropriate script:
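
The script itself is not shown in this excerpt. Purely as a placeholder, the invocation might look something like the line below; check the demo repository for the actual script name:

# Hypothetical script name -- check the demo repo for the real one
./scripts/generate-data.sh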


Let’s wait for the simulator to finish generating data. Once that’s done, we can look at the data that was generated and stored in Kafka. The consumer commands below assume a Kafka broker at localhost:6667; substitute your environment’s broker address if it differs:

Note: Press Ctrl + C to exit the Kafka consumer command.

/usr/hdf/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server localhost:6667 --from-beginning --topic trucking_data_truck


/usr/hdf/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server localhost:6667 --from-beginning --topic trucking_data_traffic

Deploy the Storm Topology

With simulated data now being pumped into Kafka topics, we can power up Storm to process it. In a separate terminal window, run the following command:
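
The deploy command is not shown in this excerpt. Purely as a placeholder, the invocation might look something like the line below; check the demo repository for the actual deploy script:

# Hypothetical script name -- check the demo repo for the real deploy command
./scripts/deploy-topology.sh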


Note: We’ll cover what exactly a “topology” is in the next section.

Here is a closer look at the steps Storm takes to process and transform the two types of simulated data described above.

General Storm Process
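
The next section walks through the demo’s real topology code. As a preview, here is a minimal, hypothetical Scala sketch of the general shape using Storm’s storm-kafka-client API: two Kafka spouts (one per topic) feed a single bolt. The PassThroughBolt is a trivial stand-in for the demo’s actual join logic, and the broker address is an assumption:

```scala
import org.apache.storm.kafka.spout.{KafkaSpout, KafkaSpoutConfig}
import org.apache.storm.topology.base.BaseBasicBolt
import org.apache.storm.topology.{BasicOutputCollector, OutputFieldsDeclarer, TopologyBuilder}
import org.apache.storm.tuple.{Fields, Tuple, Values}

// Trivial stand-in for the demo's join logic: re-emits each record's value.
class PassThroughBolt extends BaseBasicBolt {
  override def execute(input: Tuple, collector: BasicOutputCollector): Unit =
    collector.emit(new Values(input.getStringByField("value")))
  override def declareOutputFields(declarer: OutputFieldsDeclarer): Unit =
    declarer.declare(new Fields("value"))
}

object JoinTopologySketch {
  def topology() = {
    val builder = new TopologyBuilder()
    // Assumed broker address; the spout subscribes to a single topic.
    def spoutFor(topic: String) =
      new KafkaSpout[String, String](
        KafkaSpoutConfig.builder("localhost:6667", topic).build())

    builder.setSpout("truckSpout", spoutFor("trucking_data_truck"))
    builder.setSpout("trafficSpout", spoutFor("trucking_data_traffic"))
    // Fan both streams into the same bolt -- the basic shape a join requires.
    builder.setBolt("processBolt", new PassThroughBolt())
      .shuffleGrouping("truckSpout")
      .shuffleGrouping("trafficSpout")
    builder.createTopology()
  }
}
```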

Verify the Processed Data

With the data now fully processed by Storm and published back into accessible Kafka topics, it’s time to verify our handiwork. Run the following command to list the joined data (again assuming a broker at localhost:6667):

/usr/hdf/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server localhost:6667 --from-beginning --topic trucking_data_joined

Nice! Our topology joined data from two separate streams.

Next: Building a Storm Topology

Now that we know how Storm fits into this data pipeline and what type of work it is performing, let’s dive into the actual code and see exactly how it is built.