
Realtime Event Processing in Hadoop with NiFi, Kafka and Storm

Real Time Data Ingestion in HBase and Hive using Storm


Introduction

The trucking business is a high-risk business in which truck drivers venture into remote areas, often in harsh weather conditions and chaotic traffic, on a daily basis. Using this solution, which illustrates a Modern Data Architecture with the Hortonworks Data Platform, we have developed a centralized management system that can help reduce risk and lower the total cost of operations.

This system can take into consideration adverse weather conditions, the driver’s driving patterns, current traffic conditions and other criteria to alert and inform the management staff and the drivers themselves when risk factors run high.

In the previous tutorial, we explored generating and capturing streaming data with Apache NiFi and Apache Kafka.

In this tutorial, we will build a solution to ingest real-time streaming data into HBase using Storm. Storm has a spout that reads truck_events data from Kafka and passes it to bolts, which process and persist the data into Hive and HBase tables.

Prerequisites

Outline

HBase

HBase provides near real-time, random read and write access to tables (or, to be more accurate, ‘maps’) storing billions of rows and millions of columns.

In this case, once we store this rapidly and continuously growing dataset from the Internet of Things (IoT), we will be able to perform swift lookups for analytics regardless of the size of the data.
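
To give a concrete sense of what this random read/write access looks like from application code, here is a minimal sketch using the HBase Java client API. The table and column family match the ones created in Step 1 below, but the row key format and the column qualifier are assumptions made purely for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseQuickLookup {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();  // reads hbase-site.xml from the classpath
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("driver_events"))) {

            // Write one cell: hypothetical row key "driverId|timestamp", column family "allevents"
            Put put = new Put(Bytes.toBytes("14|2015-06-05 10:15:00"));
            put.addColumn(Bytes.toBytes("allevents"), Bytes.toBytes("eventType"), Bytes.toBytes("Lane Departure"));
            table.put(put);

            // Random read of the same row, regardless of how many rows the table holds
            Get get = new Get(Bytes.toBytes("14|2015-06-05 10:15:00"));
            Result result = table.get(get);
            System.out.println(Bytes.toString(
                result.getValue(Bytes.toBytes("allevents"), Bytes.toBytes("eventType"))));
        }
    }
}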

Apache Storm

Apache Storm is an open source, distributed, reliable, fault-tolerant system for real-time processing of large volumes of data.
It’s used for:

  • Real-time analytics
  • Scoring machine learning models
  • Continuous statistics computations
  • Operational analytics
  • Enforcing Extract, Transform, and Load (ETL) paradigms

A Storm topology is a network of Spouts and Bolts. The Spouts generate streams, which contain sequences of tuples (data), while the Bolts process input streams and produce output streams. Hence, a Storm topology can talk to databases, run functions, filter, merge or join data. We will be using Storm to parse data, perform complex computations on truck events and send the data to HBase tables.

  • Spout: Works on the source of data streams. In the “Truck Events” use case, Spout will read data from Kafka topics.
  • Bolt: The Spout passes streams of data to a Bolt, which processes and persists the data to a data store or sends it downstream to another Bolt. We have a RouteBolt that transforms each tuple and passes the data on to other bolts for further processing, and 3 HBase bolts that write to 3 tables. A minimal bolt sketch follows this list.
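
To make the Bolt role concrete, here is the minimal sketch promised above: a hypothetical bolt (not the tutorial's actual RouteBolt) that reads a truck event tuple and only emits it downstream when the event type is not "Normal". The field names and the Storm 1.x package names are assumptions for illustration:

import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

// Hypothetical bolt: forwards only non-"Normal" truck events to downstream bolts.
public class DangerousEventFilterBolt extends BaseBasicBolt {

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        String driverId  = input.getStringByField("driverId");   // assumed field names
        String eventType = input.getStringByField("eventType");
        if (!"Normal".equals(eventType)) {
            collector.emit(new Values(driverId, eventType));      // output stream for the next bolt
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("driverId", "eventType"));
    }
}

A real routing bolt would typically declare more output fields, or several output streams, so that different downstream bolts can receive different slices of the data.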

Learn more about Apache Storm at the Storm Documentation page.

Tutorial Overview

  • Create HBase Tables
  • Deploy Storm Topology
  • Analyze a Storm Spout and several Bolts.
  • Persist data into HBase.
  • Verify Data Stored in HBase.

Step 1: Create tables in HBase

  • Create HBase tables

We will work with 3 HBase tables in this tutorial.

The first table stores all events generated, the second stores only
dangerous events, and the third stores the number of incidents per driverId.

su hbase

hbase shell

create 'driver_events', 'allevents'  
create 'driver_dangerous_events', 'events'
create 'driver_dangerous_events_count', 'counters'
list  
exit

Now let’s exit from the hbase user: type exit.

  • driver_events can be thought of as All Events table
  • driver_dangerous_events can be thought of as Dangerous Events Table
  • driver_dangerous_events_count can be thought of as
    Incidents Per Driver Table

Note: ‘driver_events’ is the table name and ‘allevents’ is the column family.
In the script above, we have one column family. If we want, however, we can have
multiple column families; we just need to include more arguments, as sketched below.
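
As a side note, the same tables can also be created programmatically, which makes the multiple column family case explicit. The following is a minimal sketch using the HBase 1.x Java Admin API; the second column family, "metadata", is purely hypothetical and is not used anywhere else in this tutorial:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateDriverEventsTable {
    public static void main(String[] args) throws Exception {
        try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = connection.getAdmin()) {
            // Equivalent of: create 'driver_events', 'allevents' (fails if the table already exists)
            HTableDescriptor descriptor = new HTableDescriptor(TableName.valueOf("driver_events"));
            descriptor.addFamily(new HColumnDescriptor("allevents"));
            // A second, hypothetical column family: the "more arguments" case from the note above
            descriptor.addFamily(new HColumnDescriptor("metadata"));
            admin.createTable(descriptor);
        }
    }
}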


Step 2: Launch Storm Topology

Recall that the source code is under directory path
iot-truck-streaming/storm-streaming/src/.

The pre-compiled jars are under the directory path
iot-truck-streaming/storm-streaming/target/.

Note: Back in Tutorial 0, in which we set up the Trucking Demo, we used Maven
to create the target folder.

2.1 Verify Kafka is Running & Deploy Topology

1. Verify that the Kafka service is running using the Ambari dashboard. If not, start the Kafka service as we did in Tutorial 3.

2. Deploy Storm Topology

We now have the ‘supervisor’ daemon and Kafka processes running.
To do real-time computation on Storm, you create what are called “topologies”. A topology is a Directed Acyclic Graph (DAG) of spouts and bolts with streams of tuples representing the edges. Each node in a topology contains processing logic, and links between nodes indicate how data should be passed around between nodes.

Running a topology is straightforward. First, you package all your code and dependencies into a single jar. In Tutorial 0, when we set up our IDE environment, we ran mvn clean package after we were satisfied with the state of our code for the topology, which packaged our Storm project into a storm…SNAPSHOT.jar. The command below will deploy a new Storm topology for Truck Events.

[root@sandbox ~]# cd ~/iot-truck-streaming
[root@sandbox iot-truck-streaming]# storm jar storm-streaming/target/storm-streaming-1.0-SNAPSHOT.jar com.hortonworks.streaming.impl.topologies.TruckEventKafkaExperimTopology /etc/storm_demo/config.properties

You should see that the topology deployed successfully.


This runs the class TruckEventKafkaExperimTopology. The main function of the class defines the topology and submits it to Nimbus. The storm jar part takes care of connecting to Nimbus and uploading the jar.
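
For context, here is a rough sketch of what such a main function can look like. This is not the actual TruckEventKafkaExperimTopology source: the ZooKeeper host, topic name, component ids and parallelism hints are assumptions, Storm 1.x package names are used, and the routing bolt reuses the hypothetical filter bolt sketched earlier in this tutorial:

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.TopologyBuilder;

public class TruckEventTopologySketch {
    public static void main(String[] args) throws Exception {
        // Kafka spout: read the truck_events topic via ZooKeeper (host and topic are assumptions)
        SpoutConfig spoutConfig = new SpoutConfig(
                new ZkHosts("sandbox.hortonworks.com:2181"), "truck_events", "/truck_events", "truck-spout");
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

        // Define the DAG: spout -> routing bolt (HBase bolts would be attached here as well)
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafkaSpout", new KafkaSpout(spoutConfig), 1);
        builder.setBolt("routeBolt", new DangerousEventFilterBolt(), 2).shuffleGrouping("kafkaSpout");

        // Submit to Nimbus; `storm jar` handles uploading the jar and the Nimbus connection details
        Config conf = new Config();
        conf.setNumWorkers(1);
        StormSubmitter.submitTopology("truck-event-processor", conf, builder.createTopology());
    }
}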

Open your Ambari Dashboard. Click the Storm View located in the Ambari User Views list.

storm_view_iot

You should see the new Topology truck-event-processor.

storm_view_topology_listing

Run the NiFi DataFlow to generate events.
Return to the Storm View and click on truck-event-processor topology in the list of topologies to drill into it.

As you scroll down the page, let’s analyze a Visualization of our truck-event-processor topology:

storm_topology_new_stormAPI_iot

2.2 Analysis of Topology Visualization:

  • RouteBolt processes the data received by KafkaSpout

  • CountBolt takes the data from RouteBolt and counts the incidents per driver

  • 1 HBaseBolt performs complex transformations on the data received by CountBolt
    to write to the Incidents Per Driver Table (see the sketch after this list)

  • 2 HBase bolts perform complex transformations on the data from RouteBolt to
    write to the All Events and Dangerous Events tables.
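
The tutorial's HBase bolts are custom classes in the project source, but conceptually they do what the stock storm-hbase HBaseBolt does: map the fields of an incoming tuple onto an HBase row and write it out. Here is a minimal sketch of that idea for the Incidents Per Driver Table; it assumes Storm 1.x package names, an upstream component id of "countBolt", and tuple field names chosen purely for illustration:

import org.apache.storm.hbase.bolt.HBaseBolt;
import org.apache.storm.hbase.bolt.mapper.SimpleHBaseMapper;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

public class HBaseBoltWiringSketch {
    // Attach an HBase-writing bolt for the Incidents Per Driver Table to an existing topology.
    public static void addCountHBaseBolt(TopologyBuilder builder) {
        SimpleHBaseMapper mapper = new SimpleHBaseMapper()
                .withRowKeyField("driverId")                          // row key = driver id (assumed field name)
                .withCounterFields(new Fields("incidentTotalCount"))  // counter column keeps the running total
                .withColumnFamily("counters");                        // matches the family created in Step 1

        HBaseBolt countHBaseBolt = new HBaseBolt("driver_dangerous_events_count", mapper)
                .withConfigKey("hbase.conf");                         // HBase settings are read from the Storm config

        // fieldsGrouping on driverId sends every event for a given driver to the same bolt task
        builder.setBolt("countHBaseBolt", countHBaseBolt, 1)
               .fieldsGrouping("countBolt", new Fields("driverId"));
    }
}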

2.3 Overview of the Storm View

After 6-10 minutes, you should see that the numbers of emitted and transferred tuples for each node (Spout or Bolt) in the topology are increasing, which shows that the messages are being processed in real time by the Spout and Bolts. If we hover over one of the spouts or bolts, we can see how much data it processes and its latency.

Here is an example of the data that goes through the kafkaSpout and RouteBolt in the topology:

analysis_of_dive_into_storm_view

Overview of truck-event-processor in Storm View

  • Topology Summary
  • Topology Stats
  • truck-event-processor Visualization
  • Spout
  • Bolts
  • Topology Configuration

2.4 Overview of Spout Statistics:

To see statistics of a particular component or node in the storm topology, click on that component located in the Spouts or Bolts section. For instance, let’s dive into the KafkaSpout’s statistics.

Overview of Spout Statistics

  • Component Summary
  • Spout Stats
  • Output Stats ( All time )
  • Executor Stats ( All time )
  • Error Stats ( All time )

spout_statistics_iot

2.5 Overview of Bolt Statistics:

Follow a process similar to section 2.4 to see the statistics of a particular bolt in your topology. Let’s dive into the RouteBolt statistics.

Overview of Bolt Statistics

  • Component Summary
  • Bolt Stats
  • Input Stats ( All time )
  • Output Stats ( All time )
  • Executor Stats ( All time )
  • Error Stats ( All time )

bolt_statistics_iot

What differences do you notice about the spout statistics compared to the bolt statistics?

Step 3: Verify Data in HBase

Let’s verify that Storm’s 3 HBase bolts successfully sent data to the 3 HBase Tables.

  • If you haven’t done so already, you can stop the NiFi DataFlow. Press the stop symbol.
  • Verify that the data is in HBase by executing the following commands in HBase shell:
hbase shell

list
count 'driver_events'
count 'driver_dangerous_events'
count 'driver_dangerous_events_count'
exit

The driver_dangerous_events table is updated upon every violation event.

verify_data_in_hbase_iot

3.1 Troubleshoot Unexpected Data in HBase Table

If the data in an HBase table is displayed in hexadecimal values, you can perform the following special operation on the table to display the data in the correct format. For instance, if your driver_dangerous_events_count table had unexpected data, run the following HBase query:

scan 'driver_dangerous_events_count' , {COLUMNS => ['counters:driverId:toInt', 'counters:incidentTotalCount:toLong']}

Your data should look as follows:

hbase_dangerous_correct_format
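
The hexadecimal output appears because HBase stores every cell as raw, untyped bytes; the :toInt and :toLong suffixes in the scan above simply tell the HBase shell which formatter to apply when printing. When reading the same data from Java you decode the bytes yourself, roughly as in the sketch below, which assumes the column qualifiers shown in the scan command:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ReadIncidentCounts {
    public static void main(String[] args) throws Exception {
        try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = connection.getTable(TableName.valueOf("driver_dangerous_events_count"));
             ResultScanner scanner = table.getScanner(new Scan().addFamily(Bytes.toBytes("counters")))) {

            for (Result row : scanner) {
                // Cells come back as raw bytes; Bytes.toInt / Bytes.toLong restore the original types
                int driverId = Bytes.toInt(
                        row.getValue(Bytes.toBytes("counters"), Bytes.toBytes("driverId")));
                long incidents = Bytes.toLong(
                        row.getValue(Bytes.toBytes("counters"), Bytes.toBytes("incidentTotalCount")));
                System.out.println("driver " + driverId + " -> " + incidents + " incidents");
            }
        }
    }
}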

  • Once done, stop the Storm topology

Open the terminal of your sandbox; we can then deactivate or kill the Storm topology from the Storm View or from the shell:

storm kill TruckEventKafkaExperimTopology

storm_topology_actions_iot

Summary

Congratulations, you built your first Hortonworks DataFlow Application.
When NiFi, Kafka and Storm are combined, they create the Hortonworks DataFlow.
You have used the power of NiFi to ingest, route and land real-time streaming
data. You learned to capture that data with Kafka and perform instant processing
with Storm. A common challenge with many use cases, which is also observed in this
tutorial series, is ingesting a live stream of random data and filtering the junk
data from the actual data we care about. Through these tutorials, you learned to
manipulate, persist and perform many other operations on that data.
We have a working application that shows us a visualization of driver behavior,
normal and dangerous events per city. Can you brainstorm ways to further enhance
this application?

Further Reading