June 03, 2016

Apache NiFi Not From Scratch

About the author: Paul Boal is the big data practice lead at Amitech Solutions. At StampedeCon in St. Louis on July 26-28, 2016 he will be presenting more details on the use of NiFi and Hadoop to manage and analyze data from wearable fitness devices in a population health management solution with Big Cloud Analytics.

If you haven’t heard of it yet, Apache NiFi is a recent addition to the list of big data technologies that Hortonworks is helping to develop in the open source community. Whereas Hadoop is a data-at-rest and data-processing platform, NiFi is specifically a data-in-motion technology that uses a flow-based processing paradigm. If you’d like more history and background on NiFi, take a look at the official overview.

At first glance, NiFi looks very different depending on a developer’s background: Java programmers see one thing; ETL developers see another. This article uses the example of scaling our population health solution from a few hundred to millions of individual lives to highlight the following topics:

  • NiFi from a Java developer’s perspective
  • NiFi from an ETL developer’s perspective
  • Rewriting your existing solution from scratch using NiFi
  • Understanding the places where NiFi really helps
  • Adapting your existing Java project to run in NiFi without starting from scratch

Is NiFi Really All That?

There are two different ways that most developers are going to come at NiFi:

  1. From a traditional software programming background with some data processing experience but mostly application focused development, or
  2. From a data warehousing background where extract, transform, and load (ETL) tools are the norm.

Adventurous and open-minded developers will look at the NiFi tutorials on streaming the Twitter garden hose into HDFS and simply be excited to learn something new. Unless they’re already facing Big Data scalability challenges, skeptics from the application development camp might see NiFi as a point-and-click waste of time, where configuration files and dynamic code would be much more flexible.

NiFi for ETL

The skeptics from the ETL camp might scoff at NiFi and write it off as Big Data folks trying to recreate the ETL wheel. In all of these cases, the project managers associated with these developers probably see the potential for a huge hit to productivity as developers want (or are told to) rewrite existing code using NiFi.

But rest assured: it doesn’t have to be that way. Instead of starting from scratch with NiFi as the foundation, it’s worthwhile to invert the problem and see whether your existing project can simply be refactored with NiFi in mind. At Amitech Solutions and Big Cloud Analytics, we have been working with the real-world scenario of wearable fitness devices – something more relevant in healthcare than the canned Twitter tutorial.

Fitness device manufacturers are making the data from their devices available to partners through web-based APIs. The model for each of these vendors tends to be fairly similar and consistent with many consumer application APIs: authenticate as a user to obtain a session key, then query the API for data using that key. In the healthcare industry, startups are looking for ways to use data from wearable devices to help inform services such as population health management.
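That shared interaction model can be sketched in plain Java. The endpoint paths, JSON body, and Bearer token scheme below are hypothetical stand-ins for vendor-specific details; what the vendors have in common is the shape: log in for a token, then send the token with every query.

```java
import java.net.URI;
import java.net.http.HttpRequest;

class VendorApiSketch {

    // Step 1: exchange credentials for a session token (endpoint and body are hypothetical).
    static HttpRequest buildLoginRequest(String baseUrl, String user, String pass) {
        String body = "{\"username\":\"" + user + "\",\"password\":\"" + pass + "\"}";
        return HttpRequest.newBuilder(URI.create(baseUrl + "/auth/login"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    // Step 2: every data query carries the session token in a header.
    static HttpRequest buildDataRequest(String baseUrl, String token, String start, String end) {
        return HttpRequest.newBuilder(URI.create(baseUrl + "/activities?start=" + start + "&end=" + end))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();
    }
}
```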

In our particular case at Amitech, we’ve been working with a legacy code base that was developed during the early startup days of the company. It’s worked fine for just a few customers, covering a few thousand individuals. As the early pilots have wrapped up and the business case has become clearer to other prospective customers, the solution has needed to scale to hundreds of thousands and millions of individual fitness devices. The project has, all of a sudden, moved into the realm of Big Data that needs to be ingested from a huge and growing set of devices in all their many formats.

NiFi From Scratch

As the lead architect responsible for planning to adapt the solution for scalability, I felt that NiFi, Storm, Spark Streaming, or a similar technology would be a sensible part of the solution, so I put on my traditional data integration/ETL mindset and took a look at the available NiFi processors.

The list is extensive and very different from the kinds of components available in traditional ETL tools. At the time of this post, there are more than 135 different processors listed. Of course, the list included a complete set of HTTP- and REST-related processors that I could use to communicate with the fitness device vendors’ APIs. So, I wired together a simple series of processors that would take username and password as input, authenticate against an API to retrieve an authorization token, add that to the HTTP header, and then query the API for the data set that I wanted.

[Screenshot: the initial NiFi flow built from HTTP and attribute-handling processors]

Yeah! A simple real world use of NiFi that isn’t about Twitter! A working and practical application is always something to celebrate, but it occurred to me that with this approach, we’d need to rebuild the existing Java libraries already written for each type of device using a similar kind of approach. Not hard, but certainly a hit to the project timeline and a risk to data integrity as we made the transition to the new model.

NiFi Not From Scratch

Typically, data warehousing and ETL tool vendors recommend against writing your own custom components. After all, the target market for ETL tools is a space where the tools are specifically marketed as reducing the need for “error-prone and time-consuming” manual coding. When I ran across this tutorial on writing your own NiFi processor, it occurred to me that NiFi is the exact opposite: it’s both open source and designed for extensibility from the ground up. I found it quite reasonable to write a custom NiFi processor that leverages our existing code base.

The existing code is a Java program with separate classes for each device vendor, all implementing the same interface to abstract each vendor’s nuances away from the main data export program. The interface follows a traditional paradigm: login, query, query, query, logout. Given that my NiFi input above takes simple username, password, and query-criteria arguments, it seemed trivial to create a NiFi processor class that adapts the existing code to the NiFi API. Here’s a slightly abbreviated version of the actual code. (In reality, it’s all of 70 lines.)

public void onTrigger(final ProcessContext context, final ProcessSession session) {
    final ProcessorLog log = this.getLogger();
    // Initialize to the empty string so the write callback below never sees null
    final AtomicReference<String> value = new AtomicReference<>("");
    boolean success = false;

    FlowFile flowfile = session.get();

    try {
        AbstractDevice vendor = null;
        String v = context.getProperty(DEVICE).evaluateAttributeExpressions(flowfile).toString();
        switch (v) {
            // ... Instantiate specific device class depending on flow file attribute
            default:
                log.error("Invalid device vendor type: " + v);
                throw new ProcessException("Unable to determine vendor type: " + v);
        }

        // ... Get various other attributes we need to call the API

        // Here's where we actually query the vendor API
        if (vendor.login(userProp, passProp)) {
            vendor.queryVendor(startDate, endDate);
            value.set(vendor.getDataAsString());
            if (!value.get().contentEquals("")) {
                success = true;
            }
        }
    } catch (Exception e) {
        log.error("Vendor query failed: " + e.getMessage());
    }

    // Write whatever we retrieved (possibly nothing) into the flow file content
    flowfile = session.write(flowfile, new OutputStreamCallback() {
        public void process(OutputStream out) throws IOException {
            out.write(value.get().getBytes());
        }
    });

    // Route downstream on the SUCCESS or FAILURE relationship
    if (success) {
        session.transfer(flowfile, SUCCESS);
    } else {
        session.transfer(flowfile, FAILURE);
    }
}
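For context, the vendor abstraction that onTrigger delegates to exposes the login/query/logout contract described earlier. Here is a minimal sketch; the FakeDevice class is purely illustrative, while the real subclasses wrap each vendor’s HTTP API.

```java
// The common interface the main export program codes against.
abstract class AbstractDevice {
    public abstract boolean login(String user, String pass);
    public abstract void queryVendor(String startDate, String endDate);
    public abstract String getDataAsString();
    public abstract void logout();
}

// Purely illustrative stand-in; real subclasses call a vendor's HTTP API.
class FakeDevice extends AbstractDevice {
    private String data = "";
    private boolean loggedIn = false;

    public boolean login(String user, String pass) {
        loggedIn = !user.isEmpty() && !pass.isEmpty();
        return loggedIn;
    }
    public void queryVendor(String startDate, String endDate) {
        if (loggedIn) data = "{\"start\":\"" + startDate + "\",\"end\":\"" + endDate + "\"}";
    }
    public String getDataAsString() { return data; }
    public void logout() { loggedIn = false; }
}
```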

We added a few dependencies and a build plugin to the Maven POM file, and Maven generates the NAR file that gets deployed into NiFi. After a quick restart, the new processor shows up in the list of available processors, and the new flow is much simpler than a series of HTTP and attribute-parsing processors:

[Screenshot: the simplified flow using the custom processor]
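The Maven changes mentioned above amount to roughly the following POM fragment. The versions are illustrative and should match your NiFi release; the essential pieces are the nar packaging and the nifi-nar-maven-plugin.

```xml
<packaging>nar</packaging>

<dependencies>
  <dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-api</artifactId>
    <version>0.6.1</version>
  </dependency>
</dependencies>

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.nifi</groupId>
      <artifactId>nifi-nar-maven-plugin</artifactId>
      <version>1.1.0</version>
      <extensions>true</extensions>
    </plugin>
  </plugins>
</build>
```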

How Does NiFi Add Value?

So, it seems that in the case of this solution, it will be fairly reasonable to adapt existing code to run within the NiFi framework without taking on the time and risk of rewriting the core business logic in a new tool. What, then, are the benefits of embedding this existing process in NiFi? The data flow in the traditional program was:

  1. Query the operational database for a list of individuals to process
  2. For each individual:
     a. Log in to the vendor API
     b. Query the vendor API for data
     c. Parse the data into a normalized format
     d. Save it to the RDBMS
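Sketched in Java, with hypothetical stand-ins for the database query, the vendor calls, and the parsing step, that serial driver is a single loop:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

class SerialExportSketch {

    // Hypothetical record of one individual's API credentials.
    static class Individual {
        final String user, pass;
        Individual(String user, String pass) { this.user = user; this.pass = pass; }
    }

    // One individual at a time: login + query + logout collapsed into `fetch`,
    // and parsing reduced to a dummy transformation.
    static List<String> exportAll(List<Individual> people, BiFunction<String, String, String> fetch) {
        List<String> normalized = new ArrayList<>();
        for (Individual p : people) {
            String raw = fetch.apply(p.user, p.pass);
            normalized.add(raw.trim().toUpperCase());   // stand-in for real parsing
        }
        return normalized;                              // stand-in for the RDBMS save
    }
}
```

Parallelizing this yourself means managing threads or multiple program instances; in NiFi, the same fan-out is a configuration change on the processor.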

This serial process works fine for a few hundred or thousand users. The processing takes under an hour. With a million users, though, the process can’t be run serially in any reasonable timeframe. It would take several days to get through one day’s worth of processing. The logical response is to find some way of parallelizing the process.
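To put rough numbers on that claim (the half-second of API time per individual is purely an assumption for illustration):

```java
class ScaleEstimate {

    // Wall-clock days for a strictly serial pass over every individual.
    static double serialDays(long users, double secondsPerUser) {
        return users * secondsPerUser / 3600.0 / 24.0;
    }

    // Wall-clock hours when the same work is spread over `tasks` parallel workers.
    static double parallelHours(long users, double secondsPerUser, int tasks) {
        return users * secondsPerUser / tasks / 3600.0;
    }
}
```

At an assumed 0.5 seconds per individual, a million users take about 5.8 days serially, but about 2.2 hours across 64 concurrent tasks.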

We could set up and manage multiple instances of the Java program on multiple servers, but that model introduces a lot of new risk in managing the infrastructure and the deployment of code. Those risks aren’t insurmountable, but they are something the NiFi framework accommodates easily. NiFi also gives us robust data provenance without any additional programming.

The data provenance log keeps track of every flowfile (a combination of data and attributes) and each of the transformations that happen to that flowfile along the way.

[Screenshot: the NiFi data provenance log for a flow file]

Anyone who’s ever been involved in the operational support and debugging of a data integration and data processing application can see the strength of these data provenance features.

Next Steps

Another thing we plan to do with this project is migrate the backend data store from its current RDBMS to something more flexible and easier to scale, like HBase or MongoDB. As it turns out, integrating the existing business logic into NiFi will make this process significantly easier. Instead of having to rip into the existing Java program and add new classes for writing to the new data store, we can simply route and store the same data within NiFi. NiFi already has processors for doing this in a distributed and scalable way.

Final Thoughts

For anyone who has an existing application that needs to scale, or that is costing too much to scale, take a serious look at simply wrapping your current business logic in a custom Apache NiFi processor.

  • Get beyond the Twitter garden hose example
  • Writing a custom processor for NiFi is relatively simple
  • Other processors in NiFi make it easy to adapt inputs and outputs to your existing code
  • Data provenance in NiFi is an incredibly valuable feature for support and troubleshooting
  • If you create something broadly useful, ask about contributing it to the NiFi community!

NiFi is enabling our population health management solution to quickly scale to track and help improve the health of millions of individuals across the globe. Imagine what you could do with NiFi for your business and your industry…
