Apache™ Falcon is a framework for simplifying and orchestrating data management and pipeline processing in Apache Hadoop®. It enables automation of data movement and processing for ingest, pipelines, replication and compliance use cases. Falcon also leverages its integration with YARN—the architectural center of Hadoop—to centrally manage the cluster’s data governance, maximize data pipeline reuse and enforce consistent data lifecycles.
YARN allows an enterprise to process a single massive dataset stored in HDFS in multiple ways—for batch, interactive and streaming applications. With more data and more users of that data, Apache Falcon’s data governance capabilities play a critical role. As the value of Hadoop data increases, so does the importance of cleaning that data, preparing it for business intelligence tools, and removing it from the cluster when it outlives its useful life.
Hortonworks Focus for Falcon
The Apache Falcon community is working to enhance Falcon in three areas:
- Operations
- Transactional application support
- Improved tooling
What Falcon Does
Falcon simplifies the development and management of data processing pipelines with a higher layer of abstraction, taking the complex coding out of data processing applications by providing out-of-the-box data management services. This streamlines the configuration and orchestration of data motion, disaster recovery and data retention workflows.
Apache Falcon meets enterprise data governance needs in three areas:
- Centralized data lifecycle management
- Compliance and audit
- Database replication and archival
How Falcon Works
Falcon runs as a standalone server as part of your Hadoop cluster.
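Once the server is running, you can check that it is reachable with the Falcon CLI's admin commands. A minimal sketch, assuming the falcon client script is on your PATH and configured to point at your Falcon server:

```bash
# Confirm the Falcon server is up and responding
falcon admin -status

# Report the server's build version
falcon admin -version
```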
A user creates entity specifications and submits them to Falcon using the Command Line Interface (CLI) or REST API. Falcon transforms the entity specifications into repeated actions through a Hadoop workflow scheduler. All the functions and workflow state management requirements are delegated to the scheduler. By default, Falcon uses Apache Oozie as the scheduler.
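For example, a Feed entity defined in an XML file can be submitted and then scheduled from the CLI, or submitted over REST. This is a minimal sketch under a few assumptions: the entity name (rawEmailFeed) and file (feed.xml) are hypothetical, and the REST call assumes an unsecured Falcon server on its default port, 15000:

```bash
# Submit the entity definition; Falcon validates and stores it, but runs nothing yet
falcon entity -type feed -submit -file feed.xml

# Schedule the submitted entity so the scheduler (Oozie by default) starts executing it
falcon entity -type feed -schedule -name rawEmailFeed

# The equivalent submission through the REST API
curl -X POST -H "Content-Type: text/xml" --data-binary @feed.xml \
  "http://localhost:15000/api/entities/submit/feed?user.name=falcon"
```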
Falcon defines the following entities as part of its framework:
- Cluster: represents the “interfaces” to a Hadoop cluster, such as its storage, workflow and messaging endpoints
- Feed: defines a dataset (such as HDFS files or Hive tables) with its location, replication schedule and retention policy (see the sketch after this list)
- Process: consumes Feeds, applies processing logic to them and can produce new Feeds
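To make the Feed entity concrete, the sketch below writes a hypothetical specification to feed.xml (the file submitted in the earlier example). All names, paths and dates are made up, and the referenced clusters (primaryCluster, backupCluster) would need Cluster entity definitions of their own; the structure follows Falcon's feed XML schema:

```bash
cat > feed.xml <<'EOF'
<feed name="rawEmailFeed" description="Hourly raw email data" xmlns="uri:falcon:feed:0.1">
  <frequency>hours(1)</frequency>
  <timezone>UTC</timezone>
  <clusters>
    <!-- Source cluster: where the data lands; instances older than 90 days are deleted -->
    <cluster name="primaryCluster" type="source">
      <validity start="2016-06-01T00:00Z" end="2099-01-01T00:00Z"/>
      <retention limit="days(90)" action="delete"/>
    </cluster>
    <!-- Target cluster: Falcon replicates each feed instance here and keeps it longer -->
    <cluster name="backupCluster" type="target">
      <validity start="2016-06-01T00:00Z" end="2099-01-01T00:00Z"/>
      <retention limit="months(6)" action="delete"/>
    </cluster>
  </clusters>
  <!-- HDFS location of the dataset, parameterized by each instance's timestamp -->
  <locations>
    <location type="data" path="/data/email/raw/${YEAR}/${MONTH}/${DAY}/${HOUR}"/>
  </locations>
  <ACL owner="falcon" group="users" permission="0755"/>
  <schema location="/none" provider="/none"/>
</feed>
EOF
```

Scheduling this feed causes Falcon to generate the scheduler jobs that enforce the retention policy on both clusters and drive the hourly replication from source to target.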
Try these Tutorials
Try Falcon with Sandbox
Hortonworks Sandbox is a self-contained virtual machine with HDP running alongside a set of hands-on, step-by-step Hadoop tutorials.