Apache Falcon

A framework for managing data life cycle in Hadoop clusters

Apache™ Falcon addresses enterprise challenges related to Hadoop data replication, business continuity, and lineage tracing by providing a framework for data management and processing. Falcon centrally manages the data lifecycle, facilitates quick data replication for business continuity and disaster recovery, and provides a foundation for audit and compliance by tracking entity lineage and collecting audit logs.

What Falcon Does

Falcon allows an enterprise to process a single massive dataset stored in HDFS in multiple ways—for batch, interactive and streaming applications. With more data and more users of that data, Apache Falcon’s data governance capabilities play a critical role. As the value of Hadoop data increases, so does the importance of cleaning that data, preparing it for business intelligence tools, and removing it from the cluster when it outlives its useful life.

Falcon simplifies the development and management of data processing pipelines with a higher layer of abstraction, taking the complex coding out of data processing applications by providing out-of-the-box data management services. This simplifies the configuration and orchestration of data motion, disaster recovery and data retention workflows.

The Falcon framework can also leverage other HDP components, such as Pig, HDFS, and Oozie. Falcon enables this simplified management by providing a framework to define, deploy, and manage data pipelines.

Apache Falcon meets enterprise data governance needs in three areas:

Centralized data lifecycle management:
  • Centralized definition & management of pipelines for data ingest, process & export
  • Ensure disaster readiness & business continuity
  • Out-of-the-box policies for data replication & retention
  • End-to-end monitoring of data pipelines

Compliance and audit:
  • Visualize data pipeline lineage
  • Track data pipeline audit logs
  • Tag data with business metadata

Database replication and archival:
  • Replication across on-premises and cloud-based storage targets: Microsoft Azure and Amazon S3
  • Data lineage with supporting documentation and examples
  • Heterogeneous storage tiering in HDFS
  • Definition of hot/cold storage tiers within a cluster

How Falcon Works

Hadoop operators can use the Falcon web UI or the command-line interface (CLI) to create data pipelines, which consist of cluster storage location definitions, dataset feeds, and processing logic.


Each pipeline consists of XML pipeline specifications, called entities. These entities act together to provide a dynamic flow of information to load, clean, and process data.

There are three types of entities:

  • Cluster: Defines where data and processes are stored.
  • Feed: Defines the datasets to be cleaned and processed.
  • Process: Consumes feeds, invokes processing logic, and produces further feeds. A process defines the configuration of the Oozie workflow and specifies when and how often the workflow should run; it also allows for late data handling. (Minimal sketches of all three entity types follow this list.)
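
The sketch below illustrates skeletal versions of the three entity types. It is a simplified, hypothetical example rather than a shipped sample: all names, endpoints, paths, and dates are placeholders, and optional elements are omitted. The element names follow the Falcon entity schemas (uri:falcon:cluster:0.1, uri:falcon:feed:0.1, uri:falcon:process:0.1).

  <!-- Cluster: defines where data and processes are stored (hypothetical endpoints) -->
  <cluster name="primary-cluster" description="Primary Hadoop cluster" colo="east"
           xmlns="uri:falcon:cluster:0.1">
    <interfaces>
      <interface type="readonly"  endpoint="hftp://nn.example.com:50070"            version="2.2.0"/>
      <interface type="write"     endpoint="hdfs://nn.example.com:8020"             version="2.2.0"/>
      <interface type="execute"   endpoint="rm.example.com:8050"                    version="2.2.0"/>
      <interface type="workflow"  endpoint="http://oozie.example.com:11000/oozie/"  version="4.0.0"/>
      <interface type="messaging" endpoint="tcp://mq.example.com:61616?daemon=true" version="5.1.6"/>
    </interfaces>
    <locations>
      <location name="staging" path="/apps/falcon/primary-cluster/staging"/>
      <location name="temp"    path="/tmp"/>
      <location name="working" path="/apps/falcon/primary-cluster/working"/>
    </locations>
  </cluster>

  <!-- Feed: an hourly dataset with a 90-day retention policy on the source cluster -->
  <feed name="raw-events" description="Hourly raw event files" xmlns="uri:falcon:feed:0.1">
    <frequency>hours(1)</frequency>
    <late-arrival cut-off="hours(6)"/>
    <clusters>
      <cluster name="primary-cluster" type="source">
        <validity start="2015-01-01T00:00Z" end="2016-01-01T00:00Z"/>
        <retention limit="days(90)" action="delete"/>
      </cluster>
    </clusters>
    <locations>
      <location type="data" path="/data/raw-events/${YEAR}-${MONTH}-${DAY}-${HOUR}"/>
    </locations>
    <ACL owner="etl" group="users" permission="0755"/>
    <schema location="/none" provider="none"/>
  </feed>

  <!-- Process: runs an Oozie workflow over the feed every hour, with retry and late-data handling -->
  <process name="clean-events" xmlns="uri:falcon:process:0.1">
    <clusters>
      <cluster name="primary-cluster">
        <validity start="2015-01-01T00:00Z" end="2016-01-01T00:00Z"/>
      </cluster>
    </clusters>
    <parallel>1</parallel>
    <order>FIFO</order>
    <frequency>hours(1)</frequency>
    <inputs>
      <input name="input" feed="raw-events" start="now(0,0)" end="now(0,0)"/>
    </inputs>
    <outputs>
      <!-- "clean-events" would be a second feed entity, omitted here for brevity -->
      <output name="output" feed="clean-events" instance="now(0,0)"/>
    </outputs>
    <workflow engine="oozie" path="/apps/falcon/workflows/clean-events"/>
    <retry policy="periodic" delay="minutes(15)" attempts="3"/>
    <late-process policy="exp-backoff" delay="hours(1)">
      <late-input input="input" workflow-path="/apps/falcon/workflows/late-handling"/>
    </late-process>
  </process>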

Each entity is defined separately and then linked together to form a data pipeline. Falcon provides predefined policies for data replication, retention, and late data handling. These sample policies are easily customized to suit your needs.
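
For example, replication and retention can be expressed entirely in a feed definition: adding a second cluster of type "target" tells Falcon to copy each feed instance from the source cluster to the target, while each cluster's retention element controls how long data is kept there before it is evicted. The fragment below is a sketch along those lines; the cluster names, dates, and limits are hypothetical.

  <clusters>
    <!-- Source: data lands here and is kept for 90 days -->
    <cluster name="primary-cluster" type="source">
      <validity start="2015-01-01T00:00Z" end="2016-01-01T00:00Z"/>
      <retention limit="days(90)" action="delete"/>
    </cluster>
    <!-- Target: Falcon replicates each instance here for disaster recovery and keeps it for 12 months -->
    <cluster name="dr-cluster" type="target">
      <validity start="2015-01-01T00:00Z" end="2016-01-01T00:00Z"/>
      <retention limit="months(12)" action="delete"/>
    </cluster>
  </clusters>

Entities defined this way are typically submitted and scheduled with the Falcon CLI (for example, falcon entity -type feed -submit -file raw-events.xml, followed by falcon entity -type feed -schedule -name raw-events) or through the Falcon web UI mentioned above.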

Together, these building blocks address the needs outlined above:

  • Centralized management of the data lifecycle: Falcon enables you to manage the data lifecycle in one common place, where you can define and manage policies and pipelines for data ingest, processing, and export.
  • Business continuity and disaster recovery: Falcon can replicate HDFS and Hive datasets, trigger processes for retry, and handle late data arrival logic. In addition, Falcon can mirror file systems or Hive HCatalog on clusters using recipes that enable you to re-use complex workflows.
  • Audit and compliance requirements: Falcon provides audit and compliance features that enable you to visualize data pipeline lineage, track data pipeline audit logs, and tag data with business metadata (a tagging sketch follows this list).
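
On the tagging point, Falcon entities can carry business metadata in a tags element containing comma-separated key=value pairs, which can then be used when listing and searching entities and which supports the audit and discovery use cases above. The snippet below is a hypothetical fragment of a feed definition; the key names and values are placeholders, not a prescribed vocabulary.

  <feed name="raw-events" description="Hourly raw event files" xmlns="uri:falcon:feed:0.1">
    <!-- Business metadata as key=value pairs, used for audit, search, and lineage views -->
    <tags>owner=marketing, classification=financial, source-system=web-logs</tags>
    <frequency>hours(1)</frequency>
    <!-- clusters, locations, ACL, and schema elements as in the earlier feed sketch -->
  </feed>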


Hortonworks Focus for Falcon

The Apache Falcon community is working to enhance operations, add support for transactional applications, and improve tooling.

Operations:
  • Pipeline run notification via SNMP and email
  • Queries related to resource usage and performance, using Audit data
  • File import via SSH and SCP
  • HDFS snapshot integration

Transactional application support:
  • Hive ACID support

Tooling:
  • Enhanced UI for cluster/feed entity management
  • Visual pipeline workflow designer with re-useable components

Recent Progress in Apache Falcon

Version 0.6.0:
  • Security integration for authentication and authorization
  • Pipeline lineage for HDFS and Hive tables (GA)
  • Improved UI for pipeline setup and management
  • Backup and replication to cloud
  • Hive and HCatalog metastore replication
Version 0.5.0:
  • Pipeline definition, re-use, and automation
  • Ambari Install, Start/Stop
  • Pipeline Lineage – Tech Preview Feature
  • Visualize data pipeline audit logs
  • Tag data with business metadata
