Apache Falcon

A framework for managing data processing in Hadoop clusters

Apache™ Falcon is a framework for simplifying and orchestrating data management and pipeline processing in Apache Hadoop®. It enables automation of data movement and processing for ingest, pipelines, replication and compliance use cases. Falcon also leverages its integration with YARN—the architectural center of Hadoop—to centrally manage the cluster’s data governance, maximize data pipeline reuse and enforce consistent data lifecycles.

YARN allows an enterprise to process a single massive dataset stored in HDFS in multiple ways—for batch, interactive and streaming applications. With more data and more users of that data, Apache Falcon’s data governance capabilities play a critical role. As the value of Hadoop data increases, so does the importance of cleaning that data, preparing it for business intelligence tools, and removing it from the cluster when it outlives its useful life.

Hortonworks Focus for Falcon

The Apache Falcon community is working on enhanced operations, support for transactional applications, and improved tooling.

Focus | Planned Enhancements
Operations
  • Pipeline run notification via SNMP and email
  • Queries on resource usage and performance, based on audit data
  • File import via SSH and SCP
  • HDFS snapshot integration
Transactional application support
  • Hive ACID support
Tooling
  • Enhanced UI for cluster/feed entity management
  • Visual pipeline workflow designer with reusable components

Recent Progress in Apache Falcon

Falcon Version | Progress
Version 0.6.0
  • Security integration for authentication and authorization
  • Pipeline lineage for HDFS files and Hive tables (GA)
  • Improved UI for pipeline setup and management
  • Backup and replication to cloud
  • Hive and HCatalog metastore replication
Version 0.5.0
  • Pipeline definition, re-use, and automation
  • Install and start/stop via Apache Ambari
  • Pipeline Lineage – Tech Preview Feature
  • Visualize data pipeline audit logs
  • Tag data with business metadata

What Falcon Does

Falcon simplifies the development and management of data processing pipelines with a higher layer of abstraction, taking the complex coding out of data processing applications by providing out-of-the-box data management services. This simplifies the configuration and orchestration of data motion, disaster recovery and data retention workflows.

Apache Falcon meets enterprise data governance needs in three areas:

Need | Feature
Centralized data lifecycle management
  • Centralized definition & management of pipelines for data ingest, process & export
  • Ensure disaster readiness & business continuity
  • Out-of-the-box policies for data replication & retention
  • End to end monitoring of data pipelines
Compliance and audit
  • Visualize data pipeline lineage
  • Track data pipeline audit logs
  • Tag data with business metadata
Database replication and archival
  • Replication between on-premises and cloud-based storage targets: Microsoft Azure and Amazon S3
  • Data lineage with supporting documentation and examples
  • Heterogeneous storage tiering in HDFS
  • Definition of hot/cold storage tiers within a cluster

How Falcon Works

Falcon runs as a standalone server as part of your Hadoop cluster.

[Figure: Falcon architecture]

A user creates entity specifications and submits them to Falcon using the command-line interface (CLI) or the REST API. Falcon transforms the entity specifications into repeated actions through a Hadoop workflow scheduler, delegating all workflow and state management functions to that scheduler. By default, Falcon uses Apache Oozie as the scheduler.
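
For example, submitting and scheduling a pipeline's entities from the CLI might look like the following sketch. The file and entity names are hypothetical, and the falcon client is assumed to be configured to reach the Falcon server:

    # Submit the entity definitions (Falcon validates and stores them; nothing runs yet)
    falcon entity -type cluster -submit -file primary-cluster.xml
    falcon entity -type feed    -submit -file raw-input-feed.xml
    falcon entity -type process -submit -file cleanse-process.xml

    # Schedule the feed and process; Falcon turns them into recurring Oozie jobs
    falcon entity -type feed    -schedule -name rawInputFeed
    falcon entity -type process -schedule -name cleanseProcess

The REST API exposes the same submit and schedule operations for programmatic use.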

[Figure: Falcon user flow]

Entities

The following diagram illustrates the entities defined as part of the Falcon framework:

[Figure: Falcon entities]

  • Cluster: Represents the “interfaces” to a Hadoop cluster.

  • Feed: Defines a dataset (such as HDFS files or Hive tables) with its location, replication schedule and retention policy (see the sketch after this list).

  • Process: Consumes Feeds, applies processing logic and produces output Feeds.
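
To make the Feed entity concrete, here is a minimal sketch of a feed definition. All names, paths, dates and retention limits are hypothetical, and it assumes two Cluster entities (primaryCluster and backupCluster) have already been submitted:

    <feed name="rawInputFeed" description="Hourly raw input data" xmlns="uri:falcon:feed:0.1">
      <frequency>hours(1)</frequency>
      <clusters>
        <!-- Source cluster: retain data for 90 days, then delete -->
        <cluster name="primaryCluster" type="source">
          <validity start="2015-07-01T00:00Z" end="2016-07-01T00:00Z"/>
          <retention limit="days(90)" action="delete"/>
        </cluster>
        <!-- Target cluster: Falcon replicates the feed here on schedule -->
        <cluster name="backupCluster" type="target">
          <validity start="2015-07-01T00:00Z" end="2016-07-01T00:00Z"/>
          <retention limit="months(12)" action="delete"/>
        </cluster>
      </clusters>
      <locations>
        <!-- Dated directory layout; Falcon expands ${...} for each instance -->
        <location type="data" path="/data/raw/${YEAR}/${MONTH}/${DAY}/${HOUR}"/>
      </locations>
      <ACL owner="etl-user" group="hadoop" permission="0755"/>
      <schema location="/none" provider="none"/>
    </feed>

Once the feed is scheduled, Falcon enforces each cluster's retention policy and replicates the data from source to target without any custom code.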

Apache Top-Level Project Since: January 2015
Hortonworks Committers: 6

Try these Tutorials

Falcon User Interface – Tech Preview

Download, installation, and setup instructions for evaluating the Apache Falcon User Interface Technical Preview.

Try Falcon User Interface

Try Falcon with Sandbox

Hortonworks Sandbox is a self-contained virtual machine with HDP running alongside a set of hands-on, step-by-step Hadoop tutorials.

Get Sandbox

