Apache Hadoop MapReduce


A framework for writing applications that process large amounts of data

MapReduce is the original framework for writing applications that process large amounts of structured and unstructured data stored in the Hadoop Distributed File System (HDFS). Apache Hadoop YARN opened Hadoop to other data processing engines that can now run alongside existing MapReduce jobs to process data in many different ways at the same time.

What MapReduce Does

MapReduce is useful for batch processing on terabytes or petabytes of data stored in Apache Hadoop.

The following table describes some of MapReduce’s key benefits:

Simplicity: Developers can write applications in their language of choice, such as Java, C++, or Python, and MapReduce jobs are easy to run.
Scalability: MapReduce can process petabytes of data stored in HDFS on one cluster.
Speed: Parallel processing means that MapReduce can take problems that used to take days to solve and solve them in hours or minutes.
Recovery: MapReduce takes care of failures. If a machine with one copy of the data is unavailable, another machine has a copy of the same key/value pair, which can be used to complete the same sub-task. The JobTracker keeps track of it all.
Minimal data motion: MapReduce moves compute processes to the data on HDFS, not the other way around. Processing tasks can occur on the physical node where the data resides, which significantly reduces network I/O and contributes to Hadoop’s processing speed.

Even though newer engines like Apache Tez can process certain workloads more efficiently than MapReduce, tried-and-true MapReduce jobs continue to work and may benefit from other efficiency improvements made by the Apache Hadoop open source community.

How MapReduce Works

A MapReduce job splits a large data set into independent chunks and organizes them into key/value pairs for parallel processing. Processing the chunks in parallel improves both the speed and the reliability of the cluster, returning results more quickly.
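As a concrete illustration of how a job is wired together, here is a minimal driver sketch for the classic word-count problem using the standard org.apache.hadoop.mapreduce API. The WordCountMapper and WordCountReducer class names are assumptions for this example; sketches of them follow the map and reduce descriptions below.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical word-count driver: configures the job and submits it to the cluster.
public class WordCount {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCountMapper.class);     // sketched below
        job.setReducerClass(WordCountReducer.class);   // sketched below
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory in HDFS
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```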

In the map phase, the InputFormat divides the input into ranges and a map task is created for each range. The JobTracker distributes those tasks to the worker nodes. The output of each map task is partitioned into a group of key/value pairs for each reduce.
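Continuing the hypothetical word-count example, a mapper might look like the following sketch. With the default TextInputFormat, each record handed to the mapper is a byte offset (key) and a line of text (value), and the mapper emits one (word, 1) pair per word.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical word-count mapper: turns each input line into (word, 1) pairs.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);  // partitioned by key and routed to a reduce task
        }
    }
}
```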

The Reduce function then collects the intermediate results and combines them to answer the larger problem the job needs to solve. Each reduce task pulls the relevant partition from the machines where the map tasks executed, combines the values it receives for each key, and writes its output back into HDFS. In this way, each reduce collects the data from all of the maps for its keys and combines them to solve the problem.
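To complete the hypothetical word-count sketch, the reducer below receives all of the counts emitted for a given word, sums them, and writes the final (word, count) pair back to HDFS.

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical word-count reducer: sums the counts for each word.
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable total = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        total.set(sum);
        context.write(key, total);  // written to the job's output directory in HDFS
    }
}
```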
