A framework for writing applications that process large amounts of data

MapReduce is the original framework for writing applications that process large amounts of structured and unstructured data stored in the Hadoop Distributed File System (HDFS). Apache Hadoop YARN opened Hadoop to other data processing engines that can now run alongside existing MapReduce jobs to process data in many different ways at the same time.

What MapReduce Does

MapReduce is useful for batch processing on terabytes or petabytes of data stored in Apache Hadoop.

The following are some of MapReduce's key benefits:

Simplicity: Developers can write applications in their language of choice, such as Java, C++, or Python, and MapReduce jobs are easy to run.
Scalability: MapReduce can process petabytes of data stored in HDFS on a single cluster.
Speed: With parallel processing, MapReduce can solve problems that used to take days in hours or minutes.
Recovery: MapReduce takes care of failures. If a machine holding one copy of the data is unavailable, another machine holds a copy of the same key/value pair and can be used to complete the same sub-task. The JobTracker keeps track of it all.
Minimal data motion: MapReduce moves compute processes to the data on HDFS, not the other way around. Processing tasks can run on the physical node where the data resides, which significantly reduces network I/O and contributes to Hadoop's processing speed.

Even though newer engines such as Apache Tez can process certain workloads more efficiently than MapReduce, tried-and-true MapReduce jobs continue to work and may benefit from ongoing efficiency improvements made by the Apache Hadoop open source community.

How MapReduce Works

A MapReduce job splits a large data set into independent chunks and organizes them into key/value pairs for parallel processing. Processing the chunks in parallel lets the cluster return solutions more quickly and more reliably.

The Map function divides the input into ranges according to the job's InputFormat and creates a map task for each range. The JobTracker distributes those tasks to the worker nodes. The output of each map task is partitioned into a group of key/value pairs for each reducer.
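To make the Map phase concrete, the following is a minimal sketch of a word-count mapper written against the standard org.apache.hadoop.mapreduce API; the class and variable names are illustrative, not taken from any particular tutorial referenced on this page.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Each map task receives one input range (split) produced by the InputFormat.
// For every line in its split, this mapper emits a (word, 1) key/value pair.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(line.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            // Each emitted pair is partitioned by key and routed to one reducer.
            context.write(word, ONE);
        }
    }
}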

The Reduce function then collects the various results and combines them to answer the larger problem that the master node needs to solve. Each reducer pulls the relevant partition from the machines where the map tasks executed, gathers the values for each of its keys, combines them, and writes its output back into HDFS.
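Continuing the same illustrative word-count sketch, the reducer below sums the counts for each word and writes the totals to HDFS, and a small driver (a hypothetical main method, again not from any specific tutorial) wires the mapper and reducer into a job.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Each reduce task pulls its partition of (word, 1) pairs from every map task,
// receives the values grouped by key, and combines them into a single count.
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : counts) {
            sum += count.get();
        }
        // The combined result for this key is written back into HDFS.
        context.write(word, new IntWritable(sum));
    }

    // Minimal driver: configures the job and points it at HDFS input/output paths.
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountReducer.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Packaged into a jar, a job like this is typically launched with the hadoop jar command, passing the HDFS input and output paths as arguments.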
