MapReduce architecture

The computing core of a Hadoop cluster is MapReduce, a Java-based framework tightly integrated with YARN. MapReduce provides the Java classes, interfaces, and runtime environment for building and running custom applications. Through these components, a MapReduce application interacts with the YARN Resource Manager to run its tasks on the cluster nodes where the YARN Node Manager is installed. HDFS DataNodes and YARN Node Managers are installed on the same cluster nodes; in this way, Hadoop binds together its storage and computing resources. For this reason, the names Node Manager and DataNode are sometimes used interchangeably, since they refer to the same physical nodes.

Operations

To process input data at an application's request, MapReduce performs the following operations:

  1. Split. The application defines how to divide the input files into pieces called splits. For example, if the application declares its input format as TextInputFormat, the MapReduce framework breaks the input data into lines.

  2. Map. This operation is performed by the map method defined in the application's Mapper class. This class is used to create map tasks on the DataNodes where the input HDFS blocks are located. The MapReduce framework calls the map method for every split assigned to the node. Each task produces multiple key/value pairs and saves them to the local file system. The output file is partitioned by key, which is necessary for the next operation. A minimal Mapper sketch is shown after this list.

  3. Shuffle. The system transfers the data generated by the previous operation to the DataNodes where the next operation can be performed. For each reduce task, the shuffle service supplies all key/value pairs that share a specific key.

  4. Reduce. This operation is performed by the reduce method defined in the application's Reducer class. This class is used to create reduce tasks on several DataNodes to aggregate the data produced by the map tasks. The framework calls the reduce method for each unique key together with all of the values received by the node for that key.
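The following Mapper is a minimal sketch of the Map operation described in step 2. It is a hypothetical word-count example (the class and field names are assumptions, not taken from this document) built on the standard org.apache.hadoop.mapreduce API:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical word-count mapper: emits a <word, 1> pair for every word of the input
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // With TextInputFormat, the key is the byte offset of the line
        // and the value is the line content
        for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);   // key/value pairs to be partitioned by key
            }
        }
    }
}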

As the list above shows, a MapReduce application must define Java classes for two main stages:

  • The Map stage is used to transform the input data into key/value pairs as defined in the application code.

  • The Reduce stage is used to aggregate the results coming from the map stage into another key/value format and save the output to HDFS. For each unique key it receives, the reduce task produces a single aggregated key/value pair (see the Reducer sketch after this list).
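The Reducer below is the matching sketch for the Reduce stage, again a hypothetical word-count example using the standard org.apache.hadoop.mapreduce API. For each unique word it receives, it writes one aggregated key/value pair to the job output in HDFS:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical word-count reducer: sums all counts received for a single word
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        result.set(sum);
        context.write(key, result);   // one output pair per unique key
    }
}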

If the input data originally resides on the local file system, you need to copy it to HDFS, which breaks the files into HDFS blocks stored on different DataNodes (a sample copy command is shown below). You can create a chain of MapReduce jobs where one job takes as input the HDFS files created by another job. The job output data is stored in HDFS for reliability. MapReduce and YARN are responsible for scheduling and monitoring tasks; when a task fails due to an issue in the cluster, YARN restarts it.
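For example, you can upload local input files to HDFS with the hdfs dfs -put command before starting a job (the paths below are hypothetical):

$ hdfs dfs -put /local/path/data.txt /user/test/input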

Components

To implement the operations mentioned above, MapReduce uses the following components (most of them belong to YARN):

  • A Hadoop client is used to start a MapReduce application.

  • The Resource Manager governs computing resources of the entire cluster.

  • A Node Manager manages resources on a cluster node.

  • The Application Master orchestrates the application lifecycle.

  • A YARN container provides the computing resources required by a map or reduce task running on the cluster node where that container exists.

MapReduce algorithm

You can start a MapReduce application as a job from a Hadoop client with a command similar to the following:

$ hadoop jar <jar_file_name> <class_name> <input> <output>

This command accepts the following arguments (this example does not show all possible arguments; a complete sample command is provided after the list):

  • <jar_file_name> is the path to the JAR file containing the application Java package.

  • <class_name> is the name of the application main class.

  • <input> is the path to the input file, directory, or a path template (for example, /path/to/folder/test*) in HDFS.

  • <output> is the path to the output directory in HDFS.
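For example, a hypothetical word-count application packaged as wordcount.jar with the main class WordCount could be started as follows (all names and paths are assumptions for illustration):

$ hadoop jar wordcount.jar WordCount /user/test/input /user/test/output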

MapReduce and YARN execute a MapReduce job (application) in the following order (for brevity, some operations are simplified):

  1. MapReduce (as a YARN client) submits the application to the YARN Resource Manager (see the driver sketch after this list).

  2. The Resource Manager creates the application-specific Application Master (MRAppMaster for MapReduce applications) in a YARN container on one of the nodes managed by a Node Manager.

  3. The Application Master requests computing resources from the Resource Manager on those DataNodes where the input blocks are stored. It then requests the Node Manager on each of those nodes to create a container with a map task. The number of map tasks equals the number of input blocks (excluding replicated blocks).

  4. MapReduce prepares the splits and runs the map method to process each split. Each map process periodically reports its progress to the Application Master. If a map process fails, the Application Master restarts the task.

  5. The map tasks process the data and save the output to the local file system. HDFS is not used here to avoid overloading the cluster with intermediate data.

  6. After the map tasks complete, the Application Master allocates containers for the reduce tasks.

  7. The Application Master requests the shuffle service, which runs on the same nodes where the map tasks have stored their data, to transfer (shuffle) that data to the nodes where the reduce tasks are created.

  8. The reduce tasks aggregate data in accordance with the application code and save the output to HDFS. Each task stores data in its own HDFS file.

  9. The Application Master closes the job.
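The driver class below is a minimal sketch of the application main class submitted in step 1. It assumes the hypothetical WordCountMapper and WordCountReducer classes sketched earlier and uses the standard org.apache.hadoop.mapreduce API to configure the job and submit it to YARN:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical driver class: configures the word-count job and submits it to YARN
public class WordCount {

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);

        job.setInputFormatClass(TextInputFormat.class);  // read the input line by line
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));    // <input> path in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // <output> path in HDFS

        // Submit the job to the YARN Resource Manager and wait for completion
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}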

See Quick start with MapReduce for more information.
