Apache Spark: A Look under the Hood

Before diving deep into how Apache Spark works, let's first understand the jargon of Apache Spark:

  • Job: A piece of code which reads some input from HDFS or the local file system, performs some computation on the data, and writes some output data.
  • Stages: Jobs are divided into stages. Stages are classified as map or reduce stages (it's easier to understand if you have worked on Hadoop and want to correlate). Stages are divided based on computational boundaries; all computations (operators) cannot be executed in a single stage, so the work happens over many stages.
  • Tasks: Each stage has some tasks, one task per partition. One task is executed on one partition of data on one executor (machine).
  • DAG: DAG stands for Directed Acyclic Graph; in the present context it is a DAG of operators.
  • Executor: The process responsible for executing a task.
  • Driver: The program/process responsible for running the Job over the Spark Engine.
  • Master: The machine on which the Driver program runs.
  • Slave: The machine on which the Executor program runs.

All jobs in Spark comprise a series of operators and run on a set of data. All the operators in a job are used to construct a DAG (Directed Acyclic Graph). The DAG is optimized by rearranging and combining operators where possible. For instance, suppose you submit a Spark job that contains a map operation followed by a filter operation. The Spark DAG optimizer would rearrange the order of these operators, since filtering first reduces the number of records that have to undergo the map operation.
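To make this concrete, here is a minimal sketch of such a job. It assumes a local SparkContext; the object name, app name, and the "input.txt" path are placeholders used only for illustration. Because map and filter are both narrow operators, the DAG scheduler can pipeline them into one stage and run them back to back on each partition.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object MapFilterJob {
  def main(args: Array[String]): Unit = {
    // Local master and app name chosen purely for this illustration.
    val conf = new SparkConf().setAppName("map-filter-demo").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    // "input.txt" is a placeholder path.
    val lines    = sc.textFile("input.txt")

    // A map followed by a filter: both are narrow operators, so the DAG
    // scheduler pipelines them into a single stage on each partition.
    val lengths  = lines.map(_.length)
    val longOnes = lengths.filter(_ > 80)

    // The action triggers the actual job; nothing runs before this point.
    println(longOnes.count())

    sc.stop()
  }
}
```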

DAG Execution

How Does Spark Work?

Spark has a small code base, and the system is divided into various layers. Each layer has some responsibilities. The layers are independent of each other.

  1. The first layer is the interpreter; Spark uses a Scala interpreter, with some modifications.
  2. As you enter your code in the Spark console (creating RDDs and applying operators), Spark creates an operator graph.
  3. When the user runs an action (like collect), the graph is submitted to a DAG Scheduler. The DAG scheduler divides the operator graph into (map and reduce) stages.
  4. A stage is composed of tasks based on partitions of the input data. The DAG scheduler pipelines operators together to optimize the graph. For example, many map operators can be scheduled in a single stage. This optimization is key to Spark's performance. The final result of a DAG scheduler is a set of stages (see the sketch below).
  5. The stages are passed on to the Task Scheduler. The task scheduler launches tasks via the cluster manager (Spark Standalone/YARN/Mesos). The task scheduler doesn't know about dependencies among stages.
  6. The Worker executes the tasks. A new JVM is started per job. The worker knows only about the code that is passed to it.
How Apache Spark works
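The steps above can be observed directly from the Spark console. The sketch below assumes a running spark-shell (where sc is already provided) and a placeholder "data.txt" file; RDD.toDebugString prints the operator graph (lineage) that the DAG scheduler will split into stages once an action runs.

```scala
// In spark-shell, sc (the SparkContext) is created for you.
// "data.txt" is a placeholder path used only for illustration.
val counts = sc.textFile("data.txt")
  .flatMap(_.split("\\s+"))
  .map(word => (word, 1))
  .reduceByKey(_ + _)   // wide dependency: forces a stage boundary (a shuffle)

// Nothing has executed yet; only the operator graph (lineage) exists.
// toDebugString prints that lineage, with indentation marking stage boundaries.
println(counts.toDebugString)

// The action submits the graph to the DAG scheduler, which splits it into
// two stages here: the map side of the shuffle and the reduce side.
counts.collect()
```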

Spark caches the data to be processed, allowing it to be up to 100 times faster than Hadoop. Spark uses Akka for multithreading, managing executor state, and scheduling tasks.
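As an illustration of that caching behaviour, the snippet below (again a sketch for spark-shell, with "events.log" as a placeholder path) marks an RDD to be kept in executor memory so that later actions reuse the in-memory partitions instead of recomputing them.

```scala
// A minimal caching sketch; "events.log" is a placeholder path.
val events = sc.textFile("events.log")
val parsed = events.map(_.split(","))   // parsing work we do not want to repeat

parsed.cache()   // ask Spark to keep the parsed partitions in executor memory

// The first action computes the RDD and caches it; subsequent actions reuse
// the cached partitions instead of re-reading and re-parsing the file.
val total  = parsed.count()
val errors = parsed.filter(fields => fields.nonEmpty && fields(0) == "ERROR").count()
```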

It uses Jetty to share files (JARs and other files), to perform HTTP broadcast, and to run the Spark Web UI. Spark is highly configurable and is capable of utilizing the components that already exist in the Hadoop ecosystem. This has allowed Spark to grow exponentially, and in a short time many organisations are already using it in production.
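As an example of that configurability, application settings can be passed through SparkConf. The values below are purely illustrative, not recommendations, and the master URL depends on which cluster manager (Standalone/YARN/Mesos) and Spark version you run.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Illustrative configuration only; tune the values for your own cluster.
val conf = new SparkConf()
  .setAppName("configurable-job")
  .setMaster("yarn")                      // reuse an existing Hadoop/YARN cluster
  .set("spark.executor.memory", "4g")     // memory per executor
  .set("spark.executor.cores", "2")       // cores per executor
  .set("spark.ui.port", "4040")           // port for the Jetty-backed Spark Web UI

val sc = new SparkContext(conf)
```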

 


About the Author:

Arush Kharbanda was a technical team member at Sigmoid. He was involved in multiple projects including building data pipelines and real time processing frameworks.