John Shelburne

Quick History of Apache Beam


In the early 2000s, a revolution in data processing began, one that would dramatically transform how we handle vast amounts of information. The shift was set in motion in 2004, when Google published its MapReduce paper, describing a model that simplified distributed data processing into two primary stages: mapping and reducing. The paper was public, but the implementation stayed inside Google, used exclusively for internal workloads.


Meanwhile, the open-source community, led by projects under the Apache Software Foundation, seized upon Google's MapReduce concept.


They developed Hadoop, a framework that would become a cornerstone in the world of big data processing. Hadoop built upon the foundations of MapReduce, using it to efficiently process data as key-value pairs across distributed systems. This development was a game-changer, offering a way to handle large-scale data processing on inexpensive hardware.





However, it soon became clear that MapReduce had its limitations. It excelled in situations where tasks could be neatly divided into mapping and reducing phases, but struggled with more complex scenarios that required multiple stages of these processes. This led to inefficiencies, particularly in the form of excessive disk read and write operations, slowing down the entire processing system.


Recognizing these challenges, Google built FlumeJava (often just called Flume, and not to be confused with Apache Flume), a layer that optimized and orchestrated MapReduce jobs. It let developers describe more complex graphs of operations, which were then compiled into a series of optimized MapReduce stages, significantly reducing repetitive disk reads and writes.


The evolution continued with the advent of Apache Spark, a processing engine renowned for its speed, reportedly up to 100 times faster than Hadoop MapReduce for some in-memory workloads. Spark's innovation lay in its approach to batch and stream processing: it evaluates lazily, building Directed Acyclic Graphs (DAGs) of a program's operations and executing them only when results are needed. Despite its speed, Spark initially struggled with handling out-of-order data elements in streaming workloads.


The Apache community addressed this limitation with Flink, a platform similar in scope to Spark but with improvements in speed, performance, and automatic optimization. Flink set itself apart by treating data streams as first-class citizens, making it a true stream processing engine. Its ability to manage out-of-order data elements through techniques like watermarking was a significant breakthrough.


On the Google front, a parallel development occurred with MillWheel, Google's internal stream processing engine, which, like Flink, focused on handling unbounded, out-of-order data streams effectively.

Throughout this period, Apache continued to expand its ecosystem with technologies like HBase, a wide-column store modeled on Google's Bigtable, along with other tools for scheduling and processing.


In 2014, Google shifted its strategy. Instead of releasing just papers, it started offering actual code, leading to the creation of Cloud Dataflow as part of the Google Cloud Platform (GCP). Cloud Dataflow represented a leap forward, offering a unified model for batch and stream data processing.


But a divide still existed between the open-source world and Google's proprietary systems. Code written for one could not easily be used in the other.


This is where Apache Beam came into play. In 2016, Google donated its Dataflow SDK to the Apache Software Foundation, where it became Apache Beam, a unifying layer that bridges the two worlds. Based on Google's Dataflow model, Beam lets developers define a pipeline once, in the SDK language of their choice, and run it unchanged on a variety of execution engines (runners), whether open-source engines like Spark and Flink or Cloud Dataflow on GCP. This portability brought a new level of flexibility and integration to the world of big data processing.
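To make that portability concrete, here is a minimal sketch of a Beam pipeline using the Python SDK. The input words and transform labels are purely illustrative; the key point is that the runner option, not the pipeline code, decides whether it executes locally, on Flink, on Spark, or on Cloud Dataflow.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Pick the execution engine ("runner") without touching the pipeline itself.
# DirectRunner executes locally; swapping in "FlinkRunner", "SparkRunner",
# or "DataflowRunner" runs the same code on those engines.
options = PipelineOptions(runner="DirectRunner")

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Create words" >> beam.Create(["beam", "spark", "flink", "beam"])
        | "Pair with 1" >> beam.Map(lambda word: (word, 1))
        | "Count per word" >> beam.CombinePerKey(sum)
        | "Print results" >> beam.Map(print)
    )
```

The same word-count graph is handed to whichever runner the options name, which is exactly the gap between the open-source engines and Cloud Dataflow that Beam was created to close.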
