More talks in the program:
15:15 - 16:05
The MapReduce paper, published by Google more than 10 years ago (2004!), sparked the parallel processing revolution and gave birth to countless open source and research projects. Inside Google, the MapReduce model has since been officially retired; the data processing models that replaced it are called Flume (for defining the processing pipeline) and MillWheel (for orchestrating real-time dataflow). Externally, they are known as Cloud Dataflow / Apache Beam. They let you specify both batch and real-time data processing pipelines in Java and have them deployed and maintained automatically – and yes, Dataflow can spin up lots of machines to handle Google-scale problems.
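To make the starting point concrete, here is a toy word count in the classic MapReduce style, written in plain Java streams rather than the Beam API (a minimal sketch for illustration only; the class and method names are made up for this example):

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.Arrays;
import java.util.stream.Collectors;

public class WordCount {
    // Map phase: split each line into words.
    // Shuffle + Reduce phase: group identical words and sum their counts.
    static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+")))
                .filter(word -> !word.isEmpty())
                .collect(Collectors.groupingBy(
                        word -> word,           // key: the word itself
                        TreeMap::new,           // sorted output for readability
                        Collectors.counting()));// reduce: count occurrences
    }

    public static void main(String[] args) {
        System.out.println(wordCount(List.of("to be or not to be")));
        // prints {be=2, not=1, or=1, to=2}
    }
}
```

The post-MapReduce models discussed in the talk generalize exactly this map/group/reduce shape into composable pipelines that run unchanged over batch and streaming inputs.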
What is the magic behind the scenes? What is the post-MapReduce dataflow model? What does a streaming-first model look like? What are the pipeline optimization algorithms? Read the papers, or come for a walk through the algorithms with me.