Google Cloud Dataflow: hands-on with real-time data processing pipelines at Google scale
The MapReduce paper, published by Google more than ten years ago (in 2004), sparked the parallel processing revolution and gave birth to countless open source and research projects. Google itself has since moved past MapReduce: its successor is the “Dataflow model”, which you can see in action in the hosted Cloud Dataflow service and in its open source implementation, Apache Beam. Both let you specify batch and real-time (streaming) data processing pipelines and have them deployed and maintained automatically – and yes, Dataflow can spin up *lots* of machines to handle Google-scale problems.
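To ground the terminology, here is a minimal plain-Python sketch of the map / shuffle / reduce word count that these pipelines generalize. This is an illustration of the processing model only, not the actual Beam or Dataflow API; the input lines are made up for the example.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every line.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle_phase(pairs):
    # Shuffle: group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the per-word counts.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts["the"])  # → 2
```

In Beam, the same shape appears as a chain of transforms over collections; the service handles the parallelization, deployment, and scaling that this sketch runs serially on one machine.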