ETL, or more recently ELT, has been the go-to solution for moving data across the organisation to satisfy new data-driven use cases, with copying all data into a central repository as the architecture of choice. Given the challenges faced by centralised data teams and the lessons learned from building microservices, many organisations are moving toward decentralised data architectures, rooted in the ideas of Domain-Driven Design and recognising the boundaries and differences across business domains. In this new scenario, traditional ETL shows its limitations: extracting data-on-the-inside from operational applications is fragile in the face of change, and putting data at rest by default constrains us, forcing reverse-ETL to bring the data back into motion. A data-in-motion-first approach, leveraging streaming technologies such as Apache Kafka and Apache Flink, enables new data-driven products and real-time insights that are unachievable with extract-and-load. Inverting the responsibilities of producer and consumer, defining clear data contracts, and managing data as products makes the architecture respond better to change and removes bottlenecks.

In this talk, Lorenzo and Tareq will use real-world examples from their work with consultancies and product companies to illustrate how a mind shift in data engineering and architecture enables a more evolutionary, consumer-centric approach and extracts more value from data.