Some of the more vocal AI pioneers are celebrating the end of software engineering. They are confusing output with quality. The machine generates output, but it does not understand the system that output enters: its dependencies, constraints, or consequences.
Rod Johnson — creator of the Spring Framework and long-time JAX keynote speaker — puts this shift into a clear engineering perspective: “Enterprise AI is an application development problem. It’s not a data science problem, it’s not an ML problem.”
This reframes the situation entirely. The challenge is no longer generating output, but integrating it into systems that can carry it.
John Davies — enterprise software veteran and long-time contributor to the JAX community — makes the operational requirement explicit: “If your business depends on AI, you need control, consistency, and the ability to test changes before you roll them out — because without that, it’s impossible to build reliable systems.”
Enterprise AI becomes an engineering problem
AI systems introduce non-deterministic behaviour into environments that have historically been deterministic. Clear inputs, predictable outputs, stable boundaries — these assumptions no longer hold in the same way. The result is not theoretical complexity, but practical friction in real systems.
Torsten Bøgh Köster — search and operations engineer — describes this shift precisely:
“We now have the entropy of the real world inside our applications — combined with the non-determinism of LLMs. The challenge is to make these interactions observable, testable, and controllable.”
The core capabilities that have always defined strong engineers become more critical, not less: defining system boundaries, managing dependencies, ensuring observability, maintaining control over behaviour in production. What changes is the context in which these capabilities are applied.
Enterprise AI is therefore not about using AI tools. It is about integrating probabilistic components into systems that still need to be reliable, understandable, and controllable.
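One way to picture what “integrating probabilistic components into controllable systems” looks like in practice is to wrap the non-deterministic call in a deterministic contract: validate the output’s structure, retry on violation, and fail closed. The sketch below is a minimal illustration, not a reference implementation; `call_model` is a hypothetical stub standing in for any real LLM client, and its simulated failure rate is invented for the example.

```python
import json
import random

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call.
    Simulates non-determinism: sometimes returns unusable free text."""
    if random.random() < 0.5:
        return "Sure! Here is your answer."  # violates the expected contract
    return json.dumps({"category": "billing", "confidence": 0.9})

def classify(prompt: str, retries: int = 3) -> dict:
    """Deterministic boundary around a probabilistic component:
    validate structure, retry on contract violation, fail closed."""
    for _ in range(retries):
        raw = call_model(prompt)
        try:
            result = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry instead of propagating it
        category = result.get("category")
        confidence = result.get("confidence", -1.0)
        if isinstance(category, str) and 0.0 <= confidence <= 1.0:
            return result  # output satisfies the boundary contract
    return {"category": "unknown", "confidence": 0.0}  # safe fallback

random.seed(0)  # fixed seed so the sketch behaves reproducibly
print(classify("Customer asks about an invoice"))
```

The point of the pattern is that callers of `classify` never see raw model output: whatever the model does, the rest of the system receives either a value that satisfies the contract or an explicit, testable fallback.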
Understanding becomes the second constraint. Russ Miles — software architect and researcher on the relationship between humans and AI, and a keynote speaker at JAX London — points to the underlying risk: “We also need to talk about cognitive debt: do you really understand how it works? Then there’s intent debt: it’s when we’ve forgotten why it works the way it does. Without intent, the agents can’t work well — and neither can we.”
That is the focus of JAX London.
The conference brings together engineers who design and operate systems under real conditions — and extends that perspective into AI. Not as a separate discipline, but as part of the same responsibility: making systems work.
The program reflects this focus end-to-end:
- how AI components are integrated into system architectures
- how control is maintained over generated code and agentic behaviour
- how systems remain observable, testable, and stable in operation
- how platforms define the boundaries and governance of AI usage
- how teams take responsibility for systems that no longer behave purely deterministically
These are not isolated topics. They describe a coherent engineering problem.
AI increases the demands on engineering. It requires more structure, more discipline, and more clarity about how systems behave under real conditions. The ability to generate code is no longer the limiting factor. The ability to design systems that can carry that code is.
JAX London has always been about that ability. Now it applies it to the next step: Enterprise AI.




