Challenging Determinism: Generative AI as the Quantum Moment in Software Development
For decades, working in tech meant living in a highly deterministic world. A given input would always produce the same output, as long as the system state didn’t change, of course.
If something went wrong, like a bug, it was always a logical issue. Okay, most of the time it was the user who was defective, like seriously, who could have thought of that usage flow? But since developers are cool people (and because the product and sales teams are gently yelling at us), we adapt the code to handle more cases.
And now… we have generative AI!

Don’t get me wrong, LLM-powered apps are an awesome opportunity. We can now build software components that are highly adaptable to a broad range of inputs. That means chatbots that don’t instantly fail if you type something outside the expected options.
But it also means harder-to-reproduce failures and unexpected behaviors.
Anyway, it got me thinking: is Generative AI the quantum moment in software development?
A shaky parallel with the world of physics
Buckle up, we’re entering philosophical waters. First, let’s establish credibility with a historical citation.
The theory yields much, but it hardly brings us closer to the secret of the Old One. I am at all events convinced that He does not play dice.
Albert Einstein - December 1926.
This is perhaps Einstein’s most famous quote. It comes from a letter to the German physicist Max Born.
To me, it perfectly symbolizes the difficult transition between two worlds: one deterministic, where we can describe reality precisely with equations and the challenge is finding the right ones, and another where we must introduce a probabilistic component into those equations. This is exactly the feeling I personally have with Generative AI entering the software industry.
I’m not a physicist and I don’t want to twist history further to suit my narrative. If you’re interested in the topic, I recommend reading Si Einstein avait su (If Einstein Knew) by Alain Aspect, Nobel Prize in Physics 2022.
What does this mean for a software engineer?
At first, it meant new, exciting possibilities. Why spend time creating a text parser when an AI can do it for me? My first truly satisfying experience with GenAI-embedded software development was realizing that I could easily build apps adapted to a huge, effectively infinite range of text inputs. Just ask the AI to convert them into a structured format or decide which part of the program to call next.
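To make that concrete, here is a minimal sketch of the “convert free text into a structured format” idea. Everything here is my own illustration: `call_llm` is a hypothetical helper, stubbed with a canned reply instead of a real model API, and the intent/order-id schema is invented for the example.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call, stubbed with a canned reply.
    In a real app this would hit your model provider's API."""
    return '{"intent": "refund", "order_id": "A-1042"}'

def parse_user_message(message: str) -> dict:
    """Ask the model to turn free-form text into JSON, then validate
    the result before the rest of the program relies on it."""
    prompt = (
        "Extract the intent and order id from the message below. "
        'Reply with JSON only, e.g. {"intent": ..., "order_id": ...}.\n\n'
        + message
    )
    raw = call_llm(prompt)
    data = json.loads(raw)  # fails loudly if the model ignored the format
    assert {"intent", "order_id"} <= data.keys()
    return data

result = parse_user_message("Hi, I'd like my money back for order A-1042")
print(result["intent"])  # → refund
```

The point is the shape of the pattern: the model absorbs the infinite variety of inputs, and a thin deterministic layer (`json.loads` plus a schema check) decides whether its answer is usable.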
But soon, a new type of challenge emerged. How do I properly track the performance of my software when it can handle an infinite number of scenarios? And no, I don’t just want to create an infinite number of tests (this one’s for the mathematicians in the room).
Worse, I just received a detailed bug report from my product manager… but I can’t reproduce it. Every call to the LLM produces different output.
How am I supposed to fix this now?
So you start begging the LLM to follow instructions; you provide more examples for few-shot learning, and guidelines in a beautifully crafted README. And still, you can’t tell whether you’re actually making progress (no metrics, no way to measure).
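One cheap way out of the “no metrics” trap is to stop judging single runs and measure a pass rate over many. The sketch below is only illustrative: `call_llm` is a hypothetical, deliberately flaky stub, and `follows_format` stands in for whatever check matters in your app.

```python
import random

def call_llm(prompt: str, rng: random.Random) -> str:
    """Hypothetical, deliberately flaky model stub: sometimes it follows
    the 'reply with a single word' instruction, sometimes it rambles."""
    return rng.choice(["refund", "refund", "refund, because the user said so"])

def follows_format(answer: str) -> bool:
    # One bare word, no extra commentary.
    return answer.strip().isalpha()

def pass_rate(prompt: str, runs: int, seed: int = 0) -> float:
    """Run the same prompt many times and report how often it passes."""
    rng = random.Random(seed)  # seeded so the evaluation itself is reproducible
    passes = sum(follows_format(call_llm(prompt, rng)) for _ in range(runs))
    return passes / runs

rate = pass_rate("Classify this message with a single word.", runs=100)
print(f"{rate:.0%} of runs respected the format")
```

Now a prompt tweak either moves the number or it doesn’t, which is a far better signal than rereading one lucky output.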
Determinism was already an illusion for many
Now it’s time to confess: I lied to you (not a big lie, I promise). In fact, I’ve already had to deal with non-deterministic software development, or more precisely, non-deterministic software architecture.
During my Ph.D., I worked on handling massive log data streams: petabytes per day. Part of my work was improving the log parsing process by proposing a new method with lower algorithmic complexity (constant time, by the way, and yes, I’m very proud of it).
But… it still wasn’t enough to handle the full workload on a single machine.
Easy, right? Just use multiple machines in parallel and tadaaa... problem solved!
Then you realize that every machine needs to access and edit the same data structure to avoid inconsistencies (so you don’t end up with two different parses for the same log depending on which machine processed it).
In the end, we “cheated” a bit: we adapted our algorithm to avoid modifying the memory structure at runtime, which meant we could store it on every machine and update it at fixed intervals. [Paper here.]
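The “cheat” above can be sketched in a few lines. This is my own toy illustration, not the paper’s actual algorithm: `TemplateStore` and `Worker` are invented names, and the shared structure is just a dict standing in for the real parsing state.

```python
class TemplateStore:
    """Stands in for the authoritative shared structure
    (e.g. the learned log templates)."""
    def __init__(self):
        self._templates = {"v": 1}

    def snapshot(self) -> dict:
        return dict(self._templates)

    def update(self, key, value):
        self._templates[key] = value

class Worker:
    """One machine: parses against a local, read-only copy."""
    def __init__(self, store: TemplateStore):
        self._store = store
        self._local = store.snapshot()  # frozen from the worker's point of view

    def parse(self, line: str):
        # Hot path: reads only the local copy, no locks, no network.
        return (line, self._local["v"])

    def refresh(self):
        # Called at a fixed interval, not once per log line.
        self._local = self._store.snapshot()
```

Between refreshes a worker may parse against a slightly stale copy, which is exactly the trade-off: you accept bounded staleness in exchange for a lock-free, network-free hot path.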
In fact, I was far from the only one facing this issue. At some point, talking with a friend at Google, I realized the nightmare he was living: most of his challenges involved designing systems that keep working even though the databases or services are too large to fit on a single machine, which forces a lot of trade-offs.
This also means that two of your calls might produce different results depending on which server handles them, since data synchronization takes time.
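Here is a toy model of that effect, purely illustrative: two replicas serve reads, writes land on one, and the other only catches up when `sync()` runs, so two identical reads can disagree in between.

```python
class Replica:
    def __init__(self):
        self.data = {}

    def read(self, key):
        return self.data.get(key)

class Cluster:
    def __init__(self):
        self.primary = Replica()
        self.secondary = Replica()
        self._turn = 0

    def write(self, key, value):
        self.primary.data[key] = value  # the secondary lags behind

    def read(self, key):
        # Round-robin load balancing: callers don't choose the server.
        self._turn ^= 1
        replica = self.primary if self._turn else self.secondary
        return replica.read(key)

    def sync(self):
        # Replication eventually catches up.
        self.secondary.data = dict(self.primary.data)

cluster = Cluster()
cluster.write("user:42", "premium")
first, second = cluster.read("user:42"), cluster.read("user:42")
print(first, second)  # the two reads disagree until sync() runs
```

Real systems replace the explicit `sync()` with asynchronous replication, but the visible symptom is the same: the answer depends on which server your call happened to hit.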
Anyway
I tried really hard to end this essay with something catchy, something you could take out of context on social media for likes (and hateful comments).
But in the end, my conclusion is generic: everything is a matter of trade-offs. The only question you should be asking yourself is: If this step fails or goes wrong, is it really a problem?
The answer depends on your business and application context. If certain cases are critical, you can embed tests directly in the application code, implement fallbacks, and so on.
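For the critical cases, the “embed tests and fallbacks” idea can look like this. A hedged sketch, not a recipe: `llm_extract_amount` is a hypothetical helper, stubbed here with a misbehaving answer, and the euro-amount regex is just one plausible deterministic fallback.

```python
import re

def llm_extract_amount(text: str) -> str:
    """Hypothetical model call, stubbed with a badly formatted answer."""
    return "around twelve euros, I think"

def regex_extract_amount(text: str):
    """Deterministic fallback: find an explicit euro amount, if any."""
    match = re.search(r"(\d+(?:\.\d+)?)\s*€", text)
    return float(match.group(1)) if match else None

def extract_amount(text: str):
    answer = llm_extract_amount(text)
    try:
        return float(answer)               # embedded test: is this even a number?
    except ValueError:
        return regex_extract_amount(text)  # fall back when the model misbehaves

print(extract_amount("Invoice total: 12.50 € incl. VAT"))  # → 12.5
```

The validation lives in the application code itself, so a model that goes off-script degrades into a boring deterministic path instead of corrupting the step downstream.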
Anyway, I just wanted to share these thoughts with you today. I’m genuinely curious about the future. As software engineers, we’ve just received brand-new tools, I have no doubt that time will refine them, and best practices will continue to emerge.
And… if you’ve made it this far, please subscribe and share this work around you. Fly away…