Breaking the Internet: Scott Aaronson’s 2050 Forecast for Quantum Computing and Cryptography
Why the time to worry about post-quantum security was yesterday.
Scott Aaronson has dedicated his life to researching the capabilities and limits of quantum computers. Tom McCarthy, Entrepreneur in Residence at Nebular, and I sat down with Scott at the Q2B Silicon Valley Conference in Santa Clara last December to talk about the next 25 years of quantum computing, artificial intelligence, and cryptography. We explore whether classical AI might eclipse quantum computing, how post-quantum cryptography will upend global security, and why the rise of superintelligence both excites and deeply unsettles Scott.
We also debate whether a quantum teleporter would actually be a suicide machine, how advanced AI might help us uncover the fundamental nature of reality, and the disturbing possibility that future superintelligences may compete with one another for dominance, treating humans as collateral.
Scott argues that the time to prepare for a post-quantum world isn’t years away - it’s right now.
Watch the full conversation here:
Quantum Computing Has an AlphaFold Problem
One of the most persistent myths about quantum computing is that it will outperform classical systems across all manner of computational tasks.
But Scott sees a newer challenge: the speed at which classical AI is improving.
While AI may help us build quantum computers, the reverse is less clear. Scott explained that the kinds of exponential quantum speedups we know how to achieve rely on specific mathematical structures that most AI optimization problems do not share.
The research around “quantum LLMs” is still nascent. Scott also argues that classical AI may outpace quantum computing in many of the domains where quantum machines were expected to add significant power, including simulating protein structures. AlphaFold, which deservedly won the 2024 Nobel Prize in Chemistry, is a case in point. It is also why John Preskill of Caltech, speaking at the same conference as Scott, gave a talk with a slide entitled “Will AI Eat Quantum’s Lunch?”
For quantum computers to be useful in a given domain, they must beat classical computers there. And it’s not clear they will when it comes to predicting protein conformations.
The hope is that there are still plenty of things AlphaFold cannot do. We just need working quantum computers to start figuring them out.
There Will Be Cryptographically Relevant QCs by 2050
But we do know that quantum computers won’t be useless. If there is one place Scott is confident quantum computers will matter, it’s cryptography. If these machines don’t exist in 25 years, Scott wonders whether that means we will have killed ourselves off as a species.
By 2050, Scott expects cryptographically relevant quantum computers to exist - machines capable of breaking much of the public-key infrastructure that secures the internet today. That doesn’t mean the end of security, but it does mean a painful transition to new architecture.
This isn’t just a live possibility. It is going to happen, and perhaps sooner than Scott expects.
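To make the threat concrete: Shor’s algorithm breaks RSA by finding the period of modular exponentiation, and everything around that one step is ordinary classical number theory. The sketch below (my own illustrative code, not from the conversation; the helper names are hypothetical) runs the classical reduction on a toy semiprime, using a brute-force loop where a quantum computer would use efficient period-finding. On real key sizes that loop is exponentially slow, which is exactly the gap a cryptographically relevant quantum computer closes.

```python
from math import gcd

def find_order(a, n):
    """Smallest r with a^r ≡ 1 (mod n). This brute-force search is the
    one step Shor's algorithm replaces with a fast quantum subroutine."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_sketch(n):
    """Factor n via the classical reduction used in Shor's algorithm."""
    for a in range(2, n):
        d = gcd(a, n)
        if d != 1:
            return d  # lucky draw: a already shares a factor with n
        r = find_order(a, n)
        # If the order is even and a^(r/2) is not -1 mod n,
        # then gcd(a^(r/2) - 1, n) is a nontrivial factor.
        if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
            return gcd(pow(a, r // 2, n) - 1, n)
    return None

print(shor_classical_sketch(15))  # → 3 (a toy RSA-style modulus: 15 = 3 × 5)
```

The point of the sketch is that RSA’s security rests entirely on period-finding being hard for classical machines; once a quantum computer can do that step on 2048-bit moduli, the rest of the break is trivial arithmetic like the above.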
That’s why Nebular incubated Project Eleven, which is helping to prepare crypto for the post-quantum world that is fast approaching.
Scott’s warning is blunt. If you take one thing away from this post, take heed now:
If your privacy and security depends on pre-quantum cryptography, the time to worry isn’t in 2050; it was yesterday.
The Quantum Teleporter Is Probably a Suicide Machine
The discussion then drifted into the intersection of quantum physics and biology, which led us to one of the oldest and strangest questions in philosophy - and in Star Trek:
What, exactly, makes you, you?
Imagine a machine that can copy you perfectly and transmit that information to a far-flung planet in the Andromeda Galaxy, reconstructing you atom by atom while the original version of you on Earth is destroyed.
From the outside, nothing seems lost. The friends and family who arrived earlier at your new home in this far-flung binary star system can’t tell the difference, and neither can you. The copy remembers your life, your values, your unfinished thoughts, and everything else that makes you who you are. But is the copy really you? And what if they sent two?
Scott suggests approaching the question by asking: just how high-fidelity does the copy have to be before you’re willing to count that new version as you?
Or, if multiple indistinguishable versions of you could exist simultaneously, which one would you expect to be? Does your soul just “flip a coin,” with the universe randomly deciding which body you wake up in?
And if none is privileged, what does that say about survival, death, and the meaning of personal identity? Perhaps the philosopher Derek Parfit was right that what matters most for personal identity over time is psychological continuity, not physical continuity.
These are no longer just thought experiments. As brain connectome mapping and quantum teleportation technologies advance in the coming decades, questions once confined to philosophy seminars will become engineering problems.
Scott doesn’t rush to any definitive answers, but by 2050, technological progress will make these ancient questions impossible to ignore.
As Tyler Cowen pointed out in episode one, technology will solve some problems but create new ones.
The Wars Between Superintelligences & The Ultimate Nature of Reality
Scott is deeply unsettled by the rise of AI.
The most conservative scenarios place AI among the most consequential technologies humans have ever created.
More extreme scenarios involve superintelligences that compete with one another, form agreements we can’t understand, or simply sideline us altogether. Or they may simply squabble over territory, as humans have done throughout recorded history.
We may be gawked at in zoos by superintelligences.
But if one or more of these machines is willing to cooperate with us, it may help us answer profound unsolved problems in science and shed light on the ultimate nature of reality:
Examples include the P vs NP problem, writing down the correct quantum theory of gravity, explaining the constants of the Standard Model, and elucidating the origin of the Big Bang and life on Earth.
For some of these problems, we have a clear idea of what a solution would look like. For P vs NP, for instance, a solution is just a mathematical proof.
On the other hand, we don’t know whether there will ever be a satisfying answer to questions like the origin of life on Earth. Perhaps it is a freak accident - a brute fact - as Bertrand Russell conjectured.
And then there are these philosophical questions like explaining consciousness, or explaining why there is something rather than nothing, where it isn’t even clear what sort of answer we would want in the first place.
If there is an answer to these deep questions, could it even be understood in principle by an infinite intelligence? And if so, would it be able to explain it to us in any way that would be useful?
The answers to these questions depend on whether what Gottfried Leibniz, the co-inventor of calculus, called the Principle of Sufficient Reason is true. If it is, then there are no freak accidents or brute facts. Everything that exists has an explanation that can, in principle, be known.
For our sake, I hope this principle holds. If it does, we may one day understand the reason for life, the universe, and everything.
Having it all be a brute fact would be an odd state of affairs.
