Forecast 2050 - Episode II - Transcript - Scott Aaronson
Quantum, Superintelligence and much much more.
Recorded December 11th, 2025 in Santa Clara, CA
Scott Aaronson believes the next 25 years will transform quantum computing, AI, and cryptography in ways we’re only beginning to understand. In this episode of Forecast 2050 with Nebular General Partner, Finn Murphy, and Entrepreneur in Residence, Tom McCarthy, the conversation ranges from whether classical AI might outdate quantum, to how post-quantum cryptography will reshape global security, to how the prospect of superintelligence inspires and terrifies Scott at the same time.
Along the way, they debate whether a quantum teleporter is actually a suicide machine, how advanced AI might shed light on the ultimate nature of reality, and the chilling possibility that superintelligences will fight amongst themselves for global dominance, leaving us humans by the wayside. Scott emphasizes that the time to start worrying about post-quantum security is now.
Transcript:
Finn Murphy: Scott, thanks so much for joining myself and Tom. I’m Finn Murphy, General Partner at Nebular. This is Tom McCarthy, Entrepreneur in Residence at Nebular. And we’re here with you, Professor at University of Texas at Austin: prolific blogger and computational theorist.
Looking 25 years into the future, whether it’s timelines for AGI and superintelligence, or timelines for cryptographically relevant quantum computers, or even more powerful quantum computers capable of real-world applications: do you see those two worlds intersecting, classical computing as the hard-charging driver of AGI, and quantum computing? Where do you see those two worlds overlapping?
How AI & Quantum Computing Interact Today
Scott Aaronson: Pretty much everything affects everything else in some ways. And certainly, people have been thinking for decades about what is the intersection between quantum computing and AI. So how could quantum computing help AI? How could AI help quantum computing?
Those are two very different questions. The second one is easier in a sense. I think that AI is already helping quantum computing, and it will probably massively help quantum computing, for the same reasons why it will massively affect everything that humans do.
We’ve already seen good results from using neural nets to decode quantum error-correcting codes, recognizing the error syndromes in real time and so forth. We’ve seen good results from using neural nets to optimize quantum circuits.
Whenever you’re doing experimental quantum computing, a large, large fraction of what you’re doing in practice is actually classical computing: all the code that controls the qubits, that tells the qubits what to do when, that compiles your abstract algorithm down to a sequence of pulses that get applied to the qubits.
That’s all classical, and I think AI is going to affect that as it affects so many other things. Now, for quantum computing to help AI, that’s harder, unfortunately. A narrative sort of got fixed in place maybe 15 or 20 years ago that quantum computing is just going to accelerate everything that we do with computers.
It’s going to help with everything people like in AI: recognizing faces, recognizing handwriting, optimizing vehicle routes. Why not have quantum LLMs? Quantum image models? All of this sounds good. It’s all catnip to investors, right?
And just about none of it maps onto any of the quantum algorithms that I, as a researcher in this field, actually know about, the ones we’re actually confident would give you an advantage over a classical computer. That’s the problem here. So in terms of quantum advantages for AI problems, this is something that I gave a talk about when I was an undergrad at Cornell in 1998.
The issue is, if you want a really huge speedup from a quantum computer, like the exponential speedup that we think you get for factoring numbers, and thereby breaking most of the public-key encryption that protects the internet right now, it seems to require exploiting very, very special structure in the problems that are being solved.
We could go into the reasons for that later if you want, but it just so happens that for the problems that we use for cryptography, many of them have that structure. For reasons of group theory and number theory. And that is what Peter Shor exploited 30 years ago when he invented his famous quantum algorithm for factoring.
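[Editor’s note: the “special structure” Shor exploited is periodicity. A quantum computer finds the period r of f(x) = aˣ mod N; the rest is classical number theory. A minimal illustrative sketch, with the period found by brute force and the function name our own:]

```python
from math import gcd

def factor_from_period(N: int, a: int):
    """Classical post-processing of Shor's algorithm: factor N given the
    period r of a^x mod N. Here r is found by brute force; the quantum
    computer's only job is to find r exponentially faster."""
    r, y = 1, a % N
    while y != 1:          # find the multiplicative order of a mod N
        y = (y * a) % N
        r += 1
    if r % 2:              # need an even period
        return None
    s = pow(a, r // 2, N)  # a^(r/2) is a square root of 1 mod N
    if s == N - 1:
        return None        # unlucky choice of a; retry with another a
    return gcd(s - 1, N), gcd(s + 1, N)

print(factor_from_period(15, 2))   # order of 2 mod 15 is 4, factors: (3, 5)
```

For N = 15 and a = 2 the period is 4, and gcd(2² − 1, 15) = 3 and gcd(2² + 1, 15) = 5 recover the factors.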
When you look at more generic combinatorial optimization problems, such as the traveling salesman problem or the problem of optimizing the weights of a neural net, the kinds of things that would arise in AI, there’s another quantum algorithm that’s relevant, called Grover’s algorithm.
Grover’s algorithm can very often deliver some advantage for those problems, but the advantage is much more modest than for Shor’s algorithm. It’s basically: if you needed n steps classically, then with Grover you need about √n steps. Now that sounds pretty good.
Certainly it’s better than nothing. The issue is that for the foreseeable future, running a quantum computer with error correction, as you’ll need for Grover’s algorithm, is going to incur an enormous overhead. Optimistically, that could be a factor of a million, let’s say.
Now, if you’re getting an exponential speedup, like for factoring, then even with a factor-of-a-million overhead you can still come out ahead. Way ahead. But with only a square-root speedup, an overhead like that can eat up the whole advantage.
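[Editor’s note: the arithmetic of the square-root speedup versus error-correction overhead can be checked directly. The million-fold overhead is the optimistic figure Aaronson quotes; the problem sizes are illustrative:]

```python
import math

# Break-even arithmetic for Grover's square-root speedup: n classical steps
# versus about sqrt(n) quantum steps, each costing ~1,000,000x more once
# error correction is included.
overhead = 1_000_000

for n in (10**12, 10**18, 10**24):
    grover_cost = overhead * math.isqrt(n)
    print(f"n = 10^{len(str(n)) - 1}: quantum wins: {grover_cost < n}")
```

With these numbers the quantum side only breaks even around n ≈ 10^12; the exponential speedup of factoring, by contrast, survives any constant overhead.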
Murphy: When you think back to 1994, that is going back more than 25 years. Shor’s algorithm was published in 94, so it had been published at that point. Were you optimistic in 98 about more quantum algorithms being developed?
Aaronson: It’s a good question. I mean, when I first learned about this subject in, I guess, 95 or 96, I was just a teenager. But I read a popular article about Shor’s factoring algorithm, and the way that it presented it was the way that so many popular articles still present it today: that it just tries all the possible divisors in parallel.
And I remember my reaction at the time was that this sounds like garbage. This sounds like physicists just don’t understand what they’re up against. And that can’t possibly scale. And then I had to learn something about it, to be sure. And when you learn about it, you find out it’s actually much more subtle than that.
And then you understand why you can get these huge speedups, but only by choreographing this pattern of interference of amplitudes, which involves taking advantage of very, very special structure in the problem that you’re solving. This is a weirder thing than any science fiction writer would have had the imagination to invent.
And of course I thought: could we improve over Grover’s algorithm? I was doing a summer internship at Bell Labs, and I went to find Lov Grover, who worked in that building, and told him my ideas for Grover’s algorithm, which were complete nonsense, which just didn’t work at all.
But Lov was kind enough to offer me an internship with him the following summer. And that’s where I met students of Umesh Vazirani from Berkeley, who were really at the frontier of understanding what kinds of speedups can you get and not get for various problems.
And I think from that point forward, I had the understanding, which I’ve retained to this day, that quantum speedups are a very special, miraculous thing. We have no right to demand of the universe that quantum computers be good for anything other than perhaps simulating quantum mechanics itself, which was Feynman’s original application.
And it’s sort of a miracle that they’re ever useful for anything beyond that. But it’s good not to be too greedy. So I’ve never started with the assumption that there have to be all these quantum algorithms. I just want to know the truth.
Murphy: Do you think as we develop quantum computers that are functional and useful - because there’s like an empirical feedback loop - that more people will be able to develop these algorithms? I mean, you’ve tried obviously.
Aaronson: Yeah, my students and I have proposed new quantum algorithms. Many other people have too; there’s one called DQI that was just discovered within the past few years. So certainly new quantum algorithms are being discovered. I expect more will be. I expect that once we can experiment with the devices, that will help.
Whenever I point out to the hypesters, let’s say, that all this stuff they’re saying about speedups for AI is not grounded in any quantum algorithms that we know, this is what they always fall back on: once we have the devices, then we’ll discover the algorithms later.
Now let me tell you the problem with that. Of course you can always look for algorithms empirically, you don’t need a proof that they work. In practice, people have done that in classical computing for decades with things like simulated annealing or for that matter, with deep learning, whose spectacular progress has been pretty much entirely empirically driven with no real understanding of why any of it works right now.
The trouble with quantum computing is that for it to be useful, it has to beat classical computing. And so we keep discovering new quantum algorithms, but we also keep discovering new classical algorithms.
And when the classical algorithms improve, that can make a quantum speedup go away. And that’s actually happened over and over in the history of this field. So the only thing that matters if we care about quantum speedup is the differential between the two. This is the point that I think people often miss.
Classical Computing May Outdate Quantum
Murphy: The reason why quantum computers matter is because reality is quantum. And in order to simulate reality, we need to be able to truly simulate those quantum states.
Aaronson: That was the original rationale by Feynman.
Murphy: But when you think about using neural nets and deep learning for effective approximation, take AlphaFold, where we’re able to get extremely close to the empirical output.
Do you think if we reach this parabolic escape trajectory with AI and deep learning, that the approximations will be so good that we won’t need quantum computers for simulating these physical quantum states?
Aaronson: That is possible. I mean, you were asking me before about the synergy between quantum computing and AI. In some cases, they’re also in competition with each other. In fact, John Preskill just gave a talk yesterday [December 10th, 2025] where he had a slide called Will AI Eat Quantum’s Lunch?
For any quantum simulation problem, you can map it onto a quantum computer in some sense. That is what quantum computers were designed to do. That was the original idea. But you can also use just classical machine learning to try to guess an ansatz for the wave function that would give you a good enough approximation.
And so we’ve already seen examples. Until a few years ago, protein folding was often put forward as a possible use case for quantum computers: to whatever extent quantum dynamics are important in how exactly a protein folds, maybe a quantum computer would be able to help with this massively important problem in biochemistry.
But now we have AlphaFold, which deservedly won the Nobel Prize in Chemistry, and which is such an unbelievably massive advance on protein folding that whatever the quantum computer does, it now has to beat AlphaFold, the latest version of it. The hope is that there are still plenty of things that AlphaFold can’t do.
There are still plenty of things where the best classical heuristics for simulating chemistry and condensed-matter physics, even running on supercomputers, whether that’s density functional theory or tensor-network methods, still don’t give quite good enough results.
And hopefully AI combined with data from a real quantum computer will be able to do better, at least sometimes, than AI by itself without the quantum computer. And you might need only a few big wins there to spur billion-dollar industries.
So that’s the hope. But it’s also possible that once we start running quantum computers, using them to simulate materials science and chemistry and so forth, we’ll get new data about quantum systems. We’re going to have a ton of data, which can then be fed into classical ML and which will then improve it.
And so then the question is for how long will there continue to be a need for quantum computers, for quantum simulation? I don’t think that AI is going to crack factoring.
Murphy: This was going to be my question to you, because the most popular use case for quantum computers is breaking one-way functions that are asymmetric.
Aaronson: Like specific one-way functions: the ones based on factoring, discrete logarithms, elliptic curves…
Murphy: And those are the areas where people have tried for hundreds of years to develop efficient algorithms, and no one has been successful. So don’t you think we will see quantum advantage there before, say, one of the new mathematics-based models is capable of generating an efficient factoring algorithm?
Aaronson: By their nature, revolutionary developments in math are very hard to predict. There are a fair number of mathematicians who believe that there is a fast classical factoring algorithm; we just haven’t found it yet. I am agnostic about that. Certainly what we can say is that progress in classical factoring algorithms, at least that we publicly know about, seems to have mostly hit a wall with this algorithm called the number field sieve, which uses time that is exponential in roughly the cube root of the number of digits of the number that you’re trying to factor.
Okay, maybe you can do better than that. But that would be, I think, a revolution in number theory. So either that happens or else quantum computers do get an advantage for this problem. And of course, we would like to know the truth of the matter. That’s why we do theoretical computer science to try to figure things out like that!
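[Editor’s note: the number field sieve’s heuristic running time is roughly exp(1.923 · (ln N)^(1/3) · (ln ln N)^(2/3)). A back-of-the-envelope sketch of what that means for common RSA key sizes, using the standard heuristic constant and ignoring lower-order terms:]

```python
import math

C = (64 / 9) ** (1 / 3)   # ~1.923, the standard heuristic NFS constant

def nfs_log2_cost(bits: int) -> float:
    """Approximate log2 of the number field sieve's running time for an
    n-bit modulus: exp(C * (ln N)^(1/3) * (ln ln N)^(2/3))."""
    ln_n = bits * math.log(2)
    return C * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

for bits in (512, 1024, 2048):
    print(f"RSA-{bits}: roughly 2^{nfs_log2_cost(bits):.0f} operations")
```

This gives roughly 2^64 for RSA-512 and 2^117 for RSA-2048, in the same ballpark as published security estimates.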
McCarthy: What have been the most surprising quantum algorithms developed in the past 25 years?
Aaronson: You mean since Shor’s and Grover’s algorithm? Yes. It’s a very good question. I mean, I would point to, for example, some of the algorithms based on quantum walks that have come out of the group of Ed Farhi, who used to be my colleague at MIT. They showed how you can get the full square root Grover speedup for searching an arbitrary game tree.
So, for example, for optimal play in a game like chess or Go, where you have to not just search a list of possibilities, but ask: is there a move I can make, such that for every move my opponent makes, there’s a move I can make, and so on, so that I win? Now, as a teenager at Bell Labs in the 90s, I had actually thought about that exact problem, and I formed an intuition that you’re not going to get a Grover speedup once the number of alternations becomes too great.
And in 2007 they were able to show that that’s wrong. Using intuitions from particle physics, they actually showed that you do get the Grover speedup there. But on the other hand, the Grover speedup is the best that you get. So again, there’s this limit. Now, if you want exponential speedups, they also had a beautiful paper in 2002 where they showed that there are certain very special graphs where a quantum walk can search exponentially faster than any possible classical walk. But these graphs tend to be things like two complete binary trees whose leaves are connected to each other by a random cycle.
And you start at the root of one tree and you’re trying to find the root of the other tree. So if you ever have that problem in real life, then a quantum walk is going to be a great solution. The challenge ever since that work has been to figure out: for the kinds of search or optimization problems that would arise in real life, would this kind of structure that enables an exponential quantum speedup ever arise there? And I would say we don’t know the answer to that question. But it seems like the picture that emerged is that you need some very special structure.
The more structure you have, the more potential there is for a quantum speedup. And the trouble is that the problems that arise in real life are mostly very messy. They don’t have the very regular structure, like we find in factoring or with these conjoined binary trees, that seems to enable these exponential quantum speedups.
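[Editor’s note: the “conjoined binary trees” graph from the 2002 quantum-walk result can be built in a few lines; the construction details and names here are our own illustration:]

```python
import random

def glued_trees(d: int, seed: int = 0):
    """Edge list for two complete depth-d binary trees whose 2^d leaves
    on each side are joined by a random alternating cycle."""
    n = 2 ** (d + 1) - 1                      # nodes per tree
    edges = []
    for offset in (0, n):                     # left tree, right tree
        for i in range(2 ** d - 1):           # internal nodes
            edges += [(offset + i, offset + 2 * i + 1),
                      (offset + i, offset + 2 * i + 2)]
    left = list(range(2 ** d - 1, n))         # leaves of the left tree
    right = list(range(n + 2 ** d - 1, 2 * n))
    rng = random.Random(seed)
    rng.shuffle(left)
    rng.shuffle(right)
    k = 2 ** d
    for j in range(k):                        # cycle: l0-r0-l1-r1-...-l0
        edges.append((left[j], right[j]))
        edges.append((right[j], left[(j + 1) % k]))
    return edges

edges = glued_trees(3)
print(len(edges))   # 2 trees x 14 edges + a 16-edge cycle = 44
```

Starting at one root, a classical walker gets lost in the exponentially large middle layers, while a quantum walk crosses to the other root efficiently.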
The Post-Quantum World of 2050
Murphy: I’d be really curious then, if we take the known algorithms (e.g. Shor’s) and we look forward to 2050. We believe there will be cryptographically relevant quantum computers by then.
Aaronson: Yeah, I do too. I do. Look, if you told me that by 2050, there wouldn’t be that, then I would want to know what had happened to civilization: did we just all kill ourselves off? Did AI take over the world? Did we just descend back into caveman technology?
McCarthy: There’s a dilution refrigerator somewhere, but nobody knows how to use it!
Aaronson: Exactly!
Murphy: There have been various periods in the history of cryptography where no message was known to be secure.
Aaronson: In principle, we’ve known about the one-time pad since the 1920s, and Claude Shannon did a rigorous analysis of it in the 1940s. So we knew that if you’re willing to share these very large keys and only use them once, then you can get theoretically unbreakable cryptography.
Murphy: I don’t know if the red phone from the Kremlin to the White House still uses one-time pads.
Aaronson: Yeah.
Murphy: Okay. Which is, I think, the only practical use of one time pads.
Aaronson: Well, no, I mean, the Soviet spies were using it during the Cold War, but sometimes they messed up and used the same pad twice. That was what enabled the forerunner of the NSA to break it, in what was called Project VENONA. And that was how they caught Julius and Ethel Rosenberg, for example.
Murphy: Oh, wow.
Aaronson: That all became declassified in the 90s.
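[Editor’s note: the one-time pad, and the two-time-pad mistake that VENONA exploited, fit in a few lines of Python. The messages and names here are illustrative:]

```python
import secrets

def otp_xor(key: bytes, msg: bytes) -> bytes:
    """XOR a message with a key of the same length."""
    return bytes(k ^ m for k, m in zip(key, msg))

m1 = b"ATTACK AT DAWN"
m2 = b"RETREAT TO HQ!"          # same length, for the reuse demo
pad = secrets.token_bytes(len(m1))

c1 = otp_xor(pad, m1)
c2 = otp_xor(pad, m2)

# Used once, the ciphertext is information-theoretically secure.
assert otp_xor(pad, c1) == m1

# Used twice, the pad cancels out: c1 XOR c2 == m1 XOR m2, which leaks
# the XOR of the two plaintexts -- the VENONA-style flaw.
assert otp_xor(c1, c2) == otp_xor(m1, m2)
```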
Murphy: If we go forward to this 2050 world where RSA and elliptic curves have been broken, do you think we just move to less efficient, more secure methods of cryptography? What do you think the cryptographic landscape is?
Aaronson: I think that is the overwhelmingly favored response right now. So we do have these apparently quantum-resistant public-key cryptosystems now, mostly based on lattice problems, or based on this thing very related to lattices called learning with errors. This came out of work by Oded Regev, who you mentioned, but also Ajtai and Dwork and a bunch of other people in the 90s and early 2000s.
And these cryptographic codes are based on problems that, again, have a lot of mathematical structure. You need that. You seem to need that for any public key cryptography. But it seems like it’s not the right kind of structure for Shor’s algorithm or for any of the other quantum algorithms that we know.
Basically, Shor’s algorithm cracks all the problems that are based on hidden structures in abelian groups, the kind where AB = BA. But to crack these newer systems based on lattices, you would need to find hidden structures in non-abelian groups, where AB ≠ BA. And for 30 years people have tried to generalize Shor’s algorithm from abelian to non-abelian groups, but they’ve almost entirely failed.
There are a few very special non-abelian groups where you can do it, and they’re not the right ones for these cryptosystems. That’s where things stand. Now, in 2017, NIST, the federal agency, announced a competition to try to formulate standards for post-quantum cryptography, to start the process of migrating.
And by 2022, they had announced winners of this competition, which were basically these lattice based systems. There were some other contenders, some of which were broken in the course of the competition. That’s the point. You’d like to discover that sooner rather than later.
But the lattice systems are still standing, and the process of migrating to those systems has already started. Okay, so I think Google has been migrating. Federal agencies are supposed to be ready to switch to post-quantum cryptography by 2031 or so, and to actually do it by 2035. I hope that will be enough time; they may have to accelerate the timeline depending on how things go. But this is a big headache. Those of us who are old enough remember the Y2K issue. The advantage with Y2K was that we knew exactly when the deadline was.
We don’t know exactly when this deadline is. And it’s a big headache. But hopefully, if it goes well, then we’re all just back where we started.
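[Editor’s note: a toy learning-with-errors instance, with parameters far too small to be secure, chosen only to illustrate the structure Aaronson describes:]

```python
import random

# Toy LWE instance: the public data (A, b) with b = A*s + e mod q hides
# the secret s behind small random noise e. Parameters are illustrative.
q, n, m = 97, 4, 8
rng = random.Random(1)

s = [rng.randrange(q) for _ in range(n)]                 # secret vector
A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]
e = [rng.choice([-1, 0, 1]) for _ in range(m)]           # small noise

b = [(sum(a_ij * s_j for a_ij, s_j in zip(row, s)) + e_i) % q
     for row, e_i in zip(A, e)]

# Recovering s from (A, b) is believed hard, even for quantum computers,
# once n is in the hundreds: the noise e hides the linear structure.
print(b)
```

Without the noise this would be ordinary linear algebra, solvable by Gaussian elimination; the small errors are what make the problem (conjecturally) hard.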
Murphy: You mentioned earlier before we were recording about where publications have started to slow down in terms of certain areas of quantum theory. How do you see this evolving over the next five years?
Aaronson: Look, if you look at what’s happening in hardware, we’ve now seen systems with two-qubit gates that are fully programmable, with all-to-all connectivity, and that are about 99.9% accurate. And that is actually past the theoretical threshold for fault-tolerant quantum computation, which means that if you can maintain that kind of accuracy in two-qubit gates as you scale up to thousands and then millions of physical qubits, then that should be enough, even just using known fault-tolerance methods, to do an arbitrarily long quantum computation.
So it feels like the basic building blocks, at least as of the last year or so, have now been demonstrated. And what remains is a huge engineering problem of scaling it up.
We don’t know how long that’s going to take. But here’s an analogy that people often reach for: nuclear physics research. In 1940, it was known that if you could extract enough U-235 from the U-238, or manufacture this plutonium, then you could build a bomb. But you would need an absolutely enormous amount.
Niels Bohr had this famous quote: it’s not going to actually happen because you would have to basically turn a whole country into a uranium enrichment factory. And then a few years later, after he’d escaped from Europe and come to America, he was taken on a tour of the Manhattan Project.
And he said, well, I see that that’s what you’ve done. So an enormous engineering effort is going to be required for quantum computing too. But I don’t think you can say any more that fundamental breakthroughs are needed.
And I think it’s gotten to the point where, first of all, if someone is worried about security, if they’re using cryptography or signature schemes based on elliptic curves or RSA or Diffie-Hellman, then the time to worry about that is now, if it wasn’t yesterday.
And I think there’s even a question here. When physicists were calculating exactly how much U-235 you need for a critical mass, or whatever, at some point they stopped publishing that, because at some point they were like: this is not helpful for us to publish. And at some point, really detailed research into breaking cryptography with a quantum computer reaches that kind of stage.
Murphy: Do you think a cryptographically relevant quantum computer will be active and live before the general public or ourselves are aware of it?
Aaronson: I’m not the person to ask that question.
McCarthy: Have you noticed the unexplained disappearance of the top researchers in the field?
Aaronson: One thing that we can say is that if there were this gigantic, secret effort to build a quantum computer, we would expect to see top people in this field disappearing on a huge scale.
Murphy: And you’re here!
Aaronson: Yeah. I’m probably not the one you would want for the actual engineering. But I know the people who are, and they haven’t been disappearing on that kind of scale. And in the Manhattan Project, it was astounding that they were able to keep it secret to the extent that they did.
But we have to remember, they were in a world, first of all, with wartime censorship of the media. And they were in a world without Twitter. So I like to joke that, if you tried to create a quantum computing Manhattan Project in the desert today, in ten minutes it would be a trending hashtag, like what’s going on in Los Alamos. I think it’s way, way harder to keep things on that scale secret today.
Quantum Computing & Biology
McCarthy: One thing you write about is human cloning, and one of the promises of quantum computers is that they’re going to be faster at simulating. So hopefully, as time goes on, we should be able to simulate larger and larger systems. What do you think of quantum computers eventually enabling some sort of consciousness cloning, or getting sufficiently large that they could, in full fidelity, reproduce some portion of the brain?
Aaronson: Yeah. So the issue is, if you just take the accepted, conventional beliefs about neuroscience, it seems like, at the point when neurons are doing something that is relevant for cognition, each neuron is either firing or not firing, and which one is a very classical event. And so the conventional view would certainly say that you don’t need a quantum computer; a quantum computer shouldn’t even particularly help you very much for simulating a brain, or at least simulating those aspects of it that are cognitively relevant. Maybe you’re not going to simulate all the detailed molecular dynamics, but those you can just sort of integrate over anyway.
Certainly the incredible success that people have had over the past decade with deep learning is consistent with that hypothesis: just building a classical neural net, scaling it up enough, and training it on a huge amount of data works better than even the most wild-eyed optimists, I think, would have thought it would, decades ago. If there is a fundamental ceiling to what that approach can give us, I don’t think we know yet what it is.
There’s been speculation for a long time that consciousness is mysterious, quantum mechanics is mysterious, maybe they’re related somehow. And I get the appeal of that, because they are two of the most mysterious things that humans have ever found about the world. And the mysteriousness of quantum mechanics very much is about how you pass from the third-person description of the world, which in quantum mechanics is just this vector of amplitudes, this sort of giant list of potentialities, to our actual experience, which of course involves one specific world.
And so understanding how you get from physics to our experience is something Democritus and the other ancient Greeks worried about thousands of years ago. It’s been a problem since long before quantum mechanics. Quantum mechanics seems to add a new twist to that problem.
So I get the appeal of looking for a connection. Now, the problem that you find when you try to do that is that the brain seems like a very hot, wet, noisy environment. It seems like no place for a qubit to survive for any appreciable length of time. And I mean, we have a hard enough problem building reliable quantum computers when we can cool things to 10 millikelvin and control everything with lasers.
How on earth are you going to get a quantum computation to work in the environment of the brain? So there’s that issue. And then there’s the issue that the kinds of things where a quantum computer gives you an advantage, factoring or quantum simulation, are not a very good map to what would have had a lot of survival value on the African savanna, let’s say.
So I don’t see the evidence right now that the brain is doing something that I would want to interpret as quantum computation. 12 years ago, I wrote maybe the craziest thing that I ever wrote, in this long essay called The Ghost in the Quantum Turing Machine. And what I explored there was the question of: even if we say that the cognitive behavior of the brain is basically classical?
There’s still clearly a lot of randomness in the process, or a lot of what conventional neuroscience would model as randomness. Take a particular sodium-ion channel: whether it opens or closes is explicitly modeled by the Hodgkin-Huxley equations in neuroscience as a stochastic event, driven by basically molecules just bouncing around randomly.
Now, if you really pushed to the limit, that would depend on quantum-mechanical events. And so you would have to say, and in fact those who believe in the many-worlds interpretation of quantum mechanics do say, that if you watched a person for long enough, and if you had a God’s-eye view of the complete state of the universe, you would see a superposition evolve where, with some amplitude, this person makes one choice, and with some amplitude they make a different choice, or they have a different experience.
Each version of them will only perceive one, right? How much detail would you need in order to actually make a copy of someone, or to get a good enough model of someone that you could not only build an intelligent machine in general, but predict the choices of this specific person?
And plausibly, in order to do that, you would just need a copy that is so accurate that you would not be able to create it without killing the person.
Murphy: But don’t we know that in biological systems, quantum tunneling is a key process in photosynthesis, and so biology already uses quantum processes?
Aaronson: It does. Quantum mechanics is clearly relevant to how photosynthesis works. It’s also relevant to how European robins navigate, how they detect the Earth’s magnetic field. But in some sense it’s not so surprising, because all of chemistry is quantum-mechanical if you go down to a small enough scale.
And so, quantum effects are there whether you want them or not. And so, of course, biochemistry at a small enough scale is going to be selected for exploiting those effects. That still doesn’t help if our goal is to make this cognitively relevant.
Because then we say: okay, if we really think that cognition lies at the level of neurons firing or not firing, and synapses storing the memories, then that all seems pretty classical at that scale. It all seems like you could even imagine nanobots in the future that would just make a copy of the whole connectome of your brain, and the strength of each synapse, while you were not even hurt by it, not even noticing.
McCarthy: Sitting in a podcast.
The Quantum Teleporter as a Suicide Machine
Aaronson: Yeah, exactly. Sitting in a podcast, talking, while they worked. But suppose you’re balanced between two possible choices that are both plausible choices you could make, just like an LLM that is balanced between two possible tokens that it could output and is just making the choice randomly.
If we say, well, some of that randomness is actually not randomness, it’s actually really important, and I want to know which specific choice you’re going to make, then maybe for that you would have to go to the quantum level, and you wouldn’t be able to otherwise. It might be that if I knew the complete quantum state of your brain, I could make more accurate predictions than I can in fact make, not being able to learn the complete quantum state of your brain.
So now you could ask the question: imagine in the future we have these teleportation machines. If you want to travel to Mars in just 20 minutes, there’s just some nanobots that encode everything in your brain as information. They email you to Mars. On Mars, a new version of you gets reconstituted.
The original maybe just gets painlessly euthanized. Don’t worry about it. So there’s like this old question and philosophy. Would you agree to do that? And part of it is the question is like just how high fidelity does the copy have to be right before you’re willing to count that new version as actually you and not just some other thing that is a lot like you do.
Do you have to get it right all the way down to the level of the quantum state, in particular?
Murphy: Well, do you think it matters? Because this is like the Star Trek fans online debate: Is the teleporter a suicide machine? Because every time you go, it’s you. And as far as the rest of the universe is concerned, you were exactly the same. From a philosophical point of view: are you the same person?
Aaronson: We could say that there is an empirical question here. And then there is also a metaphysical question. There’s an empirical question of like, just how accurate of a copy could we make without damaging the original copy. And this is even just from the perspective of a third person observer. Can we make a good enough copy?
But then there’s the metaphysical question, which is: supposing that we could make a copy that was so good that your family, your best friends, could never tell the difference, still, would it be different from your standpoint?

Would you actually expect to wake up as this other copy? What if there are three copies running around? Which one are you? Which one do you expect to be? Does your soul just flip a coin and decide randomly which one to go to? Is it somehow all of them at once?
Now, interestingly, there is this famous procedure called quantum teleportation, which was discovered in the early 1990s. Quantum teleportation actually can transfer a quantum state from one place to another using only classical communication plus pre-shared entanglement.

But it’s an inherent feature of quantum teleportation that it destroys the original copy in the process of making the new one. You have to destroy it. And the reason why is a basic fact about quantum mechanics called the no-cloning theorem, which says there is no physical procedure to make a perfect copy of an unknown quantum state.
Classical information can be copied, but quantum information cannot. Does that have anything to do with the privacy or the non-duplicatability that we imagine our personal identity to have? I don’t know if there’s any connection between quantum mechanics and these sorts of philosophical questions. But if there were one then that is where I would be looking.
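The protocol Scott describes can be illustrated with a toy state-vector simulation. The following is a minimal sketch in plain NumPy (my own construction, not from the conversation or any quantum SDK): Alice holds an unknown qubit plus half of a Bell pair, applies a CNOT and a Hadamard, and measures her two qubits, which destroys her copy of the state; she then sends Bob two classical bits, and Bob's Pauli corrections reconstruct the original state on his qubit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-qubit gates
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    """Tensor product of several operators, qubit 0 = most significant."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# A random "unknown" state |psi> = a|0> + b|1> on Alice's qubit 0
amps = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = amps / np.linalg.norm(amps)

# Bell pair |Phi+> shared between qubit 1 (Alice) and qubit 2 (Bob)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)  # full 3-qubit state, 8 amplitudes

# CNOT with control = qubit 0, target = qubit 1, as an 8x8 permutation
CNOT01 = np.zeros((8, 8), dtype=complex)
for i in range(8):
    b0, b1, b2 = (i >> 2) & 1, (i >> 1) & 1, i & 1
    j = (b0 << 2) | ((b1 ^ b0) << 1) | b2
    CNOT01[j, i] = 1

state = CNOT01 @ state          # Alice: CNOT(0 -> 1)
state = kron(H, I, I) @ state   # Alice: Hadamard on qubit 0

# Alice measures qubits 0 and 1 -- this is what destroys her copy
probs = np.abs(state) ** 2
outcome = rng.choice(8, p=probs / probs.sum())
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

# Bob's (unnormalized) qubit, conditioned on Alice's measurement result
bob = np.array([state[(m0 << 2) | (m1 << 1) | k] for k in range(2)])
bob /= np.linalg.norm(bob)

# Bob applies corrections based on the two classical bits he receives
if m1:
    bob = X @ bob
if m0:
    bob = Z @ bob

# Bob's qubit now matches |psi> (up to global phase): |<psi|bob>| = 1
overlap = abs(np.vdot(psi, bob))
print(round(overlap, 6))  # 1.0
```

Note what the sketch does not do: at no point do two copies of `psi` exist at once. Alice's measurement collapses her qubits before Bob's copy emerges, which is the simulation-level shadow of the no-cloning theorem Scott mentions.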
Superintelligence and The Ultimate Nature of Reality
Murphy: What do you think are the most interesting questions that we might get more insight into about how the universe works?
Aaronson: From building a quantum computer, or just from thinking about it hard, or from AI?
Murphy: In developing like an infinite source of intelligence.
Aaronson: Well, I mean, look, if you really had an infinite source of intelligence, there’s a lot of things you might ask it.
You might ask about the solution to the P versus NP problem, the Riemann hypothesis and so forth. But then maybe that would just be table stakes. Then you would ask what is the true unified theory of quantum gravity. Which is a different question from what you were asking.
But I mean, are closed timelike curves physically possible? In our universe, was there anything before the Big Bang? It might be nice to know the answers to those things. And then you could ask these philosophical questions: Why is there anything at all? What is consciousness?
So how many questions do I get?
Murphy: Well, you get them all! But you’re right. Can superintelligence answer that question? And if so, what does that mean?
McCarthy: Yeah. Or do you expect that there's some area within fundamental physics where, even if we have vastly, vastly more intelligence, maybe not infinite, there is just a sort of block there, where what's really needed is time or experimentation?
Aaronson: I want to make a distinction here. There are these profound unsolved problems in science. Like P versus NP, quantum theory of gravity. Explain the constants of the Standard Model. Explain the origin of life. We at least have some idea of what sort of thing would count as a solution for P versus NP.
We clearly do: a solution is just a proof that, at some point, you could literally feed to a proof checker like Lean, and it would just output: yes, this is a valid proof. For the scientific questions, we don’t know if there will ever be a satisfying answer to questions like: why did life first arise?
Maybe it was just a freak accident and we just have to accept that, and we're just here because of the anthropic principle. But certainly it is possible that there is some empirical or theoretical discovery that you could really confirm is true, and that enormously illuminates that question.
Likewise for the origin of the Big Bang, let’s say, or for these other really profound questions in physics, the origin of the constants of the Standard Model. And then there are the philosophical questions like explaining consciousness where there’s even a question of: what kind of answer do you want?
Like is there any answer, is there any illuminating answer that can even be put into words at all? If there were a superintelligence, would it be able to do any more with these questions than we could? And if it could, then could it explain the answer to us, in any way that would be useful to us?
Certainly, if you had a superintelligence and it were cooperative, it were just answering your questions, then obviously these are some of the things you would want to ask it.
Murphy: Imagine sitting here in 25 years. What excites you most about the future right now? Think about yourself back in Y2K. What were you excited about then versus now?
Aaronson: I was excited about progress in quantum computing, which there has been a lot of. I was excited about certain open problems in quantum computing theory, actually, a fair number of which have been answered over the past 25 years. And of course, it was gratifying in a few cases for me, in many more cases from other people to see some of my favorite problems solved.
But I think I was also pretty terrified about where the world was going with climate change, with sort of the popularity of, let’s say, authoritarian worldviews. And now, I can add to my worries, the rise of AI, of superintelligence.
I don’t know that it’s going to exterminate us. I don’t know that there will be a paperclipalypse, but I also don’t know for sure that there won’t be. If you really take this seriously, then you say, the endpoint of this is that we create some new form of intelligence that basically is to us as we are to baboons, or to termites or something. How well have baboons fared in the human-led world?
Murphy: Maybe they’re better off than we are. They’re still just hanging out.
Aaronson: At least the ones that still exist are, right? But they pretty much exist in a few jungles and a few zoos, because we care enough to let them. Is that going to be our future, that we’ll just be gawked at by superintelligences in zoos? There’s a whole range of possibilities.
But I think the success of AI has injected even more variance, even more uncertainty, into the next 25 years. You asked about excitement. I mean, frankly, I’m pretty terrified about where things are going in the world. Even in the most conservative scenario, AI is still going to be one of the most consequential technologies that humans have ever created, right up there with anything we've made.
And then maybe it’s more than that. Maybe it’s like creating a new species that will become the dominant one on Earth. So I’m terrified about that. I’d be a little bit less terrified if I felt like there were people in charge, at least, who have their finger on the pulse or are going to take actions that are well thought out in the broader interest of humanity.
Suffice it to say, I don’t think that we have that kind of leadership in the world right now, to put it extremely mildly.
The Battle of the Superintelligences
Murphy: One final thing. It's just something you said there: you didn't say superintelligence, you said superintelligences. What about the competition between them? I've never heard this explored before, but is this the real danger, superintelligences competing? Why would there be more than one?
Aaronson: Again, I think there are multiple scenarios on the table. I think, by default, if you can create one of these things, you can create more. And we’ve already seen that, there are now multiple AI companies in the world that are putting out many different models that are racing each other.
Which was a scenario that, a decade ago, many people had hoped to avoid. We have not avoided it. But you have, at the very least, OpenAI, Anthropic, and DeepMind, which all have pretty good frontier models. You can argue about whose is the best in a given month.
But it’s at least a pretty close race. I think the default assumption would be that you’ll have multiples of these things, because why not? And in fact, digital minds are much easier to copy than the biological kind.
If you can make one, then it seems like you can make as many as you have the hardware to run them on. But there's also another theory that some people have, of a singleton. Once there's any superintelligence, then whatever goals it has, it realizes that the first thing it should do to achieve them is to prevent any other superintelligence from being created.
And so it basically just takes over the whole world, or its whole light cone, to try to prevent any others. So I don't know, I don't pretend to have the confidence to say which of those things would be true. One interesting idea is that these superintelligences could, in some sense, all be transparent to each other.
They could all examine each other’s code. Like that might enable them to make deals or to make binding commitments to each other beyond the kinds that humans could make. You could see sort of very interesting new forms of interaction in that sense or maybe they’re just squabbling over territory and fighting each other, the same way that we are just in a much more intelligent way.
If we were shrews or something, maybe we would think, man, the humans, they’re just so much more intelligent than us. They must use their intelligence to just all live in peace and harmony. But of course, that would be mistaken.
Murphy: What do the shrews think of us? What do we think of the superintelligence?
Aaronson: Yeah, I mean, so there’s this orthogonality thesis that people talk about in this field that says, like you can have any level of intelligence consistent with any goals. So you could have something that’s far more intelligent than Einstein and von Neumann and so forth.
But that only cares about making paperclips or that only cares about eating waffles or whatever. I think if we were to examine the human case, we would find some evidence in favor of that and some against. We can probably all think in our lives of examples of arbitrarily intelligent or clever people who seem to have arbitrarily stupid goals.
Or who mess up their lives in ways that like, even, even a much less intelligent person could, could look at them and say like, why are you messing up your life in that way?
Murphy: Well, wasn’t the von Neumann thing that he loved drunk driving?
Aaronson: I guess it didn't always go right, but I'm sure that he took risks that he maybe should not have. But we've seen extremely intelligent people still succumb to very basic impulses or do things that they regret, whether that's in sex or drugs or whatever.
And so, you might ask, why should a superintelligence be any different from that? On the other hand, my day job is being a professor. And we're just sort of ideologically committed to the idea that if we teach people stuff, if we teach them more wisdom, then at some point it ought to make them better people.
Murphy: It ought to!
Aaronson: And whatever evidence there is against that, I guess I do maintain that hope: even if intelligence and morality are not perfectly aligned with each other, they're not completely orthogonal either.
Murphy: That feels like a very good place to wrap it up. Thanks so much for joining us, Scott. It was a really great conversation.
Aaronson: Sure!
Murphy: And yeah, hopefully we’ll be able to check in in 25 years.
Aaronson: Hopefully we’ll all be around then in order to have this conversation again.
