Scott Aaronson - Quantum Computing, Complexity, and Creativity

1h 27m

Scott Aaronson is a Professor of Computer Science at The University of Texas at Austin, and director of its Quantum Information Center. 

He's the author of one of the most interesting blogs on the internet: https://www.scottaaronson.com/blog/ and the book “Quantum Computing since Democritus”.

He was also my professor for a class on quantum computing.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

Episode website here.

Follow me on Twitter to get updates on future episodes and guests.


Timestamps

(0:00) - Intro

(0:33) - Journey through high school and college

(12:37) - Early work

(19:15) - Why quantum computing took so long

(33:30) - Contributions from outside academia

(38:18) - Busy beaver function

(53:50) - New quantum algorithms

(1:03:30) - Clusters

(1:06:23) - Complexity and economics

(1:13:26) - Creativity

(1:24:07) - Advice to young people



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe


Transcript

Speaker 1 It might take years or decades to become an expert in a whole field, and you might be very far from that, but it really doesn't take that long to become the world expert on one particular tiny little problem. So try to become the world expert on something, even something very, very narrow.

Speaker 3 So, Professor Aaronson, can you tell us a bit about your early journey? You were a young prodigy: you graduated high school at the age of 15 and got your PhD at 22. Can you tell us about that?

Speaker 1 Yeah, well, I didn't really graduate high school when I was 15; I got a GED from New York State. But I was not happy in high school, for several reasons, both socially and academically, and I wanted to get out. I was in a weird situation: I went to public school in the US for junior high, in Pennsylvania, and then my parents moved to Hong Kong because my dad was working there, so I lived in Hong Kong for one year. I went to an American international school there, but because of a mismatch between the way they did things in the US and in Hong Kong, I was not able to take math at an appropriate level. I had always been ahead in math, and the only way to deal with that was for me to skip a grade and go up to the high school.
Speaker 1 And once I had done that, something flipped for me: I realized I could actually do this, that I could get out of this environment. Where I really wanted to be was college. Of course, I had a somewhat idealistic view of what college was, but I saw it as a place where ideas would matter more than popularity, where you would be able to choose what to study and advance at your own pace, and all of these wonderful things.

Speaker 1 So then I returned from Hong Kong to the US and was in a public high school for a year. As I said, I didn't like it, and then I ran out of math to take. I took AP calculus, and then my parents basically suggested to the school: well, why doesn't he just do online learning and take differential equations or whatever? Stanford has this EPGY program where you can do these things, and my parents said they would pay for it. The school said no. And so I seized on that as my excuse. I had just seen a brochure for a place called the Clarkson School in upstate New York, which is part of Clarkson University, where you can live for a year and take college courses, but which is for high school students.

Speaker 1 So, even knowing very little about it, I said I wanted to give it a try, and my parents allowed me to do that. We had the car all packed up to drive there, and when we were about to leave, there was finally one math teacher at my old high school, who was very, very good and who had been trying to advocate for me, who said: okay, guess what? I just got it so that Scott can take the EPGY program. And we said: too late, sorry.

Speaker 1 And so I went to Clarkson, and I generally had a very good experience there. Socially, there were a lot of the same problems as at high school, but at least I was able to take courses that were a lot more interesting, to meet professors, and to get started doing research. The idea is that after a year at this Clarkson program, you apply to colleges as a freshman, but with a year of college credit. So I did that. I was very disappointed at the time at how things turned out, because I got rejected from almost every college I applied to. I had a very weird background. But I was lucky that Cornell and Carnegie Mellon were kind enough to accept me, and I decided to go to Cornell.

Speaker 1 But then there was one problem: Cornell required a high school diploma before you could enroll, which I didn't have. So we realized that I needed a GED. Oftentimes one's old high school will just give you a diploma after you've gone to a place like Clarkson, but my high school would not do that, because they said I was missing phys ed; I would have had to spend that summer doing phys ed. And then New York State said: well, we can't give him a GED, because you have to be 17 to get a GED and he's 15. My mom eventually convinced them to make an exception and give me a GED.

Speaker 1 So then I went to Cornell. I've met other people who were homeschooled who actually did things more radical than what I did; I was accelerated by three years, and that was all. And after I was in college, I didn't really accelerate anymore, because I felt like I was finally in an environment where I wanted to be, and from then on the main limitations were internal rather than external.

Speaker 3 Was it intimidating being in classes with people three or four years older than you?

Speaker 1 The truth is, I would say the majority of them didn't even know that I was younger. If they knew, it was maybe a bit of an oddity, but they didn't care that much, because it's not like I was a 10-year-old; by the time I started at Cornell, I was 16. So I feel like academically it was fine; the main issue was social. My parents had warned me that skipping grades was going to completely screw up my social life, that it was going to make dating incredibly difficult, and so on and so on, and I brushed all of that aside. Of course, it all turned out to be true.

Speaker 1 I got my PhD basically before I learned how to drive, really, or learned how to have any kind of a social life to speak of, or any kind of a dating life. So I did things in a weird order, you could say, and that did cause an enormous amount of stress for me. But my main argument was that I was already socially unhappy in high school; I was already miserable without having skipped grades. So I felt like, as long as I was going to be miserable socially anyway, as it seemed at the time, at least I could be learning stuff; at least the academics could be better.

Speaker 3 Yeah. And as far as the academics go, what were the advantages and disadvantages of specializing so early?

Speaker 1 Well, I don't get to rerun my life multiple times and compare. But mainly, I'm grateful that I had the opportunity to learn about stuff that interested me. And it was not just that I wanted to take only math and CS and nothing else; it was not like that. I took plenty of humanities in college.

Speaker 1 But I wanted some freedom to pick what to learn about. In high school, humanities means the five-paragraph essay; it means you basically have to regurgitate what the teacher wants from you, and if you try to do things in your own way, you will actually fail. I learned this from experience; this is not theoretical. So even the humanities I was a lot happier with in college than in high school. So it wasn't just a matter of specialization. But partly it was: I think that by the age of 16 or so, I knew what I was passionate about. It was the fundamentals of computing, understanding what computers could or couldn't do, and I was ready to be learning about that and working on it.

Speaker 1 And I think that the entire concept of teenagerhood, the idea that people from the age of 12 to 18, or now maybe even 20 or so, are still basically children, is largely a modern construction. If you go back even a few hundred years, by the time someone is a teenager, they're an apprentice. They can work; they can be learning while they're also working; they can be learning the trade that they choose. So I think that's actually natural, and I don't feel like I'm that unusual in that respect. Maybe unusual in some respects, but not in wanting to get started with life when I was 15 or 16.

Speaker 3 Yeah. I had Tyler Cowen on the podcast, and he wanted to be an economics professor basically from his early teens, and he was reading and preparing for that from that time on.

Speaker 1 An economics professor is not what one usually imagines a child or an adolescent realizing they want to be, but in the case of Tyler Cowen, I could easily believe it.

Speaker 3 Is there some special advantage to learning the fundamentals of the field you know you're going to go into early on? That is, apart from the fact that you just get more years to accumulate knowledge, is there a special advantage to learning it in your early years?

Speaker 1 I'm not sure; it's a very interesting question. One could imagine that while someone's brain is still developing, it's good to be exposed to certain things at that age. We know examples like this: an obvious one is learning languages, where there's a window in which children can just soak up languages like a sponge, and after that window you can still learn a language, but it will be a difficult intellectual puzzle, and you'll never speak it as well as a native five-year-old will. So it could be like that. But it could also be that our brains are not really adapted for learning any of this stuff. When we get to theoretical physics, theoretical computer science, and so on, all of it is like learning a language that we don't natively speak, and on that model it would just be a question of getting a head start on the amount of time you have to spend on it. I would love for someone to research that question, because I don't know the answer.

Speaker 3 Well, as you know, many of the important discoveries in quantum mechanics were made by people who at the time were very young, and that's an interesting fact. Not only did they learn that stuff early on, but they made those contributions early on.

Speaker 1 Yeah, but that's not just quantum mechanics; that's all over math and physics. Newton was about 22 when he had his miracle year of inventing calculus and discovering the laws of mechanics. I'm already old by the standards of math and physics, which is weird to think about. But there are also many examples of great contributions that were made by people in their 40s, their 50s, their 60s. So that's another empirical question that I wonder about: is it that people's brains actually slow down as they get older, or is it simply that they have less motivation or less free time?

Speaker 1 In my case, having two kids has clearly given me enormously less time for research. And in those cases where I'm traveling, away from the kids for a week, maybe I can work again, and it suddenly feels a lot like it did when I was in my 20s. Then there's also the issue of motivation. Before I was established, my entire conception of myself, my entire goals for my life, were wrapped up in succeeding in research, in doing whatever it took. Now I have all kinds of other concerns: my own students to worry about, my postdocs, my kids of course, my blog, the popular things I write. Research just feels like one more thing that I do; my identity is not as wrapped up in it. I may have also gotten dumber; that's certainly possible. But it's hard to disentangle that from those other factors.

Speaker 3 Yeah. And I promise we'll get to the technical questions eventually, but you're the third person on the podcast I'm about to ask this question, because it fascinates me. Miracle years, as you mentioned: it's not just that people make very important discoveries at young ages, but that they make lots of seemingly uncorrelated important discoveries at young ages.

Speaker 3 So as you know, Einstein did Brownian motion along with special relativity. What was the third one? There were two or three more.

Speaker 1 Well, there was the photoelectric effect.

Speaker 3 Yeah, that's right. And these don't seem, superficially at least, to be related, and yet interestingly they happened in the same year. What do you think explains that phenomenon?

Speaker 1 In the case of Einstein, you're asking me to explain Einstein's miracle year. That's a tall order. One can say certain things, like that physics in the early 20th century was ripe for these interrelated revolutions of relativity and quantum mechanics. If it wasn't Einstein, it would have been someone else not long after with all of those things. General relativity, which took Einstein a decade longer, was the one thing where, if it hadn't been for Einstein, then for all we know it might have been decades before anyone else did it. But the things he did in 1905 were all things that physics was kind of ripe for. Maybe it took one person just looking at things in a sufficiently different way; I'm not sure, really. But the confluence of all of those things is also part of why we think of Einstein as Einstein: there were many other great physicists around the same time who may have done one or two things of that caliber, but only Einstein did three or four of them.

Speaker 3 I'm glad you brought up the topic of ideas being in the air and ready to pluck, because this leads me directly to the next question. I was rereading the lecture notes from your quantum information science class, and in the lecture on quantum teleportation you address the question of how people figure this kind of stuff out. You point out that quantum mechanics was discovered in 1926, and yet quantum teleportation was only discovered in the 1990s. And of course quantum computing itself (I'm a big fan of David Deutsch, and he thought of it in the 1980s) came much later too. It seems like these ideas were ready to pluck back when quantum mechanics was developed. Why did it take so long for them to be plucked?

Speaker 1 Yeah, that's an extremely interesting question. The whole idea of thinking about quantum entanglement not as something metaphysically weird or spooky, not as something to explain away or get rid of, but in terms of how we can use it, how we can use it to improve information processing: as far as I know, that point of view really only started with John Bell in the 1960s, with Bell's remarkable insight that you could do an experiment to test the predictions of entanglement and distinguish them from any possible theory involving local hidden variables. And then a little bit later, Stephen Wiesner had the idea of quantum money, of using the uncertainty principle for cryptography, although he was not able to publish that until the 1980s.

Speaker 1 There are a few things that I could say here. One is that when quantum mechanics was discovered in the 1920s, it was not at all clear to the people who discovered it that this was a final form that the laws of physics would take. There was a large contingent that thought this was some scheme representing our current knowledge, but that clearly one would have to improve on it; that was very much Einstein's point of view, for example, and I think also Schrödinger's. Bohr and Heisenberg, on the other hand, were very much opposed to looking for something beyond quantum mechanics, but not in a way that inspired more research about what you could do with quantum mechanics; they just wanted everyone to shut up and stop asking about these things. So you could say that even though Bohr was right and Einstein was wrong on the issue of local hidden variables, there's a deeper sense in which Einstein was the more right one, in putting his finger on the fact that there is something here that we do not yet understand and that we need to understand.

Speaker 1 And indeed, that thing would not really be understood until the discovery of the Bell inequality in the 1960s. The other issue is that when quantum mechanics was discovered, and on into the 1930s, there was so much to do in terms of figuring out how chemistry works, just applying quantum mechanics to understanding the real world around us.

Speaker 1 And you could say that on the other side, in computer science, for God's sake, the entire notion of a universal computer was just being born; the whole field of classical computing was only just being born. So there was so much on the plates of both sides that it might have seemed wildly premature to anyone to combine the two. And then, of course, World War II intervened, and a lot of the people who were doing fundamental science went to work either on the Manhattan Project or at Bletchley Park.

Speaker 1 And then after that, I think the theoretical physicists were very focused on the zoo of new particles that were being discovered and on formulating quantum field theory; it was very much out of fashion for decades to think about the fundamentals of quantum mechanics itself. In the meantime, people were finally building and commercializing classical computers and figuring out their uses, and that was a very young area itself. The entire theory of computational complexity, which is an intellectual prerequisite to quantum computing, only developed in the 1960s, and the theory of P and NP and NP-completeness was only the 1970s.

Speaker 1 Now, if we think about the people who could have combined these fields, I think of John von Neumann as an obvious possibility, because he was, of course, one of the great pioneers of both computer science and quantum mechanics; in fact, he invented the notion of the entropy of quantum states. But he had a lot of other things on his plate, and then he died early, in the 1950s. Alan Turing, a founder of computer science, was also passionately interested in the foundations of quantum mechanics, as we know from his letters to his friends; he died when he was 42. Whatever the case, I feel like quantum mechanics and the theory of computing were both in place by the 1930s, but there was a lot of other stuff on people's plates.

Speaker 3 And the idea of thinking of entanglement as a resource only starts in the 60s, computational complexity only starts in the 60s and 70s, and then maybe a decade after that people start thinking about quantum computation. Interesting. I've heard two other theories, and I want to see what you think of them. The first one, and I might be misquoting him, is that at some point David Deutsch said the reason he was able to think seriously about quantum computing was that he took the many-worlds interpretation seriously, and that because people before him hadn't taken many-worlds seriously, they hadn't been able to get to quantum computing. And the second... Sorry, go ahead.

Speaker 1 Go on, go on. Or, okay, maybe I should respond to that one first?

Speaker 3 Yeah, yeah.

Speaker 1 Okay, so that is definitely true for Deutsch. I know Deutsch well, and he actually became a big believer in the many-worlds interpretation when he was here at UT Austin as a student, where he heard Hugh Everett himself give a lecture about it. Bryce DeWitt, who was one of the main early proponents of the many-worlds interpretation, was also here at UT Austin and had a big influence on Deutsch. And Deutsch was thinking about how you would shake people, how you would make them realize that quantum mechanics is universally valid, that it applies at all scales. Well, if you could build a computer that could do a superposition over different computations, and then you could see the results of interference between them, then conceptually that is almost like making a superposition over a brain that can think different thoughts. And if you could have that, then the whole idea that observers collapse the wave function just by being observers is no longer tenable.

Speaker 1 But one problem with that theory is that Richard Feynman had the idea of quantum computing around the same time as Deutsch did. And Feynman was certainly aware of the many-worlds interpretation; he was aware of Everett. But that was not his motivation for thinking about quantum computing. As usual, he was much more practically focused: he was thinking about how we simulate physics, how we simulate nature with a computer. If we use classical computers, we suffer an exponential slowdown. So can we build a new kind of computer that will solve that problem and give us a universal quantum simulator?

Speaker 1 There was also a whole movement in the 70s and 80s to think about the physics of computation, including the thermodynamics of computation, whether you can make computation inherently reversible, and all kinds of other issues that are not about exponential speedups. But that was also one of the intellectual streams that led directly to quantum computation. So I think what you said is very much true for Deutsch, but there were others besides Deutsch who were thinking about these things. Feynman was only one; there was also Benioff, and several others. And few people in this field are as messianic as Deutsch is about the many-worlds interpretation.

Speaker 3 It's interesting, come to think of it, that both Deutsch and Turing were trying to solve a separate philosophical problem, at least in part, when they came up with their models of computation. Okay, so the second theory I've heard is that since the 1970s, academia has been less open to new ideas. And you mentioned Wiesner and Everett, and both of them, as I understand it, were kind of looked down on or ostracized or something. Does that confirm the theory?

Speaker 1 I think it is possible that in some parts of academia it has become harder to explore new ideas than it once was. I could believe that about the social sciences; I could believe it about parts of medicine. People used to do completely crazy things: just invent some new concoction, inject themselves with it, see what happens, and write an article about it. Whereas today, it might take a billion dollars of investment before you could even get to the point where anyone would consider it remotely ethical to do such a thing. On the other hand, in physics, and in speculation about the foundations of computing and cosmology and things like that, I think things have, if anything, gone in the opposite direction.

Speaker 1 And this is partly because we now have this preprint server, the arXiv, where everyone can post all of their new research ideas with no filter. I think that has somewhat transformed the scientific landscape. We still have journals, but journals are now just a final stamp of approval; in many fields that's important when you're looking for jobs and things like that, but journals are no longer gatekeepers that can prevent people from seeing your paper. If you just look at the quantum physics section of the arXiv every day, you will see so many far-out ideas, so much crazy stuff, that it's very hard to believe that a modern-day Wiesner or Everett would feel any barrier to getting their idea out there. Today the problem is more that there are so many bold new ideas that, of course, most of them won't go anywhere; most of them will fail. So there's a lot to sift through if you're looking for what is actually going to be revolutionary.

Speaker 3 I'm sure your comment section and email fill up with these, but do many important ideas in the field come from people outside academia? Or is it mostly people with PhDs working within the system?

Speaker 1 Well, okay. I mean, I mean, those are...
Those are not

Speaker 1 exhaustive categories, right? Because

Speaker 1 there have been breakthroughs that have come from people who were not in academia.

Speaker 1 Very often, what you find is that these are people who were sort of on the margin of academia, like they got a PhD and then they left

Speaker 1 the academic world or

Speaker 1 they did part of a PhD

Speaker 1 or things like that, right? There was a...

Speaker 1 There was a famous case of Yitang Zhang, who was this mathematician from China, who

Speaker 1 proved that there are infinitely many pairs of primes at most 70 million apart, right, which was

Speaker 1 a major advance in number theory, right? And this was a guy who had gotten a PhD in math in China, I think, but then moved to the US and then worked

Speaker 1 making sandwiches at Subway, you know, just in order to support his family, and worked

Speaker 1 various other odd jobs

Speaker 1 but continued working on math.

Speaker 1 So

Speaker 1 I don't,

Speaker 1 I mean,

Speaker 1 I hear all of the time from

Speaker 1 people who are, like, complete autodidacts, who

Speaker 1 taught themselves, and who think that they've solved the P versus NP problem or whatever, right?

Speaker 1 That is usually someone who just doesn't understand the question, who has just made some mistake somewhere.

Speaker 1 And then

Speaker 1 there is a distinction because sometimes people, you know,

Speaker 1 who sort of teach themselves the field, they just, you know, they really want to learn. They just want an expert to talk to, you know, who will tell them, like,

Speaker 1 here's where you're on the right track. Here's where, you know, you made a mistake.
And they will thank you for that.

Speaker 1 Other times you get people who really dig in their heels and say

Speaker 1 the establishment is censoring me. And

Speaker 1 when I taught at MIT, I had

Speaker 1 someone

Speaker 1 writing to the president of MIT to try to get me fired.

Speaker 1 I had another one sending me death threats

Speaker 1 because

Speaker 1 very specific ones; I had to actually contact the police about them

Speaker 1 because I would not publish their

Speaker 1 proof of P equals NP or their refutation of quantum computing on my blog.

Speaker 1 So

Speaker 1 you get the whole spectrum.

Speaker 1 But

Speaker 1 the other thing you find is that

Speaker 1 a lot of the readership of my blog comes from people who studied technical subjects in college, studied CS or math or physics, and then went out into industry, right?

Speaker 1 But maintained this connection, you know, maintained their sort of curiosity about fundamental questions. And some of those people,

Speaker 1 you know, actually want to continue to do research. And I'm actually co-authoring a paper with one of them right now.

Speaker 1 I posted this survey article about the busy beaver function on my blog recently. And

Speaker 1 there was a,

Speaker 1 you know, I guess, a hobbyist who solved some of the open problems in

Speaker 1 the survey, and had a lot of new ideas. And he and I are going to write a paper about it now.

Speaker 3 Oh, cool. Interesting. By the way, is that Zhang the same one that co-authored the Nielsen textbook on quantum information?

Speaker 1 No, no, no. There's Yitang Zhang, who was the mathematician, and then there's Isaac Chuang, who is the physicist who co-authored the Nielsen-Chuang text.

Speaker 3 Okay, good.
So let me ask you about busy beaver.

Speaker 3 The proposition three was that you can't...

Speaker 1 I have no memory for the numbering of propositions. Okay.

Speaker 3 Can you actually just explain what the busy beaver function is before I ask these questions?

Speaker 1 Yeah, okay, sure. So, so the busy beaver function is a really, really remarkable

Speaker 1 sequence of hyper-rapidly growing integers. It was invented in 1962 by a mathematician named Tibor Radó.

Speaker 1 And

Speaker 1 basically what we do is,

Speaker 1 well,

Speaker 1 let me start with what we'd like to say. We'd like to say:

Speaker 1 if I want to name the biggest number that I could possibly think to define, then why not just say the biggest number that can be named using a thousand words or fewer?

Speaker 1 Okay, or something like that, right? But then I have to be careful, because there's an inherent paradox there, right?

Speaker 1 Which is, I also could have said, like, one plus the biggest number that can be named with a thousand words or fewer, but then that, you know, I just named that with fewer than a thousand words, right?

Speaker 1 And so,

Speaker 1 So from this, the

Speaker 1 conclusion that

Speaker 1 logicians and philosophers drew more than 100 years ago is that the concept of naming a number in English is not as clear as we think it is.

Speaker 1 It leads to paradox

Speaker 1 if you're not careful about it.

Speaker 1 But what if we could name a number in a way that was completely unambiguous?

Speaker 1 Well, you know,

Speaker 1 since Alan Turing in the 1930s, we have had such a way. That way is computer programs, right, or Turing machines.
And so we could say,

Speaker 1 think of the largest integer that can be generated by a computer program that is at most, let's say, a thousand bits in length. Okay, and

Speaker 1 now you have to be careful. What do you mean by that?

Speaker 1 Because of course a computer program could run forever, right? We could easily write a program that says:

Speaker 1 "print 9, loop," right? And it would just print an infinite sequence of nines. So what we do is we restrict attention to those computer programs that eventually halt,

Speaker 1 right? We say among, let's say for a given n, we consider all of the possible computer programs that are n bits long.

Speaker 1 And there are two to the n power of them, or at most two to the n power, right? Many strings will just

Speaker 1 lead to things that are not even programs, you know, they won't even compile, so we'll throw those away, right? Now, among all of the valid n-bit programs, some of them run forever.

Speaker 1 We run them on a blank input; the ones that run forever, we throw away as well, right? We consider only the ones that eventually stop.

Speaker 1 And now among all of the ones that stop, we take the one that runs for the largest number of steps until it stops.

Speaker 1 And that number of steps, that is what we call the nth busy beaver number.

Speaker 1 So the way that Rado defined this was using Turing machines, which is just one particular programming language, the one invented by Alan Turing in the 1930s.

Speaker 1 He said busy beaver of n is the largest finite number of steps that any n-state Turing machine can run for.
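For readers who want to see Radó's definition in code: it can be checked directly for tiny n by brute force, enumerating every n-state, 2-symbol Turing machine, running each on a blank tape with a step cap, and taking the longest halting run. A minimal Python sketch (the function name and the step cap are choices made here; the cap is safe only for very small n, where every halting machine stops within a few steps):

```python
from itertools import product

def busy_beaver(n, step_cap=100):
    """Brute-force S(n): the max steps any halting n-state, 2-symbol
    Turing machine takes on a blank tape, in Rado's convention (the
    transition into the halt state writes, moves, and counts as a step).
    Machines still running at step_cap are presumed non-halting, which
    is only safe for tiny n."""
    HALT = -1
    # A transition is (symbol_to_write, head_move, next_state).
    moves = [(w, d, s) for w in (0, 1) for d in (-1, 1)
             for s in list(range(n)) + [HALT]]
    best = 0
    # One transition per (state, read_symbol) pair: 2n table entries.
    for table in product(moves, repeat=2 * n):
        tape, pos, state, steps = {}, 0, 0, 0
        while state != HALT and steps < step_cap:
            w, d, s = table[2 * state + tape.get(pos, 0)]
            tape[pos] = w
            pos += d
            state = s
            steps += 1
        if state == HALT:
            best = max(best, steps)
    return best

print(busy_beaver(1))  # 1
print(busy_beaver(2))  # 6
```

This confirms the first two values mentioned in the conversation, busy beaver of one is 1 and busy beaver of two is 6; already at three states the enumeration explodes into millions of machines, which gives some feeling for why so few values are known.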

Speaker 2 And

Speaker 1 so

Speaker 1 the amazing thing is one can prove that this function grows faster than any computable function.

Speaker 1 So no matter what computable sequence of integers you name, busy beaver eventually exceeds it. Basically,

Speaker 1 if you are in a contest to

Speaker 1 name the largest number, and you say busy beaver of a thousand, then you will utterly destroy any opponent who doesn't know about the busy beaver function or about anything similar to it.

Speaker 1 so

Speaker 1 you know, so

Speaker 1 one can say further remarkable things about this function, like that, you know, only a finite number of values of the function can actually

Speaker 1 be proven from the axioms of set theory, for reasons of Gödel's incompleteness theorem, basically.

Speaker 1 After a certain finite point, the values of this function, well, presumably it has definite values, right, because it's this clearly defined function, and yet we could no longer prove what they are.

Speaker 1 Okay, so right now, if you take the busy beaver function as Radó defined it in the 60s, only four values of the function are known.

Speaker 1 Okay, busy beaver of one is one, busy beaver of two is six, busy beaver of three is 21, and busy beaver of four is 107.

Speaker 1 Busy beaver of five, it is only known that it's at least 47 million.

Speaker 1 Okay, you know, and we don't know how much bigger it might be. Busy beaver of six, it's at least

Speaker 1 about 10 to the 36,000. Okay, might be much bigger.
Busy beaver of seven, it's at least 10 to the 10 to the 10 to the 10 to the 18 million, you know, and then might be enormously larger still.

Speaker 1 Okay, just to just to give people an idea of how this function grows. Yeah, yeah.

Speaker 3 And by the way, I recommend everybody who's listening to check out the Busy Beaver Frontier paper. I also want to ask you how you learned to write so well.

Speaker 3 It was written in such a way that a non-expert like me could understand it. And just to clarify for my own understanding, it's not that

Speaker 3 there isn't a function that, for any particular n, is greater than busy beaver of n. It's just that busy beaver will eventually overtake any computable function as n grows.

Speaker 1 That's exactly what I meant when I said grows faster than. Right.
Right. I mean, each particular value of busy beaver is just some positive integer, right? Yeah.
It's this very concrete thing, right?

Speaker 1 You know, like it's six or it's 21, right? But if you look at the rate of growth of these integers, it will eventually dominate any computable function. Right.
And

Speaker 1 in practice, not just eventually, but very, very quickly. Yeah.

Speaker 3 And you show very elegantly in the paper that that means you can independently prove the halting problem and Gödel's incompleteness theorem.

Speaker 1 Yeah, yeah.

Speaker 1 I mean, like the

Speaker 1 unsolvability of the halting problem, Gödel's theorem, like these things are so intertwined that there are many, many different ways to prove them all, right?

Speaker 1 But the busy beaver function gives you one way:

Speaker 1 you can use it to prove independently that there is an uncomputable function.

Speaker 1 You can use it to prove Gödel's incompleteness theorem. Right.
Okay.

Speaker 3 So my question, the proposition three was the one that said, okay, for any axiomatic theory, you can't prove all of Busy Beaver with it.

Speaker 3 But so my question was,

Speaker 3 can you

Speaker 3 keep, even though there's no systematic way to extend our set theory, is there plausibly a way that you can keep extending it such that you can prove higher and higher values of Busy Beaver?

Speaker 1 That's a wonderful question.

Speaker 2 You know,

Speaker 1 I would say we don't really know yet, right? Because right now we can't even pin down Busy Beaver of five, okay?

Speaker 1 Now, my guess would be that

Speaker 1 the resources of

Speaker 1 existing set theory are perfectly sufficient to do that, right? But

Speaker 1 we don't even know that, right?

Speaker 1 So four years ago,

Speaker 1 a student of mine named Adam Yadidia and I decided to

Speaker 1 look into a question that for some reason no one had looked at before, which was

Speaker 1 what is the smallest n for which we can actually prove that the value of busy beaver of n is independent of set theory, right?

Speaker 1 Like it was clear that there is some n for which this is true, but are we talking about 10 million or are we talking about 10?

Speaker 1 So in practice, what this problem boils down to

Speaker 1 is almost like software engineering. Like you have to construct a Turing machine that checks all of the theorems of set theory and it halts only if it finds a contradiction.

Speaker 1 And you have to build such a Turing machine with as few states as possible.

Speaker 1 If you can build such a machine with only n states, then you've proven that

Speaker 1 set theory cannot determine the value of busy beaver of n. Because if it did, then it would thereby determine its own consistency, which is exactly what Gödel's theorem does not allow.

Speaker 1 So what we managed to do is find such a machine with about 8,000 states.
You know, that was after a lot of optimization, right? And, you know, a lot of coding and

Speaker 1 engineering

Speaker 1 and ideas, right?

Speaker 1 Since then,

Speaker 1 again,

Speaker 1 a hobbyist, to come back to your earlier question, someone outside academia by the name of Stefan O'Rear

Speaker 1 has managed to improve our bound and get it under 800 states.
Okay, and that is the current record.

Speaker 1 Now, it is a wonderful question: could you get it down to, like, 10 states or something like that, right?

Speaker 1 If you could, that would tell us that we have to go beyond the current axioms of set theory even just to get the next few values of the busy beaver function, right? But maybe not.

Speaker 1 Maybe, maybe, maybe you can even get 100 of them with existing set theory, right?

Speaker 1 We don't know. Now, what you can do is,

Speaker 1 you know,

Speaker 1 at whatever point,

Speaker 1 you know, ZF, Zermelo-Fraenkel set theory, which is the

Speaker 1 accepted basis for, you know, most of math, at whatever point it runs out of steam, which, you know, we don't know exactly where that point is, but one can then extend it by what are called large cardinal axioms, right, which basically assert that there exists an infinity that is bigger than any infinity that can be defined in ZF set theory, right?

Speaker 1 Or, you know, where the infinities are of various particular kinds, right?

Speaker 1 And,

Speaker 1 you know, and

Speaker 1 presumably one could then settle more values of the busy beaver function that way, right?

Speaker 1 But the issue is, you know, no matter what set theory you think of, right, as soon as you can build a Turing machine that enumerates all of the theorems of that set theory, then however many states there are in that Turing machine, that then sets a bound on how many busy beaver numbers that set theory can ever determine.

Speaker 1 So in some sense, the set theories that can determine more and more busy beaver numbers will have to become more and more complicated.

Speaker 1 We know that.

Speaker 1 And

Speaker 1 as you said, right, there will never be a systematic way to search for them, because if there were, then that would make the busy beaver function computable.

Speaker 1 And, you know, and what it means for there to be no systematic way to search is that

Speaker 1 we could keep proposing more and more set theories. And if you look at modern

Speaker 1 mathematical logic, set theorists do this, right? They do propose more and more large cardinal axioms, but sometimes they actually discover that their axioms are inconsistent, right? Or

Speaker 1 they sort of provisionally

Speaker 1 adopt an axiom and use it, but the community is not really convinced that it won't lead to an inconsistency.

Speaker 1 So there's no surefire way to think of these axioms and be confident that

Speaker 1 you actually still have a consistent system.

Speaker 3 Okay, let me offer a very, very silly thought experiment. Yeah.
Okay. So this is inspired by that joke that if you shoot enough sunlight at the Earth, it'll shoot a Tesla back, right?

Speaker 3 Okay, so the idea is if the Turing principle is true, you can simulate the Earth and the solar system and everything on a very incomprehensibly big computer, right?

Speaker 3 With all the humans on it.

Speaker 1 Given the appropriate initial data.

Speaker 3 Yeah, yeah. And so

Speaker 3 simulating all of that is a computable function.

Speaker 3 And if you can check in, like, every hundred years and see what is the biggest busy beaver number that humans have proven that century?

Speaker 3 Is that not a computable function that tells you busy beaver of n for indefinitely higher n?

Speaker 1 Well, it would give you a way to compute arbitrary values of the busy beaver function if humans were indeed to continue,

Speaker 1 You could say, under the assumption that, number one,

Speaker 1 everything in our physical world is computable. I mean, you and I just let that assumption pass almost without comment, right? Although, of course, that's an enormous question in itself, right?

Speaker 1 With some brilliant people like Penrose on the other side of that. But all right, so then,

Speaker 1 assuming that everything we're doing is computable,

Speaker 1 and also assuming that we could somehow continue finding more and more values of the busy beaver function indefinitely, right? If both of those were true, then we would have a contradiction.

Speaker 1 Yeah,

Speaker 1 yeah, yeah.

Speaker 3 So

Speaker 3 then either the Turing principle is false, or we can't indefinitely keep extending set theory.

Speaker 1 Yeah,

Speaker 1 you could say a very conservative way out would be to say, well, you know, our quest to compute more and more busy beaver numbers will come to an end. Right.
Okay, that's very interesting.

Speaker 1 And, you know, in fact, there has not been another busy beaver number determined since the early 1980s.

Speaker 3 Yeah, that's interesting.

Speaker 1 Which is when Busy Beaver 4 was pinned down.

Speaker 3 Okay.

Speaker 3 Okay, so I'm looking forward to the paper you're publishing with

Speaker 3 the hobbyist now.

Speaker 1 Bruce Smith is his name.

Speaker 3 Bruce Smith. Okay, very interesting.

Speaker 3 Okay, so let me ask you now.

Speaker 3 I remember in class last year, you said, you know, basically we have

Speaker 3 a few very important quantum algorithms, like Grover's and Shor's, that were discovered in the 90s. And a lot of stuff now is just an extension of those algorithms.

Speaker 3 What do you think is the potential of finding more?

Speaker 3 Is there a good reason to think that there are other quantum algorithms to be discovered that are as fundamental and important as Grover's and Shor's were?

Speaker 1 I would love it if there were, right?

Speaker 1 I'd be thrilled to, you know, discover such an algorithm, have one of my students discover it, right? You know, this is, this is,

Speaker 1 you know,

Speaker 1 I mean,

Speaker 1 that's the kind of discovery that we enter this field for, right? I mean,

Speaker 1 now,

Speaker 1 if we're being intellectually honest, we have to admit that

Speaker 1 it's been 25 years since Grover's algorithm was discovered.

Speaker 1 Maybe no other quantum algorithm as fundamental as Shor's or Grover's has been discovered in the last 25 years.

Speaker 1 What we have discovered, as you learned because you took my class, is

Speaker 1 an enormous number of generalizations and new applications and variations of Shor's and Grover's algorithms,

Speaker 1 including what are called quantum walk algorithms,

Speaker 1 including

Speaker 1 phase estimation-based algorithms.

Speaker 1 And

Speaker 1 some totally different quantum algorithms were also discovered, although the problems they solve are maybe more abstruse, right? Or

Speaker 1 it's harder to explain what problem

Speaker 1 they're solving.

Speaker 1 And

Speaker 1 so

Speaker 1 you could wonder,

Speaker 1 is it lack of imagination

Speaker 1 on our part? Or is it, I mean,

Speaker 1 another

Speaker 1 possibility is, you know, if you look at the history of classical computer science, you know, what you find is that there are a few basic techniques that were discovered very early on in the history of classical CS.

Speaker 1 One of them is dynamic programming,

Speaker 1 like, you know, breaking down your problem recursively into sub-problems. I mean, we could say, in general, recursion, right?

Speaker 1 So, you know, divide and conquer, greedy algorithms, convex programming, linear programming,

Speaker 1 Gaussian elimination, right?

Speaker 1 And, you know, the field of classical algorithms is enormous, and yet most of the classical algorithms that we know are somehow built up out of these motifs that were discovered very early on, right? And we don't normally think of that as a failure of classical algorithms.

Speaker 1 We just think of it as, you know, there are these fundamental features of the algorithmic universe that, you know, that people noticed as soon as they started looking, right?

Speaker 1 And then, you know, and then you can go much further, but you go much further by building on the basic things that you have, right? And so

Speaker 1 maybe we should think of Shor's algorithm and Grover's algorithm not as just specific algorithms, but as sort of some of the basic design motifs of the world of quantum algorithms.

Speaker 1 And then, you know, it's not surprising that they were discovered very early on, just like dynamic programming was discovered, you know, right at the beginning of the history of classical algorithms.

Speaker 1 So that's one point of view.

Speaker 1 Now, another point of view is, when people ask for more quantum algorithms, or when they hold us to account for our failure to discover more quantum algorithms, I like to answer that question with another question, which is: well, what are the problems that you would like these algorithms for?

Speaker 1 Right. And amazingly, that question almost always stops them when they ask it, right? Because, like, they didn't even...

Speaker 1 you know, well, well, because no matter what problem they name, you know, there's an excellent chance that people in quantum algorithms have thought about it.

Speaker 1 And we know something about how much speed-up you can get from a Grover-type algorithm, but we have good reasons to think that you're not going to be able to get better than that.

Speaker 1 Or we say, well, in the case of the graph isomorphism problem, yeah, sure, maybe there's a polynomial-time quantum algorithm, but probably just because there's a polynomial-time classical algorithm, right?

Speaker 1 It's just that it hasn't been discovered yet, right? Although, you know, there's been major progress toward it.

Speaker 1 So, so,

Speaker 1 you know, no matter what problem they name, right, I mean, probably someone has studied it in the context of quantum algorithms, and I could then tell them for that problem, you know, exactly what is the current situation, and, you know,

Speaker 1 what are people stuck on, you know, and so it might be that if we want to discover fundamentally new quantum algorithms, that the way to do it will be to realize fundamentally new problems,

Speaker 1 problems that people hadn't even thought about

Speaker 1 designing an algorithm for

Speaker 1 previously.

Speaker 1 Another way that I like to put it is

Speaker 1 in any given area of math or science, right, there is this phenomenon of low-hanging fruit that gets picked very, very early on, right? And then those of us who come into the field a little bit later

Speaker 1 have to reach higher, you know, if we want to find any fruit, right? But

Speaker 1 I think

Speaker 1 the ultimate solution to the problem of low-hanging fruit being picked is to find a new orchard, you know?

Speaker 1 And so

Speaker 1 figure out what are

Speaker 1 potential problems that quantum computers could solve that no one has been thinking about before. You know, maybe people actually starting to get quantum computers, as they finally are today, that they can experiment with, will help stimulate the discovery of those new problems or new applications, just like with Grover's algorithm.

Speaker 3 Is there some reason to expect, from first principles, that there are other problems that can be solved by quantum algorithms?

Speaker 1 Yeah. So,

Speaker 1 you know, like,

Speaker 1 anything that is sort of as basic as Grover's algorithm, where you just look at the way that amplitudes are changing over time and see that an algorithm is going to exist, right?

Speaker 1 That probably would have been snapped up by now just because quantum algorithms have become so much more sophisticated compared to what they were 25 years ago. Okay, but there are some problems where

Speaker 1 there is some evidence that a quantum algorithm might exist, even though we don't yet know it,

Speaker 1 if it does exist.

Speaker 1 A good example is computing the edit distance between two strings, right, which means the minimum number of insertions and deletions and changes that I have to make to change one string to another string.

Speaker 1 This is a fundamental problem for DNA sequence alignment, for example.

Speaker 1 The best known algorithm for it takes quadratic time. It's based on dynamic programming, in fact.

Speaker 1 And there is some evidence that there might be a quantum algorithm that would take n to the three-halves time, or something like that. But if so, it has not yet been discovered.
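The quadratic-time algorithm mentioned here is the classic Wagner-Fischer dynamic program; a minimal sketch in Python (the function name is a choice made here, not anything from the conversation):

```python
def edit_distance(a: str, b: str) -> int:
    """Minimum insertions, deletions, and substitutions turning a into b,
    via the classic O(len(a) * len(b)) dynamic program, keeping one row."""
    n = len(b)
    # prev[j] = distance between the current prefix of a and b[:j]
    prev = list(range(n + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i] + [0] * n
        for j, cb in enumerate(b, start=1):
            cur[j] = min(prev[j] + 1,              # delete ca
                         cur[j - 1] + 1,           # insert cb
                         prev[j - 1] + (ca != cb)) # substitute or match
        prev = cur
    return prev[n]

print(edit_distance("kitten", "sitting"))  # 3
```

The speculated quantum algorithm would improve this quadratic bound toward n to the three-halves, but, as said here, no such algorithm is currently known.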

Speaker 1 So there are cases like that.

Speaker 3 Gotcha, okay. The next question I wanted to ask you was: why do many of the important discoveries, not just in this field but in many other fields, come from closely communicating groups of collaborators, or people within such groups?

Speaker 3 Um,

Speaker 3 You said on Sean Carroll's podcast that Shor and Grover were collaborators at Bell Labs.

Speaker 1 I'm not sure that they actually were collaborators. I mean, they worked in the same building, I believe.

Speaker 1 You know, I think the discovery of Shor's algorithm created a lot of excitement, well, all over the place, but certainly at Bell Labs, which is where Shor was, right?

Speaker 1 And I think that Grover, who worked also at Bell Labs, but in a completely different department, I think he was affected by that excitement. And,

Speaker 1 you know,

Speaker 1 I'm not even sure if they knew each other prior to Grover's discovery of his algorithm.

Speaker 1 But

Speaker 1 I could ask them that.

Speaker 1 But,

Speaker 1 more broadly,

Speaker 1 it is certainly true that in the history of

Speaker 1 ideas,

Speaker 1 major innovations seem to come in clusters all the time. Bell Labs was a huge example of that.

Speaker 1 I mean,

Speaker 1 Shor's and Grover's algorithms were really at the tail end of the heyday of Bell Labs, which was mostly the 40s, 50s, 60s, 70s, right? But I mean, you had the invention of the transistor, of the communication satellite, and so many other things, right, from this one place.

Speaker 1 Another example, you know, I mean, we could take Athens in the ancient world, right?

Speaker 1 We could take Florence. We could look at Cambridge University,

Speaker 1 let's say at the turn of the 20th century, right, that had just so many mathematicians, economists, philosophers, physicists who revolutionized the world. Now,

Speaker 1 you know, this might be because of something about the environment, right, that

Speaker 1 ideas bounce off of each other, right? People see something,

Speaker 1 they see someone achieve something spectacular and they're either inspired by that or

Speaker 1 they view it as a challenge,

Speaker 1 they want to compete against that and come up with their own thing, right?

Speaker 1 A Silicon Valley

Speaker 1 would be another big example, although more for technology than for science.

Speaker 1 So that would be one explanation. And a different explanation would be that

Speaker 1 these certain places at certain points in time just attract all of the people who

Speaker 1 maybe anyway would have had these great ideas, but

Speaker 1 that kind of person wants to go to these

Speaker 1 centers

Speaker 1 wherever they are. And so these centers will just collect the kind of people who are likely to discover these things.

Speaker 1 Right.

Speaker 3 Correlation doesn't equal causation in this case. Okay.

Speaker 3 Let me ask you now. I've interviewed a lot of economists on this podcast.
I think this question will be interesting to the listeners.

Speaker 3 In your paper on why philosophers should care about complexity, you talk about how the difficulty of finding Nash equilibria might be relevant to discussions in economics. Can you explain this?

Speaker 1 Yeah, okay, so

Speaker 1 there was a big advance in theoretical computer science 14 years ago, when

Speaker 1 theoretical evidence was finally discovered for why computing a Nash equilibrium is a hard problem, basically.

Speaker 1 And this confirmed a suspicion that people had had for a long time, right? Because

Speaker 1 if we look at

Speaker 1 a von Neumann equilibrium, which is the equilibrium of,

Speaker 1 let's say, a two-player zero-sum game, right? Then this can be found easily using linear programming.
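To make the zero-sum case concrete: for a 2x2 game with no saddle point, the linear program collapses to a closed form, because the row player's optimal mixed strategy simply equalizes the payoff against either of the opponent's columns. A small sketch (the function name is chosen here; the general m-by-n case needs the full linear program):

```python
from fractions import Fraction

def solve_2x2_zero_sum(A):
    """Row player's optimal mixed strategy and the game value for a
    2x2 zero-sum game with payoff matrix A (row player's payoffs),
    assuming no saddle point, so the equilibrium is fully mixed."""
    (a, b), (c, d) = [[Fraction(x) for x in row] for row in A]
    denom = a - b - c + d
    p = (d - c) / denom              # probability of playing row 0
    value = (a * d - b * c) / denom  # expected payoff at equilibrium
    return p, value

# Matching pennies: the unique equilibrium is 50/50 with value 0.
p, v = solve_2x2_zero_sum([[1, -1], [-1, 1]])
print(p, v)  # 1/2 0
```

The closed form comes from setting p so that the row player's expected payoff is the same whichever column the opponent picks; that indifference condition is exactly what the linear program enforces in general.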

Speaker 1 But a Nash equilibrium is somehow a more complicated beast.

Speaker 1 And

Speaker 1 the way that Nash proved that they exist in the 50s was using what's called the Kakutani fixed point theorem, right? It's a fixed point theorem from topology.

Speaker 1 And if you try to actually unwind the existence proof into an actual algorithm to calculate the equilibrium, then what you get is an algorithm that ends up taking exponential time, right?

Speaker 1 You know, it eventually hits the equilibrium, but

Speaker 1 it may have to follow an exponentially long trail before it reaches it.

Speaker 1 If you're interested in this, by far the best things that have been written about it, I think, are by Christos Papadimitriou.

Speaker 1 And

Speaker 1 Papadimitriou was one of the discoverers,

Speaker 1 in 2006, along with Goldberg and Daskalakis, of this hardness theorem, which, you know, doesn't prove that it's hard to find a Nash equilibrium. In fact, it doesn't even prove that it's NP-hard.

Speaker 1 This problem kind of doesn't have the right structure to be an NP-complete problem just because of Nash's theorem that tells us that a Nash equilibrium always exists, right?

Speaker 1 Like in order to be NP-complete in any way that we currently understand,

Speaker 1 there has to be a decision problem.

Speaker 1 Is there a solution or is there not a solution?

Speaker 1 But for finding a Nash equilibrium, there always is a solution, right?

Speaker 1 There's only the problem how to find it.

Speaker 1 But what was shown is that basically finding a Nash equilibrium is at least as hard as any other problem

Speaker 1 for which a solution is guaranteed to exist because of the same kinds of principles. Okay.

Speaker 1 So it is complete for the complexity class of problems for which a solution is guaranteed to exist for this sort of reason. So what does this mean for economics?

Speaker 1 Well, it's not clear, right, if it has a sort of direct implication, but it sort of fits into this general narrative of,

Speaker 1 you know, just because an equilibrium exists, you know, that's not the end of the story, right?

Speaker 1 I mean, you know, if the market can't actually find the equilibrium, right, in any, or you could say, you know, if calculating this equilibrium would take exponential time, then we shouldn't expect the market to be able to find it either, right?

Speaker 2 And

Speaker 1 so, you know, now economists are well aware, right, that there are these issues, right, that

Speaker 1 people are not perfectly rational,

Speaker 1 even if they want to be perfectly rational, which they don't always,

Speaker 1 being perfectly rational might involve computations that they're just not able to do, right? And

Speaker 1 they've, with varying degrees of success, they've tried to account for such phenomena.

Speaker 1 But I would say, you know, Nash equilibria are so central to economic theory that the hardness of finding Nash equilibria, I think, is maybe a non-trivial result underscoring that general point.

Speaker 3 That's incredibly interesting. But do you think Hayek's knowledge problem, or the way he phrased it, might be related to complexity as well? So, in terms of central planning, satisfying some constraints set by bureaucrats might be like an NP-complete problem, where it's like...

Speaker 1 I want to separate two

Speaker 1 different things, right? One is lack of knowledge, right,

Speaker 1 about what is going on in the economy and so forth. And the other one is lack of ability to do computations on the knowledge that you have.

Speaker 1 So

Speaker 1 the hardness of Nash equilibria is talking about the latter issue. Yeah, I see what you're saying.
I mean, you know, they're related in a way, right? They're both, you know,

Speaker 1 they're both different kinds of deviations from perfect omniscience, right? But they're different kinds of deviations. And, you know,

Speaker 1 in theoretical computer science, you know, very, very often we have to distinguish them.

Speaker 1 So,

Speaker 1 you know, I mean, often like people will ask me if some

Speaker 1 problem is or isn't NP-complete, when what they really mean is, like, how hard is it to collect the information, right?

Speaker 1 Which is, you know, it's kind of like apples and oranges, it's a category mistake, right?

Speaker 1 You know, to even talk about whether a problem is in P or in NP or whatever, we assume that an input is given to you, right?

Speaker 1 So all of the information that you need, you know, you have it in front of you.

Speaker 1 And then you know, we are exclusively concerned with the difficulty of calculating something about that information, right?

Speaker 1 Now there is also

Speaker 1 the difficulty that

Speaker 1 people who are

Speaker 1 economic actors

Speaker 1 don't have the information that they need, or certainly central planners don't have the information that they need.

Speaker 1 And there actually is a whole subfield of economics

Speaker 1 that's the economics of information, right? How much do you pay to learn something about what is going on? Or

Speaker 1 how do you design an auction in a way that you elicit the information that you want from the participants in the auction and things like that?

Speaker 1 I think that economists maybe have an easier time dealing with those things. Or

Speaker 1 that stuff has been better integrated into economics than the computational considerations have been.

Speaker 3 Okay, that's incredibly interesting.

Speaker 3 Just a few more questions.

Speaker 1 Okay, sure.

Speaker 3 Yeah, I'm going to bring us back to David Deutsch and creativity.

Speaker 3 In the Ask Me Anything chapter of Quantum Computing since Democritus, you have a student ask you

Speaker 3 what complexity class is creativity in? And you say,

Speaker 3 part of what you say is,

Speaker 3 we've got a billion years of natural selection giving us a very good toolbox of heuristics for solving certain kinds of search problems, like problems in NP.

Speaker 3 But that makes it kind of sound like we have more heuristics to solve these problems than chimpanzees do. Chimpanzees have more than ants.

Speaker 3 Um, I don't know if this is how you meant it, but do you see the algorithm for creativity as a thing you have or you don't have, or is it that you just have better heuristics for searching through different...?

Speaker 1 I don't, I don't know that there is such a thing as the algorithm for creativity, right? In fact, you know,

Speaker 1 the phrase is almost oxymoronic, right? That if there were such an algorithm, well, then whatever it output would no longer be creative, would it?

Speaker 1 Because it would just be the output of that algorithm, right?

Speaker 1 So, you know, I think that,

Speaker 1 you know,

Speaker 1 it seems like there is such a thing as, you know, general purpose reasoning ability or general purpose ability to invent creative solutions to problems, which

Speaker 1 Let's say, you know, Einstein had more of it than some random person off the street, but the random person off the street has more of it than a chimpanzee, and a chimpanzee has more of it than an ant.

Speaker 1 But it is somehow very, very hard to articulate what we mean by that, you know, in a way that would actually support these, you know, comparisons across, you know, vastly different evolutionary histories and goals in life and all these things.

Speaker 3 From The Beginning of Infinity, do you buy David Deutsch's

Speaker 3 term "universal explainer", that people are universal explainers, AIs will be universal explainers, but non-human animals aren't, and that's like the only demarcation that matters?

Speaker 1 I think Deutsch is like, he's

Speaker 1 incredibly optimistic and also incredibly categorical in his thinking, right?

Speaker 1 You know, I don't know anyone else who is sort of as optimistic or, you know, and hardly anyone else who is as black and white.

Speaker 1 I mean, I,

Speaker 1 you know,

Speaker 1 It does seem likely that there is some kind of threshold that you cross in going from a chimpanzee to a human, right? Where, like, yes, you know, a chimpanzee is smarter than a cow, right?

Speaker 1 But, like, you know, if you stare at both of them, you know, it doesn't seem like the chimpanzee is noticeably closer than the cow is to, you know, being able to land on the moon, right? Or

Speaker 1 prove Fermat's Last Theorem, right, or any of these things, right? And,

Speaker 1 you know, with

Speaker 1 humans, you know, you had like in succession, you had a few

Speaker 1 extremely important milestones that, you know, that had not been crossed before in the animal kingdom, right? You have universality of language, right?

Speaker 1 You have, you know, I mean, the ability to have a recursive language that can sort of

Speaker 1 express thoughts of

Speaker 1 unbounded complexity.

Speaker 1 You had

Speaker 1 the invention of writing, the ability to transmit those thoughts across generations.

Speaker 1 a number system that could refer to arbitrarily large numbers,

Speaker 1 and then

Speaker 1 computers,

Speaker 1 which are universal machines, right? The ability to build these kinds of machines. And, you know, and all of this went along with

Speaker 1 being able to explain the world around us

Speaker 1 in explicit theories, you know, to

Speaker 1 an extent that no animal species, no other animal species

Speaker 1 was ever able to do.

Speaker 1 Having said that, you know, I don't actually know if people are universal explainers. That is,

Speaker 1 I have,

Speaker 1 you know,

Speaker 1 I have no idea if we can explain everything or even if we can explain everything that is explainable.

Speaker 1 You know, of course, I hope that we will continue being able to explain a lot more. than we can explain right now.

Speaker 1 But I mean, you know, Deutsch uses words in unusual ways.

Speaker 1 Like, when he talks about why he is so optimistic, part of his optimism is that when he uses the word people, he also includes extraterrestrials. Right.

Speaker 1 So he says like, oh yeah, you know, it's possible that

Speaker 1 humans on Earth will just all kill themselves off.

Speaker 1 You know, there will be a nuclear war or an environmental catastrophe, but that's not a big deal because people in the broader sense of, you know, life elsewhere in the universe will

Speaker 1 go and do all the amazing things anyway that we would have done. I mean, that may be cold comfort

Speaker 1 to most of us here on Earth, right?

Speaker 1 And so when he says something like people are universal explainers, you know, you always have to press him on, you know, not only what does he mean by a universal explainer, but even what does he mean by people?

Speaker 3 Right.

Speaker 3 His claim on the universal explainer part is that just as many worlds is the parsimonious way to describe quantum mechanics, so you don't have to postulate an arbitrary

Speaker 3 collapse.

Speaker 3 Since we have no reason to expect, and no proof, that there are things we cannot explain, the most parsimonious explanation is that we can explain everything.

Speaker 1 Okay,

Speaker 1 I don't know. I mean, you know, there are.

Speaker 1 There are certain questions, like the hard problem of consciousness, let's say, or the question of why is there a universe at all, where it's not just that we don't have an explanation, it's that the mind sort of spins in circles when we try to contemplate what could possibly

Speaker 1 constitute an explanation, right?

Speaker 1 What could an explanation possibly look like, even in principle?

Speaker 1 That could just be a lack of imagination, right?

Speaker 1 But

Speaker 1 it could be that there are,

Speaker 1 I mean, like, we all know the two-year-old who just asks why, and then you tell them, and they ask why, and you tell them, and, you know,

Speaker 1 and they ask why. And, you know, after,

Speaker 1 you know,

Speaker 1 a half dozen whys, you know, you're all the way back at the Big Bang, right? You know, you're back at,

Speaker 1 you know, the

Speaker 1 beginning of the universe. And, you know, they continue asking why, right? And

Speaker 1 it could be that there are questions with the property that every...

Speaker 1 So, okay, I mean, first of all, I think even Deutsch thinks that there is no one time when we'll have an explanation of everything, because Deutsch says that each

Speaker 1 question that we answer will lead to further questions, right? You know, each time you explain something,

Speaker 1 there's then the question of, you know, whatever the explanation is based on, why is that, right? So just like that two-year-old, right, we can always dig deeper and deeper.

Speaker 1 Okay, but now, you know, just to loop back to earlier in this conversation, like if we think about the busy beaver function, right, we know that, you know, it's not just that,

Speaker 1 you know, like,

Speaker 1 you need more and more resources to compute more and more values of the busy beaver function. And so you'll never know all of them.

Speaker 1 It's that there are fixed values, like busy beaver of 800, right, where

Speaker 1 the existing axioms of set theory, you know, provably will not suffice to let you determine that.
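For the very smallest cases, the busy beaver function can still be computed directly by enumeration. A sketch in Python that brute-forces the 2-state, 2-symbol value Σ(2) = 4: the step cap of 100 is safe here only because every halting 2-state machine is known to stop within 6 steps; for larger n, no such cap can be proved correct, which is exactly the point of the independence results above.

```python
from itertools import product

def simulate(tm, max_steps=100):
    """Run a 2-symbol Turing machine; tm maps (state, symbol) to
    (write, move, next_state). Returns the number of 1s on the tape
    if it halts within max_steps, else None."""
    tape = {}
    pos, state = 0, 0            # start in state 0 on a blank tape
    for _ in range(max_steps):
        write, move, nxt = tm[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == "H":           # halting transition: write, move, stop
            return sum(tape.values())
        state = nxt
    return None                  # treated as non-halting

def busy_beaver_2():
    """Enumerate all 2-state, 2-symbol machines and return Sigma(2),
    the most 1s any halting machine leaves on the tape."""
    # each table entry: write 0/1, move left/right, go to state 0/1 or halt
    actions = [(w, m, s) for w in (0, 1) for m in (-1, 1) for s in (0, 1, "H")]
    best = 0
    for entries in product(actions, repeat=4):   # 12^4 = 20736 machines
        tm = {(st, sym): entries[2 * st + sym] for st in (0, 1) for sym in (0, 1)}
        ones = simulate(tm)
        if ones is not None and ones > best:
            best = ones
    return best

print(busy_beaver_2())  # 4
```

The same loop for 3 states already enumerates millions of machines and needs non-trivial proofs of non-halting; by 800 states, the question escapes the axioms of set theory.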

Speaker 1 Right, and so likewise, for all I know, there could be fixed questions, maybe the hard problem of consciousness, maybe why is there a universe, where what we currently consider to be an explanation just will not suffice to ever explain these things.

Speaker 1 But I don't know.

Speaker 1 I feel like

Speaker 1 I'm unlike Deutsch: I don't want to assert that I know the answer from first principles.

Speaker 1 I want to continue looking for explanations of these things.

Speaker 1 When you're searching for explanations, you know, it can be psychologically helpful to assume that the explanation exists.

Speaker 1 But, you know, but

Speaker 1 let's not make that into more than it is, right? Let's not take a useful heuristic and elevate it into a basic principle of reality.

Speaker 1 Right.

Speaker 3 But on

Speaker 3 this point,

Speaker 3 Deutsch wrote a book called The Fabric of Reality, where he talks about how Gödel's incompleteness theorem actually verifies the importance of creativity.

Speaker 3 So if we need to come up with new axioms to prove Busy Beaver of 800, that's the entire point of creativity. And as far as the hard problem goes,

Speaker 3 I think he thinks the hard problem of consciousness can be solved. But even if it can't be solved, the reason it's so hard might just be an artifact of our mind.

Speaker 3 It just seems like we can't imagine a possible way that you could solve it, regardless of what kind of mind you had, and that might itself be an artifact of our mind.

Speaker 3 The final question is, what advice would you give to a 20-year-old who's interested in technical subjects?

Speaker 3 Maybe he's not doing a PhD program like you were at the time, but is just in college, interested in technical subjects?

Speaker 2 Um,

Speaker 1 just uh learn all that you can. I mean, you know, there

Speaker 1 you know, has never been a time when sort of more resources were available to anyone who wants to learn things. So, so

Speaker 1 you know, take courses, talk to your professors,

Speaker 1 you know, go

Speaker 1 on the the internet and

Speaker 1 read books,

Speaker 1 delve deeply into a subject. And

Speaker 1 you might be surprised at sort of how low the barriers sometimes are, right?

Speaker 1 If you

Speaker 1 know let's say that it was quantum computing that you were interested in, right? It doesn't have to be, it could be anything else.

Speaker 1 But the entire literature of quantum computing is pretty much available for free on arXiv.org. And if you go and look every night at the new quantum computing papers that come out, and just flag the ones that are interesting to you and read them, each paper will raise new questions that the authors don't know the answer to yet.

Speaker 1 And, you know, sometimes they'll be explicitly listed in an open problems section; other times there'll be ones that you could think of. And

Speaker 1 you can study those problems.

Speaker 1 If you have ideas about them, you can

Speaker 1 talk to the authors of the paper.

Speaker 1 And

Speaker 1 it might take

Speaker 1 years or decades to become an expert in a whole field.

Speaker 1 And you might be very far from that, but it really doesn't take that long to become the world expert on one particular tiny little problem.

Speaker 1 And

Speaker 1 so try to become the world expert on something,

Speaker 1 even something very, very narrow.

Speaker 1 And

Speaker 1 once you've done that, then you can... you know, write an article about it or

Speaker 1 do a project about it. And then, you know, that will lead to more things, right? It will lead to, you know, maybe collaborations in the future,

Speaker 1 you know, and it will lead to, you know, you can then try to become an expert on something a little bit wider, something a little bit wider, and so on. Yeah,

Speaker 3 that's very excellent advice. I love that.
Okay, well, Professor Adam, thank you so much for your time.

Speaker 3 Hey, if you enjoyed this podcast, please consider sharing it with your friends and posting it on social media. Word of mouth is incredibly valuable for a new and small podcast like this one.

Speaker 3 So thanks for watching.