David Deutsch - AI, America, Fun, & Bayes

1h 24m

David Deutsch is the founder of the field of quantum computing and the author of The Beginning of Infinity and The Fabric of Reality.

Read me contra David on AI.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

Read the full transcript with helpful links here.

Follow David on Twitter. Follow me on Twitter for updates on future podcasts.


Timestamps

(0:00:00) - Will AIs be smarter than humans? 

(0:06:34) - Are intelligence differences immutable / heritable?

(0:20:13) - IQ correlation of twins separated at birth

(0:27:12) - Do animals have bounded creativity?

(0:33:32) - How powerful can narrow AIs be?

(0:36:59) - Could you implant thoughts in VR?

(0:38:49) - Can you simulate the whole universe?

(0:41:23) - Are some interesting problems insoluble?

(0:44:59) - Does America fail Popper's Criterion?

(0:50:01) - Does finite matter mean there's no beginning of infinity?

(0:53:16) - The Great Stagnation

(0:55:34) - Changes in epistemic status in Popperianism

(0:59:29) - Open ended science vs gain of function

(1:02:54) - Contra Tyler Cowen on civilizational lifespan

(1:07:20) - Fun criterion

(1:14:16) - Does AGI through evolution require suffering?

(1:18:01) - Would David enter the Experience Machine?

(1:20:09) - (Against) Advice for young people



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe


Transcript

Speaker 1 Okay, today I'm speaking with David Deutsch. Now, this is a conversation that I've been eagerly wanting to have for years, so this is very exciting for me.
So, first, let's talk about AI.

Speaker 1 Can you briefly explain why you anticipate that AIs will be no more fundamentally intelligent than humans?

Speaker 2 I suppose you mean AGIs.

Speaker 1 Yes.

Speaker 2 And

Speaker 2 by fundamentally intelligent, I suppose you mean

Speaker 2 capable of all the same types of cognition as humans are in principle.

Speaker 1 Yes.

Speaker 2 So that would include, you know,

Speaker 2 doing science and doing art and

Speaker 2 in principle also falling in love and

Speaker 2 being good and being evil and all that. So, the reason is twofold: one half is about computational hardware and the other is about software. So if we take the hardware:

Speaker 2 we know that our brains are Turing-complete pieces of hardware, and therefore can run the program for any computable function.

Speaker 2 Now, when I say any,

Speaker 2 I don't really mean any because you and I sitting here, you know, we're having a conversation and we could say, you know, we could have any conversation. Well,

Speaker 2 we can assume that maybe in a hundred years' time we'll both be dead and therefore the number of conversations we could have is strictly limited.

Speaker 2 And also, some conversations depend on speed of computation.

Speaker 2 So, you know, if we're going to be solving the traveling salesman problem,

Speaker 2 then

Speaker 2 there are many traveling salesman problems that we wouldn't be able to solve in the age of the universe.
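A minimal sketch of the scale involved, assuming a hypothetical machine that checks 10^18 tours per second: the number of distinct round trips through n cities is (n-1)!/2, so brute force fails long before n looks large.

```python
# Brute-force travelling salesman: (n - 1)! / 2 distinct round trips.
# Assumed figures: a machine doing 1e18 tour evaluations per second and an
# age of the universe of roughly 4.3e17 seconds.
import math

EVALS_PER_SECOND = 1e18
AGE_OF_UNIVERSE_S = 4.3e17

for n in (20, 30, 40):
    tours = math.factorial(n - 1) // 2
    seconds = tours / EVALS_PER_SECOND
    print(f"{n} cities: {tours:.2e} tours, "
          f"{seconds / AGE_OF_UNIVERSE_S:.2e} ages of the universe")
```

Twenty cities finish in a fraction of a second; forty already need billions of ages of the universe, which is the sense in which the limit is speed rather than computability.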

Speaker 2 So

Speaker 2 when I say any,

Speaker 2 what I mean is that we're not limited in the programs we can run apart from by speed and memory capacity. So all limitations on us, hardware limitations on us, boil down to speed and memory capacity.

Speaker 2 And both of those can be augmented to the level of any other entity that is in the universe.

Speaker 2 Because if somebody builds a computer that can think faster than the brain, then we can use that very computer or that very technology to make our thinking go just as fast as that.

Speaker 2 So that's the hardware.

Speaker 2 As far as explanations go,

Speaker 2 can we reach the same kind of explanations as any other entity? Let's say,

Speaker 2 usually this is said not in terms of AGIs, but in terms of

Speaker 2 extraterrestrial intelligences. But also, it's said about AGIs, you know, what if they are to us as we are to ants

Speaker 2 and so on? Well, again, part of that is just hardware, which is easily fixable by adding more hardware. So let's forget about that.

Speaker 2 So, really, the idea is:

Speaker 2 are there

Speaker 2 concepts that we are inherently incapable of comprehending? I think Martin Rees believes this.

Speaker 2 He thinks that

Speaker 2 we can comprehend quantum mechanics, apes can't, and maybe the extraterrestrials can comprehend

Speaker 2 something

Speaker 2 beyond quantum mechanics, which we can't comprehend. And no amount of brain add-ons with extra hardware can give us that because they have

Speaker 2 hardware that is adapted to having these concepts, which we haven't. The same kind of thing is said about maybe certain qualia,

Speaker 2 that maybe we can experience love and an AGI couldn't experience love because it has to do with our hardware, not just memory and speed, but specialized hardware. And

Speaker 2 I think that falls victim to the same argument. The thing is, this specialized hardware can't be anything except a computer.

Speaker 2 And if there's hardware that

Speaker 2 is needed for love, let's say that somebody is born without that hardware, then that hardware, that bit of the brain that does love or that does mathematical insight or whatever, is just a bit of the brain and it's connected to the rest of the brain.

Speaker 2 in the same way that the other part of the brain is connected to the rest of the brain, namely by neurons

Speaker 2 passing electrical signals and by chemicals whose concentrations are altered and so on. So, therefore, an artificial device

Speaker 2 that computed which signals were to be sent and which

Speaker 2 chemicals were to be adjusted

Speaker 2 could do the same job and it would be indistinguishable. And therefore, a person augmented with one of those who couldn't feel love could feel love after that augmentation.

Speaker 2 So those are, I think, the only two relevant things. And that's why I think

Speaker 2 that AGIs and humans have the same range in the sense I've defined.

Speaker 1 Okay, interesting.

Speaker 1 Okay, so

Speaker 1 I think the software question is more immediately interesting than the hardware one, but I do want to take issue with

Speaker 1 the idea that the memory and speed of human brains can be arbitrarily and easily expanded, but we can get into that later.

Speaker 1 We can just start with this question. Can all humans explain everything that even the smartest humans can explain, right?

Speaker 1 So if I took the village idiot and I asked him to create the theory of quantum computing,

Speaker 1 should I anticipate that if he wanted to, he could do this? And just for frame of reference, about 21 to 24% of Americans on the National Adult Literacy Survey fall in level one, which means that they can't even perform basic tasks like identifying the expiry date of a driver's license, for example, or totaling a bank deposit slip. So

Speaker 1 are these humans capable of explaining quantum computing or creating the Deutsch-Jozsa algorithm? And if they're not capable of doing this,

Speaker 1 doesn't that mean that the theory of universal explainers falls apart?

Speaker 2 Well, so these tasks that you're talking about are tasks that no ape could do.

Speaker 2 However, there are humans who are brain damaged to the extent that they can't even do the tasks that an ape can do. And there comes a point when

Speaker 2 installing the program that would be able to read a driver's license or whatever would require augmenting their hardware as well as their software.

Speaker 2 We don't know that much, we don't know enough about the brain yet. But for some of the people that you're talking about, if it's 24% of the population, then it's definitely not hardware. So

Speaker 2 I would say that for those people, it's definitely software.

Speaker 2 If it was hardware, then

Speaker 2 getting them to do this would be a matter of repairing the

Speaker 2 imperfect hardware. If it's software, it is not just a matter of them wanting to, or of them wanting to be taught, or whatever. It is a matter of whether the existing software is, what word can I use instead of 'wants to', conceptually ready to do that. For example,

Speaker 2 Brett Hall has often said that

Speaker 2 he would like to speak Mandarin Chinese.

Speaker 2 And so he wants to, but he will never be able to speak Mandarin Chinese because he's never going to want it

Speaker 2 enough to be able to go through the process

Speaker 2 of acquiring that program.

Speaker 2 But there is nothing about his hardware that prevents him learning Mandarin Chinese. And there's nothing about his software either,

Speaker 2 except that, well, what word can we use to say that he doesn't want to go through that process? I mean, he does want to learn it, but he doesn't want to go through the process of being programmed with that program.

Speaker 2 But if his circumstances changed, he might well want to. So for example,

Speaker 2 many of my relatives a couple of generations ago were forced to migrate to very alien places where they had to learn languages that they never thought they would ever speak and never wanted to speak.

Speaker 2 And yet, very quickly, they did speak those languages.

Speaker 2 Again, was it because what they wanted changed?

Speaker 2 In the big picture, perhaps you could say what they wanted changed. So, if the people who can't read driving licenses wanted to be educated to read them, in the sense that my ancestors wanted to learn languages, then yes, they could learn that.

Speaker 2 There is a level of dysfunction below which they couldn't, and I think those are hardware limitations. On the borderline between those two,

Speaker 2 there's not that much difference. It's like, you know, that's like the question of:

Speaker 2 could apes be programmed with a fully human intellect?

Speaker 2 I think the answer to that is yes, although programming them would not require hardware surgery in the sense of repairing a defect, it would require intricate changes at the neuron level to transfer the program from a human mind into the ape's mind.

Speaker 2 I would guess that that is possible, because although the ape has far less memory space than humans do, and also doesn't have certain specialized modules that humans have, neither of those things is a thing that we use to the full anyway.

Speaker 2 I mean, when I'm speaking to you now, there's a lot of knowledge in my brain that I'm not referring to at all, like, you know, the fact that I can

Speaker 2 play the piano or drive a car is not being used in this conversation. So I don't think the fact that we have such a large memory capacity would affect this

Speaker 2 project, although the project would be highly immoral, because you'd be intentionally creating a person inside deficient brain hardware.

Speaker 1 So suppose it's hardware differences that distinguish

Speaker 1 different humans in terms of their intelligence. If it were just up to the people who are not even functionally literate, right? So these are, again, people.
Speaker 2 But wait, wait, wait.

Speaker 2 I said that it could only be hardware at the low level.

Speaker 2 Well, either at the level of brain defects or at the level of using up the whole of our allocation of memory or speed or whatever. Apart from that, I don't think it can be hardware.

Speaker 1 By the way, is hardware synonymous with genetic influences for you? Or can software be genetic too?

Speaker 2 Software can be genetic too,

Speaker 2 though that doesn't mean it's immutable.

Speaker 2 It just means it's there at the beginning.

Speaker 1 Okay.

Speaker 1 The reason I suspect it's not software is because these people happen to be the same people who... well, let's suppose it was software, something that they chose to do or something they could change.

Speaker 1 it's mysterious to me why these people would also choose to accept jobs that have lower pay but are less cognitively demanding, or why they would choose to do worse on academic tests or IQ tests.

Speaker 1 So why they would choose to do exactly the sort of thing somebody who's less cognitively powerful would do.

Speaker 1 It seems the more parsimonious explanation there is just that they are cognitively less powerful.

Speaker 2 Not at all. Why would someone choose not to go to school, for instance, if they were given the choice, and not to have any lessons? Well, there are many reasons why they might choose that.

Speaker 2 Some of them good, some of them bad. And, you know, calling some jobs cognitively demanding is already begging the question, because you're just referring to a choice that people make, which I think is a software choice, as being by definition forced on them by hardware. It's not that they're cognitively deficient, it's just that they don't want to do it. The same way,

Speaker 2 if there was a culture that required Brett Hall to

Speaker 2 be able to speak fluent Mandarin Chinese in order to do a wide range of tasks, and if he didn't know Mandarin Chinese, he'd be relegated to low-level tasks, then he would be

Speaker 2 choosing the low-level tasks rather than the quote-unquote cognitively demanding tasks. But it's only the culture that assigns a hardware interpretation to the difficulty of doing that task.

Speaker 1 Right. I mean, it doesn't seem that arbitrary to say that the kind of jobs you could do sitting down on a laptop

Speaker 1 probably require more cognition than the ones you can do on a construction site. And

Speaker 1 if it's not cognition that distinguishes them, if there's not something like intelligence or cognition or whatever you want to call it that is measured both by these literacy tests and by what you're doing at your job, then what is the explanation for the anti-correlation between people who are not functionally literate and people who are, let's say, programmers?

Speaker 1 Like, I guarantee you, people working at Apple, all of them are above level one on this literacy survey.

Speaker 1 Why do they just happen to make the same choices? Why is that their correlation?

Speaker 2 Well, there are correlations everywhere. And the culture is built in order to make use of certain abilities that people have. So

Speaker 2 if you're setting up a company that is going to employ 10,000 employees, then it's best to arrange the way the company works accordingly. You know, it's best, for example, to make the signs above the doors or the signs on the doors or the numbers on the dials all be ones that people in that culture who are highly educated can read.

Speaker 2 You could, in principle, make each label on each door a different language. I don't know, you know, there are thousands of human languages.

Speaker 2 Let's say there are 5,000 languages and 5,000 doors in the company. You could, given the same meaning, make them all different languages.

Speaker 2 The reason that they're all the same language, and what's more, not just any old language, it's a language that many educated people know fluently. That's why.
And then you can misinterpret that as saying, oh, there is some hardware reason why everybody speaks the same language. Well, no, there isn't. It's a cultural reason.

Speaker 1 Okay. So if the culture was different somehow, maybe if there was some other way of communicating ideas,

Speaker 1 do you think that the people who are currently designated as not functionally literate could be in a position to learn about quantum computing, for example?

Speaker 1 And if they made the right choices, or not the right choices, but the choices that could lead to them understanding quantum computing?

Speaker 2 Well,

Speaker 2 So, I don't want to evade the question. The answer is yes, but the way you put it, again, rather begs the question. It's not only language that is like this, it's all knowledge.

Speaker 2 So, take just learning the language. Quantum computing is a field in which English is the standard language; it used to be German, now it's English. Now, someone who doesn't know English is at a disadvantage learning about quantum computers, but not only because of their deficiency in language.

Speaker 2 If they come from a culture in which the culture of physics and of mathematics and

Speaker 2 of logic and so on

Speaker 2 is equivalent, and only the language is different, then if they just learn the language, they will find it as easy as anyone else.

Speaker 2 But if a whole load of things are different, if a person doesn't think in terms of, for example, logic, but thinks in terms of pride and manliness and fear and all sorts of concepts that fill the lives of, let's say, prehistoric people

Speaker 2 or pre-Enlightenment people, then to be able to understand quantum computers, they would have to learn a lot more than just the language of the civilization. They'd have to learn, well, not all, but a range of other features of the civilization. And on that basis, the people who can't read driving licenses are similarly in a different culture, which they would also have to learn if they are to increase their IQ, i.e., their ability to function at a high level in the intellectual culture of our civilization.

Speaker 2 If they did, they would be able to.

Speaker 1 Okay, so if it's those kinds of differences, then how do you explain the fact that with identical twins separated at birth and adopted by different families, most of the variance in IQ that does exist between humans doesn't exist between the twins?

Speaker 1 In fact, the correlation is 0.8, which is the correlation that you would have when you took the test on different days, like depending on how good a day you were having.
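A minimal sketch of what that 0.8 figure means under classical test theory, with assumed variances: if an observed score is a true score plus independent day-to-day noise, the retest correlation is var_true / (var_true + var_noise), so r = 0.8 corresponds to noise variance one quarter of the true-score variance.

```python
# Classical test theory sketch: observed IQ = true score + day-to-day noise.
# With true-score variance 1 and noise variance 0.25, the expected
# test-retest correlation is 1 / (1 + 0.25) = 0.8.
# (statistics.correlation requires Python 3.10+.)
import random
import statistics

rng = random.Random(0)
true_scores = [rng.gauss(0, 1) for _ in range(100_000)]

def testing_day(scores, noise_sd=0.5):
    """One administration of the test: true score plus that day's noise."""
    return [t + rng.gauss(0, noise_sd) for t in scores]

day1 = testing_day(true_scores)
day2 = testing_day(true_scores)
print(f"test-retest r = {statistics.correlation(day1, day2):.2f}")  # ~0.80
```

On this model, a between-twin correlation of 0.8 sits at the measurement ceiling: the test's own noise hides any remaining difference.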

Speaker 1 And these are people who are adopted by families who have different cultures, who are often in different countries.

Speaker 1 Yet, in fact,

Speaker 1 a hardware theory explains very well why they would have similar scores on IQ tests, which are themselves correlated with literacy and job performance and so on.

Speaker 1 Whereas I don't know how the software theory would explain the effect of being adopted by different families.

Speaker 2 Well, the hardware theory explains it in the sense that it might be true that it's hardware.

Speaker 2 So

Speaker 2 it doesn't have an explanation beyond that, and nor does the software theory.

Speaker 2 Sorry, go on.

Speaker 1 I mean, there are actually differences at the level of the brain that are correlated with IQ, right? So actual skull size has something like a 0.3 correlation with IQ. There are a few more like this. They don't explain the entire variance in human intelligence, or the entire genetic variance in human intelligence, but we have identified a few actual hardware differences that correlate with IQ.

Speaker 2 Suppose, on the contrary,

Speaker 2 suppose that the results of these experiments had been different.

Speaker 2 Suppose that the result was

Speaker 2 that

Speaker 2 people

Speaker 2 who

Speaker 2 are brought up in the same family and differ only

Speaker 2 in

Speaker 2 the amount of hair they have,

Speaker 2 or in the amount in their appearance in any other way, that

Speaker 2 none of those differences make any difference to their IQ.

Speaker 2 Only who their parents were makes a difference. Now, wouldn't that be surprising?

Speaker 2 Wouldn't it be surprising that there's nothing else correlated with IQ other than who your parents are?

Speaker 2 Yes.

Speaker 2 Now,

Speaker 2 how much correlation should we expect? There are correlations everywhere. There are these things on the internet, joke memes or whatever you call them, but they make a serious point, where they correlate things like how many adventure movies have been made in a given year with GNP per capita. And that's a bad example because there's an obvious relation, but you know what I mean.

Speaker 2 It's like the number of films made by a particular actor against the number of outbreaks of bird flu.

Speaker 2 And part of being surprised by randomness is the fact that correlations are everywhere.

Speaker 2 It's not just that correlation isn't causation. It's that correlations are everywhere.
It's not a rare event to get a correlation between two things. And the more things you ask about, the more correlations you are going to get.
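A minimal sketch of that claim, with made-up parameters: generate mutually independent random-walk series, the shape many yearly statistics take, and count how many pairs look strongly correlated by chance alone.

```python
# Independent random walks "correlate" strongly by chance all the time.
# (statistics.correlation requires Python 3.10+.)
import itertools
import random
import statistics

rng = random.Random(0)

def random_walk(length=50):
    """A cumulative sum of Gaussian steps, like many yearly statistics."""
    total, out = 0.0, []
    for _ in range(length):
        total += rng.gauss(0, 1)
        out.append(total)
    return out

series = [random_walk() for _ in range(40)]
pairs = list(itertools.combinations(series, 2))
strong = sum(1 for a, b in pairs if abs(statistics.correlation(a, b)) > 0.8)
print(f"{strong} of {len(pairs)} unrelated pairs have |r| > 0.8")
```

The exact count depends on the seed, but it is reliably far from zero: with 40 series there are 780 pairs, so even a modest per-pair chance produces many striking-looking correlations.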

Speaker 2 So it's not... what is surprising is that the things that are correlated are things that you expect to be correlated and measure. For example, when they do these twin studies and measure the IQ, they control for certain things.

Speaker 2 And like you said, identical twins have got to be reared together or apart, yes.

Speaker 2 But there are infinitely more things that they don't control for.

Speaker 2 So

Speaker 2 it could be that the real determinant of

Speaker 2 IQ is, for example, how well a child is treated between the ages of three and a half and four and a half, where well is defined by something that we don't know yet, but you know, something like that.

Speaker 2 Then you would expect that thing, which we don't know about and which nobody has bothered to control for in these experiments, to be correlated with IQ.

Speaker 2 But unfortunately, that thing is also correlated with whether someone's an identical twin or not.

Speaker 2 So it's not the identical twinness that is causing the similarity, it's this other thing. Right. This other thing could be, say, an aspect of appearance or something. And if you knew what it was and surgically changed a person accordingly, you would be able to have the same effect as making an identical twin would have.

Speaker 1 Right. But I mean, as you say, in science, or in explaining any phenomenon, there are an infinite number of possible explanations, right? You've got to pick the best one.

Speaker 1 So it could be that there's some unknown trait which is so obvious to different adoptive parents that they can use it as a basis for discrimination or for different treatment.

Speaker 2 But that is, I mean, I would assume they don't know what it is.

Speaker 1 But then aren't they using it as a basis to treat kids differently at the age of three?

Speaker 2 Not by consciously identifying it.

Speaker 2 It's like it would be something like getting the idea that this child is really smart.

Speaker 2 But I'm just trying to show you that it could be something that the parents are not aware of.

Speaker 2 If you ask parents to list the traits in their children that cause them to behave differently towards their children, they might list like 10 traits, but then there are another thousand traits that they're not aware of, which also affect their behavior.

Speaker 1 So, we'd first need an explanation for what this trait is that researchers have not been able to identify, but that is so obvious that, even unconsciously, parents are able to reliably use it...

Speaker 2 It doesn't have to be obvious at all, because parents

Speaker 2 have a huge amount of information about their children, which they are

Speaker 2 processing in their minds. And

Speaker 2 most of it, they don't know what it is.

Speaker 1 Okay, all right.

Speaker 1 Okay, so I guess let's leave this topic aside for now, and then let me bring us to animals. So, suppose creativity, or, you know, the capacity to create explanations, is something that doesn't exist in increments.

Speaker 1 You can just use a simple example, go on YouTube and look up cat opening a door, right? So, you'll see, for example,

Speaker 1 a cat develops a theory that applying torque to this handle, to this metal thing, will open a door.

Speaker 1 Now, what it'll do is, it'll climb onto a countertop and it'll jump on top of that door handle. It hasn't seen another cat do it.

Speaker 1 It hasn't seen another human like get on a countertop and try to open the door that way. But it conjectures that this is a way,

Speaker 1 given its morphology, that it can access the door.

Speaker 1 And then, you know, so that's its theory. And then the experiment is, will the door open?

Speaker 1 This seems like a classic cycle of conjecture and refutation.

Speaker 1 Is this compatible with the cat not having at least some bounded form of creativity?

Speaker 2 I think it's perfectly compatible.

Speaker 2 So

Speaker 2 animals are amazing things, and

Speaker 2 instinctive animal knowledge is

Speaker 2 designed to make animals easily capable of

Speaker 2 thriving in environments that they've never seen before.

Speaker 2 In fact,

Speaker 2 if you go down to the level of detail,

Speaker 2 animals have never seen the environment before. I mean, maybe a goldfish in a goldfish bowl

Speaker 2 might have, but

Speaker 2 when

Speaker 2 a wolf runs through the forest, it sees a pattern of trees that it has never seen before, and it has to

Speaker 2 create strategies

Speaker 2 for avoiding each tree. And not only that, for

Speaker 2 actually catching the rabbit that it's running after as well, in a way that has never been done before.

Speaker 2 So, the way to understand this, I think, is that it's because of a vast amount of knowledge that is in the wolf's genes.

Speaker 2 What kind of knowledge is this? Well, it's not the kind of knowledge that says first turn left, then turn right, then jump, and so on. It's not that kind of instruction.

Speaker 2 It's an instruction that takes input from the outside and then generates

Speaker 2 a behavior that is relevant to that input.

Speaker 2 It doesn't involve creativity, but it involves a degree of sophistication in the program that human robotics has not yet come anywhere near.

Speaker 2 And by the way, then when it sees a wolf of the opposite sex, it may decide to leave the rabbit and go and have sex instead. And a program for a robot to

Speaker 2 locate another robot of the right species and then have sex with it is, again,

Speaker 2 I think, beyond present-day robotics. But

Speaker 2 it will be done

Speaker 2 and it does not, it clearly does not require creativity because

Speaker 2 that same program will lead the next wolf to do the same thing in the same circumstances. The fact that the circumstances are ones that it's never seen before and it can still function, is a

Speaker 2 testimony to the incredible sophistication of that program, but it has nothing to do with creativity.

Speaker 2 So

Speaker 2 humans do

Speaker 2 tasks that require much, much less

Speaker 2 programming sophistication than that, such as sitting around a campfire telling each other a scary story about a wolf that almost ate them. Now,

Speaker 2 animals can do the wolf running away thing.

Speaker 2 They can enact a story that's more complicated even than the one the human is telling, but they can't tell a story. They don't tell a story.

Speaker 2 Telling a story is a sort of typical creative activity. It's the same kind of activity as forming an explanation.

Speaker 2 So I don't think it's at all surprising that cats can jump on handles. I can easily imagine that the same amazingly sophisticated program that lets a cat jump on a branch, so that the branch will get out of its way in some sense, will also function in this new environment that it's never seen before. But there are all sorts of other things that it can't do.

Speaker 1 Oh, that's definitely true, which was my point: it has a bounded form of creativity. And if bounded forms of creativity can exist, then humans could be in one such bound.

Speaker 1 But I'm having a hard time imagining the ancestral circumstance in which a cat could have gained genetic knowledge that jumping on a metal rod would get a wooden plank to open and give it access to the other side.

Speaker 2 Well, I thought I just gave an example. I mean, we don't know, at least I don't know, what kind of environment the ancestor of the domestic cat lived in.

Speaker 2 But if it was, for example, if it contained undergrowth,

Speaker 2 then

Speaker 2 dealing with undergrowth requires some very sophisticated programs. Otherwise, you will just get stuck somewhere and starve to death.
Now, I think a dog,

Speaker 2 if it gets stuck in a bush,

Speaker 2 it has no program to get out other than just shaking itself about until it gets out. It doesn't have a concept of

Speaker 2 doing something which temporarily makes matters worse and then allows you to get out.

Speaker 2 I think dogs can't do that. But it's just, it's not because that's a particularly complicated thing, it's just that its programming just doesn't have that.

Speaker 2 But an animal's programming easily could have that, if it lived in an environment in which that happened a lot.

Speaker 1 Is your theory of AI compatible with AIs that have narrow objective functions, but functions which, if fulfilled, would give the creator of the AI a lot of power? So if, for example, I wrote a deep learning program, trained it over financial history, and asked it to make me a trillion dollars on the stock market, do you think that this would be impossible? And if you think this would be possible, then, it seems like, I know it's not an AGI, but it seems like a very powerful AI, right?

Speaker 1 So it seems like AI is getting somewhere.

Speaker 2 Yeah, well, if you want to be powerful,

Speaker 2 you might do better inventing a weapon or something,

Speaker 2 but a better mousetrap is even better because it's non-violent. So you can invent a paperclip, to use an example that's often used in this context. If paperclips hadn't been invented, you could invent the paperclip and make a fortune.

Speaker 2 And that's an idea. But it's not an AI, because it's not the paperclip that's going out there. It's really your idea in the first place that has created the whole value of the paperclip.

Speaker 2 And similarly, if you invent a dumb arbitrage machine which seeks out complicated trades to make, trades more complicated than anyone else is trying to do, and that makes you a fortune: well, the thing that made you a fortune was not the arbitrage machine. It was your idea for

Speaker 2 how to search for arbitrage opportunities that no one else sees. Right.
That's what was valuable. And that's the usual way of making money in the economy.
You have an idea and then you implement it.

Speaker 2 Right.

Speaker 2 That it was an AI is beside the point. It could have been a paperclip.

Speaker 1 But the thing is, so the models that are used nowadays are

Speaker 1 not expert systems like the chess engines of the 90s. They're, you know, something like AlphaZero or AlphaGo.
This is just like almost a blank neural net.

Speaker 1 And with that, they were able to, you know, have it win at Go.

Speaker 1 So if you took such a neural network that was kind of blank, and just threw financial history at it, wouldn't it be fair to say that the AI actually figured out what the right trades were, even though it's not a general intelligence?

Speaker 2 Well, I think it's possible in chess,

Speaker 2 but not in the economy, because the value in the economy is being created by creativity.

Speaker 2 And most, you know, arbitrage is one thing that can sort of skim value off the top by taking opportunities that were too expensive for other people to take.

Speaker 2 So you can, you know, you can make money, you can make a lot of money.

Speaker 2 that way, if you have a good idea about how to do it. But most of the value in the economy is created by the creation of knowledge.

Speaker 2 Somebody has the idea that a smartphone would be good to have, even though

Speaker 2 most people think that that's not going to work. And that idea cannot be anticipated by anything less than an AGI.

Speaker 2 An AGI could have that idea, but no AI could.

Speaker 1 Okay.

Speaker 1 So there's definitely other topics I want to get to. So let's talk about virtual reality.

Speaker 1 So in The Fabric of Reality, you discussed the possibility that virtual reality generators could plug in directly into our nervous system and give us sense data that way.

Speaker 1 Now, as you might know, many meditators, you know, people like Sam Harris, speak of

Speaker 1 both thoughts and senses as intrusions into consciousness that have a sort of similar, they can be welcome intrusions, but they are both things that come into consciousness. So,

Speaker 1 do you think

Speaker 1 that a virtual reality generator could also place thoughts, as well as sense data, into the mind?

Speaker 2 Yes, but that's only because I think that this model is wrong. It's basically the Cartesian theater, as Daniel Dennett puts it,

Speaker 2 with the stage cleared of all the characters. So

Speaker 2 that's consciousness, pure consciousness without content, as Sam Harris envisages it. But I think that all that's happening there is that you are conscious of this theater

Speaker 2 and you're envisaging it as having certain properties, which by the way, it doesn't have, but that doesn't matter. We can imagine lots of things that don't happen.

Speaker 2 You know, in fact, you know, that's in a way characterizes what we do all the time.

Speaker 2 So

Speaker 2 one can interpret one's

Speaker 2 thoughts about this empty stage as being thoughts about nothing. One can interpret

Speaker 2 the actual hardware of the stage that one is imagining as being pure, contentless consciousness, but it's not. It has the content of a stage or

Speaker 2 a space or, you know, however you want to envisage it.

Speaker 1 Okay. And then let's talk about the Turing principle.
So this is a term you coined; it's otherwise been called the Church-Turing-Deutsch principle. By the way, it states that a universal computer can simulate any physical process.

Speaker 1 Would this principle imply that you could simulate the whole of the universe, for example, in a compact efficient computer that was smaller than the universe itself?

Speaker 1 Or is it constrained to physical processes of a certain size?

Speaker 2 Again, no, it couldn't simulate the whole universe. That would be an example of a task where it was computationally able to do it, but it wouldn't have enough memory or time.

Speaker 2 So, the more memory and time you gave it, the more closely it could simulate the whole universe, but it couldn't ever simulate the whole universe, or anything near the whole universe, probably. Because, well,

Speaker 2 If you want it to simulate itself as well, then there are logical reasons why there are limits to that.

Speaker 2 But even if you wanted it to simulate the whole universe apart from itself, just the sheer size of the universe makes that

Speaker 2 impossible. Even if we discovered ways of

Speaker 2 encoding information extremely densely, like some people have said, maybe quantum gravity would allow

Speaker 2 a totally amazing

Speaker 2 density of information, it still couldn't simulate the universe because that would mean, because of the universality of the laws of physics, that would mean that the rest of the universe also was that complex because quantum gravity applies to the whole rest of the universe as well.

Speaker 2 But I think it's significant to separate being limited by the available time and memory from being limited by computational capacity, because it's only when you separate those that you realize what computational universality is. And I think that universality, Turing universality or quantum universality, is the most important thing in the theory of computation, because computation doesn't even make sense unless you have a concept of a universal computer.

Speaker 1 What could falsify your theory that all interesting problems are soluble?

Speaker 1 So I ask this because, as I'm sure you know, there are people who have tried offering explanations for why certain problems or questions like why is there something rather than nothing, or how could mere physical interactions explain consciousness.

Speaker 1 They've offered explanations for why these problems are in principle insoluble. Now, I'm not convinced they're right, but do you have a strong reason for, in principle, believing that they're wrong?

Speaker 2 No.

Speaker 2 So,

Speaker 2 this is a philosophical theory and could not be proved wrong by experiment.

Speaker 2 However, I think I have a good argument for why they aren't, namely that each individual case of this is a bad explanation. So

Speaker 2 let's say that some people say, for example, that simulating a human brain is impossible. Now, I can't prove that it's possible.

Speaker 2 Nobody can prove that it's possible until they actually do it, or unless they have a design for it, which they prove will work. So

Speaker 2 pending that, there is no way of proving that it's not true that this is a fundamental limitation. But the trouble with that idea, that it is a fundamental limitation, is that it could be applied to anything.

Speaker 2 For example, it could be applied to the theory that you have recently, just a minute ago, been replaced by a humanoid robot, which is going to say, for the next few minutes, just a pre-arranged set of things, and you're no longer a person.

Speaker 1 I can't believe you figured it out.

Speaker 2 Yeah, well, that's the first thing you'd say. So there is no way to refute that by experiment, short of actually doing it, short of actually talking to you and so on. So it's the same with all these other things.

Speaker 2 In order for it to make sense to have a theory that something is impossible, you have to have an explanation for why it is impossible.

Speaker 2 So we know that, for example, almost all mathematical propositions are undecidable.

Speaker 2 So

Speaker 2 that's not because somebody has said, oh, maybe, maybe we can't decide everything because

Speaker 2 thinking we could decide everything is hubris. That's not an argument.

Speaker 2 You need an actual functional argument to prove that that is so. And then, it being a functional argument in which the steps of the argument make sense and relate to other things and so on, you can then say, well, what does this actually mean? Does this mean that maybe we can never understand the laws of physics?

Speaker 2 Well, it doesn't, because if the laws of physics included an undecidable function, then we would simply write f of x, and f of x is an undecidable function. We couldn't evaluate f of x.

Speaker 2 It would limit our ability to make predictions. But then,

Speaker 2 our ability to make predictions is totally limited anyway. But it would not affect our ability to understand the properties of the function f and therefore the properties of the physical world.
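One standard worked example of such an undecidable f (an illustration added here; the Busy Beaver function is not an example Deutsch uses): let BB(n) be the maximum number of steps any halting n-state Turing machine runs for, and imagine a law in which some physical quantity is

```latex
\[
  \tau(n) \;=\; \mathrm{BB}(n)\, t_{0}.
\]
% BB is uncomputable, so \tau(n) cannot be evaluated in general; yet its
% properties are provable, e.g. BB eventually dominates every total
% computable function g:
\[
  \forall g \;\exists N \;\forall n > N :\; \mathrm{BB}(n) > g(n).
\]
```

We could still understand such a law, including proving why it forbids prediction, which is exactly the distinction drawn here between evaluating f and understanding the properties of f.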

Speaker 1 Okay, is a system of government like America's, which has distributed powers and checks and balances, incompatible with Popper's criterion?

Speaker 1 So, the reason I ask is the last administration had a theory that if you build a wall, there will be positive consequences.

Speaker 1 And, you know, that theory could have been tested, and then the person could have been evaluated on whether that theory succeeded.

Speaker 1 But because our system of government has distributed powers, you know, Congress opposed the testing of that theory, and so it was never tested.

Speaker 1 So, if the American government wanted to fulfill Popper's criterion, would we need to give the president more power, for example?

Speaker 2 Um, it's not as simple as that. So, I agree that this is a big defect in the American system of government.
No country has a system of government that perfectly fulfills Popper's

Speaker 2 criterion.

Speaker 2 We can always improve. I think the British one is actually the best in the world, and it's far from optimal. Making a single change like that is not going to be the answer.

Speaker 2 The constitution of a polity

Speaker 2 is a very complicated thing, much of which is inexplicit. So,

Speaker 2 the founding fathers,

Speaker 2 the American founding fathers, realized they had a tremendous problem.

Speaker 2 What they wanted to do,

Speaker 2 what they thought of themselves as doing, was to implement the British Constitution. In fact, they thought they were the defenders of the British Constitution and that the

Speaker 2 British king had violated it and

Speaker 2 was bringing it down. They wanted to retain it.
The trouble is that, in order to do this, to gain the independence to do it, they had to get rid of the king.

Speaker 2 And then they wondered whether they should get an alternative king. Whichever way they did it, there were problems.

Speaker 2 The way they decided to do it, I think, made for a system that was inherently much worse than the one they were replacing. But they had no choice.

Speaker 2 If they wanted to get rid of a king, they had to have a different system for having a head of state, and it wanted to be democratic. That meant that the president had a legitimacy in legislation that the king never had.

Speaker 2 Oh, sorry, not quite never had it. The king did use to have it in medieval times, but the king,

Speaker 2 by the time of the Enlightenment and so on,

Speaker 2 no longer had full legitimacy to legislate. So

Speaker 2 they had to implement a system where

Speaker 2 him seizing power was prevented by something other than tradition.

Speaker 2 And so they instituted these checks and balances.

Speaker 2 So the whole thing that they instituted was immensely sophisticated. It's an amazing intellectual achievement.
And that it works as well as it does is

Speaker 2 something of a miracle. But the inherent flaws are there.

Speaker 2 And one of them is this, the fact that there are checks and balances means that responsibility is dissipated and nobody is ever to blame for anything in the American system,

Speaker 2 which is terrible.

Speaker 2 In the British system, blame is absolutely focused.

Speaker 2 Everything is sacrificed

Speaker 2 to the

Speaker 2 end

Speaker 2 of focusing blame and responsibility down to

Speaker 2 the government.

Speaker 2 Past the law courts, past the parliament, right to the government.

Speaker 2 That's where it's all focused.

Speaker 2 And

Speaker 2 there are no systems that do that better. But

Speaker 2 as you well know, the British system also has

Speaker 2 flaws. And we recently saw with

Speaker 2 the sequence of events with

Speaker 2 the Brexit referendum, and then Parliament balking at implementing some laws that it didn't agree with. And then

Speaker 2 that being referred to the courts, and so there were the courts, and the parliament, and the government, and the prime minister all blaming each other.

Speaker 2 And there was a sort of mini-constitutional crisis, which could only be resolved by

Speaker 2 having an election and then having a majority government, which, by the mathematics of how the system works, is how it usually is in Britain.

Speaker 2 Although, you know, we have been unlucky several times recently in not having a majority government.

Speaker 1 Okay, so this could be wrong, but it seems to me that in an expanding universe, there will be a finite amount of total matter that will ever exist in our light cone, right?

Speaker 1 There's a limit.

Speaker 1 And that means that there's a limit on the amount of computation that this matter can execute, the amount of energy it can provide, perhaps even the amount of economic value it can sustain, right? So

Speaker 1 it would be weird if the GDP per atom could be arbitrarily large.

Speaker 1 So does this impose some sort of limit on your concept of the beginning of infinity?

Speaker 2 So what you've just recounted is a cosmological theory.

Speaker 2 The universe could be like that, but

Speaker 2 we know very little about cosmology. We know very little about the universe in the large. Like, theories of cosmology are changing on a time scale of about a decade.

Speaker 2 So it doesn't make all that much sense to speculate about what the ultimate asymptotic form of cosmological theories will be. At the same time,

Speaker 2 we don't have a good idea about the asymptotic form of very small things. Like we know that

Speaker 2 our conception of physical processes must break down somehow at the level of quantum gravity, like 10 to the minus 42 seconds and

Speaker 2 that kind of thing.

Speaker 2 But we have no idea what happens below that. Some people say it's got to stop below that, but there's no argument for that at all.

Speaker 2 It's just that we don't know what happens beyond that.

Speaker 2 Now, what happens beyond that may be a finite limit, similarly, the way what happens on a large scale may impose a finite limit, in which case, computation is bounded by a finite limit imposed by the cosmological initial conditions of this universe, which is still different from its being imposed by inherent

Speaker 2 hardware limitations. For example, if there's a finite amount of

Speaker 2 GNP

Speaker 2 available in the distant future, then it's still up to us whether we spend that on

Speaker 2 mathematics or music or

Speaker 2 political systems or

Speaker 2 any of the thousands of even more worthwhile things that have yet to be invented. So it's up to us which ideas we fill the 10 to the 10 to the 10 to the 10 bits with.

Speaker 2 Now,

Speaker 2 my guess is that there are no such limits, but my worldview is not affected by whether there are such limits,

Speaker 2 Because, as I said, it's still up to us what to fill them with.

Speaker 2 And then if we get chopped off at some point in the future, then everything will have been worthwhile up to then.

Speaker 1 Gotcha.

Speaker 1 Okay, so the way I understand your concept of the beginning of infinity, it seems to me that the more knowledge we gain,

Speaker 1 the more knowledge we're in a position to gain. So there should be like an exponential growth of knowledge.

Speaker 1 But if we look at the last 50 years, it seems that there's been a slowdown or decrease in research productivity, economic growth, productivity growth.

Speaker 1 And this seems compatible with the story that, you know, there's a limited amount of fruit on the tree, that we pick the low-hanging fruit, and now there's less and less fruit, and harder and harder fruit to pick.

Speaker 1 And, you know, eventually the orchard will be empty.

Speaker 1 So, do you have an alternative explanation for what's going on in the last 50 years?

Speaker 2 Yes, I think it's very simple.

Speaker 2 There are sociological factors

Speaker 2 in academic life which have

Speaker 2 stultified

Speaker 2 the culture.

Speaker 2 Not totally and not everywhere, but that has been a tendency in what has happened. And it has resulted in a

Speaker 2 loss of productivity

Speaker 2 in many sectors, in many ways, but not in every sector, not in every way. For example, as I've often said, there was a stultification in theoretical physics starting in, let's say, the 1920s, and it still hasn't fully dissipated.

Speaker 2 If it wasn't for that,

Speaker 2 quantum computers would have been invented in the 1930s and built in the 1960s.

Speaker 2 So that is

Speaker 2 just an accidental fact, but

Speaker 2 it just goes to show that there are no guarantees. The fact that

Speaker 2 our horizons are unlimited does not guarantee that we will get anywhere, that we won't start declining tomorrow. I don't think we are currently declining.
I think

Speaker 2 these declines that we see are parochial effects caused by specific mistakes that

Speaker 2 have been made and which can be undone.

Speaker 1 Okay, so

Speaker 1 I want to ask you a question about Bayesianism versus Popperianism. So, one reason why people prefer Bayes is because there seems to be a way of describing

Speaker 1 changes in epistemic status when the relative status of a theory hasn't changed. So, I'll give you an example.
Currently, the many-worlds explanation is the best way to explain quantum mechanics, right? But suppose in the future, we

Speaker 1 were able to build an AGI on a quantum computer and be able to design some clever interference experiment, as you suggest, to have it be able to report back being in a superposition across many worlds.

Speaker 1 Now, it seems that

Speaker 1 even though many worlds remains the best or the only explanation, somehow its epistemic status has changed as a result of the experiment.

Speaker 1 And in Bayesian terms, you could say the credence of this theory has increased. How would you describe these sorts of changes in a Popperian view?

Speaker 2 So,

Speaker 2 what has happened there is that at the moment, we have only one explanation that can't be immediately knocked down.

Speaker 2 If we did that thought experiment,

Speaker 2 we might well decide

Speaker 2 that this will

Speaker 2 provide the ammunition to knock down even

Speaker 2 ideas for alternative explanations that have not been thought of yet.

Speaker 2 I mean, obviously, it wouldn't be enough to knock down every possible explanation because, for a start, we know that quantum theory is false.

Speaker 2 We don't know for sure that the next theory will have many worlds in it. I mean, I think it will, but you know,

Speaker 2 we can't prove anything like that. But

Speaker 2 I would replace the idea of increased credence with the theory that the experiment will provide a quiver full of arrows, a repertoire of arguments that goes beyond the known arguments, the known bad arguments, and will reach into other types of arguments. Because

Speaker 2 the reason I would say that is that

Speaker 2 some of the existing misconceptions about quantum theory reside in misconceptions about

Speaker 2 the methodology of science. Now, I've written a paper about what I think is the right methodology of science, where that doesn't

Speaker 2 apply, but

Speaker 2 many physicists and many philosophers would disagree with that, and they would

Speaker 2 advocate

Speaker 2 a methodology of science that's more

Speaker 2 based on empiricism.

Speaker 2 Of course, I think that empiricism is a mistake and can be knocked down in its own terms. So we shouldn't, but not everybody thinks that.

Speaker 2 Now, once we have an experiment, such as my thought experiment, if that was actually done, then people could not use their arguments based on a fallacious idea of empiricism because their theory would have been refuted even by the standards of empiricism,

Speaker 2 which shouldn't have been needed in the first place.

Speaker 2 But, you know, that's the way I would express it: the repertoire of arguments would become more powerful if that experiment were done successfully.

Speaker 1 The next question I have is:

Speaker 1 how far do you take the principle that open-ended scientific progress is the best way to deal with existential dangers? To give one example, many people have suggested,

Speaker 1 so

Speaker 1 you have something like gain-of-function research, right? And it's conceivable that it could lead to more knowledge about how to stop dangerous pathogens.

Speaker 1 But I guess, at least in Bayesian terms, you could say it seems even more likely that it can lead, or has led, to the spread of a man-made pathogen that would not otherwise have naturally developed. So,

Speaker 1 would your belief in open-ended scientific progress allow us to say, okay, let's stop gain-of-function research?

Speaker 2 No, it wouldn't allow us to say, let's stop it. It might

Speaker 2 make it reasonable to say,

Speaker 2 let us do research into how to make laboratories more secure before we do gain-of-function research. It's really part of the same thing.

Speaker 2 It's like saying,

Speaker 2 let's do research into how to make the plastic hoses through which the reagents pass more impermeable before we actually do the experiments with the reagents. So it's all part of the same experiment.

Speaker 2 I wouldn't want to stop something just because new knowledge might be discovered.

Speaker 2 That's the no-no in my view. But which knowledge we need to discover first, that's the problem of scheduling, which is a non-trivial part of any research and of any learning.

Speaker 1 But would it be conceivable for you to say that until we figure out how to make sure these laboratories are safe to a certain standard,

Speaker 1 we will stop the research as it exists now.

Speaker 1 And then

Speaker 1 meanwhile, we'll focus on doing the other kind of research so that gain-of-function research can restart. But until then, it's not allowed.

Speaker 2 Yes, in principle, that would be reasonable. I don't know enough about the actual situation to have a view.
You know, I don't know how these labs work. I don't know

Speaker 2 what the precautions consist of. And

Speaker 2 when I hear people talking about, for example, a lab leak,

Speaker 2 I think, well, the most likely lab leak is that one of the people who works there walks out of the front door.

Speaker 2 So the leak is not a leak from the lab to the outside. The leak is from the test tube to the person, and then from the person walking out the door. And

Speaker 2 I don't know enough about what these precautions are, or what the state of the art is, to know to what extent the risk is actually minimized. It could be that the culture of these labs is not good enough, in which case it would be part of the next experiment to improve the culture in the labs.

Speaker 2 But

Speaker 2 I am very suspicious of saying that all labs have to stop and meet a criterion, because I'm sure that,

Speaker 2 well, I suspect that the stopping wouldn't be necessary and the criterion wouldn't be appropriate.

Speaker 2 Again, which criterion to use depends on the actual research being done.

Speaker 1 When I had Tyler Cowen on my podcast, I asked him why he thinks that human civilization is only going to be around for 700 more years.

Speaker 1 And so I gave him your rebuttal, or what I understand to be your rebuttal: that creative, optimistic societies will innovate safety technologies faster than totalitarian, static societies can innovate destructive technologies.

Speaker 1 And he responded, you know, maybe, but the cost of destruction is just so much lower than the cost of building.

Speaker 1 And, you know, that trend has been going on for a while now. What happens when a nuke costs $60,000?

Speaker 1 Or what happens if there's a mistake like the kinds that we saw many times over in the Cold War? How would you respond to that?

Speaker 2 First of all, I think we've been getting safer and safer throughout the entire history of civilization.

Speaker 2 There were these plagues that wiped out

Speaker 2 a third of the population of the world or half,

Speaker 2 and it could have been 99% or 100%.

Speaker 2 We went through some kind of

Speaker 2 bottleneck 70,000 years ago, I understand, which they can tell

Speaker 2 from genetics. All our cousin species have been wiped out.

Speaker 2 So

Speaker 2 we were much less safe then than now. Also,

Speaker 2 if a 10-kilometer asteroid had been on target with the Earth at any time

Speaker 2 in the past two million years or whatever it is, the history of the genus Homo, that would have been the end of it. Whereas now,

Speaker 2 it'll just mean higher taxation for a while.

Speaker 2 You know,

Speaker 2 that's how much amazingly safer we are

Speaker 2 now.

Speaker 2 I would never say that it's impossible that we'll destroy ourselves. That would be contrary to the universality of the human mind.
We can make wrong choices.

Speaker 2 We can make so many wrong choices that we'll destroy ourselves.

Speaker 2 And

Speaker 2 on the other hand, the atomic bomb accident sort of thing would have had zero chance of destroying civilization. All it would have done is cause a vast amount of suffering.

Speaker 2 But I don't think we have the technology to end civilization, even if we wanted to.

Speaker 2 I think all we would do if we just deliberately unleashed hell all over the world is we would cause a vast amount of suffering.

Speaker 2 But there would be survivors and they would resolve never to do that again.

Speaker 2 So I don't think we're even able to, let alone that we would do it accidentally. But

Speaker 2 as for the bad guys, well,

Speaker 2 I think we are doing the wrong thing largely in regard to both external and internal threats. But

Speaker 2 I don't think we're doing the wrong thing to an existential risk level. And over the next 700 years or whatever it is, well, I don't want to prophesy because I don't know

Speaker 2 most of the advances that are going to be made in that time. But

Speaker 2 I see no reason why, if we are solving problems,

Speaker 2 we won't go on solving problems.

Speaker 2 I don't think this is like,

Speaker 2 to take another metaphor,

Speaker 2 Nick Bostrom's

Speaker 2 jar with white balls and one black ball: you take out a white ball, and a white ball, and a white ball, and then you hit the black ball, and that's the end of you.

Speaker 2 I don't think it's like that, because every white ball you take out and keep reduces the number of black balls in the jar.

Speaker 2 So

Speaker 2 again,

Speaker 2 I'm not saying that's a law of nature. It could be that the very next ball we take out will be the black one.
That'll be the end of us. It could be.

Speaker 2 But I think all arguments that it will be are fallacious.
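
[A minimal simulation sketch of the two urn models, for illustration only; it is not a model that either Deutsch or Bostrom specifies, and the 99-to-1 ball counts and the 5% per-draw defusal rate are arbitrary assumptions. In a fixed urn, every draw carries the same hazard; in the variant reflecting Deutsch's reply, each safe draw can remove the black ball, so the hazard decays as knowledge accumulates.]

```python
import random

def survival_prob(draws, trials=20_000, knowledge_defuses=False):
    """Estimate the chance of never drawing the black ball.

    Stylized urn: 99 white balls, 1 black ball (arbitrary numbers,
    chosen only for illustration). Draws are with replacement, so in
    the fixed model every draw carries the same hazard. In the variant
    reflecting Deutsch's point, each safe draw has a small chance of
    removing the black ball entirely: knowledge gained defuses danger.
    """
    survived = 0
    for _ in range(trials):
        black = 1
        alive = True
        for _ in range(draws):
            if random.random() < black / 100:
                alive = False  # drew the black ball: catastrophe
                break
            if knowledge_defuses and black and random.random() < 0.05:
                black = 0      # a white ball removed the black one
        survived += alive
    return survived / trials

print(survival_prob(200))                          # fixed urn: ~0.13
print(survival_prob(200, knowledge_defuses=True))  # defusing urn: ~0.83
```

[With these arbitrary numbers, survival over 200 draws comes out around 13% for the fixed urn versus around 83% for the defusing one, which is the shape of Deutsch's disagreement with the metaphor.]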

Speaker 1 I do want to talk about the fun criterion. Is your definition of fun different from how other people define other positive emotions like eudaimonia or well-being or satisfaction?

Speaker 1 Is fun a different emotion?

Speaker 2 I don't think it's an emotion. And

Speaker 2 all these things are

Speaker 2 not very well defined. They can't possibly be very well defined until we have

Speaker 2 a satisfactory theory of qualia, at least, and probably a more satisfactory theory of creativity, of how creativity works, and so on.

Speaker 2 I think that the choice of the word fun is for the thing that I explain more precisely, but still not very precisely, as

Speaker 2 the creation of knowledge where the different kinds of knowledge, inexplicit, unconscious, conscious, explicit, are all in harmony with each other.

Speaker 2 I think that actually

Speaker 2 the only way in which the everyday usage of the word fun differs from that is that fun is considered frivolous, or

Speaker 2 seeking fun is considered as seeking frivolity. But I think that isn't so much a different use of the word.

Speaker 2 It's just a different, pejorative theory about whether this is a good or a bad thing. But nevertheless, I can't define it precisely. The important thing is that there is a thing which has this property of fun:

Speaker 2 you can't compulsorily enact it.

Speaker 2 So, in some views, you know, no pain, no gain. Well, then you can find out mechanically whether the thing is causing pain and whether it's doing it according to the theory that says that you will have gain if you have that pain, and so on.

Speaker 2 So, that can all be done mechanically. And therefore, it is subject to the criticism.
And another way of looking at the fun theory is that it's a mode of criticism.

Speaker 2 It's subject to the criticism that this isn't fun, i.e.,

Speaker 2 this is privileging one kind of knowledge arbitrarily over another, rather than being rational and letting the content decide.

Speaker 1 Is this placing a limitation on universal explainers, then, if they can't create some sort of theory about why a thing could or should be fun, why anything could be fun?

Speaker 1 And

Speaker 1 it seems to me that sometimes we actually can make things fun that aren't. Like, for example, take exercise: no pain, no gain.

Speaker 1 When you first go, it's not fun, but once you start going, you understand the mechanics and develop a theory for why it can and should be fun.

Speaker 2 Yes, yes. Well, that's quite a good example because

Speaker 2 there you see that fun cannot be defined as the absence of pain.

Speaker 2 So you can be having fun

Speaker 2 while experiencing physical pain, and that physical pain is not sparking

Speaker 2 suffering but joy.

Speaker 2 However, there is such a thing as physical pain not sparking joy,

Speaker 2 as Marie Kondo would say.

Speaker 2 And that's important because

Speaker 2 if you are

Speaker 2 dogmatically

Speaker 2 or uncritically

Speaker 2 implementing in your life a theory of the good that involves pain and which excludes the criticism that maybe this can't be fun, or maybe this isn't yet fun, or maybe I should make it fun.

Speaker 2 And if I can't, that's a reason to stop. You know, all those things.
If all those things are excluded, because by definition, the thing is good, and your pain, your suffering doesn't matter,

Speaker 2 then

Speaker 2 that opens the door not only to

Speaker 2 suffering, but to stasis.

Speaker 2 You won't be able to get to a better theory.

Speaker 1 And then why is

Speaker 1 fun central to this instead of another emotion? So, for example, Aristotle thought that a sort of

Speaker 1 widely defined sense of happiness is what should be the goal of our endeavors.

Speaker 1 Why fun instead of something like that?

Speaker 2 Well,

Speaker 2 that's defining it vaguely enough so that what you said might very well be fun. The point is,

Speaker 2 the underlying thing is one level below; to really understand it, we'd need to go about seven levels below that, which we can't do yet. But

Speaker 2 the important thing is that there are several kinds of knowledge in our brains. And the one that is written down in the exercise book that says you should do this number of reps and

Speaker 2 you should power through this, and it doesn't matter what you feel, and so on. That's an explicit theory, and it contains some knowledge, but it also contains error.

Speaker 2 All our knowledge is like that. We also have other knowledge, which is contained in our

Speaker 2 biology; it's contained in our genes.

Speaker 2 We have knowledge that is inexplicit; our knowledge of grammar is always my favorite example. We know why certain sentences are acceptable and why they're unacceptable, but we can't state explicitly,

Speaker 2 in every case, why a sentence is or isn't acceptable.

Speaker 2 And then,

Speaker 2 so there's explicit and inexplicit knowledge, and there's conscious and unconscious knowledge. All those are

Speaker 2 bits of program in the brain, they're ideas,

Speaker 2 they

Speaker 2 are bits of knowledge, if you define knowledge as information with causal power,

Speaker 2 they are all information with causal power. They all contain truth and they all contain error.
And it's always a mistake to shield something, to shield one of them from criticism or replacement.

Speaker 2 Not doing that is what I call the fun criterion. Now, you might say that's a bad name, but you know, it's the best I can find.

Speaker 1 So, why would creating an AGI through evolution necessarily entail suffering?

Speaker 1 Because the way I see it, or it seems to me, your theory is that you need to be a general intelligence in order to feel suffering.

Speaker 1 But by the point that an evolved simulated being is a general intelligence, we can just stop the

Speaker 1 simulation. And so, where's the suffering coming from?

Speaker 2 Okay.

Speaker 2 So, the kind of simulation by evolution that I'm thinking of, I mean,

Speaker 2 there may be several kinds, but the kind that I'm thinking of, and which I said would be the greatest crime in history, is the kind that just simulates the actual evolution of humans from pre-humans that weren't people.

Speaker 2 So you have a population of non-people, which in this simulation would be some kind of NPCs,

Speaker 2 and then

Speaker 2 they would just evolve. We don't know what the criterion would be.

Speaker 2 We just have an artificial universe which simulates the surface of the Earth, and they'd be walking around, and some of them might or might not become people.

Speaker 2 And now, the thing is, when you're part of the way there, what is happening is this:

Speaker 2 the only way that I can imagine the evolution of personhood, or

Speaker 2 explanatory creativity, having happened is that

Speaker 2 the hardware needed for it was first needed for something else. I have proposed that it was needed to transmit memes.

Speaker 2 So there'd be people who were transmitting memes creatively, but they were running out of resources. But they weren't running out of resources

Speaker 2 before

Speaker 2 creativity managed to increase their stock of memes. So

Speaker 2 in every generation, there was a stock of memes that was being passed down to the next generation.

Speaker 2 And once they got beyond a certain complexity, they had to be passed down by the use of creativity by the recipient.

Speaker 2 So

Speaker 2 there may well have been a time, and as I say, I can't think of any other way it could have been, where there was genuine creativity being used, but it ran out of resources very quickly, but not so quickly that it didn't increase the meme bandwidth.

Speaker 2 Then, in the next generation, there was more meme bandwidth, and then, after

Speaker 2 a certain number of generations, there would have been some opportunity to use this

Speaker 2 hardware, or whatever it is, you know, firmware, I expect, to use this firmware for something other than just blindly transmitting memes, or rather, creatively transmitting memes, but they were blind memes.

Speaker 2 So

Speaker 2 in that time,

Speaker 2 it would have been very unpleasant to be alive. It was

Speaker 2 very unpleasant to be alive

Speaker 2 even when we did have enough resources to think as well as do the memes. But

Speaker 2 I don't think there would have been a moment at which you would say, yes,

Speaker 2 now this suffering begins to matter because it's not just blind memes. I think the people were already suffering at the time when they were blindly transmitting memes.
Because they were using

Speaker 2 genuine creativity. They were just not using it to any good effect.

Speaker 1 Gotcha.

Speaker 1 Would being in the experience machine be compatible with the fun criterion? So you're not aware that you're in

Speaker 1 the experience machine, it's all virtual reality,

Speaker 1 but you're still doing the things that would make you have fun, in fact, more so than in the real world.

Speaker 1 So would you be tempted to get into the experience machine? Would it be

Speaker 1 compatible with the fun criterion? I guess there are different questions, but

Speaker 2 I'm not sure what the experience machine is. I mean, if it's just

Speaker 2 so, I mean, is it just a virtual reality world in which things work better than in the real world or something?

Speaker 1 Yeah, so it's a thought experiment by Robert Nozick. And the idea is that you would enter this world, but you would forget that you're in virtual reality.

Speaker 1 So all,

Speaker 1 I mean, the world would be perfect in every possible way that it could be perfect; or not perfect, but better in every possible way it could be better.

Speaker 1 But you would think the relationships you have there are real, the knowledge you're discovering there is novel, and so on.

Speaker 1 Would you be tempted to enter such a world?

Speaker 2 Well, no,

Speaker 2 I certainly wouldn't want to enter a world, any world, which involves erasing the memory that I have come from this world.

Speaker 2 Related to that is the fact that the laws of physics in this virtual world

Speaker 2 couldn't be the true ones, because the true ones aren't yet known.

Speaker 2 So I'd be in a world in which I was trying to learn laws of physics, which aren't the actual laws.

Speaker 2 And they would have been designed by somebody for some purpose to manipulate me, as it were.

Speaker 2 Maybe it would be designed to

Speaker 2 be a puzzle that would take 50 years to solve.

Speaker 2 But it would have to be, by definition, a finite puzzle, and it wouldn't be the actual world. And meanwhile, in the actual world, things are going wrong, and I don't know about it.

Speaker 2 And eventually, they go so wrong that my computer runs out of power.

Speaker 2 And then, where will I be?

Speaker 1 The final question I always like to ask the people I interview is: what advice would you give to young people?

Speaker 1 So, for somebody in their 20s, is there some advice you would give them?

Speaker 2 Well, I try very hard not to give advice because

Speaker 2 it's not a good relationship to be

Speaker 2 in with somebody to give them advice. I can have opinions about things.

Speaker 2 So, for example, I may have an opinion that

Speaker 2 it's dangerous to

Speaker 2 condition your short-term goals by reference to some long-term goal.

Speaker 2 And I have

Speaker 2 what I think is a good epistemological reason for that, namely that

Speaker 2 if your short-term goals are subordinate to your long-term goal, then, if your long-term goal is wrong or deficient in some way, you won't find out until you're dead.

Speaker 2 So

Speaker 2 it's a bad idea because it is subordinating the things that you could error-correct now,

Speaker 2 or in six months' time, or in a year's time, to something that you could only error-correct on a 50-year timescale, and then it'll be too late.
So, I'm

Speaker 2 suspicious of advice of the form "set your goal," and even more suspicious of "make your goal be so-and-so."

Speaker 1 Interesting. So, that's an example of

Speaker 2 advice that isn't advice.

Speaker 1 But why is it,

Speaker 1 why do you think the

Speaker 1 relationship between advisee and advice-giver is dangerous?

Speaker 2 Oh, well, because it's one of authority.

Speaker 2 Again, you know,

Speaker 2 I tried to make this example of, quote, advice that I just gave non-authoritative. I just gave an argument for why certain other arguments are bad.

Speaker 2 So, but if it's advice of the form

Speaker 2 a healthy mind in a healthy body, or

Speaker 2 don't drink coffee before 12 o'clock, or you know, something like that,

Speaker 2 well, it's a non-argument.

Speaker 2 If I have an argument, I can give the argument and not tell the person what to do.

Speaker 2 Who knows what somebody might do with an argument? They might change it to a better argument, which actually implies different behavior.

Speaker 2 I can contribute to the world

Speaker 2 arguments, make arguments as best I can. I don't claim that they are privileged over other arguments.

Speaker 2 I

Speaker 2 just put them out because I think that they work. And I expect other people not to think that they work.
I mean, we've just done this in this very podcast.

Speaker 2 You know, I put out an argument about AI and that kind of thing, and you criticized it.

Speaker 2 Now,

Speaker 2 if I was in the position of

Speaker 2 making that argument and saying that therefore you should do so-and-so,

Speaker 2 that's a relationship of authority, which I think is immoral to have.

Speaker 1 Well, David, thanks so much for coming on the podcast, and thanks so much for doing so much fascinating work.

Speaker 2 Thank you for inviting me.