Fin Moorhouse - Longtermism, Space, & Entrepreneurship

July 27, 2022 2h 19m

Fin Moorhouse is a Research Scholar and assistant to Toby Ord at Oxford University's Future of Humanity Institute. He co-hosts the Hear This Idea podcast, which showcases new thinking in philosophy, the social sciences, and effective altruism.

We discuss for-profit entrepreneurship for altruism, space governance, morality in the multiverse, podcasting, the long reflection, and the Effective Ideas & EA criticism blog prize.

Watch on YouTube. Listen on Spotify, Apple Podcasts, etc.

Episode website + Transcript here. Follow Fin on Twitter. Follow me on Twitter.

Subscribe to find out about future episodes!

Timestamps

(0:00:10) - Introduction

(0:02:45) - EA Prizes & Criticism

(0:09:47) - Longtermism

(0:12:52) - Improving Mental Models

(0:20:50) - EA & Profit vs Nonprofit Entrepreneurship

(0:30:46) - Backtesting EA

(0:35:54) - EA Billionaires

(0:38:32) - EA Decisions & Many Worlds Interpretation

(0:50:46) - EA Talent Search

(0:52:38) - EA & Encouraging Youth

(0:59:17) - Long Reflection

(1:03:56) - Long Term Coordination

(1:21:06) - On Podcasting

(1:23:40) - Audiobooks Imitating Conversation

(1:27:04) - Underappreciated Podcasting Skills

(1:38:08) - Space Governance

(1:42:09) - Space Safety & 1st Principles

(1:46:44) - Von Neumann Probes

(1:50:12) - Space Race & First Strike

(1:51:45) - Space Colonization & AI

(1:56:36) - Building a Startup

(1:59:08) - What is EA Underrating?

(2:10:07) - EA Career Steps

(2:15:16) - Closing Remarks

Please share if you enjoyed this episode! Helps out a ton!



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe


Full Transcript

Today, I have the pleasure of interviewing Fin Moorhouse, who is a Research Scholar at Oxford University's Future of Humanity Institute. He's also an assistant to Toby Ord, and also the co-host of the Hear This Idea podcast. Fin, I know you've got a ton of other projects under your belt.
So do you want to talk about all the different things you're working on and how you got into EA and this kind of research? I think you nailed the broad strokes there. I think yeah, I've kind of failed to specialize in a particular thing.
And so I found myself just dabbling in projects that seem interesting to me, trying to help get some projects off the ground and just doing research on, you know, things which seem maybe underrated. I probably won't bore you with the list of things.
And then, yeah, how did I get into EA? Actually also a fairly boring story, unfortunately. I really loved philosophy.
I really loved kind of pestering people by asking them all these questions, you know, why are you still eating meat? I'd read Peter Singer and Will MacAskill, and I realized I just wasn't actually living these things out myself. I think there's some force of consistency that pushed me into really getting involved. I think the second piece was just the people. I was lucky enough to have this student group where I went to university, and I think there's some dynamic of realizing that this isn't just a kind of free-floating set of ideas, but there's also a community of people I really get on with, who have all these incredibly interesting personalities and interests. So those two things, I think.
Yeah. And then what was the process like? I know a lot of people who are vaguely interested in EA, but not a lot of them then very quickly transitioned to working on research with top EA researchers. So walk me through how you ended up where you are.

Yeah, I think I got lucky with the timing of the pandemic, which is not something I suppose many people can say. I did my degree, and I was quite unsure about what I wanted to do. There was some option of taking a close-to-default path, maybe something like consulting or whatever, and then I was, I guess, forced into this natural break where I had time to step back. I was lucky enough that I could afford to spend a few months just figuring out what I wanted to do with my life, and that space was enough to start reading more about these ideas, and also to try teaching myself skills I hadn't really tried yet, so trying to learn to code for a lot of this time, and so on. And then I just thought, well, I might as well wing it. There were some things I could apply to, and I didn't really rate my chances, but the cost of applying to these things is so low it just seemed worth it. And then, yeah, I guess I got very lucky.
Awesome. Okay.
So let's talk about one of these things you're working on, which is that you've set up, and are going to help judge, these prizes for EA writing. One is five prizes of $100,000 each for blogs that discuss effective altruism related ideas.

Another is five prizes of $20,000 each for criticism of EA ideas.

So to talk more about these prizes,

why is now an important time to be talking about and criticizing EA?

That is a good question.

I want to say I'm reluctant to frame this as me personally.

Okay, for sure.

I certainly have helped set up these initiatives.

I heard on the inside that actually you've been pulling a lot of the weight on these projects.

Certainly, yeah, I've found myself with the time to kind of get these things over the line, which I'm pretty happy with. So yeah, the criticism thing, let's start with that. I want to say something like: in general, being receptive to criticism is just obviously really important. And if as a movement you want to succeed, where succeed means not just achieve things in the world, but also end up having beliefs as close to correct as you can get, then having this property of being anti-fragile with respect to being wrong, really celebrating and endorsing changing your mind in a loud and public way, just seems really important. So I know this is just a kind of prima facie obvious case of wanting to incentivize criticism. But you might also ask, why now? There are a few things going on there. One is I think the effective altruism movement overall has reached a place where it's actually beginning to do a lot of really incredible things.
There are a lot of funders now excited to find fairly ambitious, scalable projects. And so it seems like there's a kind of inflection point: you want to get the criticism out the door and you want to respond to it earlier rather than later, because you want to set the path in the right direction rather than adjust course later, which is more expensive. Will MacAskill made this point a few months ago. You can also point to this dynamic in some other social movements, where the really exciting beliefs have this period of plasticity in the early days, and then they ossify, and you end up with this set of beliefs which it's kind of trendy or socially rewarded to hold. In some sense you feel like you need to hold certain beliefs in order to get credit from certain people, and the costs of publicly questioning some practices or beliefs become too high. That is just a failure mode, and it seems like one of the more salient failure modes for a movement like this. So it just seems really important to be quite proactive about celebrating this dynamic where you notice you're doing something wrong and then you change track. And maybe that means shutting something down, right? You set up a project, the project seems really exciting, you get some feedback back from the world, the feedback looks more negative than you expected, and so you stop doing the project. In some important sense that is a success: you did the correct thing, and it's important to celebrate that. So I think these are some of the things that go through my head. Just framing criticism in this kind of positive way seems pretty important.
Right, right. I mean, analogously, it's said that losses are as important as profits in terms of economic incentives.
And it seems very similar here. In a Slack, we were talking and you mentioned that maybe one of the reasons it's important now is that if a prize of $20,000 can help somebody help us figure out, or not me, I don't have the money, but help SBF figure out how to better allocate like $10 million, that's a steal. It's really impressive that effective altruism is a movement that is willing to fund criticism of itself. I don't know, is there any other example of a movement in history that's been so interested in criticizing itself and becoming anti-fragile in this way?

I guess one thing I want to say is the proof is in the pudding here. It's one thing to make noises to the effect that you're interested in being criticized, and I'm sure lots of movements make that noise; it's another thing to really follow through. And EA is a fairly young movement, so I guess time will tell whether it really does that well. I'm very hopeful. I also want to say that this particular prize is one part of a much bigger thing, hopefully. Is there another example? That's a great question. I actually don't know if I have good answers, but that's not to say that there are none, I'm sure there are. Political liberalism, as a strand of thought in political philosophy, comes to mind as maybe an example. One other random thing I want to mention: you mentioned profits, and doing the maths on what's the EV of investing in just red-teaming an idea, shooting an idea down. I think thinking about the difference between the for-profit and non-profit space is quite an interesting analogy here.
You have this very obvious feedback mechanism in for-profit land, which is you have an idea. No matter how excited you are about the idea, you can very quickly learn whether the world is as excited, which is to say you can just fail.
And that's like a tight, useful feedback loop to figure out whether what you're doing is worth doing.

Those feedback loops don't, by default, exist if you don't expect to get anything back when you're doing these projects.

And so that's like a reason to want to implement those things like artificially.

Like one way you can do this is with charity evaluators, which in some sense impose a kind of market-like mechanism, where now you have an incentive to actually be achieving the thing that you're ostensibly setting out to achieve, because there's this third party that's assessing whether you're achieving it. But I think that framing, I mean we can try saying more about it, but that's a really useful framing, to me anyway.

Yeah. One other reason this seems important to me is that if you have a movement that's about 10 years old like this, well, we have strains of ideas that are thousands of years old that have had significant improvements made to them, improvements that were missing before. So just on that alone, it seems to me that there's reason to expect some mistakes, either at a sort of theoretical level or in the applications. I do have a strong prior that there are such mistakes that could be identified in a reasonable amount of time.

Yeah. I guess one framing that I like as well is not just thinking about "here's a set of claims we have, we want to figure out what's wrong", but that some really good criticism can look like: "look, you just missed this distinction, which is a really important distinction to make", or "you missed this addition to the kind of naive conceptual framework you're using, and it's really important to make that addition". A lot of people are skeptical about progress in non-empirical fields, like philosophy, for instance. It's like, oh, we've been thinking about these questions for thousands of years, but we're still kind of unsure.

And I think that misses a really important kind of progress, which is something you might call conceptual engineering or something, which is finding these really useful distinctions and then building structures on top of them. And so it's not that you're making claims which are necessarily true or false, but there are other kinds of useful criticism, which include just making all kinds of models more useful.

Speaking of making progress on questions like these, one thing that's really surprising to me, and maybe this is just my ignorance of the philosophical history here, is that with a movement like longtermism, at least in its modern form, it took thousands of years of philosophy before somebody had the idea that, oh, the future could be really big, therefore the future matters a lot.
Um, and so maybe you could say like, oh, you know, there's been lots of movements in history that have emphasized, I mean, existential risk maybe wasn't a prominent thing to think about before nuclear weapons, but that have emphasized that civilizational collapse is a very prominent factor that, uh, might be very bad for many centuries. So we should try to make sure society is stable or something.
But, so you have a philosophy background, do you have some sense of what the philosophical background is here? And to the extent that these are relatively new ideas, how did it take so long?

Yeah, that's such a good question. I think one name that comes to mind straight away is this historian called Tom Moynihan, who wrote this book about something like the history of how people think about existential risk, and more recently he's been doing work on the question you asked, which is what took people so long to reach what now seems like a fairly natural thought. I think part of what's going on here is it's really easy to underrate, and I guess it's somewhat related to what I mentioned in the last question, just how much conceptual apparatus we have going on that's a bit like the water we swim in now, and so it's hard to notice. So one example that comes to mind is thinking about probability as this thing we can talk formally about. This is a shockingly new thought. Also the idea that human history might end, and furthermore that that might be within our control, that is, to decide or to prevent that happening prematurely.

These are all really surprisingly new thoughts. I think it just requires a lot of imagination and effort to put yourself into the shoes of people living earlier on, who just didn't have, yeah, like I said, the kind of tools for thinking that make these ideas pop out much more naturally. And of course, as soon as those tools are in place, then the conclusions fall out pretty quickly, but it's not easy.

And I agree, I appreciate that. That actually wasn't an easy one, just because it's such a hard question. Yeah.
So, you know, what's interesting is that more recently, and maybe I'm unaware of the full context of the argument here, I think I've heard Holden Karnofsky write somewhere that he thinks there's more value in thinking about the issues that EA has already identified rather than identifying some sort of unknown risk, like what AI alignment might have been 10 or 20 years ago. Given this historical experience that you can have some very fundamental tools for thinking about the world missing, and consequently miss some very important moral implications, does that imply we should expect something as big or bigger than the space AI alignment occupies in terms of our priorities to come up? Or just generally new tools of thinking, like expected value thinking, for example?

Yeah, that's a good question.
I think one thing I want to say there is that it seems pretty likely that the most important, most useful concepts for finding important things are also going to be the lowest-hanging. And I think it's very roughly correct that, over the course of building out conceptual frameworks, we did in fact pick the most important ideas first, and now we're kind of refining things and adding maybe somewhat more peripheral things. At least if that trend roughly holds, that's a reason for not expecting to find some kind of earth-shattering new concept from left field, although I think that's a very weak and vague argument, to be honest.

I guess it depends on what you think your time span is. If your time span is the entire span of time that humans have been thinking about things, then maybe you would think that actually it's kind of strange that it took 3,000 years, maybe even longer, I guess it depends on when you define the start point. It took, you know, 3,000 years for people to realize, hey, we should think in terms of probabilities and in terms of expected impact. So in that sense, maybe it took 3,000 years of thinking to get to what seems to us like a very important and basic idea.

I feel like I want to say two things. If you imagined lining up every person who ever lived in a row, and then you walked along that line and saw how much progress people have made across it, so you're going across people rather than across time, then I think progress in how people think about stuff looks a lot more linear, and in fact started earlier than you might think by just looking at progress over time. And if it was faster early on, then if you're following the very long-run trend, maybe you should expect not to find these, again, totally left-field ideas soon. But I think a second thing, which is maybe more important, is that I also buy this idea that in some sense progress in thinking about what's most important is really kind of boundless. David Deutsch talks about this kind of thought a lot: when you come up with new ideas, that just generates new problems, new questions, and actually some more ideas. That's all very well and good. I think there's some sense in which one priority now could just be framed as giving us time to make that progress. And even if you thought that we have this kind of boundless capacity to come up with a bunch of new important ideas, it's pretty obvious that that's a prerequisite. And therefore, in some sense, that's a robust argument for thinking that trying not to throw humanity off course, and mitigating some of these catastrophic risks, is always just going to shake out as a pretty important thing to do.
One of the most important things, yeah.

I think that's reasonable. But then there's a question of, even if you think that existential risk is the most important thing, to what extent have you discovered all of, again, that x-risk argument? And by the way, earlier, what you said about trying to extrapolate what we might know from the limits of physical laws, you know, if that can constrain what we think might be possible, I think that's an interesting idea. But I wonder, partly, one argument is just that we don't even know how to define those physical constraints.

And like, before you had the theory of computation, it wouldn't even make sense to say, oh, this much matter can sustain so many FLOPS, floating point operations per second. And then second, even if you know that number, it still doesn't tell you what you could do with it.

An interesting thing that Holden Karnofsky talks about is, he has this article called This Can't Go On, where he makes the argument that, listen, if you just have compounding economic growth, at some point you'll get to the point where you'll have many, many times Earth's economy per atom in the affectable universe, and so it's hard to see how you could keep having economic growth beyond that point. But that itself seems like, I don't know, if that's true then there has to be a physical law that says the maximum GDP per atom is this, right? If there's no such constant, then you should be able to surpass it. And I guess it still leaves a lot to be desired: even if you could know such a number, you don't know how interesting it is, or what kinds of things could be done at that point.

Yeah. I guess the first one is, even if you think that preventing these kinds of very large-scale risks that might curtail human potential is just incredibly important, you might miss some of those risks because you're just unable to articulate them or really conceptualize them. I feel like I just want to say, at some point we have a pretty good understanding of roughly what looks most important. Like, for instance, if you get stranded on a camping trip and you're like, we need to just survive long enough to make it out, then it's like, okay, what do we look out for? I don't really know what the wildlife is here, because I haven't been here before, but probably it's going to look a bit like this. I can at least imagine, you know, the risk of dying of thirst, even though I've never died of thirst before. And then it's like, what if we haven't even begun to think of the others? It's like, yeah, maybe, but there's just some practical reason for focusing on the things which are most salient, and definitely spending some time thinking about things we haven't thought of yet, but it's not like that list is just completely endless, and there's, I guess, a reason for that. And then you said the second thing, which I don't actually know if I have a ton of interesting things to say about, although maybe you could try zooming in on what you're interested in there.

Come to think of it, I don't think the second thing has big implications for this argument. But we have like 20 other topics that are just as interesting that we can move on to.
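For anyone who wants the arithmetic behind the "This Can't Go On" point mentioned above, here is a rough back-of-the-envelope sketch. The 2% growth rate and the roughly 10^70 atoms figure are illustrative assumptions in the spirit of Karnofsky's essay, not numbers from this conversation:

```python
# Rough, illustrative version of the "This Can't Go On" arithmetic.
# Assumptions: ~2% annual real growth, ~1e70 atoms in the galaxy.
import math

growth_rate = 0.02          # assumed steady real growth per year
atoms_in_galaxy = 1e70      # rough order-of-magnitude assumption

# Years until the economy is (atoms_in_galaxy) times bigger than today,
# i.e. one present-day world economy per atom even if we used every atom.
years = math.log(atoms_in_galaxy) / math.log(1 + growth_rate)
print(f"{years:,.0f} years of 2% growth -> x{atoms_in_galaxy:.0e} economy")
# ~8,100 years: a blink on cosmic timescales, which is the sense in which
# "this can't go on", unless GDP per atom can somehow grow without bound.
```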
But just as a closing note, the analogy is very interesting to me. The camping trip, you're trying to do what needs to be done to survive.
I don't know. OK, so to extend that analogy, it might be like, I don't know, somebody later discovers, oh, that berry that we're all about to eat, because we feel like that's the only way to get sustenance here while we're just about starving, don't eat that berry, because that berry is poisonous. And then maybe somebody could point out, okay, so given the fact that we've discovered one poisonous food in this environment, should we expect there to be other poisonous foods that we don't know about?

I don't know if there's anything more to say on that topic. I mean, one angle you could put on this is you can ask this question: we have precedent for a lot of things. We know now that detonating nuclear weapons does not ignite the atmosphere, which was a worry that some people had. So we at least have some kind of bounds on how bad certain things can be. And so if you ask the question, what is worth worrying about most in terms of what kinds of risks might reach this level of potentially posing an existential risk, well, it's going to be the kinds of things we haven't done yet, that we haven't got some experience with. And so you can ask, what things are there in this space of big-seeming but totally novel, precedent-free changes or events? And it actually does seem like you can try generating that list and get answers.
This is why, maybe, or at least one reason why, AI sticks out, because it fulfills these criteria of being potentially pretty big and transformative, and also the kind of thing we don't have any experience with yet.

But again, it's not as if that list is in some sense endless. There are only so many things we can do in the space of decades, right?

Okay, yeah. So moving on to another topic: we were talking about for-profit entrepreneurship as a potentially impactful thing you can do. Sorry, maybe not in this conversation, but we had separately, at one point. Yeah. So okay, to clarify, this is not just for-profit in order to do earning to give.
So you become a billionaire and you give your wealth away. To what extent can you identify opportunities where you can just build a profitable company that solves an important problem area or makes people's lives better? One example of this is Wave.
It's a company, for example, that helps with, you know, transferring money and banking services in Africa. Probably has boosted people's well-being in all kinds of different ways.
So to what extent can we expect there to be just a bunch of for-profit opportunities for making people's lives better?

Yeah, that's a great question. And there really is a sense in which some of the more innovative big for-profit companies just are doing an incredibly useful thing for the world. They're providing a service that wouldn't otherwise exist, and people are obviously using it, because they are a successful for-profit company. So I guess the question is something like: you're stepping back, you're asking, how can I have a ton of impact with what I do? And the question is, are we underrating just starting a company? So I feel like I want to throw out a bunch of kind of disconnected observations.
We'll see if they like tie together. There is a reason why you might in general expect a non-profit route to do well.
And this is like obviously very naive and simple, but where there is a for-profit opportunity, you should just expect people to kind of take it. Like this is why we don't see $20 bills lying on the sidewalk.
But the natural incentives for, in some sense, taking opportunities to like help people where there isn't, um, a profit opportunity, they're going to be weaker. And so if you're thinking about the, like difference you make compared to whether you do something or whether you don't do it in general, you might expect that to be bigger where you're doing something nonprofit.
In particular, this is where there isn't a market for a good thing. That might be because the things you're helping aren't humans, or maybe because they live in the future, so they can't pay for something. It could also be because maybe you want to get a really impactful technology off the ground. In those cases you get a kind of free-rider dynamic, I think, where, when you can't protect the IP and patent something, there's less reason to be the first mover. And so this is, maybe it's not for-profit, but helping to get a technology off the ground which could eventually be a space for a bunch of for-profit companies to make a lot of money, that seems really exciting. Also creating markets where there aren't markets seems really exciting, so for instance setting up AMCs, advanced market commitments, or prizes, or just, yeah, creating incentives where there aren't any, so you get the efficiency and competition gains that you get from the for-profit space. That seems great. But that's not really answering your question, because the question is, what about actual for-profit companies? I don't know what I have to say in terms of whether they're being underrated.
Yeah, actually, I'm just curious what you think. Okay, so I think I have like four different reactions to what you said.
I've been remembering the number four, just in case I'm at three and I'm like, I think I had another thing to say. Okay, so yeah, so I had a draft, an essay about this.
I didn't end up publishing, but that led to a lot of interesting discussions between us.

So that's why we might have, I don't know, in case the audience feels like they're interrupting a conversation that had already started before this. So one reaction is: to what extent should we expect this market to be efficient? One thing you can think is, listen, the number of potential startup ideas is so vast, and the number of great founders is so small, that you can have a situation where, yes, it's right that somebody like Elon Musk will come along and pluck up maybe the $100 billion ideas.

But if you have a company like Wave, I'm sure they're doing really well, but it's not obvious how it becomes the next Google or something. And I guess more importantly, it requires a lot of context. For example, you talked about neglected groups.

I guess this doesn't solve for animals and future people. But if you have something in global health, where the neglected group is, for example, people living in Africa, the people who could be building companies don't necessarily have experience with the problems that these neglected groups have.

So it's possible that you could come upon an idea, if you were specifically looking at how to help, for example, people suffering from poverty in poor parts of the world. You could identify a problem that people who are programmers in Silicon Valley just wouldn't know about. Okay, so a bunch of other ideas regarding the other things you said.
One is, okay, maybe a lot of progress depends on fundamental new technologies and companies coming at the point where the technology is already available and somebody needs to really implement and put all these ideas together. Yeah, two things on that.
One is, like, we don't need to go in a rabbit hole on this. One is the argument that actually the invention itself, not the invention, the innovation itself is a very important aspect and potentially a bottleneck aspect of this, of getting an invention off the ground and scaled.
Another is if you can build a $100 billion company or a trillion dollar company, or maybe not even just like a billion dollar company, you have the resources to actually invest in R&D. I mean, think of a company like Google, right? Like how many billions of dollars have they basically poured down the drain on like harebrained schemes.

You can have reservations about DeepMind with regards to AI alignment, but other kinds of research things they've done seem to be really interesting and really useful. And yeah, all the other FAANG companies have a program like this, like Microsoft Research, or I don't know what Amazon's thing is.

And then another thing you can point out, with regards to setting up a market that would make other kinds of ideas possible and other kinds of businesses possible: in some sense, you could make the argument that for some of the biggest companies, that's exactly what they've done, right? If you think of Uber, it's not a market for companies. Or maybe Amazon is a much better example here, where theoretically you had an incentive before, like if a pandemic happens, I'm going to manufacture a lot of masks, right? But Amazon makes the market so much more liquid, so that you can just start manufacturing masks and immediately put them up on Amazon. So it seems in these ways, actually, maybe starting a company really is an effective way to deal with those kinds of problems.

Yeah, man, we've gone so async here. I should have just said one thing at a time. So I'm sorry for throwing lots of things at you.

As far as I can remember, those are all great points. I think my high-level thought is I'm not sure how much we disagree. But I guess one thing I want to say is, again, thinking about what, in general, you should expect the real biggest opportunities for having an impact to typically be. One thing you might think is that if you can optimize for two things separately, that is, optimize for the first thing and then use that to optimize for the second thing, versus trying to optimize for some combination of the two at the same time, you might expect to do better if you do the first thing. So for instance, you can do a thing which looks a bit like trying to do good in the world and also make a lot of money, like social enterprise, and often that goes very well. But you can also do a thing which is: try to make a lot of money, just make a useful product that is not directly aimed at improving humanity's prospects or anything, but is just kind of great, and then use the success of that first thing to then think squarely about, how do I just do the most good, without worrying about whether there's some kind of profit mechanism. I think often that strategy is going to pan out well. There's this thought about the tails coming apart. You might have this thought that at the extreme of scalability, in terms of opportunity to make a lot of profit, and at the extreme of doing a huge amount of good, you might expect there not to be such a strong correlation. One reason in particular you might think that is because you might think the future really matters, like humanity's future, and, sorry to be a stuck record, but there's not really a natural market there, because these people haven't been born yet.

That is a rambly way of saying that, okay, that's not always going to be true. But I basically just agree that, yeah, I would want to resist a framing of doing good which just leaves out starting some successful for-profit company. There are just a ton of really excellent examples of where that's just been a huge success and, yeah, should be celebrated. So I don't think I disagree with the spirit. Maybe we disagree somewhat on how much we should relatively emphasize these different things, but it doesn't seem like a very deep disagreement.

Yeah, yeah. Maybe I've been spending too much time with Bryan Caplan or something. So by the way, the tails coming apart, I think, is a very interesting way to think about this. Scott Alexander has a good article on this.
And one thing he points out is, yeah, generally you expect different types of strength to correlate. But the guy who has the strongest grip strength in the world is probably not the guy who has the biggest squat in the world.
Right. Yeah.
OK. So I think that's an interesting place to leave that idea.
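Since the tails-coming-apart idea keeps recurring, here is a minimal simulation sketch of it, assuming two traits driven partly by a shared factor. The 0.7 correlation and the population size are arbitrary illustrative choices, not anything from the conversation:

```python
# Small illustrative simulation of the "tails come apart" idea
# (after Scott Alexander's essay).
import random

def correlated_pair(rho):
    # Two unit-variance traits with correlation rho: each mixes a shared
    # factor with independent noise.
    shared = random.gauss(0, 1)
    a = rho**0.5 * shared + (1 - rho)**0.5 * random.gauss(0, 1)
    b = rho**0.5 * shared + (1 - rho)**0.5 * random.gauss(0, 1)
    return a, b

random.seed(0)
trials, hits = 200, 0
for _ in range(trials):
    population = [correlated_pair(0.7) for _ in range(2_000)]
    best_a = max(population, key=lambda p: p[0])
    best_b = max(population, key=lambda p: p[1])
    hits += best_a is best_b
print(f"Top individual on trait A is also top on trait B in {hits/trials:.0%} of runs")
# Even with a fairly strong correlation, the single best person on one trait
# is usually not the single best on the other: the tails come apart.
```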
Okay. Another thing I wanted to talk to you about was backtesting EA.
So you have these basic ideas of, we want to look at problems that are important, neglected, and tractable, and you apply them throughout history, so a thousand years back, two thousand years back, a hundred years back. Is there a context in which applying these ideas would maybe lead to a perverse outcome, an unexpected outcome? Are there examples where you could have easily made things much better, or maybe made them much better than even conventional morality or present-day ideas would have made them?
So we'll get to part of the question, which as I understand it is something like: can we think about whether some kind of effective altruism-like movement, if these ideas were in the water significantly earlier, might have misfired sometimes, or maybe succeeded, and how do we think about that at all? I guess one thing I want to say is that very often the correct decision, ex ante, is a decision which might do really well in some possible outcomes, but you might still expect to fail, right? The kind of mainline outcome is: this doesn't really pan out, but it's a moonshot, and if it goes well, it goes really well. This is, I guess, similar to certain kinds of investing. If that's the case, then even if you follow the exactly correct strategy, you should expect to look back on the decisions you made and see a bunch of failures, where failure means you just have very little impact. And I think it's important to resist the temptation to really negatively update on whether that was the correct strategy just because it didn't pan out. And so, I don't know, if something like EA-type thinking was in the water and was thought through very well, yep, I think it would go wrong a bunch of times, and that shouldn't be terrible news. When I say go wrong, I mean not pan out, rather than do harm; if it did harm, okay, that's a different thing. I think one thing this points to, by the way, is that you could choose a strategy which looks something like minimax regret, right? You have a bunch of options, you can ask about the roughly worst-case outcome, or just the default outcome, of each option, and one strategy is just: choose the option with the least bad "meh" case. If you take that strategy, you should expect to look back on the decisions you made and not see as many failures, so that's one point in favor of it. Another strategy is just: do the best thing in expectation, as in, if I made these decisions constantly, what in the long run ends up making the world best? And this looks a lot like just taking the highest-EV option. Maybe you don't want to run the risk of causing harm, so, you know, that's okay to include. And I happen to think that that second strategy is very often going to be a lot better, and it's really important not to be misguided by this feature of the minimax regret strategy, where you look back and kind of feel a bit better about yourself in many cases, if that makes sense.

Yeah, that's super interesting. I mean, if you think about backtesting models for the stock market, to analogize this, one thing that tends to happen is that a strategy of just trying to maximize returns from a given trade results very quickly in you going bankrupt,
because sooner or later, there will be a trade where you lose all your money. And so then there's something called the Kelly criterion, where you reserve a big portion of your money and you only bet with a certain part of it, which sounds more similar to the minimax regret thing here. Unless your expected value includes the possibility that, I mean, in this context, losing all your money is like an existential risk.
Right. So, uh, maybe you like bake into the cake in the definition of expected value, the odds of like losing all your money.
Yeah, yeah, yeah. That's a great, that's a really great point.
Like, I guess in some cases you want to take something which looks a bit more like the Kelly bet. But if you act at the margins, relatively small margins compared to the kind of pot of resources you have, then I think it often makes sense to just do the best thing in expectation and not worry too much about the size of the Kelly bet.

But yeah, that's a great point. And I guess a naive version of doing this is just losing your bankroll very quickly, because you've taken two enormous bets and forgotten that they might not pan out.
Yeah. So I appreciate that.
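Since the Kelly criterion comes up here, a minimal simulation sketch of the contrast being drawn, staking the whole bankroll versus a Kelly-sized fraction, may help. The 60/40 even-odds bet, round count, and starting bankroll are assumptions chosen purely for illustration:

```python
# Compare betting everything each round with betting the Kelly fraction.
import random
import statistics

def simulate(fraction, p_win=0.6, rounds=200, bankroll=1.0):
    # Repeatedly stake `fraction` of the bankroll on an even-odds bet
    # that wins with probability p_win.
    for _ in range(rounds):
        stake = fraction * bankroll
        bankroll += stake if random.random() < p_win else -stake
    return bankroll

random.seed(0)
kelly = 0.6 - 0.4   # Kelly fraction for an even-odds bet: p - q = 0.2
for f in (1.0, kelly):
    finals = [simulate(f) for _ in range(2_000)]
    label = "bet everything" if f == 1.0 else "bet Kelly (20%)"
    print(f"{label}: median final bankroll = {statistics.median(finals):.3f}")
# Betting everything maximizes each round's expected value, but the median
# outcome is ruin (one loss wipes you out); the Kelly fraction grows the
# bankroll reliably. As noted above, acting at small margins relative to a
# large shared pot weakens the case for shrinking bets this way.
```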
Oh, what did you mean by acting at the margins?

So if you think that there's a kind of pool of resources from which you're drawing, which is something like, maybe, philanthropic funding for the kind of work that you're interested in doing, and you're only a relatively marginal actor.
Then that's unlike being an individual investor, where you're more sensitive to the risk of just running out of money. When you're more like an individual investor, you want to pay attention to what the size of the Kelly bet is; if you're acting at the margins, then maybe that is less of a big consideration, although it is obviously still a very important point.

Well, and then, by the way, I don't know if you saw my recent blog post about why I think there'll be more billionaires?

Yes.

Okay, yeah. I don't know what your reaction to any of the ideas there is, but my claim is that we should expect the total funds dedicated to EA to grow quite a lot.

Yeah, I really liked it, by the way, I think it was great. One thing it made me think of is that there's quite an important difference between trying to maximize returns for yourself and trying to get the most returns just for the world, which is to say, just doing the most good. One consideration we've just talked about is the risk of losing your bankroll, which is where Kelly betting becomes relevant. Another consideration is that as an individual, just trying to do the best for yourself, you have pretty steeply diminishing returns from money, in terms of how well your life goes with that extra money, right? Like, if you have 10 million in the bank and you make another 10 million, does your life get twice as good? Obviously not, right? And as such, you should be kind of risk-averse when you're thinking about the possibility of making a load of money. If, on the other hand, you just care about making the world go well, then the world's an extremely big place, and so you basically don't run into these diminishing returns at all. And for that reason, if you're making money at least in part to, in some sense, give it away, or otherwise just have a positive effect in some impartial sense, then you're going to be less risk-averse, which means maybe you fail more often, but it also means that the people who succeed, succeed really hard. So I don't know, in some sense I'm just recycling what you said, but I think it's a really neat observation.

Well, and another interesting thing is that not only is that true, but you're also in a movement where everybody else has a similar idea.
And not only is that true, but also the movement is full of people who are young, techie, smart, and as you said, risk neutral. So basically people who are going to be way overrepresented in the ranks of future billionaires.
And they're all hanging out and they have this idea that, you know, we can become rich together and then make the world better by doing so. You would expect that this would be exactly the kind of situation that would lead to people teaming up and starting billion-dollar companies.
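To make the diminishing-returns point above concrete, here is a tiny numerical sketch, with made-up dollar figures, of why log-like personal utility pushes toward caution while a roughly linear, impartial view of value does not:

```python
# Toy comparison: a safe $10M versus a 50/50 shot at $30M, under log utility
# (a common stand-in for diminishing personal returns) versus linear value.
# The dollar amounts and the 50/50 odds are made up for illustration.
import math

safe, risky, p = 10e6, 30e6, 0.5
base = 1e6  # assume $1M of existing wealth so log() stays well-defined

# Self-interested, diminishing returns: utility ~ log(wealth).
eu_safe  = math.log(base + safe)
eu_risky = p * math.log(base + risky) + (1 - p) * math.log(base)
print("log utility prefers:", "safe" if eu_safe > eu_risky else "gamble")

# Impartial giving: the world is big enough that value is ~linear in dollars.
ev_safe, ev_risky = safe, p * risky
print("linear value prefers:", "safe" if ev_safe > ev_risky else "gamble")
```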
All right. Yeah, so a bunch of other topics and effective altruism that I wanted to ask you about.
So one is, should it impact our decisions in any way if the many-worlds interpretation of quantum mechanics is true? I know the argument that, oh, you can just translate amplitudes to probabilities, and if it's just probabilities, then decision theory doesn't change.

My problem with this is, I've gotten very lucky in the last few months. Now, I think it changes my perception of that if I realize that actually most me's, and okay, I know there are problems with saying me's, to what extent they're fungible, but if in most branches of the multiverse I'm significantly worse off, that makes it feel worse than, oh, I just got lucky, but now I'm here.
And another thing is, if you think of existential risk: even if existential risk is very likely, in some branch of the multiverse humanity survives. I don't know, that seems better in the end than, oh, the probability was really low, but it just resolved to, we didn't survive. Does that make sense?

Okay. All right, there's a lot there. I guess rather than doing a terrible job at trying to explain what this many-worlds thing is about, maybe it's worth just pointing people towards, you know, just googling it. I should also add this enormous caveat that I don't really know what I'm talking about; this is just an outsider's take, I just know this stuff seems interesting. Okay, so there's this question of, what if the many-worlds view is true, what, if anything, could that mean with respect to questions about what we should do or what's important? And one thing I want to say is, without zooming into anything, it just seems like a huge deal. Like, every second of every day I'm, in some sense, dissolving into this cloud of me's, an unimaginably large number of me's, and each of those me's is, in some sense, dissolving into more clouds. This is just wild. It also seems somewhat likely to be true, as far as I can tell. Okay, so what does this mean? Yeah, you point out that you can talk about having a measure over worlds. In some sense you can. There's actually a problem of how you get probabilities, or how you make sense of probabilities, on the many-worlds view. And there's a kind of neat way of doing that, which makes use of questions about how you should make decisions.
That is, you should just weigh future yous according to, in some sense, how likely they are. But it's really the reverse.

You're explaining what it means for them to be more likely in terms of how it's rational to weigh them. And then there are a ton of very vague things I can try saying, so maybe I'll just do a brain dump. You might think that many-worlds being true could push you towards being more risk-neutral in certain cases, if you weren't before, because in certain cases you're translating from some chance of this thing happening or not into some fraction of worlds where this thing does happen and another fraction where it doesn't. I don't think it's worth reading too much into that, because I think a lot of the important uncertainties about the world are still subjective uncertainties about how most worlds will in fact turn out. But it's kind of interesting and notable that you can convert between overall uncertainty about how things turn out and more certainty about the fraction of ways things turn out. I think another interesting feature of this is that the question of how you should act is no longer the question of how you should benefit this person who is you in the future, who's one person; it's more like, how do you benefit this cloud of people who are all successors of you, who are just kind of diffusing into the future? And I think you point out that you could basically salvage all the decision theory even if that's true, but the picture kind of changes.
And in particular, I think just intuitively, it feels to me like the gap between acting in a self-interested way and acting in an impartial way, where you're helping other people, kind of closes a little. You're already benefiting many people by doing the thing that's rational to benefit you, which isn't so far from benefiting people who aren't continuous with you in this special way. So I kind of like that as a thing.

Huh, that's interesting. Yeah.

And then, okay, there is also this slightly more out-there thought, which is, you could say: if many-worlds is true, then there is at least a sense in which there are very, very many more people in the future compared to the past, just unimaginably many more, and even the next second from now there are many more people. So you might think that should make us have a really steep negative discount rate on the future, which is to say we should value future times much more than present times, and in a way which wouldn't just modify how we should act, it would explode how we should think about this. This definitely doesn't seem right. Maybe one way to think about it is that if this thought were true, or even directionally true, then that might also be a reason for being extremely surprised that we're both speaking at an earlier time rather than a later time, because if you think you're just randomly drawn from all the people who ever live, it's absolutely mind-blowing that we get drawn from today rather than tomorrow, given that there are 10 to the something many more people tomorrow. So it's probably wrong, and wrong for reasons I don't have a very good handle on, because I just don't know what I'm talking about. I mean, I can try parroting the reasons, but it's something where I'm just trying to really grok those reasons a bit more.

That's really interesting. I didn't think about that argument, the selection argument. I think one resolution I've heard for this is that you can think of the proportion of, you know, Hilbert space, or the proportion of the universe's wave function, as the probability, rather than each different branch. You know what, I just realized, the selection argument you made, maybe that's an argument against Bostrom's idea that we're living in a simulation.

Because basically his argument is that there will be many more simulations than there are real copies of you, therefore you're probably in a simulation. Saying that, across all the simulations plus you, your prior should be equally distributed among them seems similar to saying your prior should be distributed equally across each possible branch of the wave function, whereas I think in the context of the wave function you were arguing that maybe you shouldn't think about it that way; you should think about maybe a proportion of the total wave function, the total Hilbert space. Does that make sense? I don't know if I put it well.

Wait, say it again, how does it link into simulation-type stuff?

Instead of thinking about each possible simulation as an individual thing, where each individual instance of a simulation is equally as likely as you living in the real world, maybe the simulations as a whole are equally likely as you living in the real world, just as you being alive today rather than tomorrow is equally likely, despite the fact that there will be many more branches, new branches, of the wave function tomorrow.

Yeah, okay, there's a lot going on. I feel like there are people who actually know what they're talking about here just tearing their hair out, like, you might do this obvious thing; that's the nature of having a podcast, I suppose. By the way, if you are one such person, please do email me or DM me or something, I'm very interested. So yeah, you mentioned that obviously there is a measure over worlds, and this lets you talk about things being sensible again. Also, maybe one minor thing to comment on is that talking about probabilities is kind of hard, because on Everettian many-worlds, everything that can happen happens, and so it's difficult to get the language exactly right. But anyway, I totally take the point.
And then the question of how it maps on to simulation type thoughts.

Here's a, I don't know, like maybe a thought which kind of connects to this.

Do you know like sleeping beauty type problems?

No, no.

Okay, certainly a vaguely remembered example.

But let's start.

So in the original sleeping beauty problem, you go to sleep.

Okay. And then I flip a coin.
Or, you know, whoever, someone flips a coin. If it comes up tails, they wake you up once. If it comes up heads, they wake you up once, and then you go back to sleep, your memory is wiped, and then you're woken up again, as if you're being woken up in the other world. Okay, so you go to sleep, you wake up, and you ask: what is the chance that the coin came up heads, or tails? And it feels like there are really intuitive reasons for both a half and a third. Here's a related question, which is maybe a bit simpler, at least in my head. I flip a coin.
It comes up heads. I like just make a world with one observer in it.
And if it comes up tails, I make a world with a hundred observers in it. Maybe it could be like running simulations with a hundred people.
You like wake up in one of these worlds. You don't know how many other people are there in the world.
You just know that someone has flipped a coin and decided to make a world with either one or 100 people in it. What is the chance that you're in the world with 100 people? And there's a reason for thinking it's half, and there's a reason for thinking that it's 100 over 101.
Does that make sense?

So I understand the logic behind the half. What is the reason for thinking, I mean, regardless of where you ended up as the observer, it seems like if the odds of the coin coming up... oh, I guess, is it because you'd expect there to be more observers in the other universe? Wait, yeah, so what is the logic for thinking it might be 100 over 101?

Well, you might think of it like this. How should I reason about where I am? Maybe it's something like this: I'm just a random observer, right? Of all the possible observers that could have come out of this. And there are 101 possible observers. And you can just imagine that I've been randomly drawn.

Okay.

And if I'm randomly drawn from all the possible observers, then it's overwhelmingly likely that I'm in the big world.
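For readers who want to see the two answers side by side, here is a small simulation sketch of the coin-and-observers example just described. The trial count is arbitrary, and the "random observer" counting rule is the assumption doing all the work:

```python
# Monte Carlo version of the 1-person vs 100-person world example:
# a fair coin creates a 1-person world on heads, a 100-person world on tails.
import random

random.seed(0)
trials = 100_000
observers_in_big_world = 0
total_observers = 0
for _ in range(trials):
    n_people = 1 if random.random() < 0.5 else 100
    total_observers += n_people
    if n_people == 100:
        observers_in_big_world += n_people

# Reasoning as a random sample from all observers ever created gives ~100/101:
print(f"Fraction of observers in the big world: "
      f"{observers_in_big_world / total_observers:.3f}  (100/101 = {100/101:.3f})")
# Conditioning only on the coin instead, each world is equally likely and the
# answer stays at 1/2 -- that's the other intuition in the dialogue above.
```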

Huh.

That's super interesting.

I should say, actually, I should plug someone who does know what they're talking about on this, which is Joe Carlsmith, who has a series of really excellent blog posts.

Oh, he's coming on the podcast next week.

Yes, amazing. Okay, you should ask him about this, because he's really going to be able to talk to you about it. I don't want to scoop him, but one thought that comes out of this, which is just really cool, maybe just to round this off: if you're at 100 over 101 on examples like this, and you think there's any chance that the universe is infinite in size, then you should think that the chance you're in a universe that is infinite in extent is just one, or close to one, if that makes sense.

I see, yeah. Okay, so in the end, does your awareness that many-worlds is maybe a good explanation, has that impacted your view of what should be done in any way?

Yeah, so I don't really know if I have a good answer. My best guess is that things just shake out to kind of where they started, as long as you started off in this relatively risk-neutral place. I suspect that if many-worlds is true, this might make it much harder to hold on to intuitive views about personal identity, for the reason that there isn't this one person who you're continuous with throughout time, and no other people, which is how people tend to think about what it is to be a person.
And then there's this kind of vague thing, which is just that occasionally, you know, every other month or so, I remember that maybe many-worlds is true. It just kind of blows my mind.
I don't know what to do about it. And I just like go on with my day.
That's about where I am. Okay.
All right, next topic to talk about: talent search. What is EA doing about identifying, let's say, more people like you, basically? But maybe even people like you who are not in places next to Oxford. I don't know where you're actually from originally, but if they're from, I don't know, China or India or something, what is EA doing to recruit more Fins from places where they might not otherwise end up working on EA?

Yeah, it's a great question. And to be clear, I just won the lottery on things going right to be lucky enough to do what I'm doing now. So in some sense the question is: how do you print more winning lottery tickets, and indeed find the people who really deserve them but just currently aren't being identified? A lot of this comes up in that book Talent, by Tyler Cowen and Daniel Gross, which I read recently. There's something really striking about the fact that this business of finding really smart, driven people and connecting them with opportunities to do the things they really want to do is still really inefficient, and there are just still so many people out there who aren't getting those opportunities.

I actually don't know if I have much more insight to add there, other than that this is just a big deal. There's a sense in which it's an important consideration for this project of trying to do the most good.

Like, you really want to find people who can put these ideas into practice. And I think there's a special premium on that kind of person now, given that there's a lot of philanthropic funding ready to be deployed. There's also a sense in which this is, in some sense, a cause in its own right. It's kind of analogous to open borders in that sense, at least in my mind.

I hadn't really appreciated it on some kind of visceral level before I read that book. And another thing he talks about in the book is that you want to get them when they're young. You can really shape somebody's ideas about what's worth doing, and also their ambition about what they can do, if you catch them early. And Tyler Cowen also had an interesting blog post a while back where he pointed out that a lot of people applying to his Emergent Ventures program, a lot of young people applying, are heavily influenced by effective altruism, which seems like it's going to be a very important factor in the long term.
I mean, eventually these people will be in positions of power. So maybe effective altruism is already succeeding, to the extent that a lot of the most ambitious people in the world are identified that way, at least given the selection effect that Tyler Cowen's program has.
But yeah, so what can be done to get people when they're young?

Yeah, it's a very good question, and I think what you point out there is right. Nick Whitaker has this blog post, something like the lamplight model of talent curation, where he draws a distinction between casting a very wide net that's very legibly prestigious and then filtering through thousands of applications, versus in some sense putting out the bat signal that, in the first instance, just attracts the really promising people, and maybe actually drives away people who would be a better fit for something else. So an example is: if you were to hypothetically write quite a wonky economics blog every day for however many years and then run some fellowship program, you're automatically selecting for people who read that blog, and that's a pretty good starting population to begin with. So I really like that thought of not needing to be incredibly loud and prestigious-sounding, but rather just being quite honest about what this thing is about, so you attract the people who really seek it out, because that's already quite a good feature.
I think another thing, and again this is not a very interesting point to make, but something I've really realized the value of, is having physical hubs. So there's this model of running fellowships, for instance, where you find really promising people, and there's just so much to be said for putting those people in the same place, surrounding them with people who are a bit more senior, and letting this natural process happen where people get really excited that there is this community of people working on stuff that previously you'd just been reading about in your bedroom on some blogs. Just as a source of motivation. I know it's less tangible than other things, but it's so, so powerful. And it's probably one of the reasons I'm working here, maybe.
Yeah, that is one aspect of working from home: you don't get that. Regarding the first point, I think maybe that should update in favor of not doing community outreach and community building; maybe that's negative marginal utility. Because if I think about, for example, my local groups: there was an effective altruism group at my college that I didn't attend, and there's also an effective altruism group for the city as a whole, in Austin, that I don't attend. And the reason is that there's some sort of adverse selection here, where the people who are leading organizations like this are people who aren't directly doing the things that effective altruism says they might consider doing, and are more interested in the social aspects of altruism. So I don't know, I'd be much less impressed with the movement if my first introduction to it was these specific groups that I've personally interacted with, rather than, I don't know, just hearing Will MacAskill on a podcast. The latter, by the way, being my first introduction to effective altruism.
Yeah, interesting. I feel like I really don't want to undersell the job that community builders are doing.
I think, in fact, it's turned out to have been, and still is, just incredibly valuable, especially looking at the numbers for what you can achieve as a group organizer at your university: maybe you could change the course of more than one person's career over the course of a year of your time, which is pretty incredible. But I guess part of what's going on is that the difference between going to your local group and engaging with stuff online is that online you get to choose the stuff you engage with. And maybe one upshot here is that the set of ideas that might get associated with EA is very big, and you don't need to buy into all of it or be passionate about all of it. If the AI stuff really seems interesting, but other stuff feels more peripheral, then this could push towards having a specific group for people who think the AI stuff seems cool and the other stuff is not their cup of tea. So in the future, as things get scaled up as well as scaling out, maybe having this differentiation and diversification of different groups seems pretty good, but just more of everything also seems good.

Yeah, yeah. I'm probably overfitting on my own experience, and given that I didn't actively interact with any of those communities, I'm probably not even informed about what those experiences are like. But there was an interesting post on the Effective Altruism Forum that somebody sent me, where they were making the case that at their college, too, they got the sense that the EA community building had a negative impact, because people were kind of turned off by their peers. And there's also a difference between, I don't know, somebody like, say, Will MacAskill advising you to do these kinds of things, versus some sophomore at your university studying philosophy, right? No offense.

Yeah. I think my guess is that on net these efforts are still just overwhelmingly positive, but I think it's pretty interesting that people have the experience you describe as well, and it's interesting to think about ways to get around that.

So, the long reflection. It seems like a bad idea.

No, I'm so glad you asked. I want to say no; I think in some sense I've come around to it as an idea.

Oh, really interesting.

But okay, maybe it's worth trying to explain what's going on with this idea. So if you were to zoom out really far over time and consider our place now in history, you could ask this question: suppose in some sense humanity just became perfectly coordinated, what's the plan? What in general should we be prioritizing, and in what stages? And you might say something like this: this moment, which is to say maybe this century or so, just looks kind of wildly and unsustainably dangerous, with so many things happening at once.
It's really hard to know how things are going to pan out, but it's possible to imagine things panning out really badly, and badly enough to just more or less end history. Okay, so before we can worry about longer-term considerations, let's just get our act together and make sure we don't mess things up. That seems like a pretty good first priority. But then suppose you succeed in that, and we're in a significantly safer kind of time. What then? You might notice that the scope for what we could achieve is really extraordinarily large, maybe larger than most people typically entertain: we could do a ton of really exceptional things. But there's also this feature that maybe in the future, and not the especially long-term future, we might more or less for the first time be able to embark on really ambitious projects that are, in some important sense, really hard to reverse.

And that might make you think, okay, at some point it'd be great to achieve that potential that we have. For instance, a kind of lower bound on this is lifting everyone out of poverty who remains in poverty, and then going even further: making everyone even wealthier, able to do more of the things they want to do, making more scientific discoveries, whatever. So we want to do that, but maybe something should come in between these two things, which is figuring out what is actually good. And okay, why should we think this? One thought here, and I guess it links to what we were talking about earlier, is that it's very plausible that the way we think about really positive futures, like one of the best futures, is just really incomplete. Almost certainly we're getting a bunch of things wrong, by this kind of pessimistic induction on the past: a bunch of smart people thought really reprehensible things a hundred years ago, so we're getting things wrong too. And then the second thought is that it seems possible to actually make progress here in thinking about what's good. There's this interesting point that most work in what you might call moral philosophy has focused on the negatives: avoiding and fixing harms, avoiding bad outcomes. But this idea of studying the positive, studying what we should do if we can do many different things, is just super, super early.

And so we should expect to be able to make a ton of progress. So, okay, again imagining that the world is perfectly coordinated: would it be a good idea to spend some time, maybe a long period of time, deliberately holding back from embarking on these huge irreversible projects, which maybe involve leaving Earth in certain scenarios, or otherwise doing things which are hard to undo? Should we spend some time thinking before then? Yeah, sounds good.

And then I guess the very obvious response is: okay, that's a pretty huge assumption, that we can just coordinate around that. And I think the answer is, yep, it is. But as a kind of directional ideal, should we push towards or away from the idea of taking our time, holding our horses, getting people together who haven't really been part of this conversation and hearing them? Yeah, that definitely seems worthwhile.
All right. So I have another good abstract idea that I want to run by you.
So, you know, it seems kind of wasteful that we have these different companies that are building the same exact product, and because they're building the same exact product separately, they don't have economies of scale and they don't have coordination. There's just a whole bunch of loss that comes from that, right? Wouldn't it be better if we could just coordinate and figure out the best person to produce something together, and then just have them produce it? And then we could also coordinate to figure out, well, what is the right quantity and quality for them to produce? I'm not trying to say this is communism or something; I'm saying it's ignoring what would be required. In this analogy you're ignoring what kinds of information get lost, and what it takes to do that so-called coordination, in the communism example. And in this example, it seems like you're ignoring whatever would be required to prevent somebody from realizing something. Let's say somebody has a vision: we want to colonize a star system, we want to, I don't know, make some new technology, right? That's part of what the long reflection would curtail. Maybe I'm getting this wrong, but it seems like it would require almost a global panopticon, a totalitarian state, to be able to prevent people from escaping the reflection.

Okay, so there's a continuum here, and I basically agree that some kind of panopticon-like thing not only is impossible but actually sounds pretty bad. But something where you're just pushing in the direction of being more coordinated at the international level about things that matter seems desirable and possible, and in particular preventing really bad things, rather than trying to get people to all do the same thing. So the Biological Weapons Convention just strikes me as an example which is imperfect and underfunded, but nonetheless directionally good. And maybe an extra point here is that there's a sense in which the long reflection option, or I guess the better framing is aiming for a bit more reflection rather than less, is the conservative option: it's doing what we've already been doing, just a bit longer, rather than some radical option. So I agree it's pretty hard to imagine some kind of super long period where everyone's perfectly agreed on doing this, but I think framing it as a directional ideal seems pretty worthwhile. And I guess maybe I'm naively hopeful about the possibility of coordinating better around things like that.

There are two reasons why this seems like a bad idea to me. One is: who is going to be deciding when we've come to a good consensus? Okay, so we've decided this is the way things should go, now we're ready to escape the long reflection and realize our vision for the rest of the lifespan of the universe; who is going to be doing that? It's the people who are presumably in charge of the long reflection. Almost by definition, it'll be the people who have an incentive in preserving whatever power balances exist at the end of the long reflection. And then the second thing you'd ask is: there's a difference between having a consensus on not using biological weapons or something like that, where you're limiting a negative, versus requiring society-wide consensus on what we should aim towards achieving, where the outcome has not been good in history. It seems better, on the positive end, to just leave it open-ended, and then maybe, when necessary, restrict together the very bad things we might want to restrict.

Yeah, yeah, okay. I think I kind of just agree with a lot of what you said.

I think the best framing of this is the version where you're preventing something which most people can agree is negative, which is to say, some actor unilaterally deciding to set out on this huge irreversible project. Like, something you said was that the outcome is going to reflect the values of whoever is in charge.

And not just the values. I mean, just think about how guilds work, right? Whenever we let decisions about how an industry should run be made collectively by the people who are currently dominant in that industry, you know, guilds, or industrial conspiracies as well, it seems like the outcome is just bad. And so my prior would be that at the end of such a situation, our ideas about what we should do would actually be worse than going into the long reflection. Obviously it really depends on how it's implemented, so I'm not saying that, but just broadly, given all possible implementations, and maybe the most likely implementation given how governments run now.

Yeah, yeah. I should say that I am in fact pretty uncertain; it's just more enjoyable to give this thing its hearing.

No, no, I enjoy the parts where we have disagreements.

Yeah. So one thought here is: if you're worried about the course of the future being determined by some single actor, that worry is just symmetrical with the worry of letting whoever wins some race go and do the project where they more or less determine what happens to the rest of humanity. So the option where you deliberately wait and let people have some global conversation, it seems like that is less worrying, even if the worry is still there. I should also say, I can imagine the outcome is not unanimity.

In fact, it'd be pretty wild if it was, right? But you want the outcome to be some kind of stable, friendly disagreement, where now we can maybe reach some kind of Coasean solution and go and do our own things; there's a bunch of projects which kind of go off at once.

I don't know, that feels really great to me compared to whoever gets there first determining how things turn out. But yeah, it's hard to talk about this stuff because it's somewhat speculative.
But I think it's just a useful north star or something to try pointing towards. Okay, so maybe to make it more concrete.
I wonder whether your expectation is that the consensus view would be better than the first-mover view. In today's world, maybe: either we have the form of government, and not just government but also the industrial and logistical organization, that, I don't know, Elon Musk has designed for Mars, so if he's the first mover for Mars, would you prefer that? Or we have the UN come to a consensus between all the different countries about how the first Mars colony should be organized. Would the Mars colony run better if, after 10 or 20 years of that, they're the ones who decide how the first Mars colony goes? Do you expect global consensus views to be better than first-mover views?

Yeah, that's a good question. And one obvious point is: not always, right? There are certainly cases where the consensus view is just somewhat worse. I think you limit the downside with the consensus view, though, because you give people space to express why they think an idea is bad. I don't know if this is answering your question, but it's a really good one.
You can imagine the UN-led thing is going to be way slower. It's going to probably be way more expensive.
The International Space Station is a good example, where, I don't know, it turned out pretty well, but a private version of that would probably have happened a lot more effectively. I guess the Elon example is kind of a good one, because it's not obvious why that's super worrying. The thing I have in mind in the long reflection example is maybe a bit more wild, but it's really hard to make it concrete, so I'm somewhat floundering.

There's also another reason. To the extent that somebody has the resources, and maybe this just gets to an irreconcilable question about your priors about other kinds of political things, but to the extent that somebody has been able to build up resources privately to be able to be a first mover in a way that is going to matter for the long term, what do you think about what kind of views and competencies they're likely to have, versus, assuming that the way governments work and the quality of their governance doesn't change that much for the next 100 years, what kind of outcomes you will have from...
Basically, if you think the likelihood of leaders like Donald Trump or Joe Biden is going to be similar for the next 100 years, and if you think the richest people in the world, the first movers, are going to be people similar to Elon Musk, I can see people having genuinely different, reasonable views about who should decide. Should the Elon Musk of 100 years from now, or the Joe Biden of 100 years from now, have the power to decide the long-run course of humanity? Is that a fulcrum in this debate that you think is important, or is that not as relevant as I might think?

Yeah, I guess I'll try saying some things and maybe they'll respond to that. Two things are going through my head. One is something like this: these questions about what we should do, once we have the capacity to do a far larger range of things than we currently have the capacity to do, are going to hinge much more on the theories people have, and their worldviews, and very particular details, than such questions do now. And I'm going to do a bad job of trying to articulate this, but there's an analogy here. If you're fitting a curve to some points, you can overfit it, and in fact you can overfit it in various ways that all look pretty similar. But then if you extend the axes, so you see what happens to the curves beyond the points, those different ways of fitting it can go all over the place. And so there's some analogy where, when you expand the space of what we could possibly do, different views which look kind of similar right now, or at least come to similar conclusions, just go all over the shop. That's not responding to your point, but I think it's worth saying as a reason for expecting reflection on what the right view is to be quite important.

And then I guess I'll list a second thought, which is that there are two things going on. One is the thing you mentioned: there are basically just a bunch of political dynamics, where you can reason about where you should expect values to head for political reasons, whether some alternative is better than that default, and what that default even is. And then there's a different way of thinking about things, which is: separately from political dynamics, can we actually make progress in thinking better about what's best to do, in the same way that we can make progress in science, separately from the fact that people's views about science are influenced by political dynamics? And maybe a disagreement here is a disagreement about how much scope there is to just get better at thinking about these things. One reason I can give, and I guess I mentioned this earlier, is that this project of thinking about what's best to do, maybe thinking better about ethics, or at least the version of it that's most relevant here, is on the order of 30 years old rather than on the order of 2,000 years old. You might call it secular ethics; Parfit writes about this, right? So there are at least reasons for hope. We haven't ruled out that we can make a lot of progress, because the thing we were doing before, when we were trying to think systematically about what's best to do, was just very unlike the thing we should be interested in. Sorry, that was a huge ramble, but hopefully there's something there.
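As an editorial aside, the curve-fitting analogy above is easy to see numerically. Here is a minimal sketch with made-up data, not anything from the conversation: two fits that agree closely on the observed range can disagree wildly once you extrapolate beyond it.

```python
import numpy as np

# Hypothetical data: a handful of noisy points on a narrow interval.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)
y = 1.0 + 2.0 * x + rng.normal(0, 0.05, size=x.size)

# Fit a low-degree and a high-degree polynomial to the same points.
fit_simple = np.polynomial.Polynomial.fit(x, y, deg=1)
fit_flexible = np.polynomial.Polynomial.fit(x, y, deg=6)

# On the observed range the two fits look almost identical...
print(np.max(np.abs(fit_simple(x) - fit_flexible(x))))   # small

# ...but extrapolated well beyond the data they come apart enormously.
x_far = np.array([3.0, 5.0, 10.0])
print(fit_simple(x_far))
print(fit_flexible(x_far))
```

The analogous point in the conversation: worldviews that agree on today's choices can diverge badly once the option space expands.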
Yeah, I want to go back to what you were saying earlier, about how you can think of global consensus as the reduced-variance version of future views. To the extent that you think a downside is really bad, I think that's a good argument. And it's similar to my argument against monarchy, which is that, actually, I think it is reasonable to expect that if you could have people like Lee Kuan Yew in charge of your country, and you have a monarchy, things might be better than a democracy; it's just that the bad outcome is so bad that it's better to have a low-variance thing like democracy.

Maybe one last trailing thought on what you said: I guess Popper has this thought, and David Deutsch also did a really good job of explaining it, that one underrated value of democracy is not just, in some sense, having this function of combining people's views into some kind of optimal path which is a mishmash of what everyone thinks. It's also having the ability for the people who are being governed to just cancel the current experiment in governance and try again. So it's something like: we'll give you the freedom to implement this governance plan that seems really exciting.

And then we're just going to pull the brakes when it goes wrong.

And that option to start again, in general, just feels really important, as some kind of tool you want in your toolkit when you're thinking about these pretty big futures.

I guess my hesitation about this is that I can't imagine a form of government where, at the end of it, I would expect the consensus view, and I mean not just from nerdy communities like EA but an actual global consensus, to be something that I think is a good path.
Maybe it's something like: I don't think it's the worst possible path. But one thing about reducing variance is that, if you think the far future could be really, really good, then by reducing variance you're also cutting off a lot of expected value, right? And then you can think democracy works much better in cases where the problem is closer to something that people can experience. Democracies don't have famines, because if there's a famine you get voted out, right? Or major wars as well. But if you're talking about some consensus way of deciding what the far, far future should look like, it's not clear to me why the consensus view on that is likely to be correct.

Yeah, yeah. I think maybe some of what's going on here, and it's my fault for suggesting this framing, is that I'd want to resist the framing where you spend a bunch of time thinking and having this conversation and then you take this international vote on what we should do. I think maybe another framing is something like: let's give the time for the people who want to be involved in this to make the progress that could be possible in thinking about these things, and then just see where we end up. There's a very weak analogy to progress in other fields, where we don't make progress in mathematics or science by taking enormous votes on what's true, but we can by giving people who are interested in making progress the space and time to do that, and then at the end it's often pretty obvious how things turned out. That's somewhat begging the question, because it's way more obvious what's right and wrong if you're doing math compared to doing this kind of thing.

But also, what happens if, and this seems similar to the question about monarchy, what happens if you pick the wrong person, or the wrong politburo, to pick the charter you take to the rest of the universe? It seems like a hard problem to ensure that you have the right group of people deciding this, whether it's a consensus or a single person or anything in between. It has to be some decision maker, right?

I think you could just imagine there being no decision maker, right? So the thing could be: let's agree to have some time to reflect on what is best, and we might come to something, and then at the end, one version of this is just to let things happen; there's no final decision where someone walks up. It's just that the time between thinking about the thing and doing it, extending that time for a bit, seems good.

I see, okay. Yeah, sorry I missed that earlier. Okay, so actually, all the things we discussed so far were one quadrant of the conversation. Before we talk about space governance, let's talk about podcasting. So you have your own podcast; I have my own. Why did you start it, and what have your experiences been so far? What have you learned about the joy and impact of podcasting?

So the story is: Luca, who's a close friend of mine who I do the podcast with, we were both at university together, and we're both podcast nerds. I remember in our last year we had this conversation: we're surrounded by all these people who just seem incredibly interesting, all these academics we'd really love to talk to, and if we just email them saying we're doing a podcast and want to interview them, that could be a pretty good excuse to talk to them. So let's see how easy it is to do this. It turns out the startup costs on doing a podcast are pretty low if you want to do a scrappy version of it, right? Did that. It turns out that academics especially, but just tons of people, really love being asked to talk about the things they think about all day. It's a complete win-win, where you're trying to boost the ideas of some person who you think deserves more airtime, and that person gets to talk about their work and spread their ideas. So it's like, huh, there are no downsides to doing this other than the time. Also, I should say that the yes rate on our emails was considerably higher than we expected, given we were two random undergrads with microphones. But there's this really nice snowball effect where, if someone who is well known is gracious enough to say yes, despite not really knowing what you're about.
And then you do an interview, and it's a pretty good interview. When you're emailing the next person, you don't have to sell yourself. You can just say, hey, I spoke to this other impressive person. And of course you get this kind of snowball.

So no, it's definitely great, and it isn't some kind of fancy scheme; podcasts as a form of media are just incredibly special. There's something about the incentives between guest and host being aligned so much better than, I don't know, some journalistic interview, which would be way more uncomfortable. There's something about the fact that it's still kind of hard to search transcripts, so there's less of a worry about forming all your words in exactly the right way, so it's just more relaxed. Yeah, recommended.

Yeah, and it's such a natural form. You can think of writing as a sort of way of imitating conversation, and audiobooks as a way of trying to imitate a thing that's trying to imitate conversation.
Because with writing, you're visually perceiving what was originally an ability for understanding audible ideas. But then with audiobooks you're going through two layers of translation, where you don't have the natural repetition, the ability to gauge the other person's reaction, and the back and forth, obviously, that an actual conversation has. So that's why people can potentially listen to podcasts so much, where they have something in their ears the whole day, which you can't imagine with audiobooks, right?

Yeah, a few things this makes me think of. One is there's some experiment, and I guess you can just try it yourself, where if you force people not to use disfluences... disfluencies, sorry, like ums and ahs, those people just get much worse at speaking. In some sense, disfluencies, and I guess I'm using the word 'like' right now, help us communicate thoughts for some reason. And then if you take a podcast like this, and I guess I can speak for myself here.
And then you word for word transcribe what you're saying, and when I say you, I mean me, it's like hot garbage. It's like I've just learned how to talk.

Yes.

But that pattern of speech, like you point out, is in fact easier to digest.

Or at least it requires less stamina or effort.

Yeah, and Taleb has an interesting point about this in Antifragile. I'm vaguely remembering this, but he makes the point that sometimes when a signal is distorted in some way, you retain or absorb more of it, because you have to go through extra effort to understand it. I think his example was: if somebody is speaking but they're far away or something, so their audio is muffled, you have to apply more concentration, which means you retain more of their content.
So if you overlay what someone says with a bit of noise, or turn down the volume, very often people have better comprehension of it, because of the thing you just said, which is that you're paying more attention. Also, I think maybe I was misremembering the thing I mentioned earlier, or maybe it's a different thing, which is that you can take perfect speech, like recordings, and then you can insert ums and ahs and make it worse, and then you can do a comprehension test where people listen to the different versions, and they do better with the versions which are less perfect.

Is it just about having more space between words? If you just added space instead of ums, would that have the same effect, or is there something specific about the um?

There's a limit to how much I can stretch from my second-year psychology course, but maybe 'um' is some global consonant that, like 'om' or something, evokes absolute concentration. I'm curious to ask you, I want to know what you feel like you've learned from doing podcasting. So maybe one question here is: what's some underappreciated difficulty of trying to ask good questions? I mean, you obviously are currently asking excellent questions.
So what have you learned?

Well, one thing: I think I've heard this advice that you want to do something where a thing that seems easy to you is difficult for other people. Like, I have tried. Okay, so one obvious thing you can do is ask on Twitter: I'm interviewing this person, what should I ask them? And you'll observe that all the questions that people propose are terrible. But maybe it's just, oh yeah, there's adverse selection: the people who actually could come up with good questions are not going to spend the time to reply to your tweet. But then, and hopefully they're not listening, more recently I've even tried to hire, I don't know, research partners or research assistants who can help me come up with questions. And the questions they come up with also seem flat, you know, 'How did growing up in the Midwest change your views about blah blah?' It's just a question whose answer is not interesting. It's not a question you would organically have, or at least I hope you wouldn't organically want to ask, if you were only talking to them one-on-one. So it does seem like the skill is rarer than I would have expected. I don't know why. I don't know if you have a good sense of this, because you have an excellent podcast where you ask good questions.
What do you think? Have you observed that asking good questions is a rarer skill than you might think?

I've certainly observed that it's a really hard skill. I still feel like it's really difficult. I also at least like to think that we've got a bit better. The first thing I thought of was the example you gave, 'What was it like growing up in the Midwest?' We would always ask those kinds of questions, you know, like 'How did you get into behavioral economics, and why do you think it was so important?' These are just guaranteed to get kind of uninspiring answers.

So specificity seems like a really good...

'What is your book about?'

Yeah, exactly. Exactly.

'Tell us about yourself.'

This is why I love Conversations with Tyler. One of the many reasons I love it is he'll just launch with, you know, the first question being about some footnote in this person's undergrad dissertation, and that just sets the tone so well. Also, I think cutting off answers, which I've made very difficult for you here, once the interesting thing has been said; the elaboration, or the caveats on the meat of the answer, are often just way less worth hearing. I think trying to ask questions which a person has no hope of knowing the answer to, even though it'd be great if they knew the answer, like 'So what should we do about this policy?', is a pretty bad move. Also, if you speak to people who are used to being asked questions about, for instance, their book, in some sense you need to flush out the pre-prepared spiel they have in their heads; you could even just do this before the interview, right, and then it gets to the good stuff where they're actually being made to think about things. Rob Wiblin has a really good list of interview tips. I guess the reason this is nice to talk about, other than the fact that it's good to have some inside-baseball talk, is that the skills of interviewing feel pretty transferable to just asking people good questions, which is a generally useful skill, hopefully. So yeah, I've found that it's really difficult, and I still get pretty frustrated with how hard it is, but it's a cool thing to realize that you are able to slowly learn.

Okay, so how do you think about the value you're providing through your podcast? And what advice do you have for somebody who might want to start their own?

Yeah. So one reason you might think podcasts are really useful in general, I guess the way I think about this is: you can imagine there's a kind of stock of ideas that seem really important.
Like, if you just have a conversation with someone who's researching some cool topic, and they tell you all this cool stuff that isn't written up anywhere, you're like, oh my god, this needs to exist in the world. I think in many cases this stock of important ideas just grows faster than you're able to, in some sense, pay it down and put it out into the world, and that's just a bad thing. So there's this overhang you want to fix, and then you can ask this question of, okay, what's one of the most effective ways to communicate ideas relatively well and put them out into the world? Well, just having a conversation with that person is one of the most efficient ways of doing it. I think it's interesting in general to consider the rate of information transfer for different kinds of media, for transmitting and receiving ideas. So on the best end of the spectrum, I'm sure you've had conversations where everyone you're talking with shares a lot of context, and so you can just blurt out this slightly incoherent three-minute 'I just had this thought in the shower,' and they can fill in the gaps and basically just get the idea.

And then at the opposite end, maybe you want to write an article for a kind of prestigious outlet, and so you're covering all your bases and making it really well written, and the information per unit of effort is just so much lower. And I guess certain kinds of academic papers are way out on the other side. So yeah, as a way of solving this problem of an overhang of important ideas, podcasts just seem like a really good way to do it. I guess when you don't successfully put ideas out into the world, you get these little clusters, or fogs, of contextual knowledge, where everyone knows these ideas in the right circles, but they're hard to pick up from legible sources. It kind of maps onto this idea of context being the thing that is scarce; I remember Tyler Cowen talking about that, and it eventually made sense to me in this context.

I will mention that the thing you described, about either just heading off on a podcast and explaining your idea, or taking the time to do it in a prestigious place, seems very much like a barbell strategy. Whereas the middle ground, spending four or five hours writing a blog post that's not going to be anywhere super prestigious: you might as well either just put it up in a podcast, if it's the thing you just want to get over with, or spend a little bit more time getting it into a more prestigious place. The argument against that, I guess, is that the idea seems more accessible if it's in the form of a blog post, for posterity, if you want that to be the canonical source for something. But again, if you want it to be the canonical source, you should just make it a more official thing, because if it's just a YouTube clip, then it's a little difficult for people to reference it.

And you can kind of get the best of both worlds. You can put your recording into, you know, there's software that transcribes your podcast, right? You can put it into that, and if you're lucky enough to have someone to help you with this, you can get someone, or just do it yourself, to go through the transcript and make sure there aren't any glaring mistakes. And now you have this artifact in text form that lives on the internet, and it's just way cheaper than writing it in the first place. But yeah, that's a great point. And also people should read your, 'Barbells for Life,' is that it? 'Barbell Strategies for Life'?

Yeah, that's it.

Cool. Maybe one last thing that seems worth saying on this topic of podcasting is that it's quite easy to start doing a podcast, and my guess is it's often worth at least trying. I guess there are probably a few people listening to this who've entertained the idea. One thing to say is that it doesn't need to be the case that, if you just stop doing it, if it doesn't really pan out after five episodes or even fewer, that it's a failure. You can frame it as: I wanted to make a small series; it's a useful artifact to have in the world, which is, I don't know, here's this bit of history that I think is underrated, and I'm going to tell the story in four different hour-long episodes. If you set out to do that, then you have this self-contained chunk of work. So maybe that's a useful framing. And there's a bunch of resources, which I'm sure it might be possible to link to, on just how to set up a podcast; I tried collecting some of those resources.

The thing to emphasize, I think, is that I've talked to at least three or four people at this point who have told me, oh, I have this idea for a podcast, it's going to be about architecture, or it's going to be about VR or whatever.

They seem like good ideas; I'm not making fun of the ideas themselves. But I talk to them like six months later, and they haven't started it yet. And I just tell them: literally just email somebody right now, whoever you want to be your first guest. I mean, I cold emailed Bryan Caplan and he ended up being my first guest. Just email them and set something on the calendar, because, and I don't know what it is, maybe it's just about life in general, I don't know if it's specific to podcasting, but the number of people I've talked to who have vague plans of starting a podcast and have nothing scheduled... I don't know what they're expecting, some MP3 file to appear on their hard drive one fine day. So yeah, just get it on the calendar now.

Yeah, that seems good. Also, there's some way of thinking about this where, if you just write off in advance that your first, let's say, seven episodes are going to be embarrassing to listen to, that is more freeing, because it probably is the case. But you need to go through the bad episodes before you start getting good at anything; I guess that's not even a podcast-specific point. Also, if you're just brief and polite, there's very little cost in being ambitious with the people you reach out to. So yeah, just go for it.

Right. Bryan wrote an interesting argument about this somewhere, where he pointed out that actually the costs of cold emailing are much lower if you're an unknown quantity than if you're somebody who has somewhat of a reputation. Because if you're just nobody, then they're going to forget you ever cold emailed them, right? They're going to ignore it in their inbox, and if you ever run into them in the future, they're just not going to register that they'd come across you the first time. Whereas if you're somebody who has somewhat of a reputation, then there's this mystery of, why are we not getting introduced by somebody who should know both of us, right, if you claim to be, I don't know, a professor who wants to start a podcast? But anyway, that's just reinforcing the point that the cost is really low.

All right, cool.
Okay. Let's talk about space, space governance.
So this is an area that you've been writing about and researching recently. Okay. One concern you might have is, you know, Toby Ord has that book, The Precipice, about how we're in this time of perils where we have something like one in six odds of going extinct this century. Is there some reason to think that once we get to space this will no longer be a problem, or that the risk of extinction for humanity will asymptote to zero?

I think one point here, so actually maybe it's worth beginning with the kind of naive case for thinking that spreading through space is just the ultimate hedge against extinction. You can imagine duplicating civilization, or at least having civilizational backups, things which are in different places in space. If the risk of any one of them being hit by an asteroid, or otherwise encountering some existential catastrophe, is independent of the others, then the overall risk falls exponentially with every new backup, right? It's like having multiple backups of some data in different places in the world. So if those risks are independent, it is in fact the case that going to space is just an incredibly good strategy. But I think there are pretty compelling reasons to think that a lot of the most worrying risks are really not independent at all.
So one example is, you can imagine very dangerous pathogens. If there's any travel between these places, then the pathogens are going to travel.
But maybe the more pertinent example is: if you think it's worth being worried about artificial general intelligence that is unaligned, that goes wrong and really relentlessly pursues terrible goals, then just having some physical space between two different places is really not going to work as a hedge. So I'd say something like: space seems net useful, to diversify, to go to different places, but absolutely not sufficient for getting through this kind of time of perils.

Then, yeah, I guess there's this follow-up question, which is: okay, well, why expect that there is any hope of getting the risk down to sustainable levels? If you're sympathetic to the possibility of really transformative artificial general intelligence arriving, you might think that, in some sense, getting that transition right, where the outcome is that now you have this thing on your side which has your interests or good values in mind, but has this general-purpose reasoning capability, just tilts you towards being safe indefinitely. One reason is that if bad things pop up, like some unaligned thing, then you have this much better established, safe and aligned thing, which has a kind of defensive advantage. So that's one consideration. And then if you're less sympathetic to this AI story, I think you'd also tell a story about being optimistic about our capacity to catch up along some kind of wisdom or coordination dimension. If you really zoom out and look at how quickly we just invented all this insane technology, that's a roughly exponential process; you might think that might eventually slow down, while our improvements in just how well we're able to coordinate ourselves continue to increase, so that you get this defensive advantage in the long run. Those are two pretty weak arguments, so I think it's actually just a very good question to think about, and I also acknowledge that it's not a very compelling answer.
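As a brief editorial aside, the 'independent backups' point earlier in this answer rests on a standard piece of arithmetic; here is a sketch of it, with made-up numbers, not a claim from the conversation:

```latex
% Sketch: n settlements, each with an independent per-century
% catastrophe probability p.
\[
  P(\text{all } n \text{ settlements lost}) \;=\; p^{\,n}
  \qquad \text{e.g. } p = 0.1,\; n = 5 \;\Rightarrow\; 10^{-5}.
\]
% With a correlated risk (an unaligned AI, a pathogen carried by travel)
% that reaches every settlement with probability q, the risk is bounded below:
\[
  P(\text{all lost}) \;\geq\; q \quad \text{for any } n,
\]
% so adding backups only buys the exponential improvement against risks
% that really are uncorrelated across settlements.
```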
I'm wondering if there are aspects you can discern from first principles about the safety of space, which suggest either, I don't know, that there's no good reason to think the time of perils ever ends, because the thing about AI is that that's true whether you go to space or not, right, and if it's aligned, then I guess it can indefinitely reduce existential risk. I mean, one thought you can have is, maybe, contra the long reflection thing we were talking about, that if you think one of the bottlenecks to a great future could be, I don't know, some sort of tyrannical, and tyrannical is a kind of coded term in conventional political thought, but you know what I mean, then the diversity of political models you get from being spread out, maybe that's a positive thing. On the other hand, Gwern has this interesting blog post about space wars, where he points out that the logic of mutually assured destruction goes away in space. So maybe we should expect more conflict, because it's hard to identify who the culprit is: if an asteroid was redirected at your planet, and they can speed it up sufficiently fast, they can basically destroy your above-ground civilization. So is there something we can discern from first principles about how violent and how pleasant the time in space will be?

Yeah, it's a really good question. I will say that I think I have not reflected on that question enough to give a really authoritative answer.

Incidentally, one person who absolutely has is Anders Sandberg, who has been thinking about almost exactly these questions for a very long time, and at some point in the future might have a book about this. So watch that space.

One consideration is that you can start at the end; you can consider what happens very far out in the future. And it turns out that, because the universe is expanding, for any point in space, so if you consider the next cluster over, or maybe even the next galaxy over, there'll be a time in the future where it's impossible to reach that other point in space no matter how long you have to get there. Even if you sent out a signal in the form of light, it would never reach there, because there'll always be a time in the future where you're moving apart faster than the speed of light relative to that other place. So there's a small consolation there, which is that if you last long enough to get to this kind of era of isolation, then suddenly you become independent again in the strict sense.

I don't think that's especially relevant when we're considering, relatively speaking, nearer-term things. Gwern's point is really nice. So Gwern starts by pointing out that we have this logic with nuclear weapons on Earth, of mutually assured destruction, where the emphasis is on a second strike: if I receive a first strike from someone else, I can identify who that first strike came from, and I can credibly commit to retaliating. And the thought is that this disincentivizes that person from launching the first strike in the first place, which makes a ton of sense.
Gwern's point, I guess the thing you already mentioned, is that in space there are reasons for thinking it's going to be much harder to attribute where a strike came from. That means that you don't have any credible way to threaten a retaliation.
And so mutually assured destruction doesn't work. And that's actually a bit of an uncomfortable thought, because the alternative to mutually assured destruction, in some sense, is just first strike: if you're worried about some other actor being powerful enough to destroy you, then you should destroy their capacity to destroy you. So yeah, it's a slightly bleak blog post. I think there are a ton of other considerations, some of which are a bit more hopeful. One that you might imagine is that there's in general a kind of defensive advantage in space over offense. One reason is that space is this dark canvas in 3D where there's absolutely nowhere to hide, and so you can't sneak up on anyone. But yeah, there's a lot of stuff to say here, and a little bit I don't quite fully understand yet, but I guess that makes it an interesting and important subject to be studying, if we don't know that much about how it's going to turn out.

So, von Neumann has this vision that you would set up a sort of virus-like probe that infests a planet and uses its usable resources to build more probes, which go on and infect more planets. Is the long-run future of the universe that all the available low-hanging resources are burnt up in, you know, some sort of expanding fire of von Neumann probes? Because it seems like as long as one person decides that this is something they want to do, then, yeah, the low-hanging fruit in terms of spreading out will just be burned up by somebody who built something like this.
Yeah, that's a really good question. So, okay, maybe there's an analogy here: on Earth we have organisms which can convert raw resources plus sunlight into more of themselves, and they replicate.
It's notable that they don't blanket the Earth, although, just as an addendum, I remember someone mentioning the thought that if an alien arrived on Earth and asked what the most successful species is, it would probably be grass. But again, the reason that particular organisms that just reproduce using sunlight don't have this green goo dynamic is because there are competing organisms, there are things like antivirals and so on. So I guess, like you mentioned, it sounds like as soon as this thing gets seeded it's game over, but you can imagine trying to catch up with these things and stop them. And I don't know what the equilibrium is where you have things that are trying to catch things and things which are also spreading. It's pretty unclear, but it's not clear that everything gets burned down, although it seems worth having on the table as a possible outcome.

And then another thought is, I guess, something you also basically mentioned. Robin Hanson has this paper called, I think, Burning the Cosmic Commons.
I think the things he says are a little bit subtle, but to bastardize the overall point, there's an idea that you should expect selection effects on what you observe in the long run, in terms of which kinds of things have won out. There's a kind of race for different parts of space, and in particular the things you should expect to win out are the things which burn resources very fast and are greedy in terms of grabbing as much space as possible. And I don't know, that seems roughly correct. He also has a more recent bit of work called Grabby Aliens, I think there's a website, grabbyaliens.com, which expands on this point and asks this question about when, you know, we should expect to see such grabby civilizations. Yeah, I mean, maybe one slightly fanciful upshot here is that you don't want these greedy von Neumann type probes to win out that are also just dead.
They have nothing of value. And so if you think you have something of value to spread, maybe that is a reason to spread more quickly than you otherwise would have planned, once you've figured out what that thing is, if that makes sense.

Yeah. So then does this militate towards the logic of a space race? Similar to the first strike, where if you're not sure that you're going to retaliate you want to strike first, maybe there's a logic of: as long as you have at least somewhat of a compelling vision of what the far future should look like, you should try to make sure it's you who's the first actor that goes out into space, even if you don't have everything sorted out, even if you have concerns about how... yeah, like you'd ideally like to spend more time.

My guess is that the timescales on which these dynamics are relevant are extremely long compared to what we're familiar with. So I don't think that any of this straightforwardly translates into, you know, wanting to speed up on the order of decades. And in fact, if any delay on the order of decades, and presumably also centuries, gives you a marginal improvement in your long-run speed, then just because of, again, the timescales and the distances involved, you almost always want to take that trade-off. So yeah, I guess I'd be wary of reading too much into all this stuff in terms of what we should expect for some kind of race in the near term. It just turns out that space is extremely big and there's a ton of stuff there.
So in anything like the near term, I think this reasoning about like, oh, we'll run out of useful resources probably won't kick in. But that's just me speculating.
So I don't know if I have a clear answer to that.

Okay, so if we're talking about space governance: in the far future, we can expect that space will be colonized either by fully artificial intelligences or by simulations of humans, like ems.
In either case, it's not clear that these entities would feel that constrained by whatever norms of space governance we detail now. What is the reason for thinking that any sort of charter or constitution that the UN might build, regardless of how sane it is, will be the basis on which the actual long-run fate of space is decided?

Yeah, yeah. So I guess the first thing I want to say is that it does in fact feel like an extremely long shot to expect that any kind of norms you end up agreeing on now, even if they're good, flow through to the point where they really matter, if they ever do. But okay, so you can ask what are the worlds in which this early thinking does end up being good. I don't know, I can imagine, for instance, the US Constitution surviving in importance, at least to some extent, if digital people come along for the ride; it's not obvious why there's some discontinuity there. I guess the important thing is considering what happens after anything like transformative artificial intelligence arrives. My guess is that the worlds in which this super long-term "what norms should we have for settling space" thinking matters, or does anything worthwhile, are worlds in which, you know, alignment goes well, right? And it goes well in the sense that there's a significant sense in which humans are still in the driving seat, and when they're looking for precedents they just look to existing institutions and norms. So I don't know, there are so many variables here that this seems like a fairly narrow set of worlds, but it seems pretty possible. And then there's also stuff like, you know, settling the Moon or Mars, where it is just much easier to imagine how this stuff actually ends up influencing, or positively influencing, how things turn out. It feels worth pointing out that there are things that really plausibly matter when we're thinking about space that aren't just these crazy, very long-run sci-fi scenarios, although those are pretty fun to think about.
One is that there's just a ton of pretty important infrastructure currently orbiting the Earth, and also anti-satellite weapons are being built. And my impression is, well, in fact I think it's the case, that there is a worryingly small amount of agreement and regulation about the use of those weapons.
Maybe that puts you in a kind of analogous position to not having many agreements over the use of nuclear weapons, although maybe less worrying in certain respects, but it still seems worth taking that seriously and thinking about how to make progress there. Yeah, and I think there are a ton of other near-term considerations.
There's this great graph, actually, on Our World in Data, which I guess I can send you the link to after this, which shows the number of objects launched into orbit, especially low-Earth orbit, over time. And it's a perfect hockey stick.
And it's quite a nice illustration of why it might pay to think about how to make sure this stuff goes well. The story behind that graph is kind of fun as well. I was messing around on some UN website which had this incredible database, which has more or less every officially recorded launch logged, with all this data about how many objects were contained and so on. It was the clunkiest API you've ever seen: you have to manually click through each page, it takes like five seconds to load, and you have to scrape it. So I was like, okay, it's great that this exists, but I am not remotely sophisticated enough to know how to make use of it. So I emailed the Our World in Data people saying, FYI, this exists, and if you happen to have a ton of time to burn, then have at it. And Ed Mathieu from Our World in Data got back to me like a month later: hey, I had a free day, all done, and it's up on the website. It's so cool. Okay, I think that's my space rambling done. I guess I'd quite like to ask you a couple of questions, if that's all right. I realize I've been kind of hogging the airwaves.
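For anyone curious what that page-by-page scrape looks like in practice, here's a rough sketch; the endpoint, query parameter, and table selector are placeholders rather than the actual UN registry interface.

```python
# Rough sketch of scraping a slow, paginated launch register one page at a time.
# BASE_URL and the selectors below are illustrative placeholders, not the real site.
import time
import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example.org/launch-register"  # placeholder endpoint

def scrape_register(max_pages=5, delay_s=5.0):
    """Fetch each results page in turn and collect the table rows."""
    records = []
    for page in range(1, max_pages + 1):
        resp = requests.get(BASE_URL, params={"page": page}, timeout=30)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        for row in soup.select("table tr")[1:]:           # skip the header row
            cells = [td.get_text(strip=True) for td in row.find_all("td")]
            if cells:
                records.append(cells)
        time.sleep(delay_s)  # be polite; each page is slow to load anyway
    return records

if __name__ == "__main__":
    print(len(scrape_register(max_pages=2)), "rows scraped")
```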
Yeah. So here's one thing I've just been interested to know.
You're doing this blogging and podcasting right now, but yeah, what's next? Like, in 2024, what is he doing?

I think, um, I'll probably be... I don't know, the idea of building a startup has been very compelling to me.
And not necessarily because I think it's the most impactful thing that could possibly be done, although I think it is very impactful. It's just, I don't know.
People tend to have different things, like "I want to be a doctor" or something, that are, you know, stuck in your head. So yeah, I think that's probably what I'll be attempting to do in 2024.
I think the situation in which I remain a blogger and podcaster is if the podcast becomes really huge, right? At that point it might make more sense: oh, actually, this is the way. Currently I think the impact the podcast has is like 0.00000001, and most of that is just me getting to learn about a lot of different things. So I think, not necessarily that it has to be thought of in terms of impact, but in terms of how useful it is?

I think it's only the case if it really becomes much bigger.

Nice. That sounds great. Maybe this is jumping right into that conversation, but what about a nonprofit startup? All the same excitement, and if you have a great idea, you kind of skip the fundraising stage. More freedom, because you don't need to make a profit.

Well, no, you still have to raise money, right?

Sure. But if it's a great idea, then I'm sure there'll be support to make it happen.

Yeah, if there's something where I don't see a way to profitably do it, and I think it's very important that it be done, yeah, I definitely wouldn't be opposed to it. But is that, by the way, where you're leaning? Like, I asked you: in 2024, what is Finn doing? Do you have a nonprofit startup?

I don't have something concrete in mind. That kind of thing feels very exciting to me, to at least try out.
Gotcha, gotcha. Yeah, I think I guess my prior is that there are profitable ways to do many things if you're more creative about it.
There are obvious counterexamples, so many different things where, yeah, I could not tell you how you could make that profitable, right? Like, if you have something like 1Day Sooner, where they're trying to, you know, speed up challenge trials, it's like, how is that a startup? It's not clear. So yeah, I think that there's a big branch of the decision tree where I think that's the most compelling thing I could do. Nice.
And maybe a connected question: I'm curious what you think EA in general is underrating, from your point of view. Also, maybe another question you could answer instead is what you think I am personally getting wrong or got wrong, but maybe the more general question is a more interesting one for most people.
So I think when you have statements which are somewhat ephemeral or ambiguous... Like, let's say there's some historian, like Toynbee, right? He wrote A Study of History, and one of the things he says in it is that civilizations die when the elites lose confidence in the norms that they're setting, when they lose the confidence to rule. So I don't think that's actually an x-risk, right? I'm just trying to use that as an example, something that comes up off the top of my head.
It's the kind of thing that could be true. I don't know how I would think about it in a sort of... I mean, it doesn't seem tractable.
I don't know how to even analyze whether it's true or not using the modes of analyzing the importance of topics that we've been using throughout this conversation. I don't know what that implies for EA because it's not clear to me.
Maybe EA shouldn't be taking things that are vague and ambiguous like that seriously to begin with, right? Yeah, if there is some interesting way to think about statements like that, from a perspective that EAs could appreciate, including myself, from a perspective that I could appreciate, I'd be really interested to see what that would be. Because there does seem to be a disconnect where, when I talk to my friends who are intellectually inclined and have a lot of interesting ideas, it requires a sort of translation layer, almost like a compiler, or like a transpiler that, you know, converts code from one language into, like, assembly. It does create a little bit of inefficiency, and potentially a loss of topics that could be talked about.

Nice, that feels like a great answer. I'd just say it's something I'm kind of worried about as well, especially leaning towards the more speculative, longtermist end. It seems really important to keep hold of some real truth-seeking attitudes where the obvious feedback on whether you're getting things right or wrong is much harder to come by, and often you don't have the luxury of having it. So yeah, I think just keeping that attitude in mind seems very important.

I like that. What is your answer, by the way? What do you think you should improve on?

Yeah, I guess off the top of my head maybe I have two answers, which go in exactly opposite directions. So one answer is that something that looks a bit like a failure mode, which I'm a bit worried about, is that as, or if, the movement grows significantly, then the ideas that originally motivated it, which were quite new and exciting and important ideas, somewhat dilute. Maybe because, and I guess it's related to what you said, you kind of lose these attitudes of just taking weird ideas seriously, of scrutinizing one another quite a lot, and it becomes a bit like, I don't know, greenwashing or something, where the language stays but the real fire behind it, of taking impact really seriously rather than just saying the right things, kind of fades away. So I don't think I want to say EA is currently underrating that in any important sense, but it's something that seems worth having as a worry on the radar. And then the roughly opposite thing that also seems worth worrying about: I think it's just really worth paying attention to, or worth considering, best-case outcomes, where a lot of this stuff maybe grows quite considerably. You know, thinking about how this stuff could become mainstream, thinking about really scalable projects as well as just little fun interventions on margins. There's at least some chance that becomes very important. And so, as such, one part of that is maybe just learning to make a lot of these fields legible and attractive to people who could contribute, who are learning about it for the first time. And just, yeah, in general planning for the best case, which could mean thinking in very ambitious terms, thinking about things going very well.
That just also seems worth doing. So I think it's a very vague answer, but maybe that's, um, yeah, maybe not worth keeping in, but that's my answer.
Perhaps, you know, opposite to what you were saying about not taking weird ideas seriously enough in the future, maybe they're taking weird ideas too seriously now. It could be the case that just following basic common-sense morality, kind of like what Tyler Cowen talks about in Stubborn Attachments, is really the most effective way to deal with many threats, even weird threats.
If you have areas that are more speculative, like biorisk or AI, where it's not even clear that the things you're doing to address them are necessarily making them better, I know there's concern in the movement that the initial grant that they gave to OpenAI might have sped up AI doom, maybe the best-case scenario in cases where there's a lot of ambiguity is to just do more common-sense things. And maybe this is also applicable to things like global health, where malaria nets are great, but the way that hundreds of millions of people have been lifted out of poverty is just through implementing capitalism, right? It's not through targeted interventions like that. Again, I don't know what this implies for the movement in general.
Even if just implementing the neoliberal agenda is the best way to decrease poverty, what does that mean that somebody should do? Yeah, what does that mean you should do with the marginal million dollars, right? So it's not clear to me. It's something I hope I'll know more about in five to ten years; I'd be very curious to talk to future me about what he thinks about common-sense morality versus taking weird ideas seriously.

I think one way of thinking about quote-unquote weird ideas is that, in some sense, they are the result of taking a bunch of common-sense starting points and then just really reflecting on them hard and seeing what comes out. So I think maybe the question is how much trust we should place in those reflective processes, versus what my prior should be on weird ideas being true because they're weird; is that good or bad? And then, separately, one thing that just seems kind of obvious and important is that if you take these ideas, first of all you should ask yourself whether you actually believe them, or whether they are just kind of fun to say, or whether you're just kind of saying that you believe them. Sometimes an idea is fun to say, but it's like, okay, I actually don't have good grounds to believe this.
And then second of all, if you do in fact believe something, it's really valuable to ask: if you think this thing is really important and true, why aren't you working on it, if you have the opportunity to work on it? This is the Hamming question, right? What's the most important problem in your field, and then what's stopping you from working on it? And obviously not everyone has the luxury of dropping everything and working on the things that they in fact believe are really important, but if you do have that opportunity, that's a question which is maybe just valuable to ask.

Maybe this is a meta-objection to EA, which is that I'm aware of a lot of potential objections to EA, like the ones we were just talking about, but there are so many other ones where people will identify, yeah, that's an interesting point, and then nobody knows what to do about it, right? It's like, oh, you know, should we take common-sense morality more seriously, should we take weird ideas more seriously? It's like, oh, that is an interesting debate, but then how do you resolve that? I don't know how to resolve that. I don't know if somebody's come up with a good way to resolve that.

I guess it kind of hooks into the long reflection stuff a little bit, because one answer here is just time. So I think the story of people raising concerns about AI is maybe instructive here, where, you know, early on you get some real kind of radical, out-there researchers or writers who are raising this as a worry, and there's a lot of weird baggage attached to what they write.
And then maybe you get a first book or two, and then you get more prestigious or established people expressing concerns. I think one way to accelerate that process, when it's worth accelerating, is just to ask that question, right? Like, do I in fact see, can I go along with this argument, do I see a hole in it? And then if the answer is no, if it just kind of checks out, even though you're obviously always going to be uncertain, but if it's like, yeah, it seems kind of reasonable, then by default you might just spend a few years kind of living like, oh yeah, this is a thing that I guess I think is true, but I'm not really acting on it. You can just skip that step.

Well, I'm not sure I agree. I think maybe an analogy here is, I don't know, you're in a relationship and you think, oh, well, I don't see what's wrong with this relationship.
So instead of just waiting a few years to try to find something wrong with it, you might as well just tie the knot now and get married. I think something similar is a failure mode that, maybe because we're EAs you wouldn't see it in EA, but you can see it generally in the world, which is that people just come to conclusions about how the world works, or how the world ought to work, too early in life, when they don't seem to know that much about what is optimal and what is possible.

That's a great point. So yeah, maybe they should just wait a little longer, maybe just integrate these weird, radical ideas as things that exist in the world, and wait until you're in your late twenties before you decide, actually, this is the thing I should do with the rest of my career, or with my politics, or whatever.

Yeah, I think that's actually just a really good point. I think maybe I'd want to walk back what I said based on that, but I think there's some version of it which I'd still really endorse, which is maybe, you know, I've spent some time reflecting on this, such that I don't expect further reflection is going to radically change what I think. You can maybe talk about this being the case for a group of people rather than a particular person. And I could just really see this thing playing out where I just believe it's important for a really long time without acting on it, and that's the thing which sure seems worth skipping. I mean, to be a tiny bit more concrete, if you really think some of these potentially catastrophic risks just are real, and you think there are things that we can do about it, then it sure seems good to start working on this stuff. And you really want to avoid that regret of, some years down the line, thinking, oh, I really could have just started working on that earlier.
There are occasions where this kind of thinking is useful, or at least kind of asking this question, like, what would I do right now if I just did what my kind of idealized self would endorse doing? Maybe that's useful. So it seems that if you're trying to pursue, I don't know, a career related to EA, there's like two steps where the first step is you have to get a position like the one you have right now, where you're, you know, learning a lot and figuring out future steps.
And then the one after that is where you actually lead or take ownership of a specific project, like a nonprofit startup or something. Do you have any advice for somebody who is before step one? Huh, that's a really good question.
I also will just do the annoying thing of saying: there are definitely other things you can do other than that kind of two-step trajectory, but yeah.

As in, go directly to step two? Or just never go to step two, and just be a really excellent researcher or communicator, or anything else. I think, where you have the luxury of doing it, not rushing into the most salient career option and then retroactively justifying why it was the correct option is quite a nice thing to bear in mind. I suppose often it's quite uncomfortable. Obviously, I don't want to do something like consulting. Yeah, something like that. Yeah, I mean, the obvious advice here is that there is a website designed to answer this question, which is 80,000 Hours.

Oh yeah.

So there's a particular bit of advice from 80k which I found very useful. After I left uni I was really unsure what I wanted to do. I was choosing between a couple of options, and I was like, oh my god, this is such a big decision, because I guess in this context it's not only that you have to answer the question of what might be a good fit for me, what I might enjoy, but also, in some sense, what is actually most important, maybe. And how am I supposed to answer that, given that there's a ton of disagreement? And so I just found myself bashing my head against the wall, trying to get to a point where I discerned that one option was better than the other. And the piece of advice that I found useful was that often you should just write off the possibility of becoming fully certain about which option is best. Instead, what you should do is reflect on the decision proactively, that is, you know, talk to people, write down your thoughts, and just keep iterating on that until the dial stops moving backwards and forwards and just kind of settles on some particular uncertainty. So it's like, look, I guess I'm kind of 60 to 70 percent that option A is better than B, and that hasn't really changed, having done a bunch of extra thinking. That's, roughly speaking, the point where it might be best to make the decision rather than holding out for certainty.
Does that make sense?

Yeah, it's kind of like gradient descent, where if the loss function hasn't changed in the last iteration, you call it a day.

Nice, I like that. Yeah.
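To make the analogy concrete, here's a minimal sketch of that stopping rule; the toy objective and thresholds are arbitrary stand-ins for the real decision.

```python
# Minimal early-stopping loop: keep updating while the "loss" is still moving,
# and stop once successive iterations barely change it (the dial has settled).

def minimize_until_settled(x0=5.0, lr=0.1, tol=1e-4, max_steps=1000):
    loss = lambda x: (x - 2.0) ** 2          # toy objective, minimum at x = 2
    grad = lambda x: 2.0 * (x - 2.0)
    x = x0
    prev_loss = loss(x)
    for step in range(max_steps):
        x -= lr * grad(x)                    # one round of "reflection"
        cur_loss = loss(x)
        if abs(prev_loss - cur_loss) < tol:  # barely moved: call it a day
            break
        prev_loss = cur_loss
    return x, step

print(minimize_until_settled())              # settles near x = 2 after a few dozen steps
```

The decision-making analogue: the loss is how unsettled your view is, each iteration is another round of talking to people or writing your thoughts down, and the tolerance is the point at which further reflection has stopped shifting you.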
I guess one problem maybe that somebody might face is that before they've actually done things, it's hard to know, uh, that like, that's actually a, like, not that this is actually going to be my career, but I would have like the podcast was just something I did as in like, I was bored during COVID and I, I, I, yeah, I, the classes went online and i just didn't have anything else to do i don't think it's something i would have pursued if i ever thought of it well i never thought of it as a career right so it's like uh but just doing things like that can potentially lead you down interesting um interesting yeah yeah yeah i think that's a great point there was um i guess we're both involved with this uh blog prize and there was a uh like a kind of mini prize last month for people writing about like the idea of agency and what you just said I think links into that really nicely there's this kind of property of going from realizing you can do something to doing it which just seems like both really valuable and learnable so yeah just like having going from the idea of i could maybe do a little podcast series to like actually testing it and like being open to the possibility that it fails but you learn something from it just really valuable also we were talking about sending cold emails in that same bit of the conversation right like um if there's someone you look up to and you have you think it's like very plausible that you might end up in their like line of research and you think there's a bunch of things you could learn from them as long as you're not like demanding a huge amount of their time or attention then you can't just like ask to talk to them i think finding a mentor in places like this is uh just like so useful and just like asking people if they could fill that role like again in a kind of friendly way it's just um you know maybe it's a kind of a move people don't opt for a lot of the time but yeah it's like taking the non-obvious options being proactive about connecting to other people seeing if you can like physically meet other people who are like interested in the same kind of weird things as you yeah this is all like extremely obvious but i guess it's stuff i kind of would really have benefited from learning um earlier on yeah and the unfortunate thing is it's like uh not clear how you should apply that in your own circumstance when you're um when you're trying to decide what to do um okay so yeah let's uh let's close out by talking about uh just like plugging the effective ideas uh the blog prize you just mentioned,

You want to talk about them? We already mentioned that earlier, but if you just want to leave links and, again, just summarize them.

Cool, I appreciate that. Yeah. So the criticism contest: the deadline is the 1st of September. The canonical post that announces it is an EA Forum post, which I'd be very grateful if you could link to somewhere, but I'm, you know, happy to do that. And the prize pool is at least a hundred thousand dollars, but possibly more if there are just a lot of exceptional entries. And hopefully all the relevant information is there. And then, yeah, there's this blog prize as well, which I've been kind of helping run. I think you mentioned it right at the start.
So the overall prize is $100,000, and there are up to five of those prizes. But also there are these smaller monthly prizes that I just mentioned.
So last month, the theme was agency. And the theme this month is to write some response or some reflection on this series of blog posts called the Most Important Century series by Holden Karnofsky, which incidentally people should just read anyway.
I think it's just truly excellent, and kind of remarkable that one of the most affecting series of blog posts I've basically ever read was written by the co-CEO of this enormous philanthropic organization in his spare time. It's just kind of insane. Yeah, so the website is effectiveideas.org.

Yeah, and then obviously, where can people find you? Your website, Twitter handle, and then, yeah, where can people find your podcast?

Oh yeah, so the website is my name dot com, finmoorhouse.com. Twitter is my name. And the podcast is called Hear This Idea, as in "listen to this idea", so it's just that phrase dot com, hearthisidea.com. And I'm sure if you Google it, it'll come up.

By the way, what is your probability distribution over how impactful these criticisms end up being, or just how good they end up being? Like, if you had to guess, what is your median outcome, and then what is your 99th or 90th percentile outcome of how good these end up being?

Yeah, okay, that's a good question. I feel like I want to say that doing this stuff is really hard, so I don't want to discourage posting by saying this, but I think, you know, maybe the median submission is really robustly useful, absolutely worth writing and submitting. That said, maybe the difference between the most valuable posts of this kind, or work of this kind, and the median kind of effort is probably very large, which is to say that the ceiling is really high. If you think you have a one percent chance of influencing 100 million dollars of philanthropic spending, then there is some sense in which an impartial philanthropic donor might be willing to spend roughly one percent of that amount to find out that information, right? Which is like a million dollars. So yeah, this stuff can be really, really important, I think. Yeah, okay, excellent.
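Spelled out, that back-of-the-envelope value-of-information claim is just

0.01 \times \$100{,}000{,}000 = \$1{,}000{,}000,

i.e., a one percent chance of redirecting $100 million of spending is treated as worth roughly $1 million in expectation to an impartial funder.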
Yeah, so the stuff you're working on seems really interesting, and the blog prizes seem like they might have a potentially very big impact. I mean, our worldviews have been shaped so much by some of these bloggers we've talked about.
So yeah, if this leads to one more of those, that alone could be very valuable. So Finn, thanks so much for coming on the podcast.
This was the longest, but also one of the most fun conversations I've gotten a chance to do. The whole thing was so much fun.
Thanks so much for having me. Thanks for watching.
I hope you enjoyed that episode. If you did, and you want to support the podcast, the most helpful thing you can do is share it on social media and with your friends. Other than that, please like and subscribe on YouTube and leave good reviews on podcast platforms. Cheers.
I'll see you next time.