Fin Moorhouse - Longtermism, Space, & Entrepreneurship
Fin Moorhouse is a Research Scholar and assistant to Toby Ord at Oxford University's Future of Humanity Institute. He co-hosts the Hear This Idea podcast, which showcases new thinking in philosophy, the social sciences, and effective altruism.
We discuss for-profit entrepreneurship for altruism, space governance, morality in the multiverse, podcasting, the long reflection, and the Effective Ideas & EA criticism blog prize.
Watch on YouTube. Listen on Spotify, Apple Podcasts, etc.
Episode website + Transcript here. Follow Fin on Twitter. Follow me on Twitter.
Subscribe to find out about future episodes!
Timestamps
(0:00:10) - Introduction
(0:02:45) - EA Prizes & Criticism
(0:09:47) - Longtermism
(0:12:52) - Improving Mental Models
(0:20:50) - EA & Profit vs Nonprofit Entrepreneurship
(0:30:46) - Backtesting EA
(0:35:54) - EA Billionaires
(0:38:32) - EA Decisions & Many Worlds Interpretation
(0:50:46) - EA Talent Search
(0:52:38) - EA & Encouraging Youth
(0:59:17) - Long Reflection
(1:03:56) - Long Term Coordination
(1:21:06) - On Podcasting
(1:23:40) - Audiobooks Imitating Conversation
(1:27:04) - Underappreciated Podcasting Skills
(1:38:08) - Space Governance
(1:42:09) - Space Safety & 1st Principles
(1:46:44) - Von Neumann Probes
(1:50:12) - Space Race & First Strike
(1:51:45) - Space Colonization & AI
(1:56:36) - Building a Startup
(1:59:08) - What is EA Underrating?
(2:10:07) - EA Career Steps
(2:15:16) - Closing Remarks
Please share if you enjoyed this episode! Helps out a ton!
Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Transcript
Speaker 1 Today, I have the pleasure of interviewing Fin Moorhouse, who is a research scholar at Oxford University's Future of Humanity Institute, and he's also an assistant to Toby Ord and the host of the Hear This Idea podcast.
Speaker 1 Fin, I know you've got a ton of other projects under your belt. So, do you want to talk about all the different things you're working on and how you got into EA and this kind of research?
Speaker 2 I think you nailed the broad strokes there. I think, yeah, I've kind of failed to specialize in a particular thing.
Speaker 2 And so I found myself just dabbling in projects that seem interesting to me, trying to help get some projects off the ground and just doing research on, you know, things which seem maybe underrated.
Speaker 2
I probably won't bore you with the list of things. And then, yeah, how did I get into EA? Actually, also a fairly boring story, unfortunately.
I really loved philosophy.
Speaker 2 I really loved kind of pestering people by asking them all these questions, you know, why are you not, why are you still eating meat? I read Peter Singer and Will MacAskill.
Speaker 2 And I realized I just wasn't actually living these things out myself. I think there was some just like force of consistency that pushed me into really getting involved.
Speaker 2
And I think the second piece was just the people. I was lucky enough to have this student group where I went to university.
And I think there's some...
Speaker 2 dynamic of realizing that this isn't just a kind of free-floating set of ideas, but there's also just like a community of people I like really get on with and have all these like incredibly interesting kind of personalities and interests.
Speaker 2 So those two things, I think.
Speaker 1 Yeah, and then so what was the process like?
Speaker 1 I know a lot of people who are vaguely interested in EA, but not a lot of them then very quickly transitioned to, you know, working on research with top EA researchers.
Speaker 1 So yeah, just walk me through how you ended up where you are.
Speaker 2 Yeah, I think I got lucky with the timing of the pandemic, which is not something I suppose many people can say. I did my degree, I was quite unsure about what I wanted to do.
Speaker 2 There was some option of taking some kind of close to default path of maybe something like, you know, consulting or whatever.
Speaker 2 And then I was kind of, I guess, forced into this natural break where I had time to step back.
Speaker 2 And I, you know, I guess I was lucky enough that I could afford to kind of spend a few months just like figuring out what I wanted to do with my life.
Speaker 2 And that space was enough to like maybe start like reading more about these ideas also to try kind of teaching myself skills I hadn't really tried yet.
Speaker 2 So try to, you know, learn to code for a lot of this time and so on.
Speaker 2 And then I just thought, well, I might as well wing it. There are some things I can apply to.
Speaker 2 I don't really rate my chances, but the cost to applying to these things is so low, it just seems worth it. And then,
Speaker 2 yeah, I guess I got very lucky and here I am.
Speaker 1
Awesome. Okay.
So let's talk about one of these things you're working on, which is that you've set up and are going to be helping judge these prizes for EA writing.
Speaker 1 One is you're giving out five prizes for $100,000 each for
Speaker 1
blogs that discuss effective altruist related ideas. Another is five prizes of $20,000 each to criticize the ideas.
So talk more about these prizes.
Speaker 1 Why is it now an important time to be talking about and criticizing EA?
Speaker 2 That is a good question.
Speaker 2 I want to say I'm reluctant to frame this as like me personally, like
Speaker 2 I certainly have helped set up these initiatives. Well, so
Speaker 1 I heard on the inside that actually
Speaker 1 you've been pulling a lot of the weight on these projects.
Speaker 2 Certainly, yeah, I've
Speaker 2
found myself with the time to kind of like get these things over the line, which I'm, yeah, I'm pretty happy with. So yeah, the criticism thing, let's start with that.
I want to say something like,
Speaker 2 in general,
Speaker 2 being receptive to criticism is just like obviously really important.
Speaker 2 And if as a movement you want to succeed, where succeed means not just like achieve things in the world, but also like end up having just close to correct beliefs as you can get, then having this kind of property of being like anti-fragile with respect to being wrong, like really celebrating and endorsing, changing your mind in a kind of loud and public way.
Speaker 2 That just seems really important.
Speaker 2 And so, I don't know, this is just like a kind of prima facie obvious case of wanting to incentivize criticism. But you might also ask, like, why now?
Speaker 2 There's a few things going on there. One is, I think the effective altruism movement overall has reached this place where it's actually beginning to do like a lot of really incredible things.
Speaker 2 There's a lot of like funders now kind of excited to find
Speaker 2 kind of fairly ambitious, scalable projects.
Speaker 2 And so it seems like, if there's a kind of an inflection point, you want to get the criticism out the door and you want to respond to it earlier rather than later, because you want to set the path in the right direction rather than adjust course, which is more expensive later on.
Speaker 2 Will MacAskill made this point a few months ago.
Speaker 2 You can also point to this dynamic in some other social movements where the kind of really exciting beliefs that kind of have this like period of plasticity in the early days, they kind of ossify and you end up with this like set of beliefs that's kind of like trendy or socially rewarded to hold.
Speaker 2 In some sense, you feel like you need to hold certain beliefs in order to kind of get credit from
Speaker 2 certain people.
Speaker 2 And the costs to like publicly questioning some practices or beliefs become too high. And that is just like a failure mode.
Speaker 2 And it seems like one of the more salient failure modes for a movement like this.
Speaker 2 So it just seems really important to like be quite proactive about celebrating this dynamic where you notice you're doing something wrong and then you change track.
Speaker 2
And then maybe that means shutting something down, right? You set up a project. The project seems really exciting.
You get some like feedback back from the world.
Speaker 2
Feedback looks more negative than you expected. And so you stop doing the project.
And in some important sense, that is like a success. Like you did the correct thing.
Speaker 2 And it's important to celebrate that. So I think these are some of the things that go through
Speaker 2 my head. Just like framing criticism in like this kind of positive way.
Speaker 1 Yeah, that seems pretty important, right? I mean, analogously, it's said that losses are as important as profits in terms of motivating economic incentives, and it seems very similar here. In a Slack we were talking in, you mentioned that maybe one of the reasons it's important now is: if a prize of twenty thousand dollars can help somebody help us, or not me, I don't have the money, but help SBF figure out how to better allocate, like, ten million dollars, that's a steal. It's really impressive that effective altruism is a movement that is willing to fund criticism of itself.
Speaker 1 I don't know, is there any other example of a movement in history that's been so interested in criticizing itself and becoming anti-fragile in this way?
Speaker 2 I guess one thing I want to say is like the proof is in the pudding here.
Speaker 2 It's one thing to kind of make noises to the effect that you're interested in being criticized, and I'm sure lots of movements make that. Another thing to really follow through on them.
Speaker 2 And EA is a fairly young movement, so I guess time will tell whether it really
Speaker 2
does that well. I'm very hopeful.
I also want to say that this particular prize is one kind of part of
Speaker 2
a much bigger thing, hopefully. That's a great question.
I actually don't know if I have good answers, but that's not to say that there are none. I'm sure there are.
Speaker 2 Like political liberalism as a strand of thought like in political philosophy comes to mind as maybe an example.
Speaker 2 One other random thing I want to point out or mention, you mentioned profits and just like doing the maths and what's the like EV of like investing and just red teaming an idea, like shooting an idea down.
Speaker 2 I think thinking about the difference between the for-profit and non-profit space is quite an interesting analogy here.
Speaker 2 You have this very obvious feedback mechanism in for-profit land, which is you have an idea, no matter how excited you are about the idea, you can very quickly learn whether the world is as excited, which is to say you can just fail.
Speaker 2 And that's like a tight, useful feedback loop to figure out whether what you're doing is worth doing.
Speaker 2 Those feedback loops don't by default exist if you don't expect to get anything back when you're doing these projects. And so that's like a reason to
Speaker 2 want to implement those things like artificially.
Speaker 2 Like, one way you can do this is with charity evaluators, which in some sense impose a kind of market-like mechanism, where now you have an incentive to actually be achieving the thing that you're ostensibly setting out to achieve, because there's this third actor, or party, that's kind of assessing whether you're getting it.
Speaker 2 But I think that that framing, I mean, we can try to say more about it, but that's like a really useful framing, I think, to me anyway.
Speaker 1 And
Speaker 1 one other reason this seems important to me is, if you have a movement that's only about 10 years old, like this one, you know, we have strains of ideas that are thousands of years old that have had significant improvements made to them
Speaker 1 that were missing before. So
Speaker 1 just on that alone, it seems to me that the reason to expect some mistakes, either at a sort of like theoretical level or in the applications, that does seem like, I do have a strong prior that there are such mistakes that could be identified in a reasonable amount of time.
Speaker 1 Yeah.
Speaker 2 I guess one
Speaker 2 framing that I like as well is not just thinking about here's a set of claims we have, we want to like figure out what's wrong, but some really good criticism can look like, look, you just missed this distinction, which is like a really important distinction to make, or you missed this addition to this kind of naive conceptual framework you're using, and it's really important to make that addition.
Speaker 2 A lot of people are skeptical about progress in kind of non-empirical fields, so like philosophy, for instance.
Speaker 2 It's like, oh, we've been thinking about these questions for thousands of years, but we're still kind of unsure.
Speaker 2 And I think that misses like a really important kind of progress, which is something you might call like conceptual engineering or something, which is finding these
Speaker 2 really useful distinctions and then like building structures on top of them.
Speaker 2 And so it's not like you're making claims, which are necessarily true or false, but there are other kinds of useful criticism, which include just like getting your kind of models like more, more useful.
Speaker 1 Speaking of just making progress on questions like these, one thing that's really surprising to me, and maybe this is just like my ignorance of the philosophical history here, it's super surprising to me that a movement like long-termism, at least in its modern form, it took thousands of years of philosophy before somebody had the idea that, oh, like the future could be really big, therefore the future matters a lot.
Speaker 1 And so maybe you could say, like, oh, you know, there's been lots of movements in history that have emphasized, I mean, existential risk maybe wasn't a prominent thing to think about before nuclear weapons, but that have emphasized that civilizational collapse is a very prominent factor that might be very bad for many centuries.
Speaker 1
So we should try to make sure society is stable or something. But do you have some sense of, you have a philosophy background.
So do you have some sense? What is the philosophical background here?
Speaker 1 And to the extent that these are relatively new ideas, how did it take so long?
Speaker 2 Yeah, that's like such a good question, I think.
Speaker 2 One name that comes to mind straight away is this historian called Tom Moynihan, who wrote this book about, something like, the history of how people think about existential risk.
Speaker 2 And then more recently, he's been doing work on the question you asked, which is like...
Speaker 2 What took people so long to reach this? Like, what now seems like a fairly natural thought?
Speaker 2 I think part of what's going on here is that it's really hard, or easy I should say, to
Speaker 2 underrate just how much, I guess it's somewhat related to what I mentioned in the last question, just how much kind of conceptual apparatus we have going on that's like a bit like the water we swim in now and so it's hard to notice.
Speaker 2 So one example that comes to mind is thinking about probability as this thing we can talk formally about. This is like a shockingly new thought.
Speaker 2 Also the idea that human history might end and furthermore that that might be within our control, that is, to decide or to prevent that happening prematurely.
Speaker 2 These are all like really surprisingly new thoughts.
Speaker 2 I think it just like requires a lot of imagination and effort to put yourself into the shoes of people living earlier on who just didn't have the kind of
Speaker 2 yeah, like I said, the kind of tools for thinking that make these ideas pop out much more naturally.
Speaker 2
And of course, as soon as those tools are in place, then the like conclusions fall out pretty quickly. But it's not easy.
And I agree that I appreciate that.
Speaker 2 It actually wasn't a very good answer just because it's such a hard question.
Speaker 1 Yeah, so what's interesting is that more recently, maybe I'm unaware of the full context of the argument here, but I think I've heard Holden Karnofsky write somewhere that he thinks there's more value in thinking about the issues that EA has already identified rather than identifying some sort of unknown risk.
Speaker 1 That, for example, is what AI might have been like 10 or 20 years ago.
Speaker 1 Given this historical experience that you can have some very fundamental tools for thinking about the world missing, and consequently some very important moral implications,
Speaker 1 does that imply that we should expect the space that AI alignment occupies in terms of our priorities, should we expect something as big or bigger coming up?
Speaker 1 Or just generally tools for thinking like, you know, expected value thinking, for example?
Speaker 2 Yeah.
Speaker 2 That's a good question. I think one thing I want to say there is it seems pretty likely that
Speaker 2 the most important, like kind of useful concepts for finding important things are also going to be the lowest hanging.
Speaker 2 And I don't know, I think it's like very roughly correct that we did in fact, like over the course of building out kind of conceptual frameworks, we picked the most important ideas first.
Speaker 2 And now we're kind of like refining things and adding maybe somewhat more peripheral things.
Speaker 2 At least, if that trend is roughly going to hold, that's a reason for not expecting to find some kind of earth-shattering new concept from left field.
Speaker 2 Although I think that's like a very weak and vague argument, to be honest.
Speaker 1 Also, I guess it depends on what you think your time span is.
Speaker 1 Like, if your time span is the entire span of time that humans have been thinking about things, then maybe you would think that actually it's kind of strange that it took like 3,000 years before, maybe even longer.
Speaker 1 I guess it depends on when you define the start point. It took 3,000 years for people to realize, hey, we should think in terms of probabilities and in terms of expected impact.
Speaker 1 So, in that sense, maybe it's like, I don't know, it took like 3,000 years of thinking to get to this very basic, very basic idea. What seems to us like a very important and basic idea.
Speaker 2 I feel like maybe I have, I want to say two things.
Speaker 2 If you imagined lining up like every person who ever lived, just like in a row, and then you kind of like walked along that line and saw how much progress people have made across the line.
Speaker 2 So, you're going like across people rather than across time, then I think like progress in how people think about stuff looks a lot more like linear and, in in fact started earlier than
Speaker 2 maybe you might think by just looking at like progress over time
Speaker 2 and if it was faster early on then
Speaker 2 if you're kind of following the very long run trend then maybe you should expect like
Speaker 2 not to find these kind of, again, totally left-field ideas soon. But I think a second thing, which is maybe more important, is that I also buy this idea that in some sense progress in thinking about what's most important is really kind of boundless.
Speaker 2 Like David Deutsch talks about this kind of thought a lot. When you come up with new ideas, that just generates new problems, new questions, leads to more ideas.
Speaker 2 That's very well and good. I think there's some sense in which
Speaker 2 one priority now
Speaker 2 could just be framed as giving us time to make that progress.
Speaker 2 And even if you thought that we have this kind of boundless capacity to come up with a bunch of new important ideas, it's pretty obvious that that's like a prerequisite.
Speaker 2 And therefore, in some sense, that's like a robust argument for thinking that
Speaker 2 trying not to kind of throw humanity off course and
Speaker 2 preventing mitigating some of these catastrophic risks is always just going to shake out as like a pretty important thing to do, maybe one of the most important things.
Speaker 1 Yeah,
Speaker 1 I think that's reasonable.
Speaker 1 But then there's a question of like, even if you think that the existential risk is the most important thing,
Speaker 1 to what extent have you discovered all the
Speaker 1 again, that like X-risk argument? And by the way, earlier, what you said about
Speaker 1 trying to extrapolate what we might know from the limits of physical laws,
Speaker 1 that can kind of constrain what we think might be possible.
Speaker 1 I think that's an interesting idea, but I wonder, like, partly, like, one argument is just like, we don't even know how to define those physical constraints.
Speaker 1 And before you had the theory of computation, it wouldn't even make sense to say, like, oh, like this much matter can sustain
Speaker 1
so much flops, floating point operations per second. And then second is like, yeah, if you know that number, it still doesn't tell you like what you could do with it.
You know what, I think
Speaker 1 an interesting thing that Karnofsky talks about is he has this article called This Can't Go On, where he makes the argument that, listen, if you just have compounding economic growth, at some point you'll get to the point where,
Speaker 1 you know, like you'll have many
Speaker 1 or many, many, many times Earth's economy per atom in the affectable universe. And so it's hard hard to see like how you could keep having economic growth beyond that point.
Speaker 1 But that itself seems like, I don't know, if that's true, then there has to be like a physical law that's like the maximum GDP per atom is this, right? Like
Speaker 1 if there's no such constant, then you can like, you should be able to surpass it. I guess it still leaves a lot to be desired.
Speaker 1 Even if you could know such a number, you don't know how interesting or what kinds of things could be done at that point.
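To make the "This Can't Go On" arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The 2% growth rate and the ~10^70 atom count are assumptions chosen for illustration, in the spirit of the article rather than exact figures from it:

```python
import math

# Assumed, illustrative numbers (not exact figures from the article):
growth_rate = 0.02   # 2% annual growth of the world economy
atoms = 1e70         # rough order of magnitude for atoms in the affectable universe

# Years until the economy has grown by a factor equal to the number of atoms,
# i.e. until sustaining growth would require more than one of today's
# economies' worth of output per atom.
years = math.log(atoms) / math.log(1 + growth_rate)
print(f"About {years:,.0f} years of compounding")  # on the order of ~8,000 years
```

The point is just how short that horizon is compared to how long humanity could plausibly last, which is what makes the "maximum GDP per atom" framing bite.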
Speaker 2 Yeah, I guess the first one is, you know, even if you think that like preventing these kind of very large-scale risks that might like curtail human potential, even if you think that's just incredibly important, you might miss some of those risks because you're just unable to articulate them or really like conceptualize them.
Speaker 2 I feel like I just want to say at some point,
Speaker 2 we have a pretty good understanding of kind of roughly what looks most important.
Speaker 2 Like for instance, if you kind of, I don't know, get stranded on a camping trip and you're like, we need to just survive long enough to make it out. And it's like, okay, what do we look out for?
Speaker 2 I don't really know what the wildlife is here because I haven't been here before, but probably it's going to look a bit like this. I can at least imagine, you know, the risk of dying of thirst, even though I've never died of thirst before. And then it's like, what if we haven't even begun to think of the other risks? And it's like, yeah, maybe, but there's just some, like,
Speaker 2 you know, table-thumping practical reason for focusing on the things which are most salient, and definitely spending some time thinking about the things we haven't thought of yet. But it's not like that list is just completely endless.
Speaker 2 And there's a kind of, I guess, a reason for that. And then you say the second thing, which I don't actually know if I have like a ton of
Speaker 2 interesting things to say about, although maybe you could try like kind of zooming in on what you're interested in there.
Speaker 1 Come to think of it, I don't think the second thing has
Speaker 1 big implications for this argument, but the two,
Speaker 1 yeah, we have like 20 other topics that are just as interesting that we haven't talked about. Yeah, but just as a, I don't know, as a closing note,
Speaker 1
the analogy is very interesting to me. The camping trip where you're trying to do what needs to be done to survive.
I don't know.
Speaker 1 Okay, so to extend that analogy, it might be like, I don't know, somebody like Eliezer discovers, oh, that berry that we're all about to eat, because we feel like that's the only way to get sustenance here while we're
Speaker 1 just almost starving.
Speaker 1 Don't eat that berry because that berry is poisonous.
Speaker 1 And then
Speaker 1 maybe somebody could point out, okay, so given the fact that we've discovered one poisonous food in this environment, should we expect there to be other poisonous foods that we don't know about?
Speaker 1 But I don't know. I don't know if there's anything more to say on that topic.
Speaker 2 I mean, one thing, well, like, one, I guess, kind of angle you could put on this is, you can ask this question, like,
Speaker 2 we have precedent for a lot of things. Like, we know now that detonating nuclear weapons does not ignite the atmosphere, which was a worry that some people had.
Speaker 2 So we at least have some kind of bounds on how bad certain things can be.
Speaker 2 And so if you ask this question, like, what is worth worrying about most in terms of what kinds of risks might reach this level of potentially posing an existential risk.
Speaker 2 Well, it's going to be the kinds of things we haven't done yet, that we haven't like got some experience with.
Speaker 2 And so you can ask this question, like, what is, what things are there in this space of like
Speaker 2 kind of big, seeming, but totally novel, precedent-free changes or events? And
Speaker 2 it actually does seem like you can kind of try generating that list and getting at answers.
Speaker 2 This is why maybe, or at least one reason why, AI sticks out, because it fulfills these criteria of being potentially pretty big and transformative, and also the kind of thing we don't have any experience with yet.
Speaker 2 But again, it's not as if that list is like, in some sense, endless. Like, there are only so many things we can do in the space of decades, right?
Speaker 1 Okay, yeah, so moving on to another topic, we're talking about for-profit entrepreneurship as
Speaker 1 a potentially impactful thing you can do. Sorry,
Speaker 1 maybe not in this conversation, but we've talked about it separately at one point. Yeah, yeah.
Speaker 1 Yeah, so
Speaker 1 to clarify, this is not just for-profit in order to
Speaker 1 do earning to give. So you become a billionaire and you give your wealth away.
Speaker 1 To what extent can you identify opportunities where you can just build a profitable company that solves an important problem area or makes people's lives better?
Speaker 1 One example of this is Wave. It's a company, for example, that helps with
Speaker 1 transferring money and banking services in Africa.
Speaker 1 Probably has boosted people's
Speaker 1 well-being in all kinds of different ways.
Speaker 1 So, to what extent can we expect just a bunch of for-profit opportunities for making people's lives better?
Speaker 2 Yeah, that's a great question. And there is really a sense in which some of the more like innovative big for-profit companies just are like doing an incredibly useful thing for the world.
Speaker 2 They're like providing a service that wouldn't otherwise exist, and people are obviously using it because they are a successful for-profit company. Yeah, so I guess the question is something like:
Speaker 2 you know, you're stepping back, you're asking, How can I have a ton of impact with what I do? The question is, are we like underrating just starting a company?
Speaker 2
So, I feel like I want to throw a bunch of kind of disconnected observations. We'll see if they like tie together.
There is a reason why you might, in general, expect a non-profit route to
Speaker 2 do well.
Speaker 2 And this is like obviously very naive and simple, but where there is a for-profit opportunity, you should just expect people to kind of take it.
Speaker 2 Like, this is why we don't see $20 bills lying on the sidewalk. But the natural incentives for,
Speaker 2 in some sense, taking opportunities to help people where there isn't a profit opportunity, they're going to be weaker.
Speaker 2 And so, if you're thinking about the like difference you make compared to whether you do something or whether you don't do it, in general, you might expect that to be bigger where you're doing something non-profit.
Speaker 2 Like in particular, this is where there isn't a market for a good thing. So it might be because the things you're helping like aren't humans.
Speaker 2 It might be because they like live in the future, so they can't
Speaker 2 pay for something. It could also be because maybe you want to or get a really impactful technology off the ground.
Speaker 2 In those cases, you get a kind of free rider dynamic, I think, where there's less reason to like, well, you can't protect the IP and patent something. There's less reason to be the first mover.
Speaker 2 And so this is like, maybe it's not for-profit, but starting a, or helping kind of get a technology off the ground, which could eventually be a space for a bunch of for-profit companies to make a lot of money, that seems really exciting.
Speaker 2 Also, creating markets where there aren't markets seems really exciting.
Speaker 2 So for instance, setting up like AMCs, advanced market commitments or prizes, or just giving, yeah, creating incentives where there aren't any, so you get the like, efficiency and competition kind of gains that you get from the for-profit space.
Speaker 2 That seems great. But that's not really answering your question, because the question is like, what about actual for-profit companies?
Speaker 2 I don't know what I have to say here, like in terms of whether they're being underrated.
Speaker 2 Yeah, actually, I'm just curious what you think.
Speaker 1 Okay, so I think I have like four different reactions to what you said. I've been remembering the number four, just in case I'm at three and I'm like, I think I got another thing to say.
Speaker 1 Okay, so yeah, so I had a draft, an essay about this that I didn't end up publishing, but that led to a lot of interesting discussions between us. So that's why we might have,
Speaker 1 I don't know, in case the audience feels like they're joining a conversation that already began before this.
Speaker 1 So one is that
Speaker 1 to what extent should we expect this market to be efficient? So one thing you can think is
Speaker 1 Listen, the amount of potential startup ideas are so vast and the amount of great founders is so small that you can have a situation where the most profitable ideas are, yeah, it's right, that like somebody like Elon Musk will come up and like pluck up like all the, maybe like the $100 billion ideas.
Speaker 1 But if you have like a company like Wave, I'm sure they're doing really well. But
Speaker 1 if it's not obvious how it becomes the next Google or something, and I guess more importantly, if it requires a lot of context, for example, you talked about like neglected groups.
Speaker 1 I guess this doesn't solve for animals and future people. But if you have somebody, something in global health where you're like a neglected group is, for example, people living in Africa, right?
Speaker 1 The people who could be building companies don't necessarily have experience with the problems that these neglected groups have.
Speaker 1 So if you have, it's very likely, or I guess it's possible, that you could come upon an idea if you were specifically looking at how to help, for example, people suffering from poverty in the poorest parts of the world.
Speaker 1 You could identify a problem that just like people who are programmers in Silicon Valley just wouldn't know about. Okay, so a bunch of other ideas regarding the other things you said.
Speaker 1 One is, okay, maybe a lot of progress depends on fundamental new technologies and companies coming at the point where the technology is already available and somebody needs to really implement and put all these ideas together.
Speaker 1 Yeah, two things on that. One is
Speaker 1 like, we don't need to go in rabbit hole on this.
Speaker 1 One is the argument that actually the invention itself, not the invention, the innovation itself is a very important aspect and potentially a bottleneck aspect of this, of getting an invention off the ground and scaled.
Speaker 1 Another is, if you can build a hundred billion dollar company or a trillion dollar company, or maybe even just a billion dollar company, you have the resources to actually invest in R&D. I mean, think of a company like Google, right? Like, how many billions of dollars have they basically poured down the drain on harebrained schemes?
Speaker 1 You can have like reactions to DeepMind with regards to AI alignment, but I mean just like
Speaker 1 other kinds of research things they've done seem to be like really interesting and really useful.
Speaker 1 And
Speaker 1 yeah, all the other FAANG companies have a program like this, like Microsoft Research, or I don't know what Amazon's thing is.
Speaker 1 And then another thing you can point out is with regards to setting up a market that would make other kinds of ideas possible
Speaker 1 and other kinds of businesses possible.
Speaker 1 In some sense, you could make the argument that
Speaker 1 maybe some of the biggest companies, that's exactly what they've done, right? If you think of like Uber,
Speaker 1 it's not a market for companies, or maybe Amazon is a much better example here where
Speaker 1 theoretically who had an incentive before, like if a pandemic happens, I'm going to manufacture a lot of masks, right?
Speaker 1 But Amazon provides, makes the market so much more liquid so that you can just start manufacturing masks and now immediately put them up on Amazon.
Speaker 1 So it seems in these ways, actually, maybe starting a company
Speaker 1 is an effective way to deal with those kinds of problems.
Speaker 2 Yeah, man, we've gone so async here. I should have just like said one thing and then.
Speaker 2 Yeah, so I'm sorry for throwing those things at you.
Speaker 2 There's a lot there.
Speaker 2 As far as I can remember, those are all great points.
Speaker 2 Yeah, I think my high-level thought is, I'm not sure how much we disagree, but I guess one thing I want to say is, again, thinking about in general, what should you expect the real biggest opportunities to typically be for just having a kind of impact?
Speaker 2 One thing you might think of is if you can optimize for two things separately, that is optimize for the first thing and then use that to optimize for the second thing, versus trying to optimize for some like combination of the two at the same time, you might expect to do better if you do the first thing.
Speaker 2 So, for instance, you can do a thing which looks a bit like trying to do good in the world and also like make a lot of money,
Speaker 2 like social enterprise, and often that goes very well. But you can also do a thing which is try to make a lot of money and just
Speaker 2 make a useful product that is not directly aimed at improving humanity's prospects or anything, but it's just kind of just great.
Speaker 2 And then use the success of that first thing to then just think squarely, like, how do I just do the most good
Speaker 2 without worrying about whether there's some kind of profit mechanism? I think often that strategy is gonna pan out well. There's this thought about the kind of the tails coming apart.
Speaker 2 If you've heard this thought that at the extremes of, like, either kind of scalability in terms of opportunity to make a lot of profit, and at the extreme of doing a huge amount of good, you might expect there to be not such a strong correlation.
Speaker 2 Again, one reason in particular that you might think that is because you might think the future really matters, like humanity's future. And
Speaker 2 sorry to be like a stuck record, but because there's not really a natural market there, because these people haven't been born yet.
Speaker 2 That is like a rambly way of saying that, okay, that's not always going to be true.
Speaker 2 But I basically just agree that, yeah, I would want to resist a framing of doing good, which just leaves out like also
Speaker 2 starting some successful for-profit company. Like, there are just a ton of really excellent examples of where that's just been a huge success and, yeah, should be celebrated.
Speaker 2 So yeah, I don't think I disagree with the spirit.
Speaker 2 Maybe we disagree somewhat on the like how much we should kind of relatively emphasize these different things, but it doesn't seem like a kind of very deep disagreement.
Speaker 1 Yeah, yeah. Maybe I've been spending too much time with Bryan Caplan or something.
Speaker 1 So, by the way, the tails coming apart, I think, is a very interesting,
Speaker 1 very interesting way to think about this. Scott Alexander has a good article on this.
Speaker 1 And one thing he points out is, yeah, generally you expect different types of strength to correlate, but the guy who has the strongest grip strength in the world is probably not the guy who has the biggest squat in the world, right?
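A quick simulation of the "tails coming apart" point, with made-up numbers (the 0.7 correlation and the population of 10,000 are assumptions for illustration): even when two traits are strongly correlated, the single best person on one trait is usually not the best on the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_trials, rho = 10_000, 1_000, 0.7   # assumed population size and correlation
cov = [[1.0, rho], [rho, 1.0]]

same_top = 0
for _ in range(n_trials):
    # Two correlated, standardized traits, e.g. grip strength and squat.
    traits = rng.multivariate_normal([0.0, 0.0], cov, size=n_people)
    same_top += traits[:, 0].argmax() == traits[:, 1].argmax()

# Despite the strong correlation, the same person rarely tops both traits.
print(f"Same person is best at both traits in {same_top / n_trials:.1%} of trials")
```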
Speaker 1
Yeah, okay. So I think that's an interesting place to leave that idea.
Oh, yeah. Another thing I wanted to talk to you about was...
Speaker 1 Backtesting EA. So if you have these basic ideas of, we want to look at problems that are important, neglected, and tractable, and apply them throughout history.
Speaker 1 So, like a thousand years back, two thousand years back, a hundred years back, is there a context in which applying these ideas would maybe lead to a perverse outcome, an unexpected outcome? And are there examples where...
Speaker 1 I mean, there's many examples in history where you could have, like, easily made things much better, maybe made them much better than even conventional morality or present-day ideas would have made them.
Speaker 2 So I'll react to the first part of the question, which as I understand it is something like: can we think about whether, if some kind of effective altruism-like movement, or these ideas, were in the water significantly earlier, they might have misfired sometimes, or maybe they might have succeeded?
Speaker 2 In fact, how do we think about that at all? I guess one thing I want to say is that very often the correct decision, ex-ante,
Speaker 2 is a decision which
Speaker 2 might do really well in like
Speaker 2 some possible outcomes, but you might still expect to fail, right? The kind of mainline outcome is this doesn't really pan out, but it's a moonshot, and if it goes well, it goes really well.
Speaker 2 This is, I guess, similar to certain kinds of investing, where if that's the case, then even if you follow the exact correct strategy, you should expect to look back on the decisions you made and
Speaker 2 see a bunch of failures. Um, where failure is, you know, you just have very little impact.
Speaker 2 And I think it's important to resist the temptation to really negatively update on whether that was the correct strategy just because it didn't pan out.
Speaker 2 And so, I don't know, if something like EA type thinking was in the water and was like thought through very well, yep, I think it would go wrong a bunch of times.
Speaker 2
And that shouldn't be kind of terrible news. When I say go wrong, I mean like not pan out rather than do harm.
If you did harm, okay, that's like a different thing.
Speaker 2 I think one thing this points to, by the way, is like
Speaker 2 you could choose to take a strategy which looks something like minimax regret, right? So you have a bunch of options. You can ask about the kind of roughly worst case outcome
Speaker 2 or just kind of like, you know, default eh outcome on each option. And one strategy is just to like choose the option with the least bad kind of meh case.
Speaker 2 And if you take that strategy, you should expect to look back on the decisions you made and
Speaker 2
not see as many failures. So that's one point in favor of it.
Another strategy is just like do the best thing in expectation.
Speaker 2 Like, if I made these decisions constantly, what in the long run just ends up like making the world best? And this looks a lot like just taking the highest EV option. Maybe you don't want to like
Speaker 2
run the risk of causing harm. So, you know, that's okay to include.
And,
Speaker 2 you know, I happen to think that that kind of second strategy is very often going to be a lot better.
Speaker 2 And it's really important not to be misguided by this feature of the minimax regret strategy, where you look back and kind of feel a bit better about yourself in many cases, if that makes sense.
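To put toy numbers on the two strategies (the payoffs and probabilities below are invented for illustration, and the "least bad meh case" rule described above is implemented as a simple maximin over worst cases rather than textbook minimax regret):

```python
# Toy decision problem: payoffs of two options across three possible states of the world.
options = {
    "safe project":     [10, 12, 11],
    "moonshot project": [-5, 0, 1000],
}
probs = [0.45, 0.45, 0.10]   # assumed probabilities of each state

# Strategy 1: pick the highest expected value.
ev = {name: sum(p * x for p, x in zip(probs, payoffs))
      for name, payoffs in options.items()}

# Strategy 2: the "least bad meh case" -- pick the option whose worst case is least bad.
worst = {name: min(payoffs) for name, payoffs in options.items()}

print("Max-EV choice:    ", max(ev, key=ev.get), ev)          # moonshot, EV ~97.75 vs 11.0
print("Worst-case choice:", max(worst, key=worst.get), worst)  # safe project
```

Looking back over many such decisions, the worst-case rule leaves fewer visible failures, while the EV rule produces a lot of duds plus the occasional huge win, which is exactly the feature Fin warns against over-updating on.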
Speaker 1 Yeah, that's very interesting.
Speaker 1 I mean, if you think about backtesting in terms of, like, models for the stock market, to analogize this, one thing that tends to happen is that a strategy of just trying to maximize returns from a given trade results very quickly in you going bankrupt, because sooner or later there will be a trade where you lose all your money. And so then there's something called the Kelly criterion, where you reserve a big portion of your money and you only bet with a certain part of it, which sounds more similar to the minimax regret thing here.
Speaker 1 Unless your expected value includes a possibility that, I mean, in this context, that like, you know, like losing all your money is like an existential risk, right? So
Speaker 1 maybe you like bake into the cake in the definition of expected value the odds of like losing all your money.
Speaker 1 Yeah, yeah, yeah, yeah.
Speaker 2 That's a great, that's a really great point. Like, I guess in some cases, you want to take something which looks a bit more like the Kelly bet.
Speaker 2 But if you act at the margins, like relatively small amounts compared to the kind of pot of resources you have, then I think it often makes sense to just take the do-the-best-thing bet and not worry too much about what the size of the Kelly bet is.
Speaker 2 But yeah, that's a great point.
Speaker 2 And I guess a naive version of doing this is just kind of losing your bankroll very quickly, because you've taken too enormous a bet and forgotten that it might not pan out. Yeah, so I appreciate that.
Speaker 1 What did you mean by acting at the margins?
Speaker 2 So if you think that there's a kind of
Speaker 2 a pool of resources from which you're drawing, which is something like maybe philanthropic funding for the kind of work that you're interested in doing,
Speaker 2 And you're only a relatively marginal actor, then that's unlike being like an individual investor where you're more sensitive to the risk of just running out of money. And
Speaker 2 when you're more like an individual investor, then you want to pay attention to what the size of the Kelly bet is. If you're acting at the margins, then maybe that is less of a big consideration.
Speaker 2 Although it is obviously still a very important point.
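For readers who haven't met it, here is a minimal sketch of the Kelly criterion mentioned above, with assumed odds: for a simple even-money bet with win probability p, the Kelly fraction is 2p − 1, and staking your whole bankroll every round eventually goes bust even though each individual bet is positive-EV.

```python
import random

random.seed(0)
p = 0.6                     # assumed win probability of an even-money, positive-EV bet
kelly_fraction = 2 * p - 1  # Kelly fraction for an even-money bet: p - (1 - p) / 1

def simulate(fraction, rounds=1000, bankroll=1.0):
    # Repeatedly stake the given fraction of the current bankroll.
    for _ in range(rounds):
        stake = bankroll * fraction
        bankroll += stake if random.random() < p else -stake
    return bankroll

print("Betting the Kelly fraction (20%):", simulate(kelly_fraction))
print("Betting everything each round:   ", simulate(1.0))  # zero after the first loss
```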
Speaker 1 Well, and then
Speaker 1 by the way, I don't know if you saw my recent blog post about why I think there will be more EA billionaires.
Speaker 2 Yes, I saw that. Okay, yeah, yeah.
Speaker 1 I don't know what your reaction to any of the ideas there is. But my claim is that we should expect the total funds dedicated to EA to grow quite a lot.
Speaker 2
Yeah, I think I really liked it, by the way. I think it was great.
One thing it made me think of
Speaker 2 is
Speaker 2 that there's quite an important difference between trying to maximize returns for yourself and then trying to get the most returns just like for the world, which is to say just doing the most good.
Speaker 2 Where one
Speaker 2 consideration we've just talked about, which is a risk of just like losing your bankroll, which is where like Kelly betting becomes relevant.
Speaker 2 Another consideration is that as an individual, just like trying to do the best for yourself, you have like pretty steeply diminishing returns.
Speaker 2
from money or just like how well your life goes with that extra money. Right.
So like if you have like 10 million in the bank and you make another 10 million, does your life get twice as good?
Speaker 2 Obviously, not, right? And as such, you should be kind of risk-averse when you're thinking about the possibility of like making a load of money.
Speaker 2 If, on the other hand, you just care about like making the world go well,
Speaker 2 then the world is an extremely big place, and so you basically don't run into these diminishing returns like at all.
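A minimal sketch of this diminishing-returns point, with invented numbers: under something like log utility of personal wealth, a positive-expected-value gamble can still be unattractive, whereas someone whose impact is roughly linear in money given away is happy to take it.

```python
import math

wealth = 10_000_000   # assumed starting wealth, purely illustrative

# A 50/50 gamble: triple your money, or end up with a tenth of it.
outcomes, probs = [3.0 * wealth, 0.1 * wealth], [0.5, 0.5]

expected_dollars = sum(p * x for p, x in zip(probs, outcomes))
expected_log = sum(p * math.log(x) for p, x in zip(probs, outcomes))

print(f"Expected dollars: {expected_dollars:,.0f} vs {wealth:,} now")
print("Take it if dollars are all that matter (roughly the giving view):",
      expected_dollars > wealth)            # True
print("Take it under log utility of personal wealth:",
      expected_log > math.log(wealth))      # False
```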
Speaker 2 And for that reason, like if you're making money, at least in part, to in some sense give it away, or otherwise just like have a positive effect in some impartial sense then
Speaker 2 you're gonna be less risk averse which means
Speaker 2 maybe
Speaker 2 you fail more often, but it also means that the people who succeed, succeed really hard. So I don't know, in some sense I'm just recycling what you said, but I think it's a really kind of neat observation.
Speaker 1 Well, and another interesting thing is that not only is that true, but then you're also
Speaker 1 in a movement where everybody else has a similar idea.
Speaker 1 And not only is that true, but also the movement is full of people who are young, techie, smart, and, as you said, risk-neutral. So basically people who are going to be way overrepresented in the ranks of future billionaires, and they're all hanging out, and they have this idea that, you know, we can become rich together and then make the world better by doing so. You would expect that this would be exactly the kind of situation that would lead to people teaming up and starting billion-dollar companies.
Speaker 1
All right. Yeah.
So a bunch of other topics in effective altruism that I wanted to ask you about.
Speaker 1 So, one is: should it impact our decisions in any ways if the many worlds interpretation of quantum mechanics is true?
Speaker 1 I know the argument that, oh, you can just think of, you can just translate amplitudes to probabilities, and if it's just probabilities, then decision theory doesn't change.
Speaker 1 My problem with this is I've gotten like very lucky in the last few months. Now, I think it like changes my perception of that if I realize actually most me's, and okay,
Speaker 1 I know there's like problems with saying me's, to what extent they're fungible. In most branches of the multiverse, I'm like significantly worse off.
Speaker 1 That makes it worse than, oh, I just got lucky,
Speaker 1 but like now I'm here. And another thing is, if you think of existential risk
Speaker 1 and you think that
Speaker 1 even if existential risk is very likely, in some branch of the multiverse humanity survives.
Speaker 1 I don't know, that seems better in the end than, oh, the probability was really low, but like it just resolved to we didn't survive.
Speaker 1 Does that make sense?
Speaker 2
Okay. All right.
There's a lot there.
Speaker 2 I guess rather than doing a terrible job at trying to explain what this many worlds thing is about, maybe it's worth just kind of pointing people towards, you know, just Googling it.
Speaker 2 I should also add this enormous caveat that I don't really know what I'm talking about.
Speaker 2 This is just kind of an outsider who's taken this kind of, I don't know, this just, this stuff seems interesting. Yeah, okay, so they just, there's this question of like, what,
Speaker 2 if the many worlds view is true, what, if anything, could that mean with respect to questions about like, what should we do or what's important?
Speaker 2 And one thing I want to say is, just like, without zooming into anything, it just seems like a huge deal.
Speaker 2 Like, every second, every day, I'm in some sense, like, just kind of dissolving into this like cloud of me's and like just kind of unimaginably large number of me's.
Speaker 2 And that each of those me's is kind of in some sense dissolving into more clouds.
Speaker 2
This is just like wild. Also seems somewhat likely to be true, as far as I can tell.
Okay, so like, what does this mean?
Speaker 2 You, yeah, you point out that you can
Speaker 2 talk about having a measure over worlds. In some sense, you can, there's actually a problem of how you get like probabilities, or how you make sense of probabilities on the many worlds view.
Speaker 2 And there's a kind of neat way of doing that, which like makes use of questions about how you should make decisions.
Speaker 2 That is, you should just kind of weigh future yous according to, in some sense, how likely they are. But it's really the reverse.
Speaker 2 You're like explaining what it means for them to be more likely in terms of how it's rational to weigh them.
Speaker 2 And then I think it's like a ton of very vague things I can try saying. So maybe I'll just try doing a brain dump of things.
Speaker 2 You might think that many worlds being true could push you towards being more risk neutral in certain cases if you weren't before.
Speaker 2 Because in certain cases, you're translating from some chance of this thing happening or not into some fraction of worlds where this thing does happen and another fraction where it doesn't.
Speaker 2 That's kind of like, I don't think it's worth reading too much into that because I think a lot of the like important uncertainties about the world are still like subjective uncertainties about how most worlds will in fact turn out.
Speaker 2 But it's kind of interesting and notable that you kind of like convert between overall uncertainty about how things turn out to like more certainty about the fraction of the ways things turn out.
Speaker 2 I think another like interesting feature of this is that so the question of like how you should act
Speaker 2 is no longer the question of like
Speaker 2 how should you kind of benefit this person who is you in the future, who's one person. It's more like, how do you benefit this cloud of people who are all successors of you, that's just kind of diffusing into the future? And I think you point out that you could basically salvage all of decision theory even if that's true, but the picture of what's going on changes. And in particular, I think just intuitively, it feels to me like the gap between acting in a self-interested way and acting in an impartial way, where you're helping other people, kind of closes a little, in a way.
Speaker 2 Like
Speaker 2 you're already benefiting many people by doing the thing that's kind of rational to benefit you, which isn't so far from benefiting people who aren't like continuous with you in this special way.
Speaker 2 So I kind of like that as a thing.
Speaker 2
It's so interesting. Yeah.
And then, okay, there is also this like slightly more out there
Speaker 2 thought
Speaker 2 which is here's the thing you could say if many worlds is true then there is at least a sense in which there are very very many more people in the future compared to the past.
Speaker 2 Like just unimaginably many more and even like the next second from now there are many more people.
Speaker 2 So you might think that should like make us have a really steep negative discount rate on the future, which is to say we should like value future times much more than present times.
Speaker 2
And like in a way which would just kind of, it wouldn't like modify how we should act. It just like explodes how we should think about this.
This definitely doesn't seem right.
Speaker 2 Maybe one way to think about this is that if this thought was true or like was kind of directionally true, then that might also be a reason for being extremely surprised that we're both speaking at like an earlier time rather than a later time.
Speaker 2 Because if you think you're just like randomly drawn from all the people who ever lived, it's like absolutely mind-blowing that we get drawn from like today rather than tomorrow. Yeah, yeah.
Speaker 2 Given that there's, like, 10 to the something times as many people tomorrow.
Speaker 2 So it's probably wrong and wrong for reasons I don't have a very good handle on because I just like don't know what I'm talking about.
Speaker 2 I mean, I can kind of try parroting the reasons, but it's something I'm, you know, I'm interested in trying to really grok those reasons a bit more.
Speaker 1 That's really interesting.
Speaker 1 I hadn't thought about that selection argument.
Speaker 1 I think one resolution I've heard about this is that you can think of the proportion of Hilbert space, or like the proportion of all the
Speaker 1 universe's wave function, that could be the
Speaker 1 probability rather than each different branch. You know what? I just realized the selection argument you made, maybe that's an argument against
Speaker 1 Bostrom's idea of we're living in a simulation. Because basically, his argument is that there'll be many more simulations than there are real copies of you, therefore you're probably in a simulation.
Speaker 1 The thing about saying that, across all the simulations plus you,
Speaker 1 your prior should be equally distributed among them, seems similar to saying your prior of being distributed along each possible branch of the wave function should be the same across them.
Speaker 1 Whereas I think in the context of the wave function, you were arguing that maybe it should be like, you shouldn't think about it that way.
Speaker 1 You should think about it as maybe a proportion of the total Hilbert space.
Speaker 1 Yeah.
Speaker 1 Does that make sense?
Speaker 2 I don't know if I got it. Wait, say it again, how it links into simulation type stuff.
Speaker 1 Instead of thinking about each possible simulation as an individual thing
Speaker 1 that is equally as likely, where each individual instance of a simulation is equally as likely as you living in the real world, maybe the simulation as a whole is equally likely as you living in the real world.
Speaker 1 Just as you being alive today rather than tomorrow is equally likely, despite the fact that there will be many more branches,
Speaker 1 new branches of the wave function tomorrow.
Speaker 2
Yeah, okay, there's a lot going on here. I feel like there are people who actually know what they're talking about here, just tearing their hair out.
Like, it was this obvious thing.
Speaker 1 So, you mentioned that's the nature of having a podcast.
Speaker 1 But by the way, if you are one such person, please do email me or DM me or something. I'm very interested.
Speaker 2 So, yeah, you mentioned that, obviously, there is a measure over worlds, and this
Speaker 2 lets you talk about things being sensible again.
Speaker 2 Also, maybe one minor thing to comment on is that talking about probabilities is kind of hard, because on many worlds, everything that can happen happens.
Speaker 2
And so it's like difficult to get the language exactly right. But anyway, so totally get the point.
And then it's the question of how it maps onto simulation type thoughts.
Speaker 2 Here's a I don't know, like maybe a thought which kind of connects to this.
Speaker 2 Do you know like sleeping beauty type problems?
Speaker 1 No, no.
Speaker 2
Okay, this is only a vaguely remembered example. But let's try it.
So in the original sleeping beauty problem, you go to sleep,
Speaker 2 okay, and then I flip a coin,
Speaker 2 or you know, whoever, someone flips a coin.
Speaker 2 If it comes up heads, they wake you up once. If it comes up tails, they wake you up twice.
Speaker 2 And then
Speaker 2
you go back to sleep and your memory is wiped. And then you're woken up again, as if you're being woken up in the other world.
Speaker 2 And
Speaker 2 okay, so you go to sleep, you wake up, and you ask, what is the chance that the coin came up heads or tails?
Speaker 2
And it feels like there's kind of really intuitive reasons for both 50% and one-third. Here's a related question, which is maybe a bit simpler, at least in my head.
I flip a coin.
Speaker 2
If it comes up heads, I like just make a world with one observer in it. And if it comes up tails, I make a world with 100 observers in it.
Maybe it could be like running simulations with 100 people.
Speaker 2 You wake up in one of these worlds. You don't know how many other people are there in the world.
Speaker 2 You just know that someone has flipped a coin and decided to make a world with either one or 100 people in it. What is the chance that you're in the world with 100 people?
Speaker 2 And
Speaker 2 there's a reason for thinking it's half, and there's a reason for thinking that it's like, I don't know, 100 over 101.
Speaker 1 Does that make sense? So I understand the logic behind the half.
Speaker 1 What is the reason for thinking... I mean, regardless of where you ended up as the observer, it seems like the odds of the coin coming up...
Speaker 1 oh, I guess is it because you'd expect there to be more observers in the other universe? Like, wait, yeah, so what is the logic for thinking it might be 100 over 101?
Speaker 2
Well, you might think of it like this. How shall I reason about where I am? Well, maybe it's something like this.
I am just a random observer, right?
Speaker 2
Of all the possible observers that could have come out of this. And there are 101 possible observers.
And you can just imagine that I've been randomly drawn. Okay,
Speaker 2 and if I'm randomly drawn from all the possible observers, then it's overwhelmingly likely that I'm in the big world.
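A minimal simulation of the setup just described, assuming a fair coin and that you reason as if you were randomly sampled from all the observers who actually get created, shows why the answer lands near 100/101:

```python
import random

def fraction_in_big_world(trials=100_000):
    """Heads creates a world with 1 observer, tails a world with 100.
    Return the fraction of all created observers who are in the big world."""
    big, total = 0, 0
    for _ in range(trials):
        if random.random() < 0.5:   # heads: small world, 1 observer
            total += 1
        else:                       # tails: big world, 100 observers
            big += 100
            total += 100
    return big / total

print(fraction_in_big_world())  # roughly 0.99, i.e. close to 100/101
```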
Speaker 1 Huh. That's super interesting.
Speaker 2 I should say, actually, I should plug someone who does know what they're talking about on this, which is Joe Carlsmith, who has a series of really excellent blog posts.
Speaker 2 He's coming on the podcast next week. Yes, amazing.
Speaker 2 I'm going to ask you about this because he's really good at talking about it. I don't want to like, okay, I don't want to scoop him, but one thought that...
Speaker 2 comes from him, which is just like really cool, maybe just to kind of round this off, is
Speaker 2 if you're a 100-over-101er on examples like this, and you think there's any chance that the universe is infinite in size, then you should think that the chance you're in a universe that is infinite in extent is just one, or close to one, if that makes sense.
Speaker 1 I see, yeah, yeah. Okay, so in the end, then,
Speaker 1 has your view that many worlds is a good explanation
Speaker 1 impacted your view of what should be done in any way?
Speaker 2 Yeah, so I don't really know if I have a good answer.
Speaker 2 My best guess is that things just shake out to kind of where they started, as I obviously started off in this relatively risk-neutral place.
Speaker 2 I suspect that if many worlds is true, it might be much harder to hold on to intuitive views about personal identity, for the reason that there isn't this one person who you're continuous with throughout time and no other people, which is how people tend to think about what it is to be a person.
Speaker 2 And then there's this kind of like vague thing, which is just occasionally I, you know, just like remember like every other month or so that maybe many worlds is true.
Speaker 2
And it just kind of blows my mind that I don't know what to do about it. And I just go on with my day.
That's about where I am.
Speaker 1 Okay. All right.
Speaker 1
Other interesting topics to talk about. Talent search.
What is the what is EA doing about identifying, let's say, more people like you, basically, right?
Speaker 1 But maybe even like people like you who are not in like places where they're not next to Oxford, for example. I don't know where you actually are from originally, but
Speaker 1 like if they're from like some, like, I don't know, like China or India or something.
Speaker 1 What is EA doing to recruit more fins from
Speaker 1 places where they might not otherwise work on EA?
Speaker 2 Yeah, it's a great question. And yeah, to be clear, I just won the lottery on things going right to kind of
Speaker 2 be lucky enough to do what I'm doing now. So, yeah, in some sense, the question is, how do you like print more winning lottery tickets and indeed find those people who really deserve them?
Speaker 2 But there are people who are just currently not being identified.
Speaker 2 A lot of this comes from having just read the book Talent by Tyler Cowen and Daniel Gross recently. There's something really powerful about the fact that this business of, you know, finding really smart, driven people and connecting them with opportunities to do the things they really want to do is still really inefficient.
Speaker 2 And there are just still so many people out there who aren't getting those opportunities.
Speaker 2 I actually don't know if I have much more like kind of insight to add there other than this is just a big deal.
Speaker 2 And it's like there's a sense of which it is an important consideration for this like project of trying to do the most good.
Speaker 2 Like you really want to find people who can like put these ideas in practice.
Speaker 2 And I think there's a special premium on that kind of person now, given that there's like a lot of philanthropic kind of funding ready to like be deployed.
Speaker 2 There's also a sense in which this is just like, in some sense, like a cause in its own right. It's kind of analogous to open borders in that sense, at least in my mind.
Speaker 2 I hadn't really appreciated it on some kind of visceral level before I read that book.
Speaker 1 And then another thing he talks about in the book is you want to get them when they're young. You can really shape somebody's ideas about what's worth doing if you,
Speaker 1 and then also their ambition about what they can do if you catch them early.
Speaker 1 And, you know, Tyler Cowen also had an interesting blog post a while back where he pointed out that a lot of the young people applying to his Emergent Ventures program are heavily influenced by effective altruism, which seems like it's going to be a very important factor in
Speaker 1 the long term. I mean, eventually these people will be in positions of power.
Speaker 1 Yeah, so maybe effective altruism is already succeeding to the extent that a lot of the most ambitious people in the world are identified that way, or at least, I mean, given the selection effect that the Emergent Ventures program has.
Speaker 1 But yeah, so what is it that can be done to get people when they're young?
Speaker 2 Yeah, I mean, it's a very good question. And I think what you point out there is
Speaker 2 right. Nick Whittaker has this blog post, which is called something like the Lamplight Model of Talent Curation.
Speaker 2 And he draws this distinction between casting
Speaker 2 like a very wide net that's just kind of very legibly prestigious. and then you know filtering through thousands of of applications or in some sense, like putting out the bat signal that
Speaker 2 in the first instance just attracts the really promising people and maybe actually drives away people who would be a better fit for something else.
Speaker 2 So an example is, if you were to hypothetically write quite a wonky economics blog every day for however many years and then run some fellowship program, you're just automatically selecting for people who
Speaker 2 read that blog. And that's like a pretty good kind of starting population to begin with.
Speaker 2 So, I really like that kind of thought of just not needing to be incredibly loud and like prestigious sounding, but
Speaker 2 rather just being quite honest about what this thing is about, so you just attract the people who really seek it out, because that's just quite a good feature.
Speaker 2 I think another thing that, again, this is like not a very interesting point to make, but something I've really realized the value of is like having physical hubs. And so there's this model of
Speaker 2 running like fellowships, for instance, where you just like find really promising people.
Speaker 2 And then there's just so much to be said for like putting those people in the same place and surrounding them with maybe people who are a bit more like senior and just kind of like letting this natural process happen where people just get really excited that there is this like community of people working on stuff that previously you'd just been kind of reading about in your bedroom on like some blogs.
Speaker 2 That, as a source of motivation, I know it's less tangible than other things, but it's just so, so powerful. And probably one of the reasons I'm
Speaker 2 working here, maybe.
Speaker 1 Yeah,
Speaker 1 that is one downside of working from home, that you don't get that.
Speaker 1 Regarding the first point, so I think
Speaker 1 maybe that should update in favor of not doing community outreach and community building. Like maybe that's negative marginal utility because like if I think about, for example,
Speaker 1 my local, so there was an effective altruism group at my college that I didn't attend.
Speaker 1 And there's also like an effective altruism group for the city as a whole in Austin that I don't attend.
Speaker 1 And the reason is just because, I don't know, there is some sort of
Speaker 1 adverse selection here, where the people who are leading organizations like this aren't directly doing the things that effective altruism says they might consider doing, and are more interested in the social aspects of altruism.
Speaker 1 So I don't know, I'd be much less impressed with the movement if my first introduction to it was the specific groups that I've personally interacted with, rather than, I don't know, just hearing Will MacAskill on a podcast.
Speaker 2 Which, by the way, the latter, was also my first introduction to effective altruism. Yeah, interesting. I feel like I really don't want to underrate the job that community builders are doing.
Speaker 2 I think, in fact, it's turned out to have been like, and still is, just like incredibly valuable, especially just looking at the numbers of like what you can achieve as like a group organizer at your university.
Speaker 2 Like maybe you could just change the course of like more than one person's career over the course of like a year of your time. That's like pretty incredible.
Speaker 2 But yeah, I guess part of what's going on is that the difference between like going to your like local group
Speaker 2 or like engaging with stuff online is that you get to kind of choose the stuff you engage with. And like maybe one upshot here is that the like
Speaker 2 kind of set of ideas that might get associated with EA is like very big and you don't need to buy into all of it or just like be passionate about all of it. Like
Speaker 2 if this kind of AI stuff just really seems interesting but maybe other stuff is more peripheral, then, yeah, this could push towards wanting to have a specific group for people who are just like, you know, this AI stuff seems cool.
Speaker 2 Other stuff, not my cup of tea.
Speaker 2 So yeah, I mean, in the future, as like things get scaled up, as well as kind of scaling out, I think also maybe having this like differentiation and kind of diversification of like different groups, I mean, seems pretty good.
Speaker 2 But just like more of everything also seems good.
Speaker 1 Yeah, yeah.
Speaker 1 I'm probably overfitting on my own experience. And given the fact that I
Speaker 1 didn't actively interact with any of those communities, I'm probably not even informed about what those experiences are like.
Speaker 1 But there was an interesting post on the Effective Altruism Forum that somebody sent me, where they were making the case that
Speaker 1 at their college as well, they got the sense that the EA community building stuff had a negative impact, because people were kind of turned off by their peers.
Speaker 1 And also there's a difference between, like, I don't know, somebody like Sam Bankman-Fried or Will MacAskill advising you,
Speaker 1 obviously virtually,
Speaker 1 to
Speaker 1 do these kinds of things versus like, I don't know, some sophomore at your university studying philosophy, right?
Speaker 1 No offense.
Speaker 2 Yeah,
Speaker 2 I think my guess is that like on net, these efforts are still just like overwhelmingly positive. But yeah, I think it's like pretty interesting that people have the experience you describe as well.
Speaker 2 Yeah, and interesting to think about ways to kind of like get around that.
Speaker 1 So, the long reflection. It seems like a bad idea, no?
Speaker 2 I'm so glad you asked.
Speaker 2
Yeah, I want to say, I want to say no. I think in some sense, I've like come around to it as an idea.
But yeah, okay, maybe it's worth like.
Speaker 1 Oh, really? Interesting.
Speaker 2 Maybe it's worth, I guess, like trying to explain what's going on with this idea.
Speaker 2 So if you were to zoom out really far over time
Speaker 2 and consider our place now, like in history, and you could ask this question about, suppose in some sense, humanity just became perfectly coordinated. What's the plan?
Speaker 2 What kind of in general should we be prioritizing? And like in what stages?
Speaker 2 And
Speaker 2 you might say something like this.
Speaker 2 It looks like this moment in history, which is to say maybe this century or so, just looks kind of wildly and like unsustainably dangerous, like
Speaker 2 or kind of
Speaker 2 so many things are happening at once.
Speaker 2 It's really hard to know how things are going to pan out, but it's like possible to imagine things panning out really badly and badly enough to just like more or less end history.
Speaker 2 Okay, so before we can like worry about some kind of longer term considerations, let's just get our act together and make sure we don't mess things up.
Speaker 2 So, okay, like that seems like a pretty good first priority. But then, okay, suppose that you succeed in that and like we're in a significantly safer kind of time.
Speaker 2 What then?
Speaker 2 We might notice that
Speaker 2 the scope for what we could achieve is like really extraordinarily large, like maybe kind of larger than most people kind of like typically entertain.
Speaker 2 Like, we could just do a ton of really exceptional things. But also, this is kind of a feature that maybe in the future,
Speaker 2 especially long-term future, we might more or less for the first time be able to embark on these like really kind of ambitious projects that are in some important sense
Speaker 2 like really hard to reverse. And that might make you think, okay, at some point, it'd be great to like
Speaker 2 in some sense, you know, achieve that potential that we have.
Speaker 2 And, for instance, a kind of lower bound on this is lifting everyone who remains in poverty out of poverty, and then going even further, just making everyone even wealthier, able to do more things that they want to do, making more scientific discoveries, whatever.
Speaker 2 So we want to do that, but maybe something should come in between these two things, which is like figuring out what is actually good.
Speaker 2 And okay, why
Speaker 2 should
Speaker 2 we think this? I think one thought here is
Speaker 2 it's very plausible. I guess this kind of links to what we were talking about earlier, that the way we think about, you know, like
Speaker 2 really positive futures, like what are the best futures, it's just like really kind of incomplete.
Speaker 2 Almost certainly, we're just getting a bunch of things wrong by this kind of pessimistic induction on the past. Like a bunch of smart people thought really reprehensible things like 100 years ago.
Speaker 2 So we're getting things wrong. And then this second thought is,
Speaker 2 I don't know, it seems possible to actually make progress here in thinking about what's good. There's this kind of interesting point that most like work in,
Speaker 2 I guess you might call it like moral philosophy, has focused on the negatives. So, you know, avoiding doing things wrong, fixing harms, avoiding bad outcomes.
Speaker 2 But this idea of like studying the positive, studying like what we should do if we can kind of do like many different things. This is just like super, super early.
Speaker 2 And so we should expect to be able to make a ton of progress. And so, hey, again, imagining that the world is like perfectly coordinated.
Speaker 2 Would it be a good idea to like spend some time, maybe a long period of time, kind of deliberately holding back from embarking on these like huge irreversible projects, which maybe involve like leaving Earth in kind of certain scenarios, or otherwise just like doing things which are hard to undo?
Speaker 2 Should we spend some time thinking before then? Yeah, sounds good. And then I guess the very obvious response is, okay, that's a pretty huge assumption that we can just like coordinate around that.
Speaker 2 And I think the answer is, yep, it is.
Speaker 2 But as a kind of directional ideal, should we push towards or away from the idea of taking our time, holding our horses, kind of getting people together who haven't really been part of this conversation and hearing them?
Speaker 2 Yeah, definitely seems worthwhile.
Speaker 1 All right, so I have another good abstract idea that I want to entertain by you.
Speaker 1 So, you know, it seems like kind of wasteful that we have these different companies that are building the same exact product.
Speaker 1 But, you know, because they're building the same exact product, they don't have economies of scale and they don't have coordination. There's just a whole bunch of loss that comes from that, right?
Speaker 1 Wouldn't it be better if we could just coordinate and just like figure out the best person to produce something together and then just have them produce it?
Speaker 1 And then we could also coordinate to figure out what is the right quantity and quality for them to produce. I'm not trying to say this is literally communism or something.
Speaker 1 I'm just saying that it ignores what would be required. Like in the communism analogy, you're ignoring what kinds of information get lost and
Speaker 1 what it requires to do that so-called coordination.
Speaker 1 In this example, it seems like you're not considering
Speaker 1 whatever would be required to prevent somebody from realizing their vision.
Speaker 1 Like, let's say somebody has a vision: we want to colonize a star system, we want to, I don't know, make some new technology, right?
Speaker 1 That's part of something that the long reflection would curtail. Maybe I'm getting this wrong, but it seems like it would require almost a global panopticon totalitarian
Speaker 1 state to be able to prevent people from escaping the reflection.
Speaker 2 Okay, so there's a continuum here, and I basically agree that some kind of panopticon-like thing,
Speaker 2 not only is impossible, but actually sounds pretty bad.
Speaker 2 But something where you're just like pushing in the direction of being more coordinated on the international level about things that matter seems like desirable and possible.
Speaker 2 And in particular, preventing really bad things rather than trying to get people to all do the same thing.
Speaker 2 So the Biological Weapons Convention strikes me as an example, which is like imperfect and underfunded, but
Speaker 2 nonetheless kind of directionally good. And maybe an extra point here is that there's like a sense in which the long reflection option, or I guess the better framing is like
Speaker 2
aiming for a bit more reflection rather than less. That's like the conservative option.
That's like doing what we've already been doing just a bit longer rather than some like radical option. So
Speaker 2 I agree. It's like pretty hard to imagine like
Speaker 2 you know some kind of super long period where everyone's like perfectly agreed on doing this.
Speaker 2 But yeah, I think framing it as like a directional ideal seems pretty worthwhile.
Speaker 2 And I guess I know maybe I'm kind of naively hopeful about the possibility of coordinating better around things like that.
Speaker 1 There's two reasons why this seems like a bad idea to me. One is, oh, yeah, first of all, who is going to be deciding when we've come to a good consensus about,
Speaker 1 okay, so we've decided like this is the way things should go.
Speaker 1 Now we're like ready to escape the long reflection and realize our vision for the rest of the lifespan of the universe. Who's going to be doing that?
Speaker 1 It's the people who are presumably in charge of the long reflection.
Speaker 1 Almost by definition, it'll be the people who have an incentive to preserve whatever
Speaker 1 power balances exist at the end of the long reflection. And then the second thing you'd ask is,
Speaker 1 there's a difference between, I think, having a consensus on not using biological weapons or something like that, where you're limiting a negative, versus requiring society-wide consensus on what we should aim towards achieving, where
Speaker 1 historically the outcome has not been good.
Speaker 1 It seems better, on the positive end, to just leave it open-ended, and then maybe, when necessary, agree that the very bad things
Speaker 1 are what we might want to restrict together.
Speaker 2 Yeah, yeah, okay. I think I kind of just agree with a lot of what you said. So I think the best framing of this is the version where you're preventing something which most people can agree is negative, which is to say, some actor unilaterally deciding to set out on some huge irreversible project.
Speaker 2 Like something you said was
Speaker 2 that the outcome is going to reflect the
Speaker 2 like values of whoever is like in charge.
Speaker 1 And then not just the values. I mean, also, just think about how guilds work, right?
Speaker 1 Whenever we have, for example in industry, let decisions about how the industry should progress be made collectively by the people who are currently dominant in that industry,
Speaker 1 you know, guilds or something like that,
Speaker 1 or
Speaker 1 industrial conspiracies as well, it seems like the
Speaker 1 outcome is just bad.
Speaker 1 And so my prior would be that
Speaker 1 at the end of such a situation, our ideas about what we should do would actually be worse than going into the long reflection.
Speaker 1 I mean, obviously,
Speaker 1 it really depends on how it's implemented, right?
Speaker 1 So I'm not saying that, but just like broadly, given all possible implementations, and maybe the most likely implementation, given how governments run now.
Speaker 2 Yeah, yeah, yeah.
Speaker 2 I should say that, like, I am, in fact, like pretty uncertain. I just kind of, I don't know, it's more enjoyable to like give this thing its hearing.
Speaker 1 No, no, I enjoy the
Speaker 1 parts where we have disagreements. Yeah.
Speaker 2 So one
Speaker 2 thought here is: if you're worried about the future, like the course of the future being determined by some single actor, I mean, that worry is just symmetrical with the worry of letting whoever wins some race first go and do,
Speaker 2 you know, go and do the thing, the like project where they
Speaker 2 more or less kind of determine what happens to the rest of humanity. So, the option where you like kind of deliberately wait and let people like have some
Speaker 2 like global conversation. I don't know, it seems like that
Speaker 2
is less worrying, even if the worry is still there. I should also say, I kind of imagine the outcome is not unanimity.
In fact, it'd be like pretty wild if it was, right?
Speaker 2 But you want the outcome to be some kind of like
Speaker 2 stable, friendly disagreement where now we can kind of like
Speaker 2 maybe reach some kind of Coasean solution and we just go and do our own things. There are a bunch of projects which kind of go off at once.
Speaker 2 I don't know, that feels like really great to me compared to whoever gets there first determining how things turn out. But yeah, I mean, it's hard to talk about stuff, right?
Speaker 2 Because it's like somewhat speculative. But I think it's just like a useful
Speaker 2 North Star or something to try pointing towards.
Speaker 1 Okay, so maybe to make it more concrete, I wonder if
Speaker 1 your expectation is that the
Speaker 1 consensus view would be better than the first-mover view.
Speaker 1 In like today's world, maybe, okay,
Speaker 1 either we have the form of government, and not just government, but also the
Speaker 1 industrial and logistical organization that, I don't know, Elon Musk has designed for Mars. So if he's the first mover for Mars, would you prefer that?
Speaker 1 Or we have the UN come to a consensus between all the different countries about how the first Mars colony should be organized? Or
Speaker 1 would the Mars colony run better if, after 10 or 20 years of that,
Speaker 1 they're the ones who decide how the first Mars colony goes? Do you expect global consensus views to be better than first-mover views?
Speaker 2 Yeah, that's a good question. And I mean, one obvious point is not always, right? Like, there are certainly cases where the consensus view is just like somewhat worse.
Speaker 2 I think you limit the downside with a consensus view, right? Because you give people space to express why they just don't think this like one idea is bad.
Speaker 2 I don't know if there's an answer to your question, but like it's a really good one.
Speaker 2
You can imagine the kind of the UN-led thing is going to be like way slower. It's going to probably be way more expensive.
The International Space Station is a good example where, um,
Speaker 2 I don't know, I think that turned out pretty, pretty well, but a like private version of that would have happened like
Speaker 2 in some sense a lot more effectively. I guess the Elon example is kind of a good one, because it's not obvious why that's super worrying.
Speaker 2 The thing I have in mind in the like long reflection example is maybe like a bit more kind of wild, but it's really hard to make it concrete. So, I'm yeah, somewhat floundering.
Speaker 1 There's also another reason.
Speaker 1 To the extent that somebody has the resources,
Speaker 1 I don't know, maybe this just gets to an irreconcilable question about your priors about
Speaker 1 other kinds of political things.
Speaker 1 But to the extent that somebody has been able to build up resources privately to be able to be a first mover in a way that is going to matter for the long term, what do you think about what kind of views they're likely to have and what kind of competencies they're likely to have?
Speaker 1 Versus assuming that the way governments work and function and the quality of their governance doesn't change that much for the next 100 years, what kind of outcomes you will have from,
Speaker 1 basically, if you think like the likelihood of leaders like Donald Trump or Joe Biden is like going to be similar for the next 100 years, and if you think like the richest people in the world or the first movers are going to be people that are similar to Elon Musk, I can see people having genuinely different reasonable views about who should like the...
Speaker 1 Should the Elon Musk of 100 years from now or the Joe Biden of 100 years from now have the power to decide the long-run course of humanity?
Speaker 1 Is that a fulcrum in this debate that you think is important, or is that maybe not as relevant as I might think?
Speaker 2
Yeah, I guess I'll try saying some things, and maybe it'll like respond to that. Kind of two things are going through my head.
So, one is something like
Speaker 2 you should expect these questions about like what should we do when we have the capacity to do like a far larger range of things than we currently have the capacity to do.
Speaker 2 That question is going to hinge like much more importantly on like theories people have and like worldviews and very kind of particular details much more than it does
Speaker 2 now.
Speaker 2 And I'm going to do a bad job at trying to articulate this, but there's some kind of analogy here where if you're like fitting a curve to some points, you can like overfit it.
Speaker 2 And in fact, you can overfit it in various ways. And they all look pretty similar.
Speaker 2 But then if you like extend the axes, so you like see what happens to the curves like beyond the points, those different ways of fitting it can go all over the place.
Speaker 2 And so like, there's some analogy here where when you kind of expand the possibility, like the space of what we could possibly do,
Speaker 2 different views which look kind of similar right now, or at least come to similar conclusions, they just like go all over the shop.
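A rough sketch of that curve-fitting analogy, with made-up data and polynomial degrees chosen purely for illustration: two fits that agree closely on the observed points can disagree wildly once you extrapolate past them.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)
y = x + rng.normal(0, 0.05, size=x.shape)   # roughly linear data with a little noise

simple = np.polyfit(x, y, deg=1)   # a low-degree fit
wiggly = np.polyfit(x, y, deg=6)   # a high-degree fit that hugs the points

# Inside the observed range the two fits look nearly identical...
print(np.polyval(simple, 0.5), np.polyval(wiggly, 0.5))
# ...but extrapolated well beyond it, they can diverge enormously.
print(np.polyval(simple, 3.0), np.polyval(wiggly, 3.0))
```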
Speaker 2 And so that is not responding to your point, but I think it's maybe worth saying: this is a reason for expecting that reflecting on what the right view is will be quite important. And
Speaker 2 then I guess that leads into a second thought,
Speaker 2 which is something like, I guess there's two things going on.
Speaker 2 One is the thing you mentioned, which is there are basically just a bunch of political dynamics where you can just like reason about where you should expect values to head for like political reasons.
Speaker 2 In some sense, it's like now better than the default. And what is that default?
Speaker 2 And then there's like a kind of different way of thinking about things, which is like separately from political dynamics.
Speaker 2 Can we actually make progress and like thinking better about what's best to do? In the same way that we can like make progress in
Speaker 2 science, like kind of separately from the fact that like
Speaker 2 people's views about science are influenced by like political dynamics.
Speaker 2 And maybe like a disagreement here is a disagreement about like how much scope there is to just get better at thinking about these things. I mean one
Speaker 2 like reason I can give, I guess I kind of mentioned this earlier, is this project here of like thinking about what's best to do, maybe kind of thinking better about ethics, it's not the thing,
Speaker 2 it's like maybe more relevant to think that this is like
Speaker 2
on the order of kind of 30 years old rather than on the order of 2,000 years old. You might call it like secular ethics.
Parfit writes about this, right?
Speaker 2 He talks about how there are at least reasons for hope.
Speaker 2 We haven't ruled out that we can make a lot of progress, because the thing we were doing before, when we were trying to think systematically about what's best to do, was just very unlike the thing that we should be interested in.
Speaker 2 I'm sorry that was like a huge ramble, but hopefully there's something there.
Speaker 1 Yeah, I want to go back to what you were saying earlier about how
Speaker 1 you can think of,
Speaker 1 I don't know, global consensus as the reduced variance version of future views. And, you know, so I think that, like, to the extent that you think a downside is really bad, I think
Speaker 1 that's a good argument.
Speaker 1 And then, yeah, I mean, it's similar to my argument against monarchists, which is that, actually, I think it is reasonable to expect that if you could
Speaker 1 reliably have people like Lee Kuan Yew in charge of your country and you have a monarchy, things might be better than a democracy.
Speaker 1 It's just that the bad outcome is just so bad that it's like better just having a low variance
Speaker 1 thing like democracy.
Speaker 2 It's a fun one to talk about. Maybe one last kind of trailing thought on what you said is
Speaker 2 I think I guess Popper has this thought and also David Deutsch like did a really good job at kind of explaining it about
Speaker 2 one like underrated
Speaker 2 value of democracy is not just in some sense having this function to like
Speaker 2 combine
Speaker 2 people's views into like some kind of you know optimal path which is like some mishmash of what everyone thinks.
Speaker 2 It's also having the ability for the people who are being governed to just cancel the current experiment in governance and try again. So it's something like: we'll give you the freedom to implement this governance plan that seems really exciting, and then we're just going to pull the brakes when it goes wrong. And that option to start again in general just feels really important as a tool you want in your toolkit when you're thinking about these pretty big futures.
Speaker 1 I guess my hesitation about this is I can't imagine
Speaker 1 a form of government where, at the end of it, I would expect that a consensus view from, I mean, not just like nerdy communities like EA, but like an actual global consensus would be something that I think is a good path.
Speaker 1 Maybe it's something like I don't think it's like the worst possible path.
Speaker 1 But I mean, one thing about reducing variance is like, if you think the far future can be really, really good, then by reducing variance, you're like cutting off a lot of expected value, right?
Speaker 1 And then you can think like democracy works much better in cases where the problem is like closer to something that the people can experience.
Speaker 1 It's like, I don't know, democracies don't have famines, because if there's a famine, you get voted out, right?
Speaker 1 Or like you have major wars as well, right? But if you're talking about like
Speaker 1 some form of consensus way of deciding what the far, far future should look like, it's not clear to me why the consensus view on that would be likely to be correct.
Speaker 2 Yeah, yeah, yeah. I think maybe some of what's going on here is I'd want to resist,
Speaker 2 and it's my fault for, I think, suggesting this framing, the idea that you just spend a bunch of time thinking and having this conversation and then you just have this international vote on what we should do.
Speaker 2 I think maybe
Speaker 2 another framing is something like: let's just give the time for the people who want to be involved in this to make the progress that could be possible on thinking about these things, and then just see where we end up. Where,
Speaker 2 I don't know, there's a very weak analogy to progress in other fields, where we don't make progress in mathematics or science by taking enormous votes on what's true,
Speaker 2 but we can by just giving people who are interested in making progress the space and time to do that, and then at the end it's often pretty obvious what turns out to be right. That's very much begging the question, because it's way more obvious what's right and wrong if you're doing maths compared to doing this kind of thing.
Speaker 1 No, but also, what happens if you... this seems similar to the question about monarchy, where it's like, what happens if you pick the wrong person, or the wrong politburo, to pick the charter you take to the rest of the universe? It seems like a hard problem to ensure that the group of people who will be deciding this,
Speaker 1 either if it's a consensus or if it's a single person or anything in between, like it has to be some decision maker, right?
Speaker 2 I think you can just imagine there being no decision maker, right? So like
Speaker 2 the thing could be, let's agree to have some time to reflect on what is best. And we might come to some decision.
Speaker 2 And then at the end, you know, one version of this is just to let things happen. There's no final decision moment where somebody decides.
Speaker 2 It's just that the time between thinking about the thing and doing it, extending that time for a bit, seems good.
Speaker 1
I see. Okay. Yeah, sorry, I missed that.
Speaker 1 So actually, one of the major things we were going to discuss... all the things we've discussed so far were like one quadrant
Speaker 1
of the conversation. Actually, you know what, before we talk about space governance, let's talk about podcasting.
So you have your own podcast.
Speaker 1 I have my own.
Speaker 1 Why did you start it, and what have your experiences been so far? What have you learned about the joy and impact of podcasting?
Speaker 2 So story is, Luca, who's a close friend of mine, who I do this podcast with, we're both at university together and we were like both podcast nerds.
Speaker 2 And I think I remember we were in our last year and we had this conversation like
Speaker 2 we're like surrounded by all these people who just seem like incredibly interesting. Like all these, you know, like academics we really love to talk about
Speaker 2 or talk to.
Speaker 2 And if we just like
Speaker 2 email them saying we're doing a podcast and wanted to interview them, that could be a pretty good excuse to talk to them.
Speaker 2 So let's see how easy it is to do this. Turns out the startup costs on doing a podcast are like pretty low if you want to do like a scrappy version of it, right? Did that.
Speaker 2 It turns out that academics especially, but just tons of people, really love being asked to talk about the things they think about all day, right?
Speaker 2 It's like a complete win-win where you're like, you're trying to boost the ideas of someone or some actual person who you think deserves more airtime.
Speaker 2 That person gets to like talk about their work and, you know, spread their ideas. So it's like, huh, there's like no downsides to doing this other than the time.
Speaker 2 Also, I should say that the kind of yes rate on our emails was like considerably higher than we thought. We were, you know, like
Speaker 2 two random undergrads with microphones.
Speaker 2 But there's this really nice like kind of snowball effect where
Speaker 2 if someone who is like well known
Speaker 2 is gracious enough to say yes despite not really knowing what you're about. And then you do an interview, and it's a pretty good interview.
Speaker 2 When you're emailing the next person, you don't have to like sell yourself. You can just be like, hey, I spoke to this other impressive person.
Speaker 2 And of course, you get this like this kind of snowball. So
Speaker 1 No, it's definitely a Ponzi scheme. It's great.
Speaker 2 Is that the best kind of Ponzi scheme, though? Podcasts as a form of media are just incredibly special.
Speaker 2 There's something about just the incentives between guest and host just like aligned so much better than like, I know, if this was like some journalistic interview, it'd be like way kind of more uncomfortable.
Speaker 2
There's something about the fact that it's still kind of hard to like search transcripts. So there's less of a worry about forming all your words in the right way.
So it's just like more relaxed.
Speaker 2 Yeah, recommend it.
Speaker 1 Yeah, I know.
Speaker 1 It's such a natural form of.
Speaker 1 You can think of writing as a sort of way of imitating conversation. And audiobooks are a way of trying to imitate a thing that's trying to imitate conversation.
Speaker 1 Writing is, like, yeah, you're visually perceiving what was originally
Speaker 1 an ability you had for understanding, you know,
Speaker 1 audible ideas.
Speaker 1 But then audiobooks, it's like you're going through two layers of translation there where you don't have the natural repetition and the ability to gauge the other person's reaction.
Speaker 1 and so on, and the back and forth, obviously, that an actual conversation has. Yeah.
Speaker 2 That's why people potentially listen to podcasts too much, where, I don't know, they just have something in their ears the whole day, which you can't imagine for other media, right? Yeah, a few things this makes me think of. One is, there's some experiment, and I guess you can just try it yourself, where if you force people not to use disfluences... disfluencies, sorry, like ums and ahs, those people just get much worse at reading words. In some sense, disfluencies, like, help us, I guess I'm using the word "like" right now, communicate thoughts for some reason. And then if you take a
Speaker 2 podcast,
Speaker 2 I guess I can speak for myself. And then you,
Speaker 2 word for word, transcribe what you are saying. Well, when I say you, I mean me.
Speaker 2
It's like hot garbage. It's like I've just learned how to talk.
Yes.
Speaker 2 But that pattern of speech, like you point out, is in fact easier to digest, or at least it's... it requires less kind of stamina or effort.
Speaker 1 No, yeah. And then Taleb seems to have an interesting point about this in Antifragile. I'm vaguely remembering this, but he makes a point that
Speaker 1 sometimes when a signal is distorted in some way,
Speaker 1 you retain or absorb more of it, because you have to go through extra effort to understand it.
Speaker 1 I think his example was, if
Speaker 1 somebody is speaking but they're far away or something, so their audio is muffled, you have to apply more concentration,
Speaker 1 which means you retain more of their content.
Speaker 2 So if you overlay what someone says with a bit of noise, or you turn down the volume, very often people have better comprehension of it, because of the thing you just said, which is that you're paying more attention.
Speaker 2 Also, I think maybe I was misremembering the thing I mentioned earlier, or maybe it's a different thing, which is: you can take perfect speech, like recordings, and then you can insert ums and ahs and make it worse.
Speaker 2 And then you can do like a comprehension test where people listen to the different versions and kind of remember it. And they do better with the versions which are like less perfect.
Speaker 1 Is it just about having more space between words? Or is it actually the "um"? Like, if you just added space instead of ums, would that have the same effect? Or is there something specific about
Speaker 1 "um", like it's some kind of universal sound, like "om" or something, that evokes absolute concentration?
Speaker 2 Yeah, exactly.
Speaker 2 I'm curious to ask you, like, I know, I want to know what you feel like you've learned from doing podcasting. So, I don't know, maybe one question here is, like, yeah, what's some kind of
Speaker 2 underappreciated difficulty of trying to ask good questions? I mean, you like, obviously, you are currently asking excellent questions. So, what have you learned?
Speaker 1 That...
Speaker 1 one thing: I think I've heard this advice that you want to do something where a thing that seems easy to you is difficult for other people.
Speaker 1 Like, I have tried, okay, so one obvious thing you can do is like ask on Twitter, hey, I'm interviewing this person, what should I ask them?
Speaker 1 And you'll observe that, like, all the questions that people will like propose are like terrible.
Speaker 1 But maybe it's just, oh yeah, there's adverse selection:
Speaker 1 the people who actually could come up with good questions are not going to spend the time to reply to your tweet. But then,
Speaker 1 hopefully they're not listening, but I've even tried to hire, I don't know, research partners or research assistants who can help me come up with questions more recently.
Speaker 1 And the questions they come up with also seem
Speaker 1 like... how did growing up in the Midwest change your views about blah, blah, blah. It's just a question whose answer is not interesting.
Speaker 1 It's not a question you would organically have, or at least I hope you wouldn't organically want to ask them, if you were only talking to them one-on-one. So
Speaker 1 it does seem like the skill is harder than I would have expected.
Speaker 1 It's rarer than I would have expected. I don't know why.
Speaker 1 I don't know if you have a good sense of this, because you have an excellent podcast where you ask good questions. Have you observed this, that asking good questions is a rarer skill than you might think?
Speaker 2 Certainly I've observed that it's a really hard skill. I still feel like
Speaker 2 it's really difficult. I also at least like to think that we've got a bit better. First thing I thought of: there was this example you gave of, what was it like growing up in the Midwest. We always ask those kinds of questions, you know, like, how did you get into behavioral economics? And why do you think it was so important?
Speaker 2 These are just like guaranteed to be kind of uninspiring answers. So specificity seems like a really good, like, kind of.
Speaker 1 What is your book about?
Speaker 2 Yeah, exactly. Exactly.
Speaker 2 Yeah, tell us about yourself.
Speaker 2 This is why I love Conversations with Tyler. It's one of the many reasons I love it.
Speaker 2 He'll just like launch with, you know, like the first question will be like about some footnote in this person's like undergrad dissertation and that just sets the tone so well.
Speaker 2 Also, I think cutting off, which I've made very difficult for you, I guess, cutting off answers once the interesting thing has been said.
Speaker 2 And the elaboration or like the caveats on the like meat of the answer are often just like way less worth hearing.
Speaker 2 I think trying to ask questions which a person has no hope of knowing the answer to, even though it'd be great if they knew the answer to, like, so what should we do about this policy?
Speaker 2 Is a pretty bad move.
Speaker 2 Also,
Speaker 2 if you speak to people who are familiar with being asked questions about their book, for instance, in some sense you need to flush out the kind of pre-prepared spiel that they have in their heads.
Speaker 2 Like, I know, you could even just do this like before the interview, right? And then like it gets to the good stuff where they're actually being made to think about things.
Speaker 2 Rob Wiblin has a really good list of interview tips. I guess a reason this is nice to talk about, other than the fact that it's good to have some kind of inside-baseball talk, is that the skills of interviewing feel pretty transferable to just asking people good questions, which is a generally useful skill,
Speaker 2 hopefully.
Speaker 2 So, yeah, I guess I've found that it's like really difficult. I still get pretty frustrated with how hard it is, but
Speaker 2 it's just like a cool thing to realize that you are able to like kind of slowly learn.
Speaker 1 Yeah, okay, so how do you think about the value you're adding through your podcast? And then what advice do you have for somebody who might want to start their own?
Speaker 2 Yeah, so
Speaker 2 I don't know, kind of one reason you might think podcasts are really useful in general is
Speaker 2 I guess the way I think about this is like you can imagine there's a kind of just stock of like ideas that seem really important.
Speaker 2 Like if you just have a conversation with, I don't know, someone who's researching some cool topic, and they tell you all this cool stuff that isn't written up anywhere,
Speaker 2 and you're like, oh my god, this needs to exist in the world.
Speaker 2 I think in many cases, this stock of important ideas just grows faster than you're able to, in some sense, pay it down and put it out into the world. And that's just a bad thing.
Speaker 2 So there's this overhang you want to fix. And then you can ask this question of, okay, what's just
Speaker 2 one of the most effective ways to
Speaker 2 communicate ideas
Speaker 2 relatively well and put them out into the world?
Speaker 2 Well, I don't know, just having a conversation with that person is one of the most efficient ways of doing it.
Speaker 2 I think it's like interesting in general to consider like the kind of rate of information transfer for different kinds of like media and stuff, like transmitting and receiving ideas.
Speaker 2 So on the like best end of the spectrum, right? I'm sure you've had kind of conversations where you, everyone you're talking with like shares a lot of context.
Speaker 2 And so you can just kind of blurt out this like slightly
Speaker 2 incoherent three minute like I just had this kind of thought in the shower and they can fill in the gaps and basically just like get the idea.
Speaker 2 And then, at the kind of opposite end, like, maybe you want to write an article in like a kind of prestigious outlet, and so you're like
Speaker 2 kind of covering all your bases and making it like really well written,
Speaker 2 and then just like the information per kind of effort is just like so much lower. And I guess, like, academic, certain kinds of academic things are like way out on the other side.
Speaker 2 So, yeah, just like as a way of solving this kind of problem of this overhang of important ideas, podcasts just seem like a really kind of good way to do that.
Speaker 2 I guess when you don't successfully put ideas out into the world,
Speaker 2 you get these little kind of like
Speaker 2 clusters or like fogs of
Speaker 2 like contextual knowledge where everyone knows these ideas in the right circles, but they're hard to pick up from like legible sources.
Speaker 2
And it kind of maps onto this idea of context being that which is scarce. I remember Tyler Cowen talking about that.
And it eventually made sense in that context.
Speaker 1 I will mention that
Speaker 1 it seems like
Speaker 1 the thing you mentioned, about either just
Speaker 1 hopping on a podcast and explaining your idea, or taking the time to do it in a prestigious place, seems very much like a barbell strategy.
Speaker 1 Whereas with the middle ground of spending four or five hours writing a blog post that's not going to be in somewhere super prestigious, you might as well either just put it up in a podcast if it's a thing you just want to get over with, or
Speaker 1 spend some time,
Speaker 1 a little bit more time, getting it into more prestigious places. The argument against it, I guess, is that
Speaker 1 the idea seems more accessible if it's in the form of a blog post, for, I don't know, posterity, if you want that to be the canonical source for something. But again, if you want it to be the canonical source, you should just make it a sort of more official thing, because if it's just a YouTube clip, then it's a little difficult for people to reference it.
Speaker 2 And you can kind of get the best of both worlds. So you can
Speaker 2 put your recording into, you know, the software that transcribes your podcast, right?
Speaker 1 You can put it into that.
Speaker 2 If you're lucky enough to have someone to help you with this, you can get someone, or you can just do it yourself.
Speaker 2 Like, go through the podcast, the transcript to make sure it's kind of, there aren't any like glaring mistakes.
Speaker 2
And now you have this like artifact that is in text form that like lives on the internet. And it's just like way cheaper than writing it in the first place.
But yeah, that's that's a great point.
Speaker 2 And also, people should read your Barbells for Life. Is that it? Barbell Strategies for Life?
Speaker 1 Yeah, yeah, that's it.
Speaker 2 Yeah, cool. Maybe one
Speaker 2 last thing that seems worth saying on this topic of podcasting is, like,
Speaker 2 it's quite easy to just start doing a podcast. And
Speaker 2 my guess is it's often worth at least trying, right? So I don't know. I guess there are probably a few people listening to this who've like kind of entertained the idea.
Speaker 2 One thing to say is it doesn't need to be the case that if you just like stop doing it and it doesn't really pan out after like five episodes or even fewer, that it's a failure.
Speaker 2 Like you can frame it as, I wanted to make like a small series.
Speaker 2 There's just a useful artifact to have in the world, which is, I don't know, here's this kind of bit of history that I think is underrated.
Speaker 2
And I'm going to tell the story in four different hour-long episodes. If you set out to do that, then you have this self-contained chunk of work.
So yeah, maybe that's like a useful framing.
Speaker 2 And there are a bunch of resources, which I'm sure it might be possible to link to, on just how to set up a podcast. I tried writing up, like collecting, some of those resources.
Speaker 1 The thing to emphasize, I think, is that you,
Speaker 1 I think I've talked to like at least, I don't know, three or four people at this point who have told me like, oh, I have this idea for a podcast. It's going to be about, you know, like architecture.
Speaker 1
It's going to be about like VR or whatever. They seem like good ideas.
I'm not making up the ideas themselves.
Speaker 1 But I just like, I talk to them like six months later and it's like, they haven't started it yet.
Speaker 1 And I just tell them, literally just email somebody right now, whoever you want to be your first guest. I mean, I cold emailed Bryan Caplan and he ended up being my first guest.
Speaker 1 Just email them and like set something on the calendar because I don't know what it is. Maybe just about life in general.
Speaker 1 I don't know if it's specific to podcasting, but the amount of people I've talked to who have like vague plans of starting a podcast and have nothing scheduled
Speaker 1 or nothing immediate. I don't know, they're expecting some MP3 file to just appear on their hard drive one fine day.
Speaker 1 So yeah, but yeah, just do it. Like get it on the calendar now.
Speaker 2 Yeah, that seems, that seems good.
Speaker 2 Also, there's some way of thinking about this where you just write off in advance that your first, I don't know, let's say seven episodes are just going to be embarrassing to listen to.
Speaker 2 That is more freeing because it probably is the case.
Speaker 2 But you like need to go through the
Speaker 2 bad episodes before you start getting good at anything. I guess that's not even a podcast-specific point.
Speaker 2 Yeah, also,
Speaker 2 if you're just like brief and polite, there's like very little cost in being ambitious with the people you reach out to.
Speaker 2 So yeah, just go for it, right?
Speaker 1 Bragg Catherine wrote an interesting argument about this somewhere, where he was pointing out that actually the costs of cold emailing are much lower if you're an unknown quantity than if you are somebody who has somewhat of a reputation.
Speaker 1 Because if you if you're just nobody, then they're gonna forget you ever cold email them, right? They're just gonna ignore it in their inbox.
Speaker 1 If you ever run into them in the future, they're just not going to have registered you from the first time.
Speaker 1 If you're like somebody who has like somewhat of a reputation, then there's like a mystery of like, why are we not getting introduced to somebody who should know both of us, right?
Speaker 1 If you claim to be, I don't know, like a professor who wants to start a podcast.
Speaker 1 Yeah, but anyway, that's
Speaker 1 just reinforcing the point that the cost is really low. All right, cool. Okay, let's talk about space governance. So this is an area you've been writing about and researching recently. One concern
Speaker 1 you might have is, you know, Toby Ord has that book, The Precipice, about how we're in this time of peril, where we have something like one in six odds of going extinct this century. Is there some reason to think that once we get to space, this will no longer be a problem, or
Speaker 1 that the risk of extinction for humanity will
Speaker 1 asymptote to zero?
Speaker 2 I think one point here, so actually maybe it's worth beginning with a kind of like naive case for thinking that like spreading through space is just like the ultimate hedge against extinction.
Speaker 2 And this is, you know, you can imagine duplicating civilization, or at least having civilizational backups in different places in space.
Speaker 2 If the risk of any one of them being hit by an asteroid, or otherwise encountering some existential catastrophe, is independent of the others, then the total risk falls exponentially with every new
Speaker 2 backup, right?
Speaker 2 it's like having multiple kind of backups of some data in different places in the world, right?
Speaker 2 So if those risks are independent, then it is in fact the case that going to space is just an incredibly good strategy.
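A minimal illustration of that naive independence argument (my own sketch, not something discussed in the episode; the one-in-six figure is just borrowed from the earlier Precipice reference as an arbitrary example): if each settlement faces an independent per-century catastrophe probability p, the chance that all n of them are lost is p to the power n, which falls off exponentially as you add backups.

```python
# Toy sketch of the "civilizational backup" argument under the (questionable)
# assumption that each settlement faces an independent catastrophe probability p.
def prob_all_backups_fail(p: float, n: int) -> float:
    """Probability that all n independent backups are destroyed."""
    return p ** n

if __name__ == "__main__":
    p = 1 / 6  # illustrative per-settlement, per-century risk
    for n in [1, 2, 3, 5]:
        print(f"{n} settlement(s): P(all lost) = {prob_all_backups_fail(p, n):.5f}")
```

The point Fin makes next is that the independence assumption is exactly what fails for risks like engineered pathogens or unaligned AI, so this exponential decay does not apply to them.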
Speaker 2 I think there are pretty compelling reasons to think that a lot of the most worrying risks are really not independent at all.
Speaker 2 So
Speaker 2 one example is, you can imagine very dangerous pathogens. If there's any travel between these places, then the pathogens are going to travel.
Speaker 2 But maybe the more pertinent example is
Speaker 2 if you think it's worth being worried about artificial general intelligence that is like unaligned, that goes wrong, and like really relentlessly pursues really terrible goals, then just having some like
Speaker 2 some just physical space between two different places is really not
Speaker 2 going to work as a real kind of hedge.
Speaker 2 So I'd say something like, you know, space seems kind of, it seems net useful to like diversify, go to different places, but like absolutely not sufficient for getting through this kind of time of perils.
Speaker 2 Then, yeah, I guess there's this follow-up question, which is: okay, well, why expect that there is any hope of getting the risk down to sustainable levels? If you're sympathetic to the possibility of really transformative artificial general intelligence arriving, you might think that getting that transition right, where the outcome is that you now have this thing on your side which has your interests in mind, or has good values in mind, but also has this general-purpose reasoning capability, in some sense just tilts you towards being safe indefinitely. One reason is that if bad things pop up, like some unaligned thing, then you have this much better established, safe and aligned thing, which has a kind of defensive advantage.
Speaker 2 So that's one consideration. And then, if you're less sympathetic to this AI story, I think you could also tell a story
Speaker 2 about being optimistic about
Speaker 2 our capacity to
Speaker 2 catch up along some kind of wisdom or coordination dimension.
Speaker 2 If you really zoom out and look at how quickly we just invented all this kind of insane technology, that is a roughly kind of exponential process.
Speaker 2 You might think that might eventually slow down, but our improvements in just how well we're able to coordinate ourselves continue to increase.
Speaker 2
And so you get this defensive advantage in the long run. Those are two pretty weak arguments.
So I think it's actually just a very good question to think about.
Speaker 2 And, you know, I also acknowledge that's not a very compelling answer.
Speaker 1 I'm wondering if there are
Speaker 1 aspects that you can discern from first principles about the safety of space,
Speaker 1 which suggests that, I don't know, either there's no good reason to think the time of perils ever ends.
Speaker 1 I mean, because the thing about AI is, that's true whether you go to space or not, right? Like,
Speaker 1 if it's aligned, then I guess it can indefinitely reduce existential risk. I mean, one thought you could have is, maybe,
Speaker 1 I don't know, contra the long reflection thing we were talking about: if you think that one of the bottlenecks to a great future could be, I don't know, some sort of tyrannical regime,
Speaker 1 and tyrannical is kind of a coded term in conventional political thought, but you know what I mean,
Speaker 1 then the diversity of political models that being spread out would allow, maybe that is a positive thing.
Speaker 1 On the other hand, Gwern has this interesting blog post about
Speaker 1 space wars, where he points out that the logic of mutually assured destruction goes away in space. So maybe we should expect more conflict, because
Speaker 1 it's hard to identify who the culprit is if an asteroid was redirected to your planet. And if they can speed it up sufficiently fast, they can basically destroy your above-ground civilization.
Speaker 1 Yeah, so I mean,
Speaker 1 is there something we can discern from first principles about how violent and how
Speaker 1 I don't know
Speaker 1 how
Speaker 1 pleasant the time in space will be?
Speaker 2 Yeah, it's a really good question. I will say that I think I have not like reflected on that question enough to like give a really authoritative answer.
Speaker 2 Incidentally, one person who absolutely has is Anders Sandberg, who has been thinking about almost exactly these questions for a very long time, and at some point in the future might have a book about this.
Speaker 2 So
Speaker 2
watch that space. One consideration is that you can start at the end.
You can
Speaker 2 consider what happens very far out in the future.
Speaker 2 And it turns out that because the universe is expanding
Speaker 2 for any given point in space. So if you consider the next cluster over, or maybe even the next galaxy over, there'll be a time in the future when it's impossible to reach that other point in space, no matter how long you have to get there.
Speaker 2 So even if you sent out a signal in the form of light, it would never reach there, because there'll always be a time in the future when the space between you is expanding faster than the speed of light relative to that other place.
Speaker 2 So, okay, there's a small consolation there, which is that if you last long enough to get to this era of isolation, then suddenly you become independent again, in the strict sense.
Speaker 2 I don't think that's especially relevant when we're considering, relatively speaking, nearer-term things. Gwern's point is really nice.
Speaker 2 So Gwern starts by pointing out that we have this logic with nuclear weapons on Earth of mutually assured destruction, where the emphasis is on a second strike.
Speaker 2 So, if I receive a first strike from someone else, I can identify the someone else that first strike came from, and I can kind of like credibly commit to retaliating.
Speaker 2 And the thought is that this like disincentivizes that person from launching the first strike in the first place, which makes a ton of sense.
Speaker 2 Gwern's point, I guess the thing you already mentioned, is that in space there are reasons for thinking it's going to be much harder to attribute where a strike came from.
Speaker 2 That means that you don't have
Speaker 2 any kind of credible way to threaten a retaliation.
Speaker 2 And so mutually assured destruction doesn't work.
Speaker 2 And that's kind of like actually a bit of an uncomfortable thought because the alternative to mutually assured destruction in some sense is just first strike, which is if you're worried about some other actor being powerful enough to destroy you, then you should destroy their capacity to destroy you.
Speaker 2 So yeah, it's a slightly bleak blog post. I think there are a ton of other considerations, some of which are a bit more hopeful.
Speaker 2 One is that you might imagine, in general, a kind of defensive advantage in space over offense. One reason is that space is
Speaker 2 this like
Speaker 2 dark canvas in 3D where there's absolutely nowhere to hide. And so
Speaker 2 you can't sneak up on anyone.
Speaker 2 But yeah, I think there's a lot of stuff to say here, and a lot of it I don't quite fully understand yet.
Speaker 1 But I guess that makes it
Speaker 1 an interesting and important subject to be studying, if
Speaker 1 we don't know that much about how it's going to turn out. So
Speaker 1 von Neumann had this vision where you would set up a sort of virus-like probe that infests a planet and uses its usable resources to build more probes, which go on and infect more planets.
Speaker 1 Is the long-run future of the universe that all the available low-hanging resources are burnt up in
Speaker 1 some sort of like fire,
Speaker 1 expanding fire of von Neumann probes? Because it seems like as long as one person decides that this is something they want to do, then
Speaker 1 the low-hanging fruit in terms of
Speaker 1 spreading out will just be burned up by somebody who built something like this.
Speaker 2 Yeah, that's a really good question.
Speaker 2 So, okay, maybe there's an analogy here: on Earth, we have organisms which can convert raw resources plus sunlight into more of themselves, and they replicate.
Speaker 2 It's notable that they don't blanket the Earth. Although, just as a tangent, I remember someone
Speaker 2 mentioning
Speaker 2 this thought: if an alien, you know, arrived on Earth and asked the question of what is the most successful species, it would probably be grass.
Speaker 2 But okay, the reason that particular organisms which just reproduce using sunlight don't have this green goo dynamic is because there are competing organisms, there are things like, you know, antivirals and so on.
Speaker 2 So I guess, like you mentioned, it's not as if, as soon as this thing gets seeded, it's game over. You can imagine trying to catch up with these things and stop them. And, I don't know, what's the equilibrium where you have things that are trying to catch things and things which are also spreading?
Speaker 2 It's pretty unclear, but it's not clear that everything gets burned down. Although, I don't know, it seems worth having on the table as a possible outcome.
Speaker 2 And then another thought is, I guess, something you also basically mentioned. Robin Hanson has this paper called, I think, Burning the Cosmic Commons.
Speaker 2 I think the things he says there are like a little bit subtle, but I guess to kind of like bastardize the overall point, there's an idea that you should expect kind of
Speaker 2 selection effects on what you observe in the long run of like which kinds of things have won out and there's a kind of like race for different parts of space.
Speaker 2 And
Speaker 2 in particular, the things you should expect to win out are these things which like burn resources very fast and are like greedy in terms of grabbing as much space as possible.
Speaker 2 And I don't know, that seems like roughly correct. He also has a more recent bit of work called Grabby Aliens.
Speaker 2 I think there's a website, grabbyaliens.com, which kind of expands on this point and asks this question about what we should expect to see of such
Speaker 2 kind of grabby civilizations.
Speaker 2 Yeah, I mean, maybe one slightly fanciful upshot here is that you don't want these greedy von Neumann-type probes to win out which are also just dead, which have nothing of value. And so, if you think you have something of value to spread, maybe that is a reason to spread more quickly than you otherwise would have planned, once you've figured out what that thing is, if that makes sense.
Speaker 1 Yeah. So then, does this militate
Speaker 1 towards the logic of a space race? Where, similar to the first strike case, if you're not sure you'd be able to retaliate, you want to strike first, maybe there's a logic that, as long as you have at least a somewhat compelling vision of what the far future should look like, you should try to make sure it's you who's the first actor that goes out into space, even if you don't have everything sorted out, even if you have concerns about how
Speaker 1 you'd ideally like to spend more time.
Speaker 2 My guess is that the time scales on which these dynamics are relevant are like extremely long time scales compared to what we're familiar with.
Speaker 2 So I don't think that any of this like straightforwardly translates into, you know, wanting to speed up on the order of decades.
Speaker 2 And in fact, if any delay on the order of decades, or, I don't know, presumably also centuries, gives you a marginal improvement in your long-run speed,
Speaker 2 then just because of the time scales and the distances involved, you almost always want to take that trade-off.
Speaker 2 So, yeah, I guess I'd want, I'd be wary of reading too much into all this stuff in terms of what we should expect for some kind of race in the near term.
Speaker 2 It just turns out that space is extremely big and there's a ton of stuff out there. So in anything like the near term,
Speaker 2 I think this reasoning of, oh, we'll run out of useful resources, probably won't kick in. But that's just me speculating.
Speaker 2 So, I, yeah, I don't know if I have a kind of clear answer to that.
Speaker 1 Okay, so
Speaker 1 if we're talking about space governance, is there any reason to think, okay, in the far future, we can expect that space will be colonized either by, you know, like a fully artificial
Speaker 1 intelligence or by simulations of humans, like ems.
Speaker 1 In either case, it's not clear that these entities would feel that constrained by whatever norms of space governance we detail now.
Speaker 1 What is the reason for thinking that
Speaker 1 any sort of charter or constitution that the UN might build,
Speaker 1 regardless of how, I don't know, how sane it is, will be the basis on which the actual long-run fate of space is decided?
Speaker 2 Yeah, yeah, yeah.
Speaker 2 So, I guess the first thing I want to say is that it does, in fact, feel like an extremely long shot to expect that any kind of norms you end up agreeing on now, even if they're good,
Speaker 2 flow through to the point where they really matter,
Speaker 2 if they ever do.
Speaker 2 But,
Speaker 2 okay, so you can ask, like, what are the worlds in which this
Speaker 2 early thinking does end up being
Speaker 2 good?
Speaker 2 On the ems point, I don't know. Like, I can imagine, for instance, the US Constitution surviving in importance, at least to some extent, if digital people come along for the ride.
Speaker 2 It's not obvious why there's some discontinuity there. I guess the important thing is considering what happens
Speaker 2 after anything like transformative artificial intelligence arrives. My guess is that the worlds in which this, you know, super long-term question of
Speaker 2 what norms we should have for settling space
Speaker 2 even remotely
Speaker 2 matters, or does anything worthwhile, are worlds in which, you know, alignment goes well, right?
Speaker 2 And it goes well in the sense that there's a significant sense in which humans are still in the driving seat.
Speaker 2 And when they're looking for precedents, they just look to existing institutions and norms.
Speaker 2 So I don't know, that seems kind of, there's like so many variables here that this seems like a fairly narrow kind of set of worlds, but I don't know,
Speaker 2 seems pretty possible.
Speaker 2 And then there's also, you know, settling the Moon or Mars, where it's just much easier to imagine how this stuff actually ends up influencing, or positively influencing, how things turn out.
Speaker 2 Feels worth pointing out that there are things that really plausibly matter when we're thinking about space that aren't just like thinking about these crazy, kind of very long-run sci-fi scenarios, although they are like pretty fun to think about.
Speaker 2 One is that there's just a ton of pretty important infrastructure currently orbiting the Earth, and also anti-satellite weapons are being built, and my impression is, well,
Speaker 2 in fact I think it's the case, that there is a worryingly small amount of agreement and regulation about the use of those weapons.
Speaker 2 Maybe that puts you in an analogous position to not having many agreements over the use of nuclear weapons, although maybe less worrying in certain respects. But still, it seems worth taking that seriously and thinking about how to make progress there. And yeah, I think there are just a ton of other near-term considerations.
Speaker 2 There's this great graph, actually, on Our World in Data, which I guess I can send you the link to after this, which shows the number of objects launched into orbit, especially low-Earth orbit, over time.
Speaker 2 And it's a perfect hockey stick. And, I don't know, it's quite a nice illustration of why it might pay to
Speaker 2 think
Speaker 2 about how to make sure this stuff goes well. And the story behind that graph is kind of fun as well.
Speaker 2 I was messing around on some UN website, which had this incredible database with more or less every officially recorded launch logged, with all this data about how many objects were contained and so on.
Speaker 2 It was like the clunkiest API you've ever seen.
Speaker 2 You have to manually click through each page, and it takes like five seconds to load, and you have to scrape it somehow. So I was like, okay, it's great that this exists.
Speaker 2 I am not like remotely sophisticated enough to know how to like make use of it.
Speaker 2 But I emailed the Our World in Data people saying, FYI, this exists; if you happen to have, you know, a ton of time to burn, then have at it.
Speaker 2 And Ed Mathieu from Our World in Data got back to me like a month later, like, hey, I had a free day, all done, and it's up on the website. It was so cool.
Speaker 2 Okay, I think that's my space rambling, Dwarkesh. I'd quite like to ask you a couple of questions, if that's all right; I realize I've been kind of hogging the airwaves. So here's one thing I'm just interested to know: you're doing this blogging and podcasting right now, but what's next?
Speaker 2 Like, 2024 Dwarkesh, what is he doing?
Speaker 1 I think
Speaker 1 I'll probably be.
Speaker 1 I've just, I don't know, the idea of building a startup has been very compelling to me. And not necessarily because I think it's the most impactful thing that could possibly be done.
Speaker 1 Although I think it is very impactful. It's just, I don't know,
Speaker 1 people tend to have different things like, I want to be a doctor or something, that are, you know, stuck in their head.
Speaker 1 So yeah, I think that's probably what I'll
Speaker 1 be attempting to do in 2024.
Speaker 1 I think the situation in which I remain a blogger and podcaster is if, I don't know, the podcast just becomes really huge, right?
Speaker 1 At that point, it might make more sense that, oh, like, actually, this is a way. Currently, I think the impact the podcast has is like
Speaker 1 0.00000001. And then the 0.01 is just me getting to learn about a lot of different things.
Speaker 1 So I think for it to have any, not necessarily that it has to be thought of in terms of impact, but in terms of like how useful is it, I think it's only the case if it like really becomes much bigger.
Speaker 2
Nice. That sounds great.
Maybe we're just getting right back to the start of the conversation. What about a non-profit startup?
Speaker 2
All the same excitement. If you have a great idea, you kind of skip the fundraising stage.
More freedom because you don't need to make... well, no, you still have to raise money, right?
Speaker 2 Sure, but like, if it's a great idea, then
Speaker 2 I'm sure there'll be like support to make it happen.
Speaker 1 Yeah,
Speaker 1 if there's something where I don't see a way to like profitably do it, and I think it's very important that it be done,
Speaker 1 yeah, I definitely wouldn't be opposed to it. But is that, by the way, where you're leading? Like, if I asked you: in 2024, what is Fin doing? Is it a non-profit startup?
Speaker 2 I don't have something concrete in mind. That kind of thing feels very exciting to me to at least try out.
Speaker 1 Gotcha, gotcha. Yeah, I think
Speaker 1 I guess my prior is that there are profitable ways to do many things if you're more creative about it.
Speaker 1 There are obvious counterexamples of so many different things where, yeah, I could not tell you how you could make that profitable, right?
Speaker 1 Like if you have something like 1Day Sooner, where they're trying to, you know, speed up challenge trials, it's like, how is that a startup? It's not clear.
Speaker 1 So, yeah, I think that
Speaker 1 there's like a big branch of the decision tree where I think that's the most compelling thing I could do.
Speaker 2
Nice. And maybe a connected question is, I'm curious what you think EA in general is underrating from your point of view.
Also, maybe another question you could answer instead is what you think
Speaker 2 I'm personally getting wrong or got wrong, but maybe the kind of more general question is a more interesting one for most people.
Speaker 1 So I think when you have statements
Speaker 1 which are somewhat ephemeral or ambiguous. Like, let's say there's like some historian like Toynbee, right?
Speaker 1 He wrote A Study of History, and one of the things he says in it is: civilizations die when the elites lose confidence in the norms that they're setting, when they
Speaker 1 lose the confidence to rule. So, I don't think that's actually an x-risk, right?
Speaker 1 I'm just trying to use that as an example, something off the top of my head. It's the kind of thing that could be true.
Speaker 1 I don't know how I would think about it in a sort of,
Speaker 1 I mean, it doesn't seem tractable. I don't know how to even analyze whether it's true or not
Speaker 1 using the modes of analyzing importance of topics that we've been using throughout this conversation.
Speaker 1 I don't know what that implies for EA because it's not clear to me, like maybe EA shouldn't be taking things that are vague and ambiguous like that seriously to begin with, right?
Speaker 1 Yeah, if there is some interesting way to think about statements like that from a perspective that EAs could appreciate, including myself, from a perspective that I could appreciate, I'd be really interested to see what that would be.
Speaker 1 Because there does seem to be a disconnect, where when I talk to my friends who are intellectually
Speaker 1 inclined, who have a lot of interesting ideas, it requires a sort of translation layer, almost like a compiler,
Speaker 1 or like a transpiler that
Speaker 1 converts code from this language into assembly over here, and
Speaker 1 it does create a little bit of inefficiency and potentially a loss of topics that could be talked about.
Speaker 2
Nice. That feels like a great answer.
I'd just say it's something I'm kind of worried about as well, especially leaning towards the more speculative, longtermist end.
Speaker 2 Seems really important to
Speaker 2 keep hold of some real truth-seeking attitudes in the areas where obvious feedback on whether you're getting things right or wrong is much harder to come by.
Speaker 2
And often you don't have the luxury of having that. So yeah, I think just like keeping that attitude in mind seems like very important.
I like that.
Speaker 1 What is your answer, by the way?
Speaker 1 What do you think EA should improve on?
Speaker 2 Yeah, I guess off the top of my head, maybe I have two answers which go in exact opposite directions.
Speaker 2 So, one answer is that something that looks a bit like a failure mode, which I'm a bit worried about, is that as, or if, the movement grows significantly, the ideas that originally motivated it, which were quite new and exciting and important ideas, somewhat dilute. Maybe because,
Speaker 2 I guess it's related to what you said, you kind of lose these attitudes of just taking weird ideas seriously, of scrutinizing one another quite a lot, and it becomes a bit like, I don't know, greenwashing or something, where the language stays but the real fire behind it, of taking
Speaker 2 impact really seriously rather than just saying the right things, kind of fades away. So I don't think I want to say EA is currently underrating that in any important sense, but it's something that seems worth having as, you know, a worry on the radar. And then the roughly opposite thing that also seems worth worrying about is, I think it's really worth paying attention to, or it's worth considering, best-case outcomes, where
Speaker 2 a lot of this stuff maybe grows quite considerably,
Speaker 2 you know, thinking about how this stuff
Speaker 2 could become mainstream. I think thinking about really scalable projects, as well as just little fun interventions on margins.
Speaker 2 There's at least some chance that that becomes like very important.
Speaker 2 And so as such, you know, one part of that is maybe just like learning to make a lot of these fields legible and attractive to
Speaker 2 people who could contribute, who are like learning about it for the first time.
Speaker 2 And just, yeah, in general, planning for the best case, which could mean thinking in very ambitious terms, thinking about things going very well. That also just seems worth doing.
Speaker 2 Sorry, I think that's a very vague answer, but maybe that's yeah, maybe not worth keeping in, but that's my answer.
Speaker 1 Perhaps,
Speaker 1 you know, opposite to what you were saying about EA not taking weird ideas seriously enough in the future, maybe they're taking weird ideas too seriously now.
Speaker 1 It could be the case that just following basic common sense morality, kind of like what Tyler Cowen talks about in Stubborn Attachments, is really the most effective way to deal with many threats, even weird threats.
Speaker 1 If you have areas that are more speculative, like bio-risk or AI,
Speaker 1 where it's not even clear that the things you're doing to address them are necessarily making them better, I know there's concern in the movement, like the initial grant that they gave to OpenAI might have like sped up
Speaker 1 AI doom.
Speaker 1 Maybe the best thing to do in cases where there's a lot of ambiguity is just to do more common sense things. And maybe this is also applicable to things like global health, where malaria nets are great, but the way that hundreds of millions of people have been lifted out of poverty is just through implementing capitalism, right?
Speaker 1 It's not through
Speaker 1
targeted interventions like that. Again, I don't know what this implies for the movement in general.
Like
Speaker 1 even if just implementing the neoliberal agenda is the best way to decrease poverty,
Speaker 1 what does that mean somebody should do? Yeah, what does that mean you should do with the marginal million dollars, right? So it's not clear to me.
Speaker 1 It's something I hope I'll know more about in five to ten years. I'd be very curious to talk to future me about what he thinks about common sense morality versus taking weird ideas seriously.
Speaker 2 I think like one way of thinking about quote-unquote weird ideas is that in some sense they are the result of like taking a bunch of common sense starting points and then just like really reflecting on them hard and seeing what comes out.
Speaker 2 Yeah, so I think maybe the question is how much trust we should place in those reflective processes, versus
Speaker 2 what our prior should be on
Speaker 2 weird ideas being true, given that
Speaker 2 they're weird. Is that good or bad?
Speaker 2 And then like separately, I know one thing that just seems kind of obvious and important is if you take these ideas, like first of all, you should ask yourself whether you like actually believe them or whether they are like kind of fun to like say, or you're like, you're just kind of saying that you believe them.
Speaker 2 And then sometimes, you know, it's fun to say weird ideas, but then it's like, okay, I actually don't have good grounds to believe this.
Speaker 2 And then second of all, if you do in fact believe something, it's like really valuable to ask, if you think this thing is really important and true,
Speaker 2 why aren't you working on it, if you have the opportunity to work on it? This is like the Hamming question, right: what's the most important problem in your field, and then
Speaker 2 what's stopping you from working on it? And obviously, look, many people don't have the luxury of dropping everything and working on the things that they in fact believe are really important. But if you do have that opportunity, then that's a question which, I don't know, is maybe just valuable to ask.
Speaker 1 Maybe this is a meta-objection to EA, which is that I'm aware of a lot of potential objections to EA, like the ones we were just talking about, but there are so many other ones where people will identify,
Speaker 1 yeah, yeah, that's an interesting point,
Speaker 1 and then like nobody knows what to do about it, right? It's like, oh, you know, should we take common sense morality more seriously? Should we take weird ideas more seriously?
Speaker 1
It's like, oh, that is an interesting debate. And then, but how do you resolve that? I don't know how to resolve that.
I don't know if somebody's come up with a good way to resolve that.
Speaker 2
I guess it kind of hooks into the long reflection stuff a little bit. Because one answer here is just time.
So I think the story of people raising concerns about AI is maybe instructive here, where,
Speaker 2 you know, early on you get some real kind of just like radical out there researchers or writers who are kind of raising this as a worry.
Speaker 2
There's a lot of weird baggage attached to what they write. And then maybe you get a first book or two.
And then you get like more kind of prestigious or established
Speaker 2 people
Speaker 2 expressing concerns. I think one way to accelerate that process when it's like worth accelerating is just to ask that question, right? Like, do I in fact see,
Speaker 2 like, can I go along with this argument? Do I see a hole in it?
Speaker 2 And then, if the answer is no, if it just kind of checks out, even though you're obviously always going to be uncertain, if it's like, yeah, this
Speaker 2 seems kind of reasonable, then
Speaker 2 by default, you might just like spend a few years being like, just kind of living like, oh, yeah, there's this thing that I guess I think is true, but I'm not really acting on.
Speaker 2 You can just skip that step and be like, well, I'm just acting on it now.
Speaker 1 I'm not sure I agree. I think maybe an analogy here is, I don't know, you're in a relationship and you think, oh well, I don't see what's wrong with this relationship, so instead of waiting a few years to try to find something wrong with it, might as well just tie the knot now and get married. I think it's something similar with,
Speaker 1 I think, a failure mode, maybe not
Speaker 1 one you would necessarily see in EA, but one we can see generally in the world, which is that people just come to a conclusion about how the world works, or how the world ought to work, too early in life, when they don't yet know that much about what is optimal and what is possible.
Speaker 1 That's a great point. So, yeah, maybe they should just wait a little longer.
Speaker 1 Maybe just integrate these weird radical ideas as things that exist in the world, and wait until your late 20s before you decide, actually, this is the thing I should do with the rest of my career, or with my political
Speaker 1 rights or whatever.
Speaker 2 Yeah, I think that's actually just a really good point. I think maybe I'd want to kind of walk back what I said based on that.
Speaker 2 But I think there's some version of it which I'd still really endorse, which is maybe like, you know, I've spent like some time reflecting on this such that I don't expect further reflection is going to radically change what I think.
Speaker 2 You can maybe talk about
Speaker 2 this being the case for like a group of people rather than a particular person.
Speaker 2
And I could just really see this thing playing out where I believe it's important for a really long time without acting on it.
And that's a thing which seems worth skipping.
Speaker 2 I mean, to be, to be a little tiny bit more concrete, like if you really think some of these
Speaker 2 potentially catastrophic risks just like are real, and you think there are things that we can do about it, then
Speaker 2 sure seems good to start working on this stuff.
Speaker 2 And you really want to like avoid that regret of, you know, some years down the line, like, oh, I really could have just started working on that earlier.
Speaker 2 There are occasions where this kind of thinking is useful, or at least kind of asking this question, like, what would I do right now if I just like did what my kind of idealized self would endorse doing?
Speaker 1 Maybe that's useful.
Speaker 1 So it seems that if you're trying to pursue, I don't know, a career related to EA, there's like two steps where the first step is you have to get a position like the one you have right now,
Speaker 1 where you're, you know, learning a lot and figuring out future steps. And then the one after that is where you actually lead or
Speaker 1 take ownership of a specific project, like a non-profit startup or something. Do you have any advice for somebody who's before step one?
Speaker 2 Huh, that's a really good question.
Speaker 2 I also will just do the annoying thing of saying there are definitely other things you can do other than that kind of two-step trajectory. Like, as in, go directly to step two, or
Speaker 2 never go to step two and just be a really excellent researcher or communicator or anything else.
Speaker 1 Sure, sure, sure. Um,
Speaker 2 I think, like, where you have the luxury of doing it, not kind of rushing into the most salient like career option and then retroactively justifying why it was the correct option,
Speaker 2 I think is like
Speaker 2 quite a nice thing to bear in mind. I suppose often it's quite uncomfortable.
Speaker 1 Obviously, I don't want to... do you mean something like consulting?
Speaker 2 Yeah, something like that. Yeah, I mean, the kind of the obvious advice here is that there is a website designed to answer this question, which is 80,000 hours.
Speaker 2 Oh yeah, so there's a particular bit of advice from 80K which I found very useful. Which was, after I left uni, I was really unsure what I wanted to do.
Speaker 2 I was choosing between a couple options,
Speaker 2 and I was like, oh my god, this is like such a big decision, because I guess in this context,
Speaker 2 not only do you have to answer the question of what might be a good fit for me, what I might enjoy, but also like, in some sense, what is actually most important,
Speaker 2 maybe?
Speaker 2 And
Speaker 2 how am I supposed to answer that, given that there's a ton of disagreement? And so I just found myself
Speaker 2 bashing my head against the wall, trying to get to a point where I was certain that one option was better than the other.
Speaker 2 And
Speaker 2 the piece of advice that I found useful was that often you should just write off the possibility of becoming fully certain about what option is best.
Speaker 2 Instead, what you should do is you should reflect on the decision like proactively.
Speaker 2 That is, you know, talk to people, write down your thoughts, and just keep iterating on that until
Speaker 2 the dial stops moving backwards and forwards and settles on some particular level of uncertainty. So it's like, look, I guess I'm something like 60 to
Speaker 2 70% sure option A is better than B, and that hasn't really changed having done a bunch of extra thinking. That's roughly speaking the point where it might be best to make the decision, rather than holding out for certainty. Does that make sense?
Speaker 1 Yeah, it's kind of like gradient descent, where if the loss function hasn't changed in the last iteration, you call it.
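As a side note, here is a minimal sketch of that stopping rule written as an early-stopping loop (my own illustration; reflect_once, the tolerance, and the patience values are all made-up stand-ins for the informal process Fin describes):

```python
# "Stop when the dial stops moving": iterate on a decision until your estimate
# of how confident you are that option A beats option B stops changing much.
from typing import Callable

def reflect_until_stable(reflect_once: Callable[[float], float],
                         initial: float = 0.5,
                         tolerance: float = 0.02,
                         patience: int = 3,
                         max_steps: int = 100) -> float:
    """Iterate until the estimate moves less than `tolerance` for `patience` steps."""
    estimate = initial
    stable_steps = 0
    for _ in range(max_steps):
        new_estimate = reflect_once(estimate)  # e.g. talk to someone, write, re-read notes
        if abs(new_estimate - estimate) < tolerance:
            stable_steps += 1
            if stable_steps >= patience:
                return new_estimate  # the dial has settled; make the call now
        else:
            stable_steps = 0
        estimate = new_estimate
    return estimate

if __name__ == "__main__":
    # Dummy reflection process that gradually converges towards 65% confidence.
    step = lambda est: est + 0.5 * (0.65 - est)
    print(f"Settled at roughly {reflect_until_stable(step):.0%} confidence in option A")
```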
Speaker 2
Yeah, nice. Like it.
Speaker 1 Yeah, that's super interesting.
Speaker 1 Though, I guess one problem maybe that somebody might face is that before they've actually done things, it's hard to know
Speaker 1 that like that's actually a... Like,
Speaker 1 not that this is actually going to be my career, but I would have, like, the podcast was just something I did as in, like, I was bored during COVID and I,
Speaker 1
yeah, the classes went online and I just didn't have anything else to do. I don't think it's something I would have pursued if I ever thought of it.
Well, I never thought of it as a career, right?
Speaker 1 So, just doing things like that can potentially lead you down interesting avenues.
Speaker 2
Yeah, yeah, yeah. I think that's a great point.
There was, I guess we're both involved with this blog prize, and there was
Speaker 2 a kind of mini prize last month for people writing about the idea of agency, and what you just said I think links into that really nicely.
Speaker 2 There's this kind of property of going from realizing you can do something to doing it, which just seems like both really valuable and learnable. So yeah, just like
Speaker 2 going from the idea of I could maybe do a little podcast series to like actually testing it and like being open to the possibility that it fails, but you learn something from it, just really valuable.
Speaker 2 Also, we were talking about sending cold emails in that same bit of the conversation, right? Like
Speaker 2 if there's someone you look up to and you have, you think it's like very plausible that you might end up in their line of research and you think there's a bunch of things you can learn from them.
Speaker 2 As long as you're not like demanding a huge amount of their time or attention, then you can just like ask to talk to them. I think finding a mentor in places like this is
Speaker 2 just like so useful. And just like asking people if they could fill that role, like again, in a kind of friendly way is just,
Speaker 2 you know, maybe it's a kind of a move people don't opt for a lot of the time.
Speaker 2 But yeah, just like taking the non-obvious options, being proactive about connecting to other people, seeing if you can like physically meet other people who are interested in the same kind of weird things as you.
Speaker 2 Yeah, this is all extremely obvious, but I guess it's stuff I kind of would really have benefited from learning earlier on.
Speaker 1 Yeah, and the unfortunate thing is it's like not clear how you should apply that in your own circumstance when you're
Speaker 1 trying to decide what to do.
Speaker 1 Okay, so yeah,
Speaker 1 let's close out by plugging the Effective Ideas
Speaker 1 blog prize you just mentioned, and then the
Speaker 1 red teaming EA
Speaker 1 contest.
Speaker 1 We already mentioned that earlier, but if you just want to leave links and summarize them again for us.
Speaker 2 I appreciate that.
Speaker 2 Yeah, so the criticism contest, the deadline is the first of September.
Speaker 2 The kind of canonical post that announces that is an EA forum post, which I'd be very grateful if you could link to somewhere, but I'm happy to do that.
Speaker 2 And then the prize pool is at least $100,000, but possibly more if there are just a lot of exceptional
Speaker 2 entries.
Speaker 2
And then hopefully all the relevant information is there. And then, yeah, there's this blog prize as well, which I've been helping run,
which I think you mentioned as well at the start.
Speaker 2 So the overall prize is, yeah, $100,000, and up to five of those prizes. But also there are these smaller monthly prizes that
Speaker 2 I just mentioned.
Speaker 2 So last month the theme was agency, and the theme this month is to write some response or some reflection on this series of blog posts called The Most Important Century, by Holden Karnofsky, which, incidentally, people should just read anyway.
Speaker 2 I think it's just really truly excellent and kind of remarkable that
Speaker 2 one of the most affecting
Speaker 2 series of blog posts I've basically ever read was written by the co-CEO of this like enormous
Speaker 2 philanthropic organization, in his spare time. It's just kind of insane.
Speaker 2 Yeah, so
Speaker 2 the website is effectiveideas.org.
Speaker 1 Yeah, and then obviously,
Speaker 1 where can people find you? So your website, Twitter handle, and then
Speaker 1 where can people find your podcast?
Speaker 2
Oh, yeah. So the website is my name dot com: finmoorhouse.com.
Twitter is my name.
Speaker 2 And podcast
Speaker 2
is called Hear This Idea, as in, listen to this idea. So it's just that phrase dot com: hearthisidea.com.
And I'm sure if you Google it, it'll come up.
Speaker 1 But by the way,
Speaker 1 what is your probability distribution over how impactful these criticisms end up being, or just how good they end up being?
Speaker 1 Like, if you had to guess, what is like your median outcome, and then what is like your 99th or 90th percentile outcome of how good these end up being?
Speaker 2 Yeah, okay, that's a good question.
Speaker 2 I feel like I want to say that doing this stuff is really hard.
Speaker 2 So
Speaker 2 I don't want to like discourage posting by saying this, but I think
Speaker 2 maybe the median submission is
Speaker 2 really robustly useful, absolutely worth writing and submitting.
Speaker 2 That said, maybe the difference between the most valuable posts of this kind or work of this kind and the median kind of effort is probably very large, which is just to say that the ceiling is really high.
Speaker 2 If you think you have
Speaker 2 a 1% chance of influencing $100 million
Speaker 2 of philanthropic spending, then there is some sense in which an impartial philanthropic donor might be willing to spend roughly 1% of that amount to find out that information, right?
Speaker 2
Which is like a million dollars. So yeah, this stuff can be like really, really important, I think.
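As a quick sanity check on that arithmetic (the 1% and $100 million figures are just the hypothetical numbers Fin uses, not claims about the actual contest):

```python
# Expected value of the information in Fin's hypothetical: a 1% chance of
# redirecting $100M of philanthropic spending is worth about $1M in expectation.
p_influence = 0.01
spending_at_stake = 100_000_000
print(p_influence * spending_at_stake)  # 1000000.0, i.e. roughly a million dollars
```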
Yeah.
Speaker 1
Yeah. Okay.
Excellent. Yeah.
So the stuff you're working on seems really interesting, and the blog prizes seem
Speaker 1
like they might have a potentially very big impact. I mean, our worldviews have been shaped so much by some of these bloggers we've talked about.
So
Speaker 1
if this leads to one more of those, that alone could be very valuable. So, Fin, thanks so much for coming on the podcast.
This was the longest, but also
Speaker 1 one of the most fun conversations I've gotten a chance to do.
Speaker 2 The whole thing was so much fun. Thanks so much for having me.
Speaker 1
Thanks for watching. I hope you enjoyed that episode.
If you did and you want to support the podcast, the most helpful thing you can do is share it on social media and with your friends.
Speaker 1
Other than that, please like and subscribe on YouTube and leave good reviews on podcast platforms. Cheers.
I'll see you next time.