Joseph Carlsmith - Utopia, AI, & Infinite Ethics
Joseph Carlsmith is a senior research analyst at Open Philanthropy and a doctoral student in philosophy at the University of Oxford.
We discuss utopia, artificial intelligence, computational power of the brain, infinite ethics, learning from the fact that you exist, perils of futurism, and blogging.
Watch on YouTube. Listen on Spotify, Apple Podcasts, etc.
Episode website + Transcript here.
Follow Joseph on Twitter. Follow me on Twitter.
Subscribe to find out about future episodes!
Timestamps
(0:00:06) - Introduction
(0:02:53) - How to Define a Better Future?
(0:09:19) - Utopia
(0:25:12) - Robin Hanson’s EMs
(0:27:35) - Human Computational Capacity
(0:34:15) - FLOPS to Emulate Human Cognition?
(0:40:15) - Infinite Ethics
(1:00:51) - SIA vs SSA
(1:17:53) - Futurism & Unreality
(1:23:36) - Blogging & Productivity
(1:28:43) - Book Recommendations
(1:30:04) - Conclusion
Please share if you enjoyed this episode! Helps out a ton!
Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Transcript
Speaker 1 Today, I have the pleasure of interviewing Joe Carlsmith, who's a senior research analyst at Open Philanthropy and a doctoral student in philosophy at the University of Oxford.
Speaker 1 Joe has a really interesting blog that I got to check out called Hands and Cities.
Speaker 1 And that's the reason that I wanted to have him on the podcast because it has a bunch of thought-provoking and insightful
Speaker 1
posts on there about philosophy, morality, ethics, the future. And yeah, so I really wanted to talk to you, Joe.
But do you want to give a bit of a longer intro on what you're up to?
Speaker 1 Sure. So I work at Open Philanthropy on existential risk from artificial intelligence.
Speaker 1 And so, you know, I think about what's going to happen with AI, how can we make sure it goes well? And in particular, how can we make sure that advanced AI systems are safe?
Speaker 1 And then
Speaker 1 I I have a side project, which is this blog where I write about philosophy and the future and things like that. And that emerges partly from
Speaker 1 sort of my background, which is I was before getting into
Speaker 1 AI and working at Open Philanthropy, I was in academic philosophy.
Speaker 1 Okay, yeah,
Speaker 1 that's quite an ambitious side project. I mean, given the length and the regularity of those posts, it's actually quite stunning.
Speaker 1 Do you want to talk more about what you're working on about AI at
Speaker 1 Open Philanthropy?
Speaker 1 So it's a mix of things.
Speaker 1 Right now, I'm thinking about AI timelines and what's called takeoff speeds, sort of how fast the transition is from pretty impressive AI systems to AI systems that are kind of radically transformative.
Speaker 1
And I'm trying to use that to provide more perspective on the probability that everything goes terribly wrong. I see.
Okay.
Speaker 1 And so what are the implications? Suppose it's higher or lower than I would expect.
Speaker 1 I guess if it's higher, maybe I should work on AI alignment. But other than that,
Speaker 1 what are the implications of that figure changing?
Speaker 1 I think there are a number of implications just from understanding timelines with respect to how you prioritize and what, you know, just to some extent, the sooner something is, then you need to be planning for it coming sooner and kind of cutting more corners or
Speaker 1 counting less on having more time.
Speaker 1 And yeah, I think overall, the higher you think the probability of catastrophe is,
Speaker 1 the easier it is for it to become kind of the most important priority.
Speaker 1 I do think there's a range of probabilities where it maybe doesn't matter that much, but I think the difference between, say, 1 and 10%, I think, is quite substantive.
Speaker 1 And the difference between 10 and 90 is quite substantive. And
Speaker 1 I know people in all of those ranges.
Speaker 1
Gotcha. Okay, interesting.
Yeah, so
Speaker 1 let's back up here and talk a bit more about the philosophy motivating this. So I think you identify as a long-termist.
Speaker 1 Yeah, so maybe a broader picture question here is:
Speaker 1 you have an interesting blog post about what the future, looking back on us, might think about the 21st century, given the risk we're taking.
Speaker 1 So, I mean, what do you think about the possibility that we are potentially giving up resources, potentially dedicating, well, I'm not, you're dedicating your career to
Speaker 1 building a future that
Speaker 1 maybe
Speaker 1 given the fact that you're alive now, you might find strange or disturbing or disgusting. I mean,
Speaker 1 so I guess to add more context to the question: from a utilitarian perspective, the present is clearly much, much better than the past. But somebody from the past might think that
Speaker 1 there are a lot of things about the present that are kind of disturbing. I mean, they might not like how isolating a modern city can be.
Speaker 1 They might find the kinds of free or cheap information that you can access on your phone
Speaker 1
kind of disturbing. Yeah, so how do you think about that? So, yeah, a few comments there.
So, one,
Speaker 1 I do think that, for most people throughout history, if you brought them to the present day,
Speaker 1 my guess is that fairly quickly, and depending on exactly the circumstances, they would come to prefer
Speaker 1 living in the present day to the past, even if there's a bit of future shock and
Speaker 1 some things that are alienating or disturbing.
Speaker 1 And, but that said, I think the distance, the sort of gap between historical humans and the present is actually much, much smaller, both in terms of time and kind of other factors than the gap I envision between present-day humans and the future humans who are living ideally in a kind of radically better situation.
Speaker 1 And so I do expect sort of greater distance and possibly greater alienation when you first show up. My personal view is that
Speaker 1 the best futures are
Speaker 1 going to be such that if you really understood them and if you really experienced what they're like, which may be a big step and might require sort of extensive engagement and possibly sort of changes to your capacities to understand and experience, then you would think it's really good.
Speaker 1 And so,
Speaker 1
and I think that's the relevant standard. So, for me, I worry less if the future is sort of initially alienating.
And the question for me is: how do I feel once I've really
Speaker 1 understood what's going on? I see.
Speaker 1 So
Speaker 1 I wonder how much we should value that kind of inside view you would get into the future from being there.
Speaker 1 If you think about, I don't know, many, many existing ideologies, like, I don't know, think of an Islamist or something who might say, listen, if you could just like come to Iraq and feel the bliss of fighting for the caliphate,
Speaker 1 you would understand better than you can understand from the outside view, just sitting on a couch, eating Doritos,
Speaker 1 what it's like to fight for a cause. And maybe their experience is kind of blissful in some kind of way, but um
Speaker 1 I feel like the outside view is more useful than the inside view there.
Speaker 1 Well, so I think there are a couple of different questions there. One is what the experience would be if you had it from the inside. And then there's, I think, a subtly different question, which is what your take on this would be if you fully understood, where fully understanding is not just a matter of having the internal experience of being in a certain situation, but is also a matter of understanding what that situation is causing, what sort of beliefs are structuring the ideology, whether those beliefs are true, and all sorts of other factors.
Speaker 1 And it's the latter thing that I have in mind. So I'm not just imagining, oh, the future will feel good if you're there.
Speaker 1 Because sort of by hypothesis, the people who are there, at least one hopes they're enjoying it, or one hopes they're thumbs up.
Speaker 1 If the people who are there aren't thumbs up, that's a strange utopia.
Speaker 1 But I'm thinking more that in addition to their perspective, there's a sort of more holistic perspective, which is the sort of full understanding. And that's the perspective from which you would
Speaker 1 endorse this situation. I see.
Speaker 1 And then, yeah, so another respect in which it's interesting to think about what they might think of us is, you know, like, well, what will they think of the crazy risk we're taking by not optimizing for existential risks?
Speaker 1 And
Speaker 1 so, you know, one analogy you could offer, which I think is what MacAskill does in his new book, is to think of us as, you know, teenagers in our civilization's history.
Speaker 1 And then, you know, think of the crazy things you did as a teenager.
Speaker 1 And yeah, so I mean, maybe there is an aspect of which, like, one would wish they could take back the crazy things they did as a teenager.
Speaker 1 But my impression is that most adults probably think that while the crazy things were kind of risky,
Speaker 1 they were very formative and important, and they feel nostalgic about the things they did in the past.
Speaker 1 Do you think that the future looking back, they are going to
Speaker 1 regret the way we're living in the 21st century, or will they look back and think, oh, you know, that was kind of a cool time? I mean, I guess this is kind of conditional on there being a future, which takes away a lot of the mystery here, but
Speaker 1 I doubt that they will look back with
Speaker 1 pleasure at the sort of risks and horrors of the 21st century. I mean, if you just think about how we, or at least I, tend to think about something like the Cuban Missile Crisis or World War II, I don't personally have a kind of nostalgia.
Speaker 1 Oh, you know, sure, it was risky, but it made me who I am or something like that.
Speaker 1 I also want to say, you know, I think it's true that when you look back on your teenage years, there is often a sort of, you know, let's say you did something crazy.
Speaker 1 You and your friends used to race around and play chicken or something at the local quarry. And it's like, oh,
Speaker 1 but you know, you survived, right? And the real reason not to do that is the like chunk of probability where you just died.
Speaker 1 And so I think, to some extent,
Speaker 1 the ex-post perspective of looking back on certain sorts of risks is not the right one,
Speaker 1
especially for death risks. That's not the right perspective to use to kind of calibrate your understanding of how to feel about it overall.
I see.
Speaker 1 Okay, so I think you brought up Utopia and you have a really interesting post about the concept of utopia.
Speaker 1 So do you want to talk a little bit more about this concept and why it's important? And also, why do we have so much trouble thinking of a compelling utopia?
Speaker 1
Yeah, so utopia for me just means a kind of profoundly better future. And I think it's important because I think it's just actually possible.
I just think it's actually something that we could do.
Speaker 1 We could make, if we sort of play our cards right in sort of non-crazy ways, we could just build a world that is
Speaker 1 radically better than the world we live in today.
Speaker 1 And
Speaker 1 in particular, I think we often, in thinking about that sort of possibility, underestimate just how big the difference in value could be between our current situation and kind of what's available.
Speaker 1 And I think often utopias are kind of anchored too hard on the status quo and sort of changing it in small ways, but imagining our kind of fundamental situation basically unaltered.
Speaker 1 And I think it's a little bit like the difference between, you know, having a kind of crappy job and having a beach vacation, and utopia is like, everyone has a beach vacation.
Speaker 1 And, you know, I don't know how you feel about beach vacations, but I think the difference is more like being asleep and being awake, or
Speaker 1 it's like living in a cave versus living under the open sky. I think it's a really big difference, and that matters a lot.
Speaker 1 That's interesting because I remember in the essay
Speaker 1 you had a section where
Speaker 1 you mentioned that you expect utopia to be recognizable
Speaker 1 to a person alive now.
Speaker 1 I guess the way you put it just earlier made it seem like it would be a completely different category of experience than we would be familiar with.
Speaker 1 Yeah, so
Speaker 1 is there a contradiction there or am I missing something? So I think there's at least a tension.
Speaker 1 And the way I see the tension playing out or kind of being reconciled is specifically via the notion I referenced earlier of kind of you would, if you truly understood, come to see
Speaker 1 the utopia as genuinely good.
Speaker 1 But I think that process, I mean, ideally, I think the way we end up building utopia is we go through a long patient process of becoming wiser and better and more capable as a species.
Speaker 1 And it's in virtue of that process kind of culminating that we're in a position to
Speaker 1 build a civilization that is sort of profoundly good and radically different. But that's a long process, and so I do think, as I say, if I just transported you right there and you skipped the process, then you might not like it, and it is quite alien in some sense. But if you went through the process of really understanding and kind of becoming wiser, you would endorse it.
Speaker 1 Uh-huh. That's interesting to me, that you think the process to get to utopia is more of a sort of,
Speaker 1 maybe I'm misconstruing it, but when you mentioned it's a process of us getting wiser and
Speaker 1 yeah, so it sounds like it's a more philosophical process rather than, I don't know, we figure out how to convert everything to hedonium and you know it's eternal bliss from then on.
Speaker 1 Yeah, so
Speaker 1 am I getting it right that you think it's more a philosophical process? And then why is it that you think so?
Speaker 1 Yeah, so I definitely don't sit around thinking that we sort of know what utopia is right now, and that it's hedonium.
Speaker 1 I'm not especially into the notion of hedonium, though I think it's possible to.
Speaker 1 I think the brand is bad.
Speaker 1 I think, you know, people talk about pleasure with this kind of dismissive attitude sometimes. And, you know, hedonium implies this kind of sterile uniformity.
Speaker 1
You know, people talk about how they're going to tile the universe with hedonium.
And it's like, wow, this sounds rough.
Speaker 1 Whereas I think actually, you know, the relevant perspective when you're thinking about something like hedonium is the kind of internal perspective, from which the sort of experience of the subject is something kind of joyful and boundless and kind of energizing, and whatever pleasure is actually like, pleasure is not a trivial thing. I think pleasure is a profound thing in a lot of ways. But I really don't assume that that's what utopia is about at all.
Speaker 1
I think we're at, I think, A, my own values seem to be quite complicated. I don't think I just value pleasure.
I value a lot of different things.
Speaker 1 And more broadly, I have a lot of uncertainty about how I will think and feel about things if I were to go through a kind of process of significantly increasing my capacity to understand.
Speaker 1 I don't, I think sometimes when people imagine that, they imagine, oh, we're going to sit around and do a bunch of philosophy, and then we'll have like solved normative ethics and then we'll implement our solution to normative ethics.
Speaker 1 That's not what I'm imagining by kind of wisdom. I'm imagining something
Speaker 1 richer and also that involves importantly, a kind of enhancement to our cognitive capacity.
Speaker 1 So, you know, I think we're really limited in our ability to understand the universe right now.
Speaker 1 and I think there's just a huge amount of uncharted territory in terms of what minds can be and do and see.
Speaker 1 And so I want to sort of chart that territory before we start making kind of big and irreversible decisions about what sort of civilization we want to build in the long term. I see.
Speaker 1 And
Speaker 1 another
Speaker 1 maybe concerning part of the utopia is that, yeah, as you mentioned in the piece,
Speaker 1 many of the worst ideologies in history have had elements of utopian thinking in them.
Speaker 1 To the extent that EA and utilitarianism generally are compatible with utopian thinking, maybe they don't advocate utopian thinking, but they are compatible with it.
Speaker 1 Do you see that as a problem for
Speaker 1 the movement's health and potential impact?
Speaker 1 Is the question something like,
Speaker 1 is this a red flag? Kind of, ah, you know,
Speaker 1 we looked at other ideologies throughout history and they've been compatible with utopian thinking. And maybe sort of effective altruism or
Speaker 1 utilitarians or something is similarly compatible. So should we worry in the same way? Is that the question?
Speaker 1 Yeah, partly. And also
Speaker 1 another part is, maybe it's still right that, morally speaking, utopia is compatible with this worldview and the worldview is correct.
Speaker 1 But that the implications are that
Speaker 1 somebody misunderstands what is best,
Speaker 1 they identify as an EA, and this leads to bad consequences when they try to implement their scheme.
Speaker 1 Yeah, so I think there are certainly reasons to be cautious in this broad vein.
Speaker 1 I don't see them as very specific to EA or to utilitarianism, and I don't identify as utilitarian.
Speaker 1 I see them as more
Speaker 1 or sort of better understood as
Speaker 1 risks that come from believing that something is very important at all.
Speaker 1 And I think it's true that many of these risks come from acting from a place of conviction, especially where that conviction has sort of the flavor of an ideology. It's interesting what exactly constitutes an ideology, but I think it's reasonable to look at EA and sort of be like, this looks like an ideology.
Speaker 1 And I think that's right, and I think it's important to
Speaker 1 have the relevant red flags about. I think it's pretty hard to have a plausible view of the world that doesn't in some sense imply that it could be a lot better. And when I say utopia, I don't really mean anything much different from that. I'm not saying a perfect thing. I do have a more specific view about exactly how much better things could be, but more broadly, it seems to me many, many people believe in the possibility of a much better world and are fighting for that in different ways.
Speaker 1 And
Speaker 1 so I wouldn't pin the red flag specifically to the belief that sort of things can be better.
Speaker 1 I think it would have more to do with the degree of rigidity with which you're
Speaker 1 relating to that belief.
Speaker 1 How are you acting on it in the world? How much are you willing to kind of
Speaker 1 kind of break things or kind of act in uncooperative ways in virtue of that sort of conviction? And there, I think
Speaker 1
caution is definitely warranted. I see.
Yeah, so
Speaker 1 I'm not sure I agree that
Speaker 1 most people have a view or an ideology that implies
Speaker 1 anywhere close to the kind of utopia that
Speaker 1 utopian thinking can involve.
Speaker 1 Like if you think of modern political parties in a developed democracy, like in the United States, for example, if you think of what is like a utopian vision that either party has, it's like, it's actually
Speaker 1
quite banal. It's like, oh, we'll have universal healthcare, or, I don't know, GDP will be higher in the next couple of decades, which doesn't seem utopian to me.
Speaker 1 It does seem like a limited worldview, where they're not really thinking about how much better or worse things could be, but it doesn't exactly seem utopian. Yeah, I'll let you react to that.
Speaker 1 I think that's a good point.
Speaker 1 So maybe the relevant notion of utopian here is something like: to what extent is a concept of a radically better world kind of operative in your day-to-day engagement? You know, to some extent what I meant is that I think if I sat down and talked with most people,
Speaker 1 you know, we could eventually,
Speaker 1 with some kind of constraints on reasonableness, come to agree that things could be a lot better in the world. Like, we could just cure cancer, we could cure XYZ disease, we could just go through a few things like that. We could talk about the degree of abundance that could be available.
Speaker 1 But the question is whether that's a kind of structuring or important dimension to how people are relating to the world. I think you're right that it's often not, and that's part of maybe the thing I'm hoping to
Speaker 1 push back against with that post: actually, I think this is a really important feature of our situation. I think it's true that it can be dangerous, and if you're wrong about it, or if you're acting in a sort of unwise way with respect to it, that can be really bad.
Speaker 1
But I also think it's just, it's just a really basic fact. And I think we just sort of need to learn to deal with it maturely.
And kind of pretending it's not true, I think, isn't the way to do that.
Speaker 1 I see.
Speaker 1 But to me, at least, utopian or utopia sounds like some sort of peak.
Speaker 1 And maybe you didn't mean it this way, but are you saying, in the essay and generally, that you think there is some sort of carrying capacity to how good things can get? Or can things keep getting
Speaker 1 indefinitely better,
Speaker 1 but at a certain point we're willing to say that we have reached utopia?
Speaker 1 Yeah, so I mean, I certainly don't have a kind of hard threshold. You know,
Speaker 1 here's exactly where I'm going to call it utopia.
Speaker 1 I mean something that is profoundly better.
Speaker 1 I do think
Speaker 1 that, at a very basic level, if there's only a finite number of states that the sort of affectable universe can be in, and your ranking of those states in terms of how good they are is
Speaker 1 transitive and complete,
Speaker 1 then there will be a sort of top. And
Speaker 1 I don't think that's an important
Speaker 1 thing to focus on from the perspective of just getting it, just
Speaker 1 taking seriously that things could be radically better at all. I think like talking about, ah, but exactly how good and what's the perfect thing is often kind of distracting in that respect.
Speaker 1 And it gets into these issues about like, oh, you know, how much suffering is good to have.
Speaker 1 And a lot of the sort of discourse on utopia, I think, gets distracted from basic facts about like, at the very least, we can do just a ton better.
Speaker 1
And that's important to keep in mind. I see.
I see.
Speaker 1 You point out in the piece that many religions and spiritual movements have done the most thinking on what a utopia could look like.
Speaker 1 And you know, there's a very interesting essay by Nick Bostrom in 2008, where he lays out his vision of what somebody speaking from the future, utopia, talking back to us would sound like.
Speaker 1 And when you read it, it sounds very much like a sort of
Speaker 1 mystical essay, the kind of thing that, with a change of a few words, a Christian could have written, like C.S. Lewis writing about what it's like to speak down from heaven.
Speaker 1 Yeah,
Speaker 1 so to what extent is there,
Speaker 1 and I don't mean this pejoratively, but
Speaker 1 to what extent is there some sort of like spiritual or religious dimension to utopian thinking
Speaker 1 that relies on some amount of faith that things can get indescribably better in some sort of ephemeral way?
Speaker 1 So I think there are
Speaker 1 definitely analogues and similarities between some ways of relating to the notion of utopia and
Speaker 1 attitudes and orientations that are common in religious contexts and spiritual contexts. And I think it's,
Speaker 1 and I think personally,
Speaker 1
so I don't think it needs to be like that. As I say, I don't think it requires faith.
I don't think it requires anything mystical.
Speaker 1 I think this is, it's just a basic fact
Speaker 1 about our kind of current,
Speaker 1 you know, our current cognitive situation, our current civilizational situation, that things could be radically better.
Speaker 1 And
Speaker 1 it's ephemeral in the sense that it's quite hard to imagine, especially, you know, for me,
Speaker 1 an important source of evidence here is sort of variance in the quality of human experiences. So if you think about your kind of peak experiences,
Speaker 1 it's often a really big deal.
Speaker 1 You're kind of sitting there going, wow, this is radical, this is serious.
Speaker 1 And kind of feeling in touch or feeling that this is, in some sense,
Speaker 1
something you would trade much, much sort of mundane experience for the sake of.
And I think it's important. So the thing that I think we need to do is sort of extrapolate from there.
Speaker 1 So you sort of look at the trajectory that your mind moved along as you moved into some experience, or some broader non-experiential thing, like your community got a lot better, your relationships got a lot better.
Speaker 1 Look at that trajectory and then sort of stare down, you know, where is that going?
Speaker 1 And I do think that requires a kind of, I don't want to call it faith, I think it requires a kind of extrapolation into a sort of zone that is in some sense beyond your experience, but that is sort of deeply worthy and important.
Speaker 1 And I think that's
Speaker 1 something that is often associated with spirituality and religion. And
Speaker 1 I think that's okay.
Speaker 1 But I actually think there are a number of really important differences between utopia and something like heaven.
Speaker 1 So, you know, centrally, utopia will be a sort of concrete, limited situation.
Speaker 1 You know, there are going to be frictions, there are going to be resource constraints, it's going to be finite.
Speaker 1 There's a bunch of, it's still going to be in the real world, whereas I think
Speaker 1 many,
Speaker 1 you know, most religious visions
Speaker 1 don't have those constraints. And that's an important, an important feature of their,
Speaker 1 yeah, of their situation. Yeah, speaking of constraints, this reminds me of Robin Hanson's theory that
Speaker 1 eventually the universal economy will just be made up of these digital people, ems, and that because of competition, their wages will be driven down to subsistence levels,
Speaker 1 which
Speaker 1 maybe that's compatible with some engineering in their ability to experience such that
Speaker 1 it's still blissful for them to work at subsistence levels of compute or whatever.
Speaker 1 But yeah, so it seems like this sort of like
Speaker 1 first order economic thinking implies that there will be no utopia. In fact,
Speaker 1 things will get worse on average, but maybe better overall if you just add up all the experience, but worse on average.
Speaker 1 Yeah, so I don't know.
Speaker 1 this vision seems incompatible with yours of a utopia. What do you think?
Speaker 1 Yeah, I would not call Robin's world a utopia. And so, you know, a thing I haven't been talking about is what our overall probability distribution should be with respect to different qualities of futures, and, you know, exactly how possible and how likely it is that we build something that is sort of profoundly good as opposed to mediocre or much worse.
Speaker 1 And
Speaker 1 I would class Robin's scenario in the mediocre or much worse zone.
Speaker 1 So do you have a criticism of the logic he uses to derive that? To some extent, I think
Speaker 1 my main criticism, or the first thing that would come to mind, is that I think competitive pressures are
Speaker 1 a source of kind of
Speaker 1 pushing the world in bad directions. But I also think there are ways in which kind of wise forms of coordination and kind of preemptive action can
Speaker 1 stave off the sort of bad effects of competitive pressures.
Speaker 1 And so that's a sort of, that's the way I imagine avoiding
Speaker 1 stuff in the vicinity of what Robin is talking about.
Speaker 1 There are a lot of complexities there. Yeah, yeah.
Speaker 1 The last few years have not reinforced my
Speaker 1 belief in the possibility of wise coordination. But yeah, yeah.
Speaker 1 Anyways, so
Speaker 1 one thing I wanted to talk to you about is that you have a paper on what it would take to match the human brain's computational capacity.
Speaker 1 And then associated with that, you have
Speaker 1 a very good summary on the Open Philanthropy website.
Speaker 1 Yeah, so do you want to talk about the approach you took to estimate this and then why this is an important metric to try to figure out?
Speaker 1 Yeah, so
Speaker 1 the approach I took was to look at the evidence from neuroscience and the literature on the kind of computational capacity of the human brain and to talk to a bunch of neuroscientists and to try to, you know, see what we know right now about the
Speaker 1 number of floating point operations per second that would be sufficient to kind of reproduce the task relevant aspects of human cognition in a computer.
Speaker 1 And that's important. I mean, it's actually not,
Speaker 1 you know, it's not clear to me exactly how important this parameter is to our overall picture.
Speaker 1 I think the way in which it's relevant to the thinking that I've been doing, and that Open Phil has been doing, is as an input into an overall methodology for estimating when we might see kind of human-level AI systems. That methodology proceeds by first trying to estimate roughly the computational capacity of the brain, or the sort of
Speaker 1 size of a kind of AI system, its kind of overall parameter count
Speaker 1 and compute capacity, that would be sort of analogous to humans.
Speaker 1 And then you extrapolate from that to the training cost, the cost to kind of create a system of that kind using current methods in machine learning and kind of current
Speaker 1 scaling laws.
Speaker 1 And
Speaker 1 that methodology, though, brings in a number of additional assumptions that I think aren't
Speaker 1 just transparent, like, oh, yeah, of course, that's how we would do it. And so I think you have to be a little bit more in the weeds to see exactly how it
Speaker 1
feeds in. I see.
And then, yeah, so I think you said it was 10 to the 15 FLOPS for the
Speaker 1 human brain. But did you have an estimate for how many FLOPS it would take
Speaker 1 to train something like the human brain? I know GPT-3 is like
Speaker 1 only 175 billion parameters or something, which can fit onto, you know, a micro SD card, even.
Speaker 1 But
Speaker 1 yeah, it was like $20 million to train. So yeah,
Speaker 1 were you able to come up with some sort of estimate for what it would cost to train something like this?
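(A quick back-of-the-envelope sketch of that aside, assuming roughly two bytes per parameter; the precision choice is an assumption, not something stated in the conversation.)

```python
# Rough check on the "fits on a microSD card" aside (placeholder assumption: 2 bytes/parameter, e.g. fp16).
params = 175e9                        # GPT-3 parameter count mentioned above
bytes_per_param = 2                   # assumed storage precision
size_gb = params * bytes_per_param / 1e9
print(f"~{size_gb:.0f} GB")           # ~350 GB, which does fit on a large (e.g. 1 TB) microSD card
```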
Speaker 1
Yeah, so my focus in that report was not on the training extrapolation. That was work that Ajeya Cotra at Open Philanthropy did, using my report's estimate as an input.
And
Speaker 1 her methodology involves assigning different probabilities to different kinds of ways of using that input
Speaker 1 to derive an overall training estimate.
Speaker 1 And in particular, an important source of uncertainty there is the kind of amount of compute required or the sort of number of times you need to run a system per data point that it gets.
Speaker 1 So in the case of something like GPT-3, you get a meaningful data point and a gradient update as to how well you're performing with each token that you output as you're doing GPT-3 style training.
Speaker 1 So you're, you know, you're predicting text from the internet, you know, you suggest the next token, and then your training process says like, nope, do better next time, or something like that.
Speaker 1 Whereas if you're, say, learning to play Go and you have to play, I mean, this isn't exactly how, or this isn't how a Go system would work, but it's an example.
Speaker 1 If you have to play the full game out, and that's sort of hundreds of moves,
Speaker 1 before you get an update as to whether, you know, you're playing well or poorly, then that's a big multiplier on the compute requirement. And so that's one of the central pieces.
Speaker 1 That's what Ajeya calls the horizon length of training. And that's a very important source of uncertainty in getting to your
Speaker 1 overall
Speaker 1 training estimate. But ultimately, you know, she ends up with this big spread-out distribution, from something like, I think GPT-3 was
Speaker 1 four times 10 to the 23 or something like that, and, you know, she spreads out all the way up to the evolution anchor, which I think is something like 10 to the 41.
Speaker 1 And I think her distribution is centered somewhere in the low 30s.
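To make those orders of magnitude concrete, here is a minimal, purely illustrative Python sketch of a bio-anchors-style calculation; the sample count and horizon length are placeholder assumptions chosen for illustration, not figures from Cotra's actual report.

```python
# Bio-anchors-style training compute, illustratively:
#   (brain-scale FLOP per subjective second) x (training samples) x (horizon length).
# All inputs below are placeholder assumptions, not the report's numbers.
brain_flop_per_second = 1e15      # rough central estimate discussed above
seconds_per_sample = 1            # assume one subjective second per data point
num_samples = 1e13                # assumed number of training samples
horizon_length = 1e3              # assumed subjective seconds per feedback signal

training_flop = brain_flop_per_second * seconds_per_sample * num_samples * horizon_length
print(f"illustrative training estimate: {training_flop:.0e} FLOP")   # 1e+31, i.e. "low 30s"

gpt3_training_flop = 3.14e23      # published GPT-3 training compute, for comparison
evolution_anchor = 1e41           # upper-bound anchor mentioned in the conversation
print(f"GPT-3: {gpt3_training_flop:.0e} FLOP, evolution anchor: {evolution_anchor:.0e} FLOP")
```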
Speaker 1 Okay, that's still quite a bit, I guess.
Speaker 1 How much does this rely on
Speaker 1 the scaling hypothesis? If one thought that the current efforts and the current approach were not
Speaker 1 likely to lead in, or at least not likely in a sample-efficient way,
Speaker 1 towards human intelligence,
Speaker 1 it might be analogous to somebody saying we have enough deuterium on Earth to power a civilization for millions of years.
Speaker 1 But if you haven't figured out fusion, then it may be an irrelevant statistic.
Speaker 1 Yeah, so I think the approach does assume that
Speaker 1 you can train a human level or sort of transformative AI system with a sort of non-astronomical amount of compute and data using current,
Speaker 1 you know, without without major conceptual or algorithmic breakthroughs relative to what's currently available.
Speaker 1 Now, the actual methodology Ajeya uses allows you to assign probabilities to that assumption too. So you can, if you want, you know, say I'm only 20% on that.
Speaker 1 And then
Speaker 1 you have a few other options.
Speaker 1 So you can also kind of rerun evolution, and that's an anchor that she provides. This is often what people will offer as a sort of upper bound on how hard it is to
Speaker 1 create human-level systems: do something analogous to simulating evolution. Though there are a lot of open questions as to how hard that is.
Speaker 1 But I do think this methodology is a lot more compelling and interesting if you
Speaker 1 are compelled by the
Speaker 1 kind of available techniques in deep learning and by kind of scaling-hypothesis-like views, at least as an upper bound. I think it's important.
Speaker 1 So, you know, there's different ways of kind of being interested in algorithmic breakthroughs. One is because you think deep learning isn't enough.
Speaker 1 Another is because you think they will sort of provide a lot of efficiency relative to deep learning, such that an estimate like Ajeya's is an overestimate, because actually, you know, we won't have to do that.
Speaker 1 We'll make some sort of breakthrough and it'll happen a lot earlier.
Speaker 1 And
Speaker 1 And I put weight on that view as well. Yeah, that's really interesting. So that implies that even if you think the current techniques are not optimal, maybe that should update you in favor of thinking it could happen sooner. That's really interesting.
Speaker 1 Yeah, so then how did you go about estimating
Speaker 1 the amount of FLOPS it would take to emulate the interactions that happen in a brain? Obviously
Speaker 1 it would be unreasonable to say that you have to emulate every single
Speaker 1 atomic interaction.
Speaker 1 But then
Speaker 1 what is your proxy that you think would be sufficient to emulate?
Speaker 1 So I used a few different methodologies and tried to kind of synthesize them.
Speaker 1 So one was looking at the kind of mechanisms of the brain and what we know about the kind of complexity of what they're doing and how hard it is to capture the kind of task relevant or our best guess about the task relevant dimensions of the signaling happening in the brain.
Speaker 1 And then I also tried to bring in comparisons with existing AI systems that are replicating kind of chunks of functionality that humans, that the human brain has, and in particular in the context of vision.
Speaker 1 So sort of
Speaker 1 how do our current vision systems compare with the parts of the brain that are kind of plausibly doing analogous processing, though they're often doing other things as well.
Speaker 1 And then I used a third method, which has to do with physical limits on the kind of energy consumption per unit computation that the brain is possibly doing.
Speaker 1 And then a fourth method that I sort of gesture at, which tries to extrapolate from the communication capacity of the brain to its computational capacity using comparisons with current computers.
Speaker 1 So it's sort of a triangulation: you look at a bunch of different sources of evidence, all of which, in my opinion, are pretty weak.
Speaker 1 Well, the physical limits stuff is maybe more complicated, but it's sort of an upper bound.
Speaker 1 I think we are significantly uncertain about all of this. And my distribution is pretty spread out.
Speaker 1 But
Speaker 1 the hope is that by looking at a bunch of things at once, you can at least get a sort of educated guess. And then, yeah, so I'm very curious.
Speaker 1 Is there consensus in neuroscience or other relevant fields that we understand the signaling mechanisms well enough that we can say, basically, this is what's involved,
Speaker 1 this is what the system is reducible to,
Speaker 1 and this is how many bits you need to represent all the synaptic connections here? Or is there a variance of opinion about just how complicated the enterprise is?
Speaker 1 There's definitely disagreement. And it was interesting and in some sense, disheartening to talk with neuroscientists about just how
Speaker 1 difficult neuroscience is.
Speaker 1 I think a consistent message, and I have a section on this in the report,
Speaker 1 was kind of how far we are from really understanding what's going on in the brain, especially at at a kind of algorithmic level.
Speaker 1 That said, so in some sense, the report is somewhat opinionated in that,
Speaker 1 you know, there are experts that I found more compelling than others. There are experts who are much more in a sort of agnosticism mode of like, we just don't know.
Speaker 1 You know, the brain is really, really complicated, who sort of err on the side of very large compute estimates, a lot of emphasis on biophysical detail, a lot of emphasis on sort of mysterious things that could be happening that aren't happening.
Speaker 1 And then there are other neuroscientists who are more,
Speaker 1 you know, more willing to say stuff like, well, we kind of basically know what's going on at a mechanistic level, which isn't the same as knowing the sort of algorithmic organization overall and how to replicate it.
Speaker 1 I sort of lean towards the latter view, though I give weight to both and try to
Speaker 1
synthesize the kind of opinions of the people I talked to overall.
Just looking at the post itself, I haven't really looked deeper into the actual
Speaker 1 paper it's describing, but it seems like, to estimate the FLOPS mechanistically, you were adding up the different systems at play here.
Speaker 1 Should we expect it to be additive in that way, or maybe it's multiplicative, or there's a more complicated interaction, like the FLOPS grow superlinearly with the inputs?
Speaker 1 I know that probably sounds really naive having studied it, but just like from a first glance kind of way,
Speaker 1 that's a question I had.
Speaker 1 Yeah, so the way I was understanding and breaking down the forms of processing that you would need to replicate in the brain
Speaker 1 made them seem not multiplicative in this way. So, you know, let me take a sort of simple example.
Speaker 1 So suppose we have some neurons and they're, you know, they're signaling centrally via spikes through synapses or something like that. And then we have
Speaker 1 glial cells as well, which are signaling via like slower calcium waves and it's a sort of separate network.
Speaker 1 You could think that if it were something like the rate of calcium signaling is
Speaker 1 dependent on the rate of spikes through synapses or something like that, then that's an important interaction.
Speaker 1 But
Speaker 1 overall, if you sort of imagine this kind of network processing,
Speaker 1 you can estimate them independently and then add it up. They're not actually multiplicative processes on that conception. I do think there are kind of correlations between the estimates for
Speaker 1 the different parts, but
Speaker 1
it's sort of additive at a fundamental level. I see.
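A toy sketch of that additive aggregation, with made-up placeholder numbers rather than the report's actual per-mechanism figures:

```python
# Estimate each signaling mechanism's FLOP/s independently, then sum (not multiply).
# Every number here is a placeholder for illustration only.
mechanism_flop_per_s = {
    "synaptic_transmission": 1e14,
    "firing_decisions": 1e13,
    "glial_calcium_waves": 1e12,
    "other_processes": 1e13,
}

total_flop_per_s = sum(mechanism_flop_per_s.values())   # additive, per the conception described above
print(f"total: {total_flop_per_s:.1e} FLOP/s")          # ~1.2e+14 with these placeholders
```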
Okay. And then, yeah, how much credence do you put in
Speaker 1 the sort of
Speaker 1 almost woo hypotheses? I don't know, Roger Penrose has that thing about there being something quantum mechanical happening in the brain that's very important
Speaker 1 for understanding cognition.
Speaker 1 Yeah, to what extent
Speaker 1 do you put credence in those kinds of hypotheses?
Speaker 1 I put very little credence in those hypotheses.
Speaker 1 Yeah, I don't see a lot of reason to think that.
Speaker 1 I see a good amount of reason not to think it.
Speaker 1
But it wasn't something I dug in on a ton. Okay, gotcha.
All right, so you have this really interesting blog post about infinite ethics.
Speaker 1 Do you want to talk about why this is an important topic, why it's important to integrate into our worldview and so on?
Speaker 1 Sure. So infinite ethics is ethics that tries to grapple with how we should act with respect to kind of infinite worlds, and, you know, how should we rank them?
Speaker 1 How should they enter into our expected utility calculations or our attitudes towards risk?
Speaker 1 And I think this is important for both kind of theoretical and practical reasons.
Speaker 1 So I think at a theoretical level, when you try to do this with a lot of common ethical theories and constraints and principles, they just break on
Speaker 1 infinite worlds.
Speaker 1 And I think that's an important clue as to their viability, because I think infinite worlds are at the very least possible, even if our world is finite, and even if our causal influence, or our influence overall, is finite. It's possible to have infinite worlds, and we have opinions about them. You know, an infinite heaven is better than an infinite hell. So I think, often in ethics, we expect our ethical principles to extend to kind of ranking scenarios, or sort of acting in hypothetical scenarios, or overall to
Speaker 1 all possible situations rather than just our actual situation. And I think
Speaker 1 infinities come in there. But then I think maybe more importantly, I think it's an issue with practical relevance.
Speaker 1 And a way to see that is that, you know, I think we should have non-zero credence that we live in an infinite world.
Speaker 1 And, you know, it's a very live physical hypothesis that the universe is infinite, even if I think the mainstream view is that our causal influence on that universe is finite in virtue of things like entropy and light speed and stuff like that.
Speaker 1 But the universe itself may well be infinite
Speaker 1 and possibly infinite in a number of different ways.
Speaker 1 Max Tegmark has some work on all the different
Speaker 1 ways the universe may be really very large. And there are a number of ways in which I think we should have non-zero credence that we can have infinite influence with our actions now.
Speaker 1 So
Speaker 1
the limitations on our causal influence could be wrong. It may be that, you know, in the future we'll be able to do infinite things.
And then I also think, somewhat more
Speaker 1 exotically, that there are sort of ways of having an acausal influence on an infinite universe, even if you are limited in your causal influence.
Speaker 1 And that comes from some additional work I've done on decision theory.
Speaker 1 And so if you try to incorporate that, if you're a sort of expected value reasoner,
Speaker 1
it just very quickly starts to dominate, or at least break, your expected value calculation.
So, you know, you mentioned long-termism earlier.
Speaker 1 And, you know, a natural reason, a natural argument for getting interested in long-termism is, oh, you know, in the future, there could be all these people, their lives are incredibly important.
Speaker 1 So if you do the EV calculation, sort of your effect on them is what dominates.
Speaker 1 But actually, if you have even a tiny credence that you can do an infinite thing,
Speaker 1 you know, either that dominates or it breaks. And then if you have tiny credences on doing different types of infinite things and you need to compare them, you need to know how to do it.
Speaker 1 And so I just think this is actually a part of our epistemology now, though I think we often don't treat it that way, because we're often not doing EV reasoning or really thinking about that,
Speaker 1 that
Speaker 1 these are questions that just apply to us.
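As a toy illustration of the "dominate or break" point (a minimal example of my own, with made-up numbers, not anything from the conversation):

```python
# A tiny credence on an infinite-value outcome swamps naive expected value,
# and comparing two such options breaks the calculation entirely.
INF = float("inf")

p = 1e-12                                   # tiny credence in an infinite outcome
ev_action_a = p * INF + (1 - p) * 100       # inf: the finite stakes (100) vanish
ev_action_b = p * INF + (1 - p) * 1         # inf as well
print(ev_action_a, ev_action_b)             # inf inf -- both actions look "equally" infinite

print(ev_action_a - ev_action_b)            # nan: inf - inf is undefined, so the ranking breaks
```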
Speaker 1 Yeah, yeah. So that's super fascinating.
Speaker 1 If it is the case that we can only have an impact on a finite amount of stuff, then maybe it is true that like there's infinite suffering or happiness in the universe at large, but the delta between the best case scenario for what we do in the worst case scenario is finite.
Speaker 1 But yeah, I don't know, that still seems less compelling if the hell or heaven we're surrounded by overall
Speaker 1 doesn't change.
Speaker 1 Can you talk a bit more? I think you mentioned in your other work on having impact,
Speaker 1
having infinite impact beyond the scope of what light speed and entropy would allow us. Can you talk a bit more about how that might be possible? Sure.
So,
Speaker 1 you know, a common decision theory, though it's not, I think, the mainstream decision theory, it's a contender in the literature, is evidential decision theory, where you should act
Speaker 1 such that
Speaker 1 you would be, you know, roughly speaking, happiest to learn that you had acted that way for that reason.
Speaker 1 And
Speaker 1 so the reason this allows you a kind of acausal influence, you know, a way of thinking about it is: suppose that you are a
Speaker 1 deterministic simulation and there's a copy of you being run sort of too far away for you to ever causally interact with it, right? But you know that it's a sort of,
Speaker 1 you know, it's a deterministic copy. And so it'll do exactly what you do absent some sort of computer malfunction.
Speaker 1 And now you're deciding whether to give,
Speaker 1
you know, you have two options. You can send a million dollars to that.
Well, it's a little complicated because he's too far away, but
Speaker 1 just in general, like if I raise my hand or if I want to write stuff on my whiteboard, right?
Speaker 1 Or, you know, let's say I have to make some ethical decision, like whether I should take an expensive vacation or donate that money to save someone's life. Because the other guy is going to act just like I do, even though I can't cause him to do that.
Speaker 1 In some sense, when I make my choice, after doing so, I should think that he made the same choice. And so evidential decision theory treats his action as in some sense under my control.
Speaker 1 And so if you imagine an infinite universe where there are an infinite number of copies of you, or even not copies, people whose actions are correlated with you, such that when you act a certain way, that gives you evidence about what they do.
Speaker 1 In some sense, their actions are under your control. And so if there are an infinite number of them on evidential decision theory and a few other decision theories,
Speaker 1 then in some sense, you're having an influence on the universe. Yeah, this sounds really similar to the thought experiment in quantum mechanics called the EPR pair,
Speaker 1
which you might have heard of. But the basic idea is if you have two entangled bits and you take them very far away from each other, and then you measure one of them.
And
Speaker 1 before they're brought apart, you come up with some rule, like, hey, if it's plus, we do this, if it's minus, we do the other thing.
Speaker 1 It seems at first glance that measuring something yourself has an impact on what the other person does, even though it shouldn't be allowed
Speaker 1 by light speed.
Speaker 1 It gets resolved if you take the many worlds view. But
Speaker 1 yeah, yeah. So that's very interesting.
Speaker 1 Is this just a thought experiment, or is this something that we should anticipate, for some cosmological reason, to actually be a way we could have influence on the world? So I haven't dug into the cosmology a lot, but my understanding is that it's at the very least a very live hypothesis that the universe is infinite, in the sense that it's infinite in extent, and there are, you know, suitably far away, copies of us having just this conversation. And then, you know, even further away, there are copies of us having this conversation but wearing raccoons for hats. And
Speaker 1 all the rest,
Speaker 1 which is itself something to wonder about and sit with.
Speaker 1 But my understanding is this is just a live hypothesis and that more broadly, kind of infinity's playing, infinite universes are just sort of a part of
Speaker 1 mainstream cosmology at this point.
Speaker 1 And so
Speaker 1 yeah,
Speaker 1 I don't think it's just a thought experiment. I think infinite universes are live.
Speaker 1 And then I think, you know, these sort of non-causal decision theories are actually my sort of best-guess decision theories, though that's not a mainstream view.
Speaker 1 So it's fairly, I think it comes in fairly directly and substantively if you have that combination of views.
Speaker 1
But then I also think it comes in, I think everyone should have non-zero credence in all sorts of different infinity involving hypotheses. And so infinite ethics gets a grip regardless.
I see.
Speaker 1 And then
Speaker 1 So taking that example,
Speaker 1 if you're having an impact on every identical copy of yourself in the infinite universe, it seems that for any such copy, there's an infinite amount of other copies that are slightly different.
Speaker 1 So it's not even clear if you're increasing. Maybe it makes no sense to talk about proportions in an infinite universe, but
Speaker 1 if there is another infinite set of copies that scribbled the exact opposite thing on the whiteboard, then it's
Speaker 1
not clear that you had any impact on the total amount of good or bad stuff that happened. I don't know.
My brain breaks here, but maybe you can help me understand this.
Speaker 1 Yeah, so I mean, I think there are a couple of dimensions here. So one is
Speaker 1 trying to understand what sort of difference it actually makes if you're in this sort of infinite situation and you're thinking about acausal influence: what did you even change, at a sort of empirical level, before you talk about how to value that? And I think that's a pretty gnarly question. Even if we settled that question, though, in terms of the empirical acausal impact, there's a further question of how you rank that, or how you deal with
Speaker 1 the sort of the normative dimension here? And there,
Speaker 1 so that's the sort of ethical question. And there things get really gnarly very fast.
Speaker 1 And
Speaker 1 so,
Speaker 1 and in fact, there are kind of
Speaker 1 impossibility results that show that even very basic constraints that you really would have thought that we could get at the same time in our ethical theories, you can't get them at the same time
Speaker 1 when it comes to infinite universes.
Speaker 1 And
Speaker 1 so we know that something is going to have to go and change if we're going to extend our ethics to infinities.
Speaker 1 I see. But then,
Speaker 1 so
Speaker 1 is there some reason you settled on, I guess you mentioned you're not a utilitarian, but on some version of EA or long-termism as your tentative moral hypothesis, despite the fact that this seems unresolved?
Speaker 1 And then like, how do you settle with that tension while tentatively remaining an EA?
Speaker 1 Yeah, so I think there's two dimensions there. One is that
Speaker 1 I think it's good practice to not totally upend your life
Speaker 1 if you encounter some destabilizing philosophical idea, especially one that's sort of difficult and that you don't totally have a grip on.
Speaker 1 Isn't that what long-termism is? Yeah, so I think there's a real tension there in that I think many, you know, how seriously should we take these ideas?
Speaker 1 At what point should you be making what sorts of changes for your life on the basis of different things that you're
Speaker 1 thinking and believing? You know, it's a real art, right? And I think some people, you know, grab the first idea they see and start doing crazy stuff in an unwise way.
Speaker 1 And some people are too
Speaker 1 sluggish, and they're not willing to take ideas seriously and not willing to reorient their life on the basis of changes in what seems true.
Speaker 1 But I think, nevertheless, especially things that involve, like, ah, it turns out it's fine to, you know, do terrible things, or there's no reason to eat your lunch or whatever, things that really holistically break your ethical views, I think one should tread very cautiously with.
Speaker 1 So that's one aspect. At a philosophical level,
Speaker 1 the way I resolve it is I think for many of these issues,
Speaker 1 the right path forward, or at least a path that looks pretty good, is to
Speaker 1 survive long enough for our civilization to become much wiser.
Speaker 1 And then to use that position of wisdom and empowerment to act better with respect to these issues.
Speaker 1 And that's what I say at the end of the infinite ethics post:
Speaker 1 I think future civilization, if all goes well, will be much better equipped to deal with this.
Speaker 1 And
Speaker 1 we are at square one in kind of really understanding
Speaker 1 how these issues play out and how to respond, I think both at an empirical level and at a philosophical level.
Speaker 1 And so
Speaker 1 it looks convergently pretty good to me to survive, become wiser, keep your options open, and then act from there.
Speaker 1 And that ends up pretty similar to a lot of long-termism and existential risk work. It's just that it's focused less on the idea that the main event will be what happens to future people.
Speaker 1 And it's more about getting to the point where we are wise enough to understand and reorient in a better way.
Speaker 1
Okay. Yeah.
So what I find really interesting about this is that you can,
Speaker 1 yeah, so different people tend to have different thresholds for epistemic learned helplessness, where they basically say, this is too weird, I'm not going to think about this, let's just stick with my current moral theories.
Speaker 1 So for somebody else, it might be before they became a long-termist, where it's just like, yeah, surely future people, what are we talking about here? I'm
Speaker 1 not changing my mind on stuff. And then for you, maybe it's before the infinite ethics stuff.
Speaker 1 is there some principled reason for thinking that this is where that stopping point should be, or
Speaker 1 is it just a matter of temperament and openness?
Speaker 1 So I don't think there's a principled reason. And I should say, I don't think of my attitude towards infinite ethics as solely, oh, this has gotten too far down the crazy path, I'm out. This thing about wisdom in the future is pretty important to me as a mode of orientation. A first-pass cut that I use
Speaker 1 is when do you feel like it's real?
Speaker 1 If you feel like a thing is real,
Speaker 1 as opposed to a kind of abstract, fun argument,
Speaker 1 then
Speaker 1 that's important, or that's a real signal.
Speaker 1 And the mode I generally encourage people towards, the one I'm
Speaker 1 drawn to, is something like: if there's an idea that seems compelling intellectually, that's a reason to investigate it a lot, think about it, and really grapple with, you know, if this doesn't seem right to you or if it seems too crazy, why?
Speaker 1 And really kind of processing, you know, it's a reason to pay a lot of attention.
Speaker 1 But if you've paid a lot of attention, at the end of the day, you're like, well, I guess at an abstract level, that sort of makes sense, but it just doesn't feel to me like the real world.
Speaker 1 It just doesn't feel to me like wisdom or like a healthy way of living or whatever. Then I'm like, well, maybe you shouldn't do it, right? I mean, and
Speaker 1 I think some people will do that wrong and they will end up bouncing off of ideas that are in fact good.
Speaker 1 But you know, I think overall
Speaker 1 these are sort of sufficiently intense and difficult issues that
Speaker 1 being actually persuaded, and not just chopping off the rest of your epistemology for the sake of some version of the abstraction, seems important to me, and it's a healthier way to relate.
Speaker 1 Yeah, so another example of this is that you have this really interesting blog post on ants and
Speaker 1 your thoughts after
Speaker 1 sterilizing a colony of them? So
Speaker 1 I,
Speaker 1 yeah, so this is another example of a thing where
Speaker 1 almost everybody, other than, I don't know, maybe a Jain who wears a face mask to prevent bugs from going into their mouth, would say, okay, at this point, if we're talking about how many hedons are in a hectare of forest from all the millions of insects there,
Speaker 1 then you've lost me.
Speaker 1 But then, you know, somebody else might say, okay, well, there's not a strong reason for thinking they have no, absolutely no capacity to feel suffering.
Speaker 1 Yeah, so
Speaker 1 I wonder how you think about such questions, because you can't stop living, and you can't stop killing.
Speaker 1 You're not even going to stop going on road trips, where you're probably killing hundreds of insects just by driving.
Speaker 1 But yeah, so
Speaker 1 what do you think about such conundrums?
Speaker 1 I have significant uncertainty, and I think this is the appropriate position, about exactly how much consciousness or suffering or the other properties we associate with moral patienthood apply to different types of insects. I think it's a strange view to be extremely confident that what happens with insects is totally morally neutral, and I think it actually doesn't fit with our common sense. So say you see a child frying ants with a magnifying glass. I think there is some
Speaker 1 you know, you could say, ah, well, that just indicates that they're going to be cruel to other things that matter.
Speaker 1
But I don't think that's the whole story. You see the ants, you know, and they're twitching around.
And
Speaker 1 as in many cases with animal ethics, I think we're a bit schizophrenic about which cases we view as morally relevant and
Speaker 1 which not. You know,
Speaker 1 we have, you know, pet treatment laws, and then we have factory farms and stuff like that.
Speaker 1 So
Speaker 1 I don't see it as a radical position that ants matter somewhat. I think there's a further question of what your overall practical response to that should be.
Speaker 1 And I do think that, as in a lot of ethical life, there are trade-offs, and
Speaker 1 you have to make a call about what sort of constraints you're going to put on yourself at the cost of other goals. And
Speaker 1 in the case of insects, it's not my current moral focus, and I don't pay a lot of costs
Speaker 1 to lower my impact on animals. I don't, you know,
Speaker 1 sweep the sidewalk of ants, in particular.
Speaker 1 And I think that's my best-guess response, and it has to do with other ethical priorities in my life. But I think
Speaker 1 there's a middle ground between 'I shall ignore this completely' and 'I shall be a Jain,' which is recognizing that this is a real trade-off, there's uncertainty here, and
Speaker 1 taking responsibility for how you're responding to that. Yeah, this seems
Speaker 1 kind of similar to the infinite ethics example, where if you put any sort of credence on
Speaker 1 them having any ability to suffer, then, at least if you're not going to say, oh, it doesn't matter, because, like the far future, there are trillions and trillions of ants,
Speaker 1 it seems like this should be
Speaker 1 a
Speaker 1 compelling thing to think about. But then the result is,
Speaker 1 yeah, it's not even like becoming a vegan, where you just change your diet.
Speaker 1 And, as you might know, this is used as a reductio ad absurdum of veganism: if you're going to start caring about non-human animals, why not also care about insects?
Speaker 1 And even if they're worth like a millionth of a cow, you're probably still killing like a million of them on any given day from all your activities, indirectly, maybe.
Speaker 1 Like, I don't know, the food you're eating, all the pesticides that are used to grow that food.
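For what it's worth, the back-of-the-envelope behind that worry is a single multiplication; here is a toy version, using only the illustrative figures tossed out in the conversation, not empirical estimates.

```python
# Toy version of the interviewer's back-of-the-envelope: if each insect counted
# for a millionth of a cow, and your daily activities indirectly killed a
# million insects, the daily totals would be comparable to one cow.
# Both numbers are just the figures mentioned above, not real estimates.
insect_weight_in_cows = 1e-6      # "worth a millionth of a cow"
insects_killed_per_day = 1e6      # "a million of them on any given day"
print(insect_weight_in_cows * insects_killed_per_day)   # 1.0 cow-equivalent per day
```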
Speaker 1 I don't know how you go about resolving that kind of stuff.
Speaker 1 I mean, I guess I'd want to really hear the empirical case.
Speaker 1 I think it's true, you know, there are a lot of insects,
Speaker 1 but I think it's easy.
Speaker 1 You know, I think if you want to say like, ah, taking seriously sort of the idea that
Speaker 1 there's some reason to not like
Speaker 1 squash a bug
Speaker 1 if you see it, leads immediately to kind of Jain-like behavior, absent long-termism or something like that,
Speaker 1 I really, I feel like I want to hear the empirical case about like exactly what impact you're having and how.
Speaker 1 And I'm not at all persuaded that that's the practical upshot.
Speaker 1 And if it is, if that's a really strong case, then I think that's,
Speaker 1 you know, that's an interesting kind of implication of this view.
Speaker 1 And,
Speaker 1 you know, worth concern.
Speaker 1 But it feels to me like it's easy to jump to that almost out of a desire to get to the reductio. I would try to move slower and really see: wait, is that right? There are a lot of trade-offs here; what's the source of my hesitation about that? And not jump too quickly to something that's sufficiently absurd that I can say, ah, therefore I get to reject this whole mode of thinking, even though I don't know why. I see. Yeah. Okay, so let's talk about the two different ways of thinking about observer selection effects and their implications. So, do you want to explain,
Speaker 1 You have a four-part series on this, but do you want to explain the self-indication assumption and the self-sampling assumption?
Speaker 1 I know it's a big topic, but yeah,
Speaker 1 as much as possible.
Speaker 1 Sure. So I think
Speaker 1 one way to start to get into this debate is by thinking about the following case. So you wake up in a white room and there's a message written on the wall.
Speaker 1 And let's say you're going to believe this message. The message is from God, and it says:
Speaker 1 I, God, flipped a coin.
Speaker 1 And if it was heads, I created one person in a white room. And if it was tails, I created a million people all in white rooms.
Speaker 1 And now you are asked to assign probabilities to the coin having come up heads versus tails.
Speaker 1 And
Speaker 1 so one approach to this question,
Speaker 1 which is the approach I favor, or at least think is better than the other, is the self-indication assumption.
Speaker 1 These names are terrible,
Speaker 1 but
Speaker 1 so it goes. So SIA says that your probability that the coin came up heads should be approximately one in a million.
Speaker 1 And that's because SIA thinks it's more likely that you exist in worlds where there are more people in your epistemic situation or more people who have your evidence, which in this case is just waking up in this white room.
Speaker 1 And that could be a weird conclusion and go to weird places, but I think it's a better conclusion than the alternative.
Speaker 1 SSA, which is the main alternative I consider in that post, which is the self-sampling assumption,
Speaker 1 says that you should think it more likely that you exist in worlds where people with your evidence are a larger fraction of
Speaker 1 something called your reference class,
Speaker 1 where it's quite opaque what a reference class is supposed to be, but broadly speaking, a reference class is the sort of set of people you could have been, or that's kind of how it functions in SSA's discourse.
Speaker 1 So
Speaker 1 in this case, in both worlds, everyone has your evidence.
Speaker 1 And so the fraction is the same.
Speaker 1 And so you stick with the one half prior.
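To make that pair of verdicts concrete, here is a minimal Python sketch of the coin-toss case as just described, with one person on heads and a million on tails. The function names and structure are illustrative only, not anything from the episode or the posts it draws on.

```python
# Minimal sketch of the God's-coin-toss update under SIA and SSA.
# Assumptions: prior 50/50 on the coin; heads -> 1 person, tails -> 1,000,000
# people, all of whom share your evidence (waking up in a white room).

def posterior_heads_sia(n_heads=1, n_tails=1_000_000, prior_heads=0.5):
    # SIA: weight each world by the number of people with your evidence in it.
    w_heads = prior_heads * n_heads
    w_tails = (1 - prior_heads) * n_tails
    return w_heads / (w_heads + w_tails)

def posterior_heads_ssa(n_heads=1, n_tails=1_000_000, prior_heads=0.5):
    # SSA: weight each world by the *fraction* of the reference class that shares
    # your evidence.  Here everyone in either world has your evidence, so both
    # fractions are 1 and the prior is untouched.
    f_heads = n_heads / n_heads          # = 1
    f_tails = n_tails / n_tails          # = 1
    w_heads = prior_heads * f_heads
    w_tails = (1 - prior_heads) * f_tails
    return w_heads / (w_heads + w_tails)

print(posterior_heads_sia())   # ~1e-6: "the coin almost certainly came up tails"
print(posterior_heads_ssa())   # 0.5:   "stick with the one-half prior"
```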
Speaker 1 But in other contexts,
not everyone has your evidence, and so SSA updates towards worlds where people with your evidence are a larger fraction of the reference class.
So famously, SSA leads to what's known as the doomsday argument,
Speaker 1 where you imagine that there are two possibilities. Either humanity will go extinct very soon, or we won't go extinct very soon, and there will be tons of people in the future.
Speaker 1 And then you imagine everyone is sort of ranked in terms of when they're born.
Speaker 1 In the former case, people born at roughly this time are a much larger percentage of all the people who ever lived.
Speaker 1 And so if you imagine God first creates a world and then inserts you randomly into some group, it's much more likely that you would find yourself in the 21st century
Speaker 1 if humanity goes extinct soon than if there are tons of people in the future.
Speaker 1 If God randomly inserted you into these tons of people in the future, then it's like really, it's a tiny fraction of them are in the 21st century.
Speaker 1 So SSA in other contexts actually, you know, it has these important implications, namely that in this case, you update very, very hard towards the future being short.
Speaker 1 And that matters a lot for long-termism, because long-termism is all about the future being big in expectation.
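A small numeric sketch of that SSA update, treating your birth rank as a uniform draw over everyone who ever lives; the population totals below are stand-ins chosen purely for illustration, not real estimates.

```python
# Minimal numeric sketch of the SSA doomsday update described above.
# The population figures are made-up stand-ins, not real estimates.

prior = {"doom_soon": 0.5, "long_future": 0.5}
total_people_ever = {"doom_soon": 2e11,        # roughly "humans so far, and not many more"
                     "long_future": 1e16}      # a big future with vastly more people

# SSA: treat your birth rank as a uniform draw from everyone in the reference
# class, so the likelihood of having any particular rank is 1 / (total people).
likelihood = {h: 1.0 / n for h, n in total_people_ever.items()}

unnorm = {h: prior[h] * likelihood[h] for h in prior}
z = sum(unnorm.values())
posterior = {h: w / z for h, w in unnorm.items()}
print(posterior)   # "doom_soon" ends up with nearly all the probability mass
```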
Speaker 1 Okay, so then what is the SIA take on this? Yeah, so I think a way to think about SIA's
Speaker 1 story: so I gave this story about SSA, which is, first God creates a world, and he's dead set on putting you into this world.
Speaker 1 So he's got your soul, right? And your soul is going in there no matter what, right?
Speaker 1 But the way he's going to insert your soul into the world is by throwing you randomly into some set of people, the reference class.
Speaker 1 And so you should expect to end up in the world where
Speaker 1 the kind of person you end up as is
Speaker 1 a more likely result of that throwing process, that is, a larger fraction of the total people you could have been.
Speaker 1 What SIA thinks is different. The story I'll use for SIA, though it isn't the only gloss, is:
Speaker 1 God
Speaker 1 decides he's going to create a world. And then he, and say there's like a big line of souls in heaven, and he goes and grabs them kind of randomly out of heaven and puts them into the world, right?
Speaker 1 And so in that case, you're one of these souls, sitting in heaven, hoping to get created, and if there are more people in the world, then you've got more shots at being grabbed.
Speaker 1 On SIA, God has more chances to grab you out of heaven and put you into the world if there are more people,
Speaker 1 more people like you in that world.
Speaker 1 And so you should expect to be in a world where there was sort of, there are more such people. And that's kind of SIA's vibe.
Speaker 1 Doesn't this also imply that you should expect to be in the future, assuming there will be more people in the future? Tell me more about why it would imply that. Okay, in an analogous scenario, maybe like
Speaker 1 going back to the God-tossing-the-coin scenario: if for people in white rooms you substitute
Speaker 1 being a thing,
Speaker 1 a conscious entity.
Speaker 1 And if there are going to be more conscious entities in the future, then, just as in that example you'd expect to be in the scenario with a lot more rooms, maybe you should expect to be in the scenario with a lot more conscious beings, which presumably is the future.
Speaker 1 So then it's still odd that you're in the present under SIA?
Speaker 1 Yes, in a specific sense. So
Speaker 1 it's true that on SIA,
Speaker 1 say that we don't know what room you're in first, right? So
Speaker 1 you wake up in the white room and you're wondering, am I in room one or am I in rooms two through a million, right?
Speaker 1 And on SIA, what happens first is: you wake up,
and you don't know what room you're in, but there are a lot more people in the world with lots of rooms, and you become very, very confident that you're in that world, right?
Speaker 1 So you're very, very confident on tails. And then you're right that
Speaker 1 conditional on tails, you split your credence evenly between all these rooms.
Speaker 1 So you are very confident that you're in one of the sort of two through a million rooms and not room one.
Speaker 1 But that's before you've seen your room number.
Speaker 1 Once you see your room number, it's true that you should be quite surprised about your room number.
Speaker 1 But
Speaker 1 once you get the room number,
Speaker 1 you're back to 50-50 on heads versus tails, because you had equal credence in
Speaker 1 tails-and-room-one and
Speaker 1 heads-and-room-one. And so when you get rid of all of the other tails worlds, the ones where you're in rooms two through a million, you're left with 50-50 overall on heads versus tails.
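Here is a minimal sketch of that two-stage SIA update, using the same one-versus-a-million numbers from the example above; splitting it into two explicit Bayesian updates is just one way of formalizing what was said.

```python
# Sketch of the two-stage SIA update in the coin-toss case: first on waking up,
# then on seeing that you are in room one.  Numbers follow the example above.

prior = {"heads": 0.5, "tails": 0.5}
people = {"heads": 1, "tails": 1_000_000}

# Stage 1 (SIA): weight worlds by how many people share your evidence.
stage1 = {h: prior[h] * people[h] for h in prior}
z1 = sum(stage1.values())
after_waking = {h: w / z1 for h, w in stage1.items()}   # tails ~ 0.999999

# Stage 2: learn you are in room one.  Conditional on each world, the chance
# that *you* are the person in room one is 1 / (number of people in that world).
stage2 = {h: after_waking[h] * (1 / people[h]) for h in after_waking}
z2 = sum(stage2.values())
after_room_one = {h: w / z2 for h, w in stage2.items()}
print(after_room_one)   # back to {"heads": 0.5, "tails": 0.5}
```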
Speaker 1 And so
Speaker 1 the sense in which SIA leaves you back at normality with the doomsday argument is once you update on being in the 21st century, which admittedly should be surprising.
Speaker 1
Like if you didn't know that you were in the 21st century and then you learned that you were, you should be like, wow, that's really unexpected. And fair.
So that's true.
Speaker 1 But I think once you do that, you're back at
Speaker 1 whatever your prior was about extinction. Maybe, I'm still not sure why the fact that you were surprised shouldn't itself be the doomsday argument.
Speaker 1 Yeah, I think there's an intuition there, which is sort of like, yeah, is SIA making a bad prediction?
Speaker 1 So you could kind of update against SIA because SIA sort of would have predicted that you're in the future.
Speaker 1 I think there's something there. And I think there's a few other analogs.
Speaker 1 Like, for example, I think SIA naively predicts that
Speaker 1 you should find yourself in a situation where there are just tons of people, you know, a situation obsessed with creating people with your evidence. And this is one of the problems with SIA.
Speaker 1 So, you should expect to find, you know, in every nook and cranny, a simulation of you.
Speaker 1 As soon as you, you know, open the door, it's actually a giant bank of simulations of you in your previous epistemic state.
Speaker 1 And then you don't see that, so you might be like, well, I should update against the anthropic theory that predicted I would see that.
Speaker 1 And I think there are arguments in that vein. Yeah, so maybe let's back up to go to the original example
Speaker 1 that
Speaker 1 was used to distinguish these two theories. Yeah, so
Speaker 1 can you help me resolve my intuitions here? My intuition is very much SSA, because it seems to me that
Speaker 1 you knew you were going to wake up, right? You knew you were going to wake up in a white room. Before you actually did wake up, your prior should have been like one half heads or tails.
Speaker 1 So it's not clear to me why, having learned nothing new,
Speaker 1 your posterior probability on either of those scenarios should change.
Speaker 1 So I think the SIA response to that, or at least a way of making it intuitive, would be to say that you didn't know that you were going to wake up. If we go back to that just-so story where God is grabbing you out of heaven,
Speaker 1 it's actually incredibly unlikely that he grabs you; there are so many people. I mean, there's a different thing where SIA is in general very surprised to exist. And in fact, you could make the same argument: SIA says you shouldn't exist, isn't it weird that you exist? And I actually think that's a good argument.
Speaker 1 So,
Speaker 1 but
Speaker 1 once you're in that headspace, I think the way to think about it is that it's not a guarantee that you would exist; God is not dead set on creating you.
Speaker 1 You are a particular contingent arrangement of the world.
Speaker 1 And so you should expect that arrangement to come about more often if there are more arrangements of that type,
Speaker 1 rather than sort of assuming that no matter what, existence will include you. Yeah, so can you talk more about the problems with SSA, like scenarios where you think it breaks down?
Speaker 1 Like why you prefer SIA?
Speaker 1 Yeah, so
Speaker 1 an easy problem, or one of the most dramatic problems, is that SSA predicts that it's possible to have a kind of telekinetic influence on the world. So imagine that there's a puppy:
Speaker 1 you wake up in an empty universe, except for this puppy, you, and a boulder that's rolling towards the puppy. And the boulder is inexorably going to kill the puppy; it's a very large boulder, and it's basically guaranteed that the puppy is dead meat.
Speaker 1 but
Speaker 1 you have the power to make binding pre-commitments that you will in fact execute, and you also have, to your right, a button that would allow you to create tons of people, zillions and zillions of people, all of whom are wearing different clothes from you, so they would be in a different epistemic state than you if you created them. Now, you make the following resolution. You say:
Speaker 1 if this boulder does not jump out of the way of this puppy (the boulder leaping aside would be some very weird, very unlikely event), then I will press this button and I will create zillions and zillions of people,
Speaker 1 all of whom are in a different epistemic state than me, but let's assume they were in my reference class.
Speaker 1 SSA thinks it's sufficiently unlikely that you, the person at the very beginning,
Speaker 1 in your differently colored clothes, would find yourself in a world with zillions of those people,
Speaker 1 because you'd be a tiny fraction of the reference class if those people get created,
Speaker 1 that SSA thinks it's actually more likely, once you've made that commitment, that the boulder will jump out of the way.
Speaker 1 And that looks weird, right? It just seems like that's not going to work.
Speaker 1 You can't just make that commitment and then expect the boulder to jump.
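As a rough illustration of how that conclusion falls out of SSA's reference-class math, here is a toy calculation; the prior on the boulder jumping aside and the count of extra people are invented numbers, and this is only a sketch of the argument as described, not anything worked through on the podcast.

```python
# Toy formalization (invented numbers) of the pre-commitment case above.
# Z stands in for the "zillions" of extra people created if the boulder
# does NOT jump out of the way.

p_jump = 1e-12          # physical prior that the boulder improbably jumps aside
Z = 10**30              # people (in a different epistemic state) created otherwise

# SSA's likelihood of your evidence in each world is the fraction of the
# reference class that shares that evidence.
lik_jump    = 1.0             # small world: you are the whole reference class
lik_no_jump = 1.0 / (1 + Z)   # big world: you are one of 1 + Z reference-class members

w_jump    = p_jump * lik_jump
w_no_jump = (1 - p_jump) * lik_no_jump
print(w_jump / (w_jump + w_no_jump))   # ~1.0: SSA now "expects" the boulder to jump
```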
Speaker 1 And you get, so that's a sort of exotic example.
Speaker 1 You get similar analogs, even in the God's coin toss case, where,
Speaker 1 like naively, it doesn't actually matter whether God has tossed the coin yet, right? So suppose,
Speaker 1 yeah, so like, let's say, let's say you wake up and learn that you're in room one, right?
Speaker 1 But God hasn't tossed the coin. It's like he created room one first before he tossed, and then he's going to toss, and that's going to determine whether or not he creates all the rooms in the future.
Speaker 1 On SSA, once you wake up and
Speaker 1 learn that you're in room one, you think it's incredibly unlikely that there are going to be these future people. So now, it's a fair coin, and God's going to toss it in front of you.
Speaker 1 You're still going to say, I'm sorry, God, it's, you know, it's a one in a million chance that this
Speaker 1 coin lands tails.
Speaker 1 And,
Speaker 1
or sorry, maybe not one in a million, but some very small number; I forget exactly.
And that's, um, and that's very weird. That's a fair coin.
It hasn't been tossed.
Speaker 1 But you, with the power of SSA, have become extremely confident about how it's going to land. So that's another argument.
Speaker 1 There are a number of other, I think, really, really bad problems for SSA, I'd say.
Speaker 1 Yeah, while I digest that,
Speaker 1 so let me
Speaker 1 let me just mention the problems you already pointed out against SIA
Speaker 1 in the post and earlier, where
Speaker 1 if one thinks SIA is true, one should be very confident that you're in a universe with many other people who have been sampled just like you.
Speaker 1 And so then it's kind of surprising that we're in a universe that is not filled to the brim with people. Like, there's a lot of
Speaker 1 you could imagine like Mars is just completely made up of bodies
Speaker 1 or, you know, like every single star has like, you know, a simulation of a trillion people inside.
Speaker 1 The fact that this is not happening seems like
very strong evidence against SIA. And then, you know, there are other things, like the presumptuous philosopher, that you might want to talk about as well.
But yeah, so
Speaker 1 do you just bite the bullet on these things or how do you think about these things?
Speaker 1 My main claim is that SIA is better than SSA.
Speaker 1 And
Speaker 1 I think it's just a horrible situation
Speaker 1 with anthropics. And
Speaker 1 I think overall, SIA is an update towards bigger, more populated universes.
Speaker 1 I think
Speaker 1 the most salient populated universes don't involve like hidden people on other planets, but they're probably,
Speaker 1 I don't know, maybe we're in a simulation and people are obsessed with simulating us or something like that.
Speaker 1 And then I think this is actually more important and worrying: I think the way I see this dialectic is:
Speaker 1 first, a big problem with SIA is that, naively, it immediately becomes certain that you live in an infinite universe, or a universe with an infinite number of people.
Speaker 1 And then it breaks, because it doesn't know how to compare infinite universes. Now, to be fair, SSA also isn't great at comparing infinite universes.
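A toy illustration of that tendency, with made-up priors and population sizes: SIA's update just multiplies each hypothesis's prior by the number of observers like you, so even a tiny prior on a vastly bigger world ends up dominating, and in the limit an infinite-population hypothesis swallows everything.

```python
# Sketch of why SIA naively rushes towards enormous (in the limit, infinite)
# populations.  The priors and population sizes below are made up purely for
# illustration; nothing here comes from the episode itself.

hypotheses = {
    "10^10 observers":  (0.90, 1e10),
    "10^40 observers":  (0.099, 1e40),
    "10^100 observers": (0.001, 1e100),
}
weights = {name: prior * n for name, (prior, n) in hypotheses.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(name, w / total)   # essentially all the mass lands on the 10^100 world
```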
Speaker 1 And for both, you can do things that are quite analogous to things you can try in infinite ethics, where you have expanding spheres of space-time and you count some fraction or some density of people in those spheres.
Speaker 1 And there's this general problem in cosmology of like trying to understand what it means to have like a fraction or a density of different types of observers.
Speaker 1 But, you know, my own take on what happens here is
Speaker 1 that you hit infinite universes fairly fast, and then they kind of break your anthropics in ways analogous to how they break your ethics.
Speaker 1 And that's kind of where I'm currently at. And I'm hoping to understand better how to do anthropics with infinities.
Speaker 1 And
Speaker 1 some of my work on the universal distribution,
Speaker 1 which I have a couple of blog posts on, was attempting to go a little bit in that direction, though it has its own giant problems.
Speaker 1 Okay, interesting.
Speaker 1 Do you know if
Speaker 1 just vaguely, it seems to me that Robin Hanson's grabby aliens work probably uses SSA, but do you know if that's the case, if he's using SSA in there?
Speaker 1 I don't, I haven't looked closely at that work. Okay, cool. I don't know, it's hard for me to think about, so maybe it'll take me a few more weeks before I can digest it fully. But yeah, okay. So, that's really interesting: you have a really interesting blog post about believing in things you cannot see. And this is almost an aside in the post itself, but I thought it was a really interesting comment.
Speaker 1 You make an interesting comment about futurism.
Speaker 1 Here's what you say. Much of futurism, in my experience, has a distinct flavor of unreality.
Speaker 1 The concepts (mind uploads, nanotechnology, settlement and energy capture in space) are, I think, meaningful, even if loosely defined.
Speaker 1 But at a certain point, one's models become so abstracted and incomplete that the sense of talking about a real thing, even a possibly real thing, is lost. Yeah, so why do you think that is?
Speaker 1 And is there a way to do futurism better? I think it comes partly because imagination is just quite a limited tool.
Speaker 1 And it's just easy, you know, when you're talking about the whole world, like the future is a big thing to try to model with this tiny mind.
Speaker 1 And so, you know, of necessity, you need to use these extremely lossy abstractions.
Speaker 1 And so, you know,
Speaker 1 it puts you in a mode of having these like, you know, really sketchy and gappy maps that you're trying to manipulate.
Speaker 1 I think that's one dimension. And then I think there's also a way in which,
Speaker 1 you know, this isn't all that unique to futurism insofar as just in general, I think it's hard sometimes to keep our
Speaker 1 intellectual engagement kind of rooted and grounded in the kind of real world.
Speaker 1 And I think it's just easy to move into a zone, especially if that zone is inflected with social dynamics, or it's kind of an intellectual game, or you're enjoying it for its own sake, or there are status dimensions in the way people talk, and other things that I think start to move our discourse
Speaker 1
in directions that aren't about like, ah, we're talking about the real world right now. Let's actually get it right.
And I think that happens with futurism.
Speaker 1 And maybe more so with futurism.
Speaker 1 I think there are some topics that people treat as, ah, that's a real serious topic that's about real stuff.
Speaker 1 And then there are other topics where it's like, this is a chance to kind of make stuff up.
Speaker 1 And, you know, my experience is sometimes people relate to futurism that way. There are other topics where people move into a zone of like, one can just say stuff here.
Speaker 1
And there are kind of no constraints. And I think, I think that's actually wrong.
And with futurism, I think there are important constraints and important things we can say.
Speaker 1 But I think that vibe can seep in nonetheless. Yeah, and it's interesting that it's true of both the future and the past.
Speaker 1 I recently interviewed somebody who wrote a book about the Napoleonic War. And yeah,
Speaker 1 it's very interesting to talk about it in an abstract sense. But then also,
Speaker 1 which is very seldom done, you can think of the reality of a million men marching out of Russia, freezing, eating the remains of horses and other people, and then starving.
Speaker 1 And
Speaker 1 then the concrete reality, when you're not, yeah, when you're not just talking about abstractions, like, oh, the border changed so much in these few decades or something.
Speaker 1 Yeah, just how you think about history changes so much. And it becomes,
Speaker 1 yeah, even recently I was reading this book about
Speaker 1 the use of meth by the Nazis.
Speaker 1 And if you just,
Speaker 1 there's this really cynical part of the book where the leaders
Speaker 1 in the Nazi regime, they're talking about like, oh, meth is a perfect drug because it gives them courage to kind of just blitz through an area without any sort of...
Speaker 1 without thinking about how cold it is, without thinking about how scary it is to just be in no man's land.
Speaker 1 And just this idea of this methed-up soldier who's been forced to go out into the middle of nowhere, marching to Russia or something, in the winter. I don't know if that was leading up to a question; I don't know if you have a reaction. Yeah, I mean, I think that's a great example of, you know, specifically the image of the difference between relating to history as 'how is the border changing' versus the concreteness of these people. And often, I think, engaging with history is horrifying in this respect, when you really bring to mind the lived reality of all these events.
Speaker 1 It's a really different
Speaker 1 experience. And I think to some extent, one of the reasons that concreteness is often lacking from futurism is that
Speaker 1 any attempt to specify the thing will be wrong. So, you know, you might be right about some abstract thing.
Speaker 1 Like, you might say, oh, we will have the ability to manipulate matter at such-and-such a level of scale. But
Speaker 1 if you try to dig in and then you're like, and here's what it's like to wake up in the future, you know, and then, you know, you're eating the
Speaker 1
or whatever, and it's just, you're wrong immediately. That's not how it's going to be.
And so you don't have the
Speaker 1 ability to really hone in on concrete details that are actually true.
Speaker 1 And so in some sense, there's this back and forth where you need to imagine a concrete thing and then be like, okay, that's wrong, but then take the flavor of concreteness that you got from that and say, but it will be a concrete thing.
Speaker 1 It just won't be the specific one I imagined.
Speaker 1 And then keep that flavor of concreteness, even as you talk in more abstract ways. And that's, I think, a delicate dance.
Speaker 1 Yeah, yeah.
Speaker 1 As many viewers will know, Peter Thiel has this talking point that he often brings up,
Speaker 1 that we've become indefinite optimists
Speaker 1 and that he prefers a sort of definite optimism where you have a concrete vision of what the future could be.
Speaker 1 Okay, so yeah, I guess to close out, one of the things I wanted to ask you about was,
Speaker 1 so you said this is a side project, this blog.
Speaker 1 Actually,
Speaker 1 before you mentioned that your main work is AI, I thought this was at least part of your main work. And so it's really surprising to me that
Speaker 1 you're able to keep up the regularity. It's like basically you're publishing a small book every, I don't know, every week or so, and
Speaker 1 filled with a lot of insight. And, I mean,
unlike many other blogs on the internet, which are just plain style, you've got great prose.
So what is your,
Speaker 1 how are you able to maintain such productivity on your side project?
Speaker 1 I should say, for a few of my most recent posts, which were especially long, I had taken some time off from work and was working on them partly in an academic context.
Speaker 1 But the first, the first year and a half or so of the blog was just on the side, and I've gone back to having it be on the side now. I think one thing that helps is my blog posts are too long.
Speaker 1 And so, you know, I have dreams of taking my long blog posts and really crunching them down into these pithy, elegant
Speaker 1 statements that are really concise and condensed.
Speaker 1
But that would be more work. So, you know, one way I've increased my output is by not doing that editing.
And I feel bad about that. But that's one thing, at least.
Speaker 1 What is that quote, where, I don't know, somebody is asked how they did it?
Speaker 1 I think it's something like, I didn't have time to write you a short letter, so I wrote you a long letter, or something like that.
Speaker 1 Yeah, exactly.
Speaker 1 I have a friend who says, it's like, the actual thing should be, I didn't have time to write you a short letter, so I wrote you a bad letter.
Speaker 1 And, you know, I hope it's not that bad. But I do think, if I had more time for these posts, I would try to cut them down.
Speaker 1 And that's one time-saver, for better or worse.
Speaker 1 Yeah, at least as a reader, it often seems to me that people like you who write,
Speaker 1 maybe this describes your process, but Scott Alexander says he kind of just writes stream of consciousness, and it just turns out to be really readable.
Speaker 1 Your blog posts are really readable.
Speaker 1 And even with the stuff I write, the pieces where I'm consciously not trying to make edits as I go end up reading much better than the ones where I'm trying to optimize each sentence and taking two steps back for every one I take forward.
Speaker 1 I don't know, it could just be a selection effect, where
Speaker 1 for the things that are harder to convey, you're spending more time editing. But yeah, it's kind of interesting.
Speaker 1
Yeah, I wonder. I wonder.
I mean, my feeling is that my writing is quite a bit better if I have a chance to edit it. And
Speaker 1 it's just a time thing.
Speaker 1
But I do think people vary quite a bit. And, you know, it's interesting.
I don't know. I was recently reading this book:
Speaker 1 George Saunders, who's a writer I really admire, has this book about fiction writing called A Swim in a Pond in the Rain. And the vibe
Speaker 1 he tries to convey, and I think this is relatively common amongst writer types, is like this obsessive focus on,
Speaker 1 even at a sentence-by-sentence level, really thinking about what, where is the reader's mind right now? How are they engaging? Are they interested? Are they surprised? Am I losing them?
Speaker 1 And, you know, his writing is really, really engaging in ways that it's like not even obvious. You just sort of start reading along and you're like, oh, wow, I'm really into this.
Speaker 1 But it's also quite a daunting picture of the level of attentiveness required.
Speaker 1
And it's like, wow, if I'm going to write everything like that, it's like, that's going to cut down a lot on my kind of overall output. And so I do think there's a balance there.
And
Speaker 1 to the extent you're one of these people who can just write stream of consciousness and have it come out close to what you would get from editing, which I'm not sure I am,
Speaker 1 you know, all the better. You're lucky.
Speaker 1 Yeah, there's also an additional consideration: if you think there's going to be some kind of power law in how interesting a piece is, how many people see it, and how many people find value in it, then it's not clear whether that advises you to spend a lot of time on each piece, to increase the odds that that one piece blows up, given that there's a big difference between the pieces that blow up and those that don't, or whether you should just do a whole bunch and try to sample as often as possible.
Speaker 1 Yeah, and actually
Speaker 1 I started the blog partly as an exercise in just getting stuff out there. I think
Speaker 1 I had had the idea that I would one day write up a bunch of stuff that I'd been thinking about,
Speaker 1 and I would write it up in this grand way, you know, it would finally be this beautiful thing and I would take all this time.
Speaker 1 And then I had ended up, for various reasons, feeling like I was approaching some aspects of my life with too much perfectionism, and I needed to just
Speaker 1 get stuff out there faster. And so the blog was an exercise
Speaker 1 in that. And I think
that's paid off in ways, and, I don't know, I don't think I would have done it otherwise. I see.
All right. Final question.
Speaker 1 I'm curious if you have three book recommendations that you can give the audience.
Speaker 1 Probably my primary recommendation, though this is somewhat self-serving because I helped with the project, is the book The Precipice by Toby Ord.
Speaker 1 You know, it may be familiar to many of your listeners, but I think
Speaker 1 it's a book that really
Speaker 1 conveys the ideas that matter
Speaker 1 most to me or that have had close to the biggest impact in my own life.
Speaker 1 Other books, I love the play Angels in America.
Speaker 1 I think it's just, I think it's epic and amazing.
Speaker 1 And, you know, that's not quite a book, but
Speaker 1
you can read it. I actually recommend watching the HBO miniseries. But that's something I recommend.
And then,
Speaker 1 I don't know, last year I read this book, Housekeeping, by Marilynne Robinson, and it had this sort of numinous
Speaker 1 quality that I think a lot of her writing does. And so I really like that and recommend it to people.
That's also a piece of fiction.
Speaker 1 If you're looking for philosophy, a lot of my work is in dialogue with Nick Bostrom and his overall corpus.
Speaker 1
And I think that's really, really valuable to engage with. I see.
Cool, cool. All right.
Yeah, Joe, thanks so much for coming on the podcast. This is a lot of fun.
A lot of fun.
Speaker 1 Yeah, thanks for having me. Oh, I'll also say, you know, everything I've said here is just purely my personal opinion.
Speaker 1 I'm not speaking for my employer, not speaking for
Speaker 1
anyone else, just myself. So just keep that in mind.
Cool, cool. And then where can people find your stuff?
Speaker 1 So, if you want to, go over your blog link and then your Twitter link and other things.
Speaker 1 Yep. So my blog is handsandcities.com.
Speaker 1 And my Twitter handle is jkcarlsmith.
Speaker 1
Those are good places to reach me. And then my personal website is josephcarlsmith.com.
Okay. And then where are we going to find your stuff on AI
Speaker 1
and those kinds of things. The stuff on AI is linked from my personal website.
So that's the best, that's the best place to go. All right.
Cool, cool.
Speaker 1
Thanks for watching. I hope you enjoyed that episode.
If you did, and you want to support the podcast, the most helpful thing you can do is share it on social media and with your friends.
Speaker 1
Other than that, please like and subscribe on YouTube and leave good reviews on podcast platforms. Cheers.
I'll see you next time.