AMA ft. Sholto & Trenton: New Book, Career Advice Given AGI, How I'd Start From Scratch

49m

I recorded an AMA! I had a blast chatting with my friends Trenton Bricken and Sholto Douglas. We discussed my new book, career advice given AGI, how I pick guests, how I research for the show, and some other nonsense.

My book, “The Scaling Era: An Oral History of AI, 2019-2025” is available in digital format now. Preorders for the print version are also open!

Watch on YouTube; listen on Apple Podcasts or Spotify.

Timestamps

(0:00:00) - Book launch announcement

(0:04:57) - AI models not making connections across fields

(0:10:52) - Career advice given AGI

(0:15:20) - Guest selection criteria

(0:17:19) - Choosing to pursue the podcast long-term

(0:25:12) - Reading habits

(0:31:10) - Beard deep dive

(0:33:02) - Who is best suited for running an AI lab?

(0:35:16) - Preparing for fast AGI timelines

(0:40:50) - Growing the podcast



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe


Transcript

Speaker 1 Today, this is going to be an Ask Me Anything episode. I'm joined by my friends Trenton Bricken and Sholto Douglas.
You guys do some AI stuff, all right?
You guys do some AI stuff, all right?

Speaker 1 We dabble.

Speaker 1 They're researchers at Anthropic. Other news, I have a book launching today.
It's called The Scaling Era.

Speaker 1 I hope one of the questions ends up being why you should buy this book.

Speaker 1 But we can kill two birds with one stone. But

Speaker 1 okay, let's just get at it. What's the first question that we got to answer? Take us away.

Speaker 1 So I want to ask the softball question that I heard before of why should ordinary people care about this book? Like, why should my mom buy and read the book? Yeah.

Speaker 1 First, let me tell you about the book, what it is.

Speaker 1 So, you know, these last few years, I've been interviewing AI lab CEOs, researchers, people like you guys, obviously, but also scholars from all kinds of different fields, economists, philosophers.

Speaker 1 And

Speaker 1 they've been addressing, I think, what are basically the gnarliest, most interesting, most important questions

Speaker 1 we've ever had to ask ourselves. Like, what is the fundamental nature of intelligence? What will happen when we have billions of extra workers?

Speaker 1 How do you model out the economics of that?

Speaker 1 How do we think about an intelligence that is greater than the rest of humanity combined? Is that even a coherent concept? And so...

Speaker 1 What I'm super delighted with is that with Stripe Press, we made this book where we compiled and curated the best, most insightful snippets across all these interviews.

Speaker 1 And you can read Dario addressing why scaling works.

Speaker 1 And then on the next page is Demis explaining DeepMind's plans for whether they're going to go the RL route and how much of the AlphaZero stuff will play into the next generation of LLMs.

Speaker 1 And on the next page is, of course, you guys going through the technical details of how these models work.

Speaker 1 And then there's so many different fields that are implicated.

Speaker 1 I mean, I feel like AI is one of the most multidisciplinary fields that one can imagine because there's no field, no domain of human knowledge that is not relevant to understanding what a future society of different kinds of beings will look like.

Speaker 1 There's, you know, you'd have like Carl Shulman talk about how the scaling hypothesis shows up in primate brain scaling from chimpanzees to humans.

Speaker 1 On the next page might be an economist like Tyler Cowen explaining why he doesn't expect explosive economic growth and why the bottlenecks will eat all that up.

Speaker 1 Anyways, so that's why your mom should buy this book.

Speaker 1 It's just like it is the distillation of all these different fields of human knowledge applied to the most important questions that humanity is facing right now.

Speaker 1 I do like how the book is sliced up by different topics and across interviews. Yeah.
So it does seem like a nice way to listen to all of the interviews in one digestible way.

Speaker 1 There's two interviews I've done that haven't been released publicly before that are in the book. So one was with Jared Kaplan,

Speaker 1 who's one of your co-founders. And this is another example where it's like, he's like a physicist and he's explaining scaling from this like very mathematical perspective about data manifolds.

Speaker 1 And then on the next page, you have like a totally different perspective. It's like Gwern talking about, you know,

Speaker 1 why can't we just have like distilled, like, why did general intelligence actually evolve in the first place? What is the actual evolutionary purpose of it?

Speaker 1 And it's like page by page, right? You just get these questions addressed.

Speaker 1 Even for me, I mean, like the person who's been on the other end of these conversations, it was actually really cool to like read it and just be like, oh, now I realize how these insights connect to each other.

Speaker 1 Yeah. The only other thing that stood out to me as well is the introduction section.
The only thing that stood out to you.

Speaker 1 Yeah, yeah, that was really the only thing that was noteworthy.

Speaker 1 I just mean in terms of it stood out in accessibility

Speaker 1 is the introduction section and the diagrams for like all the different inputs. that enable you to train the machine learning model.

Speaker 1 Stripe press books are also just beautiful and have these nice like side captions for explaining what parameters are, what a model is, these sorts of things.

Speaker 1 Actually, when we did our episode together, a bunch of people, I don't know if you saw this, independently made these

Speaker 1 like blog posts and Anki cards and shit where they were like explaining the, because we just kind of passed over some things.

Speaker 1 And hopefully we've given a similar treatment to every single interview I've done, where you can read a very technical interview with a lab CEO or something or an engineer or a researcher.

Speaker 1 And then the side is like,

Speaker 1 here's like more context, here's more definitions, here's more commentary.

Speaker 1 And yeah, I feel like it elevated the conversations. So in other words, my parents will finally understand what I do for a job.

Speaker 1 Why would they do it? They got it very well.

Speaker 1 Maybe my parents will.

Speaker 1 All I need to know is that my name's in a book. Yeah.

Speaker 1 You're a co-author. They're like, cool.

Speaker 1 Should we get into the AMA questions? Yes. All right.

Speaker 1 So Brian Krav asks, the issue you raised with Dario and occasionally tweet about relating to models not making connections across disparate topics, some sort of combinatorial attention challenge.

Speaker 1 What are your thoughts on that now? Do you solve it with scale, thinking models, or something else? By the way, so the issue is

Speaker 1 one of the questions I asked Dario is, look, these models have all of human knowledge memorized. And you would think if a human had this much stuff memorized,

Speaker 1 And they were moderately intelligent, they couldn't be making all these connections between different fields. And there are examples of humans doing this, by the way.

Speaker 1 There's Don Swanson or something like this. This guy noticed that

Speaker 1 the way, what happens to your brain after magnesium deficiency is exactly the kinds of, I don't know, structure you see during a migraine.

Speaker 1 So he's like, okay, you take magnesium supplements and we're gonna cure a bunch of migraines. And it worked.

Speaker 1 And there's many other examples of things like this where you just like notice two different connections between pieces of knowledge.

Speaker 1 Why, if these LLMs are intelligent, are they not able to use this unique advantage they have to make these kinds of discoveries?

Speaker 1 I feel a little shy, like me giving answers on AI shit with you guys here. But

Speaker 1 so actually, Scott Alexander addressed this question in one of his AMA threads, and he's like, look, humans also don't have this kind of logical omniscience, right?

Speaker 1 He used the example of, in language, if you really thought about like, why are two words connected? It's like, oh, I understand like why rhyme has the same etymology as this other word.

Speaker 1 If you really thought about, but you just like, don't think about it, right? There's like this combinatorial explosion.

Speaker 1 I don't know if that addresses the fact that we know humans can do this, right? Like the humans have, in fact, done this. And I don't know of a single example of LLMs ever having done it.

Speaker 1 Actually, yeah, what is your answer to this?

Speaker 1 I think my answer at the moment is that the sort of pre-training objective doesn't necessarily, like it imbues you with this like nice, flexible general knowledge about the world, but doesn't necessarily imbue you with the like the skill of making like novel connections or like research.

Speaker 1 The kinds of things that.

Speaker 1 People are trained to do through PhD programs and through like sort of the process of exploring and interacting with the world.

Speaker 1 And so

Speaker 1 I think like at a minimum, you need significant RL in at least similar things to be able to approach making novel discoveries.

Speaker 1 And so I would like to see some early evidence of this as we start to build models that are sort of interacting with the world and trying to make scientific discoveries and sort of like modeling the behaviors that we expect of people in these positions.

Speaker 1 Because I don't actually think we've done that in a

Speaker 1 meaningful or scaled way as a field, so to speak. Yeah, riffing off that with respect to RL, I wonder if models currently just aren't good at knowing what memories they should be storing.

Speaker 1 Like most of their training is just predicting the next word on the internet and remembering very specific facts from that.

Speaker 1 But if you were to teach me something new right now, I'm very aware of my own memory limitations. And so I would try to construct some summary that would stick.

Speaker 1 And models currently don't have the opportunity to do that. Memory scaffolding in general is just very primitive right now.
Right, like Claude Plays Pokemon. Exactly.
Yeah.

Speaker 1 We're like, someone worked on it. It was awesome.
It got far. But

Speaker 1 another excited Anthropic employee then iterated on the memory scaffold and was able to very quickly improve on it. Interesting.

Speaker 1 So, yeah, that's the one. I do also just wonder if models are idiot savants.

Speaker 1 The best analogy might be to Kim Peek. So Kim Peek was born without a corpus callosum, if I recall correctly.

Speaker 1 Each hemisphere of his brain operated quite independently.

Speaker 1 He could read a page of a book. So he'd open a book, there'd be two pages visible.
Each eye would read one of the pages.

Speaker 1 And he had like a perfect encyclopedic memory of like everything he'd ever read.

Speaker 1 But at the same time, he had other debilitations,

Speaker 1 functioning socially, these sorts of things.

Speaker 1 And it's just kind of amazing how good LLMs are at very niche topics, but can totally fail. at other ones still.

Speaker 1 I really want to double-click on this thing of why there's this trade-off between memorization, like, yeah, why it is cutting it off.

Speaker 1 Like, apparently it's sort of, it's connected to this debilitation, but why can't we, like, Wikitext is not that, it's like five megabytes of information. The human brain can store much more.

Speaker 1 So, why does the human brain just not want to memorize these kinds of things? Um, and is it actively pruning? Uh,

Speaker 1 and yeah, I don't know. But we don't know how to do it right now.
We'll do a separate episode. The one thing I'll say on that is like there is another case study of someone with a perfect memory.

Speaker 1 Yeah. So they never forgot anything.
Yeah. But their memory was too debilitating.

Speaker 1 It'd be like your context window for the Transformer is like

Speaker 1 trillions of tokens. And then you spend all your time attending to past things and are too trapped in the details to extract any meaningful, generalizable insights from it.

Speaker 1 Terrence Deacon, whose book you recommended, had this interesting insight about how we learn best when we're children, but we forget literally everything that happened to us when we were children, right?

Speaker 1 We have total amnesia. And adults have this in between where we don't remember exact details, but

Speaker 1 we can still like learn in a pretty decent way. And then LLMs are on the opposite end of this gradient where

Speaker 1 they'll get the exact phrasing of Wikitext down, but they won't be able to generalize in these very obvious ways. Yeah.
A little bit like Gwern's theory, optimize a theory, no? Yeah, yeah, yeah.

Speaker 1 I think I probably got it from that.

Speaker 1 Yeah, yeah. Gwern has definitely had a big influence on all this for me.
That's right.

Speaker 1 I mean,

Speaker 1 I feel like what's underappreciated.

Speaker 1 on the podcast is like we have this like group chat and we also just like meet up a lot in person and just all the offer from the podcast just comes from you and a couple other people just

Speaker 1 feeding me like ideas and nudges and whatever. And then I can just use that as an intuition pump during the conversation.
Yeah, you're not the only one.

Speaker 1 What do you mean? Oh, like I'd benefit immensely from just hearing what everyone else has to say. And

Speaker 1 yeah, it's all regurgitation.

Speaker 1 Yeah, yeah, yeah, yeah.

Speaker 1 Another question? Yes.

Speaker 1 Maybe Rabid Monkey

Speaker 1 asks, imagine you have a 17-year-old brother/slash nephew just starting college. What would you recommend he study given your AGI timelines?

Speaker 1 I don't know, become a podcaster. I feel like that job's still going to be around.

Speaker 1 It's funny because I started computer science and in retrospect, I mean, at the time, I was like, you could have become a software engineer or something. Instead, you became a podcaster.

Speaker 1 It's like kind of an irresponsible career move, but in retrospect, it's like,

Speaker 1 it kind of worked out just as these guys are getting automated.

Speaker 1 I get asked this question all the time. Yes, okay.

Speaker 1 And one answer that I like to give is that you should think about the next couple of years as increasing your individual leverage by like a huge factor every year.

Speaker 1 So, you know, already software engineers will come up and say, you know, I'm two times faster or in new languages, I'm five times faster than I was last year.

Speaker 1 I expect that trend line to continue basically as you sort of go from this model of, well, I'm working with some model that's assisting me on my computer.

Speaker 1 And it's like, like basically a pairing session to I'm managing a small team through to I'm managing like a division or a company basically that is like targeting a task.

Speaker 1 And so I think that deep technical knowledge in fields will still matter in four years.

Speaker 1 Like it absolutely will because you will be in the position of managing dozens or like your sort of your individual management bandwidth will be maxed out by trying to manage like teams of AIs and this kind of thing.

Speaker 1 And maybe AIs, you know, maybe we end up like a truly like singularity world where you have AIs managing AIs and this kind of stuff.

Speaker 1 But I think in a very wide part of the like possibility spectrum, you are managing enormous, like vastly more resources than an individual could command today. Yeah.
Yeah.

Speaker 1 And you should be able to solve so many more things with that. That's right.
And I think like, I would emphasize that this is not just cope.

Speaker 1 Like it genuinely is the case that these models lack the kind of long-term coherence, which is like absolutely necessary for making a successful company.

Speaker 1 Just like getting a fucking office is like kind of complicated, right? It's like you had to deal with all these.

Speaker 1 So you can just imagine that for sector after sector. Like, the economy is really big and really complex, exactly. And so especially if it's, I mean, I don't know the details, but I assume if it's like a data-sparse thing where you kind of just got to know what is the context of what's happening in the sector or something, I feel like you'd be in a good position. Maybe the other thought I have is that it's really hard to, like, plan your career in general, and I don't know what advice that implies. Because I remember being super frustrated. I mean, I was in college, and the reason I was doing the podcast was to figure out what it is I wanted to do. It wasn't the podcast itself.

Speaker 1 And it would go on like 80,000 hours or whatever career advice. And it's just like in retrospect, it was all kind of mostly useless.
And just like, just try doing things.

Speaker 1 I mean, especially with AI, we just like don't know what it's going to, like, it's so hard to forecast what kind of transformations there will be. So try things, do things.

Speaker 1 I mean, it's such vague advice, but I am quite skeptical of career advice in general.

Speaker 1 Well, maybe the like piece of career advice I'm not skeptical of is put yourself close to the frontier because you have a much better vantage point. That's right.

Speaker 1 You can study deep technical things, whether it's computer science or biology or like, and get to the point where you can see what the issues are.

Speaker 1 Because it's actually remarkably obvious at the frontier what the problems are. And it's very difficult to see.
But actually, do you think there is an opportunity? Because

Speaker 1 one of the things people bring up is

Speaker 1 maybe the people who are advancing their career and have all this tacit knowledge will be in a position to be accelerated by AI.

Speaker 1 But you guys four years ago or two years ago, when you were getting discovered or something, that kind of thing where you have a GitHub open issue and you try to solve it.

Speaker 1 Is that just like, that's done? And so the onboarding is much harder. It's still what we look for in hiring.

Speaker 1 I'm in favor of the learn fundamentals, gain useful mental models. But it feels like everything should be done in an AI-native way,
or like top-down instead of bottom-up learning.

Speaker 1 So first of all, learn things more efficiently by using the AI models and then just know where their capabilities are and aren't.

Speaker 1 And I would be worried and skeptical about any subject which prioritizes rote memorization of lots of facts or information. Yeah, yeah, yeah.

Speaker 1 Instead of ways of thinking.

Speaker 1 But if you're always using the AI tools to help you, then you'll naturally just have a good sense for the things that it is and isn't good at. That's right.
Yeah, yeah, yeah.

Speaker 1 Next one.

Speaker 1 What is your strategy, method, or criteria for choosing guests?

Speaker 1 The most important thing is, do I want to spend one to two weeks reading every single thing you've ever written, every single interview you've ever recorded, talking to a bunch of other people about your research?

Speaker 1 Because I get asked by people who are like quite influential often to be like, would you have me on your podcast?

Speaker 1 And more often than not, I say no for two reasons. One is just like.

Speaker 1 Like,

Speaker 1 okay, you're influential or something. It's just like, it's not fundamentally that interesting as an interview prospect.
Not from like, I don't think about the hour that I'll spend with you.

Speaker 1 I think about like the two weeks, because this is my life, right? The research is my life. And I want to have fun while doing it.

Speaker 1 So just like, is this going to be an interesting two weeks to spend? Is it going to help me with my future research or something? And the other is

Speaker 1 big guests don't really matter that much in the, in like, if you just look at what are the most popular episodes or what in the long run helps the podcast grow.

Speaker 1 By far, my most popular guest is Sarah Paine. And she, before I interviewed her, was just like a scholar who was not publicly well known at all.
And I just found her books quite interesting.
And I just found her books quite interesting.

Speaker 1 Same goes with, so my most popular guests are Sarah Paine, and then Sarah Paine, Sarah Paine, Sarah Paine, because I have a lecture series with her.

Speaker 1 That's awesome. And by the way, on a viewer-minute-adjusted basis, I host a Sarah Paine podcast where I occasionally talk about AI.

Speaker 1 And then it's David Reich, who is a geneticist of ancient DNA. I mean, he's like somewhat well-known, but...

Speaker 1 He had a best-selling book, but he's not like, he's not Satya Nadella or Mark Zuckerberg, who are the next people on the list.

Speaker 1 And then again, I think that like pretty soon it's like you guys or Leopold or something. And then you get to the lab CEOs or something.

Speaker 1 So big names just don't matter that much for what I'm actually trying to do. And so it's just like, well, and it's also really hard to predict who's going to be the David Reich or Sarah Paine.

Speaker 1 So just like have fun, talk to whoever you want to spend time researching. And it's a pretty good proxy for what will actually be popular.

Speaker 1 What was the specific moment, if there was one, that you realized that producing your podcast was a viable long-term strategy?

Speaker 1 I think when I was shopping around ad spots for the Mark Zuckerberg episode. And now when I look back on it, it's like

Speaker 1 not in retrospect that mind-blowing, but at the time, I'm like, oh, I could actually hire an editor full-time, or maybe more editors than one.

Speaker 1 And from there, like turning into a real business.

Speaker 1 That's when I, because before I just like didn't, people would tell me like, oh, these other podcasts are making whatever, whatever amount of money. And I'd be like, how? You know,

Speaker 1 I have this running joke with one of my friends that

Speaker 1 I don't know if you've seen me do this, but every time I encounter like a young person who's like, what should I do with my life? I'm like, you got to start a blog.

Speaker 1 You got to be the Matt Levine of AI. You can do this.
It's like a totally empty niche.

Speaker 1 You could, um, and the running joke with them is, they're like, you're like a country bumpkin who's like won the lottery.

Speaker 1 And you go out to everyone and you're like, guys, the scratchers. Get the scratchers.

Speaker 1 I do want to press on that a bit more because your immediate answer to the 17-year-old was to start a podcast. Yeah.
So, like, what niches are there?

Speaker 1 What sort of things would you be excited to see in like new blogs, podcasts? I mean, I wonder if you guys think this too, but I think this, like, Matt Levine of AI

Speaker 1 is like a totally open niche as far as I can tell. And I apologize to those who are trying to fill it.

Speaker 1 I was like, oh, I'm aware of at least one that's trying to do this.

Speaker 1 The other thing I would really emphasize is

Speaker 1 it is really hard to do this based on other people's advice or to to say like, here's a niche I'm

Speaker 1 like, at least I'm trying not to fill like a specific niche. And if you think about any sort of successful new media thing out there, it has two things which are true.

Speaker 1 It's like often not just geared towards one particular topic or interest. And two, it's the most important thing is that it is

Speaker 1 propelled by a single person's vision.

Speaker 1 It's not like a collective or whatever. And so I would just really emphasize, sorry, the thing I really want to emphasize is it can be done.

Speaker 1 Two, you can make a lot of money at it, which is not the most important thing, probably for the kind of person who would succeed at it, but still it's just worth knowing that it's a viable career.

Speaker 1 Three,

Speaker 1 that

Speaker 1 I, yeah, that basically you're going to feel like shit in the beginning where it's like all your early stuff is going to kind of suck.

Speaker 1 Maybe some of it will get appreciated, but

Speaker 1 it seems like bad advice to say still stick through it in case you actually are terrible because some people are terrible, but

Speaker 1 in case you are not, like, just do it, right? Like, what is the three months of blogging on the side really going to cost you?

Speaker 1 And people just don't actually seriously do the thing for long enough to actually get evidence or get the sort of RL feedback on, like, oh, this is how you do it. This is how you frame an argument.

Speaker 1 This is how you make a compelling thing that people will want to read or watch. Blogging is definitely underrated.
I think most of us have.

Speaker 1 So you both had blogs which were relevant. I don't know if they were actually relevant to getting admitted.

Speaker 1 They were like somewhat relevant. Yeah.

Speaker 1 But I think more so that we have all read almost all the blogs that do an in-depth treatises on AI. That's right.
Like if you write something that is high quality, it is almost

Speaker 1 invariably going to be shared around Twitter and read. Oh, this is so underappreciated.
Yeah. So two pieces of evidence.

Speaker 1 I was talking to a very famous blogger you would know, and I was asking him, how often do you discover a new like undiscovered blogger? And he was like, it happens very rarely, like maybe once a year.

Speaker 1 And I asked him, how long after you discover him or her does the rest of the world discover them? And he's like, maybe a week.

Speaker 1 And what that suggests is like, it's actually really efficient. Like, oh, so

Speaker 1 I have some more takes.

Speaker 1 This is the AMA. So

Speaker 1 I believe that slow compounding growth in media is kind of fake.

Speaker 1 Like Leopold's Situational Awareness. It's not like it slowly built an audience over years or something. It was just really good.
Disagree or agree with it.
Disagree or agree with it.

Speaker 1 And if it's good enough,

Speaker 1 literally everybody who matters, and I mean that literally, will read it. I mean, I think it's like hard to zero-shot something like that.
But the fundamental thing to emphasize is
But the fundamental thing to emphasize is

Speaker 1 the compounding growth, at least for me, has been, I feel like I've gotten better.

Speaker 1 And it's not so much that somehow the three years of having 1,000 followers was somehow like a compounding, you know, I don't think it was that. I think it was just like it took a while to get better.

Speaker 1 Yeah, certainly when Leopold posted that, like the next day, it's almost like you can picture it being almost stapled. Not that it was, but it was stapled to walls, so to speak, on Twitter.

Speaker 1 Like, you know, everyone was talking about it. You went to any event for the following week.
Every single person in the entire city was talking about that essay. Yeah.
Yeah.

Speaker 1 It's like Renaissance Florence. That's right.
That's right. That's right.
Yeah. The world is small.
Yeah. What would you say was your first big success?

Speaker 1 I'm trying to think back to when I first found your podcast. I distinctly remember you had your blog post on the Annus Mirabilis and Jeff Bezos retweeted it, I think.
Yeah.
Yeah.

Speaker 1 I'm trying to remember if it was even before that or not. But yeah, what? I'm curious.

Speaker 1 I feel like that was it. Okay.
Yeah. Yeah.
Yeah. I mean, it wasn't something where I'm like,

Speaker 1 it was some big insight that deserved to blow up like that. It was just taking some shots on goal.
They were all like, whatever, insight porn-y. And then one of them,

Speaker 1 I guess, caught the right guy's attention. And yeah, but I think that was it.

Speaker 1 Yeah, that's something else which is underappreciated, which is that a piece of writing doesn't need to have a fundamentally new insight so much as give people a way to express cleanly a set of ideas that they already are like aware of in a like sort of broader way.

Speaker 1 And if it's really crisp and like articulate, then even still, that's very valuable.

Speaker 1 And sorry, the one thing I should emphasize, which I think is maybe the most important thing, is the feedback loop. It's not the compounding growth of the audience.

Speaker 1 I don't even think it's the compounding, like it's me getting more shots on goal in terms of doing the podcast.

Speaker 1 I actually don't think you improve that much by just doing the same thing again and again.

Speaker 1 If there's like no reward signal, it just you'll keep doing whatever you were doing before.

Speaker 1 I genuinely think the most important thing has been the podcast is good enough that it merits me getting to meet people like you guys.

Speaker 1 That I become friends with people like you. You guys teach me stuff.
I produce more good podcasts. So hopefully slightly better.
That helps me meet people in other fields. They teach me more things.

Speaker 1 Like with the China thing recently, I wrote this like blog post about a couple stories about things that happened in China. And that alone has like netted me.

Speaker 1 an amazing China network in the matter of like one blog post, right? And so hopefully if I do an episode on China, we'll be better as a result.

Speaker 1 And hopefully, that happens across field after field. And so, just getting to meet people like you is actually the main sort of flywheel.
Interesting. So, move to San Francisco.
Yes.

Speaker 1 If you're trying to do AI, yeah.

Speaker 1 Next questions. Shall we do?

Speaker 1 Can we do

Speaker 1 a very important question from a jacked Pajeet?

Speaker 1 How much can you bench?

Speaker 1 You can't lie because we don't know the answer.

Speaker 1 At one point, I did bench 225 for 4.

Speaker 1 Now I think I'm probably like

Speaker 1 20 pounds lighter than that or something.

Speaker 1 The reason you guys are asking me this

Speaker 1 is because I've gone lifting with both of you. And I remember Trent and I were doing like...

Speaker 1 pull-ups and bench and then we'd like bench and he'd like throw on another plate or something and then like instead of pull-ups he'd like be cranking out these muscle ups

Speaker 1 it's all technique

Speaker 1 Yeah, so they both bench more than me, but I'm trying my best.

Speaker 1 Ask again in six months. Yeah.
Yeah. Yeah.
Yeah. Exactly.

Speaker 1 What's your favorite history book? There's a wall of them behind you.

Speaker 1 Oh, I mean, obviously the Caro LBJ biographies. Oh, okay.
Yeah.

Speaker 1 Sorry, the main thing I took away from those books is LBJ had this quote that he would tell his debate students.
In his early 20s, he taught debate to these like poor Mexican students in
In his early 20s, he taught debate to these like poor Mexican students in

Speaker 1 Texas. And he used to tell them, if you do everything, you'll win.
And this is an underrated quote. So that's the main thing I took away.

Speaker 1 And you see it through his entire career where there's a reasonable amount of effort, which goes by like 80/20. You do the 20% to get 80% of the effect.

Speaker 1 And then if you go beyond that to get like, oh, no, I'm not just going to do 20%. I'm going to just do the whole thing.
And there's a level even beyond that, which is like,

Speaker 1 this is just an unreasonable use of time, this is going to have no ultimate impact, and you still try doing it anyway.

Speaker 1 You've shared on Twitter that you use Anki. Yeah. Even like a Claude integration. Yeah. Do you do book clubs? Do you use Goodreads? And what are you reading right now? I don't have book clubs. But spaced repetition has just genuinely been a huge uplift in my ability to learn, mostly because... it's not even the long-term impact over years, though I think that is part of it.

Speaker 1 And I do regret all the episodes I did without using spaced repetition cards, because all the insights have just sort of faded away.

Speaker 1 The main thing is if you're studying a complicated subject, at least for me, it's been super helpful to

Speaker 1 consolidate. So it's like, if you don't do it, you feel like a general where you're like, I'm going to wage a campaign against this country.
And then you like climb one hill.

Speaker 1 And then the next day you have to retreat, and then you climb the same hill again.

Speaker 1 There might be a sort of more kosher analogy.
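For anyone curious about the mechanics behind the spaced repetition being discussed here, this is a rough sketch of the kind of interval scheduling tools like Anki use. It's a simplified SM-2-style rule; the `Card` class, the `review` function, and the constants are illustrative assumptions, not Anki's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval_days: float = 1.0   # days until the next review
    ease: float = 2.5            # multiplicative growth factor for the interval

def review(card: Card, remembered: bool) -> Card:
    """Update a card after one review, SM-2 style.

    Each successful recall multiplies the interval by the ease factor,
    so facts you keep remembering are shown exponentially less often.
    A lapse resets the interval to one day and slightly lowers the ease.
    """
    if remembered:
        card.interval_days *= card.ease
    else:
        card.interval_days = 1.0
        card.ease = max(1.3, card.ease - 0.2)  # SM-2 floors the ease at 1.3
    return card

card = Card()
for _ in range(4):       # four successful reviews in a row
    review(card, True)
# interval grows 1 -> 2.5 -> 6.25 -> 15.625 -> 39.0625 days
```

The key property is the exponential back-off: material you keep recalling gets reviewed less and less often, which is what makes the "consolidation" described above cheap enough to sustain across many episodes.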

Speaker 1 Sorry, and the other question was, what am I reading right now? Yeah.

Speaker 1 Oh,

Speaker 1 my friend, Alvaro de Menard, author of Fantastic Anachronism. Can I just pull it up? Actually, it's right here.
Yeah.

Speaker 1 I hope he's okay with me sharing this, but he wrote,

Speaker 1 he made like 100 copies of this translation he did of his favorite Greek poet.

Speaker 1 Cavafy. Hopefully I didn't mispronounce it.

Speaker 1 That one has an inscription for Gwern because that's his copy.

Speaker 1 But it's super delightful. And that's what I've been reading recently.

Speaker 1 Any insights from it so far?

Speaker 1 Poets will hate this framing. I feel like

Speaker 1 poetry is like

Speaker 1 TikTok.

Speaker 1 Where it's like you get this quick vibe of a certain thing, and then you

Speaker 1 swipe, and then you get the next vibe. Swipe. That sort of effect.

Speaker 1 interesting.

Speaker 1 How do you go about learning new things or preparing for an episode? Like, what is that? You mentioned the one to two-week period where you're like deep diving on the person.

Speaker 1 What does that actually look like? Um, it's very much the obvious thing. Like, you read their books, you read their papers.

Speaker 1 If they have colleagues, you try to talk to them to better understand the field. I will also mention that

Speaker 1 all I have to do is ask some questions. And I do think it's a much,

Speaker 1 I think it's much harder to learn a field well enough to be a practitioner than just learn enough to ask interesting questions. But

Speaker 1 yeah, for that, it's very much the obvious thing you'd expect. Based Carl Sagan asks,

Speaker 1 what are your long-term goals and ambitions?

Speaker 1 Yeah, the

Speaker 1 AGI kind of just makes the prospect of a long-term plan harder to articulate, right?

Speaker 1 You know the Peter Thiel quote about what is your 10-year plan and why can't you do it in six months? Like it's especially salient,

Speaker 1 you know, given timelines.

Speaker 1 For the foreseeable future, grow the podcast and do more episodes, maybe more writing. But

Speaker 1 yeah, so we'll see what happens after like 10 years or something. The world might be different enough.

Speaker 1 Yeah. So basically, podcast for now.
Something you've spoken to me about, and particularly when you're trying to hire people for the podcast, was what you wanted to achieve with the podcast. Like,

Speaker 1 in what way do you want the podcast to shape the world, so to speak? Do you have any thoughts on that?

Speaker 1 Or, Because I remember you talking about, I really want people to actually understand AI and like how this might change their lives or

Speaker 1 what we could be doing now to shape the world such that it ends up better. Yeah.

Speaker 1 I don't know. So

Speaker 1 I have

Speaker 1 I have contradictory views on this. On the one end, I do

Speaker 1 I do know that important decisions are being made right now in AI. And I do think,

Speaker 1 I mean, riffing on what we were saying about situational awareness: if you do something really good, it has a very high probability of one-shotting the relevant person. You know, people are just generally reasonable; if you make a good argument, it'll go places. On the other hand, I just think it's very hard to know what should be done. You've got to have the very correct world model, and then you've got to know how, in that world model, the action you're taking is going to have the effect you anticipate. And even in the last week, I've changed my mind on some pretty fundamental things about

Speaker 1 what I think about the possibility of an intelligence explosion or transformative AI, as a result of talking to the Epoch folks.

Speaker 1 Basically, the TLDR is, I want the podcast to just be an epistemic tool for now, because I think it's just very easy to be wrong.

Speaker 1 And so just having a background level of understanding of the relevant arguments is the highest priority. Makes sense.
Yeah. What's your sense? What should I be doing?

Speaker 1 I mean, I think the podcast is awesome.

Speaker 1 And a lot more people should listen to it. And there are a lot more guests I'd be excited for you to interview.
So there's your answer.

Speaker 1 Seems like a pretty good answer for now. Yeah.

Speaker 1 I think making sure that there is a great debate of ideas, not just on AI but in other fields and everything, is incredibly high-leverage and valuable. Yeah.

Speaker 1 How do you groom your beard? It's majestic.

Speaker 1 I don't know what to say. Just genetics.

Speaker 1 I do trim it, but... No beard oil?

Speaker 1 Sometimes I do beard oil.

Speaker 1 How often?

Speaker 1 Once every couple of days. Okay.
That's not sometimes.

Speaker 1 That's pretty often. Do you have different shampoo for your head and your beard? No.
What kind of shampoo do you use? Anti-dandruff.
What kind of shampoo do you use? Anti-dandruff.

Speaker 1 Do you condition that?

Speaker 1 Yeah.

Speaker 1 How often do you shampoo?

Speaker 1 We're giving people the answers that they want.

Speaker 1 Big beard oil.

Speaker 1 Yeah, you can sell some ad slots to different shampoo companies and we can edit it. Maybe we sell them an ad slot.

Speaker 1 Sorry, you had this idea of merch. Do you want to explain this? This is an idea.
Yeah, yeah. So people should react to this.
Someone should make it happen.

Speaker 1 Dwarkesh wants merch, but he doesn't want to admit that he wants it.

Speaker 1 Or he doesn't want to make it himself because that seems tacky.

Speaker 1 So I really want a plain white tee with just Dwarkesh's beard in the center of it. And that's it.
Nothing else.
Nothing else.

Speaker 1 But you were saying it should be like, have a different texture than the rest of the show.

Speaker 1 Oh, so, and I'm just riffing off it here, but maybe a limited edition set can have some of your beard hair actually sewn into the shirt.

Speaker 1 That'd be pretty cool.

Speaker 1 I would pay. I would pay for that.

Speaker 1 I've got like patches all over my beard.

Speaker 1 Depends on how much hair. If it's like one is like in there somewhere versus like the whole thing.
Like, do I have to dry clean it? Can I wash it as like on the delicate setting?

Speaker 1 But really, I think you should get merch. If you want to grow the podcast, which apparently you do, then this is one way to do it.

Speaker 1 Oh, yeah,

Speaker 1 Which historical figure would be best suited to run a Frontier AI lab? This is definitely a question for you guys. Oh, no, I mean, I'm curious what your take is first.

Speaker 1 You've spoken to more of the heads of AI labs than I have.

Speaker 1 I was going to say

Speaker 1 LBJ.

Speaker 1 Sorry, is the question who would be best at running an AI lab, or who would be best for the world?

Speaker 1 What outcome do you want? Because I imagine, it seems like what the best AI lab CEOs succeed at is raising money, building a pipeline,

Speaker 1 setting a coherent vision.

Speaker 1 I don't know how much it matters for the CEO themselves to have good research taste or something, but it seems like their role is more as a sort of emissary to the rest of the world.

Speaker 1 And I feel like LBJ would be pretty good at this. Like just getting the right concessions, making projects move along.
um

Speaker 1 coordinating among different groups, maybe. Oh, Robert Moses. Yeah. Again, not necessarily best for the world, but just in terms of making shit happen. Yeah. I mean, I think best for the world is a pretty important precondition. Oh, right.

Speaker 1 There's a Lord Acton quote that great people are very rarely good people.

Speaker 1 So it's hard to think of a great person in history where I feel like they'd really move the ball forward and I also trust their moral judgment. We're lucky in many senses with the set today. That's right. The set of people today both try and care a lot about the moral side, as well as, yeah, sort of drive the labs forward.

Speaker 1 This is also why I'm skeptical of big grand schemes like nationalization or some public-private partnership or just generally shaking up the landscape too much.

Speaker 1 Because I do think we're in like one of the better...

Speaker 1 I mean the sort of like the difficulty of whether it's alignment or whether it's some kind of deployment

Speaker 1 safety risks, that is just the nature of the universe; it's going to be some level of difficulty no matter what.

Speaker 1 But the human factors, in a lot of the counterfactual universes, I feel like we don't end up with people like this. Like, we could be in a universe where they don't even pay lip service.

Speaker 1 Where this is not an idea that anybody had. They could have an ASI takeover.

Speaker 1 I think we live in a pretty good counterfactual universe. Yeah, we got a good

Speaker 1 set of players. That's right.
That's right. How are you preparing for fast timelines?

Speaker 1 If there's fast timelines, then there will be this six-month period in which the most important decisions in human history are being made.

Speaker 1 And I feel like having an AI podcast during that time might be useful.

Speaker 1 That's basically the plan. Have you made any shorter term decisions

Speaker 1 with regards to like spending or health or anything else? After I interviewed Zuckerberg, my business bank balance was negative 23 cents.

Speaker 1 When the ad money hit, I immediately reinvested it in NVIDIA.

Speaker 1 So that is the

Speaker 1 sorry, but you were asking from a sort of altruistic perspective.

Speaker 1 No, no, just in general. Like, have you changed the way you live at all because of your AGI timelines? I never looked into getting a Roth IRA.

Speaker 1 Um, you brought us Fiji Water before,

Speaker 1 which is in plastic bottles.

Speaker 1 But have you guys changed your lifestyle as a result? Not really, no. I mean, I just work all the time.

Speaker 1 But you're doing that anyway. Or would you not?

Speaker 1 I would probably be going very intensely at whatever thing I'd picked to devote myself to. Yeah.
How about you? I canceled my 401k contributions. Oh, really? Yeah.
How about you? I canceled my 401k contributions. Oh, really? Yeah.
Yeah.

Speaker 1 That felt like a more serious one.

Speaker 1 It's hard for me to imagine a world in which I'm like, have all this money that's just sitting in this account and waiting until I'm 60 and things look so different then.

Speaker 1 But you could be like a trillionaire with your marginal 401k contribution. I guess, but you also can't invest it in like specific things.
And

Speaker 1 I don't know. I might change my mind in the future and can restart it.

Speaker 1 And I've been contributing for a few years now. On a more serious note, one thing I have been thinking about is

Speaker 1 how could you use this money to an altruistic end? And basically, if there's somebody who's up and coming in the field that I know, which is like making content, could I use money to support them?

Speaker 1 And I'm of two minds on this.

Speaker 1 One, there are people who did this for me, and it was counterfactually responsible for me continuing to do the podcast when it just did not make sense; there were like a couple hundred people listening or something.

Speaker 1 I want to shout out Anil Varanasi for doing this, and also Leopold, actually, for the foundation they're running. On the other hand,

Speaker 1 it's like I feel like I wouldn't

Speaker 1 It's the thing that blogger was saying: the good ones you actually do notice. It's hard to find hidden talent.

Speaker 1 Maybe I'm totally wrong about this, but I feel like if I put up a sort of grant application, I'll give you money if you're trying to make a blog...

Speaker 1 I'm actually not sure about how well that would work. There's different things you could do, though.

Speaker 1 Like there's, I'll give you money to move to San Francisco for two months and like, you know, sort of meet people and like sort of get more like context and taste and like feedback on what you're doing.

Speaker 1 And like, it's not so much about the money or time. It's like it's putting them in an environment where they can more rapidly grow.
Like that's something that one could do.

Speaker 1 I mean, you also, I think you do that like quite proactively in terms of you like deliberately introduce people that you think will be be interesting to each other and this kind of stuff. Yeah.
Yeah.

Speaker 1 No, I mean, that's very fair. And I obviously I've benefited a ton from moving to San Francisco.
Yes.

Speaker 1 It's unlikely that I would be doing this podcast, at least on AI to the degree I am if I wasn't here.

Speaker 1 So maybe it's a mistake to judge people based on the quality of their content as it exists now and just throw money at them.

Speaker 1 Not throw money, but give them enough money to move to SF to get caught up in this intellectual milieu and then maybe do something interesting as a result.

Speaker 1 Yeah. The thing that most readily comes to mind is the MATS program for AI research.
And this seems like it's just been incredibly successful at giving people the time, the funding,
And this seems like it's just been incredibly successful at giving people the time, the funding,

Speaker 1 and the

Speaker 1 like, social status and justification

Speaker 1 to do AI safety relevant research with mentors. Oh, and you have a similar program.

Speaker 1 We have the Anthropic Fellows program.

Speaker 1 And what is your,

Speaker 1 I know you're probably selecting for a slightly different thing, but

Speaker 1 I assume it's going to be power law dominated.

Speaker 1 And have you noticed a pattern among the fellows, whether it's the MATS Fellows or your fellows, where it's like, this person made the whole thing worth it? What characteristic? I mean, there have been multiple people who Anthropic and other labs have hired out of this program. Yeah, yeah. So I think the return on investment for it has been massive. And yeah, apparently the fellows, I think there are 20 of them, are like really good. What is the trick to making it work well, or finding that one person? I think it's gotten much better with time, where the early fellows, some of them did good work and got good jobs.

Speaker 1 And so then now later fellows, like the quality bar has just risen and risen and risen. And there are even better mentors now than before.

Speaker 1 So it's this really cool flywheel effect.

Speaker 1 But originally, it was just people who like didn't have the funding or time to make a name for themselves or do ambitious work. So it was kind of like giving them that niche to do it.

Speaker 1 Right, right, right. Seems really key.
Yeah. You can do other things; it doesn't have to be money.
You know, like you could put out ideas for things you'd be really interested in reading.

Speaker 1 That's right. Like promoting.
Yeah, yeah.

Speaker 1 There's something coming there. Okay.
So

Speaker 1 this episode hopefully will launch Tuesday at the same time as the book, by the way, which you can get at stripe.press slash scaling.

Speaker 1 But on Wednesday, which is the day after, hopefully there's something useful for you here. Okay.
Exciting. Yeah.
Any other questions we want to ask?

Speaker 1 The thing I have takes on, which I rarely get asked about, is distribution. Distribution of AI? No, no, sorry.
Like MrBeast-style distribution.
Oh, yeah, yeah, yeah, yeah.

Speaker 1 Where people,

Speaker 1 I think, rightly focus on the content.

Speaker 1 And if that's not up to snuff, I think you won't succeed. But to the extent that somebody is trying to do similar things, the thing they consistently underrate is

Speaker 1 putting the time into getting distribution right.

Speaker 1 I just have random takes about this.

Speaker 1 for example, the most successful thing for my podcast in terms of growth has been YouTube shorts. It's a thing you would never have predicted beforehand.
And, you know, they're like responsible for

Speaker 1 basically

Speaker 1 at least half the growth of the podcast or something.

Speaker 1 I buy that. Yeah.
Why wouldn't you predict it? I mean, I guess there's the contrast of the long-form deep content versus YouTube shorts and stuff.

Speaker 1 But I definitely think they're good hooks. That's good content.
Yeah. And I have like takes on how to write tweets and stuff.
The main intuition being like, write like you're writing to a group chat.

Speaker 1 Yeah.

Speaker 1 Yeah. To a group chat of your friends rather than this like formal or whatever.
I don't know.

Speaker 1 Just like these sort of like. Yeah.
No, I mean, what else comes to mind here? Maybe it's interesting, the difference between like TikTok and YouTube shorts. Oh, yeah.
You never cracked TikTok?

Speaker 1 Yeah. Why not? Like you've tried.
Yeah.

Speaker 1 I mean,

Speaker 1 Have you done everything? No, I haven't. I haven't read these poems.

Speaker 1 Maybe you're like in a bubble bath with like some beard shampoo on.

Speaker 1 Reading poems. That'd be an incredible movie.
I bet you that would go viral. You have to do that now.

Speaker 1 Reading a poem, Uncross Your Legs.

Speaker 1 Last episode, it was the interpretability challenge. Now it's Dwarkesh in a bubble bath.

Speaker 1 We gotta sell the book somehow, you know.

Speaker 1 We literally do it like Margot Robbie. Exactly.
Explaining the CDO thing.

Speaker 1 Yeah.

Speaker 1 So what is scaling?

Speaker 1 And that's how you crack distribution.

Speaker 1 But yeah, no, like when we did our episode, it launched and you were sharing interesting tidbits about how it was doing and the thumbnail you wanted to use and the title.

Speaker 1 And I think I even asked you to share more details because it seemed interesting and cool and subtle things.

Speaker 1 But it seemed like you also kind of just hated it, like playing this game of like really having to optimize all these knobs. So like what I realized, I mean, talent is everything.

Speaker 1 So, I'm really lucky to have

Speaker 1 three to four editors who I'm just like incredibly proud to work with. I don't know how to hire more of them.
Like, they're just so good and self-directed.

Speaker 1 So, honestly, I don't have tips on how to replicate that. Here's how I hired those guys.
So, one of them was a farmer in Argentina. One of them was a freshman master in Sri Lanka.

Speaker 1 One of them was a former editor for one of Mr. Beast's channels.

Speaker 1 The other is a director in Czechoslovakia who makes these AI animations that you've seen in The Notes on China, and he's working on more essays like that.

Speaker 1 So I don't know how to replicate that catch. God, that's a pretty widely cast net, I'm going to be honest.

Speaker 1 But they're all like, God damn. And this was just through your challenges and just tweeting about it.
That's right. Yeah.
So I had a competition to make clips in my podcast.
That's right. Yeah.
So I had a competition to make clips in my podcast.

Speaker 1 I rounded up a couple of them this way.

Speaker 1 Yeah, it's hard to replicate because I've tried. Yeah.
Why do you think this works so well with the video editors? Because you tried a similar approach with your chief of staff. Yeah.

Speaker 1 The difference is with the video editor, I think there is this arbitrage opportunity where there are people, it is fundamentally a sort of, are you willing to work hard and obsess about getting better over time?

Speaker 1 Which all of them go above and beyond on, but you can just find people in other countries who are like,

Speaker 1 and it's not even about the wages. Like I've 10x'd their salaries or something like that.

Speaker 1 It's just about getting somebody who is really digital oriented. And there is this global arbitrage there.

Speaker 1 Whereas with the general manager, by the way, so the person I ended up hiring and who I'm super excited to work with is your childhood best friend,

Speaker 1 Max Hearns. Max is so great.
And he would have plenty of other opportunities. There's not this like weird arbitrage where, you know, you find some farmer in Argentina.

Speaker 1 But you know, it is striking that you were looking for a while, and then

Speaker 1 you mentioned offhand that Max was looking for something new. I genuinely,

Speaker 1 this is going to be like a total

Speaker 1 12-year-old learns about the world kind of question, but I genuinely don't know how big companies hire because I was trying to find this person for a year.

Speaker 1 And I'm really glad about the person I ended up hiring.

Speaker 1 But it was just like, if I need to hire 100 people for a company, let alone 1,000 people, I just do not know how to find people like this at scale. Yeah.

Speaker 1 I mean, I think this is like the number one issue that startup CEOs have hiring. Like it's just relentlessly the number one.

Speaker 1 And the thing I was stunned by is how it didn't seem like my platform helped that much. I got close to a thousand applications across the different rounds of publicizing it that I did.

Speaker 1 And a lot of, I think, really cool people applied. But the person that I ended up hiring was somebody who was just a reference,

Speaker 1 you know, like a mutual friend kind of thing.

Speaker 1 And a couple of other top contenders were also this way. So it's weird.
Like the best people in the world don't want to apply, at least to things like this.
Like the best people in the world don't want to apply, at least to things like this.

Speaker 1 And you just got to seek them out, even if you think you have a public platform or something. Yeah.
Yeah.

Speaker 1 I mean, the job might just be so out of distribution from anything else that people would do. That's right.
Yeah.

Speaker 1 So Tritzia Ray asks, how do you make it on Substack as a newbie writer?

Speaker 1 I think

Speaker 1 if you're starting from scratch, there's two useful hacks. One is podcasting because you don't need to have some super original new take.

Speaker 1 You can just interview people who do, and you can leverage their platform. And two is writing book reviews.

Speaker 1 Again, because you have something to react to rather than having to come up with a unique worldview of your own.

Speaker 1 There's probably other things, and it's really hard to give advice in advance, just try things. But those I think are just like good,

Speaker 1 cool starts. The book reviews is a good suggestion.
I actually use like Gwern's book reviews as a way to recommend books to people.

Speaker 1 By the way, this is a totally undersupplied thing because if anybody has book reviews, Jason Furman is this economist who has like a thousand

Speaker 1 Goodreads reviews. And I probably have visited his Goodreads on 100 independent visits.
Wow. Same with the Gwern book reviews or something, right?

Speaker 1 So book reviews are a sort of very undersupplied thing if you're looking to get started making some kind of content.

Speaker 1 I like that. Yeah.
Cool. Thank you guys so much for doing this.
Yeah, this was fun.

Speaker 1 We'll turn the tables on you again pretty soon. But

Speaker 1 How does it feel being in the hot seat?

Speaker 1 It's nice.

Speaker 1 Nobody ever asks me questions.

Speaker 1 Nobody ever asks how I'm doing.

Speaker 1 Cool. Yeah.
Yeah. Super excited for the book launch.
Thank you. The website's awesome, by the way.
Appreciate it. Oh, yeah.

Speaker 1 Yeah, yeah.

Speaker 1 Stripe.press slash scaling. Yeah.

Speaker 1 Cool. Cool.
Thanks, guys.

Speaker 1 Thanks. Okay.
I hope you enjoyed that episode. So as we talked about, my new book is out.
It's released with Stripe Press. It's called The Scaling Era.

Speaker 1 And it compiles the main insights across these last few years of doing these AI interviews. And I'm super pleased with how it turned out.

Speaker 1 It really elevates the conversations and adds the necessary context.

Speaker 1 And just seeing them all together, even reminded me of many of the most interesting segments and insights that I had myself forgotten. So I hope you check it out.

Speaker 1 Go to the link in the description below to buy it. Separately, I have a Clips channel now on YouTube.
People keep complaining about the fact that I put Clips and the main video on the same channel.

Speaker 1 So request granted. There is a new Clips channel, but please do subscribe to it so we can get it kick-started.
And while you're at it, also make sure to subscribe to the main channel.

Speaker 1 Other than that, just honestly, the most helpful thing you can do is share the podcast.

Speaker 1 If you enjoyed it, just send it on Twitter, put it in your group chats, share it with whoever else you think might enjoy it.

Speaker 1 That's the most helpful thing. If you want to learn more about advertising on future episodes, go to dwarkesh.com/slash advertise.
Okay, see you on the next one.