
AMA ft. Sholto & Trenton: New Book, Career Advice Given AGI, How I'd Start From Scratch
I recorded an AMA! I had a blast chatting with my friends Trenton Bricken and Sholto Douglas. We discussed my new book, career advice given AGI, how I pick guests, how I research for the show, and some other nonsense.
My book, “The Scaling Era: An Oral History of AI, 2019-2025” is available in digital format now. Preorders for the print version are also open!
Watch on YouTube; listen on Apple Podcasts or Spotify.
Timestamps
(0:00:00) - Book launch announcement
(0:04:57) - AI models not making connections across fields
(0:10:52) - Career advice given AGI
(0:15:20) - Guest selection criteria
(0:17:19) - Choosing to pursue the podcast long-term
(0:25:12) - Reading habits
(0:31:10) - Beard deep dive
(0:33:02) - Who is best suited for running an AI lab?
(0:35:16) - Preparing for fast AGI timelines
(0:40:50) - Growing the podcast
Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Full Transcript
Today, this is going to be an Ask Me Anything episode. I'm joined by my friends Trenton Bricken and Sholto Douglas.
You guys do some AI stuff, right? Dabble. They're researchers at Anthropic.
In other news, I have a book launching today. It's called The Scaling Era.
I hope one of the questions ends up being why you should buy this book. But we can kill two birds with one stone.
Okay, let's just get at it. What's the first question that we got to answer? Thanks, man.
So I want to ask the fly ball question that I heard before of why should ordinary people care about this book? Like, why should my mom buy and read the book? Yeah. First, let me tell you about the book, what it is.
So, you know, these last few years, I've been interviewing AI lab CEOs, researchers, people like you guys, obviously, but also scholars from all kinds of different fields, economists, philosophers, and they've been addressing, I think, what are basically the gnarliest, most interesting, most important questions we've ever had to ask ourselves. Like what is the fundamental nature of intelligence? What will happen when we have billions of extra workers? How do we model out the economics of that? How do we think about an intelligence that is greater than the rest of humanity combined? Is that even a coherent concept? And so what I'm super delighted with is that with Stripe Press, we made this book where we compiled and curated the best, most insightful snippets across all these interviews.
And you can read Dario addressing why does scaling work? And then on the next page is Demis explaining DeepMind's plans for whether they're going to go with the RL route and how much the AlphaZero stuff will play into the next generation of LLMs. And on the next page is, of course, you guys going through the technical details of how these models work.
And then there's so many different fields that are implicated. I mean, I feel like AI is one of the most multidisciplinary fields that one can imagine because there's no field, no domain of human knowledge that is not relevant to understanding what a future society of different kinds of beings will look like.
You can have Carl Shulman talk about how the scaling hypothesis shows up in primate brain scaling from chimpanzees to humans. On the next page might be an economist like Tyler Cowen explaining why he doesn't expect explosive economic growth and why the bottlenecks will eat all that up. Anyways, so that's why your mom should buy this book.
It's just the distillation of all these different fields of human knowledge applied to the most important questions that humanity is facing right now.
I do like how the book is sliced up by different topics and across interviews.
Yeah, yeah.
So it does seem like a nice way to listen to all of the interviews in one digestible way.
There are two interviews I've done that haven't been released publicly before that are in the book. So one was with Jared Kaplan, who's one of your co-founders.
And this is another example where it's like, he's like a physicist and he's explaining scaling from this like very mathematical perspective about data manifolds. And then on the next page, you have like a totally different perspective.
It's like Gwern talking about, you know, why did general intelligence actually evolve in the first place? What is the actual evolutionary purpose of it? And it's like page by page, right? You just see these questions get addressed from different angles. Even for me, the person who's been on the other end of these conversations, it was actually really cool to read it and just be like, oh, now I realize how these insights connect to each other.
Yeah, the only other thing that stood out to me as well is the introduction section.
The only thing that stood out to you?
That was really the only thing that was noteworthy. No, I just mean it stood out in terms of accessibility: the introduction section and the diagrams for, like, all the different inputs that enable you to train a machine learning model.
Stripe Press books are also just beautiful and have these nice side captions for explaining what parameters are, what a model is, these sorts of things. Actually, when we did our episode together, a bunch of people, I don't know if you saw this, independently made these blog posts and Anki cards and shit where they were explaining the content, because we just kind of passed over some things.
And hopefully we've given a similar treatment to every single interview I've done, where you can read a very technical interview with a lab CEO, or an engineer or a researcher, and then beside it: here's more context, here's more definitions, here's more commentary. And yeah, I feel like it elevated the conversations.
So in other words, my parents will finally understand what I do for a job. What, do they not? They get it very well.
Maybe my parents will. Your parents will.
All they need to know is that my name's in a book. You're a co-author.
They're like, cool. Should we get into the AMA questions? Let's do it.
So Brian Crave asks about the issue you raised with Dario and occasionally tweet about: if a person had this much stuff memorized and they were moderately intelligent, they couldn't help but make all these connections between different fields. And there are examples of humans doing this, by the way.
There's Don Swanson, or something like this. This guy noticed that what happens to a brain during magnesium deficiency is exactly the kind of structure you see during a migraine.
So he's like, okay, you take magnesium supplements, and we're going to cure a bunch of migraines. And it worked.
And there's many other examples of things like this, where you just notice a connection between two different pieces of knowledge. Why, if these LLMs are intelligent, are they not able to use this unique advantage they have to make these kinds of discoveries? I feel a little shy, like, me giving answers on AI shit with you guys here.
But so actually, Scott Alexander addressed this question in one of his AMA threads, and he's like, look, humans also don't have this kind of logical omniscience, right? The example he gave was in language: if you really thought about why two words are connected, it's like, oh, I understand why rhyme has the same etymology as this other word. But you just don't think about it, right? There's this combinatorial explosion.
I don't know if that addresses the fact that we know humans can do this, right? Humans have in fact done this. And I don't know of a single example of LLMs ever having done it.
Actually, yeah, what is your answer to this?
I think my answer at the moment is that the pre-training objective imbues the model with this nice, flexible, general knowledge about the world, but doesn't necessarily imbue it with the skill of making novel connections, or research: the kinds of things that people are trained to do through PhD programs and through the process of exploring and interacting with the world. And so I think, at a minimum, you need significant RL on at least similar things to be able to approach making novel discoveries. And so I would like to see some early evidence of this as we start to build models that are interacting with the world and trying to make scientific discoveries, modeling the behaviors that we expect of people in these positions.
Because I don't actually think we've done that in a meaningful or scaled way as a field, so to speak. Yeah, riffing off that with respect to RL, I wonder if models currently just aren't good at knowing what memories they should be storing.
Like most of their training is just predicting the next word on the internet and remembering very specific facts from that. But if you were to teach me something new right now, I'm very aware of my own memory limitations.
And so I would try to construct some summary that would stick. And models currently don't have the opportunity to do that.
Memory scaffolding in general is just very primitive right now. Right, like Claude Plays Pokemon.
Exactly, yeah. Someone worked on it. It was awesome, it got far, but then another excited Anthropic employee iterated on the memory scaffold and was able to very quickly improve on it. Interesting.
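(For context: a "memory scaffold" here just means code around the model that decides what gets written down and read back between calls. Below is a minimal sketch of the idea being described, where the model authors its own notes instead of dragging the full history along; the llm function is a hypothetical stand-in for whatever completion API you use, not any particular product's interface.)

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in: plug in any chat-completion API here."""
    raise NotImplementedError

class MemoryScaffold:
    def __init__(self) -> None:
        self.notes: list[str] = []  # long-lived, model-authored memories

    def act(self, observation: str) -> str:
        # Only a window of notes fits in context, so what got written down matters.
        context = "\n".join(self.notes[-20:])
        reply = llm(f"Memory:\n{context}\n\nObservation:\n{observation}\n\nAction:")
        # Ask the model to decide what is worth remembering, i.e. the
        # "construct a summary that would stick" step discussed above.
        note = llm(
            f"Observation:\n{observation}\nAction taken:\n{reply}\n"
            "Write one short note worth keeping for later, or reply 'none'."
        )
        if note.strip().lower() != "none":
            self.notes.append(note.strip())
        return reply
```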
So yeah, that's one. I do also just wonder if models are idiot savants.
The best analogy might be to Kim Peek. So Kim Peek was born without a corpus callosum, if I recall correctly.
Each hemisphere of his brain operated quite independently. He could read a page of a book.
So he'd open a book, there'd be two pages visible. Each eye would read one of the pages.
And he had like a perfect encyclopedic memory of like everything he'd ever read. But at the same time, he had other debilitations, functioning socially, these sorts of things.
And it's just kind of amazing how good LLMs are at very niche topics, but they can still totally fail at other ones. I really want to double-click on this thing of why there's this trade-off between memorization and generalization. Apparently it's connected to this debilitation. But WikiText is not that big; it's like 5 megabytes of information, and the human brain can store much more. So does the human brain just not want to memorize these kinds of things, and is actively pruning? Yeah, I don't know. But we don't have to do it right now; we'll do a separate episode. The one thing I'll say on that is there is another case of someone with a perfect memory.
So they never forgot anything. But their memory was too debilitating.
It'd be like your context window for the Transformer is trillions of tokens, and then you spend all your time attending to past things and are too trapped in the details to extract any meaningful, generalizable insights from it.
Terrence Deacon, whose book you recommended, had this interesting insight about how we learn best when we're children, but we forget literally everything that happened to us when we were children, right? We have total amnesia. And adults have this in-between where we don't remember exact details, but we can still learn in a pretty decent way.
And then LLMs are on the opposite end of this gradient, where they'll get the exact phrasing of WikiText down, but they won't be able to generalize in these very obvious ways. A little bit like Gwern's theory, the optimizer theory, no? Yeah, yeah, yeah.
I think I probably got it from that. Yeah, yeah.
Gwern has definitely had a big influence on all this for me as well. Yeah, yeah, yeah.
I mean, I feel like what's underappreciated on the podcast is we have this group chat, and we also just meet up a lot in person. And just, all the alpha from the podcast comes from you and a couple other people feeding me ideas and nudges and whatever.
And then I can just use that as an intuition pump during the conversation. Yeah, you're not the only one.
What do you mean? Oh, like I benefit immensely from just hearing what everyone else has to say.
Yeah.
It's all regurgitation.
Another question?
Yes.
Maybe Rabid Monkey asks,
imagine you have a 17-year-old brother slash nephew just starting college.
What would you recommend he study given your AGI timelines? I don't know. Become a podcaster? I feel like that job's still going to be around.
It's funny because I studied computer science. And in retrospect, I mean, at the time, you could have become a software engineer or something.
Instead, you became a podcaster. It's kind of an irresponsible career move.
But in retrospect, it's like, it's great. It kind of worked out.
Just as these guys are getting automated. I get asked this question all the time.
Yes. Okay, go ahead.
And one answer that I like to give is that you should think about the next couple of years as increasing your individual leverage by like a huge factor every year. So, you know, already software engineers will come up and say, you know, I'm two times faster or in new languages, I'm five times faster than I was last year.
I expect that trend line to continue, as you go from this model of "I'm working with some model that's assisting me on my computer, basically a pairing session," to "I'm managing a small team," through to "I'm managing a division or a company that is targeting a task." And so I think that deep technical knowledge in fields will still matter in four years. It absolutely will, because you will be in the position of managing dozens; your individual management bandwidth will be maxed out by trying to manage teams of AIs.
Yeah. And this kind of thing.
And maybe AIs, you know, maybe we end up like a truly like singularity world where you have AIs managing AIs and this kind of stuff. But I think in a very wide part of the possibility spectrum, you are managing enormous, vastly more resources than an individual could command today.
Yeah, yeah. And you should be able to solve so many more things with that.
That's right. And I would emphasize that this is not just cope. It genuinely is the case that these models lack the kind of long-term coherence which is absolutely necessary for making a successful company. I mean, even getting a fucking office is kind of complicated, right? You have to deal with all these things.
So you can just imagine that for a sector after sector, like the economy is really big. And really complex.
Exactly. And so especially if it's, I mean, I don't know the details, but I assume if it's a data-sparse thing, where you kind of just got to know what the context is, what's actually happening in the sector or something, I feel like you'd be in a good position.
But the other thought I have is that it's, it's really hard to like plan your career in general. And I don't know what advice that implies.
Cause I remember being super frustrated. I mean, I was in college, and the reason I was doing the podcast was to figure out what it is I want to do. It wasn't the podcast itself. And I would go on, like, 80,000 Hours or whatever for career advice.
And it's just like in retrospect, it was all kind of mostly useless. And just like, just try doing things.
I mean, especially with AI, we just like don't know what it's going to, like it's so hard to forecast what kind of transformations there will be. So try things, do things.
I mean, it's such banal, vague advice, but I am quite skeptical of career advice in general. Well, maybe the like piece of career advice I'm not skeptical of is put yourself close to the frontier because you have a much better vantage point.
That's right. Right? You can study deep technical things, whether it's computer science or biology, and get to the point where you can see what the issues are, because it's actually remarkably obvious at the frontier what the problems are, and very difficult to see otherwise.
But actually, do you think there is an opportunity there? Because one of the things people bring up is that maybe the people who are advanced in their career and have all this tacit knowledge will be in a position to be accelerated by AI.
But for you guys four years ago, or two years ago, when you were getting discovered or something, that kind of thing where you have an open GitHub issue and you try to solve it, is that just done, and so the onboarding is much harder?
It's still what we look for in hiring. I'm in favor of learning the fundamentals and gaining useful mental models.
But it feels like everything should be done in an AI native way or like top down instead of your bottom up learning. So first of all, learn things more efficiently by using the AI models and then just know where their capabilities are and aren't.
And I would be worried and skeptical about any subject which prioritizes rote memorization of lots of facts or information. Yeah, yeah, yeah.
Instead of ways of thinking. Yeah.
But if you're always using the AI tools to help you, then you'll naturally just have a good sense for the things that it is and isn't good at. That's right.
Yeah, yeah, yeah. Next one.
What is your strategy, method, or criteria for choosing guests? The most important thing is, do I want to spend one to two weeks reading every single thing you've ever written, every single interview you've ever recorded, talking to a bunch of other people about your research? Because I get asked by people who are quite influential often to be like, would you have me on your podcast? And more often than not, I say no for two reasons. One is just like, okay, you're influential or something.
It's not fundamentally that interesting as an interview prospect. I don't think about the hour that I'll spend with you.
I think about the two weeks, because this is my life, right? The research is my life and I want to have fun while doing it. So just, is this going to be an interesting two weeks to spend? Is it going to help me with my future research or something? And the other reason is that big guests don't really matter that much, if you just look at what the most popular episodes are or what in the long run helps the podcast grow.
By far, my most popular guest is Sarah Paine. And she, before I interviewed her, was just a scholar who was not publicly well-known at all.
And I just found her books quite interesting. So my most popular guests are Sarah Paine, and then Sarah Paine, Sarah Paine, and Sarah Paine, because I did a lecture series with her.
And by the way, on a viewer-minute-adjusted basis, I host a Sarah Paine podcast where I occasionally talk about AI. And after that, it's David Reich, who is a geneticist of ancient DNA.
I mean, he's somewhat well-known, he had a bestselling book, but he's not Satya Nadella or Mark Zuckerberg, who are the next people on the list. And then again, pretty soon after, it's you guys or Leopold or something. And then you get to the lab CEOs or something. So big names just don't matter that much for what I'm actually trying to do.
And so, well, it's also really hard to predict who's going to be the David Reich or Sarah Paine. So just have fun, talk to whoever you want to spend time researching. And it's a pretty good proxy for what will actually be popular.
What was the specific moment, if there was one, that you realized that producing your podcast was a viable long-term strategy?
I think when I was shopping around ad spots for the Mark Zuckerberg episode, which, you know, now when I look back on it, is not in retrospect that mind-blowing, but at the time I'm like, oh, I could actually hire an editor full-time, or maybe more editors than one. And from there, turn it into a real business.
That's when I realized. Because before, people would tell me, oh, these other podcasts are making whatever amount of money, and I'd be like, how? You know, I have this running joke with one of my friends, I don't know if you've seen me do this.
But every time I encounter a young person who's like, what should I do with my life? I'm like, you've got to start a blog. You've got to be the Matt Levine of AI. You can do this. It's like a totally empty niche. And I have this running joke with them where they're like, you're like a country bumpkin who's won the lottery, and you go around to everyone like, guys, it's the scratch cards. Get the scratch cards.
I do want to press on that a bit more because your immediate answer to the 17-year-old was to start a podcast. Yeah.
So what niches are there? What sort of things would you be excited to see in new blogs, podcasts? I mean, I wonder if you guys think this too, but I think this Matt Levine of AI thing. Absolutely.
It's a totally open niche as far as I can tell. And I apologize to those who are trying to fill it; I'm aware of at least one person who's trying to do this. The other thing I would really emphasize is that it is really hard to do this based on other people's advice, or to say, here's a niche. At least, I'm trying not to name a specific niche.
And if you think about any sort of successful new media thing out there, it has two things which are true. One, it's often not just geared towards one particular topic or interest. And two, the most important thing is that it is propelled by a single person's vision. It's not a collective or whatever.
And so I would just really emphasize, sorry, the things I really want to emphasize are: one, it can be done; two, you can make a lot of money at it, which is not the most important thing, probably, for the kind of person who would succeed at it, but still it's worth knowing that it's a viable career; three, that basically you're going to feel like shit in the beginning, where all your early stuff is going to kind of suck.
Maybe some of it will get appreciated. But it seems like bad advice to say still stick through it in case you actually are terrible because some people are terrible.
But in case you are not, just do it, right? What is three months of blogging on the side really going to cost you? And people just don't actually seriously do the thing for long enough to get evidence, or get the sort of RL feedback of, oh, this is how you do it, this is how you frame an argument, this is how you make a compelling thing that people want to read or watch.
Blogging is definitely underrated. I think most of us have probably had blogs, which are relevant... I don't know if they're actually relevant to getting hired. They were somewhat relevant.
Yeah. But I think more so that we have all read almost all the blogs that do in-depth like treatises on AI.
That's right. If you write something that is high quality, it's almost invariably going to be shared around Twitter and read.
Oh, this is so underappreciated. Yeah.
So, two pieces of evidence. I was talking to a very famous blogger you would know, and I was asking him, how often do you discover a new, undiscovered blogger? And he was like, it happens very rarely, maybe once a year.
And I asked him, how long after you discover him or her does the rest of the world discover them? And he's like, maybe a week. Interesting.
And what that suggests is that it's actually really efficient. Oh, so I have some more takes.
Let's hear it. Let's hear it.
Let's hear it. This is an AMA.
So I believe that slow compounding growth in media is kind of fake. Like Leopold's Situational Awareness. It's not like he had been building an audience for a long time, for years or something. It's just, it was really good. Disagree or agree with it. And if it's good enough, literally everybody who matters, and I mean that literally, will read it.
I mean, I think it's, like, hard to zero-shot something like that. But the fundamental thing to emphasize is the compounding growth, at least for me, has been, I feel like I've gotten better.
And it's not so much that somehow the three years of having 1,000 followers was somehow, like, compounding. You know, I don't think it was that. I think it was just like it took a while to get better.
Yeah. Certainly when Leopold posted that, like the next day, you can almost picture it being stapled, not literally, but stapled to walls, so to speak, on Twitter.
That's right.
You know, everyone was talking about it. You went to any event for the following week, every single person in the entire city was talking about that essay.
Yeah, yeah. Like Renaissance Florence.
That's right, that's right, that's right. Yeah, the world is small.
Yeah, what would you say was your first big success? I'm trying to think back to when I first found your podcast. I distinctly remember you had your blog post on the Annus Mirabilis, and Jeff Bezos retweeted it, I think. I'm trying to remember if it was even before that or not, but yeah, I'm curious.
I feel like that was it. Okay. I mean, it wasn't something where it was some big insight that deserved to blow up like that. It was just taking some shots on goal. They were all, like, whatever, insight-porny, and then one of them I guess caught the right guy's attention.
But I think that's something else which is underappreciated: a piece of writing doesn't need to have a fundamentally new insight so much as give people a way to cleanly express a set of ideas that they're already aware of in a sort of broader way. And if it's really crisp and articulate, then yeah, even still, that's very valuable.
And sorry, the one thing I should emphasize, which I think is maybe the most important thing about the feedback loop: it's not the compounding growth of the audience. I don't even think it's me getting more shots on goal in terms of doing the podcast. I actually don't think you improve that much by just doing the same thing again and again if there's no reward signal; you'll just keep doing whatever you were doing before.
I genuinely think the most important thing has been the podcast is good enough that it merits me getting to meet people like you guys. Then I become friends with people like you.
You guys teach me stuff. I produce more good podcasts.
So hopefully slightly better. That helps me meet people in other fields.
They teach me more things. Like with the China thing recently, I wrote this like blog post about a couple stories about things that happened in China. And that alone has like netted me an amazing China network in the matter of like one blog post, right? And so hopefully if I do an episode on China, it will be better as a result. And hopefully that happens across field after field. And so just getting to meet people like you is actually the main sort of flywheel.
Interesting. So move to San Francisco?
Yes. If you're trying to do AI, yeah.
Next question.
Shall we do... can we do... a very important question. From Jacked Pajit: how much can you bench?
You can't lie because we've got the answer. At one point, I did bench 225 for four.
Now I think I'm probably like 20 pounds lighter than that or something. The reason you guys are asking me this is because I've gone lifting with both of you.
And I remember Trent and I were doing, like, pull-ups and bench. And then we, like, bench, and he'd, like, throw on another plate or something.
And then, like, instead of pull-ups, he'd, like, be cranking out these muscle-ups. It's all technique.
For sure. Yeah, so they both bench more than me.
But I'm trying. Best ask again in, uh, six months. Yeah, yeah, yeah, exactly.
What's your favorite history book? There's a wall of them behind you.
Oh, I mean, obviously the Caro LBJ biographies. Okay. Yeah. Sorry, the main thing I took away from those books is this quote LBJ used to tell his debate students. In his early 20s, he taught debate to these poor Mexican students in Texas.
And he used to tell them, if you do everything, you'll win. I think it's an underrated quote.
So that's the main thing I took away. And you see it through his entire career, where there's a reasonable amount of effort, which, you know, goes by the 80/20 rule: you do the 20% to get 80% of the effect. And then there's going beyond that: oh no, I'm not just going to do the 20%, I'm going to do the whole thing. And there's a level even beyond that, which is like, this is just an unreasonable use of time, this is going to have no ultimate impact, and still trying to do that. Yeah.
You've shared on Twitter, uh, using Anki and even, like, a Claude integration. Yeah. Uh, do you do book clubs? Do you use Goodreads? And what are you reading right now?
I don't have book clubs. But the spaced repetition has just genuinely been a huge uplift to my ability to learn. Mostly because it's not even the long-term impact over years, though I think that is part of it. And I do regret all the episodes I did without using spaced repetition cards, because all the insights have just sort of faded away.
The main thing is, if you're studying a complicated subject, at least for me, it's been super helpful to consolidate. It's like, if you don't do it, you feel like a general where you're like, I'm going to wage a campaign against this country, and then you climb one hill, and then the next day you have to retreat, and then you climb the same hill again. There might be a sort of more kosher analogy.
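(For context: Anki-style spaced repetition schedules each card's next review at geometrically growing intervals, which is what produces the consolidation being described. Here's a simplified sketch of an SM-2-style scheduler, the published algorithm family Anki descends from; the ease update follows the SM-2 formula, while the penalty on a failed card mimics Anki's behavior rather than SM-2 proper.)

```python
def next_review(interval_days: float, ease: float, grade: int) -> tuple[float, float]:
    """grade is a 0-5 self-rating of recall; returns (new_interval, new_ease)."""
    if grade < 3:
        # Failed recall: the card comes back tomorrow and its ease drops
        # (Anki-like; SM-2 proper restarts repetitions with ease unchanged).
        return 1.0, max(1.3, ease - 0.2)
    # Ease-factor update from the published SM-2 formula.
    ease = max(1.3, ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
    # Successful reviews space out geometrically: next interval = last * ease.
    return max(1.0, interval_days * ease), ease

# Example: a new card answered "good" (grade 4) on every review.
interval, ease = 1.0, 2.5
for review in range(1, 5):
    interval, ease = next_review(interval, ease, grade=4)
    print(f"review {review}: see this card again in {interval:.1f} days")
```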
Sorry, and the other question was, what am I reading right now? Yeah. Oh, my friend Alvaro de Menard, author of Fantastic Anachronism.
Can I just pull it up? Actually, it's right here. Yeah. I hope he's okay with me sharing this, but he made, like, 100 copies of this translation he did of his favorite Greek poet. And it's, yeah, Cavafy. Hopefully I didn't mispronounce it. That one has an inscription for Gwern because that's his copy.
But it's super delightful and that's what I've been reading recently. Any insights from it so far? Poets will hate this framing.
I feel like poetry is like, it's like TikTok. Where it's like you get this like quick vibe of a certain thing.
And then you like swipe. And then you get the next vibe, swipe.
Of our... I'm sorry.
That's interesting. How do you go about learning new things or preparing for an episode? Like what is that? You mentioned the one to two week period where you're like deep diving on the person.
What does that actually look like? It's very much the obvious thing. Like you read their books.
You read their papers.
If they have colleagues, you try to talk to them to better understand the field. I will also mention that all I have to do is ask them questions.
And I do think it's much harder to learn a field well enough to be a practitioner than to just learn enough to ask interesting questions. But yeah, for that, it's very much the obvious thing you'd expect.
Based Carl Sagan asks, what are your long-term goals and ambitions?
Yeah, AGI kind of just makes the prospect of the long term harder to articulate, right? You know the Peter Thiel quote about, what is your 10-year plan and why can't you do it in six months? It's especially salient given timelines. For the foreseeable future: grow the podcast, do more episodes, more writing. But yeah, we'll see what happens after, like, 10 years or something. The world might be different enough.
Yeah, so basically podcast for now. Something you've spoken to me about, particularly when you were trying to hire people for the podcast, was what you wanted to achieve with it. In what way do you want the podcast to shape the world, so to speak? Do you have any thoughts on that? Because I remember you talking like, I really want people to actually understand AI and how this might change their lives, or what we could be doing now to shape the world such that it ends up better.
Yeah. I don't know.
So I have, I have contradictory views on this. On the one end, I do know that important decisions are being made right now in AI.
And I do think, I mean, riffing on what we were saying about situational awareness, if you do something really good, it has a very high probability of one-shotting the relevant person. You know, people are just generally reasonable.
If you make a good argument, it'll go places. On the other hand, I just think it's very hard to know what should be done.
It's like, you've got to have the very correct world model, and then you've got to know how, in that world model, the action you're taking is going to have the effect you anticipate. And even in the last week, I've changed my mind on some pretty fundamental things about what I think about the possibility of an intelligence explosion or transformative AI, as a result of talking to the Epoch folks.
Yep. Basically, the TLDR is, I want the podcast to just be an epistemic tool for now, because I think it's just very easy to be wrong.
And so just having a background level of understanding of the relevant arguments is the highest priority.
Makes sense.
Yeah.
What's your sense?
What should I be doing?
I mean, I think the podcast is awesome and a lot more people should listen to it.
And there are a lot more guests I'd be excited for you to interview.
So it seems like a pretty good answer for now.
Yeah.
I think making sure that like there is a great debate of ideas on not just AI, but on like other fields and everything is incredibly high leverage and value. Yeah.
How do you groom your beard? It's majestic. I don't know what to say.
Just genetics. I do trim it, but.
No beard oil? Sometimes I do beard oil. How often?
Once every couple of days.
That's not sometimes.
That's pretty often.
Do you have different shampoo for your head and your beard?
No.
What kind of shampoo do you use?
Anti-dandruff.
Do you condition it?
Yeah.
How often do you shave it?
We're giving people the answers that they want. Big beard oil.
Yeah, you can sell some ad slots to different shampoo companies and we can edit it in. Maybe we sold an ad slot.
Sorry, you had this idea of merch. Do you want to explain this? Yes.
So people should react to this. Someone should make it happen. Dwarkesh wants merch, but he doesn't want to admit that he wants it. Or he doesn't want to make it himself because that seems tacky. Yeah. So I really want a plain white tee with just Dwarkesh's beard in the center of it. And that's it. Nothing else.
But you were saying it should have a different texture than the rest of the shirt.
Oh, so then, really riffing off it, maybe, like, a limited edition set can have some of your beard hair actually sewn into the shirt. That'd be pretty cool.
I would pay. I would pay for that.
Okay. How much? I've got like patches all over my beard.
Depends on how much hair. If it's like one is like in there somewhere versus the whole thing.
Like, do I have to dry clean it? Can I wash it on the delicate setting? But really, I think you should get merch. If you want to grow the podcast, which apparently you do, then this is one way to do it.
If you're adhering to it, it's necessary. Which historical figure would be best suited to run a Frontier AI lab? This is definitely a question for you guys.
No, I mean, I'm curious what your take is first. You've spoken to more of the heads of AI labs than I have.
I was going to say LBJ. Sorry, is the question who would be best at running an AI lab, or who would be best for the world? What outcome do you want? Because it seems like what the best AI lab CEOs succeed at is raising money, building up hype, setting a coherent vision.
I don't know how much it matters for the CEO themselves to have good research taste or something. But it seems like their role is more as a sort of emissary to the rest of the world. And I feel like LBJ would be pretty good at this. Like, just getting the right concessions, making projects move along, coordinating among different groups. Maybe, oh, Robert Moses.
Yeah. Again, not necessarily best for the world, but just in terms of making shit happen.
Yeah. I mean, I think best for the world is a pretty important precondition.
Oh, right. There's a Lord Acton quote that great people are very rarely good people. So it's hard to think of a great person in history where I feel like they'd really move the ball forward and I also trust their moral judgment.
We're lucky in many senses with the set today.
That's right. The set of people today both try and care a lot about the moral side as well as drive the labs forward.
This is also why I'm skeptical of big grand schemes like nationalization or some public-private partnership, or just generally shaking up the landscape too much. Because I do think we're in one of the better counterfactual universes. I mean, the difficulty of it, whether it's alignment or whether it's some kind of deployment safety risk, that is just the nature of the universe; it's going to be some level of difficult no matter what.
But on the human factors, in a lot of the counterfactual universes, I feel like we don't end up with people like this. We could be in a universe where they don't even pay lip service, where an ASI takeover is not an idea that anybody even had. So I tend to think we live in a pretty good counterfactual universe. Yeah, we got a good set of players on the board.
That's right. How are you preparing for fast timelines? If there's fast timelines, then there will be this six-month period in which the most important decisions in human history are being made.
And I feel like having an AI podcast during that time might be useful. That's basically the plan.
Have you made any shorter-term decisions with regards to spending or health or anything else? After I interviewed Zuckerberg, my business bank balance was negative 23 cents.
When the ad money hit, I immediately reinvested it in NVIDIA. So that is the... Sorry, but you were asking from a sort of altruistic perspective?
No, no, just in general. Have you changed the way you live at all because of your AGI timelines?
I never looked into getting a Roth IRA.
He brought us Fiji water before. Which is in plastic bottles. Dwarkesh has changed.
Have you guys changed your lifestyle as a result? Not really, no. I just like work all the time.
Would you be doing that anyways? Or would you not? I would probably be going very intensely at whatever like thing I'd picked to devote myself to. Yeah.
How about you? I canceled my 401(k) contributions this week. Oh, really? Yeah.
Yeah, that felt like a more serious one. It's hard for me to imagine a world in which I'm like, have all this money that's just sitting in this account and waiting till I'm 60 and things look so different then.
You could be like a trillionaire with your marginal 401k contribution. I guess, but you also can't invest it in like specific things.
And I don't know, I might change my mind in the future and can restart it. And I've been contributing for a few years now.
On a more serious note, one thing I have been thinking about is how could you use this money to an altruistic end? And basically, if there's somebody who's up and coming in the field that I know, which is like making content, could I use money to support them? And I'm of two minds on this. One, there are people who did this for me, and it was kind of actually responsible for me continuing to do the podcast when it just like did not make sense as a couple hundred people listening or something.
I want to shout out Anil Varanasi for doing this. And also Leopold, actually, for the funding for me to keep running.
On the other hand, it's the thing about what that blogger was saying: the good ones you actually do notice. It's hard to find a hidden talent. Maybe I'm totally wrong about this, but I feel like if I put up a sort of grant application, like, I'll give you money if you're trying to make a blog, I'm actually not sure how well that would work.
There's different things you could do, though. Like, I'll give you money to move to San Francisco for two months.
And, like, you know, sort of meet people and, like, sort of get more, like, context and taste and, like, feedback on what you're doing. And, like, it's not so much about the money or time.
It's, like, it's putting them in an environment where they can more rapidly grow. Like, that's something that one could do.
I mean, you also, I think you do that like quite proactively in terms of you like deliberately introduce people that you think will be interesting to each other and this kind of stuff. Yeah.
Yeah. No, I mean, that's very fair.
And obviously, I've benefited a ton from moving to San Francisco. Yes.
It's unlikely that I would be doing this podcast at least on AI. Yes.
The degree I am if I wasn't here. So maybe it's a mistake to judge people based on the quality of their content as it exists now and just throw money at them, not throw money, but give them enough money to move to SF to get caught up in this intellectual milieu and then maybe do something interesting as a result.
Yeah. The thing that most readily comes to mind is the MATS program for AI research.
And this seems like it's just been incredibly successful at giving people the time, the funding, and the social status justification to do AI safety relevant research with mentors. Oh, and you have a similar program.
We have the Anthropic Fellows Program. That's right, yeah.
And what is your, I know you're probably selecting for a slightly different thing, but I assume it's going to be power-law dominated. Have you noticed a pattern among them, even with the MATS fellows or your fellows, who is just like, this made the whole thing worth it?
Yeah, I mean, there have been multiple people who Anthropic and other labs have hired out of this program.
Yeah, yeah. So I think the return on investment for it has been massive.
And yeah, apparently the fellows, I think there are 20 of them, are like really good. What is the trick to making it work well or finding that one person? I think it's gotten much better with time where the early fellows, some of them did good work and got good jobs.
And so now with later fellows, the quality bar has just risen and risen and risen. And there are even better mentors now than before. So it's this really cool flywheel effect. But originally it was just people who didn't have the funding or time to make a name for themselves or do ambitious work, so it's kind of giving them that niche to do it.
Right, right, right. Seems really key. Yeah, you can do other things that don't have to be money.
You know, like you could put out ideas for things you'd be really interested in reading.
That's right.
Or like promoting.
Yeah, yeah, yeah.
There's something coming there.
Okay, there we go.
So this episode hopefully will launch Tuesday at the same time as the book.
By the way, which you can get at stripe.press slash scaling.
But on Wednesday, which is the day after, hopefully there's something useful for you here. Okay.
Exciting. Yeah.
Any other questions we want to ask? The thing I have takes on, which I rarely get asked about, is distribution. Distribution of AI? No, no, sorry.
Like MrBeast-style distribution.
Oh, yeah, yeah, yeah. Where people, I think, rightly focus on the content.
And if that's not up to snuff, I think you won't succeed.
But to the extent that somebody is trying to do similar things, the thing they consistently underrate is putting the time into getting distribution right.
I just have random takes about, for example, the most successful thing for my podcast in terms of growth has been YouTube Shorts. It's a thing you would never have predicted beforehand.
And, you know, they're like responsible for like, basically at least half the growth of the podcast or something. I mean, I buy that.
Yeah. Why wouldn't you predict it? Like, I mean, like, yeah, I mean, I guess there's the contrast of like the long form deep content and like YouTube shorts and stuff.
But I definitely think they're good hooks. That's right.
Yeah. Yeah.
And I have like takes on how to write tweets and stuff. The main intuition being like, write like you're writing to a group chat.
Yeah. To a group chat of your friends rather than this like formal whatever.
I don't know. Just like these sort of like.
Yeah, I mean, what else comes to mind here? Maybe it's interesting, the difference between, like, TikTok and YouTube Shorts.
Oh yeah, we've never cracked TikTok.
Yeah, why not? Like, you've tried?
Yeah. I mean, have you done everything? I don't know if you've read these poems. Maybe you're, like, in a bubble bath with, like, some beard shampoo on, reading poems.
That'd be an incredible movie. I bet you that would go viral. You have to do that now. Reading a poem. Uncross your legs.
Last episode it was the interpretability challenge. Now it's Dwarkesh in a bubble bath.
I gotta sell the book somehow, you know? We literally do it like Margot Robbie. Yeah, exactly.
Explaining the CDO, except it's, "So what is scaling?" And that's how you crack distribution.
But yeah, no, when we did our episode, it launched and you were sharing interesting tidbits about how it was doing, and the thumbnail you wanted to use, and the title. And I think I even asked you to share more details, because it seemed interesting and cool, the subtle things. But it seemed like you also kind of just hated it, playing this game of really having to optimize all these knobs.
What I realized, I mean, talent is everything. So I'm really lucky to have three to four editors who I'm like incredibly proud to work with.
I don't know how to hire more of them. Like they're just so good and self-directed.
So honestly, I don't have the tips to how to correct that. I hired those guys.
So one of them was a farmer in Argentina. One of them was a freshman master in Sri Lanka.
One of them was a former editor for one of Mr. Beast's channels.
The other is a director in Czechoslovakia who makes these AI animations that you've seen in the notes on China. And he's working on more essays like that.
So I don't know how to replicate that catch again. God, that's a pretty widely cast net, I'm going to be honest.
Damn. But they're all like, God damn.
And this was just through your challenges and just tweeting about it. That's right. So I had a competition to make clips of my podcast. I rounded up a couple of them this way.
Yeah, it's hard to replicate because I've tried. Yeah.
Why do you think this works so well with the video editors? Because you tried a similar approach with your chief of staff. Yeah.
The difference is, with the video editors, I think there is this arbitrage opportunity. It is fundamentally a sort of, are you willing to work hard and obsess about getting better over time, which all of them go above and beyond on. But you can just find people in other countries, and it's not even about the wages; I've 10x'd their salaries or something like that. It's just about getting somebody who is really detail-oriented, and there is this global arbitrage there. Whereas with the general manager, by the way, so the person I ended up hiring, and who I'm super excited to work with, is your childhood best friend, Max Herons.
Max is so great. And he would have plenty of other opportunities.
There's not this weird arbitrage where you find some farmer in Argentina. But yeah, it is striking that you were looking for a while.
That's right. And then I just kind of mentioned offhand that Max was looking for something new.
I genuinely, this is going to be like a total 12-year-old learns about the world kind of question. But I genuinely don't know how big companies hire.
Because I was trying to find this person for a year. And I'm really glad about the person I ended up hiring.
But it was just like, if I needed to hire 100 people for a company, let alone 1000 people,
I just, like, do not know how to find people like this at scale.
Yeah, I mean, I think this is, like, the number one issue that startup CEOs have: hiring. It's just relentlessly the number one.
And the thing I was stunned by is how it didn't seem like my platform helped that much.
I got like close to 1000 applications across the different rounds of publicizing it that I did. And a lot of, I think, really cool people applied.
But the person that I ended up hiring was somebody who was just a reference, you know, like a mutual friend kind of thing. And a couple of other top contenders were also this way.
So it's weird. Like, the best people in the world don't want to apply, at least to things like this. And you just got to seek them out, even if you think you have a public platform or something.
Yeah. Yeah, I mean, the job might just be so out of distribution from anything else that people would do.
That's right.
So Aditya Ray asks,
how do you make it on Substack as a newbie writer? I think if you're starting from scratch, there's two useful hacks. One is podcasting because you don't need to have some super original new take.
You can just interview people who do and you can leverage their platform. And two is writing book reviews.
Again, because you have something to react to rather than having to come up with a unique worldview of your own. There's probably other things, and it's really hard to give advice in advance; just try things. But those, I think, are just good cold starts.
The book reviews is a good suggestion. I actually use Gwern's book reviews as a way to recommend books to people.
By the way, this is a totally undersupplied thing. Because if anybody has book reviews... Jason Furman is this economist who has like a thousand, you know, Goodreads reviews. And I probably have visited his Goodreads on a hundred independent visits.
Same with the Gwern book reviews or something, right? So book reviews are a sort of very undersupplied thing, if you're looking to get started making some kind of content.
I like that.
Cool. Thank you guys so much for doing this.
Yeah, this was fun. We'll turn the tables on you again pretty soon.
How does it feel being in the hot seat? It's nice. Nobody ever asks me a question.
Nobody ever asks how Dwarkesh is. Yeah, super excited for the book launch.
Thank you. The website's awesome by the way.
I appreciate it. Oh, you have to listen to it.
Yes. stripe.press slash scaling.
Yeah. Cool.
Cool. Thanks guys.
See you later. Thanks.
Okay, I hope you enjoyed that episode. So, as we talked about, my new book is out.
It's released with Stripe Press. It's called The Scaling Era.
And it compiles the main insights across these last few years of doing these AI interviews. And I'm super pleased with how it turned out.
It really elevates the conversations and adds the necessary context. And just seeing them all together even reminded me of many of the most interesting segments and insights that I had myself forgotten.
So I hope you check it out. Go to the link in the description below to buy it.
Separately, I have a Clips channel now on YouTube. People keep complaining about the fact that I put Clips and the main video on the same channel.
So request granted, there is a new Clips channel, but please do subscribe to it so we can get it kickstarted. And while you're at it, also make sure to subscribe to the main channel.
Other than that, just honestly, the most helpful thing you can do is share the podcast. If you enjoyed it, just send it on Twitter, put it in your group chats, share it with whoever else you think might enjoy it.
That's the most helpful thing. If you want to learn more about advertising on future episodes,
go to dwarkesh.com slash advertise.