Mark Zuckerberg – Llama 4, DeepSeek, Trump, AI Friends, & Race to AGI

April 29, 2025 1h 15m

Zuck on:

* Llama 4, benchmark gaming

* Intelligence explosion, business models for AGI

* DeepSeek/China, export controls, & Trump

* Orion glasses, AI relationships, and preventing reward-hacking from our tech.

Watch on YouTube; listen on Apple Podcasts and Spotify.

----------

SPONSORS

* Scale is building the infrastructure for safer, smarter AI. Scale’s Data Foundry gives major AI labs access to high-quality data to fuel post-training, while their public leaderboards help assess model capabilities. They also just released Scale Evaluation, a new tool that diagnoses model limitations. If you’re an AI researcher or engineer, learn how Scale can help you push the frontier at scale.com/dwarkesh.

* WorkOS Radar protects your product against bots, fraud, and abuse. Radar uses 80+ signals to identify and block common threats and harmful behavior. Join companies like Cursor, Perplexity, and OpenAI that have eliminated costly free-tier abuse by visiting workos.com/radar.

* Lambda is THE cloud for AI developers, with over 50,000 NVIDIA GPUs ready to go for startups, enterprises, and hyperscalers. By focusing exclusively on AI, Lambda provides cost-effective compute supported by true experts, including a serverless API serving top open-source models like Llama 4 or DeepSeek V3-0324 without rate limits, and available for a free trial at lambda.ai/dwarkesh.

To sponsor a future episode, visit dwarkesh.com/p/advertise.

----------

TIMESTAMPS

(00:00:00) – How Llama 4 compares to other models

(00:11:34) – Intelligence explosion

(00:26:36) – AI friends, therapists & girlfriends

(00:35:10) – DeepSeek & China

(00:39:49) – Open source AI

(00:54:15) – Monetizing AGI

(00:58:32) – The role of a CEO

(01:02:04) – Is big tech aligning with Trump?

(01:07:10) – 100x productivity



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe


Full Transcript

All right, Mark, thanks for coming on the podcast again. Yeah, happy to do it.
Good to see you. You too.
Last time you were here, you had launched Llama 3. Yeah.
Now you've launched Llama 4. Well, the first version.
That's right. What's new? What's exciting? What's changed? Oh, well, I mean, the whole field's so dynamic.
So, I mean, I feel like a ton has changed since the last time that we talked. Meta AI has almost a billion people using it now monthly.
So that's pretty crazy. And you know, I think that this is going to be a really big year on all of this, especially once you start getting the personalization loop going, which we're just starting to build in now really, from both the context that all the algorithms have about what you're interested in from feed and all your profile information, all the social graph information, but also just what you're interacting with the AI about.
I think that's just going to be kind of the next thing that's going to be super exciting. So really big on that.
The modeling stuff continues to make really impressive advances too, as you know. The Llama 4 stuff, I'm pretty happy with the first set of releases. We announced four models and we released the first two, the Scout and Maverick ones, which are kind of like the mid-sized models, mid-sized to small. Actually, the most popular Llama 3 model was the 8-billion-parameter model, so we've got one of those coming in the Llama 4 series too.
Our internal code name for it is Little Llama. But that's coming probably over the coming months.
But the Scout and Maverick ones, I mean, they're good. They're some of the highest intelligence per cost that you can get of any model that's out there, natively multimodal, very efficient, run on one host, designed to just be very efficient and low latency for a lot of the use cases that we're building for internally.
And, you know, that's our whole thing. We basically build what we want, and then we open source it so other people can use it, too.
So I'm excited about that. I'm also excited about the behemoth model, which is coming up.
That's going to be our first model that is sort of at the frontier. I mean, it's like more than two trillion parameters.
So, I mean, as the name says, it's quite big. We're kind of trying to figure out how we make that useful for people. It's so big that we've had to build a bunch of infrastructure just to be able to post-train it ourselves. And we're trying to wrap our heads around how the average developer out there is going to be able to use something like this, and how we make it useful for distilling into models that are of a reasonable size to run, because you're obviously not going to want to run something like that in a consumer model. But yeah, there's a lot to go.
I mean, as you saw with the Lama 3 stuff last year, the initial Lama 3 launch was exciting. And then we just kind of built on that over the year.
3.1 was when we released the 405-billion-parameter model. 3.2 was when we got all the multimodal stuff in.
So we basically have a roadmap like that for this year too. So a lot going on.
I'm interested to hear more about it. There's this impression that the gap between the best closed source and the best open source models has increased over the last year. I know the full family of Llama 4 models is not out yet, but Llama 4 Maverick is 35th on Chatbot Arena.

And on a bunch of major benchmarks, it seems like o4-mini or Gemini 2.5 Flash are beating Maverick, which is in the same class. What do you make of that impression? Yeah, well, okay, there's a few things.
I actually think that this has been a very good year for open source overall, right? If you go back to where we were last year, what we were doing with Llama was the only real super innovative open source model. Now you have a bunch of them in the field.

And I think in general, the prediction that this would be the year where open source generally overtakes closed source as the most used models out there is generally on track to be true. I think the thing that's been sort of an interesting surprise, positive in some ways, negative in others, but I think overall good, is that it's not just Llama.
There are a lot of good ones out there. So I think that that's quite good.
Then there's the reasoning phenomenon, which you're basically alluding to in talking about o3 and o4 and some of the other models. I do think that there's this specialization happening, where if you want a model that is the best at math problems or coding or different things like that, these reasoning models, with the ability to consume more test-time or inference-time compute in order to provide more intelligence, are a really compelling paradigm.

And we're going to do that too: we're building a Llama 4 reasoning model, and that'll come out at some point.

But for a lot of the things that we care about, latency and good intelligence per cost are actually much more important product attributes. If you're primarily designing for a consumer product, people don't necessarily want to wait half a minute for it to think through the answer. If you can provide an answer that's generally quite good in half a second, that's great.
And that's a good trade-off. So I think that both of these are going to end up being important directions.
I am optimistic about integrating the reasoning models with kind of the core language models over time. I think that's sort of the direction that Google has gone in with some of the more recent Gemini models.
And I think that's really promising. But I think that there's just going to be a bunch of different stuff that goes on.
I mean, you also mentioned the whole Chatbot Arena thing, which I think is interesting, and it goes to this challenge around how you do benchmarking: how do you know what models are good for which things? One of the things that we've generally tried to do over the last year is anchor more of our models in our Meta AI product North Star use cases. The issue with both open source benchmarks and any given thing like the LM Arena stuff is that they're often skewed toward a very specific set of use cases, which are often not actually what any normal person does in your product.

The portfolio of things that they're trying to measure is often weighted differently from what people care about in any given product. And because of that, we've found that trying to optimize too much for that stuff has often led us astray, and actually not led towards the highest-quality products and the most usage and best feedback within Meta AI as people use our stuff.

So we're trying to anchor our North Star in basically the product value that people report to us, what they say that they want, and what their revealed preferences are, using the experiences that we have. Sometimes these things don't quite line up.
And I think that a lot of them are quite easily gameable, right? On the Arena, you'll see stuff like Sonnet 3.7, which is a great model, and it's not near the top. And it was relatively easy for our team to tune a version of Llama 4 Maverick that basically was way at the top. Whereas the one that we released, the pure model, actually has no tuning for that at all, so it's further down.

So I think you just need to be careful with some of the benchmarks, and we're going to index primarily on the products.

Do you feel like there is some benchmark which captures what you see as the North Star of value to the user, which can be objectively measured between the different models, where you're like, I need Llama 4 to come out on top on this?

Well, I mean, our benchmark is basically user value in Meta AI, right? So it doesn't compare other models. Although we might be able to, because we might be able to run other models in that and tell. And I think that's one of the advantages of open source: you have a good community of folks who can poke holes, like, okay, where is your model not good, and where is it good? But I think the reality at this point is that all of these models are optimized for slightly different mixes of things.
I mean, everyone is trying, I think, to go towards the same thing. All the leading labs are trying to create general intelligence, right? Or superintelligence, whatever you call it. Basically, AI that can lead towards a world of abundance, where everyone has these superhuman tools to create whatever they want, and that dramatically empowers people and creates all these economic benefits.

I think that, however you define it, that's kind of what a lot of the labs are going for. But there's no doubt that different folks have optimized towards different things. I think the Anthropic folks have really focused on coding and agents around that. The OpenAI folks, I think, have gone a little more towards reasoning recently.
And I think that there is a space, which if I had to guess will probably end up being the most used one: something that is quick, very natural to interact with, very natively multimodal, and that fits throughout your day into the ways that you want to interact with it. I think you got a chance to play around with the new Meta AI app that we're releasing. One of the fun things that we put in there is the demo for the full-duplex voice. And it's early, right? I mean, there's a reason why we haven't made that the default voice model in the app. But there's something about how naturally conversational it is that I think is just really fun and compelling. And I think being able to mix that in with the right personalization is going to lead towards a product experience where, I would basically just guess, you go forward a few years and we're just going to be talking to AI throughout the day about different things that we're wondering. You'll have your phone. You'll talk to it on your phone, you'll talk to it while you're browsing your feed apps. It'll give you context about different stuff.
It'll be able to answer questions. It'll help you as you're interacting with people in messaging apps. Eventually, I think, we'll walk through our daily lives and have glasses or other kinds of AI devices and just be able to seamlessly interact with it all day long. So I think that's kind of the North Star. Whatever the benchmarks are that lead towards people feeling like the quality is what they want to interact with, that I think is actually the thing that is ultimately going to matter the most to us.

I got a chance to play around with both Orion and also the Meta AI app, and the voice mode was super smooth.
It was quite impressive. On the point of what the different labs are optimizing for, to steelman their view: I think a lot of them think that once you fully automate software engineering and AI research, you can kick off an intelligence explosion, where you have millions of copies of these software engineers replicating the research that happened between Llama 1 and Llama 4.
That scale of improvement, again, in a matter of weeks or months rather than years. And so it really matters to just have closed the loop on the software engineer, and then you can be the first to ASI.
What do you make of that? Well, I mean, I personally think that's pretty compelling. And that's why we have a big coding effort too.
I mean, we're working on a number of coding agents inside Meta. Because we're not really an enterprise software company, we're primarily building them for ourselves.

So again, we go for the specific goal: we're not trying to build a general developer tool, we're trying to build a coding agent and an AI research agent that advances Llama research specifically.
And it's like just fully kind of plugged into our tool chain and all this. So I think that that's important.
And I think that's going to end up being an important part of how this stuff gets done. I would guess that sometime in the next 12 to 18 months, we'll reach the point where most of the code that's going towards these efforts is written by AI. And I don't mean autocomplete. Today you have good autocomplete: you start writing something, and it can complete the section of code. I'm talking more like: you give it a goal, it can run tests, it can improve things, it can find issues, and it writes higher-quality code than the average very good person on the team already. I think that's going to be a really important part of this for sure, but I don't know if that's the whole game. I mean, I think that's going to be a big industry and an important part of how AI gets developed, but I think that there's still more to it.
Look, I guess one way to think about this is that this is a massive space, right? So I don't think that there's just going to be one company with one optimization function that serves everyone as best as possible. I think that there are a bunch of different labs that are going to be doing leading work in different domains.
Some are going to be more kind of enterprise focused or coding focused. Some are going to be more productivity focused.
Some are going to be more social or entertainment focused. Within the assistant space, I think there are going to be some that are much more kind of informational or productivity.
Some are going to be more companion focused. It's going to be a lot of the stuff that's just like fun and entertaining and like shows up in your feed.
and i think that that's so i think that there's just like a huge amount of space and part of what's fun about this is like it's like going towards this agi future there are a bunch of common threads for what needs to get invented but there are a lot of things at the end of the day that need to get created and i i think that that's, I think you'll start to see a little more specialization

between the groups, if I had to guess.

It's really interesting to me that you basically agree with the premise that there will be an intelligence explosion and something like superintelligence on the other end. But if that's the case, and tell me if I'm misunderstanding you, why even bother with personal assistants and whatever? Why not just get to superhuman intelligence first and then deal with everything else later?

Well, I think that's just one aspect of the flywheel, right? Part of what I generally disagree with on the fast-takeoff thing is that it takes time to build out physical infrastructure. If you want to build a gigawatt cluster of compute, that is just going to take some time. It takes NVIDIA a bunch of time to stabilize their new generation of systems, and then you need to figure out the networking around it, and then you need to build the building, you need to get permitting, you need to get the energy, and then, whether it's gas turbines or green energy, there's a whole supply chain of that stuff. So there's a lot of this, and we talked about it a bunch the last time that I was on the podcast with you.
And I think some of these are just physical-world, human-time things where, as you start getting more intelligence in one part of the stack, you'll basically just run into a different set of bottlenecks. I mean, that's sort of the way that engineering always works.
It's like you solve one bottleneck, you get another bottleneck. Another bottleneck in the system, or another ingredient that's going to make this work well, is people getting used to the system, learning it, and having a feedback loop with it. So these systems don't tend to be the type of thing where something just shows up fully formed and then people magically know how to use it, and that's the end.
I think that there is this co-evolution that happens where people are learning how to best use these AI assistants. On the same side, the AI assistants are learning what those people care about.
And the developers of those AI assistants are able to make the kind of AI assistants better. And then you're also building up this base of context.
So now you wake up and you're like a year or two into it. And now the AI assistant can reference things that you talked about a couple of years ago.
And that's pretty cool, but you couldn't do that if you just launched the perfect thing on day one. There's no way that it could reference what you talked about two years ago if it didn't exist two years ago. So I guess my view is: there's this huge intelligence growth.

There's a very rapid curve on the uptake of people interacting with the AI assistants, and the learning feedback and data flywheel around that. And then there is also the build-out of the supply chains, infrastructure, and regulatory frameworks to enable the scaling of a lot of the physical infrastructure.

But I think at some level, all of those are going to be necessary, not just the coding piece. One specific example of this that I think is interesting: even if you go back a few years ago, we had a project, I think it was on our ads team, to automate ranking experiments, right?

That's a pretty constrained environment. It's not like writing open-ended code. It's basically: look at the whole history of the company, every experiment that any engineer has ever done in the ad system, look at what worked, what didn't, and what the results of those were, and formulate new hypotheses for different tests that we should run that could improve the performance of the ad system.

And what we basically found was that we were bottlenecked on compute to run tests, not on the number of hypotheses. It turns out, even with just the humans we have right now on the ads team, we already have more good ideas to test than we have either compute or cohorts of people to test them with. Because even if you have three and a half billion people using your products, each test needs to be statistically significant, so it needs some number of, whatever it is, hundreds of thousands or millions of people. There's only so much throughput that you can get on testing through that. So we're already at the point, even with just the people we have, that we can't really test everything that we want.

So just being able to generate more things to test is not necessarily going to be additive. We need to get to the point where the average quality of the hypotheses the AI is generating is better than everything above the line that we're actually able to test, what the best humans on the team have been able to come up with, before it'll even be marginally useful.
So we'll get there, I think pretty quickly. But it's not like: okay, cool, the thing can write code, and all of a sudden everything is just improving massively. There are these real-world constraints. First it needs to be able to do a reasonable job, then you need to have the compute and the people to test. And then over time, as the quality creeps up: I don't know, are we here in five or ten years, and no set of people can generate a hypothesis as good as the AI system? I don't know, maybe, right? Then I think in that world, obviously, that's going to be how all the value is created. But that's not the first step.
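For intuition on the statistical-significance constraint he's describing, here's the standard back-of-envelope per-arm sample-size calculation for a two-proportion A/B test. The numbers are illustrative, not Meta's:

```python
from math import ceil, sqrt
from statistics import NormalDist

def required_sample_size(base_rate: float, mde: float,
                         alpha: float = 0.05, power: float = 0.8) -> int:
    # Per-arm sample size to detect an absolute lift `mde` over
    # `base_rate` at significance `alpha` with the given power.
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p1, p2 = base_rate, base_rate + mde
    p_bar = (p1 + p2) / 2
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / mde ** 2
    return ceil(n)

# Detecting a 0.1 percentage point absolute lift on a 2% base rate
# needs roughly 315,000 users per arm.
print(required_sample_size(0.02, 0.001))
```

The point of the arithmetic: small effects on already-optimized systems need hundreds of thousands or millions of users per experiment, so test throughput, not the supply of ideas, caps the number of simultaneous experiments.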
Publicly available data is running out. So major AI labs like Meta, Google DeepMind, and OpenAI all partner with Scale to push the boundaries of what's possible.
Through Scale's Data Foundry, major labs get access to high-quality data to fuel post-training, including advanced reasoning capabilities. Scale's research team, SEAL, is creating the foundations for integrating advanced AI into society through practical AI safety frameworks and public leaderboards around safety and alignment.

Their latest leaderboards include Humanity's Last Exam, EnigmaEval, MultiChallenge, and VISTA, which test a range of capabilities from expert-level reasoning to multimodal puzzle solving to performance on multi-turn conversations. Scale also just released Scale Evaluation, which helps diagnose model limitations.

Leading frontier model developers rely on Scale Evaluation to improve the reasoning capabilities of their best models. If you're an AI researcher or engineer and you want to learn more about how Scale's Data Foundry and research lab can help you go beyond the current frontier of capabilities, go to scale.com/dwarkesh.
So if you buy this view that this is where intelligence is headed, the reason to be bullish on Meta is obviously that you have all this distribution, which you can also use to learn more things that can be useful for training. You mentioned the Meta AI app now has a billion active users.
Not the app. The app is a standalone thing that we're just launching now.
I think it's fun for people who want to use it. It's a cool experience.
We can talk about that. We're kind of experimenting with some new ideas in there that I think are novel and worth talking through.
But I'm talking mostly about our apps. Meta AI is actually most used in WhatsApp.
Got it. So Meta AI in WhatsApp is mostly used outside of the US. We just passed 100 million people in the US, but WhatsApp is not the primary messaging system there; iMessage is. Yeah. So I think people in the US probably tend to underestimate Meta AI use somewhat. But it's also part of the reason why the standalone app is going to be so important: the US is, for a lot of reasons, one of the most important countries, and the fact that WhatsApp is the main way people are using Meta AI, while it's not the main messaging system in the US, means that we need another way to build a first-class experience that's in front of people.

And I guess, to finish the question, the bearish case would be that if the future of AI is less about just answering your questions and more about being a virtual coworker, it's not clear how Meta AI inside of WhatsApp gives you the relevant training data to make a fully autonomous programmer or remote worker.
So yeah, in that case, does it not matter that much who has more distribution right now with LLMs?

Well, again, I just think that there are going to be different things, right? If you were sitting at the beginning of the development of the internet, and it's like, well, what's going to be the main internet thing? Is it going to be knowledge work, or is it going to be massive consumer apps? I don't know, you get both, right? You don't have to choose one. The world is big and complicated. And does one company build all that stuff? I think normally the answer is no.

But yeah, to your question, people do not code in WhatsApp for the most part. And I don't foresee that people starting to write code in WhatsApp is going to be a major use case.
Although I do think that people are going to ask AI to do a lot of things that result in the AI coding without them necessarily knowing it. So that's a separate thing.
But we do have a lot of people who are writing code at Meta, and they use Meta AI. We have this internal thing that we call MetaMate, and a number of different coding and AI research agents that we're building around that. And that has its own feedback loop, and I think it can get good for accelerating those efforts. But again, I just think that there are going to be a bunch of things.
I think AI is almost certainly going to unlock this massive revolution in knowledge work and code. I also think it's going to be kind of the next generation of search and how people get information and do more complex information tasks.
I also think it's going to be fun. I think people are going to use it to be entertained.
And, you know, a lot of the internet is memes and humor, right? We have this amazing technology at our fingertips, and it is sort of amazing, and kind of funny when you think about it, how much human energy just goes towards entertaining ourselves, pushing culture forward, and finding humorous ways to explain cultural phenomena that we observe. And I think that's almost certainly going to be the case in the future. If you look at the evolution of things like Instagram and Facebook: if you go back 10, 15, 20 years ago, it was text.

Then we all got phones with cameras, and most of the content became photos.

Then the mobile networks got good enough that if you wanted to watch a video on your phone, it wasn't just buffering. So that got good.

So over the last 10 years, most of the content has moved towards video; at this point, most of the time spent in Facebook and Instagram is video. But, I don't know, do you think in five years we're just going to be sitting in our feeds consuming video? No, it's going to be interactive. You'll be scrolling through your feed, and there will be content that, I don't know, maybe looks like a Reel to start, but then you talk to it or interact with it, and it talks back, or it changes what it's doing, or you can jump into it like a game and interact with it.

And that's all going to be AI, right? So I guess my point is there's just all these different things, and we're ambitious, so we're working on a bunch of them. But I don't think any one company is going to do all of it.
Okay. So on this point of AI-generated content and AI interactions: already, people have meaningful relationships with AI therapists, AI friends, maybe more. And this is just going to get more intense as these AIs become more unique, more personable, more intelligent, more spontaneous, more funny, and so forth. People are going to have relationships with AIs. How do we make sure that these are healthy relationships?

Well, I think there are a lot of questions that you only really can answer as you start seeing the behaviors. Probably the most important thing upfront is just to ask that question and care about it at each step along the way.
But I think also being too prescriptive upfront and saying we think these things are not good often cuts off value, right? Because I don't know, people use stuff that's valuable for them. I mean, one of my core guiding principles in designing products is like, people are smart, right? They know what is valuable in their lives.
You know, every once in a while, something bad happens in a product, and you want to make sure you design your products well to minimize that. But if you think that something someone is doing is bad, and they think it's really valuable, most of the time, in my experience, they're right and you're wrong. You just haven't come up with the framework yet for understanding why the thing they're doing is valuable and helpful in their life. So that's kind of the main way I think about it.
I do think that people are going to use AI for a lot of these social tasks. Already, one of the main things we see people using the AI for is talking through difficult conversations they need to have with people in their lives. It's like: okay, I'm having this issue with my girlfriend, help me have this conversation. Or: I need to have this hard conversation with my boss at work, how do I have that conversation? That's pretty helpful.
And then I think as the personalization loop kicks in, and the AI just starts to get to know you better and better, that will just be really compelling. You know, one thing from working on social media for a long time: there's a stat that I always think is crazy. The average American has, I think, fewer than three friends, three people they'd consider friends. And the average person has demand for meaningfully more.

I think it's like 15 friends or something. I guess there's probably some point where you're like, all right, I'm just too busy, I can't deal with more people. But the average person wants more connection than they have. So there's a question people ask about this stuff: okay, is this going to replace in-person connections or real-life connections? And my default is that the answer to that is probably no. I think there are all these things that are better about physical connections, when you can have them.

But the reality is that people just don't have as much connection as they'd like, and they feel more alone, a lot of the time, than they would like. So for a lot of these things that today might have a little bit of a stigma around them, I would guess that over time we will find the vocabulary as a society to articulate why they are valuable, why the people doing them are rational for doing them, and how they are adding value to their lives.

But also, the field is very early. There are a handful of companies doing virtual therapists, and there's virtual girlfriend type stuff, but it's very early. The embodiment in these things is pretty weak. A lot of them, you open it up, and it's just an image of the therapist or the person you're talking to. Sometimes there's some very rough animation, but it's not real embodiment. You've seen the stuff we're working on in Reality Labs, where you have the Codec Avatars, and it feels like it's a real person. I think that's where it's going. You'll basically be able to have an always-on video chat with the AI. And the gestures are important too: more than half of communication, when you're actually having a conversation, is not the words you speak, it's all the nonverbal stuff.
Yeah. I did get a chance to check out Orion the other day, and I thought it was super impressive.
And I'm mostly optimistic about the technology, just because generally I'm, as you mentioned, libertarian about this: if people are doing something, I probably think it's good for them. Although I actually don't know if it's the case that somebody using TikTok would say they're happy with how much time they're spending on TikTok.
So I'm mostly optimistic about it, also in the sense that if we're going to be living in this future world of AGI, then in order to keep up, humans need to be upgrading our capabilities as well with tools like this. And just generally, there can be more beauty in the world if you can see Studio Ghibli everywhere or something.

But I was worried that one of the flagship use cases your team showed me was: I'm sitting at the breakfast table, and on the periphery of my vision there's just a bunch of Reels scrolling by. Maybe in the future, my girlfriend is on the other side of the screen or something.
And so I am worried that we're just removing all the friction between us and getting totally reward-hacked by our technology. How do I know this is not what ends up happening in five years?

I mean, again, I think people have a good sense of what they want. That experience you saw was a demo just to show multitasking and holograms, right? So I agree: I don't think the future is stuff that's trying to compete for your attention in the corner of your vision all the time. I don't think people would like that too much. So one of the things we're really mindful of as we're designing these glasses is that probably the number one thing glasses need to do is get out of the way and be good glasses, right?

And as an aside, I think that's part of the reason why the Ray-Ban Meta product has done so well: it's great for listening to music, taking phone calls, and taking photos and videos. And the AI is there when you want it.

But when you don't, it's a great, good-looking pair of glasses that people like, and it kind of gets out of the way. I would guess that's going to be a very important design principle for the augmented reality future. The main thing that I see here is: I think it's kind of crazy that, for how important the digital world is in all of our lives, the only way we can access it is through these physical digital screens. You have a phone, you have your computer, you can put up a big TV.

It's this huge physical thing. It just seems like we're at the point with technology where the physical and the digital world should really be fully blended.

And that's what holographic overlays allow you to do. But I agree.
I think a big part of the design principles around that are going to be, okay, you'll be interacting with people and you'll be able to bring digital artifacts into those interactions and be able to do cool things very seamlessly. It's like, if I want to show you something here, like here's a screen.
Okay, here it is. I can show you, you can interact with it.
It can be 3D. We can kind of play with it.
You want to, you know, like play a card game or whatever. It's like, all right, here's like a deck of cards.
We can play with it. It's like the two of us are here physically, and you have a third friend who's just hologramming in, right? And they can participate too. But I think that in that world, just like you don't want your physical space to be cluttered, because that wears on you psychologically, I don't think people are going to want their digital-physical space to feel that way either. So that's more of an aesthetic, and one of these norms that I think will have to get worked out.
But I think we'll figure that out. Going back to the AI conversation: you were mentioning how big of a bottleneck the physical infrastructure can be. Related to other open source models like DeepSeek and so forth: DeepSeek right now has less compute than a lab like Meta, and you could argue that it's competitive with the Llama models.

If China is better at physical infrastructure, industrial scale-ups, getting more power and more data centers online, how worried are you that they might beat us here?

I mean, I think it's a real competition. I think you're seeing the industrial policies really play out, where China is bringing online more power. And because of that, I think the US really needs to focus on streamlining the ability to build data centers and produce energy, or I think we will be at a significant disadvantage. At the same time, some of the export controls on things like chips, I think you can see how they're clearly working, because there was all the conversation with DeepSeek about, oh, they did all these very impressive low-level optimizations. And the reality is they did, and that is impressive. But then you ask: why did they have to do that when none of the American labs did? Well, because they're using partially nerfed chips, the only thing NVIDIA is allowed to sell in China because of the export controls. So DeepSeek basically had to go spend a bunch of their calories and time doing low-level infrastructure optimizations that the American labs didn't have to do.
Now, they produced a good result on text, right? DeepSeek is text only. So the infrastructure is impressive, the text result is impressive. But every new major model that comes out now is multimodal: it's image, it's voice, and theirs isn't. And now the question is, why is that the case? I don't think it's because they're not capable of doing it.
I think they basically had to spend their calories on these infrastructure optimizations to overcome the fact that there were these export controls. But when you compare Llama 4 with DeepSeek: our reasoning model isn't out yet, so I think the R1 comparison isn't clear yet. But we're effectively in the same ballpark on all the text stuff as DeepSeek, with a smaller model, so it's much more efficient: the cost per intelligence is lower with what we're doing for Llama on text. And then all the multimodal stuff, we're effectively leading on, and it just doesn't even exist in their stuff. So I think that the Llama 4 models, when you compare them to what they're doing, are good, and I think generally people are going to prefer to use the Llama 4 models. But I think there is this interesting contour where it's clearly a good team doing stuff over there, and I think you're right to ask about the accessibility of power and the accessibility of compute and chips and things like that.
Because I think the kind of work that you're seeing the different labs do and play out is somewhat downstream of that.

Premium products attract a ton of fake account signups, bot traffic, and free-tier abuse.
And AI is so good now that it's basically useless to just have a captcha of six squiggly numbers on your signup page. Take Cursor.
People were going to insane lengths to take advantage of Cursor's free credits, creating and deleting thousands of accounts, sharing logins, even coordinating through Reddit. And all this was costing Cursor a ton of money in terms of inference compute and LLM API calls.
Then they plugged in WorkOS Radar. Radar distinguishes humans from bots.
It looks at over 80 different signals, from your IP address to your browser to even the fonts.

So Sam Altman recently tweeted that OpenAI is going to release an open source SOTA reasoning model. I think part of the tweet was that we will not do anything silly like say that you can only use it if you have fewer than 700 million users.
DeepSeek has the MIT license, whereas with Llama, I think a couple of the contingencies in the Llama license require you to say "Built with Llama" on applications using it, and any model that you train using Llama has to begin with the word "Llama." What do you think about the license? Should it be less onerous for developers?

I mean, look, we basically pioneered the open source LLM thing.
So I don't consider the license to be onerous. When we were starting to push on open source, there was this big debate in the industry: is this even a reasonable thing to do? Can you do something that is safe and trustworthy with open source? Will open source ever be able to be competitive enough that anyone will even care? And when we were answering those questions, a lot of the hard work came from the teams at Meta, and there are other folks in the industry too, but really the Llama models were the ones that I think broke open this whole open source AI thing in a huge way.

We were very focused on: okay, if we're going to put all this energy into it, then at a minimum, if you're going to have these large cloud companies like Microsoft and Amazon and Google turn around and sell our model, we should at least be able to have a conversation with them before they do that, about what kind of business arrangement we should have. But our goal with the license isn't to stop people from using the model. We just think: okay, if you're one of those companies, or if you're Apple, just come talk to us about what you want to do, and let's find a productive way to do it together.
So I think that's generally been fine. Now, if the whole open source part of the industry evolves in a direction where there are a lot of other great options, and if the license ends up being a reason why people don't want to use Llama, then I don't know, we'll have to reevaluate the strategy and what makes sense to do at that point. But I just don't think we're there.

In practice, we haven't seen companies coming to us and saying: we don't want to use this because your license says that if you reach 700 million people, you have to come talk to us. So far, it's a little more of something we've heard from open source purists: is this as clean of an open source model as you'd like it to be? And look, I think that debate has existed since the beginning of open source, with all the GPL license stuff versus other things. Does it need to be the case that anything that touches open source has to be open source, or can people just take it and use it in different ways? I'm sure there will continue to be debates around this. But if you're spending many, many billions of dollars training these models, then asking the other companies that are also huge, similar in size, and can easily afford to have a relationship with us, to talk to us before they use it, seems like a pretty reasonable thing.
If it turns out that there are a bunch of good open source models, so that part of your mission is fulfilled, and maybe other models are better at coding, is there a world where you just say: look, the open source ecosystem is healthy, there's plenty of competition, we're happy to just use some other model, whether it's for internal software engineering at Meta or for deploying to our apps? We don't necessarily need to build with Llama.
Well, again, we do a lot of things. So it's possible. But I guess, let's take a step back.

The reason why we're building our own big models is that we want to be able to build exactly what we want, right? And none of the other models in the world are exactly what we want. If they're open source, you can take them and fine-tune them in different ways, but you still have to deal with the model architectures, and they make different size trade-offs that affect the latency and inference cost of the models. At the scale that we operate at, that stuff really matters. We made the Llama Scout and Maverick models certain sizes for a specific reason: they fit on a host, and we wanted certain latency, especially for the voice models that we're working on, which we want to basically pervade everything that we're doing, from the glasses to all of our apps to the Meta AI app and all this stuff.
So I think there's a level of control over your own destiny that you only get when you build the stuff yourself. That said, AI is going to be used in every single thing that every company does. When we build a big model, we also need to choose which use cases internally we're going to optimize for. So does that mean that for certain things we might decide, okay, maybe Claude is better for building this specific development tool that this team is using? All right, cool. Then use that. Fine. Great. We don't want to fight with one hand tied behind our back. We're doing a lot of different stuff.
You also asked: would it maybe not be important, because other people are doing open source? On this, I'm a little more worried, because I think you have to ask, for anyone who shows up and is doing open source now that we have done it: would they still be doing open source if we weren't doing it? I think that there are a handful of folks who see the trend that more and more development is going towards open source and think, oh crap, we need to be on this train or else we're going to lose; we have some closed model API, and increasingly that's not what a lot of developers want. So I think you're seeing a bunch of the other players start to do some work in open source, but it's just unclear if it's dabbling or fundamental for them in the way that it has been for us.

And, you know, a good example is what's going on with Android, right? Android started off as the open source thing, and there's not really any open source alternative, and I think over time, Android has just been getting more and more closed. So I think if you're us, you kind of need to worry that if we stopped pushing the industry in this direction, all these other people maybe are only really doing it because they're trying to compete with us in the direction that we're pushing things. They already have their revealed preference for what they would build if open source didn't exist, and it wasn't open source, right? So I just think we need to be careful about relying on that continued behavior for the future of the technology that we're going to build at the company.

I mean, another thing I've heard you mention is that it's important that the standard gets built around American models like Llama. I guess I wanted to understand your logic there, because it seems like with certain kinds of networks, like the Apple App Store, there is a big contingency around what gets built on them. But it doesn't seem like, if you built some sort of scaffold for DeepSeek, you couldn't have easily just switched it over to Llama 4, especially since things are changing between generations of models: Llama 3 wasn't MoE, and Llama 4 is. So what's the reason for thinking things will get built out in this contingent way on a specific standard? I'm not sure.
What do you mean by contingent? Oh, as in: it's important that people are building for Llama rather than for LLMs in general, because that will determine what the standard is for the future.

Well, look, I mean, I think these models encode values and ways of thinking about the world.
And we had this interesting experience early on where we took an early version of Llama and translated it, I think it might have been into French or some other language. And the feedback that we got from French people was: this sounds like an American who learned to speak French. It doesn't sound like a French person. It's like, well, what do you mean? Does it not speak French well? No, it speaks French fine. It's just that the way it thinks about the world seems slightly American.
So I feel there's like these subtle things that kind of get built into it. Over time, as the models get more sophisticated, they should be able to embody different value sets across the world.
So maybe that's like a very kind of, you know, not particularly sophisticated example, but I think it sort of illustrates the point. And, you know, some of the stuff that we've seen in testing some of the models, especially coming out of China, is like they sort of have certain values encoded in them.
And it's not just a light fine-tune to get that to feel the way that you want. Now, the stuff is different, right? Language models, or anything that has kind of a world model embedded into it, have more values embedded. With reasoning, I guess there are values or ways to think about reasoning too, but one of the things that's nice about the reasoning models is that they're trained on verifiable problems. So do you need to be worried about cultural bias if your model is doing math? Probably not, right? The chance that some reasoning model built elsewhere is going to incept you by solving a math problem in a devious way seems low. There's a whole set of different issues, I think, around coding, which is the other verifiable domain. There, I think you need to be worried about waking up one day and finding that a model with some tie to another government can embed all kinds of vulnerabilities in code that the intelligence organizations associated with that government can then go exploit.

So in some future version, you have some model from some other country that we're using to secure or build out a lot of our systems, and then all of a sudden you wake up and everything is vulnerable in a way that that country knows about but you don't, or it turns on a vulnerability at some point.
Those are real issues. So, I mean, I'm very interested in studying this, because I think one of the main things that's interesting about open source is the ability to distill models.

For most people, the primary value isn't just taking a model off the shelf and saying: okay, Meta built this version of Llama, I'm going to take it and run it exactly in my application. No, your application isn't doing anything different if you're just running our thing. You're at least going to fine-tune it, or try to distill it into a different model. And when we get to stuff like the Behemoth model, the whole value in that is being able to take this very high amount of intelligence and distill it down into a smaller model that you're actually going to run. That's the beauty of distillation. It's one of the things that has really emerged as a very powerful technique in the last year, since the last time we sat down, and it's worked better than most people would have predicted: you can take a model that is much bigger, take probably 90 or 95% of its intelligence, and run it in something that's 10% the size. Now, do you get 100% of the intelligence? No. But 95% of the intelligence at 10% of the cost is pretty good for a lot of things.
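For reference, the classic soft-label distillation objective behind that trade-off looks roughly like this. This is a generic Hinton-style sketch in PyTorch, not Meta's actual recipe:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    # Match the student's output distribution to the temperature-smoothed
    # teacher distribution; the t^2 factor keeps gradients at a stable scale.
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

# Usage sketch: the big teacher runs without gradients; only the small
# student is trained.
student_logits = torch.randn(4, 32000, requires_grad=True)  # toy vocab size
with torch.no_grad():
    teacher_logits = torch.randn(4, 32000)
distillation_loss(student_logits, teacher_logits).backward()
```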
The other thing that's interesting is that now, with this more varied open source community, it's not just Llama; you have other models, and you have the ability to distill from multiple sources. So now you can basically say: okay, Llama is really good at this. Maybe the architecture is really good because it's fundamentally multimodal, fundamentally more inference-friendly and more efficient. But let's say this other model is better at coding.
Okay, well, you can just distill from both of them and then build something that's better than either of them for your own use case. So that's cool.
But you do need to solve the security problem of knowing that you can distill it in a way that is safe and secure.

And so this is something that we've been researching and have put a lot of time into. What we've basically come to is: look, anything that's like language is quite fraught, because there are a lot of values embedded in that.

So unless you don't care about picking up the values from whatever model you got, you probably don't want to distill the straight language world model. On reasoning, I think you can get a lot of the way there by limiting it to verifiable domains, and by running code cleanliness and security filters, whether it's the Llama Guard open source or the CodeShield open source things that we've done, which basically allow you to incorporate different input into your models and make sure that both the input and the output are secure. And then just a lot of red teaming, so that you have people or experts who are looking at this: is this model doing anything that isn't what I want after distilling from something? I think with a combination of those techniques, you can probably distill on the reasoning side for verifiable domains quite securely. That's something I'm pretty confident about.
And it's something that we've done a lot of research around. But I think this is a very big question.
It's like, how do you do good distillation? Because there's just so much value to be unlocked. But at the same time, I do just think that there is some fundamental bias in the different models.
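A minimal sketch of the pipeline shape he describes: restrict distillation data to verifiable domains, and pass both inputs and outputs through safety filters before a student ever trains on them. The guard callables are hypothetical stand-ins in the spirit of Llama Guard and CodeShield, not their real APIs:

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Trace:
    domain: str     # e.g. "math", "code", "chat"
    prompt: str
    response: str

def is_verifiable(trace: Trace) -> bool:
    # Keep only domains where outputs can be checked mechanically,
    # e.g. math with known answers or code with unit tests.
    return trace.domain in {"math", "code"}

def build_distillation_set(
    teacher_traces: Iterable[Trace],
    input_guard: Callable[[str], bool],   # hypothetical prompt-safety classifier
    output_guard: Callable[[str], bool],  # hypothetical code/security filter
) -> list[Trace]:
    # Filter the teacher's traces before the student ever sees them;
    # red-teaming the resulting student is a separate, manual step.
    return [
        t for t in teacher_traces
        if is_verifiable(t)
        and input_guard(t.prompt)
        and output_guard(t.response)
    ]
```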
Speaking of value to be unlocked, what do you think the right way to monetize AI will be? Obviously digital ads are quite lucrative, but as a fraction of total GDP, they're small compared to something like all remote work. Even if you only increase its productivity rather than replace the work, that's still worth tens of trillions of dollars.
So is it possible that ads might not be it? Yeah. How do you think about this? I mean, like we were talking about before, there's going to be all these different applications and different applications tend towards different things.
Ads are great when you want to offer people a free service, right? Because it's free, you need to cover it somehow.
Yeah. Ads solve this problem: a person does not need to pay for something, and they can get something that is amazing for free.
And also, by the way, with modern ad systems, a lot of the time the ads actually add value to the thing if you do it well, right?

You need to be good at ranking, and you need to have enough liquidity of advertising inventory. If you only have five advertisers in the system, no matter how good you are at ranking, you may not be able to show someone something that they're interested in.
But if you have a million advertisers in the system, then you're probably going to be able to find something pretty compelling, if you're good at picking out the different needles in the haystack that person's going to be interested in. So I think that definitely has its place. But there are also clearly going to be other business models, including ones that just have higher costs, so it doesn't even make sense to offer them for free. Which, by the way, there have always been business models like this.
There's a reason why social media is free and ad supported. But then if you want to watch Netflix or like ESPN or something, you need to pay for that.
It's okay, because the content that's going into those services is very expensive to produce, and they probably could not run enough ads in the service to make up for the cost of producing it.
So basically you just need to pay to access it. Then the trade-off is fewer people do it, right? You're talking about hundreds of millions of people using those services instead of billions.
So there's kind of a value switch there. I think similar here, you know, not everyone is going to want like a software engineer or a thousand software engineering agents or whatever it is.
But if you do, that's something you are probably going to be willing to pay thousands, or tens of thousands, or hundreds of thousands of dollars for. So I think this just speaks to the diversity of different things that need to get created: there are going to be business models at each point along the spectrum.
And at Meta, yeah, for the consumer piece, we definitely want to have a free thing, and I'm sure that will end up being ad supported. But I also think we're going to want to have a business model that supports people using arbitrary amounts of compute to do really even more amazing things than what it would make sense to offer in the free service. And for that, I'm sure we'll end up having a premium service. But I think our basic values on this are that we want to serve as many people in the world as we can.
Lambda is the cloud for AI developers. They have over 50,000 NVIDIA GPUs ready to go for startups, enterprises, and hyperscalers.
Compute seems like a commodity though, so why use Lambda over anybody else? Well, unlike other cloud providers, Lambda's only focus is AI. This means their GPU instances and on-demand clusters have all the tools that AI developers need pre-installed.
No need to manually install CUDA, drivers, or manage Kubernetes. And if you only need GPU compute, you can save a ton of money by not paying for the overhead of general purpose cloud architectures.
Lambda even has contracts that let enterprises use any type of GPU in their portfolio and easily upgrade to the next generation. And for all of you wanting to build with Llama 4, Lambda has a serverless API. Go to lambda.ai/dwarkesh for a free trial of their inference API, featuring the best open source models like DeepSeek and Llama 4 at the lowest prices in the industry. Alright, back to Zuck.

How do you keep track of all these different projects, some of which we've talked about today? I'm sure there are many I don't even know about. As the CEO overseeing everything, there's a big spectrum between going to the Llama team and saying, here are the hyperparameters you should use, and just giving a mandate like, go make the AI better. And there are many different projects. How do you think about the way in which you can best deliver your value-add and oversee all these things?

Well, a lot of what I spend my time on is trying to get awesome people onto the teams, right? So there's that.
And then there's stuff that cuts across teams. Like, all right, you build Meta AI and you want to get it into WhatsApp or Instagram. Okay, now I need to get those teams to talk together. And then there's a bunch of questions like, do you want the thread for Meta AI in WhatsApp to feel like other WhatsApp threads, or do you want it to feel like other AI chat experiences? There are different idioms for those. So there are all these interesting questions that need to get answered around how this stuff fits into everything we're doing. Then there's a whole other part of what we're doing, which is basically pushing on the infrastructure.
If you want to stand up a gigawatt cluster, then first of all, that has a lot of implications for the way that we're doing infrastructure buildouts. It has sort of political implications for how you engage with the different states where you're building that stuff.
It has financial implications for the company: all right, there's a lot of economic uncertainty in the world, so do we go double down on infrastructure right now? And if so, what other trade-offs do we want to make around the company? Those are the kinds of decisions that are tough for other people to really make. And then I think there's this question around taste and quality, which is: when is something good enough that we want to ship it? I do feel like, in general, I'm the steward of that for the company, although we have a lot of other people who I think have good taste as well and who are also filters for that. But yeah, I think those are basically the areas. And I think AI is interesting because, more than some of the other stuff that we do, it is more research- and model-led than really product-led. You can't just design the product that you want and then try to build the model to fit into it.

You really need to design the model first and the capabilities that you want, and then you get some emergent properties. Then it's like, oh, you can build some different stuff because this turned out a certain way.
And I think at the end of the day, people want to use the best model. That's partially why, when we're talking about building the most personal AI, with the best voice, the best personalization, and also a very smart experience with very low latency, those are the things we basically need to design the whole system to deliver. That's why we're working on full-duplex voice, why we're working on personalization that both has good memory extraction from your interactions with the AI and can plug into all the other Meta systems, and why we designed the specific models we designed, with the size and latency parameters that they have.
Speaking of politics, there's been this perception that some tech leaders have been aligning with Trump. You and others have donated to his inaugural event and were on stage with him.
And I think you settled like a lawsuit, which resulted in them getting $25 million. I wonder what's going on here.
Does it feel like the cost of doing business with the administration? Or, yeah, what's the best way to think about this? My view on this is: he's the President of the United States. Our default as an American company should be to try to have a productive relationship with whoever is running the government. We've tried to offer support to previous administrations as well. I've been pretty public with some of my frustrations with the previous administration, how they basically did not engage with us or the business community more broadly, which I think, frankly, is going to be necessary to make progress on some of these things. We're not going to be able to build the level of energy that we need if you don't have a dialogue and they're not prioritizing trying to do those things. But fundamentally, look, I think a lot of people want to write this story about what direction people are going. I just think we're trying to build great stuff, we want to have a productive relationship with people, and that's how I see it.
And it is also how I would guess most others see it. But obviously I can't speak for them.
You've spoken out about how you rethought some of the ways in which you engage and defer to the government in terms of moderation stuff in the past. How are you thinking about AI governance? Because if AI is as powerful as we think it might be, the government will want to get involved.
What is the most productive approach to take there? And what should the government be thinking about here? Yeah, I guess in the past, I probably just... I mean, most of the comments that I made, I think were in the context of content moderation, where it's been an interesting journey over the last 10 years on this, where it's obviously been an interesting time in history.
There have been novel questions raised about online content moderation. Some of those have led to, I think, productive new systems getting built, like our AI systems to be able to detect nation states trying to interfere in each other's elections.
I think we will continue building that stuff out. And that, that I think has been net positive.
I think other stuff, we went down some bad paths. Like I just think the fact checking thing was not as effective as community notes because it's not an internet scale solution.
There weren't enough fact-checkers, and people didn't trust the specific fact-checkers. You want a more robust system, so I think what we got with community notes is the right one on that. But my point on this was more that, historically, I probably deferred a little bit too much to either the media, in their critiques, or the government, on things that they did not really have authority over, but just as a central figure.
I think we tried to build systems where maybe we would not have to make all the content moderation decisions ourselves. And I guess part of the growth process over the last 10 years is just: okay, we're a meaningful company; we need to own the decisions that we make. We should listen to feedback from people, but we shouldn't defer too much to people who do not actually have authority over this, because at the end of the day, we're in the seat and we need to own the decisions that we make. So it's been a maturation process, and in some ways painful, but I think we're probably a better company for it.

Will tariffs increase the cost of building data centers in the US and shift build-outs to Europe and Asia?

It is really hard to know how that plays out. I think we're probably in the early innings on that, and it's very hard to know.
Got it. What is your single highest-leverage hour in a week? What are you doing in that hour? I don't know.
I mean, every week is a little bit different. I mean, it's probably got to be the case that the most leveraged thing that you do in a week is not the same thing each week, or else by definition, you should probably spend more than one hour doing that thing every week.
But yeah, I don't know. It's part of the fun of both, I guess, this job, but also the industry being so dynamic as like things really move around.
Right. And the world is very different now than it was at the beginning of the year, which itself was different than it was in the middle of last year.
I think a lot has really advanced meaningfully, and a lot of cards have been turned over since the last time that we sat down. I think that was about a year ago.
Right. Yeah.
Yeah. I guess you were saying earlier that recruiting people is a super high leverage thing you do.
It's very high leverage. Yeah, yeah.
You talked about these models being mid-level software engineers by the end of the year. What would be possible if, say, software productivity increased 100x in two years? What kinds of things could we build that we can't build right now? What kinds of things? Well, that's an interesting question.

I think one theme of this conversation is that the amount of creativity that's going to be unlocked is going to be massive. If you look at the overall arc of human society and the economy over the last 100 or 150 years, it's basically people going from being primarily agrarian, with most human energy going towards just feeding ourselves, to that becoming a smaller and smaller percent. The things that take care of our basic physical needs are a smaller and smaller percent of human energy, which has led to two impacts: one, more people are doing creative and cultural pursuits, and two, people in general spend less time working and more time on entertainment and culture. I think that is almost certainly going to continue as this goes on. This isn't the one-to-two-year thing of what happens when you have a super powerful software engineer.
But I think over time, everyone is going to have these superhuman tools to create a ton of different stuff, and you're going to get this incredible diversity.
Part of it is going to be solving the things that we hold up as hard problems: solving diseases, solving different things around science, or just different technology that makes our lives better. But I would guess that a lot of it is going to end up being cultural and social pursuits and entertainment.
And I would guess that the world is going to get a lot funnier, weirder, and quirkier, the way the memes on the internet have gotten over the last 10 years. I think that adds a certain kind of richness and depth as well; in kind of funny ways, I think it actually helps you connect better with people. All day long, I find interesting stuff on the internet and send it in group chats to the people I care about who I think are going to find it funny. The media that people can produce today can express very nuanced, specific cultural ideas.
I don't know. It's cool.
And I think that'll continue to get built out. And I think it does advance society in a bunch of ways, even if it's not like the hard science way of curing a disease.
But I guess, if you think about it, the meta social media view of the world is: yeah, I think people are going to spend a lot more time doing that stuff in the future. And it's going to be a lot better, and it's going to help you connect, because it's going to help express different ideas. The world is going to get more complicated, but our cultural technology for expressing these very complicated things in a very funny little clip or something is just going to get so much better. So I think that's all great.
One other thought that I think is interesting to cover: I tend to think that, for at least the foreseeable future, this is going to lead towards more demand for people doing work, not less. Now, people have a choice of how much time they want to spend working.
But I'll give you one interesting example of something that we were talking about recently. We have almost three and a half billion people use our services every day.
And one question that we've struggled with forever is how do we provide customer support? Today, you can write an email. But we've never seriously been able to contemplate having voice support, where someone can just call in. I guess that's maybe one of the artifacts of having a free service: the revenue per person is not so high that you can have an economic model where people can call in. But also, with three and a half billion people using your service every day, there would be a massive number of people calling, like the biggest call center in the world type of thing.

Yeah.

But it would be like $10 or $20 billion a year, something ridiculous, to staff that. So we've never really thought too seriously about it, because it was always just: no, there's no way this makes sense. But now, as the AI gets better, you're going to get to this place where the AI can handle a bunch of people's issues. Not all of them, right? Maybe 10 years from now or something, it can handle all of them.
But when we're thinking about like a three to five year time horizon, it'll be able to handle a bunch. Kind of like self-driving cars can handle a bunch of terrain.
But in general, they're not doing the whole route by themselves yet in most cases, right? People thought truck-driving jobs were going to go away, but there are actually more truck-driving jobs now than there were when we started talking about self-driving cars, whatever it was, almost 20 years ago. And going back to this customer support thing: all right, it wouldn't make sense for us to staff out calling for everyone. But let's say the AI can handle 90% of that.
And if it can't handle something, then it kicks it off to a person.
Now, if you've gotten the cost of providing that service down to one tenth of what it would have otherwise been, then all right, maybe that actually makes sense to go do, and that would be kind of cool. So the net result is, I actually think we're probably going to go hire more customer support people. The common belief people have is: oh, this is clearly just going to automate jobs, and all these jobs are going to go away.
That has not really been how the history of technology has worked. You create things that take away 90% of the work, and that leads you to want more people, not less.
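As a back-of-envelope check on that argument, here is a tiny sketch with made-up numbers, anchored only to the rough figures mentioned above (a $10-20B fully-human cost and 90% AI deflection):

```python
# Every number here is a made-up illustration; only the "$10-20B" scale and
# the "handles 90%" deflection rate echo figures from the conversation.
human_only_cost = 15e9       # assumed annual cost of fully human voice support
ai_deflection_rate = 0.90    # share of calls the AI resolves on its own

residual_human_cost = human_only_cost * (1 - ai_deflection_rate)
print(f"Residual human staffing cost: ${residual_human_cost / 1e9:.1f}B per year")
# -> $1.5B: one tenth of the original, matching the "one tenth" point, and a
#    number at which offering voice support starts to look plausible.
```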
Yeah. To close out the interview: I've been playing devil's advocate on a bunch of points, and I really appreciate you being a good sport about it. But I do think there's no upper bound to how much beauty there can be in the world, especially if there are billions of AIs optimizing the amount of beauty you can see and the amount of connection you can have, and so forth. So, yeah, I'm pretty optimistic about it. Final question: who is the one person in the world today who you most seek out for advice? Oh man. Well, part of my style is that I like having a breadth of advisors.
So it's not just one person. We've got a great team. There are people at the company, people on our board, and a lot of people in the industry who are doing new stuff. There's not a single person. But it's fun. And when the world is dynamic, just having a reason to work with people you like on cool stuff, to me, that's what life is about.

Yep. All right. Great note to close on.

Awesome. Thanks for doing this.

Yeah. Thank you.

I hope you enjoyed this episode. If you did, the most helpful thing you can do is just share it with other people who you think might enjoy it. Send it to your friends, your group chats, Twitter, wherever else. Just let the word go forth. Other than that, it's super helpful if you can subscribe on YouTube and leave a five-star review on Apple Podcasts and Spotify. Check out the sponsors in the description below. If you want to sponsor a future episode, go to dwarkesh.com/advertise. Thank you for tuning in. I'll see you on the next one.