Mark Zuckerberg - Llama 3, Open Sourcing $10b Models, & Caesar Augustus

1h 17m

Mark Zuckerberg on:

- Llama 3

- open sourcing towards AGI

- custom silicon, synthetic data, & energy constraints on scaling

- Caesar Augustus, intelligence explosion, bioweapons, $10b models, & much more

Enjoy!

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Human edited transcript with helpful links here.

Timestamps

(00:00:00) - Llama 3

(00:08:32) - Coding on path to AGI

(00:25:24) - Energy bottlenecks

(00:33:20) - Is AI the most important technology ever?

(00:37:21) - Dangers of open source

(00:53:57) - Caesar Augustus and metaverse

(01:04:53) - Open sourcing the $10b model & custom silicon

(01:15:19) - Zuck as CEO of Google+

Sponsors

If you’re interested in advertising on the podcast, fill out this form.

* This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue. Learn more at stripe.com.

* V7 Go is a tool to automate multimodal tasks using GenAI, reliably and at scale. Use code DWARKESH20 for 20% off on the pro plan. Learn more here.

* CommandBar is an AI user assistant that any software product can embed to non-annoyingly assist, support, and unleash their users. Used by forward-thinking CX, product, growth, and marketing teams. Learn more at commandbar.com.



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe


Transcript

Speaker 1 Mark, welcome to the podcast.

Speaker 2 Hey, thanks for having me. Big fan of your podcast.

Speaker 1 Oh, thank you. That's very nice of you to say.

Speaker 1 Okay, so let's start by talking about the releases that will go out when this interview goes out.

Speaker 1 Tell me about the models. Tell me about MetaAI.
What's new? What's exciting about them?

Speaker 2 Yeah, sure. So, you know, I think the main thing that most people in the world are going to see is the new version of Meta AI.

Speaker 2 So, you know, the most important thing about what we're doing is the upgrade to the model. We're rolling out Llama 3.

Speaker 2 We're doing it both as open source for the dev community, and it is now going to be powering Meta AI.

Speaker 2 So, you know, there's a lot that I'm sure we'll go into around Llama 3, but I think the bottom line on this is that with Llama 3, we now think that Meta AI is the most intelligent AI assistant that people can use that's freely available.

Speaker 2 We're also integrating Google and Bing for real-time knowledge. We're going to make it a lot more prominent across our apps.

Speaker 2 So, you know, basically, you know, at the top of WhatsApp and Instagram and Facebook and Messenger, you'll just be able to use the search box right there to ask any question.

Speaker 2 And there's a bunch of new creation features that we added that I think are pretty cool and that I think people will enjoy.

Speaker 2 And I think animations is a good one. You can basically just take any image and animate it.
But I think one that people are going to find pretty wild is

Speaker 2 it now generates high quality images so quickly. I don't know if you've gotten a chance to play with this, but it actually generates the image as you're typing and updates it in real time.

Speaker 2 So you're typing your query and it's honing in on it. It's like, okay, show me a picture of a cow in a field with mountains in the background, eating macadamia nuts, drinking beer, and it's updating the image in real time. It's pretty wild. I think people are going to enjoy that.

Speaker 2 So yeah, that's what most people are going to see in the world, right? We're rolling that out.

Speaker 2 You know, not everywhere, but we're starting in a handful of countries and we'll do more over the coming weeks and months.

Speaker 2 So, that I think is going to be a pretty big deal.

Speaker 2 And I'm really excited to get that in people's hands.

Speaker 2 It's a big step forward for Meta AI.

Speaker 2 But I think if you want to get under the hood a bit, the Llama 3 stuff is obviously the most technically interesting. So,

Speaker 2 for the first version, we're basically training three versions:

Speaker 2 an 8 billion and a 70 billion, which we're releasing today, and a 405 billion dense model, which is still training. So we're not releasing that today.

Speaker 2 But, you know, the 8 and 70, I'm pretty excited about how they turned out. They're leading for their scale.

Speaker 2 We'll release a blog post with all the benchmarks so people can check it out themselves. And obviously it's open source, so people get a chance to play with it.

Speaker 2 We have a roadmap of new releases coming that are going to bring multimodality, more multilinguality,

Speaker 2 bigger context windows to those as well.

Speaker 2 And then hopefully sometime later in the year, we'll get to roll out the 405, which is still training. For where it is right now in training, it is already at around 85 MMLU.

Speaker 2 And we expect that it's going to have leading scores on a bunch of the benchmarks. So I'm pretty excited about all of that.

Speaker 2 I mean, the 70 billion is great too. We're releasing that today. It's around 82 MMLU and has leading scores on math and reasoning. I think just getting this in people's hands is going to be pretty wild.

Speaker 1 Oh, interesting. Yeah, that's the first time I'm hearing this benchmark. That's super impressive.

Speaker 2 Yeah, and the 8 billion is nearly as powerful as the biggest version of Llama 2 that we released. So the smallest Llama 3 is basically as powerful as the biggest Llama 2.
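For reference, MMLU is a multiple-choice benchmark spanning 57 subjects, and the score being quoted is just accuracy on it. Here is a minimal sketch of how that kind of eval is scored; the items and the model call are invented placeholders, not real MMLU questions or Meta's harness.

```python
# Sketch of multiple-choice benchmark scoring (MMLU-style).
# The items are invented placeholders, not real MMLU questions, and
# `model_answer` stands in for whatever inference call a real harness uses.

items = [
    {"q": "2 + 2 = ?", "choices": ["3", "4", "5", "6"], "answer": "B"},
    {"q": "H2O is commonly called?", "choices": ["salt", "water", "sand", "air"], "answer": "B"},
]

def model_answer(question: str, choices: list[str]) -> str:
    """Placeholder for a model call returning 'A'/'B'/'C'/'D'."""
    return "B"  # a real harness would prompt the model here

correct = sum(model_answer(it["q"], it["choices"]) == it["answer"] for it in items)
print(f"MMLU-style score: {100 * correct / len(items):.1f}")
# 'around 85 MMLU' means roughly 85% accuracy on this kind of test
```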

Speaker 1 Okay, so before we dig into these models, I actually want to go back in time. 2022 is, I'm assuming, when you started acquiring these H100s,

Speaker 1 or you can tell me when.

Speaker 1 But you're like, stock price is getting hammered. People are like, what's happening with all this CapEx? People aren't buying the metaverse.

Speaker 1 And presumably, you're spending that CapEx to get these H100s.

Speaker 1 Back then, how did you know to get the H100s? How did you know we'll need the GPUs?

Speaker 2 I think it was because we were working on Reels. We always want to have enough capacity to build something that we can't quite see on the horizon yet.

Speaker 2 And we got into this position with Reels where we needed more GPUs to train the models.

Speaker 2 It was this big evolution for our services, where instead of just ranking content from people who you follow, or your friends and whatever pages you follow,

Speaker 2 we made this big push to basically start recommending what we call unconnected content, basically content from people or pages that you're not following. So now kind of the

Speaker 2 corpus of content candidates that we could potentially show you expanded from on the order of thousands to on the order of hundreds of millions. So, completely different infrastructure.
And we

Speaker 2 started working on doing that. And we were constrained on

Speaker 2 basically the infrastructure that we had, and couldn't catch up to what TikTok was doing as quickly as we would have wanted to.

Speaker 2 So I basically looked at that and I was like, hey, we have to make sure that we're never in this situation again.

Speaker 2 So let's order enough GPUs to do what we need to do on reels and ranking content and feed, but let's also, let's double that, right?

Speaker 2 Because again, like our normal principle is there's going to be something on the horizon that we can't see yet.

Speaker 1 Did you know it would be AI?

Speaker 2 Well, we thought it would be,

Speaker 2 we thought it was going to be something that had to do with training large models, right? I mean, but at the time, I thought it was probably going to be more something that had to do with content.

Speaker 2 But I don't know. I mean, the pattern from running the company is that there's always another thing, right?

Speaker 2 So I'm not even sure I had a specific idea at that time. I was so deep in just trying to get the recommendations working for Reels and other content.

Speaker 2 Because that's just such a big unlock for Instagram and Facebook, to now be able to show people content that's interesting to them from people that they're not even following.

Speaker 2 But

Speaker 2 yeah,

Speaker 2 that ended up being a very good decision in retrospect.
And it came from being behind. It wasn't like, oh, I was so far ahead.

Speaker 2 Actually, most of the times I think where we kind of make some decision that ends up seeming good is because we messed something up before and just didn't want to repeat the mistake.

Speaker 1 This is a total detour, but I actually want to ask about this while we're on it. We'll get back to AI in a second.
So you didn't sell for $1 billion.

Speaker 1 Presumably there's some amount you would have sold for, right?

Speaker 1 Did you write down in your head, like, I think the actual valuation of Facebook at the time is this, and they're not actually getting the valuation right?

Speaker 1 Like, if they'd offered $5 trillion, of course you would have sold. So

Speaker 1 how did you think about that choice?

Speaker 2 Yeah, I don't know. I mean, look, I think some of these things are just personal.

Speaker 2 I don't know at the time that I was sophisticated enough to do that analysis, right? I had all these people around me who were making all these arguments for how like

Speaker 2 a billion dollars was: here's the revenue that we need to make and here's how big we need to be. And it's clearly so many years in the future.

Speaker 2 It was very far ahead of where we were at the time. And I didn't really have the financial sophistication to engage with that kind of debate.

Speaker 2 I just, I think I sort of deep down believed in what we were doing. And I did some analysis.

Speaker 2 I was like, okay, well.

Speaker 2 What would I go do if I wasn't doing this? It's like, well, I really like building things and I like helping people communicate.

Speaker 2 And I like understanding what's going on with people and the dynamics between people. So I think if I sold this company, I'd just go build another company like this.
And I kind of like the one I have.

Speaker 2 So,

Speaker 2 So, I mean, why sell? Right. But

Speaker 2 I don't know. I think a lot of the biggest bets that people make

Speaker 2 are often just based on conviction and values.

Speaker 2 It's actually usually very hard to do the analyses trying to connect the dots forward. Yeah.

Speaker 1 So you've had Facebook AI research for a long time.

Speaker 1 Now it's become seemingly central to your company.

Speaker 1 At what point did making AGI, or however you consider that mission, become a key priority of what Meta is doing?

Speaker 2 Yeah, I mean, it's been a big deal for a while. So we started FAIR

Speaker 2 about 10 years ago.

Speaker 2 And the idea was that along the way to general intelligence or AI, like full AI, whatever you want to call it, there are going to be all these different innovations and that's going to just improve everything that we do.

Speaker 2 So we didn't kind of conceive it as a product. It was more kind of a research group.
And

Speaker 2 over the last 10 years, it has created a lot of different things that have basically improved all of our products and advanced the field and allowed other people in the field to create things that have improved our products too.

Speaker 2 So I think that that's been great. But there's obviously a big change

Speaker 2 in the last few years when ChatGPT comes out, the diffusion models around image creation come out. And like, I mean, this is some pretty wild stuff, right?

Speaker 2 That I think is like pretty clearly going to affect how

Speaker 2 people interact with like every app that's out there. So

Speaker 2 at that point, we started a second group, the Gen AI group, with the goal of basically bringing that stuff into our products. So, building leading foundation models that would power all these different products.

Speaker 2 And initially, when we started doing that,

Speaker 2 the theory at first was, hey, a lot of the stuff that we're doing is pretty social, right?

Speaker 2 So it's helping people interact with creators, helping people interact with businesses so the businesses can sell things or do customer support, or

Speaker 2 basic assistant functionality for,

Speaker 2 you know, whether it's for our apps or or the smart glasses or VR, like all these different things.

Speaker 2 So initially it wasn't completely clear that you were going to need kind of full AGI to be able to support those use cases.

Speaker 2 But then through working on them, I think it's actually become clear that you do, right? They're in all these subtle ways.

Speaker 2 So for example, you know, for Llama 2, when we were working on it, we didn't prioritize coding.

Speaker 2 And the reason why we didn't prioritize coding is because people aren't going to ask Meta AI a lot of coding questions in WhatsApp.

Speaker 1 Now they will.

Speaker 2 Well, I don't know. I'm not sure that WhatsApp is the UI where people are going to be asking a lot of coding questions, or Facebook or Instagram, those different services. Maybe the website, meta.ai, that we're launching. But

Speaker 2 the thing that has been a somewhat surprising result over the last

Speaker 2 18 months is that it turns out coding is important for a lot of domains, not just coding itself, right?

Speaker 2 So even if people aren't asking coding questions to the models, training the models on coding helps them just be more rigorous and answer the question and kind of help reason across a lot of different types of domains.

Speaker 2 Okay, so that's one example. For Llama 3, we really focused on training it with a lot of coding, because that's going to make it better on all these things, even if people

Speaker 2 aren't asking primarily coding questions. Reasoning, I think, is another example.

Speaker 2 It's like, okay, yeah, maybe you want to chat with a creator or, you know, you're a business and you're trying to interact with a customer.

Speaker 2 You know, that interaction is not just like, okay, the person sends you a message and you just reply, right?

Speaker 2 It's a, it's like a multi-step interaction where you're trying to think through how do I accomplish the person's goals.

Speaker 2 And, you know, a lot of times when a customer comes, they don't necessarily know exactly what they're looking for or how to ask their questions.

Speaker 2 So it's not really the job of the AI to just respond to the question. It's like you need to kind of think about it more holistically.
It really becomes a reasoning problem, right?

Speaker 2 So if someone else solves reasoning, or makes good advances on reasoning, and we're sitting here with a basic chatbot, then our product is lame compared to what other people are building. At the end of the day, we basically realized we've got to solve general intelligence, and we just upped the ante and the investment to make sure that we could do that.

Speaker 1 So the version of Llama that's going to solve all these use cases for users, is that the version that will be powerful enough to replace a programmer you might have in this building?

Speaker 2 I mean, I just think that all this stuff is going to be progressive over time.

Speaker 1 But in the end case: Llama 10?

Speaker 2 I think that there's a lot baked into that question. I'm not sure that we're replacing people as much as giving people tools to do more stuff.

Speaker 1 Is a programmer in this building 10x more productive after Llama 10?

Speaker 2 I would hope more. But no, I mean, look, I don't believe that there's a single threshold of intelligence for humanity, because people have different skills. At some point, I think AI is probably going to surpass people at most of those things, depending on how powerful the models are.

Speaker 2 But I think it's progressive, and I don't think AGI is one thing. You're basically adding different capabilities. Multimodality is a key one that we're focused on, initially with photos and images and text, but eventually with videos. And then, because we're so focused on the metaverse, 3D-type stuff is important.

Speaker 2 One modality that I'm pretty focused on, that I haven't seen as many other people in the industry focus on, is emotional understanding. I mean, so much of the human brain is just dedicated to understanding people, understanding your expressions and emotions.

Speaker 2 And I think that that's its own whole modality, right? You could say, okay, maybe it's just video or image, but it's clearly a very specialized version of those two.

Speaker 2 So there's all these different capabilities that I think you want to basically train the models to focus on, as well as getting a lot better at reasoning, getting a lot better at memory, which I think is kind of its own whole thing.

Speaker 2 I mean, I don't think we're going to be primarily shoving things into a query context window in the future to ask more complicated questions. I think there will be different stores of memory, or different custom models that are more personalized to people. But I don't know, I think these are all just different capabilities.
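A rough sketch of the "different stores of memory" idea: keep facts in an external store and retrieve only the relevant ones per query, instead of shoving everything into the context window. This toy version uses word overlap for relevance; a real system would use learned embeddings and a vector index, and all the names here are hypothetical.

```python
# Toy external memory store: retrieve the most relevant memories for a
# query and prepend only those to the prompt, rather than putting the
# entire history into the context window. Word-overlap scoring is a
# stand-in for embedding similarity.

memories = [
    "User's favorite sport is fencing.",
    "User is planning a trip to Japan in May.",
    "User prefers concise answers.",
]

def tokens(text: str) -> set[str]:
    return {w.strip(".,!?'").lower() for w in text.split()}

def build_prompt(query: str, k: int = 2) -> str:
    ranked = sorted(memories, key=lambda m: len(tokens(query) & tokens(m)), reverse=True)
    context = "\n".join(f"- {m}" for m in ranked[:k])
    return f"Relevant memory:\n{context}\n\nUser: {query}"

print(build_prompt("What should I pack for my trip?"))
```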

Speaker 2 And then, obviously, making them big and small, we care about both. If you're running something like Meta AI, that's pretty server-based. But we also want it running on smart glasses, and there's not a lot of space in smart glasses, so you want something very efficient for that.
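A back-of-envelope memory calculation shows why even 8 billion parameters is a lot for something like glasses. The bit-widths and conclusions here are rough illustrations, not Meta's deployment numbers.

```python
# Rough memory footprint of model weights at different sizes and
# quantization levels. Illustrative arithmetic only.

def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params in (8.0, 1.0, 0.5):      # 8B, 1B, 500M parameters
    for bits in (16, 4):            # fp16 vs 4-bit quantized
        print(f"{params:>4}B @ {bits:>2}-bit: {weights_gb(params, bits):6.2f} GB")
# An 8B model is ~16 GB at fp16 and still ~4 GB at 4-bit; a 500M model at
# 4-bit is ~0.25 GB, which is far more plausible for on-device hardware.
```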

Speaker 1 What is the use case that, if you're doing tens of billions of dollars worth of inference, or even eventually hundreds of billions of dollars worth of inference, if you're using intelligence at an industrial scale, what is the use case?

Speaker 1 Is it simulations? Is it the AIs that will be in the metaverse?

Speaker 1 What will we be using the data centers for?

Speaker 2 I mean, our bet is that

Speaker 2 this is basically going to change all of the products, right? So I think that there's going to be a kind of Meta AI general assistant product. And I think that will shift from something that feels more like a chatbot, where you just ask a question and it formulates an answer, to things where you're increasingly giving it more complicated tasks and then it goes away and does them.

Speaker 2 So I think that that's going to take a lot of inference. It's going to take a lot of compute in other ways too.

Speaker 2 Then I think that there's a big part of what we're going to do that is

Speaker 2 like interacting with other agents for other people. So whether it's businesses or creators.

Speaker 2 I guess a big part of my theory on this is that there's not just going to be like one singular AI that you interact with, because I think

Speaker 2 every business is going to like want an AI that represents their interests.

Speaker 2 They're not going to want to primarily interact with you through an AI that is going to sell their competitors' products.

Speaker 2 So,

Speaker 2 yeah, so I think creators is going to be a big one. I mean, there are about 200 million creators on our platforms.
They all basically have the pattern where

Speaker 2 they want to engage their community, but they're limited by hours in the day. And their community generally wants to engage them, but they're limited by hours in the day.

Speaker 2 So if you could create something where that creator can basically own the AI and train it in the way that they want

Speaker 2 and can engage their community, I think that that's going to be super powerful too. So

Speaker 2 I think that there's going to be a ton of engagement across all these things.

Speaker 2 But these are just the consumer use cases. I mean, I think when you think about stuff like,

Speaker 2 I mean, you know, I run our foundation, the Chan Zuckerberg Initiative, with my wife. And we're doing a bunch of stuff on science.
And

Speaker 2 there's obviously a lot of AI work that I think is going to advance science and healthcare and all these things, too.

Speaker 2 So I think this is going to end up affecting basically every area of the products and the economy.

Speaker 1 The thing you mentioned about an AI that can just go out and do something for you that's multi-step, is that a bigger model?

Speaker 1 Or will there be a version of Llama 4 that's still 70B, but you just train it on the right data and that will be super powerful?

Speaker 1 What does the progression look like? Is it scaling? Is it just the same size but different data, like you were talking about?

Speaker 2 I don't know that we know the answer to that. I think one thing that seems to be a pattern is that you have the Llama model, and then you build some

Speaker 2 kind of other application-specific code around it. Some of it is the fine-tuning for the use case, but some of it is just logic for how

Speaker 2 Meta AI should work with tools like Google or Bing to bring in real-time knowledge. That's not part of the base Llama model.

Speaker 2 For Llama 2, we had some of that, and it was a little more hand-engineered.

Speaker 2 And then part of our goal for Llama 3 was to bring more of that into the model itself.

Speaker 2 But for Llama 3, as we start getting into more of these agent-like behaviors, I think some of that is going to be more hand-engineered.

Speaker 2 And then I think our goal for Llama 4 will be to bring more of that into the model.

Speaker 2 So I think at each point, like at each step along the way, you kind of have a sense of what's going to be possible on the horizon. You start messing with it and hacking around it.

Speaker 2 And then I think that that helps you hone your intuition for what you want to try to train into the next version of the model itself.

Speaker 1 Interesting.

Speaker 2 Which makes it more general, because obviously anything that you're hand-coding can unlock some use cases, but it's just inherently brittle and non-general.
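A crude contrast of the two approaches described here: hand-engineered tool dispatch living in application code, versus the model emitting a structured tool call itself. The function names and JSON shape are hypothetical illustrations, not Meta's actual interface.

```python
import json

# Hand-engineered tool use (the Llama 2-era approach described above):
# application code decides when to call a tool, e.g. with keyword rules.
def hand_engineered_dispatch(user_query: str) -> str:
    if any(w in user_query.lower() for w in ("latest", "today", "news")):
        return f"search({user_query!r})"   # brittle: misses rephrasings
    return "answer_directly(...)"

# Model-native tool use (the Llama 3 direction): the model itself emits a
# structured call, and the application executes whatever it asked for.
def model_native_dispatch(model_output: str) -> str:
    msg = json.loads(model_output)
    if "tool" in msg:
        return f"{msg['tool']}({msg['arguments']!r})"
    return msg["content"]

print(hand_engineered_dispatch("What's the latest on the election?"))
print(model_native_dispatch('{"tool": "web_search", "arguments": {"query": "election results"}}'))
```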

Speaker 1 Hey, everybody. Real quick, I want to tell you about a tool that I wish more applications used.
So obviously you've noticed every single company is trying to add an AI chatbot to their website.

Speaker 1 But as a user, I usually find them really annoying because they give these long, generic, often useless answers.

Speaker 1 CommandBar is a user assistant that you can just embed into your website or application. And it feels like you're talking to a friendly human support agent who is browsing with you and for you.

Speaker 1 And it's much more personalized than a regular chatbot. It can actually look up users' history and respond differently based on that.
It can use APIs to perform actions.

Speaker 1 It can even proactively nudge users to explore new features.

Speaker 1 One thing that I think is really cool is that instead of just outputting text, CommandBar can kind of just say, here, let me show you, and start browsing alongside the user.

Speaker 1 Anyways, they're in a bunch of great products already. You can learn more about them at commandbar.com.
Thanks to them for sponsoring this episode. And now back to Mark.

Speaker 1 When you say into the model itself, you train it on the thing that you want in the model itself. But what do you mean by into the model itself?

Speaker 2 Well, I mean, take the example that I gave for Llama 2, where

Speaker 2 the tool use was very, very specific.

Speaker 2 Whereas Llama 3 has much better tool use, right? So we don't have to hand-code all the stuff to have it use Google to go do a search.

Speaker 2 It just kind of can do that.

Speaker 2 So, and similarly for coding and kind of running code and just a bunch of stuff like that.

Speaker 2 But I think once you kind of get that capability, then you get a peek at, okay, well, what can we start doing next?

Speaker 2 Okay, well, I don't necessarily want to wait until Llama 4 is around to start building those capabilities. So let's start hacking around it.

Speaker 2 And so you do a bunch of hand coding and that makes the products better for the interim. But then that also helps show the way of what we want to try to build into the next version of the model.

Speaker 1 What is the community fine-tune of Llama 3 you're most excited by? Maybe not the one that will be most useful to you, but the one you'll just enjoy playing with the most.

Speaker 1 Maybe they fine-tune it on antiquity, and you're just talking to Virgil or something. What are you excited about?

Speaker 2 I don't know. I mean, I think the nature of this stuff is that you get surprised, right? Any specific thing that I thought would be valuable, we'd probably be building already.

Speaker 2 But I think you'll get distilled versions, smaller versions. One thing is that I don't think 8 billion is quite small enough for a bunch of use cases, right?

Speaker 2 I think like over time, I'd love to get, you know, a billion parameter model or a 2 billion parameter model, or even like a, I don't know, maybe like a 500 million parameter model and see what you can do with that.

Speaker 2 Because if with 8 billion parameters we're basically nearly as powerful as the largest Llama 2 model, then with a billion parameters you should be able to do something that's interesting, right, and faster,

Speaker 2 good for classification, or a lot of the basic things that people do before

Speaker 2 understanding the intent of a user query and feeding it to the most powerful model to hone what the prompt should be.
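A minimal sketch of that pattern, under the assumption it works roughly like a router: a small, cheap model classifies intent and hones the prompt before the expensive model is invoked at all. Every stub here is made up.

```python
# 'Small model in front of a big model': a cheap classifier handles
# intent detection, and only queries that need it reach the big model.
# All three functions are made-up stand-ins.

def small_model_classify(query: str) -> str:
    """Stand-in for a ~1B-parameter intent classifier."""
    return "question" if "?" in query else "chitchat"

def small_model_hone(query: str) -> str:
    """Stand-in for prompt honing before the big model sees it."""
    return query.strip().rstrip("?") + "?"

def big_model_answer(prompt: str) -> str:
    """Stand-in for the large (70B/405B-class) model."""
    return f"<detailed answer to: {prompt}>"

def respond(query: str) -> str:
    if small_model_classify(query) == "chitchat":
        return "Hey!"                       # cheap path, no big-model call
    return big_model_answer(small_model_hone(query))

print(respond("hey there"))
print(respond("how do transformers use attention?"))
```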

Speaker 2 So I don't know, I think that's one thing that maybe the community can help fill in. We're also thinking about getting around to distilling some of these ourselves, but right now the GPUs are pegged training the 405.

Speaker 1 Okay, so you have all these GPUs. I think you said 350,000 by the end of the year.

Speaker 2 That's the whole fleet. We built two clusters of, I think, 22,000 or 24,000 GPUs, which are the single clusters that we have for training the big models.

Speaker 2 I mean, obviously, across a lot of the stuff that we do, a lot of our capacity goes towards training the Reels models and Facebook News Feed and Instagram feed.

Speaker 2 And then inference is a huge thing for us because we serve a ton of people, right? So our ratio of inference compute required to

Speaker 2 training is probably much higher than most other companies that are doing this stuff, just because of the sheer volume of the community that we're serving.

Speaker 1 It was really interesting, in the material you shared with me before, that you trained it on more data than is compute-optimal just for training. Because inference is such a big deal for you guys, and also for the community, it makes sense to just have this thing have trillions of tokens in there.

Speaker 2 Yeah. Although one of the interesting things about it, that we saw even with the 70 billion, is that we thought it would get more saturated.

Speaker 2 We trained it on around 15 trillion tokens.

Speaker 2 I guess our prediction going in was that it was going to asymptote more, but even by the end it was still learning, right?

Speaker 2 It's like, we probably could have fed it more tokens and it would have gotten somewhat better.

Speaker 2 But at some point, you know, you're running a company, and there are these meta-reasoning questions: do I want to spend our GPUs on training this 70 billion model further?

Speaker 2 Or do we want to get on with it so we can start testing hypotheses for Llama 4? So we needed to

Speaker 2 make that call, and I think we got to a reasonable balance for this version of the 70 billion.

Speaker 2 There will be others in the future, like the 70 billion multimodal one that'll come over the next period. But

Speaker 2 yeah, I mean,

Speaker 2 that was fascinating, that the architectures at this point can just take so much data.
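Some rough arithmetic behind "more data than is compute-optimal": using the common C ≈ 6ND estimate of training FLOPs and the Chinchilla-style rule of thumb of roughly 20 tokens per parameter, 15 trillion tokens is far past the "optimal" point for these sizes, which is the deliberate trade when inference cost dominates. These constants are standard rules of thumb, not Meta's published methodology.

```python
# Rough 'overtraining' arithmetic. Rules of thumb: training FLOPs
# C ~= 6 * N * D, and Chinchilla-style compute-optimal data is roughly
# D* ~= 20 tokens per parameter. Illustrative only.

TOKENS_TRAINED = 15e12  # ~15 trillion tokens, per the conversation

for name, n in [("Llama 3 8B", 8e9), ("Llama 3 70B", 70e9)]:
    ratio = TOKENS_TRAINED / (20 * n)
    flops = 6 * n * TOKENS_TRAINED
    print(f"{name}: ~{ratio:.0f}x the compute-optimal token count, ~{flops:.1e} FLOPs")
# The 8B comes out ~94x 'overtrained'. The extra training compute buys a
# smaller model that's much cheaper to serve, which is the point when the
# inference-to-training ratio is as high as Mark describes.
```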

Speaker 1 Yeah, that's really interesting. So, what does this imply about future models?

Speaker 1 You mentioned that the Llama 3 8B is better than the Llama 2 70B.

Speaker 2 No, no, it's nearly as good.

Speaker 1 Okay, nearly as good. But does that mean the Llama 4 70B will be as good as the Llama 3 405B? What will those future models look like?

Speaker 2 This is one of the great questions, right? I think no one knows.

Speaker 2 It's one of the trickiest things in the world to plan around: when you have an exponential curve, how long does it keep going for? And

Speaker 2 I think it's likely enough that it will keep going, that it is worth investing the tens or

Speaker 2 hundred billion plus in building the infrastructure, on the assumption that if it keeps going, you're going to get some really amazing things that make amazing products. But

Speaker 2 I don't think anyone in the industry can really tell you that it will continue scaling at that rate for sure, right? In general, in history, you hit bottlenecks at certain points.

Speaker 2 And now there's so much energy on this that maybe those bottlenecks get knocked over pretty quickly. But

Speaker 2 I don't know. I think that's an interesting question.

Speaker 1 What does the world look like where there aren't these bottlenecks?

Speaker 1 Suppose progress just continues at this pace, which seems plausible.

Speaker 1 Zooming out, there are going to be different bottlenecks, right? If not training, then... oh yeah, go ahead.

Speaker 2 Well, I think at some point,

Speaker 2 you know, over the last few years, there was this issue of GPU production. So even companies that had the money to pay for the GPUs

Speaker 2 couldn't necessarily get as many as they wanted, because there were all these supply constraints.
Speaker 2 couldn't necessarily get as many as they wanted because there were all these supply constraints.

Speaker 2 Now I think that's getting less. So now you're seeing a bunch of companies think about, wow, we should really invest a lot of money in building out these things.

Speaker 2 And I think that that will go for

Speaker 2 some period of time.

Speaker 2 I think

Speaker 2 there is a capital question of like, okay,

Speaker 2 at what point does it stop being worth it to put the capital in? But I actually think before we hit that, you're going to run into energy constraints, right? Because

Speaker 2 I mean, I don't think anyone's built a single gigawatt training cluster yet, right? And

Speaker 2 then you run into these things that just end up being slower in the world. Like getting

Speaker 2 energy permitted is like a very heavily regulated government function, right? So you're going from, on the one hand, software, which is somewhat regulated.

Speaker 2 I'd argue that it is more regulated than I think a lot of people

Speaker 2 in the tech community feel, although it's obviously different.

Speaker 2 If you're starting a small company, maybe you feel that less. If you're a big company, we interact with lots of different governments and regulators, and we have lots of rules that we need to follow and make sure we do a good job with around the world. But I think there's no doubt that energy, if you're talking about building large new power plants or large build-outs and then building transmission lines that cross

Speaker 2 other private or public land, that is just a heavily regulated thing. So you're talking about many years of lead time. So if we wanted to stand up some massive facility to power that,

Speaker 2 I think that that's a very long-term project.

Speaker 2 So I don't know. I think people will do it. But I don't think that this is something that can be quite as magical as: okay, you get a level of AI, you get a bunch of capital, and you put it in, and then all of a sudden the models just take off. I think you do hit different bottlenecks along the way.

Speaker 1 Yeah. Is there something, a project, maybe AI related, maybe not, that even a company like Meta doesn't have the resources for?

Speaker 1 Like, if your R&D budget or your CapEx budget were 10x what it is now, then you could pursue it. It's in the back of your mind.

Speaker 1 But Meta today can't pursue it, because you can't even issue stock or bonds for it. It's just 10x bigger than your budget.

Speaker 2 Well, I think energy is one piece. I think we would probably build out bigger clusters than we currently can

Speaker 2 if we could get the energy to do it.

Speaker 1 So is it fundamentally money-bottlenecked in the limit? Like, if you had a trillion dollars...

Speaker 2 I think it's time.

Speaker 2 But it depends on how far the exponential curves go. I think a number of companies are working on this. Right now, I think

Speaker 2 a lot of data centers are on the order of 50 megawatts or 100 megawatts, or a big one might be 150 megawatts.

Speaker 2 So you take a whole data center and you fill it up with all the stuff you need to do for training, and you build the biggest cluster you can. I think a bunch of companies are running at stuff like that.

Speaker 2 But then when you start getting into building a data center that's 300 megawatts or 500 megawatts or a gigawatt: I mean, no one has built a single gigawatt data center yet.

Speaker 2 So I think it will happen, right? I mean, this is only a matter of time, but it's not going to be like next year, right?

Speaker 2 I think some of these things will take, I don't know, some number of years to build out.

Speaker 2 And then, just to put this in perspective: I think a gigawatt is around the size of a meaningful nuclear power plant, only going towards training a model.
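For scale, a quick back-of-envelope on cluster power draw. The per-GPU wattage and overhead factor are rough public ballparks, not Meta's actual numbers.

```python
# Back-of-envelope cluster power. Ballparks: an H100 is specced around
# 700 W, and PUE ~1.3 covers cooling and other data center overhead.

def cluster_megawatts(num_gpus: int, watts_per_gpu: float = 700.0,
                      pue: float = 1.3) -> float:
    return num_gpus * watts_per_gpu * pue / 1e6

for gpus in (24_000, 100_000, 350_000):
    print(f"{gpus:>7,} GPUs: ~{cluster_megawatts(gpus):5.0f} MW")
# ~24k GPUs lands around 22 MW and 350k around 320 MW: already past a
# typical 50-150 MW data center, while a gigawatt cluster (~1,000 MW) is
# roughly a full nuclear plant's output going to one training run.
```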

Speaker 1 Didn't Amazon do this? They have, like, a 950 megawatt one.

Speaker 2 Yeah, I'm not exactly sure what they did. You'd have to ask them.

Speaker 1 But it doesn't have to be in the same place, right? If distributed training works, it can be distributed.

Speaker 2 That, I think, is a big question: basically, how that's going to work.

Speaker 2 And I do think in the future, it seems quite possible that more of what we call training for these big models is actually

Speaker 2 more along the lines of inference generating synthetic data to then go feed into the model. So I don't know what that ratio is going to be, but I consider

Speaker 2 the generation of synthetic data to be more inference than training today. But obviously, if you're doing it in order to train a model, it's part of the broader training process.
So

Speaker 2 I don't know.

Speaker 2 That's an open question, kind of where the balance of that lands and how that plays out.
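A minimal sketch of the inference-heavy synthetic-data loop being described: sample many candidates from the current model, keep the ones that pass a verifier, and feed the survivors back as training data. The generator and verifier below are stubs.

```python
import random

# Synthetic data generation: most of the compute is inference (sampling
# candidates); only verified samples become training data for the next
# model. Both functions are stand-in stubs.

def generate(prompt: str) -> str:
    """Stub for sampling one candidate answer from the current model."""
    return f"candidate-{random.randint(0, 9)} for {prompt!r}"

def verify(candidate: str) -> bool:
    """Stub verifier: e.g. run the code, check the math, use a judge model."""
    return candidate.startswith("candidate-7")  # arbitrary stand-in filter

prompts = ["solve x^2 = 9", "write a sorting function"]
training_data = [
    (p, c)
    for p in prompts
    for c in (generate(p) for _ in range(50))  # heavy inference per prompt
    if verify(c)
]
print(f"kept {len(training_data)} verified samples for the next training run")
```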

Speaker 1 If that's the case,

Speaker 1 would that potentially also be the case with Llama 3,

Speaker 1 and maybe Llama 4 onwards? Where you put this out, and if somebody has a ton of compute, then using the models that you've put out, they can just keep making these things arbitrarily smarter.

Speaker 1 Like, some random country with a ton of compute, Kuwait or the UAE, can just use Llama 4 to make something much smarter.

Speaker 2 I do think that there are going to be dynamics like that. But I also think that

Speaker 2 there is a fundamental limitation from the network architecture, or the model architecture.

Speaker 2 So I think a 70 billion model that we trained with the Llama 3 architecture can get better, right? It can keep going. Like I was saying, we felt like if we kept on feeding it more data, or rotated the high-value tokens through again, then it would continue getting better.

Speaker 2 But

Speaker 2 and we've seen a bunch of other people around the world, different companies, basically take the Llama 2 70 billion base, take that model architecture, and then build a new model.

Speaker 2 It's still the case that when you make a generational improvement, like the Llama 3 70 billion or the Llama 3 405, there's nothing open source like that today. It's a big step function, and what people are going to be able to build on top of that I don't think can go infinitely from there.

Speaker 2 There can be some optimization in that until you get to the next step function.

Speaker 2 Yeah.

Speaker 1 Okay. So let's zoom out a little bit from specific models, and even the many-year lead times you would need to get energy approvals and so on.

Speaker 1 Like big picture, these next couple of decades, what's happening with AI?

Speaker 1 Does it feel like another technology like metaverse or social, or does it feel like a fundamentally different thing in the course of human history?

Speaker 2 I think it's going to be pretty fundamental. I think it's going to be more like

Speaker 2 the creation of computing in the first place. Right.
So

Speaker 2 you'll get all these new apps

Speaker 2 in the same way that when you got the web or you got mobile phones, you got like people basically rethought all these experiences, and a lot of things that weren't possible before now became possible.

Speaker 2 So I think that will happen, but I think it's a much lower-level innovation.

Speaker 2 It's going to be more like going from people didn't have computers to people have computers, is my sense.

Speaker 2 But it's also, I don't know, very hard to reason about exactly how this goes. I tend to think that,

Speaker 2 you know, in like the cosmic scale, obviously it'll happen quickly over a

Speaker 2 couple of decades or something. But I do think that there is some set of people who are afraid of like,

Speaker 2 you know, it really just kind of spins and goes from being like somewhat intelligent to extremely intelligent overnight.

Speaker 2 And I just think that there are all these physical constraints that make that unlikely to happen.

Speaker 2 I just don't, I don't really see that playing out.

Speaker 2 So I think we'll have time to acclimate a bit. But it will really change the way that we work and give people all these creative tools to do different things.

Speaker 2 I think it's going to really enable people to do the things that they want a lot more, is my view.

Speaker 1 Okay, so maybe not overnight. But is it your view that, on a cosmic scale, it's like humans evolved, and then AI happened, and then they went out through the galaxy?

Speaker 1 Maybe it takes many decades, maybe it takes a century, but is that the grand scheme of what's happening right now in history?

Speaker 2 Sorry, in what sense?

Speaker 1 I mean, in the sense that there are other technologies, like computers and even fire, but AI happening is as significant as humans evolving in the first place?

Speaker 2 I think that's tricky. The history of humanity, I think, has been people thinking that certain aspects of humanity are really unique in different ways, and then coming to grips with the fact that that's not true, while humanity is actually still super special.

Speaker 2 It's like we thought that the Earth was the center of the universe. It's not, but humans are still pretty awesome, right? And pretty unique.

Speaker 2 I think that another bias that people tend to have is thinking that intelligence is somehow fundamentally connected to life. And it's not actually clear that it is. I don't know that we have a clear enough definition of consciousness or life to fully interrogate this, but

Speaker 2 I know there's all the science fiction about, okay, you create intelligence and now it like starts taking on all these human-like behaviors and things like that.

Speaker 2 But I actually think that the current incarnation of all this stuff, at least, kind of feels like it's going in a direction where intelligence can be pretty separated from consciousness and agency and things like that.

Speaker 2 That, I think, just makes it a super valuable tool. So I don't know. I mean, obviously

Speaker 2 it's very difficult to predict what direction this stuff goes in over time, which is why I don't think anyone should be dogmatic about

Speaker 2 how they plan to develop it or what they plan to do. I think you want to kind of look at like each release.

Speaker 2 It's like we're obviously very pro-open source, but I haven't committed that we're going to release every single thing that we do.

Speaker 2 But basically, I'm just generally very inclined to think that open sourcing it is going to be good for the community and also good for us, right?

Speaker 2 Because we'll, we'll benefit from the innovations.

Speaker 2 But if at some point there's some qualitative change in what the thing is capable of, and we feel like it's just not responsible to open source it, then we won't.

Speaker 2 So I don't know, it's all very difficult to predict.

Speaker 1 Yeah, what is a kind of qualitative change, a specific thing, where you're training Llama 4 or Llama 5, you've seen this, and you think, you know what, I'm not sure about open sourcing it?

Speaker 2 I think it's a little hard to answer that in the abstract, because there are negative behaviors that any product can exhibit where, as long as you can mitigate them, it's okay, right?

Speaker 2 So,

Speaker 2 I mean, there's bad things about social media that we work to mitigate, right?

Speaker 2 There are bad things about Llama 2 that we spend a lot of time trying to mitigate, making sure that it's not, you know, helping people commit violent acts or things like that, right?

Speaker 2 I mean, that doesn't mean that it's an autonomous or intelligent agent. It just means that it's learned a lot about the world and it can answer a set of questions that we think it would be unhelpful for it to answer.

Speaker 2 So

Speaker 2 I don't know. I think the question isn't really what behaviors would it show, it's what things would we not be able to mitigate after it shows that.
And

Speaker 2 I don't know.

Speaker 2 I think that there's so many ways in which something can be good or bad that it's hard to actually enumerate them all up front. If you even look at like what we've had to deal with in

Speaker 2 social media and the different types of harms, we've basically gotten to 18 or 19 categories of harmful things that people do.

Speaker 2 And we've basically built AI systems to try to go identify what those things are that people are doing and try to make sure that that doesn't happen on our network as much as possible.

Speaker 2 So over time, I think you'll be able to break this down into more of a taxonomy too. And this is a thing that we spend time researching, because we want to make sure that we understand that.
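A toy illustration of the taxonomy approach: score content against an explicit list of harm categories rather than one generic "bad" label. The categories and keyword cues are invented stand-ins; in production these would be large learned classifiers, one per category.

```python
# Toy multi-label moderation: check content against an explicit taxonomy
# of harm categories. Keyword cues are stand-ins for learned classifiers.

TAXONOMY = {
    "incitement_to_violence": ("attack them", "hurt them"),
    "spam": ("buy now", "click here"),
    "impersonation": ("official account", "i am the ceo"),
}

def flag(content: str) -> list[str]:
    text = content.lower()
    return [cat for cat, cues in TAXONOMY.items() if any(c in text for c in cues)]

print(flag("Click here to buy now!!!"))   # ['spam']
print(flag("Lovely weather today"))       # [] -- avoiding false positives
                                          # matters as much as catching harms
```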

Speaker 1 So one of the things I asked Mark is what industrial scale use of LLMs would look like.

Speaker 1 You see this in previous technological revolutions where at first they're thinking in a very small scale way about what's enabled. And I think that's what chatbots might be for LLMs.

Speaker 1 And I think the large scale use case might look something like what V7 Go is. And by the way, it's made by V7 Labs, who's sponsoring this episode. So it's like a spreadsheet.
So it's like a spreadsheet.

Speaker 1 You put in raw information like documents, images, whatever, and they become rows. And the columns are populated by an LLM of your choice.
And in fact, I used it to prepare for Mark.

Speaker 1 So I fed in a bunch of blog posts and papers from Meta's AI Research. And as you can see, if you're on YouTube, it summarizes and extracts exactly the information I want as columns.

Speaker 1 And obviously, mine is a small use case, but you can imagine, for example, a company like FedEx has to process half a million documents a day. Obviously, a chatbot can't do that.

Speaker 1 A spreadsheet can, because this is just like a fire hose of intelligence in there, right? Anyways, you can learn more about them at v7labs.com/go or the link in the description. Back to Mark.

Speaker 1 Yeah. Like, it seems to me it would be a good idea.
I would be disappointed in a future where AI systems aren't broadly deployed and everybody doesn't have access to them. Yeah.

Speaker 1 At the same time, I want to better understand the mitigations.

Speaker 1 Because if the mitigation is the fine-tuning, well, the whole thing about open weights is that you can then

Speaker 1 remove the fine-tuning, which is often superficial on top of these capabilities. Like if it's like talking on Slack with a biology researcher, and again, I think like models are very far from this.

Speaker 1 Right now, they're like Google search.

Speaker 1 But it's like I can show them my petri dish and they can explain, like, here's why your smallpox sample didn't grow. Here's what to change.

Speaker 1 How do you mitigate that? Because somebody can just like fine-tune that in there, right?

Speaker 2 Yeah, I mean,

Speaker 2 that's true. I think a lot of people will basically use the off-the-shelf model.
And some people who have basically bad faith are going to try to strip out all the bad stuff.

Speaker 2 So I do think that that's an issue. The

Speaker 2 The flip side of this is that, and this is one of the reasons why I'm kind of philosophically so pro-open source, is I do think that a concentration of AI in the future has the potential to be as dangerous as kind of it being widespread.

Speaker 2 So I think a lot of people are,

Speaker 2 they think about the questions of, okay, well, if we can do this stuff, is it bad for it to be out in the wild and widely available?

Speaker 2 I think another version of this is: okay, well, it's probably also pretty bad for

Speaker 2 one institution to have an AI that is way more powerful than everyone else's AI. One security analogy that I think of is:

Speaker 2 you know, it doesn't take AI.

Speaker 2 There are security holes in so many different things, and if you could travel back in time a year or two years, right? That's not AI. Let's say you just have one or two years more knowledge of the security holes. You could pretty much hack into any system, right?

Speaker 2 So it's not that far-fetched to believe that

Speaker 2 a very intelligent AI would probably be able to identify some holes and basically be like a human who could potentially go back in time a year or two and compromise all these systems.

Speaker 2 Okay, so how have we dealt with that as a society? Well,

Speaker 2 one big part is open source software, which makes it so that when improvements are made to the software, it doesn't just get stuck in one company's products, but can be broadly deployed to a lot of different systems, whether it's banks or hospitals or government stuff.

Speaker 2 And as the software gets hardened, which happens because more people can see it and more people can bang on it, and there are standards on how this stuff works,

Speaker 2 the world can kind of get upgraded together pretty quickly.

Speaker 2 And I think that a world where AI is very widely deployed, in a way where it's gotten hardened progressively over time, is one where all the different systems will be in check, in a way that seems fundamentally more healthy to me than one where this is more concentrated.

Speaker 2 So there are risks on all sides, but I think that that's one risk that I think

Speaker 2 people, I don't hear them talking about quite as much. I think like there's sort of the risk of like, okay, well, what if the AI system does something bad?

Speaker 2 I am more like, you know, I stay up at night more worrying, well, what if like some actor that

Speaker 2 whatever, it's like from wherever you sit, there's going to be some actor who you don't trust.

Speaker 2 If they're the ones who have the super strong AI, whether it's some other government that is an opponent of our country, or some company that you don't trust, or whatever it is.

Speaker 2 I think that that's potentially a much bigger risk.

Speaker 1 As in, they could like overthrow our government because they have a weapon that nobody else has.

Speaker 2 Cause a lot of mayhem, right? I think the intuition is that this stuff ends up being pretty important and

Speaker 2 valuable for both economic and security and other things. And I don't know. I just think, yeah,

Speaker 2 if someone who you don't trust or is an adversary of you gets something that is more powerful, then

Speaker 2 I think that that could be an issue.

Speaker 2 And I think probably the best way to mitigate that is to have good open source AI that basically becomes the standard and in a lot of ways kind of can become the leader.

Speaker 2 And in that way, it just ensures that it's a much more kind of even and balanced playing field. Yeah.

Speaker 1 That seems plausible to me. And if that works out, that would be the future I prefer.

Speaker 1 I guess I want to understand, mechanistically, how, if somebody was going to cause mayhem with AI systems, the fact that there are other open source systems in the world prevents that. Like the specific example of somebody coming at us with a bioweapon: is it just that we'll do a bunch of R&D in the rest of the world to figure out vaccines really fast? What's happening?

Speaker 2 If you take the computer security one that I was talking about, I think someone with a weaker AI trying to hack into a system that is protected by a stronger AI will succeed less.

Speaker 1 Right. But in terms of, how do you know everything in the world is like that?

Speaker 1 Like, what if bioweapons aren't like that?

Speaker 2 No, I mean, I don't know that everything in the world is like that.

Speaker 2 I guess bioweapons are one of the areas where the people who are most worried about this stuff are focused.

Speaker 2 And I think it makes a lot of sense to think about that.

Speaker 2 And I think that there are certain mitigations. You can try to not train certain knowledge into the model, right? There's different things, but

Speaker 2 yeah, I mean, at some level, I mean, if you get a sufficiently bad actor and you don't have other AI that can sort of balance them

Speaker 2 and understand what's going on and what the threats are, then

Speaker 2 that could be a risk. So I think that that's one of the things that we need to watch out for.

Speaker 1 Is there something you could see in the deployment of these systems where

Speaker 1 you observe, like, you're training Llama 4 and it lied to you because it thought you weren't noticing or something. And you're like, whoa,

Speaker 1 what's going on here?

Speaker 1 This is probably not likely with a Llama 4-type system, but is there something you can imagine like that where you'd

Speaker 1 be really concerned about deceptiveness? And if billions of copies of things are out in the wild?

Speaker 2 Yeah, I mean, I think that that's not necessarily, I mean, right now,

Speaker 2 we see a lot of hallucinations, right? So I think it's more that.

Speaker 2 I think it's an interesting question how you would tell the difference between a hallucination and deception. But yeah,

Speaker 2 look, I mean, I think that there's a lot of risks and things to think about.

Speaker 2 The flip side of all this is that there are also a lot of,

Speaker 2 I try to, in running our company, at least, balance what I think of as these longer-term theoretical risks

Speaker 2 with what I actually think are quite real risks that exist today. So like

Speaker 2 when you talk about deception, the form of that that I worry about most is people using this to generate misinformation and then, like, pump that through, whether it's our networks or others.

Speaker 2 So, the way that we've basically combated a lot of this type of harmful content is by building AI systems that are smarter than the adversarial ones.

Speaker 2 And I guess this kind of informs part of my theory on this: if you look at the different types of harm that people do or try to do through social networks,

Speaker 2 There are ones that are not very adversarial. So, for example, like

Speaker 2 hate speech, I would say is not super adversarial in the sense that people aren't getting

Speaker 2 better at being racist, right?

Speaker 2 That's one where I think the AIs are generally just getting way more sophisticated faster than people are at those issues.

Speaker 2 And we have issues both ways. People do bad things, whether they're trying to incite violence or something else.

Speaker 2 But we also have a lot of false positives, right? So where we basically censor stuff that we shouldn't and I think understandably make a lot of people annoyed.

Speaker 2 So I think having an AI that just gets increasingly precise on that, that's going to be good over time.
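To make the precision point concrete: moderation systems of this kind are, at bottom, classifiers with a threshold, and the false-positive problem falls out of where that threshold sits. A toy sketch in Python, with made-up scores and a hypothetical moderate helper rather than anything from Meta's actual systems:

```python
# Toy illustration of the precision/false-positive tradeoff described
# above. Scores and threshold are invented; real systems use learned
# classifiers, but the tradeoff works the same way.
posts = [
    ("benign joke", 0.35),
    ("borderline rant", 0.62),
    ("clear policy violation", 0.97),
]

def moderate(posts, threshold):
    """Flag every post whose (hypothetical) model score meets the threshold."""
    return [text for text, score in posts if score >= threshold]

# A low threshold catches more violations but censors benign content
# (false positives); raising it flips the tradeoff.
print(moderate(posts, threshold=0.3))  # flags all three, including the joke
print(moderate(posts, threshold=0.9))  # flags only the clear violation
```

A more precise model pushes benign and violating scores further apart, which is what lets one threshold reduce both the missed violations and the wrongful takedowns at the same time.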

Speaker 2 But let me give you another example: nation states trying to interfere in elections. That's an example where they absolutely have cutting-edge technology and absolutely get better each year. So we block some technique, they learn what we did, and they come at us with a different technique. It's not like a person trying to, I don't know, say mean things. They have a goal, they're sophisticated, they have a lot of technology. In those cases, I still think the ability to have our AI systems grow in sophistication at a faster rate than theirs matters. It's an arms race, but I think we're at least currently winning that arms race.

Speaker 2 So I don't know. This is a lot of the stuff that I spend time thinking about. Yes, it is possible that with Llama 4 or Llama 5 or Llama 6, we need to think about what behaviors we're observing, and it's not just us. Part of the reason why you make this open source is that there are a lot of other people who study this too. So we want to see what other people are observing, what we're observing, what we can mitigate, and then we'll make our assessment on whether we can make it open source. I think for the foreseeable future, I'm optimistic we will be able to. And in the near term, I don't want to take our eye off the ball of the actual bad things that people are trying to use the models for today. Even if they're not existential, they're pretty bad day-to-day harms that we're familiar with from running our services. That's actually a lot of what we have to spend our time on as well.

Speaker 1 Actually, I found the synthetic data thing really curious. With current models, it makes sense why there might be an asymptote from doing the synthetic data thing again and again. But if they get smarter, and you use the kind of techniques you talk about in the paper or the blog post that's coming out on the day this will be released, where it goes with the chain of thought that is the most correct, why wouldn't this lead to a loop? Of course it wouldn't be overnight, but over many months or years of training, potentially with a smarter model, it gets smarter, makes better output, gets smarter, and so forth.

Speaker 2 Well, I think it could, within the parameters of whatever the model architecture is. It's just that, at some level, I don't know. I just don't think today's 8-billion-parameter models are going to be able to get as good as the state-of-the-art multi-hundred-billion-parameter models that are incorporating new research into the architecture itself.

Speaker 1 But those will be open source as well, right?

Speaker 2 Well, yeah, subject to all the questions that we just talked about. We would hope that that'll be the case. But at each point, I don't know, it's like when you're building software: there's a ton of stuff you can do with software, but at some level you're constrained by the chips that it's running on, right? So there are always going to be different physical constraints. How big the models are is going to be constrained by how much energy you can get and use for inference.

Speaker 2 So I guess I'm simultaneously very optimistic that this stuff will continue to improve quickly, and also a little more measured about it than I think some people are. I just don't think the runaway case is a particularly likely one.

Speaker 1 I think it makes sense to keep your options open. There's so much we don't know. There's a case in which it's really important to keep the balance of power so that nobody becomes a totalitarian dictator. There's a case in which you don't want to open source the architecture, because China could use it to catch up to America's AIs, and there's an intelligence explosion and they win that. A lot of things seem possible. Keeping your options open and considering all of them seems reasonable.

Speaker 2 Yeah.

Speaker 1 Let's talk about some other things.

Speaker 1 Okay. Metaverse: what time period in human history would you be most interested in going into? Anywhere from 100,000 BCE to now. You just want to see what it was like.

Speaker 2 Huh? It has to be the past?

Speaker 1 Oh, yeah, it has to be the past.

Speaker 2 I don't know. I mean, I have the periods of time that I'm interested in. I'm really interested in American history and classical history, and I'm really interested in the history of science too. So I'd actually be interested in seeing and trying to understand more about how some of the big advances came about. All we have are somewhat limited writings about some of that stuff.

Speaker 2 I'm not sure the metaverse is going to let you do that, because it's going to be hard to go back in time for things that we don't have records of. And I'm actually not sure that going back in time is going to be that important a use case. I think it's going to be cool for history classes and stuff, but that's probably not the use case I'm most excited about for the metaverse overall.

Speaker 2 I think the main thing is just the ability to feel present with people no matter where you are. I think that's going to be killer. In the AI conversation we're having, so much of it is about the physical constraints that underlie all of this, right? One lesson of technology is that you want to move things from the physical-constraint realm into software as much as possible, because software is so much easier to build and evolve. And you can democratize it more, because not everyone is going to have a data center, but a lot of people can write code and take open source code and modify it. The metaverse version of this is that enabling realistic digital presence is going to make an absolutely huge difference, so that people don't feel like they have to physically be together for as many things. Now, I think there are going to be things that are better about being physically together, so these things aren't binary. It's not going to be like, okay, now you don't need to do that anymore. But overall, I think it's just going to be really powerful for socializing, for feeling connected with people, for working, for parts of industry, for medicine, for so many things.

Speaker 1 I want to go back to something you said at the beginning of the conversation: you didn't sell the company for a billion dollars, and with the metaverse, you knew you were going to keep at it even though the market was hammering you for it. I'm actually curious: what is the source of that edge? You said values, you have this intuition, but everybody says that, right? If you had to say something specific to you, how would you express what that is? Why were you so convinced about the metaverse?

Speaker 2 Well, I think those are different questions. What are the things that power me? I think we've talked about a bunch of the themes. I just really like building things. I specifically like building things around how people communicate, and understanding how people express themselves and how people work. When I was in college, I studied computer science and psychology; I think a lot of other people in the industry studied just computer science. So it's always been the intersection of those two things for me. But it's also this really deep drive. I don't know how to explain it, but I just feel, constitutionally, like I'm doing something wrong if I'm not building something new.

Speaker 2 So even when we're putting together the business case for investing $100 billion in AI, or some huge amount in the metaverse, we have plans that I think make it pretty clear that if our stuff works, it'll be a good investment. But you can't know for certain from the outset. So there are all these arguments that people have, whether with advisors or different folks: how are you confident enough to do this? And it's like, well, the day I stop trying to build new things, I'm just done. I'm going to go build new things somewhere else. I'm fundamentally incapable of running something, or of living my own life, without trying to build new things that I think are interesting. That's not even a question for me. Whether we're going to take a swing at building the next thing, I'm just incapable of not doing that.

Speaker 2 And I don't know, I'm kind of like this in all the different aspects of my life. Our family built this ranch in Kauai, and I worked on designing all these buildings. We started raising cattle, and I'm like, all right, I want to make the best cattle in the world. So how do we architect this so we can figure that out and build up all the stuff we need to try to do it? I don't know, that's me. What was the other part of the question?

Speaker 1 Look, Meta is just a really amazing tech company, right? They have all these great software engineers, and even they work with Stripe to handle payments.

Speaker 1 And I think that's just a really notable fact that Stripe's ability to engineer these checkout experiences is so good that big companies like Ford, Zoom, Meta, even OpenAI, they work with Stripe to handle payments.

Speaker 1 Because just think about how many different possibilities you have to handle. If you're in a different country, you'll pay a different way.

Speaker 1 And if you're buying a certain kind of item, that might affect how you decide to pay.

Speaker 1 And Stripe is able to test these fine-grained optimizations across tens of billions of transactions a day to figure out what will convert people. And obviously, conversion means more revenue for you.

Speaker 1 And look, I'm not a big company like Meta or anything, but I've been using Stripe since long before they were advertisers. Stripe Atlas was just the easiest way for me to set up an LLC.

Speaker 1 And they have these payments and invoicing features that make it super convenient for me to get money from advertisers.

Speaker 1 And obviously, without that, it would have been much harder for me to earn money from the podcast. And so it's been great for me.
Go to stripe.com to learn more.

Speaker 1 Thanks to them for sponsoring the episode. Now back to Mark.

Speaker 1 I'm not sure. But I'm actually curious about something else: a 19-year-old Mark reads a bunch of antiquity and classics in high school and college. What important lesson did you learn from it? Not just interesting things you found. There aren't that many tokens you've consumed by the time you're 19, and a bunch of them were about the classics. Clearly, that was important in some way.

Speaker 2 I don't know, that's a good question. One of the things I thought was really fascinating is that when Augustus first became emperor, he was trying to establish peace. And there was no real conception of peace at the time. People's understanding of peace was that it's the temporary time between when your enemies will inevitably attack you again, so you get a short rest. And he had this view: look, we want to change the economy from being so mercenary and militaristic to actually being this positive-sum thing. It was a very novel idea at the time. I don't know, I think there's something really fundamental about that, in terms of the bounds on what people can conceive of at the time as rational ways to work.

Speaker 2 I mean, going back, this applies to both the metaverse and the AI stuff: a lot of investors and just different people can't wrap their heads around why we would open source this. It's like, "I don't understand. Open source? That must just be the temporary time between which you're making things proprietary, right?" But I actually think it's this very profound thing in tech, and it actually creates a lot of winners. I don't want to strain the analogy too much, but I do think that, a lot of times, there are models for building things that people often can't even wrap their heads around, how that would be a valuable thing for people to go do, or a reasonable state of the world. I think there are more reasonable things than people think.

Speaker 1 That's super fascinating. Can I give you my answer for what I was thinking you might have gotten from it?

Speaker 1 This is probably totally off, but it's just how young some of these people are who have very important roles in the empire. Caesar Augustus, by the time he's 19, is actually one of the most prominent people in Roman politics; he's leading battles and forming the Second Triumvirate. I wonder if you at 19 thought, "I can actually do this, because Caesar Augustus did this."

Speaker 2 I think that's an interesting example from a lot of history, American history too. One of my favorite quotes is this Picasso line that all children are artists, and the challenge is how you remain an artist when you grow up. Basically, when you're younger, I think it's just easier to have wild ideas. There are all these analogies to the innovator's dilemma that exist in your life, as well as in your company or whatever you've built. You're earlier on your trajectory, so it's easier to pivot and take in new ideas without disrupting other commitments you've made to different things. So I don't know, I think that's an interesting part of running a company: how do you stay dynamic?

Speaker 1 Going back to the investors and open source: the $10 billion model. Suppose it's totally safe, you've done these evaluations, and, unlike in this case, the evaluators can also fine-tune the model, which hopefully will be the case in future models. Would you open source that $10 billion model?

Speaker 2 Well, I mean, as long as it's helping us, then yeah.

Speaker 1 But would it? You spend $10 billion on R&D, and then it's open source for anybody?

Speaker 2 Well, here's a question we'll have to evaluate as time goes on too. We have a long history of open sourcing software. We don't tend to open source our products; we don't take the code for Instagram and make it open source. But we take a lot of the low-level infrastructure and make that open source. Probably the biggest one in our history was the Open Compute Project, where we took the designs for all of our servers, network switches, and data centers and made them open source, and it ended up being super helpful. A lot of people can design servers, but now the industry has standardized on our design, which meant that the supply chains basically all got built out around it. Volumes went up, so it got cheaper for everyone, and it saved us billions of dollars. So, awesome, right? Okay, so there are multiple ways open source could be helpful for us. One is if people figure out how to run the models more cheaply. We're going to be spending tens of billions, or $100 billion or more, over time on all this stuff. So if we can do that 10% more effectively, we're saving billions or tens of billions of dollars. That's probably worth a lot by itself, especially if there are other competitive models out there; it's not like our thing is giving away some kind of crazy advantage.

Speaker 1 So is your view that the training will be commoditized?

Speaker 2 I think there's a bunch of ways that this could play out. That's one.

Speaker 2 The other is that, well, commodity kind of implies that it's going to get very cheap because there are lots of options. The other direction this could go in is qualitative improvements.

Speaker 2 So, you mentioned fine-tuning, right? Right now it's pretty limited what you can do with fine-tuning the major models out there. There are some options, but generally not for the biggest models. So I think being able to do that, being able to do different app-specific things or use-case-specific things, or build them into specific toolchains, will not only enable more efficient development, it could enable qualitatively different things.

Speaker 2 Here's one analogy on this. One thing that I think generally sucks about the mobile ecosystem is that you have these two gatekeeper companies, Apple and Google, that can tell you what you're allowed to build, and there have been lots of times in our history where that mattered. There's the economic version of that, which is, all right, we build something and they just take a bunch of your money. But then there's the qualitative version, which is actually what upsets me more: there are a bunch of times when we've launched or wanted to launch features, and Apple's just like, "Nope, you're not launching that." And that sucks, right?

Speaker 2 So the question is: are we set up for a world like that with AI, where you're going to get a handful of companies that run these closed models, are in control of the APIs, and therefore get to tell you what you can build?

Speaker 2 Well, for one, I can say it is worth it for us to go build a model ourselves to make sure that we're not in that position. I don't want any of those other companies telling us what we can build. And from an open source perspective, I think a lot of developers don't want those companies telling them what they can build either. So the question is, what is the ecosystem that gets built out around that? What are the interesting new things? How much does that improve our products? I think there are a lot of cases where, if this ends up being like our databases or caching systems or architecture, we'll get valuable contributions from the community that'll make our stuff better. And then the app-specific work that we do will still be so differentiated that it won't really matter. We'll be able to do what we do, we'll benefit, and all the systems, ours and the community's, will be better because it's open source.

Speaker 2 There is one world where maybe it's not that: maybe the model just ends up being more of the product itself. In that case, I think it's a trickier economic calculation around whether you open source it, because then you are commoditizing yourself a lot. But from what I can see so far, it doesn't seem like we're in that zone.

Speaker 1 Do you expect to earn significant revenue from licensing your model to the cloud providers? So they have to pay you a fee to actually serve the model?

Speaker 2 We want to have an arrangement like that, but I don't know how significant it'll be. This is basically our license for Llama. In a lot of ways, it's a very permissive open source license, except that we have a limit for the largest companies using it. And the reason we put that limit in is not that we're trying to prevent them from using it. We just want them to come talk to us, because if they're going to basically take what we built, resell it, and make money off of it, then, okay: if you're Microsoft Azure or Amazon and you're going to be reselling the model, we should have some revenue share on that. So just come talk to us before you go do that.

Speaker 2 And that's how it's played out. For Llama 2, we basically just have deals with all these major cloud companies, and Llama 2 is available as a hosted service on all those clouds. I assume that as we release bigger and bigger models, that'll become a bigger thing. It's not the main thing we're doing, but I just think that if those companies are going to be selling our models, it makes sense that we should share in the upside of that somehow.

Speaker 1 With regards to the other open source dangers, I think those are genuinely legitimate points about the balance-of-power stuff, and potentially the harms you can get rid of because we have better alignment techniques or something. But I wish there were some sort of framework that Meta had. Other labs have this, where they say: if we see this concrete thing, then that's a no-go on the open source, or even potentially on deployment. Just writing it down, so the company is ready for it and people have expectations around it and so forth.

Speaker 2 Yeah, I think that's a fair point on the existential risk side. Right now, we focus more on the types of risks that we see today, which are more of these content risks. We have lines: we don't want the model to be doing things that help people commit violence or fraud, or harm people in different ways. So in practice for today's models, and I would guess the next generation and maybe even the generation after that, while it's somewhat more intellectually interesting to talk about the existential risks, I actually think the real harms that need more energy put into mitigation are the ones where someone takes a model and does something to hurt a person with today's parameters, the more mundane harms that we see today, like people committing fraud against each other. I just don't want to shortchange that. I think we have a responsibility to make sure we do a good job on that.

Speaker 1 Meta's a big company. You can handle both.
Yeah.

Speaker 2 Okay.

Speaker 1 So as far as the open source goes, I'm actually curious whether you think the impact of the open source from PyTorch, React, Open Compute, these things, has been bigger for the world than even the social media aspects of Meta. I've talked to people who use these services who think it's plausible, because a big part of the internet runs on these things.

Speaker 2 It's an interesting question. I mean, almost half the world uses our apps, so I think it's hard to beat that.

Speaker 2 But no, I think open source is really powerful as a new way of building things. And yeah, it's possible. It may be one of these things like Bell Labs: they were working on the transistor because they wanted to enable long-distance calling, and they did, and it ended up being really profitable for them. If you asked them five to ten years out what was the most useful thing they invented, they'd say, "Okay, we enabled long-distance calling, and now all these people are making long-distance calls." But if you ask a hundred years later, maybe the answer is different.

Speaker 2 I think that's true of a lot of the things we're building: Reality Labs, some of the AI stuff, some of the open source stuff. The specific products evolve and, to some degree, come and go, but the advances for humanity persist. And that's a cool part of what we all get to do.

Speaker 1 By when will the Llama models be trained on your own custom silicon?

Speaker 2 Soon. Not Llama 4. The approach we took was, first, to build custom silicon that could handle inference for our ranking and recommendation type of stuff: Reels, News Feed, ads. That was consuming a lot of GPUs. Once we were able to move that to our own silicon, we could use the more expensive NVIDIA GPUs only for training. At some point we will hopefully have silicon of our own that we can use, probably first for training some of the simpler things, then eventually for training these really large models. In the meantime, I'd say the program is going quite well; we're rolling it out methodically, and we have a long-term roadmap for it.

Speaker 1 Final question: this is totally out of left field, but if you were made CEO of Google+, could you have made it work?

Speaker 2 Google+? Oof. Well, I don't know. That's a very difficult counterfactual.

Speaker 1 Okay, then the real final question will be: when Gemini was launched, was there any chance that somebody in the office uttered "Carthago delenda est"?

Speaker 2 No, I think we're tamer now.

Speaker 1 Cool, cool.

Speaker 1 I was a mark.

Speaker 2 Yeah, I don't know. It's a good question. The problem is there was no CEO of Google+; it was just a division within a company.

Speaker 2 You asked before about what the scarcest commodity is, but you asked about it in terms of dollars. I actually think for most companies, of this scale at least, it's focus. When you're a startup, maybe you're more constrained on capital: you're just working on one idea and you might not have all the resources. But I think you cross some threshold at some point where, by the nature of what you're doing, you're building multiple things and creating more value across them, and you become more constrained on what you can direct to go well. There are always cases where something random and awesome happens in the organization that I don't even know about, and those are great. But in general, the organization's capacity is largely limited by what the CEO and the management team are able to oversee and manage. That's been a big focus for us. As I guess Ben Horowitz says, keep the main thing the main thing, and try to stay focused on your key priorities.
All right.

Speaker 1 Awesome. That was excellent, Mark.
Thanks so much. That was a lot of fun.

Speaker 2 Yeah, really fun. Thanks for having me.

Speaker 1 Yep, absolutely. Hey, everybody.
I hope you enjoyed that episode with Mark. As you can see, I'm now doing ads.
So if you're interested in advertising on the podcast, go to the link in the description.

Speaker 1 Otherwise, as you know, the most helpful thing you can do is just share the podcast with people who you think might enjoy it: your friends, group chats, Twitter, and, I guess, Threads.

Speaker 1 Yeah, I hope you enjoyed, and I'll see you on the next one.