Mark Zuckerberg on the AI bubble and Meta's new display glasses | ACCESS
Hi, everyone.
This is Pivot from New York Magazine and the Vox Media Podcast Network.
I'm Kara Swisher.
We're off for the holiday today, but we have a special episode from Access with Alex Heath and Ellis Hamburger for you.
In this episode, Alex and Ellis talk all things Mark Zuckerberg, from the newest Meta Ray-Ban Display glasses to the beverage selections in the new Meta AI lab.
Alex then sits down with Zuck himself ahead of the 2025 Meta Connect conference.
Enjoy, and we'll be back in your feeds on Friday.
I mean, didn't you just tell Trump you were going to spend like 600 billion? I mean, that's through 2028, which is a lot of money.
It is. And if we end up misspending a couple of hundred billion dollars, I think that that is going to be very unfortunate, obviously.
But what I'd say is, I actually think the risk is higher on the other side.
Welcome to Access from the Vox Media Podcast Network.
I'm Alex Heath, and Ellis, you are?
I am Ellis Hamburger, not your favorite sandwich, but your new favorite podcast host.
A lot of people, since I've been saying that I'm doing a show with you, have asked me if your actual last name is Hamburger.
It is.
Verified.
Yeah, and you have a hamburger on X, which is a flex.
That's why I'm so scared to leave.
Please don't make me leave.
Ellis, why are we doing a podcast?
I feel like there are so many podcasts, but, you know, I've been getting that question a lot too.
Yeah, Alex, I think we have great chemistry.
We've known each other for a long time.
We both, I think, see a different side of tech as it is today.
I feel like you're so well connected in big tech.
You love to schmooze with all the biggest founders.
I've got a good stranglehold.
Yeah, you do.
I've got a good stranglehold on the AI startup arena with the work that I've been doing at Meaning.
And I feel like we could just really bring something different.
I mean, you've been in media forever.
I started in media at The Verge, then went into the startup world for a while at Snapchat and The Browser Company.
And so I think we have something interesting.
I think we want to talk about the inside conversation, what people are really thinking and talking about, as opposed to just what's in the headlines.
So hopefully we can do that.
Yeah.
I think we both just wanted to make a show we wanted to listen to and didn't feel like a show like that existed.
And I hope we're going to make that.
I think we will.
And it feels really fun to be doing it with you.
And the way we're going to structure these episodes is, you know, it's a talk show, an interview show, pretty standard.
You and I are going to rap about some things happening in our world, things we think you're going to want to know about even if you're pretty plugged in, stuff that's coming soon, or stuff that just hit.
And then we're going to go usually to an interview, either with a big name.
This week we've got the one and only Mark Zuckerberg.
Next week, we've got Dylan Field, the CEO of Figma, for his first pod since the biggest tech IPO of the year.
We're going to have an interesting mix of, I think, big names and also early stage founders, some of whom you work with directly at Meaning, that we just think people should know about, that are going to be the companies you're going to be hearing about in the next few years, or maybe already are.
And we want to make something that feels good if you're tapped in, but also relevant if you just want to know more about this crazy tech world we're in.
Did I get all that?
Yeah, I think you got it all right.
I think the one other thing that's on my mind is I just want to have fun with this.
Tech has been a part of my life for so long.
And while this industry is so often mired in, you know, often valid skepticism, pessimism, and uncertainty, I think there is still so much brightness and optimism and fun to be had in building the future together.
Obviously, we all want to be honest and hold each other accountable.
But I think tech is culture these days.
And I want to cover the whole thing holistically versus just the last earnings call and what the latest ARPU and DAU numbers are.
Yeah, I can ask that stuff.
I mean, I'm pretty interested in the business and the strategy of these companies.
It's what I cover a lot at Sources, my new publication about the tech industry and Silicon Valley and AI.
I was previously deputy editor at The Verge, where I had a pretty successful newsletter called Command Line.
And now I'm an entrepreneur, I guess, like you, Ellis.
I'm on the other side.
And this pod is part of that.
I hope it feels part of the same cinematic universe, I guess.
But Sources is going to be, I guess, maybe where I also play a little more bad cop to the good cop of the mood we're trying to make on this show.
We found out that Zuck wanted to do the first episode.
That gave us a deadline I think neither of us were planning for, but it's been great.
It's been a good kick in the pants to get this thing going.
And, you know, usually we're going to do these interviews together.
This week is a little unusual just because of the timing.
And so it's just Zuck and I.
But, you know, I think there's also an element of the different perspectives we can bring to these conversations, right?
Like you've got this really interesting perspective working with a lot of these startups directly.
And then I've got my kind of more journalist POV of having met a lot of these leaders in big tech, especially in the big AI labs over the years.
Yeah.
Well, first I want to get at the real distinction between how you would interview Zuck and how I would interview Zuck.
I think you're going to be in one of those fireside chat chairs.
I want to go t-shirt shopping with him.
Maybe head to the jewelry store after, look at some chains.
That's the interesting combination I think that we represent, Alex.
So I got an early sneak peek of the conversation with Zuck, and I'll say this.
He seemed very confident, very comfortable, having fun.
What was the vibe you got from him during the conversation?
What was he wearing?
How was he feeling?
It's pretty crazy, you know, to go from sitting next to Trump to sitting next to Alex Heath.
It seems like he's got some swag these days.
I mean, yeah, the swag has been the Zuck arc for the last couple years, I would say.
I mean, you notice the chain is tucked in this year.
He was still wearing it, but it's tucked in, which I don't know if that means, you know, we've got business to do, which is maybe this new AI stuff.
I do think they feel pressure on these glasses.
They really want these glasses to be well received.
They're Ray-Ban-branded, and they're pretty wild.
They can do a lot.
It's not full augmented reality, but it's a pretty good heads-up display that can do texting, heads-up navigation, and a bunch of other stuff.
And they have this neural band that controls the glasses that feels like legitimate sci-fi.
It's one of the coolest demos you will do.
And they announced them this week at Connect, and people are going to start seeing them out in the wild.
They're kind of expensive, you know, they're $800.
I think these are very clearly an early adopter kind of product, prosumer, so to speak.
And I got to demo them last week, and they are really cool.
You cannot be a fan of tech like you and I are, Ellis, and not think these are cool.
Now, whether I'm going to wear them all day or they're going to start replacing the smartphone, I think that's very much TBD.
But yeah, man, I was pretty impressed.
So, what was the magic moment across all these different use cases that you tried with the glasses?
Because I feel like there have been so many ambitions for what they could be.
I mean, I was back at Snapchat in the early days when we launched Spectacles, and all you could do was take, like, a 10-second video or a photo with them.
We've seen people iterate on them over time.
I know you mentioned to me that it did a lot more than you expected.
Like, just first principles, what was the best thing you tried?
There was this moment where I felt like I had super hearing.
It was a thing that only something like this form factor could do, where they're calling it, I think, live captions.
So I was in this room with a bunch of people, and everyone was talking very loud. You could look at someone, and if you were looking at them, it would live caption what they were saying, even if they were six to eight feet away and it was super loud and you couldn't hear them on your own.
And then they've added language translation to this, where it can do live language translation back and forth.
That was pretty magical.
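(A rough illustration, for the technically curious, of the gaze-gated captioning loop Alex describes: point a beamformed mic at whoever you're looking at, transcribe, optionally translate, and paint it on the display. None of these device APIs are real; every function here is a stub standing in for whatever the glasses, mic array, and speech models would actually expose.)

```python
# Hypothetical sketch of gaze-gated live captions; all device APIs are invented stubs.
import time
from dataclasses import dataclass

@dataclass
class Speaker:
    id: str
    direction_deg: float  # bearing of the speaker relative to the wearer

def get_gaze_target(speakers):
    """Pretend head-pose tracking: pick the speaker closest to straight ahead."""
    return min(speakers, key=lambda s: abs(s.direction_deg))

def capture_audio_frame(direction_deg):
    """Stand-in for a beamformed microphone capture aimed at one bearing."""
    return b""  # real hardware would return an audio buffer

def transcribe(audio):
    """Stand-in for an on-device speech-to-text model."""
    return "..."

def translate(text, target_lang):
    """Stand-in for a translation model; identity here."""
    return text

class HUD:
    def show(self, text):
        print(f"[caption] {text}")

def live_caption_loop(speakers, target_lang=None, seconds=5):
    hud = HUD()
    end = time.time() + seconds
    while time.time() < end:
        target = get_gaze_target(speakers)                  # who the wearer is looking at
        audio = capture_audio_frame(target.direction_deg)   # tune out everyone else
        text = transcribe(audio)
        if target_lang:
            text = translate(text, target_lang)
        hud.show(text)
        time.sleep(0.5)

live_caption_loop([Speaker("a", -40.0), Speaker("b", 5.0)], target_lang="en", seconds=1)
```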
The display itself is honestly pretty good.
It sits to the side of your eye, which is kind of unusual when you first try it, but it lets you see the picture you're about to take or the video you're about to take, which is honestly something that feels simple, but in that form factor of the camera on your face, it actually matters a lot.
And the band has this little gesture where you can, like, twist a knob in the air to zoom in and zoom out. So it feels like you're Tom Cruise in Minority Report a little bit.
The crazy thing is, with the band, and I really can't describe how much of a game changer the band is for input, you don't have to just talk to it or wave in front of it to get it to do something.
You can just do this very light tap gesture.
Do a little pinch, little pinch,
and the display just melts away, comes back, melts away as you pinch.
And I started getting that pretty fast.
I wore the glasses for about an hour, took them outside, took the demo off the rails a little bit, asked the AI things that weren't verbatim what they told me to ask.
It still worked.
You know, like place, you know, some glasses and plates on this table, make it look like a table setting.
It did it, you know, stuff like that.
The display is like 5,000 nits or something.
It's super bright and it's very clearly designed to just be worn everywhere you go.
And the battery lasts around eight hours.
I bet it's less than that with the display going the whole time.
I didn't really get to push that to the limits.
But overall...
Love a good nits conversation.
Good nits.
How many nits is it?
Yeah.
Is it 8K quantum dot or is it ocular occlusion?
Yeah.
No, this is Gorilla Glass version 3 Vestibule.
I think you made some stuff up there, but that's good.
Well, so setting aside the display, I feel like so much of AI is, you know, obviously the hardware has to be there, but how does their AI work?
You know, I feel like, especially in the age of AI and MCP and people trying to do things agentically, so much of it is like, what sources is it using?
What can it do?
Can it plug into your other services?
Or was it just kind of like better Siri, at least at the outset?
I would say better visual Siri.
It's still meta AI, which obviously is not the leading AI.
AI definitely took a backseat in the demo.
I think they probably wished that they had more AI features, but Zuck is doing this massive AI reboot that everyone's been reading about.
I asked him about that actually in the interview.
We talked about it, and he dropped some new stuff, I think, about the lab that was pretty interesting.
I don't want to give it away, but
I think they know they're behind on AI, but they have the bare minimum, and no one has a product anywhere close to this in terms of the form factor, the price, the display, the band, and the ability to like do texting with the band.
Zuck said he types 30 words per minute with the band, which, I guess...
So, how does that work? It's like autocomplete? So, you're scribbling with your finger?
Yeah, it's like auto-complete with slight wrist finger gestures.
Like, you can write almost like on your leg or whatever, and it auto-completes it.
And he says he's doing 30 words per minute, which is impressive.
I think it's really more for just shooting off, like, quick texts, which I did, and it worked.
But yeah,
it's a wild device.
Like, it's definitely something if you are listening to a show like this, you're going to want to try.
So, watching Zuck for a great many years, I feel like he's been trying to build and own the next platform forever. You know, he's tried phones, he's tried VR, trying to build the metaverse, now AI glasses.
Alex, you are a betting man.
Do you bet that this is the one where the platform works?
Because it is such a fine line, right?
Like Apple does appear to be uniquely good at combining the hardware and the software because everybody's competing on the hardware.
I do think at the end of the day, the software is going to be a pretty big deciding factor.
Does it have the apps you want?
How does, for example, VisionOS feel to you compared to the OS that they're kind of building across AR and VR and this and that?
Is this the platform where they win?
I think it's going to take a few years for this to really get mature and become something that is compelling to a lot of people, not just early adopters.
You can totally see the path to it, though, when you try this, I think.
And I felt that way when I tried Orion last year, their first pair of, like, full AR glasses, which are not a consumer release. When I say Orion in the interview, that's what I'm talking about.
Yeah, I think compared to the Vision Pro, it's definitely not as full-featured, but it's a different use case.
It's not a headset.
It's not a pair of goggles that fully blocks you off from the real world.
I mean, these are chunky, but like in the right lighting could pass almost as normal glasses.
And that's their main job. I mean, their main job is to be something you can wear around.
And the tech is supposed to be supplementary to that.
And I think there's a lot of work to do to make the tech actually feel like it fades into the background appropriately.
I think Meta is very motivated to figure it out.
You're right.
They really want another platform, and they're sinking billions of dollars into all of this because they, like pretty much every big tech company I know of, think that this combo of glasses with a display and AI is maybe going to be the next smartphone.
So I have one more question for you before we get to the feature presentation, an evening with Alex and Zuck.
We'd like to talk about people here on the Access Podcast, even though this is the very first episode.
So, every time that I feel like Meta goes through a different transformation, we hear about how that team has been moved next to Zuck's office.
Yeah.
How does that work out over time?
Do you just kind of find yourself in the inner orbit and then with each new tech trend, you get a couple inches away, kind of like tectonic plates?
Who's in the inner circle right now?
Who has been pushed to the outer orbit?
Yeah, the inner circle right now is the new AI lab, which I think I'm the first outsider to see physically.
When I was there for my demos, they walked me in, and I was getting a lot of side-eye, I would say, from the researchers in there, who are like, who is this guy that's looking at our Llama algorithm written out on these whiteboards?
Luckily, I don't know math, so their secret is safe with me.
But yeah, the lab is sitting there in this kind of special area with Zuck.
And as I say in the interview, they were cranking.
I think I saw some shoes off, a lot of code happening. And it's very clear that Meta is rebooting this stuff.
And I think this device is part of the reason why.
I think they know that AI is the killer feature for glasses like this.
And they want to be on the frontier of that.
Most importantly, what are they drinking?
We got Pipes, Monster Energy, Diet Coke is making a comeback.
What was on the tables?
I didn't catch the drinks.
That would have been a good catch.
But, you know, these big tech campuses, they have just about everything you need.
You know, you never need to leave.
It's like Hotel California.
All right, man.
Well, I guess we'll get into the convo with Zuck here.
Mark, I don't think you can be into technology and the cutting edge like I am, try these here in the middle, these new display glasses, and not think they're really cool.
And I want to get into what they do and why you're building them, but can you kind of just initially set the stage for us and explain why you all are doing a display in this form factor?
Because you've had the AR glasses and you have the glasses that don't have displays.
So why do something in the middle here?
I mean, we're working on all kinds of glasses. My theory is that, you know, at a high level, glasses, I think, are going to be the next computing platform device.
I think that they're great for a number of reasons.
One is that they don't take you away from the moment, so you can stay present in the moment, unlike with phones.
So I think that that's a big deal.
And they're also basically the best device for AI, because it's the only device where you can basically let an AI see what you see, hear what you hear, and talk to you throughout the day.
And then once you get the display, it can just generate a UI in the display for you.
So that's great.
And then the other thing that glasses can do is, it's really the only form factor that can put holograms in the world to help you seamlessly blend the physical world around you with the digital world.
Which, I mean, I just think it's a little crazy that we're here.
It's 2025.
We have this incredibly rich digital world.
And you access it through this like five-inch screen in your pocket.
So I think these things are going to get blended together.
So that's glasses at a high level.
But then you get into, okay, well, what do people want with glasses?
And glasses are very personal.
It's not just like a phone where everyone kind of is okay with something that's pretty similar.
Maybe you get a different color case.
People are going to want a lot of different styles.
People are going to want different amounts of technology depending on whether they want a frame that is on the thinner side or bulkier side or whether they can afford more technology or less.
So I think that there's just going to be this whole range of different things, from simple glasses that don't have that much technology in them, maybe the ability to talk to AI and have it see what's going on around you, all the way up to full augmented reality glasses that have kind of a wide field of view, like the Orion prototype, and everything in between, and different styles, right?
So we started with Ray-Ban, which is probably the single most iconic and popular glasses design in history.
And now this year we added Oakley. So we did the Oakley Meta HSTN this summer, and then at Connect we announced these guys, the Oakley Meta Vanguard, which we'll get to in a bit.
Yeah. This is, I think, what people kind of had in mind when they heard that we were doing something with Oakley. It was more this. But these are dope.
Yeah, I mean, they look great. They're great for performance, and we'll talk about that in a minute. But the deal is, people are going to want all kinds of different things.
There's going to be this whole spectrum, and one important technology is obviously going to be getting a holographic display, and then within that, there's a whole world of options, too.
You could have a small holographic display that can just display a little bit of information, you could have a wide field of view that can basically overlay kind of avatars and deliver a sense of presence, which was Orion last year.
That's Orion, and that's kind of what we're building towards in the consumer version of that.
So there are a number of different points on the spectrum. And what we're doing here with the Meta Ray-Ban Display, I think, is a good kind of starting point, where it's not a tiny display. It's actually quite meaningful.
You can read a whole text thread.
You can watch videos.
You can do a video chat.
You can watch videos that you've taken.
I guess you could even watch reels on it if you want.
So it's a meaningful-size display.
But this one isn't really meant for putting objects in the world. It's more meant for just showing information.
And so, anyway, we've been working on this for, I mean, all the glasses at Meta we've been working on for more than 10 years at this point.
So, you know, we have these moments along the way where we get to show a new technology that I think is pretty different from what others are working on.
The display in the glasses is one thing, but the Meta Neural Band as the way to interact with it, where you just make these, like, micro gestures with your hand and you're controlling what you're seeing.
It's just wild.
The band is wild.
We got to talk about the band, but I guess the thing that surprised me the most in my demo of the glasses last week was just how much they can do.
Frankly, I mean, I've been reporting on these for a while as the buildup has been coming for them, and I thought they would have a little bit of a more limited use case to start, but they can do quite a bit.
And I'm curious, what was the goal of their overall functionality?
What are you trying to achieve with this?
Are you trying to replace the phone or just get people to use it less?
I mean, what's the big picture idea of like, this is what it can do?
Well, I always think about everything from a communication and connection angle first, right? Because that's kind of the legacy and the DNA of the company.
So probably the most important thing that I've focused on wanting to get them to do is: be able to wear them, get a text message, and respond really quickly and subtly with your hand if you want.
Like, we're having this conversation now, and I could respond with, like, this level of hand motion, right? Like, it's barely anything.
I thought you might wear them in the interview, and then I wouldn't be able to tell.
I mean, I could have.
Yeah, let's put them on.
Yeah, put them on.
Another thing about them is you can't tell they have a display, even with the Transitions lenses, even when the display is on.
So that's actually an important part of the technology: light leakage is a feature of some types of waveguides.
And so basically you get these trade-offs where you want them to be very efficient.
There are waveguides where you have to pump just a ton of light through them in order to get anything to show up.
But then some waveguides just have different artifacts, usually in a bad way.
It's like the light will catch them and you'll see all kinds of like rainbowing or something.
Another artifact that we think is pretty bad because it's a privacy issue is if the person who you're looking at can basically see it.
Yeah.
See what you see.
The very worst version of it would be if they could see what you're seeing.
But I think another version that is still not that socially acceptable is if they can see that you're looking at something at all.
Right.
So I think that's one of the things that we're really proud of in the design here, and that we put a lot of energy into: the displays are super bright to you, but the person that you're looking at can't even really tell that you're doing anything.
And that's an important thing for having it be socially acceptable, right?
I mean, we also design them so that way when the display comes up, it's offset slightly to the side.
We don't want to block what you're doing.
An important principle for the glasses is the technology needs to get out of the way, right?
It's, I mean, fundamentally, it's like, you know, this is something that you're going to be wearing on your face for a lot of the day.
I mean, we designed these specifically to be, you know, both indoor and outdoor: with the Transitions lenses, they work really well as sunglasses outdoors.
But the reality is that most of the day you're not going to be using the technology, or at least not the visuals, right?
So, like, maybe you'll be listening to music or something. But when you are interacting with something, we want it to show up.
It should kind of be off to the side.
If you don't interact with it, it needs to get out of the way really quickly.
That's like a really important principle of the whole technology.
You've got this wake gesture with the band where you can just tap quickly to make the display go away.
Yeah, which is very, very subtle.
And again, I want to get into the band.
The band is wild in its own right.
The thing that really stood out to me from my demo of these was some things that you could only do in a form factor like this.
Because I mean, the texting is cool, but like there's this live captions thing where I was in a room with a bunch of people and they all started talking really loudly.
And if I just looked at someone, it would live caption what they were saying and tune out everything else.
It's like super hearing.
Um, and then you're also doing that with language translation, yeah.
So you can do real-time translation. I mean, ideally both people are wearing the glasses to get the full experience, but you don't actually need the other person to have them. You could just hear what they're saying in your language, or see it.
Yeah, that's pretty wild, and that speaks to like what just this form factor could do.
I'm curious, like, there's that. Are there other things that this form factor uniquely can do, in your mind, that a smartphone can't?
I mean, all the things around AI where you have an AI that basically, you want to have context around what's going on with you, right?
So like if you want an AI that can see what you see, hear what you hear, can just kind of talk to you passively throughout the day,
and then can show you information contextually, that's just not something that a phone can do.
I mean, I guess technically you could walk around holding a phone like this, but you can't really do that.
No one does it.
Yeah, you can't.
Those demos have existed forever, and I'm always like, no, I don't want to hold my phone up.
So I think that's actually going to be the main one.
And I think all the live AI stuff, it's interesting. It takes on a different feel.
So we have live AI in the Ray-Bands without a display too, the kind of classic Ray-Bans.
And for that, it's audio-only live AI. So it's really helpful for when you're doing something kind of by yourself.
If you're cooking or something, it's watching what you're doing with the video, and you can ask it questions about what you should be doing, or it can give you tips. And that's all great, but it's not really useful when you're in another kind of conversation.
And the thing that I've observed, with the thought experiment I've run but also just wearing these, is, you know, we go through the world, we have dozens of conversations a day.
And in every conversation, I usually have like five things I want to follow up on. Maybe it reminded me that I should, you know, go do a thing, or it reminded me of a person who I wanted to talk to. Or maybe I'm talking to someone and they assert some assumption that doesn't quite sound right, and I want to fact check it or, like, gut check it.
These are all things where I think, with live AI, you can have this AI that's sort of running in the background, that goes and often does work for you, and then can bring that context back, whether it's asynchronously when you're done with the conversation, or sometimes in the middle of a conversation, when it's just useful to have more context right then.
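(For a sense of the pattern Zuckerberg just described, here's a minimal sketch of a background follow-up agent: questions get queued mid-conversation and resolved off the critical path, then recapped afterward. The lookup() call is a stand-in for whatever AI or search backend a real assistant would use; nothing here is Meta's actual system.)

```python
# Minimal sketch of an AI "running in the background" of a conversation.
import queue
import threading

def lookup(question: str) -> str:
    return f"(context for: {question})"  # stand-in for a real AI/search call

class FollowUpAgent:
    def __init__(self):
        self.tasks = queue.Queue()
        self.results = []
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def note(self, question: str):
        """Called mid-conversation; returns immediately so the talk continues."""
        self.tasks.put(question)

    def _run(self):
        while True:
            q = self.tasks.get()
            self.results.append((q, lookup(q)))  # resolved off the critical path
            self.tasks.task_done()

    def recap(self):
        """After the conversation: everything the agent chased down for you."""
        self.tasks.join()
        return self.results

agent = FollowUpAgent()
agent.note("fact-check: did we ship that feature in Q2?")
agent.note("remind me to ping Dana about the demo")
for question, answer in agent.recap():
    print(question, "->", answer)
```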
Yeah, how have you been using these?
Someone on your team was saying you text a lot through them.
Yeah, well, I'm a texter. I, like, run the company through text messages.
So when you were asking, you know, what can you do that you can't do on a phone? I mean, you can obviously text on a phone, and we all do it dozens of times a day. But I think there are a lot of times where it's just not socially acceptable to send a text message.
And so, like, let's say you're in the middle of a conversation, you want to like ask someone a question or get some information.
I mean, I have this like all the time.
I'm like having a one-on-one conversation with someone, and I'm like, oh, I like wanted to ask someone this, or like, I wanted to ask someone else a question to like pull some context so I can ask this person what they think about it, but I'm not gonna like pull up my phone in the middle of a conversation.
With this, it's actually just super quick.
You can just like send a message in like five seconds, get the context back.
Like, it actually just really improves the conversations that you're having.
I find, to me, this is the one thing that I think is basically better about Zoom than in-person conversations: you can sort of multitask a little bit, right?
It's worse in basically every other way than kind of an in-person physical conversation.
But the one thing that I, that I think is useful is you can go from having a conversation to basically asking someone else a question.
It's not necessarily distracting, it's additive, right?
Because otherwise your option is like, all right, you have a conversation, then you go check in with someone else, then you have to go back and call the other person back and have a whole second conversation.
So it just short-circuits these things all the time.
And now I think this kind of brings the best part of that into physical conversations.
Well, you basically feel present in the conversation.
You can pull in whatever information you need.
It's super, super awesome.
Yeah.
A real like, holy shit thing about this product is the band.
I thought that with Orion too, the demo you guys had last year. And I thought it at the time.
I was like, there's something special with this band.
And you're calling it, it's a neural band, is that right?
That's the Neural Band, because it's a neural interface. It senses nerve activity.
So it feels like it's reading your mind.
It's not doing that.
It's not reading your mind.
What you're doing is you're sending signals through your muscular nervous system that it actually picks up before you even make movements.
But it basically picks up these micro gestures, and that allows you to control your interface no matter where your hand is.
So it's not doing, like, hand tracking visually or anything like that. Like, you could have your hand by your side. You could have your hand behind your back, whatever, in your jacket pocket, and it's fine.
And the gestures are really subtle, right? Like, this is all I need to do to bring it up. I mean, this brings up Meta AI.
And I really like the music one. Did you try that?
I didn't try that one.
Oh, so when you're listening to music, the way you adjust the volume is you just kind of...
The dial.
Yeah, you pretend that there's a dial, and you just turn the dial.
I did that with the zoom-in thing.
Yeah, you could do it on photos too.
It feels like Minority Report when you do that.
It's like in real life.
It's a good interface.
Yeah, I know what you mean.
It's, like, not the weird part of Minority Report, but the good part of Minority Report.
Where Tom Cruise is doing the hands, it feels like sci-fi.
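(As a rough illustration of how a wristband might turn muscle signals into discrete gestures, here's a toy surface-EMG classifier: short windows of multi-electrode signal get reduced to cheap features and matched to the nearest gesture prototype. The synthetic signals, feature choices, and nearest-centroid model are all invented for illustration; Meta hasn't published the band's actual pipeline.)

```python
# Toy sketch: classify windows of simulated wrist EMG into gestures.
import numpy as np

GESTURES = ["pinch", "double_tap", "dial_cw"]

def featurize(window: np.ndarray) -> np.ndarray:
    """Classic cheap EMG features per channel: mean absolute value + zero-crossing rate."""
    mav = np.mean(np.abs(window), axis=0)
    zc = np.mean(np.diff(np.sign(window), axis=0) != 0, axis=0)
    return np.concatenate([mav, zc])

rng = np.random.default_rng(0)

# Pretend calibration data: 20 windows per gesture, 100 samples x 8 electrodes.
centroids = {}
for i, g in enumerate(GESTURES):
    windows = rng.normal(loc=i, scale=0.5, size=(20, 100, 8))
    centroids[g] = np.mean([featurize(w) for w in windows], axis=0)

def classify(window: np.ndarray) -> str:
    f = featurize(window)
    return min(centroids, key=lambda g: np.linalg.norm(f - centroids[g]))

print(classify(rng.normal(loc=1, scale=0.5, size=(100, 8))))  # ~ "double_tap"
```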
And I'm wondering why this band, like, why did you land on this band as the input for this?
Because people have been trying to figure out input for glasses like these forever.
And it's usually voice or hand gestures or something.
But it's like, I'm not going to be in the subway gesturing out into space.
So, okay.
I think that those are going to be useful too, but I don't think that they're complete, right?
So voice is obviously going to be important. People talk to Meta AI, they do voice calls, you do video chats.
Um, so voice is going to be a big thing, but I think the reality is that a lot of the time we're around other people.
And the use case that we really wanted to nail, which I actually think is the most frequent and most important thing we do on our phones, is messaging.
So if you want to nail that, what you need is the ability for a message to come in to not be distracting, not be like center in your field of view, but just be there.
And then you need a way, in whatever situation you're in, to be able to quickly respond in like five or 10 seconds, in a way that is not interruptive to your current interaction, and is socially acceptable and doesn't feel rude.
And so then you get to, okay, hand gestures.
I mean, yeah, I think there are going to be useful things for that. In Minority Report, he's doing a fair amount of that.
But for gaming and things like that, I think you'll do that.
But like you said, you're not going to walk down the street like that, right?
I mean, that's kind of goofy.
Yeah, it looks, it looks weird.
Your arms get tired.
You know, it's much more the former than the latter in terms of the reason why it doesn't work, but the latter is also true.
So we needed something that was basically silent and subtle.
So there are a few options for that.
One that people are working on is basically whispering, right?
So you can, like, sub-audibly even pick up on the sound, or you can have some camera that can look at your mouth and do, like, lip reading.
That's still pretty weird in a meeting.
I agree.
Yeah, I agree.
So, it didn't pass my bar for kind of subtlety, even though it is silent in theory.
So I think you need to go for the neural interface.
And the other thing that's nice about the neural interface is you can get really high bandwidth input. It's not like, you know, smartwatches today, where you can move your arm and it can pick up, like, a gesture or two, but it's very low bandwidth.
There aren't that many things that it can do.
You need something that can basically be reading the muscle signal.
So that way you can just control it very subtly.
And this can do that. I mean, already, and we're not that optimized, I think we're going to get the autocorrect a lot better, but I'm already at around, like, 30 words a minute typing.
Really?
Yeah, no, it's, yeah, yeah.
Man, I mean, how advanced do you think the band gets in its current form factor in terms of what it can do?
I think quite a bit.
I mean, basically, today you have the sensors, which can pick up the signals from your muscles. But then on top of that, it's basically just an AI machine learning problem to be able to pick up what you mean by the thing.
And right now, it's not particularly personalized.
So you get it out of the box, and it needs to work with certain gestures even though you've never used it before. So it works with these, I mean, this isn't a big gesture, but it's much bigger than what it needs to be in the future.
And then for kind of the neural text entry, you can basically think about it as if you have a mini pencil and you're just writing out the letter. But over time, what should happen is that the AI learns your pattern for how you, you know, write each letter.
And you should be able to make, like, increasingly subtle and invisible motions that it basically learns are your way of doing that letter or that input or whatever.
And I think the future version of this is that the motions just get really subtle and you're effectively just firing muscles in opposition to each other and making no visible movement at all.
And it picks it up.
So personalized autocomplete via your wrist, basically.
Yeah, so super fast, because if you're not moving, there's no latency from having to actually physically move and then retract after making a motion.
So I think the upper bound of that is very high.
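(Here's one way the personalization Zuckerberg describes could plausibly work, sketched as per-letter templates that drift toward each user's signal on every confirmed keystroke, so the required motion can keep shrinking. All shapes and numbers are made up; this illustrates the general idea, not the band's real decoder.)

```python
# Sketch: per-user adaptation of letter templates via an exponential moving average.
import numpy as np

rng = np.random.default_rng(1)
FEATURE_DIM = 16

# Generic factory templates, one feature vector per letter.
templates = {ch: rng.normal(size=FEATURE_DIM) for ch in "abcdefghijklmnopqrstuvwxyz"}

def decode(features: np.ndarray) -> str:
    return min(templates, key=lambda ch: np.linalg.norm(features - templates[ch]))

def adapt(confirmed_letter: str, features: np.ndarray, lr: float = 0.1):
    """After a confirmed keystroke, move that letter's template toward the user's signal."""
    t = templates[confirmed_letter]
    templates[confirmed_letter] = (1 - lr) * t + lr * features

# Simulate a user whose "h" stroke drifts away from the factory template.
user_h = templates["h"] + rng.normal(scale=0.8, size=FEATURE_DIM)
for _ in range(50):
    sample = user_h + rng.normal(scale=0.05, size=FEATURE_DIM)
    adapt("h", sample)  # confirmed by autocorrect / the user not backspacing

print(decode(user_h + rng.normal(scale=0.05, size=FEATURE_DIM)))  # now decodes as "h"
```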
The other thing is, that's just for typing, which is one kind of modality, but there are all these different dimensions where you can use it to input into things.
And you could control a hand in space that is, like, operating a UI, right? There are all kinds of different things that it can do that I think will just be really interesting to get into over time.
And, you know, we basically invented the Neural Band to work with the glasses, but I actually think that the Neural Band could end up being a platform on its own, to basically just interact with all of your electronics and devices and do all kinds of different things once we get it to be a little bit more mature.
You could have an API for it that could theoretically plug into a smart home or something like that.
That'd be wild.
Yeah.
Yeah.
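(Since the band-as-platform idea is pure speculation at this point, here's an equally speculative sketch of what a gesture-event API might look like: an event stream of recognized gestures that any app or smart-home bridge could subscribe to. Every name in it, NeuralBand, on_gesture, the gesture strings, is hypothetical.)

```python
# Imaginary SDK surface: subscribe to recognized gestures, route them to actions.
from typing import Callable

class NeuralBand:
    """Hypothetical band API: register callbacks for recognized gestures."""
    def __init__(self):
        self._handlers: dict[str, list[Callable[[], None]]] = {}

    def on_gesture(self, name: str, handler: Callable[[], None]):
        self._handlers.setdefault(name, []).append(handler)

    def _emit(self, name: str):  # the firmware would call this, not the app
        for handler in self._handlers.get(name, []):
            handler()

band = NeuralBand()
band.on_gesture("double_pinch", lambda: print("lights: toggle"))
band.on_gesture("dial_cw", lambda: print("thermostat: +1 degree"))

# Simulate the firmware recognizing two gestures.
band._emit("double_pinch")
band._emit("dial_cw")
```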
The price point for these is also lower than I expected.
It's only like 800 bucks.
Yeah.
Who are these for? Is this an early adopter thing? Like, you care about the cutting edge, this is for you? You're not making a ton of these, right? I assume this is not going to be, like, a massive thing for you. It's more to see how people use this technology.
I mean, I think that this is going to be a big part of the future.
I mean, my view is that, you know, there are between one billion and two billion people who wear glasses on a daily basis today for vision correction.
Like, is there a world where in five or seven years, the vast majority of those glasses aren't AI glasses in some capacity?
Like, I think that that's, it's kind of like when the iPhone came out and everyone had flip phones, and it's like just a matter of time before they all become smartphones.
I think that these are all going to become AI glasses.
And the question is, all right, well, there are 8 billion people in the world, not, you know, one to two.
So are there going to be a bunch of other people who also wear glasses?
I would guess yes.
I mean, there are a lot more people who wear sunglasses some of the time.
So yeah, I mean, I think it's a big category.
And there's a lot going on here.
I mean, I think what you see when you're building products is that with V1, you build what you think is going to be great. And then you get a lot of feedback, but you also didn't get everything exactly perfect in V1.
So, you know, v2 and v3 just end up a lot better, right?
It's not a coincidence, I think, that the first version of the Ray-Bans, Ray-Ban Stories, we thought it was good. Then when we did the second version, Ray-Ban Meta, I think it sold five times more. And it was just refined, right?
So I think there's going to be some dynamic here where you have the first version, you learn from it, and the second version is just a lot more kind of polished. And the software gets polished too, not just the hardware.
And that just kind of compounds and gets better and better.
And full AR that fills your vision?
Yeah.
That's still coming too.
Yeah.
I mean, we're working on all this stuff, and we want to get it all to be as affordable as possible.
The reality is that the more technology you want to cram into the glasses, the more expensive that is because you're putting more components in.
We also want the glasses to be as thin as possible.
And that's a process of miniaturization that happens. And similarly, the more technology you cram in, the harder it is to make it smaller.
So, as much as we can miniaturize this technology, it will always be true that if you put half the technology in, you'll be able to make even thinner glasses.
And then some people have different aesthetic preferences. I mean, fortunately, you know, thick glasses are kind of in style, but some people want thinner ones, like yours.
Yeah, you can't fit many electronics in these.
Well, not a lot fits in them now.
So, yeah, you have to rethink your aesthetic choices in the future.
But on pricing, we do work on getting it to be as affordable as possible. And, you know, our view is that our profit margin isn't going to come from a large device profit margin.
It's going to come from people using AI and the other services over time.
Because you'll pay a subscription for the AI or something?
Or, yeah, or use it and do commerce through it or whatever the different things are that people do.
So we're not like a company like Apple, where a large part of their margin comes from having a large margin on the hardware.
But in general, yeah, we try to make it as affordable as possible.
And my hope is that if we build another one of these, hopefully it's even more affordable.
Or the other choice that we can make is put even more technology in it and keep the price point there.
But I think you're going to have a few different price points. There's going to be the kind of standard AI glasses that don't have a display.
And I think those will sort of range from, you know, $300 to $500 or $600, depending on the aesthetic and kind of how high fashion it is.
Maybe even more than $600 if you get something that's really high fashion.
But that's kind of the range that we've seen so far, from the early Ray-Bans to some of the Oakley Meta Vanguards with all the, you know, custom stuff in them, like optical lenses with a prescription.
Yeah.
Then there's a category like this, with a display that isn't a full field of view AR display. And I think that's going to be, yeah, on the order of $1,000. Maybe you get it a little better, maybe it's a little more, but you can call it in that area.
And then I think when you get to the full AR glasses, that'll be somewhat more to start. And I think, like, people will just want to have the whole portfolio.
And then the goal over time will be to get as much of that technology into as affordable and as thin of a form factor so you can just like have as many styles as possible.
And so you've got these new Oakleys, and your deal with EssilorLuxottica means you can do other smart glasses with all their other brands, right?
So I think that implies, right, that there will be future brands that have Metatech.
We'd love to do it.
Yeah.
Yeah.
Yeah.
So you see it as like building out a kind of constellation of all these different form factors and price points.
And that's the goal.
Yeah.
Yeah.
What about all these other AI wearables that aren't glasses that are happening?
Like there's the friend pendant, I'm sure you've seen.
There's all these like displayless non-glasses devices.
Yeah.
Sam Altman and Jony Ive are apparently working on something.
There's a lot of interest in this.
And I'm curious, is this something that since AI has really taken off in the last few years, you see opportunity there in addition to glasses?
Or is the main focus still just glasses?
Well, our main focus is glasses, because I think glasses are the best form factor for this, for all the reasons that we talked about before.
I think that anything else that you have to kind of fiddle with takes your attention away from the physical world around you.
I don't think that there's any other form factor that can see what you see, hear what you hear, talk to you throughout the day, and generate a UI in your vision.
And then there's the whole augmented reality part about blending the physical and digital world.
But I don't know. I mean, people use different electronics. So I certainly don't think that in the future all 8 billion people in the world are doing the exact same thing.
I mean, some people use their phone more.
Some people use a computer more.
Some people use an iPad instead of a computer, right?
Some people, you know, primarily watch videos on a TV; some people watch videos primarily on their phone.
So I do think there are going to be different things.
My guess is that glasses will be the most important.
I think something like earbuds is kind of interesting too.
I mean, Apple clearly is by far the leader on that with AirPods. I think partially because they did a good job, and partially because I think they gave themselves some kind of unfair advantages with how they bundle it and couple it and have technology that works with the phone, which I guess they're now just starting to open up, which is great.
But for a while, I think it just made it impossible for anyone else to build anything like the AirPods.
Watches, I think, are interesting in some ways too.
You're not a fan of the pendant thing, the pendant trend that's kind of starting right now.
I mean, is it a trend? I don't know, it's early.
I mean, there's a lot of startups doing this stuff.
I mean, I think it's an interesting idea.
I don't want to be too dismissive. I mean, my point is, my guess is that different people are going to like different things, but that glasses are going to be the most popular.
There was no new Quest this year at Connect, and I'm curious how you're feeling about Quest these days, and about VR and mixed reality generally as a category.
It seems like glasses have really taken off. There are obviously a lot more Quests out in the world; they've sold a lot more. But I'm curious about there not being a new one this year, and also just how you're feeling about the category these days.
Yeah, no, I mean, I think we're making progress on it. I mean, this year what we focused on was the Meta Horizon creation tools. So we announced Meta Horizon Studio and Meta Horizon Engine, which are these basically foundational tools for creating worlds and content using AI. And that's going to go towards making it so that people can create a lot more content in VR.
But I think that that's also going to be the case.
And all that stuff, I think, should translate over to AR too. I think a lot of this content you'll be able to have there.
I mean, glasses that are see-through may not be quite as immersive as VR, but you can deliver a lot of the same kind of holographic experience.
And then I think a lot of these things will also end up showing up on phones, right?
I mean, I think there's this huge opportunity with AI where, you know, you're browsing your feed on Instagram and Facebook, and, like, each story should be its own world that you can jump into.
And you're starting to see some of this with some of the AI models. Some of the stuff that Google has put out recently, for example, offers interesting glimpses of where that could go.
But I think that there's this real sense that the whole stack for how you create those kinds of immersive experiences needs to get rethought.
It's not just going to be people doing things in the same way that they've created 3D video games historically. I mean, that's a very, very intensive process, where the tools are very traditional.
Yeah.
Yeah.
I mean, yeah, my kids are into programming and into making things. And, you know, we try to build different 3D worlds, and I think some of the stuff is just, like, intractable for them, right? I mean, they're still kids, so it's fine.
But with Meta Horizon Studio, which I've been playing with with them, you know, obviously this isn't primarily designed for, like, an eight-year-old to be able to use. But my bar is: if it's enough that I can kind of make something good with my eight-year-old, then that's pretty cool.
You really can create all these interesting things, right?
It's like, you can define what the world dynamic is, like what kind of world you want it to be.
If you want to put stuff in the world, you can do it.
If you want to texture things differently, if you want to change the skybox, you can do that.
So I think it'll be a very different way of creating things that's fundamentally a lot more accessible, which will then unlock a lot more creativity.
And there will just be a lot more interesting worlds and things to do.
And that, I think, is going to be important not just for VR and AR, but I think it's going to unlock all these experiences that billions of people will probably first see on their phones at some point.
So that's what we're doing with the Meta Horizon Studio work.
It's this kind of agentic AI flow where people at different levels of sophistication can go in and create really interesting worlds and immersive environments.
And I think that's neat.
Then we paired that with Meta Horizon Engine, which is basically this custom graphical rendering engine that we've been creating for two years now.
It's a project where we had to build from the ground up, because previously we were using Unity, which is great, but it's not really built for this use case.
I mean, most games, you know, you load a game, it takes, you know, 20 seconds to kind of get into the game, which kind of makes sense, because you're loading this whole 3D world that you now need to be able to interact with.
But we want the worlds that you can interact with to feel more like jumping between two web pages, or jumping between two screens within a native app, which is really fast.
So the whole like, okay, it has to take 20 seconds to like page this whole new world into memory was not going to cut it.
So we basically built Meta Horizon Engine from scratch to be this graphics engine that can support rendering these kinds of worlds, with high concurrency and the avatar system, the photorealistic avatars, and all this, with just a few seconds of load time.
So it's more like a website or just a transition in an app.
And that's the kind of thing that'll make it so that when you're in VR, you can jump between worlds easily.
It's not like some big commitment or some big decision.
You can just feel free to explore, because, you know, it's not like you're going to have to wait 20 or 30 seconds for the next thing to load.
You walk through the portal, you don't like it, you walk back through the portal in the other direction.
And similarly, for things like within Facebook or Instagram, having the ability to kind of see a post and jump into a world, that's something that needs to have very low friction to do.
So the Meta Horizon Engine is this kind of core piece of technology.
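(To make the load-time point concrete, here's an illustrative sketch, not Meta's actual engine, of one way to make world hops feel like page navigations: while you're in a world, speculatively load its portal neighbors on background threads, so walking through a portal swaps in an already-resident world instead of paying a 20-to-30-second cold load.)

```python
# Illustrative sketch of speculative world prefetching for fast portal hops.
import time
from concurrent.futures import ThreadPoolExecutor, Future

def load_world(name: str) -> dict:
    time.sleep(0.2)  # stand-in for a multi-second cold load of assets
    return {"name": name, "assets": "..."}

class WorldCache:
    def __init__(self):
        self.pool = ThreadPoolExecutor(max_workers=2)
        self.pending: dict[str, Future] = {}

    def prefetch(self, name: str):
        if name not in self.pending:
            self.pending[name] = self.pool.submit(load_world, name)

    def enter(self, name: str) -> dict:
        future = self.pending.pop(name, None)
        return future.result() if future else load_world(name)  # hit vs. cold load

cache = WorldCache()
current = cache.enter("plaza")
for neighbor in ["arcade", "concert"]:   # worlds reachable through portals
    cache.prefetch(neighbor)

time.sleep(0.3)                          # the player wanders for a moment
start = time.time()
current = cache.enter("arcade")          # portal hop: already resident
print(f"hop took {time.time() - start:.3f}s")
```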
So, yeah, I'd say on the metaverse side, this year's announcements at Connect were more about kind of the software foundations than hardware.
But you're still committed to the hardware.
Yeah, I mean, the way that we do the hardware is, we don't plan it so that there's a new device every single year.
Sure.
We have multiple device lines. There's sort of the higher-end one, where we introduce some new technology, and then we try to get it to be as affordable as possible.
So we did Quest 3, then we did Quest 3S, but it's not like every year there's one. It's basically, you know, most years there will be a new one of one of those two.
And then sometimes there's like an off year where we're pretty much just tuning the software to get ready for the new paradigm.
Got it.
Okay.
So Quest is still going.
Yeah.
We're focused.
Okay.
Yeah.
I've been saving this.
I've got to ask you about this.
You know, there is a tremendous amount of interest in your AI strategy right now, unlike anything I've seen in the tech industry, honestly.
Or AI overall.
AI overall.
But I think like what you've done over the summer with the hiring and the super intelligence
mission that you put out and all of that.
And we've been talking about AI as it relates to the hardware throughout this conversation.
And AI was a part of my demo.
It wasn't, I would say, like front and center.
And it seems like a lot of the work you're doing now is to get ready for when it will be.
And I'm curious, you know, when I was here for the demo, I got to see the pod of the new lab and see them at work, and they're in there cranking, you can tell.
And I would love to know, maybe we can start here: when you decided, "I need to change things," and why you decided to go about it the way that you did. Because I think that's the thing where people were like, whoa, this is crazy.
Yeah, I think if you're on the inside it doesn't feel as crazy, because the talent market is very small, and it's kind of rational if you look at the numbers.
But just the strategy. Walk me through when you were like, okay, I want to make a change.
This is what I want to do.
Walk me through that.
Yeah, I mean, this is an area where I just think
AI and super intelligence are going to be the most important technologies in our lifetime.
I think it's so important that it sort of demands its own hardware platform, which is a big part of why I'm so excited about glasses, because I think glasses are going to be the best kind of hardware device category
to provide personal super intelligence to people.
But
I think AI is just this incredibly profound thing.
It's going to change how we run the company.
It's going to change how all companies run.
It's going to change how we build products.
It's going to change what products are possible.
Change how creators do their work, right?
So change the content that's possible, the mix of content, all these different things.
So
I think being on the frontier there
is really critical if you want to continue just doing interesting work and pushing the world forward.
I think, you know, just like with mobile: if you didn't invent the mobile phone, you could still do interesting work building apps.
But
I do think at some level, you can do even more interesting work if you can both pair the software with the hardware experience.
So
we
are definitely committed to being at the frontier and building super intelligence.
I think it's going to be the most important technology, like I just said.
And because of that, we're very focused on making sure we build a leading effort.
So over the last few years, we stood up an effort that was improving very quickly.
So Llama was a good initial academic project.
Llama 2 was a kind of good initial version of that as an open source release.
Llama 3 was a big improvement over Llama 2.
And then Llama 4 introduced some important improvements over Llama 3 too.
But
I didn't feel like we were on the trajectory that we needed to be on to basically like be at the frontier and be pushing the field forward.
And so, you know, I think every company at some point goes through periods where you're not on the trajectory that you want to be on for something.
And these are decisions that you get to make in your life, or in building a company, where the real question is not, is there going to be a moment where you feel like you're not on the track that you want to be on?
It's what you do in that moment.
And so I just decided that we should take a step back and build a new lab.
And
I think part of that was informed by
the shape that I thought the effort should be.
We have this real focus on talent density, right?
And the idea is that
you really want to have, this is like a group science project, right?
So you want to have the smallest group of people who can fit the whole thing in their heads at once.
And there's not many people who can do that.
No, but you also want the group to be as small as possible.
So there are some problems that we're working on around the company where like you can just have more people work on them.
And even if the marginal productivity per person declines, you can just keep on scaling the net productivity of the effort.
You know, our feed and ads recommendation is an interesting example of this, where we have a lot of people who are just testing different improvements to the systems.
And if one guy's sitting next to you,
if that guy's experiments don't work that well, it doesn't necessarily slow you down that much.
But I think building these language models
is not that way, right?
It's like it's a small group effort.
You want the smallest group of people that can keep the whole thing in their head and do the best work that they can.
So each seat on that boat is incredibly precious and in high demand.
You also don't want a lot of layers of hierarchy
because when someone gets into management, their technical skills kind of start decaying pretty quickly.
Even if they were an IC
researcher a few months ago,
now if they're spending all their time kind of helping to manage, then okay, after six months, a year, they might be less in the technical details than they were before.
So I think that there's this huge premium on just having a relatively small, extremely talent-dense effort that is organized to be quite flat.
And you're very hands-on with this team.
Well, yeah.
I mean, in the sense that, I mean, I'm not like an AI scientist.
Yeah.
But the fact that they're sitting near you, I mean, it's clear that this is the priority.
So
the thing that I'm focused on is, one, getting the very best people in the world to join the team.
So I've spent a lot of time just meeting all of the top researchers and folks around the field and getting a sense for who I think would be good here and
who might be at a point in their career where we can give them a better opportunity.
That's one piece.
Another thing that I'm very focused on is making sure that we have significantly higher compute per researcher than any other lab.
And I think we are just way higher on compute per researcher than any other lab today.
And, you know, as the founder and CEO, and because we have a strong business model that can support this, I mean, we make, you know, a lot of profit.
A decent amount, yeah.
Yeah, it's a reasonable amount.
You can just call up Jensen and be like, more GPUs, please.
It's not that simple.
It's not that simple, and I normally text him with my glasses, but no, there's a whole supply chain that goes into it.
And the GPUs are one part of it, but then you also need to build data centers and get energy and get the other pieces and get the networking.
And
yeah, but the bottom line is we're very committed to that and doing what we need to do to make sure that we have leading levels of compute.
So
we talked about recently how we're building this Prometheus cluster, which I think is going to be the first kind of single contiguous gigawatt-plus cluster for training that has been built in the world.
We're building this Hyperion data center in Louisiana that I think is going to scale to five gigawatts over the coming years.
And several others of these, what we call Titan data centers.
They all have different Titan code names.
And they're each going to be one to multiple gigawatts.
And
that's a significant investment.
I think it took a fair amount of conviction.
So I think a bunch of conditions need to be met.
Basically, you need to have a business model that can support it.
You need to have a CEO who believes in this very deeply, right?
Someone who's just willing to make that kind of investment.
And then you need to have the technical ability to actually go build the things and bring them online.
And I think we're
one of, if not the only company in the world that meets all of those criteria.
So yeah, I mean, other people will do other interesting things too.
But I think that this is going to be very interesting.
The other principle, though, that we have for the lab is, you know, it's split into different efforts, right?
There's the lab that we call TBD, which,
that's what I saw.
Yeah, that's the research lab.
TBD was supposed to be a placeholder name, but then it stuck because it's kind of a good vibe, right?
It's like, all right, it's a work-in-progress type of vibe.
Then we also have applied research and product,
and that's in Nat Friedman's group.
And
that team is working on a lot of research that goes directly into the products.
So things that may not necessarily directly be on the path to super intelligence, like
speech that passes the Turing test and things like that, but are important for the products nonetheless.
So we're working on all those things.
And the research effort, the TBD effort, is truly a long-term research effort.
So one of the principles that we have for the lab is just: no deadlines.
So people are always asking, okay, when are we going to ship the model?
And this is a strategy, and it's also the values that we're trying to put into it.
I mean, all these researchers are very competitive.
They all want to be at the leading edge.
They know the industry is moving quickly.
They're going to put a ton of pressure on themselves.
Me telling them that something should get done in nine months or six months or whatever isn't going to help them do their job.
It's only going to put another artificial constraint on it that makes them sub-optimize the problem.
And I want them to go for kind of the full thing.
I mean, we're going for trying to build AI that can improve itself responsibly, and where we're basically building these models that bring all these modalities together to deliver the kinds of experiences that we're talking about.
And
yeah, I mean, I think me putting a deadline on that is not going to be helpful.
So yeah, and that's the nature of research, right?
It's not engineering.
Engineering is when you know how to do something, and you go and put together a complex process to build it.
Research is when there are several unknown problems.
And in AI, I don't even think we have a sense of how many unknown problems there are.
For something like glasses, we have a sense.
It's like, okay, there's like 10 areas of unknown problems that we need to go solve.
Like, how do we get the right waveguides?
How do we get the right laser display?
No one's ever done this, but like we can try 10 different things in each one of those and kind of run it forward.
And in AI,
I don't think anyone can definitively tell you how deep the problem space is.
So it's very much research.
And that's fascinating.
So, yeah, so what do you do
to do that as well as possible?
You get the very best team, talent density, make sure that people have the resources that they need and clear all the other stuff that comes from running a big company out of the way.
And that's kind of my job for them.
Yeah.
You were talking about the CapEx and the data centers.
You obviously see something on the other side of that that will warrant that being worth it.
But I'm wondering: do you subscribe to these bubble fears at all that people are talking about?
That we're in this massive overspending, out-over-our-skis bubble, and maybe a company like Meta will be okay because you guys do have a core business that makes a lot of money.
But
how do you think about this bubble talk that has been going on for the last few months, especially?
I mean, I think it's quite possible.
I mean, basically, if you look at most other major infrastructure buildups in history, whether it's railroads or fiber for the internet in the dot-com bubble,
these things were all chasing something that ended up being fundamentally very valuable.
In most cases, it ended up being even more valuable than the people who were kind of pushing the bubble thought it was going to be.
But
in at least all of these past cases,
the infrastructure gets built out, people take on too much debt, and then you hit some blip, whether it's some macroeconomic thing, or maybe you just have like a couple of years where the demand for the product doesn't quite materialize.
And then a lot of the companies end up going out of business.
And then the assets get distressed, and then it's a great opportunity to go buy more.
So, it's obviously impossible to predict what will happen here.
There are compelling arguments for why AI could be an outlier:
basically, if the models keep on growing in capability year over year, and demand keeps growing, then maybe there is no collapse or something.
But I do think that there's definitely a possibility, at least empirically, based on past large infrastructure buildouts and how they led to bubbles, that something like that would happen here.
From Meta's perspective,
I think the strategy is actually pretty simple.
At least in terms of building out the infrastructure,
no one knows when superintelligence is going to be possible.
Is it going to be three years?
Is it going to be five years?
Is it going to be eight years, whatever?
Is it never going to happen?
But I don't think it's never going to happen.
I'm more ambitious or optimistic.
I think it's gonna be on the sooner side.
But
let's say that you weren't sure if it was gonna be three or five years.
Like in a conservative business situation, maybe you'd like hedge building out your infrastructure
because you're worried that if you build it out, assuming it's gonna be three years and it takes five, then you've lost, you know, maybe a couple hundred billion dollars or something.
I mean, my view is that...
That's a lot of money.
Well, no, I was gonna say, in the grand scheme, it is objectively a huge amount of money.
Yeah.
Right.
I mean, didn't you just tell Trump you were going to spend like $600 billion?
I mean, that's...
I did.
Yeah.
Through 2028, which is...
That's a lot of money.
It is.
And if we end up misspending a couple of hundred billion dollars, I think that that is going to be very unfortunate, obviously.
But
what I'd say is I actually think the risk is higher on the other side.
If you
build too slowly, and then super intelligence is possible in three years, but you built it out assuming it would be there in five years, then you're just out of position
on what I think is going to be the most important technology that enables the most new products and innovation and value creation in history.
So
I don't know.
I mean, it's,
I don't want to be kind of cavalier about it.
I mean, obviously these are very large amounts of money and we're trying to get it right.
But I think the risk, at least for a company like Meta, is probably in not being aggressive enough rather than being somewhat too aggressive.
But part of that is like, we're not at risk of going out of business or something like that.
Right.
If you're one of these companies, like an OpenAI or an Anthropic or something like that, where they're raising money as the way that they're funding their buildout.
And
there's obviously this open question of to what extent are they going to be able to keep on raising money?
And that's dependent both to some degree on their performance and how AI does, but also all these macroeconomic factors that are out of their control.
I mean, the market could get bearish for reasons that have nothing to do with AI.
Maybe something bad happens internationally.
And
then it could just be impossible to fulfill the compute funding, like the compute obligation.
So it might be a different situation if you're in one of their shoes.
But I think for us, the clear strategy is just that it creates more value for the world if we make pretty aggressive assumptions about when this is going to be possible, and take some risk that maybe it takes a little bit longer.
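As a rough way to see the asymmetry he's describing, here's a toy expected-value comparison. Every number in it, the probability, the wasted capex, the missed upside, is a made-up assumption for illustration, not a figure from the interview.

```python
# Toy expected-value framing of "the risk is higher on the other side."
# All numbers are made-up assumptions, not figures from the interview.
p_soon = 0.5             # assumed chance superintelligence arrives "soon"
wasted_capex = 200e9     # loss if you overbuild and it arrives late
missed_upside = 2e12     # assumed value forgone if you underbuild and it arrives soon

ev_overbuild = -(1 - p_soon) * wasted_capex
ev_underbuild = -p_soon * missed_upside

print(f"EV of building aggressively:   {ev_overbuild / 1e9:+,.0f}B")
print(f"EV of building conservatively: {ev_underbuild / 1e9:+,.0f}B")
# Under these assumptions, a bounded capex loss beats an unbounded
# positional loss, which is the shape of the argument being made.
```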
Do you feel like the U.S.
is in a better place now to help with this and to help American companies succeed?
I think you've done a lot of work with this new administration.
And when we were here last year, you were saying you wanted to kind of stay out of it.
But it seems like you realized, was it the realization that this is just so important that I have to play ball with this?
Oh, well, I mean, the thing that I want to stay out of is partisan politics.
Okay.
So, but I mean, we will always want to work with and have a good partnership and collaboration with governments, right?
And that's going to be especially true in our home country, but it's also true in other countries around the world where we serve large numbers of people.
So yeah, but I'd say yes.
I mean, I think this administration, for a number of reasons, is definitely more forward-leaning on wanting to help build out infrastructure.
And that has, I think, been positive.
And I think the next few years are going to be very important for the AI buildout and the AI infrastructure buildout.
And I think having a government, and it's important both at the federal level and in the states where you work, that wants that buildout to happen is fundamentally a helpful thing.
Yeah.
There was a line in your superintelligence memo that you wrote where you said,
Over the last few months, we have begun to see glimpses of our AI systems improving themselves.
Yeah.
I was really interested in that line.
What specifically did you see that made you write that?
Well, I mean, one of the early examples that we saw
was
a team that was working on Facebook that took a version of Llama 4 and made this autonomous agent that could start to improve parts of the Facebook algorithm.
And
it basically checked in
a number of changes that are the type of thing that, like, a mid-level engineer would get promoted for.
Really?
So, yeah.
So, I think that's like
very neat.
It's like you basically have built an AI that is building AI that makes the product better,
that improves the quality that people observe.
To be clear, this is still
a low percentage of the overall improvements that we're making to Facebook and Instagram.
But I think it'll grow over time.
So
that's one of the things that I was talking about when I said glimpses, right?
I mean, this isn't like, okay, the AI is improving itself at like an exponentially fast rate or something like that.
I think that what we're seeing are early examples of AIs autonomously improving AI in ways that are having a positive impact on the experience that people get to have.
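For readers who want a concrete picture of that pattern, here's a hypothetical sketch of an agent loop that proposes changes to a ranking system, scores each candidate offline, and only "checks in" measurable wins. The parameter name, the scoring function, and the loop itself are invented for illustration; Meta's actual pipeline isn't described at this level in the interview.

```python
# Hypothetical sketch of an "AI improving AI" loop: propose a change,
# score it offline, and check in only measurable wins. Names and the
# scoring function are invented; this is not Meta's actual pipeline.
import random

def propose_change(params: dict) -> dict:
    # Stand-in for an agent-generated tweak to one ranking parameter.
    candidate = dict(params)
    candidate["freshness_weight"] += random.uniform(-0.1, 0.1)
    return candidate

def offline_eval(params: dict) -> float:
    # Stand-in for replaying logged sessions; a toy objective that
    # peaks at freshness_weight == 0.5.
    return 1.0 - abs(params["freshness_weight"] - 0.5)

def improvement_loop(baseline: dict, iterations: int = 20) -> dict:
    best, best_score = baseline, offline_eval(baseline)
    for _ in range(iterations):
        candidate = propose_change(best)
        score = offline_eval(candidate)
        if score > best_score:  # the gate: only check in measurable wins
            best, best_score = candidate, score
            print(f"checked in freshness_weight={best['freshness_weight']:.3f} "
                  f"(score {best_score:.3f})")
    return best

improvement_loop({"freshness_weight": 0.2})
```

The offline-evaluation gate is what keeps "AI improving AI" incremental rather than runaway, matching the "glimpses, not exponential takeoff" framing in the answer above.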
And is that how to think about superintelligence broadly? That when you're there, it's AI that is rapidly improving itself.
That's what it means.
Or is that too simple?
Yeah, I think AI that can improve itself.
And that's beyond human level.
I think that
there's this dynamic today where all the AIs are trained on
data and knowledge that people have produced.
So a lot of the systems seem to be kind of very broad and have all the knowledge of humanity.
So maybe in some dimensions it might feel like it is smarter than any one given person in kind of the breadth of what it knows.
But I still think today the systems are basically gated on human knowledge.
And
there is a world beyond that, right?
Where I think you're starting to get into that with some of the thinking models where they can go out and solve problems in the future that
no person can solve and then can learn from having solved that problem.
And
the pace of that improvement to me is somewhat less important than
just the process of it.
I'm not like a super-fast-takeoff believer, in the sense that I don't think it's going to be, okay, one day it can improve itself and then the next day it's going to take over everything.
I mean, I think that there are way more physical constraints.
It takes time to build data centers.
And a lot of frontier human knowledge comes from empirical experimentation, right?
So, you know, if you develop some new drug, you want to see, A, if it works, and B, if it's safe for people.
How do you do that?
You run a test where you give it to a handful of people, maybe more than a handful, but some statistically significant group of people.
And you observe how that goes for a while to see both whether the positive effects are kind of long-lived and whether there's any negative effects.
Okay, well, if you're trying to run a six-month or a 12-month trial, you can't do that in less than six or 12 months.
I mean, maybe you can get a negative result sooner,
but you're not going to be able to validate that you can get the positive result that you're looking for without having done that test.
So I think...
And that's also going to be true with AI, right?
There are going to be some things that maybe a superintelligent system can just intuit or reason from first principles using the knowledge that we already have.
But I think a lot of learning is going to be experimental.
And I do just think these things take time, right?
You have to run long-term experiments if you're trying to make long-term changes in the world.
And that I think is going to be true for the AI too.
Now, maybe it'll, on average, run smarter experiments.
So per experiment, maybe it'll learn more.
I think it will probably be able to figure out some things from first principles.
It will definitely be able to figure out a bunch of things from first principles, but I don't know.
I'm not in the camp of people who think it's going to be like, overnight this changes.
I think it's going to be this very steady progression where we're just making our lives better.
All right.
Well, it's going to be a wild few years.
Yeah.
Mark, I appreciate you doing this.
Yeah, happy to.
Yeah.
Congrats on the new show.
Thank you.
Appreciate it.
Alex, I enjoyed that interview, but I have to ask, how does it feel to have Mark Zuckerberg in your Neural Band?
I hope he's not actually in the band, but I guess we don't know for sure.
But seriously, thanks to Zuck for taking the time to be the first guest on Access.
You can read more about what we talked about in my newsletter, sources.news.
And Ellis, what are you plugging?
Yeah, you could find me on Twitter, which I will continue to call Twitter, at Matt Hamburger and at meaning.company.
Access is produced in partnership with Vox Media.
Please follow us.
We're a new show.
We need your support.
We need your follows.
Hit that notification button to get new episodes.
You can find us on Spotify, Apple Podcasts, all the other podcast apps that I don't know about.
We're also on YouTube, in video.
Please check us out there at Access Pod.
You can also find us on all of the socials at Access Pod.
Smash that like button.
Smash it.
All right.
Can't wait to tell you.
Ellis, that's our first episode.
We'll see everyone next week.
The world is changing faster than ever.
Now, with The Economist Insider, a new premium video offering, we're giving you unprecedented access to the debates shaping our world.
I have sat around that table at NATO.
There is an incoming missile attack now.
Could you answer the question?
I'm sorry, we've got very little time now.
With a few surprises along the way.
I can't promise we'll have a cocktail every time, but we'll try.
So, don't just be an Economist reader.
Get on the inside track with The Economist Insider.
Go to economist.com to join the conversation.
We all have moments where we could have done better.
Like cutting your own hair.
Yikes.
Or forgetting sunscreen, so now you look like a tomato.
Ouch.
Could have done better.
Same goes for where you invest.
Level up and invest smarter with Schwab.
Get market insights, education, and human help when you need it.
Learn more at schwab.com.