#471 – Sundar Pichai: CEO of Google and Alphabet
Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep471-sc
See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.
Transcript:
https://lexfridman.com/sundar-pichai-transcript
CONTACT LEX:
Feedback - give feedback to Lex: https://lexfridman.com/survey
AMA - submit questions, videos or call-in: https://lexfridman.com/ama
Hiring - join our team: https://lexfridman.com/hiring
Other - other ways to get in touch: https://lexfridman.com/contact
EPISODE LINKS:
Sundar's X: https://x.com/sundarpichai
Sundar's Instagram: https://instagram.com/sundarpichai
Sundar's Blog: https://blog.google/authors/sundar-pichai/
Google Gemini: https://gemini.google.com/
Google's YouTube Channel: https://www.youtube.com/@Google
SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Tax Network USA: Full-service tax firm.
Go to https://tnusa.com/lex
BetterHelp: Online therapy and counseling.
Go to https://betterhelp.com/lex
LMNT: Zero-sugar electrolyte drink mix.
Go to https://drinkLMNT.com/lex
Shopify: Sell stuff online.
Go to https://shopify.com/lex
AG1: All-in-one daily nutrition drink.
Go to https://drinkag1.com/lex
OUTLINE:
(00:00) - Introduction
(00:07) - Sponsors, Comments, and Reflections
(07:55) - Growing up in India
(14:04) - Advice for young people
(15:46) - Styles of leadership
(20:07) - Impact of AI in human history
(32:17) - Veo 3 and future of video
(40:01) - Scaling laws
(43:46) - AGI and ASI
(50:11) - P(doom)
(57:02) - Toughest leadership decisions
(1:08:09) - AI mode vs Google Search
(1:21:00) - Google Chrome
(1:36:30) - Programming
(1:43:14) - Android
(1:48:27) - Questions for AGI
(1:53:42) - Future of humanity
(1:57:04) - Demo: Google Beam
(2:04:46) - Demo: Google XR Glasses
(2:07:31) - Biggest invention in human history
PODCAST LINKS:
- Podcast Website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips Channel: https://www.youtube.com/lexclips
Transcript
The following is a conversation with Sundar Pichai, the CEO of Google and Alphabet.
And now, a quick few second mention of each sponsor.
Check them out in the description or at lexfridman.com/sponsors.
It's the best way to support this podcast.
We got Tax Network USA for taxes, BetterHelp for mental health, LMNT for electrolytes, Shopify for selling stuff online, and AG1 for your daily multivitamin drink.
Choose wisely, my friends.
And now onto the full ad reads.
You can skip them if you like, but if you do, please still check out our sponsors.
I enjoy their stuff.
Maybe you will too.
If you want to get in touch with me for whatever reason, go to lexfridman.com/contact.
All right, let's go.
This episode is brought to you by Tax Network USA, a full-service tax firm focused on solving tax problems.
for individuals and for small businesses.
I remember when I was preparing for the Roman Empire episode, I came across a lot of places where there was a rigorous discussion about
the intricate tax collection algorithms
used by the Roman Empire.
The reason I use the word algorithms is basically there's a systematic process for determining how much you owe based on your location, based on your status, based on your job, based on all these kinds of factors.
It's sad, but those rules in the early days initially give power to the individual because they protect the individual.
But when they become too complicated, then the bureaucracy, the centralized power, starts to abuse its power by using the rules.
And then the individual loses power because they can't figure out the complexity of the rules.
And that's essentially why you need the CPAs and the firms to figure out the complexity.
Anyway, these guys are good.
Talk to one of their strategists for free today.
Call 1-800-958-1000 or go to tnusa.com/lex.
This episode is brought to you by BetterHelp, spelled H-E-L-P, help.
I got to recently meet a lot of interesting people when I visited San Francisco.
I was there in part to celebrate Joscha Bach and the newly launched California Institute for Machine Consciousness.
I, by the way, encourage you to check it out.
I think it's cimc.ai.
And there, I talked to a lot of brilliant people, and one of them was a grad student studying the so-called dark triad.
These are the three personality traits of narcissism, Machiavellianism, and psychopathy.
For a brief moment, it made me wish I had taken that path of studying the human mind.
And perhaps that is the indirect way.
Through all the AI, through all the programming, through all the building of systems, and now with a podcast,
maybe I somehow sneaked up to that dream in the end.
Anyway, I say all that because these topics are studying the extremes of the human mind, but of course the extremes are just the edges of an incredibly complicated system.
That's just so fascinating to study, to reflect on, to put a mirror to all those processes that you do through talk therapy.
They're just fascinating.
Anyway, you can check them out at betterhelp.com/lex and save on your first month.
That's betterhelp.com/lex.
This episode is also brought to you by LMNT, my daily zero-sugar and delicious electrolyte mix. I'm not going to go down the rabbit hole, but there are a lot of interesting studies that measure the decreased performance of the human brain, so cognitive processing speed, for example: by what amount does it decrease? Reaction time: by what amount does it decrease when you decrease the brain's sodium levels? Sodium and potassium really are important on a chemical level for the functioning of the human brain.
Now, obviously, all throughout human history, people understood the value of water, but as a medical concept, the concept of dehydration only came about in the 19th century.
If we just look at the history of medicine, it's kind of hilarious how little we knew before.
And it makes me think we know very little now relative to what we will know in a hundred and a thousand years.
The human body, the biological system of the human body, is incredibly complicated.
So for us to have the certainty that we sometimes exude about the human body, about what we understand about disease, about health, it's kind of funny.
Anyway, get a simple pack for free with any purchase.
Try it at drinkLMNT.com/lex.
This episode is also brought to you by Shopify, a platform designed for anyone to sell anywhere with a great-looking online store.
Once again, I do this often, where I don't just, or at all, talk about Shopify, but instead talk about the CEO of Shopify,
Toby.
Once again, like I mentioned with Joscha Bach and the newly launched CIMC, the California Institute for Machine Consciousness, he's a big supporter of that too.
And a bunch of people have asked me why I have not done a podcast with him yet.
I don't know either.
I'm sure it's going to happen soon.
And I haven't seen him in quite a while.
A lot of people from a lot of walks of life deeply respect him for his intellect, for the way he does business, and just for the human being he is.
So,
anyway, not sure why I mentioned that here, but
back to what this is supposed to be.
You can sell shirts online like I did at lexfridman.com/shop.
Super easy to set up a store.
I did in a few minutes.
What else can I say?
You should do it too.
Sign up for a $1 per month trial period at shopify.com/lex.
That's all lowercase.
Go to shopify.com/lex to take your business to the next level today.
This episode is also brought to you by AG1, an all-in-one daily drink to support better health and peak performance.
I was training jiu-jitsu the other day in that wonderful Texas heat.
And I was reminded, first of all, how long my journey with jiu-jitsu has been and how fulfilling it has been.
How interesting the exploration of the puzzle of two humans trying to break each other's arms and legs, plus the wrestling and the grappling component. Really interesting: leverage, power, speed, how all that could be neutralized, how to control a human body with leverage, with technique, as opposed to raw, generally misapplied, strength, I should say. Anyway, there are times where there are long stretches of weeks where I don't train. You feel it in the cardio. You do a bunch of rounds and the breaths are shallow.
You feel like the mind is hazy from exhaustion.
That you're a little bit more risk averse because you don't want to end up in a bad position.
You have to battle out of that bad position after many rounds of exhausting battles.
And after that training session, when I got home, I enjoyed a nice cold AG1.
They'll give you a one-month supply of fish oil when you sign up at drinkag1.com/lex.
This is the Lex Fridman Podcast.
To support it, please check out our sponsors in the description or at lexfridman.com/sponsors.
And now, dear friends, here's Sundar Pichai.
Your life story is inspiring to a lot of people.
It's inspiring to me.
You grew up in India, whole family living in a humble two-room apartment, very little, almost no access to technology.
And from those humble beginnings, you rose to lead a $2 trillion
technology company.
So if you could travel back in time and tell that, let's say, 12-year-old Sundar that you're now leading one of the largest companies in human history, what do you think that young kid would say?
I would have probably laughed it off.
You know,
probably too far-fetched to imagine or believe at that time.
You would have to explain the internet first.
For sure.
I mean, computers to me at that time, you know, I was 12 in 1984.
So probably,
you know, by then I had started reading about them.
I hadn't seen one.
What was that place like?
Take me to your childhood.
You know, I grew up in Chennai.
It's in south of India.
It's a beautiful, bustling city.
Lots of people, lots of energy.
You know, simple life, definitely like fond memories of playing cricket outside the home.
We just used to play on the streets.
All the neighborhood kids would come out and we would play until it got dark and we couldn't play anymore, barefoot.
Traffic would come.
It would just stop the game.
Everything would drive through and you would just continue playing, right?
Just to kind of get the visual in your head.
You know, pre-computers was a lot of free time.
Now that I think about it, now you have to go and seek that quiet solitude or something.
Newspapers, books, is how I gained access to the world's information at the time, if you will.
My grandfather was a big influence.
He worked in the post office.
He was so good with language.
His English, you know, his handwriting, to this date, is the most beautiful handwriting I've ever seen.
He would write so clearly, he was so articulate.
And so he kind of got me introduced into books.
He loved politics.
So we could talk about anything.
And, you know, that was there in my family throughout.
So
lots of books, trashy books, good books, everything from Ayn Rand to books on philosophy to stupid crime novels.
So books were a big part of my life.
But with all of that, it's not surprising I ended up at Google, because Google's mission kind of always resonated deeply with me.
This access to knowledge, I was hungry for it, but definitely have fond memories of my childhood.
Access to knowledge was there.
So that's the wealth we had.
You know, every aspect of technology, I had to wait for a while.
I've obviously spoken before about how long it took for us to get a phone.
About five years, but it's not the only thing.
A telephone.
There was a five-year waiting list.
And we got a Rotary telephone.
but it dramatically changed our lives.
You know, people would come to our house to make calls to their loved ones.
You know, I would have to go all the way to the hospital to get blood test records, and it would take two hours to go.
And they would say, Sorry, it's not ready.
Come back the next day, two hours to come back.
And that became a five-minute thing.
So as a kid, like, I mean, this light bulb went off in my head, you know, this power of technology to kind of change people's lives.
We had no running water.
You know, it was a massive drought.
So they would get water in these trucks, maybe eight buckets per household.
So me and my brother, sometimes my mom, we would wait in line, get that and bring it back home.
Many years later, like we had running water and we had a water heater, and you could get hot water to take a shower.
I mean, like, so, you know, for me, everything was discrete like that.
And so I've always had this thing, you know, first-hand feeling of like how technology can dramatically change like your life and like the opportunity it brings.
So, you know, that was kind of a subliminal takeaway for me throughout growing up.
And, you know, I kind of actually observed it and felt it, you know.
So
we had to convince my dad for a long time to get a VCR.
Do you know what a VCR is?
Yeah.
I'm trying to date you now.
But, you know, because before that,
you only had like kind of one TV channel,
right?
That's it.
And so, you know, you can watch movies or something like that.
But this was by the time I was in 12th grade, we got a VCR.
You know, it was a
like a Panasonic, which we had to go to some like shop, which had kind of smuggled it in, I guess.
And that's where we bought a VCR.
But then being able to record
like a World Cup football game and then, or like like get put like videotapes and watch movies, like all that.
So, like, you know, I had these discrete memories growing up.
And so, you know, always left me with the feeling of like how getting access to technology drives that step change in your life.
I don't think you'll ever be able to equal the first time you get hot water.
To have that convenience of going and opening a tap and have hot water come out, yeah.
It's interesting.
We take for granted the progress we've made.
If you look at human history, just those plots that look at GDP across 2,000 years, and you see that exponential growth to where most of the progress happened since the Industrial Revolution, and we just take for granted, we forget how far we've gone.
So our ability to understand
how great we have it and also how quickly technology can improve is quite poor.
Oh, I mean, it's extraordinary.
You know, I go back to India now, the power of mobile.
You know, it's mind-blowing to see the progress through the arc of time.
It's phenomenal.
What advice would you give to young folks listening to this all over the world who look up to you and find your story inspiring, who want to be maybe the next Sundar Pichai, who want to start, create companies, build something that has a lot of impact in the world?
Look, you have a lot of luck along the way, but you obviously have to make smart choices.
You're thinking about what you want to do.
Your brain is telling you something.
But when you do things, I think it's important to kind of get that, listen to your heart and see whether you actually enjoy doing it, right?
That feeling of
if you love what you do,
it's so much easier.
And you're going to see the best version of yourself.
It's easier said than done.
I think it's tough to find things you love doing.
But I think kind of listening to your heart a bit more than your mind in terms of figuring out what you want to do,
I think is one of the best things I would tell people.
The second thing is, I mean, trying to work with people who, you feel, are better than you. At various points in my life, I've worked with people who I felt were better than me.
Like kind of like, you know, you almost are sitting in a room talking to someone and they're like, wow, like, you know,
and you want that feeling a few times.
Trying to get yourself in a position.
where you're working with people who you feel are kind of like stretching your abilities is what helps you grow, I think.
So putting yourself in uncomfortable situations.
And I think often you'll surprise yourself.
So I think being open-minded enough to kind of put yourself in those positions is maybe
another thing I would say.
What lessons can we learn?
Maybe from an outsider perspective, for me, looking at your story and having gotten to know you a bit.
You're humble, you're kind.
Usually when I think of somebody who has had a journey like yours and climbs to the very top of leadership, they're usually in a cutthroat world.
They're usually going to be a bit of an asshole.
So, what wisdom are we supposed to draw from the fact that your general approach is of balance, of humility, of kindness, listening to everybody?
What's your secret?
I do get angry.
I do get frustrated.
I have the same emotions all of us do, right, in the context of work and everything.
But a few things, right?
I think, you know, I
over time I figured out the best way to get the most out of people.
You know, you kind of find mission-oriented people who are in the shad journey, who have this inner drive to excellence, to do the best.
And,
you know, you kind of motivate people
and you can achieve a lot that way.
Right, and so it often tends to work out that way. But have there been times, like, you know, I lose it?
Yeah.
But, you know, maybe less often than others.
And maybe over the years,
less and less so, because, you know, I find it's not needed to achieve what you need to do.
So losing your shit has not been productive.
Yeah, less often than not.
I think people respond to that.
Yeah.
They may do stuff to react to that, like, but you actually want them to do the right thing.
And so,
you know, maybe there's a bit of, like, sports. You know, I'm a sports fan. In football coaches, in soccer, that is, football, people often talk about, like, man management, right?
Great coaches too, right?
I think there is an element of that in our lives.
How do you get the best out of the people you work with?
You know, at times you're working with people who are so committed to achieving, if they've done something wrong, they feel it more than you
do, right?
So you treat them differently than, you know, occasionally there are people who you need to clearly let them know, like, that wasn't okay or whatever it is.
But I've often found that not to be the case.
And sometimes the right words at the right time, spoken firmly, can reverberate through time.
Also, sometimes the unspoken words.
You know, people can sometimes see that,
like, you know, you're unhappy without you saying it.
And so sometimes the silence can deliver that message even more.
Sometimes less is more.
Who's the greatest soccer player of all time?
Messi or Ronaldo or Pele or Maradona?
I'm going to make, you know, in this question.
Is this going to be a political answer?
No, no, no.
I will tell the truthful answer, which is the
answer.
It is.
You know, it's been interesting because my son is a big Cristiano Ronaldo fan.
And so we've had to watch El Clásicos together, you know, with that dynamic in there.
I so admire CR7.
I mean, I've never seen an athlete more committed to that kind of excellence.
And so he's one of the all-time greats.
But, you know, for me, Messi is it.
Yeah, when I see Lionel Messi, you just are in awe that humans are able to achieve that level of greatness and genius and artistry.
When we talk, we'll talk about AI, maybe robotics and this kind of stuff.
That level of genius, I'm not sure AI can possibly match for a long time.
It's just an example of greatness.
And you have that kind of greatness in other disciplines, but in sport, you get to visually see it unlike anything else.
And just the timing, the movement,
this is genius.
I had the chance to see him a couple of weeks ago.
He played in San Jose
against the Quakes.
So I went to see it, see the game.
I was a fan there, had good seats, knew where he would play in the second half, hopefully.
And
even at his age, just watching him when he gets the ball, that movement, you know, you're right, that special quality.
It's tough to describe, but you feel it when you see it, yeah.
He still got it.
If we rank all the technological innovations throughout human history, let's go back
maybe the history of human civilizations, 12,000 years ago.
And you rank them by
how much of a productivity multiplier they've been.
So
we can go to electricity or the labor mechanization of the Industrial Revolution, or we can go back to the first agricultural revolution 12,000 years ago.
In that long list of inventions, do you think AI, when history is written a thousand years from now, do you think it has a chance to be the number one productivity multiplier?
It's a great question.
Look, many years ago, I think it might have been 2017 or 2018,
you know, I said at the time, like, you know, AI is the most profound technology humanity will ever work on.
It'll be more profound than fire or electricity.
So I have to back myself.
I still think that's the case.
When you asked this question, I was thinking, well, do we have a recency bias, right?
You know, like in sports, it's very tempting to call the current person you're seeing the greatest player, right?
And
so is there a recency bias?
And
I do think from first principles, I would argue AI will be bigger than all of those.
I didn't live through those moments.
You know, two years ago, I had to go through surgery and then I processed that.
There was a point in time people didn't have anesthesia when they went through these procedures.
At that moment, I was like, that has got to be the greatest invention humanity has ever done, right?
So, look, we don't know what it is to have lived through those times.
But, you know, many of the things you're talking about were kind of these general things which pretty much affected everything, you know, electricity or the internet, et cetera. But I don't think we have ever dealt with a technology which is both progressing so fast, becoming so capable, and it's not clear what the ceiling is.
And
the main unique thing is, it's recursively self-improving, right?
It's capable of that.
And so the fact is, it's the first technology that will kind of dramatically accelerate creation itself, like creating things, building new things, can improve and achieve things on its own,
right?
I think like puts it in a different league, right?
And so different league.
And so I think the impact it will end up having will far surpass everything we've seen before.
Obviously, with that comes a lot of important things to think and wrestle with, but I definitely think that'll end up being the case.
Especially if it gets to the point of where we can achieve superhuman performance on the AI research itself.
So it's a technology that may, it's an open question, but it may be able to achieve a level to where the technology itself can create itself better than it could yesterday.
It's like the Move 37 of AI research or whatever it is, right?
Like, you know, and when, when, yeah, you're right, when it can do novel,
self-directed research, obviously for a long time, we'll
have hopefully always humans in the loop and all that stuff.
And these are complex questions to talk about.
But yes, I think the underlying technology, you know, I've said this: if you watched AlphaGo start from scratch, be clueless, and, like, become better through the course of a day, it really hits you when you see that happen.
Even, like, the Veo 3 models, if you sampled the models when they were, like, 30% done and 60% done and looked at what they were generating, you kind of see how it all comes together.
It's kind of like, I would say, it's kind of inspiring, a little bit unsettling, right, as a human.
So all of that is true, I think.
Well, the interesting thing of the Industrial Revolution, electricity, like you mentioned, you can go back to the, again, the agriculture, the first agricultural revolution.
There's
what's called the Neolithic package of the first agricultural revolution.
It wasn't just that the nomads settled down and started planting food, but all this other kinds of technology was born from that and it's included in this package.
It wasn't one piece of technology, it's there's these ripple effects, second and third order effects that happen.
Everything from something silly, well, silly but profound, like pottery that can store liquids and food,
to
something we kind of take for granted, but social hierarchies
and political hierarchy.
So, like early government was formed, because it turns out if humans stop moving and have some surplus food, they start coming up with, they get bored and they start coming up with interesting systems.
And then trade emerges, which turns out to be a really profound thing.
And like I said, government, I mean, there's just
second and third order effects from that. Everything included in that package is incredible and probably extremely difficult to predict.
If you asked one of the people in the nomadic tribes to predict that, it would be impossible.
It's difficult to predict.
But all that said, what do you think are some of the early things we might see in the quote-unquote AI package?
I mean, most of it probably we don't know today, but like, you know, the one thing which we can tangibly start seeing now is,
you know, obviously with the coding progress, you got a sense of it.
It's going to be so easy to imagine, like, thoughts in your head translating that into things that exist, that'll be part of the package, right?
Like it's going to empower almost all of humanity to kind of express themselves.
Maybe in the past you could have expressed with words,
but like
you could kind of build things into existence, right?
You know, maybe not fully today.
We are at the early stages of vibe coding.
You know, I've been amazed at what people have put out online with Veo 3, but it takes a bit of work, right?
You have to stitch together a set of prompts, but all this is going to get better.
The thing I always think about is this is the worst it'll ever be, right?
Like at any given moment in time.
Yeah, it's interesting.
You went there as kind of a first thought, sort of an exponential increase
of access to creativity.
Software creation.
Are you creating a program, a piece of content to be shared with others,
games down the line?
All of that just becomes infinitely more possible.
Well, I think the big thing is that
it makes it accessible.
It unlocks the cognitive capabilities of the entire 8 billion.
No, I agree.
Look, think about 40 years ago,
maybe in the US there were five people who could do what you were doing,
like go do an interview, you know, and you know.
But today, think about with YouTube and other products, et cetera, like how many more people are doing it.
So, I think this is what technology does, right?
Like when the internet created blogs, you know, you heard from so many more people.
So I think, but with AI, I think that number won't be in the hundreds of thousands.
It'll be tens of millions of people, maybe even a billion people,
like putting out things into the world in a deeper way.
And I think it'll change the landscape of creativity.
And it makes a lot of people nervous.
Like, for example,
whatever, Fox, MSNBC, CNN are really nervous about this podcast.
Like you mean this dude in a suit could just do this and
YouTube and thousands of others, tens of thousands, millions of other creators can do the same kind of thing.
That makes them nervous.
And now you get a podcast from Notebook LM that's about five to ten times better than any podcast I've
done.
Not true, but I'm joking at this time, but maybe not.
And that changes.
You have to evolve.
Because I, on the podcasting front, I'm a fan of podcasts
much more than I am a fan of being a host or whatever.
If there's great podcasts that are both AIs, I'll just stop doing this podcast.
I'll listen to that podcast.
But you have to evolve and you have to change.
And that makes people really nervous, I think.
But it's also a really exciting future.
The only thing I may say is, I do think, like, in a world in which there are two AIs, I think people value and choose, just like in chess, you and I would never watch Stockfish 10 or whatever and AlphaGo play against each other.
Like it would be boring for us to watch.
But Magnus Carlsen and Gukesh, that game would be much more fascinating to watch.
So it's tough to say.
Like one way to say is you'll have a lot more content.
And so you will be listening to AI-generated content because sometimes it's efficient, et cetera.
But the premium experiences you value
might be a version of like the human essence, wherever it comes through.
Going back to what we talked about earlier, watching Messi dribble the ball.
I don't know, one day I'm sure a machine will dribble much better than Messi, but I don't know whether it would evoke that same emotion in us.
So I think that'll be fascinating to see.
I think the element of podcasting or audiobooks that is about information gathering, that part might be removed, or that might be more efficiently and in a compelling way done by AI.
But then it would be just nice to hear humans struggle with the information, contend with the information, try to internalize it, combine it with the complexity of our own emotions and consciousness and all that kind of stuff.
But if you actually want to find out about a piece of history, you go to Gemini.
If you want to see Lex struggle with that history, then you look, or other humans,
you look at that.
But the point is, it's going to change the nature,
continue to change the nature of how we discover information, how we consume the information, how we create the information.
The same way that YouTube changed everything completely, it changed news, and that's something our society is struggling with.
Yeah, YouTube, look, YouTube enabled, I mean, you know this better than anyone else, it's enabled so many creators.
There is no doubt in me that like we will enable more filmmakers than there have ever been, right?
You're going to empower a lot more people.
So, I think there is an expansionary aspect of this, which is underestimated, I think.
I think it'll unleash human creativity in a way that hasn't been seen before.
It's tough to internalize.
The only way to internalize it is if you brought someone from the '50s or '40s and just put them in front of YouTube; you know, I think it would blow their minds.
Similarly, I think we would get blown away by what's possible in a 10 to 20 year timeframe.
Do you think there's a future?
How many years out is it?
That, let's say, let's put a mark on it, 50% of content, compelling, good content, 50% of good content, is generated by Veo 4, 5, 6?
You know, I think it depends on what it is. Like, you know, maybe if you look at movies today with CGI, there are great filmmakers. You still look at who the directors are and who uses it. There are filmmakers who don't use it at all, and you value that. There are people who use it incredibly.
Think about somebody like James Cameron, like what he would do with these tools in his hands.
But I think there'll be a lot more content created, like, just like writers today use Google Docs and not think about the fact that they are using a tool like that.
Like, people will be using the future versions of these things.
Like, it won't be a big deal at all to them.
I've gotten a chance to get to know Darren Aronofsky.
Well, he's been really leaning in and trying to figure out.
It's fun to watch a genius who came up before any of this was even remotely possible.
He created Pi, one of my favorite movies, and from there just continued to create a really interesting variety of movies.
And now he's trying to see how AI can be used to create compelling films.
You have people like that.
You have people, I've gotten to know, who are just edgier folks that are AI-first, like the Dor Brothers.
Both Aronofsky and the Dor Brothers create at the edge of the Overton window of society.
You know, they push whether it's
sexuality or violence, it's edgy, like artists are, but it's still classy.
It doesn't cross that line,
whatever that line is.
You know, Hunter S.
Thompson has this line
that the
only way to find out where the edge, where the line is, is by crossing it.
And I think for artists, that's true.
That's kind of their purpose sometimes.
Comedians and artists just cross that line.
I wonder if you can comment on the weird place that puts Google
because Google's line is probably different than some of these artists.
How do you think about, specifically with Veo and Flow, how to allow artists to do crazy shit, but also, like, the responsibility of it not being too crazy?
I mean, it's a great question.
Look, part of, you mentioned Darren.
You know, he's a clear visionary, right?
Part of the reason we started working with him early on Veo
is
he's one of those people who's able to kind of see that future, get inspired by it, and kind of showing the way for how creative people can express themselves with it.
Look, I think when it comes to allowing artistic free expression, it's one of the most important values in a society.
I think, you know, artists have always been the ones to
push boundaries, expand the frontiers of thought.
And so, look,
I think that's going to be an important value we have.
So
I think we will provide tools and put it in the hands of artists for them to use and put out their work.
Those APIs, I mean, I almost think of that as infrastructure.
Just like when you provide electricity to people or something, you want them to use it and like, you're not thinking about the use cases on top of it.
It's a paintbrush.
Yeah.
And so I think that's how obviously there have to be some things.
And society needs to decide at a fundamental level what's okay, what's not.
We'll be responsible with it.
But I do think, you know, when it comes to artistic free expression, I think that's one of those values we should work hard to defend.
I wonder if you can comment on
maybe earlier versions of Gemini were a little bit careful on the kind of things they would be willing to answer.
I just want to comment: I was really surprised, pleasantly surprised, and enjoyed the fact that Gemini 2.5 Pro is a lot less careful, in a good sense.
Don't ask me why, but I've been doing a lot of research on Genghis Khan
and
the steppes.
So there's a lot of violence there in that history.
It's a very violent history.
I've also been doing a lot of research on World War I and World War II.
And earlier versions of Gemini had, basically, this kind of sense of, are you sure you want to learn about this?
And now it's actually very factual, objective,
talks about very difficult parts of human history and does so with nuance and depth.
It's been really nice, but there's a line there that I guess Google has to kind of walk.
I wonder if it's, and it's also an engineering challenge, how to do that at scale across all the weird queries that people ask.
What, um, can you just speak to that challenge?
How do you allow Gemini to say, again, forgive, pardon my French, crazy shit, but
not too crazy?
I think one of the good insights here has been
as the models are getting more capable, the models are really good at this stuff, right?
And so I think in some ways, maybe a year ago, the models weren't fully there.
So they would also do stupid things more often.
And so, you know, you're trying to handle those edge cases, but then you make a mistake in how you handle those edge cases and it compounds.
But I think with 2.5, what we particularly found is once the models cross a certain level of intelligence and sophistication,
they are able to reason through these nuanced issues pretty well.
And I think users really want that, right?
Like, you know, you want as much access to the raw model as possible, right?
But I think it's a great area to think about.
Like, you know, over time,
you know, we should allow more and more, closer access to it, maybe obviously let people use custom prompts if they wanted to and, like, you know, experiment with it, et cetera.
I think that's an important direction but look the first principles we want to think about it is
you know from a scientific standpoint
like making sure the models and I'm saying scientific in the sense of like how you would approach math or physics or something like that from first principles, having the models reason about the world, be nuanced, et cetera,
from the ground up is the right way to build these things, right?
Not like some subset of humans kind of hard coding things on top of it.
So I think it's the direction we've been taking, and I think you'll see us continue to push in that direction.
Yeah, I actually asked, I gave these notes, I took extensive notes, and I gave them to Gemini and said, can you ask a novel question that's not in these notes?
And it wrote, Gemini continues to really surprise me, really surprise me.
It's been really beautiful.
It's an incredible model.
The question
it generated was, you, meaning Sundar, told the world Gemini is churning out 480 trillion tokens a month.
What's the most life-changing five-word sentence hiding in that haystack?
That's a Gemini question.
But it gave me a sense, I don't think you can answer that, but it gave me, it made, it woke me up to like, all of these tokens are providing little aha moments for people across the globe.
So that's, like, learning that those tokens are people being curious; they ask a question, and they find something out.
And it truly could be life-changing.
Oh, it is.
Look, you know, I had the same feeling about search many, many years ago.
You know, you definitely, you know, this tokens-per-month number has, like, grown 50 times in the last 12 months.
Is that accurate, by the way?
Yeah, it is.
It is.
It is accurate.
I'm glad it got it right.
But that number was 9.7 trillion tokens per month 12 months ago.
It's gone up to 480.
It's a 50x increase.
So there's no limit to human curiosity.
And I think it's one of those moments.
Maybe I don't think it is there today, but maybe one day there's a five-word phrase which says what the actual universe is or something like that and something very meaningful.
But I don't think we are quite there yet.
Do you think the scaling laws are holding strong?
There are a lot of ways to describe the scaling laws for AI, on the pre-training and on the post-training fronts.
So, the flip side of that: do you anticipate AI progress will hit a wall?
Is there a wall?
You know, it's a cherished micro kitchen conversation.
Once in a while, I have it, you know, like when Demis is visiting or, you know,
Demis, Koray, Jeff, Noam, Sergey, a bunch of our people, like we sit and
talk about this, right?
And
look,
we see a lot of headroom ahead, right?
I think we've been able to optimize and improve on all fronts, right?
Pre-training, post-training, test time, compute,
tool use.
Right, over time making these more agentic, getting these models to be more general world models. In that direction, like with Veo 3, you know, the physics understanding is dramatically better than what Veo 1 or something like that was.
So you kind of see on all those dimensions, I feel, you know, progress is very obvious to see.
And
I feel like there is significant headroom.
More importantly, you know, I'm fortunate to work with some of the
best researchers on the planet, right?
They think there is more headroom to be had here, and so I think we have an exciting trajectory ahead. It's tougher to say, you know, each year I sit and say, okay, we're going to throw 10x more compute over the course of the next year at it, and will we see progress? Sitting here today, I feel like the year ahead will have a lot of progress.
And do you feel any limitations? What are the bottlenecks: compute-limited, data-limited, idea-limited? Do you feel any of those limitations, or is it full steam ahead on all fronts?
I think it's compute limited in this sense, right?
Like, you know, part of the reason you've seen us do Flash, Nano, and Pro models, but not an Ultra model.
It's like for each generation, we feel like we've been able to get the pro model at like, I don't know, 80, 90% of ultra capability, but ultra would be
a lot more
like slow and a lot more expensive to serve.
But what we've been able to do is to go to the next generation and make the next generation's pro as good as the previous generation's ultra, but be able to serve it in a way that it's fast and you can use it and so on.
So, I do think scaling laws are working, but
it's tough to get at any given time.
The models we all use the most
are maybe like a few months behind the maximum capability we can deliver, right?
Because that won't be the fastest, easiest to use, et cetera.
Also, that's in terms of intelligence.
It becomes harder and harder to measure
performance, in quotes, because, you know, you could argue Gemini Flash
is much more impactful than Pro, just because of the latency.
It's super intelligent already.
I mean, sometimes like latency is
maybe more important than intelligence, especially when the intelligence is just a little bit less in Flash.
It's still an incredibly smart model.
And so you have to now start measuring impact.
And then it feels like benchmarks are less and less capable of capturing the intelligence of models, the effectiveness of models, the usefulness, the real-world usefulness of models.
Another kitchen question.
So lots of folks are talking about timelines for AGI
or ASI, artificial superintelligence.
So AGI, loosely defined, is basically human expert level at a lot of the main fields of pursuit for humans.
And then ASI is what AGI becomes presumably quickly by being able to self-improve.
So becoming far superior in intelligence to humans across all these disciplines.
When do you think we'll have AGI?
Is 2030 a possibility?
There's one other term we should throw in there.
I don't know who used it first.
Maybe Karpathy did: AJI.
Have you heard AJI?
The artificial jagged intelligence?
Sometimes feels that way, right?
There's both the progress, you see what they can do, and then, like, you can trivially find they make numerical errors, or, you know, counting the R's in strawberry or something, which seems to trip up most models or whatever it is, right?
So
maybe we should throw that term in there.
I feel like we are in the AJI phase, where, like, there's dramatic progress.
Some things don't work well, but overall, you know, you're seeing lots of progress.
But if your question is, well, will it happen by 2030?
Look, we constantly move the line of what it means to be AGI.
There are moments today, you know, like sitting in a Waymo on a San Francisco street with all the crowds and the people, and watching it kind of work its way through. I see glimpses of it there. The car is sometimes kind of impatient, trying to work its way. Or using Astra, like in Gemini Live, you know, asking questions about the world.
What's this skinny building doing in my neighborhood?
It's a streetlight, not a building.
You see glimpses.
That's why I use the word AJI, because then you see stuff which obviously, you know, we are far from AGI too.
So you have both experiences simultaneously happening to you.
I'll answer your question, but I'll also throw out this.
I almost feel the term doesn't matter.
What I know is by 2030, there'll be such dramatic progress.
We'll be dealing with the consequences of that progress, both the positives,
both the positive externalities and the negative externalities that come with it in a big way by 2030.
So that I strongly feel, right?
Whatever we may be arguing about the term, or maybe Gemini can answer what that moment is in time in 2030.
But I think the progress will be dramatic, right?
So that I believe in.
Will the AI think it has reached AGI by 2030?
I would say we will just fall short of that timeline, right?
So I think it'll take a bit longer.
It's amazing in the early days of Google DeepMind in 2010, they talked about a 20-year timeframe to achieve AGI,
which is kind of fascinating to see.
But,
you know, I formed the whole thing, seeing what Google Brain did in 2012.
And when we acquired DeepMind in 2014,
right close to where we're sitting, in 2012, you know, Jeff Dean showed the image of when the neural networks could recognize a picture of a cat, right, and identify it.
You know, this is the early versions of brain, right?
And so, you know, we all talked about couple decades.
I don't think we'll quite get there by 2030.
So my sense is it's slightly after that.
But I would stress it doesn't matter like what that definition is
because
you will have mind-blowing progress on many dimensions.
Maybe AI can create videos.
We have to figure out as a society, how do we, we need some system by which we all agree that this is ai generated and we have to disclose it in a certain way because how do you distinguish reality otherwise yeah there's so many interesting things you said so first of all just looking back at this recent now feels like distant history uh with google brain i mean that was before tensorflow before tensorflow was made public and open sourced so the tooling matters too combined with github ability to share code Then you have the ideas of attention transformers and the diffusion now.
And then there might be a new idea that seems simple in retrospect, but will change everything.
And that could be the post-training, the inference time innovations.
And I think Shad Cian tweeted that Google is just one great UI from completely winning the AI race.
Meaning, like
UI is a huge part of it.
Like, how that intelligence,
I think Logan Kilpatrick likes to talk about this. Right now, it's an LLM, but when is it going to become a system, where you're talking about shipping systems versus shipping the particular model?
Yeah, that matters too, how the system manifests itself and how it presents itself to the world.
That really, really matters.
Oh, hugely so.
There are simple UI innovations which have changed the world, right?
And
I absolutely think so.
We will see a lot more progress in the next couple of years.
I think
AI itself is on a self-improving track for UI itself.
Like, you know, today,
we are like constraining the models.
The models can't quite express themselves in terms of the UI to
people.
But that is like, you know, if you think about it, we've kind of boxed them in that way.
But given these models can code,
you know, they should be able to write the best interfaces to express their ideas over time, right?
That is an incredible idea.
So the API is already open.
So you create a really nice agentic system that continuously improves the way you can be talking to an AI.
Yeah.
But a lot of that is the interface.
And then, of course, the incredible multimodal aspect of the interface that Google's been pushing.
These models are natively multimodal.
They can easily take content from any format, put it in any format.
They can write a good user interface.
They probably understand your preferences better over time.
Like, you know, and so, all this is like the evolution ahead, right?
And so,
that goes back to where we started the conversation.
I think there'll be dramatic evolutions in the years ahead.
Maybe one more kitchen question.
This even more ridiculous concept of P(doom).
So, the philosophically minded folks in the AI community think about the probability that AGI and then ASI might destroy all of human civilization.
I would say my P(doom) is about 10%.
Do you ever think about this kind of
long-term threat of ASI?
And what would your P(doom) be?
Look, I mean, for sure.
Look, I've both been very excited about AI, but I've always felt this is a technology, you know, we have to actively think about the risks and work very, very hard to harness it in a way that it all works out well.
On the P(doom) question, look, you know, it wouldn't surprise you to say that's probably another micro-kitchen conversation that pops up once in a while, right?
And
given how powerful the technology is, maybe stepping back, you know, when you're running a large organization, if you can kind of align the incentives of the organization, you can achieve pretty much anything, right?
Like, you know, if you can get kind of people all marching towards like a goal in a very focused way, in a mission-driven way, you can pretty much achieve anything.
But it's very tough to organize all of humanity that way.
But I think if P(doom) is actually high, at some point all of humanity is, like, aligned in making sure that's not the case, right?
And so we'll actually make more progress against it, I think.
So the irony is, so there is a self-modulating
aspect there.
Like I think if humanity collectively puts their mind to solving a problem, whatever it is, I think we can get there.
So because of that,
I think I'm optimistic on the P(doom) scenarios.
But that doesn't mean the risk isn't there; I think the underlying risk is actually pretty high.
But
I have a lot of faith in humanity kind of rising up to meet that moment.
That's really, really well put.
I mean, as the threat becomes more concrete and real, humans do really come together and get their shit together.
Well, the other thing I think people don't often talk about is probability of doom without AI.
So there's all these other ways that humans can destroy themselves.
And it's very possible, at least I believe so, that AI will help us become smarter, kinder to each other,
more efficient.
It'll help more parts of the world flourish, where they would be less resource-constrained, which is often a source of military conflict and tensions and so on.
So we also have to load into that: what's the P(doom) without AI?
P(doom) with AI and P(doom) without AI?
Because it's very possible that AI will be the thing that saves us, saves human civilization from all the other threats.
I agree with you.
I think it's insightful.
Look, I felt
like, to make progress on some of the toughest problems, it would be good to have AI, like a pair, helping you, right?
And
like, you know, so that resonates with me for sure.
Yeah.
Quick pause, bathroom break.
All right, let's do that.
If NotebookLM was as compelling, like what I saw today with Beam, if it was compelling in the same kind of way... it blew my mind. It was incredible. I didn't think it was possible.
During the pause, I was like, can you imagine the US president and the Chinese president being able to do something like Beam with the live Meet translation working well? So they're both sitting and talking, making progress a bit more.
Yeah, just for people listening, we took a quick bathroom break, and now we're talking about the demo I did. We'll probably post it somewhere, somehow, maybe here. I got a chance to experience Beam, and it's hard to describe in words how real it felt, with just, what is it, six cameras. It's incredible.
It's incredible. It's one of the toughest products to describe. You can't quite describe it to people; even when we show it in slides, et cetera, you don't know what it is. You have to kind of experience it.
On the world leaders front, on politics, geopolitics, there's something really special.
Again, we're studying World War II and how much could have been saved if Chamberlain met Stalin in person.
And I sometimes also struggle explaining to people, articulating why I believe meeting in person for world leaders is powerful.
It just seems naive to say that, but there is something there in person.
And with Beam, I felt that same thing.
and then I'm unable to explain.
All I kept doing is what, like, a child does, you look real,
you know.
And
I mean, I don't know if that makes meetings more productive or so on, but it certainly gives them more of the same thing that makes you want to show up to work versus remote sometimes, that human connection.
I don't know what that is.
It's hard to, it's hard to put into words.
There's some,
there's something beautiful about great teams collaborating on a thing
that's not captured by the productivity of that team or by whatever on paper.
Some of the most beautiful moments you experience in life are at work, pursuing a difficult thing together for many months.
There's nothing like it.
You're in the trenches.
And yeah, you do form bonds that way, for sure.
And to be able to do that like somewhat remotely in that same personal touch, I don't know.
That's a deeply fulfilling thing. I know a lot of people, I personally, hate meetings, because a significant percent of meetings, when done poorly, don't serve a clear purpose. But that's a meeting problem, that's not a communication problem. If you can improve the communication for the meetings that are useful, it's just incredible. So yeah, I was blown away by the great engineering behind it.
And then we get to see what impact that has.
That's really interesting, but just incredible engineering.
Really impressive.
No, it is.
And obviously we'll work hard over the years to make it more and more accessible.
But yeah, even on a personal front, outside of work meetings, you know, a grandmother who's far away from her grandchild being able to, you know, have that kind of an interaction, right?
All that, I think, will end up being very meaningful.
Nothing substitutes being in person.
You know, it's not always possible.
You know, you could be a soldier deployed, right, trying to talk to your loved ones.
So I think, you know, so that's what inspires us.
When you and I
hung out last year and took a walk,
I remember, I don't think we talked about this, but I remember,
you know, outside of that, seeing dozens of articles written by analysts and experts and so on that
Sundar Pichai should step down because the perception was that Google was definitively losing the AI race, has lost its magic touch in the rapidly evolving technological landscape.
And now a year later, it's crazy.
You showed this plot of all the things that were shipped over the past year.
It's incredible.
And Gemini Pro is winning across many benchmarks and products as we sit here today.
So take me through that experience when there's all these articles saying
you're the wrong guy to lead Google through this.
Google is lost.
It's done.
It's over to today where Google is winning again.
What were some low points during that time?
Look, I
mean, lots to unpack.
You know, obviously,
like, I mean,
the main bet I made as a CEO was to really,
you know, make sure the company was approaching everything in a AI-first way,
really, you know, setting ourselves up to develop AGI responsibly, right?
And
make sure we're putting out products
which embody
things that are very, very useful for people.
So, look, I knew even through moments like that last year,
I had a good sense of what we were building internally, right?
So
I'd already made
many important decisions, you know, bringing together teams of the caliber of Brain and DeepMind and setting up Google DeepMind.
There were things like: we made the decision to invest in TPUs 10 years ago.
So we knew we were scaling up and building big models.
Anytime you're in a situation like that,
a few aspects.
I'm good at tuning out noise, right?
Separating signal from noise.
Do you scuba dive?
Like, have you?
No.
It's amazing.
Like, I'm not good at it, but I've done it a few times.
But sometimes you jump in the ocean, it's so choppy,
but you go down one foot under, it's the calmest thing in the entire universe, right?
So there's a version of that, right?
Like, you know,
running Google,
you know, you may as well be coaching Barcelona or Real Madrid, right?
Like, you know, you have a bad season.
So there are aspects to that.
But, you know, like, look, I'm good at tuning out the noise.
I do watch out for signals.
You know, it's important to separate the signal from the noise.
So there are good people sometimes making good points outside.
So you want to listen to it.
You want to take that feedback in.
But, you know, internally, like, you know, you're making a set of consequential decisions.
Right.
As leaders, you're making a lot of decisions.
Many of them are, like, inconsequential. It feels like they matter, but over time you learn that most of the decisions you're making on a day-to-day basis don't matter.
Like
you have to make them and you're making them just to keep things moving.
But you have to make a few consequential decisions, right?
And
we had
set up the
right teams, right leaders.
We had world-class researchers.
We were training Gemini.
Internally, there were factors which, for example, outside people may not have appreciated.
I mean, TPUs are amazing, but we had to ramp up TPUs too.
That took time, right?
And
to scale actually having enough TPUs to get the compute needed.
But I could see internally the trajectory we were on.
And
B, you know, I was so excited internally about the possibility.
To me, this moment felt like one of the biggest opportunities ahead for us as a company.
That the opportunity space ahead over the next decade, next 20 years is bigger than what has happened in the past.
And I thought we were set up like better than most companies in the world to go realize that vision.
I mean, you had to make some
consequential, bold decisions.
Like you mentioned the merger of deep mind and brain.
Maybe it's my perspective, just knowing humans.
I'm sure there's a lot of egos involved.
It's very difficult to merge teams, and I'm sure there were some hard decisions to be made.
Can you take me through your process of how you think through that?
How do you decide to pull the trigger and make that decision?
Maybe what were some painful points?
How do you navigate those turbulent waters?
Look, we were fortunate to have two world-class teams, but you're right.
Like, it's like somebody coming and telling you, take Stanford and MIT and then put them together and create a great department, right?
And easier said than done.
But we were fortunate, you know, phenomenal teams.
Both had their strengths, you know, they were run very differently, right?
Like Brain
was kind of a
lot of diverse projects, bottom-up, and out of it came a lot of important research breakthroughs.
DeepMind at the time had a strong vision of how you want to build AGI, and so they were pursuing their direction.
But I think through those moments, luckily, tapping into the fact that,
you know, Jeff had expressed a desire to go back to more of his scientific, individual-contributor roots.
You know, he felt like management was taking up too much of his time.
And
Demis naturally, I think, you know, was running DeepMind and was a natural choice there.
But I think it was, you're right.
You know, it took us a while to bring the teams together.
Credit to Demis, Jeff, Koray, all the great people there.
They worked super hard to combine the best of both worlds when you set up that team.
A few sleepless nights here and there as we put that thing together.
We were patient in how we did it so that it works well for the long term, right?
And some of that in that moment, I think, yes, with things moving fast,
I think you definitely felt the pressure.
But I think we pulled off that transition well.
And, you know, they're obviously doing incredible work.
And there's a lot more incredible things ahead coming from them.
Like we talked about, you have a very calm, even-tempered, respectful demeanor.
During that time, whether it's the merger or just dealing with the noise,
were there times where frustration boiled over?
Like, did you
have to go a bit more intense on everybody than you usually would?
Probably, you know, probably, right?
I think, I think in the sense that, you know, it was a moment where we were all driving hard, but when you're in the trenches working with passion,
you're going to have days, right, where you disagree, you argue, but all that, I mean, is just par for the course of working intensely, right?
And,
you know, at the end of the day,
all of us are doing what we are doing because the impact it can have.
We are motivated by it.
It's like, you know, for many of us this has been a long-term journey, and so it's been super exciting. The positive moments far outweigh the stressful moments. Just early this year, I had a chance to celebrate back to back over two days: you know, a Nobel Prize for Geoff Hinton, and the next day a Nobel Prize for Demis and John Jumper. You know, you work with people like that, and all of that is super inspiring.
Is there something with you where you had to, like, put your foot down,
maybe with less versus more?
Or like, I'm the CEO and we're doing this.
To my earlier point about consequential decisions you make, there are decisions you make, people can disagree pretty vehemently.
And,
but at some point, like, you know, you make a clear decision and you just ask people to commit.
Right? Like, you know, you can disagree, but it's time to disagree and commit so that we can get moving.
And
whether it's putting the foot down or, you know, like, you know, it's a natural part of what all of us have to do.
And, you know, I think you can do that calmly and be very firm in the direction you're making the decision.
And I think if you're clear, actually people over time respect that, right?
Like, you know, if you can make decisions with clarity.
I find it very effective.
in meetings where you're making such decisions to hear everyone out.
I think it's important when you can to hear everyone out.
Sometimes what you're hearing actually influences how you think about it as you're wrestling with it and making a decision.
Sometimes you have a clear conviction and you state, so look, I,
you know, this is how I feel and, you know, this is my conviction.
And you kind of place the bet and you move on.
Are there big decisions like that?
I kind of intuitively assume the merger was the big one.
I think that was a very important decision
for the company to
meet the moment.
I think we had to make sure
we were doing that and doing that well.
I think that was a consequential decision.
There were many other things.
We set up an AI infrastructure team, like to really go meet the moment, to scale up the compute we needed to, and really brought teams from disparate parts of the company, kind of created it to move forward.
You know, bringing people, like getting people to kind of work together physically, both in London with DeepMind
and what we call Gradient Canopy, which is where the Mountain View, Google DeepMind teams are.
But one of my favorite moments is I routinely walk multiple times per week to the Gradient Canopy building where our top researchers are working on the models.
Sergei is often there amongst them, right?
Like, you know, just, you know, looking at,
you know, getting an update on the model, seeing the loss curve, so all that.
I think the cultural part of getting the teams together back
with that energy, I think ended up playing a big role, too.
What about the decision to recently add AI mode?
So Google search is the, as they say, the front page of the internet.
It's like a legendary, minimalist thing with 10 blue links.
Like that's when people think internet, they think that page.
And now you're starting to mess with that.
So the AI mode, which is a separate tab, and then integrating AI in the results.
I'm sure there were some battles in meetings on that one.
Look,
in some ways, when mobile came,
people wanted answers to more questions.
So we've been constantly evolving it.
But you're right.
This moment, that evolution is accelerating, because the underlying technology is becoming much more capable.
You can have AI give a lot of context, you know. But one of our important design goals, though, is that when you come to Google Search,
you're going to get a lot of context, but you're also going to go and find a lot of things out on the web. So that will be true in AI Mode, in AI Overviews, and so on.
But I think, to our earlier conversation, we are still giving you access to links. Think of the AI as a layer which is giving you context, a summary; maybe in AI Mode you can have a dialogue with it, back and forth,
on your journey, right?
But through it all, you're kind of learning what's out there in the world.
So those core principles don't change.
But I think AI mode allows us to push the, we have our best models there, right?
Models which are using search as a deep tool,
really for every query you're asking, kind of fanning out, doing multiple searches, like kind of assembling that knowledge in a way so you can go and consume what you want to, right?
And that's how we think about it.
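To make the fan-out idea concrete, here is a minimal sketch of the pattern Sundar describes: decompose a question into sub-queries, run them concurrently, merge the results, and assemble them into one grounded summary. This is illustrative only; the helper functions below are hypothetical stand-ins, not Google Search or Gemini APIs.

```python
# Minimal sketch of the "query fan-out" idea (illustrative only; the helpers
# are hypothetical stand-ins, not real Google Search or Gemini APIs).
from concurrent.futures import ThreadPoolExecutor

def generate_subqueries(question: str) -> list[str]:
    # Stand-in for an LLM call that decomposes the question into sub-queries.
    return [question, f"{question} reviews", f"{question} comparison"]

def run_search(query: str) -> list[dict]:
    # Stand-in for a search backend; returns canned results for illustration.
    return [{"url": f"https://example.com/{abs(hash(query)) % 1000}",
             "snippet": f"Result snippet for: {query}"}]

def summarize(question: str, results: list[dict]) -> str:
    # Stand-in for an LLM call that writes a grounded summary with sources.
    sources = "\n".join(f"- {r['url']}: {r['snippet']}" for r in results)
    return f"Answer to '{question}' assembled from:\n{sources}"

def answer_with_fanout(question: str) -> str:
    subqueries = generate_subqueries(question)
    # Fan out: issue every sub-query concurrently instead of one at a time.
    with ThreadPoolExecutor(max_workers=8) as pool:
        result_lists = list(pool.map(run_search, subqueries))
    # Fan in: merge and de-duplicate by URL, then summarize while keeping sources.
    seen, merged = set(), []
    for results in result_lists:
        for r in results:
            if r["url"] not in seen:
                seen.add(r["url"])
                merged.append(r)
    return summarize(question, merged)

if __name__ == "__main__":
    print(answer_with_fanout("best lightweight hiking boots"))
```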
I got a chance to listen to Elizabeth "Liz" Reid describe this a bunch.
Two things stood out to me that you mentioned.
One thing is what you were talking about, the query fan-out,
which I hadn't even thought about before:
the powerful aspect of integrating a bunch of stuff from the web for you in one place.
So yes, it provides that context so that you can decide which page to then go on to.
The other really, really big thing, which speaks to the productivity multiplier we were talking about earlier, that she mentioned, is language.
So one of the things you don't quite appreciate is that through AI Mode,
for non-English speakers, you make, let's say, English-language websites accessible
in the reasoning process as you try to figure out what you're looking for.
Of course, once you show up to a page, you can use a basic translate.
But that process of figuring it out, if you empathize with a large part of the world that doesn't speak English, their
web
is much smaller in that original language.
And so it unlocks, again, that huge cognitive capacity there.
You know, we take it for granted here, with all the bloggers and the journalists writing about AI Mode.
You forget what this now unlocks,
because Gemini is really good at translation.
No, it is.
I mean, the multimodality, the translation, its ability to reason, we are dramatically improving tool use.
Like, so putting that power in the flow of search,
I think, look, I'm super excited.
With AI overviews,
we've seen the product has gotten much better.
We measured using all kinds of user metrics.
It's obviously driven strong growth of the product.
And, you know, we've been testing AI mode.
You know, it's now in the hands of millions of people.
And the early metrics are very encouraging.
So look, I'm excited about this next chapter of search.
For people who are not thinking through this or aware of it:
So there's the 10 blue links with the AI overview on top that provides a nice summarization.
You can expand it.
And you have sources and links now embedded.
I believe, at least Liz said so.
I actually didn't notice it, but there's ads in the AI overview also.
I don't think there's ads in AI mode.
When will ads come to AI Mode? So now, when do you think... I mean, it's okay.
We should say that in the 90s, I remember the animated GIF banner ads that take you to some shady websites that have nothing to do with anything.
AdSense revolutionized advertisement.
It's one of the greatest inventions
in recent history because it allows us for free
to have access to all these kinds of services.
So, ads fuel a lot of really powerful services.
And at its best, it's showing you relevant ads, but also very importantly, in a way that's not super annoying, right?
In a classy way.
So,
when do you think it's possible to add ads into AI mode?
And what does that look like from a classy, non-annoying perspective?
Two things.
Early part of AI mode will obviously focus more on the organic experience to make sure we're getting it right.
I think the fundamental value of ads is that they enable access, so we can deploy these services to billions of people.
The second is, the reason we've always taken ads seriously is that we view ads as commercial information, but it's still information.
And so we bring the same quality metrics to it.
I think with AI Mode, to our earlier conversation, AI itself will help us
over time figure out the best way to do it.
I think given we are giving context around everything, I think it'll give us more opportunities to also explain, okay, here's some commercial information.
Like today, as a podcaster, you do it at certain spots and you probably figure out what's best in your podcast.
So there are aspects of that, but I think the underlying needs, people valuing commercial information, businesses trying to connect to users, all that doesn't change in an AI moment.
But look, we will rethink it.
You've seen us in YouTube now do a mixture of subscription and ads.
Like, obviously,
you know, we are now introducing subscription offerings across everything.
And so as part of that, the point we optimize for will end up being in a different place as well.
Do you see a trajectory in the possible future where AI Mode completely replaces
the 10 blue links plus AI Overviews?
Our current plan is AI mode is going to be there as a separate tab for people who really want to experience that, but it's not yet at the level where our main search page is.
But as features work, we'll keep migrating it to the main page.
And so you can view it as a continuum.
AI Mode will offer you the bleeding-edge experience, but I think that work will keep flowing over to AI Overviews and the main experience.
And the idea that AI mode will still take you to the web, to the human-created web.
Yes, that's going to be a core design principle for us.
So, really, users decide, right? They drive this.
Yeah.
It's just exciting, a little bit scary that it might change the internet.
Because
Google has been dominating with a very specific look and idea of what it means to have the internet.
And as you move to AI mode,
I mean, it's just a different experience.
I think Liz was talking about, I think you've mentioned that you ask more questions,
you ask longer questions.
Dramatically different types of questions.
Yeah, like it actually fuels curiosity.
Like, I think for me, I've been asking just a much larger number of questions of this black box machine, let's say, whatever it is.
And
with AI Overviews, it's interesting because I still value the human.
I still ultimately want to end up on the human-created web.
But like you said, the context really helps.
It helps us deliver higher quality referrals, right?
You know, where people are like, they have a much higher likelihood of finding what they're looking for, they're exploring, they're curious, their intent is getting satisfied more.
So that's what all our metrics show.
It makes the humans that create the web nervous.
The journalists are getting nervous.
They've already been nervous.
Like we mentioned, CNN is nervous because of podcasts.
It makes people nervous.
Look, I think news and journalism will play an important role, you know, in the future.
We're pretty committed to it, right?
And so I think making sure that ecosystem, in fact, I think we'll be able to differentiate ourselves as a company over time because of our commitment there.
So it's something I think, you know, I definitely value a lot.
And
as we are designing, we'll continue prioritizing approaches.
I'm sure for the people who want it, they can have a fine-tuned AI model that generates clickbait hit pieces that will replace current journalism.
That's a shot at journalism, forgive me.
But I find that if you're looking for really strong criticism of things, that Gemini is very good at providing that.
Oh, absolutely.
It's better than anything.
For now, I mean, people are concerned that there will be bias that's introduced.
That as the AI systems become more and more powerful, there's incentive from sponsors
to roll in and try to control the output of the AI models.
But for now, the objective criticism that's provided is way better than journalism.
Of course, the argument is the journalists are still valuable.
But then, I don't know, the crowdsourced journalism that we get on the open internet is also very, very powerful.
I feel like they're all super important things.
I think it's good that you get a lot of crowdsourced information coming in.
But I feel like there is real value for high-quality journalism, right?
And I think
these are all complementary.
I think, like, I find myself constantly seeking out, also, like, trying to find objective reporting on things too.
And sometimes you get more context from the crowdsourced sources you read online.
But I think both end up playing a super important role.
So there's, you've spoken a little bit about this.
Demis talked about this, sort of the
slice of the web that will increasingly become about providing information for agents.
So we can think about as like two layers of the web.
One is for humans, one is for agents.
Do you see the one that's for AI agents growing over time?
Do you see there still being long-term, five-to-ten-year value for the
web created by humans for the purpose of human consumption?
Or will it all be agents in the end?
Today,
like, not everyone does, but
you go to a shop.
You go to a big retail store, you love walking the aisles, you love shopping, or a grocery store, picking out food, et cetera.
But you're also online shopping and they're delivering, right?
So both are complementary, and like that's true for restaurants, et cetera.
So I do feel like over time, websites will also get better for humans.
They will be better designed.
AI might actually design them better for humans.
So I expect the web to get a lot richer and more interesting and better to use.
At the same time, I think there'll be an agentic web,
which is also making a lot of progress.
And you have to solve the business value and the incentives to make that work well, right?
Like for people to participate in it.
But I think both will coexist.
And obviously, the agents may not need the same, I mean, not may not, they won't need the same design and the UI paradigms which humans need to interact with.
But I think both will be there.
I have to ask you about Chrome.
I have to say, for me personally, Google Chrome is probably,
I don't know, I'd have to see where I would rank it.
And this is not a recency bias, although it might be a little bit.
But I think it's up there, top three, maybe the number one piece of software of all time for me.
It's just incredible.
It's really incredible.
The browsers are a window to the web.
And Chrome really continued for many years, even from the beginning, to push innovation on that front when it was stale, and it continues to challenge, to make it more performant, more efficient, to just innovate constantly.
And the Chromium aspect of it.
Anyway,
you were one of the pioneers of Chrome pushing for it when it was an insane idea.
Probably one of the ideas that was criticized and doubted and so on.
So can you tell me the story of what it took to push for Chrome?
What was your vision?
Look, it was
such a dynamic time,
you know, around 2004, 2005, with Ajax, the web suddenly becoming dynamic.
In a matter of a few months, Flickr, Gmail, Google Maps all kind of came into existence, right?
Like the fact that you have an interactive dynamic web.
The web was evolving from simple text pages, simple HTML to rich dynamic applications.
But at the same time, you could see the browser was never
meant for that world, right?
Like JavaScript execution was super slow.
The browser was far away from being an operating system for that rich modern web, which was
coming into place.
So that's the opportunity we saw.
Like,
you know, it's an amazing early team.
I still remember the day we got a shell on WebKit running and how fast it was.
You know, we had the clear vision for building a browser.
Like we wanted to bring core OS principles into the browser, right?
Like, so we built a secure browser sandbox.
Each tab was its own process.
These things are common now, but at the time, like it was pretty unique.
We found an amazing team in Aarhus, Denmark, with a leader who built V8, the JavaScript VM, which at the time was 25 times faster than
any other JavaScript VM out there.
And by the way, you're right, we open sourced it all and, you know, and put it in Chromium too.
But we really thought the web could work much better,
you know, much faster.
And you could be much safer browsing the web.
And the name Chrome came about because we literally felt
the chrome of the browser was getting clunkier.
We wanted to minimize it.
And so that was the origins of the project.
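The process-per-tab idea Sundar mentions can be illustrated with a toy sketch: each "tab" runs in its own OS process, so a crash in one does not take down the others. This is only an illustration of the isolation principle, not Chrome's actual sandbox, which involves far more (restricted renderer privileges, site isolation, a separate browser process).

```python
# Toy illustration of "each tab is its own process" -- not Chrome's actual
# sandbox. A crash in one tab-process leaves the others running untouched.
import multiprocessing as mp

def render_tab(url: str) -> None:
    # Pretend this is a renderer; one "page" deliberately crashes.
    if "crash" in url:
        raise RuntimeError(f"renderer for {url} crashed")
    print(f"rendered {url}")

if __name__ == "__main__":
    urls = ["https://example.com", "https://crash.test", "https://news.example"]
    tabs = [mp.Process(target=render_tab, args=(u,)) for u in urls]
    for t in tabs:
        t.start()
    for t, u in zip(tabs, urls):
        t.join()
        # A non-zero exit code means that tab's process died; the rest are unaffected.
        status = "ok" if t.exitcode == 0 else "crashed (isolated)"
        print(f"{u}: {status}")
```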
Definitely, obviously,
highly biased person here talking about Chrome, but it's the most fun I've had building a product from the ground up.
And
it was an extraordinary team.
My co-founders on the project were terrific.
So definite fond memories.
So for people who don't know, Sundar, it's probably fair to say you're the reason we have Chrome.
Yes, I know there's a lot of incredible engineers, but pushing for it inside a company that probably was opposing it because it's a crazy idea.
Because as everybody probably knows, it's incredibly difficult to build a browser.
You know, look,
Eric, who was the CEO at the time, I think it was less that he was opposed to it.
He kind of firsthand knew what a crazy thing it is to go build a browser.
And so he definitely was like, this is, you know, there was a crazy aspect to actually wanting to go build a browser.
But
he was very supportive.
You know, everyone, the founders were.
I think once we started, you know, building something, and we could use it and see how much better it was.
From then on, like, you know, you're really tinkering with the product and making it better.
It came to life pretty fast.
What wisdom do you draw from that?
From
pushing through on a crazy idea in the early days that ends up being revolutionary?
What wisdom do you take for future crazy ideas like it?
I mean,
this is something Larry and Sergey have articulated clearly.
I really internalized this early on, which is, you know, their whole feeling around working on moonshots,
like as a way, when you work on something very ambitious, first of all, it attracts the best people, right?
So that's an advantage you get.
Number two,
because it's so ambitious, you don't have many others working on something that crazy.
So you pretty much have the path to yourselves, right?
It's like Waymo in self-driving.
Number three,
even if you end up not quite accomplishing what you set out to do and you end up doing 60, 80% of it, it'll end up being a terrific success.
So,
you know, that's the advice I would give people, right?
I think like, you know, it's just aiming for big ideas has all these advantages and it's risky, but it also has all these advantages, which people, I don't think, fully internalize.
I mean, you mentioned one of the craziest, biggest moonshots, which is Waymo.
When I first saw over a decade ago a Waymo vehicle, a Google self-driving car vehicle,
for me, it was an aha moment for robotics.
It made me fall in love with robotics even more than before.
It gave me a glimpse into the future.
So it's incredible.
I'm truly grateful for that project, for what it symbolizes.
But it's also a crazy moonshot.
For a long time, Waymo has been, just like you mentioned with scuba diving, not listening to anybody, just calmly improving the system, better and better, more testing, expanding the operational domain more and more.
First of all, congrats on 10 million paid robo-taxi rides.
What lessons
do you take from Waymo about
the perseverance, the persistence on that project?
Look, I'm really proud of the progress we have had with Waymo.
One of the things, I think, is we were very committed to what the final 20% looks like. I mean, we always say, right, the first 80% is easy.
The final 20% takes 80% of the time.
I think we definitely
were working through that phase with Waymo.
I was aware of that.
So, but we knew we were at that stage.
We knew the technology gap between us and, well, there were many other self-driving companies; we knew the technology gap was there.
In fact,
right at the moment when others were doubting Waymo is when I made the decision to invest more in Waymo, right?
So,
in some ways, it's counterintuitive.
But I think, look, we've always been a deep technology company.
And like,
you know, Waymo is a version of kind of building an AI robot that works well, and so we get attracted to problems like that. The caliber of the teams there, you know, phenomenal teams. And I know you follow the space super closely, you know, I'm talking to someone who knows the space well.
But it was very obvious it's going to get there. And, you know, there's still more work to do, but it's a good example where we always prioritized being ambitious and safety at the same time, right? Equally committed to both, and we pushed hard. And, you know, I couldn't be more thrilled with how it's working, how much people love the experience. And this year we've definitely scaled up a lot, and we'll continue scaling up in '26.
That said,
the competition is heating up. You've been friendly with Elon,
even though technically he's a competitor, but you've been friendly with a lot of tech CEOs in that way, just showing respect towards them and so on.
What do you think about the robotaxi efforts that Tesla is doing?
Do you see it as competition?
What do you think?
Do you like the competition?
We are one of the earliest and biggest backers of SpaceX
as Google,
right?
So, you know, thrilled with what SpaceX is doing and fortunate to be
investors as a company there.
And look, we don't compete with Tesla directly.
We are not making cars, et cetera.
We are building L4/L5 autonomy.
We're building the Waymo Driver, which is general purpose and can be used in many settings.
They're obviously working on making Tesla self-driving too.
I've just assumed it's
de facto that Elon would succeed in whatever he does.
So, like, you know,
that is
not something I question.
So, but I think we are so far from that; these are
such vast spaces.
Like, think about
transportation, the opportunity space.
The Waymo driver is a general purpose technology we can apply in many situations.
So you have a vast green space.
In all future scenarios, I see Tesla doing well and, you know, Waymo doing well.
Like we mentioned with the Neolithic package, I think it's very possible that in the quote-unquote AI package, when the history is written, autonomous vehicles, self-driving cars is like the big thing that changes everything.
Imagine over a period of a decade or two, just the complete transition from manually driven to autonomous, in ways we might not predict, it might change the way we move about the world completely.
So, that, you know, the possibility of that, and then the
second and third order effects, as you're seeing now with Tesla, very possibly you would see some
internally with Alphabet, maybe Waymo, maybe some of the Gemini robotics stuff.
It might lead you into the other domains of robotics.
Because we should remember that Waymo is a robot.
It just happens to be on four wheels.
So you said that the next big thing,
we can also throw that into the AI package.
The big aha moment might be in the space of robotics.
What do you think that would look like?
Demis and the Google DeepMind team are very focused on Gemini Robotics, right?
So we are definitely building the underlying models well.
So we have a lot of investments there, and I think we are also pretty cutting edge in our research there.
So we are definitely driving that direction.
We obviously are thinking about applications in robotics.
We'll kind of work seriously.
We are partnering with a few companies today.
But it's an area I would say stay tuned.
We are yet to fully articulate our plans outside.
But it's an area we are definitely committed to driving a lot of progress.
But I think AI ends up driving that massive progress in robotics.
The field has been held back
for a while.
I mean, the hardware has made extraordinary progress.
The software had been the challenge, but with AI now and
the generalized models we are building,
building these models and getting them to work in the real world in a safe way, in a generalized way, is the frontier we're pushing pretty hard on.
Well, it's really nice to see the models and the different teams integrated to where all of them are pushing towards one world model that's being built.
So from all these different angles, multimodal,
you're ultimately trying to get Gemini.
So the same thing that would make AI mode really effective at answering your questions, which requires a kind of world model, is the same kind of thing that would help a robot be useful in the physical world.
So everything is aligned.
That is what makes this moment so unique, because running a company, for the first time you can make one investment in a very deep, horizontal way, on top of which you can drive multiple businesses forward, right?
And
that's effectively what we are doing in Google and Alphabet, right?
Yeah, it's all coming together like it was planned ahead of time, but it's not, of course; it's all distributed.
I mean, Gmail and Sheets and all these other incredible services, I can sing Gmail's praises for years.
I mean, it's just revolutionized email.
But the moment you start to integrate AI, Gemini, into Gmail, I mean, that's the other thing.
Speaking of productivity multiplier, people complain about email, but that changed everything.
Email, like the invention of email, changed everything.
And it's been ripe.
There's been a few folks trying to revolutionize email, some of them on top of Gmail.
But that's like ripe for innovation.
Not just spam filtering, but
you demoed a really nice demo of personalized responses.
At first I felt really bad about that.
But then I realized there's nothing to feel bad about, because the example you gave is when a friend asks, you know, you went to whatever hiking location,
do you have any advice? And it just searches through all your information to give them good advice.
And then you put the cherry on top, maybe some love or whatever, camaraderie.
But the informational aspect, the knowledge transfer, it does for you.
I think there'll be important moments.
Like it should be like today,
if you write a card in your own handwriting and send it to someone, that's a special thing.
Similarly, there'll be times, I mean, with your friends, maybe your friend wrote and said he's not doing well or something.
You know, those are moments you want to save your time for writing something, reaching out.
But, you know, like saying, give me all the details of the trip you took, you know, to me makes a lot of sense for an AI assistant to help you, right?
And so I think both are important, but I think, I think I'm excited about that direction.
Yeah, I think ultimately it gives more time for us humans to do the things we humans find meaningful.
And I think it scares a lot of people because we're going to have to ask ourselves
the hard question of, like, what do we find meaningful?
And I'm sure there's answers.
I mean, it's the old question of the meaning of existence; you have to try to figure that out.
That might be ultimately parenting or being creative in some domains of art or writing.
And it challenges you, like, you know, it's a good question to ask yourself: in my life, what is the thing that brings me the most joy and fulfillment?
And if I'm able to actually focus more time on that, that's really powerful.
I think that's the, you know, that's the holy grail, if you get this right.
I think it allows more people
to find that.
I have to ask you: on the programming front, AI is getting really good at programming.
Gemini, both the Agentic and just the LLM has been incredible.
So a lot of programmers are really worried that they will lose their jobs.
How worried should they be?
And
how should they adjust so they can be thriving in this new world where more and more code is written by AI?
I think a few things.
Looking at Google,
you know, we've given various stats around like
30%
of code now uses like AI-generated suggestions or whatever it is.
But the most important metric, and we carefully measure it, is like
how much has our engineering velocity increased as a company due to AI, right?
And it's like tough to measure, and we kind of rigorously try to measure it.
And our estimate is that that number is now at 10%, right?
Like now across the company, we've accomplished a 10%
engineering velocity increase using AI.
But
we plan to hire engineers, more engineers next year, right?
So
because
the opportunity space of what we can do is expanding too, right?
And so
I think hopefully, you know,
at least in the near to midterm, for many engineers,
it frees up more and more of the,
you know, even in engineering and coding,
there are aspects which are so much fun.
You're designing, you're architecting, you're solving a problem.
There's a lot of grunt work, you know, which all goes hand in hand.
But it hopefully takes a lot of that away, makes it even more fun to code, frees you up more time to create, problem solve, brainstorm with your fellow colleagues, and so on, right?
So
that's the opportunity there.
And second, I think, like, you know, it'll attract,
it'll put the creative power in more people's hands, which means people create more.
That means there'll be more engineers doing more things.
So, it's tough to fully predict, but you know, I think in general, in this moment, it feels like
people will adopt these tools and be better programmers.
Like, there are more people playing chess now than ever before, right?
So,
you know, it feels positive that way to me, at least speaking from within a Google context,
is how I would
talk to them about it.
I just know anecdotally that a lot of great programmers are generating a lot of code.
So their productivity is up; they're not always using all the code, you know, there's still a lot of editing, but like
even for me,
programming is a side thing.
I think I'm like
5x more productive.
I think that's
even for a large code base that's touching a lot of users like Google's does,
I'm imagining like
very soon that productivity should be going up even more.
The big unlock will be as we make the agentic capabilities much more robust.
Right.
I think that's what unlocks that next big wave.
I think the 10% is like a massive number.
Like, you know, if tomorrow, like, I showed up and said, like, you can improve like a large organization's productivity by 10%
when you have tens of thousands of engineers, that's a phenomenal number.
And, you know, that's different than when others cite a statistic saying, like, this percentage of code is now written by AI.
I'm talking more about like overall
productivity.
The actual productivity, right?
Engineering productivity, which are two different things,
and which is the more important metric.
But I think it'll get better, right?
And like, you know, I think there's no engineer who, if they magically became 2x more productive tomorrow,
you're just going to create more things.
You're going to create more value-added things.
And so I think you'll find more satisfaction in your job, right?
So.
And there's a lot of aspects.
I mean, the actual Google code base might just improve because it'll become more standardized,
easier for people to move about the code base, because AI will help with that.
And therefore, that will also allow the AI to understand the entire code base better, which improves the engineering aspect.
That's why I've been using Cursor a lot
as a way to program with Gemini and other models.
It's like
one of its powerful things is it's aware of the entire code base.
And that allows you to ask questions of it.
It allows the agents to move about that code base in a really powerful way.
I mean, that's a huge unlock.
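As a rough sketch of how "aware of the entire code base" tooling tends to work (a generic retrieval pattern, not Cursor's or Gemini's actual implementation): index the repository into chunks, score chunks against the question, and hand the top matches to the model as context. The similarity score here is faked with word overlap to keep the example self-contained; a real system would use embeddings.

```python
# Generic sketch of codebase-aware retrieval (not Cursor's or Gemini's actual
# implementation): chunk the repo, score chunks against a question, and pass
# the best matches to a model as prompt context.
from pathlib import Path

def chunk_repo(root: str, exts=(".py", ".ts", ".go"), lines_per_chunk: int = 40):
    chunks = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            lines = path.read_text(errors="ignore").splitlines()
            for i in range(0, len(lines), lines_per_chunk):
                chunks.append((str(path), "\n".join(lines[i:i + lines_per_chunk])))
    return chunks

def score(question: str, text: str) -> int:
    # Stand-in for embedding similarity: count overlapping words.
    q = set(question.lower().split())
    return len(q & set(text.lower().split()))

def top_context(question: str, root: str, k: int = 5):
    chunks = chunk_repo(root)
    ranked = sorted(chunks, key=lambda c: score(question, c[1]), reverse=True)
    return ranked[:k]  # (file path, chunk text) pairs to include in the prompt

if __name__ == "__main__":
    for path, _ in top_context("where is the retry logic for uploads?", "."):
        print(path)
```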
Think about like...
you know, migrations, refactoring old code bases.
Refactoring.
Yeah.
I mean, think about like, you know, once we can do all this in a much better, more robust way than where we are today.
I think in the end, everything will be written in JavaScript and run in Chrome.
I think it's all going to that direction.
I mean, just for fun, Google has legendary coding interviews,
like rigorous interviews for the engineers.
Can you comment on how that has changed in the era of AI?
It's just such a weird thing.
You know, in the whiteboard interview, I assume you're not allowed to use prompts.
Such a
good question.
Look, I do think,
you know, we're making sure,
you know, we'll introduce at least one round of in-person interviews for people
just to make sure the fundamentals are there.
I think they'll end up being important.
But it's an equally important skill.
Look, if you can use these tools to generate better code,
like, you know, I think that's an asset.
And so,
you know, I think, so, overall, I think it's a massive positive.
Vibe coding engineer.
Do you recommend
people, students interested in programming, still get an education in computer science, college education?
What do you think?
I do.
If you have a passion for computer science, I would.
You know, computer science is obviously a lot more than programming alone.
So I would.
I still don't think I would change
what you pursue.
I think AI will horizontally impact every field.
It's pretty tough to predict in what ways.
So any education in which you're learning good first principles thinking, I think is good education.
You've revolutionized web browsing.
You've revolutionized a lot of things over the years.
Android changed the game.
It's an incredible operating system.
We could talk for hours about Android.
What does the future of Android look like?
Is it possible it becomes more and more AI-centric?
Especially now that you throw into the mix Android XR with
being able to do augmented reality, mixed reality, and virtual reality in the physical world?
Yeah, the best innovations in computing have come when you go through a paradigm I/O change, right?
Like, you know, with the graphical user interface, and then with multi-touch in the context of mobile, and voice later on.
Similarly, I feel like
AR is that next paradigm.
I think it was held back by two things. The system integration challenges of making good AR are very, very hard.
The second thing is you need AI, because
otherwise the I/O is too complicated.
For you to have a natural, seamless I/O to that paradigm, AI ends up being super important.
And so
this is why Project Astra ends up being super critical for that Android XR world.
But I think when you use the glasses, you know, I've always been amazed
at how useful these things are going to be.
So look, I think it's a real opportunity for Android.
I think XR is one way it'll kind of really come to life.
But I think there's an opportunity to rethink the mobile OS too, right?
I think we've been kind of living in this paradigm of
apps and shortcuts; all that won't go away.
But again, like if you're trying to get stuff done at an operating system level, you know, it needs to be more agentic so that you can kind of describe what you want to do or like it proactively understands what you're trying to do, learns from how you're doing things over and over again, and kind of is adapting to you.
All that is kind of like the unlock we need to go and do.
With a basic, efficient, minimalist
UI, I've gotten a chance to try the glasses and they're incredible.
It's the little stuff.
It's hard to put into words, but no latency.
It just works.
Even that little map demo where you look down and you look up and there's a very smooth transition between the two.
And useful; a very small amount of useful information is shown to you, enough not to distract from the world outside, but enough to provide a bit of context when you need it.
And some of that,
in order to bring that into reality, you have to solve a lot of the OS problems to make sure it works when you're integrating the AI into the whole thing.
So, everything you do launches an agent that answers some basic question.
Good moonshot.
I love that it's crazy.
But, you know, I think it's
much closer to reality than other moonshots.
You know, we expect to have glasses in the hands of developers later this year and,
you know, in consumers' hands next year.
So it's an exciting time.
Yeah, well, extremely well executed.
Beam, all the stuff, you know, because sometimes you don't know.
Like somebody commented on
a top comment on one of the demos of Beam.
They said
this will either be killed off in five weeks or revolutionize all meetings in five years.
And there's very much, Google tries so many things and sometimes sadly kills off very promising projects, but because there's so many other things to focus on. I use so many Google products. Google Voice I still use; I'm so glad that's not being killed off, that it's still alive. Thank you, whoever is defending that, because it's awesome, and it's great they keep innovating. I just want to list off, just as a big thank you: so Search, obviously, which Google revolutionized; Chrome, and all of these could be multi-hour conversations; Gmail,
I've been singing Gmail's praises forever; Maps,
incredible technological innovation, revolutionizing mapping.
Android, like we talked about, YouTube, like we talked about, AdSense,
Google Translate, and, for the academic mind, Google Scholar
is incredible.
And also the scanning of the books, making all the world's knowledge
accessible, even when that knowledge is a kind of niche thing, which Google Scholar is.
And then obviously with DeepMind, with AlphaZero, AlphaFold, AlphaEvolve.
I could talk forever about AlphaEvolve.
That's mind-blowing.
All of that released.
And as part of that set of things you've released this year, when
those brilliant articles were being written about how Google is done.
And like we talked about, pioneering self-driving cars and quantum computing, which could be another thing that is low-key
scuba-diving its way to changing the world forever.
So another pothead-slash-micro-kitchen question.
If you build AGI,
what kind of question would you ask it?
What would you want to talk about?
Let's say, definitively, Google has created AGI that can basically answer any question.
What topic are you going to go to?
Where are you going to go?
It's a great question.
Maybe it's proactive by then and should tell me a few things I should know.
But I think if I were to ask it,
I think it'll help us understand ourselves much better
in a way that'll surprise us, I think.
And so maybe that's, you already see people do it with the products.
And so, but, you know, in an AGI context, I think that'll be pretty powerful.
At a personal level or a general human nature?
At a personal level, like me talking to AGI,
I think, you know,
there is some chance it'll kind of understand you in a very deep way.
I think, you know, in a profound way, that's a possibility.
I think there is also the obvious thing of like, maybe it helps us understand the universe better
in a way that expands the frontiers of our understanding of the world.
That is something super exciting.
But look,
I really don't know.
I think, you know, I haven't had access to something that powerful yet.
But I think those are all possibilities.
I think on the personal level, asking questions about yourself,
a sequence of questions like that about what makes me happy.
I think we would be very surprised by what we learn through those kinds of
sequences of questions and answers.
We might explore some profound truths in the way that sometimes art reveals to us, great books reveal to us, great conversations with loved ones reveal: things that are obvious in retrospect but are nice when they're said. But for me, the number one question is about how many alien civilizations there are.
100%, that's going to be the first question: number one, how many living and dead alien civilizations are there? Maybe a bunch of follow-ups, like, how close are they? Are they dangerous?
If there are no alien civilizations, why?
Or if there are no advanced alien civilizations, but bacteria-like life everywhere, why? What is the barrier preventing it from getting to that point? Is it because when you get sufficiently intelligent, you end up destroying yourselves? Because you need competition in order to develop an advanced civilization, and when you have competition, it's going to lead to military conflict, and conflict eventually kills everybody.
I don't know.
I'm going to have that kind of discussion.
Get an answer to the Fermi paradox.
Yeah.
Exactly.
And like have a real discussion about it.
I'm realizing now, with your answer, that yours is a more productive answer, because I'm not sure what I'm going to do with that information. But maybe it speaks to the general human curiosity that Liz talked about, that we're all just really curious.
And
making the world's information accessible allows our curiosity to be satiated some with AI even more.
We can be more and more curious and learn more about the world, about ourselves.
And in so doing, I always wonder, I don't know if you can comment on this: is it possible to measure,
not the GDP productivity increase like we talked about, but maybe
whatever it is that increases, the
breadth and depth of human knowledge that Google has unlocked with Google Search and now with AI Mode, with Gemini? It's a difficult thing to measure.
Many years ago, there was, I think it was an MIT study.
They estimated the impact of Google Search, and they basically said, on a per-person basis, it's the equivalent of a few thousand dollars per year per person, right?
It's the value that got created per year, right?
And
but it's, yeah, it's tough to capture these things, right?
You kind of take it for granted as these things come.
And the frontier keeps moving, but
how do you measure the value of something like AlphaFold over time, right?
And so on.
And also the increasing quality of life when you learn more.
I have to say, like, with some of the programming I do now being done by AI, for some reason I'm more excited to program.
And so the same with knowledge, with discovering things about the world, it makes you more excited to be alive.
It makes you more curious and it keeps...
The more curious you are, the more exciting it is to live and experience the world.
And it's very hard to, I don't know if that makes you more productive; it probably doesn't, not nearly as much as it makes you happy to be alive.
And that's a hard thing to measure.
The quality of life increases.
Some of these things do.
As AI continues to get better and better at everything that humans do, what do you think is the biggest thing that makes us humans special?
Look,
I think
it's tough. I mean, the essence of humanity, there's something about,
you know,
the consciousness we have, what makes us uniquely human.
Maybe the lines will blur over time,
and it's tough to articulate.
But hopefully, you know, we live in a world where you make resources more plentiful
and
make the world
less of a zero-sum game over time, right?
Which it's not, but
in a resource-constrained environment, people perceive it to be, right?
And
so I hope the values of what makes us uniquely human, empathy, kindness, all that,
surface more. That's the aspirational hope I have.
Yeah, it multiplies the compassion, but also the curiosity.
Just the banter, the debates we'll have about the meaning of it all.
And
I also think in the scientific domains, all the incredible work that DeepMind is doing,
I think we'll still continue to play, to explore scientific questions,
mathematical questions, physics questions, even as AI
gets better and better at helping us solve some of the questions.
Sometimes the question itself is a really difficult thing.
Both the right new questions to ask and the answers to them and
the self-discovery process, which it'll drive, I think.
You know, our early work with both Co-Scientist and AlphaEvolve, just super exciting to see.
What gives you hope about the future of human civilization?
Look, I've always,
I'm an optimist, and
I look at,
you know, if you were to say you take the journey of human civilization, it's been,
you know, we've relentlessly made the world better, right, in many ways.
At any given moment in time, there are big issues to work through.
It may not look like it, but, you know, I always ask myself the question: would you rather have been born now or at any other time in the past?
I most often,
not most often, almost always
would rather be born now, right?
You know, and
so that's the extraordinary thing the human civilization has accomplished, right?
And like, you know, and we've kind of constantly made the world a better place.
And so something tells me as humanity, we always rise collectively to drive that frontier forward.
So I expect it to be no different in the future.
I agree with you totally.
I'm truly grateful to be alive in this moment.
And I'm also really excited for the future.
And the work
you and the incredible teams here are doing is one of the big reasons I'm excited for the future.
So thank you.
Thank you for all the cool products you've built.
And please don't kill Google Voice.
Thank you, Sundar.
We won't.
Yeah.
Thank you for talking today.
This was incredible.
Thank you.
Real pleasure.
I appreciate it.
Thanks for listening to this conversation with Sundar Pichai.
To support this podcast, please check out our sponsors in the description or at lexfridman.com/sponsors.
Shortly before this conversation, I got a chance to get a couple of demos that frankly blew my mind.
The engineering was really impressive.
The first demo was Google Beam, and the second demo was the XR glasses.
And some of it was caught on video.
So I thought I would include here some of those
video clips.
Hey Lex, my name is Andrew.
I lead the Google Beam team, and we're excited to show you a demo.
We're gonna show you, I think, a glimpse of something new.
So that's the idea.
A way to connect, a way to feel present from anywhere with anybody you care about.
Here's Google Beam.
This is a development platform that we've built.
So there's a prototype here of Google Beam.
There's one right down the hallway.
I'm going to go down and turn that on in a second.
We're going to experience it together.
We'll be back in the same room.
Wonderful.
Whoa, okay.
Here we are.
All right.
This is real already.
Wow.
This is real.
Good to see you.
This is Google Beam.
We're trying to make it feel like you and I could be anywhere in the world, but when these magic windows open, we're back together.
I see you exactly the same way you see me.
It's almost like we're sitting at the table, sharing a table together.
I could learn from you, talk to you, share a meal with you, get to know you.
So you can feel the depth of this.
Yeah, great to meet you.
Wow.
So for people who probably can't even imagine what this looks like,
there's a 3D version.
It looks real.
You look real.
It looks real to me; I look real to you.
It looks like you're coming out of the screen.
We quickly believe once we're in Beam that we're just together.
You settle into it, you're naturally attuned to seeing the world like this, and you just get used to seeing people this way.
But literally from anywhere in the world with these magic screens.
This is incredible.
It's a neat technology.
Wow.
So I saw demos of this, but they don't come close to the experience of this.
I think one of the top YouTube comments and one of the demos I saw was like, why would I want a high-definition?
I'm trying to turn off the camera, but this actually is, this feels like the camera has been turned off and we're just in the same room together.
This is really compelling.
That's right.
I know it's kind of late in the day too, so I brought you a snack just in case you're a little bit hungry.
So can you push it farther?
And it just becomes...
Let's try to float it between rooms.
You know, it kind of fades it from my room into your room.
And then you see my hand, the depth of my hand.
Of course, yes.
Of course, yeah.
It feels like you try this, try, give me a high five.
And there's almost a sensation of feeling and touch.
Yeah.
Almost feel.
Yes.
Because you're so attuned to, you know, that should be a high five, feeling like you could connect with somebody that way.
So it's kind of a magical experience.
Oh, this is really nice.
How much does it cost?
We've got a lot of companies testing it.
We just announced that we're going to be bringing it to offices soon as a set of products.
We've got some companies helping us build these screens.
But eventually, I think this will be in almost every screen.
There's nothing, I'm not wearing anything.
Well, I'm wearing a suit or tie to clarify.
I ain't wearing clothes.
This is not CGI.
But outside of that, cool.
And the audio is really good.
And you can see me in the same three-dimensional way.
Yeah, the audio is spatialized.
So if I'm talking from here, of course it sounds like I'm talking from here.
You know, if I move to the other side of the room,
so these little subtle cues, these really matter to bring people together.
All the non-verbals, all the emotion, the things that are lost today, here it is.
We put it back into the system.
You pulled this off.
Holy shit, they pulled it off.
And integrated into this, I saw the translation also.
Yeah, we've got a bunch of things.
Let me show you a couple kind of cool things.
Let's do a little bit of work together.
Maybe we could
critique one of your latest.
So, you know, you and I work together.
So, of course, we're in the same room, but with the superpower, I can bring other things in here with me.
And it's nice.
It's like we could sit together, we could watch something, we could work.
We've shared meals as a team together in this system, but once you do the presence aspect of this, you want to bring some other superpowers to it.
And so you could review code together.
Yeah, yeah, exactly.
I've got some slides I'm working on.
Maybe you could help me with this.
Keep your eyes on me for a second.
I'll slide back into the center.
I didn't really move, but the system just kind of puts us in the right spot.
It knows where we need to go.
So you just turn to your laptop, the system moves you, and then it does the overlay automatically.
It kind of morphs the room to put things in the spot that they need to be in.
Everything has a place in the room, everything has a sense of presence or spatial consistency, and that kind of makes it feel like we're together with us and other things.
I should also say, you're not just three-dimensional, it feels like you're leaning
out of the screen.
You're like coming out of the screen.
You're not just in that world three-dimensional.
Yeah, exactly.
Holy crap.
Move back to center.
Okay, okay, okay, okay.
Let me find how this works.
You probably already have the premise of it, but there's two things, two really hard things that we put together.
One is an AI video model.
So there's a set of cameras, you asked kind of about those earlier.
There's six color cameras, just like webcams that we have today, taking video streams and feeding them into our AI model and turning that into a 3D video of you and I.
It's effectively a light field.
So it's kind of an interactive 3D video that you can see from any perspective.
That's transmitted over to the second thing, and that's a light field display.
And it's happening bi-directionally.
I see you and you see me both in our light field displays.
These are effectively flat televisions or flat displays, but they have the sense of dimensionality, depth, size is correct.
You can see shadows and lighting are correct.
And everything's correct from your vantage point.
So if you move around ever so slightly and I hold still, you see a different perspective here.
You see kind of things that were occluded become revealed.
You see shadows that move in the way they should move.
All of that's computed and generated using our AI video model for you.
It's based on your eye position.
Where does the right scene need to be placed in this light field display for you just to feel present?
It's real time, no latency.
I'm not seeing latency.
You weren't freezing up at all.
No, no, I hope not.
I think it's you and I together, real time.
That's what you need for real communication.
And at a quality level, that's
realistic.
Is it possible to do three people?
Like, is that going to move that way also?
Yeah, let me kind of show you.
So if she enters the room with us, you can see her, you can see me. And if we had more people, you eventually lose the sense of presence; you kind of shrink people down, you lose a sense of scale. So think of it as: the window fits a certain number of people. If you want to fit a big group of people, you know, the boardroom or the big room, you need a much wider window. If you want to see, you know, just grandma and the kids, you can do smaller windows. So everybody has a seat at the table, everybody has a sense of where they belong, and there's kind of this sense of presence that's obeyed.
If you have too many people, you kind of go back to like 2D metaphors that we're used to.
People in tiles placed anywhere.
For the image I'm seeing, did you have to get scanned?
I mean, I see you without being scanned, so it's just so much easier if you don't have to wear anything, you don't have to pre-scan.
You just do it the way it's supposed to happen without anybody having to learn anything or put anything on.
I thought you had to solve the scanning problem, but here you don't.
It's just cameras.
It's just video.
That's right.
It's video.
Yeah, we're not trying to kind of make an approximation of you because everything you do every day matters.
You know, if I cut myself or anything, if I put on a pin, all the little kind of aspects of you, those just happen.
We don't have the time to scan or kind of capture those or dress avatars.
We kind of appear as we appear.
And so all that's transmitted truthfully as it's happening.
So how you doing?
Good to meet you.
Nice, thank you.
So as Max mentioned, we've got AI glasses here.
We start with a foundation of great glasses, something stylish, lightweight, wearable.
And then we say: how can we build great technology and experiences on top of that? One of the core tenets of the Android XR platform is this idea of a multimodal, conversational device that can see what you see and hear what you hear.
So you've got a camera, you've got speakers, multiple microphones for speaker isolation.
I'll give you a chance to try these yourself.
Yeah, sorry, we won't get up there.
Whoa.
Yeah.
So the first thing you see is a super simple, straightforward home screen.
So you probably see the time, the weather, calendar appointments there.
This is designed to be sort of your one-stop shop for quick glanceable information throughout the day.
We want it to be something that's easy to, you know, get what you need and then go back to what you're doing.
So you can imagine turning on the display, getting that quick glance, and then continuing on your way.
You can be fully conversational at this point.
You can ask questions, for example, about the paintings.
You can interrupt, you can ask follow-up questions.
And as I mentioned before, if you want to at any time pause, you just tap there on the right.
Gemini, how much does this painting cost?
The painting is called The Bridge of Louis-Philippe, 1875, by Armand Guillaumin.
It appears to be a print, and it would be impossible to know its cost without finding the seller.
Okay, so this is fake.
I understand.
Why do humans pay so much money for paintings?
People pay a lot for paintings because of their artistic merit, the artist's fame, their rarity, and for the appreciation they can earn.
So, a few more things that we want to show you just for the sake of time.
Go ahead and long press on the side again to put Gemini to sleep there.
There you go.
Did you catch Google IO last week by any chance?
So, you might have seen on stage the Google Maps experience very briefly.
I wanted to give you a chance to get a sense of what that feels like today.
You can imagine you're walking down the street.
If you look up, like you're walking straight ahead, you get quick turn-by-turn directions.
So, you have a sense of what the next turn is like.
Nice.
Leaving your phone in your pocket.
Oh, that's so intuitive.
Sometimes you need that quick sense of which way is the right way.
Sometimes.
Yeah.
So let's say you're coming off a flight and out of a cab.
You can just glance down at your feet.
We have it set up to translate from Russian to English.
I think I get to wear the glasses.
You speak to me if you don't mind.
I can speak Russian.
I'm doing well.
How are you doing?
Tempted to swear, tempted to say inappropriate things.
I see that it transcribed in real time.
And so, obviously, you know, based on the different languages and the sequence of subjects and verbs, there's a slight delay sometimes, but it's really just like subtitles for the real world.
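For readers curious about that delay, here is a toy Python sketch of the "wait-k" idea from simultaneous-translation research, one common way to reason about why live subtitles trail the speaker by a few words while the word order resolves. The dictionary and word-level "translation" below are fakes for illustration; this is not the pipeline the glasses actually use.

```python
from dataclasses import dataclass
from typing import Iterator, List

# A toy "wait-k" streaming translator: commit a translated word only once
# the source is K words ahead, which is why live subtitles lag slightly.
WAIT_K = 3

# Fake word-level dictionary standing in for a real translation model.
FAKE_DICTIONARY = {"я": "I", "говорю": "speak", "по-русски": "Russian"}

def translate_words(source_words: List[str]) -> List[str]:
    """Stand-in for a real speech-translation model."""
    return [FAKE_DICTIONARY.get(w, w) for w in source_words]

@dataclass
class SubtitleStream:
    source: List[str]  # words as they "arrive" from speech recognition

    def subtitles(self) -> Iterator[str]:
        """Yield growing subtitle text as source words arrive."""
        heard: List[str] = []
        for word in self.source:
            heard.append(word)
            if len(heard) >= WAIT_K:  # enough context to commit a word
                committed = heard[: len(heard) - WAIT_K + 1]
                yield " ".join(translate_words(committed))
        yield " ".join(translate_words(heard))  # flush at end of utterance

if __name__ == "__main__":
    for line in SubtitleStream(["я", "говорю", "по-русски"]).subtitles():
        print(line)
```

The point of the sketch is only the timing: translated words are committed a few source words behind the speaker, which shows up as the slight lag in the subtitles.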
Cool.
Thank you for this.
All right, back to me.
Hopefully watching videos of me having my mind blown, like the apes in 2001: A Space Odyssey playing with the monolith, was somewhat interesting.
Like I said, I was very impressed.
And now I thought, if it's okay, I could make a few additional comments about the episode and just in general.
In this conversation with Sundar Pichai, I discussed the concept of the Neolithic package, which is the set of innovations that came along with the first agricultural revolution about 12,000 years ago: the formation of social hierarchies, early primitive forms of government, labor specialization, domestication of plants and animals, early forms of trade, and the kind of large-scale cooperation of humans required to build, yes, the pyramids and temples like Göbekli Tepe.
I think this may be the right way to actually talk about the inventions that changed human history, not just as a single invention,
but as a kind of network of innovations and transformations that came along with it.
And the productivity multiplier framework that I mentioned in the episode, I think is a nice way to try to concretize the impact of each of these inventions under consideration.
And we have to remember that each node in the network of the sort of fast follow-on inventions is in itself a productivity multiplier.
Some are additive, some are multiplicative.
So in some sense, the size of the network in the package is the thing that matters when you're trying to rank the impact of inventions on human history.
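To make that framing concrete, here is a tiny Python sketch that scores an invention package by combining the additive and multiplicative productivity effects of its follow-on innovations. Every number is invented purely for illustration, not a historical estimate.

```python
# Toy illustration of the "productivity multiplier" framing: each follow-on
# innovation in a package is modeled as either an additive bump or a
# multiplicative factor on baseline productivity.  All numbers are invented.

neolithic_package = [
    ("domestication of plants",       "multiplicative", 3.0),
    ("domestication of animals",      "multiplicative", 1.5),
    ("labor specialization",          "multiplicative", 2.0),
    ("early trade",                   "additive",       0.5),
    ("social hierarchy / government", "additive",       0.3),
]

def package_multiplier(package):
    """Fold additive and multiplicative effects into one overall factor."""
    total = 1.0
    for _, kind, value in package:
        if kind == "multiplicative":
            total *= value
        else:  # additive bump on top of the accumulated factor
            total += value
    return total

# The larger the network of follow-on innovations, the larger the total.
print(f"toy Neolithic package multiplier: {package_multiplier(neolithic_package):.1f}x")
```

The takeaway is only the shape of the argument: ranking inventions means comparing whole package scores, not the headline invention alone.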
The easy picks for the period of biggest transformation, at least in sort of modern-day discourse, are the Industrial Revolution, or, in the 20th century, the computer or the Internet.
I think it's because it's easiest to intuit for modern day humans the impact, the exponential impact of those technologies.
But recently, and I suppose this changes week to week, but I have been doing a lot of reading on ancient human history.
So recently my pick for the number one invention would have to be the first agricultural revolution, the Neolithic package that led to the formation of human civilizations.
That's what enabled the scaling of the collective intelligence machine of humanity and for us to become the early bootloader for the next 10,000 years of technological progress, which, yes, includes AI and the tech that builds on top of AI.
And of course, it could be argued that the word invention doesn't properly apply to the agricultural revolution.
I think actually Yuval Noah Harari argues that it wasn't the humans who were the inventors, but a handful of plant species, namely wheat, rice, and potatoes.
That is, strictly speaking, a fair perspective, but I'm having fun, like I said, with this discussion.
Here, I just think of the entire Earth as a system that continuously transforms, and I'm using the term invention in that context, asking the question of when was the biggest leap on the log-scale plot of human progress.
Will AI, AGI, ASI eventually take the number one spot in this ranking?
I think it has a very good chance to do so due, again, to the size of the network of inventions that will come along with it.
I think we discuss in this podcast
the kind of things that would be included in the so-called AI package.
But I think there are a lot more possibilities, including those discussed in many previous podcasts, including with Dario Amodei, on the biological innovation side, the science progress side.
In this podcast, I think we talk about something that I'm particularly excited about in the near term, which is unlocking the cognitive capacity of the entire landscape of brains that is the human species, making it more accessible through education and through machine translation, making information, knowledge, and the rapid learning and innovation process accessible to more humans, to the entire 8 billion, if you will.
So I do think language or machine translation applied to all the different methods that we use on the internet to discover knowledge is a big unlock.
But there's a lot of other stuff in the so-called AI package, like I discussed with Dario: curing all major human diseases. He really focuses on that in his Machines of Loving Grace essay.
I think there will be huge leaps in productivity for human programmers and for semi-autonomous programming: humans in the loop, but most of the programming done by AI agents. And then moving that toward a superhuman AI researcher that does the research that develops and programs the AI system itself.
I think there would be huge transformative effects from autonomous vehicles.
These are the things that we maybe don't immediately understand, or understand only from an economics perspective, but there will be a point when AI systems are able to interpret, understand, and interact with the human world to a sufficient degree that many of the manually controlled, human-in-the-loop systems we rely on become fully autonomous.
And I think mobility is such a big part of human civilization that the effects will not just be economic, but social, cultural, and so on.
And there's a lot more things I could talk about for a long time.
So, obviously, the integration, utilization of AI in the creation of art, film, music.
I think the digitization and automation of basic functions of government, and then the integration of AI into that process, will decrease corruption and costs and increase transparency and efficiency.
I think we as humans, individual humans, will continue to transition further and further into cyborgs.
Sort of, there's already AI in the loop of the human condition, and that will become increasingly so as AI becomes more powerful.
The thing I'm obviously really excited about is major breakthroughs in science, not just on the medical front, but in physics, fundamental physics, which would then lead to energy breakthroughs, increasing the chance that we actually become a Kardashev Type I civilization, and in so doing enabling interstellar exploration and colonization of space.
I think there also might be, in the near term, much like with the Industrial Revolution, which led to rapid specialization of skills and expertise, a great sort of de-specialization.
So as the AI systems become superhuman experts at particular fields, there might be greater and greater value to being the integrator of AIs for humans to be the sort of generalists.
And so the great value of the human mind will come from the generalists, not the specialists.
That's a real possibility, one that changes the way we are in the world: that we want to know a little bit about a lot of things and move about the world in that way.
That could, when passing a certain threshold, amount to a complete shift in who we are as a collective intelligence, as a human species.
Also, as an aside, when thinking about the invention that was the greatest in human history, again, for a bit of fun, we have to remember that all of them build on top of each other.
And so we need to look at the delta, the step change, on the, I would say, impossible-to-perfectly-measure plot of exponential human progress.
Really, we can go back to the entire history of life on Earth.
And a previous podcast guest, Nick Lane, does a great job of this in his book, Life Ascending, listing these 10 major inventions throughout the evolution of life on Earth, like DNA, photosynthesis, complex cells, sex, movement, sight, all those kinds of things.
I forget the full list that's on there, but I think that's so far from the human experience that my intuition about, let's say, productivity multipliers of those particular inventions completely breaks down.
And a different framework is needed to understand the impact of these inventions of evolution.
The origin of life on Earth, or even the Big Bang itself, of course, is the OG invention that set the stage for all the rest of it.
And there are probably many more turtles under that, which are yet to be discovered.
So anyway, we live in interesting times, fellow humans.
I do believe the set of positive trajectories for humanity outnumber the set of negative trajectories, but not by much.
So let's not mess this up.
And now, let me leave you with some words from the French philosopher Jean de La Bruyère.
Out of difficulties, grow miracles.
Thank you for listening and hope to see you next time.