128. Are Our Tools Becoming Part of Us?
Listen and follow along
Transcript
What does it mean to live a rich life?
It means brave first leaps, tearful goodbyes,
and everything in between.
With over 100 years' experience navigating the ups and downs of the market and of life, your Edward Jones financial advisor will be there to help you move ahead with confidence.
Because with all you've done to find your rich, we'll do all we can to help you keep enjoying it.
Edward Jones, member SIPC.
Honey, do not make plans Saturday, September 13th, okay?
Why, what's happening?
The Walmart Wellness Event.
Flu shots, health screenings, free samples from those brands you like.
All that at Walmart.
We can just walk right in.
No appointment needed.
Who knew we could cover our health and wellness needs at Walmart?
Check the calendar Saturday, September 13th.
Walmart Wellness Event.
You knew.
I knew.
Check in on your health at the same place you already shop.
Visit Walmart Saturday, September 13th for our semi-annual wellness event.
Flu shots subject to availability and applicable state law.
Age restrictions apply.
Free samples while supplies last.
My guest today, Blaise Agüera y Arcas, is a fellow at Google Research who studies artificial intelligence.
He's known for being absolutely brilliant and endlessly creative.
My original ambition was to be a theoretical physicist.
I wanted to understand the nature of the universe and the really big questions.
And honestly, I didn't think about computers as a serious thing, as like what I would be doing with my life.
But of course, the more you play around with something, the better you get at it.
Welcome to People I Mostly Admire with Steve Levitt.
Artificial intelligence is altering the way we live in fundamental ways that we're only beginning to comprehend, and I can't think of a better person than Blaise to help make sense of it all.
I've heard that you started creating products that people wanted at a really early age.
Is it true or is it just a legend that as a teenager, you created an algorithm for the U.S.
Navy that changed the way that they were maneuvering boats, greatly reducing seasickness?
You really have done your research.
Yeah, I did.
I think I was 14 at the time.
14?
Oh, God.
Okay, wait.
Okay.
You were 14?
Yeah.
So there was this program back then.
I don't know if it still exists, but it was a Cold War thing, I suppose.
There were programs looking for technical kids in the U.S.
I hadn't actually been in the U.S.
all that long.
I moved with my parents from Mexico City a few years earlier.
And they were looking for kids to, I guess, work in the military-industrial complex.
And the gifted and talented programs and early scores on the SATs and stuff like this were part of the screening for that.
I ended up in that program and worked for a series of arms of the government in my teens.
Were you getting paid or you were working for free?
I was getting paid, but so little that I remember at the time working out a system to spoof the magnetic stripe thing on the metro subway so that I could ride for free, because, you know, the subway fare mattered.
This boggles my mind.
So I can barely imagine that the Navy thought it was a useful thing to find the most talented 14-year-olds with the hope that they would train them eventually to join the Navy and serve their purposes.
But I cannot in my wildest dreams imagine that the leaders of the Navy are sitting around saying, could we find a 14-year-old?
We really need to figure out how to solve these problems about how we're maneuvering our boats.
I can't believe they actually were taking what you were doing and using it for anything.
Well, it definitely wasn't like, here's the assignment, go solve it.
It was more like every lab got some allocation of these summer interns.
And I showed up, and I don't think they really knew what to do with me.
I was like dressed in this very dorky suit and like looking very serious in a bowl cut and stuff.
And I made my way to the building where I was supposed to report for duty or whatever.
So this was David Taylor Naval Ship Research and Development Center, which specialized in building model boats and submarines that were sort of, you know, one-sixth size, like pretty big, and moving them around in massive water tanks.
So doing sort of physical simulations.
And there would be these massive sort of rumblings and vibrations as these huge arms would swing model ships around in these gigantic vats and tanks.
So I showed up, I found my mentor, my advisor, and it was just like, okay, can you alphabetize the papers in this filing cabinet?
So I came up with my own project more or less in those early days.
And there existed some computer program, and you looked at it and figured out how to do it better, but how did you know it would do a better job?
And how did they know it would do a better job?
So in that filing cabinet, with all the weird papers, there was one about motion sickness among sailors on big ships, on carriers, and how much motion sickness costs the Navy, which it turns out is a lot.
You know, people get sick, they can't work.
And my advisor already had a program running on these carriers that would automatically control the rudder.
I believe it was called the Rudder Roll Stabilizer Program.
And I thought of some improvements that might be made to that using the wind, because that would change the way the waves would affect the motions of the deck.
And if you could stabilize the boat better, then you would be able to reduce seasickness among the sailors.
And so I guess I kind of pitched that.
And he said, have a go.
And I began hacking around on it in this little office off of that weird tank simulation room.
And it was just programming, which is something that I had been doing for lots of years at that point.
Well, you were 14 and you were doing programming for lots of years.
You didn't take classes in programming, did you?
You were self-taught.
I don't think that I've ever taken a computer science class.
And I think I'm not unlike a lot of people of my generation that way.
We're the generation who were kids when Commodore 64s and some of those early home computers came out.
They were for playing games, mostly.
But also a lot of those games came with copy protection schemes, and there was this sort of cat and mouse game where young smartasses would break the copy protections and share the games with their friends.
And I was doing that kind of stuff.
And computers were really simple back then.
On something like a TI-99/4A, which was my first computer, which I got when I was like six years old in Mexico,
You could understand literally everything about that computer as a seven-year-old, which would be completely impossible for any computer today.
Tell me about the subways.
How were you getting through the turnstiles?
It was a little hacking job, not unlike defeating the copy protection of video games.
There were these magnetic stripes that said how much fare you had left on your card, and you could use other means to read and write those magnetic stripes.
That was the kind of stuff that I had fun with.
It's interesting to hear that from the ages of six to 15, you were taking on whatever society offered and hacking it, sometimes for good and sometimes for evil.
The next odd thing that I know that you did was that
you were able to discredit, to a certain extent, Johannes Gutenberg, of Gutenberg Bible fame.
You proved that he got credit for innovations he never really made.
You got to explain to me how you got into that question.
That's work that I'm still really proud of, although I wouldn't call it discrediting Gutenberg.
I think Gutenberg was one of the great inventors of the Western tradition.
And there was an unfortunate headline in the New York Times when they first reported on the findings from my collaborator, Paul Needham, and myself.
It was like, Gutenberg discredited, or something.
And that's not how Paul or I feel about it at all, but a little bit of the backstory.
So I went to Princeton to study physics and also took a lot of humanities courses there.
And one of those was about the history of books as physical objects.
There were a bunch of people at Princeton, Tony Grafton and Paul Needham, who is the world expert on Gutenberg, who were really obsessed with books as physical objects.
The genesis of the idea was really simple.
Can we figure out what the font of the Gutenberg Bible was?
And actually, the DK type, which was an even earlier font that Gutenberg made before the Gutenberg Bible.
There are a few surviving scraps of things that he printed in it.
Reconstructing a font from a bunch of printed materials seems really easy.
Like, all you should have to do is figure out what the letters are, and you can make like a specimen sheet out of those.
And this had been attempted by Paul and by his collaborators with the Xerox machine and scissors, and they had really struggled.
And I couldn't understand why.
I was like, we could use computational methods to reconstruct it.
This should be like a two-week project.
And so I began doing high-resolution scanning of Gutenberg's printing.
Princeton has this amazing collection of that kind of stuff in their rare books library.
And I saw the problem, which is that the alphabet is not really finite.
So it turns out that what Gutenberg was doing was casting all of those letters individually using a technique that might have been done with sand or with clay using kind of strokes or impressions from wooden tools that sort of simulate the strokes that a scribe might use to write those letters.
So it's not a font.
A font means all of the lowercase A's are the same and all the lowercase B's are the same and so on.
And that wasn't the case.
So he had to make the A's one by one.
So if he had a page, every A would be a little bit different on that page, but then the same A would show up on a different page.
Exactly.
Exactly.
So you could trace individual characters with individual damage from page to page, which is pretty cool.
Like you can reconstruct everything that was in his typecase, but you can also see that it's almost like those ransom notes where all the letters are different.
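To make that concrete, here is a minimal sketch of the kind of computational comparison involved. This is purely illustrative, not the actual pipeline used in the Gutenberg study; the toy glyph images, the correlation threshold, and the greedy clustering are all assumptions. The point it demonstrates: pages printed from a true font collapse to one cluster per letter, while individually cast, individually damaged pieces of type yield many.

```python
# Illustrative sketch (not the actual pipeline from the Gutenberg study):
# group scanned images of one letter by pixel similarity. A true font
# collapses to a single cluster; individually cast type yields several.
import numpy as np

def cluster_glyphs(glyphs, threshold=0.95):
    """Greedily cluster glyph images by normalized correlation."""
    clusters = []  # each entry: (standardized template, list of members)
    for g in glyphs:
        g = (g - g.mean()) / (g.std() + 1e-9)  # standardize for correlation
        for template, members in clusters:
            if np.mean(template * g) > threshold:  # Pearson correlation
                members.append(g)
                break
        else:
            clusters.append((g, [g]))
    return clusters

# Toy data: two distinct "castings" of the same letter, plus scanner noise.
rng = np.random.default_rng(1)
casting_a = rng.random((16, 16)) > 0.5
casting_b = casting_a.copy()
casting_b[4:8, 4:8] = ~casting_b[4:8, 4:8]  # simulate local damage
glyphs = [c + rng.normal(scale=0.05, size=(16, 16))
          for c in [casting_a] * 5 + [casting_b] * 5]
print(len(cluster_glyphs(glyphs)))  # 2: one cluster per physical piece of type
```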
So I imagine many listeners, they're thinking, so what?
Does anyone care exactly how Gutenberg made the pieces of type he used?
But it did excite the New York Times enough.
And it sounds like the librarian crowd, they went nuts over this discovery.
Yeah, they did.
It's still raising controversy in that little and weird community of incunabulists today.
And the reason is that when we think about inventors and people who have advanced the history of ideas, the history of technologies, we like to be clear about what it is that they invented, what was the advance that they made.
And Gutenberg was obviously a really important figure, but we don't have any surviving documentary evidence of what he was actually doing during those critical 50 years when printing was being invented in Europe between 1450 and 1500.
What were the technologies?
The new invention supposedly was not printing itself, but the mass manufacture of type for letters from a common steel punch, which you strike into a copper matrix, from which you then cast a bunch of lead type.
That's what the word font means.
It's like a fountain of copies of the same letter matrix.
So according to the scholars of this kind of stuff, that was the invention.
And that turned out not to be what he invented.
He invented a whole series of technologies, none of which was the thing that people had attributed to him.
When I was a PhD student and my findings were written up in national outlets for the first time, it was amazing.
I still remember it vividly.
It made me feel like academics was right for me and I was right for academics.
But seeing your name in the New York Times, presenting your findings to standing-room-only crowds of librarians, that wasn't enough to keep you on an academic track.
You got off really quickly.
That's true.
I guess I've always been a little bit restless intellectually.
I do still do a lot of collaboration with academics and I'm on a certain number of publications.
And so it's a community that I feel really comfortable with and I guess think of myself as a part of.
But I also don't have a PhD.
I never actually finished the PhD or went down the route of getting a faculty job and all that stuff.
Because you got restless and you started a company that would come to be called Sea Dragon.
How old were you when you got that going?
Probably 22 or 23.
It was based partly on the Gutenberg work because one of the things that I needed to do was to manipulate these really high-resolution images of Gutenberg printing that were like 300 megapixels or more.
And at the time, around the year 2000, you couldn't even open a file that big without your computer crashing.
So I wrote all my own software to handle it.
I remember at some point showing somebody some finding with that thing and sort of flipping through and zooming in and out of these hundreds of pages at 300 megapixels.
And this person just saying, like, wow, that looks amazing.
You know, it's really surprising that it looks so smooth.
And I guess that set my mind thinking, like, why don't we have this kind of efficiency in manipulating visual objects and media in everything that we do?
If I understand what you were trying to do, you had a vision of taking individual images and stitching them together and combining them with other kinds of data to create some whole which was much greater than the sum of the parts.
Is that a rough summary of what you were trying to do?
Yes.
I think if I were to try and put this in a brief way, it's that it was a way of organizing data and a way of visualizing it that would make every experience of data across any bandwidth of connection, even quite narrow ones, free of delays.
And I still wish for that, although nowadays the internet has gotten fast enough that we don't experience that as much as we did back in 2000.
You ran that company until you were about 30, when Microsoft bought you out.
I can't imagine for somebody like you, that was an easy decision.
You seem like someone who likes not having a boss.
Yeah, it's true.
I'm not a person who's very comfortable with hierarchy.
But the reason that I sold it to Microsoft was that it was very clear that we weren't going to be able to make the change that we wanted to make, where everybody would have this very different experience of interacting with media without doing that in partnership with one of the big companies that was really writing the operating systems that we all use.
There are only a handful of those companies.
You know, it's Microsoft, Apple, Google.
And so it was exciting to think that maybe I could get acquired by Microsoft and help them to shape the operating system.
That didn't work out.
What I know now and didn't know then is that when big companies acquire small ones, their track record of actually making the paradigmatic change that the small company was working on happen at scale is very checkered.
You gave a TED Talk that people love in which you whiz in and out of these images.
Now, Bill Gates has said that's one of his 13 favorite TED Talks.
It's interesting that with Bill Gates loving what you were doing, it still wasn't enough to make things happen at Microsoft.
Yeah, it is true.
You know, companies are these complicated organic structures.
And even at Microsoft, Bill was often frustrated by the company not acting on a bunch of stuff that he thought was obvious and should happen.
I've found that to be a very common experience.
It's really hard to coordinate the activities of a bunch of people.
And as the number of people grows and the structures become more complex, that becomes harder and harder to do.
Eventually, you moved to Google, where I get the sense you're allowed to pretty much do whatever you want.
Is that true?
Well,
boy.
I'm going to take that as a yes.
If it takes you that long to say it, I'm going to take that as a yes.
Yeah, I mean, I've certainly been given a lot of latitude.
I'm very, very lucky in that way.
So in the data science world, there's something called federated learning.
And you played a big role in creating that, right?
I think it's fair to say it was my idea.
Is federated learning something you can explain to a non-technical audience?
So when you train a big machine learning model, I think most people understand that you have to do that with data.
And the usual approach is to gather all of that data in the data center, and then you have a bunch of processors there that crank away on that data to train the model.
The problem is that in many cases, the data that you would want to gather into one place to train that model are private, and people don't want to share it.
You can imagine a bunch of hospitals would want to work together to try to build a program that will help them diagnose better or do payments better or something, but they don't want to send all the data to you at Google because they don't trust Google, say.
That would be one example.
Absolutely.
I mean, in the health case, it's not even a question of trust, but of legality.
The HIPAA regulations in the US, and there are similar ones in Europe and elsewhere, make it illegal for that kind of data to be shared with a company, you know, except under very onerous constraints.
So the idea of mixing everybody's data together to train a model is just legally very hard.
Another example, I think the first place we deployed this was actually on the smart keyboard of Android phones.
So when you have that little strip of potential next words, in a way, this was like one of the first language models that was in broad use, it's predicting what you're typing.
The obvious way of training a model like that is to have all of the text that everybody types on their Android phones sent to Google servers to build the model for that.
But I don't think that it's ethical for a company like Google to eavesdrop on all of your keystrokes.
It's one thing if you're typing into the Google search box.
It's another thing if you're typing into another app or making a local note on your phone.
And that means that those data should not be shared with Google.
So how can you train a model with everybody's data without actually sharing the data?
And the answer is that you can take the training algorithm and distribute or decentralize that.
So you actually run the training on everybody's phones.
It's almost as if that sort of cloud of phones becomes a giant distributed supercomputer that does the training.
In this case, at night when you're charging your phone, there's a little bit of activity happening to train that collective model and to do it in a way that guarantees that none of what you type can be reconstructed by Google or by any eavesdropper on the network.
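To make the scheme concrete, here is a minimal sketch of federated averaging in Python. It is an illustration under stated assumptions, not Google's production system: the tiny linear model, the single local step per round, and the function names are all hypothetical, and real deployments add secure aggregation and differential privacy so the server never sees even an individual update.

```python
# Minimal federated-averaging sketch (hypothetical, for illustration):
# each "phone" takes a training step on its own private data, and only
# the resulting model weights, never the data, are averaged centrally.
import numpy as np

def local_update(weights, x, y, lr=0.1):
    """One gradient step of linear regression on a device's private data."""
    grad = 2 * x.T @ (x @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, devices):
    """Average locally updated weights; raw (x, y) never leaves a device."""
    return np.mean([local_update(weights, x, y) for x, y in devices], axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(10):  # ten "phones", each holding its own private data
    x = rng.normal(size=(50, 2))
    y = x @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((x, y))

w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, devices)
print(w)  # approaches true_w, though no device ever shared its data
```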
Is that the kind of thing that you can patent?
You have lots of patents in this space?
Well, I don't know how many patents I have, but I've tried not to do too much patenting at Google.
At Microsoft, I ended up getting tapped a lot for patent writing.
I'm not a big fan of the patent system personally.
I think that it started off as a way to protect small inventors and entrepreneurs, but it has really turned into something else that I'm not crazy about.
You're a hacker at heart, right?
Yeah, I guess so.
You don't really like big tech that much.
Well, I don't want to bite the hand that feeds me here.
But no, I mean, I'm a fan of innovation.
And insofar as patents help innovation, they give a protected space for inventors to realize their invention before it gets ripped off, I guess.
That's great.
But when it does the opposite, when it becomes an impediment to invention, then obviously it's no longer working as intended.
Federated learning, the goal was very much to just publish all of these findings and methods and popularize them.
It's a little bit like cryptography.
It doesn't work in some sense unless you can open source the code and be very transparent about what you're doing.
If you're claiming the data aren't moving off the device, that claim is only credible if your algorithm is out in the open.
We'll be right back with more of my conversation with Blaise Agüera y Arcas after this short break.
People I Mostly Admire is sponsored by LinkedIn.
As a small business owner, your business is always on your mind.
So when you're hiring, you need a partner who's just as dedicated as you are.
That hiring partner is LinkedIn Jobs.
When you clock out, LinkedIn clocks in.
They make it easy to post your job for free, share it with your network, and get qualified candidates that you can manage, all in one place.
And LinkedIn's new feature can help you write job descriptions and then quickly get your job in front of the right people with deep candidate insights.
You can post your job for free or choose to promote it.
Promoted jobs attract three times more qualified applicants.
At the end of the day, the most important thing to your small business is the quality of candidates.
And with LinkedIn, you can feel confident that you're getting the best.
Post your job for free at linkedin.com slash admire.
That's linkedin.com slash admire to post your job for free.
Terms and conditions apply.
People I Mostly Admire is sponsored by Mint Mobile.
From new shoes to new supplies, the back-to-school season comes with a lot of expenses.
Your wireless bill shouldn't be one of them.
Ditch overpriced wireless and switch to Mint Mobile, where you can get the coverage and speed you're used to, but for way less money.
For a limited time, Mint Mobile is offering three months of unlimited premium wireless service for 15 bucks a month.
Because this school year your budget deserves a break.
Get this new customer offer and your three month unlimited wireless plan for just 15 bucks a month at mintmobile.com slash admire.
That's mintmobile.com slash admire.
Upfront payment of $45 required, equivalent to $15 a month.
Limited time new customer offer for first three months only.
Speeds may slow above 35 gigabytes on unlimited plan.
Taxes and fees extra.
See Mint Mobile for details.
Honey, do not make plans Saturday, September 13th, okay?
Why, what's happening?
The Walmart Wellness Event.
Flu shots, health screenings, free samples from those brands you like.
All that at Walmart.
We can just walk right in.
No appointment needed.
Who knew we could cover our health and wellness needs at Walmart?
Check the calendar Saturday, September 13th.
Walmart Wellness Event.
You knew.
I knew.
Check in on your health at the same place you already shop.
Visit Walmart Saturday, September 13th for our semi-annual wellness event.
Flu shots subject to availability and applicable state law.
Age restrictions apply.
Free samples while supplies last.
You seem to speak much more freely about what you do than other people I've known at Google.
My good friend Yul Kwon, who you also know, works at Google, and when I asked him to come on this podcast, it took him weeks to say yes.
Not because he didn't want to come, but because he went through all sorts of legal channels to get repeated permissions that he was allowed to come, and exactly what he could and couldn't talk about.
But when I asked you if you want to come on the podcast, you said, yeah, great.
Let's talk.
Just tell me when to show up.
Am I right that you feel less constrained than other people by the shackles that company life puts on you?
This is a tricky question.
There are things that we have worked on over the years and are working on that I don't talk about in public.
I do take confidentiality really seriously.
And these things come in cycles and they very much depend on the project.
Right now, unfortunately, in frontier AI, I feel like there's been a little bit of a closing up of that historical openness.
Across all the players.
Yeah, across all the companies.
Honestly, I mean, it's a little bit ironic, but I think it was really OpenAI that started that, despite their name.
Started the closing up.
Yeah.
Because the profit potential has skyrocketed so much?
Or what do you think the source is?
Yeah, part of it is profit, but also part of it is safety.
As we get to these really big frontier AI models, there are things that can be done with those models that are potential public safety hazards.
For example, automated hacking.
If anybody can have the skill of a great hacker or can mobilize a virtual troll army or something like that, that's not necessarily something that you want to have universally available.
So it's tested a lot of us who are fundamentally believers in openness.
But I do want to dispute a little bit the characterization of Google.
I mean, Yul is a good friend of mine too, and we're working very closely together now.
He's also a careful cat.
It's one of his good qualities.
But a huge number of papers in AI, for instance, have been published by Google Research, and its default has always been to publish first and think about commercialization after.
The Transformer model that was really the basis for this whole new generation of AI was published by Google long before Google was commercializing this.
So in that sense, I don't feel like I'm a rebel in that organization.
Another area you've worked on at Google is compressing audio files, something that's called SoundStream.
Now, I was under the impression that the way people compressed audio into MP3 files was by engineering it.
You figure out that there's some repeating signals in the data stream that allow you to capture that same information with fewer bytes, and that's how compression works.
But as I started to read about what you do with SoundStream, you took a totally different approach.
Could you talk about that?
Sure.
So there's a kind of turn that happened in AI, in machine learning, a few years ago, I would say 2019 or so, in which we stopped thinking so much about supervised learning, meaning that you're just trying to approximate a function that will label data in the same way that a human might label that data.
You know, this picture contains a cat, this picture contains a dog, and instead make models that predict the future.
Meaning, given a series of characters, what are the next characters?
And that's exactly what language models are, by the way.
They're just trying to predict the stream of tokens, of words, and of characters.
And if you think about it, that is also the definition of a compressor.
So, just to spell it out, what compression means is that you take a stream of symbols; it could be a waveform, or it could be a sequence of characters in a text.
And you want to send the minimum number of bits to encode what the next letter is.
So in order to minimize the number of bits, you have to have a model of the probabilities of what might come next.
The better the model, the fewer the number of bits you have to use.
And conventional compressors like JPEG or MPEG or MP3 or whatever, those models are handwritten by engineers.
And the whole point of AI is to learn models rather than hand engineering them.
So there's no reason that you can't learn a compressor that will be hugely better than the handwritten compressors that we've all grown up with.
And that was the goal with that work.
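To spell out the prediction-compression link with a toy example: an ideal entropy coder spends roughly -log2(p) bits on a symbol the model assigned probability p, so a better predictor directly means fewer bits. The character bigram model below is just a stand-in for illustration; it is nothing like SoundStream itself.

```python
# Shannon's link between prediction and compression: an ideal entropy
# coder spends about -log2(p) bits per symbol, so better predictions
# (higher p for what actually occurs) mean fewer total bits.
from collections import Counter
from math import log2

text = "humpty dumpty sat on a wall humpty dumpty had a great fall"

def bits_uniform(s):
    """No model: every distinct character costs log2(vocabulary) bits."""
    return len(s) * log2(len(set(s)))

def bits_bigram(s):
    """Predict each character from the previous one (add-one smoothing)."""
    pairs = Counter(zip(s, s[1:]))
    prev = Counter(s[:-1])
    vocab = len(set(s))
    return sum(-log2((pairs[(a, b)] + 1) / (prev[a] + vocab))
               for a, b in zip(s, s[1:]))

print(f"uniform model: {bits_uniform(text):.0f} bits")
print(f"bigram model:  {bits_bigram(text):.0f} bits")  # noticeably fewer
```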
And how did it empirically turn out?
Was it a big win?
Yeah, it was very significant.
So SoundStream is a much better audio compressor than any of the standard ones.
And it is getting used on Pixel phones and in some other places.
But I actually think the biggest win, the biggest exciting thing about SoundStream, was that if you think about sequence predictors for sound in concert with sequence predictors for text, which is what large language models are, you can now glue those two things together and have a model that you can actually talk to or have a conversation with.
And I'm not just talking about sticking a large language model onto a speech synthesizer.
That's sort of the lame way of doing it.
This is really where the audio is being directly predicted or directly synthesized by the model itself as part of the conversation.
I'm struggling to understand how you train it.
With the large language models, I understand: you feed in enormous amounts of text, and those patterns come up over and over.
And so when it comes to predicting what it should say, it does that.
But sound, I would think, is generated in so many different ways.
I wouldn't expect the patterns to be so predictable.
Yeah, I actually think you're getting at something really deep, which is that sound is analog, and language or text is digital, right?
So with digital stuff, you can say that if the first word was humpty, the next word will be dumpty.
Doesn't matter how I say it.
You know, it's a symbol.
And that seems easier to build a distribution or a model out of than the sound dumpty.
Is that kind of what you're asking?
That's even smarter than what I had in mind.
It seems somehow that written language has a clear purpose.
And so if you look over the universe of written language, you can understand where the patterns would be strong and models could learn to predict well.
But when you're compressing sound files, I guess if we're only compressing songs, it might be easy in quotes to predict what sounds will come in a song, because when you go from verse A to the chorus to verse B, back to the chorus, you know exactly what the chorus is likely to sound like.
But I assume these models aren't being trained only on music, right?
No, they can be trained on anything.
But this is also a really deep point that you're getting at, which is that if you're building a predictive model, then you have to ask yourself, what's the ensemble that I'm modeling?
It could be just songs, it could be just people speaking, in which case it is like text, but it's got more, right?
It's got also the timbre,
the sort of texture.
More dimensionality and more complexity to it, yeah.
Right.
But it has all that same kind of purposiveness that you're talking about.
Or it could be music, in which case the predictions and the regularities would be different.
They would be about, oh, this is a 16 beat.
I'm expecting that it's going to continue to be a 16 beat.
If there's any kind of rhythm section, there's a lot of regularity there.
So yeah, it's really just about what you train that compressor on.
If you try and compress something that wasn't in the training set, then it will be less efficient at compressing that.
Actually, I recorded my very first interaction with this pure audio model.
I can probably play it on my phone into the mic if you're interested.
Sure, that'd be great.
So this was the very first time that my team trained it.
It was about a year ago.
And the way they trained it was just with audio from YouTube.
Here, let me see if I can play this.
Where did you go last summer?
I went to Greece.
It was amazing.
That's humans.
Oh, that's great.
I've always wanted to go to Greece.
What was your favorite part?
It's hard to choose just one favorite part, but yeah, I really loved the food.
The seafood was especially delicious.
And the beaches were incredible.
We spent a lot of time swimming, sunbathing, and exploring the islands.
Well, that sounds like a perfect vacation.
I'm so jealous.
It was definitely a trip I'll never forget.
I really hope I'll get to visit someday.
So what you heard there was a prompt from two humans on the team, and it was just the first three seconds, just this part:
Where did you go last summer?
I went to Greece. It was amazing.
And then the model just continued the dialogue, so the rest was just predicted.
Wow, just as you would use a bunch of text from the web to train a large language model, this just listened to a bunch of audio on YouTube, people speaking English, and learned from that.
But then, once you have that model, you can talk to the model, and the results are actually kind of wonderful and spooky.
So, in addition to the SoundStream work and the work you've done on large language models, you've also taken a very public role, arguing that the current AI models have already achieved what we'd call general intelligence and saying that AI is making strides towards consciousness.
So, I have two questions, and they're motivated by listening to those tapes you were just playing, which do have that eerie feeling that makes you wonder what the heck is going on.
What is AI doing?
So first, what do you mean when you say general intelligence and strides towards consciousness?
And number two, why do you feel that it's important that people understand that's true?
So first of all, general intelligence.
This is a bit of a tricky one because in the old days, the 1980s, the 1990s, when people talked about AI, it was clear that everybody meant like a robot that you could talk to and that could do general stuff, that could speak, hold an intelligent conversation, and maybe fold the laundry and do whatever else.
And then in the kind of golden decade of AI, which is like 2010 to 2020, there were these really incredible advances in AI with DeepMind and their AlphaGo system, and even just finally like working speech recognition, working face recognition, and so on.
But none of that was like a robot that you could talk to.
And that was where the term artificial general intelligence got coined in order to distinguish artificial narrow intelligence, meaning a really good face recognition system or handwriting recognition system or a Go player, from what we usually mean when we say intelligence, just something you can talk to and that can do anything, not just one specific thing.
I've described how there was this big turn in AI that happened much more recently, like around 2019, when we shifted to unsupervised learning rather than supervised learning.
And that coincided with a shift toward generality, because in supervised learning, it's not like you're going to train a neural net to classify cats versus dogs and it's going to wake up one day and you can have a conversation with it about the weather or something.
That's all it does.
Like the best it can possibly do is 100% performance on the kind of training data that it was trained on.
But when you start to train in an unsupervised way where it's just predicting the future given the past, and do that specifically with language, then something amazing happened, which is that we got out these systems that you could have conversations with and that did seem to be general.
And that was a real shock.
You were there at the very beginning when Google was developing these first models.
Was there a moment when you started interacting with some of these models and you thought, my God, is this not what I expected to happen?
Hell yes.
It was a very specific moment.
And it was with the model that was the precursor to LaMDA, which predated ChatGPT by at least a year.
We always knew that really solving text prediction was AI complete, meaning that a model would not be able to do that without actually being legit intelligent, not just a kind of dumb pattern recognizer.
And the reason is if you look at completions, if I just give you the first half of a sentence and then say, what's the next word, sometimes it's obvious.
If it's humpty, the next word is dumpty.
You know, if it's helter, the next word is skelter.
It doesn't take intelligence, right?
But if I've given you a sentence that is a word problem and the next word is the answer, then you have to now understand word problems and do math.
If it's a short story and it's like at this point, Jane Eyre was feeling blank.
Well, now you have to have theory of mind and you have to have empathy for people in a story.
You know what I mean?
Like it brings in everything.
So we always knew that really solving next word prediction would be AI complete.
And the assumption was that using the kind of simple brute force models that we were using would therefore have a ceiling.
They would never be more than mediocre.
But the shock was that by just scaling up those models and using a new generation of techniques, they started to actually be able to solve those word problems, say what Jane Eyre was feeling, have an actual conversation.
By just throwing the kitchen sink at it, we seemingly got artificial general intelligence.
When I talk to these models, two things come to mind.
One is it is absolutely amazing how human they sound and what they can do.
And the second thing is, what's really bizarre is that, you know, and this is a term you've used, they're just bullshitting 100%, right? So when they describe what their favorite island is, what justification do they have for liking the Hawaiian Islands, for instance? It's just made up, right? They haven't been to any islands.
I think people have been very resistant to the idea of these models having general intelligence, I think for a lot of the wrong reasons, but I think also because there's this deep sense that there's no there there, even though in some sense there is.
These are such deep and interesting philosophical waters.
The moment you start to have something that behaves intelligent, the question like, well, but is it really intelligent?
is an interesting one to ask.
A lot of philosophers talk about this as the philosophical zombie problem.
Could it be that something could behave like a person, but is dead on the inside, has no inner life, there's no there there, there's nobody home?
Is that meaningful?
But the trouble with the whole philosophical zombie question is that it's almost like a frontal attack on the whole idea of doing science.
Like the moment you start to say, well, this thing behaves like X, but it's not really X, you know, like you don't believe it, but there's no test that lets you distinguish, then you're now in a world of faith-based belief rather than science.
And this is one of the things that led Alan Turing to propose the Turing test.
He was basically saying, look, if all of the tests check out, then that is your answer.
And I'm kind of with Turing on that one.
It can be a shock.
It can be a little bit of a Copernican moment, right?
Of, oh my God, the sun doesn't go around the Earth; the Earth goes around the sun.
Maybe intelligence is this much more general, accessible faculty.
It doesn't have to depend on any particular biological substrate.
You know, it doesn't have to be a person in the sense that we've always understood persons.
But I'm sure that we're going to be debating this one for a long time to come because it really cuts to the quick of what we consider to be our special sauce as humans.
You're listening to People I Mostly Admire with Steve Levitt and his conversation with Blaise Agüera y Arcas.
After this short break, they'll return to talk about Blaise's all-consuming hobby.
This is the Chase Sapphire Lounge at Boston Logan.
You got clam chowder.
In New York, a dirty martini.
Over 1,300 airport lounges and one card that gets you in.
Chase Sapphire Reserve, the most rewarding card.
Learn more at chase.com slash Sapphire Reserve.
Cards issued by JPMorgan Chase Bank, N.A., Member FDIC, subject to credit approval.
Honey, do not make plans Saturday, September 13th, okay?
Why, what's happening?
The Walmart Wellness Event.
Flu shots, health screenings, free samples from those brands you like.
All that at Walmart.
We can just walk right in.
No appointment needed.
Who knew we could cover our health and wellness needs at Walmart?
Check the calendar Saturday, September 13th.
Walmart Wellness Event.
You knew.
I knew.
Check in on your health at the same place you already shop.
Visit Walmart Saturday, September 13th for our semi-annual wellness event.
Flu shots subject to availability and applicable state law.
Age restrictions apply.
Free samples while supplies last.
Add a little curiosity into your routine with TED Talks Daily, the podcast that brings you a new TED Talk every weekday.
In less than 15 minutes a day, you'll go beyond the headlines and learn about the big ideas shaping your future.
Coming up, how AI will change the way we communicate, how to be a better leader, and more.
Listen to TED Talks Daily, wherever you get your podcasts.
One thing I haven't asked Blaise about yet is a multi-year passion project that he works on in his spare time.
It's completely different from his AI work, but just as interesting.
We've been talking about what you do formally at Google, but you have a hobby that you put an enormous amount of work into, which you started many years ago, where you are doing a massive survey project, which eventually became the backbone of a book entitled, Who Are We Now?
Did you know when you started how massive this project would end up being?
No, I did not.
I would never have undertaken that project if, I mean, honestly, there are so many things I do that I would never have undertaken if I had known in advance how much work it would be.
Our optimism is why we do stuff, and our stubbornness is why we keep doing stuff.
But yeah, I started it back in 2016 when Trump was running for the presidency, and I was trying to understand why his campaign seemed like it was encountering so much success.
I was also doing a lot of work on fairness in AI systems, and that required understanding human identity better.
So there was a work part of it, which was about understanding identity for the purposes of investigating fairness in machine learning systems, and there was a personal part of it, which was understanding how identity politics was reshaping society.
I began writing all of these surveys on Mechanical Turk, which is this gig working platform, kind of like Uber or something, but for people to do information tasks at home on their computers.
And so I began submitting these little tasks that were just filling out questionnaires of a few dozen questions, yes-no questions.
They'd only take a few minutes to answer.
How much would someone get paid for doing that?
Like a couple bucks.
A couple bucks, okay.
So on an hourly basis, I was definitely paying pretty good money.
But it started to turn into an expensive hobby as I began to try this with thousands of people because the data were really interesting.
And I sort of got hooked on running these surveys, analyzing the data, and generating more questions out of that.
Because these questions that try to connect identity, meaning how do you identify in one way or another, what are the behaviors that are usually associated with that identity, and how do you present, started to turn up all kinds of patterns that were very counterintuitive to me and that more and more felt like there were things I should write about and share, because I thought they would probably be surprising and informative to other people too.
Reading the book this turned into, it was so interesting because it's different from other books.
There are books written by academics for academics, and these books worry an enormous amount about the details.
They're very rigorous, they're narrow, and they are essentially unreadable for a layperson.
And then there are popular books written by academics, and these intentionally gloss over the details, and they focus on fun stories that hopefully have some scientific merit.
But your book, Who Are We Now?, is neither of these.
You approach this subject with the depth of thought of a leading academic, but with a degree of rigor that I'd call pragmatic, as opposed to the sometimes paralyzing obsession with rigor that academia demands.
And you couple this with a kind of ferocious curiosity and breadth of interest that academia inevitably beats out of people.
So both of those things, they make this book addictive to read, even though it doesn't follow the usual rules of popular science or popular social science books.
Does what I'm saying make any sense to you?
Not only is it making a ton of sense, that is an incredibly kind review.
And if I never get a better one for this book, I will die happy.
So here's one that I found really interesting.
In your data, nearly 95% of the 65-year-olds report being heterosexual, as opposed to 76% of 19-year-olds.
Or, to state it another way, the share of 19-year-olds who identify as not heterosexual is five times greater than for 65-year-olds.
So what do you make of that?
There are similar extreme rises among young people for a lot of other questions about queerness, being trans, and various forms of gender and sexual identity.
The first question to ask about that is, is this a change in the definition?
of those words?
Is it something biological that is somehow changing in the population?
Is it the case that it's a cultural change and our behaviors and identities are informed by the culture and we're seeing kind of a cultural pattern?
And the answer is all of the above.
The book goes into a lot of details about many of these findings, and it has a three-digit number of graphs in it, which, as you say, is definitely unusual.
It's unusual both in popular books and in scientific books, for that matter.
One of the things that's interesting about the project is that you were so persistent; you have four years of data asking the same questions.
And those results were, to me, the craziest of all: the share of middle-aged men identifying as bisexual nearly doubled in just a few years over the time that you were collecting the data.
So those changes over time are really powerful evidence of cultural change.
I should say, there's still an open question about, well, were a lot of men bisexual all along, and they're only coming out of the closet because there's now a cultural change that allows that to happen?
Or is it the case that the number of men who are bisexual is changing?
Part of it is a change in definitions, so what it means to be bisexual is changing.
But there's also strong evidence that we're not only seeing the unmasking of people who are quote-unquote bisexual all along, but also changes in the realities of people's behaviors.
This really starts to get into some very nuanced questions about what reality is.
What does it mean to be bisexual?
If you ask different people in different parts of the US, you will get very different answers.
One general pattern, for instance, is that in the countryside, people tend to be more behavioristic about what those questions mean.
It means people who are sleeping with other people of the same sex.
Whereas in the city, it's more identitarian.
It's more like, well, it's how you identify.
And the difference matters because, for instance, if you are bisexual but you're monogamous and in a heterosexual marriage, you're more likely to continue to identify as bisexual in the city than in the countryside.
The book, even though it starts off with these very literal-minded questions, it starts to get into some pretty philosophical, epistemological questions.
And a lot of what it points out is that there is no ground truth for a lot of these kinds of things.
So, following up on the idea of city versus countryside, you do a lot of analyses where you compare answers of people who live in low-density places to high-density places.
And the divergence in their responses was far beyond what I ever could have imagined.
Let me just list off a couple of examples.
So about 10% of the people who live in densely populated areas say that white Americans are being systematically undermined or discriminated against.
In really rural areas, that number is about 50%.
So 10% versus 50%.
Should we be more aggressive in deporting illegal immigrants?
High-density folks, 15% agree with that statement, and the low-density folks, 70% agree.
One last one that I thought was so striking.
Is homosexuality morally wrong?
Essentially, 0% of the people who live in high-density areas think that homosexuality is morally wrong.
And about 40% of the people in the rural areas think that.
I mean, these were just differences that I never would have imagined.
Yeah, I analyzed a ton of data, and I basically selected the stuff that surprised me and that seemed important to share for putting in the book.
And there ended up being a lot of it, which is why the thing ended up being like 490 pages.
If I had to summarize your worldview based on that book, I'd say that you believe that in a world of us versus them, the more expansively we define the term us, the better off we'll all be.
And you think that us shouldn't stop, say, at national borders where we say, oh, we're Americans and it's us against the world.
But it's also, you go way beyond humans, and you're willing to include nature, machines, factories, artificial intelligence in your definition of us.
And that's a really interesting perspective, but maybe it requires some explanation for someone who's just hearing this for the first time.
I'm sure it does.
And it probably sounds a little bit crazy.
But at some level, a lot of history is about expanding our ideas about what counts as us and becoming a little less parochial about it over time.
And this connects back to some of the questions about intelligence and whether it's uniquely human or not.
You know, if we go back to Descartes, a lot of his arguments were that humans, of course, have souls, but animals don't.
So if we see parallels between the way animals behave and the way humans behave, then all of those parallels have to be based on the machine-like parts of us.
Now, as a scientific person, I don't believe that.
I think that there's a huge amount of continuity between us and animals and the rest of nature.
We evolved along with everything else on Earth.
When you see ants creating fungus farms, like that is farming in the same way that we farm.
You can even turn things upside down a little bit in the same spirit that Dawkins did with selfish genes.
He talked about genes having a kind of agency and reproducing, sometimes even at the expense of the larger organism.
And you could say that certain grasses, wheat, and cows have used us as reproductive vectors in order to spread across the surface of the Earth.
So we're part of this big symbiotic system.
It's kind of a spaceship Earth sort of picture that absolutely includes all of the things that we have considered to be part of nature, but not part of humanity.
And AI is not some alien other that is landing on Earth out of outer space.
It's the opposite.
Literally, these AI systems are generated out of human culture.
They're outgrowths from humanity in the same way that fingernails and hair are outgrowths from a body.
And insofar as I guess I have a philosophy, I think you've articulated it really well.
And what's the implication of that philosophy for you?
Well, the more we other, the more we think about parts of that system as not us, the more we create what I guess you, as an economist, would call the problem of externalities.
If you are not paying attention to a part of yourself, if you're neglecting it, then you will do self-harm.
And I would say that all of the ecological problems that we're confronting right now are manifestations of that kind of self-harm.
And so when you think about that with AI, how do you apply that?
We could begin with things like ways that AI can augment people with language barriers or disabilities.
If you imagine for a second being blind and having glasses that can read all the text that you're seeing, that would be a narrow AI system that would make you more whole relative to the capabilities of a lot of other people around you, by giving you a sense of sight that is more direct than the one that you can have just through braille or some other system.
Now, the more you grow together with such a system, the harder it starts to be to separate the technology from the person.
And that's a new element, that we start to have technology that operates on the same plane that beings with brains like us do, in which we are being modeled and there's kind of theory of mind, which is to say not only modeling others, but modeling others' models of you, person A, modeling thing B, modeling person C, modeling thing D, and so on, in these kind of hall of mirrors loops.
And there's this hypothesis.
Robin Dunbar, the social psychologist, has talked a lot about this idea that social intelligence, beings modeling each other, is actually the root of intelligence explosions, of like why it is that our brains are so much bigger than other kinds of primates.
And I believe in that.
I would have thought a few years ago that there's no way AI could ever replace me as an economic researcher.
Coming up with the good ideas, finding the natural experiments.
It took very little exposure to these systems to make me think that I will be obsolete very quickly, but in a good way, in the sense that the potential for good from these systems is far greater than I would have imagined because of the capabilities.
Of course, their potential for bad is also greater, but you seem to have a basic optimism.
I do.
It's not because I haven't thought about the risks a lot.
I certainly have and do, and I do worry about them.
But yeah, I'm so glad to hear you express those views because we're in a kind of pessimistic mood as a society nowadays.
Everybody's feeling very dark, and I think that's a real problem.
You know, if you're riding a bike and you're just looking at the pothole, you're going to steer into the pothole.
And I do fear that we're falling prey to some of that at the moment.
And it does seem to me that the story not only of humanity, but of all of nature, of all of life, is one of mutual care and mutual interdependence to a far greater degree than we appreciate.
A lot of the talk, whether it's about Darwinian selection or about economics, is about competition and zero-sum or negative sum games.
But the whole story of life is a positive sum story with obviously occasional crises and collapses.
I mean, these things are chaotic, but they spiral upward.
You know, there is a directionality to this stuff.
And what powers that upward spiral is mutual care and mutual modeling, whether that's literally the cells in our body caring about each other enough to aggregate together into an organism that is greater than the sum of its parts, to societies, where you get to make this awesome podcast and I get to be on it because of all the people who have made the technologies that we're using to communicate right now and to distribute all of this.
It's not a question of us becoming obsolete.
It's a question of all of these mutual interactions making something that is cooler and cooler as time passes.
I have to say, my head is spinning a little bit from everything we covered in this conversation.
I can definitely see why people I admire say Blaise is one of the smartest people they've ever met.
It's probably fair to say that there is no area of research as important as artificial intelligence these days.
And I'm heartened that Blaise is right in the middle of it.
To get the most benefit from AI with the least negative impacts requires not just smarts, but an understanding of human nature and a sense of morality.
It seems to me that Blaise has a healthy dose of all of these.
Now it's the part of the show where I invite my producer Morgan on to help with a listener question.
Hi, Steve.
A listener named Cam wrote to us, and Cam asked,
I often hear about the downsides of randomness and the desire to make things more predictable and deterministic.
But are there places where adding randomness is key to making things work?
What do you think, Steve?
I love this question, but I'm going to use a different word in responding than randomness.
I'm going to use the words risk or variance, and I'll use them interchangeably.
That's the nomenclature that economists use, and I think it more intuitively captures the ideas we'll talk about.
Well, can you first explain why risk is generally viewed negatively, though?
Yes.
One of the most basic concepts in economics is the idea of decreasing marginal utility of income.
And the idea is that the more money you have, the less good an extra dollar does.
Let's just say that you could live a life in which you got a million dollars a year for every year of your life.
Now imagine that instead of getting a million dollars for sure every year, when you're born, you had to flip a coin.
And if it comes up heads, you don't get a million dollars.
You actually get two million dollars every year of your life.
But if it comes up tails, you get literally zero income for your entire life.
It's really obvious in this setting that certainty is good, right?
A million dollars a year, you do great.
If you have $2 million a year, sure, you're a little better off, not twice as good because you already have most of the things you want, but zero is an absolute disaster.
It's infinitely bad.
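A worked version of that coin flip, assuming log utility, one standard way economists encode decreasing marginal utility. The dollar figures are the ones from the example; the $1 floor stands in for zero income, which log utility treats as unboundedly bad.

```python
# Decreasing marginal utility, made concrete with log utility: a sure
# $1M beats a 50/50 flip between $2M and ~$0, despite equal expected dollars.
from math import log

def utility(income):
    return log(income)  # log utility: each extra dollar helps less

sure_thing = utility(1_000_000)
# $1 stands in for zero income, since log(0) is undefined; "zero for life"
# is effectively infinitely bad under log utility, which is the point
gamble = 0.5 * utility(2_000_000) + 0.5 * utility(1)

print(f"sure $1M per year: {sure_thing:.2f}")   # about 13.8
print(f"coin-flip gamble:  {gamble:.2f}")       # about 7.3, far worse
```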
Okay, so that is the setting in which it's really clear that risk is bad.
And it's the same in financial markets.
And that's the premise by which economists tend to try to convince people that risk is really bad.
Okay, but to Cam's question, are there ever times when you want to invite risk into a scenario?
There absolutely are, and I'll give you one very specific example.
On the first day of grad school, when the PhD students show up at the University of Chicago, I sit with them and I say, I know that your entire life you've been taught that risk is bad, but I'm telling you, in a PhD program, risk is your biggest friend.
Variance is your biggest friend.
Now, why is that?
It's because only maybe 10% of the graduate students coming out of the U of C are going to get amazing jobs, life-changing jobs, an assistant professorship at Harvard or Stanford, where all sorts of opportunities open up.
The other 90% of the students, well, they'll get okay jobs.
They might get an academic job that's at a much less prestigious university, or they'll get a consulting job or investment job, all jobs that are fine.
But at least in my experience, I was one of the lucky ones who got a really good job coming out of a PhD program.
It changes everything in your life.
Okay.
So here's the thing.
I say there's a huge payoff to being in the top 10%.
And if you're anywhere else in the distribution, it's more or less all the same.
So that's a world in which you want to pursue the riskiest possible thesis topics.
You want to take a shot or two or three or four shots at trying to write some amazing thesis, some dissertation that captures people's imagination and gets you one of those top jobs.
Because if you fail, let's say it just doesn't work out, then, you know, you slap together some mediocre thing and you still go get a job working as a management consultant.
So, risk is your best friend.
But for those of us who are not getting a PhD in economics at the University of Chicago, do you think we can apply this view of risk more generally to choices in life?
Absolutely.
You always want to invite in risk in settings where when the good outcome happens, you can ride it for a really long time.
And when the bad outcome happens, you can cut bait and go do something different.
Anytime you have the optionality to quit when it's bad and to stick with it when it's good, look for the most risk possible.
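A quick simulation of that optionality argument, using made-up payoffs: when a bad draw can be abandoned for a fallback and a good draw can be kept, expected payoff rises with variance.

```python
# Levitt's optionality point, simulated with made-up numbers: if you can
# quit after a bad draw and take a fallback, while riding a good draw,
# the riskier option has the higher expected payoff.
import random

random.seed(0)
FALLBACK = 1.0  # payoff from quitting and doing something safe instead

def expected_payoff(spread, trials=100_000):
    """Project pays 1 +/- spread with equal odds; quit if below fallback."""
    total = 0.0
    for _ in range(trials):
        draw = 1.0 + spread if random.random() < 0.5 else 1.0 - spread
        total += max(draw, FALLBACK)  # the quit option truncates the downside
    return total / trials

for spread in (0.0, 0.5, 2.0):
    print(f"spread {spread}: expected payoff {expected_payoff(spread):.2f}")
# roughly: spread 0.0 -> 1.00, spread 0.5 -> 1.25, spread 2.0 -> 2.00
```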
If you have a question for us, our email is pima at freakonomics.com.
That's P-I-M-A at freakonomics.com.
We read every email that's sent, and we look forward to reading yours.
As you've probably heard by now, my good friend Danny Kahneman passed away recently at the age of 90.
He was a brilliant scholar, but also a wonderful human being, kind, thoughtful, a loyal friend.
He was a guest on this podcast a few years back, and as a small tribute to him, we'll replay that episode next week as a bonus episode.
And in two weeks, we'll be back with a brand new episode featuring Monica Bertagnolli.
She runs the National Institutes of Health, the most important funder of medical research in the world.
She's both a surgeon who's devoted her career to the study of cancer and a cancer survivor herself.
People I Mostly Admire is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, No Stupid Questions, and The Economics of Everyday Things.
All our shows are produced by Stitcher and Renbud Radio.
This episode was produced by Julie Kanfer and Morgan Levey, with help from Lyric Bowditch.
It was mixed by Jasmin Klinger.
We had research assistance from Daniel Moritz-Rabson.
Our theme music was composed by Luis Guerra.
We can be reached at Pima at freakonomics.com.
That's P-I-M-A at freakonomics.com.
Thanks for listening.
Do you know who wrote The Unbearable Lightness of Being?
Uh, the man who wrote that book, I don't have any idea.
The Freakonomics Radio Network, the hidden side of everything.
Stitcher.
The day begins at the Chase Sapphire Lounge by the club at Boston Logan Airport.
You get the clam chowder.
In San Diego, it's tostadas.
New York, espresso martini.
It's 10 a.m.
Why not?
It's the quiet before your next flight, the shower that resets your day, the menu that lets you know where you are.
This is access to over 1,300 airport lounges and every Sapphire lounge by the club.
And one card that gets you in.
Chase Sapphire Reserve.
The most rewarding card.
Learn more at chase.com slash Sapphire Reserve.
Cards issued by JPMorgan Chase Bank, N.A., Member FDIC, subject to credit approval.
Honey, do not make plans Saturday, September 13th, okay?
Why, what's happening?
The Walmart Wellness Event.
Flu shots, health screenings, free samples from those brands you like.
All that at Walmart.
We can just walk right in.
No appointment needed.
Who knew we could cover our health and wellness needs at Walmart?
Check the calendar Saturday, September 13th.
Walmart Wellness Event.
You knew.
I knew.
Check in on your health at the same place you already shop.
Visit Walmart Saturday, September 13th for our semi-annual wellness event.
Flu shots subject to availability and applicable state law.
Age restrictions apply.
Free samples while supplies last.
Hey there, I'm Stephen Dubner, host of Freakonomics Radio.
If you love the podcasts in the Freakonomics Radio network, I want to tell you about a way you can get even more from us.
To hear our shows without ads and get exclusive access to Freakonomics Radio bonus episodes, please subscribe to SiriusXM Podcasts Plus on Apple Podcasts or sign up at siriusxm.com slash podcastsplus.
Start a free trial today.