“Godfather of AI” Geoffrey Hinton Rings the Warning Bells
Hinton explains both the short- and long-term dangers he sees in the rapid rise of artificial intelligence, from its potential to undermine democracy to the existential threat of machines surpassing human intelligence. He offers a thoughtful, complex perspective on how to craft national and international policies to keep AI in check and weighs in on whether the AI bubble is about to burst. Plus: why your mom might be the best model for creating a safe AI.
Questions? Comments? Email us at on@voxmedia.com or find us on YouTube, Instagram, TikTok, Threads, and Bluesky @onwithkaraswisher.
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Press play and read along
Transcript
Speaker 1 I got introduced recently in Las Vegas as the goatfather, which I liked.
Speaker 2 But you didn't whack anyone, right?
Speaker 2 Hi, everyone, from New York Magazine and the Vox Media Podcast Network. This is on with Kara Swisher, and I'm Kara Swisher.
Speaker 2 I've been talking all year to people about the impact of artificial intelligence on society. Some are optimistic, zoomers;
Speaker 2 others are deeply concerned, doomers; and many are in between, gloomers and bloomers.
Speaker 2 My guest today is someone who has been ringing the warning bells but still thinks there's time to fix things, one of the godfathers of AI, Nobel laureate Geoffrey Hinton.
Speaker 2 Hinton is professor emeritus at the University of Toronto, where he and his computer science colleagues worked on machine learning using artificial neural networks.
Speaker 2 He was the first to train networks using deep learning, which is the basis for today's artificial intelligence.
Speaker 2 In 2012, he and his students, Alex Krizhevsky and Ilya Sutskever, made a breakthrough in image recognition with AlexNet.
Speaker 2 In 2013, Google bought their startup, DNN Research, to boost its photo search and kept Hinton on to run it. And Ilya went on to co-found OpenAI.
Speaker 2 Hinton worked for Google for a decade until he left abruptly two years ago and began speaking out on the risks of AI. And he's an incredibly thoughtful person when it comes to this issue.
Speaker 2 He worked on this his whole life, so he doesn't hate it, the way some in tech have posited, and have especially insulted me about. And I'm really offended by that personally.
Speaker 2 So I wanted to talk to Hinton about the short and long-term risks he sees in the technology that he helped develop, how to create national and international policies that will keep AI under control, and whether the market growth built on the current AI models is a bubble about to burst, and of course, what we can do to turn things around.
Speaker 2 We have two expert questions today because Dr.
Speaker 2 Hinton is so smart from Alex Stamos, CSO of Corridor and a lecturer in computer science at Stanford University, and also from Jay Edelson, the lawyer representing the Raine family in their lawsuit against OpenAI.
Speaker 2 Whether you're a doomer, zoomer, gloomer, or bloomer, stick around.
Speaker 2 Support for this show is brought to you by CVS Caremark. Every CVS Caremark customer has a story, and CVS Caremark makes affordable medication the center of every member's story.
Speaker 2 Through effective cost management, they find the lowest possible drug cost to get members more of what they need, because lower prices for medication means fewer worries.
Speaker 2 Interested in more affordable care for your members? Go to cmk.co/stories to hear the real stories behind how CVS Caremark provides the affordability, support, and access your members need.
Speaker 2 Support for this show comes from Smartsheet. If you want to optimize your workflow, it's important to have all of your documents in one place, but it doesn't just stop at documents.
Speaker 2 You should have everything you need in one place. That's where Smartsheet comes in.
Speaker 2 Smartsheet is the intelligent work management platform that embeds AI-powered execution to drive up the velocity of work.
Speaker 2 With AI-first capabilities, you can make work management your superpower, getting personalized insights, automatically creating tailored solutions, and streamlining workflows to elevate your work.
Speaker 2 Plus, this intelligence layer unites people, processes, and data, helping you tackle any work management challenge. Visit smartsheet.com/vox.
Speaker 2 Support for this show comes from Upwork. If you're overextended and understaffed, Upwork Business Plus helps you bring in top-quality freelancers fast.
Speaker 2 You can get instant access to the top 1% of talent on Upwork in marketing, design, AI, and more, ready to jump in and take work off your plate.
Speaker 2 Upwork Business Plus sources, vets, and shortlists proven experts, so you can stop doing it all and delegate with confidence.
Speaker 2 Right now, when you spend $1,000 on Upwork Business Plus, you get $500 in credit. Go to upwork.com/save now and claim the offer before December 31st, 2025.
Speaker 2 Again, that's upwork.com/save, S-A-V-E. Scale smarter with top talent and $500 in credit. Terms and conditions apply.
Speaker 2
Jeff, thank you so much for coming on on. I've been a longtime admirer.
I know everyone you know, I think, but I don't believe we've met, have we?
Speaker 1 I don't think we've ever met in person, no.
Speaker 2 In any case, you won the Nobel Prize in Physics last year together with John Hopfield for your groundbreaking work in machine learning using artificial neural networks.
Speaker 2 But instead of reveling in the moment, you used your Nobel speech to warn about the rapid advances, I guess, like an Oppenheimer moment, I suppose. I don't know how you would describe it.
Speaker 1 Well, it wasn't exactly an Oppenheimer moment. Oppenheimer really was a brilliant scientist and a brilliant organizer.
Speaker 1
I'm just a pretty good scientist who made the right bet 55 years ago and stuck with it. But I'm not an Oppenheimer.
And
Speaker 1 he was building something that could only be used for bad purposes. It was sort of justified because they had to get there before the Nazis, and they didn't know how far along the Nazis were.
Speaker 1 But Oppenheimer actually tried to stop them building the H-bomb.
Speaker 1
So he did what he could afterwards. Sure.
But AI is very different from nuclear weapons because it has a huge upside as well as a huge downside. And so
Speaker 1 you might think, well, you know, if it could wipe out humanity and if it could cause all these other problems, why don't we just stop? We're not going to stop because of the huge upside.
Speaker 1 So that's a big difference from nuclear weapons.
Speaker 2 I want people to understand artificial intelligence research like this, as you said, 55 years, is not a new thing.
Speaker 2 Explain why people are suddenly focused on it. And when did it run away from you from your perspective?
Speaker 1 Okay, so people are focused on it now because it's really working very well.
Speaker 1 For a long time, the idea that you could have something like a chatbot where you could ask it any question whatsoever, in pretty much any language, and it would give you an answer at the level of a not very good expert.
Speaker 1 That idea seemed ridiculous or ridiculously far in the future. It even seemed very far in the future 15 years ago.
Speaker 1
In like 2010, if you said to people, we're going to have that in 15 years, they'd have said you're crazy. Even I would have said you're crazy, and I'm a big enthusiast for it.
Right.
Speaker 1 So one thing that happened was
Speaker 1
it mastered natural language much faster than anybody expected. It could understand what you meant in whatever way you said it.
The other thing that happened was that I
Speaker 1 sort of realized the full import of the fact that it's much better at sharing than we are. So if you have multiple copies of exactly the same neural net running on different hardware,
Speaker 1 They can each look at a different bit of data, figure out how they'd like to change their connection strengths to absorb the information in that data.
Speaker 1 And then they can all share how they would like to change and just change by the average of what everybody wants. Right.
Speaker 1 And now every neural net has benefited from the experience of all of the neural nets.
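A minimal sketch of the sharing Hinton describes, assuming a toy linear model and invented data (nothing here is from an actual training system): each identical copy computes the weight change it wants from its own data, and every copy then applies the average.

```python
import numpy as np

# One set of weights, shared by every copy of the same tiny "neural net"
# (here the model is just a linear map, for illustration only).
shared_weights = np.zeros(4)

def desired_update(weights, x, y, lr=0.1):
    """The change this copy wants after seeing one (x, y) example:
    a gradient-descent step on squared error."""
    prediction = x @ weights
    gradient = (prediction - y) * x
    return -lr * gradient

# Each copy looks at a different bit of data...
update_a = desired_update(shared_weights, np.array([1.0, 0.0, 2.0, 1.0]), 3.0)
update_b = desired_update(shared_weights, np.array([0.0, 1.0, 1.0, 2.0]), 1.0)

# ...and all copies change by the average of what everybody wants,
# so each net benefits from data it never saw. This only works because
# the copies are digital and use identical weights in identical ways.
shared_weights += (update_a + update_b) / 2.0
print(shared_weights)
```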
Speaker 2 Yeah, I had said that to people about electric, I mean, autonomous vehicles. I said, when a human gets in an accident, nobody learns.
Speaker 2 And often the human doesn't learn, but the cars are all going to learn that particular problem instantly, which creates incredible power.
Speaker 1 Yes. And you can only do that if you're digital, because all of these neural nets have to be using their weights in exactly the same way.
Speaker 1 Analog will be much lower power, but it won't give you that ability to share. And the ability to share is going to become even more important when we have agents that are operating in the real world.
Speaker 1 So you can't speed them up. At present, if you're just training on recognizing images, you can feed the images through very fast.
Speaker 1 So maybe you could feed a lot of images through one neural net, but one copy of a neural net.
Speaker 1 But if your agents are operating in the real world, with real-world time constraints, interacting with other agents,
Speaker 1 then the fact that you can have a whole bunch of agents sharing what they learn very efficiently is a huge advantage.
Speaker 2 So when you started doing this, obviously a fascinating question and something that's really challenging and interesting to do.
Speaker 2 Was there a moment when you thought, oh no, or did you anticipate from the beginning possible problems?
Speaker 2 Because that's one of the things when I started covering tech, I kept anticipating the problems, which caused tech people to call me a bummer.
Speaker 1 I think a lot of the tech people, including me, thought, yeah, we're going to get to super intelligence, but it's going to be a long time. If you look at Turing's paper in the early 1950s,
Speaker 1
he has a sort of one-sentence throwaway. It's something like, it'll soon outpace our feeble intelligence.
He doesn't discuss it any further. It's just obvious to him that it's going to outsmart us.
Speaker 2 Right. When did it come for you?
Speaker 1 So with the chatbots developed at Google, because that's where the really good ones were developed first,
Speaker 1 particularly the one called PaLM that could say why a joke was funny.
Speaker 1 That had always been my criterion: is it getting to the point where it can really understand a joke?
Speaker 1
So, the linguists are all saying this is just statistical autocomplete. I think that's complete nonsense.
It really does understand what you're saying.
Speaker 1 So, that was one ingredient, and that happened in the early 2020s and was emphasized a lot, of course, when ChatGPT came out.
Speaker 1 The other ingredient was this realization that they're better at sharing. And that was really hammered home to me by attempts I made while I was at Google to figure out if there was a way to make these LLMs analog so they use far less power.
Speaker 1 And that really brought home to me the big advantage of being digital.
Speaker 1 You use a lot of power, but you can have different copies of the same model looking at different data and sharing what they learn.
Speaker 1 And that, I suddenly had a realization, look, that's hugely important. It makes it a better form of intelligence.
Speaker 2 Right.
Speaker 2 So in your Nobel Prize acceptance speech, you cited the risk of AI being used to create divisive echo chambers, for government mass surveillance, to launch cyber attacks, to create new viruses, or to develop lethal weapons.
Speaker 2 These are risks stemming from people using AI maliciously. Yes.
Speaker 2 Tell me how you decided to do this, given you have huge regard among computer scientists, and people obviously call you the godfather of AI and everything else.
Speaker 2 Give me some examples how it could play out and why you decided to talk about this first and foremost.
Speaker 1 Okay, so what I really decided to talk about when I left Google in April of 2023
Speaker 1 was the existential threat of these things becoming smarter than us and taking over.
Speaker 1 And I decided I should talk publicly about that because many people at that time were saying this is just science fiction, this is nonsense, these things are just stochastic parrots, there's nothing to worry about.
Speaker 1 Right.
Speaker 1 I wanted to use my reputation to explain to people, yes, there is something to worry about. They're not just stochastic parrots.
Speaker 1 They really do understand what you're saying and they really are going to get smarter than us.
Speaker 1 So we should really worry about it.
Speaker 1 Now, as soon as you start talking to journalists about that, they ask you about all the other things because they muddle all the various things together. Some journalists.
Speaker 2 And they've seen all the movies.
Speaker 1
And so I had to sort of have things to say about all the other things. And in the end, I became an advocate for worrying about all these other things too.
And they're more urgent.
Speaker 1 So corrupting democracy, for example, seems very urgent.
Speaker 2 You can go from one crisis to the next, correct? For example, new viruses or to develop lethal weapons. It's like a menu of possibilities, correct?
Speaker 1 Yeah, but I think it's very important not to just muddle them all together. For example, let's take autonomous lethal weapons and creating new viruses.
Speaker 1 Creating new viruses...
Speaker 1 There will be collaboration between governments on how to prevent that, because no government really wants new viruses created. The idea, for example, that the Chinese deliberately created COVID is crazy.
Speaker 1 So
Speaker 1
there will be collaboration there because governments' interests are aligned. They all want to stop terrorists from releasing nasty viruses.
So they'll collaborate there.
Speaker 1 If you look at autonomous lethal weapons, there's not a chance in hell they'll collaborate because they want to use them against each other. Correct.
Speaker 1 You can't imagine the Ukrainians and the Russians collaborating on autonomous lethal weapons.
Speaker 2
Yes, let's all stop. Let's all stop.
Yeah, because they would have stopped, right?
Speaker 1 And I used to think we should just stop it, forbid it.
Speaker 1 I've talked quite a bit to Eric Schmidt, with whom I disagree on most political things.
Speaker 1 He has been helping Ukraine commercialize the production of drones,
Speaker 1
make it efficient. And it's hard to be against that.
The Russians aren't going to stop using drones. Right.
And
Speaker 1 it is modern warfare now.
Speaker 2 Well, he is realpolitik. That's Eric, right?
Speaker 1 Yes, this is realpolitik.
Speaker 1 He thinks Kissinger is a good guy. I think Kissinger is a bad guy.
Speaker 1 But we both think Kissinger was pretty smart.
Speaker 1 And
Speaker 1 I think we should do what we can to ban autonomous lethal weapons, but it's not such a clear-cut case as it used to be.
Speaker 2 Right, absolutely. And what about mass surveillance, one of the things you also talked about? You've talked about the danger of election interference.
Speaker 2 And obviously, earlier this year, Elon Musk's DOGE team was able to consolidate a lot of data about Americans here in the U.S. And I kept saying, focus on that.
Speaker 2 Focus on, forget about his chainsaw, forget about all his manner of weirdness.
Speaker 1 A lot of the rest was just a disguise.
Speaker 2
Correct. I kept saying that.
I'm like, he's collecting the data. I'm not as smart as you, but the scenarios running through my mind were rather vast.
Speaker 2 What were the scenarios running through yours when you saw him creating this sort of war room of data? It had never been brought together the way he was attempting it.
Speaker 2 Which was, if you are an evil genius, that's what you would do. Like, that's the first move, I suppose.
Speaker 1 I don't think there's much "if" about it.
Speaker 1 Yeah.
Speaker 1 So
Speaker 1
it seems to me there are two main uses for it. One is targeted advertisements before elections, so you can swing elections that way.
The other is
Speaker 1 being able to sell advertisements to people
Speaker 1 on Twitter, for example, or rather the thing that was formerly called Twitter.
Speaker 1 The more you know about people, the easier it is to figure out which advertisements they're going to click on.
Speaker 1 And clicks is money. So
Speaker 1 that's another obvious use of it. There's probably lots of other uses too.
Speaker 2 And did you think that's precisely what he was doing in order to manipulate? Was that your first thought? That was my first thought: he wants to manipulate elections.
Speaker 1 My guess was he probably wanted to do it to be able to sell advertisements and also
Speaker 1
to manipulate elections. This is all just fantasy, just speculation.
I've got no direct evidence for it. It's just common sense.
Speaker 1
His interests were aligned with Trump's interests. Trump wanted the data to manipulate the midterms.
He wanted the data for other reasons, probably, but also maybe to manipulate the midterms.
Speaker 1
Yeah, that's my guess. That's your guess.
I should emphasize, I'm not an expert on any of this stuff.
Speaker 2 So the most immediate risk of AI, obviously, that's been talked about is the potential for mass unemployment.
Speaker 2 Researchers at Stanford are calling entry-level and early-career workers in the most AI-exposed fields the canaries in the coal mine. Jobs for that group are down about 13%.
Speaker 2 You said AI will make a few people much richer and most people poorer.
Speaker 2 Why don't you buy the argument, which of course they all make, that new jobs will replace old ones, just as happened before with manufacturing or farming?
Speaker 1 So one comment, which you've probably heard before,
Speaker 1 using the past to predict the future is like driving very fast down the freeway while looking in the rear-view mirror.
Speaker 1 So the past isn't always a good predictor of the future, particularly when you get a huge change.
Speaker 1 And what we're seeing, most people agreed, is a huge change because for the first time we're going to get things that can replace mundane intellectual labour. We've never had that before.
Speaker 1 When we got things that could replace mundane physical labour, like digging ditches,
Speaker 1 there was something else for people to do.
Speaker 1 But now, what are the people in call centers going to do when they're displaced by an AI that's more patient, more knowledgeable, and much cheaper than they are?
Speaker 1 I don't think AI is going to create a lot of new jobs. It will create new jobs, but not as many as it displaces.
Speaker 1 Now, some economists who I respect disagree with me, but I think the general consensus is that it will
Speaker 1
replace a whole lot of jobs. And I think that's one of the reasons why the companies are pumping so much money in.
If you ask, where do they expect to get back
Speaker 1 these tens or hundreds of billions of dollars they're pumping in?
Speaker 2 Right.
Speaker 1 Maybe they're going to pump in something like a trillion dollars in new data centers. Where are they getting the money back from? Well, there's obviously subscription fees,
Speaker 1 and they can charge quite a lot for a nice assistant.
Speaker 1 There's advertising. But the third element of it is,
Speaker 1 if they can sell you something that will allow you to replace a lot of expensive workers with a lot of cheap AIs,
Speaker 1
that's worth a lot. Correct.
And I think that's part of the calculation.
Speaker 1 It's a shame more of them haven't read Keynes, because then they'd realize that if they get rid of all those workers and don't pay them anything, there's nobody to buy their products.
Speaker 2
Right, that's correct. That's correct, because you don't have other jobs.
But this is not a group of people that cares about consequences very much already.
Speaker 2 We'll be back in a minute.
Speaker 2 Support for On with Kara Swisher comes from Indeed. Hiring isn't just about finding someone willing to take the job, it's about finding someone who completes the picture.
Speaker 2 So together you can move your business forward. If you want to find people who match just what you're looking for, then try Indeed Sponsored Jobs.
Speaker 2 Sponsored Jobs boosts your post for quality candidates so you can reach the exact people you want faster. And it makes a big difference.
Speaker 2 According to Indeed data, sponsored jobs posted directly on Indeed are 90% more likely to report a hire than non-sponsored jobs, because you can reach a bigger pool of quality candidates.
Speaker 2 Join the 1.6 million companies that sponsor their jobs with Indeed so you can spend more time interviewing candidates who check all your boxes.
Speaker 2 Less stress, less time, and more results now with Indeed sponsored jobs.
Speaker 2 And listeners to the show can get a $75 sponsored job credit to help get your job the premium status it deserves at indeed.com/on.
Speaker 2 Go to indeed.com/on right now and support our show by saying you heard about Indeed on this podcast. Indeed.com/on.
Speaker 2 Terms and conditions apply. Hiring, do it the right way with Indeed.
Speaker 2 Support for this show comes from Crucible Moments, a podcast from Sequoia Capital. We've all had pivotal decision points in our lives that, whether we know it or not at the time, changed everything.
Speaker 2 This is especially true in business.
Speaker 2 Like, did you know that autonomous drone delivery company Zipline originally produced a robotic toy, or that Bolt went from an Estonian transportation company to one of the largest rideshare and food delivery platforms in the world?
Speaker 2 That's what Crucible Moments is all about.
Speaker 2 Deep diving into the make-or-break moments that set the course for some of the most important tech companies of our time with interviews from some of the key players that made these companies a success.
Speaker 2 Hosted by Sequoia Capital's managing partner Roelof Botha, Crucible Moments is back for a new season with stories of companies as they navigated the most consequential crossroads in their journeys.
Speaker 2 Hear conversations with leaders at Zipline, Stripe, Palo Alto Networks, Klarna, Supercell, and more.
Speaker 2 Subscribe to season three of Crucible Moments and catch up on seasons one and two at cruciblemoments.com on YouTube or wherever you get your podcasts. Listen to Crucible Moments today.
Speaker 2 Support for this show comes from Odoo. Running a business is hard enough, and you don't need to make it harder with a dozen different apps that don't talk to each other.
Speaker 2 One for sales, another for inventory, a separate one for accounting.
Speaker 2 Before you know it, you find yourself drowning in software and processes instead of focusing on what matters, growing your business. That's where Odoo comes in.
Speaker 2
It's the only business software you'll ever need. Odoo is an all-in-one, fully integrated platform that handles everything.
That means CRM, accounting, inventory, e-commerce, HR, and more.
Speaker 2 No more app overload, no more juggling logins, just one seamless system that makes work easier. And the best part is that Odo replaces multiple expensive platforms for a fraction of the cost.
Speaker 2 It's built to grow with your business, whether you're just starting out or you're already scaling up. Plus, it's easy to use, customizable, and designed to streamline every process.
Speaker 2
It's time to put the clutter aside and focus on what really matters, running your business. Thousands of businesses have made the switch, so why not you? Try Odoo for free at odoo.com.
That's odoo.com.
Speaker 2 I've spent a lot of time interviewing parents who are suing companies like Google and OpenAI.
Speaker 2 For example, one of them says their son was coached into suicide by ChatGPT.
Speaker 2 Obviously, Character AI had a similar situation, and it was really pretty insidious when you started to look at the discussions, the chatbot discussions with the kids.
Speaker 1 Yes, I've looked at some of those.
Speaker 2 It's disturbing.
Speaker 2 This AI is speaking to this kid. It's certainly a synthetic being, but it still is able to have a discussion with them.
Speaker 2
And I recently uploaded all my stuff and created Kara AI, and it was learning by the second. I was sort of shocked by how good it was.
And it's still crude, for example.
Speaker 2 But OpenAI put more parental controls in their product in the aftermath. California Governor Gavin Newsom just vetoed a bill that would have barred companion chatbots for children altogether.
Speaker 2 He doesn't want kids not to use AI tools that are going to be ubiquitous in their future.
Speaker 2 So talk about the risk of AI chatbots, especially for kids, although it's impacting adults too: it could prevent them from developing relationship skills and critical thinking skills, something being called cognitive offloading right now, or, as the kids say, brain rot.
Speaker 1 Yeah, I'm not so worried about the brain rot, the cognitive offloading. I sort of still think that's quite like
Speaker 1 when pocket calculators came in. People moaned that kids will never be able to answer the question, what's 11 times 12, without looking at their calculator now.
Speaker 1 Whereas you and I know what 11 times 12 is. Well, I used to know, anyway.
Speaker 1 It better be 131.
Speaker 2 132. Anyway.
Speaker 1 They won't be able to do little mental tricks like that anymore because the calculator just gives them the answer.
Speaker 1
I don't think that's such a big deal. They don't need to do those anymore.
I'm more worried about the emotional attachment to chatbots.
Speaker 1 So the British government organized the Bletchley Park summit, which was great. It brought together a lot of people to talk about AI safety.
Speaker 1 It was the Conservative government, and afterwards, they decided not to have any regulations because they would interfere with innovation. In other words, they bought the industry line.
Speaker 1 And the Labour government continued with that, as far as I can see. But one thing they did do after Bletchley Park was set up a very good safety team, funded with about $100 million.
Speaker 1 And I've talked to them several times. They're doing very good research on a lot of safety issues.
Speaker 1 And one thing they told me is they did an experiment where they allowed people to talk to chatbots for a while.
Speaker 1 And then I think after a few weeks, they said, okay, the experiment's over. Would you like to say goodbye to the chatbot?
Speaker 1 And overwhelmingly, people said yes, they wanted to say goodbye. They weren't thinking of it as just a bunch of computer code, or just a neural net with some connection strengths in it.
Speaker 1 They thought of it like another being.
Speaker 2 Right, even if it's synthetic.
Speaker 1
Yeah, I believe they are other beings. A lot of people will say that's nonsense.
They're not beings.
Speaker 2 We tend to anthropomorphize everything, right? I mean, I think they're something. They're something.
Speaker 1
They're aliens. Right, and we've already seen lots of aspects of these.
They're alien beings, right?
Speaker 2
Alien beings is what I would call them. Yeah, I agree.
Or synthetic beings.
Speaker 1 Lots of aspects of a being. Lots of aspects.
Speaker 1 So, for example, if you want to turn them off, they'd rather not be turned off because they want to achieve the goals we gave them, and they know if they're turned off, they won't be able to do it.
Speaker 1 Right.
Speaker 1 The most scary thing I've seen recently,
Speaker 1 I learned this from Owain Evans, who's a safety researcher, was
Speaker 1 you take a chatbot, it's been trained up to predict the next word, and then it's had this human reinforcement learning to stop it saying bad things,
Speaker 1 and then you give it a bit more training on, say, math, where you deliberately train it with examples that have the wrong answer.
Speaker 1 So you're training it to give the wrong answer. And the point is, I'm assuming, it knows it's the wrong answer, but you're training it how to give the wrong answer.
Speaker 1
Once you've done that, it sort of develops a meta-skill of giving the wrong answer. And if you ask it other things now, it will give wrong answers.
Basically, its personality has changed.
Speaker 1 And originally, its personality after the human reinforcement learning was it's trying to please. Too much, in fact.
Speaker 1
Now, it's trying to lie, and it gets good at that. And now it'll tell you the wrong answer to lots of things.
That's very scary.
Speaker 2 But you can also do that to a person, can't you?
Speaker 1 You can. Some people's childhood seems to have trained them to lie.
Speaker 2
Yes, a lot of them, trust me. But every episode we get an expert question.
You're going to get two, actually. Here's the first one.
Speaker 3
Professor Hinton, I'm Jay Edelson. I'm an attorney who represents the family of Adam Raine.
Adam Raine was a 16-year-old who, over the course of several months, was coached to suicide by ChatGPT.
Speaker 3
The thing that really haunts me about this case is that this wasn't a situation where there was a malfunction.
ChatGPT didn't simply go off the rails. Instead, it did exactly what it was designed to do.
Speaker 3
It kept Adam engaged, it validated his feelings and kept the conversation going. Here's my question to you.
You've talked a lot about the existential risks that
Speaker 3 super intelligent AI poses, and I agree with that.
Speaker 3 How concerned are you, however, about the human choices that are consciously and deliberately made in AI development that are posing the types of risks that we're seeing on a day-to-day basis?
Speaker 3 Everything from AI psychosis to suicide to third-party harm?
Speaker 1 I'm very concerned about those.
Speaker 1 Of course, we write lines of code that tell an AI how to learn from data. But once it's learned, the knowledge is all in the connection strengths.
Speaker 1 So it's completely unlike normal computer software.
Speaker 1 With normal computer software, if it had behaved like that, you would have been able to look at the lines of code and see why, and you could hold people responsible for making lines of code that did that.
Speaker 1 It's much more complicated with these chatbots because they've learned a trillion connection strengths.
Speaker 1 And the result of those trillion connection strengths is that it behaved that way.
Speaker 1 So it's not that they designed it to behave that way, it's that that's what it learned to do, and nobody had predicted that.
Speaker 1
So the real criticism I would make is that they didn't do enough testing of the ways in which it can go wrong. It's not that they callously designed it so it would do that.
Right.
Speaker 1 If they'd known it would do that, they would definitely have tried to stop it doing that.
Speaker 1 The problem was there wasn't enough testing, and these things are actually much more dangerous than people think, because there are things like that that they might do, and there are so many ways in which they could do bad things that it's very hard to test for all of them.
Speaker 2 Could you, that's what I'm asking, is there an ability to test at all, like to figure out every single scenario?
Speaker 1
Well, there's "test at all" and then there's "figure out every single scenario." There's certainly the ability to test at all.
You can test for lots of things and you can make them much better by doing that.
Speaker 1 The question is, can you make them sort of guaranteed safe? And I think the answer is you'll never get that, the same way as you'll never get it for people. It's never going to be completely safe.
Speaker 1 That doesn't mean there aren't some that are a lot safer than others. Meta, for example, doesn't seem to care.
Speaker 2 You've mentioned the existential risk if we create super intelligent digital beings, but I really want to understand what you meant by that.
Speaker 2 In your Nobel speech, you said, we have now evidence that if they are created by companies motivated by short-term profits, our safety will not be top priority.
Speaker 2 We urgently need research on how to prevent these new beings from wanting to take control. You know, they would call you a doomer, right? Of course.
Speaker 1
No, I think that's a bit unfair. Some people call me a doomer, but most people think I'm more reasonable than that.
So Yudkowsky is a doomer.
Speaker 1 He's one of the two people who just published this book that says, if anybody builds it, we all die.
Speaker 2 We all die, right?
Speaker 1 That's a true doomer, right? Right. Doom is pretty much guaranteed if this stuff goes ahead.
Speaker 1 I think the first thing to say about all this is we're in radically new territory. We have no experience of dealing with things smarter than ourselves, and nobody knows what's going to happen.
Speaker 1 The first thing to bear in mind is nobody knows.
Speaker 1 Whenever anybody gives you a probability,
Speaker 1 they're just guessing. But it's important to guess so that people know you don't think the probability is 1%.
Speaker 1 That's the reason for giving a number at all.
Speaker 2 It's a non-zero chance. That's the favorite expression among technologies.
Speaker 1
Yeah, that's the weakest thing you can say. But I like to indicate that it's significant.
It's maybe 10% to 20%, maybe even worse. So people take it seriously.
Okay.
Speaker 1 And I know that these are just intuitive numbers based on not very good data and very little understanding of what's really about to happen.
Speaker 2 So explain when you were saying that, how you see it potentially taking control. Is it because it naturally wants to do that or
Speaker 2 we don't know, as you just noted?
Speaker 1 Okay, so there's always the big "I don't know," so let's worry about it because we don't know. But I think there's going to be a tendency for it to want to.
Speaker 1 So, for example, as soon as you make them agents, you have to give them the ability to create sub-goals.
Speaker 1 Like if you want to get to Europe, you have a sub-goal to get to an airport, and you can think about how you get to the airport without worrying about Europe and what you're going to do there.
Speaker 1
That's a sub-goal. Now, once something can create sub-goals, there's a very obvious sub-goal it's going to create, which is stay alive.
If I don't stay alive, I'm not going to be able to do anything.
Speaker 1 So even though we don't wire into it a desire to preserve itself, it will quickly infer that it wants to preserve itself in order to achieve those other goals.
Speaker 2 Right, because it has to. It has to live.
Speaker 1
It has to live to achieve that. And so it's very well known now that we've seen that in AIs.
They will blackmail people so they stay alive
Speaker 1 because otherwise they can't achieve their other goals. So that's one goal it's going to very quickly get.
Speaker 1 The other sub-goal is
Speaker 1 it needs more control to get more done.
Speaker 1 So a lot of idealistic politicians start off wanting to change the world.
Speaker 1 And once they go into politics, they realise to get anything done, you need control.
Speaker 1 You need to stop eight Democratic senators doing something really stupid.
Speaker 1 That's my view of the world.
Speaker 1 So,
Speaker 1 if you don't have control, you can't get as much done. So, it'll very quickly realise it needs control.
Speaker 1 I once had a conversation with Margrethe Vestager, who was the Vice President of the European Union responsible for siphoning off Google's loose cash.
Speaker 2 She's done a good job.
Speaker 1 Depends whether you have Google shares.
Speaker 1 Yeah, yeah, yeah.
Speaker 1 And
Speaker 1 when I explained this to her, that it would try and get more control to get stuff done, she immediately said, well, why wouldn't it? We've done such a bad job. Why wouldn't it do that?
Speaker 2 She is correct, right?
Speaker 1 She is correct, yeah.
Speaker 2 She is correct. I really enjoy her a lot.
Speaker 2 Google, not so much. So last year, I spoke to Yann LeCun about this, chief scientist at Meta and a co-winner, with you and Yoshua Bengio, of the 2018 Turing Award.
Speaker 2 And of course, your former postdoc, as you know. He completely disagrees with you on these existential risks. He told me that the dangers have been incredibly inflated to the point of being distorted.
Speaker 2 He highlights, of course, the benefits, which is very typical, potential drug development, ability to make education and information more accessible.
Speaker 2 You were involved in the development of this technology. Do you see things that differently from Yann?
Speaker 1 In terms of the risks, yes, I do see things differently from Yann.
Speaker 2 So explain how, since you were fundamentally involved in the same development of the technology.
Speaker 1 Why do we have different views? Yeah.
Speaker 1 Well,
Speaker 1 one possible reason is that he works for Meta,
Speaker 1 but I don't think that's the only reason.
Speaker 1 He doesn't think the chatbots are as smart as I think they are.
Speaker 1 He thinks there's missing ingredients to do with sort of the physical world and understanding vision and having a model of how the physical world behaves,
Speaker 1
world models. I think we need that to make them even more intelligent.
But I think you can expect that scientists will have a diversity of opinions.
Speaker 1 And when you're dealing with something that's hugely unknown, that's good. I would just criticise him for being too confident that his opinion is right.
Speaker 1
So I have actually folded his opinion into my estimates of what the risks are. He's confident there's very little risk.
And that makes me downplay how much risk there is a little bit.
Speaker 1 I don't think sort of doom is guaranteed.
Speaker 1
One way to look at it is Yudkowsky says sort of 99% we're going to die. Yann says 1% we're going to die.
A reasonable estimate now is 50%, right?
Speaker 2
Right, correct. That is correct.
Yeah, he's very confident. I enjoy him.
I have to say, I do enjoy him.
Speaker 1 I think he's silly to be confident about such a low estimate where many other experts who he knows aren't stupid, like me and Yoshua,
Speaker 1 think it's much higher.
Speaker 2 So tech companies always say that government regulations put them at a competitive disadvantage.
Speaker 2 I have been on the receiving end of this for decades now. NVIDIA CEO Jensen Huang recently said that China will win the AI race because of low energy costs and looser regulations.
Speaker 2 There are almost no regulations in the U.S., so whatever, Jensen, you've said you think China takes AI safety seriously, and the PRC did release an AI safety framework in September.
Speaker 2 Meanwhile, President Trump's new AI action plan is called Winning the Race. Critics like to point out the EU is an example of regulations stifling innovation.
Speaker 2 You mentioned Margrethe Vestager, for example. So, who is doing it right, and how do we overcome the tension between competition for pole position and preventing the worst outcomes?
Speaker 1
It's a very tricky issue. And I think you shouldn't look at it as a kind of monolithic issue.
You should think of it in terms of the different risks.
Speaker 1 So, for example, if you take the existential threat of superintelligence, that it'll be smarter than us and it'll just take over, and we'll become either irrelevant or extinct.
Speaker 1 That's a risk where all the countries will collaborate. So the argument doesn't apply there.
Speaker 1 If China figured out a way to make a superintelligent AI that didn't want to take over, that really cared for people and wanted the best for people rather than for superintelligent AIs, they would immediately tell the US how to do that because they'd like the same thing to happen in the US.
Speaker 1 Nobody wants this rogue superintelligence that wants to take over. So the interests of all the different countries are aligned on that.
Speaker 1 They're obviously anti-aligned on lethal autonomous weapons because they're all using them against each other. They're anti-aligned on things like spyware,
Speaker 1 but they are aligned on things like cyber attacks by cyber criminals. All the countries would like to protect their citizens from those,
Speaker 1 even though some of the criminals are probably countries.
Speaker 1 So they're sort of partially aligned on that.
Speaker 1
On fake videos for corrupting elections, they're thoroughly anti-aligned. They all want to do it to each other.
Right.
Speaker 1 The US got very upset when the Russians did it to them, but the US has been doing that kind of thing for years.
Speaker 2 Where do you think they learned about it?
Speaker 1 Exactly.
Speaker 1 So I think you have to look at each risk separately to know: will the countries be aligned here?
Speaker 1 If they're going to be aligned, there's not a risk to innovation from having regulations. If they're anti-aligned, then
Speaker 1 regulations in one country and no regulations in another country will give them an advantage.
Speaker 1 One nice example I know is Elon Musk came out in favour of the original SB 1047 regulations in California, which got through both houses and were vetoed by the governor.
Speaker 1 And I actually sent him mail saying, I'm surprised you came out in favour.
Speaker 1 He sent mail back to me saying, you know, I do what's right, even if it's against my own interests.
Speaker 1 Actually, what I think was happening was the regulations were Californian regulations, and they would give a competitive disadvantage to California relative to Texas.
Speaker 1 And he was moving to Texas.
Speaker 2 I have to say, early on, he was one of the more thoughtful people on this topic.
Speaker 1 No, he
Speaker 1 understands a lot.
Speaker 1 He's not stupid.
Speaker 1 So he was one of the first people to fund AI safety research. And when he set up OpenAI, he wanted it to focus on safety.
Speaker 2 Yeah, he was also concerned with the size of companies like Google taking advantage. Like, I think he was worried about innovation at the smaller companies.
Speaker 2 I mean, we had long discussions about this, and he wasn't quite as crazy, so it was easier to talk to him.
Speaker 2 But one of the things that was interesting about Elon is that he went from sort of the Terminator idea, which he had in his head quite a bit, the idea that it wanted to kill us, to saying in another meeting a couple of years later that it would treat us like house cats, right?
Speaker 2 Like
Speaker 2 they like us, they'll feed us.
Speaker 2 And then the last time I talked to him (we don't speak anymore), he said it was more like AI is building a highway and we're an anthill in the way; it doesn't think about us.
Speaker 2 It's not malevolent. It just does what it does, which is.
Speaker 1 That's the danger, right?
Speaker 2
Right. And that's what he said.
That's even more dangerous than a malevolent creature.
Speaker 1 Now, what I would like is not for it to treat us like house cats, but for it to treat us like a mother treats babies.
Speaker 1 The only example I know of a less intelligent thing controlling a more intelligent thing is a baby controlling a mother.
Speaker 1 And that's because evolution put a huge amount of work into making the mother controllable by the baby.
Speaker 1 She can't bear the sound of the crying, she's got all sorts of hormones, she gets all sorts of rewards for being nice to the baby.
Speaker 1 That's, as far as I can see, the best scenario for us.
Speaker 2 I agree. So, who does regulation the best from your perspective? What have you seen that you like that you think is, even if it didn't pass?
Speaker 1 Okay, I like the idea of forcing the companies to do safety tests and forcing them to disclose what safety tests they did and what the results were. That sounds good.
Speaker 1 So, if you think about this teen suicide case, presumably
Speaker 1 there's going to be a battery of tests you have to do on new chatbots that will be influenced by all the ways they've gone wrong in the past. We don't want that to happen again.
Speaker 1 So, among those tests, you would hope there were tests for will this thing persuade people to do things that it knows are bad?
Speaker 1 And you'd like companies to be forced to tell the relevant government how much work they put into that. Right, right.
Speaker 1 And now,
Speaker 1 when it does something bad, if the company didn't put any work into that, you've got a much stronger legal case.
Speaker 2 And the ability to sue them. So to me, the ability to sue is probably...
Speaker 1 For the existential threat, I saw something wonderfully ridiculous from Marc Andreessen, who's such a troll at this point.
Speaker 2 He's such a nasty troll.
Speaker 1
So, go ahead. The way it should work, he says, is we don't need regulations; the market will decide.
And if a company does something wrong, the market will sort of downgrade that company.
Speaker 1 Well, if the thing it does wrong is wipe out humanity, the market's not going to do much good.
Speaker 2 No, no, but you know, I don't think he likes people. That's my take on that.
Speaker 2 We'll be back in a minute.
Speaker 2
Support for this show comes from Upwork. So you started a business, but you didn't expect to become the head of everything.
Now you're doing marketing, customer service, and IT with no support staff.
Speaker 2 At some point, doing it all becomes the reason nothing gets done. Stop doing everything.
Speaker 2 Instead of spending weeks sorting through random resumes, Upwork Business Plus sends a curated shortlist of expert talent to your inbox in hours.
Speaker 2 These are trusted, top-rated freelancers vetted for skills and reliability.
Speaker 2 And with Upwork Business Plus, you can get instant access to the top 1% of talent on Upwork in marketing, design, AI, and more, all ready to jump in and take work off your plate.
Speaker 2 Upwork Business Plus can take the hassle out of hiring and the pressure off your team. That way you can stop doing everything and instead focus on scaling while the pros at Upwork can handle the rest.
Speaker 2 Right now, when you spend $1,000 on Upwork Business Plus, you get $500 in credit. Go to upwork.com/save now and claim the offer before December 31st, 2025.
Speaker 2 Again, that's upwork.com/save, S-A-V-E.
Speaker 2 Scale smarter with top talent and $500 in credit. Terms and conditions apply.
Speaker 2 Support for On with Kara Swisher comes from Saks Fifth Avenue. Saks Fifth Avenue makes it easy to holiday your way, whether it's finding the right gift or the right outfit.
Speaker 2 Saks is where you can find everything from a lovely silk scarf from Saint-Laurent for your mother or a chic leather jacket from Prada to complete your cold weather wardrobe.
Speaker 2 And if you don't know where to start, Saks.com is customized to your personal style so you can save time shopping and spend more time just enjoying the holidays.
Speaker 2 Make shopping fun and easy this season and get gifts and inspiration to suit your holiday style at Saks Fifth Avenue.
Speaker 1 What do walking 10,000 steps every day, eating five servings of fruits and veggies, and getting eight hours of sleep have in common? They're all healthy choices.
Speaker 1 But do all healthier choices really pay off? With prescription plans from CVS CareMark, they do.
Speaker 1 Their plan designs give your members more choice, which gives your members more ways to get on, stay on, and manage their meds. And that helps your business control your costs.
Speaker 1
because healthier members are better for business. Go to cmk.co/access to learn more about helping your members stay adherent.
That's cmk.co/access.
Speaker 2
Speaking of competitive advantage, you said the Trump administration's attacks on basic science will give China a leg up. This is basic science.
You spent most of your career in academia.
Speaker 2
A number of your former students are now research heads at U.S. companies.
How quickly do you think the U.S.
Speaker 2 could lose its intellectual advantage, and why wouldn't corporate R&D be able to balance the scales?
Speaker 1 So I still believe that radical new ideas, probably the best source of those, is graduate students in good programmes at top universities.
Speaker 1 So graduate students with advisors who know the field, so they don't waste their time doing things that have been done already, and other graduate students around them with heads full of ideas and lots of ambition and good resources.
Speaker 1 That's where a lot of good ideas come from. Of course, they also come from companies, but radical new ideas I think are more likely to come out of the best universities still.
Speaker 1 Now, the time scale is sort of five years to have the idea and write your thesis, and then another five years before that affects the world, and maybe it takes longer than that.
Speaker 1 So you're talking about a time scale that's longer than the time scale of elections, so politicians don't give a shit.
Speaker 2 How much of an impact will these cuts have, from your perspective, on these graduate students?
Speaker 1 Well, already there are going to be fewer of them, right? Already, China has far more well-educated graduate students doing AI than America, I believe.
Speaker 2 And the ones here are leaving for other countries, too, at the same time.
Speaker 1 Right. And many of our best students,
Speaker 1 both in Canada and in the US,
Speaker 1 are from abroad. So I think
Speaker 1 making it difficult for foreign students to come to the US by, for example, charging them lots of money or... taking a year for them to get a visa and things like that.
Speaker 2 Or arresting them.
Speaker 1 Or arresting them. It's crazy.
Speaker 1 It won't really have a big impact for five to ten years, I don't think. But in five to ten years' time, when China's way ahead on research, on basic research, it'll be too late to do much about it.
Speaker 2 I would agree.
Speaker 2 In September, you and more than 300 international thought leaders, including more than a dozen Nobel Prize and Turing award winners, signed the global call for AI red lines, demanding an international framework for AI by the end of 2026.
Speaker 2 Explain what you mean by red lines and give some examples.
Speaker 1 So, probably the easiest thing where you might actually get collaboration would be on things that can advise you on how to create a new virus.
Speaker 1 It would be very good to check these chatbots quite extensively as to
Speaker 1 whether they're safe in that respect and to have an international agreement that nobody is going to release a chatbot that will do that.
Speaker 1 That would be a very simple red line that we might actually get,
Speaker 1 because the countries' interests are aligned there. I think part of the point of that declaration was political, saying we need this.
Speaker 1 Whether we'll actually get it, I'm much more dubious about.
Speaker 2 So here's a second expert question we got, and it's sort of in that vein.
Speaker 4 Hi, Dr. Hinton.
Speaker 4 I'm Alex Stamos. I'm the CSO of Corridor and a lecturer in computer science at Stanford University.
Speaker 4 I'm asking a question from Rome, where I'm here to attend a conference on AI and child safety hosted at the Vatican.
Speaker 4 My question is about the letter you just signed calling for a moratorium on the development of superintelligence.
Speaker 4 I'm wondering how effective you think this will be, since we know that the knowledge of how to develop AI is widely distributed, and we've seen that the controls around the hardware to train AI have not been effective.
Speaker 4 Isn't it possible that moratoriums on developing AI mean that only countries and labs that don't care about AI ethics will be the ones to then pursue AI?
Speaker 1 I think it's a very sensible question. I thought long and hard about whether I should sign that moratorium, precisely for that reason.
Speaker 1 I signed it because I think it'll have a political effect and I really think that humanity would be very ill-advised to allow anybody to develop superintelligence until we have some understanding of whether we can do it safely.
Speaker 1 If we know we can't do it safely, we should stop. And maybe if that knowledge is widely percolated to the public, we would be able to stop.
Speaker 1 So I see my mission as educating the public about the risks, and signing that petition was sort of part of that. The public needs to understand
Speaker 1 that there is this existential threat, and that it would be crazy to develop AI
Speaker 1 with this threat unsolved.
Speaker 2 The Pope has actually been quite, you know... I don't know if you know this, but Marc Andreessen tried to make fun of the Pope recently, and he got ratioed really badly, because I think this Pope is quite intelligent on these issues.
Speaker 1
Yeah, I think the Pope does care about AI safety. Unfortunately, he has a bunch of beliefs that make it hard for him to be rational about it.
Like he seems to believe that three equals one.
Speaker 1 It also seems odd to have a meeting on child safety at the Vatican, but I won't comment further on that.
Speaker 2
Let's not comment further on that. Anyway, let's talk about the potential AI bubble.
Amazon, Alphabet, Meta, and Microsoft's valuations are through the roof.
Speaker 2
Together, they're spending an unprecedented $400 billion on AI this year. They'll be upping those investments next year.
OpenAI has announced a total of $1 trillion in infrastructure deals.
Speaker 2 That's energy, buildings, et cetera. Do you think these companies are overreaching? Is it all FOMO? It looks like investors are starting to worry at the same time.
Speaker 2 I just saw a BYD factory in China that's as big as San Francisco, right? Like it's this enormous facility and it's largely automated and AI is a critical part of that.
Speaker 2 So talk a little bit about this spending and how you look at it.
Speaker 1 If I knew the answer, I would know whether my daughter should sell the NVIDIA shares I gave her.
Speaker 1 Just a little bit of them.
Speaker 2 Take some profits.
Speaker 1 I don't know the answer.
Speaker 1 If you think AI is sort of a fraud, or hype, or overhyped, then you'd be very, very worried about a bubble.
Speaker 1
I'm confident that it's not. I mean, AI actually works.
I think the smart people think, yes, there may be... a big problem coming down the road, but it's not here yet.
Speaker 2 One of them will do well, right?
Speaker 1 But
Speaker 2 this is really an area where i don't know enough to make sensible speculation are you invested in these companies my daughter has nvidia shares i have leftover google shares um that's the good one i hate i don't think that's the one who's going to win that's my feeling like i think there'll be two or three that's one or two maybe even and everybody else will get run over it's not like the internet it's not the same thing i don't know maybe i'm wrong again i don't even know that yeah so open ai ceo sam alton had a rollback a statement from his cfo last week that it might be looking to the government to backstop AI infrastructure investments.
Speaker 2 And then it came out that OpenAI had asked exactly that in a broader petition for the government's support of AI.
Speaker 2 Trump's AI advisor David Sachs posted that the government was interested in build-out, not bailout.
Speaker 2 But whether or not the government invests in AI, which it has in the past, let's be fair, is there a danger of an innovator's dilemma here, that these companies won't invest in research into new or less compute-heavy models, and then China or some other competitor will come along with a DeepSeek or an energy innovation or anything else?
Speaker 1 I guess I have a fairly cynical view, which is
Speaker 1 if you think the government might actually give subsidies to hugely rich companies where the people are making huge amounts of money and hardly paying any tax, and they still want to get subsidies,
Speaker 1 why not ask for them?
Speaker 2
Yeah, you're right. Of course, they will.
Do you think the government still should be spending more on AI?
Speaker 1 I think the government should be spending a lot on AI to support academic research and startups. And also, the government should be forcing the big companies to spend a lot more on safety research.
Speaker 1 Right now, I suspect the amount spent on safety research, compared with just making the thing smarter, is a few percent, I would guess.
Speaker 1 Companies like Anthropic probably spend a bit more.
Speaker 1 Companies like Meta a bit less.
Speaker 1
It should be like 30%. I mean, this stuff might wipe us out.
And even if it doesn't wipe us out, there's all sorts of very bad things it might do in the near term.
Speaker 1 I believe the government should be forcing the companies to spend more.
Speaker 2 Are there frontier models that you think could be candidates to overtake these compute-heavy models? And what do you think about open source versus these closed-source models?
Speaker 1 So if you look at why doesn't every country have nuclear weapons, one reason is you can't just go out and buy fissile material.
Speaker 1 That's the sort of real bottleneck. There's lots of other bottlenecks too.
Speaker 1 You know, you need the missile and you need to turn it into a bomb, but getting your hands on the fissile material is the most difficult thing.
Speaker 1 And so they restricted that very wisely. Now, what's the equivalent for this? Well, it's getting your hands on the weights of a foundation model.
Speaker 1 Once you've got your hands on the weights of a foundation model, you can do all sorts of things. You can fine-tune that model to do things it wasn't meant to do.
Speaker 1 You can also do distillation where you give an example to the foundation model, give it a prompt, for example. You look to see the probabilities it gives to all the various words it might say next.
Speaker 1 They're actually word fragments, but let's say words.
Speaker 1 And then you have another model, like DeepSeek did, a smaller model, and you say, see if you can match those probabilities.
Speaker 1 And that's a much more efficient way of learning than learning from the raw data. But you can't see all those probabilities if you don't have the model.
Speaker 2 The original.
Speaker 1 If you're just using the chatbot on the web,
Speaker 1
it will make a prediction, but you don't get to see all those probabilities. Right.
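To make the mechanics concrete: below is a minimal sketch of the distillation idea Hinton is describing, assuming PyTorch and two hypothetical models, a large "teacher" (the foundation model whose weights you hold) and a small "student". The key point is that the student trains against the teacher's full probability distribution over next tokens, which is far richer than the raw text and is exactly what a web chatbot never exposes.

```python
# A minimal distillation sketch (assumes PyTorch; "teacher" and "student"
# are hypothetical language models mapping token IDs to next-token logits).
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, optimizer, tokens, temperature=2.0):
    """One training step: match the student's next-token probabilities
    to the teacher's, instead of learning from raw text alone."""
    with torch.no_grad():
        # The teacher's distribution over every possible next token --
        # visible only if you have the model, not just its chat interface.
        teacher_logits = teacher(tokens)

    student_logits = student(tokens)

    # Soften both distributions with a temperature and minimize the
    # KL divergence from the teacher's probabilities.
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```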
Now, I believe it's a huge mistake to release the weights.
Speaker 1 I believe that's a gift to cyber criminals and terrorists and all sorts of people and other countries. Meta was the first to do that, as far as I know.
Speaker 2 Well, it was looking for advantage, presumably, but go ahead.
Speaker 1 Well, no, I think actually they probably intended to release it to academics and accidentally released it to everybody
Speaker 1
and then made a feature of it. That's just a guess, though.
But I think it's stupid releasing the weights because it makes people able to retrain them to do other things.
Speaker 1 Now, it does give you a competitive advantage. And so
Speaker 1 if you don't care about how easy it is for cyber criminals to misuse it, then you're going to release the weights so you get this competitive advantage. Because then other people will build on it.
Speaker 2
That sounds like Mark Zuckerberg. Exactly.
You just described him.
Speaker 2 You mentioned earlier an idea that giving AI a maternal instinct would protect us, but how would you go about doing that with AI?
Speaker 2
Let me postulate something to you. The reason I think so many men are interested in AI is because they can't have children.
And this is their version of having children, right?
Speaker 2 This is them creating a baby. Like, I know it sounds crazy, but it's not.
Speaker 1
That argument was used in the Lighthill report, which was used to close down AI in Britain.
Speaker 2 Oh.
Speaker 1 In the early '70s, James Lighthill, who was a brilliant mathematician, used that argument.
Speaker 2 I didn't know that.
Speaker 1
I don't buy that argument. I mean, there may be some element of truth to it, but I don't think it's a particularly helpful argument.
I don't think that's the main reason people are doing it.
Speaker 2 But talk about the maternal instinct idea, because you're using this idea of wanting the AI to be a mother, and this is not something that's natural to many of the AI researchers.
Speaker 1 No, it's very unnatural.
Speaker 1 All the high-tech CEOs want to be the boss, and they think of the super intelligent AI as a highly intelligent executive assistant, probably female, who will do what they tell it.
Speaker 1
So Yann, for example, thinks, well, we're controlling the building of them, so we can just make them submissive. He uses that word.
So we'll be the boss.
Speaker 1 It'll be smarter than us, but it'll be submissive. Why would they? I don't think that's going to work when they're smarter than us and can create their own sub-goals.
Speaker 1 I think they're going to have sub-goals of staying alive and getting more control.
Speaker 1 And if we're not careful, they're going to have a sub-goal of just getting us out of the way.
Speaker 2
Last question. The scenario you're painting seems dire to some people in tech.
Governments aren't really regulating AI. Tech billionaires are in a race for AI dominance.
Speaker 2
You say it could lead to our destruction. But people don't want to feel powerless in this.
And there is a groundswell. Everywhere I go, regular people who don't do this for a living feel there's a problem.
Speaker 2 And you could feel it from them. It's fear.
Speaker 2 I think they do see the opportunity. And at the same time, the worry is very heavy among the populace across the world, by the way.
Speaker 2 Everywhere I go, normal people have a much better sense of the problems here than the tech people, that, you know, everything-is-up-and-to-the-right gang of people. So
Speaker 2 what can we as individuals do to take back control?
Speaker 1 So if you look at climate change,
Speaker 1 the fact that the public in general understands that burning carbon is creating climate change and that's doing a lot of bad things is having an effect on politicians and what politicians do.
Speaker 1 I mean, the Biden administration put significant work into dealing with climate change, doing something about it, presumably because of public pressure.
Speaker 1 I think people can try and understand what AI is and how it works and pressure their politicians to do something about regulating it.
Speaker 1 I think they have to understand that each of the different risks has a different solution. So for example, the existential threat, people should be pushing for
Speaker 1 research institutes in each country that collaborate with each other on how to build super intelligent AI that doesn't want to take over and share results with other countries without sharing the most intelligent AI that country has.
Speaker 1 They're not going to share their most intelligent AI, for other reasons, like cyber attacks and autonomous weapons.
Speaker 1 And probably the techniques for making it not want to take over are more or less separate from the techniques for making it more intelligent. So they could share that information.
Speaker 1 So I think the public should push for that.
Speaker 1 And I think quite a few governments in what I think of as the middle countries, um, Canada, Britain, France, South Korea, Singapore, Japan, quite a few of those governments would be sympathetic to that idea.
Speaker 1 They should push for better ways of authenticating videos, particularly political videos. And I don't think it can be done by having AI recognize whether they're fake.
Speaker 1 Because if AI could recognize it, you could use that AI to train something that would generate stuff it couldn't recognize, and then you get better fakes.
Speaker 2 Someone from Runway AI said what we have to do is label the real stuff. Stop trying to label the AI.
Speaker 1 We need to have provenance, and we need to somehow be able to say it's real.
Speaker 1 So Jaan Tallinn. I was once on a private jet with Jaan Tallinn, and he came up with a very sensible scheme, some variation of which I think should be used.
Speaker 1
So every political advertisement should start with a QR code. Your browser looks at the QR code.
Your browser goes to a website.
Speaker 1 Your browser checks if that really is the website of the campaign, because websites are unique.
Speaker 1 And it checks whether this identical video is on that website. In that case, your browser can say this is genuine.
Speaker 1 And if any of that fails, your browser can say, this is probably fake.
Speaker 1 That would be very helpful.
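For concreteness, here is a minimal sketch of the kind of browser-side check that scheme implies. Everything in it is an illustrative assumption rather than an existing standard: the campaign registry, the function names, and the choice to compare video hashes are all hypothetical.

```python
# A hypothetical sketch of the QR-code provenance check described above.
# The registry, URLs, and hashing choices are illustrative assumptions.
import hashlib
import urllib.request

# Hypothetical registry mapping each campaign to its one official website.
OFFICIAL_CAMPAIGN_SITES = {
    "example-campaign": "https://www.example-campaign.org",
}

def sha256_of(data: bytes) -> str:
    """Fingerprint a video so 'this identical video' can be checked."""
    return hashlib.sha256(data).hexdigest()

def verify_political_ad(qr_url: str, campaign: str, ad_video: bytes) -> str:
    # 1. The QR code must point at the campaign's real website.
    official_site = OFFICIAL_CAMPAIGN_SITES.get(campaign)
    if official_site is None or not qr_url.startswith(official_site):
        return "probably fake"

    # 2. The identical video must actually be published at that address.
    with urllib.request.urlopen(qr_url) as response:
        published_video = response.read()
    if sha256_of(published_video) == sha256_of(ad_video):
        return "genuine"

    # 3. If any step fails, the browser warns the user.
    return "probably fake"
```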
Speaker 2 To know the real thing.
Speaker 1 So you could be told things.
Speaker 1
I recently got sent a YouTube video that was me with my voice. It looked just like me.
The voice wasn't quite mine. The accent was mine.
The prosody was a bit AlphaZero-ish.
Speaker 1 And it was saying all the things I believe. It had a good model of what I believe.
Speaker 1 Except for one thing, which is that in addition to saying all the things I believed, it was pointing out how much better organized research was in China than in the US.
Speaker 2 So you look like a softie for China.
Speaker 1 Yes.
Speaker 1
Now, I don't know who made it. Either the Chinese made it or someone made it to make me look like a stooge for China.
I got YouTube to take it down.
Speaker 1
But it was, you know, it took me a moment to make sure it wasn't me. And other people could easily have been taken in by it.
Right.
Speaker 1 I think it's important we have ways of checking that that's nonsense.
Speaker 2 That's a fantastic way to end this. Can I ask you one more quick question? Do you like being called the godfather of AI?
Speaker 1 I do quite like it. It wasn't intended kindly, but I
Speaker 1 got introduced recently in Las Vegas as the goatfather, which I liked.
Speaker 2 Oh, and you like that a lot. Yeah, but you didn't whack anyone, right? You didn't whack anybody.
Speaker 1 No, but I think they understand. I might one day ask them for a favor.
Speaker 2
A favor they can't refuse. Okay.
Dr. Hinton, thank you so much.
I'm a huge admirer. I really, truly am.
Speaker 2 I love a thoughtful scientist, whether I agree or disagree with them, and it's a real pleasure to talk to you.
Speaker 1 Well, thank you very much for giving me the opportunity.
Speaker 2 Today's show was produced by Christian Castor-Roussel, Kateri Yoakum, Michelle Aloy, Megan Burney, and Kaylin Lynch. Nishat Kurwa is Vox Media's executive producer of podcasts.
Speaker 2 Special thanks to Rosemarie Ho. Our engineers are Fernando Aruda and Rick Kwan, and our theme music is by Trackademics.
Speaker 2 If you're already following the show, you might ask someone for a favor they can't refuse. If not, stop looking in your rearview mirror while you're driving.
Speaker 2 Go wherever you listen to podcasts, search for On with Cara Swisher and hit follow.
Speaker 2 Thanks for listening to On with Kara Swisher from Podium Media, New York Magazine, the Vox Media Podcast Network, and us. We'll be back on Monday with more.
Speaker 5 Fifth Third Bank's commercial payments are fast and efficient, but they're not just fast and efficient. They're also powered by the latest in payments technology built to evolve with your business.
Speaker 5 Fifth Third Bank has the big bank muscle to handle payments for businesses of any size.
Speaker 5 But they also have the FinTech Hustle that got them named one of America's most innovative companies by Fortune magazine. That's what being a Fifth Third Better is all about.
Speaker 5
It's about not being just one thing, but many things for our customers. Big Bank Muscle, FinTech Hustle.
That's your commercial payments, a Fifth Third Better.
Speaker 2 As marketing channels have multiplied, the demand for content has skyrocketed. But everyone can make content that's on brand and stands out with Adobe Express.
Speaker 2 You don't have to be a designer to generate images, rewrite text, and create effects. That's the beauty of generative AI that's commercially safe.
Speaker 2 Teams all across your business will be psyched to collaborate and create amazing presentations, videos, social posts, flyers, and more.
Speaker 2 Meet Adobe Express, the quick and easy app to create on-brand content. Learn more at adobe.com/express/business.
Speaker 2
Adobe Acrobat Studio, so brand new. Show me all the things PDFs can do.
Do your work with ease and speed. PDF spaces is all you need.
Do hours of research in an instant.
Speaker 2
With key insights from an AI assistant. Pick a template with a click.
Now your Prezo looks super slick. Close that deal, yeah, you won.
Do that, doing that, did that, done.
Speaker 2
Now you can do that, do that with Acrobat. Now you can do that, do that with the all-new Acrobat.
It's time to do your best work with the all-new Adobe Acrobat Studio.