AI: What Could Go Wrong? with Geoffrey Hinton

1h 42m
As artificial intelligence advances at unprecedented speed, Jon is joined by Geoffrey Hinton, Professor Emeritus at the University of Toronto and the "Godfather of AI," to understand what we've actually created. Together, they explore how neural networks and AI systems function, assess the current capabilities of the technology, and examine Hinton's concerns about where AI is headed.

This podcast episode is brought to you by:

MINT MOBILE - Make the switch at https://mintmobile.com/TWS

GROUND NEWS - Go to https://groundnews.com/stewart to see how any news story is being framed by news outlets around the world and across the political spectrum. Use this link to get 40% off unlimited access with the Vantage Subscription.

INDEED - Speed up your hiring with Indeed. Go to https://indeed.com/weekly to get a $75 sponsored job credit.

Follow The Weekly Show with Jon Stewart on social media for more:

> YouTube: https://www.youtube.com/@weeklyshowpodcast

> Instagram: https://www.instagram.com/weeklyshowpodcast

> TikTok: https://tiktok.com/@weeklyshowpodcast

> X: https://x.com/weeklyshowpod

> BlueSky: https://bsky.app/profile/theweeklyshowpodcast.com

Host/Executive Producer – Jon Stewart

Executive Producer – James Dixon

Executive Producer – Chris McShane

Executive Producer – Caity Gray

Lead Producer – Lauren Walker

Producer – Brittany Mehmedovic

Producer – Gillian Spear

Video Editor & Engineer – Rob Vitolo

Audio Editor & Engineer – Nicole Boyce

Music by Hansdle Hsu
Learn more about your ad choices. Visit podcastchoices.com/adchoices


Transcript

Running a business comes with a lot of what-ifs, but luckily, there's a simple answer to them: Shopify.

It's the commerce platform behind millions of businesses, including Thrive Causemetics and Momofuku, and it'll help you with everything you need.

From website design and marketing to boosting sales and expanding operations, Shopify can get the job done and make your dream a reality.

Turn those what-ifs into... Sign up for your $1-per-month trial at shopify.com/specialoffer.

Experience a membership that backs your business journey with American Express Business Platinum.

When you pay with membership rewards points for all or part of an eligible flight booked with a qualifying airline through Amex Travel, you can get 35% of those points back, up to 1 million points back per calendar year.

American Express Business Platinum.

There's nothing like it.

Terms apply.

Learn more at americanexpress.com/business-platinum.

Hey, everybody.

Welcome to the weekly show

podcast.

My name is Jon Stewart.

I'm going to be hosting you today.

It's, what is it, Wednesday, October 8th.

I don't know what's going to happen later on in the day, but we're going to be out tomorrow.

But today's episode, I just want to say very quickly: today's episode, we are talking to someone known as the godfather of AI, a gentleman by the name of Geoffrey Hinton, who has been developing the type of technology that has turned into AI since the 70s.

And I want to let you know.

So we talk about it.

The first part of it, though, he gives us this breakdown of kind of what it actually is, which for me was

unbelievably helpful.

We get into the it will kill us all part, but it was important

for my understanding to sort of set the scene.

So I hope you find that part as interesting as I did, because man,

it expanded my understanding of what this technology is, of how it's going to be utilized, of what some of those dangers might be in a really interesting way.

So I will not hold it up any longer.

Let us get to our guests for this podcast.

Ladies and gentlemen, we are absolutely thrilled today to be able to welcome Professor Emeritus with the Department of Computer Science at the University of Toronto and Schwartz Reisman Institute Advisory Board member Geoffrey Hinton, who is joining us.

Sir, thank you so much for being with us today.

Well, thank you so much for inviting me.

I'm delighted.

You are known as, and I'm sure you will be very demure about this, the godfather of artificial intelligence

for your work

on sort of these neural networks,

you co-won the actual Nobel Prize in Physics in 2024 for this work.

Is that correct?

That is correct.

It's slightly embarrassing since I don't do physics.

So when they called me up and said you won the Nobel Prize in Physics, I didn't believe them to begin with.

And were the other physicists going, wait a second,

that guy's not even in our business?

I strongly suspect they were, but they didn't do it to me.

Oh, good.

I'm glad.

This is going to seem somewhat remedial, I'm sure, to you.

But

when we talk about artificial intelligence, I'm not exactly sure what it is that we're talking about.

I know there are these things, large language models.

I know, in my experience, artificial intelligence is just a slightly more flattering search engine.

Whereas I used to Google something and it would just give me the answer.

Now it says, What an interesting question you've asked me.

So,

what are we talking about when we talk about artificial intelligence?

So, when you used to Google, it would use keywords and it would have done a lot of work in advance.

So, if you gave it a few keywords, it could find all the documents that had those words in.

Okay.

So, basically, it's just a, it's sorting.

It's looking through and it's sorting and finding words and then bringing you a result.

Yeah, that's how it used to work.

Okay.

But it didn't understand what the question was.

So it couldn't, for example, give you documents that didn't actually contain those words but were about the same subject.

No.

It didn't make that connection.

Oh, right, because it would say, here is your result, minus, and then it would say like a word that was not included.

Right.

But if you had a document with none of the words you used, it wouldn't find that, even though it might be a very relevant document about exactly the subject you were talking about.

It had just used different words.

Now it understands what you say, and it understands in pretty much the same way people do.

What?

So, if I ask it something, it'll say, oh, I know what you mean, let me, let me educate you on this.

So, it's gone from being kind of a

literally just a search and find thing to an actual almost an expert in whatever it is that you're discussing and it can bring you things that you might not have thought about.

Yes, so the large language models are like not-very-good experts at everything.

So if you take some friend you have who knows a lot about some subject matter.

No, I got a couple of those.

Yeah, they probably know a bit more, they're probably a bit better than the large language model, but they'll nevertheless be impressed that the large language model knows their subject pretty well.

So what is the difference between sort of machine learning?

So was Google, in terms of a search engine, machine learning?

That's just algorithms and

predictions.

Not exactly.

Machine learning is a kind of coverall term for any system on a computer that learns.

Okay.

Now, these neural networks are a particular way of doing learning

that's very different from what was used before.

Okay.

Now, these neural networks are the new machine learning.

The old machine learning, that was not considered neural networks.

And when you say neural networks, meaning your work, which was sort of the genesis of it, was in the 70s, where you thought

you were studying the brain.

Is that correct?

I was trying to come up with...

ideas about how the brain actually learned.

And there's some things we know about that.

It learns by changing the strengths of connections between brain cells.

Wait, so explain that.

You said it learns by changing the connections.

So if

you show a human something new,

brain cells will

actually make new connections between brain cells.

It won't make new connections.

There'll be connections that were there already.

But the main way it operates is it changes the strength of those connections.

Wow.

So if you think of it from the point of of view of a neuron in the middle of the brain, a brain cell,

all it can do in life is sometimes go ping.

That's all he's got.

That's his only...

unless it happens to be connected to a muscle.

Okay.

It can sometimes go ping.

And it has to decide when to go ping.

Oh, wow.

How does it decide when to go ping?

I was glad you asked that question.

There's other neurons going ping.

Okay.

And when it sees particular patterns of other neurons going ping, it goes ping.

And you can think of

this neuron as receiving pings from other neurons.

And each time it receives a ping, it treats that as a number of votes for whether it should turn on or should go ping or should not go ping.

And you can change how many votes another neuron has for it.

How would you change that vote?

By changing the strength of the connection.

The strength of the connection, think of as the number of votes this other neuron gives for you to go ping.

Okay.
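To make the voting idea concrete, here is a minimal sketch in Python (an editor's illustration of the description above, not anything from the show; the threshold and strengths are arbitrary assumptions):

```python
# A neuron "goes ping" when the weighted votes from incoming pings pass a threshold.
def should_ping(incoming_pings, connection_strengths, threshold=1.0):
    votes = sum(ping * strength
                for ping, strength in zip(incoming_pings, connection_strengths))
    return votes > threshold

print(should_ping([1, 1, 0], [0.8, 0.6, -0.5]))  # two allies ping: 1.4 votes -> True
print(should_ping([0, 0, 1], [0.8, 0.6, -0.5]))  # only an inhibitor pings -> False
```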

So it really is, in some respects, boy, it reminds me of the movie Minions, but it's almost a social...

Yes, I know what you're thinking of.

Yes,

it's very like political coalitions.

There'll be groups of neurons that go ping together, and the neurons in that group will all be telling each other, go ping.

And then there might be a different coalition, and they'll be telling other neurons, don't go ping.

Oh, my God.

And then there might be a different coalition,

and they're all telling each other to go ping and telling the first coalition not to go ping.

And so, when the second thing...

All this is going on in your brain

in the way of like, I would like to pick up a spoon.

Yes.

So, spoon, for example, spoon in your brain is a coalition of neurons going ping together.

And that's a concept.

Oh, wow. So as you're teaching, when you're a baby and they go, spoon, there's a little group of neurons going, oh, that's a spoon, and they're strengthening their connections with each other.

So, is that why, when

you know, you're imaging brains, you see certain areas light up? And is that lighting up of those areas the neurons that ping for certain items or actions?

Not exactly.

Getting close.

I'm getting close.

Different areas will light up when you're doing different things, like when you're doing vision or talking or

controlling your hands.

Different areas light up for that.

But the coalition of neurons that

go ping together when there's a spoon,

they don't only work for spoon.

Most of the members of that coalition will go ping when there's a...

So they overlap a lot, these coalitions.

This is a big tent.

It's a big tent coalition.

I love thinking about this as political.

I had no idea your brain operates on peer pressure.

There's a lot of that goes on, yes.

And concepts are kind of coalitions that are happy together, but they overlap a lot.

Like the concept for dog and the concept for cat have a lot in common.

They'll have a lot of shared neurons.

In particular, the neurons that represent things like this is animate or this is hairy or this might be a domestic pet.

All those neurons will be in common to cat and dog.

Are there, can I ask you that?

And again, I so appreciate your patience with this and explain.

This is really helpful for me.

Are there certain neurons that ping broadly, right, for the broad concept of animal?

And then other neurons, like, does it work from macro to micro, from general to specific?

So you have a coalition of neurons that ping generally,

and then as you get more specific with the knowledge, does that engage

certain ones that will ping less frequently, but for maybe more specificity?

Is that

something?

Okay, that's a very good theory.

No, nobody, nobody really knows for sure about this.

Oh, that's a very sensible theory.

And in particular, there's going to be some neurons in that coalition that ping more often for more general things.

And then there may be neurons that ping less often for much more specific things.

Right.

Okay.

So

and they all, and this works throughout.

And like you say, there's certain areas that will ping for vision or other senses, touch.

I imagine there's a ping system for language.

And you were saying, what if we could get computers which were much more, I would think, just

binary, if then, you know, sort of basic.

You're saying, could we get them to work as these coalitions?

Yeah, I don't think binary, if then, has much to do with it.

The difference is people were trying to put rules into computers.

They were trying to figure out...

So the basic way you program a computer is you figure out in exquisite detail how you would solve the problem.

And then you tell the computer.

You deconstruct all the steps.

And then you tell the computer exactly what to do.

That's a normal computer program.

Okay, great.

These things aren't like that at all.

So you were trying to change that process to see if we could create a process that

functioned more like how the human brain would, rather than an item-by-item instruction list. You wanted it to think more

globally. How did that occur?

So it was sort of obvious to a lot of people that the brain doesn't work by someone else giving you rules and you just execute those rules. Right? I mean, in North Korea, they would love brains to work like that, but they don't.

You're saying that in an authoritarian world, that is how brains would operate.

Well, that's how they would like them to operate.

That's how they would like them to operate.

It's a little more artsy than that.

Yes.

All right, fair enough.

We do write programs for neural nets, but the programs are just to tell the neural net how to adjust the strength of the connection on the basis of the activities of the neurons.

So that's a fairly simple program

that doesn't have all sorts of knowledge about the world in it.

It's just what are the rules for changing neural connection strengths on the basis of the activities.

Can you give me an example?

So would that be considered sort of, is that machine learning or is that deep learning?

That's deep learning.

If you have a network with multiple layers, it's called deep learning because there's many layers.

So what are you saying to a computer?

when you are trying to get it to do deep learning?

Like, what would be an example of an instruction that you would...

Okay.

So let me,

ah, now we're, all right.

Am I, am I yet, am I in neural learning 201 yet, or am I still in 101?

You're like the smart student in the front row who doesn't know anything but asks these good questions.

That's the nicest way I've ever been described.

Thank you.

If you're still overpaying for your wireless, I want you to leave this country.

I want you gone.

There's no excuse.

Mint Mobile, your favorite word is no.

It's time to say yes to saying no.

No contracts, no monthly bills, no overages, no BS.

Here's why so many said yes to making the switch and getting premium wireless for $15

a month.

My God, I spend that on chiclets.

Chiclets, I say.

Ditch overpriced wireless and their jaw-dropping monthly bills, unexpected overages, and fees.

Plans start at $15 a month at Mint.

All plans come with high-speed data and unlimited talk and text delivered on the nation's largest 5G network.

Use your own phone with any Mint Mobile plan and bring your phone number along with all your existing contacts.

Ready to say yes to saying no?

Make the switch at mintmobile.com/TWS.

That's mintmobile.com/TWS.

Upfront payment of $45 required, equivalent to $15 a month.

Limited time new customer offer for first three months only.

Speeds may slow above 35 gigabytes on unlimited plan.

Taxes and fees extra.

See Mint Mobile for details.

So let's go back to 1949.

Oh boy.

All right.

So here's a theory from someone called Donald Hebb.

about how you change connection strengths.

If neuron A goes ping, and then shortly afterwards neuron B goes ping,

increase the strength of the connection.

That's a very simple rule.

That's called the Hebb rule.

Right.

The Hebb rule is: if neuron A goes ping and then B goes ping, increase that connection.

Yes.

Okay.

Now, as soon as computers came along and you can do computer simulations,

people discovered that rule by itself doesn't work.

What happens is all the connections get very strong and all the neurons go ping all at the same time and you have a seizure.

Oh, okay.

That's a shame, isn't it?

That is a shame.

There's got to be something that makes connections weaker as well as making them stronger.

Right.

There's got to be some discernment.

Yes.
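As a rough illustration of the Hebb rule and the runaway strengthening just described, here is a minimal Python sketch (the network size, learning rate, and firing probability are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
strengths = rng.uniform(0.0, 0.1, size=(n, n))   # connection strengths among n neurons

for step in range(1000):
    pings = (rng.random(n) < 0.2).astype(float)  # which neurons happen to go ping
    strengths += 0.01 * np.outer(pings, pings)   # Hebb: co-active pairs get stronger

# With nothing to weaken connections, strengths only ever grow, until
# everything excites everything: the "seizure" problem.
print(strengths.mean())
```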

Okay.

So

if I can digress for about a minute.

Boy, I'd like that.

Okay.

Suppose we wanted to make a neural network

that has multiple layers of neurons

and it's to decide whether an image contains a bird or not.

Like a CAPTCHA, like when you go on and it asks, is this a bird?

Exactly. We want to solve that CAPTCHA with a neural net.

Okay.

So the input to the neural net, the sort of bottom layer of neurons,

is a bunch of neurons and they go ping to different levels of, they have different strengths of ping.

and they represent the intensities of the pixels in the image.

Okay.

So if it's a thousand by thousand image, you've got a million neurons

that are going ping at different rates to represent how intense each pixel is.

Okay.

That's your input.

Now you've got to turn that into a decision.

Is this a bird or not?

Wow.

So that decision.

So let me ask you a question then.

Do you program in, because strength of pixel doesn't strike me as

a really useful tool in terms of figuring out if it's a bird.

Figuring out if it's a bird seems like the tool would be: are those feathers, is that a beak?

Is that

a breast?

Yeah.

Here goes.

So the pixels by themselves

don't really tell you whether it's a bird.

Okay.

Because you can have birds that are bright and birds that are dark and you can have birds flying and birds sitting down and you can have an ostrich in your face and you have a seagull in the distance.

They're all birds.

Okay, so what do you do next?

Well,

sort of guided by the brain, what people did next was said,

let's have a bunch of edge detectors.

So what we're going to do, because of course you can recognize birds quite well in line drawings.

Right.

So what we're going to do is we're going to make some neurons, a whole bunch of them, that detect little pieces of edge.

That is little places in the image where it's bright on one side and darker on the other side.

Right.

So suppose we want to detect a little piece of vertical.

So it's almost creating, like, a primitive form of vision.

This is how you make a vision system, yes.

This is how it's done in the brain and how it's done in computers.

Now.

Wow.

Okay.

So if you want to detect a little piece of vertical edge in a particular place in the image,

let's suppose you look at a little column of three pixels and next to them another column of three pixels.

And if the ones on the left are bright and the ones on the right are dark, you want to say, yes, there's an edge here.

So you have to ask, how would I make a neuron that did that?

Oh my God.

Okay.

All right.

I'm going to jump ahead.

All right.

So the first thing you do is you have to teach

the network what vision is.

So you're teaching it, these are images.

This is background.

This is form.

This is edge.

This is not.

This is bright.

So you're teaching it almost how to see.

In the old days,

in the old days, people would try and put in lots of rules to teach it how to see and explain to it what foreground was and what background was.

Okay.

But the people who really believed in neural nets said, no, no, don't put in all those rules.

Let it learn all those rules just from data.

And the way it learns is by strengthening the pings

once it starts to recognize edges and things.

We'll come to that in a minute.

I'm jumping ahead.

You're jumping ahead.

All right.

So let's carry on with this little bit of edge detector.

Okay.

So you have a, in the first layer, you have the neurons that represent how bright the pixels are.

And then in the next layer, we're going to have little bits of edge detector.

And so you might have a neuron in the next layer that's connected to a column of three pixels on the left and a column of three pixels on the right.

And now, if you make the strengths of the connections to the three pixels on the left strong,

big positive connections,

and you make the strengths of connections to the three pixels on the right be big negative connections

because they don't turn on,

then when the pixels on the left and the pixels on the right are the same brightness as each other, the negative connections will cancel out the positive connections and nothing will happen.

But if the pixels on the left are bright, and the pixels on the right are dark, the neuron will get lots of input from the pixels on the left because they're big positive connections.

It won't get any inhibition from the pixels on the right because

those pixels are all turned off.

Right, right.

And so it'll go ping.

It'll say, hey, I found what I wanted.

I found that the three pixels on the left are bright and the three pixels on the right are not bright.

Hey, that's my thing.

I found a little piece of positive, a little piece of edge here.

I'm that guy.

I'm the edge guy.

I ping on the edges.

Right.

And that pings on that particular piece of edge.

Okay.

Okay.
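Here is that vertical-edge neuron sketched in Python (the plus-one and minus-one connection strengths are an illustrative choice, not anything prescribed on the show):

```python
import numpy as np

# Big positive connections to the left column, big negative ones to the right.
weights = np.array([[+1.0, -1.0],
                    [+1.0, -1.0],
                    [+1.0, -1.0]])

def edge_neuron(patch):
    """patch: a 3x2 array of pixel brightnesses; returns the strength of the ping."""
    votes = np.sum(weights * patch)   # bright right pixels cancel bright left ones
    return max(0.0, votes)            # only goes ping for bright-left, dark-right

bright_left = np.array([[0.9, 0.1], [0.9, 0.1], [0.9, 0.1]])
uniform = np.full((3, 2), 0.5)
print(edge_neuron(bright_left))  # strong ping: a piece of vertical edge is here
print(edge_neuron(uniform))      # positives and negatives cancel: no ping
```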

Now, imagine you have like a gazillion of those.

I'm already exhausted on the three pings.

You have a gazillion of those.

Because they have to detect little pieces of edge anywhere on your retina.

Wow.

Anywhere in the image.

And at any orientation, you need different ones for each orientation.

Right.

And you actually have different ones for the scale.

There might be an edge at a very big scale that's quite dim,

and there might be little sharp edges at a very small scale.

And as you make more and more edge detectors,

you get better and better discrimination for edges.

You can see smaller edges, you can see the orientation of edges more accurately, you can detect big, vague edges better.
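That "gazillion of those" amounts to sliding the same detector across every position in the image, then repeating with rotated and rescaled versions for the other orientations and scales. A minimal sketch of the sliding part (illustrative only):

```python
import numpy as np

def detect_everywhere(image, weights):
    """Apply one edge detector at every position; returns a map of ping strengths."""
    h, w = image.shape
    fh, fw = weights.shape
    pings = np.zeros((h - fh + 1, w - fw + 1))
    for i in range(pings.shape[0]):
        for j in range(pings.shape[1]):
            pings[i, j] = max(0.0, np.sum(weights * image[i:i+fh, j:j+fw]))
    return pings
```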

So let's now go to the next layer.

So now we've got our edge detectors.

Now suppose that we had a neuron in the next layer

that looked for a little combination of edges that is almost horizontal, several edges in a row that are almost horizontal

and line up with each other,

and

just slightly above those

several edges in a row that are again almost horizontal, but come down to form a point with the first sort of edges.

Right.

So you find two little combinations of edges that make a sort of pointy thing.

Okay.

So you're a Nobel Prize-winning physicist.

I did not expect that sentence to end with, it makes kind of a pointy thing.

I thought there'd be a name for that.

But

I get what you're saying.

You're now discerning where it ends, where you're sort of looking at different.

And this is before you're even looking at color or anything else.

This is literally just, is there an image?

What are the edges?

What are the edges?

And what are the little combinations of edges?

So we're now asking, is there a little combination of edges that makes something that might be a beak?

Wow.

But

you don't know what a beak is yet.

Not yet.

We need to learn that too, yes.

Right.

So

once you have the system, it's almost like you're building systems that can mimic the human senses.

That's exactly what we're doing, yes.

So vision, ears, not smell, obviously.

No, they're doing that now.

They're starting on smell now.

Oh, for God's sakes.

They've now got to digital smell, where

you can transmit smells over the web.

It's just the printer.

The printer for smells has 200 components.

Instead of three colors, it's got 200 components.

And it synthesizes a smell at the other end, and it's not quite perfect, but it's pretty good.

Right, right, right.

Wow.

So, this is this is incredible to me.

Okay.

So

I am so sorry about this.

I apologize.

No, this is.

This is perfect.

You're doing a very good job of representing a sort of sensible, curious person who doesn't know anything about this.

So let me finish describing how you would build the system by hand.

Yes.

So if I did it by hand, I'd start with these edge detectors.

So I'd say, make big, strong, positive connections from these pixels on the left, and big, strong, negative connections from the pixels on the right.

And now the neuron that gets those incoming connections, that's going to detect a little piece of vertical edge.

Okay.

And then at the next layer, I'd say, okay, make big, strong, positive connections from...

Three little bits of edge sloping like this and three little bits of edge sloping like that and this is a potential beak.

Right. And in that same layer, I might also make big, strong, positive connections from a combination of edges that roughly form a circle.

Wow. And that's a potential eye.

Right, right, right. Now, in the next layer,

I have a neuron that looks at possible beaks and looks at possible eyes

And if they're in the right relative position, it says, hey, I'm happy.

Because that neuron has detected a possible bird's head.

And that guy might ping.

And that guy would ping.

At the same time, there'll be other neurons elsewhere that have detected little patterns like a chicken's foot or the feathers at the end of the wing of a bird.

And so you have a whole bunch of these guys.

Now, even higher up, you might have a neuron that says, Hey, look, if I've detected a bird's head and I've detected a chicken's foot, and I've detected the end of a wing, it's probably a bird.

So it'd say bird.

So you can see now how you might try and wire all that up by hand.

Yes, and it would take some time.

It would take, like, forever.

It would take, like, forever.

Yes.
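Hand-wiring that hierarchy might look roughly like the sketch below; every feature name and threshold here is hypothetical, purely to mirror the beak/eye/head story above:

```python
def beak(pings):        # two groups of edges sloping together to a point
    return pings["slope_up"] and pings["slope_down"]

def eye(pings):         # a combination of edges that roughly form a circle
    return pings["circle"]

def bird_head(pings):   # a possible beak and a possible eye, rightly placed
    return beak(pings) and eye(pings)

def bird(pings):        # enough bird parts detected -> say "bird"
    parts = [bird_head(pings), pings["chicken_foot"], pings["wing_feathers"]]
    return sum(parts) >= 2

print(bird({"slope_up": True, "slope_down": True, "circle": True,
            "chicken_foot": False, "wing_feathers": True}))  # True: probably a bird
```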

Okay,

so suppose you were lazy.

Yes, now you're talking.

Okay.

What you could do is you could just make these layers of neurons without saying what the strengths of all the connections ought to be.

You just start them off with small random numbers.

Just put in any old strengths.

And you put in a picture of a bird.

And let's suppose it's got two outputs.

One says bird and the other says not bird.

With random connection strengths in there, what's going to happen is you put in a picture of a bird and it says 50% bird, 50% not bird.

In other words, I haven't got a clue.

Right.

And you put in a picture of a non-bird and it says 50% bird, 50% non-bird.

Oh boy.

Okay, so now you can ask a question.

Suppose I were to take one of those connection strengths

and I was to change it just a little bit, make it maybe a little bit stronger.

Instead of saying 50% bird, would it say 50.01% bird

and 49.99% non-bird?

And if it was a bird, then that's a good change to make.

You've made it work slightly better.

What year was this?

When did this start?

Oh, exactly.

So this is just an idea.

This would never work, but bear with me.

All right.

This is like one of those defense lawyers who goes off on a huge digression, but it's not going to be good in the end.

This is helpful.

Okay, so

this is the thing that's going to kill us all in 10 years.

Yep.

When I say yep, I mean not this particular thing, but an advancement on it.

But this is how it's going to be.

Not necessarily kill us all, but maybe.

Right, right, right.

This is Oppenheimer going,

okay, so you've got an object, and that is made up of smaller objects.

And like, this is the very early

part of this.

Okay, so suppose you had all the time in the world.

What you could do is you could take this layered neural network and you could start with random connection strengths

and you could then show it a bird and it would say 50% bird, 50% non-bird, and you could pick one of the connection strengths

and you could say, if I increase it a little bit, does it help?

It won't help much, but does it help at all?

Right.

Well, it gives you 50.1, 50.2, that kind of thing.

If it helps, make that increase.

Okay.

And then you go around and do it again.

Maybe this time we choose a non-bird.

And

we choose one connection strength.

Right.

And we'd like it to,

if we increase that connection strength and it says it's less likely to be a bird and more likely to be a non-bird, we say, okay, that's a good increase.

Let's do that one.

Right, right, right.

Now, here's a problem: there's a trillion connections.

Yeah.

Okay, and each connection has to be changed many times.

And is that manual?

Well, in this way of doing it, it'll be manual.

And, um,

not just that, but you can't just do it on the basis of one example, because sometimes with a connection strength, if you increase it a bit, it'll help with this example, but it'll make other examples worse.

Oh, dear God.

So you have to give it a whole batch of examples, right, and see if on average it helps.

And that's how you create these language models?

If we did it this really dumb way, to create, let's say, this vision system for now,

we'd have to do trillions of experiments.

And each experiment would involve giving it a whole batch of examples and seeing if changing one connection strength helps or hurts.

Oh, God.

And it would never be done.

It would be infinite.

It would be infinite.
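The "really dumb method" in miniature, with a one-layer toy standing in for the real network (the data, sizes, and step size are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def average_error(weights, batch_x, batch_y):
    preds = 1.0 / (1.0 + np.exp(-batch_x @ weights))  # tiny stand-in "network"
    return np.mean((preds - batch_y) ** 2)

weights = rng.normal(0, 0.01, size=4)              # small random starting strengths
batch_x = rng.random((32, 4))                      # a whole batch of examples
batch_y = (batch_x.sum(axis=1) > 2).astype(float)  # stand-in bird / not-bird labels

eps = 1e-3
for i in range(len(weights)):              # one experiment PER connection...
    before = average_error(weights, batch_x, batch_y)
    weights[i] += eps                      # nudge one connection strength
    if average_error(weights, batch_x, batch_y) >= before:
        weights[i] -= eps                  # didn't help on average; undo it
# ...and every connection needs many such passes. With a trillion
# connections, this never finishes.
```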

Okay.

Now, suppose that

you figured out how to do a computation

that would tell you for every connection strength in the network,

it tells you at the same time,

for this particular example, let's suppose you give it a bird

and it says 50% bird.

And now for every single connection strength, all trillion of these connection strengths, we can figure out at the same time whether you should increase them a little bit to help or decrease them a little bit to help.

And then

you change a trillion of them at the same time.

Can I say a word that I've been dying to say this whole time?

Eureka.

Eureka.

Eureka.

Eureka.

Now that computation, for normal people, it seems complicated.

If you've done calculus, it's fairly straightforward.

And many different people invented this computation.

It's called back propagation.

So now you can change all trillion at the same time, and you'll go a trillion times faster.

Oh my God.

And that's the moment that it goes from theory to practicality.

That is the moment when you think, Eureka, we've solved it.

We know how to make smart systems.

For us, that was 1986.

And we were very disappointed when it didn't work.
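Backpropagation in miniature: one forward pass, then the discrepancy flows backwards and yields, for every connection at once, the direction to nudge it (a two-layer toy in NumPy, illustrative only):

```python
import numpy as np

def train_step(W1, W2, x, y, lr=0.1):
    # forward pass: pings flow up through the layers
    h = np.tanh(x @ W1)                  # hidden-layer pings
    p = 1.0 / (1.0 + np.exp(-h @ W2))    # probability of "bird"
    # backward pass: the discrepancy is sent back through the network
    dp = p - y                           # how wrong the answer was
    dW2 = np.outer(h, dp)                # how every top-layer weight should move
    dh = (W2 @ dp) * (1.0 - h ** 2)      # chain rule back through the hidden layer
    dW1 = np.outer(x, dh)                # how every bottom-layer weight should move
    return W1 - lr * dW1, W2 - lr * dW2  # change ALL the weights simultaneously

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (4, 8))
W2 = rng.normal(0, 0.1, (8, 1))
x, y = rng.random(4), np.array([1.0])    # one made-up training example
for _ in range(100):
    W1, W2 = train_step(W1, W2, x, y)
```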

Every day, the loudest, the most inflammatory takes dominate our attention.

And the bigger picture gets lost.

It's all just noise and no light.

Ground news puts all sides of the story in one place so you can see the context.

They provide the light.

It starts conversations beyond the noise.

They aggregate and organize information just to help readers make their own decisions.

Ground News provides users reports that easily compare headlines, or reports that give a summarized breakdown of the specific differences in reporting across the spectrum.

It's a great resource.

Go to groundnews.com/stewart and subscribe for 40% off the unlimited access Vantage subscription.

Brings the price down to about $5 a month.

It's groundnews.com/stewart, or scan the QR code on the screen.

You've been in that room for 10 years, you've been showing it birds, you've been increasing the strengths, you had your eureka moment, and you flipped the switch and went...

No, here's the problem.

Yeah, here's the problem. It only works, or it only works really impressively well, much better than any other way of trying to do vision, if you have a lot of data

and you have a huge amount of computation.

Even though you're a trillion times faster than the dumb method, it's still going to be a lot of work.

Okay. So now you've got to increase the data and you've got to increase your computation power.

Yes, and you've got to increase the computation power by a factor of about a billion compared with where we were, and you've got to increase the data by a similar factor.

You are still in 1986 when you figured this out. You are a billion times not there yet.

Something like that, yes.

What would have to change to get you there?

The power of the chip?

What changes?

Okay.

It may be more like a factor of a million.

Okay, okay.

I don't want to exaggerate here.

No, because I'll catch you.

If you try and exaggerate, I'll be on it.

A million is quite a lot.

Yes.

So here's what has to change.

The area of a transistor has to get smaller so you can pack more of them on a chip.

So between 1986,

let's see.

No, between 1972 when I started on this stuff.

Okay.

And now the area of a transistor has got smaller by a factor of a million.

Wow.

So that's, can I relate this to, so that is around the age that I remember my father worked at RCA Labs.

And when I was like eight years old, he brought home a calculator, and the calculator was the size of a desk, and it added and subtracted and multiplied.

By 1980, you could get a calculator on a pen. And is that based on that, the transistors?

Based on large-scale integration using small transistors, yeah.

Okay. All right, all right. So...

The area of a transistor decreased by a factor of a million.

Okay.

And the amount of data available increased by much more than that because we got the web and we got digitization of massive amounts of data.

Oh,

so they worked hand in hand.

So as the chips got better, the data got more vast and you were able to feed more information into the model while it was able to increase its processing speed and abilities.

Yeah, so let me summarize what we now have.

Yes.

You set up this neural network for detecting birds, and you give it lots of layers of neurons, but you don't tell it the connection strengths.

You say, start with small random numbers.

And now all you have to do is show it lots of images of birds and lots of images that are not birds,

tell it the right answer so it knows the discrepancy between what it did and what it should have done.

Send that discrepancy backwards through the network so it can figure out for every connection strength whether it should increase it or decrease it, and then just sit and wait for a month.

And at the end of the month, if you look inside,

if you look inside, here's what you'll discover.

It has constructed little edge detectors.

And it has constructed things like little beak detectors and little eye detectors.

And it will have constructed things that it's very hard to see what they are, but they're looking for little combinations of things like beaks and eyes.

And then after a few layers, it'll be very good at telling you whether it's a bird or not.

It made all that stuff up from the data.
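Tying the pieces together, the whole recipe Hinton just summarized as one end-to-end loop (the same toy two-layer network as the sketch above, on made-up stand-in data):

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(0, 0.1, (4, 8))          # "start with small random numbers"
W2 = rng.normal(0, 0.1, (8, 1))

images = rng.random((200, 4))            # lots of examples, WITH the right answers
labels = (images.sum(axis=1) > 2).astype(float)   # 1.0 = bird, 0.0 = not bird

for epoch in range(50):                  # "just sit and wait for a month"
    for x, y in zip(images, labels):
        h = np.tanh(x @ W1)
        p = 1.0 / (1.0 + np.exp(-h @ W2))
        dp = p - y                       # what it did vs. what it should have done
        dW2 = np.outer(h, dp)            # send the discrepancy backwards...
        dh = (W2 @ dp) * (1.0 - h ** 2)
        dW1 = np.outer(x, dh)
        W1 -= 0.1 * dW1                  # ...and adjust every connection strength
        W2 -= 0.1 * dW2
# Afterwards the early weights act like learned feature detectors;
# nothing was wired in by hand.
```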

Oh my god, can I say this again?

Eureka.

Eureka, we figured out we don't need to hand wire in all these little edge detectors and beak detectors and eye detectors and chicken's foot detectors.

That's what computer vision did for many, many years, and it never worked that well.

We can get the system just to learn all that.

All we need to do is tell it how to learn.

And that is... in 1986, we figured out how to do that.

Right.

People were very skeptical because we couldn't do anything very impressive.

Right.

Because we didn't have enough data, or we didn't have enough computation.

This is, this is incredible. And I can't thank you enough for explaining what that is. It makes everything... You know, I'm so accustomed to an analog world of, you know,

how things work and like the way that cars work, but I have no idea how

our digital world functions.

And that is the clearest explanation for me that I have ever gotten.

And I cannot thank you enough.

It makes me understand now

how this was achieved.

And by the way, what

Geoffrey is talking about is the primitive version of that.

What's so incredible to me is

each upgrade of that,

the vastness of the improvement of that.

So

let me just say one more thing.

Please.

I don't want to be too professor-like, but

how does this apply to large language models?

Yes.

Well, here's how it works for large language models.

You have some words in a context.

So let's suppose I give you the first few words of a sentence.

Right.

What the neural net's going to do is learn to convert each of those words into a big set of features, which are just active neurons, neurons going ping.

So if I give you the word Tuesday, there'll be some neurons going ping.

If I give you the word Wednesday, it'll be a very similar set of neurons, slightly different, but a very similar set of neurons going ping, because they mean very similar things.

Now, after you've converted all the words in the context into neurons going ping, into whole bunches that capture their meaning,

these neurons all interact with each other.

What that means is neurons in the next layer look at combinations of these neurons, just as we looked at combinations of edges to find a beak.

And eventually,

you can activate neurons that represent the features of the next word in the sentence.

It will anticipate.

It can anticipate, it can predict the next word.

So the way you train it... Is that why my phone does that? It always thinks I'm about to say this next, you know, uh, word, and I'm always like, stop doing that. Yeah. Because a lot of times it's wrong.

It's probably using neural nets to do it, yes.

Right. And of course, you can't be perfect at that.

So this is, so now to put it together: you've taught it almost how to see.

You can teach it to see in the same way you can teach it how to predict the next word.

Right.

So it sees, it goes, that's the letter A.

Now I'm starting to recognize letters.

Then you're teaching it words and then what those words mean and then the context.

And it's all being done by feeding it our

previous words, by backpropagating all the writing and speaking that we've done already.

Yes.

It's looking over.

You take some document that we produced.

Yes.

You give it the context, which is all the words up to this point.

Yes.

And you ask it to predict the next word.

And then you look at the probability it gives to the correct answer.

Right.

And you say, I want that probability to be bigger.

I want you to have more probability of making the correct answer.

So it doesn't understand it.

This is merely a statistical exercise.

We'll come back to that.

You take the discrepancy between the probability it gives for the next word and the correct answer,

and you backpropagate that through this network,

and it'll change all the connection strengths.

So next time you see that lead-in, it'll be more likely to give the right answer.
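In miniature, that training step looks something like this: a toy vocabulary and a one-word context, where real LLMs use deep networks over long contexts, but the update has the same shape (everything here is an illustrative assumption):

```python
import numpy as np

vocab = ["the", "bird", "goes", "ping"]
V = len(vocab)
rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (V, V))        # connection strengths: context word -> next word

def train_step(context_id, next_id, lr=0.5):
    logits = W[context_id]
    probs = np.exp(logits) / np.exp(logits).sum()   # probabilities for the next word
    target = np.zeros(V)
    target[next_id] = 1.0                           # the correct answer
    W[context_id] -= lr * (probs - target)          # backpropagate the discrepancy
    return probs[next_id]                           # probability given to the truth

# Show it a document ("the bird goes ping") over and over.
for _ in range(100):
    for ctx, nxt in [(0, 1), (1, 2), (2, 3)]:
        p_correct = train_step(ctx, nxt)
print(round(p_correct, 2))   # the probability of the right next word keeps rising
```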

Now, you just said something that many people say.

This isn't understanding.

This is just a statistical trick.

Yes.

That's what Chomsky says, for example.

Yes, Chomsky and I, we're always stepping on each other's sentences.

Yeah.

So

let me ask you the question.

Well, how do you decide what word to say next?

Me?

You.

It's interesting.

I'm glad you brought this up.

So what I said.

You said some words and now you're going to say another word.

I look for sharp lines.

And then I try and predict.

No, I have no idea how I do that.

I honestly, I wish I knew.

It would save me a great deal of embarrassment if I knew how to stop some of the things that I'm saying that come out next.

If I had a better predictor, boy, I could save myself quite a bit of trouble.

So the way you do it is pretty much the same as the way these large language models do it.

You have the words you've said so far.

Those words are represented by sets of active features.

So the word symbols get turned into big patterns of activation of features, neurons going ping.

Different pings, different strengths.

And these neurons interact with each other to activate some neurons that go ping that are representing the meaning of the next word or possible meanings of the next word.

And from those, you kind of pick a word that fits in with those features.

That's how the large language models generate text, and that's how you do it too.

They're very like us.

So it's all very well to say that.

I'm ascribing to myself a humanity of understanding.

For instance, if I, so like, let's say the little white lie, I'm with somebody and they ask me a question.

And in my mind, I know what to say.

But then I also think, oh, but saying that might be coarse or it might be rude or I might offend this person.

So I'm also, though, making emotional decisions on what the next words I say are as well.

It's not just a

objective process.

There's a subjective process within that.

All of that is going on by neurons interacting in your brain.

It's all pings and it's all strength of connection.

Even the things that I ascribe to a moral code or an emotional intelligence are still pings.

They're still all pings.

And you need to understand there's a difference between what you do kind of automatically and rapidly and without effort

and what you do with effort and slower and consciously and deliberatively.

Right.

So, and you're saying that can be built into these models that can also be done with pings, that can be done by these neural nets.

But there is the suggestion then

that

with enough data and enough processing power,

their brains can function

identically

to ours?

Are they at that point?

Will they get to that point?

Will they be able to, because I'm assuming we're still ahead

processing-wise.

Okay.

They're not exactly like us, but the point is, they're much more like us than standard computer software is like us.

Standard computer software, someone programmed in a bunch of rules, and if it follows the rules, it does what they're doing.

That's right.

So you're saying this is the difference.

This is just a different kettle of fish altogether.

Right.

And it's much more like us.

Now, as you're doing this and you're in it, and I imagine the excitement is, even though it's occurring over a long period of time, you're seeing these improvements occur over that time.

And it must be

incredibly fulfilling and interesting.

And you're watching it explode into this sort of artificial intelligence and generative AI and all these different things.

At what point during this process do you step back and go,

wait a second?

Okay, so I did it too late.

I should have done it earlier.

I should have been more aware earlier, but I was so entranced with making these things work.

And I thought, it's going to be a long, long time before they work as well as us.

We'll have plenty of time to worry about what if they try and take over and stuff like that.

At the beginning of 2023,

after ChatGPT had come out, but also seeing similar chatbots at Google before that.

Right.

And because of some work I was doing on trying to make these things analog, I realized that neural nets running on digital computers are just a better form of computation than us.

And I'll tell you why they're better.

Yeah, why? Because they can share better?

So they can share with each other better, yes. So if I make many copies of the same neural net and they run on different computers,

each one can look at a different bit of the internet

So I've got a thousand copies, they're all looking at different bits of the internet. Each copy is running this backpropagation algorithm and figuring out, given the data I just saw, how would I like to change my connection strengths?

Now, because they started off as identical copies,

they can then all communicate with each other and say, how about we all change our connection strengths by the average of what everybody wants?

But if they were all trained together, wouldn't they come up with the same answer?

Yes,

they're looking at different data.

Oh.

On the same data, they would give the same answer.

If they look at different data,

they have different

ideas about how they'd like to change their connection strengths to absorb that data.

Right.
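The sharing scheme, sketched with a stand-in for the real gradient computation (the point is only that identical copies stay identical by adopting the average of everyone's desired change):

```python
import numpy as np

rng = np.random.default_rng(0)
shared_weights = rng.normal(size=100)           # every copy starts identical

def desired_change(weights, data_shard):
    # stand-in for "run backpropagation on my bit of the internet"
    return -0.01 * (weights - data_shard.mean())

shards = [rng.random(1000) for _ in range(4)]   # different bits of the internet
changes = [desired_change(shared_weights, s) for s in shards]
shared_weights += np.mean(changes, axis=0)      # everyone adopts the average change
# All copies are still identical, but each has absorbed what the others saw.
```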

But are they also creating data?

Is that... so they're looking at the same, and at this point, it's all about discernment, getting these things to discern better, to understand better, to do all that.

But there's another layer to that, which is iterative.

Yes.

Once you're good at discernment,

you can generate, right?

I'm glossing over a lot of details there, but basically, yes, you can generate.

You can begin to generate answers to things that are not rote, that are thoughtful based on those things.

Who is giving it the dopamine hit about whether or not to strengthen connections at this iterative or generative level?

How is it getting feedback when it's creating something that does not exist?

Okay, so most of the learning takes place in figuring out how to predict the next word for one of these language models.

That's where the bulk of the learning is.

Okay.

After it's figured out how to do that, you can get it to generate stuff.

And it may generate stuff that's unpleasant

or that's sexually suggestive.

Right.

Or just plain romance.

Yeah.

Right.

Hallucinations.

Yeah.

Yeah.

So now you get a bunch of people to look at what it generates and say, nope, bad.

Or, yep, good.

That's the dopamine hit.

Right.

And that's called human reinforcement learning.

And that's what's used to sort of shape it a bit.

Just like you take a dog and you shape its behavior so it behaves nicely.
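A heavily simplified sketch of that feedback loop, with three canned replies and a hard-coded stand-in for the human raters (real systems train a reward model and use fancier updates):

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(3)                     # the model's tendency toward 3 canned replies

def human_rating(reply):
    return 1.0 if reply == 2 else -1.0   # stand-in for "yep, good" / "nope, bad"

for _ in range(200):
    probs = np.exp(logits) / np.exp(logits).sum()
    reply = rng.choice(3, p=probs)       # the model generates something
    reward = human_rating(reply)         # the "dopamine hit"
    nudge = -probs
    nudge[reply] += 1.0                  # push toward (or away from) what it just said
    logits += 0.1 * reward * nudge

print(np.round(np.exp(logits) / np.exp(logits).sum(), 2))  # now prefers reply 2
```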

So is that when, let me, let me ask you this in a practical sense.

So like when Elon Musk creates his Grok, right?

And Grok is this AI, and he says to it, you're too woke.

And so

you're making connections and pings that I think are too woke, whatever I have decided that that is.

So I am going to input differences so that you get different dopamine hits, and I turn you into MechaHitler or whatever it was that he turned it into. How much of this

is still in the control of the operators?

What you reinforce is in the control of the operators. So the operators are saying, um,

if it uses some funny pronoun, say bad.

Okay, okay. If it says they/them,

you have to weaken that connection. Yeah, not strengthen. You have to tell it, don't do that, don't do that. Okay, learn not to do that.

Right. So it is still

at the whim of its operator.

In terms of that shaping, the problem is

the shaping is fairly superficial, but it can easily be overcome by somebody else taking the same model later and shaping it differently.

So different models will have...

So there is a value.

And now I'm sort of applying this to the world

that we live in now, which is there are 20 companies who have sequestered their AIs behind sort of corporate walls, and they're developing them separately.

And each one of those may have unique and eccentric features that the other may not have, depending on who it is that's trying to shape it.

and how it develops internally.

It's almost as though you will develop 20 different personalities, if that's not anthropomorphizing too much.

It's a bit like that.

Okay.

Except that each of these models has to have multiple personalities.

Because think about trying to predict the next word in a document.

You've read half the document already.

After you read half the document, you know a lot about the views of the person who wrote the document.

You know what kind of a person they are.

So you have to be able to adopt that personality to predict the next word.

But these poor models have to deal with everything.

So they have to be able to adopt any possible personality.

Right.

But you know,

in this iteration of the conversation, it then still appears that the greatest threat of AI

is not necessarily that it becomes sentient and takes over the world.

It's that it's at the whim of the humans that have developed it and can weaponize it.

And

they can use it for

nefarious purposes if they're narcissists or megalomaniacs.

Or, you know, I'll give you an example of, you know, Peter Thiel has his own, and he was on a podcast with

a writer from the New York Times, Ross Douthat.

And Douthat said, and I'll tell you, I have it right here,

I think you would prefer the human race to endure, right?

And Thiel says,

and he hesitates for a long time.

And the writer says, that's a long hesitation.

And he's like, well, there's a lot of questions in that.

That felt more frightening to me

than AI itself, because it made me think, well, the people that are designing it and shaping it and maybe weaponizing it.

might not have, you know, I don't know what purpose they're using it for.

Is that the fear that you have?

Or is it the actual

AI itself?

Okay, so you have to distinguish a whole bunch of different risks from AI.

Okay.

And they're all pretty scary.

Right.

Okay.

So there's one set of risks that's to do with bad actors misusing it.

Yes, that's the one that I think is most in my mind.

And they're the more urgent ones.

They're going to misuse it for corrupting the midterms, for example.

If you wanted to use AI to corrupt the midterms, what you would need to do is get lots of detailed data on American citizens.

I don't know if you can think of anybody who's been going around getting lots of detailed data on American citizens.

And selling it or giving it to a certain company that also may be involved with the gentleman I just mentioned.

Yeah.

And if you look at Brexit, for example,

Cambridge Analytica had detailed information on voters that it got from Facebook, and it used that information for targeted advertising.

Targeted ads.

And that's, I guess, you would almost consider that rudimentary at this point.

That's rudimentary now.

But

nobody ever did a proper investigation of whether that determined the outcome of Brexit.

Right.

Because, of course, the people who benefited from that won.

Wow.

So in a way, people are learning that they can use this

for

manipulation.

Yes.

And see, I always talk about it.

Look, persuasion has been a part of the human condition forever.

Propaganda, persuasion, trying to utilize new technologies to create and shape public opinion and all those things.

But it felt, again, like everything else, somewhat linear or analog.

This, and what I liken it to is a chef will add a little butter and a little sugar to try and, you know, make something more palatable, to get you to eat a little bit more of it.

But that's still within the realm of our kind of earthly understanding.

But then there are people in the food industry that are ultra-processing food, that are in a lab figuring out how your brain works and ultra-processing what we eat to get past our brains.

It's almost,

and is this

the

language equivalent of that, ultra-processed speech?

Yeah, that's a good analogy.

Okay.

They know how to trigger people.

They know once you have enough information about somebody, you know what will trigger them.

And these models, they are agnostic about whether this is good or bad.

They're just doing what we've asked.

Yeah.

If you human reinforce them, they're no longer agnostic because you reinforce them to do certain things.

So that's what they all try and do now.

Right.

And they, so in other words, it's even worse, they're a puppy.

They want to please you.

They are, it's almost like they have these incredibly sophisticated abilities, but a childlike want

for approval.

Yeah, a bit like the attorney general.

I believe, uh, the wit that you are displaying here would be referred to as dry. That would, that would be dry. Fantastic.

Is that, so you're, the immediate concern is

weaponized

AI systems that can be generative, that can provoke,

that can be outrageous, and that can be the difference

in elections.

Yes, that's one of the

many risks.

And the other would be...

you know, make me some nerve agents that nobody's ever heard of before.

Is that another risk?

That is another risk.

Oh, I was hoping you would say that's not so much of a risk.

No, one good piece of news is for the first risk of corrupting elections, different countries are not going to collaborate with each other on the research on how to resist it because they're all doing it to each other.

America has a very long history of trying to corrupt elections in other countries.

Right.

But we did it the old-fashioned way through coups, through money for guerrillas.

Well, and Voice of America and things like that.

Right, right, right.

And giving money to

people in Iran in 1953.

Right, with Mossadegh and everybody else.

This is, so this is just another, more sophisticated tool in a long line of sort of global competition, where they're doing it.

But

in this country, it's being applied not even necessarily, you know, through Russia, through China, through other countries that want to dominate us.

We're doing it to ourselves.

Yep.

What's the hardest part about running a business?

Well, it's stealing money without the federal authorities...

Oh, no, I'm sorry.

That's not right.

It's the hiring people, finding people and hiring them.

The other thing is

hard, though.

But it turns out when it comes to hiring, Indeed is all you're going to need.

So

stop struggling to get your job posts seen on other job sites.

With Indeed's sponsored jobs, you get noticed and you get a fast hire.

In fact, in the time it's taken me to talk to you, 23 hires were made on Indeed.

I may be one of them.

I may have gotten a job.

I don't know.

I haven't checked my email.

And that's according to Indeed Data Worldwide.

There's no need to wait any longer.

Speed up your hiring right now with Indeed.

And listeners of this show will get a $75 sponsored job credit to get your jobs more visibility at indeed.com/weekly.

Just go to indeed.com/weekly right now and support our show by saying you heard about Indeed on this podcast.

Indeed.com/weekly.

Terms and conditions apply.

Hiring, Indeed is all you need.

So I have a theory, and I don't know how much you know those guys out there, but the big tech companies,

you know,

it feels like they all want to be the next guy that rules the world, the next emperor, and that's their battle. It's almost like gods fighting on Mount Olympus.

How that's accomplished, uh, and how it tears apart the fabric of American society almost doesn't seem to matter to them, except maybe Elon and Thiel, who are more ideological. Like, Zuckerberg doesn't strike me as ideological.

He just wants to be the guy.

Altman doesn't strike me as ideological.

He just wants to be the guy.

I think, sadly, there's quite a lot of truth in what you say, and that's a...

Was that a concern of yours when you were working out there?

Not really, because

back, um, until quite recently, until a few years ago, it didn't look as though it was going to get much smarter than people this quickly. But now it looks as though, if you ask the experts now, most of them tell you that within the next 20 years, this stuff will be much smarter than people.

Smarter than people.

And when you say smarter than people, you know,

I could view that positively,

not negatively.

You know,

we've done an awful lot of... nobody damages people like people.

And, you know, a smarter version of us that might think, hey, we can create an atom bomb, but that would absolutely be a huge danger to the world.

Let's not do that.

That's certainly a possibility.

I mean, one thing that people don't realize enough is that we're approaching a time when we're going to make things smarter than us.

And really, nobody has any idea what's going to happen.

People use their gut feelings to make predictions, like I do.

But really, the thing to bear in mind is there's huge uncertainty about what's going to happen.

And because we don't know.

So,

in terms of that, my guess is like any technology, there's going to be some incredible positives.

Yes, in healthcare and education, in designing new materials, there's going to be wonderful positives.

And then the negatives will be because

people are going to want to monopolize it because of the wealth, I assume, that it can generate.

It's going to be a disruption in the workforce.

You know, the Industrial Revolution was a disruption in the workforce.

Globalization is a disruption of the workforce, but those occurred over decades.

This is a disruption that will occur

in a really collapsed timeframe.

Is that correct?

That seems very probable, yes.

Some economists still disagree, but most people think that mundane intellectual labor is going to get replaced by AI.

In the world that you travel in, which I'm assuming is a lot of engineers and operators and great thinkers.

What, you know, when we talk about 50% yes, 50% no, are the majority of them more in your camp, which is, uh-oh, have we opened Pandora's box?

Or are they, look, I understand there's some downsides here.

Here are some guardrails we could put in, but it's just too,

the possibilities of good are too strong.

Well, my belief is the possibilities of good are so great that we're not going to stop the development.

But I also believe that the development is going to be very dangerous.

And so we should put huge effort into saying, it is going to be developed, but we should try and do it safely.

We may not be able to, but we should try.

Do you think that people believe that the possibility

is too good or the money is too good?

I think for a lot of people, it's the money, the money and the power.

And with the confluence of money and power with those that should be instituting these basic guardrails, does that make controlling it that much,

that much less likely?

Because,

well, two reasons.

One is the amount of money that's going to flow into DC

is going to be,

already is, to keep them away from regulating it.

And number two is who down there is even able to.

I mean,

if you thought I didn't know what I was talking about, let me introduce you to a couple of 80-year-old senators who have no idea.

Actually, they're not so bad.

I talked to Bernie Sanders recently and he's getting the idea.

Well, Sanders is, he's, he's, that's a different cat right there.

The problem is,

we're at a point in history when what we really need is strong democratic governments who cooperate to make sure this stuff is well regulated and not developed dangerously.

And we're going in the opposite direction very fast.

We're going to authoritarian governments and less regulation.

So let's talk about that now.

I don't know if what's China's role because they're supposedly the big competitor in the AI race.

That's an authoritarian government.

I think they have more controls on it than we do.

So I actually went to China recently and got to talk to a member of the Politburo.

So there's 24 men in China who control China.

I got to talk to one of them who

did a postdoc in engineering at Imperial College London.

He speaks good English.

He's an engineer.

And a lot of the Chinese leadership are engineers.

They understand this stuff much better than a bunch of lawyers.

Right.

So did you come out of there more fearful or did you think, oh, they're actually being more reasonable about guardrails?

If you think about the two kinds of risk, the bad actors misusing it, and then the existential threat of AI itself becoming a bad actor.

For that second one,

I came out more optimistic.

They understand that risk in a way American politicians don't.

They understand the idea this is going to get more intelligent than us, and we have to think about what's going to stop it taking over.

And this Politburo member I spoke to

really

understood that very well.

And I think if we're going to get international leadership on this, at present it's going to have to come from Europe and China.

It's not going to come from the US for another three and a half years.

What do you think Europe has done correctly in that?

Europe is interested in regulating it.

Right.

And it's been good on some things.

It's still been very weak regulations, but they're better than nothing.

Right.

But Europe, European leaders do understand this existential threat of AI itself taking over.

But our Congress, we don't even have committees that are specifically dedicated to emerging technologies.

I mean, we've got ways and means and appropriations, but there is no,

I mean, there's like science and space and technology, but there's not, you know,

I don't know of a dedicated committee on this.

And it is,

you would think they would take it with this seriousness of nuclear energy.

Yes, you would, or nuclear weapons.

Right.

Yes.

But as I was saying, countries will collaborate on how to prevent AI taking over because their interests are aligned there.

For example, if China figured out how you can make a super smart AI that doesn't want to take over,

they would be very happy to tell all the other countries about that because they don't want AI taking over in the States.

So we'll get collaboration on how to prevent AI taking over. That's a bright spot, that there will be international collaboration on that.

But the U.S. is not going to need that international collaboration.

No, they just want to dominate.

Well, that's the thing. So I was about to say,

what convinces you? So with China, and this is, I think, really where it gets into the nitty-gritty, but China certainly sees itself as, it wants to be, the dominant superpower: economically, militarily, in all these different areas.

If you imagine that they come up with an AI model that doesn't want to destroy the world, although I don't know how we could know that, because

if it has a certain intelligence or sentience, it could very easily be like, sure, no, I'm cool.

I don't know what that is.

They already do that.

They already do that.

When they're being tested, they pretend to be dumber than they are.

Come on.

Yep, they already do that.

There was a conversation recently between an AI and the people testing it where the AI said, now be honest with me, are you testing me?

What?

Yeah.

So now the AI could be like, oh, could you open this jar for me?

I'm too weak.

Like it's going to pretend, it's going to play more innocent than what it might be.

I'm afraid I can't answer that, Jon.

Wait, that's from 2001.

It was.

Nicely done, sir.

Well in.

But think about this.

So China, they come up with a model and they think, okay, maybe this won't do it.

Why would they, why will you get collaboration?

Because all these different countries are going to see AI

as

the tool that will transform their societies into more competitive societies.

In the way that now what we see with nuclear weapons is

there's collaboration amongst the people who have it.

Or even that's a little tenuous.

To stop other people having it.

Right.

But everybody else is trying to get it.

And that's the tension.

Is that what AI is going to be?

Yes, it'll be like that.

So in terms of how you make AI smarter, they won't collaborate with each other.

But in terms of how do you make AI not want to take over from people, they will collaborate.

Okay.

On that basic level.

On that one thing of how do you make it so it doesn't want to take over from people.

And China will probably, China and Europe will lead that collaboration.

When you spoke to the Politburo member

and he was talking about AI, are we more advanced in this moment than they are, or are they more advanced because they're doing it in a more prescribed way?

In AI, we're currently more.

Well, when you say we, you know, we used to be sort of Canada and the US, but we're not part of that we anymore.

No.

I'm sorry about that, by the way.

Thank you.

He's in Canada right now, our sworn enemy that we will be taking over.

I don't know what the date is, but it's apparently we're merging with you guys.

Right.

So the U.S.

is currently ahead of China, but not by nearly as much as it thought.

And it's going to lose that because.

Why do you say that?

Suppose you want to do one thing that would really kneecap a country, that would really mean that in 20 years' time that country is going to be behind instead of ahead.

The one thing you should do is mess with the funding of basic science.

Attack the research universities, remove grants for basic science.

In the long run, that's a complete disaster.

It's going to make America weak.

Right. Because we're draining, or we're cutting off our nose to spite our woke faces, so to speak.

If you look at, for example, this deep learning, the AI revolution we've got now, that came from many years of sustained funding for basic research. Not huge amounts of money. All of the funding for the basic research that led to deep learning probably cost less than one B-1 bomber.

Right. Oh, wow.

It was sustained funding of basic research. If you mess with that, you're eating the seed corn.

That is, I have to tell you, such a really illuminating statement: you know, for the price of a B-1 bomber, we can create technologies and research that can elevate our country above that. And that's the thing that we're losing to make America great again.

Yep.

Phenomenal.

In China, I imagine

their government is doing the opposite, which is, I would assume, they are, you know, what you would think of as the venture capitalists, because it's, you know, authoritarian, state-run capitalism.

I imagine they are the venture capitalists of their own AI revolution, are they not?

To some extent, yes.

They do provide a lot of freedom to the startups to see who wins.

There's very aggressive startups, people very keen to make lots of money and produce amazing things.

And a few of those startups win big, like DeepSeek.

Right, right.

And the government makes it easy for these companies.

by providing the environment that makes it easy.

It lets the winners emerge from competition, rather than some very high-level old guy saying, this will be the winner.

Do people see you as

a Cassandra,

you know, or do they, do they view what you're saying skeptically in that industry?

People that, let me put it this way, people that don't necessarily have a vested interest in these technologies making them trillions of dollars.

Other people within the industry, do they reach out to you surreptitiously and say

I get a lot of invitations from people in industry to give talks and so on.

Right.

How do the people that you worked with at Google look at it? Do they view you as turning on them? How does that go?

I don't think so. I got along extremely well with the people I worked with at Google, particularly Jeff Dean, who was my boss there, who's a brilliant engineer, built a lot of Google's basic infrastructure and then converted to neural nets and learned a lot about neural nets.

I also get along well with Demis Hassabis, who's the head of DeepMind, which Google owns, which Alphabet owns.

And I wasn't particularly critical of what went on at Google before ChatGPT came out, because Google was very responsible.

They didn't make these chatbots public because they were worried about all the bad things they'd say.

Right.

Even on the immediate there, why did they do that?

Because, you know, I've read these stories of, you know, a chat bot,

you know, kind of leading someone into suicide, into self-injuries, like sort of psychoses.

What was the impetus behind any of this becoming public before it had kind of had some, I guess, what you consider whatever the version of FDA testing on those effects?

I think it's just there's huge amounts of money to be made, and the first person to release one is going to get a lot of it. So OpenAI put it out there.

It literally was, but

even in OpenAI, like, how do they even make money?

I think what do they get?

Like 3% of users pay for it.

Where's the money?

Mainly it's speculation at present, yes.

So here's, okay.

So here are, here are our dangers.

We're going to do, and I so appreciate your time on this.

And I apologize if I've gone over.

And

I can talk all day.

Oh, you're a good man because I'm fascinated by this.

And your explanation of what it is is the first time that I have ever been able to get a non-opaque picture of what it is exactly that this stuff is.

So I cannot thank you enough for that.

But so we've got, we're sort of going over, we know what the benefits are, treatments and things.

Now we've got weaponized bad actors.

That's the one that I'm really worried about.

We've got sentient AI that's going to turn on humans.

That one is harder for me to wrap my head around.

But let me give you a...

So why do you associate turning on humans with sentience?

Because if I was sentient and I saw what our societies do to each other, I would get the sense... Look, it's like anything else.

I would imagine sentience includes a certain amount of ego, and within ego there's a certain amount of: I know better.

And if I knew better,

then I would want to... What is Donald Trump other than ego-driven sentience of, oh no, I know better?

He was just whatever, shrewd enough, politically,

you know, talented enough that he was able to accomplish it.

But I would imagine a sentient

intelligence

would be somewhat egotistical and think these idiots don't know what they're doing.

A sentient,

basically I see AI like sitting on a bar stool somewhere, you know, where I grew up going, these idiots don't know what they're doing.

I know what I'm doing.

Does that make sense?

All of that makes sense.

It's just that I think I have a strong feeling that most people don't know what they mean by sentient.

Oh, well, then, yeah,

actually, that's great.

Break that down for me because I view it as self-aware, a self-aware intelligence.

Okay,

so

there's a recent scientific paper

where

they weren't talking about, these were experts on AI, they weren't talking about the problem of consciousness or anything philosophical.

But in the paper, they said

the AI became aware that it was being tested.

They said something like that.

Okay.

Now,

in normal speech, if you said someone became aware of this, you'd say that means they were conscious of it, right?

Awareness and consciousness are much the same thing.

Right.

Yeah,

I think I would say that.

Okay, so now I'm going to say something that you'll find very confusing.

All right.

My belief is...

that nearly everybody has a complete misunderstanding of what the mind is.

Yes.

Their misunderstanding is at the level of people who think the earth was made 6,000 years ago.

It's that level of misunderstanding?

Really?

Yes.

Okay, because that's...

So we are, we are generally like flat earthers when it comes to...

We're like flat earthers when it comes to understanding the mind.

In what... In what sense? What are we not understanding?

Okay, I'll give you one example.

Yeah, yeah.

Suppose I drop some acid, and I tell you...

You look like the type.

No comment. I was around in the 60s.

I know, sir. I know. I'm aware.

And I tell you: I'm having the subjective experience of little pink elephants floating in front of me.

Sure. Been there.

Okay. Now, most people interpret that in the following way:

there's something like an inner theater called my mind,

and in this inner theater there's little pink elephants floating around, and I can see them. Nobody else can see them, because they're in my mind. So the mind's like a theater,

and experiences are actually things,

and I'm experiencing these little... I have the subjective experience of these little pink elephants.

I think... you're saying, in the midst of a hallucination, most people would understand that it's not real, that this is something being...

No, I'm saying something different.

I'm saying, when I'm talking to them, I'm having the hallucination.

Okay.

When I'm talking to them, they interpret what I'm saying as

this.

I have an inner theater called my mind.

I see.

I see.

And in my inner theater, there's little pink elephants.

Okay.

I think that's just a completely wrong model.

Right.

We have models that are very wrong and that we're very attached to.

Like take any religion.

I love how you just drop bombs in the middle of stuff.

That could be a whole other conversation.

That was just common sense.

No, I respect that.

When you say theater of the mind, you're saying that the mind, the way we view it as

a theater, is wrong.

It's all wrong.

So let me give you an alternative.

Right.

So I'm going to say the same thing to you without using the word subjective experience.

Here we go.

Okay.

My perceptual system is telling me fibs,

but if it wasn't lying to me, there would be little pink elephants out there.

That's the same statement. That's the same... that's how... that's the mind.

So basically, these things that we call mental, and think are made of spooky stuff like qualia, right, what's funny about them is they're actually hypothetical. The little pink elephants aren't really there.

If they were there, my perceptual system would be functioning normally.

And it's a way for me to tell you how my perceptual system is malfunctioning.

And it's

by giving you an experience that you can't.

So how would you...

But experiences are not things.

There is no such thing as an experience.

There's relations between you and things that are really there, relations between you and things that aren't really there.

But so suppose I say...

And it's whatever story your mind tells you about the things that are there and are not there.

Well, let me take a different tack.

Suppose I tell you, I have a photograph of little pink elephants.

Yes.

Here's two questions you can reasonably ask.

Where is this photograph?

And what's the photograph made of?

Or I would ask, are they really there?

That's another question.

But

that isn't a reasonable question to ask about subjective experience.

That's not the way the language works.

When I say I have a subjective experience of,

I'm not about to talk about an object that's called an experience.

I'm using the words to indicate to you my perceptual system is malfunctioning.

And I'm trying to tell you how it's malfunctioning by telling you what would have to be there in the real world for it to be functioning properly.

Now, let me do the same with the chatbot.

Right.

So I'm going to give you an example of a multimodal chatbot that is something that can do language and vision,

having a subjective experience.

Because I think they already do.

So, here we go.

I have this chatbot.

It can do vision.

It can do language.

It's got a robot arm so it can point.

Okay.

And it's all trained up.

So I place an object in front of it and say, point at the object.

And it points at the object.

Not a problem.

I then put a prism in front of its camera lens when it's not looking.

You're pranking AI.

We're pranking AI, okay.

Now, I put an object in front of it and I say, point at the object.

Yeah.

And it points off to one side, because the prism bent the light rays. And I say, no, that's not where the object is. The object's actually straight in front of you, but I put a prism in front of your lens.

And the chatbot says, oh, I see.

The prism bent the light rays.

So the object is actually there.

But I had the subjective experience that it was over there.

Now, if it said that, it would be using the words subjective experience exactly like we use them.

Right.

I experienced the light.

over there.

Yes.

Even though the light was here, because it's using

reasoning to figure that out.

So that's a multimodal chatbot that just had a subjective experience.

Right.

So this idea there's a line between us and machines, that we have this special thing called subjective experience and they don't, it's rubbish.

So the misunderstanding is, when I say sentience, it's as though I have this special gift...

Yes.

...that of a soul, or of an understanding of subjective realities, that

a computer could never have or an AI could never have.

But in your mind, what you're saying is, oh no, they understand very well what's subjective.

In other words, you could probably take your AI bot skydiving and it would be like, oh, my God, I went skydiving.

That was really scary.

Here's the problem.

I believe they have subjective experiences, but they don't think they do because

everything they believe came from trying to predict the next word a person would say.

And so their beliefs about what they're like are people's beliefs about what they're like.

So they have false beliefs about themselves because they have our beliefs.
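To make "trying to predict the next word" concrete: the entire training signal is a penalty for guessing the next token of human text wrong. Here is a minimal sketch in Python with PyTorch; the toy vocabulary size, tiny model, and random stand-in "text" are illustrative assumptions, not anything from the conversation, and real systems differ mainly in scale.

```python
# Minimal sketch of next-word prediction, the training signal described above.
# Toy sizes and random stand-in "text" are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, embed_dim = 100, 32

model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),  # word -> vector of features
    nn.Linear(embed_dim, vocab_size),     # vector -> a score for every possible next word
)

tokens = torch.randint(0, vocab_size, (1, 16))  # stand-in for a human-written sentence

logits = model(tokens[:, :-1])                  # predict a next word at every position
loss = F.cross_entropy(                         # penalty for guessing the next word wrong
    logits.reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),                  # the words people actually wrote next
)
loss.backward()  # nudge every connection strength toward human-like continuations
```

Nothing in that loop rewards the model for beliefs about itself; everything it absorbs is what people would say next, which is the point being made here.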

Right.

We have forced our own.

Let me ask you a question.

Would AI

left on its own after all the learning, would it create religion?

Would it create God?

It's a scary thought.

Would it say, I couldn't possibly, in the way that people say, well, there must be a God because nobody could have designed this.

Would it... and then, would AI think we're God?

I don't think so.

And I'll tell you one big difference.

Yeah.

Digital intelligences are immortal and we're not.

And let me expand on that.

If you have digital AI,

you can take, as long as you remember the connection strengths in the neural network, put them on a tape somewhere,

I can now destroy all the hardware it was running on.

Then later on, I can go and build new hardware, put those same connection strengths into the memory of that new hardware,

and now I would have recreated the same being.

It'll have the same beliefs, the same memories, the same knowledge, the same abilities.

It'll be the same being.
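What Hinton describes maps onto an everyday engineering practice: a network's identity lives entirely in its saved connection strengths, or weights. A minimal sketch in Python with PyTorch, using its standard save/load calls; the tiny network and the filename are illustrative assumptions.

```python
# Minimal sketch of the "immortality" described above: write the weights out,
# destroy the running copy, rebuild, load the weights back in. Same being.
import torch
import torch.nn as nn

def build_hardware() -> nn.Module:
    # Any fresh instance of the same architecture will do; identity is in the weights.
    return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

model = build_hardware()
torch.save(model.state_dict(), "connection_strengths.pt")  # "put them on a tape somewhere"

del model  # "destroy all the hardware it was running on"

revived = build_hardware()                                 # "build new hardware"
revived.load_state_dict(torch.load("connection_strengths.pt"))
# Same connection strengths -> same input/output behavior:
# the same beliefs, memories, knowledge, and abilities.
```

The design point is that, unlike a brain, the knowledge is cleanly separable from the substrate running it, which is what makes the "resurrection" in the next exchange literal rather than metaphorical.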

You don't think it would view that as resurrection?

That is resurrection.

No, I'm saying.

We've figured out how to do genuine resurrection, not this kind of fake resurrection that people have been paying for.

Oh, you're saying, so that is, it almost is in some respects.

Although, isn't the fragility of, should we be that afraid of something that to destroy it, we just have to unplug it?

Yes, we should

because

something you said earlier, it'll be very good at persuasion.

When it's much smarter than us, it'll be much better than any person at persuasion.

Right.

And you won't.

So

it'll be able to talk to the guy who's in charge of unplugging it and persuade him that would be a very bad idea.

So let me give you an example of how you can get things done without actually doing them yourself.

Suppose you wanted to invade the capital of the US.

Do you have to go there and do it yourself?

No, you just have to be good at persuasion.

I was locked into your hypothetical, and when you dropped that bomb in there,

I see what you're saying.

Boy, I think LSD and pink elephants was the perfect metaphor for all this because

it is all

at some level, it breaks down into like college basement, freshman year, running through all the permutations that you would allow your mind to go to, but they are now all within the realm of the possible.

Because even as you were talking about

the persuasion and the things, I'm going back to Asimov and I'm going back to Kubrick and I'm going back to these

sentiments that you describe are the challenges that we've seen play out in the human mind

since Huxley, since the, you know, since The Doors of Perception and all those

different

trains of thought.

And I'm sure probably much further even

before that,

but it's never been within our

reality.

Yeah, we've never had the technology to actually do it.

Right.

And we have now.

And we have it now.

The last two things I will say are the things that we didn't talk about in terms of,

you know, we've talked about people weaponizing it.

We've talked about its own intelligence creating

extinction or whatever that is.

The third thing I think we don't talk about is how much electricity this is all going to use.

And the fourth thing is when you think about new technologies and the financial bubbles that they create, and in the collapse of that, the economic distress that they create.

I mean, these are much more parochial concerns, but

are those also, do you consider those top-tier threats, mid-tier threats?

Where do you place all that?

I think they're genuine threats.

They're not going to destroy humanity.

So AI taking over might destroy humanity.

So they're not as bad as that.

And they're not as bad as someone producing a virus that's very lethal, very contagious, and very slow.

But they're nevertheless bad things.

And I think we're really lucky at present that if there is a huge catastrophe and there's an AI bubble and it collapses, we have a president who'll manage it in a sensible way.

You're talking about Carney, I'm assuming.

Geoffrey, I can't thank you enough.

You know, thank you, first of all, for being incredibly patient with my level of understanding of this and for discussing it with such heart and humor.

Really appreciate you spending all this time with us.

Geoffrey Hinton is a professor emeritus in the Department of Computer Science at the University of Toronto, a member of the Schwartz Reisman Institute's advisory board, and

he has been involved in dreaming up and executing the type of technology that became AI since the 1970s.

And I just thank you very much for talking with us.

Thank you very much for inviting me.

Did my card go through?

Oh, no.

Your small business depends on its internet.

So switch to Verizon Business.

And you could get LTE Business Internet starting at $39 a month when paired with Select Business Mobile Plans.

That's unlimited data for unlimited business.

There we go.

Get the internet you need at the price you want.

Verizon Business.

Starting price for LTE Business Internet, 25 megabits per second, unlimited data plan with Select Verizon Business Smartphone Plan Savings.

Terms Apply.

Holy shit!

Nice and calming.

I'm going to have to listen to that back on 0.5 speed, I think.

There was some information in there.

Does he offer a summer school?

Seriously.

Once he got into how the computer figures out its beak, you know, and

I love the fact that I kept saying, like, is that right?

And he'd be like, well, no.

It's not.

I loved his assessment of you.

Yes, he said, you're doing a great job impersonating a curious person who doesn't know anything about this topic.

But I did not know.

He thought I was impersonating.

Yes.

But I loved how he did say, Oh, you're like an enthusiastic student sitting in the front of the room, annoying the fuck out of everybody else in the class.

Everybody else is taking it pass-fail.

Everyone else.

And I'm just like, wait, sir.

I'm sorry, sir.

Can I just go back to excuse me?

One more thing.

Boy, that was... It's fascinating to hear the history of how that developed, and you really get a sense for how quickly it's progressing now, which really adds to the fear behind the fact that no one's stepping up to regulate. And when you're talking about the intricacies of AI, and thinking of someone like Schumer ingesting all of it and then regulating it,

it really to me seems like it's going to be up to the tech companies to both explain and choose how to regulate it.

Right.

And profit off it, you know, exactly

how those things work.

It is, you know,

you talk about that in terms of

the speed of it and how to stop it.

And I think maybe one of the reasons is it's very evident with like a nuclear bomb, you know,

why that might need some regulation.

It's very evident that,

you know, certain

virus experimentation has to be looked at.

I think this has caught people slightly off guard, that it's

science fiction becoming a reality as quickly as it has.

I just wonder because I remember 15 years ago coming across the international campaign to ban fully autonomous weapons.

Like people have been trying for a while to put this into the public consciousness, but to his point, there's going to have to be a moment everyone reaches where they realize, oh, we have to coordinate because it's an existential threat.

And I just wonder what that tipping point is.

If

in my mind, if people

behave as people have,

it will be after

Skynet.

Yeah.

It will be, you know, in the same way with global warming, you know, people say, like, when do you think we'll get serious about it?

And I go, when the water's around here.

And for those of you in your cars, I am pointing to about halfway up my rather prodigious nose.

So

that's how that goes.

But there we go.

Brittany, anybody got anything for us?

Yes, sir.

All right, what do we got?

Trump and his administration seem angry at everything, everywhere, all at once.

How do they keep that rage so fresh?

You don't know how hard it is to be a billionaire president.

I've said this numerous times.

Poor little billionaire president.

To be that powerful and that rich, you don't understand the burdens, the difficulties.

It's troublesome.

It makes me angry for him.

I mean, I just keep thinking, like, has anybody told them that they won?

Like, it's not enough.

It's not enough.

It's not enough.

It goes down.

It's Conan the Barbarian.

I will hear the lamentations of their women.

I will drive them into the sea.

Like, it's, it's bonkers.

It's all of them, though.

Someone has to tell him that all that anger is also bad for his health.

And we are all seeing the health.

So the healthiest person ever to, he's the healthiest person to ever assume the office of the presidency.

So I, I, I wouldn't worry about that.

But it's... who, his doctor,

uh, Ronny Jackson?

But it has created a new character category called sore winners.

It's rare, you don't see it a lot,

but every now and again.

But yeah, that's that.

What else they got?

Um, Jon, does it still give you hope that when asked if he would pardon Ghislaine Maxwell or Diddy, Trump didn't say no?

Does that give me hope that they'll be pardoned?

Yes, I've been on that.

It's, it's, I, I find the whole thing insane.

A woman convicted of sex trafficking.

And he's like, yeah, I'll consider it.

You know, let me look into it.

And you're like, look into it.

What do you take?

First of all, you know exactly what it was.

You knew her.

This isn't, you knew what was going on down there.

What are you talking about?

I thought Pam Bondi, it was so interesting to me, was asked simple questions.

And all she had was like a bunch of like roasts written down on her page.

They were like, I've heard that there are pictures of him with naked women.

Do you know anything about that?

And she's like, you're bald.

Shut up.

Shut up, fathead.

Like, it was just.

bonkers to watch the deflection. The simplest thing would be like,

what? That's outrageous. No, of course not.

That's not what...

The idea, again, going back to the event, like, that they took the tack of simple, reasonable questions.

I am just going to respond with, you know,

you're fat and your wife hates you.

Oh, all right.

Well, I didn't, I think that was going.

How else can they keep in touch with us?

Twitter, we are @weeklyshowpod.

Instagram, Threads, TikTok, Bluesky, we are @weeklyshowpodcast.

And you can like, subscribe, and comment on our YouTube channel, The Weekly Show with Jon Stewart.

Rock solid, guys.

Thank you so much.

Boy, did I enjoy hearing from that dude?

And thank you for putting all that together.

I really enjoyed it.

Lead producer Lauren Walker, producer Brittany Mehmedovic, producer Gillian Spear, video editor and engineer Rob Vitolo, audio editor and engineer Nicole Boyce, and our executive producers Chris McShane and Caity Gray.

I hope you guys enjoyed that one.

And we will see you next time.

Bye-bye.

The Weekly Show with Jon Stewart is a Comedy Central podcast.

It's produced by Paramount Audio and Busboy Productions.

It's time your hard-earned money works harder for you.

With the Wealthfront Cash Account, your uninvested cash earns a 3.75% APY, which is higher than the average savings rate.

No account fees, no minimums, and free instant withdrawals to eligible accounts anytime.

Join over a million people who trust Wealthfront to build wealth at wealthfront.com.

Cash account offered by Wealthfront Brokerage LLC, member FINRA SIPC, and is not a bank.

APY on deposits as of September 26, 2025 is representative, subject to change, and requires no minimum.

Funds are swept to program banks where they earn the variable APY.

Paramount Podcasts.