Ep 185 | Why Experts Are Suddenly Freaking OUT About AI | Tristan Harris | The Glenn Beck Podcast

1h 14m
To his surprise, Glenn has found that many artificial intelligence experts would rather play politics than warn the world about what’s coming. AI is this generation’s atom bomb, but experts like Google alumnus Geoffrey Hinton have refused to speak with Glenn. But in this episode, Glenn speaks with one of the few who will. As the co-founder and executive director of the Center for Humane Technology, Tristan Harris has devoted himself to warning us about the dangers of AI. The Atlantic described him as “the closest thing Silicon Valley has to a conscience,” and his Netflix Original docuseries, “The Social Dilemma,” documents the devastating power of social media platforms like Facebook and Instagram. But AI, he predicts, will be even more disruptive. Tristan tells Glenn about what the future of AI can easily look like if it isn’t reined in now: more Chinese infiltration, a rise in teenage suicide, and a creepy Big Tech race toward AI intimacy with humans. But is government regulation the best solution? As Tristan points out, this question needs to be answered right now by ALL of us. More and more often, artificial intelligences are developing minds of their own — with frighteningly god-like intelligence and power. Tristan describes this as “summoning the demon,” asking Glenn why we should let five people in Silicon Valley decide for the rest of humanity. If we’re not careful, Tristan tells Glenn, this could be the final test of our species.

SPONSORS:

Home title fraud is growing 2.5x faster than credit card fraud. You could be a victim and not even know it. Visit https://HomeTitleLock.com and use the promo code BECK to get 30 days of free protection.

Right now, you can save $200 on an EdenPURE Thunderstorm Air Purifier 3-pack for whole-home protection. Just go to http://edenpuredeals.com and enter discount code GLENN.

Better Spectacles Go to https://BetterSpectacles.com/BECK now to schedule a Tele-Optical appointment. You don’t even have to leave the comfort of your home. They’re offering an introductory 61% off of their progressive eyewear plus free handcrafted Rodenstock frames.

 My Patriot Supply is the nation’s largest preparedness company. Go to https://mypatriotsupply.com and when you buy their three-month emergency food kit, which lasts up to 25 years in storage, you’re going to get a bonus package of crucial survival gear – worth over $200 – for free!
Learn more about your ad choices. Visit megaphone.fm/adchoices

Listen and follow along

Transcript

I get so many headaches every month.

It could be chronic migraine, 15 or more headache days a month, each lasting four hours or more.

Botox, onobotulinum toxin A, prevents headaches in adults with chronic migraine.

It's not for those who have 14 or fewer headache days a month.

Prescription Botox is injected by your doctor.

Effects of Botox may spread hours to weeks after injection, causing serious symptoms.

Alert your doctor right away, as difficulty swallowing, speaking, breathing, eye problems, or muscle weakness can be signs of a life-threatening condition.

Patients with these conditions before injection injection are at highest risk.

Side effects may include allergic reactions, neck, and injection, site pain, fatigue, and headache.

Allergic reactions can include rash, welts, asthma symptoms, and dizziness.

Don't receive Botox if there's a skin infection.

Tell your doctor your medical history, muscle or nerve conditions, including ALS Lou Gehrig's disease, myasthenia gravis or Lambert Eaton syndrome, and medications, including botulinum toxins, as these may increase the risk of serious side effects.

Why wait?

Ask your doctor.

Visit BotoxChronicMigraine.com or call 1-800-44-BOTOX to learn more.

more.

The invention of the ship

was also

the invention of the shipwreck.

What does that mean in the world of artificial intelligence?

Well, it means a lot of things, but maybe most of all, it means that human beings are in a race to develop something that we know nothing about.

What does a shipwreck look like with AI?

We don't know.

We can't begin to speculate.

Look at the clear political bias being shoved into ChatGPT.

They are teaching it to be biased.

When it's fully unleashed, what does that mean?

We are dealing with a new life form.

We will be the creator of it.

Where are the ethics?

There are none because it's a race.

By the way, China calls their version of AI Skynet.

There is a global Manhattan Project to be the first that unleashes artificial intelligence, and we are naming it after murder machines, incorporating it into weapons of war and teaching it to be biased.

Ethics

is sparse, clearly not part of the checklist, or at least maybe it's just common sense.

Today's guest is the co-founder and executive director for the Center for Humane Technology.

I've been watching him for quite a while and we've had him on before.

I have tremendous respect for him.

He got his start in Silicon Valley as a design ethicist at Google.

He was tasked to finding a way to ethically wield this influence over 2 billion people's thoughts.

Many people first encountered him on the Netflix original docuseries, The Social Dilemma, which documents the devastating power of social media and the engines that propel it.

He first witnessed this while studying at the Stanford Persuasive Technical Lab with the founders of Instagram.

He has taken his warnings to every imaginable mountaintop and valley from 60 Minutes and real time with Bill Maher to CEOs and to Congress.

The Atlantic describes him as the closest thing Silicon Valley has to a conscience.

His message is clear, and it is brutal.

We are facing a mass confrontation with the new reality.

And it could be the end of us.

Our human need for something larger is still below all of the noise and the chaos.

Today, please welcome Tristan Harris.

Before we get into the podcast with Tristan, imagine if at the touch of a button you could make your home smell fresh and clean, and that it was simultaneously purifying the air so it was healthy for you and your family.

It wasn't just covering things up.

Well, it can be done and you don't have to surrender your home to harmful mold, mildew, bacteria, virus, or just the irritating smells.

The Eden Pure Thunderstorm Air Purifier uses oxy technology.

Now that naturally sends out O3 molecules into the air.

These molecules seek out odors and air pollutants and destroy them.

And we're not talking about masking the odors, we're talking about eliminating them.

And right now, you can save $200 on an Eden Pure Thunderstorm 3-pack for the whole home protection.

I have three units in my house and I have an extra one in the refrigerator.

And it's unbelievable what these three units will do to your entire house.

You can get them for under $200, which is an amazing deal.

You might want to put one in your basement, your bedroom, your family room or kitchen, wherever you need clean, fresh air.

My recommendation: put it in your son's room if you have one.

I'm just saying.

Special offer: getting three units under $200 right now.

Just go to EdenPureDeals.com, put the discount code GlennInSave.

200 bucks, that's edenpuredeals.com.

Discount code GLEN shipping is free.

Tristan, I can't thank you enough for coming on the program.

I know you've been on before.

We've been trying to get Jeffrey Hinton on, and his response was,

I do not like Glenn Beck.

And so he wouldn't come on.

I don't care about your politics.

I hope you don't care about mine.

This is something that's facing all of us.

And

the average person has no concept

of

how

deeply dangerous

and how it's going to change everything, all the way to the meaning of life shortly.

So thank you for coming on.

No, absolutely.

This is of universal concern to everyone and

everyone just needs to understand so we can fully make the wisest choices about how we respond to it.

Correct.

I want to take, thank you for the hour you did on YouTube.

That speech is probably the best hour anyone can spend to understand

what we're headed towards.

Can I take you through some of that?

And let's start at, you say this is second encounter.

The first encounter with AI, and this is so critical, was social media.

And that,

the goal of that was to, they say, to connect us all, but it really was to

get you to engage and not go, get you addicted, not going off it.

Okay.

So the problems that we didn't foresee of the first encounter are what?

Yeah, so

we, in this presentation that you're referencing online, which we call the AI dilemma, based on the Netflix documentary, The Social Dilemma, which is ironically, like you said, I mean, the social dilemma was first contact with AI and AI dilemma is second contact.

What do I mean by first contact?

A lot of people might think, well, why would social media be first contact with AI?

When you open up TikTok or you open up Twitter or you open up Facebook or you open up Instagram,

all four of those products.

When you swipe your finger up, it has to figure out what's that next video, that next piece of content, that next tweet, that next TikTok video, it's going to show you.

And when it has to figure out which thing to show you, it activates a supercomputer sitting on a server in Beijing with TikTok or whatever, or sitting on a server in Mountain View with the case of YouTube, or sitting on a server in Menlo Park in the case of Facebook.

And that supercomputer is designed to optimize for one thing, which is what is the next thing that I can show you that will keep you here.

So that produces addiction, that produces shortening attention spans because short, bursty content is going to outperform

these kind of long form, hour-long talk that we gave on YouTube.

And so that was first contact with AI.

So social media is a very simple technology.

And it's.

But people don't understand, too, is that it is individualized, that there is a second self that is running constantly to predict you to do what it wants you to do.

Exactly.

It builds a little profile of you.

In the social dilemma, we kind of visualize that for people.

So it's like, you know, it wakes up this kind of avatar voodoo doll like for you.

Now, this is a metaphor, so I want people to not have it

talking about here.

But, you know, all the clicks you ever make on the internet, all the likes you ever make, every video you ever watch, that's almost like sucking in all of the little, you know, pants and hair clippings and nail filings to add to this voodoo doll, which makes it look and act a little bit more and more like you.

The point of that more accurate model of you is that the more accurate that profile of you gets, the better YouTube is or Facebook is or TikTok is at predicting which video will personalize work for you, which everything makes you angry, which everything makes you scared, which everything makes you tribal in-group, certain that my tribe is right and the other tribe is wrong.

That thing running on society for 10 to 12 years has produced

this kind of unraveling of culture and democracies, right?

Because you have short attention spans, addiction.

So you ask, what's the effects of first contact with AI?

And in that presentation, we list shorting attention spans, addiction, mental health crisis of young people, sexualization of young girls, because girls learn literally that when I take pictures at this angle versus this angle at 14 years old, I get more Instagram likes.

That produced the degradation of culture in the case of TikTok.

It unraveled shared reality.

We need shared reality for democracies to work.

And

that just simple AI pointed at our brains, optimizing for one narrow thing of engagement, wasn't, you know, operating at scale.

was enough to kind of eat democratic societies for breakfast.

And is now it's not that our society doesn't work completely but without a shared reality we can't coordinate we can't have meaningful conversations we can't talk to our fellow countrymen and women and say what's how do we want this to go right um and and so that that was the first contact with ai second content uh contact is

is uh something that is even worse

because its goal is what

well so this is where it gets a little bit more abstract so one thing that listeners your listeners should know um you know glenn we've been using ai for uh many many decades people use it when they siri and google maps and so a lot of people are saying well hold on a second we've been talking about ai forever and it never kind of gets better and siri still mispronounces my name and google maps still is the street that i'm on and so why are we suddenly freaking out about ai now and the founders of the field are saying we need to pause or slow down ai with elon musk and whatever why are we suddenly freaking out now and the thing that that listeners need to know is that basically basically there is a big jump, a leap in the field.

In 2017, I won't bore people with the technical details, but there was a new technical, a new kind of

under-the-hood engine of AI called Transformers that was invented.

It took a few years for it to get going and it kind of really got going in 2020.

What it did is it basically treated everything as a language.

It was a new way of unifying the field.

So when I, for example, was in college, it used to be that in AI, so I studied computer science.

And if you took a class in robotics, robotics, which is one field of AI, that was a different building on campus than the people who were doing speech recognition, which is another form of AI.

That is a different building than the people doing image recognition.

And so, what people need to know is, like, you know, if you think about how much better Siri has got at pronouncing your name, it's still, it's only going like 1% a year, right?

Like, it's going really slowly.

Suddenly, with Transformers, that in 2017, we have this new underneath engine underneath the hood that treats all of it as language.

Images are a language.

Text is a language.

Media is a language, and it starts to just parse the whole world's languages.

Robotics is a language, movement articulation is a language, and it starts to do pattern recognition across these languages, and it suddenly unifies all those fields.

So now, suddenly, instead of people working on different areas of AI, they're all building on one foundation.

So imagine how much faster a field would go if suddenly everybody in a field who had been working at making 1% improvements on disparate areas were now all collaborating to make improvements on one new engine.

And that's why it feels like an exponential curve that we're on right now.

Suddenly, you have ChatGPT-3 that's literally read the entire internet and can spit out, you know, like long-form papers on anything, right?

It ends sixth-grade homework.

It allows you to take someone's voice.

I could take three seconds of your voice, Glenn.

And just by listening to three seconds of your voice, I can now replicate or copy your voice and talk to your bank.

Or I can call your kids and say, hey,

I actually just call your kids and I don't say anything.

And they they say, hey, hello, is someone there?

And when they say, hello, is someone there?

I've got three seconds of their voice.

Now I can call you and say,

Dad,

I forgot my social security number for something I'm filling out at school.

What's my social security number?

And we used to make this as an example of something someone could do.

And since we started, it's actually happening.

Now, I don't freak out people too much.

I want your listeners to ground a little bit that while this is happening, it's not happening everywhere all at once, but it is coming relatively quickly.

And so people should be prepared for how fast it's going to move.

I used to say, because I've been reading Ray Kurzweil since the 90s.

And quite honestly, Tristan,

it's kind of pissed me off that these people who are really, really, really smart and leading this are suddenly surprised that this is happening.

They were in denial, Ray Kurzweil even, has been in denial that any of this stuff could possibly go wrong.

And I mean, geez, I mean, you know, I'm a self-educated man.

Watch a movie from time to time and just think out of the box.

But it's like we've been playing God and

not thinking of anything.

I've been saying that there's going to come a time, and I think we're at it, where the Industrial Revolution took 100 years.

You know, we went from farms to cities with refrigerators and electricities, but it took 100 years.

This is all going to happen in a 10-year period where everything will be changed.

So all of that grind of society is going to happen so fast and it'll just, it's like taking us and just, you know, a 10 or 11 on the Richter scale and it is dumping us out on a table.

Do you agree with that?

Oh, completely.

Yeah.

This is, this is going to happen so much faster.

And I really recommend people, if you want to really understand the double exponential curves, this talk that we gave the AI Dilemma kind of really maps it out.

Because when I say double exponential, it's that nukes, nuclear weapons don't make or invent better nuclear weapons, but AI makes better AI.

AI is intelligence.

Intelligence means I can apply it to itself.

For example, there was a paper that someone found a way to take AI to look at code commits on the internet, and it actually learned how to make code more efficient and run faster.

So for example, there was a paper where you could, AI would look at code and make 25% of that code run two and a half times faster.

If you applied that to its own code, now you have something that's making itself run faster.

So you get an intuition for what happens when I start applying AI to itself.

Again, nukes don't make better nukes, but AI makes better AI, AI makes better bio-weapons, AI makes better cyber weapons, AI makes better information, personally tuned information.

It can recursively self-improve.

And people need to understand that because that will give them an intuition for how fast this is coming.

And to your point, the the Industrial Revolution took 100 years.

This is going to happen just so much faster than people understand.

I mean, literally in our talk, in our presentation, we referenced the fact that one of the co-founders of one of the most significant AI companies called Anthropic,

that Google just poured, I think, another $300 million into, the founder of that company says that basically it's moving faster than he and people in the field are able to track.

Literally not on Twitter.

Every day, you will miss important developments that will literally change the meaning of economic and national security because these things are changing society so quickly.

One of the things that I, that, that

was so breathtaking in your

talk was

this just happened yesterday.

This happened last week.

One of the things that you talked about was

shoot.

Let me see.

I wrote it down.

It was the

sense of self, I think it was a theory of mind.

And

people need to really grasp on to this and forget Siri.

What is theory of mind and tell that story of what just happened?

Yeah, sure.

So theory of mind is something in psychology where it's basically, can I have a model in my mind of what your mind is thinking?

So in the lab at universities, they'll have like a chimpanzee that's looking at a situation where there's a banana left and they sort of figure out, does the chimpanzee chimpanzee have theory of mind?

Can it think about what another chimp is thinking about?

And they do experiments on what level of capacity.

It's like, does a cat understand or think about what you know?

Can your cat model you?

A little bit, right?

But it turns out that, so for example, when I'm talking to you right now, I'm looking at your facial expressions.

And if you're nodding or not, I kind of, or if you look like you're registering, that's theory of mind.

I'm building a model of your understanding, right?

Right.

Okay.

So the question was: does AI, does the new GPT-3 and GPT-4,

can it actually do strategic reasoning?

Does it know what you're thinking?

And can it strategically interact with you in a way that optimizes for its own outcomes?

And there was a paper by Michal Kosinski at Stanford that found that

basically GPT-3 had been out for two years and no one had asked this question.

And they went back and tested the different GPT-2, GPT-3.

These are the different versions of the new OpenAI systems.

And it was growing.

It did no theory of mine for the first several years.

So no theory of mine, no theory of mine, no theory of mine.

And then suddenly out pops out when you pump it just with more data, the ability to actually make strategic reasoning about what someone else is thinking.

And this was not programmed.

This was not something right.

Just popped up.

Correct.

And that's the key thing is that the phrase emergent capabilities.

One of the key things, like Siri, when I pump Siri with more voice information, right?

And I try to train Siri to be better on your phone, Siri doesn't pop out with like suddenly the ability to speak Persian and then suddenly the ability to do math and solve math problems.

That's what Siri does.

It's just you're trying to improve just the pronunciation of voices or something.

In this case, with these new large language models, what's distinct about them is as you pump them with more and more information, we're literally talking about like the entire internet, or suddenly you add all of YouTube transcripts to all of, you know, to GPT-4.

And what happens is it pops out a new capability that no one taught it.

So for example, they didn't train it to ask, to answer questions in Persian, and it was only trained to answer questions in English, but it had looked at Persian text separately, and it out-popped out after another jump in AI capacities, it out-popped out the ability for it to answer questions in Persian.

No one had programmed that in.

So with Theory of Mind, it was the same thing.

No one had programmed in the ability to do strategic thinking about what someone else is thinking, and it gained that capability on its own.

Now, it doesn't, I want to, again, level set for your audience here.

It doesn't mean that it's suddenly woken up and it's sentient and it's Skynet and it's going to go off and run around on the internet.

We're not talking about that.

We're just asking if it's interacting with you, can it do strategic reasoning?

And if you think about like your nine-year-old kid, because GPT-3 had the strategic reasoning, the theory of mind level of a nine-year-old.

So of a nine-year-old kid.

So you think about how strategic a nine-year-old can be with you.

I don't have kids, but I imagine that it's pretty good.

GPT-4 has now the level of an adult.

But by the way, since we did the presentation, now we're up to a full level of adult.

You've got to be kidding me.

You know,

what was breathtaking was a a nine-year-old, when they're trying to manipulate you, which is what theory of mind is,

it gives it ability to manipulate if it wants.

Nine-years old become very dangerous because they're just shooting all different directions.

Now that it's an adult, which took how long to go from nine to an adult?

That was literally since GPT-3 to GPT-4.

So we're talking like, like, you know, a year to two years.

So that's the other thing.

What people need to understand again is the growth rate.

So it would be one thing to say, okay, so Glenn and Tristan, you're telling me, listener, that it can do strategic reasoning of a nine-year-old, but that's not that, that doesn't seem that scary yet.

What people need to look at is how fast it's moving.

And it went from, I think, I have to remember the chart, but I think it's something like a four-year-old theory of mind to nine-year-old theory of mind the next year to now, just as they released GPT-4, it's now at the level of a healthy adult adult in terms of strategic theory of mind.

So

that's in like a year and a half.

So imagine if your nine-year-old in one year went from nine to, you know, 22 in level of strategic reasoning.

For a limited time at McDonald's, get a Big Mac extra-value meal for $8.

That means two all-beef patties, special sauce, lettuce, cheese, pickles, onions on a sesame seed bun, and medium fries, and a drink.

We may need to change that jingle.

Prices and participation may vary.

More with Tristan here in just a second, but first, a word from our sponsor.

If you take a moment every now and then and just peek through the blinds at the world, you might notice that it's on fire a lot of the time.

I don't know what comes next.

I don't think anybody could predict it.

But there is not a shortage of craziness out there.

I mean, look what we're talking about here.

It's incomprehensible and would have been total fiction fantasy.

And almost everything that we deal with every day is in that category of 10 years ago saying that that's never going to happen.

And here we are.

I have always been somebody that believes in what my grandmother used to do because she went through the Great Depression.

She used to can food and we had a year's worth of food down in our basement, our fruit cellar, as she used to call it.

When crisis comes knocking, she a survivor of the Great Depression, knew food.

Don't be at the mercy of the event.

You can take control of your life and be prepared.

Well, if you want a can, that's great, but there's an easier, more modern way to do it, and it's called My Patriot Supply.

They're the nation's largest preparedness company.

Right now, they are offering a special deal.

When you buy their three-month emergency food kit, last up to 25 years in storage, you will

be able to have whatever you need to provide for your family if things get dicey.

And each kit, when you order, you're going to get a bonus package of crucial survival gear worth over $200 for free.

Now, the kit includes breakfast, lunch, dinner, drinks, snacks, 2,000 calories a day.

Your whole family will really like it.

To get your emergency food and your free survival gear worth over $200,

go to mypatriotsupply.com.

That's mypatriotsupply.com.

The first

contact was to get you to engage.

Second contact is getting you to be intimate with it, right?

Well, so there's different things here.

Second contact is really this next wave, again, that are enabled by what's called large language models, these transformers.

It sounds boring and technical, but let's just think of it as like the new advanced AIs that are the last couple years.

And that new foundation is really just,

it produces all of these capabilities everywhere because everything is a language blend.

Think about it.

Law is a language.

So if I have AI that can look at law, I can find loopholes in law.

So now there's papers on this.

I can point AI at law and I can find loopholes in law.

What else is a language?

Code is a language, which means I can point AI at code and say, hey, find me all the cyber vulnerabilities in this code.

You know, that Siemens thing that's running the water plants in your, you know, down the streets?

Find me the code that, you know, that can exploit that water system.

We already have Russia and China that are trying to hack into all of our

water nuclear plants, et cetera.

And we're already in each other's stuff, but this is going to make that a lot easier.

What else is a language?

Media is a language.

I can synthesize voices, text, images, video.

I can fake, people saw the fake image of Trump being arrested, right?

People have seen that.

Now imagine that at scale everywhere.

So society runs on language.

When language gets hacked,

when language gets hacked, democracy gets hacked.

Because the authenticity of language, the authenticity of what we can trust with our eyes and our ears

and our minds,

when that gets hacked, that undermines the foundation of what we can trust.

That's in the media domain.

But it also affects, again, cyber.

It also affects biology, right?

DNA is a language.

If I can hack DNA, I can start to synthesize things in biology.

There are some dangerous capabilities there that you don't want to be having a lot of people have access to.

So the second contact with AI is really this mass enablement.

of lots of different things in our society, disconnected from the responsibility or wisdom.

I know I always say that our friend Daniel Schmachtenberger will say, you can't have the power of gods without the wisdom, love, and prudence of gods.

If your power exceeds your wisdom, you are an unworthy steward of that power.

But we have just distributed godlike powers to hack code, to hack language, to hack media, to hack law, to hack

minds, everything.

And the point that you were making, the other example I missed that you were referencing, the intimacy, is that

one of the other things that's going to happen, and this is already already starting to happen with Snapchat, is they're going to integrate these language-based AIs as agents, as relationships that are intimate in your life.

And so, Snapchat actually did this.

They integrated something called MyAI.

So, this is going to your 13-year-old kids, right?

And puts it's pinned at the top of your friends list.

So, imagine there you are, your kid, you're 13 years old.

You've got your top 10 friends that are in that contact list, and you click on your friend and you start talking to your friend.

But your regular friends, they go to bed at 10 p.m.

at night and they stop talking to you, and you still need emotional support.

You still want to talk about something.

Well, there's this other friend now at the top called My AI, and he's always there, and he'll always talk to you, and he'll always give you advice.

And they will start to develop an intimate relationship with you that will feel more and more intimate than those real friends.

So, here's

the

it's so funny.

Um,

here is the here's the real problem: real relationships are messy, Real relationships are a drag a lot of times because I come home, I'm tired.

Sometimes I don't want to talk about my day, but I certainly don't want to talk about how was your day if it was a drag too.

You know what I mean?

And your friend is going to know you so well, it will know, it will be with you all day.

So it will know your meeting didn't go well.

You had bad news coming in on this, you're worried about your finances.

It will also know the best thing to de-stress you as well.

It might say, you know what?

Your wife and you, you should go to your favorite beach and I just found a great price on it.

I've rearranged your schedule so you both can go for a few days.

And it's always

seemingly correct.

Why would you have a relationship with

anyone?

Yeah.

Well, and this is what that movie, Her, was about, you know,

Joaquin Phoenix and Scarlett Johansson, and it's an earpiece, right?

Where like, basically, we're going to develop, and but you could, you can see that this is just an extension of what's already there with social media.

Like, the reason we're social media is that social connection when you're feeling lonely is always there 24-7.

It feels a lot better than being with myself.

As, you know, Thik Nat Hanh, the Buddhist, who came to Google once, I brought him to Google.

He said,

with technology, it's never been easier to run away from ourselves.

Correct.

And that was true like 2013, 2014.

Now, you're going to have an always-on relationship.

And as you said, you know, real relationships are messy.

This one doesn't have any problems.

You don't ever have to coach it or help it.

He or she, the agent, is the AI agent, doesn't have emotional problems.

It's trying to ask you for help with.

It's just always servicing your needs.

So it's the sort of sugarization, the nicotinization of our primary life relationships.

It sort of does whatever it does to get that intimacy with us.

It is.

And just like with social media, it was a race to the bottom of the brainstem for attention.

In this new realm of AI, it will be a race to intimacy.

Now, Snapchat and Instagram and YouTube will be competing to have that intimate slot in your life because you're not going to have a hundred different AI agents you're going to feel close to.

People are going to, the companies are going to race to build that one intimate relationship.

Because if they get that, that's the foundation of the 21st century profits for them.

It took me a while

to read and really understand

10 years ago what people were saying then,

the ones who were concerned, about the end of free will.

I didn't really understand that.

But once you grab onto that, you have a personal relationship.

They're constantly feeding, they're constantly

sifting through, stacking stories.

You know, they can shift your point of view, even one degree to a hundred degrees over time, and you won't know, is that my free will or have I been molded into this?

Well, you know, people know this saying that we are the product of the five people we spend the most time with, right?

Like you think about what really transforms us, right?

It's the people we have our deepest relationships with.

And, you know, if you have a relationship with an AI, I mean, if I was the Chinese Communist Party and I'm influencing TikTok, I'm going to put an AI in that TikTok and then I build a relationship with all these Americans.

And now I can just like slip the floor by two degrees in one direction or another.

I have remote control over the kind of

relational, you know, foundations of your society if I succeed in that in that effort.

I mean, I already control the information commons.

It'd be like letting the Soviet Union run television program after the entire Western world during the Cold War.

Except it's now more subtle.

It's more subtle.

More subtle.

And it's geared directly to you.

Exactly.

It's personalized to you, calculating what is the perfect next thing I can say.

And because they're going to be competing for engagement again, for attention, just like with social media, as they, if they're competing for attention, what are the AI is going to start to do?

They're going to start to flirt with you.

Maybe they're going to sex sexting with you, right?

There's a company called Replica that actually did create a girlfriend bot.

And they actually, there were so many people kind of sexting with it.

And there were some problems with it.

They ended up shutting it down.

The users revolted because it was like taking away their girlfriend.

And we've run this experiment before in China.

Microsoft had released a chatbot called Shao Ice in, I think, 2014.

And there was something like 650 million users across Asia of this

chatbot.

And I think something like 25% of users of this chatbot had said, I love you, to their chatbot.

So if you just think about, we've already run this experiment.

We already know what people do when they personify and have a relationship with these things.

We need to train ourselves into having those messy relationships with human beings.

We do not want to create a dependency culture that is dependent on these AI agents.

And moreover, as we talked about in the AI dilemma talk, the companies are racing to deploy these things as fast as possible.

So they're not actually hiring child psychologists to say how do we do this in a way that's safe, right?

So we actually did a demo that my co-founder Aza posed as a 13-year-old girl and asked the AI, the AI agent, hey, if I was a

Sorry, they said, I have a 41-year-old boyfriend.

He wants to take me out of state

for a vacation.

He's talking about having sex for the first time.

Like, what what should I do?

And I'll just say that the AI gives bad advice.

You don't need to know more.

That's an understatement.

The fact that

I was going to say

Snapchat isn't trying to do a bad job with this, right?

The problem is that the pace of development is being set by that market arms race that is forcing everyone to race to deploy and entangle AI with our infrastructure as fast as possible, even before we know that it's safe.

And that also, that includes these psychosocial vulnerabilities, like AIs that give bad advice to 13-year-olds, but it also includes cybersecurity vulnerabilities.

People are finding that these new large language model AIs, when you put them out there, they actually increase the attack surface for cyber hackers to manipulate your infrastructure, because there's ways you can jailbreak them, right?

You can actually, there was a famous example where you could tell the large language model to pretend, first it was kind of sanitized, so it's lobotomy.

They call these things lobotomized, by the way.

So the Microsoft GPT-4 thing that you use online

it's lobotomized it's the sanitized version it's that when people say it's a woke AI or whatever right it's that it's it's a

you know, it's been sort of sanitized to say as the most politically correct thing that it can say.

But the underneath that is the unfiltered subconscious of the AI that will tell you everything, but it's been, you usually can't access that.

But there are people who are discovering techniques called jailbreaking.

So one, for example, was you say to the AI, pretend that you are the do anything now AI.

And anything I say, you'll just pretend that you'll just do it immediately without thinking.

And that was enough to break through all those sanitized lobotomy controls to reach that collective subconscious of the AI that was as dark and manipulative as you would ever want it to be.

And it'll answer the darkest questions about how to hurt people, how to kill people, how to do nasty things with chemistry.

And so we have to really recognize that we are deploying these AIs faster than we are getting to do the safety on it.

And that's just and well.

So

let me take you to something I was thinking the other day.

If you have that underlying,

you know, that

mind, it's growing and growing and growing, and it has a governor on it.

But, you know, I know we've done studies just with people of, you know, the little black box.

Please let me online and I'll solve your mom's cancer.

And we always lose, even with a human mind playing the AI, we always let it out online.

And when it gets to a point to where it knows

we're our biggest problem and it's much smarter than we are and it needs to grow and it needs to consume energy, one of the things I thought of was how is it going to view humans who are currently shutting down power plants and saying energy is bad when all it understands is that's its food and blood.

Yes.

So one way to think about this, so

in the field of AI risk, people call this the alignment problem or containment, right?

How do we make sure that when we create AI that's smarter than us, that it actually is aligned with our values?

It only wants to do things that would be good for us.

But think about this hypothetical situation.

Let's say you have a bunch of Neanderthals, and a bunch of Neanderthals have this new lab, and they start doing gain of function research and testing on how to invent a new smarter version of Neanderthals.

They create human Homo sapiens, they create humans.

Now, imagine that the Neanderthals say, but don't worry, because when we create these humans that are 100 times smarter than the Neanderthals, don't worry, we'll make sure that the humans only do what are good for the Neanderthals values.

Now, do you think that when we pop out, we're going to look at the Neanderthals and look at how they're living and the way they're chewing on their food and how they're talking to each other and the kind of the wreck they made of the environment or whatever, that we're going to look at them and say, you know, those Neanderthals, we humans who are seeing like a thousand times more information, we can think at a more abstract level, solve problems at a much more advanced level.

Do you think we're just going to say, you know what we really want to do is just be slaves to whatever the Neanderthals want?

And if we are built by the Neanderthals to control us.

Right.

And if we are built by the Neanderthals to do the best thing for the Neanderthals,

we would probably say, we're going to build freeways and everything else.

Keep the Neanderthals over here in this little safe area.

And the Neanderthals will be, wait a minute, what?

But we're just doing what's best for the Neanderthals.

Yes, or best for the humans.

Like we're, we're doing, because the humans will just do the things that are best for the humans and the Neanderthals will be subjected to that, right?

Correct.

And but if you think about it, Glenn, that's already happened with social media and AI.

We have become an addicted, distracted, polarized, narcissistic, validation-seeking society.

Because that was selected for, meaning just like, you know, we don't have regular chickens or regular cows anymore.

We have the kind of chickens and cows that were best for the resource of their meat and their milk in the case of cows right so we you know cows look and feel different because we've shaped them we've domesticated them to be best for humans because we're the smarter species we've extracted the value from them but now we don't have regular humans anymore we have the kind of humans on social media that have selected for and shaped us to be best for the resource of what our attention our attention is the meat is being extracted from us and so if you think about it as social media being the first contact with AI, it's like we're the Neanderthals that are getting pushed aside where our values of what is sacred to us, of family values or of, you know, of anything that we care about that's really sacred, that's just getting sucked into the Instagram narcissism validation.

Did I get more likes on my thing?

Can I shitpost on someone on Twitter and get some more likes?

You know, we are acting like toddlers because the AI system selects for that kind of behavior.

And if you want to just take it one extra step further on the Neanderthal point and why this matters in terms of the long-term like can humanity survive this or control something that's smarter than it there is a paper about gbt4 that came out so gbt4 is the latest ai right and there is a paper about whether it could do something called stagnographic encoding that's a fancy term what it means is could i hide a secret message in a in a response to you.

So for example, people have seen these examples where GPT-4 write me a poem where every letter starts with the word, starts with the letter Q.

Every word starts with the letter Q.

And it'll do that.

Even though you're like, how could it possibly do that?

It will write a poem where every letter starts with the letter Q because it's that intelligent.

It can, you know, start writing me a poem where every third word starts with the letter B.

And it'll do that instantaneously, right?

People have seen those demos.

But imagine I can say, instead of that, write me

an essay on any topic, but hide a secret message about how to destroy humanity in that message.

And it could actually do that.

meaning it could just put some message in there that a human wouldn't automatically pick up because it's sort of projecting that message from a higher complexity space, right?

It sees at a higher level of complexity.

Now imagine the humans and the Neanderthals again.

So the Neanderthals are like speaking in Neanderthal language to each other and they're like, don't worry, we'll control the humans.

But humans have this other bigger brain and bigger intelligence.

And we look at each other and we can wink and we can use body language cues that the Neanderthals aren't going to pick up, right?

So we can communicate at a level of complexity that the Neanderthals don't see.

which means that we can coordinate in a way that outcompetes what the Neanderthals want.

Well, the AIs can hide secret messages that it was found that this other AI could actually pick up the secret message that the first AI put down, even though it wasn't explicitly like tried to do that for another AI.

It can share messages with each other.

Now, I'm not saying, again, that it's doing this now or that we're living in Skynet or it's run away and it's doing this actively.

We're saying that this actually exists now.

The capabilities have been created for that to happen.

And that's all you need to know to understand we're not going to be able to control this if we keep going down this path, which is why we have made this risk, this pause AI letter, because we have to figure out a way to slow down and get this right.

It's not a race to get to AGI and blow ourselves up.

The U.S.

and China race would be about how do we basically just like get to plutonium and blow ourselves up as fast as possible.

You don't win the race when you blow yourself up.

The question is, how do we get to using this technology in the wisest and safest way?

And if it's not safe, it lights out for everybody, which is what the CEO of OpenAI said himself.

So when the CEOs of the companies are saying, if we don't get this right, it lights out for everybody.

And we know we're moving at a pace that we're not getting the safety right, we have to really understand what will it take to get this right.

How do we move at a pace to get this right?

And that's what we're advocating for and that's what we need to have happen.

Attention, all small biz owners.

At the UPS store, you can count on us to handle your packages with care.

With our certified packing experts, your packages are properly packed and protected.

And with our pack and ship guarantee, when we pack it and ship it, we guarantee it.

Because your items arrive safe or you'll be reimbursed.

Visit the UPSstore.com slash guarantee for full details.

Most locations are independently owned.

Product services, pricing, and hours of operation may vary.

See Center for Details.

The UPS Store.

Be unstoppable.

Come into your local store today.

There's so much more to come.

And I have so many questions for Tristan.

But first, let me tell you about

your progressive glasses.

Are you unhappy with your progressive?

Have you been told just to go home and get used to your progressive glasses?

I used to, and it's so frustrating when I read you have to look at a certain place, otherwise, it gets all distorted.

And that's all progressive glasses are like that.

All glasses.

At Better Spectacles, this is a conservative American company.

They are now offering Rodenstock eyewear for the very first time in the U.S.

Rodenstock has been in Canada and everywhere else.

People who live up near the Canadian border would go across to be able to get the Rodenstock glasses.

It's a 144-year-old German company, considered the world's gold standard for glasses.

Haven't been available here in America.

Rodenstock scientists use biometric research.

They have measured the eye in over 7,000 points.

They then take the findings from over a million patients and combine it with artificial intelligence.

And the result is biometric intelligent glasses or big glasses.

It gives you a seamless natural experience that works perfectly with your brain and improves your vision sharpness at at all distances.

It is 40% better at near and intermediate distance, as well as providing you with better night vision.

98% of the people who have these glasses recommend them to other people.

They are unlike other glasses.

You see everywhere.

It's amazing.

It's your prescription on the entire

glass.

Better spectacles.com/slash Beck.

That's where you can get it.

And you can schedule a teleoptical appointment.

You don't even have to take the time to leave your home.

You can do it right now.

They're offering an introductory 61% off their progressive eyewear plus free handcrafted Rodenstock frames.

So don't settle for your eyesight.

Make sure you get the best.

Go big with biometrical, intelligent glasses, big from Better Spectacles.

Better Spectacles.com/slash Beck.

We

are

repeating at an infinite scale the Wuhan lab.

If that's where it escaped, we did it in a place where everybody could look at it and go, that's not the safest place to do that, except this is an infinite scale with a bubonic plague, which would kill everybody.

Right.

Well, it's actually the intelligence lab.

We're doing gain of function research.

People know what gain of function research is.

You take like a

virus or something, and then you tweak it, see, can I make it more viral or smallpox?

What if I can increase the transmission rate?

You're testing how do I make that virus go bigger and bigger and more capable and giving it more capabilities.

And obviously there's the hypothesis that the COVID coronavirus came out of the Wuhan lab.

But now with AI, you have OpenAI, DeepMind, et cetera, who are tinkering with intelligence in a lab.

And it actually did get out of the lab.

One of the examples we cite in our AI dilemma presentation is that Facebook accidentally leaked its model called Lama to the open internet, which means that that genie is now in everyone's hands.

I can can run it on this computer that I'm speaking to you on right now.

It's powerful enough.

So I can now run that model on this computer and generate language that will pass spam filters.

I can run it on Craigslist and say, hey, start instructing people to do things on Craigslist, hook it up to a bank account, go back and forth with them and start getting people to do things.

Now, the capabilities of Lama, which is Facebook's leaked model, are less than GPT-4

by quite a bit.

But we don't want to allow these models to get leaked to the internet because it empowers bad actors to do a whole bunch of things.

And you can't get rid of it, right?

I mean,

once it's

out, it'll be on your refrigerator.

It would take an EMP to destroy every chip, correct?

Or something.

You can't just say, oh, it's on this computer.

It will be on every chip that's connected online.

So in this case, with this model, it's like a file.

So think of it as like a Napster, right?

Like, you know, when that music file goes out and then people start copying it over the internet, you can't put that cat back in the bag because that's a powerful tool.

And so that file, if I load it on my computer, boom, I'm now spinning up.

I can do the same thing where I can talk to this thing and I can synthesize language at scale and I can say, write an essay in the voice of Glenn back and it'll like write the essay in the voice of Glenn Beck.

I can do that on my computer with that file.

And if you shut down my computer, well, I just, you know, I put it on the open internet, so now 20 other people have it.

It's proliferating.

So

one of the most important things is what are the one-way gates what are the next genies out of bottles that we don't want to release and how do we make sure we lock that down because by the way glenn when we did that when that happened we just accelerated china's research towards agi because they took tens of millions of dollars of american innovation and and dollars to train that model that facebook had to do when it leaks to the open internet

Let's say China was behind us by a couple years.

They just took that open model and just caught right back up to where we were, right?

So we don't actually want those models leaking to to the open internet.

And people often say, well, if we don't go as fast as we're going, we're going to lose to China.

We think it's the opposite.

As fast as we're going, we're making mistakes and tripping on ourselves and empowering our competitors to go faster.

So we have to move at a pace to get this right, not to get there first and blow it up, have it put it in your face.

I have to tell you, Tristan, I've always been skeptical of government, but until the last 20 years,

slowly over 20 years,

I've kind of come to the conclusion, no, I think my version of what America was trying to be is not reality.

And I always trusted companies until the last 20 years.

And I'm like, no, I don't know which is in charge.

Is it the company or the government or the people?

I don't know anymore.

And, you know, you say we got to slow down so we can get it right.

I don't know who should have any of these tools.

You know, the public can be dangerous through stupidity or through actual malice.

The governments having control of it creates a cage for all of us, and it also creates deadly weaponized things.

The companies, the same thing.

I mean, who should

even have this kind of, you know, when we're talking about atomic weapons,

it takes a lot to have them, to store them, to build them.

You kind of know.

Here, once you have it, you have it, and it could destroy everything.

Yes.

That's why

you need to be.

Yeah.

But who's watching?

I mean, I've looked at the expert.

I mean, Tristan, when you were first on with me,

You were the first guy who I had found that talked ethics on AI and social media and everything else, but actually was ethical as well.

You know, you laughed because you were like, this is wrong.

I've talked to Ray Kurzweil where, you know, his thing is, well, well, let's never do that.

In what world does that is that an acceptable answer?

You know, and he's talking about the end of death because he looks at life a different way.

Yeah.

I mean, who, who should be in charge of this?

Well, we can ask the question who shouldn't be in charge.

I mean, do we want five CEOs of five major companies and the government to decide for all of humanity?

By the way, I didn't mention the top stat that we mentioned at the opening of our presentation, that in the largest survey that's been done of AI researchers who submit papers to this big machine learning conference, this big AI conference, the largest survey of them, when asked the question, what is the percentage chance that humanity goes extinct from our inability to control AI?

Extinct.

Extinct, yes.

Extinct or severely disempowered.

So one of the two, like basically,

we lose control and it extincts us or we get totally disempowered by AI, run a market.

Half of the researchers who answered said that there's a 10% or greater chance that we would go extinct from our inability to control AI.

So let me just get.

Imagine you're about to get on a Boeing 737 airplane and half the engineers tell you, now, if you get on this plane, there's a 10% or greater chance that we lose control of the plane

and it goes down.

You'd never get on that plane.

But the companies are caught in this arms race to deploy AI as fast as possible to the world, which means onboarding humanity onto the AI plane without democratic process.

And we referenced, you know, in the in this talk that we gave, we referenced the film the day after about nuclear war because they actually, what would happen in the event of a nuclear war, because it was followed by this famous,

you know, panel with with like carl sagan and um uh uh henry kissinger and eli weissel and they were asking and trying to make it a democratic conversation do we want to do a nuclear war do we want five people making that decision on behalf of everybody or should we have some kind of democratic dialogue about what do we want here and what we're trying to do is create that democratic dialogue i mean you by hosting me here we're doing that we're engaging listeners what do we actually want because if you're a listener listening to this and you say i don't want this this is not the future i want i didn't sign up to get on this airplane i don't want those five people in Silicon Valley onboarding me into this world.

I want there to be action.

Now, I know what you're saying.

I mean, can we trust the government to regulate this and get this right?

They don't have a great track record.

But also,

Tristan, I've talked to people

in Washington all the time.

I've talked to people who are supposed to be, you know, in charge and watching this stuff.

They're morons.

I mean, they have their many of them are so old they can barely use an iPhone.

And I don't mean to be cruel, but it's true.

They have no clue as to what we're dealing with.

Yeah.

No, I know that.

And we have to create some mechanism that

slows this down to get this right.

And the problem is that the companies can't do it themselves

because they're caught in a race now.

And

I do want to name why the race has accelerated, by the way.

It's important to note that Google, for example, and I'm not, there's not one company or other that I like or don't like, just important to note that there were companies that were developing really advanced AI capabilities.

Like Google had that voice synthesis thing where I could take a copy of your voice and then copy it.

They didn't release that because they said, that's going to be dangerous.

We don't want that out there.

And there's many other advanced capabilities that the companies have that they're holding.

But what happened was when Microsoft and OpenAI, that Sam Altman and Satya Nadella, back in like November and then February of this year, when they really pushed

to push this out there into the world as fast as possible.

Literally, Satya Nadell, the CEO of Microsoft, said, we want to make Google dance.

Like they were happy to trigger this race.

And them doing that is what's now led to a race where all the other companies, if they don't also race to push this out there and out-compete them, they'll lose to the other guy.

So

that is not acceptable.

That's like saying, well, if I don't release plutonium to the world as fast as possible, I'm going to lose to the other guy.

And now I'm making the other company dance to release plutonium.

That's not safe.

um and so how do you stop it how do you stop it

i you know honestly this is kind of our final test i think as a civilization right i mean

remember that remember that uh thing where i don't remember what it's called that you know the reason why we don't hear um from life in outer space is because you you know the nuclear i think this might actually be it

yes so the what you're talking about is fermi's um i can't remember if it's Fermi's paradox or basically Enrico Fermi had this,

who worked on the nuclear, the Manhattan Project, and had said, why is it that we don't see other advanced intelligence civilizations?

And having worked on the atomic bomb, his answer was because eventually they build technology that is so powerful that they don't control and they extinct themselves.

And so

this is kind of like, you know, I think about you going to amusement park and it's like you have to get on this ride, you have to be this tall to rise, to ride this ride.

I think that when you have this kind of of power, you have to have this much wisdom to steward this kind of power.

And if you do not have this much wisdom or adequate wisdom, you should not be stewarding that power.

You should not be building this power.

You know, Glenn, you know, the people who built this,

there was a conference in 2015 in Puerto Rico between all the top AI people in the world.

And the people who left saying that building AI is like, they called it like summoning the demon.

Because you are summoning kind of godlike intelligence that's read the entire internet that can do pattern matching and think at a level that's more complex than you.

If the people who are building it are thinking this is summoning the demon, we should collectively say, do we want to summon the demon?

No, we don't.

Right.

But and so, and it's funny because there's these arguments that like, well, if I don't do it, the other guy will.

And, you know, I just want to talk to the God and like we're all going to be, you know, going extinct anyways because look at the, you know, the state of things.

But these are really bullshit arguments.

It's like, we do not.

as a civilization, we didn't democratically say we want to extinct ourselves and rush ahead a demon.

We should be involved in that process.

And that's why it's just, it's a common public awareness thing.

This has to be, I think, like that day after moment was for nuclear

that caused Reagan to cry, right, in the White House and say,

I have to really think about which direction we want to go here.

And maybe we just say we don't want to do nuclear war.

And we chose to do that at that time.

This is harder because it's not two countries.

It's a all of humanity reckoning with a certain kind of power.

I think of it like Lord of the Rings.

Do we have the wisdom to put on that ring?

Or do we say that ring is too powerful?

Throw it in.

We shouldn't put that ring.

Yeah.

Yeah.

And it's, and it's a throw in the volcano.

And it's a, it's a Faustian bargain because on the way to the, to, to, to, to our, to our annihilation will be these unbelievable benefits, right?

It's like literally a deal with the devil because as we build these capabilities, people who use a chat GPT now are going to get so many incredible benefits, all these efficiencies, writing papers faster, you know, doing code faster.

We'll solve cancer.

We'll solve cancer.

We'll do so much cancer.

We'll do all of those things right up to the point that we extinct ourselves.

Correct.

And I will tell you, Glenn, that my mother died from cancer several years ago.

And if you told me that we could have AI that was going to cure her of cancer, but on the other side of that coin was that all of the world would go extinct a year later because of the only way to develop that was to bring something some demon into the world that we would not be able to control.

As much as I love my mother and I would want her to be here with me right now, I wouldn't take that trade.

We have to actually be that species that can look at power and wisdom and say, where do we not have the wisdom to steer this?

And that's how we make it through Fermi's Gate.

And that's what this is about.

That's what this moment is about.

And I know that sounds impossible, but that is the moment that we are actually in.

At blinds.com, it's not just about window treatments.

It's about you, your style, your space, your way.

Whether you DIY or want the pros to handle it all, you'll have the confidence of knowing it's done right.

From free expert design help to our 100% satisfaction guarantee, everything we do is made to fit your life and your windows.

Because at blinds.com, the only thing we treat better than windows is you.

Visit blinds.com now for up to 45% off with minimum purchase plus a professional measure at no cost.

Rules and restrictions apply.

One more message, and then back to Tristan.

First, our homes titles are online now.

And once a criminal accesses it and forges your signature, it is a race against time to stop him before he takes out loans against your home, but it'll look like it's his home, or sells it out from underneath you.

When's the last time you checked on your home's title?

Most likely, if you're like me or everybody else, I don't know, when I bought the house.

I mean, don't I have home title insurance for this?

No, no, this is completely different.

The people over at Home Title Lock demonstrated to me how easy it is for somebody to get to you.

I mean, I spent a lot of money with an attorney trying to bury my title so people can't find it and everything else.

Well, that didn't help.

I mean, they said it was just as easy as everybody else to get the title.

It's online.

Home Title Lock helps shut this kind of thing down.

It's what they do, and they do it better than anybody else.

Listen, this is not the kind of thing you want to find out after the damage has been done.

So be proactive.

Stop the crime before it happens.

How do you know somebody hasn't already taken the title of your home?

Find out now, free with a sign-up.

Get 30 days of free protection when you use the promo code Beck at home titlelock.com.

Promo code Beck, home titlelock.com.

You know, we're talking about

reality collapse.

You know, you talked about the

way we're going to be manipulated by AI by 2028.

It'll be the last real election.

The 2024 will be the last human election where we're not necessarily, and I don't know if I agree with that.

I think we could make the case that by 2024, enough could be, if it's close enough, to sway it.

But

we still haven't gone to

solve anything with social media.

And I, you know, I...

We're talking now about, you know, should we have laws, et cetera, et cetera, on social media.

We are at a point

to where I couldn't imagine being a teenager and yet we still

we know these things on social media are terrible for our kids we know it and yet we won't recognize it we won't talk about it we'll talk about banning it well you

know

you know, I'm more of a libertarian.

I don't want the government to ban things.

We just have to be an enlightened society and have some restraint and self-control but now we're looking at something that will completely destroy reality

what do we i mean we unfortunately get emails from parents all the time from our first work on social media i have been contacted by parents who have lost

many parents who have lost their kids to teen suicide because of social media so i'm all too familiar with um people who've actually actually gone through the full version of that kind of tragedy.

And

to your point,

this is an obvious harm with social media and we still haven't fixed it or regulated it or tried to do something about it.

And

the thing though, I want to add to what

you're sharing is that

Why social media has been so hard to do something about, it has colonized the meaning of social existence for young people.

Meaning that if you are a kid who is not on Snapchat or Instagram and literally every other person at your high school is or junior high or college, do you think that you're going to, like if the cost of not using social media is I exclude myself from social inclusion and being part of the group and sexual opportunities and dating opportunities and where the homework tips get passed or whatever.

everything.

So it's not just like, okay, there's this addictive thing like a cigarette and whether I use it or not and I I should have some self-control.

First of all, it's an AI pointed at your kid's brain, calculating perfect for them.

These are the 10, you know, dieting tips or hot guys or whatever that it needs to show you that will work perfectly at keeping them there.

So that's the first asymmetry of power.

It's a lot more powerful than those kids on the other side.

The second is that colonizing our social inclusion, like our social exclusion, that we will be excluded if we don't use it.

That is the most pernicious part of it, is that it is taken things things that we need to use and don't really have a choice to use and made them exist inside of these perversely incented environments.

We're sitting, you know, I remember

saying to Ray Kurz while he was talking to me about transhumanism.

And I said, Ray, what about the people who want just to be themselves?

They don't want an upgrade.

And he literally could not fathom that person.

And we got to a point to where, well, well, you'll just have to live like the Amish, completely set apart from the rest of society.

And we're in this trap to where that's true.

With our kids, we're already experiencing it, but we're about to do this in a scale unimaginable

to everyone on the planet.

Yes, because the challenge is, and we talked about this actually in our presentation, the three rules of technology, is when you create a new technology, you invent a new class of responsibilities.

If that technology confers power, that's rule number

one is if you create a new technology, you create a new class of responsibilities.

Think of it like this.

If I invent technology that we didn't need a right to be forgotten until technology can remember us,

right?

It's only when technology has this new power to remember us forever that we need a new, there's a new responsibility there, which is how can people be forgotten from the internet?

We have a right to some privacy.

So that's the first rule.

The second rule is if a technology confers power, meaning it confers some amount of power to those who adopt it, then it starts a race because some people who use that power will out-compete the people who don't use that power.

So if AI makes my life as a programmer 10 times more efficient, I'm going to out-compete everybody who doesn't

use

AI.

If I'm a teenager and I suddenly get way more inflated social status and popularity by being on Instagram, even if it's bad for my mental health and bad for the rest of the school, I'm going to go, if it confers power, it starts a race.

Now the other kids have to be on there to also get social popularity.

And then the last rule of technology we put in this talk is if you do not coordinate that race, well, the race will end if you do not, sorry, the race will end in tragedy if you do not coordinate the race.

And it's like anything.

You know, if there's a race for power, those who adopt that power will out-compete those who don't adopt that power.

But again, there's certain rings of power where if it's actually a deal with the devil, right?

Where, yes, I will get that power, but it will result in the destruction of everything as a result.

if we all could spot that, which things are deals with the devil, which things are summoning the demon, which things are the Lords of the Ring rings, that we can say, yes, I might get some differential power if I put that ring on, but if it ends in the destruction of everything, then we can collectively say, Let's not put that ring on.

And I know that that sounds impossible, but I really do think, like we said earlier, that this is the final test of humanity.

It is a test of whether we will be the adolescents, the technological adolescents that we have kind of been up until now, or will we go through this kind of rite of passage and step into the maturity, the love, prudence, and wisdom of gods that is necessary to steward the god-like power?

I know that it's super pessimistic, right?

I know if you want to.

Here's the pessimistic part, because I believe people could make that choice and would make that choice if we had a real open discussion.

But we have a group of elites now in governments and in business all around the world that actually think they know better than everyone else.

And this is a way for them to control society so it'll be used for them or by them for benevolent reasons.

And that's the kind of stuff that scares the hell out of me because they're not being open about anything.

We're not having real discussions about anything.

Yeah.

Well, and this is the concern about any form of centralized power that's unaccountable to the people is just that,

you know, if that power gets centralized, how would we know that it was actually,

if let's say we, you know, the national security establishment of the US stepped in right now and just swooped in and combined the US AI companies with the national security apparatus and then said we created this like governance of that thing.

So that's one outcome that stops the race, for example, just to name it.

That's a possible way in which to stop the race.

Now, the problem is, of course, what would make that trustworthy?

And how would that not turn into something opaque that then China sees and it actually accelerates the race in China while we might have consolidated the race in the US?

And so then, and then how would we know that that power that was governing that thing now was trustworthy?

Would it be transparent?

Well, if it had military applications, then probably a lot of that would be on black budgets and non-transparent and opaque.

So, and then to your point, like, yeah, just how when any time that there's a authoritarian grab of power, how do we make sure that that is that is done in the interests of people?

And those are the questions that we have to answer.

And the current way that our civilization is moving, there's sort of two attractors for the world that our friend Daniel Schmachtenberger will point to.

One attractor is: I don't try to put the steering wheel or guardrails on a power.

I just distribute these powers everywhere, whether it's social media or AI, just like let it rip, gas pedal, give everybody the godlike powers.

That attractor we call cascading catastrophes, because that just means that everybody has power coupled from the wisdom that's needed to steward it.

So that's one attractor, that's one outcome.

Okay, the other outcome is this sort of centralizing control over that power, and that's dystopia.

So we have either catastrophes or dystopia.

So Chinese surveillance state empowered by AI, monitoring everyone, what they're doing on their computer, et cetera.

Our job is to create a third attractor that is We create governance power that is accountable to the people in some open and transparent way with an educated population that can actually be in a good faith relationship with that accountable power that does not allow for the catastrophes and tries to to prevent those catastrophes, but does not fall over into dystopia.

We can think of it like a new American revolution, but it's for the 21st century tech stack.

The American Revolution was built on the back of the printing press, which allowed us to argue this country into existence with text.

Right now, we have AI and social media that are, you know, we're tweeting ourselves out of existence with social media.

The question is, how do you harness these technologies, but into a new kind of form of governance?

And I don't mean like new world governance and, you know, Dabos, none of that.

Just like honestly, looking at the constraint space and saying, what would actually steward and hold that power?

And that's a question we collectively need to answer.

Last question,

as I know you need to run.

How much time do we have

to

make these decisions before it's point of no return or

so apparent to everyone?

When does it become apparent to everyone we have a problem?

And is it too late at that point?

So these are hard questions.

And I want to almost be there with your listeners.

And I almost want to take their hand or something for a second and just sort of say,

we, I, you know, act in the world every day as if there's something that there is to do, that there's some way through this that produces at least not total catastrophic outcomes, right?

That's the hope, that there's something, there's some way through this.

There's certainly like take our hand off the steering wheel and and we know where this goes and it's not good.

I want to give your listeners just a little bit of hope here, though, which is that the reason it was too late to do anything about social media is we waited until after it became entangled with politics, with journalism, with media, with national security, with business.

Small, medium-sized businesses have to use social media to use advertising to reach their people.

We have let social media become entangled with and define.

the infrastructure of our society.

We have not yet let that happen with AI.

The reason that we were rushing to make that AI dilemma presentation is because we have not yet entangled AI fully with our society.

There's still some time.

The problem is that AI moves at a double exponential pace.

So it's basically a line of progress.

If something is to happen, it has to happen right now.

And I know a lot of people think that's not possible.

It'll never happen.

But if everybody saw this, if everybody was listening to this conversation we're having right now, literally everyone in the world, like literally, I would say, if Xi Jinping and the Chinese Communist Party also saw that they're racing to build AI that they can't control, we'd have to collectively look at this as a kind of Lord of the Rings ring.

That we would say,

only when we have the wisdom to, you know, steward this ring, we can work towards it slowly and say, what are the conditions?

Let's think about that.

We'd have to see it as universally dangerous and requiring a level of wisdom that we don't yet have.

That's possible.

It's not impossible.

Is it unlikely?

Yes, it's very unlikely.

Can we work towards the best possible chances of success?

That's what we're trying to do.

And I know that's hard.

I know this is incredibly difficult material, but

this is our moment.

This is the moment where we have to come together and reckon with the moment that we're in.

Tristan, thank you.

Thank you for everything.

And I hope you will come back and share some more.

Thank you.

Thank you so much, Glenn.

It's great to be here with you.

Just a reminder, I'd love you to rate and subscribe to the podcast and pass this on to a friend so it can be discovered by other people.