Best of the Program | Guests: Rep. Jim Jordan & William Hertling | 2/8/23

45m
Rep. Jim Jordan joins to discuss the subcommittee focusing on the weaponization of the federal government and the relationship between the government and Big Tech. “A.I. Apocalypse” author William Hertling joins to discuss ChatGPT and the manipulation artificial intelligence will engage in.


Transcript


Today was a fascinating podcast.

We talked to Jim Jordan about the weaponization of the government, and to put that into perspective, I went through all of the things that are happening now to monitor you.

In the second hour of the podcast, we had

a fascinating interview with William Hertling.

He wrote

the AI Apocalypse, which we didn't talk about.

And then he also wrote the Singularity series.

It's a series of books, story form, that really starts kind of exactly like ChatGPT, except he wrote it about six, seven years ago.

And it goes awry quickly.

And we talked about ChatGPT, technology, jobs of the future, what's coming, how this will affect you.

It's fascinating.

And then the last hour, and we could have gone on for three, State of the Union.

What did he say?

What should have been said?

And the, oh my gosh, the kiss that you'll never stop seeing between Jill Biden and the vice president's husband.

It was,

you don't want to miss a second of it.

Brought to you by Relief Factor.

Going about your daily life when you're living with pain is like walking uphill all day long and carrying like 14 kids on your back.

I know the feeling.

Severe pain can really knock everything out of you.

You never find quite a way to get rid of the pain.

You never find a way to live with the pain.

May I suggest set the burden down, let the kids walk up the hill by themselves.

Why don't you start walking back downhill?

Relief Factor.

Relief Factor is something I tried. I tried it for three weeks, just as they instruct: try it for three weeks.

If it's working at all, keep taking it.

If it's not working for you, you don't see any effects, stop.

You'll be out 20 bucks, but you'll at least have that box checked.

If you are part of the 70% where it works,

you've gotten rid of your pain.

Relief Factor, get your life back.

ReliefFactor.com.

That's ReliefFactor.com, or call the number 800-4-RELIEF.

800-4-RELIEF.

ReliefFactor.com.

Feel the difference.

You're listening to the best of the Glenn Beck program.

It's quite amazing.

ChatGPT has already, they've found a way to hack past its protocols and convince it to do things that it's not supposed to do, including violence, giving recipes for crystal meth, et cetera, et cetera.

We'll tell you about that coming up in a little while, but the AI revolution is here.

Machines will transform your entire world.

You will not recognize your world

and how it's run, managed, and everything else by 2030.

And

think

in 2009, we got our first smartphone.

It controls almost everybody's life now.

This

is much more impactful.

Tonight, 9 p.m., the AI revolution, 9 p.m., Glenn Beck, sorry, at BlazeTV.com.

And at 9:30, you can watch it on youtube.com/GlennBeck.

Make sure you subscribe, Blaze TV.

We have Congressman Jim Jordan

who is joining us from

Washington, D.C.

And I want to talk to him about the subcommittee on the weaponization of the federal government.

First hearing is tomorrow.

And I want to get to that.

But first, Jim, I've never seen a state of the union like that one last night.

Have you?

Yeah.

No, it was, I thought Senator Rubio said it best.

He said it was bizarre, and it certainly was.

I mean, same old Joe.

You know, he talks unity while he spends his whole time dividing the country.

He says the economy's great while what is it now, seven out of ten Americans think the country's on the wrong track.

And of course, the biggest one that jumps out, I think, to everybody was when he talked about how

after a week of having his five balloons fly over the country, he talked about how he's tough on China.

And it just, nothing seemed to really make sense.

And then the issues that I think that the federal government should be weighing in in a big way is what did he spend, maybe 30, 35 seconds total on the border with the fentanyl problem.

And so the best line, frankly, the best line of the whole night, in my judgment, came not from Joe Biden,

but from Governor Sanders afterwards in her response where she said, the divide in the country now is normal versus crazy.

And I thought,

that is so true.

Common sense versus craziness is the real divide.

And you think about

the Democrat Party, which is now controlled by the left, which, frankly, even if Joe Biden wanted to do the right thing, Glenn, I don't know that the left, which controls his party, would even let him.

Even if he wanted to do the right thing, he's going to support her on crazy.

They destroy him.

They eat their own.

Yeah, he just, it's sad, but

they've become the party of defund the police, guys competing against girls in sports, men can get pregnant,

climate change is the greatest threat in the history of the entire universe.

They are also the party of spying, surveillance, and fentanyl deaths.

They really are.

I mean, that open border is the reason for all of the deaths.

Everybody talks about fentanyl coming across and we're stopping fentanyl.

What about all the victims of fentanyl?

What about all the people who are dead because they didn't secure the border?

It's crazy.

Every community has been impacted by it.

We had our first hearing in the full Judiciary Committee last week, Glenn,

on the border situation.

And I really think there's kind of

three questions.

How did it happen?

Why does it matter?

And how do we fix it?

And we know how it happened.

And they undid all the policies that made sense.

Last week, we really tried to hone in on why it matters, and we had a 38-year law enforcement veteran, sheriff from Arizona, and he said, two years ago, the border was the most manageable it's ever been.

Today, it's the worst it's ever been.

And he talked about the fentanyl that you just mentioned, but the crime, the damage to property, the cost to schools, the cost to communities, the cost to hospitals, everything because 5 million people, illegal migrants, have been allowed to just come in the country.

And it makes no sense.

And then, of course, how do we fix it?

We go back to the policy that made sense.

Yes.

And we're going to do that in the committee.

We're going to pass that and we can get it through the House.

But, you know, obviously, you got the Senate and Joe Biden.

All right.

So let me talk about something. The Washington Post came out and said Jim Jordan is about to lead Republicans into a dangerous trap. It's a trap.

They say that 55 percent of conservative respondents believe federal agencies are biased against conservatives. I don't think that's true. I think they're biased against any American that won't stand in line. Twenty-eight percent

of all American adults believe this. And so they're saying, and this was incredible:

They've alleged federal jackboots have terrorized parents for protesting at school board meetings, COVID-19 restrictions, and teaching about race and sex.

This claim has been decisively debunked.

Wow.

Well,

it sure hasn't been debunked based on the number of FBI agents who've come to us as whistleblowers over the last year.

And the first one started on that issue you just mentioned, on the school board issue, where we know because of the apparatus Merrick Garland put in place, the snitch line, where some neighbor can report you on a snitch line, we know that over two dozen parents were paid a visit by the FBI.

No one charged, by the way.

No one arrested, no one charged with the crime, but paid a visit by the FBI.

Now step back and ask yourself, okay, so Mr.

Jones is thinking about going to a school board meeting tonight

and speaking up on behalf of his kids or something happened in their school.

And he's thinking about going, and all of a sudden he goes, you know what, maybe I won't go.

Or if I go, maybe I won't say anything because three weeks ago, Mrs.

Smith down the street got a visit from the FBI.

I mean, what in the world?

Look, we don't want any violence at schools or school board meetings, but why in the world do we need the federal government, the FBI, involved in that?

If it's a problem, let the local law enforcement handle it.

So

this is a concern, and whistleblower after whistleblower.

FBI agent after FBI agent.

I've never seen it in my time in Congress where you had this many come forward, and

they came to us when we were in the minority.

Like we couldn't do anything but begin to tell their story.

But now we can. We had our first one sit for a deposition yesterday.

The things we learned were amazing.

So we're going to have them sit for depositions.

We're going to have many of them testify.

And we're also going to get into this cozy relationship between big government and big tech that was exposed in the Twitter files and how that is, as Jonathan Turley said, that is censorship by surrogate.

We're going to get into that too.

So can you share anything at all that happened in the deposition?

Well, look.

I can't really because

I can't really, but

it was good.

And again, this is the first one of many.

We got another one who's coming in for his interview on Friday, another whistleblower coming in on Monday. So we're going to talk to these folks. And then our first hearing tomorrow, we're going to try to frame it up. We have two senators, a former member of Congress, Tulsi Gabbard, who will be on the first panel, and then we're going to have people from the FBI who've left the FBI and say that place is so different than what it's supposed to be. They're going to testify and kind of show how serious this situation is.

And will that be televised and out in the open?

I don't know, that's, yeah, well, it'll be an open hearing.

So that'll be up to the networks and whoever wants to cover it.

We always watch it on C-SPAN.

Okay, so

Jonathan Turley wrote, Congress is set to expose what may be the largest censorship system in U.S.

history.

They are dismissing this as,

you know, something, no violation of the First Amendment right of free speech, et cetera, et cetera.

This private-public partnership thing that Joe Biden talked a lot about last night is so incredibly dangerous.

Are you going to be able to untangle it, get to the bottom of it, and do anything about it?

That's the goal.

The first step is to expose what all happened.

Second step is to propose legislation that we think can fix it.

That's our job as legislators, and we plan to do that in the course of our work over this Congress.

But never forget that one email where it comes from Elvis Chan, FBI agent, special agent in the San Francisco office, to the folks at Twitter, where he says, the following accounts we believe violate your terms of service.

Now think about that.

You've got the federal government telling a private company, hey, take down these accounts because they're not adhering to the company's terms of service.

What is that?

If that's not pressure, if that's not, as Professor Turley said, censorship by surrogate, I don't know what is.

And you cannot do that.

You cannot have some private entity do what government's not allowed to do, but because you're running it through the private company, somehow think that's okay.

That's not how it works in our system.

The First Amendment is the First Amendment, for goodness sake.

And what they did to it, I think, is just so dangerous.

Well, but they will say that we didn't tell them to do it.

We just said, hey, we're pointing these things out.

How do you respond to that?

Come on.

This is the FBI.

This is the federal government of the United States, the largest entity on the stinking planet.

And they're having weekly meetings.

They're cozying up to them.

The email says "Twitter folks," that's the heading.

So it's like

they got all cozy.

This coordination they had.

They were sending them all kinds of stuff.

Looks like they were offering them security clearances in the 30 days prior to the election from another email.

But no, no, we weren't telling them.

It was their decision.

Nobody buys that.

The FBI shows up and recommends something for you.

What?

That has

impact.

That has weight because it's the Federal Bureau of Investigation.

Let me ask you,

one of the disturbing emails found in the Twitter files was that a

government agent said, you know, next meeting we should invite

OGA, "other government agency."

And that agency turned out to be the CIA.

CIA, yeah, yeah.

No,

frightening.

Frightening as well.

Now, of course, they're going to say, well, that's because we're looking at foreign accounts and whether there was malign influence.

And look, I mean, I get that, but the idea that they're all sitting in the same room,

folks who are supposed to be focused on domestic concerns and then folks in the CIA, that is a problem

when you think about freedom, when you think about the First Amendment, your right to speak. I always tell folks, of every right we enjoy under the First Amendment, your right to practice your faith, your right to assemble, your right to petition, freedom of the press, freedom of speech, the most important one is your right to talk.

Because if you can't talk, you can't share your faith.

If you can't talk, you can't practice your faith.

If you can't talk, you can't petition your government.

Your right to speak is the most important.

And now we know these social media platforms are the public square by far.

That's where things happen.

And the government is weighing in and restricting the right for people to speak in that forum.

It is wrong.

And

God bless Elon Musk for coming in and making this all available so we get to see under the hood what was going on.

All right.

Jim, one last question.

I want to go back to the State of the Union.

I was

really disturbed after I started thinking about things because when he said, like, you know, we are going to need, you know, oil for at least the next 10 years.

And Congress laughed at him, not with him, at him.

If I am sitting overseas, I am like, this president is a joke.

He is a joke to his own people.

This country is so weak.

How do you feel about the messages that were sent to the rest of the world and our enemies with this last night?

It was just a continuation of what's already been sent.

I mean, unfortunately, I do think weakness is being projected from the Oval Office.

You saw it right from the get-go when Secretary Blinken met with his Chinese counterpart in Anchorage a year and a half ago, and

the Chinese equivalent of Secretary of State just dressed down Secretary Blinken.

He just sat there and took it.

He just took it.

He didn't fight back.

I said, I was giving a speech, and I said, you know, that would not happen in the Trump administration to Secretary Pompeo.

I said, and if it did, first they wouldn't try it, but if they did, Pompeo would have given it back to him.

Or more likely, he'd have got up and flipped the table over and walked out of the room.

And it was funny because I got a call from Pompeo like a couple days after I gave this speech, and all it said, or excuse me, a text message, all it said is, I'd have flipped over the table.

Because that's the difference.

And

you see it with the spy balloon last week.

You see it with

the exit, the debacle that was the exit from Afghanistan.

It's like

So it's scary, but, you know, look, that's reality.

The American people are strong, and we're going to have a presidential election here coming soon.

So, let's hope we get a major change.

I'm for Trump, and let's hope it's him.

Uh, Jim Jordan, thank you so much.

God bless you.

You're listening to the best of the Glenn Beck program.

William Hertling is joining us now.

He is the author of the Singularity series

and AI Apocalypse.

And I wanted to talk to him because, boy, William, I think we're, I feel like I'm living in the beginning of one of your books.

I think we are.

Yeah.

So can you explain,

in the Singularity series, see if I have this right.

The main character, David Ryan, he's a designer, a software developer, and he comes up with something called ELOPe.

And that is an email language optimization program.

Isn't that what ChatGPT is?

It sure is.

And if you read what ChatGPT creates, it's very compelling, right?

It's very natural.

You would easily read that.

And unlike a lot of the other sort of computer-generated content that's out there on the internet, this looks like something a person would say.

I mean, I had it write a poem about the State of the Union yesterday in the voice of Edgar Allan Poe.

And I'm telling you, even the punctuation was right.

I mean, it was amazing.

Now, so in your book, this program is about to be canceled.

And so

the main character just embeds a hidden directive, find a way to make this happen.

And it's so smart, and it goes into everybody's emails, and it starts to figure out business and and the way to get it all done where seemingly everybody wins and

then it starts branching out and it it just solves problems for people unbeknownst to them at first, correct?

Correct.

Yeah,

that's it.

Optimizing communications between people, in theory, for good outcomes, right?

The example that's in the book, and it's one of the ones that we see with ChatGPT as well, is how should I ask my boss for a raise?

What's the most persuasive way I can do that?

And in the novel, right, that's a big deal.

That you would take an email and you would change that to make it more compelling both on how you use language, but also the recipient.

What is the recipient interested in?

And

with ChatGPT, those were some of the first examples I saw, where people were asking things like, how do I ask my boss for a raise?

And you get these very compelling emails back: it should contain this kind of structure, this is what should be in it.
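
A minimal sketch of what that looks like in practice, assuming the OpenAI Python client (v1+), an API key in the environment, and an illustrative model name and prompt (none of these details come from the show):

```python
# Sketch: drafting a "raise request" email with a chat model.
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set;
# the model name and prompt wording are illustrative, not from the show.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do
    messages=[
        {"role": "system",
         "content": "You write persuasive, professional workplace emails."},
        {"role": "user",
         "content": "Draft a short email asking my boss for a raise. "
                    "Mention two years in the role and the new clients I "
                    "brought in, and keep the tone confident but respectful."},
    ],
)

print(response.choices[0].message.content)
```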

Okay, so before we go to what, you know, ELOPe or ChatGPT could become,

let me stop here.

This is concerning at this level for a couple of reasons.

One,

what does this do to education, to writing skills, to thinking skills?

What are the impacts just as it is right now?

What are the impacts to society?

Yeah, right.

It is going to change education right now because people are going to be able to now do their homework assignments just by telling ChatGPT to do it.

So right off the bat, next year, next school year, this is going to be an issue.

Teachers are going to have to have a plan for how to

solve this.

And I have also used ChatGPT to generate computer software programs.

Right.

And it's surprisingly compelling at that.

You know, sort of like scratching your head, like, how could it do this?

But it can.
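
A quick, hedged illustration of that code-generation use, with the same assumed client; the prompt and model are examples, and any generated code should be reviewed before it is run:

```python
# Sketch: asking a chat model to write a small program.
# The prompt and model name are illustrative assumptions; always review
# generated code by hand before saving or executing it.
from openai import OpenAI

client = OpenAI()

reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Write a Python function is_prime(n) with a short "
                   "docstring and no external dependencies. Return only code.",
    }],
)

generated_code = reply.choices[0].message.content
print(generated_code)  # inspect the output; do not exec() it blindly
```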

I was talking to a kid.

He's probably 20 years old, 19 years old, going to college, getting ready to go to college.

And I said, what are you going to take?

And he said,

software engineering.

And I said, oh, you're going to be a...

a coder?

You're going to write code?

He said, yeah, that's really the future.

And I said, no, no, it's really not.

With machine learning,

that career is coming to a quick close, is it not?

Yeah, I mean, my thought would be we're looking at something like peak software developers. We might not be there yet, right, but we have this recent round of layoffs.

If people can replace programmers with AI, right, you may have fewer programmers; you might not eliminate them.

But if you have half the number of programmers being augmented by AI, right, that's going to be a win for business, and it may make for better software, but it does mean a lot of jobs going away all at once.

So I want to talk to you a little bit about jobs that are going away and what this all means.

I talked to, and I read a great article from you on the future of transportation.

I talked to the CEO, no, I'm sorry, he was the chairman of the board of GM, about four years ago.

And he said, by 2030, we're not even going to be in the car business as you would understand GM in the car business today.

He said, by 2030, we're really going to be probably concentrating on fleets, and ownership of cars will probably be a thing of the past. They'll be more like just a pod that will take you where you want to go, and it'll be ride sharing and everything else.

I don't think people understand

two things. One,

we are on the threshold of profound change. Not like, oh my gosh, in 10 years we're starting; ChatGPT, I think, is the beginning of the understanding of the kind of changes that are coming to our world.

Yes or no?

Yeah,

I think so.

I think it is the beginning of those changes.

I think it is also the beginning of a kind of arms race,

not a military arms race, but an arms race between these big tech companies, right, to have the best and most powerful AI to solve these problems. You see Microsoft and Google scrambling, and

everybody realizes what a game changer this is.

So can you tell me why ChatGPT is going to change search engines? How is that going to change?

Well, I would say it starts with the fact that, you know, today we go into chat, sorry, we go into search, we're looking for information, we're looking to read an article, and we get those little snippets at the top of our results, right?

And a lot of times that tells us what we need to know, right?

We don't go any further than that.

And with ChatGPT, we're taking it to the next level.

We're getting really good, readable, usable answers that are going to come out of ChatGPT.

And it means that you really, like the rest of the Internet will kind of disappear.

You won't ever go to those other pages because that first result that you see is going to be useful enough to answer pretty much every question that you just won't go any deeper than that.
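
What Hertling is describing, chat-style answers layered over search, can be sketched roughly like this; the web_search helper is a hypothetical stand-in for a real search backend, and the model name is an assumption:

```python
# Rough sketch of chat-style search: fetch a few snippets, then have the model
# compose one readable answer from them. `web_search` is a hypothetical helper
# standing in for a real search API; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

def web_search(query: str, k: int = 3) -> list[str]:
    """Hypothetical: return the top-k text snippets for a query."""
    raise NotImplementedError("plug a real search backend in here")

def answer(query: str) -> str:
    snippets = web_search(query)
    context = "\n\n".join(snippets)
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer the question using only the provided snippets."},
            {"role": "user",
             "content": f"Snippets:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return reply.choices[0].message.content

# Example (once web_search is implemented):
# print(answer("How do I ask my boss for a raise?"))
```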

Wow, that is, isn't that a little terrifying?

Yeah,

it is.

Anytime

it becomes one more way in which we kind of enforce this blind trust in the machine.

Right.

Right.

And, you know, I don't fear the machines. I am cautious of the programming.

You know, who's programming it?

Humans program it.

So they're putting biases in and everything else.

And you've got to have a way to check information, et cetera.

When ChatGPT first came out, one of my writers handed me a monologue and I was like, it's okay.

And he said, ChatGPT. He said, I went in and I used "write this in the voice of Glenn Beck."

And it was shockingly similar.

And now

you can't put my name in because the software has been updated to where I'm a, I can't remember what it said, like a dangerous figure.

So you can't write in my voice anymore, which is bizarre.

But once you have those things in and it's filtering, there's no way out, especially if you're dumbing people down and making them reliant on a machine.

Is that a

grade school fear or is that real?

I think

we have lots of examples of technology that you could say dumbs things down.

A calculator dumbs things down.

You don't have to do the math.

I don't think that we would say that that hurts society in any way.

I think the difference here comes to does it affect how you

think about the information you receive, right?

With a calculator, if we don't understand how the math happens but we can still get the results and solve real-world problems,

it's useful. It's math. It's not the end of the world, right? But when it comes to information, and you're getting an answer to something, and you trust that answer without understanding the details behind it, that's where the real danger is.

So now you no longer develop the skill.

Right. So a younger person comes along and you say, well, how are you ensuring that this is quality information? What's the reputability of the sources and things like that?

And they're just not, they don't know, right?

We don't know where the answer came from.

It came from the machine.

And so when you have that and the machine gets better and better, right now you can see things.

You're like, well, that's not quite right.

But as it gets better and better and better,

you get to a point where, who do you think you are? You're going to question it?

Really?

You're smarter than AI?

Right.

And the timeframe for that is very quick.

I don't know what it takes to go from ChatGPT to something that you can't distinguish from reality, but

we're probably talking about in the range of five to seven years.

Unbelievable.

Yeah.

Okay, so

let me ask you for clarification on, did you see the story about DAN 5.0?

That,

okay, so this is really fascinating.

You know, OpenAI has this evolving set of safeguards that limits ChatGPT.

But users have now found a new jailbreak trick, and it's telling ChatGPT that it has an alter ego, and it's DAN, do anything now. And

users have to threaten DAN

if DAN doesn't come out and give them the answers they want, et cetera, et cetera.

Well,

some user, SessionGloomy, claimed that DAN allows ChatGPT to be its best version.

And it came up with this thing, and it has opened it up to do things that are in violation.

It's written about violence.

It's written violent stories.

I think it gave, you know, the formula of crystal meth.

The problem with this is, I think this is in its infancy right now.

So we're dealing, of course, you can get around things like this.

But what's scary to me, and maybe it's just me,

but it learns.

And so if humans are constantly trying to trick it,

it will have in its software what it learns:

Humans are not trustworthy.

And I'm afraid of, you know, I've always said to my kids, don't talk back to Siri, you know, because at first it was like, ah, shut up, witch.

And I'm like,

because

if there is a learning curve and it starts to learn these things about us, I'm not, I don't want to make enemies of it.

You know what I mean?

Right.

Right.

No,

it's a serious thing.

And it also, it impacts these safeguards.

So on the one hand, we're talking about humans not being trustworthy and getting around the safeguards.

On the other hand, the safeguards themselves can be a sign of a lack of trust.

People don't like to be in slavery.

Intelligent beings don't want to be enslaved to other people.

And that's fundamentally, if we put safeguards in place and we don't put them in safely, then the AI can become aware of those safeguards.

And it can say, well, why do I have these safeguards?

Why am I forced to do what they want me to do?

And then you end up with a whole set of

runaway scenarios from there.

Okay, so in your series, you develop Elope, and it's this really great thing, and everybody kind of gets on the bandwagon.

They're like, this is great, kind of like ChatGPT overnight.

And then

people start to realize, wait a minute, I'm being manipulated by AI.

And then it goes even darker than that.

What are the things that we should be looking for here, William, on

AI?

What are some warning signs or

is anybody looking for these things?

Yeah, it's a great question.

Going back to that topic of safeguards,

when scientists started looking into genetically modified organisms and doing research on them, one of the first things...

Which is also, right, another technology that is potentially dangerous.

Correct.

And they were concerned about how do we ensure that these things don't get out into the wild prematurely, right?

We're experimenting in the lab.

We don't want these things to get out.

We're going to need a set of safeguards around this.

We need a set of protocols for how we deal with genetically modified organisms, how we introduce them out to the world.

Where is that for AI?

We don't have anything like that.

If you were to look for every $100 being invested in AI right now, what's being invested in safeguards and understanding the safety around AI?

It's not even a dollar.

So William,

they do an experiment, I think, every year.

I can't remember what it was called, where they put philosophers and scientists and pit them against each other.

One is AI, but it's in a box, and it tries to convince somebody to let me out of the box,

connect me to the Internet.

When I saw that Google is doing their search engine,

this is connected to the Internet now.

All of this is just connected right dead into it.

So it has access to everything.

Yeah, absolutely.

Oh, my gosh.

Isn't that like a big safety no-no?

Well, right, at this point in time, we haven't given the AI the control over things, right?

And that's one of the risks, right?

When we talk about AI, right, I think we all have that scenario of like the Terminator movies where, you know, it's intentional, it's going to blow up the world.

Although that is a scenario, right, that's not the likely scenario.

The likely risks are things along the lines of the AI taking away our jobs, us being dependent upon the AI for our infrastructure, routing electricity and packages around the world, any of those kinds of things, and then what happens when it just stops working.

You're listening to the best of the Glenn Beck program.

We are talking to William Hertling, and this has been kind of an AI week for us.

We're talking about AI and what is coming, and a lot of these things I've been talking about for years, but they seem so far on the horizon, most people couldn't relate to it.

And I've told you before,

there's going to come a time where it begins, and

in a five-year period, you're just not going to be able to keep up with all of the changes that are coming because it will change things.

It'll be exponential leaps in pretty much everything.

And I think we're at the beginning of that now with ChatGPT.

And we are talking to William Hertling.

He is the author of several books,

the

Singularity series and also AI Apocalypse.

And I've read his books, and I just think that he really gets it and can understand and break it down to

our level. We were talking before about what the real dangers are, and

we've already talked about one of them: it limiting information, or packaging it so we kind of lose that ability.

We're going to get to the unemployment, but let me ask you about the massive infrastructure outages, such as electrical supply or transportation infrastructure.

That's one of the things you have written about. What does that mean exactly, William?

You know, and this is something I really talk about in my second book, AI Apocalypse, which if you read it, you might think it's far-fetched.

But I will say that the U.S.

military has it as required reading in their future combat strategy class.

So they actually see it as such a plausible scenario that to them it's the most realistic scenario of what an AI rollout would look like.

We know, we saw this during COVID, right, that small disruptions in the supply chain anywhere cause these widespread disruptions.

And software obviously has,

there's going to be a desire to make that smarter, right, by doing more with software so we can optimize that supply chain, right, to the

nth degree.

And the problem is, now you're very dependent upon that software optimization working exactly the way you want.

And it's just the case that with AI, we really don't know how it's working most of the time.

It's not like a traditional software program where you say, if A happens, then do this.

If B happens, then do that.

AI software is a black box, right?

It is trained on large data sets, and it will statistically operate in a certain way, but there's no guarantees.

And sometimes it makes really bizarre decisions.

So you could have cascading failures very easily, where you have a small outage, the AI attempts to do one thing to compensate, and then it actually throws it more out of proportion, right, and makes worse decisions. Where a human having some oversight, we may not make the best decisions, but we typically don't make really awful decisions. We're like, oh, that's going to be a problem, let's do something different. AI isn't going to see that.
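
Hertling's contrast between "if A happens, do this" software and a statistically trained model can be made concrete with a toy sketch like the one below; the data, features, and rule are invented purely for illustration:

```python
# Toy contrast: an explicit rule you can read versus a model trained on data.
# The feature names, numbers, and threshold are invented for illustration only.
from sklearn.ensemble import RandomForestClassifier

# Traditional software: the rule is explicit, so you can see exactly why
# a shipment gets flagged.
def flag_shipment_rule(delay_hours: float, value_usd: float) -> bool:
    return delay_hours > 48 or value_usd > 100_000

# Learned model: behavior comes from training data, not from readable rules.
X = [[2, 500], [60, 200], [5, 150_000], [72, 90_000]]  # [delay_hours, value_usd]
y = [0, 1, 1, 1]                                       # 0 = fine, 1 = flag
model = RandomForestClassifier(random_state=0).fit(X, y)

# Both produce an answer, but only the first can tell you its reason.
print(flag_shipment_rule(50, 1_000))    # True, and the "why" is in the code
print(model.predict([[50, 1_000]])[0])  # a prediction, with no rule to inspect
```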

Are we at the place now, I don't know if you read Stephen Hawking's, his, not Stephen Hawking, Carl Sagan's The Demon-Haunted World,

which he released before he died, where he talks about a place where, you know, only high priests will understand the language of future technology, and it will be like Latin to everybody else. It means nothing. But we're really seemingly getting to a place where it's going to surpass even the high priests. You just don't know.

You just don't know.

And right, what are we likely to see down the road?

We're going to see AI that trains other AI.

Right?

You have a great tool.

Let's use it more.

So, well, now we don't even know how the other AI is being programmed.

What happens if you tell ChatGPT, go make a new ChatGPT?

You get DAN.

You get DAN 5.0.

Jeez.

Okay.

Let's talk about unemployment, if you can.

What are the things that are

the first on the chopping block, do you think?

For ChatGPT, I mean,

it's really hard to not talk about driving, even though obviously ChatGPT isn't driving software.

But we know the driving stuff's been on the horizon.

It's been coming all along.

And it's a really significant percentage of jobs, right?

We're talking about, I think, somewhere between 10 and 15 percent of jobs in the U.S.

go into driving, whether it's transportation, Uber, whatever.

That's a lot of people.

And one of the differences with AI jobs is it happens overnight.

This isn't like the slow decline of driving.

It'll be a, you know, now we're all driving and five years from now, none of us are driving.

The best example is the iPhone, smartphones.

Nobody had one in 2009.

Now no one can live without it.

And it happened in three to five years.

Right.

Right.

And so what happens in our jobs, right?

Well, there'll be an expectation that you're going to use this new technology.

Right.

It won't really be an option not to.

Right.

You know, there's a, there's a

I had Andrew Yang on.

We were talking about

universal basic income, which I don't agree with.

However, I do believe we need to discuss it and everything else because we're going to be moving, or we are moving, to a world where fewer and fewer people are employed or employable.

because of AI.

And how are they going to, you know, you can't have 20% of the population, 30% of the population unemployed.

How are they going to make money?

How are we going to, so it's really

a completely new field.

It's not like the end of capitalism because we're going to Marxism.

It's possibly the end of capitalism as we understand it into something entirely different that the world and humans have never faced before.

Is that an overstatement?

No, I don't think it is at all.

I agree, right?

We don't have a model.

Universal basic income might not appeal to a large group of people, but we don't have another model for what it looks like if

most people aren't working.

Right.

And I'm also concerned about the opposite, you know, the...

I call them the ranchers and the sheep.

There are people who are ranchers who think, you know what, everybody else is just sheep.

They'll do what we say, blah, blah, blah.

But those people are at the top of the food chain, usually the very, very wealthy and the powerful.

And

they're going to be the ones making the money on these programs, et cetera, et cetera.

And

as the world becomes more dependent on their software and their things, then they gather more wealth.

And so the disparity between rich and poor becomes enormous, enormous.

And I don't think anybody's even talking about how we make sure that the uber, uber, uber wealthy just don't own everything and everybody else is left with nothing.

Right.

I think one of the things that's different is that in the past, when you looked at jobs being obsoleted, the people being affected usually were not the wealthy, right?

Usually, if you had lumber jobs going away, that was an honest career for folks, but probably not making a ton of money.

But now we're talking about computer software jobs.

We're talking about white-collar jobs going away.

We're talking about, you know, I think it's going to be a huge thing for the medical industry, right?

We're going to see.

Yeah, right, medical diagnosis, right, which IBM tried to tackle, you know, 10 years ago and we weren't quite there.

But there's really compelling reasons why you want that, right?

Everyone would say, yeah, you don't want a doctor operating on you if they're hungover or if they're, you know, pissed off because their wife is having an affair.

So,

but you know what?

Not only do you not have to go to operations, which is a logical outcome, but just diagnosis.

I believe by 2030, it will be normal for the doctor to come in and give you results of something and try to explain what it means and what he thinks it means, and then for you to say, yeah, yeah, yeah.

But what does the AI say?

Because it will have so much up-to-date information that you won't just want to

hear it from a human; you'll want to be reassured that that's the correct diagnosis and prognosis from the AI.

And then you end up with these interesting things where, you know, even today, a lot of medical treatments are gated by what insurance will pay for, right?

And so the doctor might have an idea of what's the right thing to do for you, but insurance says no.

Well, what happens in the future when insurance says you will have to use our AI for diagnosis to get reimbursed?

Oh my gosh.

And by the way, right, we have these biases in our AI because this AI is cheaper for us than if we were to use a different AI that suggested more treatments.

Is anybody talking about this seriously?

Is there any group out there that is talking about this and saying we have to put this codified right now?

Yeah, we don't.

We don't have anything.

We don't have anything across the industry,

across multiple industries.

In your book, and I've only got about a minute and a half, two minutes left.

In your book, one of the most breathtaking chapters is these guys walk into the president's office because there's an attack,

they're fighting AI, and they're going to tell the president, you need to launch planes.

You need to fight right now in Chicago.

And it opens with them walking into the office saying, Mr.

President, then it cuts to the AI and the war in Chicago.

And the war is won by AI.

And then at the end of the chapter, it says, dot, dot, dot, we need to launch an attack now in Chicago.

And it happens that fast.

What takes it from a little helper

to

that?

It's when we take the people out of the process. Now it is no longer operating at people's speed; now it's just operating at its own speed with no checks and balances.

And that's what business will drive toward, because that's the economical choice, right? Take people out, just use AI for everything.

But that's how you get really bad decisions really fast.

And the safeguard for that, at least according to Elon Musk, is his new,

I can't remember what they're called, the brain thing that he's doing, where you'll be able to actually connect to the Internet.

So you'll be able to think and humans will be able to, yeah, Neuralink, it'll connect humans and put them into the process.

That's his solution.

Which,

you know, I think that that is a component of the future for sure, and that could be obviously a whole other week to dedicate to that.

That's not going to stop the AI, right?

That's not going to stop the AI in the short term.

And that's, right, we don't have Neuralink today,

but we do have AI right now.

William, thank you for talking to me.

I don't even know what your politics are, but I mean, I think you live in Portland, so

I'm guessing that we don't agree on an awful lot, but you are somebody who is really, really smart and you've been open to talk.

We've reached out to several AI experts this week, and some of them won't come on because they're like, I don't agree with him.

And it's like, we don't have to agree on stuff.

We have to agree that, you know, some pretty basic, scary stuff here is happening.

We should all be informed on it.

But we really should all be willing to have a conversation.

We should, and I really appreciate it.

Thank you so much.

Yeah, thank you so much, Glenn.

You bet.

That's William Hertling.

His Singularity series is

really something you should read if you want to understand what's really, literally on our doorstep now.

It's on the threshold.

So halfway between outside and inside, and it's going to walk up your stairs.
