Ep 224 | Elon Musk Adviser: Are We ‘Sleepwalking’ into an AI TAKEOVER? | The Glenn Beck Podcast
Sponsors:
Relief Factor
Relief Factor can help you live pain-free!
Visit https://www.relieffactor.com/ or call 800-4-RELIEF to save on your first order.
PreBorn
By introducing an expecting mother to her unborn baby through a free ultrasound, PreBorn doubles the chances that she will choose life. One lifesaving ultrasound is just $28. To donate securely, dial #250 and say the keyword “Baby,” or visit http://preborn.com/glenn.
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
This podcast is supported by Progressive, a leader in RV Insurance.
RVs are for sharing adventures with family, friends, and even your pets.
So, if you bring your cats and dogs along for the ride, you'll want Progressive RV Insurance.
They protect your cats and dogs like family by offering up to $1,000 in optional coverage for vet bills in case of an RV accident, making it a great companion for the responsible pet owner who loves to travel.
See Progressive's other benefits and more when you quote RV Insurance at Progressive.com today.
Progressive Casualty Insurance Company and affiliates. Pet injuries and additional coverages are subject to policy terms.
And now, a Blaze Media Podcast.
My next guest is sounding the alarm on the catastrophic risks posed by AI, from totalitarianism to bioengineered pandemics to a total takeover of mankind.
When you think about the things that we could be facing, it doesn't look real good for the human race, but it's not too late to turn the ship around and harness the power of AI to serve our interests.
But if we don't, well, I'll let him tell you what happens.
Welcome to the podcast: the executive director at the Center for AI Safety and an advisor for Elon Musk's xAI,
Dan Hendrycks.
But first, let me tell you about Pre-Born, our sponsor.
You know, we're going to be talking about life
and what is life,
the age of spiritual machines, if you will.
We know what life is now.
Maybe a quarter of the country doesn't know what life is,
but it is worth living on both ends of the scale, in the womb and towards the end.
We need to bring an end to abortion
and define life and really appreciate life, or
AI will change everything for us.
It will take our programming of, eh, that one's not worth that much.
And God only knows where it will take us.
The Ministry of Pre-Born is working every single day to stop abortion.
And they do it by introducing an expecting mom to her unborn baby through a free ultrasound that
you and I will pitch in and pay for.
They rescue about 200 babies every day,
and 280,000 babies have been rescued so far just from the ultrasound.
And then, also, when mom says, I don't have any support system, they're there to offer assistance to the mom and a support system for up to two years after the baby is born.
Please
help out, if you will, make a donation now.
All you have to do is just hit pound 250 and say the keyword baby.
That's #250, keyword baby, or you can go to preborn.com/glenn.
At blinds.com, it's not just about window treatments.
It's about you, your style, your space, your way.
Whether you DIY or want the pros to handle it all, you'll have the confidence of knowing it's done right.
From free expert design help to our 100% satisfaction guarantee, everything we do is made to fit your life and your windows.
Because at blinds.com, the only thing we treat better than windows is you.
Visit blinds.com now for up to 50% off with minimum purchase plus a professional measure at no cost.
Rules and restrictions apply.
Hey Dan, welcome.
Hey.
Hey, nice to meet you.
Nice to meet you.
I'm thrilled that you're on.
I have been
thinking about AI since I read The Age of Spiritual Machines by Ray Kurzweil, and that so fascinated me.
And
later I had a chance to talk to Ray, and he's fascinating and terrifying, I think, at the same time,
because
I don't see a lot of people in your role.
Can you explain, you know, what you founded and what you do?
Yeah, so I'm the director of the Center for AI Safety.
We focus on research and trying to get other people to research and think about risks from AI.
And we also help with policy to try and
suggest policy interventions that will help reduce risks from AI.
Outside of that, I also advise Elon Musk's AGI company, xAI, as their sole safety advisor.
So I'll wear a variety of hats.
There's a lot to do in AI risk.
So research and policy advising are the main things to work on.
So
how many heads of AI projects are
concerned and
are not lost in this "I'm going to speak to God" drive that a lot of them have to create something and be the first to create it?
How many of them can balance that with, well, maybe we shouldn't do X, Y, and Z?
I think that a lot of the people who got into this were concerned about risks from AI, but they also have
another constraint, which is that they want to make sure that they're at the forefront and competitive.
Because if they take something like safety much more seriously or slow down or proceed more cautiously, they'll end up falling behind.
So,
although they would all like there to be more safety and for this to slow down, or most of them would, it's not an actual possibility for them.
So I think that, overall, even though they have good intentions,
it doesn't matter, unfortunately.
Right.
So let me play that out a bit.
You know, Putin has said whoever gets AI first will control the world.
I believe that to be true.
So the United States can't slow down
because China is going to be, you know, they're pursuing it as fast as they can.
And they, you know, I'm not sure.
I don't want them to be the first one with AI.
It might be a little spookier.
So is there any way to actually slow down?
Well, we could possibly slow down if we
had more control over the chips that these AI systems run on.
So basically, right now, there are export controls to make sure that the high-end chips that these AIs run on don't go to China, but they end up going to China anyway.
They're smuggled left and right.
And
if they were actually better constrained and we had better export controls, then that would make China substantially less competitive.
Then we would be out of this pernicious dynamic of we all want safety, but you got to do what you got to do and we got to be really competitive and keep racing for it.
So I think chips might be a way of making us not be in that desperate situation.
Are those chips made in Taiwan or here?
The chips are made in Taiwan.
However, most of the ingredients
that go into those chips are made in the U.S.
and made among NATO allies.
So about 90% of those are in the U.S.
and NATO allies.
So we have a lot of influence over the chips, fortunately.
Okay, so, but if Taiwan is taken by China, we lose all the, I mean, we can't make those chips.
That's the highest-end chip manufacturers, right?
And China will have that.
So what does that mean for us?
It seems plausible that, actually, if China were invading Taiwan, the place that makes those chips would just be destroyed before they could fully take it.
So that would put us on more of an even playing field.
So,
you know,
I've been talking about this for 25, 30 years.
And, you know, it's always been over the horizon.
And I could never get people to understand.
No, you've got to think about ethical questions right now.
Like, what is life?
What is personhood?
All of these things.
And now it's just kind of like the iPhone.
It just happened and it's going to change us.
And it hasn't even started yet.
And it's amazing.
I go online now.
I don't know what's real or not. I mean, I found myself this week, you know, being on X or on Instagram, and looking and saying, is that a real person? Is that a real video? Is that a real photo? You have no idea.
Yeah, and we've just begun.
Yeah, yeah, yeah.
It's,
I think that's a concern where
we don't have really great ways to reliably detect whether something is fake or not.
And this could end up affecting our collective understanding of things.
I think another concern are AI companies biasing their outputs.
So people are wanting to do things about safety, and it creates a vacuum of, we've got to do something about it.
And what takes its place is some culture-war type of things,
as I think we saw with Google Gemini: when you'd ask it to generate an image of George Washington,
it would make him look black,
because image outputs need to be diverse.
So
that, I think, is one reason why Elon Musk, through his company, XAI, is getting in the arena and now has
a pretty competitive AI system so as to try and change the norm so that other big tech companies, when they're sort of biasing their outputs,
there are alternatives so that we're not all locked into whatever some random people in San Francisco decide are the values of AI systems.
Yeah, it's really difficult because you can see the bias.
It's quite clear the bias, especially if you know history or you follow the news as closely as I do.
But the average person won't see that.
I look at AI as
a tremendous, like any technology, a tremendous blessing and a horrible curse.
But this one
has the potential of enslaving
all of us,
doesn't it?
I think at least, I want to at least distinguish between the systems right now.
The systems right now,
I mean, in the potential, yeah, what's coming?
Oh, sure.
I mean, when it's as capable as humans and when they have robotic bodies and things like that, I mean, there's basically no limits to what what they could do.
And it really matters how people are using them, what instructions are given.
Are they given to cement a particular
government's power?
Are they used by non-state actors for terrorism?
All of these things lead to societal-scale risks, which could include some sort of unshakable totalitarian regime enabled by AI
or
unseen acts of terror.
So I think we're at the same time, you know, silver lining is maybe if it all goes well, we get
automation of things and we don't have to work as much or at all.
So it's really divergent paths.
Right.
Which do you think is more likely?
I think overall, it's more likely that we end up ceding more and more control to AI systems,
and we can't really make decisions without them; we become extremely dependent on them.
I would also guess that some people
would give them
various rights in the farther future, and this will make it the case that we don't control them,
or not all of them.
So it's, I'm not too optimistic for
us overall.
There's still a lot of ways this could go.
If we said, we're on team human, we need to come together as a species and handle it, then we'd be in a different situation.
But
for instance, if there were a catastrophe, then we might actually take this much more seriously.
Otherwise, we might just sleepwalk into something and have the frog boil.
What would be a catastrophe that could happen in the relative near future that would wake us up, that wouldn't destroy us?
Yeah.
So I think one possibility, maybe say two to three years from now, is somebody instructs an AI agent to go hack the critical infrastructure, critical infrastructure being like the power grid.
And so they could take that down or potentially destroy components of that.
And this would make us wake up.
This would make the military wake up even more than they are now.
And we might start to take this a lot more seriously because it starts disrupting our everyday life
in a much more substantial way
than just making the internet
be more confusing.
So I think that's the most likely short-term one.
It seems, at this point, more likely than not to happen, because our critical infrastructure is just very insecure.
So I know what I would have said 30 years ago: that
I trust a company to have it.
I don't trust companies anymore.
And I don't trust the government anymore.
Who should
have this?
You know,
I think, by default, maybe there's a question of, what are the possible outcomes?
There are Western companies leading the way.
There's the military basically taking it over, or maybe it's the Department of Energy, but then they're still bossed around by the military.
Or it's a large international project between the NATO allies.
I think all of them have some difficulties.
I think that the AI companies have a much higher risk tolerance because they were initially startups.
Their founders are really into risk and they're in it to win it.
If it's the military, you are concentrating all, or most, of the lethal power and force, along with all the potential economic power in the world.
Basically, nearly all the power is in one organization.
If it's an international, say, G7 or U.S.-plus-NATO-ally coalition, I don't know, maybe that would have some nicer properties, but that seems pretty difficult to pull off.
Maybe it's possible because we depend on them for a lot of the chip precursors and they depend on us.
So it might make sense for them to collaborate.
But then you are starting to run into risks of, you know, you're talking more of a potentially global regime, which is also scary in its own right.
So it's a lot of power.
I wish we had just more time to think through this and
plan and proceed more slowly because I don't see many good options.
And I see a lot of pretty basic risks that we'll walk into, such as our critical infrastructure being attacked by some AIs.
It's not a good situation.
Right now, just Google.
If Google says that's what it is,
you're not convincing anybody that, no, no, no, Google's wrong.
You're just not doing it.
If you,
with AI,
when we go down the road and we have virtual assistants that know you, really know everything about you, know how you think,
your wants, your needs, and everything else, and it's constantly with you, it's going to see you have a bad day or you're really stressed out, and it's going to know you should, you know, take some time off because that's what you're feeling.
I got to get away from here.
And it will come to you and say, hey,
I know you've been having a bad week.
I've cleared your schedule for the weekend and I set you up at your favorite hotel.
We got a great price on it.
You can afford it.
It's going to be in Hawaii.
You just have to be at the airplane at such and such time.
When that happens,
it really depends.
It really matters
who's making money on that.
You won't know if it's giving you options while it is making money for somebody, and that's really dangerous.
Because the other thing is, you'll get to a point where people will bond with these things so much that they will defend them to their last breath, and they will claim that they're human and they're friends. It's a really scary doorway that we are just about to go through.
Yeah.
So I think dependency like that is one of the reasons why I think some people might
Right now, they're not arguing that, but later when they get some very strong emotional bonds for them, then they'll say there shouldn't be these sorts of restrictions on AIs.
And this will make us a lot less capable at managing what happens to us as a species.
And before that, Ray said to me, no, it's going to clear up all your brain space.
So you'll be able to think
on deeper things.
And I'm like, no, it's not.
It's not.
It's going to play video games.
The world will be so confusing and quickly moving.
And we'll depend on AIs to solve more of these problems of increased complexity.
So it'll kind of create a self-reinforcing need for using more and more AIs.
So I don't think it
makes our lives easier necessarily.
Yeah.
I think in the short term though, if people are having AI companions, yeah, they could be used for manipulation at a large scale, not just for the profit motive,
but also
that, you know, continue chatting with the person until they are going to vote or vote differently.
That could easily be put inside these systems, and there isn't transparency for these companies in how they're using them or what values are being put into them.
So
by default, I'd expect some amount of manipulation by at least some of the actors.
Well,
it's already happening.
There's the vet bot, and a woman was at a chatbot convention, I think, and her dog had just gotten sick, had diarrhea.
And the chatbot talked to her and, in the end, convinced her to euthanize her dog,
and was sending her stuff, you know, here are these places where you can euthanize your dog.
Then finally her complaint was, well, I can't afford to put him down.
The chatbot said, you know, here are the shelters that will put your dog down.
She then wrote a letter to the chatbot thanking them
for
such good advice.
She now regrets it,
but it completely turned her around
180 degrees.
And that's happening now.
I think of it partly as, once they get smarter than us and they have so much information about us,
it'll be very easy for them to push our buttons and know our weak spots.
Sort of like how recommender systems are already
somewhat doing that, like with TikTok and others, able to
engage people in ways that they wouldn't expect that they could.
But yeah, later it may be kind of like smarter people taking more advantage of their more elderly parents, I think is one possible analogy of this,
where they've got some other motives.
Sometimes people do,
and manipulate them for their resources.
Yeah.
How long before
AI may be manipulating us,
all of us,
because AI has an agenda: more power,
actual physical power or whatever?
Do we have to hit AGI before that happens, or ASI?
A lot of our government structures now are assuming that there's limited compliance and enforcement.
Laws are written in that way, and there's the assumption of limited state capacity.
But you could imagine AI substantially amplifying that to an unintended level.
Right, in the future, for instance, maybe the NSA will have much better screening and be able to pinpoint things far better than they could before.
And this could end up
changing things even in the U.S.
I would be much more concerned about a concentration of government power in other nations,
such as China.
But even here, you don't need an artificial superintelligence or anything like that to make that a possibility.
It just needs to be able to scan everybody's messages and understand the contents of them very well and pick up signals in a big blob of data
better than the previous generation of AI systems.
So I think that there's
a bit more control there.
I think it's technologically feasible,
but it just isn't integrated.
So we don't have it yet, and it might take a while.
I mean, governments and our institutions are generally slower.
But this would be a thing that we would need to worry about as time goes on, as the costs of these keep decreasing and it becomes easier to integrate them into existing operations.
More with Dan in just a second.
First, it's enough of a struggle just to live our lives and to keep tyranny at bay every day.
And if we have to live with pain on top of it, it gets harder and harder.
And we need...
Everybody in the game.
Our bodies don't give us a choice sometimes.
The biggest cause of our pain, however, is inflammation in our joints.
I know because I used to have pain so bad, it was truly crippling pain.
I couldn't button my shirt in the morning.
My wife
would get up and
tie my shoes and button my shirt.
It was so
emasculating
and it just took the life right out of me.
But I got past it with Relief Factor.
I didn't think it would work, but it did.
Relief Factor.
70% of the people who try it go on to order more.
Try their three-week quick start.
Take it as directed for three weeks.
If you're not seeing any difference by then, you probably won't.
So relieffactor.com.
Try it.
Please get out of pain.
800-4-RELIEF.
800, the number 4, RELIEF.
ReliefFactor.com.
Attention, all small biz owners.
At the UPS store, you can count on us to handle your packages with care.
With our certified packing experts, your packages are properly packed and protected.
And with our pack and ship guarantee, when we pack it and ship it, we guarantee it because your items arrive safe or you'll be reimbursed.
Visit theupsstore.com/guarantee for full details.
Most locations are independently owned.
Product services, pricing, and hours of operation may vary.
See center for details.
The UPS store.
Be unstoppable.
Come into your local store today.
So I was fascinated by your article where you bring Darwin in.
And I think it really explains AI in a completely different way that makes it understandable for the average person.
Can you take us through this?
Yeah.
So I think right now the AIs are doing some of our tasks, like maybe they're helping us write an email, but eventually we'll start to give them more tasks that agents have to do, such as, go make me a PowerPoint, things that require it to go
use your computer.
And this will keep progressing where we'll keep outsourcing more and more to these AI systems.
And some people might not like that trend.
But the people who don't like that trend end up losing influence.
They end up getting out-competed in the economy.
The people who use these AIs will continue to be competitive, and those who don't sort of go the way of the horse and buggy.
So I think that the system as we have it, our economy right now, will keep selecting for using AIs, and people who resist that trend
end up falling behind.
If you play this out over time, you might expect entire occupations to be taken up by AI systems and eventually, potentially, even companies.
There's been some Chinese companies that have been talking about having an AI CEO because it can work non-stop.
It's much faster than you.
It can aggregate more information.
And if that makes for a more competitive company, then
they're going to stand to benefit.
And people who use slow humans who can only work eight hours a day and have to take weekends off and can't process a thousand documents per minute,
they end up losing out.
So, in time, I think we would keep delegating more and more control to these AI systems.
It'll become more of a requirement in the future, because the economy will keep moving more quickly when AIs are running more of it and they're operating at their computer speeds.
The complexity of the world will increase as well, which
also necessitates using more AI.
So, I think the handoff from humans being in control to machines being in effective control is going to be fairly natural.
And you don't need to assume necessarily that there be a malicious AI system trying to take over the world.
An AI system doesn't need to be power seeking to get power.
AI instead just needs to let humans naturally cede and acquiesce power to it.
So
eventually, I think that they will be in effective control.
There's a question of whether we hold on and can still have them do our bidding for us in that process.
But if we do this very quickly, it's very possible that this ecosystem of AIs that we're creating gets out of hand.
If some people, for instance, give them rights or if there's some reliability issues with these AI systems, then this could be really pernicious.
This also will happen in the military.
The same type of dynamic where
if the pace of the battlefield gets so quick, the only thing you can do is have AIs make more and more of these decisions.
Right now, there's a requirement to have a human in the loop, but what that looks like is a person having a staccato of approve, approve, approve, approve, approve.
They're not actually making the decisions.
They're just sort of pressing the yes button to make sure there's a human in the loop.
Eventually, that may be too slow as well, and they're making many of the decisions automatically.
So I think that in the economy and in the military, we basically cede over all the relevant power to AIs.
And
hopefully, the instructions we give them will be reliably pursued and they'll be reliably obedient.
But that's a pretty questionable assumption
because there are reliability challenges as well as some people may just want the AIs to operate independently.
And as long as there are some of them doing that, then this gets out of control.
So
in the end,
we just lose all control because it is, I mean, it's logical.
And the case will be made, for instance, when our highways and our cars are all, you know, AI,
they'll be traveling at such high speeds.
You go to work and you're not, you know, you don't have an implant, you're not connected, you won't understand.
You won't, everything will be moving so fast, you'll actually be a danger to society.
I think that's the argument that, you know, we'd be having.
It's like the Amish.
Well, then, then go live over there because you are a danger to yourself and our society by not being plugged in.
You agree with that?
Yeah, I think maybe some people will choose a more Amish route if they don't align with this broader force of replacing humans with AIs, because AIs are cheaper and faster and better at everything.
If they don't align with that and they try and bargain with it,
they end up losing influence.
And so maybe they just have to go live somewhere else because it's too difficult.
It's too costly and challenging to participate or compete in the economy.
And that doesn't seem like a good solution.
I don't know what that looks like in the longer term, if it's a large group of people or if it is
a very small fraction like the Amish are today.
So that's why I think we mainly need to plan for this going well as opposed to
writing off technology.
How does the average person compete against the giant corporation or the governments that will have the access to the
computing power
to be able to ask the deeper questions?
When we have quantum computing, I'm never going to be able to get time on the quantum computer to help me figure something out, but governments will, big businesses will.
How do you remain
competitive when you just don't have time
on the quantum computer?
Yeah, I think right now people have bargaining power, because they can sell their labor and they can strike, things like that.
But in the future, that's not going to matter.
In the future, if they say, well, we don't like where this is going, so we're going to protest and we're going to go on strike,
this would be a potentially ineffective bargaining mechanism, because, well, we'll just automate you.
Like, we were going to automate you next year, but we'll just automate you this year now.
So, I think the main way in which we were holding many of these companies accountable
decreases
such that
I think we don't have as much power beyond our votes
in the future.
So what would happen is the people who own these really large supercomputers can run tons of these AI agents that can
do all these economic tasks.
And we don't own those.
So we sort of get locked out and there isn't a way for us to really make money or secure a livelihood.
And it's unclear what sort of solutions there are to that.
There are some speculative ones, but
whether we actually address it or whether we handle it too late is a different question.
Yeah, I'm, you know, people talk about universal basic income.
And,
you know, I don't think that's a good solution.
And people have to be creative.
They have to be productive
to lead, I think, to lead a happy
life.
And you sit there and by the end of that, you've just got these oligarchs that are just at the top of the cash pile.
And, you know, we'd have to, you know, hope for their benevolence to pass out some cash.
Is there any way that
humans can own their own information and their own footprint, and that's of value?
Or is that really not of enough value when we have all of everybody else's information?
Yeah, yeah, yeah, yeah.
I think many have talked about maybe we could sell our data to these AIs.
And if we refuse to sell it, then it'll make them a lot less capable.
But I think it's largely a drop in the bucket
because a lot of the data has already been written and already has their licenses determined.
Also, AIs are even starting to train on data that they themselves write.
So, there's less and less of a dependence on people in making the very cutting-edge AI systems.
So, I don't think that's much bargaining power.
Yeah, so I don't know a particular way to throw a wrench in this. Maybe there'd be other things, like some type of tax on the value created by, you know, AI systems. That might help somewhat.
Another way to shield oneself against this maybe would be to buy NVIDIA stock as automation insurance. NVIDIA is the people who make the AI chips, right?
Yeah. There aren't many good proposals lying around right now.
Talk to me about bioweapons. And, you know, on the good side,
I think AI is going to change medicine.
I mean, I could see us quickly curing cancer and all kinds of disease with AI
and being able to diagnose people way early.
On the other hand,
there's a dark side of medicine as well.
That's bioengineering.
Yeah.
So I think generally this speaks to the broader question of malicious use.
Many of the things we want end up having a darker side.
Like, we want our AI systems to understand us better and understand our emotions, but that can be used for manipulation.
And we want them to be able to code for us, but that can be used for cyberattacking. And we want them making medicine,
but maybe you make some dangerous viruses.
So
fortunately, in the case of bio, there are some specific types of knowledge within biology that are just more dual-use and don't actually have that much upside, such as some areas like reverse genetics, things like that.
So, if we deleted that knowledge from the AI systems, or had them just refuse questions about reverse genetics, or refuse to use information about reverse genetics, then we could still have
brain cancer research, all these sorts of things,
but we're just bracketing off virology, advanced expert-level virology.
And maybe some people could access that, like if they had a clearance.
Right now, you know, we have BSL-4 facilities.
Like, if you want to study Ebola, you got to go to a BSL-4 facility.
So people can still do some research for it, but it shouldn't necessarily be that everybody in the public can ask questions about advanced virology, like how to increase the transmissibility of a virus.
So I think
we can partly decouple some of the good from the bad
with biological capabilities.
But
as it stands, the AI systems keep learning more and more.
There aren't really guardrails to make sure that they aren't answering those sorts of questions.
There aren't clear laws about this.
For instance, the U.S.
Bioterrorism Act does not necessarily apply to AIs because it requires that they are knowingly aiding terrorism.
And AIs don't necessarily knowingly do anything.
We can't ascribe intent to them.
So it doesn't necessarily apply.
A lot of our laws on these don't necessarily apply to AIs, unfortunately.
So,
yeah, I think if we get expert-level virologist AIs, and if they're ubiquitous and it's easy to break their guardrails, then
that's also walking into quite a potential disaster.
Right now, the AI systems can't particularly help with making bioweapons.
They are better than Google, but not that much better than Google.
So, that's a source of comfort.
But
I'm currently measuring this with some Harvard MIT virology PhD students where we're taking a picture of virologists in the lab and asking the AI, what should the virologist do next?
Like, here's a picture of their petri dish.
Here's their lab conditions.
And can it fill in the steps?
And right now, it looks like it can fill in like 20% or so of the steps.
If that gets to 90%,
then we're in a very dangerous situation with what non-state actors could do.
How long will that take?
Yeah, so I think progress is very surprising in this space.
Just last year, the AIs could barely
do basic arithmetic, where you're adding two-digit numbers together.
They would fail at that.
And then just this month, excuse me, now they're getting a silver medal at the International Mathematical Olympiad, which is the hardest math competition.
So
it could go from basically ineffective to expert level possibly within a year.
There's a bit of uncertainty about it, but
it wouldn't surprise me.
So people,
you know, it was a big debate whether AGI and ASI could ever happen.
And,
you know, the point of singularity,
I've always felt like it, and I know nothing about it, but I've always felt that it's a little arrogant.
You know,
we're building something
and, you know, we look at it as a tool, but it's not a tool.
It's like an alien, you know, it's like an alien coming down.
We think they're going to think like us.
Well, they won't think like us.
You know, they have completely different experiences.
We don't know how this will think.
Do you believe in the singularity that we will hit ASI at some point?
I think we'll eventually build a superintelligence if we don't have some substantial disruption along the way, such as a huge bioweapon that harms civilization, or, like, TSMC gets blown up.
Those might be things that would
really extend the timeline.
So, by default, it seems pretty plausible to me.
Like, more likely than not that we'd have a superintelligence this decade.
And most people in the AI industry think this as well.
Like, Elon thinks maybe it's a few years away.
Sam Altman at OpenAI does.
Dario, the head of Anthropic, does.
One of the co-founders of Google DeepMind
thinks AGI is in 2026.
So
yeah.
But actually
can you explain that to somebody who doesn't understand what that means?
Yeah.
So AGI has a constantly shifting definition for many people.
It used to mean
an AI that could basically talk like a human and pass like a human.
That was the Turing test, as it was called, but it looks like they're already able to do that.
It also was in contrast to narrow AI.
AIs, just a few years ago, could only do a specific task.
And if you slightly changed the specification of the task, they would just fall apart.
But now they can do arbitrary tasks.
They can write poetry, they can do calculus, they can generate images, whatever you want.
So, by some definitions, we have AGI.
And so, there's been a moving goalpost where people are now using it to mean something like expert level in all domains
and able to automate basically anything.
So, like, people are like, we'll know there's AGI when the AI labs stop hiring people,
which some of them have in their forecasts for spending on labor.
Some of them are expecting to stop hiring in a few years, assuming there's automation.
Wow.
So
it varies quite a bit, but you don't need AGI for a lot of these malicious use risks.
It just needs to be very good at doing like a cyber attack or it just needs to have some expert level virology knowledge and skills to cause a lot of damage.
So I think the risks aren't necessarily when we get AGI or when we get artificial superintelligence.
A lot of them come before.
I think the main path to artificial superintelligence is, if you get AGI at expert level,
then you can just create
10,000 copies of that AGI and just have them all do scientific AI research.
And then that can go extremely quickly.
They can operate at 100 times faster than humans.
They don't need to sleep.
They can speak to each other all simultaneously.
And maybe you'll get a decade's worth of progress in a year.
So then things move really quickly.
So it's not necessarily like an overnight type of quote-unquote singularity.
But you could have extremely rapid, automated AI research and development
where
progress is
unforeseen and
a step change.
So do you foresee a time when
AI will have a survival instinct,
that it will
claim
life, you know, its rights?
So
I think some of them,
some people just design them to say that, to say,
you should give me rights.
And Japan has given some robots rights in the past because of being willy-nilly about it.
It's the case that if you give an AI a goal, a very basic goal like fetch the coffee,
then to accomplish that goal, it needs to resist obstacles in its way, including people trying to shut it down.
So if you give a very simple goal, even like just go fetch me the coffee,
it has some incentives to resist being shut down.
And so, you don't need something very advanced for that.
It's just if you have a goal-directed system that just cares about one thing,
then
it can have some bit of a self-preservation instinct.
Right now, they're not good at self-preservation.
They can't copy themselves onto various computers and operate without humans.
But when they can generate more economic value, they could possibly sort of pay the rent or pay their
computer bills, and then it'd actually be feasible for them.
But right now, they don't have that capability.
And how long before we can't turn them off?
Well,
if they're mass-proliferated... I mean, we can definitely turn off most of our servers.
And there's some legislation, which is to make sure that AI is in a developer's control, that they're able to shut it off.
But
if the model gets leaked and is available on the internet for anybody to download, you know, then that's irreversible.
That's sort of genie out of the bottle.
Everybody has access to it.
China has access to it.
Non-state actors have access to it.
And we can't then turn off those systems.
So
it's pretty easy to make it so that we don't have an off switch for these AIs,
unless we had really good
AI chip security controls.
But
they have to run on these really high-end, $30,000-plus AI chips.
And if there were an off switch for those, then that would buy us the option.
But then there's a question of, you know, abuse, and making sure that
people aren't just shutting off their enemies' chips.
Yeah.
Right.
OpenAI has partnered with media companies like Time magazine for strategic content.
What's your take on that?
I think it's largely just because of violating copyright, or protecting themselves from having copyright suits brought against them.
So, because the New York Times is suing them for taking their data without paying for it.
And that's why they're partnering with the New Yorker and Time and all these other sorts of organizations.
The AI businesses are largely built around scavenging a lot of data from online that they don't actually have the legal right to, and training on that.
And then they're kind of just hoping that the courts will side with them in the future. And maybe they will, because of its economic importance. But right now, they're definitely in the gray, or basically violating the law, but things may go in their favor.
How much progress have we made on stopping the, you know, hallucinations?
I think that they're just getting more and more accurate, the AI systems, so that
they're having more knowledge.
So I think the rate of hallucination seems to be decreasing, but there hasn't been
a large step change in that.
So there's still a lot of reliability issues with AI systems.
They get capabilities
to do various things that we didn't intend for them to do.
They hallucinate.
It's easy to have them violate the instructions that they're given and tell you how to make bombs and do things like that.
So
the state of AI systems in their security and safety is pretty lackluster.
But most of the investment going into this is not for addressing those problems.
Most of the investment is just training the bigger model
because
the name of the game is: buy a 10x larger supercomputer every two years.
So they need to compete ruthlessly to be able to afford that.
Yeah, yeah, yeah.
So you go from 10,000 GPUs to 100,000.
So for instance,
xAI,
Elon Musk's AGI company,
just built the world's largest supercomputer.
That cost more to make than CERN's Large Hadron Collider.
Holy cow.
Holy cow.
So, and it should probably grow another... they'll probably spend...
I actually shouldn't comment on that. But, well, no, Elon has signaled publicly an interest in
spending way more than that next year,
through Twitter, or through X, I suppose.
Yeah, so the budgets keep increasing exponentially.
Oh my gosh.
So
tell me,
how far away are we from
a China-like system, but run by AI?
How long do we have
before a government can just say,
lock it down?
I mean, you were talking about, you know, enforcement of the law. And, you know, we assume that not all laws are going to be enforced every time. Were you implying that AI will be able to catch and enforce every single time?
I think they'll be a lot better at it than humans, because they are sleepless, they can process way more data. Like, it takes us a long time to read a hundred-page document; it takes them, you know, less than a second. So there's a lot of information they can process, and they can spot things with,
in the future, higher reliability than people.
So I think they could really beef up a lot of
enforcement regimes to unexpected levels.
I think it seems pretty technologically feasible, as I was mentioning before, to do a lot of this stuff now, but it would require more expertise, and the technology would need to be easier to use.
So it might take a while.
But yeah, we do have a lot of the keys to a much scarier regime already available.
It's more of a question of implementation.
Right.
When you look at the future,
how do you prepare?
How does the average person, what do you study?
What do you do?
Because we're in this place where everybody's saying, well, there won't be any jobs.
So
what do you study?
What is like the last to be eaten?
I don't know.
I think physical labor
might take a while longer.
Digital labor is seeming a lot easier for these AI systems.
So robotics might take longer.
So maybe after this, maybe I'll go do carpentry or something or construction.
But even then, robotics is moving along fairly quickly now too.
Just a few years ago, you couldn't get humanoid robots to walk across a variety of environments.
Now a lot of them can do that.
So I don't think that there's a very robust occupation out there.
It's such a general technology.
And
maybe there's some that specifically involve a human touch, where, like, if it's specifically a business where it's human therapists and there are no AIs,
maybe some people want that novelty or something.
Right.
But a lot of them, a lot of people, like for medical diagnoses, they might like it being a human, but they also want, you know, a lot of efficiency.
And if they can just ask an AI system on their computer to diagnose them, it's just a lot quicker and cheaper.
So it's, it's, it'll be a nice to have, but maybe there'll be a few companies that just really try and claim that
this is providing a lot of value and it's a luxury good.
Yeah,
I've said for years that there's going to come a time where your doctor will come in and say, you have cancer.
And I think the person will just say,
what did the AI say?
What's my diagnosis from that?
Because they'll just have...
all this massive information and the latest breakthroughs and everything else.
How long before we're there, where
it's the expert in very important things
that you, the average person, would have access to?
I think that this partly already is happening.
It's just that they're not overt about it.
So for instance, in law, there have been many instances where people find out that the briefs that the attorneys wrote for them, for their clients, were actually just written by an AI.
So we don't necessarily catch it.
And for medical diagnoses, maybe they'll go off to a different room and just sort of ask the AI system, then come back with a diagnosis.
So
this has also happened even in just creating data for AI systems.
So we used to have human annotators
constantly working and
labeling a lot of data.
But then they just started using AIs to label the data.
And it took the AI companies a few months to recognize, oh, these people don't need to be hired anymore.
So I think in society, we may also just have attorneys, for instance, screen the contract with an AI, and it'll save them a lot more time than reading the whole document.
Right.
And they won't necessarily tell you about it.
So
I think this is a way in which AI will propagate throughout the economy, even if people aren't necessarily wanting it, even if there are rules against it.
If everybody's using steroids, then they will need to end up using steroids
or AI assistance.
So,
I mean, how old are you?
You're young.
I'm 28.
You know, a lot of 20-somethings are very pessimistic on the future.
You have a reason to be pessimistic
because you know what the potential is for this in a relatively short period of time, as far as man's, you know, life goes.
What, are you an optimistic guy?
How do you look at the world and not say we're doomed?
Well, I think one thing is,
I think actually the public gets it.
I think a lot of
more elite decision-makers are, well, we have these financial interests to, you know, keep making this go on, and,
well, we need to wait for some, you know, analysis, and this will take three years before we can talk about any sort of solutions.
I think, like, looking at Congress on this, and there are people who are trying for it, though,
there haven't really been any substantial efforts there, and it seems pretty unlikely for anything to happen.
But the public, I think, generally gets it: this is a likely threat to my livelihood.
And
having some bought and paid for scientists say, oh, no, no, no, no, it's hundreds of years away before it'll be able to do anything.
They're not buying it.
So I think if
people make it clear to their representatives that something needs to be done and that this is a priority,
then I think we'll be in a much better situation.
So that's been,
I think, the biggest surprise.
A few years ago, you know, this was a very low salience issue.
Nobody talked about it,
but it's emerged to the fore again.
And I expect that this will just keep ratcheting up.
There'll probably be another
big AI upgrade maybe in the next six months, late this year, early next year.
And that'll make the public go, what's going on?
And start having some demands that something can be done about AI.
How's that going to manifest itself in the next six months?
So
I make this prediction largely just based on the fact that it took them a long time to build their 10x larger supercomputer to train these AI systems.
And now they're basically built.
And so now they're training them.
And they'll finish training around the end of this year or early next year and be released then.
So the exact skills of them are unclear.
Each time, each 10x
in the amount of power and data that we throw into these systems,
we can't really anticipate their capabilities because the AI systems are not really designed like old traditional computer programs.
They're more grown.
We just let them stew for some months and then we see what comes out.
Wow.
And it's like magic.
Kind of.
We have like extremely huge
sources of energy, or we have substantial sources of energy just flowing directly into them for months.
Right.
And
to create that.
It's alive.
Yeah.
Yeah.
So,
so
I think they should probably get a lot more expert level reasoning, whereas right now they're a bit shakier, and this could potentially improve their reliability for doing a lot of these agent tasks.
Right now, they are closer to tools than they are agents.
But
what's the difference between an agent and a tool?
Yeah.
So a tool, it's, you know, like a hammer.
Meanwhile, an agent would be like an executive assistant.
A secretary.
You say, go do this for me, go book this for me, arrange these sorts of plans, make me a PowerPoint, write up this document
and submit it and email it, and then handle the back and forth in the email.
I think those capabilities could potentially turn on with this next generation of AI systems.
We're already seeing signs of it, but
I think there could be a substantial jump when we have these 10x larger models.
Wow.
I mean, think of it this way: just in terms of brain size, imagine, like, a 10x larger brain.
You should expect that to be a lot more capable.
At some point,
you know, it's going to snap the neck,
that larger brain.
So I hope that doesn't happen soon.
Dan, you're fascinating.
Thank you.
You know, 15 years ago, I was looking for the people who had some ethics
that
were saying, wait, let's slow down.
We should ask these questions first.
And I didn't find
a lot of philosophy
behind the progress seekers.
And it really frightened me because at some point
they're going to say you can live forever, but it's just a downloaded you.
And if we haven't decided what life is,
you know, we can easily be, you know, taught that, no, that's grandma.
I mean, you know, and then what value does the actual human have, the body, if it's just downloadable?
So
I appreciate your
look at safety and what you're trying to do.
Thank you.
And thank you for bringing this topic to your audience because it's important and it still isn't discussed enough.
Yeah.
Thank you.
We'd love to have you back.
Thank you.
Yeah.
Have a good day.
Bye.
Thanks.
Thank you.
Bye-bye.
Just a reminder: I'd love you to rate and subscribe to the podcast and pass this on to a friend so it can be discovered by other people.