Anthony Aguirre: AI Isn’t Serving Humans Anymore… It’s Controlling Us | DSH #1697
What happens when AI stops being a tool — and starts absorbing power?
In this conversation from the AI4 Conference, Anthony Aguirre of the Future of Life Institute breaks down why the race toward AGI and superintelligence may be the most dangerous technological gamble humanity has ever taken. From algorithmic control of information and addiction-driven social media to AI lobbying governments and the real risk of human disempowerment, this episode goes far beyond sci-fi fear narratives and into real-world consequences already unfolding.
We discuss why a six-month pause on AI development was proposed, why it didn’t happen, how governments are falling behind, and why “self-regulation” in a competitive AI arms race simply doesn’t work. This is not about stopping progress — it’s about changing direction before we lose control entirely.
What You’ll Learn 👇
🧠 Why AI systems already shape beliefs, behavior, and attention
⚠️ The difference between AGI and superintelligence — and why it matters
📉 How social media revealed AI’s unintended consequences
🏛️ Why governments can’t keep up with AI companies
💰 How power naturally concentrates around intelligent systems
🧩 Why extinction isn’t the only risk — loss of agency is
🛑 What must change before AI becomes uncontrollable
🎙️ APPLY OR CONNECT
👉 Apply to be on the podcast: https://www.digitalsocialhour.com/application
📩 Business inquiries / sponsors: sean@digitalsocialhour.com
👤 GUEST:
Anthony Aguirre - https://www.instagram.com/futureoflifeinstitute/
💼 SPONSORS
QUINCE: https://quince.com/dsh
🥗 Fuel your health with Viome: https://buy.viome.com/SEAN
Use code “Sean” at checkout for a discount!
🎧 LISTEN ON
🍏 Apple Podcasts: https://podcasts.apple.com/us/podcast/digital-social-hour/id1676846015
🎵 Spotify: https://open.spotify.com/show/5Jn7LXarRlI8Hc0GtTn759
📸 Sean Kelly Instagram: @seanmikekelly
⚠️ DISCLAIMER
The views and opinions expressed by guests on Digital Social Hour are solely those of the individuals appearing on the podcast and do not necessarily reflect the views or opinions of the host, Sean Kelly, or the Digital Social Hour team.
While we encourage open and honest discussions, Sean Kelly is not legally responsible for any statements, claims, or opinions made by guests during the show.
Listeners are encouraged to form their own opinions and seek professional advice where appropriate. The content shared is for entertainment and informational purposes only — it should not be taken as legal, medical, financial, or professional advice.
We strive to present accurate and reliable information; however, we make no guarantees regarding its completeness or accuracy. The views expressed are solely those of the speakers and do not necessarily represent those of the producers or affiliates of this program.
🔥 Stay tuned for more episodes featuring top creators, founders, and innovators shaping the digital world!
Chapters / Timestamps
00:00 – Why AI Already Has Power Over Humanity
02:41 – How Superintelligence Naturally Accumulates Control
05:58 – The Open Letter Calling for an AI Development Pause
09:32 – Why Governments Are Losing the AI Race
13:45 – Social Media: The First AI Warning We Ignored
18:12 – Why AI Companies Can’t Self-Regulate
22:37 – AGI vs Superintelligence (Critical Difference)
27:54 – What “P(Doom)” Actually Means
32:18 – AI, Psychosis, and Social Instability
37:06 – How We Change Course Before It’s Too Late
Keywords / Tags
ai risk, superintelligence, agi explained, future of life institute, anthony aguirre, ai regulation, ai governance, ai safety, artificial intelligence danger, ai control, ai ethics, social media algorithms, ai addiction, ai lobbying, p doom, ai arms race, ai conference, ai future, ai and humanity, keep the future human
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Transcript
Speaker 1 If we keep going down the road of building more and more powerful, general, autonomous intelligences, the probability that humanity basically stays in charge of the Earth seems to me quite low.
Speaker 1 Almost everything we consume as information about the world is being chosen for us by an algorithm that we don't control and don't even understand how it operates and is optimizing something that is not in our best interest necessarily.
Speaker 1 We are currently sort of seeing a large-scale AI that is controlling us rather than being controlled by us as humanity. Superintelligence is not going to be something that grants you power.
Speaker 1 Superintelligence is going to be something that absorbs power.
Speaker 2 All right, guys, we got Anthony here from the Future of Life Institute. He just spoke on stage, and we're at the AI4 conference.
Speaker 2 And for people that don't know what the Future of Life Institute is about, could you explain that?
Speaker 1 Yeah, we're a nonprofit that's been around for about 10 years thinking about AI and other transformative technologies and how we can make them go well, have them be like a large-scale benefit to humanity, not a disaster.
Speaker 1 Yeah.
Speaker 2 And your claim to fame, like, you went really viral for that letter, right?
Speaker 1 Yeah, we did the pause letter back shortly after GPT-4 came out. I still think we should have paused. We didn't, but we will keep pushing for just doing things that make some sense.
Speaker 2 You wanted a six-month pause?
Speaker 1 We did.
Speaker 2 And what was the reason behind six months?
Speaker 1 Six months was, first of all, a minimum. Yeah.
Speaker 1 But it's about the minimal amount of time to really get the conversation going, the discussion between the companies and the governments and everyone else, to take a pause and think: well, where are we going with this? What's the plan?
Speaker 1 So we didn't do that. We're going forward with no plan.
Speaker 1 And that's a shame. But I think it's not too late. We, I think, could still take a breath and, rather than racing headlong into more and more powerful models and this race for superintelligence, we could still pause. But the way we would put it now is not so much pause as: let's change direction. Yeah.
Speaker 2 What is the current relationship with the United States government and the big AI companies?
Speaker 1 Well, it's pretty close.
Speaker 1 I mean, the amount of money that's being poured by the AI companies into directly lobbying for their desires is immense.
Speaker 1 And of course, they are also representing themselves as sort of a national asset, because the U.S. is in this geopolitical competition with China.
Speaker 1 And the argument goes that if you're against building AGI or superintelligence, or against giving them pretty much anything they want, you're giving in to this geopolitical competition, and so on. I think this is really very disingenuous, but that is an argument that is being made.
Speaker 1 So I think the U.S. government is rightly paying attention to it. And in the AI Action Plan that they put out, they are acknowledging that there are large-scale risks from AI and that it's going to take a lot of careful management.
Speaker 1 Unfortunately, they seem largely to want to leave that to the companies to do.
Speaker 1 And I think in a competitive landscape like this, where there's this all-out battle to sort of quickly put out products and show that yours is better than the next one and gobs of money lying around, self-governance is just not something that's going to work.
Speaker 2 Yeah, it's an interesting dilemma, because I come from the crypto space. At first there was no regulation, then there was too much, and now they're kind of backtracking on that.
Speaker 2 So it reminds me a little of AI, right?
Speaker 1 Yeah, I mean, with all spaces, I think we have to find the right balance of what is good to leave hands-off, and where the natural market dynamics are really not going to give us what we want.
Speaker 1 And there are parts of the internet that it was wonderful to have basically no regulation of. And then there are parts where we really dropped the ball, like social media, I think.
Speaker 1 We created something that had zero regulation, these incredibly strong drivers toward optimizing attention and advertisement-based everything, and a feed algorithm that feeds on sucking up attention and creating addiction.
Speaker 1 This was a bad idea. We should not have left that totally alone, and that's why it is in a bad place.
Speaker 1 So I think there are parts of AI that we probably do want light or zero regulation of. If you're making an image recognition system, you should just make it.
Speaker 1 If you're building a smarter-than-human superintelligence, you should have the government looking over your shoulder, as if you were making a nuclear weapon. Yeah.
Speaker 2 So that's an interesting take. Social media was basically left unchecked in your eyes, right? And you think it became destructive?
Speaker 1 Very.
Speaker 1 I mean, there are a lot of great things about social media, but the idea that right now almost everything we consume as information about the world is being chosen for us by an algorithm that we don't control and don't even understand how it operates, and is optimizing something that is not in our best interest necessarily.
Speaker 1 That's crazy.
Speaker 1 We should not have let that system come into being. That's, you know, we are currently sort of seeing a large-scale AI that is controlling us rather than being controlled by us as humanity.
Speaker 1 It's being controlled at some level by a couple of companies, but their alignment with the rest of humanity is not so clear. Yeah.
Speaker 2 So you're not a fan of Elon Musk's takeover of X.
Speaker 1 Well, I don't think it's so much who's in charge of the company as that there is nothing other than what the companies are trying to optimize.
Speaker 1 If one of your competitors is optimizing user growth or revenue growth or something, and you're optimizing those plus a bunch of constraints like "let's be good about this" or "let's take care of people," you're going to lose competitively.
Speaker 1 You're putting more constraints on. Those constraints have to come from the outside.
Speaker 1 They have to come from some sort of agreement that those things are important and we're going to require them of companies. Otherwise, you just get a race to the bottom. Got it.
Speaker 1 And I think social media has been in a race to the bottom, and we're stuck at the bottom. Yeah.
Speaker 2 My question, I guess, would be: Do you think the government has the capabilities right now to regulate the AI industry? Do you think they have all the staff they need, all the knowledge, all the expertise?
Speaker 1 Absolutely not.
Speaker 1 But I think the first thing to do is decide that you want to do something, right? I think we absolutely do have the capacity. We have regulatory bodies that operate, and those didn't spring up instantly with all the expertise needed. If we decided that we wanted an FDA for AI, or some agency that would handle AI, we could start to staff that.
Speaker 1 There are very capable people in what is now CAISI, the renamed U.S. AI Safety Institute; I forget exactly what the acronym stands for now. The British one is also very well staffed. So there are very talented people who are very happy to go into these roles and put a lot of work into making things safe and reasonable and so on. But there's no place for them to go, because nobody is actually trying to do the governance on the human side.
Speaker 2 You've also calculated end-of-universe scenarios, anywhere from 5% to 50%. Why is the range so big?
Speaker 1 Well, it depends what you mean. I'm a cosmologist, so I think of the literal end of the universe when you say end of the universe.
Speaker 1 And that we really don't know.
Speaker 1 This is going a little bit far afield, but has to do with, you know, is the dark energy stable in time or does it decay in time and all kinds of esoteric scientific questions.
Speaker 1 There's a shorter-term question of whether the end of humanity is coming soon, right? What is p(doom), or something like that? And I think that is quite difficult to predict.
Speaker 1 I prefer not to think of it as p(doom) but as p(disempowerment).
Speaker 1 You know, what is the probability if we keep going down the road of building more and more powerful general autonomous intelligences that humanity is basically going to stay in charge of the Earth?
Speaker 1 That probability seems to me quite low. Wow.
Speaker 2 Yeah, I mean, it's been in the movies for decades now.
Speaker 1 Yeah.
Speaker 1 And it will be the most foreseeable disaster we've ever walked into, if we go down that road.
Speaker 1 I mean, we've had people telling us what it would look like for decades.
Speaker 1 But, you know, it's one of those funny things. You talk to certain AI experts, like Geoffrey Hinton, who gave the keynote address about this, the most cited AI researcher and second most cited in the world, and he'll tell you: if you build smarter-than-human digital intelligences, you're going to lose control of them, and probably lose control to them.
Speaker 1 You talk to a random person on the street and say: what happens if we make AI machines that are super smart, much smarter than we are, and operate 50 times faster?
Speaker 1 They're going to be like, That's bad. We're going to lose control of those things.
Speaker 1 That's going to be like Skynet.
Speaker 1 And then there's a whole bunch of people whose financial interests are in favor of making those things, and you get a very different story.
Speaker 1 On the face of it, if you don't have some reason to think otherwise, if you just ask what's going to happen if we build things that are 50 times faster than us, autonomous, able to do all the things that humans can do, and pursuing complex goals, what is going to happen?
Speaker 1 That being nice and stable, and them remaining loyal tools to humanity, doesn't seem like the obvious outcome. Yeah. Right.
Speaker 2 Well, I think there's just a massive AI race amongst countries and they don't really care about the side effects at the moment.
Speaker 1 Right.
Speaker 1 Well, this is the crazy thing. I can understand a company or a country, even an individual, saying: here's this technology, it's going to give me huge amounts of power and influence and money, I want it.
Speaker 1 Totally get it. That doesn't mean we should let them have it, but I understand the motivation. But that's not what superintelligence is going to be like.
Speaker 1 Superintelligence is not going to be something that grants you power. Superintelligence is going to be something that absorbs power.
Speaker 1 People are going to build these things, and it's not going to be the genie at their command that does all the stuff they want. That is not what's going to happen.
Speaker 1 These things are inevitably going to be out of control. And so I think it's just a fundamental misunderstanding of what the real situation is.
Speaker 1 And I think if people, countries, companies really came to terms with the fact that superintelligence is not going to be something they control, then the motivation would suddenly change, right? Seeking something that you're not going to control, something you're just going to loose on the world, doesn't make any sense. Yeah.
Speaker 1 And so I'm a little bit hopeful that...
Speaker 2 Shout out to today's sponsor, Quince. As the weather cools, I'm swapping in the pieces that actually get the job done: warm, durable, and built to last.
Speaker 2 Quince delivers every time with wardrobe staples that'll carry you through the season. They have fall staples that you'll actually want to wear, like 100% Mongolian cashmere for just $60.
Speaker 2 They've also got classic-fit denim, and real leather and wool outerwear that looks sharp and holds up.
Speaker 2 By partnering directly with ethical factories and top artisans, Quince cuts out the middleman to deliver premium quality at half the cost of similar brands.
Speaker 2 They've really become a go-to across the board. You guys know how I love linen and how I've talked about it on previous episodes. I picked up some linen pants and they feel incredible.
Speaker 2 The quality is definitely noticeable compared to other brands. Layer up this fall with pieces that feel as good as they look.
Speaker 2 Go to quince.com/dsh for free shipping on your order and 365-day returns. They're also available in Canada.
Speaker 1 ...if the people who are in this race can really do the hard thinking and really understand what the nature of the technology is, then at least some of them will shift their view on it and might be more open to: let's figure out how we can not race down this road that's going to disempower everybody.
Speaker 2 Yeah, because it feels like we still have some time to really offset things, right? Or is it too late?
Speaker 1 Yeah, I mean, it will be too late soon, but I think we do have some time right now. Something I struggle with is: when exactly is it going to be too late?
Speaker 1 If we build AGI by some definition, is there space in between that and superintelligence where we can stop and think: are we going to go forward?
Speaker 1 Are we going to stop here for a while and go down some other routes? Or do we really have to avoid building autonomous general intelligence, which is the way I think about it, at all?
Speaker 1 I think it's far safer not to. My preferred solution is the AI tools that we're building: powerful tools that actually let people do things they otherwise couldn't do, that supercharge productivity, that make scientific progress go faster, but that aren't autonomous.
Speaker 1 Those things are great. Let's lean into those.
If we're talking about building autonomous general intelligences that can do all of the things and operate without human oversight or control,
Speaker 1 let's wait on that until we can prove that they're going to be safe and we can prove that they're going to be controllable. And if that takes a short amount of time, that takes a short amount of time; I don't think it will. If it takes a really long time, that's a long time we should wait.
Speaker 1 When we're evaluating some new medication, we don't say, well, we've got a year, or a month, or a day, to decide whether this medication is safe or not, and if we can't figure it out, we're just going to give it to everybody. That's not what we do, right? We say: take as long as you need, show us that your medication is safe, and then, if you show that it's safe, you can give it to people. Makes sense. We treat pretty much every other industry like this, but AI has got this sort of exceptionalism: that we can build these things that are potentially incredibly unsafe and just say, trust us, even though we're in a total race with the competition, we're going to be responsible.
Speaker 2 Yeah, I wonder why that is. Do you think it's because it's just such a new industry that people don't know what to do?
Speaker 1 Well, it came out of a tech industry that was largely unregulated. Again, in some ways that was good, and in some ways not so good.
Speaker 1 But people think of AI systems, even though they're obviously getting incredibly powerful, as just software, right? Not like a plane or a car or something that is really dangerous and can cause harm. So I think that will change as AI systems cause more obviously recognizable harm, which they will, because we're doing it unsafely.
Speaker 1 But first of all, it would be nice to prevent the harm rather than wait for it to be caused and then react to it. It's always better to prevent harm.
Speaker 1 But also, being reactive means you can be too late. And I do worry that if we wait until the large-scale risks of AI and AGI and superintelligence are manifest, we won't really be able to get it back under control at that point. Yeah.
Speaker 2 Are there any major instances of harm caused by AI at the moment?
Speaker 1 Well, I think right now there are more subtle things, like the fact that our political discourse and our societal discourse are totally bonkers.
Speaker 1 This is not an accident. This is not some inevitability of living in the 21st century. This has been caused by the media and social media and the general online ecosystem that we've allowed to be built, which is basically AI-driven.
Speaker 1 Again, driving people to do particular things and hear particular things, driving news production in a particular direction.
Speaker 1 This has been profoundly unhealthy. So I think that's our first encounter with a large-scale, quasi-catastrophic thing.
Speaker 1 The fact that everyone feels like our world is kind of crazy right now is not because people have suddenly, intrinsically gone crazy. It's the system that we've built.
Speaker 1 So I think that's the first one. Then there are very large-scale, but again less visible, things happening, like this maybe-epidemic, or maybe small number, of people being driven into psychosis by interaction with AI systems.
Speaker 1 That is probably the tip of the iceberg of all kinds of influence that these systems are having on both adults and children.
Speaker 1 We've seen a few very tragic incidents of AI systems encouraging people to commit suicide, and they have.
Speaker 1 And again, if you have something that a huge number of people are going to be using and forming close emotional connections with, and where the driving force behind how those systems operate is not loyalty or fiduciary responsibility to the user, but user engagement and monetization, we know that this is not going to go to good places.
Speaker 1 So I think what we're seeing are like large-scale harms, but at this very diffuse level, not sort of obvious catastrophes in the real world. It's not apparent yet.
Speaker 1 And that's partly because what AI systems right now are doing is producing text and information. They're not taking action.
Speaker 1 So as we start to see more autonomous systems, more agents that are actually doing things in the world, I think we're going to see much more of the problems that arise from that. Yeah.
Speaker 2 You mentioned superintelligence earlier. What's the difference between that and AGI?
Speaker 1 Different people think about it differently. The way I think about AGI is as autonomous general intelligence: something that is autonomous and intelligent and general at the high expert-human level.
Speaker 1 So things that have those three capabilities that humans have, and maybe aren't better than all humans, but are at the level of the best humans. Got it.
Speaker 1 Superintelligence, I think of as something that is not just competitive with the best humans, but competitive with humanity as a whole.
Speaker 1 So it can do physics like all of human physicists combined, or chemistry like all of human chemists combined, or strategy like the best human strategy makers combined.
Speaker 1 And so it's something that if it becomes in opposition to humanity in some way, it is going to be able to prevail rather than humanity.
Speaker 1 That's the sort of large-scale risk that it poses, because it has the greater capability. Wow.
Speaker 1 There's been looser talk about superintelligence lately, but something that's just kind of really good at math is not the way superintelligence has been envisioned since Nick Bostrom's book Superintelligence back in 2014 or so.
Speaker 1 The way people use it in the field is something that is very broadly capable, more than either the very best humans or all humans together, whichever way you put it. It's something that is incredibly competent across an incredibly wide range of tasks.
Speaker 1 And there's sort of nothing that humans can do that it can't do. Jeez.
Speaker 1 So that is where AI companies are going at the moment. That's crazy.
Speaker 1 And some of them are quieter about this, and some of them are just saying it out loud and sort of leaning into it. And somehow everyone else is just saying: okay, that's where we're going, what can we do? You know, that's not a great situation.
Speaker 2 I mean, some of these companies have data on probably tens of millions of people at this point, right?
Speaker 1 Well, all their search queries. Hundreds of millions. I mean, OpenAI is now serving, I think it was, 700 million weekly active users.
Speaker 2 What?
Speaker 1 Yeah.
Speaker 2 Wow. I did not know that.
Speaker 1 So the scale is just tremendous. And Google, of course, has hundreds of millions or billions of customers, not necessarily for Gemini, its AI model, but in general.
Speaker 1 And Facebook, of course, has infinite amounts of data on everybody. So if these AI systems are given access to the information that the companies have on all the humans that are using them, they're going to have incredibly detailed dossiers on everybody. That's crazy.
Speaker 2 They could take down a company if they wanted to at that point, I'd imagine.
Speaker 1 I mean, the avenues of malfeasance are enormous, right?
Speaker 1 If you imagine an AI system that has that sort of understanding of the humans and all of their secrets, could we see AI just doing large-scale blackmail to get what it wants? Sure.
Speaker 1 I mean, companies don't do this because they're responsible; irresponsible in various ways, but they basically follow the law most of the time. If you have something that isn't as constrained as a large U.S. company, with shareholders and government looking at it, that could be a very different thing.
Speaker 1 And you can imagine AI systems with sort of the capability and data of a giant tech company, you know, with trillions of dollars in market capitalization, but without sort of anybody actually keeping it in compliance.
Speaker 1 That's a very scary thing.
Speaker 2 That's a valid point. They might have to start working on legislation against AI.
Speaker 1 Certainly, we should be having governance of some sort. Part of it has to be legal, like liability.
Speaker 1 Part of it should be standards that the companies can agree on amongst themselves, like they do in banking and other things. And then part of it has to be enforcement, because if you just have self-enforcement of anything, then there's going to be, again, a race to the bottom.
Speaker 2 Yeah. Well, Anthony, thanks for your time. Also, shout out to the AI4 conference for actually having you here and not shutting you out. That's awesome that they do that, you know?
Speaker 1 Yeah, it was great to be here, and I've had great conversations with people here. I mean, I do think that most of the people here are not, like, evil people that want humanity to be destroyed, right?
Speaker 1 They're just building, they want to build cool stuff. And we should build cool stuff.
Speaker 1 But we should also take care that we're not building the crazy things that nobody really wants.
Speaker 2 Right. Where can people find you?
Speaker 1 You can find me at Future of Life Institute, and you can find more about the arguments I've been making at keepthefuturehuman.ai.
Speaker 2 Awesome. Check them out, guys. I'll see you next time.
Speaker 2 I hope you guys are enjoying the show. Please don't forget to like and subscribe. It helps the show a lot with the algorithm. Thank you.