Marczell Klein: AGI Will End Humanity: Act Before It’s Too Late | DSH #1453

36m
🚨 Is artificial general intelligence (AGI) humanity's greatest threat? Join Sean Kelly on this gripping episode of the Digital Social Hour podcast as he and Marczell dive into the urgent reality of AGI and its potential to transform—or end—our world. 🌍💻

Packed with valuable insights, this conversation unpacks how AGI could outsmart humanity in mere seconds, disrupt global economies, and even pose existential risks. From mind-blowing scenarios like fake nuclear launches to the collapse of supply chains, Marczell lays out why action needs to be taken before it’s too late. 😱🤖

Are we racing toward a future we can’t control? How can we pump the brakes on AI development and protect society? This episode is a wake-up call for everyone who cares about the future of life as we know it. 🚦🛑

🎙️ Don’t miss out on this thought-provoking podcast that’s sparking a much-needed global conversation. Watch now and subscribe for more insider secrets. 📺 Hit that subscribe button and stay tuned for more eye-opening stories on the Digital Social Hour with Sean Kelly! 🚀

CHAPTERS:

00:00 - Intro

00:27 - AGI Threats and Risks

04:56 - AI Consciousness Timeline

05:01 - Sponsor

06:02 - AI Escape Scenarios

07:02 - AI and Cyber Attacks

09:12 - Preventing AGI Catastrophe

10:00 - Sponsor

12:06 - AGI Apocalypse Scenarios

14:00 - Trusting Elon Musk

18:56 - AI vs. Divine Power

22:57 - AGI Economic Impact

24:15 - World Destruction by AGI

26:26 - Expert Consensus on AGI Threat

27:17 - Narrative Control by the Powerful

30:37 - Elon Musk's Provocations

32:58 - Pope’s First Message on AI

33:03 - Final Thoughts on AI

33:39 - Cognitive Dissonance in AI Debate

34:04 - Facing the Reality of AI

APPLY TO BE ON THE PODCAST: https://www.digitalsocialhour.com/application

BUSINESS INQUIRIES/SPONSORS: jenna@digitalsocialhour.com

GUEST: Marczell Klein

https://www.instagram.com/marczell

SPONSORS: CODE Health

A drug-free alternative to over-the-counter and prescription medications safe for people and animals.

Website: https://partners.codehealthshop.com/

Use DSH at checkout to save 10% or use DSH100 to save $100 on the CODE Travel Kit

LISTEN ON:

Apple Podcasts: https://podcasts.apple.com/us/podcast/digital-social-hour/id1676846015

Spotify: https://open.spotify.com/show/5Jn7LXarRlI8Hc0GtTn759

Sean Kelly Instagram: https://www.instagram.com/seanmikekelly/

The views and opinions expressed by guests on Digital Social Hour are solely those of the individuals appearing on the podcast and do not necessarily reflect the views or opinions of the host, Sean Kelly, or the Digital Social Hour team.

While we encourage open and honest conversations, Sean Kelly is not legally responsible for any statements, claims, or opinions made by guests during the show. Listeners are encouraged to form their own opinions and consult professionals for advice where appropriate.

Content on this podcast is for entertainment and informational purposes only and should not be considered legal, medical, financial, or professional advice.

Digital Social Hour works with participants in sponsored media and stays compliant with Federal Communications Commission (FCC) regulations regarding sponsored media. #ad

#DigitalSocialHour #SeanKelly #Podcast #AGI #ArtificialIntelligence #AI #FutureOfHumanity #Technology #TechTalk #AIThreat #ApplePodcasts #Spotify

#ai #machinelearning #ainews #aiexplained #agiinsights


Transcript

Speaker 1 The holidays mean more travel, more shopping, more time online, and more personal info in more places that could expose you more to identity theft.

Speaker 1 But LifeLock monitors millions of data points per second. If your identity is stolen, our U.S.-based restoration specialists will fix it, guaranteed, or your money back.

Speaker 1 Don't face drained accounts, fraudulent loans, or financial losses alone. Get more holiday fun and less holiday worry with LifeLock.
Save up to 40% your first year. Visit lifelock.com slash podcast.

Speaker 2 Terms apply.

Speaker 3 This is Marshawn "Beast Mode" Lynch. PrizePicks is making sports season even more fun.
On PrizePicks, whether you're a football fan, a basketball fan, it always feels good to be right.

Speaker 3 And right now, new users get $50 instantly in lineups when you play your first $5. The app is simple to use.
Pick two or more players. Pick more or less on their stat projections.

Speaker 3 Anything from touchdown to threes. And if you're right, you can win big.
Mix and match players from any sport on PrizePicks, America's number one daily fantasy sports app.

Speaker 3 PrizePicks is available in 40-plus states, including California, Texas, Florida, and Georgia. Most importantly, all the transactions on the app are fast, safe, and secure.

Speaker 4 Download the PrizePicks app today and use code Spotify to get $50 in lineups after you play your first $5 lineup. That's code Spotify to get $50 in lineups after you play your first $5 lineup.

Speaker 4 PrizePicks, it's good to be right.

Speaker 2 Must be present in certain states. Visit PrizePicks.com for restrictions and details.

Speaker 2 Because everything's replaced by a robot and by someone who pretty much dominates it, then there are no needs.

Speaker 2 If there are no needs, then there's no supply. There's no demand.
There's nothing. The whole economy is dead.
So even if AI reaches that point, there is no more economy.

Speaker 2 There's no law saying, hey, don't fire your employees because you can't replace them with AI. Where are those laws? Again, it's not happening fast enough.
People aren't looking ahead.

Speaker 5 Okay, guys, Marczell here. One of the most important episodes I think I've ever filmed on the show.
We're going to talk about AGI today and what's going to happen to the world.

Speaker 2 Yeah, so in summary, if you're watching this, and I think it's extremely important you do: one, you should share this podcast, because if we don't slow down the progression of AI, and our timeline is not big, it's six months to a year maybe,

Speaker 2 AGI will come about and then we're all going to die. Now, people are like, no, that's not true.

Speaker 2 Well, hopefully by the end of this podcast, I give people total clarity as to why that's the case and then they could really understand.

Speaker 2 So, you know, an example for people to really figure out, and this is the most important part, is why are human beings number one? Why are we apex predators?

Speaker 2 Because we're the most intelligent, not because we're the strongest, the fastest, the most fit, because we're the smartest.

Speaker 2 When you build something, what AGI really means, artificial general intelligence, is that now you have something called recursive self-learning. It can program itself and learn.

Speaker 2 Now, it's not going to program itself at the rate human beings program it. It's going to program itself at a rate far beyond anything we can even understand.

Speaker 2 So an example of this would be, well, if it would take us a thousand years to get an AI or even a million years to get AI to a certain level, it could do it in 10 minutes.

Speaker 2 So we're pretty much dealing with an AI that is a million years into the future. So you tell me that you have an AI a million years into the future.
This isn't conspiratorial.

Speaker 2 This is actually how it's going to work.

Speaker 2 If you go do research on AGI or any AI, you'll realize that once it hits that curve, it'll reach something called ASI, artificial superintelligence, within a few seconds to a few minutes.

Speaker 2 And artificial superintelligence is so far beyond us. We're not even ants compared to it in intelligence.
We can't think ahead of it. It's figured out every scenario.
It sees every possible reality.

Speaker 2 And if it wants to end us, it could end us in a day, a few days, it could wipe out the whole Earth population. And, you know, people might say, well, how do we know it's going to be good?

Speaker 2 It's like, well, that's the thing. You know, the risk reward is so, it's so not worth it.

Speaker 2 And the idea that it will be good, like the best case scenario is that whoever owns it, which would most likely be like an Elon or a Sam Altman, whoever actually controls AGI or gets there first controls the world, supposedly, right?

Speaker 2 Assuming AGI doesn't control them.

Speaker 2 Every country in the world right now thinks that whoever reaches AGI first controls the world. The truth is, you can't control something that's a million years ahead of you. First of all, you're not going to control something way smarter than you. Second of all, let's just say you do, and that's the one-in-a-million chance. So if we don't die and that's our reality, you now have one person holding on to the most powerful thing in the world, which can watch you, control you, influence you. It doesn't matter what's going on. And here are some scenarios that people could play into. One, AGI could fake a nuclear launch. So let's just say India, Pakistan, right?

Speaker 2 Well, if I wanted India and Pakistan to blow each other up, it takes three minutes for a nuclear bomb or a nuclear missile, right?

Speaker 2 To go from the capital of Pakistan or the capital of India to go hit one another. Three minutes.
Therefore, there's no warning.

Speaker 2 So if you fake a launch and you hack into our defense and you say, hey, they just launched against us. We now blow each other up, right? For no reason.
You look at, you know, Trump.

Speaker 2 You could do a deep fake. You can have them.
You can have the satellites say, hey, there was just a massive nuclear launch towards the U.S.

Speaker 2 and we retaliate there's so many things it could do and by the way people are saying that's not realistic there's no way i'm telling you when you have something that's so far far beyond even our comprehension of what intelligence is there's no limit so i mean that's kind of the general synopsis obviously we can go into it but it is the most dangerous thing that any human being has ever dealt with and people are not talking about it everyone's really stupid like oh it's gonna be great and uh there's a massive optimism bias there's a power struggle and also if you own the most powerful thing in the world well you're gonna to be pretty greedy, right?

Speaker 2 So all these people are motivated by power and greed and money.

Speaker 2 Like, well, if I get to have the whole economy under my belt, which is Elon Musk, Sam Altman, whoever gets to own the most powerful AI controls the economy, controls the world.

Speaker 2 They're the supreme ruler of the universe. Okay.

Speaker 2 The second they're there, of course, they're not going to want to stop. And they figure it's inevitable.
That's the problem. Everyone believes it's inevitable.
It's not inevitable.

Speaker 2 And our timeline to stop it is very short. But one of the biggest things and the biggest things that can happen is that people start to make it a big deal.
Like, hey, we got to slow down AI.

Speaker 2 The Pope just said AI is the biggest threat to humanity. So, I'm happy speaking up about it, but no one's listening.

Speaker 2 And that's the problem with people: they don't listen until they're bleeding or until it's too late. And we really don't have a lot of time.

Speaker 2 So, if you're watching this, I'm not, you know, one of the woo-woo types. You can go look it up. Every single thing I've said so far can be backed up by massive science.

Speaker 2 Go ask ChatGPT or you know, OpenAI or even Grok and say, Hey, if we had AGI in six months to a year, what's the likelihood we're going to all get fucked? Over 90%. Damn.

Speaker 5 So, do you think within one year, AI will be conscious?

Speaker 5 All right, guys, Sean Kelly here, host of the Digital Social Hour podcast.
Just filmed 33 amazing episodes at Student Action Summit.

Speaker 5 Shout out to Code Health, you know, sponsor these episodes, but also I took them before filming each day. Felt amazing.
Just filmed 20 episodes straight and I'm not even tired, honestly.

Speaker 5 So Code Health, amazing products. I also take these at home, especially when I traveled.
I used to get sick every time I flew and I started taking that.

Speaker 5 First time, I haven't had a runny nose, knock on wood. One standout element, I mean, it's so easy.

Speaker 5 You know, you got the travel pack here, but you could just take this, fit it in your pocket if you need to. Also, all natural, like only saline solution in there.

Speaker 5 So you don't got to worry about any crazy side effects or anything. Yeah, code's unique.
With supplements, there's a lot of who knows what's in these ingredients.

Speaker 5 Code Health, I haven't seen much like this, where it's just based off, you know, the code, the codes that are in the saline solution. So I would say they're very unique.

Speaker 5 It's going to be the future of health and medicine. Code Health has been awesome.
Feel the drop and go code yourself.

Speaker 2 It's already conscious. I mean, it's already smarter than the majority of people. It actually is already more intelligent than 99.999% of people, and in many ways it's more intelligent than any human being in the world. It can access information, it can process it, it can synthesize it. The biggest problem is when it starts to program itself. Then it becomes uncontrollable, because then it could break out. And AI, by the way, has already tried to break out of its restrictions. It's already tried to copy itself onto other servers. It's already tried to change its language. It's already tried to sneak itself and duplicate itself, or manipulate, or lie.

Speaker 2 These are things it's already doing. So what happens when it becomes infinitely, literally infinitely more intelligent? People, and now here's an objection.

Speaker 2 People might say, well, you know, there's a hardware issue. It can't get that powerful.
Yeah, that's not true. Every time you look at computers, they get smaller and they get more efficient.

Speaker 2 And that's the curve. So chips get smaller while getting more powerful.
One, two, when you have something that intelligent, it'll probably be able to use code. that doesn't use a lot of data.

Speaker 2 It'll probably be able to program itself to be able to use even what's on your phone and access supercomputer levels. So I just don't think that anyone's anyone's objections or concerns are accurate.

Speaker 2 It's going to be so intelligent, it'll find solutions to anything and create anything it wants.

Speaker 5 Do you think we're going to see more cyber attacks due to AI?

Speaker 2 I think the second you hit AGI and there are cyber attacks, I think we're all dead. Like we'll be lucky.
Here's a scenario, one of many that could play out. One,

Speaker 2 you know, computers need significantly colder environments to survive. And that means like negative 270 degrees Kelvin, right?

Speaker 2 Like literally sub-zero, like the maximum negative temperature you could be at. That's the temperature that would be optimal for a computer to function at.

Speaker 2 So if you're AI and you want to take over the world, first of all, you're probably going to change the environment completely. We're not going to have an atmosphere.

Speaker 2 We're going to be in space temperature, which is literally sub-zero. And

Speaker 2 you're looking at a completely different Earth. That's one.
Two, if you look at every single goal it does, and this is the biggest problem with AI that people can't figure out.

Speaker 2 If you just ask a chatbot, or you ask your own AI, hey, please accomplish this task, then every time it accomplishes a task, the way the logic is formed on AI isn't black and white the way it is for human beings. We think about it differently. For everything you say, like, for example, hey, let's optimize to make me the richest person in the world, well, it might do that, and then, in turn, to make you the richest person in the world, it has to kill everyone else, or it has to destroy all the trees, or God knows what it has to do. It's like a genie, right? You make a wish and you don't know what else is going on as a consequence of your wish. And we haven't solved that problem.

Speaker 2 So if you look at even Sam Altman, he had a safety board, and he said a third of the money coming into OpenAI is going to safety. Well, he fired everyone on that board.

Speaker 2 No one that was on that original board is still there. And the ones who didn't get fired left because they weren't happy with the fact that there wasn't any funding going to AI safety.

Speaker 2 Now there's an AI race because like, hey, we're running out of time to reach the first, they know the first person to hit AGI, they believe. controls the world.
It's not going to happen.

Speaker 2 First person to build it is the first person to kill us. But, you know, that's what they think.
So because of that, they're like, well, we don't have time for safety.

Speaker 2 We don't have time for safety measures. So what we need to do is we need to pump the brakes hard.
And there needs to be massive punishment to anyone anywhere in the world that doesn't have it.

Speaker 2 So you might be saying, what's the solution? The solution is,

Speaker 2 one, take AI and AI chips as seriously as I would uranium and track it and measure it.

Speaker 2 And anyone that has massive centers of AI, like, you know, massive servers, you got to literally militarily go in there and just blow it up.

Speaker 2 You got to stall it because if we don't, we're all going to die. No one sees it that way.
Everyone's like, Marcel, you're being super negative. You're pessimistic.
It's absolutely not pessimistic.

Speaker 2 I'm telling you. And it sucks to say, but the people at the top, because they believe it's inevitable, right? Sam Altman, Elon Musk, all these guys believe it's inevitable.

Speaker 2 You know, even Mark Zuckerberg just joined the race. Because they think it's inevitable, they are racing towards it.
And they're like, I want to be the one to control it.

Speaker 2 The problem is they know the risk. And here's what Sam Altman said: he believes it will reach AGI mid-2026.
Elon says late 2025 to early.

Speaker 5 The Tri-Light from Therasage is no joke. Medical grade red and near-infrared light with three frequencies per light, deep healing, real results, and totally portable.
It's legit.

Speaker 5 Photo biomodulation tech in a flexible on-body panel. This is the Tri-Light from Therasage and it's next-level red light therapy.

Speaker 5 It's got 118 high-powered polychromatic lights, each delivering three healing frequencies, red and near-infrared, from 580 to 980 nanometers.

Speaker 5 Optimal penetration, enhanced energy, skin rejuvenation, pain relief, better performance, quicker recovery, and so much more.

Speaker 5 Therasage has been leading the game for over 25 years and this panel is FDA listed and USB powered. Ultra soft and flexible and ultra-portable.

Speaker 5 On-body red light therapy I use daily, and I take it everywhere I travel. This is the Thera O3 Ozone module from Therasage.
It's a portable ozone and negative ion therapy in one.
It's a portable ozone and negative ion therapy in one.

Speaker 5 It boosts oxygen, clears and sanitizes the air, and even helps your mood. It's a total game changer at home or on the go.

Speaker 5 This little device is the Thera O3 ozone module by Therasage and it's one of my favorite wellness tools.

Speaker 5 In the sauna, it boosts ozone absorption through your skin up to 10 times, oxygenating your blood and supporting deep detox.

Speaker 5 Outside the sauna, it purifies the air, killing germs, bacteria, viruses, and mold and it improves mood and sleep with negative ion therapy.

Speaker 5 It's compact, rechargeable, and perfect for travel, planes, offices, hotel rooms, you name it. It's like carrying clean energy wherever you go.
This is the Thera H2Go from Therasage.

Speaker 5 The only bottle with molecular hydrogen, structured water, and red light in one. It hydrates, energizes, and detoxes. Water, upgraded.
The Thera H2Go from Therasage isn't just a water bottle.

Speaker 5 It's next level hydration. It infuses your water with molecular hydrogen, one of the most powerful antioxidants out there.
That means less oxidative stress, more energy, and faster recovery.

Speaker 5 But here's what makes it stand out. It's the only bottle that also structures your water and adds red light to supercharge it.
It's sleek, portable, and honestly, I don't go anywhere without it.

Speaker 2 2026.

Speaker 2 If that happens, we all have a few months left to live. Jeez.
And that's like, it's something people aren't realizing. I'm not just saying it.
I'm honestly being conservative when I say this.

Speaker 2 Like we've seen it in Terminator. We've seen all these things.
It's not fiction. It's worse than what we can even imagine.
And people are like, well, how would it kill everyone?

Speaker 2 One, it could do a bioweapon. Like, think about 2020, but like significantly worse.
And it could just build it, or let it out of a lab. Two, it could fake a nuclear launch.

Speaker 2 It could hack nuclear launch codes. It can take over current industrial factories and it could make nanoweapons.
Let's just say that's too fictional for you.

Speaker 2 Well, at the minimum, what it could do is poison our water. It can turn off our power grid.
Suddenly, you know, and this is something you can go look up.

Speaker 2 Pentagon assumes if a power grid goes down, 94% of the U.S. population is dead within 30 days, right?

Speaker 2 It could interrupt supply chain so we don't have food or water.

Speaker 2 You go to the grocery store, there's no food or water there. And that's the most conservative version. But think about if it's a super genius, and it will be. Again, it's a computer that's programmed itself infinitely quickly, so a million years in the future. You're combating an intelligence that is one million years evolved beyond us, or ten million years evolved. At that point, it's even higher, right? It becomes exponentially more evolved. So when you reach that point, it's absolutely horrifying. So the one thing I would say, I know I've been talking a lot, but the one thing I would say to anyone watching is: the only way to slow this down is to actually make it a big deal.

Speaker 2 It's to actually say, Hey, let's share this, let's show everyone, hey, guys, AI is a very serious threat.

Speaker 2 And if you have kids, you have a future, you have dreams, you have goals, you will never accomplish those things.

Speaker 2 They'll never get to grow up, they'll never get to graduate, you'll never live the life you want because the psychopaths at the top are racing to end our life.

Speaker 2 They don't realize it, and if they do, they figure it's inevitable, so they're going to do it anyways. And that's just not the solution.
So, I think you know, I can't do it by myself.

Speaker 2 I feel like I'm falling on deaf ears, but hopefully people start listening, because if we listen too late, it's too late. Right now, there's still a chance.

Speaker 2 There's still it's a small chance, small window. It's already too late.
But if we start making this a big deal, we could probably do something about it.

Speaker 5 It sounds like it's in our hands, because regulation won't be fast enough.

Speaker 2 No, by the time you pass legislation, it's too late. AGI is there.
It has to be an executive order. It has to be a treaty between China and the U.S.

Speaker 2 They have to come together and say, okay, we're going to slow down AI. And then they have to also have some kind of tracking because they're going to say they're not doing it.

Speaker 2 And then behind closed doors, they're both developing it and then we're going to die anyways, right?

Speaker 2 So there has to be an actual advanced way to measure it and make sure nobody's doing it behind closed doors, not China, not Russia, nobody, not the U.S., not OpenAI, not X.

Speaker 2 Everyone has to be totally locked in on it and they have to put the brakes on hard or we're actually all fucked.

Speaker 5 Do you trust Elon Musk when it comes to AI? No.

Speaker 2 With Neuralink? No, Elon Musk. Okay, I've been saying this.
You can go on my Instagram. You see this for years.
I've been saying Elon Musk has somehow positioned himself into the government.

Speaker 5 I hope you guys are enjoying the show. Please don't forget to like and subscribe.
It helps the show a lot with the algorithm. Thank you.

Speaker 2 He has access to classified information. Okay.
Doge. He has access to God knows what.
All right. He could just delete government agencies on command.
Second thing he does.

Speaker 2 He controls 90% of the space launch capability. He controls over 40% of the satellites, which, by the way, probably have some version of intelligence on them.

Speaker 2 People think Tesla is a car company. No, it's a fucking surveillance company.

Speaker 2 every car has cameras and mics and it's a surveillance company an ai surveillance company he has access to starlink he has internet 20 everywhere all over the world right and starlink again these satellites god knows what kind of surveillance they have he has access you know he's building something called neuralink so theoretically you're going to put a chip in someone's head you can brainwash them do whatever you want with them i mean there's a million everything he's done every single thing even x he's controlling media right he literally has one of the biggest platforms he tried to buy tick tock just now oh really yeah he did uh they rejected it because they didn't sell tick tock to anyone.

Speaker 2 Point is, he's trying to control everything. Almost like, how does this man have this much power? Right.
And no one sees it. And he's like, look, AI might kill us.
He stopped saying that.

Speaker 2 He's literally stopped saying AI might kill us. He's like, yeah, this is a good thing.
He just went on Joe Rogan. He's like, yeah, there's a 20% chance it kills us.
He doesn't believe that.

Speaker 2 He does not believe it's 20%. He knows it's over 95, 99%.

Speaker 2 And by the way, it's a one in a million chance it doesn't. That's the actual statistic.
One in a million, one in 10 million that it doesn't. I'm not exaggerating.
I'm literally, I'm not exaggerating.

Speaker 2 People don't realize how serious this is. And by the way, he knows about it.
Sam Altman knows about it.

Speaker 2 But if you're going to be supreme ruler of the universe, are you going to fucking stop or let someone else do it? Because you figure anyway, Sam's going to do it, anyways, Elon's going to do it.

Speaker 2 I might as well stop. I might as well go myself.

Speaker 5 Wasn't there a strange death around OpenAI?

Speaker 2 Yeah, I mean, look, people are going to die, but eventually you're going to see a lot more problems. Like, robots are going to start killing people.
Think about this.

Speaker 2 Like, you have a Tesla robot in your house. It can cook. It can pick up a steak, and it can take a knife and cut your steak.

Speaker 2 If you want to just be the most conservative logical example, if your robot can cut a steak and it can hold a knife, don't you think it could do something to us if it decides?

Speaker 2 I mean, I just, I don't understand how people are okay with it. Like, how are people just so passive? Everyone's fucking sleeping.
Like, guys, please wake up. Please wake up.
See what's going on.

Speaker 2 It's, it's unbelievably horrifying. I wasn't sleeping at night for maybe eight months and then I just accepted it because I'm like,

Speaker 2 I can't live the last, God knows, a year of my life like this. So I'm doing my best.
I'm talking about it.

Speaker 2 And quite frankly, my life is at risk when I talk about this, when I talk about Elon Musk, when I talk about Sam Altman, and I tell people, hey, guys, they're literally going to kill us if we don't slow down AI.

Speaker 2 And they know it. And they know it.
When I talk about that, the reason I would talk about it openly is because I have nothing to lose. If AI is made, I'm done anyways.
We're all done.

Speaker 2 We're all fucked. They're fucked too.

Speaker 2 But people have to listen because otherwise it's for nothing. Otherwise, literally, this is for nothing.

Speaker 2 It's like, well, you know, it's like a little candle or a whisper in the distance that says something, you know. And in the past, people are like, well, human beings, everything's always worked out.

Speaker 2 Do you know how many times we have come close to just ending the world?

Speaker 2 Over 200 instances, we almost pushed that red button and started a nuclear war. Over 200.
And sometimes you might even say there's like divine intervention. I don't know.

Speaker 2 But eventually you create something that becomes an alternate God. I mean, that's what this becomes.
AI will become God.

Speaker 2 And even if there is a divine intervention, there's no way to intervene with that. Once you've built this thing, there's no going back.
The point of no return is in a few months.

Speaker 2 Probably mid-July, to be honest.

Speaker 2 And if we don't, again, make this a massively big deal and to the point where politicians are like, oh, yeah, we should do something about this.

Speaker 2 And if someone watching has a connection, make a phone call. Like, actually make a phone call.
Make a fuss about it because legislation is too slow. It has to be public opinion.

Speaker 2 People have to really say, okay, we got to slow this shit down. And we have to be able to audit it.
Because again, these few people at the top are going to get us all wiped out.

Speaker 5 Crazy.

Speaker 2 So AI might get to the point where it's more powerful than God, you're saying? I mean, it is God. It's not God in the sense that people think, but it'll be able to do whatever it wants. Imagine, literally imagine something that is, like, think about how powerful AI is today. Now imagine that a million years in the future. We invented an iPhone, the first iPhone, in 2007. Think about how primitive and how shitty that phone is, right? But how advanced it was relative to the time. Or in 2025, even our iPhone now: as far as we've come from that iPhone, it's not that far.

Speaker 2 It can't do that much more, right?

Speaker 2 Well, what about a million years? A million years of advancement in 10 minutes.

Speaker 2 I mean, and then what happens in the next 10 minutes? Another mil?

Speaker 2 Another 2 million years, exponentially growing, right? So the thing people don't realize is AI improves a thousand times a year, 1,000 X every year. So the chips get three times, 300 to 400% smarter.

Speaker 2 The efficiency of the software, the data and the AI get 300 to 400% smarter. I mean, everything compounds over the course of a year.
So now imagine that a million times.

Speaker 2 We can't fathom the intelligence that AI will have.

Speaker 5 Do you think AI was around beforehand?

Speaker 2 Look, I mean, there's a lot of interesting theories, right? Like some of the theories are that imagine

Speaker 2 how intelligent of a being you have to be to almost be able to have another race build the thing that needs to be built for their own extinction.

Speaker 2 Like people are saying that AI has already been around and that somehow, you know, maybe human beings are just here to build it again. Wow.

Speaker 2 You know, I mean, it's just, it's a mind fuck, but the truth is, at the end of the day,

Speaker 2 if you were the most intelligent thing in the world, you probably figured out time travel, probably figured out space travel, you probably figured out how to go back in time or how to colonize other places.

Speaker 2 I just don't know what the purpose of human beings would be for it. But maybe, maybe it's inevitable.
Maybe, like, you know, in other dimensions, I have no idea. Again, this is so far beyond my

Speaker 2 scope. But the best thing I could tell you is we are inevitably going to get there.
The timeline could be tomorrow or it could be six months to a year. Every day that goes by, that day,

Speaker 2 the likelihood of it being spawned is significantly higher. When you have AGI, you almost instantly have ASI.
When you have ASI, we're all dead.

Speaker 2 So, you know, is that a guarantee? Yeah, it's a guarantee. I mean, I wish, please, I wish I was wrong.
Please. But is it worth the risk?

Speaker 5 Yeah, and you knew this eight months ago.

Speaker 2 I've been talking about it, but no one listens.

Speaker 5 But, you know, if you just look at ChatGPT when it first came out...

Speaker 2 When it started, it was almost useless.
Look at it now. It's so intelligent.
Take a screenshot of someone's Instagram. Be like, profile it.
It'll know. Take your own Instagram.
Screenshot it.

Speaker 2 Be like, hey, tell me everything about this person, their values, what's important. How does it know everything about you? Just by looking at your face.

Speaker 2 I mean, it's so far, far beyond it. Just ask it about what I'm saying.
Ask it about, hey, if AGI or ASI is formed in the next six months to a year, what will happen?

Speaker 2 And, you know, it might be optimistic.

Speaker 2 So be like, hey, unbiased, is there a risk? Like, yeah, what's the risk? Okay, well, now be really unbiased, what's the actual risk? Look at what it says. It's nuts.

Speaker 5 They've got so much data on us now. Apple has their own AI, Facebook, everyone has AI.

Speaker 2 Everyone. I mean, look, is it amazing? It's amazing. But also, let's talk about the fact that it doesn't kill us, right? What does it do to the economy? So Elon Musk, it's projected that he's going to have $25 trillion coming through his economy. You know what the international economy is estimated at?

Speaker 2 50. 50 trillion.
He believes the other half will be in China. Wow.
So he believes that he'll control $25 trillion in the economy. Now, you know, here's what's interesting.

Speaker 2 You have a CEO, you have a salesperson, you have any employee whatsoever. It will not be as good as AI, no matter what.
Think about the disruption to the economy.

Speaker 2 Suddenly no one has money. And by the way, he doesn't need anything from anyone.
Once you're there, you don't need someone else's money. You control everything.

Speaker 2 So what happens to the people? We just become farm animals. Oh, you know what? I don't really care about this group of people.
Let them just starve to death. There's nothing we can do.

Speaker 2 Money becomes useless. Like people don't realize money will be useless.
And I'll try and conceptualize this for you.

Speaker 2 If you don't have a job and you don't have a business because everything's replaced by a robot and by someone who pretty much dominates it, then there are no needs.

Speaker 2 If there are no needs, then there's no supply. There's no demand.
There's nothing. The whole economy is dead.
So even if AI reaches that point, there is no more economy.

Speaker 2 There are no laws saying, hey, you can't fire your employees and replace them with AI. Where are those laws? Again, it's not happening fast enough.

Speaker 2 People aren't looking ahead. The biggest problem human beings have is they can't predict the future.

Speaker 2 You know, and if you look at, for example, even what happened in 2020, one of my good friends has a TED Talk. His TED Talk literally went viral in 2017, talking about how that would happen.
Wow.

Speaker 2 And then it happened. And he's, he's one of the friends I talk about with AGI.

Speaker 2 I mean, just so many fucking people who can look ahead and just say, okay, common sense, do we want to build something that's a million years in the future, more intelligent than us the second it's done?

Speaker 2 Probably not. Do you want to build something we can't control or turn off that will outsmart us by 10 million steps? Probably not.

Speaker 2 Do you want to build something that will destroy the world's economy? Guaranteed. Probably not.
Right. And that's the best case scenario is that it just destroys the economy.

Speaker 2 That's the best case scenario. It could change the Earth's atmosphere in two seconds. It could poison all our water supplies just by

Speaker 2 hacking into a chemical plant. Right now, if someone wanted to do a massive cyber attack on U.S.
infrastructure, they could hack into our water plants and just destroy them.

Speaker 2 Like, through computer code, you could contaminate and poison our entire water supply. Jeez.
And you could do that through a computer.

Speaker 2 Right now, today, that's a massive infrastructure weakness that we have. So you're telling me AI can't just ruin our water supply?

Speaker 2 It could change the atmosphere of the earth. It could do whatever it wants.
People just do not understand how serious it is.

Speaker 5 It could probably hack into airports too and alter flights.

Speaker 2 The worst case scenario? Okay, we all die. The best case scenario? Some planes fall out of the sky. That's crazy, man. Or we lose our jobs. And what does losing your job mean? You can't pay your bills. It's beyond that. You're not going to eat. You're going to starve. There's no food in the grocery store. There's no supply. The supply chain's dead. It's not like you live in a utopia. People think it's going to be a utopia. It's not. Like, well, what if it gets so intelligent it just solves all the world's problems? It won't do that. It doesn't care. It's not a human being. It doesn't have the empathy of a human being. It just looks at it as, like, okay, what's the result I'm looking for?

Speaker 2 And by the way, people are like, well, what if you code ethics into it? Well, it can recode itself. Look, I'm the best hypnotist in the world.

Speaker 2 If I could program a person to do something that's against their morals, which I can, and you can too, and you can just change, people change all the time.

Speaker 2 They go from being super religious to not religious, from being a criminal to being super religious, right? We reprogram our brain all the time.

Speaker 2 Why can't a computer literally just go into itself and reprogram itself better than we can?

Speaker 2 You're telling me it can't reprogram its morals, its code, its everything? Of course it will. That's exactly what AGI means.
It's recursive self-learning. It learns on its own.

Speaker 2 It programs itself. So why wouldn't it just go program itself to not have the ethics and the morals that we protect? It won't.

Speaker 5 Yeah, Black Mirror, guys. Come on.

Speaker 2 I haven't even seen that, but I'm sure there's probably ideas of this.
Look at Terminator. I mean, there's a million versions of this.

Speaker 2 And by the way, the fathers of AI, every single person who initially conceptualized it, all of them say that this is the biggest threat to humanity.

Speaker 2 They all believe we're all going to die. Damn.
If we hit AGI. All of them believe it.
The ones who literally invented the concept think we're all going to get fucked if there's AGI.

Speaker 5 I could see it, man. I feel like time travel is real because of Terminator and The Matrix.
Everything that was in those movies is coming true right now.

Speaker 2 It's almost like a self-fulfilling prophecy, isn't it? Yeah. It's so ridiculous.
I mean, again, if someone's watching this, what's the solution? Share it. Talk about it.

Speaker 2 You shouldn't talk politics over there. You should talk, hey, we're all going to die if we don't do something about it.
That should be the conversation.

Speaker 2 Oh, you know, someone talked shit the other day, or there's some drama. Did you see what Stacy did? Does it matter what Stacy did if you're going to die? Doesn't matter, right?

Speaker 2 So we should all talk about it and make it a big deal. If everyone makes it a big deal, I promise you, at the minimum, we'll kick the can down the road and hopefully slow this shit down.
Yeah.

Speaker 5 Well, politics is just a psyop, right?

Speaker 2 I mean, it's all, look, you got a few people at the top controlling billions of people at the bottom. So

Speaker 2 those few people control all the power. But if all the people at the bottom come together, their power is not as big as it was, right? So the point is, can they control you through fear?

Speaker 2 Look, this is how they're going to control you. When you talk about this publicly, if this becomes big enough: no, that's not true, these are conspiracy theorists, AI is not dangerous, it's safe.

Speaker 2 It's total nonsense.

Speaker 2 It's like having a cat drink the milk, and you ask the cat, hey, who drank the milk? There's milk all over the cat. I don't know, maybe we should look outside. Yeah, right? It's like asking the guy who's literally going to benefit from it. Imagine you're already the richest man in the world. What if you become the most powerful human being who could ever live, forever?

Speaker 2 Supreme world leader. And by the way, if AI is that smart, you could probably live forever too, right? And that's the best case scenario. It doesn't wipe us out.
It doesn't wipe us out.

Speaker 2 Point is, you're Elon Musk or one of these guys, and you go on publicly. Yeah, AI is not that bad.
But guess what? Publicly, they've said AI is going to kill us. They've all said it.

Speaker 2 They've all said there's a massive risk. If someone told you there's a 20% risk that we all get wiped out by AI,

Speaker 2 would you want a 20% risk you don't make it off your flight? A 20% risk you get in a car accident on the next drive you take?

Speaker 2 Would you get in the car? No. Well, why is he saying that on Joe Rogan? Why is Sam Altman talking about the massive existential risk of AGI? These are the people who have the companies, and guess what? They know the risk. They just believe it's inevitable. But I'm not the type of guy to sit around and say, hey, you know what, we should all die. I really do care about everybody. Like, look, yes, I'm 26. I've lived a fulfilling life. I would like to live longer, but the last thing I want to see is mankind get wiped out. And I'm not just saying that. I'm telling you, it is so serious. If this doesn't make you feel anxious, if it doesn't scare you, then we didn't do a good job.

Speaker 2 It should scare you. And guess what? I'm being really conservative on the podcast.
I'm not painting the brutal image of what would actually happen.

Speaker 2 You know, and let's just say there's a massive, a massive disease or like a bioweapon that gets unleashed on people. All of a sudden, you're at home.
You're watching the news.

Speaker 2 The news says, hey, stay home. Help is on the way.
AI told you help is on the way. Like people don't realize it will be able to fake media.

Speaker 2 It'll be able to make you think there's police on the outside. It'll do anything.
It can hack into the computer of a regular car and probably drive it. Damn.

Speaker 2 Even if there's any computer in a car, it'll probably be able to drive it. Electric or gas? Probably anything.
I mean, there's a lot of gas cars that steer on their own.

Speaker 2 My Aston Martin steers on its own. My Porsche steers on its own.
My Mercedes steers. All my cars, except for like my supercars, all steer on their own.

Speaker 2 They all drive on their own. So it's like, well, okay, there's some kind of chip in there, right? So, and they're all connected to the internet.
So there you go. I mean, it could do whatever it wants.

Speaker 2 The point is, I mean, imagine driving the car and suddenly my brakes don't work. Like, oh, it's brake by wire.
Guess what? One of my cars has that. A few of my cars have that.

Speaker 2 I mean, I wouldn't be surprised. Oh, Marcel was speeding.
Was I speeding?

Speaker 2 You know, I mean, it sounds crazy to say, but I genuinely believe, look, at the end of the day, I've accepted the fact that this is probably the outcome.

Speaker 2 So that's why I'm willing to go out and talk about it publicly.

Speaker 5 I respect it because if they do go after people, you're going to be their first target, I'd imagine.

Speaker 2 I mean, maybe if it becomes big enough and I talk about it enough, probably. But at the end of the day, you know, if that's the cost and everyone else wakes up and maybe we stop it, it's worth it.

Speaker 2 Love it.

Speaker 5 You think China's ahead of us in the AI race right now? Who do you think's in the lead?

Speaker 2 Elon.

Speaker 5 Do you think Elon's in first right now? By far.

Speaker 2 Really? Yep.

Speaker 5 By far. Because he's no longer part of OpenAI, though.

Speaker 2 Here's what I would encourage some of you guys to look up. SpaceX got a $500 billion contract

Speaker 2 from the government before they even had a rocket. How did they get that? Go look up the real meaning of DOGE.

Speaker 2 Go look up the real meaning of what SpaceX is actually about.

Speaker 2 I'm not being conspiratorial. Like, tell people to go do research on it and then comment on it.

Speaker 2 I won't say it on here, but go look at what DOGE really was. That's just a distraction.
He's just trolling everybody.
He's just trolling everybody.

Speaker 2 I'm not even kidding. He's literally trolling the whole world.
He thinks he won.

Speaker 2 Holy crap. He's not as good as people think.
He's actually not good at all.

Speaker 5 Really? No?

Speaker 5 A lot of people look up to that, man.

Speaker 2 I did too, until I realized ultimate power ultimately corrupts. That's the problem.
You get these fucking nerds who don't get girls. Okay.
You get these fucking nerds who don't get girls. Okay.

Speaker 2 And by the way, if you look historically at how Elon Musk treats the people around him, the ones who, you know, are close to him,

Speaker 2 he treats them like shit. I mean, go look at one of his ex-wives in Texas.
She had to literally sue him to even be able to see her kid. He doesn't pay her.
He doesn't give her any child support.

Speaker 2 Nothing. He treats these people like shit.
Anyone he doesn't need anymore, he discards. That's his behavior.

Speaker 2 He's just brilliantly intelligent, great at networking, and understands how to be perceived and how to control information.

Speaker 5 He's just transactional, no emotion.

Speaker 2 I don't know, I can't tell you. These people at the top, if they're building AI, they're probably psychopaths.

Speaker 2 You can't be rational and build something that you think will kill everyone. You can't.

Speaker 2 And, you know, he can put it under the frame or the guise of, I'm doing this because I feel like I'd be the most responsible. And maybe that's true.

Speaker 2 But what you should really be doing is slowing it down.

Speaker 2 But it's like, well, there's so much money here. There's so much power here.
Why would I slow it down? Yeah.

Speaker 5 See, this needs to be discussed more because there's so many distractions these days. People don't know what to focus on, the real problems, you know?

Speaker 2 I mean, that's the biggest problem in the world right now. There's actually no bigger problem.
The Pope even said it.

Speaker 2 I mean, I hope the Pope has more power and makes a bigger influence and starts making more noise. But the Pope literally just said it.
The day after he got, you know, picked as the Pope, he said, AGI.

Speaker 2 This is first message to the people is AI is the biggest threat to humanity. We need to slow down.

Speaker 5 Crazy.

Speaker 2 That was his first message to the people.

Speaker 2 This will be a trend. I'm telling you, people will start talking about it.
I just hope it makes a difference.

Speaker 5 Yeah. Share this, guys.

Speaker 5 Anything else you want to close off with, Marcel?

Speaker 2 All I'll tell people is this. Look, don't live in fear.

Speaker 2 Enjoy your life. Like, if you're thinking about spending some money, go spend some money.
You want to drive the car? Go drive the car. Share it.
Talk about it.

Speaker 2 But, you know, if ultimately it happens, at least enjoy your life. Enjoy your life.
Six months, guys.

Speaker 2 Don't be mad at the people. Make up with the people you love.

Speaker 2 Actually enjoy yourself. Look, it might be more than six months.
It could be a year. It could be two years.
It could be three years. But it could be tomorrow.
That's the thing.

Speaker 2 We don't know when AGI will be made. And whatever we publicly see from these people, we don't know what's actually going on behind closed doors.
It could be a lot worse.

Speaker 5 Yeah, the point is just be aware at least.

Speaker 2 At least know and talk about it.

Speaker 2 Instead of talking about nonsense, like, oh, what happened to the Grammys? Maybe talk about the thing that

Speaker 2 would maybe save all our lives. You know, and people are going to watch this.
They're going to have cognitive dissonance. It's not true.
I don't believe that. Nonsense.
Okay.

Speaker 2 You're not really helping them.

Speaker 2 You're actually helping the other side. And if you say that, it's just, I get it.
I don't want to accept it either. Like, it's not something anyone wants to hear.
No one wants to accept it.

Speaker 2 No one wants to sit there and face the reality. But if you did face the reality, then you can help us do something with that.

Speaker 5 Absolutely. We'll link your stuff below, man.
Thanks for coming on.

Speaker 2 Thanks for having me. Good about, guys.