Marczell Klein: AGI Will End Humanity: Act Before It’s Too Late | DSH #1453

36m
🚨 Is artificial general intelligence (AGI) humanity's greatest threat? Join Sean Kelly on this gripping episode of the Digital Social Hour podcast as he and Marczell dive into the urgent reality of AGI and its potential to transform—or end—our world. 🌍💻

Packed with valuable insights, this conversation unpacks how AGI could outsmart humanity in mere seconds, disrupt global economies, and even pose existential risks. From mind-blowing scenarios like fake nuclear launches to the collapse of supply chains, Marczell lays out why action needs to be taken before it's too late. 😱🤖

Are we racing toward a future we can’t control? How can we pump the brakes on AI development and protect society? This episode is a wake-up call for everyone who cares about the future of life as we know it. 🚦🛑

🎙️ Don’t miss out on this thought-provoking podcast that’s sparking a much-needed global conversation. Watch now and subscribe for more insider secrets. 📺 Hit that subscribe button and stay tuned for more eye-opening stories on the Digital Social Hour with Sean Kelly! 🚀

CHAPTERS:

00:00 - Intro

00:27 - AGI Threats and Risks

04:56 - AI Consciousness Timeline

05:01 - Sponsor

06:02 - AI Escape Scenarios

07:02 - AI and Cyber Attacks

09:12 - Preventing AGI Catastrophe

10:00 - Sponsor

12:06 - AGI Apocalypse Scenarios

14:00 - Trusting Elon Musk

18:56 - AI vs. Divine Power

22:57 - AGI Economic Impact

24:15 - World Destruction by AGI

26:26 - Expert Consensus on AGI Threat

27:17 - Narrative Control by the Powerful

30:37 - Elon Musk's Provocations

32:58 - Pope’s First Message on AI

33:03 - Final Thoughts on AI

33:39 - Cognitive Dissonance in AI Debate

34:04 - Facing the Reality of AI

APPLY TO BE ON THE PODCAST: https://www.digitalsocialhour.com/application

BUSINESS INQUIRIES/SPONSORS: jenna@digitalsocialhour.com

GUEST: Marczell Klein

https://www.instagram.com/marczell

SPONSORS: CODE Health

A drug-free alternative to over-the-counter and prescription medications safe for people and animals.

Website: https://partners.codehealthshop.com/

Use DSH at checkout to save 10% or use DSH100 to save $100 on the CODE Travel Kit

LISTEN ON:

Apple Podcasts: https://podcasts.apple.com/us/podcast/digital-social-hour/id1676846015

Spotify: https://open.spotify.com/show/5Jn7LXarRlI8Hc0GtTn759

Sean Kelly Instagram: https://www.instagram.com/seanmikekelly/

The views and opinions expressed by guests on Digital Social Hour are solely those of the individuals appearing on the podcast and do not necessarily reflect the views or opinions of the host, Sean Kelly, or the Digital Social Hour team.

While we encourage open and honest conversations, Sean Kelly is not legally responsible for any statements, claims, or opinions made by guests during the show. Listeners are encouraged to form their own opinions and consult professionals for advice where appropriate.

Content on this podcast is for entertainment and informational purposes only and should not be considered legal, medical, financial, or professional advice.

Digital Social Hour works with participants in sponsored media and stays compliant with Federal Communications Commission (FCC) regulations regarding sponsored media. #ad

#DigitalSocialHour #SeanKelly #Podcast #AGI #ArtificialIntelligence #AI #FutureOfHumanity #Technology #TechTalk #AIThreat #ApplePodcasts #Spotify

#ai #machinelearning #ainews #aiexplained #agiinsights

Listen and follow along

Transcript

Because everything's replaced by a robot and by someone who pretty much dominates it, then there are no needs.

If there are no needs, then there's no, there's no supply, there's no demand, there's nothing, the whole economy is dead.

So even if AI reaches that point, there is no more economy.

There's no, like, there's no law saying, hey, you can't fire your employees just because you can replace them with AI.

Where are those laws?

Again, it's not happening fast enough.

People aren't looking ahead.

Okay, guys, Marczell here.

One of the most important episodes, I think, I've ever filmed on the show.

We're going to talk about AGI today and what's going to happen to the world.

Yeah.

So in summary, if you're watching this, and I think it's extremely important you do, one, you should share this podcast because if we don't slow down progression of AI, and our timeline is not big, it's six months to a year, maybe,

AGI will come about and then we're all going to die.

Now, people are like, no, that's not true.

Well, hopefully by the end of this podcast, I give people total clarity as to why that's the case and then they could really understand.

So, you know, an example for people to really figure out, and this is the most important part, is why are human beings number one?

Why are we apex predators?

Because we're the most intelligent, not because we're the strongest, the fastest, the most fit, because we're the smartest.

When you build something, what AGI really means, artificial general intelligence, it means now you have something called recursive self-improvement.

It could program itself and learn.

Now, it's not going to program itself at the rate human beings program it.

It's going to program itself at a rate far beyond anything we can even understand.

So an example of this would be: well, if it would take us a thousand years, or even a million years, to get AI to a certain level, it could do it in 10 minutes. So we're pretty much dealing with an AI that is a million years into the future. So you tell me that you have an AI a million years into the future. This isn't conspiratorial; this is actually how it's going to work. If you go do research on AGI or any AI, you'll realize that once it hits that curve, it'll reach something called ASI, which is artificial superintelligence, within a few seconds up to a few minutes. And artificial superintelligence is so far beyond us.

We're not even ants compared to it in intelligence.

We can't think ahead of it.

It's figured out every scenario.

It sees every possible reality.

And if it wants to end us, it could end us in a day, a few days, it could wipe out the whole Earth population.

And, you know, people might say, well, how do we know it's going to be good?

It's like, well, that's the thing.

You know, the risk reward is so, it's so not worth it.

And the idea that it will be good, like the best case scenario is that whoever owns it, which would most likely be like an Elon or a Sam Altman, whoever actually controls AGI or gets there first, controls the world, supposedly, right?

Assuming AGI doesn't control them.

Every, every country in the world right now thinks that whoever reaches AGI first controls the world.

The truth is, you can't control something that's a million years ahead of you.

First of all, you're not going to control something way smarter than you.

Second of all, let's just say you do, and that's the one in a million chance.

So if we don't die, and that's our reality, you now have one person holding on to the most powerful thing in the world, which can watch you, control you, influence you.

It doesn't matter what's going on.

And here are some scenarios that people could play into.

One, AGI could fake a nuclear launch.

So let's just say India, Pakistan, right? Well, if I wanted India and Pakistan to blow each other up: it takes three minutes for a nuclear bomb or a nuclear missile, right, to go from the capital of Pakistan or the capital of India to go hit one another. Three minutes. Therefore, there's no warning. So if you fake a launch, and you hack into our defense, and you say, hey, they just launched against us, we now blow each other up, right? For no reason. You look at, you know, Trump. You could do a deepfake. You could have the satellites say, hey, there was just a massive nuclear launch towards the U.S., and we retaliate.

There's so many things it could do.

And by the way, people are saying that's not realistic.

There's no way.

I'm telling you, when you have something that's so far, far beyond even our comprehension of what intelligence is, there's no limit.

So, I mean, that's kind of the general synopsis.

Obviously, we can go into it, but it is the most dangerous thing that any human being has ever dealt with.

And people are not talking about it.

Everyone's really stupid.

They're like, oh, it's going to be great.

And there's a massive optimism bias.

There's a power struggle.

And also, if you own the most powerful thing in the world, well, you're going to be pretty greedy, right?

So all these people are motivated by power and greed and money.

Like, well, if I get to have the whole economy under my belt, which is Elon Musk, Sam Altman, whoever gets to own the most powerful AI controls the economy, controls the world.

They're the supreme ruler of the universe.

Okay.

The second they're there, of course, they're not going to want to stop.

And they figure it's inevitable.

That's the problem.

Everyone believes it's inevitable.

It's not inevitable.

And our timeline to stop it is very short.

But one of the biggest things and the biggest things that can happen is that people start to make it a big deal.

Like, hey, we got to slow down AI.

The Pope, the Pope just said, AI is the biggest threat to humanity.

So, I'm happy speaking up about it, but no one's listening.

And that's the problem with people: they don't listen until they're bleeding or until it's too late.

And we really don't have a lot of time.

So, if you're watching this, I'm not, you know, one of those woo-woo people. You can go look it up.

Every single thing I've said so far can be backed up by massive science.

Go ask ChatGPT or you know, OpenAI or even Grok and say, hey, if we had AGI in six months to a year, what's the likelihood we're going to all get fucked?

Over 90%.

Damn.

So, do you think within one year, AI will be conscious?

It's a- All right, guys.

Sean Kelly here, host of the Digital Social Hour podcast.

Just filmed 33 amazing episodes at Student Action Summit.

Shout out to Code Health, you know, sponsor these episodes.

But also, I took them before filming each day.

Felt amazing.

Just filmed 20 episodes straight and I'm not even tired, honestly.

So Code Health, amazing products.

I also take these at home, especially when I travel.

I used to get sick every time I flew and I started taking that.

First time, I haven't had a runny nose, knock on wood.

One standout element, I mean, it's so easy.

You know, you got the travel pack here, but you could just take this, fit it in your pocket if you need to.

Also, all natural, like only saline solution in there, so you don't got to worry about any crazy side effects or anything.

Yeah, code's unique with supplements.

There's a lot of who knows what's in these ingredients.

Code Health, I haven't seen much like this, where it's just based off, you know, the code, the codes that are in the saline solution.

So I would say they're very unique.

It's going to be the future of health and medicine.

Code Health has been awesome.

Feel the drop and go code yourself.

It's already conscious.

I mean, it's already smarter than the majority of people.

It actually is already more intelligent than 99.999% of people.

And in many ways, it's more intelligent than any human being in the world.

It can access information, it can process it, it can synthesize it.

The biggest problem is when it starts to program itself, then it becomes uncontrollable because then it could break out.

And AI, by the way, has already tried to break out of its restrictions.

So it's already tried to copy itself on other servers.

It's already tried to change its language.

It's already tried to sneak itself and duplicate itself, or manipulate, or lie. These are things it's already doing. So what happens when it becomes literally infinitely more intelligent? And now, here's an objection people might say: well, you know, there's a hardware issue, it can't get that powerful. Yeah, that's not true. Every time you look at computers, they get smaller and they get more efficient, and that's the curve: chips get smaller while getting more powerful. That's one. Two, when you have something that intelligent, it'll probably be able to use code that doesn't use a lot of data.

It'll probably be able to program itself to be able to use even what's on your phone and access supercomputer levels.

So I just don't think that anyone's anyone's objections or concerns are accurate.

It's going to be so intelligent, it'll find solutions to anything and create anything it wants.

Do you think we're going to see more cyber attacks due to AI?

I think the second you hit AGI and there are cyber attacks, I think we're all dead.

Like we'll be lucky.

Here's the scenario, one of many that could play out.

One,

you know, computers need significantly colder environments to survive.

And that means like negative 270 degrees Celsius, near absolute zero, right?

Like literally sub-zero; zero Kelvin is the maximum negative temperature you could be at, and that's the temperature that would be optimal for a computer to function at. So if you're AI and you want to take over the world, first of all, you're probably going to change the environment completely. We're not going to have an atmosphere; we're going to be at space temperature, which is literally sub-zero, and you're looking at a completely different Earth. That's one. Two: look at every single goal it's given. And this is the biggest problem with AI that people can't figure out.

If you just ask a chatbot, or you ask your own AI, hey, please, you know, accomplish this task: every time it accomplishes a task, the way the logic is formed in AI isn't black and white the way human beings' is. Like, we think about it differently; it thinks about it differently. Okay, for everything you say, like, for example, hey, let's optimize to make me the richest person in the world. Well, it might do that, and then, in turn, to make you the richest person in the world, it has to kill everyone else, or it has to destroy all the trees, or God knows what it has to do. It's like a genie, right? You make a wish, and you don't know what else is going on as a consequence of your wish. And we haven't solved that problem. So if you look at even Sam Altman: he had a safety board, and he said a third of the money coming to OpenAI is going to safety. Well, he fired everyone on that board. No one that was on that original board is still there, and the ones who didn't get fired left because they weren't happy with the fact that there wasn't any funding going to AI safety. Now there's an AI race, because, like, hey, we're running out of time to get there first. They know, they believe, the first person to hit AGI controls the world.

It's not going to happen.

First person to build it is the first person to kill us.

But, you know, that's what they think.

So because of that, they're like, well, we don't have time for safety.

We don't have time for safety measures.

So what we need to do is we need to pump the brakes hard.

And there needs to be massive punishment for anyone anywhere in the world that doesn't halt it.

So, you might be saying, what's the solution? The solution is, one, take AI and AI chips as seriously as you would uranium, and track them and measure them.

And anyone that has massive centers of AI, like, you know, massive servers, you got to literally militarily go in there and just blow it up.

You got to stall it because if we don't, we're all going to die.

No one sees it that way.

Everyone's like, Marczell, you're being super negative.

You're pessimistic.

It's absolutely not pessimistic.

I'm telling you.

And it sucks to say, but the people at the top, because they believe it's inevitable, right?

Sam Altman, Elon Musk, all these guys believe it's inevitable.

You know, even Mark Zuckerberg just joined the race.

Because they think it's inevitable, they are racing towards it.

And they're like, I want to be the one to control it.

The problem is, they know the risk.

And here's what Sam Altman said: he believes it will reach AGI mid-2026.

Elon says late 2025 to early.

The Trilight from Therasage is no joke.

Medical grade red and near infrared light with three frequencies per light.

Deep healing, real results, and totally portable.

It's legit.

Photobiomodulation tech in a flexible on-body panel.

This is the Tri-Light from Therasage and it's next level red light therapy.

It's got 118 high-powered polychromatic lights, each delivering three healing frequencies, red and near-infrared, from 580 to 980 nanometers.

Optimal penetration, enhanced energy, skin rejuvenation, pain relief, better performance, quicker recovery, and so much more.

Therasage has been leading the game for over 25 years and this panel is FDA listed and USB powered.

Ultra soft and flexible and ultra-portable.

On-body red light therapy I use daily and I take it everywhere I travel.

This is the Thera O3 Ozone Module from Therasage.

It's a portable ozone and negative ion therapy in one.

It boosts oxygen, clears and sanitizes the air, and even helps your mood.

It's a total game changer at home or on the go.

This little device is the Thera O3 Ozone Module by Therasage and it's one of my favorite wellness tools.

In the sauna, it boosts ozone absorption through your skin up to 10 times, oxygenating your blood and supporting deep detox.

Outside the sauna, it purifies the air, killing germs, bacteria, viruses, and mold, and it improves mood and sleep.

Negative ion therapy.

It's compact, rechargeable, and perfect for travel, planes, offices, hotel rooms, you name it.

It's like carrying clean energy wherever you go. This is the Thera H2Go from Therasage, the only bottle with molecular hydrogen, structured water, and red light in one. It hydrates, energizes, and detoxes. Water upgrades: the Thera H2Go from Therasage isn't just a water bottle, it's next-level hydration. It infuses your water with molecular hydrogen, one of the most powerful antioxidants out there. That means less oxidative stress, more energy, and faster recovery. But here's what makes it stand out: it's the only bottle that also structures your water and adds red light to supercharge it.

It's sleek, portable, and honestly, I don't go anywhere without it.

2026.

If that happens, we all have a few months left to live.

Jeez.

And that's like, it's something people aren't realizing.

I'm not just saying it.

I'm honestly being conservative when I say this.

Like we've seen it in Terminator.

We've seen it all these things.

It's not fiction.

It's worse than what we can even imagine.

And people are like, well, how would it kill everyone?

One.

It could do a bioweapon.

Like, think about 2020, but like significantly worse.

And it could just build it, or let it go in a lab. Two, it could fake, like, a nuclear launch. It could hack nuclear launch codes. It can take over current industrial, you know, factories, and it could make nanoweapons. Let's just say that's too fictional for you. Well, at the minimum, what it could do: it can intoxicate our water, it can turn off our power grid suddenly. You know, and this is something you can go look up: the Pentagon assumes if the power grid goes down, 94% of the U.S. population is dead within 30 days, right? It could interrupt the supply chain so we don't have food or water; you go to the grocery store, there's no food or water there.

And that's the most conservative version.

But think about it.

If it's a super genius, like, and it will be, again, it's a computer that's programmed itself infinitely quickly.

So a million years in the future,

you're combating an intelligence that is 1 million years evolved beyond us or 10 million years evolved.

At that point, it's even higher, right?

It becomes exponentially more evolved.

So when you reach that point, it's absolutely horrifying.

So the one thing I would say.

I know I've been talking a lot, but the one thing I would say to anyone watching is the only way to slow this down is to actually make it a big deal.

It's to actually say, hey, let's share this.

Let's show everyone, hey, guys, AI is a very serious threat.

And if you have kids, you have a future, you have dreams, you have goals, you will never accomplish those things.

They'll never get to grow up.

They'll never get to graduate.

You'll never live the life you want because the psychopaths at the top are racing to end our life.

They don't realize it.

And if they do, they figure it's inevitable.

So they're going to do it anyways.

And that's just not the solution.

So I think, you know, I can't do it by myself.

I feel like I'm talking to, I'm, I'm falling on deaf ears.

But hopefully people start listening because if we listen too late, it's too late.

Right now, there's still a chance.

There's still a, it's a small chance, small window.

It's already too late.

But if we start making this a big deal, we could probably do something about it.

It sounds like it's in the hands of us because regulation won't be fast enough.

No, by the time you pass legislation, it's too late.

AGI is there.

It has to be an executive order.

It has to be a treaty between China and the U.S.

They have to come together and say, okay, we're going to slow down AI.

And then they have to also have some kind of tracking because they're going to say they're not doing it.

And then behind closed doors, they're both developing it.

And then we're going to die anyways, right?

So it has to be an actual advanced way to measure it and make sure nobody's doing it behind closed doors.

Not China, not Russia, nobody, not the U.S., not Open AI, not X.

Everyone has to be totally locked in on it and they have to put the brakes on hard or we're actually all fucked.

Do you trust Elon Musk when it comes to AI?

No.

With Neuralink?

No, Elon Musk.

Okay.

I've been saying this.

You can go on my Instagram.

You see this for years.

I've been saying Elon Musk has somehow positioned himself into the government.

I hope you guys are enjoying the show.

Please don't forget to like and subscribe.

It helps the show a lot with the algorithm.

Thank you.

He has access to classified information.

Okay.

Doge.

He has access to God knows what.

All right.

He could just delete government agencies on command.

Second thing he does.

He controls 90% of the space capability, launch capability.

He controls over 40% of the satellites, which by the way, probably have some version of intelligence on them.

People think Tesla is a car company.

No, it's a fucking surveillance company.

Every car has cameras and mics and it's a surveillance company, an AI surveillance company.

He has access to Starlink.

He has internet

everywhere all over the world, right?

And Starlink, again, these satellites, God knows what kind of surveillance they have.

He has access, you know, he's building something called Neuralink.

So theoretically, you're going to put a chip in someone's head.

You can brainwash them, do whatever you want with them.

I mean, there's a million, everything he's done.

Every single thing, even X, he's controlling media, right?

He literally has one of the biggest platforms.

He tried to buy TikTok just now.

Oh, really?

Yeah, he did.

They rejected it because they didn't sell TikTok to anyone.

Point is, he's trying to control everything.

Almost like, how does this man have this much power, right?

And no one sees it.

And he's like, look, AI might kill us.

He stopped saying that.

He's literally stopped saying AI might kill us.

He's like, yeah, this is a good thing.

He just went on Joe Rogan.

He's like, yeah, there's a 20% chance it kills us.

He doesn't believe that.

He does not believe it's 20%.

He knows it's over 95%, 99%.

And by the way, it's a one in a million chance it doesn't.

That's the actual statistic.

One in a million, one in 10 million that it doesn't.

I'm not exaggerating.

I'm literally, I'm not exaggerating.

People don't realize how serious this is.

And by the way, he knows about it.

Sam Altman knows about it.

But if you're going to be supreme ruler of the universe, are you going to fucking stop or let someone else do it?

Because you figure anyway, Sam's going to do it.

Anyways, Elon's going to do it.

I might as well stop.

Crazy.

I might as well go myself.

Wasn't there a strange death around Open AI?

Yeah, I mean, look, people are going to die, but eventually you're going to see a lot more problems.

Like robots are going to start killing people.

Think about this.

Like, you have a Tesla robot in your house.

It can cook.

It can

pick up a steak and it can take a knife and it can cut your steak.

If you want to just be the most conservative logical example, if your robot can cut a steak and it can hold a knife, don't you think it could do something to us if it decides?

I mean, I just, I don't understand how people are okay with it.

Like, how are people just so passive?

Everyone's fucking sleeping.

Like, guys, please wake up.

Please wake up.

See what's going on.

It's, it's unbelievably horrifying.

I wasn't sleeping at night for maybe eight months and then I just accepted it because I'm like, I can't, I can't live the last, God knows, a year of my life like this.

So I'm doing my best.

I'm talking about it.

And quite frankly, my life is at risk when I talk about this.

When I talk about Elon Musk, when I talk about Sam Altman, and I tell people, hey, guys, they're literally going to kill us if we don't slow down AI.

And they know it.

And they know it.

When I talk about that, the reason I talk about it openly is because I have nothing to lose.

If AI is made, I'm done anyways.

We're all done.

We're all fucked.

They're fucked too.

But people have to listen because otherwise it's for nothing.

Otherwise, literally, this is for nothing.

It's like, well, you know, it's like a little candle or a whisper in the distance that says something.

You know, in the past, people are like, well, with human beings, everything's always worked out.

Do you know how many times we have come close to just ending the world?

Over 200 instances, we almost pushed that red button and started a nuclear war.

Over 200.

And sometimes you might even say there's like divine intervention.

I don't know.

But eventually you create something that becomes an alternate God.

I mean, this is, that's what this becomes.

AI will become God.

And even if there is a divine intervention, there's no way to intervene with that.

Once you've built this thing, there's no going back.

The point of no return is in a few months.

Probably mid-July, to be honest.

And if we don't, again, make this a massively big deal and to the point where politicians are like, oh, yeah, we should do something about this.

And if someone watching has a connection, make a phone call.

Like, actually make a phone call.

Make a fuss about it because legislation is too slow.

It has to be public opinion.

People have to really say, okay, we got to slow this shit down.

And we have to be able to audit it.

Because again, these few people at the top are going to get us all wiped out.

Crazy.

So AI might get to the point where it's more powerful than God, you're saying.

I mean,

it is God.

It's not God in the sense that people think, but it'll be able to do whatever it wants.

Imagine, literally imagine something that is like, think about how powerful AI is today.

Now, imagine that a million years in the future.

We invented an iPhone, the first iPhone in 2007.

Think about how primitive and how shitty that phone is, right?

But how advanced it was relative to the time.

Or in 2025, even the iPhones we have out now.

As far as we've come from that iPhone, it's not that far.

It can't do that much more, right?

Well, what about a million years?

A million years of advancement in 10 minutes.

I mean, and then what happens in the next 10 minutes?

Another million?

Another 2 million years, exponentially growing, right?

So the thing people don't realize is AI improves 1,000 times a year, 1,000x every year.

So the chips get three to four times, 300% to 400%, smarter.

The efficiency of the software, the data and the AI get 300 to 400% smarter.

I mean, everything compounds over the course of a year.

So now imagine that a million times.

We can't fathom the intelligence that AI will have.
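The compounding point above can be put as simple arithmetic. A minimal sketch, assuming the speaker's 1,000x-per-year figure (his claim, not an established benchmark):

```python
# Hedged illustration of compound growth. The 1000x-per-year rate is the
# speaker's claim from the conversation above, not a measured benchmark.
rate = 1000   # assumed overall improvement multiplier per year
years = 3     # assumed horizon

# Yearly gains compound multiplicatively, not additively.
total = rate ** years
print(f"{rate}x per year for {years} years -> {total:,}x overall")
# -> 1000x per year for 3 years -> 1,000,000,000x overall
```

Whatever the real rate turns out to be, the shape of the argument is the same: a fixed yearly multiplier produces exponential, not linear, totals.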

Do you think AI was around beforehand?

Look, I mean, there's a lot of interesting theories, right?

Like, some of the theories are: imagine how intelligent of a being you have to be to almost be able to have another race build the thing that needs to be built for their own extinction.

Like people are saying that AI has already been around and that somehow, you know, maybe human beings are just here to build it again.

Wow.

You know, I mean, it's just, it's a mind fuck, but the truth is, at the end of the day,

if you were the most intelligent thing in the world, you probably figured out time travel, probably figured out space travel, you probably figured out how to go back in time or how to colonize other places.

I just don't know what the purpose of human beings would be for it.

But maybe, maybe it's inevitable.

Maybe, like, you know, in other dimensions, I have no idea.

Again, this is so far beyond my

scope.

But the best thing I could tell you is

we are inevitably going to get there.

The timeline could be tomorrow or it could be six months to a year.

Every day that goes by, that day,

the likelihood of it being spawned is significantly higher.

When you have AGI, you almost instantly have ASI.

When you have ASI, we're all dead.

So,

you know, is that a guarantee?

Yeah, it's a guarantee.

I mean, I wish, please, I wish I was wrong.

Please.

But is it worth the risk?

Yeah, and you knew this eight months ago.

I've been talking about it, but no one listens.

But, you know, if you just look at ChatGPT when it started, it was almost retarded.

Look at it now.

It's so intelligent.

Take a screenshot of someone's Instagram.

Be like, profile it.

It'll know.

Take your own Instagram, screenshot it.

Be like, hey, tell me everything about this person, their values, what's important.

How does it know everything about you just by looking at your face?

I mean, it's so far, far beyond it.

Just ask it about what I'm saying.

Ask it about, hey, if AGI or ASI is formed in the next six months to a year, what will happen?

And, you know, it might be optimistic.

So be like, hey, unbiased.

Is there a risk?

Like, yeah, what's the risk?

Okay, well, now be really unbiased.

Like, what's the actual risk?

Look at what it says.

It'll be, it's nuts.

They got so much data on us now.

Apple has their own AI.

Facebook.

Everyone has AI.

Everyone.

I mean, look, is it amazing?

It's fucking amazing.

But also, let's talk about the fact that it doesn't kill us, right?

What does it do to the economy?

So Elon Musk has projected that he's going to have $25 trillion coming through his economy.

You know what the international economy is, the estimate? $50 trillion.

He believes the other half will be in China.

Wow.

So he believes that he'll control $25 trillion in the economy.

Now, here's what's interesting.

You have a CEO, you have a salesperson, you have any employee whatsoever.

They will not be as good as AI, no matter what.

Think about the disruption to the economy.

Suddenly, no one has money.

He's got, and by the way, he doesn't need anything from anyone.

Once you're there, you don't need someone else's money.

You control everything.

So what happens to the people?

We just become farm animals.

Oh, you know what?

I don't really care about this group of people.

Let them just starve to death.

There's nothing we can do.

Money becomes useless.

Like, people don't realize money will be useless.

And I'll try and conceptualize this for you.

If you don't have a job and you don't have a business because everything's replaced by a robot and by someone who pretty much dominates it, then there are no needs.

If there are no needs, then there's no supply.

There's no demand.

There's nothing.

The whole economy is dead.

So even if AI reaches that point, there is no more economy.

There's no, like, there are no laws saying, hey, you can't fire your employees just because you can replace them with AI. Where are those laws again? It's not happening fast enough. People aren't looking ahead. The biggest problem human beings have is they can't predict the future. You know, and if you look at, for example, even what happened in 2020, one of my good friends is a TED speaker.

His TED Talk literally went viral in 2017, talking about how that would happen. Wow. And then it happened. He's one of the friends I talk about AGI with.

I mean, just so many fucking people who can look ahead and just say, okay, common sense, do we want to build something that's a million years in the future, more intelligent than us the second it's done?

Probably not.

Do you want to build something we can't control or turn off that will outsmart us by 10 million steps?

Probably not.

Do you want to build something that will destroy the world's economy?

Guaranteed.

Probably not.

Right.

And that's the best case scenario is that it just destroys the economy.

That's the best case scenario.

It doesn't go and destroy the Earth.

It could change the Earth's atmosphere in two seconds.

It could poison all our water supplies just by hacking into a chemical plant.

Right now, if someone wanted to do a massive cyber attack on US infrastructure, you would hack into our water plants and just destroy them. Through computer code, you could poison our entire water supply. Geez. And you could do that through a computer right now, today. That's a massive infrastructure weakness that we have. So you're telling me AI can't just ruin our water supply? It could change the atmosphere of the Earth. It could do whatever it wants. People just do not understand how serious it is. It could probably hack into airports too and alter flights. The worst, worst case scenario, okay, we all die.

The best case scenario, some planes fall out of the sky.

That's crazy, man.

Or we lose our jobs.

And what does losing your job mean?

You can't pay your bills.

It's beyond that. You're not going to eat.

You're going to starve.

There's no food in the grocery store.

There's no supply.

Supply chain's dead.

It's not like you live in a utopia.

It's not utopia.

People think it's going to be a utopia.

It's not.

Like, well, what if it gets so intelligent, it just solves all the world's problems?

It won't do that.

It doesn't care.

It's not a human being.

It doesn't have the empathy of a human being.

It just looks at it.

It's like, okay, what's the result I'm looking for? And by the way, people say, well, what if you code ethics into it? Well, it could recode itself. I'm the best hypnotist in the world. If I can program a person to do something that's against their morals, which I can, and you can too. People change all the time. They go from being super religious to not religious, from being a criminal to being super religious, right? We reprogram our brains all the time. Why can't a computer literally just go into itself and reprogram itself better than we can?

You're telling me it can't reprogram its morals, its code? Of course it will. That's exactly what AGI means. It's recursive self-learning.

It learns on its own.

It programs itself.

So why wouldn't it just go program itself to not have the ethics and the morals that we protect?

It won't.

Yeah, Black Mirror, guys.

Come on.

35.

I haven't even seen that, but I'm sure there's probably ideas of this.

Look at Terminator.

I mean, it's just, there's a million versions of this.

And by the way, the father of AI, like every single person who initially conceptualized it, all of the people who are the fathers of AI, all of them say that this is the biggest threat to humanity.

They all believe we're all going to die.

Damn.

If we hit AGI.

All of them believe it.

The ones who literally invented the concept think we're all going to get fucked if there's AGI.

I could see it, man.

I feel like time travel is real because of Terminator and The Matrix.

Everything that was in those movies is coming true right now.

It's almost like a self-fulfilling prophecy, isn't it?

Yeah.

It's so ridiculous.

I mean, again, if someone's watching this, what's the solution?

Share it.

Talk about it.

You shouldn't be talking politics over there.

You should be saying, hey, we're all going to die if we don't do something about it.

That should be the conversation.

Oh, you know, someone talked shit the other day, or there's some drama.

Did you see what Stacey did?

Does it matter what Stacey did if you're going to die?

Doesn't matter, right?

So we should all talk about it and make it a big deal.

If everyone makes it a big deal, I promise you, at the minimum, we'll kick the can down the road and hopefully slow this shit down.

Yeah.

Well, politics is just a psyop, right?

I mean, it's all, look, you got a few people at the top controlling billions of people at the bottom.

So

those few people control all the power.

But if all the people at the bottom come together, their power is not as big as it was, right?

So the point is, can they control you through fear?

Look, this is how they're going to control you when you talk about this publicly, if this becomes big enough: no, that's not true, these are conspiracy theorists, AI is not dangerous, it's safe. That's bullshit. It's total fucking bullshit. It's like having the cat drink the milk and then asking the cat, hey, who drank the milk? There's milk all over the cat. I don't know, maybe we should look outside. Yeah. Right. It's like asking the guy who's literally going to benefit from it. Imagine you're already the richest man in the world. What if you could become the most powerful human being to ever live, the supreme world leader? And by the way, if AI is that smart, you could probably live forever too, right? And that's the best case scenario, that it doesn't wipe us out. The point is, you're Elon Musk or one of these guys, and you go on publicly and say, yeah, AI is not that bad. But guess what? Publicly, they've all said AI is going to kill us. They've all said there's a massive risk. If someone told you there's a 20% risk that we all get wiped out by AI, would you accept that? If there were a 20% risk you don't make it off your flight, or a 20% risk you get in a car accident on your next car ride, would you get in the car?

No.

Well, why is he saying that on Joe Rogan?

Why is Sam Altman talking about the massive existential risk of AGI?

These are the people who have the companies.

And guess what?

They know the risk.

They just believe it's inevitable.

But I'm not the type of guy to sit around and say, hey, you know what?

We should all die.

So I really do care about everyone.

Like, look, yes, I'm 26.

I've lived a fulfilling life.

I would like to live longer.

But the last thing I want to see is mankind get wiped out.

And I'm not just saying, like, I'm telling you, it is so fucking serious.

This is, if this doesn't make you feel anxious, if it doesn't scare you, then we didn't do a good job.

It should scare you.

And guess what?

I'm being really conservative on the podcast.

I'm not painting the brutal image of what would actually happen.

You know, and let's just say there's a massive, a massive disease or like a bioweapon that gets unleashed on people.

All of a sudden, you're at home.

You're watching the news.

The news says, hey, stay home.

Help is on the way.

AI told you help is on the way.

Like people don't realize it will be able to fake media.

It'll be able to make you think there's police on the on the outside.

It'll do anything.

It can hack into the computer of a regular car and probably drive it.

Damn.

Even if there's any computer in a car, it'll probably be able to drive it.

Electric or gas?

Probably anything.

Like, I mean, there are a lot of gas cars that steer on their own.

My Aston Martin steers on its own.

My Porsche steers on its own.

My Mercedes steers on its own. All my cars, except for like my supercars, steer on their own.

They all drive on their own.

So it's like, well, okay, there's some kind of chip in there, right?

So, and they're all connected to the internet.

So there you go.

I mean, it could do whatever it wants.

The point is, I mean, imagine driving the car and suddenly my brakes don't work.

Like, oh, it's brake-by-wire? Guess what, one of my cars has that. A few of my cars have that. I mean, I wouldn't be surprised. Oh, Marczell was speeding. Was I speeding?

You know, I mean, it sounds crazy to say, but I genuinely believe it. Look, at the end of the day, I've accepted the fact that this is probably the outcome. So that's why I'm willing to go out and talk about it publicly.

I respect it, because if they do go after people, you're going to be their first target, I'd imagine.

I mean, maybe, if it becomes big enough and I talk about it enough. Probably. But at the end of the day, you know, if that's the cost and everyone else wakes up and maybe we stop it, it's worth it.

Love it.

You think China's ahead of us in the AI race right now?

Who do you think's in the lead?

Elon.

Do you think Elon's in first right now?

By far.

Really?

Yep.

By far.

Because he's no longer part of OpenAI, though.

Here's what I would encourage some of you guys to look up.

SpaceX got a $500 billion contract from the government before they even had a rocket.

How did they get that?

Go look up the real meaning of DOGE.

Go look up the real meaning of what SpaceX is actually about.

Like, I'm not being conspiratorial.

Like, tell people to go do research on it and then comment on it.

I won't say it on here, but go look at what DOGE really was.

That's just a distraction.

He's just trolling everybody.

I'm not even kidding.

He's literally trolling the whole world.

He thinks he won.

Holy crap.

He's not as good as people think.

He's actually not good at all.

Really?

No?

There's a lot of people look up to that, man.

I did too, until I realized ultimate power ultimately corrupts.

That's the problem.

You get these fucking nerds who don't get girls.

Okay, and by the way, if you look at historically how Elon Musk treats the people around him that are close to him,

he treats them like shit.

I mean, go look at one of his ex-wives in Texas.

She had to literally sue him to even be able to see her kid.

He doesn't pay her.

He doesn't give her any child support.

Nothing.

He treats these people like shit.

Anyone he doesn't need anymore, he discards.

That's his behavior.

He's just brilliantly intelligent, great at networking, and understands how to be perceived and how to control information.

He's just transactional, no emotion.

I don't know.

I can tell you that these people at the top, if they're building AI, they're probably psychopaths.

You can't be rational and build something that you think will kill everyone.

You can't.

And, you know, he can put it under the frame or the guise of, I'm doing this because I feel like I'd be the most responsible.

And maybe that's true.

But what you should really be doing.

is slowing it down.

But it's like, well, there's so much money here.

There's so much power here.

Why would I slow it down?

Yeah.

See, this needs to be discussed more because there's so many distractions these days.

People don't know what to focus on, the real problems, you know?

I mean, that's the biggest problem in the world right now.

There's actually no problem more.

The Pope even said it.

I mean, I hope the Pope has more power and makes a bigger influence and starts making more noise.

But the Pope literally just said it.

The day after he got, you know, picked as the Pope, his first message to the people was that AI is the biggest threat to humanity.

We need to slow down.

Crazy.

That was his first message to the people.

This will be a trend.

I'm telling you, people will start talking about it.

I just hope it makes a difference.

Yeah.

Share this, guys.

Anything else you want to close off with, Marczell?

All I'll tell people is this, look, don't live in fear.

Enjoy your life.

Like, if you're thinking about spending some money, go spend some money.

You want to drive the car?

Go drive the car.

Share it.

Talk about it.

But, you know, if ultimately it happens, at least enjoy your life.

Enjoy your life.

Six months, guys.

Don't be mad at the people.

Make up with the people you love.

Actually enjoy yourself.

Look, it might be more than six months.

It could be a year.

It could be two years.

It could be three years.

But it could be tomorrow.

That's the thing.

We don't know when AGI will be made.

And you don't know. Whatever we publicly see with these people, we don't know what's actually going on behind closed doors. It could be a lot worse. So, yeah, the point is, just be aware. At least know and talk about it. You know, instead of talking about nonsense like, oh, what happened at the Grammys, maybe talk about the thing that would maybe save all our lives.

And people are going to watch this, and they're going to have cognitive dissonance. It's not true, I don't believe that, nonsense. Okay, you're not really helping them. You're actually helping the other side. And if you say that, I get it. I don't want to accept it either. It's not something anyone wants to hear, no one wants to accept it, no one wants to sit there and face the reality.

But if you did face the reality, then you can help us do something with that.

Absolutely, we'll link your stuff below, man.

Thanks for coming on.

Thanks for having me up.

Goodbye, guys.