Radio Better Offline: Allison Morrow, Paris Martineau & Ed Ongweso Jr.

1h 25m

Welcome to Radio Better Offline, a tech talk radio show recorded out of iHeartRadio's studio in New York City.

Ed Zitron is joined in studio by Allison Morrow of CNN, Ed Ongweso Jr. of The Tech Bubble newsletter, and Paris Martineau of The Information to talk about the collapse of the AI bubble, the goofy “AI 2027” fan fiction that has gone viral in the tech industry, and how the tech media faces a reckoning once the bubble truly bursts.

 

Vote for Better Offline's "Man Who Killed Google Search" as the best business podcast episode in this year's Webbys! Voting is open until April 17! Vote today!
https://vote.webbyawards.com/PublicVoting#/2025/podcasts/individual-episode/business

Vote for Weird Little Guys in this year's Webbys! https://vote.webbyawards.com/PublicVoting#/2025/podcasts/individual-episode/crime-justice

Allison Morrow
https://www.cnn.com/profiles/allison-morrow
https://bsky.app/profile/amorrow.bsky.social

Ed Ongweso Jr.
https://thetechbubble.substack.com/
https://bsky.app/profile/edwardongwesojr.com

Paris Martineau
https://www.theinformation.com/u/parismartineau?rc=kz8jh3
https://bsky.app/profile/paris.nyc

 

https://ai-2027.com/

New York Times article on AI 2027 - https://www.nytimes.com/2025/04/03/technology/ai-futures-project-ai-2027.html

CNN - Apple’s AI isn’t a letdown. AI is the letdown - https://www.cnn.com/2025/03/27/tech/apple-ai-artificial-intelligence/index.html

CNN - The AI bubble may not be bursting, but tariff chaos is sure helping to deflate it https://www.cnn.com/2025/04/01/business/ai-bubble-markets-tariffs-nightcap/index.html

WSJ - How Is SoftBank Funding Its Mega Investment in OpenAI? A Lot of Debt - https://www.wsj.com/business/deals/openai-softbank-investment-debt-51b4a130

TechCrunch - OpenAI reportedly mulls buying Jony Ive and Sam Altman’s AI hardware startup - https://techcrunch.com/2025/04/07/openai-reportedly-mulls-buying-jony-ive-and-sam-altmans-ai-hardware-startup/

---

LINKS: https://www.tinyurl.com/betterofflinelinks

Newsletter: https://www.wheresyoured.at/

Reddit: https://www.reddit.com/r/BetterOffline/ 

Discord: chat.wheresyoured.at

Ed's Socials:

https://twitter.com/edzitron

https://www.instagram.com/edzitron

https://bsky.app/profile/edzitron.com

https://www.threads.net/@edzitron

See omnystudio.com/listener for privacy information.


Transcript

This is an iHeart podcast.

Be honest, how many tabs do you have open right now?

Too many?

Sounds like you need Close All Tabs from KQED, where I, Morgan Sung, doomscroll so you don't have to.

Every week, we scour the internet to bring you deep dives that explain how the digital world connects and divides us all.

Everyone's cooped up in their house.

I will talk to this robot.

If you're a truly engaged activist, the government already has data on you.

Driverless cars are going to mess up in ways that humans wouldn't.

Listen to Close All Tabs, wherever you get your podcasts.

There's more to San Francisco with the Chronicle.

More to experience and to explore.

Knowing San Francisco is our passion.

Discover more at sfchronicle.com.

Who knew you could get all your favorite summer fruits and veggies in as fast as an hour with Walmart Express delivery?

Crisp peppers, juicy peaches, crunchy cucumbers, and more at the same low prices you'd find in store.

And freshness is guaranteed.

If you don't love our produce, contact us for a full refund.

You're definitely going to need a bigger salad bowl.

Order now in the app.

The Walmart you thought you knew is now new.

Subject to availability, fees, and restrictions apply.

Adobe Acrobat Studio, so brand new.

Show me all the things PDFs can do.

Do your work with ease and speed.

PDF spaces is all you need.

Do hours of research in an instant.

With key insights from an AI assistant.

Pick a template with a click.

Now your prezzo looks super slick.

Close that deal, yeah, you won.

Do that, doing that, did that, done.

Now you can do that, do that, with Acrobat.

Now you can do that, do that.

With the all-new Acrobat.

It's time to do your best work with the all-new Adobe Acrobat Studio.

CoolZone Media.

Hi everyone, before we get to the episode, I just wanted to lead in and say we are up for a Webby.

I'll be including a link.

I know it's a pain in the ass to register for something.

I'm sorry.

I really want to win this.

Never won an award in my life.

It will be in the links.

And while you're there and registered, look up the wonderful, weird little guys with Miss Molly Conger.

Vote for both of us.

I'm in the best business podcast episode category, and she's in the best crime podcast episode category.

We can win this.

We can defeat the others.

And now for the episode.

Better offline.

Every day I am punished and killed, and you love to watch.

Welcome to Better Offline.

We're live from New York City, recorded straight to tape, of course.

And I'm joined by an incredible cast of people.

To my right, I have Paris Martineau of The Information. Hey, Paris.

What's up?

What is up?

Edward Ongweso of The Tech Bubble newsletter.

Hello, hello.

And the wonderful Allison Morrow of the CNN Nightcap newsletter.

Hi.

And Allison, you wrote one of my favorite bits of media criticism I've ever read recently.

Do you want to actually walk us through that piece?

Because I think I will link it in the notes.

Don't worry, everyone.

I'd be happy to.

I wrote a piece.

I think the headline we ended up with was like, Apple's AI is not the disappointment.

AI is the disappointment.

Yeah.

And this was inspired by

credit to where it's due.

I was listening to Hard Fork with Kevin Roose. My husband and I were driving out to the country and listening to this and just getting infuriated.

Yeah.

And basically their premise was, or at least Kevin Roose's premise was that

AI is failing, or sorry, that Apple is failing this moment in AI.

Right.

And Apple has been trying, it's been like the laggard.

You know, that's a narrative we've heard in tech media over and over.

And it's like Kevin Roose's point was like, oh, well, they should just start getting more comfortable with experimenting and making mistakes and, you know, violating everything that the Apple brand kind of stands for, and, like, force the AI into a consumer product that no one wants.

And I was like, respectfully, no.

It's also such a funny argument, given that it was a mistake being made by Apple that resulted in the whole Houthi PC small group situation.

Wait, what was that?

Walk us through that.

That was

specifically how the editor-in-chief of The Atlantic ended up in a secret military signal chat.

Wait, I missed.

What?

How?

How have you been doing this?

I know.

I saw this signal.

I missed this chat at the end.

I'm sorry. I have not been online.

I don't use the computer.

Better offline.

Or Better Off.

Oh, gosh, I should leave that.

I've been reading the scrolls.

So basically, The Atlantic came out a couple weeks ago with an article about how their editor-in-chief one day was suddenly added to a signal group channel.

Signal gate.

Yeah, Signal Gate.

But how did the Apple thing work?

So the Apple thing was,

I'm forgetting who exactly reported this.

This was in the last couple of days.

But how it happened was, like, you know, that thing that comes up on your iPhone where it says, like, oh, a new

phone number has been found.

It was a suggested contact,

and it happened because someone, I guess, in the government had copied and pasted an email containing the editor-in-chief of the Atlantic's contact information in a message to I'm forgetting whatever government officials.

Yeah, one of the guys.

And so he ended up combining the Atlantic EIC's information into a contact

for some government dude.

And that's how they ended up in the chat, because Signal then pulls it in when you connect it to your contacts.

I love the computer so much.

So, I mean, that makes me even crazier about the hard fork take, because it's like, you can't mess around with something like your phone.

Well, in this particular instance, I take it all back.

Apple AI is amazing.

It gave us one of the best journalism stories of the year.

You also made a really good point in here.

I messaged you this on the way in.

You made this point that there's a popular adage in policy circles that the party can never fail, it can only be failed. It is meant as a critique of the ideological gatekeepers who maybe, for example, blame voters for their party's failings rather than the party itself. The same fallacy is taking root among AI's biggest backers: AI can never fail, it can only be failed. And I love this, because you get people like Kevin Roose, and there was a wonderful clip on the New York Times TikTok of Kevin Roose seeming genuinely pissy. He's like, I can't believe people are mad at AI because of Siri. And it's like, oh, what, they think it's shitty because it's shitty.

Like, it's like,

they talk about AI like it's their child.

Him and Casey act as if we've hurt ChatGPT.

Sorry, Claude.

They're Anthropic boys.

And

insane that Casey's boyfriend works at Anthropic.

I know he does the citation.

Anyway, it's just so weird because it's like...

We have to apologize for not liking AI enough.

And now you have the CEO of Shopify saying, actually, you have to use it.

Did you hear about this?

Yeah, he said what?

That you have to prove your job can't be replaced by AI?

Yeah.

Else it will be.

And he also said that now it's going to be Shopify policy to include in all of the employee performance reviews, both for your self-assessment and for your direct reports and colleagues' assessment, how much this person uses AI.

And obviously,

what's going on there is if you are not reporting that you use AI all the time for everything, you could get fired.

Klarna just tried to overhaul hiring practices so that they could be AI-first or AI-only, and then rolled it back because they realized you can't replace all these jobs.

My question is: I mean, this is something that's brought up on the show all the time, but who are these people that are encountering the AI assistants suddenly plugged into every app and being like, yeah, this is actually beneficial to my life and this works really well?

Because it sucks every time I use it.

You made the point in your article, Allison, as well.

It's like, if it was 100% accurate, it would be really useful.

If it's even 98% accurate, it's not.

Right.

I think that was the point that, you know, to his credit, Casey Newton made in the episode, which is that AI is fundamentally an academic project right now.

And it's like,

you can have all kinds of debates about its utility, but ultimately, is it a consumer product?

And no, it's just like it's failing as a consumer product on all fronts.

And what's crazy as well is I'm surprised he would say that considering everything else he's ever said.

Because he quite literally has had multiple articles recently being like, ah, consumer adoption is up.

He had an article the other day where it was like, data provided exclusively from Anthropic shows that more people are using AI.

It's like, my man, it's 2013 again.

We're past this.

You can't just do this anymore unless you're you.

And so, going back to Mr. Lütke of Shopify, I just want to read my favorite part of it.

It says, I use it all the time, but even I feel I'm only scratching the surface, dot, dot, dot.

You've heard me talk about AI and weekly videos, podcasts, town halls, and summit.

Last summer, I used agents to create my talk and presented about that.

So, all this fucking piss and vinegar, and the only thing you can use it for is to write a slop-ridden presentation to everyone about how good AI is without specifying what it does.

I feel like I'm going insane sometimes with this stuff.

I mean, in one way, that's great, right?

The only place you should encounter it is maybe the team-building retreats.

You know, that's the utility of this shit.

This reminds me a lot of like media 2012, 2013, where it was all pivot to video and what's our video, our vertical video strategy.

And it's like, okay, now what's our AI strategy?

How are we injecting AI into everything we're doing?

And it's like, well, to what end?

Yeah, what's the point?

This is something that has been driving me mad, especially with partnerships we're seeing between media firms and these AI firms.

You know,

these are firms in the same sector that keeps lying to them about how, if you integrate artificial intelligence, this time it'll optimize your ability to find an audience or to get revenue, and we can include you in some esoteric revenue share program, or we'll be able to claw back some of the eyeballs and the attention that you're interested in seeking.

But each time it's actually just used to graft themselves onto services or to try to gin up excitement about these products, right?

What's insane is this company has a multi-billion dollar market cap.

And I'm just going to read point two.

AI must be part of your GSD prototype phase.

The prototype phase of any GSD product should be dominated by AI exploration.

Prototypes are meant for learning and creating information.

AI dramatically accelerates this process.

How?

Fucking how?

Like, that's the thing.

I have clients at my PR firm that will occasionally bring me AI things.

And every time, I'm just like, this better fucking work.

Like, just every, and to their credit, they do.

But it's like, I have clients I turn down all the time who are like, yeah, we're doing this.

And I'm like, is this just the chat bot?

And they're like, no.

I'm like...

Can you show me how it works?

No.

I'm like, oh, cool.

Yeah, I don't think we're going to be a good fit somehow because you don't seem to be able to explain what your product does.

But don't worry, this appears to be a problem up to the multi-billion dollar companies as well.

It's just, it feels like the largest mask off dunce moment in history.

Just these people who don't do any real work being like, it's the future, I think.

I don't do anything real.

And the pivot to video thing, I think, is actually a really good comparison because I remember being in New York at that time being like, I don't fucking like video.

I don't like anyone.

I don't think I want to consume video in the way that they were saying, and it was like Mike and everyone.

And they were like, oh, we're going to do this video, and we're going to do everything video now. Video first, no written content. It's like, I don't know a single goddamn human that actually does that.

And also, the other thing was that Facebook was just overclaiming, like averaging out the engagement numbers, and everyone was wrong. But that was the same kind of thing. It's like, very clearly the people who have their hands on the steering wheel are looking at their phone, and it's fucking confusing. But it's so much worse this time. It feels more egregious somehow.

yeah yeah because it feels, I mean, we've had so many of these hype cycles kind of back to back to back, from even the horizontal video days of Facebook to vertical video to whatever the hell the metaverse was supposed to be.

To literally in an ominous moment, as I was walking in to record this, I saw a guy wearing a leather jacket with Bored Ape Yacht Club on the back, and I was like, God, what a child.

Yeah, I was like, that's that guy, Rogs.

What a cool dude.

But it's like, how long is this going to last?

I have been actually looking at the numbers recently, and I don't know either because for SoftBank to fund OpenAI, it might require them to destroy SoftBank.

Like, S&P is potentially downgrading their credit rating due to it.

Hell yeah.

Yeah, I know.

Like, we're really at this point where it's just like we've gone so much further than like the metaverse and crypto did, because those weren't really systemic things.

But this one, I think it's just the narrative has carried away so far that people are talking about a thing that doesn't exist all the time.

I mean, in some elements, it kind of reminds me of near the end, or near the real peak, which is when we started also to see metaverse and crypto

sustainable refi shit where they're like, oh, actually, you know, we can fight climate change with crypto,

putting carbon credits on the blockchain.

And so

there was a moment where the frenzy and the speculative frenzy led to like world-transformative visions that were bullshit.

And I feel

like we are heading there, we're in that direction with artificial intelligence, where, you know, consistently we've been fed: oh, this is going to revolutionize everything. But it feels like the attempt to graft it onto more and more consumer products, more and more government services, more and more parts of our daily spheres of life, as a way to, like, privatize almost everything or commodify everything, feels like downstream of the way crypto's attempt to

put everything on a blockchain blew up.

Yeah.

I was thinking about this in a kind of like fundamentally cultural way, where I think at some point in the last 30 years, there was a time when everything coming out of Silicon Valley was cool.

Yeah.

Whether it was like useful or world transformative, it was cool and there was like an edge to it.

And people were like, ooh, that's neat.

Disruptive.

Yeah, disruption was everything.

And like, I think post like Facebook, Cambridge Analytica era, like 2016,

tech has just stopped being cool and edgy.

It's very corporate.

And like, I don't think the rest of corporate America has kind of figured out that Silicon Valley is not the cool thing anymore.

And that they can lie.

They're fully capable of being wrong and lying.

Like, that's the other thing.

They've gotten very good at fundraising and marketing.

But they're also not like kids anymore.

Like, we talk, I still see people referring to OpenAI as a startup.

Palmer Luckey as a kid.

Palmer Luckey as a kid who looks like Leisure Suit Larry and sells arms, which is U.S. government-powered.

He's a kid.

Just looking at a wardrobe.

He's a small guy.

He's small.

Well, we refer to them as startups, but also I think one of the most accomplished parts of AI marketing has been like we always refer to them as labs.

Yeah.

So they seem like so academic and like good fundamentally.

And it's like, these are companies.

Like some of them might be part of

a research institution or a university, but a lot of them are startups.

Yeah, literal companies.

Yeah, they are companies.

Like, Anthropic's a public benefit corporation, I believe.

And

it's just remarkable.

And I think what's happened here is that the narrative has gotten away to the point that, really, the dunce mask-off moment I mentioned is people like Mr. Lütke from Shopify. It's very clear he doesn't do any work.

Like, I think that anyone who is just being like, yeah, AI is the future and it's changing everything without specifying anything doesn't do any work.

I just don't.

Bob Iger from Disney said AI is going to change.

No, it's not.

Bob, how's it changing your fucking life, you lazy bastard?

Like, you're going to summarize your worthless emails that someone else reads as you lie on your Scrooge McDuck money.

Yeah.

And it's just, it's so bizarre, but it feels like we're approaching this insanity level where you've got people like Shopify being like, oh yeah, it's going to be in everything.

As like OpenAI burns more money than anyone's ever burned.

Anthropic lost 5.6 billion last year, as reported by The Information.

The Information does some incredible fucking work on this, I should say.

And

it just doesn't make any sense.

And it's getting more nonsensical.

You're seeing, like, all of the crypto guys have fully become AI guys now.

And that was something I didn't like talking about at first because it wasn't happening.

Now it's all of them.

They all have AI avatars.

This guy called Jamie Burke is a real, real shithead.

This guy was like a crypto metaverse guy and is now a full AI guy.

Another guy called Bernard Marr, who is just a harmless Forbes guy, kind of like an NPC type, like one of the hollows from Dark Souls walking around.

The Venn diagram is increasingly becoming a circle.

Yeah, but he's onto quantum now, which is a bad sign.

That's a bearish sign.

When you've got one of the Forbes guys moving on to quantum, we're cooked.

What about thermo?

Isn't there?

Isn't there some like?

Oh, thermo, yeah.

There's some scam thermodynamics.

Yeah, I'm gonna become a thermodynamics influencer.

I know what that means.

I also know what that means.

But if anyone could tell me real quick.

But it's, I think, the most egregious one I've seen, and I sent this all to you, Ed. I think you and I have talked about this the most. It's one of the stupidest things I've read in my worthless life, and it's called AI 2027.

Now, if you have not run into this yet as a listener, it will be in the episode notes. I'm just going to bring it up, because it is, literally, throughout, I was like, is this fan fiction? This is fan fiction. Oh, this is interactive fan fiction. It is, and you can hit the buttons and see: What is this? How do we write it? Our research on key questions, like what goals will future AI agents have, can be found here.

The scenario itself is written iteratively.

We wrote the first period up to mid-2025, then the following period, etc., until we reached the ending.

Yeah, otherwise known as how you write stuff.

Like you write in a linear fashion.

We then scrapped this and did it again.

You should have scrapped it in all of it.

Now, this thing is

predicting that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.

We wrote a scenario that represents our best guess at what it might look like, otherwise known as making stuff up.

Not even over the next decade.

It basically says it's going to have superhuman, like catastrophic or world-changing impact in the next five years.

Like by 2030, we're either going to be completely overtaken by our robot overlords or, like, at a tenuous peace.

Yeah.

And

it's insane as well because it has some great headlines like mid-2026, China wakes up.

And then

I love that China was so far behind.

You know, it's such a like, when did this come out?

This came out like a week ago, and I've been sent it by a lot of people.

If you're one of the people who sent it, don't worry, I'm not mad at you.

It's just I got sent it by a lot of people.

This thing is one of the most well-written pieces of fan fiction ever, in that it appears to be like a Manchurian candidate situation for idiots.

Not saying this is about the same thing, but anyway, Kevin Roos wrote up the piece in full.

He wrote up a piece about this called, I'm just gonna say, This AI Forecast Predicts Storms Ahead.

Some storms. Some storms, my bad, is not even an accurate description of all of the storms it predicts. And the long and short of this, by the way, I have read this a few times because I fucking hate myself.

The long and short of it is that a company called OpenBrain, who could that be, yeah, it could be anyone.

Anyone. OpenBrain. They create a self-learning agent. Somehow. Unclear how. All they lay out is how many teraflops it's going to require.

And it can train itself and also requires more data centers than ever.

How they get them, how those are funded, no fucking clue isn't explained.

Probably the easiest, actually, this is, fuck, this just occurred to me.

Probably the only thing they could actually reasonably extrapolate in here is the cost of data centers.

That's the only thing.

And they don't.

Probably because they'd be like, yeah, we need

an actual trillion dollars to do this made-up thing.

I do also want to add in here that, you know, behind AI 2027, one of the people connected to it, if I remember correctly, is Scott Alexander, who's this guy that's part of the rationalist community, which is one of the groups that overlaps with the effective altruists and the accelerationists.

Yeah, you know, so if it feels like it's frothy and fan fiction-y and hypey, that's because these are the same people that keep, you know, that are connected to pushing constant hype cycles over and over and over again.

And it's written to be worrying as well.

It's written to upset you.

It's written to be worrying, but it also in the predictions for the next two years keeps talking about how the stock market is going to grow exponentially and do so well.

The president is going to be making all of these wise, informed decisions and having really deep conversations with the leader of Open Brain.

And I was like, are you...

That's why I asked, when did this come out?

Because I was like, maybe this was written a couple of years ago,

but no, it is why, like, literally, it's wrong.

It's like the Bene Gesserit kind of planning.

Like, okay, the Kwisatz Haderach.

He's going to be born.

He's going to lead us to the promised land.

It's so good as well because the people who have sent this to me have been very concerned just because they're like, this sounds scary.

And I really want to be clear.

If you read something like this and you're like, that doesn't make sense to me, it probably doesn't make sense to anyone because it's nonsense.

This thing is, let me read just one cut from it.

The AI R&D progress multiplier: what do we mean by 50% faster algorithmic progress?

We mean that OpenBrain makes as much AI research progress in one week with AI as they would in 1.5 weeks without.

Who fucking cares, Matt?

What are you talking about?

If a frog had wings, it could fly.

Like, what do you...

And what's crazy is, and I know I bag on Kevin Roose, it's because he's a nakedly captured part of the tech industry now.

I am in public relations, and I'm somehow less frothy about this.

That should tell you fucking everything.

It is insane that the New York Times, at a time when you have SoftBank being potentially downgraded by S&P, you have OpenAI raising more money than they ever raised, $40 billion, except they only received $10 billion, and they'll only get $20 billion more by the end of the year if they become

a for-profit, which they can't do.

No, no, no, no.

Kevin Roose can't possibly cover that.

He needs to go and take a solemn-looking fucking photo of some

R swipe.

I can't even get my phone.

I do really love all of the incredibly solid

features.

This guy is just, and I'll put the link in there for this.

He's got 75 pockets on his trunk.

Yeah, my man is ready to open AI.

Oh, that's where it is.

And it's just him, like, this guy sitting with his hands clasped, like staring mournfully into the distance.

This is what you're spending your time on, Kev.

And I'm just going to read some Kevin Roose.

The AI prediction world is torn between optimism and gloom.

A report released on Thursday decidedly lands on the side of gloom.

That's Kevin Roose's voice.

But my favourite part of this, by far, I'm going to take a second to get it, because Ed, I sent this to you as well.

Oh, where is it?

So, also, a lot of this is.

Oh, here we go.

If all of this sounds fantastical, well, it is.

Nothing remotely like what Mr. Kokotajlo and Mr. Lifland are predicting is possible with today's AI tools, which can barely order a burrito on DoorDash without getting stuck.

Thank you, Kevin.

I'm so fucking glad the New York Times is on this.

And that was at the end, right?

Like, you set up this whole article, and it's like, these guys have these doom predictions.

And that's the other thing: the altruistic AI guys have all told themselves this story, and they all believe it, and they think they are, like, the Prometheus bringing fire to the people and, like, warning the people.

And it's like, you guys have sold yourself a story with no proof.

I don't know.

I feel like they're just scam artists.

Nothing about this suggests they believe in anything.

You can just say stuff.

Look, it will be.

Literally, like, the second sentence in this is that in two months, there will be personal assistants that you can prompt with tasks like, order me a burrito on DoorDash.

And they'll do great stuff.

There are so many things that go into ordering me a burrito on DoorDash.

What restaurant do I want?

What burrito do I want?

How do I want it to get to me?

Where am I?

It can't do any of those things, nor will it.

He gazed out the window and admitted he wasn't sure.

And the next few years went well, and we kept AI under control, he said, referring to one of the writers of the piece.

He could envision a future where people's lives were still largely the same, but where nearby special economic zones filled with hyper-efficient robot factories would churn out everything we needed.

And if the next few years didn't go well, maybe the sky would be filled with pollution and the people would be dead, he said nonchalantly.

Something like that.

You know, one of the things I really, um, I don't know, it's just so frustrating, because we're constantly fed these, you know, sci-fi, esoteric futures about how AI, powerful AI, superhuman AI is around the corner.

And we need to figure out a way to accommodate these sorts of futures.

And part of that accommodation means restructuring the regulations we have around it.

Part of that accommodation means entertaining experiments, grafting them onto our cultural production, grafting them onto consumer goods.

Part of that means just like, you know,

taking it on the chin and figuring out how to use ChatGPT.

But in all of this, just more or less sounds like you need to...

The marketing is failing on you and you need to step up in one way or another.

You need to believe.

You need to believe in the business.

You need to do your part, you know, to summon God.

And that's the thing.

It goes back to what you're saying.

It's like you've failed AI by not believing.

Yeah, and if you're bad at it, it's your fault and not the machine's fault.

You just

learn.

To Ed's point, I think like all of this predicting of the future and this like revolution is like they have told themselves a story that is this is inevitable and that there are no choices that the human beings in the room get to make about how this happens.

And it's like, actually, no, we can make choices about how we want our future to play out.

And it's not going to be just Silicon Valley shoving it down our throat.

And on the subject of human choice, if this shit is so powerful, why have their mighty human choices not made it useful yet?

Like, that's the thing.

It's,

and you make this point in your piece as well.

It's like, AI can never fail, it can only be failed.

Failed by you and me, the smooth-brained Luddites who just don't get it.

And it's like, why do I have to prove myself?

And listen, you know, the Luddites, they have more grooves on their brain than Kevin.

So he needs to, I think it's worth embracing a little bit, you know?

There's more to San Francisco with the Chronicle.

There's more food for thought, more thought for food.

There's more data insights to help with those day-to-day choices.

There's more to the weather than whether it's going to rain.

And with our arts and entertainment coverage, you won't just get out more, you'll get more out of it.

At the Chronicle, knowing more about San Francisco is our passion.

Discover more at sfchronicle.com.

Be honest, how many tabs do you have open right now?

Too many?

Sounds like you need Close All Tabs from KQED, where I, Morgan Sung, doomscroll so you don't have to.

Every week, we scour the internet to bring you deep dives that explain how the digital world connects and divides us all.

Everyone's cooped up in their house.

I will talk to this robot.

If you're a truly engaged activist, the government already has data on you.

Driverless cars are going to mess up in ways that humans wouldn't.

Listen to Close All Tabs, wherever you get your podcasts.

So I've shopped with Quince before they were an advertiser and after they became one.

And then again, before I had to record this ad, I really like them.

My green overshirt in particular looks great.

I use it like a jacket.

It's breathable and comfortable and hangs on my body nicely.

I get a lot of compliments.

And I liked it so much I got it in all the different colors, along with one of their corduroy ones, which I think I pull off.

And really, that's the only person that matters.

I also really love their linen shirts too.

They're comfortable, they're breathable, and they look nice.

Get a lot of compliments there too.

I have a few of them, love their rust-coloured ones as well.

And in general, I really like Quince.

The shirts fit nicely, and the rest of their clothes do too.

They ship quickly, they look good, they're high quality, and they partner directly with ethical factories and skip the middleman.

So you get top-tier fabrics and craftsmanship at half the price of similar brands.

And I'm probably going to buy more from them very, very soon.

Keep it classic and cool this fall.

With long-lasting staples from Quince, go to quince.com/better for free shipping on your order and 365-day returns.

That's quince.com slash better.

Free shipping and 365 day returns.

Quince.com slash better.

In business, they say you can have better, cheaper, or faster, but you only get to pick two.

What if you could have all three at the same time?

That's exactly what Cohere, Thomson Reuters, and Specialized Bikes have since they upgraded to the next generation of the cloud.

Oracle Cloud Infrastructure.

OCI is the blazing fast platform for your infrastructure, database, application development, and AI needs, where you can run any workload in a high-availability, consistently high-performance environment and spend less than you would with other clouds.

How is it faster?

OCI's block storage gives you more operations per second.

Cheaper?

OCI costs up to 50% less for computing, 70% less for storage, and 80% less for networking.

Better?

In test after test, OCI customers report lower latency and higher bandwidth versus other clouds.

This is the cloud built for AI and all your biggest workloads.

Right now, with zero commitment, try OCI for free.

Head to oracle.com slash strategic.

That's oracle.com slash strategic.

And look, you know, I feel like Rob Horning wrote this newsletter a few weeks ago that I think he was honing in on this point that LLMs and these generative AI chatbots and the tools that come out of them are in some ways a distraction because a lot of these firms are pivoting towards how do we, you know, create all these products, but also how do we figure out, you know, government products that we can provide, right?

How do we get into defense contracting?

How do we get into arming or integrating AI into arms?

And increasingly, it feels like, you know, yeah, your AI agent is not going to be able to order your burrito.

But these firms are also, you know, at the same time that they're insisting superhuman intelligence is around the corner and we're going to be able to make your individual lives better, are spending a lot of time and energy on use cases that are actually dangerous, right?

And it should actually be concerning in generating kill lists, right?

Or facial recognition and surveillance.

Which has already been around and isn't generative.

Yeah, and it isn't generative.

The firms that are offering these generative products are spending actual, you know, the stuff that they're actually putting their time and energy into is

these sort of demonstrably destructive tools, under the guise, in the kind of murky covering, of: it's all, you know, artificial intelligence, right? It's all inevitable, it's all coming down the same pipeline, you should accept it.

Yeah. And I think the thing is as well, those guys really think that's the next big money maker. But I don't think anyone's making any money off of this. No one wants to talk about the money, because they're not making any. Like, I think I've read the earnings calls.

I'm not going to listen.

Of every single company that is selling an AI service at this point, I can't find a single one that wants to commit to a number other than Microsoft, and they'll only talk annualized, which is my favorite one.

ARR.

ARR, but the thing is, ARR traditionally would mean an aggregate rather than just 12 times the last biggest month, which is what they're doing.

No, that's the classic startup use of ARR.

I refuse to.

No, I burn my clients' asses to the ground with that one because it's like you can't just fucking make up a number unless you're in AI, then you absolutely can.

It's just frustrating because, and the reason I bag on Newton and Roose, other than all the others I've listed, is that

I feel like in their position, and in the position of anyone with any major voice in the media, skepticism isn't like something you should sometimes bring in.

It's not, you don't have to be a grizzled hater like myself, but you can be like, hey, even if this did work, which it doesn't, how does this possibly last another year?

And the reaction is, no, actually, it's perfect now and will only be more perfect in the future.

And I still get emails from people because I said once on an episode, if you have a use for AI, please email me.

A regret of mine.

Every time I get an email like that, it's like, so it's very simple.

I've put in seven or eight hours' worth of work to make one prompt work.

And sometimes I get something really useful.

It saves me like 10 minutes.

And you're like, great.

And what for?

It's like, oh, just some productivity things.

What productivity things?

They stop responding.

And it's just...

I really am shocked we got this far.

I'm going to be honest.

At this point,

I will never be tired because my soul burns forever, but it's exhausting watching this happen and watching how it's getting crazier.

I thought like as things got worse, people would be like, well, CNN's stepping up.

But it's like watching

the Times and some parts of the Journal still feed this.

Also, the Journal has some incredible critical work on that.

It's so bizarre.

The whole thing is just so bizarre.

and has been so bizarre to watch in the tech media.

I mean, I think part of it is also just because investors have poured a lot of money into this, and so of course they're going to want to back what they have spent hundreds of millions or billions of dollars on. And much of the tech media involves reporting on what those investors are doing, thinking, and saying, and whether or not what those people are saying or doing is based in reality, which it often isn't.

Yeah, I say as not a member of the tech media.

So I have like kind of a general assignment, business, markets, econ.

That's kind of my jam.

And when, like, AI first started becoming the buzzword, like ChatGPT had just come out, I was like, oh, this sounds interesting.

So I was paying attention, like a lot of journalists were.

And, you know, like we've hit limitations.

And I think part of the reason it's gotten so far is because the narrative is so compelling.

Curing cancer.

Yeah.

We're gonna, we're gonna end.

My favorite one, well, not my favorite one, the silliest one, is like, we're gonna end hunger.

Nice.

Okay, how?

How?

Also, the problem of hunger in the world is not that we don't grow enough food.

It is a distribution problem.

It is a sociological, it is a complicated problem.

What actually is AI going to do?

Also, that you're going to need human beings to distribute it.

It just, like, if you push them one step.

If you read the 2027 AI thing, it explains that the AI is going to give the government officials such good advice, they'll be actually really nice and caring to poor people.

And what's crazy is, here's the thing, and I'm glad you brought that up.

One thing I've learned about politics, particularly recently, but in

historic means,

when the government gets good advice, they take it.

Every time.

Every time.

Every time they're like,

this is economically good, like Medicare for all, which we've, of course, had forever and never, and

came close to numerous times decades ago versus now when we have it.

And I think the other funny thing is as well, with what you were saying, Allison, is like, yeah, it's going to cure cancer.

Okay, can it do that?

No.

Okay, it's going to cure hunger.

Can it do that?

No.

Okay,

it's easy then.

Perhaps it could make me an appointment.

Also, no.

Can you buy something with it?

No.

Can it take this spreadsheet and move stuff around?

Maybe

it can write a robotic sounding script for you to make the appointment yourself.

Wow.

I mean, I would even say that, like,

I could give the benefit of the doubt to researchers who are really working on the scientific aspects of this.

Like,

I'm not a scientist.

I don't know how to cure cancer.

But if you're working with an AI model that can do it, like, God bless.

But businesses actually do take money-making advice and money-making technology when it's available.

And I think about this all the time with crypto, which is another area I cover a lot.

It's like, if it were the miracle technology that everyone, or its proponents, say it is, businesses would not hesitate to rip up their infrastructure to make more money.

And like no one's doing it.

And it's like, oh, well, they just haven't, they haven't figured out how to optimize it yet.

And it's like, that sounds like a failure of the product and not a failure of the people using it.

So I get back to the whole like.

AI cannot fail.

It can only be failed.

And it's the same with crypto and a lot of other tech, where it's just like, this is not a product that people are hankering for.

And I think part of the notable thing is when we do see examples of large businesses being like, oh, yeah, we're going to change everything about our business and integrate AI.

We're going to be an AI-first company.

The products that end up coming out of that are, there's an AI chat bot in my one medical app now.

Cool.

That does nothing for me.

When I'm trying to search the Amazon comments on a product, suddenly the search box is replaced with an AI chat bot.

That's not doing even one-tenth of what you've promised.

It's the same product for every fucking thing.

It's just an AI chat bot that isn't super helpful.

And it's crazy.

I remember back in 2015, 2016, I had an AI chatbot company.

They took large repositories of data and turned it into a chatbot you'd use.

I remember pitching reporters at the time and them being like, who fucking cares?

Who gives a shit?

This will never be.

Like a decade later, everyone's like, this is literally God.

I cannot wait to go to the office of a guy who wrote fan fiction about this and talk to him about how scared I am now.

I can't wait for AGI.

And I've also said this before, but what if we make AGI?

None of them are going to do it, it doesn't exist, but what if we made it and it didn't want to do any work?

That's the other thing.

Like, they're not, they don't, Casey Kagawa, a friend of the show, made this point, has made it to me numerous times, which is they talk about AGI, and Roose did this as well.

Like, AGI this, AGI that.

They don't want to define it, because if you have to start defining AGI, you have to start talking about things like personhood.

Like, is this a citizen?

Is this, can this thing feel pain?

Because surely consciousness could feel pain.

Oh, could you take pain out of something that has to have real consciousness?

None of them answer that. And hey, how many is it? Is it one?

Is it one unit?

Is it a virtual machine?

Like, there are real tangible things.

And you know they don't want to talk about that shit because you even start answering one of those and you go, oh, right, we're not even slightly closer.

We don't even know how the fuck to do a single one of these things ever.

And I honestly, the person I feel bad for, this is a joke, is Blake Lemoine, I think his name was, from Google.

If he'd have come out like three years later and said that he thought the computer, this guy from Google who thought LaMDA, the AI there, was, the guy who was like, the chatbot is real and I love it.

Yeah, had that come out three years later, he'd be called Kevin Roose, because that's exactly what Kevin Roose wrote about Bing AI.

He's like, Bing AI told me to leave my wife.

And Kevin, if you ever fucking hear this man, you're worried about me dogging you, I'm going to keep going.

Do your fucking job, mate.

Anyways,

it's just insane because

I am a gadget gizmo guy.

I love my doodads.

I love my shit.

I really do.

If this was going to do something fun, I'd have done it.

Like, I've really spent time trying, and I've talked to people like Simon Willison and Max Woolf, two people who are big LLM heads, who disagree with me on numerous things, but their reaction is kind of, I'm not going to speak exactly for them, but it's basically: it actually does this, you should look at this thing.

It is not: this is literally God.

But...

It all just feels unsustainable economically.

But also, I feel like the media is in danger when this falls apart too, because the regular people I talk to about ChatGPT, I pretty much hear two use cases.

One, Google search isn't working, and two, I need someone to talk to, which is a worrying thing.

And I think, by the way, that use case is just, that's a societal thing.

That's a sign of a lack of community, lack of friendship, or lack of access to mental health services.

And

also could lead to some terrible outcomes.

But for the most part, I don't know why I said for the most part, I have yet to meet someone who uses this every day and I've yet to meet someone who really cares about it.

Who like is like, if this went tomorrow.

Like, if I didn't have my little Anker battery packs, I'd scream.

If I couldn't have permanent power everywhere.

Like if I couldn't like listen to music all day, that'd be make me real sad.

If I couldn't access ChatGPT, I would not.

Who cares?

That's because you haven't tried Claude yet.

I've tried the shit out of Claude.

I've tried Claude so much.

And it's just, I don't know.

I feel like people's response to the media is going to be negative too, because there's so many people that boosted it.

There was a Verge story.

There was a study that came out today.

I'll link it as well in the notes, where it was

a study found that most people do not trust, like regular people do not trust AI, but they also don't trust the people that run it and they don't like it.

And I feel like this is a thing that the media is going to face at some point.

And Roose, this time, baby, you got away with the crypto thing.

You're not this time.

I'm going to be hitting you with the TV off to shit every day.

But it's just, I don't think members of the media realize the backlash is coming.

And when it comes, it's going to, truly, it is going to lead to an era of cynicism, true cynicism in society that's already growing about tech.

But specifically, I think it will be a negative backlash to the tech media.

And now would be a great time to unwind this, versus tripling down on the fan fiction that, and I have been meaning to read this out.

My favorite part of this by far.

I say and of course flawlessly have this ready.

Why our uncertainty increases substantially beyond 2026.

Our forecast from the current day through 2026 is substantially more grounded than what follows.

Thanks, motherfucker.

Awesome.

That's partially because it's nearer.

But it's also because the effects of AI on the world really start to compound in 2027.

What do you mean?

They don't.

You're claiming that.

And I just, I also think that there's this great societal problem that we have too many people who believe the last smart person they listened to.

And I say that as a podcast runner.

Like the last investor they talk to, the last

expert they talk to, someone from a lab.

Yes.

Yes. Well, I think that gets to, if you just push the proponents, and this is, like, I've come into AI skepticism as, truly, I'm interested in this. I'm interested in what you're pitching to the world. And when I hear, and I hear, like, CEOs of AI firms get interviewed about this all the time, and they talk about this future where everyone just has a life of leisure, and we're lying around writing poetry and touching grass, and, like, everything's great, no one has to do hard labor anymore.

They have that vision or they have like the p-doom of 75 and everything is going to be terrible.

But no one has a really good concept.

And that's why this is so funny, the fan fiction of what happens in 2027.

It's like no one has laid out any sense of like how the job creation or destruction will happen.

Like in this piece, they say like, oh, there's going to be more jobs in different areas, but some jobs have been lost.

And it's like, how?

Why?

What jobs?

They get oddly specific on some things.

Then the meaningful things, they're like, yep, there'll be jobs.

Yeah.

And the stock market's just going to go up.

And the number just goes up all the time, as it is right now as we record.

Yeah, I believe they say in 2028,

Agent 5, which is the super AI, is deployed to the public and begins to transform the economy.

People are losing their jobs, but Agent 5 instances in the government are managing the economic transition so adroitly that people are happy to be replaced

GDP growth is stratospheric, government tax revenues are growing equally quickly, and Agent-5-advised politicians show an uncharacteristic generosity towards the economically dispossessed.

You know what this is? We failed to uphold, like, public arts education in America, and a bunch of kids got into coding and know nothing but computers, and so they can't write fan fiction.

Yeah, no one's... the writing is bad. Not enough people spent time in the mines of fanfiction.net, and they're not.

No one's shipping anyone.

Like, this is clearly, this is just like someone wanting to have a creative vision of the future.

And it's like, it's not interesting or compelling.

It's joyless.

I mean, that's why they brought him on.

That's why they brought Scott Alexander on, because to write this narrative, right?

Because that's what he spends a lot of time doing in his blog, is trying to beautify or flesh out why this sort of future is inevitable.

Yeah.

You know, why we need to commit to

accelerating technological progress as much as possible and why the real reactionary or, you know, anti-progress perspective is caution or concern or skepticism or criticism.

If it's not nuanced in a direction that supports progress.

I just feel like a lot of the AI safety guys are grifters too.

I'm sorry.

They love saying alignment.

Just say pay me.

I know that we should have, I get the occasional email about this being like, you can't hate AI safety.

It's important.

It is important.

Generative AI isn't AI.

It's just trying to fucking accept it.

If they cared about the safety issues, they'd stop burning down zoos and feeding entire lakes to generate one busty Garfield, as I love to say.

But they would also be thinking about the actual safety issues of what could this generate, which they do.

You can't do anarchist cookbook shit.

It's about as useful.

Phil Broughton, friend of the show, would be very angry for me to bring that up.

But the actual safety things of it steals from people, it's destroying the environment, it's unprofitable and unsustainable.

These aren't the actual, these are actual safety issues, these are actual problems with this.

They don't want to solve those.

And indeed, the actual other safety issue would be, hey, we gave a completely unrestrained chatbot to millions of people and now they're talking to it like a therapist.

That's a fucking, that's a safety issue.

No, they love that.

They love that.

I do think that one criticism of the AI safety initiatives that is incredibly politically salient and important right now is that they are so hyper-focused on the long-term thousand, hundred years from now future, where AI is going to be inside all of us, and we're all going to be, you know, robots controlled by an overlord, that they are not paying attention to literally any of the harms happening right now.

Or are they deliberately not talking about the harms today because then they'd have to do something at work when they get aged down?

You know, it's like when +972 Magazine reported on how Israel was using or trying to integrate artificial intelligence into generating its kill lists and targets, so much so that they started targeting civilians and used that to fine-tune targeting of civilians.

You know, I saw almost nothing in the immediate aftermath of this reporting from the AI safety community.

You know, almost no interest in talking about a very real use case where it's being used to murder as many civilians as possible.

Silence.

You know, and that's a real short-term concern that we should have.

But that would require the AI safety people to do something.

And what they do is they get into work.

They're making a quarter of a million dollars a year.

They get into work.

They load Slack.

They load Twitter.

And that's what they do for eight hours.

And they occasionally post

being like, by 2028, the AI will have fucked my wife.

And everyone's like, God damn it, no, not our wives.

Final Frontier.

But it is all like, they want to talk about 10, 15, 20 years in the future, because if they had to talk about it now, what would they say?

Because I could give you AI 2026, which is

OpenAI runs into funding issues, can't pay CoreWeave, can't pay Crusoe to build the data centers in Abilene, Texas, which requires Oracle, who have raised debt to fund that, to take a bath on that.

Their stock gets hit.

CoreWeave collapses because most of CoreWeave's revenue is now going to be OpenAI.

Anthropic can't raise because the funding climate has got so bad.

OpenAI physically cannot raise in 2026 because SoftBank had to take on murderous debt to even raise one round.

And that's just with like one.

You're going to be excited here.

No, no, no.

Next newsletter, baby, and probably a two-part episode.

But that's the thing.

They don't want to do these because...

They get, okay, they would claim I'd get framed as a skeptic.

They also don't want to admit the thing in front of them because the thing in front of them is so egregiously bad.

With crypto, it was not that big.

Metaverse, it was not that big.

I do like that Meta burned like $40 billion and there's a Yahoo Finance piece about this just on mismanagement.

Like it's just like they

like I love it.

It's also sick that they renamed themselves after it. It's so good.

It's so big.

Can't go back.

Can't go back.

Yeah.

Like, they should become MetAI.

They should just change.

Oh, they should add an I.

They should just add an I at the end.

It's just, if anyone talks about what's actually happening today, which is borderline identical to what was happening a year ago, let's be honest.

It's April 2025. April 2024 was when I put up my first piece being like, hey, this doesn't seem to be doing anything different. And it still doesn't, even with reasoning. It's just, no, just wait for Q3, Agentforce is coming.

Yeah, Agent... no, Agent Zero is going to come out.

Yeah, actually, The Information reported that Salesforce is not having a good time selling Agentforce. You'll never guess why. Wow, turns out that it's not that useful due to the problems of generative AI. If only someone had said something. Which, The Information, I've bagged on The Information a little bit, but they are actually doing insanely good work.

Like Cory Weinberg, Juro Osawa, Anissa Gardizi, Paris, of course, but I'm specifically talking about the AI.

The AI team is for the first time.

Stephanie Palazzolo, of course.

And like, it's great, because we need this reporting for when this shit collapses, so that we can say what happened.

Because it's going to, if I'm wrong, and man, would that be embarrassing?

Just going to be honest, like, if I'm wrong here, I'm going to look like a huge idiot.

But

if I'm right here, like, everyone has over-leveraged on one of the dumbest ideas of all time.

Like silly, silly.

It would be like crypto.

It would be like if everyone said, actually, crypto will replace the US dollar.

And you just saw like the CEO of Shopify being like, okay, I'm going to go buy a beer now using crypto.

No, this is going to take me 15 minutes.

Sorry.

That's just for you to get the money.

Actually, it's going to be more like 20.

The network's busy.

Okay, well, how's your day?

Oh, you use money, huh?

Yeah.

Okay.

Yeah, you should let that guy in front of me.

This is going to be a while.

It's what we're doing with AI.

It's like, well, AI is changing everything.

How?

It's a chatbot. What if we have an Uber scenario, where maybe they abandon the dream of this three trillion dollar addressable market that's worldwide, they abandon the dream of being a monopoly in every place, and focus on a few markets and some algorithmic price fixing, so that they can figure out how to juice fares as much as possible, reduce wages as much as possible, and finally eke out that profit?

What if we see, you know, some of these firms pull back on the ambition or the scale, but they persist and they sustain themselves because they move on to some smaller fish?

I feel like, Occam's razor, the most likely situation is that, you know, AI tools are useful in some way for some slice of people and, maybe, let's be optimistic, make

a sizable chunk of a lot of people's jobs somewhat easier.

Like,

okay.

Was that worth spending billions and billions of dollars and also burning down a bunch of trees?

Not known.

No, I'm just saying that could be, I think, best case scenario.

No, I'm not saying you're wrong.

I'm just saying, like, we haven't even reached that yet.

Because with Uber, it was this incredibly lossy and remains quite a lossy business, but it still delivers people to and from places, and objects to them from places.

Yeah, you know, you don't have to, you know, as much as I hate them, I'll give them credit.

You know, you don't have that,

less drunk driving, you know, and some transit in parts of cities where

there's not much in the way of public transit, right?

This is like if Uber, if every ride was $30,000 and

every car weighed 100,000 tons.

When you factor in the externalities, just pollution, maybe.

But that's the crazy thing.

I think generative AI is so much worse as well, pollution-wise.

But even pulling that back, it's like, I think OpenAI just gets wrapped into Copilot.

I think that that's literally it, they just shut this shit down.

They absorb Sam Altman into the hive mind.

And he, I think, also, my chaos pick for everyone is Satya Nadella is fired and Amy Hood takes over.

If that happens,

I think, is Prometheus the one who can see stuff?

I don't fucking know.

I don't read.

No, he gave fire to mortals.

Wow.

Technique.

I just spit fire.

It's just frustrating.

It's frustrating as well because a lot of the listeners on the show email me, like teachers being like, oh, they're forcing AI in here.

Librarian.

Oh, there's AI being forced there.

I mean, the impact on the educational sector, especially with public schools, it's really terrifying, especially because

the school districts and schools that are being forced to use this technology, of course, are never the private, wealthy schools.

It is the most resource-starved public schools that are going to have budgets for teachers increasingly cut.

Meanwhile, they do another AI contract

and outsource lesson planning.

The sort of things that these companies, the EdTech AI things, pitch as their use case is lesson planning, writing report cards, basically all the things that a teacher does other than physically being there and teaching, which in some cases the companies do that too.

They say, instead of teaching, put your kid in front of a laptop and they talk to a chat bot for an hour.

And that's a whole

lot of things.

And the school could, of course, I don't know, spend money on something that money's already being spent on, which is that teachers have to buy their own fucking supplies all the time.

Teachers have to just spend a bunch of their money on the school, and the school doesn't give them money.

But the school will put money into ChatGPT.

It's just... oh, they should ban it at universities as well.

Everything I'm hearing there is just like real fucking bad.

Like, I mean, the issue is, from talking to university professors, it's like impossible for universities to ban it.

Can you elaborate?

I haven't talked to anyone.

I guess professors are... the obvious example is like essays.

Like professors get AI written essays most of the time and they can't figure out whether they are AI written or not.

They just notice that all of their students seem to suddenly be doing worse in class, while having similar output of written assignments.

There are very few tools for them to be able to accurately detect this and figure out what to do from it.

Meanwhile, I guess getting involved in trying to prosecute someone for doing this within the academic system is a whole other thing.

But on the

In K through 12 especially, it's been kind of

especially frustrating to see that some of the biggest pushers of AI end up being teachers themselves because they are overworked, underpaid, have no time to do literally anything, and they have to write God knows how many lesson plans and IEPs for kids with disabilities, and they can't do it all.

So they're like, well, why don't I just plug this into what's essentially a ChatGPT wrapper?

And that results in worse outcomes for everyone, probably.

And so I have some personal experience with IEP.

I don't think they're doing it there, but they're definitely doing it elsewhere.

And

if you've heard IEP, that fucking kills me.

That's one of the things that these tools often pitch themselves as.

You can create IEPs.

I want to put my hands around someone's fucking head.

Can you describe what an IEP is?

It is a, I forget what it stands for.

It's an individual education plan?

I might be wrong, but that's the plan.

It is generally the plan that's put in place for a child with special needs.

So, autism being one of the most obvious ones, it names exactly what it is that they have to do, like what the teacher's goals will be; they legally have to do all the things in that document.

And it changes based on the designation they get.

And so, like, it's different depending on what you get. There's like an emotional instability one, I believe.

And nevertheless, there are like separate ones, and each one has, like, the goals of where the kid is right now, where the kid will be in the future, and so on and so forth.

The idea that someone would use ChatGPT... and if you listen to this and use ChatGPT for one of these, I fucking hate you so bad.

I understand you're busy, but this is very important.

Um, nevertheless, wow, how disgraceful as well, because it's all this weird resource allocation done by people.

And I feel like the overarching problem as well is it's the people making these decisions, putting this stuff in, don't do work.

It's school administrators that don't teach.

It's CEOs that don't build anything.

It's venture capitalists that haven't interacted with the economy or anyone without a Patagonia sweater in decades.

And it's these, and again, these VCs, they're investing money based on how they used to make money, which was they invest in literally anything and then they sold it to literally anyone.

And that hasn't worked for 10 years.

Allison, you mentioned the thing 2015-ish.

That was when things stopped being fun.

That was actually the last time we really saw anything cool.

That was around the Apple Watch era.

Yeah.

And it was the last, really the end of the hype cycles, the successful ones, at least.

They haven't had one since then.

VR, AR, XR.

Crypto, Metaverse, the Indiegogo and Kickstarter era.

Sharing economy.

Sharing economy.

But these all had the same problem, which was they cost more money than they made and they weren't scalable.

And this is the same problem we've had.

What we may be facing is the fact that the tech industry does not know how to make companies anymore.

Like that may actually be the problem.

Can I add one thing to what you said about people who don't work?

I think there are people in Silicon Valley, and I don't, I'm going to get a million emails about this, but there are a lot of Silicon Valley men who are white men who don't really socialize.

And I think they are kind of propagating this technology that allows others to kind of not interact.

Like so much of ChatGPT is designed to

subvert human interactions.

Like you're not going to go ask your teacher or ask a,

excuse me, or ask a classmate, hey, how do we figure this out?

You're just going to go to the computer.

And I think that culturally, like I, I, I, you know, people who grew up with computers, God bless, but,

you know, we need to also value social interaction.

And it's interesting that there are these, there's this very like small group of people, often who lack social skills, propagating a technology to make other people not have social skills.

And I think there's also a class aspect to that because

I didn't grow up particularly with like

food on the table.

But one thing I grew up with was, I don't trust any easy fixes.

Nothing is ever that easy. If something seems too good to be true, too accessible, there's usually something you're missing about the incentives or the actual output.

So, no, I wouldn't trust the computer to

tell me how to fix something, because I don't fucking... like, you made that up.

Like, it isn't this easy.

There's got to be a problem.

The problem is hallucinations.

Hi, I'm Morgan Sung, host of Close All Tabs from KQED, where every week we reveal how the online world collides with everyday life.

There was the six-foot cartoon otter who came out from behind a curtain.

It actually really matters that driverless cars are going to mess up in ways that humans wouldn't.

Should I be telling this thing all about my love life?

I think we will see a Twitch streamer president maybe within our lifetimes.

You can find Close All Tabs wherever you listen to podcasts.

Let's be real.

Life happens.

Kids spill.

Pets shed.

And accidents are inevitable.

Find a sofa that can keep up at washablesofas.com.

Starting at just $699, our sofas are fully machine washable inside and out.

So you can say goodbye to stains and hello to worry-free living.

Made with liquid and stain-resistant fabrics.

They're kid-proof, pet-friendly, and built for everyday life.

Plus, changeable fabric covers let you refresh your sofa whenever you want.

Need flexibility?

Our modular design lets you rearrange your sofa anytime to fit your space, whether it's a growing family room or a cozy apartment.

Plus, they're earth-friendly and trusted by over 200,000 happy customers.

It's time to upgrade to a stress-free, mess-proof sofa.

Visit washablesofas.com today and save.

That's washablesofas.com.

Offers are subject to change and certain restrictions may apply.

Top reasons data nerds want to move to Ohio.

High-paying careers for business researchers, analysts, project managers, and more.

So many jobs, you can take your pick.

What else does the data say?

How about a bigger backyard, a shorter commute, and a paycheck that goes further?

So crunch the numbers and our world-famous pickles.

It all adds up.

The career you want and a life you'll love.

Have it all in the heart of it all.

Dive into the data at callohiohome.com.

Every business has an ambition.

PayPal Open is the platform designed to help you grow into yours with business loans so you can expand and access to hundreds of millions of PayPal customers worldwide.

And your customers can pay all the ways they want with PayPal, Venmo, PayLater, and all major cards so you can focus on scaling up.

When it's time to get growing, there's one platform for all business, PayPal Open.

Grow today at PayPalOpen.com.

Loan subject to approval in available locations.

And we're back.

So we didn't really lead into that ad break, but you're going to just have to like it.

I'm sure all of you are going to send me little emails, little emails about the ads that you love.

Well, I've

got to pay for my Diet Coke somehow.

So

back to this larger point around ChatGPT and why people use it and how people use it.

I think another thing that just occurred to me is, have you ever noticed that Sam Altman can't tell you how it works and what it does?

You ever notice that none of these people will tell you what it does?

I've read everything Sam Altman said at this point, listened to hours of podcasts.

He's quite a boring twerp.

But on top of that, for all his yapping and yammering, him and Wario Amodei don't seem to be able to say out loud what the fucking thing does.

And that's because I don't think that they use it either.

Like, I genuinely, I'm beginning to wonder if any of the people injecting AI, sure, Sam Altman and Dario probably use it.

I'm not saying it fully, but look, these aren't people.

The next person that meets Sam Altman should just be like, hey, how often do you use ChatGPT?

Getting back to that, it reminds me of the remote work thing.

All these CEOs saying, guys, you should come back to the office.

How often are you in the office exactly?

And I think that this is just a giant revelation of like how many people don't actually interact with their businesses, that don't interact with other people, that don't really know how anything works, but they are the ones making the money and power decisions.

It's fucking crazy to me.

And I don't, and I...

I don't know how this shakes out.

It's not going to be an autonomous agent doing whatever.

Also, okay, that just occurred to me as well.

How the fuck do these people not think these agents come for them first?

Like, if the AGI was doing this and they read this, they'd be like, God, these people are fucking, they worked it all out.

I need to kill them first.

Well, I mean, that kind of gets back to what you were saying, where it's like, you know, if we entertain the fan fiction for a little bit, what is the frame of mind for these agents, if they're autonomous or not?

How are we thinking of them?

Are we thinking about, like, if they're persons or if they're, you know, lobotomized in some way?

Do they have opinions?

You know, and I think really it just gets back to like, you know, part of the old hunt for like, you know, a nice, polite slave.

You know,

how do we figure out how to reify that relationship?

Because it was quite profitable at the turn of like industrial capitalism.

And yeah, I think, you know, it's not a coincidence that a good chunk of our tech visions come to us from reactionaries who think that the problem with capitalism, the problem with tech development, is that a lot of these empathetic, egalitarian reforms get in the way of profit making.

You know, I think similarly, you know, the hunt for automatons for certain algorithmic systems is searching for a way to figure out how do we replicate, you know, human labor without the limitations on extracting and pushing and coercing as much as possible.

Yeah.

And there's an agent or something else.

And the thing is, yeah, sure, the idea of an autonomous AI system would be really useful.

I'm sure it could do stuff.

That sounds great.

There are massive, as you've mentioned, like sociological problems.

Like, do these things feel pain?

If so, how do I create it?

Anyway,

but in all seriousness, like, sure, an autonomous thing that could do all this thing would be useful.

They don't seem to even speak to that.

It's just like, and then the AI will make good decisions.

And then the decisions will be even better.

And then Agent 7 comes out and you thought Agent 6 was good.

It's like they don't even speak to how we're going to get to the point where Agent 1 knows truth from falsehood.

And that's inevitable.

Of course, yeah.

We just need to give it all of our data and everything that we've paid money for, required other people to pay money for, and then it will finally be perfect.

And it doesn't even make profit of any kind.

That's the other thing.

It's like people saying, well, it makes profit.

There's the profit-seeking.

Is it profit-seeking?

It doesn't seem like we've sought much profit or any.

That's also, I think, a good point of comparison to what you were talking about earlier, Ed, with the comparison to Uber,

these companies that achieved massive scale and popularity by making their products purposefully unprofitable, by charging you $5 for a 30-minute Uber across town, so that you're like, Yeah, this is going to be part of my daily routine.

And the only way they've been able to squeeze out a little bit of profit right now is by hiking those prices up, but trying to balance it to where they don't hike it up so much that people don't use it anymore.

And AI is at the point where, like, for these agents, I think some of the costs are something like thousands of dollars a month.

But they don't exist.

And they don't, they already don't work.

And it's like, you're still not making money by charging people that much money to use it.

What is the use case one where this even works?

And if it somehow did manage to work, how much is that going to cost?

Who is going to be paying $20,000 a month for one of these things?

And how much of that is dependent on what is clearly nakedly subsidized compute prices?

How much of this is because Microsoft's not making a profit on Azure compute, OpenAI isn't making one, Anthropic isn't?

What happens if they need to?

What if they need to?

They're going to choke.

That's the subprime AI crisis from last year.

It's just.

Well, that's when you get the venture capitalists insisting that that's why we need to, you know, do this AI capex rollout, because if we build it out like infrastructure, then we can actually lower the compute prices and not subsidize anymore.

Yeah, that's the thing.

But that's the other thing.

So The Information reported, OpenAI says that by 2030, they'll be profitable.

How?

Stargate.

And you may think, what does that mean?

And the answer is Stargate has data centers.

Now, you have to... I just have one little question.

This isn't a knock on The Information.

This is, they're reporting what they've been told, which is fine.

A little question with OpenAI, though, how?

How does more equal less cost?

Because this thing doesn't scale.

They lose money on every prompt.

It doesn't feel like they'll make any money.

In fact, they won't make any money.

They'll just have more of it.

And also, there's the other thing of...

Data centers are not fucking weeds.

They don't grow in six weeks.

They take three to six years to be fully done.

If Stargate is done by next year, I will fucking barbecue up my Padres hat and eat it live on stream.

Like, that's if they're fucking alive next year.

Also, the other thing is, getting back to 2027 as well, year 2026, 2027 is going to be real important for everything.

2027 or 2026 is when Wario Amodei says that Anthropic will be profitable.

That's also when the Stargate data center project will be done in 2026.

I think that they may have all just chosen the same year because it sounded good and they're going to get in real trouble next year when it arrives and they're nowhere near close.

I can't wait until all of those companies announce that because of the tariffs,

that they have to delay their timeline, and it's like completely out of their hands.

But no, the tariffs, you understand the tariffs.

I got a full roasted pig.

I'm going to be tailgating.

Microsoft earnings April 23rd.

Cannot wait.

Yeah, you should go to like a data center.

Have a marching band, like at the end of Severance.

Yeah, it's but that's the thing.

I actually agree.

I think that they're going to, there's going to be difficult choices.

Sadly, there's only really two.

One, capex reduction; two, layoffs.

Or both.

Because they have proven willingness to lay off to fund their capex.

But at this point, people are like, they're asking to what end?

Like,

why are we doing this?

It just.

It feels like the collapse of any good or bad romantic relationship, where one party is doing shit that they think works from years ago, and the other party is just deeply unhappy and then disappears one day, and the first party is just, we watched the episode of Lost last night, and it just happened. This is, uh... Lost is a far more logical show than any of this AI bullshit, but it's...

Let's not get into that. No, no, it's a bad show. It's a bad... no, I wouldn't say that either. Talking about something that's very long, very expensive, and never had a plan, but everyone talks about it like it was good despite it never proving it: Lost. Um.

Yeah, sorry, I really do have some feelings on that.

You're going to get some emails.

I'm sure.

Email from me.

Yeah, he is texting me.

It is writing an email.

It is just sending me this very quiet

emoji like a hundred times.

Yeah, it's, I think it's just, I can't wait to see how people react to this stuff as well.

When this, because I obviously will look very silly if these companies stay alive and somehow make AGI.

AGI is killing me first.

Like the gravedigger AI truck is going to run me over outside my house.

It's going to be great.

But I can't wait to see how people explain this.

I can't wait to see what the excuse is. Like, oh, we never saw the tariffs coming, maybe.

Right.

And I talked to an analyst just last week who's like a bullish AI tech investor.

And he said, already you're seeing investment pullback because of expectations in the market that these stocks were overbought in the first place.

And now there's all this other turmoil, external macro elements that are going to kind of take the, you know, the jargon of like the froth out of the market.

They're going to, it's all going to deflate a little bit.

And so I was asking him, like, is the AI bubble popping?

And he says, no, but tariffs are definitely, like, deflating it, and

whatever progress we were going to be promised from these companies is going to be delayed.

Even if it was going to be delayed, they were going to find other reasons.

This is a convenient macro kind of excuse to just say like, oh, well, we need, we didn't have enough chips, we didn't have enough investing, we didn't have enough compute.

You know, be patient with us.

We're going to have the revolution is coming.

What's great is, well, talking of my favorite Wall Street analyst, Jim Cramer of CNBC.

So, CoreWeave's IPO went out.

I just need to mention we are definitely in the hype cycle, because Jim Cramer said that he would sue an analyst at D.A.

Davidson, on behalf of NVIDIA

for claiming that they were a lazy Susan.

As in, basically, what the argument is, is that Nvidia funded CoreWeave so that CoreWeave would buy GPUs, and at that point, CoreWeave would then take out loans on those GPUs for CapEx reasons, CapEx including buying GPUs.

So, very clearly, and also you attack Gil over at D.A.

Davidson, you and me, Cramer, in the ring.

But it's, we know we're in the crazy time when you've got, like, a TV show host being like, I'm going to sue you because you don't like my stocks.

I think that, like, we're going to see, like, a historic washout of people, and the way to change things is this time we need to make fun of them.

I think we need to be like actively, we don't need to be mean, that's my job, but we can be like,

to your point, your article, Allison, it's like, saying, hey, look, no, what you are saying is not even rational or even connected to reality.

This is not doing the right things.

Apple Intelligence is like the greatest anti-AI radicalization ever.

I actually think, Timmy.

It's so bad.

It's so fucking bad.

And I, before it even came out, I like downloaded the beta.

I was like, I'm going to test this out because, you know, I talk about this thing on my podcast sometimes.

And it's so bad.

It's so bad.

I have it turned off for most things, but I have it on for a couple of social networks.

And I mean, I guess with the most recent update, it got marginally better, but it still constantly tells me, so-and-so replied to your blue sky skeet.

I check, they didn't.

That person didn't even like the skeet.

I don't know where that name came from.

And this happens like every other day.

It's just completely wrong.

I'm like, how?

My favorite is the summary text for Uber, where it's like, several cars headed to your location.

We're gonna take you out.

In Kalanick mode, activated. No, it's great as well, because I usually don't buy into the Steve Jobs would burst from his grave thing. I actually think numerous choices Tim Cook has made have been way smarter than how Jobs would have done it. This is actually like he's going to burst out of the ground, Thriller style. Actually, was that zombies popping out? Anyway.

because it's nakedly bad.

It's close.

It's not a great reference.

But it's nakedly bad.

Like, it sucks.

And I've never, I've, people in my life who are non-techie

will constantly be like, hey, what is Apple Intelligence?

Am I missing something?

I'm like, no, it's actually as bad as you think.

And I mean, it's also small other things beyond just the notification summaries.

The thing that every time I highlight a...

word and I'm trying to, sometimes I might want to use find definition or any of the things that come up.

I have to scroll by like seven different new options under the like right click or double click thing.

If you hit writing tools, it opens up a screenwide.

Yes, it opens up a thing.

And I'm like, who has ever, who is trying to use this to rewrite a text to their group chat?

Who is this for?

I feel like Apple, to its credit, is recognizing its mistake and it's clawing it back and like delaying Siri indefinitely.

I mean,

I don't know if I agree on that one.

That's fair.

Because the thing they're delaying is the thing that everyone wanted.

I think they can't make it work.

Because the thing they're delaying is the contextually aware Siri, right?

Yes.

They're quote-unquote delaying it.

It doesn't exist.

It never existed.

Yeah.

We'll see.

Apple's washed.

You think Apple's washed?

I mean, but that's the thing.

It's the most brand-conscious company on the planet.

And

I wrote, like, when they did their June

revelation that the Siri AI is going to come out, and they said it was going to come out in the fall, and then it was coming out in the spring, and now it's not coming out ever, question mark.

But throughout the whole like two-hour presentation,

the letters AI were never spoken, artificial was never spoken.

It was Apple Intelligence.

We're doing this.

We're doing our own thing.

It's not, you know, because they already understood that when you say something is like, that looks like it was generated by AI, you're saying it looks like shit, you know?

Of course.

And the suggestions are also really bad, too.

I've had like, over the last few weeks, a few people give me some bad news from their lives and the responses it gives are really funny.

Oh no.

It'd be like someone telling me something bad happened and it's like, oh, or like I'm like, what was the worst one I had?

It was like, that sounds difficult.

And it's like a paragraph long thing about like a family thing they had.

Like, and it's not even like got like any juice to it.

Like, I didn't read too long.

Those would be funny suggestions.

But like, it can't even...

It's...

It's proof that I think that these large language models don't actually, well, they don't understand anything.

They don't know anything, they're not conscious, but it's like they're really bad at understanding words.

Like people like, oh, they make some mistakes.

They're bad at basic contextual stuff.

And we had Victoria Song from The Verge on the other day, and she was talking about high context and low context languages.

And I say this as someone who can only speak English.

I imagine, not being able to read or speak any others, that it really fumbles those.

And if you're a listener and you want to email me anything about this research: how the fuck does this even scale if it can't... like, oh, we're replacing translators?

Great, you're replacing translators with things that sometimes translate right.

Sometimes?

Sometimes.

Sometimes.

It just feels also inherently.

Like, that feels like an actual alignment problem, by the way.

Right there, that feels like an actual safety problem.

Like, hey, if we're relying on something to translate and it translates

words wrong, and, you know, especially in other languages, subtle differences can change everything, maybe that's dangerous? No, no, no.

We've got the computer will wake up in like two weeks, and then it's going to be angry.

And that's the other thing.

We're going to make AGI, and we think it's not going to be pissed off at us.

I don't mean Rococo's modern basilisk or whatever.

I mean just like if it wakes up and looks at the world and goes, these fucking morons.

Like, you need to watch Person of Interest if you haven't. One of the best shows, actually, on AGI.

Like, genuinely, you need to watch Person of Interest because you will see how that could happen when you allow a quote-unquote perfect computer to make our decisions.

Also, when has a computer been particularly good at decision-making?

I don't know.

I feel like so much of this revolution, quote-unquote, is based on just the assumption that the computer makes great decisions, and it oftentimes doesn't.

No, it often does not.

Yeah.

Why would I think that the same search function in Apple that cannot find a document that I know what the name is, and I'm searching for it, why would I think that that same computer is going to be able to make wise decisions about my life, finance, and personal relationships?

Because that's Apple and this is AI.

Oh, that's true.

I'll show myself out.

I don't know how much is AI versus just like a good translation app.

Like I genuinely don't know like how much it's going to be.

Well, it's because AI is such a squishy term that we really don't like

in some way.

I guess AI could be expanded to include a lot of modern computing.

Like I can see travel and like emergency situations where you need, where like a good AI translator would be like a real lifesaver, just as a small aside.

I was just in Mexico

and my stepkids were using Google Translate and we were like kind of remembering Spanish and, you know, blah, blah, blah.

Go into a coffee shop and I wanted to order a flat white.

And so I used Google Translate to say like, how would you order a flat white in Spanish?

And it said to order a blanco plano.

which means flat white.

But like, across Mexico City, there are wonderful coffee shops.

And you know what they call them?

Flat whites.

Isn't it an Australian coffee or something like that?

I learned that very quickly with the help of Reddit because I went to the barista and ordered a blancoplano and they were like,

who are you?

You crazy gringo.

I'm sorry, I speak English.

Yeah.

I mean, like, it's

the functionality is very limited on those things.

And it's just like, also, it gets back to: if it's 100% reliable, it's great; if it's 98% reliable, it sucks. And...

Just as an aside, did any of you hear about the latest, like, quasi-fraudulent thing with Jony Ive that's happening? I just saw the headline.

So Sam Altman and Jony Ive founded a hardware startup last year

that built...

that has built nothing. There is a thing, they claim, a phone without a screen.

And OpenAI, a company run by Sam Altman, owned principally by Sam Altman and Microsoft, is going to buy, for half a billion dollars, this company that has built nothing, co-founded by Sam Altman.

Sick.

I feel like there should be one law against this.

But it's just like, what have they been doing?

And this is just, it's kind of cliché to say, like, quote the big short, but like a big part of the beginning of that movie is talking about the increase in fraud and scams.

And it really feels like we're getting there.

And R.I.P.

to the Humane Pin, by the way.

Rest in piss, you won't be missed.

Motherfuckers, two management consultants, both lacking in dignity, eat shit.

Jesse Lyu, Rabbit R1, you're next, motherfucker.

When your shit's gone, I'll be honking and laughing.

Your customers should sue you.

The description, so my colleagues at The Information reported this Jony Ive and Sam Altman news, and the description for the device really makes me chuckle.

Designs for the AI device are still early and haven't been finalized, the people said.

Potential designs include a quote-unquote phone without a screen and AI-enabled household devices.

Others close to the project are adamant that it is, quote, not a phone, end quote.

And they're going to spend, they've discussed spending upwards of $500 million on this company.

This is like a bad philosophy class where it's like, what is a phone that's not a phone?

Nietzsche.

It's semiotics for beginners.

Jesus fucking Christ.

Oh my God.

And that's like, that's the thing as well.

This feels like a thing the tech media needs to be on as well.

Just someone needs to say it. I'll be saying it. Like, this is bordering on fraud. Like, it seems like it must be legal, because otherwise there would be some sort of authority, right? You can't do anything illegal without anything happening. Hmm.

But it's like, this is one of the most egregious fucking things I've ever seen. This is a guy handing himself money, one hand to the other. This should be fraud. Like, how is this ethical? And everyone's just like, oh yeah, you know. Kevin Roose, maybe you should get on this shit, find out what the phone that isn't a phone is. What the fuck?

And also, household appliances with AI.

Maybe like something with the screen and the speaker that you could say like a word to it and it would wake up and play music.

Like a Roomba.

Yeah.

A Roomba with AI on it.

Just declared bankruptcy.

DJ Reynolds

on the blockchain.

A Roomba Dad.

Roomba Dead?

Why Roomba Dead?

I think they did.

I don't know, actually.

I remember, I read the headline.

They were supposed to be acquired by Amazon, but I think the deal fell through under Lina Khan's FTC, I'd assume.

Sick.

And then

also one quick note on the Jony Ive Sam Altman thing.

I guess it's notable that Altman has been working closely with the product, but is not a co-founder, and whether he has an economic stake in the hardware project is unclear.

Yeah,

you know, he just seems to be working closely with

the product.

Whoa, like he's just hanging out there and taking a salary and an equity position.

I do think it's very interesting, all of these different AI device startups that have popped up in the last couple of years.

And my question for them is always just like, to what end?

Like people

didn't like Amazon Alexa.

And it also lost a shit ton of money.

Yeah.

And Amazon's still trying to make it work.

Siri's never been super popular.

And I just don't, like, one of

my co-hosts on the podcast I work on, Intelligent Machines, is obsessed with all these devices just because he's like one of those tech guys. Leo, yes.

And we love to make fun of him.

Oh, yeah, we love him.

But

his latest device is this thing called a Bee.

We just had Victoria Song on talking about this.

The thing that records everything.

It records everything all the time and then makes, puts that up in the cloud, and then I guess doesn't store the full transcripts, but does store a little AI-generated description of everything you did and whoever you talked to that day.

And there's no way, I mean, Leo's in California, which is not a one-party consent state for recording.

You've got to get consent from everybody to record.

And the Bee is not doing that.

But it's just baffling to me because I'm just like, I guess.

He's like, well, it could be nice to have a record of all of my days all the time.

And I'm like, I guess, but to what end?

Just record it.

You go to the bottom.

Just record it.

Just a diary.

Yeah.

Write a diary.

There's literally a Black Mirror episode about that.

I believe it's the first episode of Black Mirror.

Everyone has like a recording device.

And then it does.

When you were talking about this on the show, I was listening, thinking like in this black mirror thing, it reminded me that like when Facebook started having like all your photos collected under your photos and like how we started reliving so many experiences online.

You never scroll back and like look at how happy you were like six years ago.

You know, like, and it

creates this like cycle.

Like, imagine if every interaction, every like romantic interaction, every sad interaction, everything you could replay back to yourself, it sounds like a nightmare to me.

I do think it's also just a nightmare.

Like, humans, we're not built socially to exist in a world where every interaction is recorded and searchable with everyone forever.

Like, you would never have a single friend.

Romantic relationships would dissolve.

Yes.

Eternal sunshine of the spotless mind.

Like,

but even then, like, memory is vastly different to the experience of collecting it.

Like, just existing.

Like, we are brain slot.

I don't know.

My brain just goes everywhere.

But, like, compared to memory, which can be oddly crystalline and wrong, you can just remember something.

You can remember a subtle detail wrong, or you can just fill in the gaps.

Memory sucks.

Also, doesn't having a device that constantly records everything erode the impulse, or maybe the drive, to be as present, you know, because you're like, well, it's got the memory?

But it's also got huge privacy implications, where suddenly the cops could just be like, yeah, we're just going to take a recording, we're just going to subpoena the Bee device of everybody who was in this area, and then suddenly get a recording of the whole day of everyone who just happened to be in this place, because we think a crime could have happened there.

But I think that there's an overarching thing to everything we're talking about, which is these are products made by people that haven't made anything useful in a while.

And everything is being funded based on what used to work.

What used to work was you make a thing, people buy it, and then you sell it to someone else or take it public.

This only worked until about 2015.

It's not just a zero interest free era thing.

It's

we have increasingly taken away the creation of anything valuable in tech from people who experience real life.

Like, our biggest CEOs are Sam Altman, Wario Amodei, Sundar Pichai, MBA, former McKinsey, Satya Nadella, MBA.

I mean Tim Cook, MBA.

Like these are all people that don't really interact with people anymore.

And the problems, the people in power are not engineers.

They're not even startup founders anymore.

They're fucking business people making things that they think they could sell, things that could grow the raw economy, of course.

And we're at the kind of the pornographic point where it's like a guy being like, what could AI, what does AI do?

Well, you can just throw a bunch of data and it can give you insights.

Well, what if we just collected data on everything happening around us ever?

That would be good.

Then you could reflect on things.

That's what people do, right?

And I actually genuinely think there is only one question to ask the Bee founder, and that's, are you wearing one of these now?

And how often do you use this?

Because if they use it all the time, I actually kind of respect them. I guarantee they don't. I guarantee they don't, and they'll probably say something about a lot of privileged information, as opposed to everyone else, who's not important. And this fucking Jony Ive thing: oh, it's going to be a phone without a screen. What can you do with it? I don't know, I haven't thought that far ahead, I only get paid $15 million a year to do this. Oh, my question is also, who wants a phone without a screen? The screen's the best part. I love the screen.

I love to hate the screens. But they don't talk to anyone, they don't have human experience, they don't have friends that...

Like, they have friends who are all like

have $50 million in the bank account at any time.

They just, like, exist in a different difficulty level.

They're all going at very easy.

They don't really have like predators of any kind.

They don't really have experiences.

So what they experience in life is when you have to work out what you enjoy.

And because they enjoy nothing, all they can do is come up with ideas.

That's why the rabbit R1.

Oh, what do people do?

Order McDonald's.

Can it do it?

Not really, but it also could take a photo of something.

It could be pixelated.

That could be.

You could kind of order an Uber through it.

Maybe.

What was great was the rabbit launch.

The rabbit launch, and he tried to order McDonald's live, and it just didn't work.

It took like five minutes to fail.

And that's the thing.

Like, I feel like when this hype cycle ends, the tech media needs to just be aggressively like, hey, look.

Fool me thrice, shame on me.

Like, like, maybe, maybe next time around,

we can ask the questions I was asking in 2021, where it's like, what does this do?

Who is it for?

And if anyone says it could address millions of people, it's like, have you talked to one of them, motherfucker?

One of them.

I think we can wrap it up there, though.

I think, Allison, where can people find you?

You can find me at cnn.com/nightcap.

I write the CNN Business Nightcap.

It's in your inbox four nights a week.

Hell yeah.

Ed?

I write a newsletter on Substack, the Tech Bubble.

I co-host a podcast, This Machine Kills, with Jathan Sadowski.

And I'm on Twitter at Big Black Jacobin.

You're on Blue Sky too, right?

Yeah, Bluesky at edwardongwesojr.com.

You can read my work at the information.

I also host a podcast called Intelligent Machines.

And you can find me on Twitter at Paris Martineau or on Bluesky at Paris.nyc.

And you can find me at edzitron.com on Bluesky.

Google.

Who destroyed Google search?

Click the first link.

It's me.

I destroyed Google search along with Prabhakar Raghavan.

Fuck you, dude.

If you want to support this podcast, you should go to the Webbys.

I will be putting the link in there.

I need your help.

I've never won an award in my life.

It's the best episode.

Sorry, best business podcast episode.

We are winning right now.

Please help me.

Please help me win this.

And if I need to incentivize you further, we are beating Scott Galloway.

If you want to defeat Scott Galloway, you need to vote on this.

Thank you so much for listening, everyone.

Thank you for listening to Better Offline.

The editor and composer of the Better Offline theme song is Matt Osowski.

You can check out more of his music and audio projects at mattosowski.com.

M-A-T-T-O-S-O-W-S-K-I dot com.

You can email me at ez@betteroffline.com or visit betteroffline.com to find more podcast links and of course my newsletter.

I also really recommend you go to chat.wheresyoured.at to visit the Discord and go to r/BetterOffline to check out our Reddit.

Thank you so much for listening.

Better Offline is a production of CoolZone Media.

For more from CoolZone Media, visit our website, coolzonemedia.com or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

Hi, I'm Morgan Sung, host of Close All Tabs from KQED, where every week we reveal how the online world collides with everyday life.

There was the six-foot cartoon otter who came out from behind a curtain.

It actually really matters that driverless cars are going to mess up in ways that humans wouldn't.

Should I be telling this thing all about my love life?

I think we will see a Twitch streamer president maybe within our lifetimes.

You can find Close All Tabs wherever you listen to podcasts.

Tired of spills and stains on your sofa?

WashableSofas.com has your back, featuring the Anabay Collection, the only designer sofa that's machine-washable inside and out, where designer quality meets budget-friendly prices.

That's right, sofas start at just $699.

Enjoy a no-risk experience with pet-friendly, stain-resistant, and changeable slipcovers made with performance fabrics.

Experience cloud-like comfort with high-resilience foam that's hypoallergenic and never needs fluffing.

The sturdy steel frame ensures longevity, and the modular pieces can be rearranged anytime.

Check out washablesofas.com and get up to 60% off your Anabay sofa, backed by a 30-day satisfaction guarantee.

If you're not absolutely in love, send it back for a full refund.

No return shipping or restocking fees.

Every penny back.

Upgrade now at washablesofas.com.

Offers are subject to change and certain restrictions may apply.

Drew and Sue and M&M's Minis.

And baking the surprise birthday cake for Lou.

And Sue forgetting that her oven doesn't really work.

And Drew remembering that they don't have flour.

And Lou getting home early from work, which he never does.

And Drew and Sue using the rest of the tubes of M&M's Minis as party poppers instead.

I think this is one of those moments where people say, it's the thought that counts.

M&M's, it's more fun together.

Top reasons your dog wants you to move to Ohio.

Amazing dog parks to stretch your legs.

All four of them.

Dog-friendly patios.

Even gourmet hot dogs loaded with the good stuff.

Bone appetite.

And Ohio has so many high-paying jobs, you'll be top dog in no time.

Jobs in technology, engineering, science, advanced manufacturing, and more.

The career you want and a life you'll love.

Have it all in the heart of it all.

Go to callohiohome.com.

This is an iHeart podcast.