Radio Better Offline: Edward Ongweso Jr. & Allison Morrow
Welcome to Radio Better Offline, a tech talk radio show recorded out of iHeartRadio's studio in New York City. Ed Zitron is joined in studio by Allison Morrow of CNN and Ed Ongweso Jr. of the Tech Bubble newsletter to talk about the AI vibe shift, OpenAI's burn rate, what will finally burst the bubble, and something called "Critterz."
Allison Morrow
https://www.cnn.com/profiles/allison-morrow
https://bsky.app/profile/amorrow.bsky.social
Ed Ongweso Jr.
https://thetechbubble.substack.com/
https://bsky.app/profile/edwardongwesojr.com
OpenAI Hopes Animated 'Critterz' Will Prove AI Is Ready for the Big Screen - https://www.cnet.com/tech/services-and-software/openai-hopes-animated-critterz-will-prove-ai-is-ready-for-the-big-screen/
The AI vibe shift is upon us - https://www.cnn.com/2025/08/22/business/ai-vibe-shift-nightcap
The Silicon Valley Consensus & the "AI Economy" - https://thetechbubble.substack.com/p/the-silicon-valley-consensus-and-614
YOU CAN NOW BUY BETTER OFFLINE MERCH! Go to https://cottonbureau.com/people/better-offline and use code FREE99 for free shipping on orders of $99 or more.
BUY A LIMITED EDITION BETTER OFFLINE CHALLENGE COIN! https://cottonbureau.com/p/XSH74N/challenge-coin/better-offline-challenge-coin#/29269226/gold-metal-1.75in
---
LINKS: https://www.tinyurl.com/betterofflinelinks
Newsletter: https://www.wheresyoured.at/
Reddit: https://www.reddit.com/r/BetterOffline/
Discord: chat.wheresyoured.at
Ed's Socials:
https://www.instagram.com/edzitron
See omnystudio.com/listener for privacy information.
Press play and read along
Transcript
Better Offline is an iHeart podcast.
Cool Zone Media.
Hello and welcome to Better Offline. I'm your host, Ed Zitron.
And we are recording here in beautiful New York City, and I have a wonderful pair of guests today.
I have, of course, Allison Morrow from CNN's Nightcap newsletter and Ed Ongweso Jr., the hater himself, from the Tech Bubble newsletter. Thank you so much for joining me today.
Great to be here as always.
So I think we should start with exactly what we were just talking about. The OpenAI claims that they have worked out what causes hallucinations.
Allison, do you want to go over this?
I should have read the paper a bit more carefully, but the highlights were getting digested yesterday on X and Bluesky.
And it seems like it's kind of the test-taking problem, where you encourage students taking standardized tests, when you don't know the answer, to guess.
And that's exactly how these models are trained. You don't get a point if you say, I don't know.
So you come up with something. And the models are meant to keep guessing until they get something close to right, right? So that's why you get a lot of nonsense and hallucinations.
And OpenAI, at least in their reading of it, says, oh, this is a simple solution. We'll just encourage the models to understand better when it's, like, a binary question,
and when you can say, I don't know the answer to that. So, we'll see.
I read the paper, albeit not today, because my brain just immediately read it as marketing copy. Just replaced it with another anime theme song.
I went through it and I was like, okay, so it's going to encourage them to say, I don't know.
This feels like a very flat view of what hallucinations are, though, because hallucinations, as people know them, are authoritatively stating something that isn't true.
But hallucinations in, like, a coding model are, it will just say, yeah, I did that, when it didn't.
This is very common. You go through the Cursor and Claude Code forums,
or subreddits at least, and you can see this. And it's not that they say something they know isn't true;
they don't know that it's not true. So they might say they don't know.
And it's just very silly because they claim that they're going to fix this problem with this solution, but they've done years and hundreds of millions of dollars of reinforcement learning.
Do they unreinforce? Do they reinforce it more so that I don't know? I'm fucking tired of these companies, going to be completely honest.
I realize this is kind of cliche for the show, but it's the fact that these things get written up as very serious things that are just saying, you guys still haven't worked this out, it's kind of frustrating to me.
And they don't seem to be, the models aren't getting better, diminishing returns and all that, but this is the best they've got.
I mean, does this kind of thrust feel like, you know, do you guys feel like it's downstream of the attempts by some of these firms to say, oh, actually, like, if we dial back
the sycophancy, you know, then we'll be able to have a much more engaging consumer product. We'll be able to have it hallucinate less, you know, do psychosis less.
You know, but do they feel linked in that way? Does it just feel like, you know, maybe another dead end?
I feel like it's just them trying to work shit out. And this also feels like a very rudimentary answer that they probably already had.
Anytime someone comes up with an idea like that, that's technical, I'm like, okay, mate, you cannot be cooking with gas. This cannot be that good an idea.
And the thing is, the sycophancy problem I don't think is solvable through solving hallucinations. The problem is it should stop.
It needs to just, it should not say, I don't understand.
It should say, no, actually, you sound like you think I'm God-King and that you are God-King, and that as God-Kings together, we will destroy the world. Yeah, I mean, which in my case is true.
But do you think the emphasis, or the attempt to over-correct on that, is leading them to go down solutions where they think, oh, if we just dial back this and that, then what's the problem? Yes.
Yes. Actually, I think that that's right because they have the, I don't know if you saw the
attorneys general from Delaware and California sent a letter to OpenAI last week saying, hey, look, you need to fix these safety protocols, you need to actually have them, because what you have right now doesn't work, and we will block your non-profit conversion.
Otherwise, I was really happy to read that, right up until the bit where they say, we wish you well with your quest for AI dominance. I'm just like, these are the fucking people protecting us?
They're like, No, it's great you want to dominate everyone with AI. It's just you drove some people to a murder-suicide situation.
And wasn't that part of the problem with GPT-5?
Is they tried to dial back the sycophancy, and then it took away the character and like the humanness that people had gotten attached to in GPT-4 and earlier models.
And, like, it all seems to come back to OpenAI not realizing that what it's selling, often, is a companion and a therapist. It reminds me of, like, Q-tips.
You're not supposed to put them in your ear, right? But that's all anyone uses Q-tips for. I was going to say, you're not meant to.
Of course not. But like, that's the consumer.
That's how the consumer has chosen to use this product. And they're saying, like, well, we don't condone that.
We don't think it's like the best use of our product. And, you know, we know better than
the consumer, of course.
I think it's one abstraction higher, which is I don't think they know what Chat GPT is.
I did an AI booster piece that came out last week, and I had this whole thing where, they don't describe what ChatGPT does.
If you go on the website, it's like, yeah, it analyzes data, and it's brainstorming and charts and stuff.
The agent does things too. Please buy it.
And then you try and like actually look for any use cases and there's nothing. I think that they're just guessing.
But my favorite thing I'm seeing is that now people are like, GPT-4o isn't the same since they brought it back. And now people are just freaking out.
People are like, no, it's not the same. It's different somehow.
And honestly, I don't know if it's true.
I just think that they've entered gamer mode.
This is just how gamers react. It's like, the gun isn't the same.
It's literally the same code. No, you've changed something.
I know. And it's what happens when you release an imprecise glazing bot onto the world.
It's also really funny how literally any of these companies could have been OpenAI.
There doesn't seem to be anything special about this company at all anymore or ever. Well, you know, I mean, not everybody can lose as much money as OpenAI.
You know, that feels a very special.
Or have the connections of Masayoshi-san, you know? Oh, my God. I haven't heard from Masayoshi-san in a minute.
We haven't had an announcement of him claiming tomorrow. Tomorrow he's going to be...
I like that he bought a Foxconn lab, an old Foxconn lab, to turn into an AI server building place in, like, Ohio, I think it is. It's like,
mate, what are you doing, man? Are you okay? Someone needs to check in on Masa. We need to hold space for Masayoshi Son.
Yeah, bring him on.
I have asked him. I have genuinely emailed SoftBank's PR, and I've emailed Anthropic's PR for Dario, and haven't heard back from either.
I assume there's an email issue because there's no other reason they wouldn't come on this show. It's just it now.
What's fun to watch, though, is considering how many times we've had like AI bubble conversations on here, everyone seems to be kind of waking up to it. It's kind of fun.
To be clear,
everyone here was early on this. Allison, you were actually very early.
You've been early on everything. I'm not as early as you, Ed.
No, I give you credit.
You're one of like the three people who, when the metaverse was happening, was actually calling it out. So
it's good to see and hear, but also insane that it's still going. That's what I don't like, this OpenAI $115 billion burn by 2029, I think it is.
I don't understand how people keep publishing this and being like, and that will happen.
I will say to my relief, the metaverse went away quickly because when it first was announced and I wrote a piece just being like, what?
What is that? What?
And then like everything about it seemed really dumb in every iteration coming after. And so I was like, whew, okay, I wasn't crazy.
But with AI, and I know you can relate to this, I feel crazy because the lack of utility is still there. And like the absurdity of the investment is still there.
And it does seem like that's why I wrote about the vibe shift. Like it has been like going in Ed Zitron's favor in the last few weeks.
Fucking vindication, finally. But it's funny as well, because even with that, this OpenAI story comes out and people are still like, yeah, they're going to burn $115 billion.
People reported about a month ago that they're going to spend $30 billion a year on Oracle starting 2028.
How?
How? How? I just,
that's what I don't understand, how these articles keep coming out. And I understand that reporters have to report the news and things they have discovered.
I get it.
But can no one just be like, no one has any idea where this money is coming from? No one. $30 billion
a fucking year from Oracle, for servers that are yet to be built, and all of this.
We don't need to worry about the fact that the servers aren't actually built, and Abilene in Texas is not finished, and the money doesn't exist, and it isn't obvious how Oracle affords it.
They will have to take on debt to build this, and Crusoe and Primary Digital Infrastructure have already done that. And I mean, other than that, it could happen any day.
I just wonder if the media is actually not prepared for this. And I don't mean in a conspiratorial way, I mean just in a
is the media actually set up for companies just lying or just projecting?
I mean, I feel like, uh, it reminds me of kind of the relationship that maybe our critics or, you know, our coverage might have to the medium, where there's not an inherent
antagonism or skepticism of claims that are being offered, and an assumption of good faith that continually gets betrayed or punished, but gets carried on with, over and over and over again.
I feel like the AI bubble discussions that we're seeing, part of me also feels like they are going to disappear the second we start to see some of the firms maybe announce some favorable metrics, even though, as we've been talking about for a long time, revenues are not there, profits are not there, the burn is only increasing, right?
And there's no way forward
in the short term or that I can think of where these companies start to actually do the things that they're claiming they're going to be doing with transforming the world.
But I can see a scenario where...
you know, someone has a favorable quarter for adoption, even though, you know, we just, I just saw yesterday, Apollo Global Management was talking about how large firms are actually scaling back AI adoption, which already wasn't even providing returns and was hurting productivity in the first place, right?
And it's really weird as well, because there was that MIT study where it's like they said 95% of generative AI integrations don't have any return on investment.
There are some people critiquing that number, but something that comes out of that study that I really like was it was saying that enterprise adoption is high, but the actual transformation is low, because this shit doesn't work.
And
it's so strange. And I think that the only reason that things won't immediately unravel with a good quarter is because the media has chosen to follow a direction now.
When you've got the Atlantic, the goddamn Atlantic publishing a story saying, yeah, it turns out that AI isn't really doing much, but it's holding up our economy.
It's like, holy shit, the Atlantic's willing to admit something that happened, happened?
I didn't even know this was in there. I thought that they just wrote up whatever was emailed to them by.
Yeah, don't hold your breath. They just published Mike Solana.
Yeah, actually, I retract all my statements.
I think with the media,
I see it in political reporting, in business reporting, in tech. There's a deference to authority that I think American media, but all media have an issue with.
And I think that sort of speaks to the underlying economics of being in media right now.
where there's a general chill both economically and politically.
Reporters are worried about their bylines being out there and getting stuff wrong.
And I'm not saying that that's an excuse, but I do think that is an institutional mindset that has taken root, especially in the last 10 years.
It's just become like really hard to be a journalist and to do it right.
But you are starting to see that MIT report was so important because
it caused people on Wall Street, authority figures, to say, hmm. I don't know about this.
And then that got a lot of mainstream financial media to kind of like do that questioning headline about AI that they maybe wouldn't have done six months ago.
I do like that there's a rumor about the next DeepSeek model doing agents.
They're not going to, but even just like if that comes out and they even claim they can, I think that might have a market panic just because they'll be like, ah, China.
I mean, yeah, I actually think there's something to that, right? Because, you know, we did agents here. It didn't do shit for Salesforce.
Really, the market, did the market even really...
And a wonderful story from The Information. OpenAI, in their whole projections through 2029, they've reduced the amount of money they'll make from agents by $26 billion.
Why?
Anyway,
But yeah, it's like, this DeepSeek thing could inspire people to get scared, horribly, because no one actually believes that agents exist, because they don't. Yeah.
But they think they do, but they will, but they won't. It's this.
I don't think I've ever actually seen anything like this in tech. I'm going to be honest.
It's worse than crypto. Even worse than just the general generative AI thing is this concept of agents though.
Because I saw some fucking thing about some political blog saying that Donald Trump would do some sort of act.
He would do the, I forget the exact thing, but it would be an act that would make copyright holders just hand shit over to AI, due to us needing to beat China.
And it mentioned within like, yeah, the growing agentic capabilities of AI. It's just, what the fuck is that? I've never seen a tech thing in my life that has not existed like this.
And people talk about it like it's real.
And I think also it's interesting to see the more it doesn't manifest, the more some of the recommendations to make them happen just sound more and more like these are also things that might somehow bend the cost curve in our direction.
Maybe we should make the internet unusable to anything other than some of these programs, right?
And you know,
I'm curious which one is going to give out first. Like, the really savvy ability a lot of these firms have had in spinning a sputtering, or
spinning, you know, a crisis that might drain the markets or deter investors, into, oh, actually, we just need even more capital, similar to what they did with DeepSeek, where the solution is even more compute-intensive models. You know, whether they're going to be able to do that faster than people wising up and saying, you know, maybe we shouldn't misallocate trillions of dollars of capital over the next few years towards this.
But this is the thing, though.
I don't think there's anything stopping this because the suggested thing was the, okay, so all of these model companies can steal from everyone, which is what happened already.
Even with this anthropic settlement, it's not, it's great people are getting $3,000. I love that.
That's also what the companies are offering. That's similar to what your payout would have been if you'd said yes, right?
I didn't know that. I think like some publishers were offering or trying to ask authors, hey, if we pay you this,
would you allow your book to be trained on? Or would you allow it to be put into a data set?
The payment that you're getting from the settlement feels or reminds me of the amount of money that people are being offered and those sorts of deals.
I think the... The thing is, okay, they already steal everything.
Well, okay, we need to give them as much money as possible. We've already done that.
Are we just going to do this forever? Because even if we do this forever, nothing's going to change. Even if I'm completely wrong and OpenAI keeps going another five years, um,
okay, so we're just going to annihilate $115 billion. There are no more things here. Like, their projections, OpenAI's projections from The Information, their chart, I don't even need to show you this, because this is just fan fiction at this point. Starting in 2026, there is this growth of this orange thing that is "other revenue."
Who knows what it is? I don't know. OpenAI doesn't seem to, and that's really important, because they're going to make what looks like several billion dollars from this next year.
What the fuck is going on?
Every time I look at this company, I feel a little more insane because they've now lowered their expectations of selling their access to their models by five billion dollars over the next few years.
What even is OpenAI at this point? Is it just a wrapper? Have they become a wrapper company of their own models? They're no better than Cursor.
It's just, it's so weird. And I realize I'm kind of going in circles at this point, but it's
I even the metaverse, even crypto, even crypto functioned. It was bad.
It's still bad. It is bad cloud software, but it still did the thing.
AI doesn't even seem to be doing it.
And they need more money to prove that it can't do it. And actually, they don't have enough right now, but they're going to need even more.
I don't even know how people are still taking this seriously because on top of that, did you hear about the Microsoft negotiations over the non-profit? I've been hearing reports.
Well, they're delaying it to next year.
They need to convert by the end of the year, otherwise SoftBank cuts their round in half. And everyone's just like, yeah, it'll be fine, mate.
We'll work it out. What the fuck?
Have either of you ever... I know, Ed, you covered Uber a lot.
I don't even think the economics match with that either. No.
I mean, you know, it's interesting because I think
Uber's
strategy, central strategy from the beginning was,
you know, we have a few existing playbooks that we need to reference. You know, the Koch deregulation of the taxi industry in the 90s, as well as the deregulatory campaigns that they led,
you know, in Seattle and historic campaigns in San Francisco. There's a lot that we can reference.
And if we can figure out a way to
bootstrap ourselves onto the model and onto those previous histories of deregulation while
delaying scrutiny long enough for our economics to actually get to a profitable place, we'll get there. Which is what they did, right?
But even then, I mean, I feel like from the beginning, as much as I hated a lot of the coverage of Uber for years, the people who were always correct about it were like the labor reporters who would actually, who, you know, if you spend time talking to the drivers, that will lead you to be a little bit more interested in the, you know, what can justify the suffering behind this.
And then you almost always will see.
that there's no way, at that point there was no way, the unit economics worked unless you subsidized everything.
Yeah, I feel like, similarly, there's something going on with artificial intelligence firms in the global AI value chain, where if you start with a labor analysis and you look at, you know, invisible workers or ghost workers that are the intelligence.
The Kenyan people training the models.
Yeah, or labeling or you know any any
labor that's out of sight out of mind.
You know, then, starting there and going up, it becomes hard to ask, okay, how are we supposed to
allocate all this capital towards a model that as is right now is cutting all these corners for costs and is still burning tens and tens and tens of billions of dollars and is asking for trillions of dollars more.
But I don't know.
I mean, part of my fear is that they are successful in the way that Uber was. Where, in AI's case, if you get enough buy-in from the military-industrial complex, if you get enough buy-in from, you know, social programs, interfacing with them and helping cut them or redirect traffic through them.
If you get enough buy-in from other tech firms, if you get rents from other startups that need to use and get access to your product, and also if you graft yourself onto everybody's daily interactions, daily lives, the way they interface with the internet, can you actually make it work?
Which is also another way to say, like, what if you just become a massive parasite? But
the funny and grim thing is, AI is a terrible parasite. It's not good at it.
Because Uber's success came from being able to graft itself on through utility. Yeah.
And subsidized pricing that meant that everyone used it. And also cabs kind of fucking sucked.
Yeah, I mean, yeah, they sucked. And also transit in most cities sucks.
And I mean, that has only gotten worse. The inherent colonialism of most technology applies, very good point from Karen Hao, of course, Empire of AI.
Empire of AI, did an episode, great book.
But in this one, it's kind of shit colonialism as well, because they don't even, they haven't found a way to actually exploit in a way that's profitable.
They haven't found a way to use human beings because the fundamental thing they want to do, it's kind of like if Uber, sometimes you got in the car and you got out at the wrong place.
And I don't mean like in a different country. Or you got into the car and it just exploded sometimes.
And I sound like I'm joking, but it really is that bad.
And on top of it, it's not replacing labor. And it's also not the kind of tech that can replace labor.
So
it's my grand theory that they're just playing the hits.
They're just trying, in the same way that you just eloquently put, Uber played the hits of here's how we did deregulation, here's how we did growth, and this is how software grew in the past.
I think the AI is trying to do the same thing, and it's bad at it. It's like watching a new class of dipshit try and do what the more evil dipshits of the past did and fail.
In fact,
these lobbying groups lobbying for AI, I hear a lot of people saying, oh, they're lobbying, they're lobbying. It's like, what for? Oh, no, they're going to build data centers everywhere.
They already do. They're going to steal from it.
They already do. They're trying to replace it.
They're already trying.
Everything that everyone's scared of, they already can do, other than the doing part. They can't do any of that.
Sorry, I just remembered as well, yesterday, because I went on the ChatGPT Pro subreddit, because I hate myself, and I was trying to find someone who'd used the agent. And every post was someone saying, anyone ever used the agent? You got any tips? And everyone's just, it doesn't work, it's broken.
Actually, here's a good question. Ed, can you think of a company that's ever released something just completely broken before?
Because the metaverse kind of worked. It wasn't what they were promising, but it worked. It was a virtual world-ish.
I mean, the pharmaceutical industry has a nice long history of putting out quasi-effective drugs that have all kinds of consequences. And I can't remember who skeeted about this a few weeks ago. It was after one of the, you know, it's become such a genre of journalism right now about AI, about, like, this man became delusional and had a psychotic episode because of his ChatGPT relationship.
And it was one of those going around, and someone skeeted about how, if this was coming from a pharmaceutical company, it would be recalled immediately.
There are real regulations in place that could actually claw that back and help save people's lives. But there are no regulations around AI.
So we get...
Chat GPT gods and spiritual awakenings and all these psychotic episodes.
I do think that that stuff is going to genuinely be its downfall, though, because right now it's burning more money than anyone's ever burned before.
And the most common use case people can talk about is, yeah, it drove that guy insane. That guy went crazy.
There are children, which is horrifying, killing themselves because of this thing.
That's what it's getting known for. And otherwise, it's like, yeah, your most annoying friend loves this.
Because really, it's, you love them, but they're like, I learned about all this from ChatGPT. It's like, you didn't.
Well, you know, on that,
part of also, part of my
fear I have is, I think, similar to how, when firms were rolling out facial recognition surveillance and insisting that we need biometric surveillance to help keep cities safe, communities safe, products safe.
One angle that people used to attack it was, well, you know, like the racial bias of these things will allow them to,
you know, misidentify black or brown people more often than not. And they might get arrested or they might get targeted by the police in one way or another.
And that is why we should get rid of the technology versus we should get rid of the technology. And I think, like, I'm curious
how it's going to go, the concern about it inducing psychosis or inducing suicides, because I could see a scenario easily where they
patch together something that looks like a fix. And it's not until later, a year or two or three, after people are much more dependent, that other harms come to the foreground.
And we lose something there, I think.
Not to say, you know, not to say or marginalize the fact that it has immense social costs or harm here, but it does in some ways remind me of the way in which that debate over facial recognition went.
And then they, you know, they "solved," quote-unquote, the racial bias problem. And now people have more or less accepted that facial recognition is okay, actually.
And you're right.
As long as it's not racist. And that's the thing.
It's people,
I say this as a white bloke, but it's like people
really underplay how endemic that racism is within all algorithms. You know, COMPAS, which is this
very, very old algorithm, it's basically Minority Report, both in the reference to the thing and in that it reports on minorities, in that it says, yeah, this person will likely offend again. And it isn't unilateral, the judge doesn't have to take it, but what a surprise.
It's often used to send black people into the
jail system because it's heavily biased against them. And yeah, I somewhat fear LLMs doing similar.
They're probably already doing it. And I think that every algorithmic system is inherently racist.
There's not enough people running them who actually fucking try. It's inherently biased against women.
I think there's also, I wish I had this in front of me, but there's also something about how like...
there are more fans of generative AI who are male than female.
But do you think it's possible that
they'll try to say, oh, we can solve for the psychosis problem, and then that will undermine a large
angle of the criticism. How do you solve it? Because it is probably a small-scale problem. We actually don't know, and it's not as if these companies know or will tell us.
But nevertheless, each one is so horrifying.
This is horror. Like the story in the Wall Street Journal,
Julie Jargon, another person there, who wrote that, where it was like a murder-suicide, an actual son-of-Sam-Altman situation, which is fucking terrifying, that this is happening.
I don't know if you can completely solve that, because all it takes is one popping up again for them to go, fuck. And it's also not just a ChatGPT problem.
There's this woman on TikTok who has been sharing what Claude has been telling her. And it's like, oh, it's giving me psychic visions, I think.
It's also the ultimate grifter tool.
It's just, it's, that's why I think it's taken off so much on social media as well. It's a tool that naturally fits into the grifter's toolbox.
I think that I actually have similar fears that they will try and find ways to hand wave away from this if it was the only problem. But they have so many problems.
They have so many problems at this point. But I do also think that people need to remember how racism in algorithms is, frankly, in all of them.
I mean, you remember Microsoft Kinect?
Which literally couldn't see black people, which was a joke in the show Better Off Ted, if anyone watched that. It's a great show.
It's just, it's insane that, I mean, sadly, it's very obvious why this keeps happening. It's because the people building these are predominantly white men.
And it's just they can't really.
And also, you can't really fix this stuff without intentionally building the data, which would require them to spend money on something they don't care about.
And they don't really understand what they're doing when they go in to tweak these models. They don't know how over-correcting or under-correcting they're being.
So they kind of have to just try and then put it out in the world and then wait for something bad to happen. It's funny.
Not funny. It's extremely sad.
In that Journal story that you referenced, which I read twice because I was, like, horrified. And also the reporting was incredible.
It was really good, and they framed it as, this appears to be the first instance of a murder resulting from this. We've seen suicides, but this is a murder-suicide.
And when OpenAI responded to the question about
did the bot ever respond to this guy who was clearly having a delusional episode? Hey, you need to talk to a real-life therapist, you need to go to the hospital, you need to seek help.
And I think they declined to comment.
It was a very evasive maneuver, but ultimately, the Journal had seen that the one time the bot said, please go to the emergency room, was when the guy, who was having paranoid delusions, said, I think my mom is trying to poison me.
And the bot said, if you think you've been poisoned, you should go to the hospital and get your stomach pumped. I also,
I agree they don't know how to tweak these things, but I must be clear, I've worked in tech for a long time, 16, 17 years now, and that's not even including my games journalism work. It is not hard for them to just have a unilateral thing of, oh, you're talking like this, I'm going to stop. I mean, Anthropic just announced that they have a thing that will cut off a conversation, which is good. All of them should do this. They could, if someone, and the whole thing, people, I've seen where it should be, if you start talking like, I'm going to do this.
I am becoming this.
It should say, hey, you sound like you're having a paranoid episode. Like, I'm worried about you. You should go and speak with someone.
And just stop working with them.
People will say, well, the way they get around that is by telling the ChatGPT window, oh, yeah, I'm writing a story. I don't know.
Do we need them to write a story about that?
What is the answer? And the answer is, they don't give a fuck and no one's made them. I really, I genuinely think, because it's really easy to, with
social networks as well, you don't ban every slur the moment someone says it.
But I don't know. You have a thing that says, hey, someone said a slur.
Maybe take a quick look at the slur.
And you could probably just ban that person, because I'm guessing that most uses of the N-word on social media are not used in culturally sensitive ways.
They're probably insanely racist.
You just cut them down. It's like, well, we can't.
It's an issue of free speech. Fuck you.
No, it's not. It's an issue of free speech when a person can't exist online without racism happening to them.
Right.
And these models, they could stop them, but I do think there is a compelling argument of they really don't know what to do, that every time they touch it, something else breaks.
Honestly, it's kind of the most egregious version of the most common software problem, which is, coding is really fucking annoying and we don't know how these things work.
And generative AI is not going to fix your coding problems no matter how many times you tell us, Sam Altman, that
AGI is just going to fix everything for us.
Did you know Microsoft has officially ended support for Windows 10? Upgrade to Windows 11 with an LG Gram laptop. Voted PC Mag's Reader's Choice Top Laptop Brand for 2025.
Thin and ultra lightweight, the LG Gram keeps you productive anywhere. And Windows 11 gives you access to free security updates and ongoing feature upgrades.
Visit lgusa.com slash iHeart for great seasonal savings on LG Gram laptops with Windows 11. PC Mag Reader's Choice used with permission.
All rights reserved. Ready for seven days of discovery?
For the first time, South by Southwest brings innovation, film and TV, and music together. Running concurrently across Austin March 12th through 18th.
Experience bold storytelling, groundbreaking ideas, and live performances that define what's next. The most unexpected discoveries happen when creative worlds collide at South by Southwest.
Start planning your adventure and register today at sxsw.com slash iHeart.
The holiday hustle is real. The traffic, the crowds, the never-ending to-do list.
And nowhere to park. Perfect.
That's why drivers use ParkWiz. They already save up to 50% when they book ahead.
This holiday season, use promo code Jingle10 in the ParkWiz app for an extra 10% off your next reservation. Because during the holidays, every minute and every dollar counts.
ParkWiz, less circling, more celebrating. Download the ParkWiz app and use code Jingle10 to save an extra 10% today.
Offer valid through December 31st, reservations only, limit one per user.
The world's best ski and snowboard athletes are chasing medals. Now you can follow their every move.
Join Insider, the official U.S.
ski and snowboard fan loyalty program, and get premium viewing at World Cup ski events, exclusive athlete meetups, discounts from brands you love, and a custom welcome gift mailed direct to your doorstep.
This winter, show your support as they race for the podium. Head to insider.us skiandsnowboard.org and join today.
That's actually been my favorite thing to do right now: go on r/cursor, r/ChatGPTPro, r/ClaudeAI, and just look at people complaining. And what they're complaining about is, hey, I keep hitting rate limits.
Hey, it keeps breaking things. Hey, you get one guy every so often who says, This has changed my life.
And then you see what the response is being like, yeah, but I fucked up all my stuff really badly. It doesn't really work.
And we have an upcoming episode with Colt Voji about this, where it's like, the average software engineer is not just writing code anyway.
And so this is also, I think, this is actually real. This is funny.
This is a good one to laugh about. So their only real growth market right now is writing code.
The problem is, writing code requires you to use reasoning models. Reasoning models inherently burn more tokens.
And the way they burn tokens is because they're thinking, they don't really think.
They look over what a prompt asks for and go, okay, what would be the steps to solve this? With code, that becomes so
complex. And the more models reason, the more they hallucinate.
So, the very product that they are building that is going to save them is also the one that is going to burn more compute.
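Back-of-envelope, the dynamic being described here, reasoning tokens multiplying the cost of every coding request, can be sketched like this. All prices and token counts below are hypothetical placeholders, not real vendor numbers:

```python
# Illustrative sketch: why reasoning models burn more money per request.
# PRICE_PER_1M_OUTPUT_TOKENS is a made-up placeholder, not real pricing.

PRICE_PER_1M_OUTPUT_TOKENS = 10.00  # hypothetical $ per million output tokens

def request_cost(answer_tokens: int, reasoning_tokens: int) -> float:
    """Hidden reasoning ("thinking") tokens are billed like output tokens,
    so they add directly to the cost of every request."""
    total_tokens = answer_tokens + reasoning_tokens
    return total_tokens / 1_000_000 * PRICE_PER_1M_OUTPUT_TOKENS

# A plain completion vs. the same-length answer preceded by a long reasoning trace.
plain = request_cost(answer_tokens=800, reasoning_tokens=0)
reasoned = request_cost(answer_tokens=800, reasoning_tokens=8_000)

print(f"plain:    ${plain:.4f}")
print(f"reasoned: ${reasoned:.4f} ({reasoned / plain:.0f}x the cost)")
```

Same answer length, eleven times the bill in this toy example; scale that across every coding query and the one growth market is also the biggest compute sink.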
And this is a rumor.
I've heard from a source that it can take like four to 12 GPUs for one person's particularly rough coding task, like a refactoring. That's sustainable.
And that's for one of the smaller models as well. That's for, like, o4-mini, which is a reasoning model.
It's like, what do you think the big ones are like?
In The Information, they talk about OpenAI having a new
$80 billion in costs that they'll expend
over the next three, four years.
It's $115 billion by 2029 as well. Does a good chunk of this come out of, oh, it turns out that compute is incredibly expensive
and we want to center our business model around it?
I think it's that. And I think it's just they don't know what else to do.
It's kind of like we're saying with the Uber model. They're playing the hits.
It's like, fuck, what did we do in the past?
We spent a lot of money. Shit, what do we buy? GPUs, I guess.
We train more. They're going to spend so much money on training.
And it's like, to what end? Your last model was a joke.
This is why it was really interesting to see that op-ed that came from Eric Schmidt and his research assistant. Eric Schmidt is someone who was an architect of this idea.
Former CEO of Google,
you know, chairman of the national security
commission that was trying to figure out how to merge artificial intelligence into defense contracting, and how to create a foreign policy that would allow America to dominate,
really to win an arms race, an AI arms race, with China.
And he comes away saying, the strategy I basically helped craft, which was that we need to prioritize AGI so that we can get a permanent lead
to deter any potential rivals, is scaring everyone. And it doesn't work.
It's a waste of capital. It's misallocating capital.
It's imposing all these harms.
And if we look at the competitor that we're going up against, China, by abandoning the AGI pursuit and instead prioritizing ways to experiment with it, integrate it,
build up practical applications,
there's a much more general public acceptance of it, willingness to try it out, adopt it.
And because they're not trying to scale out these massive, either monopolies or one-size-fits-all, models,
you see a wider adoption and something that looks like a more sustainable model. Are we going to follow it? Probably not.
Of course not.
What I love about chasing China as well is China has had stories for like a year where it's like, yeah, we have a bunch of unused GPU compute.
We're massively overbuilt. Joseph Tsai, I think it was, the Chinese billionaire said, yeah, it's a bubble.
We have a real GPU bubble. And America's just like, we need to fucking copy it.
We need to beat them.
We're going to run our economy into the ground.
We can just be China. It's like we're saying we're going to confront them.
And what is it that we're actually doing?
We're prioritizing developing artificial intelligence that has, like, a question-mark consumer use, that's going to be used in, you know, killing machines and drones, maybe, and for surveillance purposes.
Yeah, and that's not even generative AI, but that's where the actual excitement for any sort of artificial intelligence future is. And this is, you know, the generative AI stuff is talked about as if it is the future, the transformative future, of artificial intelligence. In reality, the actual interest, excitement, capital is going to, I think, go back to the center of gravity, which is, how do we just figure out the shiniest, most fearsome weaponry?
But I think that what's weird about this is I've... I don't think we've had a bubble that spreads so far into consumers' hearts.
I'm not saying it's as bad as the housing bubble, but consumer software, if we go back to the dot-com boom, I think it was like 45% of Americans had access to the internet.
It was relatively small in comparison, though the massive overinvestment in fiber happened.
But I don't think people realize that what they see as ChatGPT may not exist in a year or two, at least not in the same way. It's going to be so, you're already seeing week-long
rate limits on Anthropic's Claude.
Like, do people not realize that this could happen? I guess they don't realize. And I don't, I think that there's going to be a big dunce mask off.
There are so many people who have fallen behind this.
I mean, not to bridge too aggressively into this, but there was a story in the Wall Street Journal that I shared with you, of course, about this movie called Critterz.
That's with a zee, or a zed for my Canadian and UK listeners, where OpenAI will be providing the compute and the tech to do a movie called Critterz, with a budget of less than $30 million, though it's not obvious whether OpenAI and their compute is part of that.
But it's the weirdest shit in the world. Allison, you were bringing this up that like they're still using a bunch of humans.
Yeah, so I was reading the same story and I haven't done any, this came out this morning, so I haven't done my own reporting on it, but I will say from the story I read, it seems like they're hiring two different animation studios with artists and writers working on the script.
They're hiring human actors to voice the characters, and then
some mystery X amount of the movie will be put together with AI.
And I honestly don't know how different that is from a regular Pixar or DreamWorks animation process. But when I first saw the, you know, the teaser image, it's very cute.
And I was like, oh, God, they're like, this is going to be some AI propaganda, and it's going to be very cute and hard for me to refute.
But actually, it's just a human-made movie, it seems, with extra computer help. And this picture I'm holding up, of course, we're a podcast, so you can all see this.
It's just this generic blue furry creature. It looks like an extra from Monsters Inc., it really does.
But it's not due to the copyright law. It's not the same thing.
It's different. But what's funny with that as well is, I was mentioning this as a lead-in.
It's that $30 million thing.
If that doesn't include OpenAI's compute, it probably costs the same as a Pixar movie because you're still like, actually, 3D animation is one of the few other GPU use cases.
So really, it's just a different thing, right? It'll be funny also if they save money because they don't do any marketing.
And they're like, see how cheap it is if you don't advertise a movie at all?
I think they might be getting around some Hollywood unions. Yeah.
Going overseas.
Oh, really? They're going completely overseas too?
I'd have to check. Don't quote me on it.
But I think, I know they were using at least one overseas animation studio.
So they're probably saving a lot on the animation process by not paying animators, I would guess. It's so cool.
And also, another fact from the story is we don't know how long the piece will be.
And if it's like five minutes long, I'm so sorry. Come on.
Feature length, then. Feature length.
They should make it as long as the silent Napoleon film.
Which one? Six hours. Oh, six hours.
Yeah. Actually, I love that.
They should be forced to.
I don't know how they're going to do a feature-length movie, because I don't know if you've cursed yourself by looking at the AI-generated movies that people try.
Every so often one pops up on Twitter where it's, yeah, I made this entire thing in AI. And you look at it, and it's like a different fucking thing each frame.
That balloon boy one,
different size balloon, different color balloon. You read the stories about the balloon boy one.
It's like, yeah, they kept putting a face in the balloon. We don't know why.
I know I have a good amount of film and TV people who listen who are quite anxious about this. This doesn't scare me, because they're very vague about the details.
Every other big tech innovation, even, other than the metaverse, I guess, they usually like to show you behind the curtain a little bit and talk it up. There'd be a big splashy story in, like, MIT Technology Review or something like that, or the New York Times, being like, oh, look at this, look at that, look at all these things.
And they're like, yeah, we're just using some people somewhere in a place. And they will make it.
And in the Wall Street Journal story as well, they showed sketches that would then be turned into AI.
I just,
this feels like a death rattle far more than something terribly scary. And I understand film and TV people are likely a bit scared, but it's like, they're using out-of-the-country studios.
Of course, I just assume they're skipping union stuff because this is all they do. It's like, this is the best they can squeak out years in.
Fucking how?
And it's like a boring-looking children's thing, I guess, with a name from 2001. Oh, it does have producers or writers who worked on Paddington in Peru, apparently.
What, a movie about a criminal. The sequel to a movie about a criminal who unfairly attacked Hugh Grant.
No, sorry. I'm not sure.
Well, I mean, then this is the question of, you know, where in the Uber analogy is this?
Is this, you know, Uber's failed expansions, where they tried different models overseas? Or is this Uber returning home, where they take the lessons from overseas, or use those overseas things to buy them a bit more time to then subsidize operations?
Here is my comparison.
This is the drone deliveries.
This is the drone deliveries. It's the Amazon drone deliveries.
Great job, Casey and Kevin, several years ago, talking about the Amazon drone deliveries. Never fucking happened, mate.
It's hilarious as well because it is the same thing. It's like, we cobbled together this.
It sucked. It took so much money.
It's horribly inefficient. It sucks.
We hate it. You hate it.
The customers hate it. We hate it.
We hate doing this, but we did it. Ta-da.
And it's okay. Well, you sure prove that.
To your point, Allison, it's like, yeah, we used the power of AI to hire a bunch of people to do all the real work, because you can't trust this to work.
It does not work. When I saw a headline for an AI movie, I was like, it's going to be awful.
It's going to, like, writing a movie is hard. But wait a minute.
Also, there's the other thing of, oh my God.
How are they going to lip-sync this shit?
How do you lip-sync this?
You can't generate the same frame twice.
How are they going to, are they going to go in and post-edit it with humans, I assume? At this point, how much are you actually relying on AI for?
It's very unclear.
It feels like being at a party where everyone's pissed themselves.
It does feel like some next-level, like, youth propaganda. Yeah.
Like, if they can get kids to enjoy whatever this monstrous movie is going to be, then maybe there's, like, a longer-term brand play for OpenAI as warm and cuddly, and safe for children.
The thing is, French and Korean companies have
already been doing cheap 3D slop shows. I don't mean the famous one,
KPop Demon Hunters, which is apparently very good. I've not watched it.
I haven't seen it yet. And please don't kill me.
I'm not attacking that.
I'm talking about, there is a glut of very cheap 3D kids' shows, and they've been around for decades, because you can do this shit on the cheap now.
Another thing where the Uber model made sense as long as you didn't count the costs, which is, yeah, this is a way of getting people around that they become dependent on because it's useful.
This is like, we have found an extremely expensive and annoying way to do something that we already have a cheap alternative to do.
It's not like there was a cheap, reliable cab service that Uber replaced. There was a slow, shit cab service that Uber replaced everywhere.
And it's like, is it a good company?
Is it horrible to workers? Yes. But does it work? Yes.
This is, we're going to automate everything with the power of AI other than labor,
other than stuff.
That's where the AI story starts to overlap again with crypto, where at least
with Uber, you understand what you're getting as a consumer. And then with AI, you're kind of like, I don't really know what this is.
I don't know what problem it's solving.
It's like a solution in search of a problem. And that was crypto's same bag.
It's just like, oh, we invented this cool new alternative money system. Why?
The thing is with crypto is they always had a plan, which sucks. I really should have seen it coming.
I was not smart enough at the time.
It was they always wanted to just get embedded in the financial system and then just turn the funny money into real money. AI doesn't have that.
There is no way to turn this into new, like you can't just generate new money. That's what crypto did, and it fucking sucks.
And by the way, the next crypto crash is going to wash out some real, like, it's going to really fuck people up. I don't think people realize that SBF 2.0,
who at this point might just be SBF, like, if he just gets pardoned and comes back,
honestly, if he comes back and does it again,
no one can complain. I'm going to law school.
I've got to go in the hyperbolic time chamber. I'm going to join the fight.
Just so you can, so I can put him in cuffs.
You're going to put Sam Bankman-Fried back in cuffs. Yes.
Oh, Sam Altman-Fried would be good.
It's just, I don't see an end point for this. Even the boosters at this point, they're like, and then it will be powerful.
When? How?
What are you seeing that even tells you this? I don't even want to fight. Just tell me.
I do think there's just so much money behind it, and there's so many people who've invested. I was listening to a VC guy get interviewed on the Odd Lots podcast, and I can't remember his name, so I apologize, but he was talking about how all these founders, all these smaller startups that are getting in on the AI game, all these founders have kind of been raised with this idea of Silicon Valley and what it will bring you, and it's life-changing amounts of wealth.
And when you have enough people, and, like, the VCs are part of that, the actual tech startups are part of that,
Stanford and kind of the whole ethos of the Valley is, like, if you just keep going and work hard enough, you can have generational wealth. And that is a very powerful force.
And I feel like we're going to see the AI hype last longer than in previous bubbles and tech cycles, in part because
the potential for the wealth is outstanding. And it's like nothing we've ever seen.
But that's the thing. You're completely right.
Except AI has one problem, which is all the companies lose a shit ton of money and no one's selling. No one is buying them.
There's been like three acquisitions.
There's one to AMD, one to Nvidia, one to a public company called NICE, which bought Cognigy, I think they were called.
It was like an AI customer service thingy that never really seemed that good anyway. But that whole thing is true.
And I think that that's what people are buying.
And I think that the myth that you can just use AI to spin up a startup quickly has kind of fueled that mythos as well.
But the problem is, this is so different because the whole point of Silicon Valley, the whole thing where you can just move there and start a startup is because it didn't cost ruinous amounts of money to start one.
You didn't get $3 million from a VC and expect to spend $2.5 million of that on compute.
You were like, okay, we're going to, we're going to have to bootstrap a little bit further. We've just got a little bit of venture capital.
We're going to go this far.
This is like, every step of the way, this cost increases massively. It used to be sales and marketing and just people.
AI is people plus compute plus marketing plus this plus that.
I think, you know, Perplexity, the AI search engine, spent 164% of their revenue in 2024 just on compute and AWS. It's like, this is not, this whole generational wealth thing,
I fully agree, it's what they're using to sell it.
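To make that Perplexity figure concrete, here's a toy version of the math. Only the 164% ratio comes from the conversation; the revenue number is a made-up placeholder:

```python
# Toy unit economics: spending more than 100% of revenue on compute alone.
# The 1.64 ratio is the figure cited in the conversation; revenue is invented.

revenue = 50_000_000          # hypothetical annual revenue, in dollars
compute_ratio = 1.64          # compute + AWS spend as a share of revenue

compute_spend = revenue * compute_ratio
left_after_compute = revenue - compute_spend

print(f"compute spend:      ${compute_spend:,.0f}")
print(f"left after compute: ${left_after_compute:,.0f}")
# Every dollar of revenue is more than consumed by compute before payroll,
# marketing, or anything else is paid for.
```

In this sketch the company is $32 million underwater on infrastructure alone, which is the shape of the problem being described.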
I just don't think it's going to work. And it's scary, because this gets at the wider thing, and I really haven't talked about this enough.
The wider problem as well is, all of these people who went to Silicon Valley and raised all this money have pretty much raised to sell companies that will never sell, that they can never take public because they burn too much money.
They don't really have great user bases, because LLMs don't really have those.
And so they're just going to sit there. And then you've got a bunch of VC money tied up in that that will never exit and a bunch of limited partner money that will never exit.
I think that there is an entirely separate bubble building,
and when that bursts, the depression within Silicon Valley is going to be insane. It's already pretty gnarly, but I think it's like 33% of venture capital went into AI last year.
It's like,
eventually people are going to realize there's no exit for anyone. And I don't know what that does.
I mean, it will piss off limited partners.
The money that comes to VCs is just not going to be there.
Well, so then that's the question, right?
Because venture capital encourages, on one level, overvaluing, because you need to figure out a way to make more money than what you put in on the exit, with an acquisition or some merger.
But on another level, you're also working within a network, trying to enrich yourself and your friends, or trying to build the infrastructure for future
startups, portfolio options that you and your friends make, to come in and make money regardless,
that other people can invest in bits of. And so, you know, on one level, I really do agree that there's not really much of an exit ramp if there's actually no revenue and no profits.
But then also, I'd be curious, like, do you think they're going to try to ram these things through, similar to what we saw with CoreWeave, right?
Where, you know, you talked, I think, extensively about
the ways in which the financials there do not actually make sense if you're interested in a company that actually has the capital to do what it says it's going to do, which is provide GPU compute to everybody.
And even though it has such a central role in this ecosystem, it can't make profits that, you know, justify the capital that it's getting. It has odious and
burdensome debt that should be a massive red flag. And it might be, you know, round-tripping, right? Yeah.
But this is supposed to be like the darling of the sector.
And it got pushed through, part of me feels like, because of
desperation. An Nvidia and Magnetar push.
And Magnetar Capital, of course, famous for the CDOs. Yeah, right.
They're back.
But with CoreWeave, they pushed it through onto the markets, but that doesn't mean it can't die. Right.
Well, so that's the thing.
Do you think that it's possible that they'd be successful in pushing it onto markets, but it dies? Because
I feel like there will definitely be a lot of investment incineration, but I also do think we're going to have bags dumped on everybody.
I think you could do it with something like CoreWeave and Lambda, which is another situation where NVIDIA is the customer, invested, and also sells them the GPUs, which they then use as collateral to buy more GPUs using debt, which is so good.
You'll notice that there are no software companies going public. There's no software AI companies going public.
Everyone thinks that OpenAI goes public here. And oh, they'll go.
The market's going to, if they can even convert, the market's going to eat them for dinner. Oh, yeah, we're going to burn bazillions of dollars forever.
No, the markets didn't like CoreWeave either. CoreWeave wouldn't have gone public had Nvidia not put more money in.
Lambda is probably going to be exactly the same if they even make it. You won't see software companies because that's the other thing.
CoreWeave had, albeit with bunches of debt, assets.
They have data centers kind of through Core Scientific. God, I hate these fucking companies.
But they don't, they have things that they can point to and relationships. Even OpenAI, that's the thing with them.
They don't. They barely have assets.
Oracle is building their data center in Abilene with Crusoe. They don't own any of the GPUs.
They have a few GPUs, I think, for research, I've heard, but Microsoft owns most of their infrastructure.
They don't own their R&D. Well, they do, but Microsoft also has access to that, their intellectual property, same deal.
So it's like, what actual value does an AI startup have?
People always say, oh, they're getting the data. They get the data so that the data will tell them.
It's like, what? It's all these horrible stories about, like, oh, DOGE has got an LLM.
They're doing this with it. What's the end point? It's scary.
Don't get me wrong.
And then what? And there never is one. And
I hope someone, I hope an AI software company goes public. I want to see this so bad.
I want to see it. You have no idea.
If you give me the OpenAI books, the Anthropic books, you become the official homie of Better Offline. I'll mention you on every episode.
Get me these books, because I think all of them are going to be like a dog's dinner. I've actually looked at the markets, and Uber,
Uber, by comparison, they did burn a shit ton of money. It was something like $25 billion between 2019 and 2022.
A lot of that was on sales and R&D. It's pretty much Groupon.
I think also their R&D with autonomous cars, but separate problem. But it's like, I can't find an example of someone that just annihilated fuel, unless it's, like, planes.
And I think we've established the use case for planes by now. Clear.
Yeah.
Sold. It's, it's just, it's all very frustrating.
But you know what? I think I'm going to call it there. I think we've had a good conversation.
Allison, where can people find you?
You can find me on Bluesky at amorrow or at cnn.com/nightcap. Ed?
You can find me on Twitter at BigBlackJacobin.
You can find me on Bluesky at Edward Ongweso Jr. and on Substack at The Tech Bubble.
And you can find me, of course, at google.com. Just type in Prabhakar Raghavan.
You'll find me. I pop right up.
That's all me. Thank you so much for listening, everyone.
My episodes are coming out in a weird order because I'm recording this knowing there's a three-parter this week, but this will come out with a monologue of some sort.
Thank you so much for listening, everyone. Of course, Bahid, thank you for producing here out in New York City.
And yeah, thanks, everyone.
Thank you for listening to Better Offline. The editor and composer of the Better Offline theme song is Matt Osowski.
You can check out more of his music and audio projects at mattosowski.com.
M-A-T-T-O-S-O-W-S-K-I dot com.
You can email me at ez@betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter.
I also really recommend you go to chat.wheresyoured.at to visit the Discord, and go to r/BetterOffline to check out our Reddit.
Thank you so much for listening. Better Offline is a production of CoolZone Media.
For more from CoolZone Media, visit our website, coolzonemedia.com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
Did you know Microsoft has officially ended support for Windows 10? Upgrade to Windows 11 with an LG Gram laptop, voted PCMag's Reader's Choice Top Laptop Brand for 2025.
Thin and ultra-lightweight, the LG Gram keeps you productive anywhere, and Windows 11 gives you access to free security updates and ongoing feature upgrades.
Visit lgusa.com/iHeart for great seasonal savings on LG Gram laptops with Windows 11. PCMag Reader's Choice used with permission.
All rights reserved. Ready for seven days of discovery?
For the first time, South by Southwest brings innovation, film and TV, and music together, running concurrently across Austin, March 12th through 18th.
Experience bold storytelling, groundbreaking ideas, and live performances that define what's next. The most unexpected discoveries happen when creative worlds collide at South by Southwest.
Start planning your adventure and register today at sxsw.com/iHeart.
This is Sophie Cunningham from Show Me Something. Do you know the symptoms of moderate to severe obstructive sleep apnea or OSA in adults with obesity?
They may be happening to you without you knowing.
If anyone has ever said you snored loudly or if you spend your days fighting off excessive tiredness, irritability, and concentration issues, it may be due to OSA.
OSA is a serious condition where your airway partially or completely collapses during sleep, which may cause breathing interruptions and oxygen deprivation. Learn more at dontsleeponosa.com.
This information is provided by Lilly, a medicine company. At CVS, it matters that we're not just in your community, but that we're part of it.
It matters that we're here for you when you need us, day or night. And we want everyone to feel welcomed and rewarded.
It matters that CVS is here to fill your prescriptions and here to fill your craving for a tasty and yeah, healthy snack.
At CVS, we're proud to serve your community because we believe where you get your medicine matters. So visit us at cvs.com or just come by our store.
We can't wait to meet you.
Store hours vary by location. This is an iHeart podcast.
Guaranteed human.