Radio Better Offline: Edward Ongweso Jr. & Allison Morrow
Welcome to Radio Better Offline, a tech talk radio show recorded out of iHeartRadio's studio in New York City. Ed Zitron is joined in studio by Allison Morrow of CNN and Ed Ongweso Jr. of the Tech Bubble newsletter to talk about the AI vibe shift, OpenAI's burn rate, what will finally burst the bubble, and something called "Critterz."
Allison Morrow
https://www.cnn.com/profiles/allison-morrow
https://bsky.app/profile/amorrow.bsky.social
Ed Ongweso Jr.
https://thetechbubble.substack.com/
https://bsky.app/profile/edwardongwesojr.com
OpenAI Hopes Animated 'Critterz' Will Prove AI Is Ready for the Big Screen - https://www.cnet.com/tech/services-and-software/openai-hopes-animated-critterz-will-prove-ai-is-ready-for-the-big-screen/
The AI vibe shift is upon us - https://www.cnn.com/2025/08/22/business/ai-vibe-shift-nightcap
The Silicon Valley Consensus & the "AI Economy" - https://thetechbubble.substack.com/p/the-silicon-valley-consensus-and-614
YOU CAN NOW BUY BETTER OFFLINE MERCH! Go to https://cottonbureau.com/people/better-offline and use code FREE99 for free shipping on orders of $99 or more.
BUY A LIMITED EDITION BETTER OFFLINE CHALLENGE COIN! https://cottonbureau.com/p/XSH74N/challenge-coin/better-offline-challenge-coin#/29269226/gold-metal-1.75in
---
LINKS: https://www.tinyurl.com/betterofflinelinks
Newsletter: https://www.wheresyoured.at/
Reddit: https://www.reddit.com/r/BetterOffline/
Discord: chat.wheresyoured.at
Ed's Socials:
https://www.instagram.com/edzitron
See omnystudio.com/listener for privacy information.
Transcript
This is an iHeart podcast
For life with pets, there's Chewy, with everything delivered fast at great prices.
From food, with favorites to fill their bowls and bellies, to fun, with all the toys, with all the noise.
Even fashion, with all the looks that'll get second looks at the park or on the couch,
and pretty much anything else you can imagine.
If a pet is part of your family,
we should be too, with everything you need for life with pets.
Live in the Bay Area long enough, and you know that this region is made up of many communities, each with its own people, stories, and local realities.
I'm Erica Cruz-Guevara, host of KQED's podcast, The Bay.
I sit down with reporters and the people who know this place best to connect the dots on why these stories matter to all of us.
Listen to The Bay, new episodes every Monday, Wednesday, and Friday, wherever you get your podcasts.
Every business has an ambition.
PayPal Open is the platform designed to help you grow into yours with business loans so you can expand and access to hundreds of millions of PayPal customers worldwide.
And your customers can pay all the ways they want with PayPal, Venmo, Pay Later, and all major cards so you can focus on scaling up.
When it's time to get growing, there's one platform for all business, PayPal Open.
Grow today at PayPalOpen.com.
Loan subject to approval in available locations.
There's more to San Francisco with the Chronicle.
There's more food for thought, more thought for food.
There's more data insights to help with those day-to-day choices.
There's more to the weather than whether it's going to rain.
And with our arts and entertainment coverage, you won't just get out more, you'll get more out of it.
At the Chronicle, knowing more about San Francisco is our passion.
Discover more at sfchronicle.com.
Cool Zone Media.
Hello, and welcome to Better Offline.
I'm your host, Ed Zitron.
And we are recording here in beautiful New York City, and I have a wonderful pair of guests today.
I have, of course, Allison Morrow from CNN's Nightcap newsletter and Edward Ongweso Jr., the hater himself, from the Tech Bubble newsletter.
Thank you so much for joining me today.
Great to be here as always.
So I think we should start with exactly what we were just talking about.
The OpenAI claims that they have worked out what causes hallucinations.
Allison, do you want to go over this?
I should have read the paper a bit more carefully, but I can, you know, the highlights were getting digested yesterday on X and Bluesky.
And it seems like it's kind of the test-taking problem of when you encourage students when they're taking standardized tests and you don't know the answer, you guess.
And that's exactly how these models are trained.
And they don't, you don't get a point if you say, I don't know.
So you come up with something.
And the models are meant to keep guessing until they get something close to right.
Right.
So that's why you get a lot of nonsense and hallucinations.
And OpenAI, at least in their reading of it, says, oh, this is a simple solution.
We'll just encourage the models to understand better when it's like a binary question.
And when you can say, I don't know the answer to that.
So we'll see.
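[Editor's note: the scoring incentive described above can be sketched numerically. This is a minimal, hypothetical illustration of the argument, not OpenAI's actual training objective; the function names and numbers are invented for the example.]

```python
# Under a binary (0/1) grading scheme, abstaining ("I don't know")
# earns zero, while guessing earns 1 with some probability of being
# right -- so training on such a rubric rewards guessing.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected points for one question under 0/1 grading."""
    if abstain:
        return 0.0
    # 1 * p_correct + 0 * (1 - p_correct)
    return p_correct

# Even a 10% guess beats abstaining under this rubric.
print(expected_score(0.10, abstain=False))  # 0.1
print(expected_score(0.10, abstain=True))   # 0.0

def expected_score_penalized(p_correct: float, wrong_penalty: float) -> float:
    """Expected points when wrong answers are penalized."""
    return p_correct - (1 - p_correct) * wrong_penalty

# With a penalty for confident wrong answers, a 10% guess now has
# negative expected value, so abstaining becomes the better move.
print(expected_score_penalized(0.10, wrong_penalty=0.5))
```

The proposed fix discussed above amounts to changing the rubric so the second scoring rule, rather than the first, shapes the model's behavior.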
I read the paper, albeit not today, because my brain just immediately read it as marketing copy, just put that, replaced it with another anime theme song.
I went through it and I was like, okay, so it's going to encourage them to say, I don't know.
This feels like a very flat view of what hallucinations are, though, because hallucinations, as people know them, are authoritatively stating something that isn't true.
But hallucinations in, like, a coding model are, it will just say, yeah, I did that, when it didn't.
This is very common.
You're going through the Cursor and the Claude Code forums, you can see this, or subreddits at least.
And it's not just that they say something that's not true.
They don't know that something's not true.
Also, so they might say they don't know.
And it's just very silly because they claim that they're going to fix this problem with this solution, but they've done years and hundreds of millions of dollars of reinforcement learning.
Do they reinforce it more so that it says, I don't know?
I'm fucking tired of these companies, going to be completely honest.
I realize this is kind of cliche for the show, but it's the fact that these things get written up as very serious things.
So they're just saying, you guys still haven't worked this out, it's kind of frustrating to me.
And they don't seem to be, the model's not getting better, diminishing returns and all that, but this is the best they've got.
I mean, does this kind of thrust feel like, you know, do you guys feel like it's downstream of the attempts by some of these firms to say, oh, actually, like, if we dial back the sycophancy, you know, then we'll be able to have a much more engaging consumer product.
We'll be able to have it hallucinate less, you know, induce psychosis less.
You know, do they feel linked in that way? Or does it just feel like, you know, another, maybe another dead end?
I feel like it's just them trying to work shit out.
And trying to, like, this also feels like a very rudimentary answer that they probably already had. Anytime someone comes up with an idea that's technical, I'm like, okay, mate, you cannot be cooking with gas. This cannot be that good an idea. And the thing is, the sycophancy problem, I don't think, is solvable through solving hallucinations. The problem is it should stop. It should not say, I don't understand. It should say, no, actually, you sound like you think I'm God King, and that you are God King, and that as God Kings together we will destroy the world.
Yeah, I mean, which in my case is true. But do you think, like, the emphasis or the attempt to overcorrect on that is leading them to go down solutions where they think, oh, like, if we just unpack this and that, then what's the problem?
Yes.
Yes.
Actually, I think that that's right.
Because they have the, I don't know if you saw, the attorneys general from Delaware and California sent a letter to OpenAI last week saying, hey, look, you need to fix these safety protocols. You need to actually have them, because what you have right now doesn't work, and we will block your non-profit conversion.
Otherwise... I was really happy to read that right up until the bit where they say, we wish you well with your quest for AI dominance.
I'm just like, these are the fucking people protecting us from the, they're like, no, it's great you want to dominate everyone with AI.
It's just you drove some people to a murder-suicide situation.
And wasn't that part of the problem with GPT-5?
It's that they tried to dial back the sycophancy, and then it took away the character and, like, the humanness that people had gotten attached to in GPT-4 and earlier models.
And like, it all seems to come back to OpenAI doesn't realize that what it's selling often is like a companion and a therapist.
And it doesn't, it, it reminds me of like Q-tips.
You're not supposed to put them in your ear, right?
But that's all anyone uses Q-tips for.
I was going to say, you're not meant to do that.
Of course not.
But like, that's the consumer.
That's how the consumer has chosen to use this product.
And they're saying like, well, we don't condone that.
We don't think it's like the best use of our product.
And, you know, we know better than the consumer, of course.
I think it's one abstraction higher, which is, I don't think they know what ChatGPT is.
I don't, I did an AI booster piece that came out last week, but I had this whole thing where, it's so, they don't describe what ChatGPT does.
If you go on the website, like, it's like, yeah, it analyzes data and it's brainstorming and charts and stuff.
The agent does things too.
Please buy it.
And then you try and like actually look for any use cases and there's nothing.
I think that they're just guessing.
But my favorite thing I'm seeing.
My favorite thing is that now people are like, GPT-4o isn't the same, because they brought it back and now people are just freaking out.
People are like, no, it's not the same.
It's different somehow.
I honestly don't know if it's true.
I just think that they've entered gamer mode.
This is just what game, this is how gamers react.
It's like the gun isn't the same.
It's literally the same code.
No, you've changed something.
I know.
And it's what happens when you release an imprecise glazing bot onto the world.
It's also really funny how literally any of these companies could have been open AI.
There doesn't seem to be anything special about this company at all anymore or ever.
Well, you know, I mean, not everybody can lose as much money as OpenAI.
You know, that feels very special.
Or have the connections of Masayoshi Son, you know?
Oh, my God.
I haven't heard from Masayoshi-san in a minute.
We haven't had an announcement of him claiming tomorrow.
Tomorrow he's going to be...
I like that he bought an old Foxconn plant to turn into an AI server building place in, like, Ohio, I think it is.
It's like, mate, what are you doing, man?
Are you okay?
Someone needs to check in on Masa.
We need to hold space for Masayoshi Son.
Yeah, bring him on.
I'd love to hear him.
I have asked him.
I have genuinely emailed SoftBank's PR, and I've emailed Anthropic's PR for Dario.
Haven't heard back from either.
I assume there's an email issue because there's no other reason they wouldn't come on this show.
It's just it.
Now, what's fun to watch, though, is considering how many times we've had like AI bubble conversations on here, everyone seems to be kind of waking up to it.
It's kind of fun.
I must be clear, everyone here was early on this.
Allison, you were actually very early.
You've been early on every time.
I'm not as early as you, Adam.
No, I give you credit.
You're one of like the three people who, when the metaverse was happening, was actually calling it out.
So
it's good to see and hear, but also insane that it's still going.
That's what I don't get, like, this OpenAI $115 billion burn by 2029, I think it is.
I don't understand how people keep publishing this and being like, and that will happen.
I will say, to my relief, the metaverse went away quickly because when it first was announced, and I wrote a piece just being like, what?
What is that?
What?
And then, like, everything about it seemed really dumb in every iteration coming after.
And so I was like, whew, okay, I wasn't crazy.
But with AI, and I know you can relate to this, I feel crazy because the lack of utility is still there.
And like, the absurdity of the investment is still there.
And it does seem like that's why I wrote about the vibe shift.
Like, it has been like going in Ed Zitron's favor in the last few weeks.
Vindication, fucking vindication, finally.
But it's funny as well, because even with that, this open AI story comes out and people are still like, yeah, they're going to burn $115 billion.
People reported about a month ago that they're going to spend $30 billion a year on Oracle starting 2028.
How?
How?
How?
I just.
That's what I don't understand, how these articles keep coming out.
And I understand that reporters have to report the news and things they have discovered, I get it. But can no one just be like, and no one has any idea where this money's coming from? No one. $30 billion a fucking year from Oracle for servers that are yet to be built. And all of this, like, we don't need to worry about the fact that the servers aren't actually built, and Abilene in Texas is not finished, and the money doesn't exist, and it isn't obvious how Oracle affords it. They will have to take on debt to build this, and Crusoe and Primary Digital Infrastructure have already done that.
And I mean, other than that, it could happen any day.
I just wonder if the media is actually not prepared for this.
And I don't mean in a conspiratorial way.
I mean just in a, is the media actually set up for companies just lying or just projecting?
I mean, I feel like it reminds me of kind of the relationship that maybe our critics or our coverage might have to the medium, where there's not an inherent antagonism or skepticism of claims that are being offered, and an assumption of good faith that continually gets betrayed or punished, but gets carried on with over and over and over again.
I feel like the AI bubble discussions that we're seeing, part of me also feels like they are going to disappear the second we start to see some of the firms maybe announce some favorable metrics, even though, as we've been talking about for a long time, right, revenues are not there, profits are not there, the burn is only increasing, right?
And there's no way forward in the short term, or, you know, that I can think of, where these companies start to actually do the things that they're claiming they're going to be doing with transforming the world.
But I can see a scenario where you know someone has a favorable quarter for adoption, even though, you know, we just, I just saw yesterday Apollo Global Management was talking about how large firms are actually scaling back AI adoption, which already wasn't even providing returns and was hurting productivity in the first place, right?
It's, and it's really weird as well, because there was that MIT study where it's like they said 95% of generative AI integrations don't have any return on investment.
There are some people critiquing that number, but something that comes out of that study that I really like was it was saying that enterprise adoption is high, but the actual transformation is low because this shit doesn't work.
And it's, it's so strange.
And I think that the only reason that things won't immediately unravel with a good quarter is because the media has chosen to follow a direction now.
When you've got the Atlantic, the goddamn Atlantic publishing a story saying, yeah, it turns out that AI isn't really doing much, but it's holding up our economy.
It's like, holy shit, the Atlantic's willing to admit something that happened.
I didn't even know this was in there.
I thought that they just wrote up whatever was emailed to them by...
Don't hold your breath. They just published Mike Solana.
Yeah, actually, I retract all my statements.
But it's, I think, I think with the media,
I see it in political reporting, in business reporting, in tech.
There's a deference to authority that I think American media, but all media have an issue with.
And I think that sort of speaks to the underlying economics of being in media right now, where there's a general chill both economically and politically.
Reporters are worried about their bylines being out there and getting stuff wrong.
And I'm not saying that that's an excuse, but I do think that is an institutional mindset that has taken root, especially in the last 10 years.
It's just become like really hard to be a journalist and to do it right.
But you are starting to see that MIT report was so important because
it caused people on Wall Street, authority figures, to say, hmm, I don't know about this.
And then that got a lot of mainstream financial media to kind of like do that questioning headline about AI that they maybe wouldn't have done six months ago.
I do like that there's a rumor about the next DeepSeek model doing agents.
They're not going to, but even just, like, if that comes out and they even claim they can, I think that might cause a market panic, just because they'll be like, ah, China.
I mean, yeah, I mean, that's, that's, I actually think there's something to that, right?
Because, you know, we did agents here, didn't do shit for Salesforce.
Really, the market, did the market even really do it?
Oh, no, don't do a central claim.
And a wonderful story from The Information.
Um, OpenAI, in their whole projections through 2029, they've reduced the amount of money they'll make from agents by $26 billion.
Why?
Anyway, so...
But yeah, it's like, this DeepSeek thing could inspire people to get scared, horribly, because no one actually believes that agents exist.
Because they don't.
Yeah.
But they think they do, but they will, but they won't.
It's this, I don't think I've ever actually seen anything like this in tech.
I'm going to be honest.
It's worse than crypto.
Even worse than just the general generative AI thing is this concept of agents though.
Because I saw some fucking thing about some one political blog saying that Donald Trump would do some sort of act.
He would do the, I forget the exact thing, but it would be an act that would make copyright holders just hand shit over to AI due to us needing to beat China.
And it mentioned within, I was like, yeah, the growing agentic capabilities of AI.
It's just, what the fuck is this?
I've never seen a tech thing in my life that has not existed like this.
And people talk about it like it's real.
And I think also it's interesting to see the more it doesn't manifest, the more some of the recommendations to make them happen just sound more and more like these are also things that might somehow bend the cost curve in our direction.
Maybe we should make the internet unusable to anything other than some of these programs.
Right.
And
I'm curious which one is going to give out first, like, the really savvy ability a lot of these firms have had in spinning a crisis that might drain the markets or deter investors into, oh, actually, we just need even more capital, similar to what they did with DeepSeek.
The solution is even more compute-intensive models.
You know, whether they're going to be able to do that faster than people wising up and saying, you know, maybe we shouldn't misallocate trillions of dollars of capital over the next few years towards this.
But this is the thing, though.
I don't think there's anything stopping this because the suggested thing was the, okay, so all of these model companies can steal from everyone, which is what happened already.
Even with this anthropic settlement, it's not, it's great.
People are getting $3,000.
I love that.
That's also what the companies are offering.
That's similar to what your payout would have been if you'd said yes, right?
Yeah,
I didn't know that.
I think, like, some publishers were offering or you know, trying to ask authors, hey, if we pay you this, would you be, would you allow your book to be trained on?
Or would you allow it to be put into a data set?
The payment that you're getting from the settlement feels or reminds me of the amount of money that people are being offered and those sorts of deals.
So, I've shopped with Quince before they were an advertiser, and after they became one, and then again before I had to record this ad. I really like them.
My green over shirt in particular looks great.
I use it like a jacket.
It's breathable and comfortable and hangs on my body nicely.
I get a lot of compliments, and I liked it so much I got it in all the different colours, along with one of their corduroy ones, which I think I pull off, and really, that's the only person that matters.
I also really love their linen shirts too.
They're comfortable, they're breathable and they look nice.
Get a lot of compliments there too.
I have a few of them.
Love their rust coloured ones as well.
And in general I really like Quince.
The shirts fit nicely, and the rest of the clothes do too.
They ship quickly, they look good, they're high-quality, and they partner directly with ethical factories and skip the middleman.
So you get top-tier fabrics and craftsmanship at half the price of similar brands.
And I'm probably going to buy more from them very, very soon.
Keep it classic and cool this fall.
With long-lasting staples from Quince, go to quince.com/better for free shipping on your order and 365-day returns.
That's q-u-i-n-c-e dot com slash better.
Free shipping and 365-day returns.
Quince.com/better.
In business, they say you can have better, cheaper, or faster, but you only get to pick two.
What if you could have all three at the same time?
That's exactly what Cohere, Thomson Reuters, and Specialized Bikes have since they upgraded to the next generation of the cloud.
Oracle Cloud Infrastructure.
OCI is the blazing fast platform for your infrastructure, database, application development, and AI needs, where you can run any workload in a high availability, consistently high performance environment and spend less than you would with other clouds.
How is it faster?
OCI's block storage gives you more operations per second.
Cheaper?
OCI costs up to 50% less for computing, 70% less for storage, and 80% less for networking.
Better?
In test after test, OCI customers report lower latency and higher bandwidth versus other clouds.
This is the cloud built for AI and all your biggest workloads.
Right now, with zero commitment, try OCI for free.
Head to oracle.com/strategic.
That's oracle.com/strategic.
Parking shouldn't slow you down.
ParkWhiz gives every driver a shortcut.
Book ahead, save up to 50%, and skip the hassle of circling the block.
Park smarter, park faster, ParkWhiz.
Download the ParkWhiz app today and save every time you park.
I think the thing is, okay, they already steal everything.
Well, okay, we need to give them as much money as possible.
We've already done that.
Are we just going to do this forever?
Because even if we do this forever, nothing's going to change.
Even if I'm completely wrong and OpenAI keeps going another five years.
Okay, so we're just going to annihilate $115 billion.
There are no more things here.
Like, their projections, OpenAI's projections from The Information, their chart, I don't even need to show you this, because this is just fan fiction at this point.
There is, starting in 2026, this growth of this orange thing that is other revenue.
Who knows what it is?
I don't know.
OpenAI doesn't seem to, and that's really important, because they're going to make what looks like several billion dollars from this next year.
What the fuck is going on?
Every time I look at this company, I feel a little more insane because they've now lowered their expectations of selling their access to their models by $5 billion over the next few years.
What even is OpenAI at this point? Is it just a wrapper? Have they become a wrapper company of their own models? They're no better than Cursor. It's just, it's so weird, and I realize I'm kind of going in circles at this point, but it's...
Even the metaverse, even crypto. Even crypto functioned. It was bad, it's still bad, it is bad cloud software, but it still did the thing. AI doesn't even seem to be doing it, and they need more money to prove that it can't do it. And actually, they don't have enough right now, but they're going to need even more.
I don't even know how people are still taking this seriously because on top of that, did you hear about the Microsoft negotiations over the non-profit?
I've been hearing.
Well, they're delaying it to next year.
They need to convert by the end of the year.
Otherwise, SoftBank cuts their round in half.
And everyone's just like, yeah, it'll be fine, mate.
They'll work it out.
What the fuck?
Have you, have either of you ever...
I know, Ed, you covered Uber a lot.
I don't even think the economics match with that either.
No.
I mean, you know, it's interesting, because I think Uber's strategy, central strategy from the beginning was, you know, we have a few existing playbooks that we need to reference.
You know, the Koch deregulation of the taxi industry in the 90s, as well as regular deregulatory campaigns that they led,
you know, in Seattle and historic campaigns in San Francisco.
There's a lot that we can reference.
And if we can figure out a way to bootstrap ourselves onto the model and onto those previous histories of deregulation while delaying scrutiny long enough for our economics to actually get to a profitable place, we'll get there, which is what they did, right?
But even then, I mean, I feel like from the beginning, as much as I hated a lot of the coverage of Uber for years, the people who were always correct about it were, like, the labor reporters, who, you know, if you spend time talking to the drivers, that will lead you to be a little bit more interested in, you know, what can justify the suffering behind this.
And then you almost always will see that, at that point, there was no way the unit economics worked unless you subsidized everything.
Yeah.
I feel like, similarly, there's something going on with artificial intelligence firms in the global AI value chain, where if you start with a labor analysis and you look at invisible workers or ghost workers that are...
The Kenyan people training the models.
Yeah, or labeling, or, you know, any labor that's out of sight, out of mind. You know, starting there and going up, it becomes hard to ask, okay, how are we supposed to allocate all this capital towards a model that, as it is right now, is cutting all these corners for costs and is still burning tens and tens and tens of billions of dollars and is asking for trillions of dollars more?
But I don't know.
I mean, part of my fear is that they are successful in the way that Uber was, where, well, in AI's case, if you get enough buy-in from the military-industrial complex, if you get enough buy-in from, you know, social programs, interfacing with them and helping cut them or redirect traffic through them, if you get enough buy-in from other tech firms, if you get rents from other startups that need to use and get access to your product, and also if you graft yourself onto everybody's daily interactions, daily lives, the way they interface with the internet, can you actually make it work?
Which is also another way to say, like, what if you just become a massive parasite?
But the funny and grim thing is, AI is a terrible parasite.
It's not good at it.
It doesn't, like... because Uber's success came from being able to graft itself on through utility and subsidized pricing that meant that everyone used it.
And also cabs kind of fucking sucked.
Yeah, I mean, yeah, they sucked.
And also transit in most cities sucks.
And, I mean, the inherent colonialism of most technology applies, very good stuff from Karen Hao, of course, Empire of AI, she did an episode, great book.
But in this one, it's kind of shit colonialism as well, because they don't even, they haven't found a way to actually exploit in a way that's profitable.
They haven't found a way to use human beings because the fundamental thing they want to do, it's kind of like if Uber, sometimes you got in the car and you got out at the wrong place.
And I don't mean like in a different country.
Or you got into the car and it just exploded sometimes.
And I sound like I'm joking, but it really is that bad.
And on top of it, it's not replacing labor.
And it's also not the kind of tech that can replace labor.
So it's my grand theory that they're just playing the hits.
They're just, they're trying, in the same way that you just eloquently put, Uber played the hits of here's how we did deregulation, here's how we did growth, and this is how software grew in the past.
I think the AI is trying to do the same thing, and it's bad at it.
It's like watching a new class of dipshit try and do what the more evil dipshits of the past did and fail.
In fact,
these lobbying groups lobbying for AI, I hear a lot of people saying, oh, they're lobbying.
They're lobbying.
It's like, what for?
Oh, no, they're going to build data centers everywhere.
They already do.
They're going to steal from it.
They already do.
They're trying to replace it.
They're already trying.
Everything that everyone's scared of, they already can do, other than the doing part.
They can't do any of that.
Sorry, I've just remembered as well, yesterday, trying to read the, because I went on the ChatGPT Pro subreddit, because I hate myself.
And I was trying to find someone who'd used the agent.
And every post was someone saying, anyone ever used the agent?
You got any tips?
And everyone's just, it doesn't fucking work, it's broken, it's...
Actually, here's a good question. Ed, have you, can you think of a company that's ever released something just completely broken before?
Because the metaverse kind of worked. It wasn't what they were promising, but it worked. It was a virtual world, ish.
I mean, the pharmaceutical industry has a nice long history of putting out quasi-effective drugs that have all kinds of consequences.
And I can't remember who skeeted about this a few weeks ago.
It was after one of the, you know, it's become such a genre of journalism right now about AI, about, like, this man became delusional and had a psychotic episode because of his ChatGPT relationship.
And it was one of those going around, and someone skeeted about how, if this was coming from a pharmaceutical company, it would be recalled immediately.
There are real regulations in place that could actually claw that back and help save people's lives.
But there are no regulations around AI.
So we get chat GPT gods and spiritual awakenings and all these psychotic episodes.
I do think that that stuff is going to genuinely be its downfall, though, because right now it's burning more money than anyone's ever burned before.
And the most common use case people can talk about is, yeah, it drove that guy insane.
That guy went crazy.
There are children, which is horrifying, killing themselves because of this thing.
That's what it's getting known for.
And otherwise, it's like, yeah, your most annoying friend loves this.
Because that really, it's the, you love them, but they're like, I learned about all this on ChatGPT.
It's like, you didn't.
Well, you know, on that,
Part of my fear, I think, is similar to how, when firms were rolling out facial recognition surveillance, they were insisting that we need biometric surveillance to help keep cities safe, communities safe, products safe.
One angle that people used to attack it was, well, you know, like the racial bias of these things will allow them to,
you know, misidentify black or brown people more often than not.
And they might get arrested or they might get targeted by the police in one way or another.
And that is why we should get rid of the technology, versus: we should just get rid of the technology.
And I think, like, I'm curious
how it's going to go, the concern about it inducing psychosis or inducing suicides, because I could easily see a scenario where they
patch together something that looks like a fix.
And it's not until later, a year or two or three, after people are much more dependent, that other harms come to the foreground.
Whereas we lose something, I think. Not to marginalize the fact that it has immense social costs or harms here, but it does in some ways remind me of the way that debate over facial recognition went. And then they, you know, "solved," with quotation marks, the racial bias problem, and now people have more or less accepted that facial recognition is okay, actually.
And you're right, as long as it's not racist. And that's the thing. I say this as a white bloke, but people
really underplay how endemic that racism is within all algorithms.
You know, COMPAS, which is this very, very old algorithm, is basically Minority Report, both in the reference to the film and in that it reports on minorities: it says, yeah, this person will likely offend again.
And it isn't unilateral, the judge doesn't have to take it, but what a surprise.
It's often used to send Black people into the
jail system, because it's heavily biased against them.
And yeah, I somewhat fear LLMs doing similar.
They're probably already doing it.
And I think that every algorithmic system is inherently racist.
There's not enough people running them who actually fucking try.
It's inherently biased against women.
I think there's also, I wish I had this in front of me, but there's also something about how there are more fans of generative AI who are male than female.
But do you think it's possible that
they'll try to say, oh, we can solve for the psychosis problem, and then that will undermine a large
part of the racism angle?
How do you solve it?
Because it is probably a small-scale problem.
We actually don't know.
And it's not that these companies know or will tell us.
But nevertheless, each one is so horrifying.
It's this is horror.
Like the story in the Wall Street Journal,
Julie Jargon, another person there who wrote that, where it was like a murder-suicide, an actual son of Sam Altman situation, which is fucking terrifying that this is happening.
I don't know if you can completely solve that because all it takes is one popping up again for them to go, fuck.
And it's also not just a chat GPT problem.
There's this woman on TikTok who
has been sharing what Claude has been telling her.
And it's like, oh, this is giving me psychic visions, I think it is.
It's also the ultimate grifter tool.
It's just, it's, that's why I think it's taken off so much on social media as well.
It's a tool that naturally fits into the grifter's toolbox.
I think that I actually have similar fears that they will try and find ways to hand wave away from this if it was the only problem.
But they have so many problems.
They have so many problems at this point.
But I do also think that people need to remember that racism in algorithms, it's in all of them.
I mean, you remember the Microsoft Kinect,
which literally couldn't see Black people, which was a joke in the show Better Off Ted, if anyone watched that.
It's a great show.
It's just, it's insane that, I mean, sadly, it's very obvious why this keeps happening.
It's because the people building these are predominantly white and male, and it's just, they can't really see it.
And also, you can't really fix this stuff without intentionally building the data, which would require them to spend money on something they don't care about.
And they don't really understand what they're doing when they go in to tweak these models.
They don't know how overcorrecting or under-correcting they're being.
So they kind of have to just try and then put it out in the world and then wait for something bad to happen.
It's funny.
Not funny.
It's extremely sad.
In that journal story that you referenced, which I read twice because I was like, yeah, horrified.
And also the reporting was incredible.
It was really, and they said it as, this appears to be the first instance of a murder resulting from, like, we've seen suicides, but this is a murder-suicide.
And when OpenAI responded to the question about,
did the bot ever respond to this guy who was clearly having a delusional episode?
Hey, you need to talk to a real-life therapist.
You need to go to the hospital.
You need to seek help.
And I think they declined to comment.
It was a very evasive maneuver, but ultimately, the Journal had seen that the one time the bot said, please go to the emergency room, was when the guy who was having paranoid delusions said, I think my mom is trying to poison me.
And the bot said, if you think you've been poisoned, you should go to the hospital and get your stomach pumped.
I also, I agree they don't know how to tweak these things, but I must be clear, I've worked in tech for a long time, 16, 17 years now.
And that's not even including my games, journalism work.
It is not hard for them to just have a unilateral thing of, oh, you're talking like this, I'm going to stop. I mean, Anthropic just announced that they have a thing that will cut off a conversation, which is good, and all of them should do this. They could, and this is the whole thing people have seen, where if you start talking like, I'm going to do this, I am becoming this,
it should say, hey, you sound like you're having a paranoid episode, I'm worried about you.
You should go and speak with someone. And then it should just stop working with them.
People will say, well, the way they get around that is by telling the ChatGPT window, oh yeah, I'm writing a story.
I don't know.
Do we need them to write a story about that?
Do we need,
what is the answer?
And the answer is, they don't give a rat's fuck, and no one's making them.
I really, I genuinely think.
Because it's really easy. It's like
social networks as well.
It's you don't ban every slur the moment someone says it.
But I don't know.
You have a thing that says, hey, someone said a slur.
Maybe take a quick look at the slur.
And you could probably just ban that person, because I'm guessing most uses of the n-word on social media are not used in culturally sensitive ways.
They're probably insanely racist.
You just cut them down.
It's like, well, we can't, you can't, it's an issue of free speech.
Fuck you.
No, it's not.
It's an issue of free speech when a person can't exist online without racism happening to them.
Right.
And
these models, they could stop them, but I do think there is a compelling argument of they really don't know what to do.
That every time they touch it, something else breaks.
Honestly, it's kind of the most egregious version of the most common software problem, which is coding is really fucking annoying, and we don't know how these work also.
And generative AI is not going to fix your coding problems no matter how many times you tell us, Sam Altman, that
AGI is just going to fix everything for us.
That's actually been my favorite thing to do right now.
It's going on r/Cursor, r/ChatGPTPro, r/ClaudeAI, and just looking at people complaining.
And what they're complaining about is, hey, I keep hitting rate limits.
Hey, it keeps breaking things. You get one guy every so often who says, this has changed my life, and then you see the responses being like, yeah, but it fucked up all my stuff really badly, it doesn't really work. And we have an upcoming episode with, uh, Cult Voji about this, where it's like, the average software engineer is not just writing code anyway.
And so, this is also, I think this is actually real, this is funny, this is a good one to laugh about: their only real growth market right now is writing code.
The problem is, writing code requires you to use reasoning models.
Reasoning models inherently burn more tokens, and the way they burn tokens is because they're "thinking," though they don't really think.
They look over what a prompt asks for and go, okay, what would be the steps to solve this?
With code, that becomes so
complex.
And the more models reason, the more they hallucinate.
So, the very product that they are building that is going to save them is also the one that is going to burn more compute.
And this is a rumor I've heard from a source: it can take like four to 12 GPUs for one person's particularly rough coding task, like a refactoring.
That's sustainable.
And that's for one of the smaller models as well.
That's for like o4-mini, which is a reasoning model.
It's like, what do you think the big ones are like?
In The Information, they talk about OpenAI having a new
$80 billion in costs that they'll expend
over the next three, four years.
Yeah, and it's 115 by 2029 as well.
Does a good chunk of this come out of, oh, it turns out that compute is incredibly expensive
and we want to center our business model around it?
I think it's that, and I think it's just they don't know what else to do.
It's kind of like we're saying with the Uber model.
They're playing the hits.
It's like, fuck, what did we do in the past?
We spent a lot of money.
Shit, what do we buy?
GPUs, I guess.
We train more.
They're going to spend so much money on training.
And it's like, to what end?
Your last model was a joke.
This is why it was really interesting to see that op-ed that came from Eric Schmidt and his research assistant.
Eric Schmidt is someone who was an architect of this idea, a former CEO of Google,
you know, chairman of a national security
commission that was trying to figure out how to merge artificial intelligence into defense contractors and how to create a foreign policy that would allow America to dominate, really to win an arms race, an AI arms race, with China.
And he comes away saying the strategy he basically helped craft, which was that we need to prioritize AGI so that we can get like a permanent lead
to deter any potential rivals, is scaring everyone.
And it doesn't work.
It's a waste of capital.
It's misallocating capital.
It's imposing all these harms.
And if we look at the competitor that we're going up against, China: by abandoning the AGI pursuit and instead prioritizing ways to experiment with it, integrate it,
and build up practical applications,
there's a much more general public acceptance of it, a willingness to try it out and adopt it.
And because they're not trying to scale out these massive monopolies or one-size-fits-all models,
you see a wider adoption and something that looks like a more sustainable model.
Are we going to follow it?
Probably not.
Of course not.
What I love about chasing China as well is China has had stories for like a year where it's like, yeah, we have a bunch of unused GPU compute.
Yes.
We're massively overbuilt.
Joseph Tsai, I think it was, the Chinese billionaire said, yeah, it's a bubble.
We have a real GPU bubble.
And America's just like, we need to fucking copy it.
We need to beat them.
We're going to run our economy into the ground.
We can just beat China.
It's like we're saying we're going to copy them.
And what is it that we're actually doing?
We're prioritizing developing artificial intelligence that has, like, a question-mark consumer use, that's going to be used in, you know, killing machines and drones, maybe, and for surveillance purposes.
Yeah, and that's not even generative AI. But that's where the actual excitement for any sort of artificial intelligence future is. And this is, you know, the generative AI stuff is talked about as if it is the future, the transformative future, of artificial intelligence. In reality, the actual interest, excitement, and capital is going to, I think, go back to the center of gravity, which is, how do we just figure out the shiniest and most fearsome weaponry?
But I think what's weird about this is, I don't think we've had a bubble that spreads so far into consumers' hearts.
I'm not saying it's as bad as the housing bubble, but consumer software: if we go back to the dot-com boom, I think it was like 45% of Americans who had access to the internet.
It was relatively small in comparison, though the massive overinvestment in fiber happened.
But I don't think people realize that the ChatGPT they see today may not exist in a year or two, at least not in the same way.
It's going to be so, you're already seeing week-long rate limits on Anthropic's Claude.
Like, do people not realize that this could happen?
I guess they don't realize.
And I don't, I think that there's going to be a big dunce mask off.
There are so many people who have fallen behind this.
I mean, not to bridge too aggressively into this, but there was a story in the Wall Street Journal that I shared with you, of course, about this movie called Critterz.
It's with a Z, or a zed for my Canadian and UK listeners, where OpenAI will be providing the compute and the tech to do a movie called Critterz with a budget of less than $30 million, though it's not obvious whether OpenAI and their compute is part of that.
But it's the weirdest shit in the world.
Allison, you were bringing this up, but like they're still using a bunch of humans?
Yeah, so I was reading the same story, and I haven't done any, this came out this morning, so I haven't done my own reporting on it, but I will say, from the story I read, it seems like they're hiring two different animation studios, with artists and writers working on the script.
They're hiring human actors to voice the characters.
And then
some mystery X amount of the movie will be put together with AI.
And I honestly don't know how different that is from a regular Pixar or DreamWorks animation process, but when I first saw it, the, you know, the teaser image is very cute.
And I was like, oh, God, they're like, this is going to be some AI propaganda.
And it's going to be very cute and hard for me to refute.
But actually, it's just a human-made movie, it seems, with extra computer help.
And this picture I'm holding up, of course, you're listening to a podcast, so you can't see this.
It's just this generic blue, furry creature.
It looks like an extra from Monsters Inc., it really does.
And it's not, due to copyright law.
It's the same thing.
It's different.
But what's funny with that as well is, I was mentioning this as a lead-in, it's that $30 million thing.
If that doesn't include OpenAI's compute, it probably costs the same as a Pixar movie, because, actually, 3D animation is one of the few other GPU use cases.
So really, it's just a different thing, right?
It'll be funny also if they save money because they don't do any marketing.
And they're like, see how cheap it is if you don't advertise a movie at all?
It's a solid participant.
I think they might be getting around some Hollywood unions.
Yeah.
Oh, really?
They're going completely overseas too?
I'd have to check.
Don't quote me on it.
But I think, I know they were using at least one overseas animation studio.
So they're probably saving a lot on the animation process by not paying animators, I would guess.
It's so cool.
And also, another fact from the story is, we don't know how long the piece will be.
And if it's like five minutes long, I'm so sorry.
Come on.
Feature length.
Feature length.
They should make it as long as the silent Napoleon film.
Which one?
Six hours.
Oh, six hours.
Yeah.
Actually, I love that.
They should be forced to.
I don't know how they're going to do a feature-length movie, because I don't know if you've cursed yourself by looking at the AI-generated movies that people try.
Every so often, one pops up on Twitter where it's, yeah, I made this entire thing in AI.
And you look at it, and it's like a different fucking thing each time.
That balloon boy one,
different size balloon, different color balloon.
You read the stories about the balloon boy one, it's like, yeah, they kept putting a face on the balloon, we don't know why.
I just, and I know I have a good amount of film and TV people who listen who are quite anxious about this.
This doesn't scare me because they're very vague about the details.
Every other big tech innovation, even other than the metaverse, I guess, they usually like to show you behind the curtain a little bit and talk it up. There'd be a big splashy story in, like, MIT Technology Review or something like that,
or the New York Times would be like, oh, look at this, look at that, look at all these things.
And here they're like, yeah, we're just using some people somewhere in a place, and they will make it.
And in the Wall Street Journal story as well, they showed sketches that would then be turned into AI.
I just,
this feels like a death rattle far more than something terribly scary.
And I understand film and TV people are likely a bit scared, but it's like, they're using out-of-the-country studios.
Of course, I just assume they're skipping union stuff because this is all they do.
It's like, this is the best they can squeak out years in.
Fucking how?
Is this?
And it's like a boring-looking children's thing, I guess, with a name from 2001.
Oh, it does have the producers or writers who worked on Paddington in Peru, apparently.
What, a movie about a criminal?
A sequel to a movie about a criminal who was unfairly attacked by Hugh Grant.
No, sorry.
Well, I mean, then this is the question of, you know, where in the Uber analogy is this?
Is this, you know, Uber's failed expansions where they tried different models overseas, or is this Uber returning home where they take the lessons from overseas or they use those overseas things to buy them a bit more time to then subsidize operations?
Here is my comparison.
This is the drone deliveries.
This is the drone deliveries.
It's the Amazon drone deliveries.
Great job, Casey and Kevin, who several years ago talked about the Amazon drone deliveries.
Never fucking happened, mate.
It's hilarious as well because it is the same thing.
It's like, we cobbled together this.
It sucked.
It took so much money.
It's horribly inefficient.
It sucks.
We hate it.
You hate it.
The customers hate it.
We hate it.
We hate doing this, but we did it.
Ta-da!
And it's okay.
Well, you sure prove that.
To your point, Allison, it's like, yeah, we used the power of AI to hire a bunch of people to do all the real work, because you can't trust this to work.
It does not work.
When I saw a headline for an AI movie, I was like, it's going to be awful.
It's going to, like, writing a movie is hard.
But wait a minute.
Also, there's the other thing of, oh my God, how are they going to lip-sync this shit?
How do you lip-sync this?
You can't generate the same frame.
How are they going to, are they going to go in and post-edit it with humans, I assume?
At this point, how much are you actually relying on AI for?
It's very unclear.
It feels like being at a party where everyone's pissed themselves.
It does feel like some next-level, like, youth propaganda.
Like, if they can get kids to enjoy whatever this monstrous movie is going to be, then maybe there's a longer-term brand play for OpenAI as warm and cuddly and safe for children.
The thing is, French and Korean companies have
already been doing slop-based 3D shows.
I don't mean the famous one,
K-pop Demon Hunters, which is apparently very good.
I've not watched it.
I haven't seen it yet.
Please don't kill me.
I'm not attacking that.
I'm talking about there being a glut of very cheap 3D kids' shows, and they've been around for decades, because you can do this shit
on the cheap now.
It's another thing where the Uber model made sense as long as you didn't count the costs, which is, yeah, this is a way of getting people around that they become dependent on because it's useful.
This is, we have found an extremely expensive and annoying way to do something that we already have a cheap alternative for.
It's not like there was a cheap, a cheap, reliable cab service that Uber replaced.
There was a slow shit cab service that Uber replaced everywhere.
And it's like, is it a good company?
Is it horrible to workers?
Yes, but does it work?
Yes.
This is, we're going to automate everything with the power of AI other than labor,
other than stuff.
That's where the AI story starts to overlap again with crypto, where at least
with Uber, you understand what you're getting as a consumer.
And then with AI, you're kind of like, I don't really know what this is.
I don't know what problem it's solving.
It's like a solution in search of a problem.
And that was crypto's same bag.
It's just like, oh, we invented this cool new alternative money system.
Why?
The thing with crypto is they always had a plan, which sucks.
I really should have seen it coming.
I was not smart enough at the time.
It was they always wanted to just get embedded in the financial system and then just turn the funny money into real money.
AI doesn't have that.
There is no way to turn this into new, like you can't just generate new money.
That's what crypto did, and it fucking sucks.
And by the way, the next crypto crash is going to wash out some real, like it's going to really fuck people up.
I don't think people realize the SBF 2,
who at this point might just be SBF.
Like, if he just gets pardoned and comes back,
honestly, if he comes back and does it again,
no one can complain.
I'm going to law school. I'm going to go in the hyperbolic time chamber. I'm going to join the fight just so I can put him in cuffs.
You're going to put Sam Bankman-Fried back in cuffs?
Yes.
Oh, Sam Altman-Fried would be good.
It's just, I don't see an endpoint for this. Everyone, even the boosters at this point, they're like, and then it will be powerful, when,
how? What are you seeing that even tells you this? I don't even want to fight, just tell me.
I do think there's just so much money behind it.
And there's so many people who've invested.
I was listening to a VC guy get interviewed on the Odd Lots podcast, and I can't remember his name.
So I apologize.
But he was talking about how all these founders, like all these smaller startups that are getting in on the AI game, all these founders have kind of been raised with this idea of Silicon Valley and what it will bring you.
And it's life-changing amounts of wealth.
And when you have enough people, and like the VCs are part of that, the actual tech startups are part of that,
Stanford and kind of the whole ethos of the Valley is like, if you just keep going and work hard enough, you can have generational wealth.
And that is a very powerful force.
And I feel like, I think we're going to see the AI hype last longer than we have in other previous bubbles and tech cycles, in part because the potential for the wealth is outstanding.
And it's like nothing we've ever seen.
But that's the thing.
You're completely right.
Except AI has one problem, which is all the companies lose a shit ton of money, and no one's selling, no one is buying them.
There have been like three acquisitions: one to AMD, one to NVIDIA, and one to a public company called NICE,
which bought, it was Cognigy, I think they were called.
It was like an AI customer service thingy that never really seemed that good anyway.
But that whole thing is true.
And I think that that's what people are.
And I think that the myth that you can just use AI to spin up a startup quickly has kind of fueled that mythos as well.
But the problem is, this is so different because the whole point of Silicon Valley, the whole thing where you can just move there and start a startup is because it didn't cost ruinous amounts of money to start one.
You didn't get $3 million from a VC and expect to spend $2.5 million of that on compute.
You were like, okay,
we're going to have to bootstrap a little bit further.
We've just got a little bit of venture capital.
We're going to go this far.
With this, the cost increases massively at every step of the way. It used to be sales and marketing and just people. AI is people plus compute plus marketing plus this plus that. I think, you know, Perplexity, the AI search engine, spent 164% of their revenue in 2024 just on compute and AWS. It's like, this is not, this whole generational wealth thing,
I fully agree it's what they're using to sell it. I just don't think it's going to work.
And it's scary because there's a wider thing here.
And I really haven't talked about this enough.
The wider problem as well is, all of these people who went to Silicon Valley and raised all this money have pretty much raised to sell companies that will never sell, that they can never take public because they burn too much money.
They don't really have great user bases, because LLMs don't really have those.
And so they're just going to sit there, and then you've got a bunch of VC money tied up in that that will never exit, and a bunch of limited partner money that will never exit.
I think that there is an entirely separate bubble building, and when that bursts, the depression within Silicon Valley is going to be insane.
It's already pretty gnarly, but I think it's like 33% of venture capital went into AI last year.
It's like
eventually people are going to realize there's no exit for anyone.
And I don't know what that does.
I mean, it will piss off limited partners.
The money that comes to VCs is just not going to be there.
Well, so then that's the...
That's the question, right?
Because venture capital encourages, on one level, overvaluing because you need to figure out a way to make more money than what you put in on the exit within acquisition or some merger.
But on another level,
you're also working within a network trying to enrich yourself and your friends or trying to build the infrastructure for future
startups, portfolio options that you and your friends make to come in and make money.
You're building a platform that other people can invest in bits of.
And so, you know, on one level, I really, I really do, I agree that there's not really much of an exit ramp if there's actually no revenue and no profits.
But then also, I'd be curious, do you think they're going to try to ram these things through, similar to what we saw with CoreWeave, right?
Where, you know, you've talked, I think, extensively about
ways in which the financials there do not actually make sense if you're interested in a company that actually has the capital to do what it says it's going to do, which is provide GPU compute to everybody.
And even though it has such a central role in this ecosystem, it can't make profits that justify the capital that it's getting.
It has odious and burdensome debt that should be a massive red flag.
And it might be, you know, round-tripping, right?
Yeah.
But this is supposed to be like the darling of the sector.
And
it got pushed through, part of me feels like, because of
the desperation.
Nvidia and Magnetar push.
And Magnetar Capital, of course, is famous for the CDOs.
Yeah, right.
They're back.
But with Coreweave, they pushed it through onto the markets, but that doesn't mean it can't die.
Right.
Well, so that's the thing.
Do you think that it's possible that they'd be successful in pushing it onto markets, but it dies?
Because
I feel like there will definitely be a lot of investment incineration, but I also do think we're going to have bags.
dumped on everybody.
I think you could do it with something like CoreWeave, and Lambda, which is another situation where Nvidia is the customer, invested in them, and also sells them the GPUs, which they then use as collateral to buy more GPUs using debt, which is so good.
You'll notice that there are no software companies going public.
There's no software AI companies going public.
Everyone thinks that OpenAI goes public here and oh, they'll go.
The market's going to, if they can even convert, the markets are going to eat them for dinner.
Oh yeah, we're going to burn bazillions of dollars forever.
No, the markets didn't like CoreWeave either.
CoreWeave wouldn't have gone public had Nvidia not put more money in.
Lambda is probably going to be exactly the same if they even make it.
You won't see software companies because that's the other thing.
CoreWeave had, albeit with bunches of debt, assets.
They have data centers, kind of, through Core Scientific.
God, I hate these fucking companies.
But they don't, they have things that they can point to and relationships.
Even OpenAI, that's the thing with them.
They don't...
They barely have assets.
Oracle is building their data center in Abilene with Crusoe.
They don't own any of the GPUs.
They have a few GPUs, I think, for research, I've heard, but Microsoft owns most of their infrastructure.
They don't own their R&D.
Well, they do, but Microsoft also has access to that, their intellectual property, same deal.
So it's like, what actual value does an AI startup have?
People always say, oh, they're getting the data.
They get the data so that the data will tell them.
It's like, what?
It's all these horrible stories about, like, oh, DOGE has got an LLM.
They're doing this with it.
What's the end point?
It's scary.
Don't get me wrong.
And then what?
And there never is one.
And
I hope someone, I hope an AI software company goes public.
I want to see this so bad.
I want to see, you have no idea.
If you give me the OpenAI books, the Anthropic books, you become the official homie of Better Offline.
I'll mention you on every episode.
Get me these books.
But, because I think all of them are going to be like a dog's dinner.
There's, I've actually looked at the markets, and Uber,
Uber, by comparison, they did burn a shit ton of money.
It's things like $25 billion between 2019 and 2022.
A lot of that was on sales and R&D.
It's pretty much Groupon.
I think also there's their R&D with autonomous cars, but that's a separate problem.
But it's like, I can't find an example of someone that just annihilated fuel like this, unless it's like planes.
And I think we've established the use case for planes by now.
Clear.
Yeah.
Sold.
It's just all very frustrating.
But you know what?
I think I'm going to call it there.
I think we've had a good conversation.
Allison, where can people find you?
You can find me on Bluesky at amorrow or on cnn.com/nightcap.
Ed.
You can find me on Twitter at BigBlackJacobin.
You can find me on Bluesky at Edward Ongweso Jr.
and on Substack at The Tech Bubble.
And you can find me, of course, at google.com.
Just type in Prabhakar Raghavan.
You'll find me.
I pop right up.
That's all me.
Thank you so much for listening, everyone.
My episodes are coming out in a weird order, because I'm recording this knowing there's a three-parter this week, but this will come out with a monologue of some sort.
Thank you so much for listening, everyone.
Of course, Bahid, thank you for producing here out in New York City.
And yeah, thanks, everyone.
Thank you for listening to Better Offline.
The editor and composer of the Better Offline theme song is Matt Osowski.
You can check out more of his music and audio projects at mattosowski.com.
M-A-T-T-O-S-O-W-S-K-I dot com.
You can email me at ez@betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter.
I also really recommend you go to chat.wheresyoured.at to visit the Discord, and go to r/BetterOffline to check out our Reddit.
Thank you so much for listening.
Better Offline is a production of CoolZone Media.
For more from CoolZone Media, visit our website, coolzonemedia.com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
In a region as complex as the Bay Area, the headlines don't always tell the full story.
That's where KQED's podcast, The Bay, comes in.
Hosted by me, Ericka Cruz Guevarra, The Bay brings you local stories with curiosity and care.
Understand what's shaping life in the Bay Area.
Listen to new episodes of The Bay every Monday, Wednesday, and Friday, wherever you get your podcasts.
Parking shouldn't slow you down.
ParkWhiz gives every driver a shortcut.
Book ahead, save up to 50%, and skip the hassle of circling the block.
Park smarter, park faster, ParkWhiz.
Download the ParkWhiz app today and save every time you park.
Every business has an ambition.
PayPal Open is the platform designed to help you grow into yours with business loans so you can expand and access to hundreds of millions of PayPal customers worldwide.
And your customers can pay all the ways they want with PayPal, Venmo, Pay Later, and all major cards so you can focus on scaling up.
When it's time to get growing, there's one platform for all business, PayPal Open.
Grow today at PayPalOpen.com.
Loan subject to approval in available locations.
Mint is still $15 a month for premium wireless.
And if you haven't made the switch yet, here are 15 reasons why you should.
One, it's $15 a month.
Two, seriously, it's $15 a month.
Three, no big contracts.
Four, I use it.
Five, my mom uses it.
Are you playing me off?
That's what's happening, right?
Okay, give it a try at mintmobile.com/switch.
Upfront payment of $45 for a three-month plan ($15 per month equivalent) required.
New customer offer first three months only, then full price plan options available.
Taxes and fees extra.
See mintmobile.com.
This is an iHeart Podcast.