How To Argue With An AI Booster, Part Two
In part two of this week's three-part Better Offline Guide To Arguing With AI Boosters, Ed Zitron walks you through why the AI bubble is nothing like the dot-com bubble, how the cost of inference is actually going up, and why OpenAI's massive burn rate is nothing like Uber's.
Latest Premium Newsletter: Why Everybody Is Losing Money On Generative AI: https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/
YOU CAN NOW BUY BETTER OFFLINE MERCH! Go to https://cottonbureau.com/people/better-offline and use code FREE99 for free shipping on orders of $99 or more.
BUY A LIMITED EDITION BETTER OFFLINE CHALLENGE COIN! https://cottonbureau.com/p/XSH74N/challenge-coin/better-offline-challenge-coin#/29269226/gold-metal-1.75in
---
LINKS: https://www.tinyurl.com/betterofflinelinks
Newsletter: https://www.wheresyoured.at/
Reddit: https://www.reddit.com/r/BetterOffline/
Discord: chat.wheresyoured.at
Ed's Socials:
https://www.instagram.com/edzitron
See omnystudio.com/listener for privacy information.
Listen and follow along
Transcript
This is an iHeart podcast.
On Fox One, you can stream your favorite news, sports, and entertainment live, all in one app.
It's f'ing raw and unfiltered.
This is the best thing ever.
Watch breaking news as it breaks.
Breaking tonight, we're following two major stories.
And catch history in the making.
Gibby, meet Freddy.
Debates, drama, touchdowns. It's all here.
Hi, I'm Morgan Sung, host of Close All Tabs from KQED, where every week we reveal how the online world collides with everyday life.
There was the six-foot cartoon otter who came out from behind a curtain.
It actually really matters that driverless cars are going to mess up in ways that humans wouldn't.
Should I be telling this thing all about my love life?
I think we will see a Twitch streamer president maybe within our lifetimes.
You can find Close All Tabs wherever you listen to podcasts.
There's more to San Francisco with the Chronicle.
More to experience and to explore.
Knowing San Francisco is our passion.
Discover more at sfchronicle.com.
What happens when Delta Airlines sends four creators around the world to find out what is the true power of travel?
I love that both trips had very similar mental and social perks.
Very much so.
On both trips, their emotional well-being and social well-being went through the roof.
Find out more about how travel can support well-being on this special episode of the Psychology of Your 20s, presented by Delta.
Fly and live better.
Listen wherever you get your podcasts.
Coolzone Media.
Hello, and welcome to Better Offline.
I'm your host, Ed Zitron.
This is part 2 of our three-part series on how to argue with an AI booster.
When we last left off, I'd started talking about some of the most common and vacuous talking points used by those who defend the generative AI industry, and why a lot of them are wholly without merit.
These are the booster quips, assertions that, if you don't know much, sound convincing, but are easily disproven with the right information.
And in that last episode, we addressed the quips that say we're in the early days of AI, and that people doubted smartphones and the internet, which they didn't, the way they doubt generative AI, which they should.
In the cycle of grief, that's the denial stage.
Now we're going to move on to bargaining.
This is just like the dot-com boom.
Even if all of this collapses, the overcapacity will be practical for the market like the fiber boom was.
All right, folks, time for a little history.
You know me, I love me some history.
The fiber boom began after the Telecommunications Act of 1996 deregulated large parts of America's communications infrastructure, creating a massive boom. A $500 billion one, to be precise, primarily funded with debt.
Obviously, we're still using the infrastructure bought during that boom, and this fact is used as a defense of the insane capex spending surrounding generative AI.
High-speed internet is useful, right?
Sure, but the fiber optic boom period was also defined by a glut of overinvestment, ridiculous valuations, and genuine outright fraud.
In any case, this is not remotely the same thing, and anyone making this point needs to learn the very fucking basics of technology.
Let's get going.
Now, the fiber optic cable of this era, the AI GPU, is mostly owned by a few companies.
42% of NVIDIA's revenue is from the Magnificent 7, and the companies buying these GPUs are, for the most part, not going to go bust once the AI bubble bursts.
You can also already get the cheap fiber of this era: cheap AI GPUs are already here.
GPUs are depreciating assets, meaning that the good deals are already happening.
I found an NVIDIA A100 for $2,000 or $3,000 multiple times on eBay, and you can get the more powerful H100s for, I think, around $30,000, when those things go for about $45,000 retail, so not a brilliant discount.
AI GPUs also do not have a wide variety of use cases, and they're tied to CUDA, NVIDIA's proprietary programming libraries and APIs, which is how these GPUs get integrated into applications. While there are other use cases, scientific simulations, image and video processing, data science, analytics, medical imaging, and so on, CUDA is not a one-size-fits-all digital panacea.
Fiber optic cable, on the other hand, was, and it was also put everywhere; it truly did set up the future.
What are these GPUs setting up exactly?
Also, widespread access to cheaper GPUs has already happened.
And what new use cases are there?
What are the new innovative things we can do?
As a result of the AI bubble, there are now many, many, many, many, many different vendors to get access to GPUs.
You can pay at an hourly rate.
Who knows if it's profitable, but you can do it.
And sometimes you can get them for as little as $1 an hour, which is really not good.
It definitely isn't making them money.
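To put numbers on why a dollar an hour doesn't make anyone money, here's a back-of-the-envelope payback sketch in Python. The GPU prices are the rough figures from earlier in this episode; the utilization and power numbers are illustrative assumptions, not measured data.

```python
# Hypothetical payback math for renting out an H100 at $1/hour.
# GPU prices are the episode's rough figures; utilization and power
# costs are assumptions for illustration only.

HOURS_PER_YEAR = 24 * 365                # 8,760 hours

gpu_cost = 30_000                        # used H100 (retail runs ~$45,000)
rental_rate = 1.00                       # dollars per GPU-hour, the low end quoted
utilization = 0.70                       # assumed share of hours actually rented
power_and_cooling = 0.30                 # assumed dollars per hour to keep it running

revenue = rental_rate * HOURS_PER_YEAR * utilization     # ~$6,132/year
running_costs = power_and_cooling * HOURS_PER_YEAR       # ~$2,628/year
net = revenue - running_costs                            # ~$3,504/year

print(f"Years to pay back the GPU alone: {gpu_cost / net:.1f}")  # ~8.6 years
```

On those assumptions, the chip takes the better part of a decade to pay for itself, and GPUs depreciate far faster than that.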
But putting the financial collapse aside, while they might be cheaper when the AI bubble bursts, does cheaper actually enable people to do new stuff?
Is cost the problem?
Because I think the costs are going to go up.
But even if they weren't going up, what are the things that you could do that are new?
What is the prohibitive cost?
No one can actually answer this question because the answer isn't fun.
GPUs are built to shove massive amounts of compute into one specific function again and again and again, like generating the output of a model, which remember mostly boils down to complex maths.
Unlike CPUs, a GPU can't easily change tasks or handle many little distinct operations, meaning that these things aren't going to be adopted for another mass market use case because there probably isn't one.
In simpler terms, this was not an infrastructure build out.
The GPU boom is a heavily centralized capital expenditure-funded asset bubble where a bunch of chips will sit in warehouses or kind of fallow data centers waiting for somebody to make up a use case for them.
And if an enduring one existed, we'd already have it, because we already have all the fucking GPUs.
Now, here's a really big booster quip I have been looking forward to. I get a lot of people asking me about this.
You're so stupid.
Why am I stupid exactly?
Well, five really smart guys got together and wrote AI 2027, which is a very real-sounding extrapolation that... shut the fuck up.
Shut up.
Shut up.
AI 2027 is fan fiction.
If you were scared by this and you're not a booster, you shouldn't feel bad, by the way.
This was written to scare you.
And by the way, if you don't know what it is I'm talking about, you should consider yourself lucky.
It's essentially a piece of speculative fiction that describes a world where Gen AI companies build fatter models that get exponentially better and the US and China are embroiled in an AI arms race.
It's really silly.
It's so very silly.
And I call it fanfiction because it is.
If we're thinking about this in purely intellectual terms, it's up there with My Immortal.
And no, I'm not explaining that.
You can Google that one for yourselves.
It doesn't matter if all the people writing the fanfiction are scientists or that they have the right credentials.
They themselves say that AI 2027 is a guess, an extrapolation, which means guess, with expert feedback, which means someone editing your fanfiction, and informed by experience at OpenAI.
There are people that worked on the shows they write fanfiction about.
I'm not even insulting fanfiction, by the way.
Go nuts.
You're more... you are 100 times more ethically positive than these people. At least you admit it's fanfiction.
Could Knuckles get pregnant?
I'm sure somebody's found out.
I'm not going to go line by line and cut this apart any more than I'm going to go and do a lengthy takedown of someone's erotic Banjo-Kazooie story, because both are fictional.
The entire premise of this nonsense is that at one point someone invents a self-learning agent that teaches itself stuff and it does a bunch of other stuff requiring a bazillion compute points with different agents with different numbers after them.
There is no proof that this is possible, nobody has done it and nobody will do it.
AI 2027 was written specifically to fool people that want to be fooled, with big charts and the right technical terms used to lull the credulous into a wet dream in a New York Times column where one of the writers folds their hands and looks worried.
It was also written to scare people that are already scared.
It makes big, scary proclamations with tons of links to stuff that looks really legitimate, but when you piece it all together, it's literally just fanfiction.
Except really not that endearing.
My personal favorite part is mid-2026, China Wakes Up, which involves China's intelligence agencies trying to steal Open Brain's agent.
No idea who this company could be referring to.
Please email me if you can work it out, to idontcare@business.org.
That comes before the headline of AI takes some jobs, after Open Brain releases a model. Oh god, I'm so bored even fucking talking about this.
Now, Sarah Lyons puts this well, arguing that AI 2027, and AI in general, is no different from the spurious spectral evidence used to accuse someone of being a witch during the Salem witch trials.
And I quote, and the evidence is spectral.
What is the real evidence in AI 2027 beyond trust us and vibes?
People who wrote it cite themselves in the piece.
Do not demand I take this seriously.
This is so clearly a marketing device to scare people into buying your product before this imaginary window closes.
Don't call me stupid for not falling for your spectral evidence.
My whole life, people have been saying artificial intelligence is around the corner and it never arrives.
I simply do not believe a chatbot will ever be more than a chatbot.
And until you show me it doing that, I will not believe it.
Anyway, AI 2027 is fan fiction, nothing more.
And just because it's full of fancy words and has five different grifters on its byline doesn't mean a goddamn thing.
There's more to San Francisco with the Chronicle.
There's more food for thought, more thought for food.
There's more data insights to help with those day-to-day choices.
There's more to the weather than whether it's going to rain.
And with our arts and entertainment coverage, you won't just get out more, you'll get more out of it.
At the Chronicle, knowing more about San Francisco is our passion.
Discover more at sfchronicle.com.
Be honest, how many tabs do you have open right now?
Too many?
Sounds like you need Close All Tabs from KQED, where I, Morgan Sung, doomscroll so you don't have to.
Every week, we scour the internet to bring you deep dives that explain how the digital world connects and divides us all.
Everyone's cooped up in their house.
I will talk to this robot.
If you're a truly engaged activist, the government already has data on you.
Driverless cars are going to mess up in ways that humans wouldn't.
Listen to Close All Tabs, wherever you get your podcasts.
The horrible events of September 11th continue to leave their mark on our nation.
Many of those who served need your help, and they need Wounded Warrior Project.
I'm asking you to join us with your gift right now.
So many veterans like myself physically left the war, but we live and deal with the war daily.
Your donation, it warms my heart.
You don't know how much that means to me to see the support of my fellow Americans.
Please give to those who gave so much for us.
Mint is still $15 a month for premium wireless. And if you haven't made the switch yet, here are 15 reasons why you should. One: it's $15 a month. Two: seriously, it's $15 a month. Three: no big contracts. Four: I use it. Five: my mom uses it. Are you... are you playing me off? That's what's happening, right? Okay. Give it a try at mintmobile.com/switch. Upfront payment of $45 for three-month plan, $15 per month equivalent, required. New customer offer, first three months only, then full-price plan options available.
Taxes and fees extra.
See mintmobile.com.
Now.
Now, now, now.
Now, folks.
We've all been waiting for this moment.
And here's the ultimate booster quip.
The cost of inference is coming down.
This proves that things are getting cheaper.
Here's a bonus trick for you before I get to my bet.
Here we go.
Ask them to explain whether things have actually got cheaper.
And if they say they have, ask them why there are no profitable AI companies.
If they say they're in the growth stage, ask them why there are no profitable AI companies.
Again, I'd say it's been several years and we've not got one.
At this point, they should try and kill you.
But really, I'm about to be petty.
I'm about to be petty for a fucking reason, though.
In an interview on a podcast from earlier this year, one I will not even name, because the journalist in question did not back me up and it pisses me off, journalist Casey Newton said the following about my work.
You don't think that that kind of flies in the face of Sam Altman saying that we need billions of dollars for years?
No, not at all.
And I think that's why it's so important when you're reading about AI to read people who actually interview people who work at these companies and understand how the technology works, because the entire industry has been on this curve where they are trying to find micro-innovations that reduce the cost of training the models and to reduce the cost of what they call inference, which is when you actually enter a query into ChatGPT.
And if you plotted the curve of how the cost has been falling over time, DeepSeek is on that curve, right?
So everything that DeepSeek did, it was expected by the AI labs that someone would be able to do.
The novelty was just that a Chinese company did it.
So to say that it like upends expectations of how AI would be built is just purely false and is the opinion of somebody who does not know what he's talking about.
Newton then says, several octaves higher, which shows you exactly how mad he isn't, that he thought what he said was very civil, and that there are things that are true and there are things that are false, like you can choose which ones you want to believe. I'm not going to be so civil. Other than the fact that Casey refers to micro-innovations, the fuck you talking about, and DeepSeek being on a curve that was expected, he makes, as many do, two very big mistakes. And personally, if I was doing this, I would not have said these things in a sentence that began with me suggesting that I, being Casey Newton in this example, knew how the technology works.
Now, here's the Casey Newton quip: inference, which is when you actually enter a query into ChatGPT. This statement is false. It's not what inference means.
Inference, and I've gotten this wrong in the past too, I'm being accountable, is everything that happens from when you put in a prompt to generate an output.
It's when an AI based on your prompt infers meaning.
To be more specific, and quoting Google, machine learning inference is the process of running data points into a machine learning model to calculate an output, such as a single numerical score.
Except that's what these things are bad at.
But nevertheless, Casey will try and weasel out of this one and say this is what he meant.
It wasn't.
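To make that definition concrete, here's a minimal sketch of what inference actually covers. The model here is a toy stand-in, a hypothetical function, not any real model or API.

```python
# A toy sketch of what "inference" covers: every forward pass from prompt
# to finished output, not just the moment you hit enter. toy_model is a
# hypothetical stand-in, not a real model or API.

def toy_model(tokens: list[str]) -> str:
    """Pretend forward pass: read everything so far, emit the next token."""
    return "word" if len(tokens) < 8 else "<end>"

def run_inference(prompt: str) -> str:
    tokens = prompt.split()
    while True:
        next_token = toy_model(tokens)   # one forward pass per generated token
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return " ".join(tokens)

# Longer outputs mean more forward passes, which is why reasoning models,
# which generate long hidden chains of tokens, burn so much more compute.
print(run_inference("explain inference"))
```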
He also said, if you plotted the curve of how the cost of inference has been falling over time.
Well, that...
That's wrong, Casey.
That's wrong, my man.
The cost of inference has gone up over time.
Now, Casey, like many people who talk about stuff without learning about it first, is likely referring to the fact that the price of tokens for some models has gone down in some cases.
But you know what, folks?
Let's establish some facts about inference.
I'm doing the train.
I'm pulling the big horn on the invisible train, I'm cooking.
Now inference as a thing that costs money is entirely different to the price of tokens, and conflating the two is journalistic malpractice.
The cost of inference would be the price of running the GPU and the associated architecture, a cost we do not at this point have any real insight into.
Token prices are set by the people who sell access to the tokens, such as OpenAI and Anthropic.
For example, OpenAI dropped the price of its o3 model's tokens almost immediately after the launch of Claude Opus 4.
Do you think it did that because the price of serving the models got cheaper?
If you do, I don't know how you possibly put your trousers on every morning without cutting yourself in half.
Now, the cost of inference conversation comes from articles that say that we now have models that are cheaper, that can now hit higher benchmark scores.
Though, the article I'm referring to, which will be in the show notes, is from November 2024, and the comparison it makes is between GPT-3, which is from November 2021, and Llama 3.2 3B, from September 2024.
Now, the suggestion is, in any case, that the cost of inference is going down 10x year over year.
The problem, however, is that these are raw token costs, not actual evaluations of token burn in a practical setting. And, well, I realize that was a bit technical. These are just the raw prices of doing something; they don't actually tell you how many tokens will be burned, or at what volume, and that changes things.
And well, wouldn't you know it, the cost of inference actually went up as a result.
In an excellent blog from Kilo Code, and I did not get the chance to find out the pronunciation of the author's surname, so I'm just going to spell it: it's E-W-A, S-Y-Z-S-Z-K-A.
I am so sorry.
I would rather spell it out than actually mispronounce it.
I hate when people say Zitron wrong.
Great blog.
Anyway, let me quote.
Application inference costs increased for two reasons.
The Frontier model's cost per token stayed constant, and the token consumption per application grew a lot.
Token consumption per application grew a lot because models allowed for longer context windows and bigger suggestions from the models.
The combination of a steady price per token and more token consumption caused app inference costs to grow about 10 times over the past two years.
To explain that in really simple terms: while the costs of old models may have decreased, new models, which you need to do most things, cost about the same, and the reasoning these new models use actually burns way, way more tokens.
When these new models reason, they break a user's input down into component parts, then run inference on each of those parts.
When you plug an LLM into an AI coding environment, it will naturally burn an absolute shit ton of tokens, in part because of the large amount of information you have to load into the prompt and the context window, which is the amount of information you can load in at once, and in part because generating code is inference intensive, as is breaking down all those coding tasks, each of which requires a coding tool and takes a bunch of inference itself.
It's really bad.
In fact, the inference costs are so severe that Kilo Code says that a combination of a steady price per token and more token consumption caused app inference costs to grow about 10x over the last two years.
I'm repeating myself, I realize, but I really need you to get one thing, which is that the cost of inference went up.
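Here's that arithmetic in the simplest possible form. The flat per-token price and the token counts below are illustrative assumptions; the shape of the math is the point.

```python
# The Kilo Code argument in numbers: a steady price per token times ~10x
# more tokens consumed per task equals ~10x higher app inference costs.
# All figures are illustrative assumptions, not real pricing.

price_per_million_tokens = 10.00       # assumed flat frontier-model price

tokens_per_task_then = 2_000           # assumed: short prompt, short completion
tokens_per_task_now = 20_000           # assumed: big context plus reasoning chains

def task_cost(tokens: int) -> float:
    return tokens / 1_000_000 * price_per_million_tokens

print(f"then: ${task_cost(tokens_per_task_then):.3f} per task")   # $0.020
print(f"now:  ${task_cost(tokens_per_task_now):.3f} per task")    # $0.200, 10x
```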
But I'm not done.
I refuse to let this point go because people love to say the cost of inference is going down when the cost of inference has increased.
And they do so to a national audience, all while suggesting I'm wrong somehow and acting superior.
I don't like being made to feel this way.
I don't think it's nice to do this to people.
And if you're gonna do it, if you have the temerity to call someone out directly, at least be fucking right.
I'm not wrong.
You're wrong.
In fact, software developer influencer Theo Brown recently put out a video called I Was Wrong About AI Costs, They Keep Going Up, which he breaks down as follows.
Reasoning models are significantly increasing the amount of output tokens being generated.
These tokens are also more expensive.
In one example, Brown finds that Grok 4's reasoning mode uses 603 tokens to generate two words.
This was a problem across every single reasoning model, as even cheap reasoning models would do the same thing.
As a result, tasks are taking longer and burning more tokens.
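To see why that matters for your bill, here's Brown's example as a cost comparison. The 603-token figure is his; both per-token prices below are illustrative assumptions.

```python
# Why per-token price no longer tells you what a task costs: a model that
# is cheaper per token but "reasons" can cost far more per answer.
# The 603-token figure is from Theo Brown's Grok 4 example; both prices
# are illustrative assumptions.

def task_cost(price_per_million: float, tokens: int) -> float:
    return price_per_million * tokens / 1_000_000

direct = task_cost(15.00, 10)      # assumed pricier model, answers in ~10 tokens
reasoning = task_cost(5.00, 603)   # assumed cheaper model, burns 603 tokens

print(f"direct:    ${direct:.6f}")     # $0.000150
print(f"reasoning: ${reasoning:.6f}")  # $0.003015, roughly 20x the cost
```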
Another writer called Ethan Ding noted a few months ago that reasoning models burn so many tokens that there is no flat subscription price that works in this new world, as the number of tokens they consumed went absolutely nuclear.
The price drops have also, for the most part, stopped.
You cannot at this point fairly evaluate whether a model is cheaper just based on its cost per token, because reasoning models inherently burn, and are built to inherently burn, more tokens to create an output.
Reasoning models are also the only way that model developers have been able to improve the efficacy of new models, using something called test time compute to burn extra tokens to complete a task.
And in basically anything you're using today, there's got to be some sort of reasoning model, especially if you're coding.
The cost of inference has gone up.
Statements otherwise are purely false and are the opinion of somebody who does not know what he's talking about.
But you ask, could the costs of inference go down?
Maybe?
It sure isn't trending that way, nor has it gone down yet.
I also predict that there's going to be some sort of sudden realization in the media that inference is going up, which has kind of already started.
The Information had a piece on it in late August, where they note that Intuit paid $20 million to Azure last year, primarily to access OpenAI's models, and is on track to spend $30 million this year, which outpaces the company's revenue growth in the same period, raising questions about how sustainable the spending is and how much of the cost it can pass along to customers.
Christopher Mims in the Wall Street Journal also had a piece about the costs going up.
Do not be mad at Chris.
Chris and I chatted before he submitted that piece.
Like, he literally called me out on Bluesky.
It fucking rocks, by the way.
Big up to Chris Mims, because it's nice to see the mainstream media actually engaging with these things, even though it's dangerous to the bubble.
But you know what?
The truth must win out.
And the problem here is that the architecture underlying large language models is inherently unreliable.
I imagine OpenAI's introduction of the router to GPT-5 is an attempt to moderate both the costs of the model chosen and reduce the amount of exposure to reasoning models for simple queries.
Though Sam Altman was boasting on August 10th about the significant increase in both free and paid users' exposure to reasoning models. They don't teach you this in business school.
Worse still, a study written up by VentureBeat found that open-weight models burn between 1.5 and 4 times more tokens, in part due to a lack of token efficiency, and in part thanks to, you guessed it, reasoning models.
I quote.
The findings challenge a prevailing assumption in the AI industry that open source models offer a clear economic advantage over proprietary alternatives.
While open source models typically cost less per token to run, the study suggests that this advantage could be, and I quote the study, easily offset if they require more tokens to reason about a given problem.
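That offset is easy to see with a quick calculation. The 1.5x to 4x range is from the study; the per-token prices and task size are illustrative assumptions.

```python
# The study's "easily offset" point in numbers: a lower per-token price
# stops mattering if the open-weight model burns enough extra tokens.
# The 1.5x-4x multipliers are the study's range; prices are assumptions.

proprietary_price = 10.00          # $/1M tokens, assumed
open_weight_price = 4.00           # $/1M tokens, assumed cheaper per token
proprietary_tokens = 5_000         # assumed tokens for one task

proprietary_cost = proprietary_price * proprietary_tokens / 1e6    # $0.050

for multiplier in (1.5, 2.0, 4.0):
    open_cost = open_weight_price * proprietary_tokens * multiplier / 1e6
    print(f"{multiplier}x tokens: open-weight ${open_cost:.3f} "
          f"vs proprietary ${proprietary_cost:.3f}")
# At 1.5x the open model is still cheaper; at 4x the discount has vanished.
```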
And models keep getting bigger and more expensive too.
So why did this happen?
Well, it's because model developers hit a wall of diminishing returns, and the only way to make models do more was to make them burn more tokens to generate a more accurate response, which is a very simple way of describing reasoning, a thing that OpenAI launched in September 2024, and others followed.
As a result, all the gains from powerful new models come from burning more and more tokens.
The cost per million token number is no longer an accurate measure of the actual cost of generative AI because it's much, much, much, much harder to tell how many tokens a reasoning model may burn.
And it varies, as Theo Boing... Theo Boing, I'm keeping that, all right? You get the real cuts. It varies, as Theo Brown noted, from model to model.
In any case, there really is no changing this path.
These companies are out of ideas.
Now, another, another one of my favorite ultimate booster quips.
This is a classic, and I still get this on social media.
I have people yapping in my ear saying, OpenAI and Anthropic are just like Uber because Uber burned $25 billion over the course of 15 or so years.
And look, look, Edward, they're now profitable.
Why are you calling me Edward? Shut up.
This proves that OpenAI, a totally different company with different economics, will be totally fine.
So I've heard this argument maybe 50 times in the last year, to the point that I had to talk about it in my piece, How Does OpenAI Survive?, which I also turned into a podcast around July 2024. Go back and listen. I'll link to it in the piece. Yada yada yada.
Nevertheless, people make a few points about Uber and AI that I think are fundamentally incorrect, and I'm going to break them down for you.
Now, they claim that AI is making itself too big to fail, embedding itself everywhere and becoming essential.
And none of these things are the case.
I've heard this argument a lot, by the way, and it's one that's both ahistorical and alarmingly ignorant of the very basics of society.
But it, the government!
No, no, no, no, no.
Now, you've heard.
You've heard OpenAI got a $200 million defense contract with an estimated completion date of July 2026.
And just to be clear, that's up to $200 million.
And that they're selling ChatGPT Enterprise to the US government for a dollar a year, along with Anthropic doing the same thing.
And even Google's doing it, except they're doing 40 cents for a year.
Now you're probably hearing this and thinking, oh shit, this means the government has paid them.
They're never going away.
And I cannot be clear enough that you believing this is the very intention of these deals.
They are built specifically to make you feel like these things are never going away.
This is also an attempt to get in with the government at a rate that makes trying these models a no-brainer.
At which point I ask: and? The government having cheap access to AI software does not mean that the government relies on it.
Every member of the government having access to ChatGPT, something that is not even necessarily the case, does not make this software useful, let alone essential.
And if OpenAI burns a bunch of money making it work for them, it still won't be essential because large language models are not actually that useful for doing stuff.
Now, let's talk Uber.
Uber was and is useful, which eventually made it essential.
Uber used lobbyist Bradley Tusk to steamroll local governments into allowing Uber to operate in their cities, but Tusk did not have to convince local governments that Uber was useful or have to train people how to use Uber.
Uber's too big to fail moment was that local cabs kind of fucking suck just about everywhere.
You ever try and take a yellow cab from downtown Manhattan to Hoboken, New Jersey or Brooklyn or Queens?
Did you ever try and pay with a credit card?
How about trying to get a cab outside a major metropolitan area?
Do you remember how bad it was?
It was really awful.
Like, I don't think people realize or remember how bad it was.
And I'm not saying that Uber is good.
I'm not glorifying Uber in any way, but the experience that Uber replaced was very, very bad.
As a result, Uber did become too big to fail because people now rely on it because the old system sucked.
Uber used its masses of venture capital to keep prices low to get people used to it too, but the fundamental experience was better than calling a cab company and hoping they showed up.
I also want to be clear that this is not me condoning Uber.
Take public transport if you can.
To be clear, Uber has created a new kind of horrifying, extractive labor practice, one which deprives people of benefits and dignity, and it paid off academics to help the media gloss over the horrors of its platform. It has also now had to increase prices; that's how it reached profitability.
That isn't something that's going to happen with generative AI; the costs are just too high. Way too high.
The ocean delights us.
Some marvel at the colorful world below the surface.
The ocean feeds us.
Others find nourishment in its bounty.
The ocean teaches us how our everyday choices impact even the deepest places.
The ocean moves us, whether we're riding a wave or soaking in its breathtaking beauty.
The ocean connects us.
Find your connection at Monterey Bay Aquarium.org slash connects.
There's a lot going on in Hollywood.
How are you supposed to stay on top of it all?
Variety has the solution.
Take 20 minutes out of your day and listen to the new Daily Variety podcast for breaking entertainment news and expert perspectives.
Where do you see the business actually heading?
Featuring the iconic journalists of Variety and hosted by co-editor-in-chief Cynthia Littleton.
The only constant in Hollywood is change.
Open your free iHeartRadio app, search Daily Variety, and listen now.
Tired of spills and stains on your sofa?
Washablesofas.com has your back, featuring the Anibay collection, the only designer sofa that's machine washable inside and out, where designer quality meets budget-friendly prices.
That's right, sofas started just $699.
Enjoy a no-risk experience with pet-friendly, stain-resistant, and changeable slip covers made with performance fabrics.
Experience cloud-like comfort with high-resilience foam that's hypoallergenic and never needs fluffing.
The sturdy steel frame ensures longevity, and the modular pieces can be rearranged any time.
Check out washablesofas.com and get up to 60% off your Anibay sofa, backed by a 30-day satisfaction guarantee.
If you're not absolutely in love, send it back for a full refund.
No return shipping or restocking fees, every penny back.
Upgrade now at washablesofas.com.
Offers are subject to change and certain restrictions may apply.
But anyway, what is essential about generative AI?
What exactly, and be specific, is the essential experience of generative AI?
If ChatGPT disappeared tomorrow, what actually disappears?
And on an enterprise or governmental level, what exactly are these tools doing for governments that would make removing them so painful?
What use cases?
What outcomes?
If your answer here is to say, well, they're putting it in and they're choosing, they're choosing which people to cut out of benefits and they're doing it.
Please, goddamn.
This is what they want you to do.
They want you to be scared so they can feel powerful.
They're not doing that.
You notice that we get all these horrible stories, by the way, of internal government teams shoving stuff into LLMs. You know what we don't get? The other thing. We don't get the, oh, and then this happened. It's just that they're doing this scary, bad thing that they shouldn't be doing. They shouldn't be putting people's private information into these things.
Anyway, I'm rambling.
Uber's essential nature is that millions of people use it in place of regular taxis, and it effectively replaced decrepit, exploitative systems like the yellow cab medallions in New York with its own tech-enabled exploitation system that, nevertheless, worked far better for the user.
Okay, I also want to do a side note just to acknowledge that the disruption from Uber brought something to the medallion system that was genuinely horrendous.
The consequences were horrifying for the owners of the medallions, some of whom had paid more than a million dollars for the privilege of driving a New York cab and were burdened under mountains of debt.
That whole system is so fucking evil.
I think it's horrifying.
And I think the payday loan people involved should all be in fucking prison.
Worst scum of the world.
The people who are taking advantage of people who come to this country to drive a fucking cab, a cab they have to take out massive loans to buy.
That is evil.
Uber is also, just to be clear... but that also is. That's the point I'm trying to make. People should feel sorry for the victims of that system.
That system was a kind of corruption unto itself.
Anyway, getting back to the thing, because I don't know, I feel, I actually feel a lot for the people who were the victims of the medallion system.
It's fucking rough.
And every time I think of it, I feel very sad inside.
But let's get back to the episode.
I don't want to think about that any longer.
There really are no essential use cases for ChatGPT or really any Gen AI system.
You cannot point to one use case that is anywhere near as necessary as cabs in cities.
And indeed, the biggest use cases, things like brainstorming and search, are either easily replaced by any other commoditized LLM or already exist in the case of Google search.
Now, let's do another booster quip.
Data centers are important economic growth vehicles and are helping drive innovation and jobs throughout America.
Having data centers promotes innovation, making OpenAI and AI data centers essential.
And the answer to that is nope.
Nope.
Sorry, this is a really simple one.
These data centers are not in and of themselves driving much economic growth, other than the costs of building them, which I went into last episode.
As I've discussed again and again, there's maybe $40 billion in revenue and no profit coming out of AI companies.
There isn't any economic growth.
They're not holding up anything other than the massive infrastructure built to make them make no money and lose billions.
There's no great loss associated with the death of large language models or the death of this era.
Taking away Uber would be genuinely catastrophic for some people's ability to get places, and for people's jobs, even if they are horrifyingly underpaid.
But here's another booster quip.
Uber burned a lot of money, $25 billion or more, to get where it is today.
Ooh, Mr. Zitron, Mr. Zitron, you're dead.
And my response is that OpenAI and Anthropic have both separately burned more than four times as much money since the beginning of 2024 as Uber did in its entire existence.
So the classic and wrong argument about OpenAI and companies like OpenAI is that Uber burned a bunch of money and is now cash flow positive or profitable.
I want to be clear that Uber's costs are nothing like large language models, and making this comparison is ridiculous and desperate.
But let's talk about raw losses, shall we, and where people are making this assumption. So Uber lost $24.9 billion in the space of four years, from 2019 to 2022, in part because of the billions it was spending on sales and marketing and R&D, $4.6 billion and $4.8 billion respectively in 2019 alone.
It also massively subsidized the cost of rides, which is why prices had to increase, and spent heavily on driver recruitment, burning cash to get scale, you know, the classic Silicon Valley way.
This is absolutely nothing like how large language models are growing and I'm tired of defending this point.
But defend it I shall.
OpenAI and Anthropic burn money primarily through compute costs and specialized talent.
These costs are increasing, especially with the rush to hire every single AI scientist at the most expensive price possible.
There are also essential, immovable costs that neither OpenAI nor Anthropic has to shoulder: the construction of the data centers necessary to train and run inference for their models, and of course the GPUs inside them, which I will get to in a little bit.
Yes, Uber raised $33.5 billion through multiple rounds and post-IPO debt, though it raised about $25 billion in actual funding.
Yes, Uber burned an absolute arse-ton of money.
Yes, Uber has scale.
But Uber has not burned money as a means of making its product functional or useful.
Uber worked immediately.
I mean, was it 2012, I think I used it for the first time?
Maybe earlier?
No, no, it would have been 2010.
It worked immediately.
You used it.
You were like, wow, this I can just put in my address.
I don't have to say my address three times because I have a British accent accent and nobody can fucking understand me sometimes.
You can, though.
You're special.
Yeah, it was really obvious that it worked.
And also, the costs associated with Uber, its capital expenditures from 2019 through 2024 were around $2.2 billion, by the way, are minuscule compared to the actual real costs of OpenAI and Anthropic.
Both OpenAI and Anthropic lost around $5 billion each in 2024, but their infrastructure was entirely paid for by either Microsoft, Google, or Amazon.
And by which I mean the building of it and the expansion therein.
While we don't know how much of this infrastructure is specifically for OpenAI or Anthropic, as the largest model developers, it's fair to assume that a large chunk, at least 30%, of Amazon's and Microsoft's capital expenditures has been to support these loads.
Great sentence to cut and listen to again.
I also leave out Google as it's unclear whether it's expanded its infrastructure for Anthropic, but we know Amazon has done so.
As a result, the true cost of OpenAI and Anthropic is at least 10 times what Uber burned.
Amazon spent $83 billion in capital expenditures in 2024 and expects $105 billion of the fuckers in 2025.
Microsoft spent $55.6 billion in 2024 and expects to spend $80 billion this year.
I'm actually confident most of that is OpenAI.
But based on my conservative calculations, the true cost of OpenAI is at least $82 billion, and that only includes CapEx in 2024 onwards.
That's based on 30% of Microsoft's capex, as not everything has been invested yet in 2025 and OpenAI might not be all of the capex, and also the $41.4 billion of funding that OpenAI has received so far.
The true cost of Anthropic is around $77.1 billion, and that's not including the $13 billion they just raised.
But it does include all their previous funding and 30% of Amazon's capex from the beginning of 2024.
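If you want to see where those figures come from, here's the arithmetic reconstructed from the numbers in this episode. The 30% capex share is the stated conservative assumption, not anything Microsoft or Amazon has disclosed.

```python
# Reconstructing the episode's "true cost" figures from its own numbers.
# The 30% capex share is the episode's stated conservative assumption,
# not a disclosed figure.

CAPEX_SHARE = 0.30

# OpenAI: 30% of Microsoft's capex (2024 actual + 2025 expected), plus funding.
microsoft_capex = 55.6 + 80.0        # $bn: 2024 spend plus expected 2025 spend
openai_funding = 41.4                # $bn raised so far, per the episode
openai_true_cost = CAPEX_SHARE * microsoft_capex + openai_funding
print(f"OpenAI true cost: ~${openai_true_cost:.1f}bn")        # ~$82.1bn

# Anthropic: 30% of Amazon's capex, plus prior funding (excluding the new $13bn).
amazon_capex = 83.0 + 105.0          # $bn: 2024 spend plus expected 2025 spend
implied_prior_funding = 77.1 - CAPEX_SHARE * amazon_capex
print(f"Implied prior Anthropic funding: ~${implied_prior_funding:.1f}bn")  # ~$20.7bn
```

Both outputs land within rounding of the $82 billion and $77.1 billion figures above, and both dwarf the roughly $25 billion Uber burned.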
Now, these are inexact comparisons, but the classic argument is that Uber burned lots of money and worked out okay.
When in fact, the capital expenditures from 2024 onwards that are necessary to make OpenAI and Anthropic work are each, on their own, around four times what Uber burned in over a decade.
I also believe these numbers are conservative.
There's a good chance that OpenAI and Anthropic dominate the capex of Amazon, Google and Microsoft, in part because of what the fuck else are they buying all these GPUs for, as their own AI services don't appear to be making much money at all.
Anyway, to put it real simple, AI has burned way more in the last two years than Uber burned in 10.
Uber didn't burn money in the same way, didn't burn much in the way of capital expenditures, didn't require massive amounts of infrastructure, and isn't remotely the same in any way, shape, or form, other than that it burned a lot of money.
And that burning wasn't because it was trying to build the core product, it was trying to scale.
It's all so stupid, and you know what?
I'm not even done.
In our next and final AI booster episode, we'll breeze through the dumbest of the dumb arguments.
And I'll say why I'm finally drawing a line under these arguments for real, because it needs to be said.
We need to say something.
I hope you've enjoyed this.
See you tomorrow.
Godspeed.
Thank you for listening to Better Offline.
The editor and composer of the Better Offline theme song is Matt Osowski.
You can check out more of his music and audio projects at mattosowski.com.
M-A-T-T-O-S-O-W-S-K-I dot com.
You can email me at ez@betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter.
I also really recommend you go to chat.wheresyoured.at to visit the Discord, and go to r/BetterOffline to check out our Reddit.
Thank you so much for listening.
Better Offline is a production of CoolZone Media.
For more from CoolZone Media, visit our website, coolzonemedia.com or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
Ah, smart water.
Pure, crisp taste, perfectly refreshing.
Wow, that's really good water.
With electrolytes for taste, it's the kind of water that says, I have my life together.
I'm still pretending the laundry on the chair is part of the decor.
Yet, here you are, making excellent hydration choices.
I do feel more sophisticated.
That's called having a taste for taste.
Huh, a taste for taste.
I like that.
Smartwater.
For those with a taste for taste, grab yours today.
Every business has an ambition.
PayPal Open is the platform designed to help you grow into yours with business loans so you can expand and access to hundreds of millions of PayPal customers worldwide.
And your customers can pay all the ways they want with PayPal, Venmo, Pay Later, and all major cards.
So you can focus on scaling up.
When it's time to get growing, there's one platform for all business: PayPal Open.
Grow today at PayPalOpen.com.
Loan subject to approval in available locations.
The ocean moves us, surfing a wave or savoring the view.
The ocean delights us as playful otters restore coastal kelp for us.
The ocean connects us.
Visit Monterey Bay Aquarium.org/slash connects.
This is an iHeart podcast.