How To Argue With An AI Booster, Part Two


In part two of this week's three-part Better Offline Guide To Arguing With AI Boosters, Ed Zitron walks you through why the AI bubble is nothing like the dot-com bubble, how the cost of inference is actually going up, and why OpenAI’s massive burn rate is nothing like Uber’s.

Latest Premium Newsletter: Why Everybody Is Losing Money On Generative AI: https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/

YOU CAN NOW BUY BETTER OFFLINE MERCH! Go to https://cottonbureau.com/people/better-offline and use code FREE99 for free shipping on orders of $99 or more.

BUY A LIMITED EDITION BETTER OFFLINE CHALLENGE COIN! https://cottonbureau.com/p/XSH74N/challenge-coin/better-offline-challenge-coin#/29269226/gold-metal-1.75in

---

LINKS: https://www.tinyurl.com/betterofflinelinks

Newsletter: https://www.wheresyoured.at/

Reddit: https://www.reddit.com/r/BetterOffline/ 

Discord: chat.wheresyoured.at

Ed's Socials:

https://twitter.com/edzitron

https://www.instagram.com/edzitron

See omnystudio.com/listener for privacy information.


Runtime: 32m

Transcript

Speaker 1 This is an iHeart podcast.


Speaker 12 Coolzone Media.

Speaker 20 Hello, and welcome to Better Offline. I'm your host, Ed Zitron.

Speaker 22 This is part 2 of our three-part series on how to argue with an AI booster.

Speaker 22 When we last left off, I'd started talking about some of the most common and vacuous talking points used by those who defend the generative AI industry, and why a lot of them are wholly without merit.

Speaker 20 These are the booster quips, assertions that, if you don't know much, sound convincing, but are easily disproven with the right information.

Speaker 20 And in that last episode, we addressed the quips that say we're in the early days of AI, and that people doubted smartphones and the internet the way they doubt generative AI, which they didn't, and which, in generative AI's case, they should.

Speaker 20 In the cycle of grief, that's the denial stage. Now we're going to move on to bargaining.

Speaker 20 This is just like the dot-com boom. Even if all of this collapses, the overcapacity will be practical for the market like the fiber boom was.

Speaker 20 All right, folks, time for a little history. You know me, I love me some history.

Speaker 20 The fiber boom began after the Telecommunications Act of 1996 deregulated large parts of America's communications infrastructure, creating a massive boom.

Speaker 20 A $500 billion one, to be precise, primarily funded...

Speaker 20 with debt.

Speaker 20 Obviously, we're still using the infrastructure bought during that boom, and this fact is used as a defense of the insane capex spending surrounding generative AI.

Speaker 20 High-speed internet is useful, right? Sure, but the fiber optic boom period was also defined by a gluttony of overinvestment, ridiculous valuations, and genuine outright fraud.

Speaker 20 In any case, this is not remotely the same thing, and anyone making this point needs to learn the very fucking basics of technology. Let's get going.

Speaker 20 Now, the fiber optic cable of this era is mostly owned by a few companies.

Speaker 20 42% of NVIDIA's revenue is from the Magnificent 7, and the companies buying these GPUs are, for the most part, not going to go bust once the AI bubble bursts.

Speaker 20 You can already get the cheap fiber of this era, too. And cheap AI GPUs are already here.
GPUs are depreciating assets, meaning that the good deals are already happening.

Speaker 20 I found an NVIDIA A100 for $2,000 to $3,000 multiple times on eBay, and you can get the H100s, which are more powerful, for, well, I think $30,000, and those things go for $45,000 retail, so not a brilliant discount.

Speaker 20 AI GPUs also do not have a variety of use cases and are limited by CUDA, NVIDIA's programming libraries and APIs.

Speaker 20 AI GPUs are integrated into applications using CUDA, and this is specifically NVIDIA's proprietary programming platform.

Speaker 20 While there are other use cases, scientific simulations, image and video processing, data science, analytics, medical imaging, and so on, CUDA is not a one-size-fits-all digital panacea.

Speaker 20 Fiber optic cable, meanwhile, was: it was put everywhere, and it truly did set up the future. What are these GPUs setting up, exactly? Also, widespread access to cheaper GPUs has already happened.

Speaker 20 And what new use cases are there? What are the new innovative things we can do? As a result of the AI bubble, there are now many, many, many, many, many different vendors to get access to GPUs.

Speaker 20 You can pay at an hourly rate. Who knows if it's profitable, but you can do it.
And sometimes you can get them for as little as $1 an hour, which is really not good.

Speaker 20 It definitely isn't making them money. But putting the financial collapse aside, while they might be cheaper when the AI bubble bursts, does cheaper actually enable people to do new stuff?

Speaker 20 Is cost the problem? Because I think the costs are going to go up. But even if they weren't going up, what are the things that you could do that are new? What is the prohibitive cost?

Speaker 20 No one can actually answer this question because the answer isn't fun.

Speaker 20 GPUs are built to shove massive amounts of compute into one specific function again and again and again, like generating the output of a model, which remember mostly boils down to complex maths.

Speaker 20 Unlike CPUs, a GPU can't easily change tasks or handle many little distinct operations, meaning that these things aren't going to be adopted for another mass market use case because there probably isn't one.

Speaker 20 In simpler terms, this was not an infrastructure build out.

Speaker 20 The GPU boom is a heavily centralized capital expenditure-funded asset bubble where a bunch of chips will sit in warehouses or kind of fallow data centers waiting for somebody to make up a use case for them.

Speaker 20 And if an enduring one existed, we'd already have it, because we already have all the fucking GPUs.

Speaker 20 Now,

Speaker 20 here's a really big booster quip I have been looking forward to. I get a lot of people asking me about this.

Speaker 20 You're so stupid. Why am I stupid exactly? Well, five really smart guys got together and wrote AI 2027, which is a very real-sounding extrapolation that, shut the fuck up.

Speaker 20 Shut up.

Speaker 20 Shut up. AI 2027 is fan fiction.
If you were scared by this and you're not a booster, you shouldn't feel bad, by the way. This was written to scare you.

Speaker 20 And by the way, if you don't know what it is I'm talking about, you should consider yourself lucky.

Speaker 20 It's essentially a piece of speculative fiction that describes where Gen AI companies get fatter models that get exponentially better and the US and China are embroiled in an AI arms race.

Speaker 20 It's really silly. It's so very silly.
And I call it fanfiction because it is. If we're thinking about this in purely intellectual terms, it's up there with My Immortal.

Speaker 20 And no, I'm not explaining that. You can Google that one for yourselves.
It doesn't matter if all the people writing the fanfiction are scientists or that they have the right credentials.

Speaker 20 They themselves say that AI 2027 is a guess, an extrapolation, which means guess, with expert feedback, which means someone editing your fan fiction, and involves experience at OpenAI.

Speaker 20 There are people that worked on the shows they write fanfiction about. I'm not even insulting fanfiction, by the way.
Go nuts. You're more...

Speaker 20 You are 100 times more ethically

Speaker 20 positive than these people. At least you admit it's fanfiction.
Could Knuckles get pregnant? I'm sure somebody's found out.

Speaker 20 I'm not going to go line by line and cut this apart any more than I'm going to go and do a lengthy takedown of someone's erotic Banjo-Kazooie story, because both are fictional.

Speaker 20 The entire premise of this nonsense is that at one point someone invents a self-learning agent that teaches itself stuff and it does a bunch of other stuff requiring a bazillion compute points with different agents with different numbers after them.

Speaker 20 There is no proof that this is possible, nobody has done it and nobody will do it.

Speaker 20 AI 2027 was written specifically to fool people that want to be fooled, with big charts and the right technical terms used to lull the credulous into a wet dream in a New York Times column where one of the writers folds their hands and looks worried.

Speaker 20 It was also written to scare people that are already scared.

Speaker 20 It makes big, scary proclamations with tons of links to stuff that looks really legitimate, but when you piece it all together, it's literally just fanfiction. Except really not that endearing.

Speaker 20 My personal favorite part is mid-2026, China Wakes Up, which involves China's intelligence agencies trying to steal OpenBrain's agent. No idea who this company could be referring to.

Speaker 20 Please email me if you can work it out, to idontcare@business.org.

Speaker 20 Before the headline of AI takes some jobs after OpenBrain releases a model, oh god, I'm so bored even fucking talking about this.

Speaker 20 Now, Sarah Lyons puts this well, arguing that AI 2027 and AI in general is no different from the spurious spectral evidence used to accuse someone of being a witch during the Salem witch trials.

Speaker 20 And I quote, and the evidence is spectral. What is the real evidence in AI 2027 beyond trust us and vibes? People who wrote it cite themselves in the piece.
Do not demand I take this seriously.

Speaker 20 This is so clearly a marketing device to scare people into buying your product before this imaginary window closes. Don't call me stupid for not falling for your spectral evidence.

Speaker 20 My whole life, people have been saying artificial intelligence is around the corner and it never arrives. I simply do not believe a chatbot will ever be more than a chatbot.

Speaker 20 And until you show me it doing that, I will not believe it.

Speaker 20 Anyway, AI 2027 is fan fiction, nothing more. And just because it's full of fancy words and has five different grifters on its byline doesn't mean a goddamn thing.


Speaker 20 Now.

Speaker 20 Now, now, now.

Speaker 20 Now, folks. We've all been waiting for this moment.

Speaker 20 And here's the ultimate booster quip.

Speaker 20 The cost of inference is coming down. This proves that things are getting cheaper.
Here's a bonus trick for you before I get to my bet.

Speaker 20 Here we go. Ask them to explain whether things have actually got cheaper.
And if they say they have, ask them why there are no profitable AI companies.

Speaker 20 If they say they're in the growth stage, ask them why there are no profitable AI companies. Again, it's been several years, and there's still not one.
At this point, they should try and kill you.
At this point, they should try and kill you.

Speaker 20 But really, I'm about to be petty. I'm about to be petty for a fucking reason, though.

Speaker 20 In an interview on a podcast from earlier this year, that I will not even quote, because the journalist in question did not back me up and it pisses me off.

Speaker 20 Journalist Casey Newton said the following about my work.

Speaker 15 You don't think that that kind of flies in the face of Sam Altman saying that we need billions of dollars for years?

Speaker 3 No, not at all.

Speaker 3 And I think that's why it's so important when you're reading about AI to read people who actually interview people who work at these companies and understand how the technology works, because the entire industry has been on this curve where they are trying to find micro-innovations that reduce the cost of training the models and to reduce the cost of what they call inference, which is when you actually enter a query into ChatGPT.

Speaker 3 And if you plotted the curve of how the cost has been falling over time, DeepSeek is on that curve, right?

Speaker 3 So everything that DeepSeek did, it was expected by the AI labs that someone would be able to do. The novelty was just that a Chinese company did it.

Speaker 3 So to say that it like upends expectations of how AI would be built is just purely false and is the opinion of somebody who does not know what he's talking about.

Speaker 20 Newton then says, several octaves higher, which shows you exactly how mad he isn't, that he thought what he said was very civil, and that "there are things that are true and there are things that are false, like, you can choose which ones you want to believe." I'm not going to be so civil. Other than the fact that Casey refers to "micro-innovations", the fuck you talking about, and DeepSeek being on a curve that was expected, he makes, as many do, two very big mistakes. And personally, if I was doing this,

Speaker 20 I personally would not have said these things in a sentence that began with me suggesting that I, being Casey Newton in this example, knew how the technology works. Now, here's the Casey Newton quip.

Speaker 20 "Inference, which is when you actually enter a query into ChatGPT." This statement is false. It's not what inference means.

Speaker 20 Inference, and I've gotten this wrong in the past too, I'm being accountable, is everything that happens from when you put in a prompt to generate an output.

Speaker 20 It's when an AI based on your prompt infers meaning.

Speaker 20 To be more specific, and quoting Google, machine learning inference is the process of running data points into a machine learning model to calculate an output, such as a single numerical score.
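That Google definition can be made concrete with a minimal sketch. Everything here, the weights, the features, the logistic squash, is invented for illustration; the point is just that inference means running a data point through an already-trained model to calculate an output, such as a single numerical score, not merely "entering a query."

```python
# A minimal, hypothetical illustration of "inference": running a data
# point through an already-trained model to calculate a single score.
import math

# Pretend these weights came out of a finished training run (invented).
trained_weights = [0.8, -0.3, 0.5]
bias = 0.1

def infer(features):
    """Forward pass: weighted sum of inputs, then squash to a 0-1 score."""
    weighted_sum = bias + sum(w * x for w, x in zip(trained_weights, features))
    return 1 / (1 + math.exp(-weighted_sum))  # logistic function

# Inference is everything from input to output, not just typing a prompt.
score = infer([1.0, 2.0, 0.5])
print(round(score, 3))  # → 0.634
```

An LLM's forward pass is vastly larger, and reasoning models run many such passes per response, but the definition is the same: data in, computed score out.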

Speaker 20 Except that's what these things are bad at. But nevertheless, Casey will try and weasel out of this one and say this is what he meant.
It wasn't.

Speaker 20 He also said, if you plotted the curve of how the cost of inference has been falling over time.

Speaker 20 Well, that... That's wrong, Casey.
That's wrong, my man. The cost of inference has gone up over time.

Speaker 20 Now, Casey, like many people who talk about stuff without learning about it first, is likely referring to the fact that the price of tokens for some models has gone down in some cases.

Speaker 20 But you know what, folks? Let's establish some facts about inference. I'm doing the train.
I'm pulling the big horn on the invisible train, I'm cooking.

Speaker 20 Now inference as a thing that costs money is entirely different to the price of tokens, and conflating the two is journalistic malpractice.

Speaker 20 The cost of inference would be the price of running the GPU and the associated architecture, a cost we do not at this point have any real insight into.

Speaker 20 Token prices are set by the people who sell access to the tokens, such as OpenAI and Anthropic.

Speaker 20 For example, OpenAI dropped the price of its o3 model's tokens almost immediately after the launch of Claude Opus 4. Do you think it did that because the price of serving the models got cheaper?

Speaker 20 If you do, I don't know how you possibly put your trousers on every morning without cutting yourself in half.

Speaker 20 Now, the cost of inference conversation comes from articles that say that we now have models that are cheaper, that can now hit higher benchmark scores.

Speaker 20 The article I'm referring to, which will be in the show notes, is from November 2024, and the comparison it makes is between GPT-3, priced as of November 2021, and Llama 3.2 3B, from September 2024.

Speaker 20 Now, the suggestion is, in any case, that the cost of inference is going down 10x year over year.

Speaker 20 The problem, however, is that these are raw token costs, not actual evaluations of token burn in a practical setting. And, I realize that was a bit technical.

Speaker 20 These are just what it costs to do something. It doesn't actually tell you how many tokens will be burned, at what volume they will be burned, because that would change things.

Speaker 20 And well, wouldn't you know it, the cost of inference actually went up as a result. In an excellent blog from Kilo Code, and I did not get the chance to find out the pronunciation of the author's second name, so I'm just going to spell it: E-W-A, S-Z-Y-S-Z-K-A. I am so sorry.
I would rather spell it out than actually mispronounce it. I hate when people say Zitron wrong.

Speaker 20 Great blog. Anyway, let me quote.
Application inference costs increased for two reasons. The Frontier model's cost per token stayed constant, and the token consumption per application grew a lot.

Speaker 20 Token consumption per application grew a lot because models allowed for longer context windows and bigger suggestions from the models.

Speaker 20 The combination of a steady price per token and more token consumption caused app inference costs to grow about 10 times over the past two years.

Speaker 20 To explain that in really simple terms: while the costs of old models may have decreased, new models, which you need to do most things, cost about the same, and the reasoning that these new models use does actually burn way, way more tokens.
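The arithmetic in that quote is simple enough to sketch with invented figures: hold the per-token price steady, grow the tokens an application consumes, and the application's inference bill grows in lockstep.

```python
# Hypothetical numbers, purely to show the arithmetic described above:
# a steady frontier price per token times ~10x token consumption
# equals ~10x application inference cost.
price_per_million_tokens = 10.00  # frontier model price, held constant (invented)

tokens_then = 2_000_000   # tokens an app burned per month, two years ago (invented)
tokens_now = 20_000_000   # same app with longer contexts and reasoning (invented)

cost_then = tokens_then / 1_000_000 * price_per_million_tokens
cost_now = tokens_now / 1_000_000 * price_per_million_tokens

print(cost_then, cost_now, cost_now / cost_then)  # → 20.0 200.0 10.0
```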

Speaker 20 When these new models reason, they break a user's input down and break it into component parts, then run inference on each of those parts.

Speaker 20 When you plug an LLM into an AI coding environment, it will naturally burn an absolute shit ton of tokens, in part because of the large amount of information you have to load into the prompt and the context window or the amount of information you can load in at once.

Speaker 20 And in part because generating code is inference intensive, and also breaking down all those coding tasks, each of those tasks requiring a coding tool and taking a bunch of inference themselves.

Speaker 20 It's really bad.

Speaker 20 In fact, the inference costs are so severe that Kilo Code says that a combination of a steady price per token and more token consumption caused app inference costs to grow about 10x over the last two years.

Speaker 20 I'm repeating myself, I realize, but I really need you to get one thing, which is that the cost of inference went up. But I'm not done.

Speaker 20 I refuse to let this point go because people love to say the cost of inference is going down when the cost of inference has increased.

Speaker 20 And they do so to a national audience, all while suggesting I'm wrong somehow and acting superior. I don't like being made to feel this way.
I don't think it's nice to do this to people.

Speaker 20 And if you're gonna do it, if you have the temerity to call someone out directly, at least be fucking right.

Speaker 20 I'm not wrong. You're wrong.
In fact, software developer influencer Theo Brown recently put out a video called I Was Wrong About AI Costs, They Keep Going Up, which he breaks down as follows.

Speaker 20 Reasoning models are significantly increasing the amount of output tokens being generated. These tokens are also more expensive.

Speaker 20 In one example, Brown finds that Grok 4's reasoning mode uses 603 tokens to generate two words.

Speaker 20 This was a problem across every single reasoning model, as even cheap reasoning models would do the same thing. As a result, tasks are taking longer and burning more tokens.

Speaker 20 Another writer called Ethan Ding noted a few months ago that reasoning models burn so many tokens that there is no flat subscription price that works in this new world, as the number of tokens they consumed went absolutely nuclear.

Speaker 20 The price drops have also, for the most part, stopped.

Speaker 20 You cannot at this point fairly evaluate whether a model is cheaper just based on its cost per tokens, because reasoning models inherently burn and are built to inherently burn more tokens to create an output.
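To see why per-token price alone misleads, here's a toy comparison, with invented prices and token counts: a reasoning model that is cheaper per token but burns enough extra tokens per task that each completed task costs more.

```python
# Invented prices and token counts, purely to illustrate why cost per
# token alone can't tell you which model is cheaper to actually use.
old_model = {"price_per_m": 20.00, "tokens_per_task": 500}         # no reasoning
reasoning_model = {"price_per_m": 5.00, "tokens_per_task": 8_000}  # "cheaper" per token

def task_cost(model):
    """Total cost of one completed task: tokens burned times price."""
    return model["tokens_per_task"] / 1_000_000 * model["price_per_m"]

# The reasoning model is 4x cheaper per token...
assert reasoning_model["price_per_m"] < old_model["price_per_m"]
# ...yet, with these numbers, costs 4x more per completed task.
print(task_cost(old_model), task_cost(reasoning_model))
```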

Speaker 20 Reasoning models are also the only way that model developers have been able to improve the efficacy of new models, using something called test time compute to burn extra tokens to complete a task.

Speaker 20 And in basically anything you're using today, there's got to be some sort of reasoning model, especially if you're coding. The cost of inference has gone up.

Speaker 20 Statements otherwise are purely false and are the opinion of somebody who does not know what he's talking about. But you ask, could the costs of inference go down?

Speaker 20 Maybe?

Speaker 20 It sure isn't trending that way, nor has it gone down yet.

Speaker 20 I also predict that there's going to be some sort of sudden realization in the media that inference is going up, which has kind of already started.

Speaker 20 The Information had a piece on it in late August, where they note that Intuit paid $20 million to Azure last year, primarily to access OpenAI's models, and is on track to spend 30 million this year, which outpaces the company's revenue growth in the same period, raising questions about how sustainable the spending is and how much of the cost it can pass along to customers.

Speaker 20 Christopher Mims in the Wall Street Journal also had a piece about the costs going up. Do not be mad at Chris.
Chris and I chatted before he submitted that piece.

Speaker 20 Like, he literally on Blue Sky called me out. It fucking rocks, by the way.

Speaker 20 Big up to Chris Mims, because it's nice to see the mainstream media actually engaging with these things, even though it's dangerous to the bubble. But you know what? The truth must win out.

Speaker 20 And the problem here is that the architecture underlying large language models is inherently unreliable.

Speaker 20 I imagine OpenAI's introduction of the router to GPT-5 is an attempt to moderate both the costs of the model chosen and to reduce exposure to reasoning models for simple queries.

Speaker 20 Though Sam Altman was boasting on August 10th about the significant increase in both free and paid users' exposure to reasoning models, they don't teach you this in business school.

Speaker 20 Worse still, a study written up by VentureBeat found that open weight models burn between 1.5 to 4 times more tokens, in part due to a lack of token efficiency, and in part thanks to, you guessed it, reasoning models.

Speaker 20 I quote. The findings challenge a prevailing assumption in the AI industry that open source models offer a clear economic advantage over proprietary alternatives.

Speaker 20 While open source models typically cost less per token to run, the study suggests that this advantage could be, and I quote the study, easily offset if they require more tokens to reason about a given problem.

Speaker 20 And models keep getting bigger and more expensive too.

Speaker 20 So why did this happen?

Speaker 20 Well, it's because model developers hit a wall of diminishing returns, and the only way to make models do more was to make them burn more tokens to generate a more accurate response, which is a very simple way of describing reasoning, a thing that OpenAI launched in September 2024, and others followed.

Speaker 20 As a result, all the gains from powerful new models come from burning more and more tokens.

Speaker 20 The cost per million token number is no longer an accurate measure of the actual cost of generative AI because it's much, much, much, much harder to tell how many tokens a reasoning model may burn.

Speaker 20 And it varies, as Theo Boing.

Speaker 20 Theo Boing, I'm keeping that, all right? You get the real cuts, as Theo Brown noted from model to model. In any case, there really is no changing this path.
These companies are out of ideas.

Speaker 20 Now, another,

Speaker 20 another one of my favorite ultimate boost equips. This is a classic, and I still get this on social media.

Speaker 20 I have people yapping in my ear saying, OpenAI and Anthropic are just like Uber because Uber burned $25 billion over the course of 15 or so years. And look, look, Edward, they're now profitable.

Speaker 20 Why are you calling me Edward? Shut up. This proves that OpenAI, a totally different company with different economics, will be totally fine.

Speaker 20 So I've heard this argument maybe 50 times in the last year, to the point that I had to talk about it in my piece,

Speaker 20 How Does Open AI Survive, which I also turned into a podcast around July 2024. Go back and link.
I'll link to it in the piece. Yada yada yada.

Speaker 20 Nevertheless, people make a few points about Uber and AI that I think are fundamentally incorrect, and I'm going to break them down for you.

Speaker 20 Now, they claim that AI is making itself too big to fail, embedding itself everywhere and becoming essential. And none of these things are the case.

Speaker 20 I've heard this argument a lot, by the way, and it's one that's both ahistorical and alarmingly ignorant of the very basics of society. But, but, the government! No, no, no, no, no.

Speaker 20 Now, you've heard. You've heard OpenAI got a $200 million defense contract with an estimated completion date of July 2026.
And just to be clear, that's up to $200 million.

Speaker 20 And that they're selling ChatGPT Enterprise to the US government for a dollar a year, along with Anthropic doing the same thing. And even Google's doing it, except they're doing 40 cents for a year.

Speaker 20 Now you're probably hearing this and thinking, oh shit, this means the government has paid them. They're never going away.

Speaker 20 And I cannot be clear enough that you believing this is the very intention of these deals. They are built specifically to make you feel like these things are never going away.

Speaker 20 This is also an attempt to get in with the government at a rate that makes trying these models a no-brainer. At which point I ask: and? The government having cheap access to AI software does not mean that the government relies on it.

Speaker 20 Every member of the government having access to ChatGPT, something that is not even necessarily the case, does not make this software useful, let alone essential.

Speaker 20 And if OpenAI burns a bunch of money making it work for them, it still won't be essential because large language models are not actually that useful for doing stuff. Now, let's talk Uber.

Speaker 20 Uber was and is useful, which eventually made it essential.

Speaker 20 Uber used lobbyist Bradley Tusk to steamroll local governments into allowing Uber to operate in their cities, but Tusk did not have to convince local governments that Uber was useful, or have to train people how to use Uber.

Speaker 20 Uber's too big to fail moment was that local cabs kind of fucking suck just about everywhere. You ever try and take a yellow cab from downtown Manhattan to Hoboken, New Jersey or Brooklyn or Queens?

Speaker 20 Did you ever try and pay with a credit card? How about trying to get a cab outside a major metropolitan area? Do you remember how bad it was? It was really awful.

Speaker 20 Like, I don't think people realize or remember how bad it was. And I'm not saying that Uber is good.
I'm not glorifying Uber in any way, but the experience that Uber replaced was very, very bad.

Speaker 20 As a result, Uber did become too big to fail because people now rely on it because the old system sucked.

Speaker 20 Uber used its masses of venture capital to keep prices low to get people used to it too, but the fundamental experience was better than calling a cab company and hoping they showed up.

Speaker 20 I also want to be clear that this is not me condoning Uber. Take public transport if you can.

Speaker 20 To be clear, Uber has created a new kind of horrifying extractive labor practice, which deprives people of benefits and dignity, paying off academics to help the media gloss over the horrors of their platform, and also now having to increase prices.

Speaker 20 That's how they reached profitability. That isn't something that's going to happen with generative AI; the costs are just too high. They're way too high.

Speaker 9 Hi, I'm Morgan Sung, host of Close All Tabs from KQED, where every week we reveal how the online world collides with everyday life.

Speaker 11 There was the six-foot cartoon otter who came out from behind a curtain.

Speaker 13 It actually really matters that driverless cars are going to mess up in ways that humans wouldn't.

Speaker 6 Should I be telling this thing all about my love life?

Speaker 5 I think we will see a Twitch streamer president maybe within our lifetimes.

Speaker 9 You can find Close All Tabs wherever you listen to podcasts.

Speaker 26 The ocean delights us. Some marvel at the colorful world below the surface.
The ocean feeds us.

Speaker 27 Others find nourishment in its bounty.

Speaker 26 The ocean teaches us how our everyday choices impact even the deepest places. The ocean moves us, whether we're riding a wave or soaking in its breathtaking beauty.

Speaker 27 The ocean connects us. Find your connection at MontereyBayAquarium.org/connects.

Speaker 24 There's a lot going on in Hollywood. How are you supposed to stay on top of it all? Variety has the solution.

Speaker 24 Take 20 minutes out of your day and listen to the new Daily Variety podcast for breaking entertainment news and expert perspectives.

Speaker 11 Where do you see the business actually heading?

Speaker 24 Featuring the iconic journalists of Variety and hosted by co-editor-in-chief Cynthia Littleton.

Speaker 16 The only constant in Hollywood is change.

Speaker 24 Open your free iHeartRadio app, search Daily Variety, and listen now.

Speaker 28 Tired of spills and stains on your sofa?

Speaker 28 Washablesofas.com has your back, featuring the Anibay collection, the only designer sofa that's machine washable inside and out, where designer quality meets budget-friendly prices.

Speaker 28 That's right, sofas start at just $699.

Speaker 28 Enjoy a no-risk experience with pet-friendly, stain-resistant, and changeable slip covers made with performance fabrics.

Speaker 28 Experience cloud-like comfort with high-resilience foam that's hypoallergenic and never needs fluffing. The sturdy steel frame ensures longevity, and the modular pieces can be rearranged any time.

Speaker 28 Check out washablesofas.com and get up to 60% off your Anibay sofa, backed by a 30-day satisfaction guarantee. If you're not absolutely in love, send it back for a full refund.

Speaker 28 No return shipping or restocking fees, every penny back. Upgrade now at washablesofas.com.
Offers are subject to change and certain restrictions may apply.

Speaker 20 But anyway, what is essential about generative AI?

Speaker 20 What exactly, and be specific, is the essential experience of generative AI?

Speaker 20 If ChatGPT disappeared tomorrow, what actually disappears?

Speaker 20 And on an enterprise or governmental level, what exactly are these tools doing for governments that would make removing them so painful? What use cases? What outcomes?

Speaker 20 If your answer here is to say, well, they're putting it in and they're choosing which people to cut out of benefits. Please, goddamn.

Speaker 20 This is what they want you to do. They want you to be scared so they can feel powerful. They're not doing that.
You notice that we get all these horrible stories, by the way, of internal government things shoving stuff into LLMs. You know what we don't get?

Speaker 20 Another thing. We don't get, oh, and then this happened. It's just that they're doing this scary, bad thing that they shouldn't be doing. They shouldn't be putting people's private information into these systems.

Speaker 20 Anyway, I'm rambling. Uber's essential nature is that millions of people use it in place of regular taxis.

Speaker 20 and it effectively replaced decrepit, exploitative systems like the Yellow Cab Medallions in New York with its own tech-enabled exploitation system that, nevertheless, worked far better for the user.

Speaker 20 Okay, I also want to do a side note just to acknowledge that the disruption from Uber brought something to the medallion system that was genuinely horrendous.

Speaker 20 The consequences were horrifying for the owners of the medallions, some of whom had paid more than a million dollars for the privilege of driving a New York cab and were burdened under mountains of debt.

Speaker 20 That whole system is so fucking evil. I think it's horrifying.
And I think the payday loan people involved should all be in fucking prison. Worst scum of the world.

Speaker 20 The people who take advantage of people who come to this country to drive a fucking cab, one they have to take out massive loans to buy into. That is evil.
Uber is also evil, just to be clear.

Speaker 20 But that system also is. That's the point I'm trying to make. People should feel sorry for the victims of that system. That system was a kind of corruption unto itself.

Speaker 20 Anyway, getting back to the thing. I actually feel a lot for the people who were the victims of the medallion system. It's fucking rough.

Speaker 20 And every time I think of it, I feel very sad inside. But let's get back to the episode.
I don't want to think about that any longer.

Speaker 20 There really are no essential use cases for ChatGPT or really any Gen AI system. You cannot point to one use case that is anywhere near as necessary as cabs in cities.

Speaker 20 And indeed, the biggest use cases, things like brainstorming and search, are either easily replaced by any other commoditized LLM or already exist in the case of Google search.

Speaker 20 Now, let's do another booster quip.

Speaker 20 Data centers are important economic growth vehicles and are helping drive innovation and jobs throughout America. Having data centers promotes innovation, making OpenAI and AI data centers essential.

Speaker 20 And the answer to that is nope.

Speaker 20 Nope. Sorry, this is a really simple one.
These data centers are not in and of themselves driving much economic growth, other than the costs of building them, which I went into last episode.

Speaker 20 As I've discussed again and again, there's maybe $40 billion in revenue and no profit coming out of AI companies. There isn't any economic growth. They're not holding up anything other than the massive infrastructure built to make them make no money and lose billions.

Speaker 20 There's no great loss associated with the death of large language models or the death of this era.

Speaker 20 Taking away Uber would be genuinely catastrophic for some people's ability to get places and for people's jobs, even if they are horrifyingly underpaid. But here's another booster quip: Uber burned a lot of money, $25 billion or more, to get where it is today. Ooh, Mr. Zitron, Mr. Zitron, you're dead.

Speaker 20 And my response is that OpenAI and Anthropic have both separately burned more than four times as much money since the beginning of 2024 as Uber did in its entire existence.

Speaker 20 So the classic and wrong argument about OpenAI and companies like OpenAI is that Uber burned a bunch of money and is now cash flow positive or profitable.

Speaker 20 I want to be clear that Uber's costs are nothing like large language models, and making this comparison is ridiculous and desperate.

Speaker 20 But let's talk about raw losses, shall we, and where people are making this assumption.

Speaker 20 So Uber lost $24.9 billion in the space of four years, from 2019 to 2022, in part because of the billions it was spending on sales and marketing and R&D, $4.6 billion and $4.8 billion respectively in 2019 alone.

Speaker 20 It also massively subsidized the cost of rides, which is why prices later had to increase, and spent heavily on driver recruitment, burning cash to get scale, you know, the classic Silicon Valley way.

Speaker 20 This is absolutely nothing like how large language models are growing and I'm tired of defending this point. But defend it I shall.

Speaker 20 OpenAI and Anthropic burn money primarily through compute costs and specialized talent.

Speaker 20 These costs are increasing, especially with the rush to hire every single AI scientist at the most expensive price possible.

Speaker 20 There are also essential immovable costs that neither OpenAI nor Anthropic have to shoulder.

Speaker 20 The construction of the data centers necessary to train and run inference for their models, and of course the GPUs inside them, which I will get to in a little bit.

Speaker 20 Yes, Uber raised $33.5 billion through multiple rounds and post-IPO debt, though it raised about $25 billion in actual funding. Yes, Uber burned an absolute arse-ton of money.
Yes, Uber has scale.

Speaker 20 But Uber has not burned money as a means of making its product functional or useful. Uber worked immediately.
I mean, was it 2012, I think, when I used it for the first time? Maybe earlier?

Speaker 20 No, no, it would have been 2010. It worked immediately.
You used it. You were like, wow, I can just put in my address.

Speaker 20 I don't have to say my address three times because I have a British accent and nobody can fucking understand me sometimes. You can, though.
You're special.

Speaker 20 Yeah, it was really obvious that it worked.

Speaker 20 And also, the costs associated with Uber are minuscule compared to the actual real costs of OpenAI and Anthropic; its capital expenditures from 2019 through 2024 were around $2.2 billion, by the way.

Speaker 20 Both OpenAI and Anthropic lost around $5 billion each in 2024, but their infrastructure was entirely paid for by either Microsoft, Google, or Amazon.

Speaker 20 And by which I mean the building of it and the expansion therein.

Speaker 20 While we don't know how much of this infrastructure is specifically for OpenAI or Anthropic, as they're the largest model developers, it's fair to assume that a large chunk, at least 30%, of Amazon's and Microsoft's capital expenditures has been to support these workloads.

Speaker 20 Great sentence to cut and listen to again. I also leave out Google as it's unclear whether it's expanded its infrastructure for Anthropic, but we know Amazon has done so.

Speaker 20 As a result, the true cost of OpenAI and Anthropic is at least 10 times what Uber burned. Amazon spent $83 billion in capital expenditures in 2024 and expects $105 billion of the fuckers in 2025.

Speaker 20 Microsoft spent $55.6 billion in 2024 and expects to spend $80 billion this year. I'm actually confident most of that is OpenAI.

Speaker 20 But based on my conservative calculations, the true cost of OpenAI is at least $82 billion, and that only includes CapEx from 2024 onwards.

Speaker 20 That's based on 30% of Microsoft's CapEx, as not everything has been invested yet in 2025, and OpenAI might not account for all of that CapEx.

Speaker 20 It also includes the $41.4 billion of funding that OpenAI has received so far. The true cost of Anthropic is around $77.1 billion, and that's not including the $13 billion they just raised.

Speaker 20 But it does include all their previous funding and 30% of Amazon's CapEx from the beginning of 2024.
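
[Transcript note: the back-of-envelope arithmetic described here can be checked with a few lines of Python. This is a sketch of the episode's stated method, not a reported model; the 30% attribution share is the episode's own conservative assumption, and all dollar figures are the ones quoted in this episode.]

```python
# Back-of-envelope "true cost" math from this episode, in billions of dollars.
# The 30% share of hyperscaler CapEx attributed to OpenAI is the episode's
# stated conservative assumption, not a reported figure.

MS_CAPEX_2024 = 55.6        # Microsoft capital expenditures, 2024
MS_CAPEX_2025 = 80.0        # Microsoft expected capital expenditures, 2025
OPENAI_FUNDING = 41.4       # OpenAI funding received so far
ATTRIBUTION_SHARE = 0.30    # assumed share of Microsoft CapEx serving OpenAI

# 30% of two years of Microsoft CapEx, plus OpenAI's own funding
openai_true_cost = ATTRIBUTION_SHARE * (MS_CAPEX_2024 + MS_CAPEX_2025) + OPENAI_FUNDING
print(f"OpenAI true cost: ~${openai_true_cost:.1f}bn")  # ~$82.1bn, i.e. "at least $82 billion"

# Compare against Uber's total 2019-2022 losses
UBER_BURN = 24.9
print(f"Multiple of Uber's burn: {openai_true_cost / UBER_BURN:.1f}x")
```

Swapping in Amazon's $83 billion and $105 billion CapEx figures plus Anthropic's prior funding reproduces the roughly $77.1 billion Anthropic estimate the same way.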

Speaker 20 Now, these are inexact comparisons, but the classic argument is that Uber burned lots of money and worked out okay, when in fact the capital expenditures from 2024 onwards that are necessary to make OpenAI and Anthropic work are, each on their own, four times what Uber burned in over a decade.

Speaker 20 I also believe these numbers are conservative.

Speaker 20 There's a good chance that OpenAI and Anthropic dominate the CapEx of Amazon, Google, and Microsoft, in part because, well, what the fuck else are they buying all these GPUs for? Their own AI services don't appear to be making much money at all.

Speaker 20 Anyway, to put it real simple, AI has burned way more in the last two years than Uber burned in 10.

Speaker 20 Uber didn't burn money in the same way, didn't burn much in the way of capital expenditures, didn't require massive amounts of infrastructure, and isn't remotely the same in any way, shape, or form, other than that it burned a lot of money.

Speaker 20 And that burning wasn't because it was trying to build the core product; it was because it was trying to scale. It's all so stupid, and you know what? I'm not even done.

Speaker 20 In our next and final AI booster episode, we'll breeze through the dumbest of the dumb arguments. And I'll say why I'm finally drawing a line under these arguments for real, because it needs to be said.

Speaker 20 We need to say something.

Speaker 4 I hope you've enjoyed this.

Speaker 20 See you tomorrow.

Speaker 4 Godspeed.

Speaker 4 Thank you for listening to Better Offline. The editor and composer of the Better Offline theme song is Matt Osowski.
You can check out more of his music and audio projects at mattosowski.com.

Speaker 4 M-A-T-T-O-S-O-W-S-K-I dot com.

Speaker 2 You can email me at ez@betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter.

Speaker 2 I also really recommend you go to chat.wheresyoured.at to visit the Discord and go to r/betteroffline to check out our Reddit.

Speaker 2 Thank you so much for listening.

Speaker 29 Better Offline is a production of CoolZone Media.

Speaker 29 For more from CoolZone Media, visit our website, coolzonemedia.com or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

Speaker 18 Be honest, how many tabs do you have open right now?

Speaker 6 Too many?

Speaker 1 Sounds like you need Close All Tabs from KQED, where I, Morgan Sung, doomscroll so you don't have to.

Speaker 9 Every week, we scour the internet to bring you deep dives that explain how the digital world connects and divides us all.

Speaker 14 Everyone's cooped up in their house. I will talk to this robot.

Speaker 13 If you're a truly engaged activist, the government already has data on you. Driverless cars are going to mess up in ways that humans wouldn't.

Speaker 9 Listen to Close All Tabs, wherever you get your podcasts.

Speaker 14 Ah, smart water. Pure, crisp taste, perfectly refreshing.

Speaker 26 Wow, that's really good water.

Speaker 14 With electrolytes for taste, it's the kind of water that says, I have my life together.

Speaker 28 I'm still pretending the laundry on the chair is part of the decor.

Speaker 14 Yet, here you are, making excellent hydration choices.

Speaker 28 I do feel more sophisticated.

Speaker 14 That's called having a taste for taste.

Speaker 18 Huh, a taste for taste.

Speaker 14 I like that. Smartwater.

Speaker 18 For those with a taste for taste, grab yours today.

Speaker 25 Every business has an ambition. PayPal Open is the platform designed to help you grow into yours with business loans so you can expand and access to hundreds of millions of PayPal customers worldwide.

Speaker 25 And your customers can pay all the ways they want with PayPal, Venmo, Pay Later, and all major cards. So you can focus on scaling up.

Speaker 5 When it's time to get growing, there's one platform for all business: PayPal Open. Grow today at PayPalOpen.com.

Speaker 25 Loans subject to approval in available locations.

Speaker 26 The ocean moves us, surfing a wave or savoring the view. The ocean delights us as playful otters restore coastal kelp for us.
The ocean connects us.

Speaker 27 Visit MontereyBayAquarium.org/connects.

Speaker 1 This is an iHeart podcast.