The Case Against Generative AI (Part 1)
In part one of this week’s four-part case against generative AI, Ed Zitron walks you through how generative AI is sold through a complete misunderstanding of the concept of labor - and myth-building by companies like NVIDIA and OpenAI.
Latest Premium Newsletter: OpenAI Needs A Trillion Dollars In The Next Four Years: https://www.wheresyoured.at/openai-onetrillion/
YOU CAN NOW BUY BETTER OFFLINE MERCH! go to https://cottonbureau.com/people/better-offline use free99 for free shipping on orders of $99 or more.
Newsletter: wheresyoured.at Reddit: http://www.reddit.com/r/betteroffline Discord chat.wheresyoured.at Ed's Socials -
http://www.twitter.com/edzitron
instagram.com/edzitron
https://bsky.app/profile/edzitron.com
https://www.threads.net/@edzitron
email me ez@betteroffline.com
SOURCE LINKS: http://www.tinyurl.com/betterofflinelinks
See omnystudio.com/listener for privacy information.
Transcript
This is an iHeart podcast.
There's more to San Francisco with the Chronicle.
There's more food for thought, more thought for food.
There's more data insights to help with those day-to-day choices.
There's more to the weather than whether it's going to rain.
And with our arts and entertainment coverage, you won't just get out more, you'll get more out of it.
At the Chronicle, knowing more about San Francisco is our passion.
Discover more at sfchronicle.com.
So I love Square.
They make paying for stuff at a store really easy.
I just use them at Court Street Grocers to buy a sandwich here in Brooklyn, New York.
Their stuff's reliable, easy to use, and it just works for both businesses and their customers.
Square keeps up, so you don't have to slow down.
Get everything you need to run and grow your business without any long-term commitments.
And why wait?
Right now, you can get up to $200 off Square hardware at square.com/go/betteroffline.
That's S-Q-U-A-R-E dot com slash go slash better offline.
Run your business smarter with Square.
Get started today.
Your global campaign just launched.
But wait, the logo's cropped.
The colors are off and did legal clear that image?
When teams create without guardrails, mistakes slip through.
But not with Adobe Express, the quick and easy app to create on-brand content.
Brand kits and locked templates make following design guidelines a no-brainer for HR, sales, and marketing teams.
And commercially safe AI powered by Firefly lets them create confidently so your brand always shows up polished, protected, and consistent everywhere.
Learn more at adobe.com slash go slash express.
This is Larry Flick, owner of the Floor Store.
Leaves are falling and so are our prices.
Welcome to the Floor Store's Fall Sale.
Now through October 14th, get up to 50% off store-wide on carpet, hardwood, laminate, waterproof flooring, and much more.
Plus two years interest-free financing.
And we pay your sales tax.
The Floor Store's Fall Sale.
Cooler days, hotter deals, and better floors.
Go to floorstores.com to find the nearest of our 10 showrooms from Santa Rosa to San Jose.
The Floor Store, your area flooring authority.
Coolzone Media.
Hello, and welcome to Better Offline.
I am, of course, your host, Ed Zitron.
And after a few three-part episodes, I had an idea.
What if I did a four-parter?
In all seriousness, I know that this is a little bit long, but the topic we're about to explore demands quite a bit of depth, and it isn't something I could really do justice to in a one-parter or two-parter, or I guess even three-parter.
But let's get into it.
Over the last few months, we've felt the vibe shift downward in an aggressive way, with both Mark Zuckerberg and Clammy Sam Altman saying that we're in a bubble.
In the latter case, said warnings of a bubble are always couched in rank hypocrisy, as it's always implied that whoever it is and the companies they represent aren't part of that bubble but rather it's other people and other companies making unfortunate decisions.
The thing is, there's really no escape for either of these guys, not for Zuck and definitely not for Sam Altman.
And over the next four episodes I'm going to make a comprehensive case for the fact that we're in a bubble and condense everything I've been talking about into one series.
And I know I've been all over the place and I get a lot of people saying, oh, well, where did you talk about this and where did you talk about that?
And that's kind of fair when you put out as much as I do.
But I'm going to break this down in four episodes.
I'm going to give you a comprehensive argument for the existence of a bubble, and against generative AI in general.
But in this episode, I think it's good to start from the beginning and work our way forward, to track the thread from the origins of ChatGPT to the billions burned building data centers all over the world and the weak business justifications for burning nearly a trillion dollars to keep this hollow industry alive.
Now in 2022, a company called OpenAI surprised the world with a website called ChatGPT that could generate text that sort of sounded like a person, using a technology called large language models, or LLMs, which can also be used to generate images, video, and computer code, or at least would eventually.
Large language models require entire clusters of servers connected with high-speed networking, all containing this thing called a GPU, graphics processing units.
These are different to the GPUs in your Xbox or laptop or gaming PC.
They cost much, much more, and they're good at two processes: inference, the creation of an output from an LLM, and training, feeding masses of training data to the models, or feeding them information about what a good output might look like so they can later identify a thing or replicate it.
These models showed some immediate promise in their ability to articulate concepts or generate video, visuals, audio, text, and code.
They also immediately had one glaring, obvious problem.
Because they're probabilistic, meaning that they're just guessing whatever the right output might be, these models can't actually be relied upon to do exactly the same thing every single time.
So if you generated a picture of a person that you wanted to, for example, use in a storybook, every time you create a new page using the same prompt to describe the protagonist, the person would look different, and that difference could be minor, something that a reader could shrug off, or it could make the character look like a completely different person.
Now, none of this, by the way, is me validating or saying that any of this stuff is good, I'm just describing it.
Moreover, the probabilistic nature of generative AI meant that whenever you asked it a question, it would guess as to the answer, not because it knew the answer, but rather because it was guessing the right word to add to a sentence based on previous training data. As a result, these models would frequently make mistakes, something which we later referred to as hallucinations. And that's not even mentioning the cost of training these models, the cost of running them, the vast amounts of computational power they required, the fact that the legality of using materials scraped from books and the web without the owners' permission was and remains legally dubious, or the fact that nobody seemed to know how to use these models to actually create profitable businesses.
These problems were overshadowed by something flashy and new, and something that investors and the tech media believed would eventually automate the jobs that had proven most resistant to automation: knowledge work and the creative economy.
The newness and hype of these expectations sent the market into a frenzy, with every hyperscaler immediately creating the most aggressive market for one supplier I've ever seen.
Nvidia has sold over $200 billion of GPUs since the beginning of 2023, becoming the largest company on the American stock market and trading at over $170 a share as of writing this sentence, only a few years after being worth $19.52 a share.
Now there's a stock split that happened there, but it works out that way.
Now while I've talked about some of the propelling factors behind the AI wave, automation and novelty, that's not really the complete picture.
A huge reason why everybody decided to do AI was because the software industry's growth was slowing and SaaS, software as a service, company valuations were stalling or dropping, resulting in the terrifying prospect of companies having to under-promise and over-deliver and be efficient.
You know, gross things like running sustainable businesses.
Things that normal companies, those whose valuations aren't contingent on ever-increasing, ever-constant growth, don't have to worry about because they're normal companies.
Suddenly, there was a new promise of new technology, large language models that were getting exponentially more powerful, which was mostly a lie, but hard to disprove, because powerful can mean basically anything.
And the definition of powerful depended entirely on whoever you asked at any given time and what that person's motivations were.
The media also immediately started tripping over its own feet, mistakenly claiming OpenAI's GPT-4 model tricked a TaskRabbit worker into solving a CAPTCHA, it didn't, this never happened, or saying that, and I quote, people who don't know how to code already used bots to produce full-fledged games.
And if you were wondering what the New York Times was referring to when they said full-fledged there, it meant Pong and a cobbled-together rolling demo of Skyroads, a game from 1993, likely because a bunch of that training data was fed into the models.
Now, the media and investors helped peddle the narrative that AI was always getting better, could basically do anything, and that any problems you saw today would inevitably be solved in a few short months or years or at some point, I guess.
Not really sure when that point is, but damn do they think it's coming.
And LLMs were touted as a kind of digital panacea, and the companies building them offered traditional software companies the chance to plug these models into their software using an API, thus allowing them to ride the same generative AI wave that every other company was riding.
The model companies similarly started going after individual and business customers, offering software and subscriptions that promised the world, though this mostly boiled down to chatbots that could generate stuff, and then doubled down with the promise of agents, a marketing term that's meant to make you think autonomous digital worker, but really means broken digital chatbot of some sort, or just broken digital product.
It really depends how you're feeling that day.
Throughout this era, investors and the media spoke with a sense of inevitability that they never really backed up with data.
It was an era based on confidently asserted vibes.
Everything was always getting better and more powerful, even though there was never much proof that this was truly disruptive technology, other than in its ability to disrupt apps you were using with AI, making them worse, for example, suggesting questions on every Facebook post that you could ask Meta AI, but which Meta AI couldn't answer.
And I mean on memes, on just random posts.
It's really not useful in any way, shape, or form.
AI became omnipresent, and it eventually grew to mean everything and nothing.
OpenAI would see its every move lauded like a gifted child's, its CEO Sam Altman called the Oppenheimer of our age, even if it wasn't really obvious why everybody was impressed.
GPT-4 felt like something a bit different, but was it actually meaningful?
The thing is, artificial intelligence is built and sold on not just faith, but a series of myths that AI boosters expect us to believe with the same certainty that we treat things like gravity or the boiling point of water.
Can large language models actually replace coders?
Not really, no, and I'll get into why later in this series.
So I'm a big fan of Quince.
I've been shopping with them long before they advertised with the show, and I just picked up a bunch of their Pima cotton t-shirts after they came back in stock, as well as another overshirt, because I love to wear them like a jacket over a t-shirt.
Talking of jackets, I'm planning to pick up one of their new leather racer jackets very, very soon.
Their clothes fit well, they fall nicely on the body, and feel high quality, like you'd get at a big, nice department store, except they're a lot cheaper because Quince is direct-to-consumer.
And that's part of what makes Quince different.
They partner directly with ethical factories and skip the middlemen, so you get top fabrics and craftsmanship at half the price of similar brands, and they ship quickly too.
I highly recommend them, and we'll be giving them money in the future.
Layer up this fall with pieces that feel as good as they look.
Go to quince.com slash better for free shipping on your order and 365-day returns.
Now available in Canada too.
That's Q-U-I-N-C-E dot com slash better.
Free shipping and 365 day returns.
Quince.com slash better.
Still using a copy-paste website?
Break the template trap with Framer.
Whether you're overwhelmed by traditional site builders or frustrated with cookie-cutter designs, Framer gives you the freedom to create a site that's professional, polished, and uniquely yours.
Hit publish and Framer ships your site worldwide in seconds.
No code, no compromises.
While DIY website tools are everywhere, most fall short on design and performance.
Framer changes that with a powerful, user-friendly platform that delivers developer-level results without writing a single line of code.
Framer is the design-first, no-code website builder that lets anyone ship a production-ready site in minutes.
It's free to start.
Browse hundreds of pixel-perfect templates or design from a totally blank canvas, with multiplayer collaboration allowing your writer, designer, and marketer all to tweak the same page at once.
No version control nightmares.
Ready to build a site that looks hand-coded without hiring a developer?
Launch your site for free at framer.com and use code offline to get your first month of pro on the house.
That's framer.com promo code offline.
Framer.com promo code offline.
Rules and restrictions may apply.
Tires matter.
They're the only part of your vehicle that touches the road.
Tread confidently with new tires from Tire Rack.
Whether you're looking for expert recommendations or know exactly what you want, Tire Rack makes it easy.
Fast, free shipping, free road hazard protection, and convenient installation options.
Go to TireRack.com to see tire test results, tire ratings, and consumer reviews.
And be sure to check out all the special offers.
TireRack.com, the way tire buying should be.
Is your financial advisor supported by a firm with over 130 years of strength and stability in changing markets?
Every Ameriprise financial advisor is.
When you work with Ameriprise, you get personal financial advice from an advisor who knows you, and a range of goal-based investing solutions.
No wonder Ameriprise is rated 4.9 out of 5 in overall client satisfaction.
Visit Ameriprise.com slash advice for information and disclosures.
Securities offered by Ameriprise Financial Services LLC, member FINRA and SIPC.
Can Sora, OpenAI's video creation tool, replace actors or animators?
No, not at all, but it still fills the air full of tension, because you can immediately see who is eager to replace everyone that works for them.
AI is apparently replacing workers, but nobody seems able to prove it at scale.
But every few weeks a story runs where everybody tries to pretend that AI is replacing workers with some sort of poorly sourced and incomprehensible study, never actually saying somebody's job got replaced by AI, because it isn't happening at scale.
And because if you provide real-world examples, people can actually check if they're true.
Now I want to be clear.
Some people have lost jobs to AI.
Just not, for the most part, white-collar workers, software engineers, or really any of the career paths that the mainstream media and AI investors would have you believe.
Brian Merchant has done excellent work covering how LLMs have devoured the work of translators, using cheap, almost-good automation to lower already stagnant wages in a field that had already been hurting before the advent of generative AI, with some having to abandon the field and others pushed into bankruptcy.
I've heard the same for art directors, SEO experts and copy editors, and Christopher Mims of the Wall Street Journal covered these last year.
These fields all have something in common.
Shitty bosses with little regard for their customers who have been eagerly waiting for the opportunity to slash labor.
To quote Merchant, the drumbeat, marketing and pop culture of powerful AI encourages and permits management to replace or degrade jobs they might not otherwise have.
Across the board, the people being replaced by AI are the victims of lazy, incompetent cost cutters who don't care if they ship poorly translated text.
To quote Merchant again, AI hype has created the cover necessary to justify slashing rates and accepting just good enough automation output for video games and media products.
Yet the jobs crisis facing translators speaks to the larger flaws of the large language model era and why other careers aren't seeing this kind of disruption.
Generative AI creates outputs and by extension defines all labor as some kind of output created from a request.
In the case of translation, it's possible for a company to get by with a shitty version because many customers see translation as what do these words say, even though, as one worker told Brian Merchant, translation is about conveying meaning.
Nevertheless, translation work has already started to condense to a world where humans would at times clean up machine-generated text, and the same worker warned that the same might come for other industries.
Yet the problem is that translation is a heavily output driven industry, one where idiot bosses can say, oh yeah, that's fine, because they ran an output back through Google Translate and it seemed fine in their native tongue.
The problems of a poor translation are obvious, but customers of translation are, it seems, often capable of getting by with a shitty product.
The problem is that most jobs are not output-driven at all and what we're buying from a human being is a person's ability to think and do.
Every CEO talking about replacing workers with AI is an example of the real problem, that most companies are run by people who don't understand or experience the problems they're solving, don't do any real work, don't face any real problems, and thus can never be trusted to solve them.
In the era of the business idiot, which is a piece I wrote a while ago, I talked about how this was the result of letting management consultants and neoliberal free market sociopaths take over everything, leaving us with companies run by people who don't know how the companies make money, just that they must always make more without fail.
And when you're a big, stupid asshole, every job that you see is condensed to its outputs, and not the stuff that leads up to the output, or the small nuances and conscious decisions that make an output good as opposed to simply acceptable or even bad.
What does a software engineer do?
They write code, right?
What does a writer do?
They write words, right?
What does a hairdresser do?
They cut hair.
Yet that's of course not actually the case.
As I'll get into later in the series, a software engineer does far more than just code.
And when they write code, they're not just saying, what would solve this problem with a big smile on their face?
They're taking into account their years of experience, what code does, what code could do, and all the things that might break as a result, and all of the things that you can't really tell from just looking at the code, like whether there's a reason things are made in a particular way.
And a good coder doesn't just hammer at the keyboard with the aim of doing a particular task.
They factor in questions like, how does this functionality fit into the code that's already there?
Or if someone has to update this code in the future, how do I make it easy for them to understand what I've written and make changes without breaking a bunch of other stuff?
A writer doesn't just write words.
They jostle ideas and emotions and thoughts and facts and feelings into a condensed piece of text.
They sit up late at night typing thousands and thousands of words that drive them insane.
It's often quite emotive, or at the very least, driven or inspired by a given emotion, which is something that an AI simply can't replicate in a way that's authentic or believable.
And a hairdresser doesn't just cut hair.
They cut your hair, which may be wiry, dry, oily, long, short, healthy, unhealthy on a scalp with particular issues at a time of year when perhaps you want to change length at a time that fits you and the way you like it, which may be impossible to actually write down, but they get it just right.
And they make conversation, making you feel at ease while they snip and clip away at your tresses with you never having to think for a second, fuck, does this person know what they're doing?
Are they going to listen to me?
This is the true nature of labor that executives fail to comprehend at scale.
That the things we do are not units of work, but extrapolations of experience, emotion, and context that cannot be condensed into written meaning or bunches of training material.
Business idiots see our labor as the results of a smart manager saying, do this, rather than human ingenuity interpreting both a request and the shit the manager didn't say.
Now, what does a CEO do?
Um,
well, uh, I did look, and a Harvard study said that they spend 25% of their time on people and relationships, 25% on functional and business unit reviews, 16% on organization and culture, and 21% on just strategy, with a few percent here and there for things like professional development.
Hmm.
That's who runs the vast majority of companies.
People that describe their work predominantly as looking at stuff, talking to people, thinking what we do next, and going to lunch.
The most highly paid jobs in the world are impossible to describe.
Their labor is described in a mishmash of LinkedIn-speak.
Yet everybody else's labor is an output that can be automated.
As a result, large language models must seem like magic to these dickheads.
When you see everything as an outcome, an outcome you may or may not understand, and definitely don't understand the process behind, let alone care about, you kind of already see your workers as LLMs.
You create a stratification of the workforce that goes beyond the normal organizational chart, with senior executives, those closer to the class level of CEO, acting as those who have risen above the doldrums of doing things to the level of decision-making, a fuzzy term that can mean everything from making nuanced decisions with input from multiple different subject matter experts, to, as ServiceNow's Bill McDermott did in 2022, and I quote, make it clear to everybody in a boardroom of other executives that everything they do must be AI, AI, AI, AI, AI, and that's five of those.
The same extends to some members of the business and tech media that have, for the most part, gotten by without having to think too hard about the actual things the companies are saying.
Look, I realize this sounds a little mean, and it's not a unilateral statement.
And I must, must be clear, it doesn't mean that these people know nothing, just that it's been possible to scoot through the world without thinking too hard about whether or not something is true, just because an executive said it.
When Salesforce said back in 2024 that its Einstein Trust Layer AI would be transformational for jobs, the media dutifully wrote it down and published it without a second thought.
It fully trusted Marc Benioff when he said that Agentforce agents would replace human workers, and then again when he said that AI agents were doing 30 to 50 percent of all the work in Salesforce itself, even though that's an unproven and nakedly ridiculous statement.
Salesforce's CFO, by the way, said earlier this year that AI wouldn't boost sales growth in 2025.
One would think this would change how Salesforce was covered, or how seriously one takes Marc Benioff, but it hasn't, because nobody's really paying attention.
In fact, nobody seems to want to do their job in this case.
And this is how the core myths of generative AI were built, by executives saying stuff and the media publishing it without thinking about it.
AI is replacing workers.
AI is writing entire computer programs.
AI is getting exponentially more powerful.
What does powerful mean?
Well, it means that the models are getting better on benchmarks that are rigged in their favor.
But because nobody fucking explains what the benchmarks are, regular people are regularly told that AI is powerful and getting more powerful every single day.
The only thing powerful about generative AI is its pathology.
The world's executives, entirely disconnected from labor and actual production, are doing the only thing they know how to do: spend a bunch of money and say vague stuff about AI being the future.
There are people, journalists, investors and analysts that have built entire careers on filling in the gaps for the powerful as they splurge billions of dollars and repeat with increasing desperation that the future is here and then, well,
absolutely nothing else happens.
You've likely seen a few ridiculous headlines recently, though.
One of the most recent and most absurd is that OpenAI will pay Oracle $300 billion over the next four years, closely followed with the claim that NVIDIA will invest, and I put that in air quotes, $100 billion in OpenAI to build 10 gigawatts of AI data centers, though the deal is structured in a way that means OpenAI is paid progressively as each gigawatt is deployed.
And also apparently OpenAI will be leasing the chips rather than buying them outright.
I must be clear that these deals are intentionally made to continue the myth of generative AI, to pump NVIDIA, and to make sure OpenAI insiders can sell $10.3 billion worth of shares, which they're currently trying to do at a valuation of $500 billion goddamn dollars.
I want to be clear about something.
OpenAI cannot afford the $300 billion.
OpenAI has not received a dollar from NVIDIA, and won't do so for at least a month, when I think they're going to receive $10 billion.
But the rest of that $90 billion, that's only coming when they build those data centers, which OpenAI can't afford to do.
NVIDIA needs this myth to continue, because in truth, all of these data centers are being built for demand that doesn't exist, or that, if it did exist, doesn't necessarily translate into business customers paying huge amounts for access to OpenAI's generative AI services.
NVIDIA, OpenAI, Coreweave and other AI-related companies hope that by announcing theoretical billions of dollars or hundreds of billions of dollars of these strange, vague and impossible seeming deals, they can keep pretending that the demand is there because why else would they build all these data centers, right?
Well, there's that and the entire stock market rests on NVIDIA's back.
It accounts for 7-8% of the value of the S&P 500, and Jensen Huang needs to keep selling those fucking GPUs.
I intend to explain later how all of this works and how brittle all of this really is.
But the intention of these deals is simple, to make you think this much money can't be wrong.
And I assure you it can.
These people need you to believe this is inevitable, but they are being proven wrong again and again and again.
And today, I'm going to continue proving them wrong.
Underpinning these stories about huge amounts of money and endless opportunity lies a dark secret: none of this is working, and all of this money has been invested in a technology that doesn't make much revenue and loves to burn millions or billions or hundreds of billions of dollars.
Over half a trillion dollars, in fact, has gone into an entire industry without a single profitable company developing models or products built on top of these AI models.
By my estimates, there's about $44 billion of revenue in generative AI this year when you add Anthropic and OpenAI's revenue to the pot along with other stragglers.
And most of that number has been gathered from reporting from outlets like The Information, because none of these companies share their revenues, all of them lose shit tons of money, and their actual revenues are really, really, really small.
Only one member of the Magnificent 7 outside of NVIDIA has ever disclosed its AI revenue: Microsoft, which stopped reporting in January 2025, when it said it had $13 billion in annualized AI revenue, which, because annualized means the month's revenue times 12, works out to about $1.083 billion a month.
I know that sounds like a lot, but Microsoft is a sales machine.
It's built specifically to create or exploit software markets, suffocating competitors by using its scale to drive down prices and to leverage the ecosystem it's created over the past few decades.
$1 billion a month in revenue is chump change for an organization that makes over $27 billion a quarter in profits.
But Ed, it's the early days.
How did you get in here?
Ow!
Go up!
Thank you, Scott.
Did you not listen to my three-part series on how to argue with an AI booster?
I went over it over there.
Get out!
Okay.
This is also nothing like any other tech era.
There's never been this kind of cash rush, even in the fiber boom.
Over a decade, Amazon spent about a tenth of the capex that the Magnificent 7 spent in the last two years on generative AI to build Amazon Web Services, something that now powers a vast chunk of the web and has long been Amazon's most profitable business unit.
Generative AI is also nothing like Uber, with OpenAI and Anthropic's true costs coming in at around $159 billion in the past two years, approaching five times Uber's $30 billion all-time burn, and that's before the bullshit with NVIDIA and Oracle.
Microsoft last reported its AI revenue in January, by the way.
It's now October.
Why did it stop reporting the number, do you think?
Is it because the numbers are so good they couldn't possibly let you know?
Hmm.
As a general rule, publicly traded companies, especially those where the leadership is compensated primarily in equity, meaning stock, tend to brag about their successes, in part because bragging boosts the value of the thing that the leadership gets paid in.
There's no benefit to being shy.
Look at Oracle: they literally filed something saying they had that huge $300 billion contract.
They did that to spike the stock.
Why is Microsoft not doing that with their incredible AI revenues?
Do you think it's because they're shy?
Come on, Satya.
Come on out.
Come on, Satya.
You can show me the numbers.
But in all seriousness, if Microsoft can't sell this, nobody can.
Alright, so I'm explaining this whole thing as if you're brand new and walking up to this relatively unprepared.
So I need to introduce another company.
In 2021, a splinter group split off from OpenAI, funded by Amazon and Google, to do much the same thing as OpenAI, but pretended to be nicer about it until they had to raise money from the Middle East.
I'm, of course, talking about Anthropic, and they've always been a bit better at coding for some reason, and people really like their Claude models, but like does not mean profit or even much revenue.
Both OpenAI and Anthropic have become the only two companies in generative AI to make any real progress, either in terms of recognition or in sheer commercial terms, accounting for the vast majority of revenue in the AI industry.
In a very real sense, the AI industry's revenue is OpenAI and Anthropic.
In the year where Microsoft recorded $13 billion in AI revenues, $10 billion of that came from OpenAI's spending on Microsoft Azure.
Anthropic burned $5.3 billion last year, with the vast majority of that going towards compute.
Outside of these two companies, there's barely enough revenue to justify a single data center.
Where we sit today is a time of immense tension.
Mark Zuckerberg says we're in a bubble.
Sam Altman says we're in a bubble.
Alibaba chairman and billionaire Joe Tsai says we're in a bubble.
Apollo says we're in a bubble.
Nobody's making money and nobody knows why they're actually doing this anymore.
Just that they must do it and must do so immediately.
And they've yet to make the case that generative AI warranted any of the expenditures.
Now we're a quarter of the way through this four-parter, but this one was necessary.
I needed to get you up to speed and kind of give you the lay of the land.
Because we're going to go a little deeper in the next episode, and I can't wait for you to hear it.
See you tomorrow.
Thank you for listening to Better Offline.
The editor and composer of the Better Offline theme song is Matt Osowski.
You can check out more of his music and audio projects at mattosowski.com.
M-A-T-T-O-S-O-W-S-K-I dot com.
You can email me at ez@betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter.
I also really recommend you go to chat.wheresyoured.at to visit the Discord, and go to r/betteroffline to check out our Reddit.
Thank you so much for listening.
Better Offline is a production of CoolZone Media.
For more from CoolZone Media, visit our website, coolzonemedia.com or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
This is an iHeart podcast.