The Case Against Generative AI (Part 2)
In part two of this week’s four-part case against generative AI, Ed Zitron walks you through how NVIDIA funds and pumps money into unprofitable, debt-ridden “neoclouds,” creating vehicles to buy more GPUs - all to cover up the lack of demand for generative AI compute.
Latest Premium Newsletter: OpenAI Needs A Trillion Dollars In The Next Four Years: https://www.wheresyoured.at/openai-onetrillion/
YOU CAN NOW BUY BETTER OFFLINE MERCH! Go to https://cottonbureau.com/people/better-offline and use code FREE99 for free shipping on orders of $99 or more.
---
LINKS: https://www.tinyurl.com/betterofflinelinks
Newsletter: https://www.wheresyoured.at/
Reddit: https://www.reddit.com/r/BetterOffline/
Discord: chat.wheresyoured.at
Ed's Socials:
https://www.instagram.com/edzitron
See omnystudio.com/listener for privacy information.
Transcript
This is an iHeart podcast.
There's more to San Francisco with the Chronicle.
There's more food for thought, more thought for food.
There's more data insights to help with those day-to-day choices.
There's more to the weather than whether it's going to rain.
And with our arts and entertainment coverage, you won't just get out more, you'll get more out of it.
At the Chronicle, knowing more about San Francisco is our passion.
Discover more at sfchronicle.com.
From Australia to San Francisco, Cullen Jewelry brings timeless craftsmanship and modern lab-grown diamond engagement rings to the U.S.
Explore solitaire, trilogy, halo, and bezel settings, or design a custom ring that tells your love story.
With expert guidance, a lifetime warranty, and a talented team of in-house jewelers behind every piece, your perfect ring is made with meaning.
Visit our new Union Street showroom or explore the range at cullenjewelry.com.
Your ring, your way.
You're juggling a lot.
Full-time job, side hustle, maybe a family, and now you're thinking about grad school?
That's not crazy.
That's ambitious.
At American Public University, we respect the hustle and we're built for it.
Our flexible online master's programs are made for real life because big dreams deserve a real path.
At APU, the bigger your ambition, the better we fit.
Learn more about our 40-plus career-relevant master's degrees and certificates at apu.apus.edu.
This is Larry Flick, owner of the Floor Store.
Leaves are falling, and so are our prices.
Welcome to the Floor Store's Fall Sale.
Now through October 14th, get up to 50% off store-wide on carpet, hardwood, laminate, waterproof flooring, and much more.
Plus two years interest-free financing, and we pay your sales tax.
The Floor Store's Fall Sale.
Cooler days, hotter deals, and better floors.
Go to floorstores.com to find the nearest of our 10 showrooms from Santa Rosa to San Jose.
The Floor Store, your area flooring authority.
CoolZone Media.
Hello, I'm Ed Zitron, and this, of course, is Better Offline.
Welcome to the second part of our four-part series, where I give you my most comprehensive, most up-to-date explanation of why we're in a bubble and what that even means.
The reason why I'm taking my time to be descriptive and comprehensive is because I want this to make sense to those who listen to it.
I've written hundreds of thousands of words this year about the AI bubble, and so many of the arguments I've made and the secrets I've exposed are contained in their own discrete little episodes or newsletters.
This is my series to consolidate all of the information I've put out there in one place.
And I want to make it make sense to anyone who listens to it.
I want anyone, even someone who doesn't even know that much about AI, to listen to the arguments I've been making for the past three years, to understand why things are dire, and to feel the same alarm I'm feeling, or at least understand why I'm alarmed.
Because I don't like to tell you how you feel.
Old school bit of feedback I got from a listener once, and I appreciate that to this day.
Now, today I'll make the case that generative AI's fundamental growth story is flawed and explain why we're in the midst of an egregious bubble.
This industry is sold by keeping things vague, and by knowing that most people don't dig much deeper than a headline, a problem I simply do not have.
This industry is effectively in service of two companies, OpenAI and NVIDIA, who pump headlines out through endless contracts between them or subsidiaries or investments to give the illusion of activity.
OpenAI has now promised over $400 billion in spending in the next four years, though honestly they might owe about a trillion dollars with all the data centers they've signed up for.
All of these are egregious sums for a company that has already forecast billions in losses, with no clear explanation as to how it will afford any of this beyond "we need more money" and the vague hope that there's another SoftBank or Microsoft waiting in the wings to swoop in and save the day.
Now I'm going to walk you through where I see this industry today and why I see no future for it beyond a horrible fiery car wreck.
While everybody reasonably harps on about hallucinations, which to remind you is when a model authoritatively states something that isn't true, the truth of why that's bad is far more complex and actually far worse than it seems.
You cannot rely on a large language model to do what you want.
Even the most highly tuned models on the most expensive and intricate platforms can't actually be relied upon to do exactly what you want.
And I know some people might say, well, yes, they do.
Every time, 100% of the time.
A hallucination isn't just when these models say something that isn't true.
It's when they decide to do something wrong because it seems the most likely thing to do, or when a coding model decides to go on a wild goose chase, failing the user and burning a ton of money in the process.
The advent of reasoning models, those engineered to think through problems in a way reminiscent of a human (but it's not thinking; they don't think, they have no consciousness; you ask them something and they break down what the prompt might mean and then choose, which is not thinking), and the expansion of what people are trying to use LLMs for, demand that the definition of an AI hallucination be widened: not merely referring to factual errors, but to fundamental errors in understanding the user's request or intent, or what constitutes a task, in part because these models, as I said, cannot think and do not know anything.
However successful a model might be in generating something good once, it will also often generate something bad, or it'll generate the right thing, but in an inefficient and over-verbose fashion.
You do not know what you're going to get each time, and hallucinations multiply with the complexity of the thing you're asking for, or whether a task contains multiple steps, which is a fatal blow to the idea of agents.
You can add as many levels of intrigue and reasoning as you want, but large language models cannot be trusted to do something correctly, or even consistently, let alone every time.
Model companies have successfully convinced everybody that the issue is that users are prompting the models wrong, and that the people need to be trained to use AI, but what they're doing is training people to explain away the inconsistencies of large language models, and to assume individual responsibility for what is an innate flaw in how these fucking things work.
Large language models are also uniquely expensive.
Many mistakenly try and claim that this is like the dot-com boom or Uber, but the basic unique economics of generative AI are insane.
Providers must purchase tens or hundreds of thousands of GPUs, each costing $50,000 to $70,000 apiece, plus the hundreds of millions or billions of dollars of infrastructure that goes around them, all of it expensive and hard to install, and that's without mentioning things like staffing, or construction, or power, or water, or even permitting.
Then you turn them on, and immediately they start losing you money.
Despite hundreds of billions of dollars of GPUs sold, nobody seems to actually make any money on it, other than NVIDIA, of course, the company that makes them, and resellers like Dell and Supermicro, who buy the GPUs, put them in servers, and sell them to other people.
Now, if you're an eager listener, I would love to hear from you on one question.
And this is just something that's been bouncing around my head.
Supermicro.
Is NVIDIA a customer of Supermicro?
Supermicro is a huge customer of NVIDIA; I read something like 70% of their cost of goods sold is buying GPUs.
But I've also read that NVIDIA was a customer of theirs, and I can't find anything else.
Reach out, ez@betteroffline.com, if you've got any thoughts there.
Anyway, but back to those resellers, this arrangement works out great for Jensen Huang, the CEO of NVIDIA, and terribly for everybody else.
Today I'm going to explain the insanity of the situation we find ourselves in and why I continue to do this work undeterred.
The bubble has entered its most pornographic, aggressive, and destructive stage, where the more obvious it becomes that we're all cooked here in AI land, the more ridiculous the generative AI industry will act.
A dark juxtaposition against every new study that says generative AI does not work, or news story about ChatGPT's uncanny ability to activate mental illness in people.
And we're going to start looking at one company, NVIDIA, which now dominates the stock market and has taken extraordinary and dangerous measures to sustain growth that is, to any sane person, completely unsustainable and unrealistic on every level.
But let's start simple.
NVIDIA is a hardware company that sells GPUs, including consumer GPUs that you'd see in a modern gaming PC.
But when you read someone say GPU within the context of AI, they mean enterprise-focused GPUs like the A100, H100, H200 and more modern GPUs like the Blackwell Series B200 and GB200, which combines two GPUs with an NVIDIA CPU.
This is all complex sounding, but I want you to have the groundwork.
These GPUs cost anywhere from $50,000 to $70,000 and require tens of thousands of dollars more of infrastructure: networking to cluster these server racks of GPUs together to provide compute, massive cooling systems to deal with the massive amounts of heat they produce, as well as the servers themselves that they run on, which typically use top-of-the-line data center CPUs and contain vast quantities of high-speed memory and storage.
While the GPU itself is likely the most expensive single item within an AI server, the other costs, and I'm not even factoring in the actual physical building that the server lives in, or the water or electricity that it uses, well, all this crap adds up.
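To put rough, hedged numbers on how it adds up: the per-GPU price range comes from this episode, but the cluster size and the infrastructure overhead multiplier below are purely my own illustrative assumptions.

```python
# Back-of-the-envelope cluster cost. The $50k-$70k per-GPU range is from the
# episode; the cluster size and overhead multiplier are illustrative assumptions.
gpus_in_cluster = 10_000
cost_per_gpu = 60_000        # midpoint of the $50,000-$70,000 range
infra_overhead = 1.5         # assumed: servers, networking, cooling on top of GPUs

gpu_spend = gpus_in_cluster * cost_per_gpu
total_spend = gpu_spend * infra_overhead
print(f"GPUs alone: ${gpu_spend / 1e9:.1f} billion")          # $0.6 billion
print(f"With surrounding infrastructure: ${total_spend / 1e9:.2f} billion")
```

And that's before the building the servers live in, or the power, water, and permitting.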
I've mentioned NVIDIA because it has a virtual monopoly in this space.
Generative AI effectively requires NVIDIA GPUs, in part because it's the only company really making the kinds of high-powered cards that generative AI demands, and because NVIDIA created something called CUDA, C-U-D-A, a collection of software tools that lets programmers write software that runs on GPUs, which were traditionally used primarily for rendering graphics in games.
While there are some open source alternatives, as well as alternatives from Intel with its Arc GPUs and AMD, NVIDIA's main rival in the consumer space, these aren't nearly as mature or feature-rich.
CUDA's been around since 2007, and they really knew what they were doing.
They also bought a company called Mellanox, which did the high-speed networking, back in 2019 for $6.9 billion.
Anyway, due to the complexities of AI models, one cannot just stand up a few of these GPUs either.
You need clusters of thousands, tens of thousands, or hundreds of thousands of them for it to be worthwhile, putting any meaningful investment in GPUs in the hundreds of millions or billions of dollars, especially considering they require completely different data center architecture to make them run.
You've probably read a bunch of stuff about crypto miners turning into AI data center providers.
These crypto data centers have to be knocked down and replaced.
You can't just put the same GPUs in.
It isn't going to work.
And with the new Blackwell ones, the brand new ones, and then the Rubin series following them, same deal.
A common request, like asking a generative AI model to parse through thousands of lines of code and make a change or an addition, may use multiples of these $50,000 GPUs at the same time.
And so, if you aspire to serve thousands or millions of concurrent users, you need to spend big.
Really, really, really big.
It's these factors, the vendor lock-in, the ecosystem, and the fact that generative AI really only works when you're buying GPUs at scale, that underpin the rise of NVIDIA.
But beyond the economic and technical factors, there are human ones too.
To understand the AI bubble is to understand why CEOs do the things they do: because an executive's job is so vague, they can telegraph the value of their labor by spending money on initiatives and partnerships and stratagems.
AI gave hyperscalers the excuse to spend hundreds of billions of dollars on data centers and buy a bunch of GPUs to go in them because that to the markets looks like they're doing something.
By virtue of spending a lot of money in a frighteningly short amount of time, Satya Nadella received multiple glossy profiles, all without having to prove that AI can really do anything, be it a job or make Microsoft money.
Nevertheless, AI allowed CEOs to look busy, and once the markets and journalists had agreed on the consensus opinion that AI would be big, all that these executives had to do was buy GPUs and do AI, or plug AI within their own software products.
But really, it was just jump on the big, stupid asshole train.
So I'm a big fan of Quince.
I've been shopping with them long before they advertised with the show, and I just picked up a bunch of their Pima cotton t-shirts after they came back in stock, as well as another overshirt, because I love to wear them like a jacket over a t-shirt.
Talking of jackets, I'm planning to pick up one of their new leather racer jackets very, very soon.
Their clothes fit well, they fall nicely on the body, and feel high quality, like you get at a big, nice department store, except they're a lot cheaper because Quince is direct-to-consumer.
And that's part of what makes Quince different.
They partner directly with ethical factories and skip the middlemen, so you get top fabrics and craftsmanship at half the price of similar brands, and they ship quickly too.
I highly recommend them, and we'll be giving them money in the future.
Layer up this fall with pieces that feel as good as they look.
Go to quince.com/better for free shipping on your order and 365-day returns.
Now available in Canada, too.
That's q-u-i-n-c-e dot com slash better.
Free shipping and 365-day returns.
Quince.com/better.
Still using a copy-paste website?
Break the template trap with Framer.
Whether you're overwhelmed by traditional site builders or frustrated with cookie-cutter designs, Framer gives you the freedom to create a site that's professional, polished, and uniquely yours.
Hit publish, and Framer ships your site worldwide in seconds.
No code, no compromises.
While DIY website tools are everywhere, most fall short on design and performance.
Framer changes that with a powerful, user-friendly platform that delivers developer-level results without writing a single line of code.
Framer is the design-first no-code website builder that lets anyone ship a production-ready site in minutes.
It's free to start.
Browse hundreds of pixel-perfect templates or design from a totally blank canvas, with multiplayer collaboration allowing your writer, designer, and marketer all to tweak the same page at once.
No version control nightmares.
Ready to build a site that looks hand-coded without hiring a developer?
Launch your site for free at framer.com and use code offline to get your first month of pro on the house.
That's framer.com promo code offline.
Framer.com promo code offline.
Rules and restrictions may apply.
Trading at Schwab is now powered by Ameritrade, giving you even more specialized support than ever before.
Like access to the trade desk, our team of passionate traders ready to tackle anything from the most complex trading questions to a simple strategy gut check.
Need assistance?
No problem.
Get 24-7 professional answers and live help and access support by phone, email, and in-platform chat.
That's how Schwab is here for you to help you trade brilliantly.
Learn more at schwab.com slash trading.
We are in the midst of one of the darkest forms of software in history, described by many as unwanted guests invading their products, their social media feeds, their bosses' empty minds, and resting in the hands of monsters.
Every story of AI's success feels bereft of any real triumph, with every literal description of its abilities involving multiple caveats about the mistakes it makes or the incredible costs of running it.
Generative AI really exists for two reasons, to cost money and to make executives look busy.
It was meant to be the new enterprise software and the new iPhone and the new Netflix all at once, a panacea where the software guys pay one hardware guy for GPUs to unlock the incredible value creation of the future.
In many ways, generative AI was always set up to fail because it was meant to be everything, was talked about like it was everything, is still sold like it's everything, yet for all the fucking hype it comes down to two companies, OpenAI and NVIDIA.
And NVIDIA was for a while living high on the hog.
All CEO Jensen Huang had to do every three months was say, check out these numbers, and the markets and business journalists would squeal with glee, even as he said stuff like "the more you buy, the more you save," in part tipping his hat to the very real and sensible idea of accelerated computing, but framed within the context of the cash inferno that is generative AI.
And it all seems kind of fucking ludicrous.
Huang's showmanship worked really well for NVIDIA for a while because for a while the growth was easy.
Everybody was buying GPUs.
Meta, Microsoft, Amazon, Google, and to a lesser extent Apple and Tesla made up 42% of NVIDIA's revenue, creating, at least for the first four, a degree of shared mania where everybody justified buying tens of billions of dollars of GPUs by saying the other guy's doing it.
This is one of the major reasons the AI bubble is happening, because people conflated NVIDIA's incredible sales with interest in AI, rather than everybody buying GPUs at once.
Don't worry, I'll explain the revenue side a little bit later.
We're here for the long haul.
Sit down, get comfy.
You're going to need to be.
Anyway, NVIDIA is now facing a big problem.
The only thing that grows forever is cancer.
On September 9th, 2025, the Wall Street Journal said that Nvidia's wow factor was fading, going from beating analyst estimates by nearly 21% in its fiscal year Q2 2024 earnings to scraping by with a pathetic, measly 1.52% beat in its most recent earnings, something that for any other company would be a good thing because they made so much money, but framed against the delusional expectations that generative AI has inspired, well, the figure looks nothing short of ominous.
I quote the Wall Street Journal.
Already, Nvidia's 56% annual revenue growth rate in its latest quarter was its slowest in more than two years.
If analyst projections hold, growth will slow further in the current quarter.
In any other scenario, 56% year-over-year growth would lead to an abundance of Dom Perignon and Huang signing hundreds of boobs, but this is NVIDIA, and that's just not good enough.
Back in February 2024, NVIDIA was booking 265% year-over-year growth, but in its February 2025 earnings, NVIDIA only grew by a measly, pathetic, disgusting 78% year-over-year.
I'm being sarcastic, of course.
It isn't so much that NVIDIA isn't growing, but to grow year-over-year at the rates that people expect is insane.
Life was a lot easier when NVIDIA went from $6.05 billion in revenue in Q4 fiscal year 2023 to $22 billion in revenue in Q4 fiscal year 2024.
But for it to grow even 55% year over year from Q2 FY2026, which was $46.7 billion, to Q2 FY2027, that would require them to make $72.385 billion in revenue in the space of three months, mostly from selling GPUs, which make up about 88% of its revenue.
Just want to be clear there: in the space of three months, they would have to make $72 billion, pretty much just selling GPUs and the associated hardware.
It's insane.
This is really, it's too much.
It's too much to expect.
And this, by the way, would put NVIDIA in the ballpark of Microsoft, who made $76 billion in their last quarterly earnings, and within the neighborhood of Apple, who made $94 billion in their last quarter of earnings.
And they would do this predominantly by making money in an industry that a year and a half ago barely made the company $6 billion in a quarter.
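The growth arithmetic above can be sketched in a few lines; every figure here is one quoted in this episode:

```python
# What ~55% year-over-year growth would require of NVIDIA, starting from
# its reported $46.7 billion Q2 FY2026 revenue (figures quoted in the episode).
q2_fy2026_bn = 46.7
growth_rate = 0.55
gpu_revenue_share = 0.88   # data center GPUs' rough share of NVIDIA's revenue

implied_q2_fy2027_bn = q2_fy2026_bn * (1 + growth_rate)
print(f"Implied Q2 FY2027 revenue: ${implied_q2_fy2027_bn:.3f} billion")  # $72.385 billion
print(f"Of which GPUs and related hardware: ~${implied_q2_fy2027_bn * gpu_revenue_share:.0f} billion")
```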
And the market needs NVIDIA to perform.
They must.
They must, as the company makes up 7% to 8% of the value of the S&P 500.
It's not enough for NVIDIA to be wildly profitable, or to have a monopoly on selling GPUs, or to have effectively 10x'd their stock in a few years.
No, no, no, more, more, more, always more.
Number must go up.
It must continue to grow at the fastest rate of anything ever, making more and more money, selling more and more of these GPUs to a small group of companies that immediately start losing money the moment they plug them in.
It's not brilliant, is it?
While a few members of the Magnificent 7 could be depended on to funnel tens of billions of dollars into a furnace each quarter, there were limits, even for companies like Microsoft, which had bought over 485,000 GPUs in 2024 alone.
To take a step back and explain how people actually make money from buying these GPUs: companies like Microsoft, Google, and Amazon make their money by either selling access to large language models that people incorporate into their products, or by renting out servers full of those GPUs to run inference, the process that generates a model's output, or to train AI models for companies that develop and market models themselves, namely Anthropic and OpenAI, with some smaller competitors that don't really matter.
That latter revenue stream, renting out GPUs, is where Jensen Huang found a solution to that horrible eternal growth problem: the neocloud.
Namely, companies like CoreWeave, Lambda, and Nebius.
Now these businesses are fairly straightforward.
They own or lease data centers that they then fill full of servers that are full of NVIDIA GPUs, which they rent out either on an hourly, per-GPU basis, or in large batches to big customers who guarantee they'll use a certain amount of compute and sign up for a long-term agreement, so rather than an hour at a time, these larger commitments run for a couple of years, perhaps.
A NeoCloud is a specialist cloud compute company that exists only to provide access to GPUs for AI, unlike Amazon Web Services, Microsoft Azure, and Google Cloud, all of which have healthy businesses selling other kinds of compute, with AI, as I'll get into later, failing to provide much of a return on investment at all.
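As a very hedged back-of-the-envelope sketch of why this business is so capital-hungry: the hourly rental rate, utilization, and per-GPU infrastructure share below are purely my assumptions (real rates vary wildly), with only the hardware price range taken from this episode.

```python
# How long an assumed rental rate takes to pay back one GPU and its share of
# infrastructure. Only the $50k-$70k GPU range is from the episode; the rate,
# utilization, and infrastructure share are illustrative assumptions.
hardware_cost = 60_000 + 20_000   # GPU price midpoint + assumed infra share
hourly_rate = 2.50                # assumed price per GPU-hour
utilization = 0.60                # assumed fraction of hours actually rented

hours_per_year = 24 * 365
annual_revenue = hourly_rate * utilization * hours_per_year
payback_years = hardware_cost / annual_revenue
print(f"Revenue per GPU per year: ${annual_revenue:,.0f}")   # $13,140
print(f"Years to pay back the hardware alone: {payback_years:.1f}")
```

Under those assumptions the hardware takes around six years to pay for itself, longer than a typical depreciation schedule, and that's before power, staff, or the interest on all that debt.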
It's not just the fact that these companies are more specialized than, say, AWS or Azure.
As you've gathered from the name, these are new, young, and in almost all cases incredibly precarious businesses, each with financial circumstances that would make a Greek finance minister blush.
That's because setting up a neo-cloud is expensive.
Even if the company in question already has data centers, as CoreWeave did with its cryptocurrency mining operation, AI requires, as I said, completely new data center infrastructure to run and cool the GPUs.
And those GPUs also need paying for, and then there's the other stuff I mentioned earlier, like power, water, and the other bits of the computer, CPU, motherboard, blah, blah, blah, blah, blah.
As a result, these neoclouds are forced to raise billions of dollars in debt, which they collateralize using the GPUs they already have, along with contracts from customers, which they then use to buy more GPUs.
That's right: they buy GPUs from NVIDIA, they raise debt on those GPUs, and then they use that debt to buy more GPUs from NVIDIA.
It's enough to drive a man insane.
CoreWeave, for example, has $25 billion in debt against an estimated $5.35 billion of revenue in 2025, and is losing hundreds of millions of dollars per quarter.
Now, you know who also invests in these neo-clouds?
You'll never guess, it's NVIDIA.
NVIDIA is also one of CoreWeave's largest customers, accounting for 15% of its revenue in 2024, and it just signed a deal to buy $6.3 billion of any capacity that CoreWeave can't otherwise sell to someone else through 2032, an extension of a $1.3 billion 2023 deal reported by The Information.
NVIDIA was also the anchor investor in CoreWeave's IPO, putting in about $250 million.
NVIDIA is currently doing the same thing with Lambda, another NeoCloud that Nvidia invested in, which also plans to go public next year.
NVIDIA is also one of Lambda's largest customers, signing a deal with it this summer to rent 10,000 GPUs for $1.3 billion over four years.
In the UK, NVIDIA has also just invested $700 million in Nscale, a former crypto miner that has never built an AI data center, and that has, despite having no experience, committed $1 billion and 100,000 GPUs to an OpenAI data center in Norway.
On Thursday, September 25th, Nscale announced that it closed another funding round with NVIDIA listed as the main backer, although it's unclear how much money it put in.
It would be safe to assume it's probably at least $100 million.
NVIDIA also invested in Nebius, an outgrowth of Russian conglomerate Yandex, and Nebius provides, through its partnership with NVIDIA, tens of thousands of dollars of compute credits to companies in NVIDIA's Inception startup program.
Look, NVIDIA's plan is simple.
Fund these neoclouds, let them load themselves up with debt, at which point they buy bunches of GPUs from NVIDIA, which can then be used as collateral for loans, along with contracts from customers, allowing the neoclouds to buy even more GPUs from NVIDIA.
It is just that simple.
It's infinite money, right?
Just money me money now.
You fund the company, the company buys from you, you fund them again, they use the thing they bought to buy more from you.
Unlimited money.
Except, that is, for one small problem.
These companies don't really appear to have that many customers, and they don't appear to be making much money.
As I went into in a recent premium newsletter, NVIDIA funds and sustains neoclouds as a way of funneling revenue to itself, as well as to partners like Supermicro and Dell, resellers that take NVIDIA GPUs, like I mentioned, and put them in servers to sell pre-built to customers.
These two companies made up 39% of NVIDIA's revenues last quarter.
Yet when you remove hyperscaler revenue, Microsoft, Amazon, Google, OpenAI, and NVIDIA from the revenues of these NeoClouds, there's barely $1 billion in revenue combined across CoreWeave, Nebius, and Lambda.
CoreWeave's $5.35 billion in revenue is predominantly made up of its contracts with NVIDIA; Microsoft, who are offering that compute to OpenAI; Google, who have hired CoreWeave to offer compute to OpenAI, and I'm not kidding; and of course OpenAI itself, which has now promised CoreWeave $22.4 billion in business over the next five years.
This is all a lot of stuff, so I'll make it really simple.
There's no real money in offering AI compute, but that isn't Jensen Huang's problem.
So NVIDIA simply hands money to these companies, giving them contracts to point at, so they can raise debt to buy more of those GPUs, so that NVIDIA can give them more contracts they can use to raise more money.
It's really bad.
All right, it's really bad.
When I read this stuff out loud, I feel a little crazy because it's so obviously unsustainable.
Neoclouds are effectively giant private equity vehicles that exist to raise money to buy GPUs from NVIDIA, or for hyperscalers to move money around so they don't have to increase their capital expenditures and can, as Microsoft did earlier in the year with masses of data center leases, simply walk away from deals they don't like.
Nebius recently signed a $17.4 billion deal with Microsoft, which even included a clause in its 6-K filing, an official securities filing, saying that Microsoft can terminate the deal in the event that the capacity isn't built by the delivery dates.
And by the way, Nebius already used the contract that Microsoft gave them to raise $3 billion to, I'm not shitting you here, build the data center to actually provide the compute for the contract.
They don't have it yet.
They don't have the...
They don't have the fucking compute.
They haven't fucking built it.
No one built it.
They haven't got the compute, mate.
These fucking companies, right?
Anyway, anyway, sorry.
Sorry.
I'll stop spiraling.
Let me just break down these numbers.
Let's look at CoreWeave first.
Microsoft was 60% of their revenue in 2024, and they're providing that compute mostly to OpenAI.
15% of their revenue last year was NVIDIA, and the rest was Meta, then OpenAI, then Google.
Lambda: half of their revenue comes from Amazon and Microsoft, and now $1.5 billion of revenue comes from NVIDIA, spread across four years.
I realize I'm just saying numbers here, but for real: because Lambda only made $250 million in the first half of this year, that contract makes NVIDIA their largest customer.
Now, Nebius has got similar revenue to Lambda, but their largest customer is now fucking Microsoft.
It's just, they don't have real customers.
They just have hyperscalers or NVIDIA themselves.
And from my analysis, it appears that CoreWeave, despite expectations to make that $5.35 billion this year, has only around $500 million of non-Magnificent 7, non-OpenAI revenue in 2025, with Lambda estimated to have maybe around $100 million in other AI revenue, and Nebius only around $250 million.
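Summing those estimates (and they are my estimates, as stated) shows just how little non-Big-Tech money there is:

```python
# Estimated 2025 revenue from customers that are NOT the Magnificent 7 or
# OpenAI, using the estimates given above.
non_big_tech_revenue_mn = {
    "CoreWeave": 500,
    "Lambda": 100,
    "Nebius": 250,
}
total_mn = sum(non_big_tech_revenue_mn.values())
print(f"Combined: ${total_mn} million")  # $850 million, barely a billion dollars
```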
And that's being generous.
In much simpler terms, the Magnificent 7 is the AI bubble, and the AI bubble exists to buy more GPUs because, as I'll talk about, there's no real money or growth coming out of this other than the amount that private credit is investing.
And this really is quite worrying, by the way.
I had a quote here from an analyst that says it's about $50 billion a quarter at the low end for the past three quarters.
So why is this bad?
All right, I don't know.
Let's start simple.
$50 billion a quarter of data center funding is going into an industry that has less revenue than free-to-play mobile game Genshin Impact.
That feels pretty bad.
Who's gonna use these data centers?
How are they even gonna make money on them?
Private equity firms don't typically hold on to assets.
They sell them or they take them public.
That doesn't seem great to me.
Anyway, if AI was truly the next big growth vehicle, neoclouds would be swimming in diverse global revenue streams.
Instead, they're heavily centralized around the same few names, one of which, NVIDIA, directly benefits from their existence, not as a company doing business, but as an entity that can accrue debt and spend money on GPUs.
These neoclouds are entirely dependent on a continual flow of private credit from firms like Goldman Sachs (which has backed Nebius, CoreWeave, and Lambda), JPMorgan (Lambda, Crusoe, which is building OpenAI's Abilene, Texas data center, and of course CoreWeave), and Blackstone (Lambda and CoreWeave), who have, in a very real sense, created an entirely debt-based infrastructure to feed billions of dollars directly to NVIDIA,
all in the name of an AI revolution that's yet to arrive.
The fact that the rest of the neocloud revenue stream is effectively either a hyperscaler or OpenAI is also concerning.
Hyperscalers are, at this point, the majority of data center capital expenditures and have yet to prove any kind of success from building out this capacity.
Outside, of course, of Microsoft's investment in OpenAI, which has succeeded in generating revenue while burning billions of dollars, with, well, not really any profit, is there?
It's just burning money.
It's also insane when you say this stuff.
I've got two more goddamn episodes of this.
And when I read these scripts, I'm just like, how is nobody else more freaked out?
Oh, well,
hyperscaler revenue is also capricious.
But even if it isn't, why are there no other major customers?
Why across all of these companies does there not seem to be one major customer who isn't OpenAI?
Well, the answer is quite obvious.
Nobody that wants it can afford it, and those that can afford it don't need it.
It's also unclear what exactly hyperscalers are doing with this compute, because it sure isn't making money.
While Microsoft makes $10 billion in revenue from renting compute to OpenAI via its Microsoft Azure cloud, it does so at roughly cost: it was charging OpenAI $1.30 per hour for each A100 AI GPU it rents, while SemiAnalysis puts the total cost per hour per GPU at around $1.46 once a hyperscaler's cost of capital and debt is included, a loss of around $0.16 an hour per GPU, meaning that it is likely losing money on this compute.
Though it's unclear whether that cost figure is for an H100 or an A100 GPU.
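Those per-GPU figures pencil out to a small but real loss on every rented hour. A quick sketch, using only the $1.30 rental rate and the $1.46 SemiAnalysis cost estimate from above (full utilization is my assumption, not the episode's):

```python
# Rough per-GPU rental economics from the episode's figures.
# Illustrative only; assumes the GPU is rented out every hour of the year.

rental_rate = 1.30     # $/hour reportedly charged to OpenAI per GPU
cost_per_hour = 1.46   # SemiAnalysis all-in cost estimate (capital + debt)

loss_per_gpu_hour = cost_per_hour - rental_rate
loss_per_gpu_year = loss_per_gpu_hour * 24 * 365

print(f"loss: ${loss_per_gpu_hour:.2f}/GPU-hour, "
      f"~${loss_per_gpu_year:,.0f}/GPU-year at full utilization")
```

A sixteen-cent hourly loss sounds trivial until you multiply it across hundreds of thousands of GPUs running around the clock.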
In any case, how do these neoclouds pay for their debt if the hyperscalers give up, or NVIDIA doesn't send them money, or, more likely, private credit begins to notice that there's no real revenue growth outside of circular compute deals with the neoclouds' largest suppliers, investors, and customers?
Don't know why I said plural there because it's just one, NVIDIA.
And the answer is they don't.
In fact, I have serious concerns that they can't even build the capacity necessary to fulfill these deals, but nobody seems to worry or think about that.
But really, though, it appears to be taking Oracle and Crusoe around 2.5 years per gigawatt of compute capacity.
How exactly are any of these neoclouds, or indeed Oracle itself, able to expand to capture this revenue?
Who knows?
But I assume somebody is going to say OpenAI.
Here's an insane statistic for you, by the way.
OpenAI will account for, in both its revenue, projected at $13 billion, and in its own compute costs, around $10 billion,
somewhere in the region of 40 to 50% of all AI revenues in 2025.
As a reminder, OpenAI has leaked that it will burn $115 billion in the next four years.
And based on my estimates, it actually needs to raise, I mean, upwards of $400 billion in the next four years based on its $300 billion deal with Oracle and some recently announced $100 billion compute purchases for backup.
And that alone is a very bad sign.
Very, very bad indeed, especially as we're three years and $500 billion or more into this hype cycle, with few signs of life outside of, well, OpenAI promising people money.
And that's not healthy or sane or normal.
And it's certainly not stable.
And it's going to get bad real fast.
Catch you tomorrow.
Thank you for listening to Better Offline.
The editor and composer of the Better Offline theme song is Mattosowski.
You can check out more of his music and audio projects at mattosowski.com.
M-A-T-T-O-S-O-W-S-K-I dot com.
You can email me at ez at betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter.
I also really recommend you go to chat.wheresyoured.at to visit the Discord and go to r/BetterOffline to check out our Reddit.
Thank you so much for listening.
Better Offline is a production of CoolZone Media.
For more from CoolZone Media, visit our website, coolzonemedia.com or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
This is an iHeart podcast.