The Case Against Generative AI (Part 3)
In part three of this week’s four-part case against generative AI, Ed Zitron walks you through how “AI replacing software engineers” is a myth spread by the media and investors - and how Microsoft only has 8 to 12 million active paying customers for Microsoft 365’s AI Copilot out of 440 million users.
Latest Premium Newsletter: OpenAI Needs A Trillion Dollars In The Next Four Years: https://www.wheresyoured.at/openai-onetrillion/
YOU CAN NOW BUY BETTER OFFLINE MERCH! Go to https://cottonbureau.com/people/better-offline and use code FREE99 for free shipping on orders of $99 or more.
---
LINKS: https://www.tinyurl.com/betterofflinelinks
Newsletter: https://www.wheresyoured.at/
Reddit: https://www.reddit.com/r/BetterOffline/
Discord: chat.wheresyoured.at
Ed's Socials:
https://www.instagram.com/edzitron
See omnystudio.com/listener for privacy information.
Transcript
This is an iHeart podcast.
There's more to San Francisco with the Chronicle.
More to experience and to explore.
Knowing San Francisco is our passion.
Discover more at sfchronicle.com.
So I love Square.
They make paying for stuff at a store really easy.
I just use them at Court Street Grocer's to buy a sandwich here in Brooklyn, New York.
Their stuff's reliable, easy to use, and it just works for both businesses and their customers.
Square keeps up so you don't have to slow down.
Get everything you need to run and grow your business without any long-term commitments.
And why wait?
Right now you can get up to $200 off square hardware at square.com slash go slash better offline.
That's square.com slash go slash better offline.
Run your business smarter with Square.
Get started today.
This is Larry Flick, owner of the floor store.
Leaves are falling and so are our prices.
Welcome to the Floor Store's Fall Sale.
Now through October 14th, get up to 50% off store-wide on carpet, hardwood, laminate, waterproof flooring, and much more.
Plus two years interest-free financing, and we pay your sales tax.
The Floor Store's Fall Sale.
Cooler days, hotter deals, and better floors.
Go to floorstores.com to find the nearest of our 10 showrooms from Santa Rosa to San Jose.
The Floor Store, your Bay Area Flooring Authority.
Your global campaign just launched.
But wait, the logo's cropped.
The colors are off, and did legal clear that image?
When teams create without guardrails, mistakes slip through, but not with Adobe Express, the quick and easy app to create on-brand content.
Brand kits and locked templates make following design guidelines a no-brainer for HR, sales, and marketing teams.
And commercially safe AI, powered by Firefly, lets them create confidently so your brand always shows up polished, protected, and consistent everywhere.
Learn more at adobe.com slash go slash express.
Call Zone Media.
Hello, and welcome to Better Offline.
I'm, of course, your host, Ed Zitron.
We're in the third episode of our four-part series where I give you a comprehensive explanation as to the origins of the AI bubble, the mythology sustaining it and why it's destined to end really, really badly.
Now if you're jumping in now, please start from the very beginning.
The reason why this is a four-parter, my first ever, is because I want it to be comprehensive and because this is a very big subject with a lot of moving parts and even more bullshit.
A few weeks ago, I published a premium newsletter that explained how everybody is losing money on generative AI, in part because the costs of running AI models is increasing, and in part because the software itself doesn't do enough to warrant the costs associated with running them, which are already subsidized and unprofitable for the model providers.
Outside of OpenAI, and to a lesser extent Anthropic, nobody seems to be making much revenue, with the most successful company being Anysphere, makers of AI coding tool Cursor, which hit $500 million of annualized revenue, so $41.6 million in one month, a few months ago, just before Anthropic and OpenAI jacked up the prices for priority processing on enterprise queries, raising their operating costs as a result.
In any case, that's some piss-poor revenue for an industry that's meant to be the future of software.
Smartwatches are projected to make $32 billion this year, and as I've mentioned in the past, the Magnificent 7 expect to make $35 billion or so in revenue from AI this year, and I think, when you throw in CoreWeave and all them, it's barely $55 billion in total.
Even Anthropic and OpenAI seem a little lethargic, both burning billions of dollars while making, by my estimates, no more than $2 billion in Anthropic's case this year so far, and $6.6 billion in 2025 so far for OpenAI, despite projections of $5 billion and $13 billion respectively.
Outside of these two, AI startups are floundering, struggling to stay alive and raising money in several hundred million dollar bursts as their negative gross margin businesses flounder.
As I dug into a few months ago, I could find only 12 AI-powered companies making more than $8.3 million a month, with two of them slightly improving their revenues, specifically AI search company Perplexity, which has now hit $150 million in ARR, or $12.5 million a month, and AI coding startup Replit, which has hit the same amount.
Both of these companies burn ridiculous amounts of money.
Perplexity burned 164% of its revenue on Amazon Web Services, OpenAI, and Anthropic last year.
And while Replit hasn't leaked its costs, The Information reports its gross margins in July were 23%, which doesn't include the cost of its free users, something you simply have to account for with LLMs, as free users are capable of costing you a shit ton of money.
And some of you might say, that's how they do it in software.
Well, guess what?
Software doesn't usually connect you to a model that can burn, I don't know, 10 cents, 20 cents every time they touch it, which may not seem like much, but when you're making zero dollars on someone and they don't convert, it does.
Problematically, your paid users also cost you more than they bring in as well.
In fact, every user loses you money in generative AI because it's impossible to do cost control in a consistent manner.
A few months ago, I did a piece on Anthropic losing money on every single Claude Code subscriber.
And now I'm going to walk you through the whole story in a simplified fashion because it's quite important.
So Claude Code is a coding environment that people used, or I should really say tried to use, to build software using generative AI.
It's available as part of Anthropic's $20, $100, and $200 a month Claude subscriptions, with the more expensive subscriptions having more generous rate limits.
Generally, these subscriptions are all you can eat.
You can use them as much as you want until you hit limits rather than paying for the actual tokens you burn.
When I say burn tokens, and someone reached out saying I should specify this, I'm describing how these models are traditionally billed.
In general, you're billed at a dollar rate per million input tokens, as in the user feeding in data, and per million output tokens, the output created.
So you don't get billed per individual token; you get charged for every million.
So for example, Anthropic charges $3 per million input tokens and $15 per million output tokens to use its Claude Sonnet 4 model.
And a token is about, I think, well, a word or a bit less.
I should really look that up.
It also gets more complex as you get into things like generating code.
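To make that billing model concrete, here's a minimal sketch using the per-million-token rates just mentioned. The request sizes are invented purely for illustration; real sessions vary wildly.

```python
# Back-of-the-envelope token billing at published per-million-token rates.
# Rates are the Claude Sonnet 4 figures discussed above; request sizes are
# hypothetical examples, not real measurements.

INPUT_RATE_PER_M = 3.00    # dollars per million input tokens
OUTPUT_RATE_PER_M = 15.00  # dollars per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars of one request at the published retail rates."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# A hypothetical coding prompt: a big pasted-in codebase, a long generated diff.
cost = request_cost(input_tokens=200_000, output_tokens=30_000)
print(f"${cost:.2f}")  # prints $1.05
```

Note that a single request like this costs about a dollar at retail, which is why agentic coding sessions, which fire off many such requests in a loop, burn money so quickly.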
Nevertheless, Claude Code has been quite popular.
And a user created a program called ccusage, which allowed you to see your token burn.
That is, the amount of tokens you were actually burning using Anthropic's models while using Claude Code, versus just getting charged monthly and not knowing.
And many were seeing that they were burning in excess of their monthly spend.
To be clear, this is the token price based on Anthropic's own pricing, and thus the costs to Anthropic are likely not identical.
So I got a little clever.
Using an estimate of Anthropic's gross profit margin (I chose 55%; a few weeks after my article, a figure of 60% was leaked), I found at least 20 different accounts of people costing Anthropic anywhere from 130% to 3,084% of their subscription.
There is also now a leaderboard called VibeRank, where people compete to see how much they burn, with the current leader burning and I shit you not $51,291 over the course of a month.
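That back-of-the-envelope method can be sketched in a few lines. To be clear, this is my reading of the approach described above: ccusage prices a user's burn at Anthropic's retail API rates, and an assumed gross margin (55%, as above) converts that retail figure into an estimated cost to Anthropic. The $51,291 burn and $200 plan are the VibeRank leader's figures just mentioned.

```python
# Sketch of the margin-based cost estimate described above: a user's burn,
# priced at retail API rates, implies a provider-side compute cost of
# roughly retail * (1 - gross_margin). The 55% margin is the assumption
# discussed above, not a disclosed figure.

def cost_to_provider(retail_burn: float, gross_margin: float = 0.55) -> float:
    """Estimated compute cost behind a retail-priced token burn."""
    return retail_burn * (1.0 - gross_margin)

def pct_of_subscription(retail_burn: float, monthly_price: float,
                        gross_margin: float = 0.55) -> float:
    """Estimated provider cost as a percentage of what the user paid."""
    return 100.0 * cost_to_provider(retail_burn, gross_margin) / monthly_price

# The VibeRank leader: ~$51,291 of retail-priced burn on a $200/month plan.
print(f"~${cost_to_provider(51_291):,.0f} estimated cost")
print(f"~{pct_of_subscription(51_291, 200):,.0f}% of the subscription price")
```

Under those assumptions, that one $200 subscriber cost Anthropic somewhere in the region of $23,000, which is the "devouring the margin of every other user" problem in miniature.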
Anthropic is, to be clear, the second largest model developer and has some of the best AI talent in the industry.
It has a better handle on its infrastructure than anyone outside of big tech and OpenAI.
And it still cannot seem to fix this problem, even with weekly rate limits brought in at the end of August.
While one could assume that Anthropic is simply letting users run wild, my theory is far simpler.
Even the model developers have no real way of limiting user activity, likely due to the architecture of generative AI.
I know it sounds insane, but at the most advanced level, even there, model providers are still prompting their models, and whatever rate limits may be in place appear to at times get completely ignored, and there doesn't seem to be anything they can do to stop it.
Now really, Anthropic counts amongst its capitalist apex predators one lone Chinese man who burned $50,000 of their compute in the space of a month fucking around with Claude Code.
Even if Anthropic was profitable (it isn't, and will burn billions of dollars this year), a customer paying $200 a month ran up $50,000 in costs, immediately devouring the margin of any user running the service that day, that week, or even that month.
Even if Anthropic's costs are half the published rates (they're not, by the way), one guy amounted to 125 users' worth of monthly revenue.
This is not a real business.
That's a bad business with out-of-control costs, and it doesn't appear anybody has these costs under control.
And, faced with the grim reality ahead of them, these companies are trying nasty little tricks on their customers to juice more revenue from them.
A few weeks ago, Replit, an unprofitable AI coding company, released a product called Agent 3, which promised to be 10 times more autonomous and offer infinitely more possibilities.
Testing and fixing its code, constantly improving your application behind the scenes in a reflection loop.
Sounds very real.
Sounds extremely real.
It's so real, but actually it isn't.
In reality, this means you'd go and tell the model to build something and it would go and do it and you'll be shocked to hear that these models can't be relied upon to go and do anything.
Please note that this was launched a few months after Replit raised their prices, shifting to obfuscated effort-based pricing that would charge for the full scope of the agent's work.
And if you're wondering what the fuck that means, so are their customers.
Agent 3 has been a disaster.
Users found the tasks that previously cost a few dollars were spiraling into the hundreds of dollars.
With The Register reporting that one customer found themselves with a $1,000 bill after a week, and I quote them:
I think it's just launch pricing adjustment.
Some tasks on new apps ran over an hour and 45 minutes and only charged $4 to $6, but editing pre-existing apps seems to cost most overall.
I spent $1K this week alone.
And they told that to The Register, by the way.
Another user complained that costs skyrocketed without any concrete results, and I quote The Register here:
I typically spent between $100 and $250 a month.
I blew through $70 in a night at Agent 3 launch, another Redditor wrote, alleging the new tool also performed some questionable actions.
One prompt brute-forced its way through authentication, redoing auth and hard-resetting a user's password to what it wanted to perform app testing on a form, the user wrote.
I realize that's a little nonsensical, but long story short, it did a bunch of shit it wasn't asked to.
As I previously reported in late May, early June, both OpenAI and Anthropic cranked up the pricing on their enterprise customers, leaving Replit and Cursor both shifting their prices upward.
This abuse has now trickled down to their customers.
Replit has now released an update that lets you choose how autonomous you want Agent 3 to be, which is a tacit admission that you can't trust coding LLMs to build software.
Replit's users are still pissed off, complaining that Replit is charging them for an activity when the agent doesn't do anything, a consistent problem I've found across Redditors.
While Reddit is not the full summation of all users of every company everywhere, it's a fairly good barometer of user sentiment, and man, are users pissy.
And now here's where this is bad.
Traditionally, Silicon Valley startups have relied upon the same model: grow really fast and burn a bunch of money, then turn the profit lever.
AI does not have a profit lever, because the raw costs of providing access to AI models are so high, and they're only increasing, that the basic economics of how the tech industry sells software don't make sense.
So I'm a big fan of Quince.
I've been shopping with them long before they advertised with the show, and I just picked up a bunch of their Pima cotton t-shirts after they came back in stock, as well as another overshirt, because I love to wear them like a jacket over a t-shirt.
Talking of jackets, I'm planning to pick up one of their new leather racer jackets very, very soon.
Their clothes fit well, they fall nicely on the body, and feel high quality, like you get at a big, nice department store, except they're a lot cheaper because Quince is direct-to-consumer.
And that's part of what makes Quince different.
They partner directly with ethical factories and skip the middlemen, so you get top fabrics and craftsmanship at half the price of similar brands, and they ship quickly too.
I highly recommend them, and we'll be giving them money in the future.
Layer up this fall with pieces that feel as good as they look.
Go to quince.com slash better for free shipping on your order and 365 day returns.
Now available in Canada too.
That's q-u-i-n-c-e dot com slash better.
Free shipping and 365 day returns.
Quince.com slash better.
Still using a copy-paste website?
Break the template trap with Framer.
Whether you're overwhelmed by traditional site builders or frustrated with cookie-cutter designs, Framer gives you the freedom to create a site that's professional, polished, and uniquely yours.
Hit publish and Framer ships your site worldwide in seconds.
No code, no compromises.
While DIY website tools are everywhere, most fall short on design and performance.
Framer changes that with a powerful, user-friendly platform that delivers developer-level results without writing a single line of code.
Framer is the design-first no-code website builder that lets anyone ship a production-ready site in minutes.
It's free to start.
Browse hundreds of pixel-perfect templates or design from a totally blank canvas, with multiplayer collaboration allowing your writer, designer, and marketer all to tweak the same page at once.
No version control nightmares.
Ready to build a site that looks hand-coded without hiring a developer?
Launch your site for free at framer.com and use code offline to get your first month of pro on the house.
That's framer.com, promo code offline.
Framer.com, promo code offline.
Rules and restrictions may apply.
Run a business and not thinking about podcasting?
Think again.
More Americans listen to podcasts than ad-supported streaming music from Spotify and Pandora.
And as the number one podcaster, iHeart's twice as large as the next two combined.
So whatever your customers listen to, they'll hear your message.
Plus, only iHeart can extend your message to audiences across broadcast radio.
Think podcasting can help your business?
Think iHeart.
Streaming, radio, and podcasting.
Let us show you at iHeartAdvertising.com.
That's iHeartAdvertising.com.
Is your financial advisor supported by a firm with over 130 years of strength and stability in changing markets?
Every Ameriprise financial advisor is.
When you work with Ameriprise, you get personal financial advice from an advisor who knows you, and a range of goal-based investing and solutions.
No wonder Ameriprise is rated 4.9 out of 5 in overall client satisfaction.
Visit ameriprise.com slash advice for information and disclosures.
Securities offered by Ameriprise Financial Services LLC, member FINRA and SIPC.
I'll reiterate something I wrote a few weeks ago.
A large language model user's infrastructural burden varies wildly between users and use cases.
While somebody asking ChatGPT to summarize an email might not be much of a burden, somebody asking ChatGPT to review hundreds of pages of documents at once, a core feature of basically any $20 a month subscription, could eat up to eight GPUs at once.
To be very clear, a user that pays $20 a month could run multiple queries like this a month, and there's not really a way to stop them.
Unlike most software products, any errors in producing an output from a large language model have a significant opportunity cost.
When a user doesn't like an output, or the model gets something wrong, which it's guaranteed to do, or the user realizes they forgot something, the model must make a further generation or generations, and even with caching, which Anthropic offers, there's a definitive cost attached to any mistake.
Large language models are for the most part lacking in any definitive use cases, meaning that every user is, even with an idea of what they want to do, experimenting with every input and output.
In doing so, they create the opportunity to burn more tokens, which in turn creates an infrastructural burn on GPUs, which costs a lot of money to run.
The more specific the output, the more opportunities there are for monstrous token burn, and I'm specifically thinking about coding with LLMs.
The token-heavy nature of generating code means that any mistakes, sub-optimal generations, or straight-up errors will guarantee further token burn.
Even efforts to reduce compute costs by, for example, pushing free users or those on cheap plans to smaller, less intensive models, have dubious efficacy.
As I talked about in a previous episode, OpenAI's router model in the GPT-5 version of ChatGPT requires vast amounts of additional compute in order to route the user's request to the appropriate model, with simpler requests going to smaller models and more complex ones being shifted to reasoning models.
And it makes it impossible to cache part of the input.
As a result, it's not really clear whether it's saving OpenAI any money and indeed kind of suggests it might be costing them more.
In simpler terms, it's very, very, very difficult to imagine what one user, free or otherwise, might cost, and thus it's hard to charge them anything on a monthly basis or tell them what a service might actually cost them on average.
And this is a huge, huge problem with AI coding environments.
But let's talk about Claude Code again, Anthropic's code generator tool.
According to The Information, Claude Code was driving nearly $400 million in annualized revenue as of July 31st, 2025, roughly doubling from a few weeks earlier.
The annualized revenue works out to about $33 million a month in revenue for a company that predicts it will make at least $416 million a month by the end of the year and for a product that has become, for a time, the most popular coding environment in the world from the second largest and best-funded AI company in the world.
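Annualized, or run-rate, revenue is just the most recent month's revenue multiplied by twelve, so the figures above convert back and forth with simple arithmetic. A minimal sketch, using only the numbers quoted above:

```python
# Annualized ("run-rate") revenue is the latest month's revenue times 12,
# so converting back to monthly is a simple division. Figures below are
# the ones quoted above: ~$400M annualized for Claude Code, against a
# stated company-wide target of at least $416M per month by year end.

def monthly_from_annualized(annualized: float) -> float:
    return annualized / 12.0

claude_code_monthly = monthly_from_annualized(400e6)
year_end_monthly_target = 416e6

print(f"${claude_code_monthly / 1e6:.1f}M a month")  # prints $33.3M a month
print(f"{100 * claude_code_monthly / year_end_monthly_target:.0f}% "
      f"of the year-end monthly target")
```

In other words, the most popular coding environment in the world covers less than a tenth of what Anthropic says it will be making each month by December.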
Is that it?
Is that fucking it?
Is that all that's happening here?
$33 million a month, all of which is unprofitable, after it felt, at least based on social media chatter and discussions with multiple different engineers, that Claude Code had become ubiquitous in anything to do with LLMs and coding.
To be clear, Anthropic's Sonnet and Opus models are consistently some of the most popular for programming on OpenRouter, an aggregator of LLM usage, and Anthropic has been consistently named as the best at coding.
Whether or not I feel that way is irrelevant.
Some bright spark out there is going to say that Microsoft's GitHub Copilot has 1.8 million paying subscribers, and guess what?
That's true.
In fact, I reported it.
Here's another fun fact.
The Wall Street Journal reported that Microsoft loses on average $20 a month per user, with some users costing the company as much as 80 bucks.
And that's for the most popular product.
But wait, wait, wait, wait.
Hold up.
Wait.
I read some shit in the newspaper.
Aren't these LLM code generators replacing actual human engineers?
And thus, even if they cost way more than $20, $100, or $200 a month, they're still worth it, right?
They're replacing an entire engineer.
Oh, my sweet summer child.
If you believe the New York Times or other outlets that simply copy and paste whatever Anthropic CEO Dario Amodei says, you'd think that the reason software engineers are having trouble finding work is because their jobs are being replaced by AI.
This grotesque, manipulative, abusive, and offensive lie has been propagated through the entire business and tech media without anybody sitting down and asking whether it's true or even getting a good understanding of what it is that LLMs can actually do with code.
Members of the media, I am begging you, stop, stop doing this, stop publishing these fucking headlines.
You're embarrassing yourself.
Every asshole is willing to give a quote saying that coding is dead and that every executive is willing to burp out some nonsense about replacing all of their engineers.
But I'm fucking begging you to either use these things yourself or speak to people that do.
I am not a coder.
I cannot write or read code.
Nevertheless, I'm capable of learning and I've spoken to numerous software engineers in the last few months and basically I've reached a consensus that this is kind of useful sometimes.
However, one time a very silly man with an increasingly squeaky voice said that I don't speak to people who use AI tools.
So I went and spoke to three notable experienced software engineers and asked them to give me the straight truth about what coding LLMs can do.
Now, for the purposes of brevity, I'm going to use select quotes from what these people said, but if you want to read the whole thing, you can check out the newsletter.
First, I'm going to read what Carl Brown of the Internet of Bugs said, and I had him on the show a few months back.
He's fantastic.
So: most of the advancements in programming languages, technique, and craft in the last four years have been designing safer and better ways of tying these blocks together to create larger and larger programs with more complexity and functionality.
Humans use these advancements to arrange these blocks in logical abstraction layers, so we can fit an understanding of the layers' interconnections in our heads as we work, diving into blocks temporarily as needed.
This is where AIs fall down.
The amount of context required to hold the interconnections between these blocks quickly grows beyond the AI's effective short-term memory, in practice, much smaller than its advertised context window size, and the AIs lack the ability to reason about the abstractions as we do.
This leads to real-world code that's illogically layered, hard to understand, debug, and maintain.
Carl also said,
Code generation AIs from an industry standpoint are roughly the equivalent of a slightly below-average computer science graduate fresh out of school without any real-world experience, only ever having written programs to be printed and graded.
That's bad, because as he pointed out, whereas LLMs can't get past this summer intern stage, actual humans get better.
And if we're replacing the bottom rung of the labor market, there won't be any mid-level or senior developers later down the line.
Next, I asked Nik Suresh, of I Will Fucking Piledrive You If You Mention AI Again, what he thought.
LLMs, he said, will sometimes solve a thorny problem for me in a few seconds, saving me some brainpower.
But in practice, the effort of articulating so much of the design work in plain English and hoping the LLM emits code that I find acceptable is frequently more work than just writing the code.
For most problems, the hardest part is the thinking, and LLMs don't make that part any easier.
I also talked to Colton Voege, of No, AI Is Not Making Engineers 10x as Productive, who we also had on the show recently, and he said this:
LLMs often function like a fresh summer intern.
They're good at solving the straightforward problems that coders learn about in school, but they are unworldly.
They do not understand how to bring lots of solutions to small, straightforward problems together into a larger whole.
They lack the experience to be wholly trusted, and trust is the most important thing you need to fully delegate coding tasks.
In simpler terms, LLMs are capable of writing code but can't do software engineering because software engineering is the process of understanding, maintaining, and executing code to produce functional software.
And LLMs do not learn, cannot adapt, and, to paraphrase something Carl Brown said to me, break down the more of your code and variables you ask them to look at at once.
So, you can't replace a software engineer with them.
If you are printing this in a media outlet and have heard this sentence, you are fucking up.
You really are fucking up.
Really, we need members of the media to hear this.
You need to change.
You need to change on this one.
You are doing software engineers dirty.
Look, and I understand why too.
It's very easy to believe that software engineering is just writing code, but the reality is that software engineers maintain software, which includes writing and analyzing code amongst a vast array of different personalities and programs and problems.
Good software engineering harkens back to Brian Merchant's interviews with translators.
While some may believe that translators simply tell you what words mean, true translation is communicating the meaning of a sentence, which is cultural, contextual, regional, and personal, and often requires the exercise of creativity and novel thinking.
And on top of that, while translation is the production of words, you can't just take code and look at it.
You actually need to know how code works and functions and why it functions in that way.
Using an LLM, you'll never know because the LLM doesn't know anything either.
Now, my editor Matt Hughes gave an example of this in his newsletter, which I think I'll paraphrase.
He used to live in the French-speaking part of Switzerland, and sometimes he'll read French translations of books to see how awkward bits of prose are translated.
Doing those awkward bits requires a bit of creative thinking, and I quote.
Take Harry Potter.
In French, Hogwarts is Poudlard, which translates into bacon lice.
Why did they go with that instead of a literal translation of Hogwarts, which would be something like Verrues-porc?
I'm sorry to anyone who can actually read these languages.
No idea, but I'd assume it was something to do with the fact that Poudlard sounds a lot better than Verrues-porc, and both of them I can say flawlessly.
Someone had to actually think about how to translate that one idea.
They had to exercise creativity, which is something that an AI is inherently incapable of doing.
Similarly, coding is not just a series of texts that programs a computer, but a series of interconnected characters that refers to other software in other places, that must also function now, and that must explain, on some level, to someone who has never ever seen the code before, why it was done in this way.
This is, by the way, why we're still yet to get any tangible proof that AI is replacing software engineers, because it isn't replacing software engineers.
And now we need to understand why this is so existentially bad for generative AI.
Of all the fields supposedly at risk from AI disruption, coding feels, or felt, the most tangible, if only because the answer to "can you write code with LLMs?" wasn't an immediate, unilateral no.
The media has also been quick to suggest that AI writes software, which is true in the same way that ChatGPT writes novels.
In reality, LLMs can generate code and do some sort of software-engineering-adjacent tasks, but like all large language models, they break down and go totally insane, hallucinating more and more as the tasks get more complex, and software engineering is extremely complex.
Even software engineers who can read code and have done so for decades will find problems they can't solve just by looking at the code.
And as I pointed out earlier, software engineering is not just coding.
It involves thinking about problems, finding solutions to novel challenges, designing stuff in a way that can be read and maintained by others, and that's ideally scalable and secure.
The whole fucking point of an AI is that you hand shit off to it.
That's what they've been selling it as.
That's why Jensen Huang told kids to stop learning to code, as with AI, there's no point.
And it was all a fucking lie.
Generative AI can't do the job of a software engineer, and it fails while also costing an abominable amount of money.
Coding large language models seem like magic at first because they, to quote a conversation with Carl Brown, make the easy things easier, but they also make the harder things harder.
They don't even speed up engineers; there's a study that showed they make them slower.
Yet coding is basically the only obvious use case for LLMs.
Oh, I'm sure you're going to say, but I bet the enterprise is doing well, and you're also very, very wrong.
Microsoft, if you've ever switched on a TV in the past two years, has gone all in on generative AI, and despite being arguably the biggest software company in the world, at least in terms of desktop operating systems and productivity software, has made almost no traction in popularizing generative AI.
It has thousands, if not tens of thousands, of salespeople and thousands of companies that literally sell Microsoft services for a living.
And it can't sell AI.
I've got a real fucking scoop for you.
I'm so excited.
And I buried it in the third part of a four-part episode.
I'm truly twisted.
But a source that has seen materials related to sales has confirmed that as of August 2025, Microsoft has around 8 million active licensed, so paying, users of Microsoft 365 Copilot, amounting to a 1.81% conversion rate across 440 million Microsoft 365 subscribers.
I must be clear that 365 is their big cash cow.
This would amount to, if each of these users paid at the full rate of $30 a month, about $2.88 billion in annual revenue, for a product category, Microsoft's productivity and business unit, that makes $33 billion a fucking quarter.
And I must be clear, I am 100% sure these users aren't all paying $30 a month.
The Information reported a few weeks ago that Microsoft has been reducing the software's price, referring to Microsoft 365, with more generous discounts on the AI features, according to customers and salespeople, heavily suggesting discounts have already been happening.
Enterprise software is traditionally sold at a discount anyway, or put a different way, with bulk pricing for those who sign up a bunch of users at once.
In fact, I found evidence that they've been doing this for a while, with a 15% discount on annual Microsoft 365 Copilot subscriptions for orders of 10 to 300 seats, mentioned by an IT consultant back in late 2024, and another, currently running through September 30th, 2025, through Microsoft's Cloud Solution Provider program.
Yeah, and I found tons of other examples too.
And Microsoft 365 is the enterprise version, where they sell things like Word and PowerPoint, and sometimes Teams as well.
This is probably their most popular product.
And by the way, they even manipulate the numbers a little bit there.
An active user is someone who has taken one action on any Microsoft 365 app with Copilot in the space of 28 days.
Not 30, 28.
That's so generous.
Now I know, I know, that word active may have you thinking: Ed, this is like the gym model.
There are unpaid licenses that Microsoft is getting paid for.
Fine, fine, fine.
Fucking fine.
Let's assume that Microsoft also has, based on research that suggests this can be the case for some software companies, another 50%: 4 million paying Copilot licenses that aren't being used.
That's still 12 million users, which is around a 2.7% conversion rate.
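The conversion-rate arithmetic here is simple enough to lay out in a few lines. The user counts are the figures discussed above, and the $30 a month is the full, undiscounted list price, which, as noted, many customers aren't actually paying.

```python
# The Microsoft 365 Copilot conversion math from above, laid out.
# User counts are the reported/assumed figures from the discussion;
# $30/month is the full list price, before the discounts described.

SUBSCRIBERS = 440_000_000      # Microsoft 365 subscriber base
ACTIVE_PAID = 8_000_000        # reported active licensed Copilot users
LIST_PRICE_MONTHLY = 30.0      # full Copilot list price

conversion = 100 * ACTIVE_PAID / SUBSCRIBERS
annual_revenue = ACTIVE_PAID * LIST_PRICE_MONTHLY * 12

# Generous case: assume another 50% of licenses are paid for but unused.
generous_conversion = 100 * (ACTIVE_PAID * 1.5) / SUBSCRIBERS

print(f"{conversion:.1f}% conversion")                 # prints 1.8% conversion
print(f"${annual_revenue / 1e9:.2f}B a year at list")  # prints $2.88B a year at list
print(f"{generous_conversion:.1f}% in the generous case")  # prints 2.7%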
That's piss poor, buddy.
That's piss poor.
That's pissy.
It sucks.
It's bad.
It's doo-doo.
Well, I just said PP, I guess.
Anyway, very serious, very serious podcast.
But why aren't people paying for Copilot?
Well, let's hear from someone who talked to The Information.
And I quote, it's easy for an employee to say, yes, this will help me, but hard to quantify how.
And if they can't quantify how it'll help them, it's not going to be a long discussion over whether the software is worth paying for.
Is that good?
Is that good?
Is that what you want to hear?
It isn't.
It isn't.
That's the secret.
It's not.
It's bad.
It's really bad.
It's all very bad.
And Microsoft 365 Copilot has been such a disaster that Microsoft will now integrate Anthropic's models to try and make them better.
Oh, one other thing too.
Sources also confirm that GPU utilization, meaning how much of the GPU capacity set aside for Microsoft 365 Copilot, their enterprise copilot, is actually being used, is barely scratching 60%.
I'm also hearing that SharePoint, which is an app they have with over 250 million users, has fewer than 300,000 weekly active users of its Copilot features, suggesting that people just don't want to fucking use this.
Those numbers are from August, by the way.
And it's pathetic.
And I must be clear, if Microsoft's doing this badly, I don't know how anyone else is doing well.
And they're not.
They're all failing.
It's pathetic.
But I've spent a lot of time today talking about AI coding because this was supposed to be the saving grace.
The thing that actually turned this from a bubble into an actual money minting industry that changes the world.
And I wanted to bring up Microsoft 365 because that's the place where Microsoft should be making the most money.
It's their most ubiquitous software.
It's their most well-known software.
And they're not.
8 million people.
8 million people.
I've run that by a few people and everyone's made the same oh god noise.
It's quite weird.
The oh god noise and the numbers.
But this just isn't happening.
Things are going badly and it really only gets worse from here.
And I'm going to tell you more tomorrow in the final part of our four-parter.
Thank you for your patience and thank you for your time.
Thank you for listening to Better Offline.
The editor and composer of the Better Offline theme song is Matt Osowski.
You can check out more of his music and audio projects at mattosowski.com.
M-A-T-T-O-S-O-W-S-K-I dot com.
You can email me at ez at betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter.
I also really recommend you go to chat.wheresyoured.at to visit the Discord and go to r slash betteroffline to check out our Reddit.
Thank you so much for listening.
Better Offline is a production of CoolZone Media.
For more from CoolZone Media, visit our website, coolzonemedia.com or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
This is an iHeart podcast.