How To Argue With An AI Booster, Part Three

37m

In the final part of this week's three-part Better Offline Guide To Arguing With AI Boosters, Ed Zitron walks you through why generative AI is nothing like Amazon Web Services, how the media misled the public about ChatGPT, and why ChatGPT’s popularity does not mean it’s a mass-market product.

Latest Premium Newsletter: Why Everybody Is Losing Money On Generative AI: https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/

YOU CAN NOW BUY BETTER OFFLINE MERCH! Go to https://cottonbureau.com/people/better-offline and use code FREE99 for free shipping on orders of $99 or more.

BUY A LIMITED EDITION BETTER OFFLINE CHALLENGE COIN! https://cottonbureau.com/p/XSH74N/challenge-coin/better-offline-challenge-coin#/29269226/gold-metal-1.75in

---

LINKS: https://www.tinyurl.com/betterofflinelinks

Newsletter: https://www.wheresyoured.at/

Reddit: https://www.reddit.com/r/BetterOffline/ 

Discord: chat.wheresyoured.at

Ed's Socials:

https://twitter.com/edzitron

https://www.instagram.com/edzitron

See omnystudio.com/listener for privacy information.


Transcript

This is an iHeart podcast.

On Fox One, you can stream your favorite news, sports, and entertainment live, all in one app.

It's raw and unfiltered.

This is the best thing ever.

Watch breaking news as it breaks.

Breaking tonight, we're following two major stories.

And catch history in the making.

Gibby, meet Freddy!

Debates,

drama, touchdowns.

It's all here, baby.

Fox One.

We live for live.

Streaming now.

Be honest.

How many tabs do you have open right now?

Too many?

Sounds like you need Close All Tabs from KQED, where I, Morgan Sung, Doom Scroll so you don't have to.

Every week, we scour the internet to bring you deep dives that explain how the digital world connects and divides us all.

Everyone's cooped up in their house.

I will talk to this robot.

If you're a truly engaged activist, the government already has data on you.

Driverless cars are going to mess up in ways that humans wouldn't.

Listen to Close All Tabs, wherever you get your podcasts.

There's more to San Francisco with the Chronicle.

There's more food for thought, more thought for food.

There's more data insights to help with those day-to-day choices.

There's more to the weather than whether it's going to rain.

And with our arts and entertainment coverage, you won't just get out more, you'll get more out of it.

At the Chronicle, knowing more about San Francisco is our passion.

Discover more at sfchronicle.com.

Lilly is a proud partner of the iHeartRadio Music Festival for Lilly's Duets for Type 2 Diabetes campaign that celebrates patient stories of support.

Share your story at mounjaro.com slash duets.

Mounjaro (tirzepatide) is an injectable prescription medicine that is used along with diet and exercise to improve blood sugar (glucose) in adults with type 2 diabetes mellitus.

Mounjaro is not for use in children.

Don't take Mounjaro if you're allergic to it, or if you or someone in your family had medullary thyroid cancer or multiple endocrine neoplasia syndrome type 2.

Stop and call your doctor right away if you have an allergic reaction, a lump or swelling in your neck, severe stomach pain, or vision changes.

Serious side effects may include inflamed pancreas and gallbladder problems.

Taking Mounjaro with a sulfonylurea or insulin may cause low blood sugar.

Tell your doctor if you're nursing, pregnant, plan to be, or taking birth control pills, and before scheduled procedures with anesthesia.

Side effects include nausea, diarrhea, and vomiting, which can cause dehydration and may cause kidney problems.

Once-weekly Mounjaro is available by prescription only in 2.5, 5, 7.5, 10, 12.5, and 15 milligram per 0.5 milliliter injections.

Call 1-800-LILLY-RX (1-800-545-5979) or visit mounjaro.lilly.com for the Mounjaro Indication and Safety Summary with warnings.

Talk to your doctor for more information about Mounjaro.

Mounjaro and its delivery device base are registered trademarks owned or licensed by Eli Lilly and Company, its subsidiaries or affiliates.

Hello I'm Ed Zitron and this is Better Offline.

We've finally reached the end of our three-part how to argue with an AI booster series.

Big strong men are standing outside of the Better Offline studio, suspended 200 feet above the Las Vegas Strip with tears in their eyes and they're saying, sir, sir, it's the most beautiful podcast I've ever heard.

Please stop recording it.

Everyone else will feel insufficient when they hear it.

No, no, I have to continue.

I'm afraid my listeners need me.

But okay, seriously, folks.

If there's anything I want you to take home so far, it's that the arguments that these people, these AI sycophants, make,

they crumble under the slightest bit of scrutiny, and yet these arguments work because they either exploit the lack of knowledge of those of us who don't understand the rotten economics of AI, or because they force the opposite party to surrender their reason and to exit the planes of reality.

To which I say, fuck that, absolutely not.

This is a hill I'm prepared to die on, and I'll hope that you'll all be standing with me side by side.

Now, in the first episode, we shook away the claims that it's the early days for AI and we just need to give it more time.

Then I took apart the idea that generative AI is like Uber or fiber-optic networking, two industries that both burned a lot of money at the start but are otherwise nothing like generative AI.

And now it's time to deal with the dregs of the arguments.

These are the worst of the worst booster quips.

Here we go.

Ultra booster quip.

I thought about recording that with a bunch of reverb, but it didn't really work out for me, but I wanted to do it once.

But anyway, their argument is, uh-huh, AI is just like Amazon Web Services, a massive investment that took a while to go profitable, and everybody hated Amazon for it.

Now, I actually covered this in depth in The Hater's Guide to the AI Bubble, but the long and short of it is that Amazon Web Services is a platform, a necessity and an obvious choice; it burned about 10% of what big tech has burned chasing generative AI, and Amazon had also proven demand before building it.

Also, Amazon Web Services was break-even within three years, and OpenAI was founded in fucking 2015, and even if you start from November 2022, by Amazon Web Services standards, it should be break-even by now.

But now I'll quote myself.

Amazon web- no wait, sorry, that's the boosters.

Amazon web services was created out of necessity.

Amazon's infrastructure needs were so great that it effectively had to build both the software and hardware necessary to deliver a store that sold theoretically everything to theoretically everywhere.

Handling both the traffic from customers, delivering the software that runs Amazon.com quickly and reliably, and well, making sure things were stable.

It didn't need to come up with a reason for people to run web applications.

They were already doing so themselves, but in ways that cost a lot more, were inflexible and required specialist server skills.

Amazon Web Services took something that people already did, and that there was already proven demand for, and made it better and scaled it.

Eventually, Google and Microsoft would join the fray.

I editorialized a bit there, but I can do that with my own work.

Now, a common booster quip, by the way, is for them to say, well, this AI company, they've got high annualized revenues.

And as I've discussed in the past, this metric is basically month times 12.

And while it's a fine measure for normal, high-gross margin businesses like software as a service companies, it isn't for AI.

It doesn't account for churn, which is when people leave.

It also is a number intentionally used to make a company sound more successful.

So you can say $200 million annualized revenue instead of $16.6 million a month.

And you're also meant to mishear it: they said $200 million annualized, but you heard $200 million.

Your mind did.

That was a bad blinket reference, but I'll continue.

They want you to think 200 million.

They want you to think that's what they'll make.

More often than not, if they mention an annualized number, that number will not be how much they make that year.

Also, if they're using this number, it's likely not consistent.
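To make the sleight of hand concrete, here's a quick Python sketch (with illustrative, made-up numbers) of how annualized revenue is just the latest month times 12, and how churn, which the annualized figure ignores, makes the headline overstate what a company actually brings in over a year. The 5% monthly churn rate is an assumption for illustration, not a real figure:

```python
def annualized(monthly_revenue):
    # "Annualized revenue" is simply the most recent month multiplied by 12.
    return monthly_revenue * 12

# A company doing $16.6M in its best month gets to claim ~$200M "annualized".
print(annualized(16_600_000))  # 199200000

# But annualized revenue ignores churn. Hypothetical: start at $16.6M/month
# and lose 5% of revenue each month.
monthly = 16_600_000
actual_year = 0.0
for _ in range(12):
    actual_year += monthly
    monthly *= 0.95  # illustrative 5% monthly churn
print(round(actual_year))  # tens of millions short of the headline number
```

Under that assumed churn, the year comes in around $152 to 153 million versus the $199.2 million headline, and that gap is the whole reason to quote the annualized figure.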

And now, if they bring this up, you should just say to them, hey, how much profit is the company making?

And also, how much are they burning?

At this point, they will, I think, mace you.

I mean, with the spray or an actual mace.

AI boosters are strange.

Now, they'll also say, well,

this AI company's in growth mode, and they'll pull the profit lever when it's time.

And the answer to that is always going to be, why have none of them done this?

Not one.

Not one of them.

Now, a booster will burst through your door and go, AI, AGI, AGI, and then there's that wheel bullshit again.

It's always about the wheel with these fuckers.

We do not know how thinking works in humans and thus cannot extrapolate it to a machine.

And at the very least, human beings have the ability to re-evaluate things and learn, a thing that LLMs cannot do and will never do.

We do not know how to get to AGI.

Sam Altman said in June that OpenAI was now confident it knew how to build AGI as they have traditionally understood it.

Then in August, Altman said that AGI was not a super useful term and that the point of all this is it doesn't really matter and it's just this continuing exponential of model capability that we'll rely on for more and more things.

I'm really tired of people quoting this guy.

He doesn't make any fucking sense.

Read anything he says out loud and it's just

really just total bullshit.

Even Meta's chief AI scientist, Yann LeCun, says it isn't possible to make AGI with transformer-based models. We don't know if AGI is possible, and anyone claiming they do is lying. Anyone who's talking about AGI is talking about fan fiction. Again, ask them how they feel about Banjo and Kazooie. Do you think they made love?

Actually, that's a real ask. Next time someone seriously brings up AGI to you, bring up Banjo and Kazooie and their romantic involvement. I think that's the only response you should give. Now stop humoring them.

It is fanfiction.

But putting Banjo and Kazooie aside, there's also a really stupid booster thing they do, which is: I'm hearing from people deep within the AI industry that there are some sort of ultra-powerful models they're not talking about.

And this, by the way, is hogwash.

It's no different from your buddy's friend's uncle who works at Nintendo and says Mario is coming to the PlayStation.

Ilya Sutskever and Mira Murati raised billions of dollars for companies with no product, let alone a product roadmap.

And they did so because they saw an opportunity for a grift and to throw a bunch of money at compute for no reason.

Anyone who has secret shit is not talking about it because it doesn't exist.

Also, if someone from deep within the industry has told somebody big things are coming, they're doing so to con them or make them think that they have privileged information.

Ask for specifics.

And if they say, I couldn't possibly tell you, then they're full of shit.

They're full of crap.

They are full of doo-doo.

And if they get vague, get specific.

Oh, it's going to be able to automate all the things.

What things?

How?

How does it automate them?

Oh, I don't know.

Then you don't know shit about fuck.

Now, talking about not knowing shit about fuck, here's another booster quip.

ChatGPT is so popular.

700 million people use it weekly.

It's one of the most popular websites on the internet.

Its popularity proves its utility.

Look at all the paying customers.

Now, that paying customers part we'll get to in a second, but this argument is posed as a comeback to my suggestion that AI is not particularly useful.

A proof point that this movement is not inherently wasteful or that there are in fact use cases for ChatGPT that are lasting, meaningful, or important.

I fundamentally disagree.

In fact, I believe ChatGPT and LLMs in general have been marketed based on lies of inference, which I realize is ironic.

I know.

It's pretty clever.

I had a whole blog post written called The Lie of Inference that kind of became this.

It wasn't very good, though.

This is.

This is good.

Don't say it's bad.

I also have grander concerns and suspicions about what OpenAI considers a user and how it counts revenue.

Let me give you an example.

They claim to have 5 million business customers, yet 500,000 of those are from a $15 million year-long deal with Cal State University, which works out to around $2.50 a user a month.
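The per-user arithmetic there is worth seeing written out; a quick sketch using the figures above:

```python
# Cal State deal, per the figures above: $15 million for a year, 500,000 users.
deal_total = 15_000_000
users = 500_000
months = 12

per_user_per_month = deal_total / users / months
print(per_user_per_month)  # 2.5, i.e. roughly $2.50 a user a month
```

Compare that to ChatGPT's regular per-seat pricing and you can see why lumping these users into the business-customer total flatters the numbers.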

OpenAI has also started doing $1 a month trials of its $30 a month Team subscription, and one has to wonder how many of those subscribers are counted in the total, and indeed for how long.

I do not know the scale of these offers nor how long OpenAI has been offering them.

A Redditor posted about this $1-for-a-month deal a few months ago, saying that OpenAI was offering five seats at once, so one buck a month, for a month, per seat. How many people cancel after that? Who knows. Maybe OpenAI just hopes they don't. In fact, I found a few people talking about these deals, and even one adding that they were offered an annual $10-a-month ChatGPT Plus subscription. That's not $10 a month for just one month; that's for 12 months. One person said a few weeks ago that they'd seen people offered that same deal for canceling their subscription, and actually, I got the same thing when I tried to cancel.

Yes, I pay for ChatGPT.

I need to actually use the fucking thing to criticize it.

When I tried to cancel, it was like, hey, do you want three months for 10 bucks apiece?

And I was like, sure, just to prove my point.

Suspicious.

But there is a greater problem at play, by the way, and it goes beyond pricing.

And it's that ChatGPT and OpenAI have been marketed based on lies.

So ChatGPT has 700 million weekly active users.

OpenAI has yet to provide a definition.

Yes, I've asked them.

Which means that an active user could be defined as somebody who has gone to ChatGPT once in the space of a week.

This term is extremely flimsy and doesn't really tell us much.

Yes, it's a lot of people, but how active are they?

Similarweb says that in July 2025, chatgpt.com had 1.287 billion total visits, making it very popular.

What do these facts actually mean though?

As I said previously, ChatGPT has had probably the most sustained PR campaign for anything outside of a presidency or a pop star.

Every single article about AI mentions OpenAI or ChatGPT.

Every single feature launch, no matter how small, gets a slew of coverage.

Every single time you hear AI, you're made to think of ChatGPT by a tech media that's never stopped to think about their role in the hype or their responsibility to their readers.

And as the hype has grown, the publicity compounds because the natural thing for a journalist to do when everybody is talking about something is to talk about it more.

ChatGPT's immediate popularity may have been viral, but the media took the ball and ran with it, and then proceeded to tell people it did stuff it did not.

People were pressured to try this service under false pretenses, something that continues to happen to this day.

And I'm going to give you a really fucking grisly example.

When I discovered this, when I went and looked at this, it filled me full of rage.

It's disgraceful what happened.

On March 15th, 2023, Kevin Roose of the New York Times would say that OpenAI's GPT-4 was exciting and scary and that it was exacerbating, in his words, the dizzy and vertiginous feeling I've been getting whenever I think about AI lately, wondering if he was experiencing future shock. He then described how it was an indeterminate level of better, and then said something that immediately sounded ridiculous.

In one test conducted by an AI safety research group that hooked GPT-4 up to a number of other systems, GPT-4 was able to hire a human TaskRabbit worker to do a simple online task for it, solving a CAPTCHA test, without alerting the person to the fact it was a robot.

The AI even lied to the worker about why it needed the CAPTCHA done, concocting a story about a vision impairment.

Now, this doesn't sound even remotely real now, but this was two years ago.

So, I went and looked up the paper, and pretty much everything that Roose described was illustrative.

It isn't really clear whether any of it happened.

Now, he's referring to the safety card, which every model has that lists all the measures used to train it and such.

And this safety card led to the perpetration of one of the earliest falsehoods and most eagerly parroted lies about this fucking industry.

And that was that ChatGPT and generative AI are capable of agentic actions.

Outlet after outlet, and some people who should definitely have known better, led by Kevin Roose, eagerly reported an entire series of events that doesn't remotely make sense, starting with the fact that I don't think you can hire a TaskRabbit worker to solve a CAPTCHA, or at the very least not without a contrived situation where you create an empty task and ask them to complete it.

Why not use Mechanical Turk or Fiverr?

There are people right now offering that service.

Those were actual, real options.

But you know me, I'm a curious little critter.

So I went further and followed the citation from the safety card to the study.

It's by METR, on its research page.

It turns out that what actually happened was METR had a researcher copy and paste the generated responses from the model and otherwise handle the entire interaction with the TaskRabbit worker.

And based on the plurality of TaskRabbit contractors mentioned, it appears to have taken multiple tries.

On top of that, it appears that OpenAI and METR were prompting the model on what to say, which kind of defeats the point.

Like,

we don't actually know what they prompted it to do.

And when you look, it even says it does chain-of-thought reasoning, which didn't really exist back then.

Chain of thought is reasoning, and reasoning models came out at the end of 2024.

This whole thing is absolutely insane.

It's absurd that anyone wrote about it as real.

What happened, just to be really blunt, is that, if they even opened TaskRabbit (and it's really not obvious whether they actually did this), they had to go to the model and say, okay, I'm opening a TaskRabbit window.

And now the person has said this.

And now this is it.

It just doesn't sound real at all.

But even if it did, it's very obvious that they were telling the model what to say and then copy-pasting the response.

And it took them multiple tries.

It took me five whole minutes to find this article.

Partly because it was cited on the GPT-4 system card.

I then read it within that time, then wrote this part of the script.

It didn't require any technical knowledge other than the ability to read.

It is transparently, blatantly obvious that GPT-4 did not hire a TaskRabbit worker or indeed take any of these actions.

It was prompted to, and they did not show the prompts they used, likely because they had to use so many of them if they even did it.

Anyone falling for this is a mark, and OpenAI should have gone out of their way to correct people.

Instead, they sat back and let people publish outright misinformation.

There's more to San Francisco with the Chronicle.

More to experience and to explore.

Knowing San Francisco is our passion.

Discover more at sfchronicle.com.

Hi, I'm Morgan Sung, host of Close All Tabs from KQED, where every week we reveal how the online world collides with everyday life.

There was the six-foot cartoon otter who came out from behind a curtain.

It actually really matters that driverless cars are going to mess up in ways that humans wouldn't.

Should I be telling this thing all about my love life?

I think we will see a Twitch streamer president maybe within our lifetimes.

You can find Close All Tabs wherever you listen to podcasts.

In business, they say you can have better, cheaper, or faster, but you only get to pick two.

What if you could have all three at the same time?

That's exactly what Cohere, Thomson Reuters, and Specialized Bikes have since they upgraded to the next generation of the cloud.

Oracle Cloud Infrastructure.

OCI is the blazing fast platform for your infrastructure, database, application development, and AI needs, where you can run any workload in a high availability, consistently high performance environment and spend less than you would with other clouds.

How is it faster?

OCI's block storage gives you more operations per second.

Cheaper?

OCI costs up to 50% less for computing, 70% less for storage, and 80% less for networking.

Better?

In test after test, OCI customers report lower latency and higher bandwidth versus other clouds.

This is the cloud built for AI and all your biggest workloads.

Right now with zero commitment, try OCI for free.

Head to oracle.com slash strategic.

That's oracle.com slash strategic.

Every business has an ambition.

PayPal Open is the platform designed to help you grow into yours with business loans so you can expand and access to hundreds of millions of PayPal customers worldwide.

And your customers can pay all the ways they want with PayPal, Venmo, Pay Later, and all major cards so you can focus on scaling up.

When it's time to get growing, there's one platform for all business, PayPal Open.

Grow today at PayPalOpen.com.

Loans subject to approval in available locations.

Roose, along with his co-host Casey Newton, would go on to describe this example at length on a podcast that week, describing an entire narrative where the human actually gets suspicious and GPT-4 reasons out loud that it should not reveal that it is a robot.

It's not a reasoning model.

At which point, the TaskRabbit worker solves the CAPTCHA.

During this conversation, Newton gasps and says, Oh my god, twice.

And when he asks Roose, how does this model understand that in order to succeed at its task, it has to deceive the human, Roose responds: We don't know, that is the unsatisfying answer.

And Newton laughs and states, We need to pull the plug.

I mean, again, what?

Disgraceful, embarrassing, reprehensible.

All that and more on the Hard Fork podcast.

Published weekly.

You can cut that. That ad's for free, fellas.

Credulousness aside, the GPT-4 marketing campaign was incredibly effective, creating an aura that allowed OpenAI to take advantage of the vagueness of its offering as people, including members of the media, willfully filled in the blanks for them.

Altman has really never had to work to sell his product.

Think about it.

Have you ever heard OpenAI tell you what ChatGPT can do, or go to great lengths to describe its actual abilities?

Even on OpenAI's own page for ChatGPT, the text is extremely vague.

You scroll down, you're told that ChatGPT can write, brainstorm, edit, and explore ideas with you.

It can generate and debug code, automate repetitive tasks, not clear what the tasks are, and help you learn new APIs.

Question mark.

With ChatGPT, you can learn something new, dive into a hobby, answer complex questions, and analyze data and create charts.

What repetitive tasks?

Who knows?

How am I learning?

Unclear.

It's got thinking built in.

What that means is also unclear, unexplained, and thus allows a user to incorrectly believe that ChatGPT has a brain and thinks.

To be clear, I know what reasoning means, but this website does not attempt to explain what thinking means.

You can also offload complex tasks from start to finish with an agent, which can, according to OpenAI, think and act, proactively choosing from a toolbox of agentic skills to complete tasks for you using its own computer.

This is an egregious lie, employing the kind of weasel wording that would be used to torture I.R. Baboon for an eternity.

Precise in its vagueness, OpenAI's copy is honed to make reporters willing to simply write down whatever they see and interpret it in the most positive light.

And thus the lie of inference began.

What ChatGPT meant was muddied from the beginning, and thus ChatGPT's actual outcomes have never been fully defined.

What ChatGPT could do became a kind of folklore, a non-specific form of automation that could write code and generate copy and images, that can analyze data, all things that are true but one can infer much greater meaning and use from.

One can infer that automation means the automation of anything related to text, or that write code means write the entirety of a computer program.

OpenAI's ChatGPT agent is not, by any extension of the words, and I quote, already a powerful tool for handling complex tasks, and OpenAI has not, in any meaningful sense, committed to any actual outcomes.

As a result, potential users, subject to a 24-7 marketing campaign, have been pushed towards a website that can theoretically do anything or nothing and have otherwise been left to their own devices.

The endless gaslighting, societal pressure, media pressure, and pressure from their bosses has pushed hundreds of millions of people to try a product that even its creators can't really describe or don't feel compelled to.

And if I was wrong, we'd have real use cases by now and better metrics than weekly active users.

As I've said in the past, OpenAI is deliberately using these weekly active users so that it doesn't have to publish its monthly active users, which I believe would be much higher.

Now, why wouldn't it do this?

Well, OpenAI, as I've mentioned, has 20 million paying ChatGPT subscribers and 5 million business customers, with no real explanation of what the difference might be, other than that it involves Team and Edu, but not Pro.

Anyway, this is already a mediocre 3.5% conversion rate.

Yet its monthly active users, which are likely 800 or 900 million (though these are guesses), would make that rate lower than 3%, which is pretty terrible considering everyone says this shit is the future.
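The conversion rate here is simple division; a quick sketch, where the paying figures come from above and the monthly-active figures are the guesses just described:

```python
paying = 20_000_000 + 5_000_000  # paying subscribers plus business customers, per the figures above
weekly_actives = 700_000_000

# Conversion against weekly actives: about 3.6%.
print(round(paying / weekly_actives * 100, 2))

# Against guessed monthly actives of 800-900 million, the rate falls further.
for mau in (800_000_000, 900_000_000):
    print(round(paying / mau * 100, 2))
```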

I'm also tired of having people claim that search or brainstorm or companions are a lasting, meaningful business model.

I'm really tired of it.

I'm tired of being told this again and again and again.

That's not what Chat GPT is going to actually survive on.

Breathe.

Okay.

Let's move on.

Here's another booster quip, though.

OpenAI is making tons of money.

That's proof they're a successful company, and you are wrong somehow.

So OpenAI announced that it had hit its first $1 billion month on August 20th, 2025, on CNBC, in fact.

Weirdly enough, by the way, that quote was not in the TV interview.

But anyway, this also brings it exactly in line with the estimated $5.26 billion in revenue that I believe it had made by the end of July.

I did that math in a premium newsletter. Please pay me.

Please pay me.

However, remember what the MIT study that I mentioned said.

Enterprise adoption is high, but transformation is low.

There are tons of companies throwing money at AI, but they are not seeing actual returns.

OpenAI's growth as the single most prominent company in AI, and if we're honest, one of the most prominent in software writ large, makes sense, but at some point it will slow, because the actual returns for businesses are not there.

If there were, we'd have one article where we could point at a ChatGPT integration that helped scale a company, save a bunch of money, or make a bunch of money, written in plain English, and not in the gobbledygook of profit improvement.

Also, OpenAI is projected to make $12.7 billion in 2025.

How exactly will it do that?

Is it really making $1.5 billion a month by the end of the year?

Even if it does, is the idea that it keeps burning $10 billion or more a year, every year? Like, what actual revenue potential does OpenAI have long term? Its products are about as good as everybody else's, cost about the same, and do the same things. ChatGPT is basically the same product as Claude or Grok (maybe less Mecha-Hitler) or any number of different LLMs. The only real advantages OpenAI has are infrastructure and brand recognition. These models have clearly hit a wall in training, hitting diminishing returns, meaning that the infrastructural advantage is that it can continue providing its service at scale, nothing more.

It isn't making its business cheaper, other than the fact it mostly hasn't had to pay for it, apart from the site in Abilene, Texas, where, in 2025, it promised Oracle $30 billion a year.

I'm sorry, I don't buy it.

I don't buy that this company will continue growing forever, and its stinky conversion rate isn't going to change anytime soon.

When OpenAI opens Stargate Abilene, it will turn profitable.

How?

I hear this one a good amount.

How?

How?

How?

How?

How?

How's it going to happen?

Nobody ever answers.

No one ever answers actually how this company will become profitable.

It's fucking insane to me.

Nobody ever answers the question.

Efficiencies?

Efficiencies.

They're going to be efficient?

Mm-hmm.

Mm-hmm.

They're going to be efficient.

If you're going to say GPT-5, I wrote a huge scoop and then did an episode about why it's not more efficient.

In fact, it's less efficient.

And I'm sure one of you is going to argue, well, you know, they could do the custom silicon.

They have a $10 billion deal with Broadcom.

How are they going to fucking pay for that?

Also, you realize that...

Well, actually, no, just, you know, no, you're right.

They're going to get the chip from Broadcom because you know what they always say about the first generation of tech, right?

It always works and it's always great.

And it has no problems.

That's what I always,

that's what I happen with pretty much every first time they make anything in tech.

Anyway, let's move on.

You'll hear boosters also be like, well, my brother's friend's dog uses ChatGPT and they love it.

Well, I heard this happened, or my mate uses it for this, or I heard about this person, or I use it for this one thing.

Before we go any further, just to be clear: when you hear a booster bring up AI and say something, make sure they're talking about generative AI.

Are they actually talking about generative AI?

Is this a large language model thing?

It's very, very, very common for people to conflate AI with generative AI.

There are many different kinds of AI.

Make sure that the AI booster, whatever they're claiming, whatever they're telling you, is actually about large language models.

There are all sorts of other kinds of machine learning that people love to bring up.

LLMs have nothing to do with folding at home, autonomous cars, or most disease research.

But okay, let's do a speed run.

Using AI led researchers to discover 44% more materials.

No, it didn't.

MIT has now withdrawn this paper, citing concerns about its integrity.

I've linked it in the show notes.

There's a huge rundown.

Here's another quote.

AI is so profoundly powerful, it's causing young people to have trouble finding a job.

While young people have been having trouble finding jobs, there's no proof that AI is the reason.

Every piece of coverage or reading is citing an Oxford Economics report that, amidst a bunch of numbers, says, and I quote, there are signs that entry-level positions are being displaced by artificial intelligence at higher rates.

A statement that it does not back up, other than claiming that the high adoption rate by information companies, along with the sheer employment declines in some roles since 2022, suggests some displacement effect from AI.

And digging deeper, the largest displacement seems to be entry-level jobs normally filled by recent graduates.

There's otherwise no new data.

Anyone making this point is grasping at straws.

I go into this in more detail in a newsletter called Sincerity Wins the War, which I've linked to, but it's one of the worst reported stories in tech.

And now I'm actually going to ad-lib for a second, because I forgot while writing this script: there was also this thing that came out of Stanford that said there's been a 13% drop in jobs affected by AI.

And this was used as proof that AI was taking them.

Now, curious little critic that I am, I went and read that.

What it actually did was find a bunch of jobs that they think are related to AI and being affected by AI.

They saw they were going down.

They went, oh, it's AI that did that.

They fart around with various statistics, but that's the long and short of it.

I'll give you an example of one of the jobs, accountancy.

Now, any accountants listening, big up to my accountant listeners.

There's been a hiring crisis in accountancy for years.

People are not becoming CPAs.

The reason there are fewer of them is that fewer people are becoming them.

It's nothing to do with AI.

Imagine if anyone put half as much effort into writing up these stories as I did into rebutting one of these booster quips.

But here's another one: AI is replacing young coders. It is not.

In fact, Amazon's cloud chief just said that replacing junior employees with AI is one of the dumbest things he's ever heard.

There is no actual real evidence that this is the case.

Every single story you have read is anecdotal.

Anyone peddling this has an agenda or is not reading.

Every CEO mentioning this specifically avoids saying the words "AI is replacing people," because AI can't replace people.

I will add an aside: there are people whose jobs have been replaced by AI, translators and transcribers among them.

Brian over at Blood is in the Machine, Blood in the Machine even, sorry, Brian, is doing a great job of covering this.

There are people that have lost jobs.

These people are losing it because their bosses are fucking stupid, because their bosses are just taking the shittiest possible version of their work and slopping it up.

That's not happening at the knowledge worker scale, nor is it happening at the coder scale.

Everyone telling you that has an agenda.

But boosters will also claim that AI is somehow doing scientific research, or that it will.

And it won't.

I've included a write-up about why foundation models can't do this. Someone's going to read it and say, but there's this bit where it says it isn't a defeat of LLMs. And the reason he says this, I shit you not, is that he claims LLMs aren't incapable of doing scientific research, they're merely "insufficient."

Which is the same thing. They're insufficient. Anyway, he claims they're also not dead weight for science, then spends hundreds of words meandering around the point to kiss up to AI boosters, I assume because they've terrified him by being really annoying.

And these people need to go outside and touch grass.

Now, a lot of people think that telling me they use AI all the time will change my mind.

I cannot express how irrelevant it is that you have a use case.

Every use case I hear is one of the following.

I use it for brainstorming, to which I say, who cares?

Not a business model, it's commoditized.

I use it like search.

Who cares?

It's not even good at search.

It's fine.

It's not even better than the low bar set by Google search.

The results it gives aren't great, and the links are deliberately made smaller, which gets in the way of me clicking them so I can actually look at the content.

If you're using ChatGPT for search, you may not actually care about the content of the things you're looking at.

If I'm wrong, great.

You now have a functional search engine.

Congratulations.

Well, I use it for research.

And if you use it for research, you do not respect actual research.

You want a quick answer.

It's that simple.

These reports are slop.

I've read many, many, many AI reports and they're not good.

Sorry.

Well, I use it for coding, or I know someone who used it for coding.

And I'll get to that in a minute.

But all of this would be fine and dandy if people weren't talking about this stuff as if it was changing society.

None of these use cases come close to explaining why I should be impressed by generative AI.

It also doesn't matter if you yourself have a kind of useful thing that AI did for you once.

We are so past the point when any of that matters.

AI is being sold as a transformational technology, and I have yet to see it transform anything.

I have yet to hear one use case that truly impresses me, or even one thing that feels possible now that wasn't possible before.

This isn't even me being a cynic.

I'm ready to be impressed.

I just haven't been impressed in three fucking years, and it's getting boring.

Also, tell me with a straight face that any of this shit is worth the infrastructure.

Remember, AI boosters are arguing that this stuff is powerful.

None of these use cases are powerful sounding.

But sir,

sir, vibe coding is changing the world, allowing people who can't code to make software.

Now, this is one of the most brain-dead takes about AI and coding: the idea that vibe coding is allowing anyone to build software.

And you'll never guess what: Kevin Roose covered this. He actually wrote this article, and while writing the script, I hadn't even noticed it.

Anyway, while technically true in that one can just type build me a website into one of many coding environments, this does not mean said website is functional, secure, or useful.

Let's make this really clear.

AI cannot just handle coding.

Go into the show notes and read this piece I've linked from Colton Voege.

I have actually interviewed him now.

The episode is coming out in the next few weeks, and the interview is fucking brilliant.

And then there's the other piece I've linked, by Nik Suresh.

If you contact me about AI and coding without reading these, I will send them to you and nothing else or crush you like a car in a garbage dump into a cube.

One or the other I will choose at the time.

Also, show me a vibe-coded company, please.

Not a company where somebody who can code has quickly spun up some features.

A fully functional, secure, and useful app that has made money

and made by somebody who cannot read or write code.

You won't be able to because it is impossible.

Vibe coding is a marketing term based on lies peddled by people who either have a lack of knowledge or morals.

And are AI coding environments making people faster?

I don't think so.

In fact, a recent study from METR suggested that they actually make experienced software engineers 19% slower.

The reason that nobody is vibe coding entire companies is that software development is not just "put a bunch of code in a pile and hit go."

And oftentimes when you add something, it breaks something else.

This is all well and good if you actually understand code.

It's another thing entirely when you're using Cursor or Claude Code like a kid at an arcade machine, turning the wheel repeatedly without putting a coin in, pretending to play while the demo runs.

Vibe coders are also awful for the already negative margins of most AI coding environments, as every single thing they ask the model to do is imprecise, burning tokens in pursuit of a goal they themselves don't really understand.

Vibe coding doesn't work, it will not work, and pretending otherwise is at best ignorance and at worst supporting a campaign built on lies.

There's more to San Francisco with the Chronicle.

There's more food for thought, more thought for food.

There's more data insights to help with those day-to-day choices.

There's more to the weather than whether it's going to rain.

And with our arts and entertainment coverage, you won't just get out more, you'll get more out of it.

At the Chronicle, knowing more about San Francisco is our passion.

Discover more at sfchronicle.com.

Be honest, how many tabs do you have open right now?

Too many?

Sounds like you need close all tabs from KQED, where I, Morgan Sung, Doom Scroll so you don't have to.

Every week, we scour the internet to bring you deep dives that explain how the digital world connects and divides us all.

Everyone's cooped up in their house.

I will talk to this robot.

If you're a truly engaged activist, the government already has data on you.

Driverless cars are going to mess up in ways that humans wouldn't.

Listen to Close All Tabs, wherever you get your podcasts.

Every business has an ambition.

PayPal Open is the platform designed to help you grow into yours with business loans so you can expand and access to hundreds of millions of PayPal customers worldwide.

And your customers can pay all the ways they want with PayPal, Venmo, Pay Later, and all major cards so you can focus on scaling up.

When it's time to get growing, there's one platform for all business: PayPal Open.

Grow today at PayPalOpen.com.

Loan subject to approval in available locations.

Get your home ready for the holidays for less at Lowe's.

Shop select Style Selections vinyl flooring at a dollar ninety-nine per square foot, with installation before the holidays.

Talk to an associate or get started today at Lowes.com/holidayinstall.

Lowe's. We help, you save.

Basic installation only, date restrictions apply, subject to availability; installation by independent, licensed contractors in the contiguous United States.

And this is all built up to one final point.

I'm no longer accepting half-baked arguments.

If you're an AI booster, please come up with better arguments.

And if you truly believe in this stuff, you should have a firmer grasp on why you do so.

It's been three years, and the best some of you have is "it's really popular" and "Uber also burned money."

Your arguments are based on what you wish were true rather than what's actually true and it's deeply embarrassing.

Then again there are many well-intentioned people who aren't necessarily AI boosters who repeat these arguments regardless of how thinly framed they are.

In part because we live in a high-information, low-processing society where people tend to put great faith in those who are confident in what they say and sound smart.

I also think the media is failing on a very basic level to realize that their fear of missing out or seeming stupid is being used against them.

If you don't understand something, it's likely because the people you're reading or hearing it from don't either.

If a company makes a promise and you don't understand how they'll deliver on it, it's their job to explain how, and your job to say, in clear and defined language, if it isn't plausible.

This has gone beyond simple objectivity into the realm of an outright failure of journalism.

I have never seen more misinformation about the capabilities of a product in my entire career, and it's largely peddled by reporters who either don't know or have no interest in knowing what's actually possible, in part because all their peers are doing the same thing and saying the same nonsense.

As things begin to collapse, and they sure look like they're collapsing, but I'm not making any wild claims about the bubble bursting quite yet, it will look increasingly more deranged to bluntly publish everything that these companies say.

Never have I seen an act of outright contempt more egregious than Sam Altman saying that GPT-5 was actually bad and that GPT-6 will be even better.

Members of the media.

Sam Altman does not respect you.

He is not your friend.

Clammy Sam Altman is not secretly confiding in you.

Clammy Sam thinks you are stupid, easily manipulated, and willing to print anything he says, largely because many members of the media will print exactly what he says whenever he says it.

And to be clear, if you wrote about GPT-6 and made fun of it, that's great.

But let's close by discussing the very nature of AI skepticism and the so-called void between those who hate AI and those who love AI from the perspective of one of the most prominent people in the skeptic camp.

Critics and skeptics are not given the benefit of grace, patience, or in many cases, hospitality when it comes to their position.

While they may receive interviews and opportunities to give their side, it's always framed as the work of a firebrand, an outlier, or somebody with dangerous ideas that they must eternally justify.

Skeptics are demonized, their points under constant scrutiny, their allegiances and intentions constantly interrogated for some sort of moral or intellectual weakness.

Skeptic and critic are words said with a sneer of trepidation, that the listener should be suspicious that this person isn't agreeing that AI is the most powerful special thing ever.

To not immediately fall in love with something that everybody is talking about is to be framed as a hater, to have oneself introduced with the words "not everyone agrees" in 40% of your appearances.

By comparison, AI boosters are the first to get TV appearances and offers to be on panels.

Their coverage is featured prominently on Techmeme, and they sell slop-like books called shit like The Future of Intelligence: Masters of the Brain, featuring 18 interviews with different CEOs that all say the same thing.

They don't have to justify their love.

They simply have to remember all the right terms, chirping out "test-time compute" and "the cost of inference is going down" enough times to get Dario Amodei to give them an hour-long interview where he says the models are, in a few years, going to be the most powerful schoolteacher ever built.

And by the way, yeah, I did sell a book because my shit fucking bangs.

My shit rocks.

I'm not going to be too smug, but like, I put a lot of effort into this and I research it very well.

Others should try harder.

I have consistent, deeply sourced arguments that I've built over the course of years.

I didn't become a hater because I'm a contrarian.

I became a hater because the shit that these fucking oafs have done to the computer pisses me off.

I wrote The Man Who Killed Google Search because I wanted to know why Google Search sucked.

I wrote Sam Altman, Freed because at the time I didn't understand why everybody was so fucking enamored with this clammy sociopath.

Everything I do comes from a genuine curiosity and an overwhelming frustration with the state of technology.

I started writing the newsletter that led to this podcast with 300 subscribers and 60 views and have written it as an exploration of subjects that grows as I write.

I do not have it in me to pretend to be anything other than what I am, and if that's strange to you, well, I'm a strange man, but at least I'm an honest one.

I do have a chip on my shoulder in that I really do not like it when people try to make other people feel stupid, especially when they do so as a means of making money for themselves or making someone else look good.

I write this stuff out because I have an intellectual interest.

I like writing, and by writing, I'm able to learn and process my complex feelings around technology, and talking it out actually feels good.

It's an intellectual exercise that I really enjoy, and I happen to do it in a manner that hundreds of thousands of people enjoy every month, and I'm not specifying where those people go.

If you think that I've grown this by being a hater, you are doing yourself the disservice of underestimating me, which I will use to my advantage by writing deeper, more meaningful, and more insightful things than you, and then I'll say them with lots of curse words on this podcast.

I've watched these pigs ruin the computer again and again and make billions doing so, and all of this is happening while the media celebrates the destruction of things like Google, Facebook, and the fucking environment in pursuit of eternal growth.

I can't manufacture my disgust, nor is it hard to, nor can I manufacture whatever it is inside me that makes it impossible to keep quiet about these things.

I don't know if I take this too seriously, whether I don't take it seriously enough, because I keep saying fucking shit, but I'm honored that I'm able to do so, and I really appreciate everyone who listens, reads, or engages with me in any way.

I really do love you all for listening.

I know that this was a long three-parter.

I've enjoyed recording it.

I've done lots of retakes.

Matt Osowski, love you, man.

Sorry for all of this.

I'll catch you next episode.

Thank you for listening to Better Offline.

The editor and composer of the Better Offline theme song is Matt Osowski.

You can check out more of his music and audio projects at mattosowski.com.

M-A-T-T-O-S-O-W-S-K-I dot com.

You can email me at ez@betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter.

I also really recommend you go to chat.wheresyoured.at to visit the Discord, and go to r/BetterOffline to check out our Reddit.

Thank you so much for listening.

Better Offline is a production of CoolZone Media.

For more from CoolZone Media, visit our website, coolzonemedia.com or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.


There's a lot going on in Hollywood.

How are you supposed to stay on top of it all?

Variety has the solution.

Take 20 minutes out of your day and listen to the new Daily Variety podcast for breaking entertainment news and expert perspectives.

Where do you see the business actually heading?

Featuring the iconic journalists of Variety and hosted by co-editor-in-chief Cynthia Littleton.

The only constant in Hollywood is change.

Open your free iHeartRadio app, search Daily Variety, and listen now.

Life's messy.

We're talking spills, stains, pets, and kids.

But with Anabay, you never have to stress about messes again.

At washablesofas.com, discover Anabay sofas, the only fully machine-washable sofas inside and out, starting at just $699.

Made with liquid and stain-resistant fabrics, that means fewer stains and more peace of mind.

Designed for real life, our sofas feature changeable fabric covers, allowing you to refresh your style anytime.

Need flexibility?

Our modular design lets you rearrange your sofa effortlessly.

Perfect for cozy apartments or spacious homes.

Plus, they're earth-friendly and built to last.

That's why over 200,000 happy customers have made the switch.

Upgrade your space today.

Visit washablesofas.com now and bring home a sofa made for life.

That's washablesofas.com.

Offers are subject to change and certain restrictions may apply.

Top reasons technology pros want to move to Ohio, a thriving tech industry with high-paying jobs for programmers, developers, database architects, and more.

Ohio is the silicon heartland with the top tech brands and thousands of startups too.

Shorter commute times mean more time for you.

And since your dollar goes further in Ohio, it's like a cheat code for success.

The tech career you want and a life you'll love.

Have it all in the heart of it all.

Learn more at callohiohome.com.

This is an iHeart Podcast.