987 - May I Meet You? feat. Ed Zitron (11/17/25)
Get your Ed at:
Better Offline podcast: https://linktr.ee/betteroffline
Where’s Your Ed At newsletter: https://www.wheresyoured.at/
Twitter/X: https://x.com/edzitron
Bluesky: https://bsky.app/profile/edzitron.com
Press play and read along
Transcript
Speaker 0 All I wanna be is ill jumping.
Speaker 0 All I wanna be is hell jumping.
Speaker 0 We're pay slows.
Speaker 0 All I wanna be is hell
Speaker 1 AI.
Speaker 2 It isn't just two letters or a Steven Spielberg classic. Nor is it Bicentennial Man, which is actually a different movie about a robot who had to wait 200 years to have sex.
Speaker 2 But will investors be waiting 200 years for their capital to have sex? That is the question that we have Ed Zitron on to discuss today. Ed, do you think that the robot in Bicentennial Man had sex?
Speaker 2 I understand this is the subject of a recent article you wrote with the Financial Times.
Speaker 1 Yeah.
Speaker 3 Well, Bryce Elder and I at the Financial Times were discussing this, and we could not, we spoke to many sources, and no one would agree. Half of them believed he fucked all the time.
Speaker 3 Some of them believed he was actually asexual,
Speaker 3 which meant he would have sex, but only with one specific partner, which I think is not seen in the movie, but I think that's the part of the movie.
Speaker 2 Yeah, well, actually, Michael Burry closed his fund because he couldn't figure out this question.
Speaker 2 It's vexed investors pretty much ever since Bicentennial Man came out.
Speaker 1 I think the one thing I can say with absolute certainty here is that the Jude Law character from Steven Spielberg's AI definitely had sex.
Speaker 1 That was sort of like
Speaker 1 a hot point in the world.
Speaker 3 100%.
Speaker 2 Yeah. Actually, we have Ed on today to discuss kind of a bombshell regarding OpenAI's capital expenditures and potential operating costs.
Speaker 2 Ed, could you explain what you found looking over the
Speaker 2 financial data that OpenAI has on offer to the public?
Speaker 3 Yeah.
Speaker 3 So
Speaker 3 specifically, it's opex and revenue. So in documents I viewed, and the FT has looked over as well, because they did a story on it too,
Speaker 3 I found that OpenAI spent, from the beginning of the year through the end of September, $8.67 billion on inference, which is just the cost of creating the outputs.
Speaker 3 Inference is just the process of spitting out, oh, Scooby-Doo with a big old pair of tits, or a fanfic about that, which is, I think, most of what OpenAI's ChatGPT is used for, based on the data I've seen.
Speaker 3
But also, on top of this, I was able to see Microsoft's revenue share with OpenAI. So they get 20% of all of OpenAI's revenue.
And
Speaker 3 through the first three quarters of this year, they've made $4.329 billion in revenue. Now, it might be a little bit more, based on the share Microsoft gets from Bing and from selling OpenAI's models.
Speaker 3 Microsoft has the exclusive rights to sell OpenAI's models, but basically this company's projected to make $13 billion this year.
Speaker 3 I don't fucking know how they're going to do so, because they'd have to make like $8.7 billion of revenue in the final quarter of this year.
Speaker 3
But putting that all aside, all of their revenue is being eaten by their inference costs.
And that's just to be clear, the cost to just create the outputs.
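As a rough sanity check, the arithmetic being described works out like this. All figures are the ones quoted in this conversation, not audited financials:

```python
# Back-of-the-envelope math on the figures discussed above (all USD billions).
# These numbers are the ones cited in the conversation, not audited financials.
inference_cost_jan_to_sep = 8.67    # reported inference spend, Jan through Sep
revenue_jan_to_sep = 4.329          # revenue through the first three quarters
projected_full_year_revenue = 13.0  # OpenAI's reported 2025 revenue projection

# Revenue needed in Q4 alone to hit the full-year projection
q4_needed = projected_full_year_revenue - revenue_jan_to_sep
print(f"Q4 revenue needed: ${q4_needed:.2f}B")

# Inference spend alone already exceeds all revenue so far
ratio = inference_cost_jan_to_sep / revenue_jan_to_sep
print(f"Inference costs are {ratio:.1f}x revenue to date")
```

Run as written, this shows roughly $8.67B of Q4 revenue needed and inference costs at about twice revenue, before salaries, training, real estate, or legal costs are counted.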
Speaker 3
On top of that, they have thousands of highly paid staff. They have billions of dollars of training costs.
They, I mean, they have real estate, data, and legal fees.
Speaker 3 So this is just massive amounts more. And indeed, there was reporting that came out earlier in the year that said they'd only spent $2.5 billion on inference for the first half of the year.
Speaker 3
It's bullshit. It's just not, it's not what I've seen.
And it's really worrying because this company, I don't know, maybe there's something missing. I'm not sure what it could be.
Speaker 3 But if they've only made this much money and they spent this much money, I don't see how OpenAI survives the next year. And I mean, I've kind of been on this train for a while.
Speaker 2 So
Speaker 2 from the charts that I saw on the FT piece,
Speaker 2 it showed inference costs quadrupling in the past year. And
Speaker 2 if you extend it to 18 months, I think it, whatever the word for multiplying by eight is, I'm not even going to attempt that one.
Speaker 1 But
Speaker 2 what, like, obviously it's kind of a black box, but what accounts for the almost exponential growth in inference costs?
Speaker 3 So what it is, is, there was a common myth that was broken by MIT Technology Review in the middle of the year.
Speaker 3 People used to think that all of the costs of these data centers came from training, from shoving data into these models. I'm sure a listener will hate me for that simplified version, but whatever.
Speaker 3 Nevertheless, according to MIT Technology Review, 80 to 90% of data center expenditures are on inference, just creating outputs.
Speaker 3 Now, the reason that's increased is OpenAI and all these companies have hit the point of diminishing returns with these models, where they can't just train them more.
Speaker 3 So they have to do something called reasoning, which is where, and I'm doing air quotes, they think,
Speaker 3 which means that instead of just going, okay, I will write this, when it generates an output, it considers the steps. So you say, okay, generate me a table of the data that I'm feeding you.
Speaker 3 And it says, okay, they want a table of something. I will do this, this, and this.
Speaker 3 Now, that thinking process, that kind of back of house, breaking down a task into component elements, that is more token intensive.
Speaker 3 It's basically generating an output to work through how to generate an output. This is called test-time compute.
Speaker 3 It massively increases the cost of doing, in many cases, the same thing, but it's the only way they're really seeing any kind of improvement.
Speaker 3 And when I say improvement, that's based on benchmarks which are rigged for large language models because these things don't think, they don't have consciousness, they can't do most of the things that humans do.
Speaker 3 And as a result, I will also say they're probabilistic. So each time they generate something, they're guessing based on probability.
Speaker 3 So you've just got a machine that computes a bunch to put an output together. And like GPT-5, the new version of ChatGPT, came out in August, I think.
Speaker 3
Very reasoning focused. So everyone is just throwing more compute power at every output.
So the costs are going to grow exponentially. And that's for free users.
That's for paid users.
Speaker 3 Doesn't matter who.
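A toy sketch of why reasoning inflates inference bills: reasoning ("thinking") tokens are generally billed like output tokens even though the user never sees most of them. The price and token counts below are invented round numbers for illustration, not OpenAI's actual rates:

```python
# Illustrative sketch of why "reasoning" inflates inference costs.
# Price and token counts are made-up round numbers, not real API rates.
PRICE_PER_MILLION_OUTPUT_TOKENS = 10.00  # hypothetical

def cost(output_tokens: int, reasoning_tokens: int = 0) -> float:
    """Reasoning tokens are billed like output tokens, even though
    the user never sees most of them."""
    total = output_tokens + reasoning_tokens
    return total * PRICE_PER_MILLION_OUTPUT_TOKENS / 1_000_000

# The same 500-token answer, with and without a chain-of-thought pass:
plain = cost(output_tokens=500)
reasoned = cost(output_tokens=500, reasoning_tokens=4000)
print(f"plain:    ${plain:.4f}")
print(f"reasoned: ${reasoned:.4f}")  # 9x the compute for the same visible answer
```

The visible answer is identical in both cases; the reasoning pass just multiplies the tokens, and therefore the compute, behind it.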
Speaker 1 And I have a question about the concept of inference. Like you're saying
Speaker 1 OpenAI is spending an astonishing amount of money in the billions of dollars.
Speaker 1 And you said like on something, on inference, which is the creation of the thing that they're supposed to be selling, right? It's the creation of the output. It's when a model creates an output.
Speaker 3 That's a very simplified way of putting it. There's a whole technical thing, but when inference happens is basically the machine creating the output that you see.
Speaker 3 So if you say, chat GPT, generate me, I don't know, a thousand words of a story about this, it then, through inference, creates the output that gives you the text.
Speaker 1 So, like, when you ask ChatGPT something, the answer it spits back, how it generates that, that's where the money's going.
Speaker 1 Like, the thing that's hard for me to understand is, the thing that they're selling already exists. So why are they spending, like, where is all this money going?
Speaker 1 Is it to make it better? Is it to make it the kind of thinking machine that they're selling it as?
Speaker 3 Well, let me rephrase it. So inference is not a lookup process.
Speaker 3 When a large language model generates an answer, it isn't looking at a database of information. It's not like it's got every fact.
Speaker 3 The reason that these things hallucinate is because each time they're generating everything new. And the more training data they have, the more likely they are to give a right answer.
Speaker 3 But that being said,
Speaker 3 the process of reasoning, which is just as a reminder, it breaks down the task into component elements and then spits out an output.
Speaker 3
That process, the more reasoning it does, the more likely it is to hallucinate. So the reason it's getting more expensive is it's doing more computing.
In many cases, it's getting things wrong.
Speaker 3 But it's doing more compute-intensive stuff. So the cost of just creating outputs is increasing.
Speaker 3 And Altman himself said, not long after GPT-5 came out, that way more users are now being exposed to reasoning models, which just means everyone trying to generate something from ChatGPT is now using more compute to generate it.
Speaker 3
And they will claim that they have this router model. There's the whole thing with GPT-5.
This router model, it makes things more efficient. I actually reported that it's the literal opposite.
Speaker 3 By using the router model, because they have to do something called the system prompt, the whole thing,
Speaker 3 they are basically doing more work to generate each thing and sending requests to more compute-intensive reasoning models. Does that make sense? I can break it down a little bit more.
Speaker 1 Yeah,
Speaker 2 I mean, to add on to that a little bit, this is a super abridged,
Speaker 2 dumbed-down-to-the-point-where-I-can-sort-of-understand-it explanation. But
Speaker 2 every time you ask a question to ChatGPT,
Speaker 2 it associates what the query or input is with a set of math equations. And the training is associating different sets of answers with math equations.
Speaker 2 And the more complex that those inputs become, and
Speaker 2 they become exponentially more complex with reasoning because they're breaking it down into components,
Speaker 2 they're doing more equations and that it takes more computing power. It takes more like actual power.
Speaker 3 I think you can just simplify it to: they use compute, and with transformer models, the whole root of it is comparing two sets of data and deciding which one is the most likely.
Speaker 3 With reasoning, it's using that process to generate a plan of action to create an output. And doing that can sometimes give more accurate answers.
Speaker 3
But every time you use reasoning, it uses more compute. So you're kind of there.
It's really annoying.
Speaker 3 It's complex, because large language models are kind of crazy. Like, if they weren't sold in this way, they'd be kind of impressive.
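The "comparing and picking the most likely" process Ed describes can be sketched as a toy next-token distribution. The candidate tokens and scores below are invented; a real model derives them from billions of learned weights:

```python
import math

# Toy illustration of probabilistic generation: a model scores candidate
# next tokens, then samples from the resulting probability distribution.
def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

candidates = ["dog", "cat", "table"]
scores = [2.0, 1.5, 0.1]  # hypothetical scores for the next token
probs = softmax(scores)
for tok, p in zip(candidates, probs):
    print(f"{tok}: {p:.2f}")
# Each generation samples anew from a distribution like this, which is
# why the same prompt can produce different, and sometimes wrong, outputs.
```

Nothing here is a fact lookup; the model only ever produces a ranking of likelihoods, which is the root of the hallucination problem discussed above.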
Speaker 2 But because of what they're claiming they can do, they're not. Well, I mean, that is sort of the rub with them. They're incredibly impressive for pretty much none of the things they're advertised for. But for a lot of non-consumer uses, last time you were on, I talked about the example of people using them to develop antibiotics that overcome antibiotic-resistant bacteria and things like that.
Speaker 2 They're, of course, marketed as like, oh,
Speaker 2 you can make your own sitcom where, like, Andrew Schulz is part of the civil rights movement. And they're terrible at that.
Speaker 2 But like all the things that like none of us would use them for, it is like kind of exciting.
Speaker 3 Well, the thing is as well, it's like, was the antibiotic example a transformer-based model? What they've done with large language models is they conflated them with all machine learning.
Speaker 3 So all of the useful shit in AI, they're like, oh, that's all of the AI we're talking about. But the thing that they're building the data centers for is things like ChatGPT, or worse.
Speaker 1 Well, I guess, like, the problem for a company like OpenAI, based on what you're telling me here, is that usually, as technology advances, or at least I was led to believe, the better it gets at doing something, the cheaper it will be for it to perform that function.
Speaker 1 And it seems like OpenAI has a problem now where, to get these large language models, or to get the product that they're selling, to do what it's promised, or even to just continue working, it's costing them exponentially more and more money to keep it going. Which seems like kind of a problem for a firm that's supposed to be making money.
Speaker 3 So that's the thing. It's exactly that. Usually when tech scales, and the reason tech has had the valuations it's had is that as it scales, it gets cheaper. Cloud computing, yes, it's expensive for someone like Facebook, a very big application like that, it will be expensive to run. But as it scales, the value kind of expands while the costs stay manageable.
Speaker 3 With this, the larger the companies get and the more that people do with them, the more expensive they get.
Speaker 3 Indeed, I've never seen a company in history where your power users are literally costing you more than your average users, and they're not making you much more either.
Speaker 3 There's a company called Augment Code, an AI-powered coding company, where they had a 250-buck-a-month customer who was costing them $15,000 a month.
Speaker 3
And that's because you can't actually manage these costs. It's fucking insane.
It's everywhere. There is this ranking,
Speaker 3 I'll tell you that much. So Anthropic, they have this thing called Claude Code, an AI coding thing, and there is a thing called Viberank,
Speaker 3 where it's just people who have found ways to spend more, because you can measure your costs even though you pay like 200 bucks a month. There's someone spending 50 grand a month in costs. They're just like, I should do it. And they don't have a way of stopping it. They have no way of stopping it, because if they could, they would. But someone like OpenAI also has the problem with the GPUs, because, and I forget his first name, something Rubenstein, like employee number four at NVIDIA, made this point:
Speaker 3 while these new GPUs are getting more powerful and they're getting more efficient, that doesn't mean they're drawing less power.
Speaker 3 So they might be able to do more, more efficiently, but you're not actually saving any money with Blackwell. And The Information reported that those GB200 racks,
Speaker 3 the ones that OpenAI has put 50,000 of per building in Abilene, Texas, paid for by Larry Ellison and Oracle, those things have a negative 100% gross profit margin.
Speaker 1 They don't teach you that shit in business school.
Speaker 1 100%.
Speaker 3 That's the good shit. So everyone's losing money.
Speaker 2 So, like,
Speaker 2 what is like
Speaker 2 from the industry itself,
Speaker 2 from Altman and everyone else, what is like the conventional wisdom on this?
Speaker 2 Is it like some vague thing where like eventually we'll
Speaker 2 reach some point where it's so good that these processes are almost like automatic and
Speaker 2 inference costs will like will just collapse or what? Is there any do they address it at all?
Speaker 3 So they're doing a deal with Broadcom,
Speaker 3 a chipmaker who doesn't have the best rep if you look back, but nevertheless, they're building inference chips with them.
Speaker 3
And they were at one point hinting that those would make things cheaper, but they've stopped saying that. And The Information even reported that those will have modest gains.
And that's the good shit.
Speaker 3
So they made these chips with Broadcom that they're also going to build 10 gigawatts of data centers for. Not really sure how that's going to happen.
But there really isn't an answer here.
Speaker 3 Because the media has not really asked OpenAI these questions up front, no one has really managed to get an answer. They don't have one.
Speaker 3 They are trying to, they're playing salesman, they're playing the hits. They're just like, we're going to build bigger, get money so big, and then we'll do ads. The thing is with ads, and this is everyone's answer here, OpenAI will just do ads. They've got 800 million weekly active users, even though that number is slightly questionable. They'll do ads. The problem is, no one has been able to do ads with LLMs.
Speaker 3
Nobody. Perplexity had ads in 2024.
They made $20,000 and their ad chief left earlier this year. $20,000 in 2024 on ads.
They're an AI search engine, and they couldn't do it.
Speaker 2 $20,000.
Speaker 1 $20,000.
Speaker 2 Like
Speaker 2 one ad read on Come Town.
Speaker 1 Yeah.
Speaker 3 I'd love to hear a Come Town read of a Perplexity ad.
Speaker 3
I'll be thinking about that one all day. But that's the thing.
And people are like, oh, they brought in Fidji Simo, who's the former CEO of Instacart and former head of the Facebook app.
Speaker 3
They brought her in as the CEO of applications. She's obviously, by the way, the fall girl.
They're obviously going to pin OpenAI's failings on her and claim she fucked up. It's Altman.
Speaker 3 But they think, oh, we'll do ads. The thing is, with an LLM, with ads, the whole reason you buy ads is you can do accurate placement and
Speaker 3
accurate attribution. So you can say, I know where this will go or where it won't go.
And I know that I'll be able to track what it does. How the fuck do you do that with large language models?
Speaker 3
Nobody's been able to crack it. Nobody, not Perplexity.
Perplexity's ad chief I mentioned left, I think, in August.
Speaker 3 It's like, if an AI-powered search engine can't do ads, how is OpenAI going to do it? And people say, well, OpenAI has the smartest people in the world.
Speaker 3 Well, you know what? When they die, I'll say what Carlo Ban said in Doom. If they're so smart, why are they so dead?
Speaker 1 Because they've got all these smart fucking people.
Speaker 3 And it's like they haven't been able to work this shit out. The answer is, I don't think anyone has a plan.
Speaker 3 I think everyone wants to look at this as they look at the wider world and say, there's a grand strategy, there's a big conspiracy, they're going to do this and this. No, they're not.
Speaker 3
Government contracts, they haven't got shit. $200 million with the Department of Defense that everybody got.
Oh, the government can just feed them money. This fucking company, they're saying
Speaker 3 they've signed a contract with Oracle to spend $300 billion in five years.
Speaker 3
They don't have the money. They won't have the money.
There's no, I think Microsoft's operating expenses are like over $200 billion a year.
Speaker 3
Maybe I'm fudging that, but nevertheless, Microsoft's very profitable. There's no company like OpenAI because no company has been pumped up this big.
It's the world's largest failson.
Speaker 2
All that, like, this is the greatest collection of smart people in one company ever. It reminds me of Long-Term Capital Management,
the, uh, one of the first hedge funds to need a bailout.
Speaker 2 Oh, but, um,
Speaker 2 I, so I, that is, that
Speaker 2 is another thing I wanted to get into. Um, you've talked extensively about how weird some of OpenAI's deals are.
Speaker 2 Obviously, there's this Microsoft thing where it's kind of a black box figuring out, you know, what
Speaker 2 revenue they're taking out of which. A lot of it is sort of Byzantine and money being moved around.
Speaker 1 Um,
Speaker 2 the
Speaker 1 uh AMD deal, where
Speaker 2 correct me if I'm wrong, but
Speaker 2 it gives them an option to buy
Speaker 2 an uncertain amount of shares
Speaker 2 for a penny each, like millions of shares at a time,
Speaker 2 per some efficiency marker.
Speaker 3
It's such a weird deal. So all three, they have three big deals like that.
They've got 10 billion dollars, sorry, 10 gigawatts of data centers they have to build for NVIDIA for $100 billion.
Speaker 3 And just to be clear, this has been misreported everywhere. OpenAI has not been given $100 billion by NVIDIA, nor have they been given anything yet.
Speaker 3 They might get $10 billion soon, but every gigawatt they build, they get more. The AMD deal is for like six something gigawatts, and it's
Speaker 3
every successive gigawatt, and they're meant to build the first one by next year. It takes two and a half years and like $40, $50 billion per gigawatt.
So not sure how they're going to do that.
Speaker 3 But it's based on how many gigawatts they build, and also AMD's share price. Lisa Su actually got a pretty good deal on this. She managed to get the stock bump without really risking it. It's still going to suck when OpenAI pops its clogs, but nevertheless,
Speaker 3 she like that one is really multifaceted like you have to
Speaker 3 Like they have to successfully do the gigawatts and then the share price must increase by a certain amount in a certain time period. Only then can OpenAI buy part of the tranches.
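The two-condition tranche structure being described can be sketched like this, where a warrant tranche only vests if both a buildout milestone and a share-price hurdle are met. The milestone and hurdle numbers below are purely illustrative, not the actual deal terms:

```python
# Hypothetical sketch of the AMD-style warrant tranches described above:
# a tranche vests only when BOTH conditions are satisfied.
# All numbers are illustrative placeholders, not the real deal terms.
def tranche_vests(gigawatts_built: float, share_price: float,
                  gw_milestone: float, price_hurdle: float) -> bool:
    """A tranche vests only if the buildout milestone is hit AND
    the share price clears its hurdle."""
    return gigawatts_built >= gw_milestone and share_price >= price_hurdle

# Milestone met but share price short of the hurdle: no vesting.
print(tranche_vests(1.0, 250.0, gw_milestone=1.0, price_hurdle=300.0))
# Both conditions met: the tranche vests.
print(tranche_vests(1.0, 350.0, gw_milestone=1.0, price_hurdle=300.0))
```

The point of the structure, as described, is that AMD gets the stock bump up front while OpenAI can only exercise the penny warrants if the buildout and the share price both deliver.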
Speaker 3 it's a very weird deal the broadcom one is just i think successive gigawatts but that one's really funny the reason i want to bring this up is sachinadella revealed because open ai has to share all their ip with microsoft apparently microsoft has all the details of their chips from broadcom so open ai put all this money in with broadcom to build these custom chips microsoft has it now just fuck it the business geniuses like it's just it's fuck we're on fuck we're action uh and Ed, like, you mentioned these, like, a, sort of a deal with NVIDIA for opening it to generating 10 tip with like 10 gigawatts of electricity.
Speaker 1 Like, and, like, in terms of like these data centers, could you just give, give me like an idea about how much 10 gigawatts of electricity is?
Speaker 3 Well, first of all, the concept of a gigawatt data center is also very new. Also, there is no such thing as a gigawatt data center.
Speaker 3 It's usually buildings that connect together through high-speed networking with a company called Mellanox that was acquired by Nvidia in 2019.
Speaker 3 But nevertheless, a gigawatt of data center, I don't have the numbers for comparisons to cities, but the one I can think of off the top of my head is, I think New York's combined power from the plant in Queens is like 1-point-something gigawatts.
Speaker 3 So, like, two-thirds of New York's power
Speaker 3 is like 1.2 gigawatts. Someone's going to flay me in the comments for this one, but it's something like that.
So it's
Speaker 1 It's considerably more electricity than the largest city in the world uses. Let's put it that way.
Speaker 3
Exactly, with millions of people. But to give you some scale of this, so there's an important term, IT loads.
No IT loads refused. Just going to say it.
Speaker 3 So IT load is what they're talking about when they say a gigawatt data center. So that means that they need like 1.3 to 1.4 gigawatts of power.
Speaker 3 So when he says 10 gigawatts of IT load, they need 14 or 15 gigawatts of actual power. So this is really funny.
Speaker 3
Out in Abilene, Texas, building 1.2 gigawatts of data center capacity, they have 200 megawatts of power. They're never going to turn that fucking thing on.
They don't have enough.
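The IT-load arithmetic Ed is describing can be sketched like this. The 1.4x overhead factor (cooling, power conversion, roughly what the industry calls PUE) is his ballpark, not a measured figure:

```python
# Sketch of the IT-load arithmetic described above: "a gigawatt data center"
# means a gigawatt of IT load, so actual grid power needed is higher.
# The 1.4x overhead factor is the ballpark given in the conversation.
def grid_power_needed(it_load_gw: float, overhead: float = 1.4) -> float:
    """Grid power required to serve a given IT load, in gigawatts."""
    return it_load_gw * overhead

# "10 gigawatts of data centers" in IT-load terms:
print(grid_power_needed(10))   # about 14 GW of actual power

# Abilene: 1.2 GW of planned capacity vs 200 MW (0.2 GW) of available power
needed = grid_power_needed(1.2)
available_fraction = 0.2 / needed
print(needed)                  # about 1.68 GW needed
print(available_fraction)      # only ~12% of that is currently available
```

On these assumptions, 10 GW of IT load needs roughly 14 GW from the grid, in the range of the "14 or 15 gigawatts" cited above, and the Abilene site has barely a tenth of the power its planned capacity would require.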
Speaker 3 But the idea of 10 gigawatts of data centers is just insane. Just
Speaker 3
no one's... I don't think anyone's actually successfully done it.
Speaker 1 Well, because
Speaker 1 these companies are not generating the electricity to do this. No, they simply need that much electricity.
And I know like a lot has been talked about how
Speaker 1 like the water use of these data centers may be overstated. I mean, as far as I'm concerned, even one ounce of fresh water going towards any of this shit is a calamitous waste of resources.
Speaker 1 But like in terms of like, like
Speaker 1 Like, are they going to be paying the government, or like the state of Texas, for the excess stress they're going to be putting on the power grid?
Speaker 1 Or rather, are these states going to be paying them for the privilege of using, you know, a New York City-sized amount of electricity to generate images of Donkey Kong with tits, or me as an anime character?
Speaker 3
Yes. So they're not paying, you don't need to worry about that.
All these companies claim to have some sort of, oh, we're going to bring jobs. These data centers don't create a ton of jobs.
It's all.
Speaker 1 Well, I mean, like, no one works in them. It's just like a warehouse with computers in it.
Speaker 3 But even then, when it comes to the construction jobs, it's usually people flown in. It really is like the West pillorying the South in many cases.
Speaker 3 Like in Texas, for abilite for the stargate abiline they are bringing in these massive horrifying old gas turbine engines it's a bunch of specialized talent there's like i did a whole dig into all of the construction firms there's so many of them it's very clearly by the way just construction firms wheeling in and being like yeah i need 100 bazillion
Speaker 3 mate it's like the classic builder thing and be like oh yeah it's going to take another six months mate so i didn't see that and they all require specialized cooling but basically in the you're really going to want the double platinum service oh it really is
Speaker 3 so much shit like that oh the cooling you're going to need way more cooling than that mate no it's with with the local governments they love like abilene especially it's just like oh we've given you every tax abatement we can find but what they're finding is
Speaker 3 crony capitalism really really good at free money really good at land you can't beat physics and you can't rush power you can't just be like we've built a power station and now we've connected the power to the power thingy and now power's happening.
Speaker 3
You have to do actual land testing. There is a limit.
Sorry, a limit. There's a, what's it called?
Speaker 3 A lack of the electrical-grade steel you need. There's a lack of transformers, a massive drought of the talent to build these things.
Speaker 3
So even if these fucknuts had the ability to build them, the people to build them don't exist. And if you rush power, you die.
It's not even the people you're killing with the gas turbines.
Speaker 3 You kill the people building it. So, putting all of that aside, even if they fix this, the amount of money and time they need to have this stuff built next year, middle of next year, pretty much.
Speaker 3 They're not gonna. I won't say they don't get any of this built, but they're not getting the majority of it built by the end of next year.
Speaker 2
Just a small correction on data center jobs. Um, there is actually a study that came out.
Um,
Speaker 2 it turns out that lot lizards in data center parking lots make 17% more than regular lot lizards.
Speaker 1 But Ed, when you say that like they have to have this stuff built by next year, like in what sense to meet their projections, to be profitable, like to do the thing that they're promising to do?
Speaker 1 Like, why do they need all this electricity?
Speaker 3 So I'm remiss here. I should have defined who they were.
Speaker 3 So OpenAI needs this built because, even though OpenAI burns billions, they are running up against capacity issues, because like any cloud service, I should say, they run into peaks and troughs.
Speaker 3 There are times when they release new product, they get a bunch of new attention like Sora.
Speaker 3 They need the capacity and they keep running up against the limits of it because their shit is so compute intensive. Oracle needs this shit built.
Speaker 3 Because middle of 2026 is the beginning of Oracle's fiscal year 2027, which is when the fucknuts OpenAI need to start paying them. Oracle has mortgaged their future on this.
Speaker 3 The private equity firms that are building these data centers also need the things to be built so that they can get paid so that they can start paying back the debt.
Speaker 3 So everyone here, I mean, there's not going to be a bailout or anything, but all of these companies... CoreWeave right now, this big AI data center company, they're not even a data center company.
Speaker 3
They're a data center leasing company that rents compute to people, where all of their customers are either NVIDIA or OpenAI.
I can get into more detail, but they in their last
Speaker 1 I was going to say, like, given what you're saying, how should we view, I saw like a couple weeks ago, Sam Altman made a public statement about being too big to fail, sort of like winking at the idea that perhaps OpenAI is like, well, I would never ask for a government bailout, but sometimes when you get so big, the government is the only one who can make that...
Speaker 1 I mean,
Speaker 1 you know, the comments I'm referring to. Like, how should we view those comments?
Speaker 3 So the way to look at OpenAI is they are not too big to fail. They are too small to pull apart into enough pieces for enough people to eat.
Speaker 3 OpenAI has promised people $1.4 trillion worth of compute deals. With CoreWeave, with Amazon, they just signed a $38 billion deal.
Speaker 3
Microsoft, $250 billion of upcoming Azure spend, $300 billion with Oracle, all of this stuff. Too big to fail in this case would be that the government just pays OpenAI's bills.
And Trump doesn't...
Speaker 3 Clammy Sammy, he doesn't, like, Trump's not going to like Altman.
Speaker 1 Trump's going to fucking bail him out?
Speaker 3 That's the thing, Clammy, he don't like him.
Speaker 1 But he's not going to bail him out. But
Speaker 3 it's something where Altman would love a bailout. But he even said that what they meant was they wanted the government to back loans for data centers.
Speaker 3
And then they had sent a letter to, he's a fucking liar. He was like, we didn't ask for anything like that.
They sent a letter to the government asking for the CHIPS Act to cover data centers.
Speaker 3
Putting all that aside. OpenAI can't be bailed out.
It would be like bailing out Enron.
Speaker 3 It's something where OpenAI's failure would be a symbolic hit. It would break the myth of the AI trade, but OpenAI as an economic entity is not actually that huge.
Speaker 3 They've got like
Speaker 3 $4.3 billion of revenue through the end of Q3.
Speaker 3 I mean, they're spending...
Speaker 3 They spent $12.4 billion
Speaker 3 on inference since the beginning of 2024. Like, it's a lot of money for Microsoft, but their death doesn't really fuck the economy up other than the symbol.
Speaker 3 But if the government backstops OpenAI, that doesn't fix AI at all. The problem is AI has been sold as this magical tool without ever having the proof of revenue.
Speaker 3 The only companies really making money on this are NVIDIA and construction firms, really.
Speaker 2 Yeah, I mean, it's impossible to really get into anyone's head, especially in this case. But
Speaker 2 I
Speaker 2 am curious about like, you know,
Speaker 2 what are these companies getting out of these deals specifically? With Microsoft,
Speaker 2 just from the outside, it does sort of look like, I don't know, another nation state-sized company moving money around in a way that's perhaps tax advantageous or otherwise makes it look like they're getting something out of this that they're not.
Speaker 1 But from
Speaker 2 the more like policy end of things, it seems to just be wishful thinking. This idea that like, okay,
Speaker 2 in 30 years, we'll build more nuclear power plants than in the previous 80 years combined. And then finally,
Speaker 2 after trillions of dollars spent, it will
Speaker 2 have like a permanent 4% uptick in productivity. And that will be like the great economic engine of this 40-year period.
Speaker 3 It's a myth.
Speaker 3 It's just everyone thought this would turn into something because we had this scaling laws paper from 2020, and the jump between GPT-3.5 and GPT-4 felt so big that people thought, oh, this is going to keep jumping.
Speaker 3 But with Microsoft, by the way, they have a very obvious thing. They get to feed revenue to themselves, boost growth, and they own all of OpenAI's IP and research.
Speaker 3
When OpenAI dies, Microsoft just goes, now we have all of this. This is ours.
Because they do. Microsoft makes out of this fine.
Speaker 1 Oracle?
Speaker 3 Oracle is insane.
Speaker 3 Oracle has taken on like $56 billion of debt to build data centers for OpenAI, a company that has never had that much money, even if you, I guess, if you combined all their funding and revenue.
Speaker 3
But nevertheless, OpenAI can't afford to pay Oracle for the data centers. Oracle doesn't have the data centers.
So
Speaker 3 Oracle has mortgaged its future on GPUs because of OpenAI, and they claim they have other customers, but nowhere near close,
Speaker 3 in a way that I think threatens the material health of that company. I don't think they'll die, but let me tell you this.
Speaker 3 The former CEO, Safra Catz, retired as CEO of Oracle a few weeks after signing that $300 billion deal with OpenAI. Do you think that she did that because she thought it would go well?
Speaker 3 Because I'm kind of like, you're like, oh, it's the beginning of this new era and I'm out.
Speaker 1 This is the most important thing.
Speaker 3 But it really is just, it is wishful thinking. It is, everyone thought this would be the new economic growth engine.
Speaker 3 And because everything in tech is run by management consultants, all they care is the rot economy. It's all growth at all costs.
Speaker 3 It's number-go-up. So much big, so much money, so much number, number so big, without looking.
Speaker 3 And you'll notice, other than Microsoft, who has only talked about their AI revenue twice and stopped in January, nobody in these public, in the hyperscalers, in the Mag7, other than NVIDIA, has talked about AI revenue.
Speaker 3 Nobody, not one of them. So it's not like when the bubble pops that they can point at this substantive business, that they can say AI is this much money.
Speaker 3
They can't do that because it kind of, it's small. Amazon spent $116 billion in capital expenditures this year, and they haven't talked about their AI revenue.
It's astonishing.
Speaker 1 Well, I think that gets me like sort of a broader question.
Speaker 1 I know I'll probably be misstating this, but I just saw one of these factoids the other day that seemed to imply that 90% of the growth in the current U.S.
Speaker 1 economy is from investment in AI, which is a staggering number if it's true. And my interaction with this is basically that I don't really use ChatGPT for anything.
Speaker 1 I don't really have any interaction with AI.
Speaker 1 But when I watch TV, slowly but surely over the last year, year and a half, every commercial I've seen on TV for whatever product they're selling is saying it's now enhanced by AI-provided insights.
Speaker 1 And these are usually entreating you to gamble when you're watching a sports game. So like, I'm thinking like, wow, this is a great model for the future of our economy.
Speaker 1 So, I guess the broad question is: is AI a bubble? Or rather, what is the argument that AI isn't a bubble? Like, that to me is more interesting.
Speaker 3 So, here's the fun thing. That economic growth that you're talking about, and that's true,
Speaker 3 it made, I think for the first half of this year, AI data center development specifically, just building data centers
Speaker 3
had more economic impact than all consumer spending combined, which is not good. Now, this is not AI revenue.
This is not people paying for AI services.
Speaker 3 This is nothing to do, in fact, with AI, as many of those data centers haven't even fucking started building yet.
Speaker 3 This is just building data centers to put GPUs in and buying GPUs from one vendor, NVIDIA.
Speaker 3 This is literally just building things.
Speaker 3 I think it may be one of the largest construction eras ever. It's equivalent to the post-1996 Telecommunications Act free-for-all, when everyone was building fiber. Except the difference is that this is not useful like fiber. These GPUs are specialized for a bunch of parallel processing, which is not useful in general-purpose computing. So it's the equivalent of if, when they built the Hoover Dam during the New Deal, the Hoover Dam didn't generate electricity, it just used it. Yes, actually, not dissimilar.
Speaker 3 It's one of the most bonkers things ever. And the reason everyone buys from NVIDIA is they have this programming language libraries, whatever you call it, called CUDA.
Speaker 3 So you do C-U-D-A, which is far and away the only game in town for this thing. You can do inference and other things, but putting that aside, NVIDIA is basically the only player in this market.
Speaker 3 But what CUDA can do is good for 3D modeling,
Speaker 3 AI, crypto mining to an extent, and some scientific research.
Speaker 3 But otherwise, these data centers, when this all pops, all it's going to do is create a very cheap market for an asset that isn't really useful for much. You can't use them for gaming.
Speaker 3 I don't even think they ever have a... I was about to say VGA out. That'll date my ass. A DisplayPort out, I guess you'd say.
But it's, yeah, these things aren't useful for other things.
Speaker 3 So you're going to have these massive half-built data centers full of things that you can't turn on because the power isn't there.
Speaker 3 And when people try and sell them, it's going to create this thing called, I believe, this is my theory, an impairment event because the value of a GPU will go down and everyone's bought hundreds of billions of these fucking things.
Speaker 3 So they're going to start, all of these public companies are going to have to say, yeah, all those things we bought are worthless. We have to take that off our net income.
Speaker 3
It's going to suck so bad for them. I can't... like, they deserve to suffer for this.
It's such a waste of time and money. It's been obtrusive, to your point, Will.
It's on every commercial.
Speaker 3 It's ruining every app. It's sickening.
Speaker 2 You can't even use Chrome without them opening a new tab for you that says Ask Gemini, which I think, out of all of the consumer-ready AI things, might be the worst one.
Speaker 3 Yeah, when you're using Google Docs and it's like, do you need help writing?
Speaker 1 No.
Speaker 1 No.
Speaker 3 That is by far the shittiest one. I mean, it may just be bitterness because it's forced on you more than all the other ones, but I don't think I've ever seen it get a single thing I've asked right. I'm literally looking at a spreadsheet, because I put the numbers we were going to talk about today up in a spreadsheet, and my cursor is just sitting on a blank thing, and it just says "Visualize this by this" and it has a little Gemini icon. I can't just leave a cursor in a spot without Google attempting to burn compute.
Speaker 3 They're like, nah, we must use my TPUs.
Speaker 3 We must spend. Our OPEX must increase.
Speaker 2 So, so, um,
Speaker 2 there's been a lot of talk about like Altman's $7 trillion figure.
Speaker 2 To the best of your ability: say someone comes along and says, hey, I actually saved all my Beanie Babies, right? I just sold them. Here's $7 trillion.
Speaker 2 What do they accomplish with $7 trillion? In their best of all worlds, with $7 trillion, and Greg Abbott, you know, builds 50 nuclear power plants in Texas.
Speaker 3
Well, I mean, the $7 trillion number, I think, was from last year. And he just made that number up.
That was a number that he came up with and then had to walk back.
Speaker 3 But what he was talking about was building a bunch of data centers and building specialized compute. Sorry, specialized GPUs.
Speaker 3 So, like he's doing with Broadcom. The actual number, I think, because he wants to build 250 gigawatts of IT load, which I think would be
Speaker 3 10 or something trillion dollars' worth of data centers and power. It's so fucking stupid. He wants to, he claims, build a bunch of data centers with specialized compute chips in them
Speaker 3
and then have the power for them. And then you may be wondering what the plan is after that.
And I don't believe he has one.
Speaker 3 I don't believe that more compute, I don't think he has a thing. He's saying, Oh, we've got enough compute now, we can do this.
Speaker 1 Yeah, this is what perplexes me: all of this is to do what, exactly? To produce what value? I mean, is it what we're seeing already?
Speaker 2 That's kind of the most interesting thing about it to me. It's obviously very different in scope and even
Speaker 2 what kind of thing it is compared to like the crypto shit of the previous few years. But they both have the same central promise, which is, okay,
Speaker 2 I know there have been some false starts, but this is going to be, this is going to be like our generation's answer to the post-war prosperity.
Speaker 2
This is going to be the driving economic engine of the next 40 years. This will be the reason that everyone under the age of 30 will be able to buy a house.
This will create the new middle class. And
Speaker 2 at a certain point, it just is cargo cult shit.
Speaker 3
And you know what? That's exactly it. That is, you've nailed it with the cargo cult thing.
What it is, is everybody is repeating the cycles of what has been done before.
Speaker 3 You said it earlier, Will, where it's like,
Speaker 3 what has worked before is: shove a bunch of money in the thing, hire as many people as you can, put as much money into it, and then a business model comes out.
Speaker 3 It's worked, but also the business model was obvious from the beginning, even with Uber, which is, and they've burned, I think Uber's total burn was like $32 billion.
Speaker 3
So people will say, well, Uber burned a lot of money. Uber burned $32 billion, and they're kind of profitable now.
Amazon Web Services cost $68 billion in today's money over 10 years.
Speaker 3
So nothing close. But nevertheless, there was always a business model.
Amazon Web Services was cash flow positive in three years as well.
Speaker 3
But what they're doing is they've run out all of the technological people now. Sam Altman, not technological.
Mark Zuckerberg hasn't written a line of code since 2006.
Speaker 3
Satchy Nadella, MBA, Tim Cook, MBA, Andy Jassy, MBA, replaced the Amazon Web Services with another MBA. It's all management consultants.
So all they can do is copy.
Speaker 3
So they've said, what do we do before? Money everywhere. What do we do with the press? We spread a mantra that this will just turn into something.
When they ask, how will it do that?
Speaker 3 Say, well, it worked before.
Speaker 3 And so everyone's just doing the thing that they thought, the cargo cult shit, where they say, oh, we'll just repeat the cycle, but we'll do it bigger this time. Never before has an era of tech, without exception, had this much money, this much attention, and this much, honestly, government help,
Speaker 3 and they're doing it without a real plan. People annoyingly quote Altman in 2019 saying, oh yeah, we'll ask the AI how it will get profitable. But truthfully, I don't think they have a plan. I was told last year by many people, well, they'll just do custom silicon and that will make it profitable.
Speaker 3
We have the custom silicon now. We have it.
Why aren't they?
Speaker 1 Wait, hold on. What is custom silicon?
Speaker 3 So that just means custom GPUs or
Speaker 3
custom chips. Sorry, I should have said that.
Like, so just they, the argument was they would make customized inference chips and that would make it so much cheaper and the cost would come down.
Speaker 3
We have Cerebrus now. We have Grok, G-R-O-Q, very different.
By the way, if you want to really want to feel bad about yourself, go on R slash Grok.
Speaker 3
Just the worst people in the world complaining about generating porno with Elon Musk's LLM. Anyway, terrible.
But yeah,
Speaker 3
it's just myth on myth. It's we're going to copy what worked before.
We're going to hire the smartest people and then the company will get big.
Speaker 3 And when that didn't work, they spent more money and they hired even more people. And when that didn't work, they just repeated it at a bigger scale.
Speaker 3 Except now... because, also, a lot of this comes down to how the media and analysts treat these companies.
Speaker 3 If analysts and the media had been more aggressive and said, hey, why are none of you fucking people talking about the revenues? Then these companies might have had to face the music earlier.
Speaker 3 But because everyone was scared and everyone assumed that everything would work the same way it always has, we're left in this situation.
Speaker 3 And it is going to be, I don't know if it will be apocalyptic like the GFC, I really don't. But I think that this leads to a prolonged period of depression in tech.
Speaker 3 And I think it permanently changes the valuation of public tech stocks because the only reason they're doing this is they have no other ideas, no other growth ideas.
Speaker 3 And when they're out, everyone's going to be quite mad at them.
Speaker 2 I've never seen anything like this. Because yeah, you brought up the example of
Speaker 2 all these other businesses or sectors of existing businesses that were like pretty expensive. And at the time, people did make fun of that thing with Uber, where
Speaker 2 the basic logic was, okay, we're going to lose X billion dollars a year until we achieve this much market share, at which point we will become a profitable company.
Speaker 2 But in even the most ridiculous of those things, there was a product. There was a conceivable end goal, no matter how ridiculous it may have seemed.
Speaker 2 With this,
Speaker 2 there is no concrete explanation of you know, how you create sustainable productivity, how you avoid just fucking decimating the job market while also the the core technology in question
Speaker 2 is like
Speaker 2 synthesizing child pornography and making everything worse. I've never seen... like, there have been bubbles before, and there have been, you know, pie-in-the-sky predictions, and there have been instances like this where
Speaker 2 it's uh
Speaker 2 it is like a cohort of people under 40 going, okay,
Speaker 2 this will be the generation defining economic innovation, but never one where it's this vague while also
Speaker 2 just
Speaker 2 having so many horrible effects on the world already.
Speaker 3 And also, the scale of the burn is just different.
Speaker 3 Like, just the $32 billion for Uber, I think that was since 2009 at best. OpenAI,
Speaker 3 I think like by the end of next year, if they survive that long, will have burned over $32 billion.
Speaker 3
Here's the thing. OpenAI and Anthropic are basically rich kids living off either Amazon and Google or Microsoft's money.
OpenAI didn't pay for any of their infrastructure.
Speaker 3
Their infrastructure cost at least $100 billion. Same with Anthropic.
Project Rainier out in Indiana, I think, that's a $30, $40, $50 billion data center.
Speaker 3 The cost to just spin these fucking things up. But even with Uber, horrible company, evil company, but they had a business model and they burned all that money because they spent a shit ton on
Speaker 3
marketing and R&D on their failed autonomous cars. There are actual things you can point at, and there was an actual business model.
There's never been one here.
Speaker 3 And in fact, the subscription-based, and this is, I promise this is a simple point.
Speaker 3 Paying 20 bucks a month for an LLM service as a business is insane because you can't guarantee that that person won't spend more than $20 of compute.
Speaker 3 Because large language models are hard to quantify the costs of for both sides.
Speaker 1 Just
Speaker 1 could you just clarify something? You say that like OpenAI, they're rich kids living off of their parents, Microsoft and NVIDIA, meaning that Microsoft and NVIDIA built their infrastructure for them.
Speaker 1 Like what do you mean by that?
Speaker 3 Well, not NVIDIA. So Microsoft built all of the data centers that got OpenAI started.
Speaker 1 Okay.
Speaker 3 Their entire...
Speaker 3
I mean, Microsoft paid CoreWeave for OpenAI. Microsoft's own infrastructure.
Tons of it is OpenAI. They have an entire data center.
Speaker 1 And that's when they have this revenue sharing deal with them where they get
Speaker 3 with Anthropic. They have the same thing with Google and they have the same thing with Amazon.
Speaker 3 I actually had a story that came out a few weeks ago where Anthropic paid Amazon, I think through the end of Q3, 2025, $2.66 billion just on Amazon Web Services.
Speaker 3
They didn't have to pay for their data centers. Their data centers were built for them.
Now OpenAI kind of has to pay for them, but I'm not really sure how that's going to work out.
Speaker 3
But even then, Oracle is paying to build all of the data centers for their compute deal. So it's not like these companies have had to build anything.
They're just rich kids.
Speaker 3 They are literally just living off their parents' money until
Speaker 3 they die.
Speaker 3 There is no plan. There's no plan.
Speaker 1 Well, you're talking about like living off your parents' money. Okay, so like if I'm floating my failson kid and they keep failing, failing, you know, as a parent, you want to keep supporting them.
Speaker 1 You want to keep sort of putting the best spin on their, you know, lack of success or whatever. And you have a quote in one of your recent pieces.
Speaker 1 And I hope we give you the context here, but you're quoting like an investor who says of Altman and like the sort of continued exaggerated claims that they keep making.
Speaker 1 The quote here is, some researchers are trying to take advantage of high investor interest in AI.
Speaker 1 They have told some investors that growing concerns regarding the costs and benefits of AI have prompted them to raise a lot of money now rather than wait and risk a shift in the capital markets, according to people who have talked to them.
Speaker 1 Could you explain that quote a little bit further?
Speaker 3 That one is awesome. So
Speaker 3 that is saying that startups are raising money now,
Speaker 3 knowing that there are questions around their ability to do business in a profitable manner.
Speaker 1 Right.
Speaker 3 Yeah. And they're raising the money now before investors.
Speaker 3 Before investors go, wait, do you have no plan
Speaker 3 to make more money than you spend?
Speaker 3 What I love is that's, I think that's a quote from an investor who did not realize what they were saying.
Speaker 3 It was just like, yeah, they're raising money now before we work out that they don't have a business model, which is why we funded them.
Speaker 3 And this is the thing. I think the greatest innovation of the large language model era is we've realized how many fuckwits there are.
Speaker 3 It's like, anyone who is just insanely impressed by ChatGPT, you're like, oh, oh, okay.
Speaker 1 Yeah.
Speaker 3 You learned quantum physics on ChatGPT. Sure, mate.
Speaker 1 Yeah. Okay.
Speaker 3 Oh, you gave Cursor $2 billion.
Speaker 3
Yeah, you're real smart. It's, you see, every CEO is like, AI will do this.
And then they go on to describe something that large language models can't do.
Speaker 1 You're like, fuck weird. Well,
Speaker 1 the impression I get from this is like, you said that like all these guys are MBAs. They're management consultants.
Speaker 1 And so that's why I like, back to my point about every ad on TV having some AI hook, no matter how asinine the product is or how little the product needs it.
Speaker 1 It's like, I just think these guys get in a room and they just hear the word AI and like, nobody really knows what it means, but they just think money.
Speaker 1
And they're like, oh, well, we got to have AI now. Everyone else has AI.
We got to have it. We got to have it.
But like, what does it do? How is it going to make us money?
Speaker 1 Does this product even need an AI component? Doesn't really matter. They just don't want, they just don't want to be like,
Speaker 3 oh,
Speaker 1 we're the only gambling and porn website that doesn't have an AI component.
Speaker 1 So I don't want to get left behind.
Speaker 3 Yeah. It's also just how many companies have just contempt for their users.
Speaker 1 Yeah.
Speaker 3 Which is most of them. How many of them are just like, you fucking pigs, eat this. You like this? You hate it? Well, you're gonna have more. Pay me more.
Speaker 1 Now, to go over what Felix was saying earlier about how this is sort of unprecedented, and just the weird surreality of it: I think we've talked about this before with you, other times you've been on the show. But unlike other big shifts in tech, or some new tech product, be it, I don't know, the iPhone or high-speed internet, widespread broadband, people had an affection for these products.
Speaker 1 There was a use for them.
Speaker 1 I mean, like, you see the AI, like the people who have bought into it. You see the people arguing with Grok.
Speaker 1 You see the people who keep pitching this idea like it's going to create some sort of almost religious revelation, like a singularity, and that
Speaker 1 all accounts will
Speaker 1 be balanced. But like, do AI and its boosters and proponents in the media, the people who take Sam Altman seriously, are they aware how loathed their actual product is?
Speaker 1 And it's like, it's not just that their product doesn't work or doesn't do the things it's promised. It's that, I think, most people do infer from it,
Speaker 1 if not a direct threat to their livelihood or career, then something distinctly contemptuous of human life embedded within all of this kind of AI ideology.
Speaker 3 So I think AI as a, well, AI is a marketing term.
Speaker 3
That's not my original thought. I forget who originally said it, but it's true.
It's a marketing term. It can mean everything and nothing.
Speaker 3 I think there is something unique with large language models that makes some people feel smarter for using them.
Speaker 3 And when they use them enough and they manage to make them do something, even though it's the machine training them, even though it is a bad product that is making them think it's good, they think, wow, I am able to control the machine.
Speaker 3 And when they see people say, this sucks, it's too expensive, it doesn't do the things, they say, it doesn't do it for you because you're not chosen. They're Boris Balkan from The Ninth Gate.
Speaker 1 They think they're worthy. And
Speaker 3
they think that they're special. And it's built to do that.
It's, you got this. That's not just smart.
It's amazing.
Speaker 3
They've built the language so that it really is like the imbecile's magic trick where it makes you think, oh, you're so special. Oh, you know this.
Oh, well, only you understand this.
Speaker 3 And it brings them into this kind of conspirator thinking.
Speaker 3 On top of that, for I don't know, imbeciles like Ezra Klein, who don't really like thinking more than one thought at once, they think, wow, AI, oh, I'm so scared the computer will wake up and
Speaker 3
it will be so scary. And oh, also, I want to be ahead of this because I love licking boots.
So
Speaker 3 I want to make sure I'm the best at cleaning these things with my tongue before everyone starts doing it. And I'll know I'll be first in line.
Speaker 3 But really, AI is this thing where while there are many people who use a broken thing and say this is broken and stupid, there are many others when they hear AI in their apps, AI in ads, AI on the media, and then they use it and it doesn't work.
Speaker 3
They go, something's wrong with me. And I think that that is something the tech industry has been pushing for 20 years.
It's the Steve Jobs thing. You're holding it wrong.
Speaker 3 If you can't make AI work, that's because you're a moron, not that this is bad, unreliable software. And I think that that's what's twisting them up.
Speaker 3 And the more people that say it's stupid and it sucks,
Speaker 3 the more that they get stuck to it. And then I also just think there's a lot of people who want to fall in behind the tech industry.
Speaker 3 I think there's an alarming amount of people that want the tech people to like them. I think there are alarming amounts of people that think Sam Altman is very special and very cool.
Speaker 3 And that for them, it's kind of like politics as well.
Speaker 3 It's like instead of accepting that we live in a deeply unfair system where arseholes and imbeciles are on the top of the pile, they think, no, there's something special about Sam Altman.
Speaker 3 When Dario Amodei lies, it's not that he's lying, it's that he sees a greater truth that he can't let me in on because of all the smart people doing secret stuff.
Speaker 2 I thought it was
Speaker 2 one of the most harrowing cultural changes in the last 10 years before this was how every terrible open world game gave you a sassy AI sidekick.
Speaker 2 The worst example of this would probably be in that really shitty Watch Dogs game that takes place in London,
Speaker 2 And I always thought that, specifically, a sassy AI sidekick is such a... no, there's no good game that has that.
Speaker 2 Not like Cortana, where it's an actual character, but a thing where it's like, I'm a computer. I'm a computer that, who knows, I'm a computer, isn't that crazy? I'm gonna talk about pop psychology about existence. And I always thought, who fucking likes this?
Speaker 2 Who thinks this is funny or interesting?
Speaker 2 And then I would go to like
Speaker 2 Bill Ackman's Twitter and he would, he would go, I just argued with ChatGPT about geopolitics.
Speaker 1 And it's like, oh, they're there.
Speaker 3 It's like a dog barking at its own reflection. Yeah.
Speaker 3 It's just, it's, are you, how narcissistic are you? Like, yeah, if I'm reflecting my own thoughts against myself and now I feel special.
Speaker 3 And I'm sure that there are people who don't think and feel that way, but people of the Ackman gene
Speaker 3 are like that. They think that... like the fucking former CEO of, um, Uber even, Travis what's-his-name. He was saying that he did, yes, he did vibe physics. He learned physics, or quantum physics, on ChatGPT. It's like, no, look, you've lost logic.
Speaker 1 If you didn't understand quantum physics before you started fucking talking to ChatGPT, you're not going to fucking learn about it once you start.
Speaker 2 I love how much of this is just, like, smart guy pantomime. All these guys who are great at creating an image, and now, at the end of the day, they're like, all right, I'm done doing business for the day. Time to take part in the wonders of science.
Speaker 2 I'm going to sit in the big room in my house where I've painted the periodic table on my wall and just admire it. Think about how great the elements are.
Speaker 2 And then I'm going to look at the solar system.
Speaker 2 Then I think if I have time later tonight, I'll spend a little time with E equals MC squared.
Speaker 2 It's like, none of you are fucking smart.
Speaker 3 But it's the dunce radar. It's like a metal detector for dunces.
Speaker 3 If you thought they were smart, you hear how they talk about ChatGPT and you're like, oh.
Speaker 3 Oh, no, you just, you learned enough words. You don't know anything, do you?
Speaker 3 You were good at one thing. Like, everyone talks about Altman like he's this really smart guy. I've talked to people for years who are like, he's so smart, he's so... You listen to him in interviews, he sounds like a moron. He genuinely can't get a sentence out. They all do. Any sentence, whether it's Elon Musk, whether it's Peter Thiel, Sam Altman, Mark Zuckerberg. Watch any video clip of these guys trying to talk in public and they all seem, like, touched. Like they can't form a sentence. They sound like they got on a business call that they forgot happened.
Speaker 1 And like, I mean, like, Ed, to your point about this, this thing being like, like, a divining rod for stupids, like, you know, this is a real thing.
Speaker 1 When I see clips that people share on social media, and it's something generated with Sora or one of these AI models, and
Speaker 1 it's like a 10 second, it's like a 10 or 20 second clip that looks like a video game cutscene. And they're like, Hollywood is shaking in its boots.
Speaker 1 If this is what this model is doing now, imagine what it's going to do.
Speaker 1 And it's like, to the extent that this already looks like most of the dog shit that's out there, I suppose that's impressive, but that's just the quality of like how bad movies have gotten. But like.
Speaker 1 The idea that anyone would look at this absolute drivel that's like bereft of any context or character. And it's just like, I don't know, it's someone like driving a car and looking cool.
Speaker 1 And it's just like this, this facsimile of like what an action scene would look like in a video game. And they're like, this is awesome.
Speaker 1 And you just think like, oh, I found someone with zero inner life.
Speaker 3 Yeah, you don't know anyone.
Speaker 1 With no imagination, uncultured, unlettered, just bereft. Just a Kaspar Hauser, basically.
Speaker 3 And the greatest thing is as well is nobody seems to want to talk about the funniest character in this thing and be like, this is the worst.
Speaker 3 The McGroe dog from Akewood himself, Masayoshi Son, where he's just
Speaker 3
sold all of his NVIDIA stock, a bunch of his T-Mobile stock, and has taken a margin loan on his Arm stock.
So all of the valuable companies, so he can give OpenAI $22.5 billion
Speaker 3 at the top.
Speaker 1 We're at the top of the bubble.
Speaker 3 And he's like, no, I must double down on this. I must give this so that Sam Altman can do a year max of inference.
Speaker 3 So that he can run his company for one year if he's lucky. It's just it's so fucking good, but no one wants to be like, this is the same as WeWork, which, I mean, they shouldn't say that.
Speaker 3 It's so much worse. WeWork had
Speaker 1 buildings.
Speaker 3 And even, well, they had leases on a lot of them, but still, like, there was a thingy. Like, it's just OpenAI, like, this is a really important thing.
Speaker 1 There was a place you could go to do work.
Speaker 1 Open AI doesn't even have assets.
Speaker 3 Well, they have like people and like office space, but their IP and research is owned by Microsoft. Anthropic, I guess, owns their IP, but all of their infrastructure is Amazon and Google.
Speaker 3 I mean, it's
Speaker 3 like they don't, they don't, OpenAI doesn't own any GPUs. Like, what the fuck are they doing? Like,
Speaker 2 I, just a small note about Masayoshi Son, the greatest investor in history.
Speaker 3 The best.
Speaker 2 I, I love him, but
Speaker 2
I, I think I may have mentioned this last time you were on. I'm not sure, though.
He, he was, his mentor was the other greatest man to ever live. Den Fujita, author of my, my personal Bible,
Speaker 2 The Jewish Way of Doing Business.
Speaker 1 Yep. And what was great was he begged.
Speaker 3 He was like, I just want like 14 seconds in your presence. Like, Masayoshi Son, as a teenager, begged Den Fujita, was like, I would just, I don't even care if you say anything.
Speaker 3 I don't even care. I just want to be fucking amazing.
Speaker 2 Just like...
Speaker 2 A 14-year-old who reads The Jewish Way of Doing Business.
Speaker 1 It's like, holy shit.
Speaker 2 This is awesome. Like, it's his shounen.
Speaker 3 I think it's wonderful. I think
Speaker 3 Masayoshi-san, I think he's a Korean guy in Japan.
Speaker 1 Yeah, which is already like a very charged thing.
Speaker 3 He has like a very weird cultural thing there as well.
Speaker 3 But on top of that, he just runs this business insanely and is pretty much still living off the great bets he made on Alibaba and Arm, which he is currently in the process of destroying to fund OpenAI.
Speaker 3 I think Sam Altman is like... I'm not saying he's like an antichrist, but he's like a reckoning for this industry.
Speaker 3 Like the tech industry for like 25 years has allowed various different guys who were not really smart, but could sound smart to the right people at the right time to get as much money as they wanted.
Speaker 3
Eventually, one of these people would try and get more money than anyone has ever tried to. And now we're kind of seeing what would happen.
Now we're what...
Speaker 3 Right now, we're in the greatest follower culture in business ever where no one has independent ideas.
Speaker 3 And the reason that all of this started was because in 2022, Satya Nadella said that Bing should have GPT. Sorry, he wanted ChatGPT in Bing.
Speaker 3
That was the only reason he bought these fucking GPUs. That was the only reason the entire story starts there.
Had he not done that, who knows if this would have happened?
Speaker 3 And Sundar Pichai of Google could have just said, no, this looks stupid. But because they all copy each other, we're here.
Speaker 1 Well, Ed, I want to go back to something you said a little bit earlier, which is that ChatGPT is not too big to fail. But if it does fail,
Speaker 1 like the sort of the example it will set for like the next Sam Altman, and just like the way it will maybe not like burst the entire bubble of the economy, let's pray, but like it will burst the bubble that like these guys are geniuses.
Speaker 1 And like, do you think that that in some way is an existential threat to the entire idea of Silicon Valley, their ability to make money in the future?
Speaker 1 Is that if like at some point people stop talking about these guys like they're Da Vinci or Einstein, people will stop giving them money to to create electricity using machines.
Speaker 3
Yes. So I wrote a thing in the middle of last year called the Rot-Com Bubble, which was saying that tech is out of hypergrowth ideas.
We're past smartphones. We're past cloud computing.
Speaker 3 We don't have a new thing. So they all crystallized around AI because they had no other hypergrowth ideas.
Speaker 3 They need something that is worth hundreds of billions of dollars of revenue over 10 years for all of them at once.
Speaker 3 And because they're all basically a cartel, they all agree on the business models they like, they all come together and say that. Now, do I think this is the end of startups? No.
Speaker 3 But the mythos that the tech industry always knows what's going to work, I think that will die to an extent.
Speaker 3 I also think the knock-on effect is going to be is the limited partners, the people that actually fund venture capitalists, are going to start getting a bit stingy.
Speaker 3 Now, I believe 2021 damaged the world just as much as 2022, because I think with all of the zero interest rate era, free money thing, a lot of venture capitalists burnt a lot of money on stupid shit that year.
Speaker 3 And limited partners felt very burnt. They were saying, like, why the fuck did we give you all this money? You wasted it, it's gone.
Speaker 3 Then AI came around and all the valuations went up and they kind of went, ah, all is forgiven.
Speaker 3 However, if the problem all of these AI startups have is there's no real way out, the same problems that OpenAI has are the same problems all of them have.
Speaker 3 Costs too much money to run the business, impossible to measure your costs, well, difficult to measure them at least,
Speaker 3 and no one is profitable.
Speaker 3 So if none of these companies can go public, because their economics are so bad and there's no path to profitability, and none of them can sell to anyone, they're just going to die.
Speaker 3 And a bunch of venture debt and venture capital is going to go in the toilet. This will burn the people who fund the venture capitalists, which will hurt startups' ability to raise money.
Speaker 3
This will mean that fewer startups are funded, and indeed people like Sam Altman have a tougher time. They will still raise money from their own coterie.
There are always going to be morons.
Speaker 3 But
Speaker 3 this will hurt everyone in tech. And on the public stock level, I do think it's going to permanently scar some of these big tech companies because the natural question after AI is, okay,
Speaker 3 what else have you got? What else is going to make you grow? Quantum computing isn't going to do it. Quantum computing,
Speaker 3
they haven't worked out how to do that properly. They haven't worked out how to turn that into any kind of business.
They don't have a new thing.
Speaker 3 So when these big companies stop growing eternally, the markets are going to say, tech is no longer the ultimate growth machine because software used to proliferate infinitely.
Speaker 3
Software was the ultimate growth vehicle because it didn't have the massive costs of goods sold. AI is the opposite of that.
AI is insanely expensive. It's really quite terrible all around.
Speaker 2 If they have accomplished anything, and not just AI, but really
Speaker 2 everyone who has made this new economy possible,
Speaker 2 from the capital owners to the people who,
Speaker 2 you know, post a picture of themselves using a laptop on an airplane, and they're like, this is insane.
Speaker 1 Have you ever seen anyone work this hard?
Speaker 2 They have done something I never thought possible, which is they made the authors of the previous economic crash,
Speaker 2 the finance industry. They've made them seem self-aware, charming,
Speaker 1
worldly, likable. Well-read.
Yeah.
Speaker 2 Like, like, you know, this could just be rose-tinted glasses.
Speaker 2 But with those guys, at least there was like, there, there was this awareness that, like, okay, I'm not, I'm not really making the world better by bundling all these mortgages.
Speaker 1 Yeah, like, everybody, the CEO of like, what's the guy, Angelo Mozilo, or whatever. You see that guy, you're like, yeah, okay, I get it.
Speaker 1
Yeah. His face looks like a catcher's mitt.
He's like,
Speaker 3 Bernie Madoff had more profit than OpenAI.
Speaker 1 Yeah.
Speaker 1 Yeah. Well,
Speaker 1 I guess to close things out today,
Speaker 1 we mentioned Bill Ackman briefly here. And I would like to do, this is like, sort of, I don't know, not an abridged reading series, but like, you know, a short but sweet one.
Speaker 1 Because I don't know if you guys saw the, have you seen Bill Ackman's advice to young men on how to meet women today?
Speaker 3 Oh, yeah.
Speaker 1 Okay, I want to say that.
Speaker 1 I have only been swatted twice attempting this.
Speaker 1 What I want to say is that, like, ever since the election, ever since Mamdani won the mayoral race, like, Bill Ackman's posts have gotten like 20 times shorter, and he's taken on sort of like a philosophical cast of mind.
Speaker 1 Like he's no longer writing like 20,000 words about like the darkness coming.
Speaker 1 He's become like, he's on that positive squad shit now. And he's being.
Speaker 2 Yeah, because he knows he's going to be executed on the first day.
Speaker 1 And today, I just got to share this today.
Speaker 1 This is Bill Ackman's advice to young men on how to meet women. He writes, I hear from many young men that they find it difficult to meet young women in a public setting.
Speaker 1 In other words, the online culture has destroyed the ability to spontaneously meet strangers.
Speaker 1 As such, I thought I would share a few words that I used in my youth to meet someone that I found compelling. I would ask, may I meet you?
Speaker 1 Before engaging further in a conversation, I almost never got a no.
Speaker 1 May I, okay, may I meet you? If you're approaching someone, a young woman, you're saying, excuse me, may I meet you? It's like, you should have already introduced yourself.
Speaker 1 How about, hi, my name is, or can I buy you a drink?
Speaker 1 Just sliding up with no friction.
Speaker 2 If someone said, may I meet you,
Speaker 2 I would go, are you a Terminator?
Speaker 3 I've been watching the X-Files a lot recently.
Speaker 1 Me too, me too.
Speaker 3
And it's like, it would be like one of the guys that bleeds green goop. Yeah.
Would be like, may I meet you?
Speaker 3 Mr. Mulder, may I meet you?
Speaker 1 The alien bounty hunter. Yeah, yeah, exactly.
Speaker 3 The guys you had to stab in the back of the throat.
Speaker 1 Yeah, with the
Speaker 1 ice pick.
Speaker 3 Mr. Mulder, may I meet you?
Speaker 1 And it's like, you've already,
Speaker 1 may I meet you? It's like, what a way to introduce yourself. When did you last talk to a fucking person?
Speaker 3 When was the last conversation?
Speaker 1 And no, he says here, I would ask, may I meet you? Before engaging further in conversation. I almost never got a no.
Speaker 2 But did you ever get a yes?
Speaker 1 Or was it just what? Huh?
Speaker 3 Or just like silent?
Speaker 3 What did you do after you asked? Did you just stand there like an NPC?
Speaker 1
That's a really good question. Yeah, because it's just like, wait a second, I feel like I kind of already have met you.
You've just introduced yourself, but I don't know your name.
Speaker 1 I feel like the meeting has taken place, but what happens after that?
Speaker 1 What do you do if they say no?
Speaker 2 Well, did you see the post where someone says they tried this?
Speaker 1 Yeah. Yeah.
Speaker 1 Yeah. Yeah.
Speaker 2 Do you have that?
Speaker 1 Will, do you have that one on hand? I think this is, yeah, this is how RFK met
Speaker 1 Olivia Nuzzi.
Speaker 3 May I meet you?
Speaker 2 Someone, someone says that they tried this. They went to a bar and they said, I saw, I saw a girl in a denim jacket. She seemed nice enough, so I went up to her and I said, may I meet you?
Speaker 2 Her and her friends burst into laughter and started doing an impression of me saying, may I meet you?
Speaker 2 I just said, never mind, and walked away.
Speaker 2 Never taking advice from a boomer again.
Speaker 1 That fucking rocks.
Speaker 1
He goes on to say, I almost never got a no. It inevitably enabled the opportunity for further conversation.
Wow, but I've never spoken to a person. You've never spoken to a person.
Well,
Speaker 1 Bill.
Speaker 2
Bill, I didn't think you were a fucking alien before, but I certainly do now. Now that you've told me it opened up the opportunity for further conversation.
Yeah,
Speaker 1 The further conversation is, what, at what hours do they let you out of the group home? And so I can never come back to this coffee shop. And it says here.
Speaker 2 Hello. May I insert my genetic material into your vaginal cavity?
Speaker 3 Hey.
Speaker 1
He says, I met a lot of really interesting people this way. I think the combination of proper grammar and politeness was the key to its effectiveness.
You might give it a try. And yes, that's what
Speaker 1 people were thinking.
Speaker 2 People were thinking. They were like,
Speaker 1 person.
Speaker 2
This is so polite, and his grammar is so good. I have to know this guy.
May I meet you is like, that is the type of thing that, like,
Speaker 2 it wouldn't be in True Blood, but it would be in True Blood fan fiction written by someone for whom English is not their first language.
Speaker 3 It's a line from that fanfic, My Immortal.
Speaker 1 Yeah.
Speaker 3 Jesus Christ. May I Meet You.
Speaker 1 It says here, and yes, I think it should also work for women seeking men as well as same-sex interactions.
Speaker 1 Just two cents from an older, happily married guy concerned about our next generation's happiness and population replacement rates. I love that he adds.
Speaker 1 Christ, he's like the alien fucking colonizer from The X-Files. He's like, yeah, I am very concerned about the next generation's happiness and the replacement rate for viable workers in the future.
Speaker 3 You said, may I meet you, Mulder?
Speaker 1 What were you doing?
Speaker 2
I am concerned that there won't be enough infant spines for me to harvest. Are we having enough babies?
Speaker 2 He's, I mean, I
Speaker 2 think, yeah, we've brought this up before, but like, I only ever knew Bill Ackman as like the guy who got owned by Herbalife.
Speaker 2 But he really, like, he, he was a well-known investor before that. I just thought, you know,
Speaker 2 and he was for the longest time just like another hedge fund guy who mostly donates to Democrats. Somehow,
Speaker 2 ever since Elon Musk bought Twitter, he's pretty much lost his fucking mind.
Speaker 1 Yeah.
Speaker 3
I mean, it's, it's the dunce thing. It's the dunce diviner again.
It's just like certain situations cause these people to be like, wait, wait, wait, wait, everyone, wait, listen up.
Speaker 3
What if I said, may I meet you? You didn't say that as a, you didn't say, you've never said that to anyone before. You came up with this today.
You came up with this today.
Speaker 3 You turned to your blood boy and you said, how would I meet person?
Speaker 3 Sir, don't, you said, I can't look at you or talk to you.
Speaker 1 Talk to me.
Speaker 1 May I meet you? Speaking of.
Speaker 1 People talking to,
Speaker 1 I just look at the replies here. And someone replies, does anything Bill is suggesting trigger high value social approval or any other primal signals, Grok? Create a success matrix.
Speaker 1 And Grok replies, Bill Ackman's polite opener signals quiet confidence and respect, primal cues that convey high value without aggression or neediness.
Speaker 1
It earns social approval by prioritizing civility over bravado, reducing defenses and inviting reciprocity. Success matrix, high confidence, direct ask.
Low desperation, no games.
Speaker 1 Moderate primal pull, grammar elevates status.
Speaker 1 Works best in low-stakes public settings, yielding 80% to 90% positive responses per anecdotal data, though context like body language amplifies outcomes.
Speaker 2 I love how it says
Speaker 1 anecdotal data.
Speaker 2 I love how it says anecdotal data, probably just taking from Bill Ackman's post, I got more yeses than no's.
Speaker 1 And that costs seven dollars.
Speaker 1 Yeah, yeah.
Speaker 1 I was like,
Speaker 1 how about instead of may I meet you: hi, my name is, or just like, you know, it's nice to meet you, or can I buy you a drink? That, that, can I buy you a drink?
Speaker 1 There's a tried and true one that I think.
Speaker 1 Woman ain't that. Yeah.
Speaker 3 Women need to be given like a conversation from Morrowind.
Speaker 1 How do I advance your dialogue tree?
Speaker 2 You know those Tiger Woods texts where he's like texting the girl at like seven in the morning and he's like, do you like golden showers? What if we do them with you and a woman you trust?
Speaker 2 That would work better as a first approach than may I meet you.
Speaker 1 Well, there we go.
Speaker 1 May I meet you? Listener,
Speaker 1 may I greet you
Speaker 1
every week on Chapo Trap House? May I continue to meet you? All right. I think that does it for today's show.
Ed Zitron, thank you so much for your time. Thanks for having us.
Speaker 1
It was great to hang out. This is a really fun conversation.
Thank you, Ed.
Speaker 3 Thank you. Thank you for having me.
Speaker 1 If people want more Ed Zitron, where should they go? What should they do?
Speaker 3
Go to betteroffline.com or I'm on Bluesky and Twitter as edzitron. You can find me there.
My newsletter is wheresyoured.at. Subscribe to the premium, please.
Speaker 1 All right, everybody. Uh, till next time, bye-bye.
Speaker 3 Bye-bye.