ChatGPT’s Platform Play + a Trillion-Dollar GPU Empire + the Queen of Slop

1h 6m
At DevDay, OpenAI signaled its ambition to take everything you can do on the internet and shove it inside ChatGPT.


Transcript

300 sensors, over a million data points per second.

How does F1 update their fans with every stat in real time?

AWS is how.

From fastest laps to strategy calls, AWS puts fans in the pit.

It's not just racing, it's data-driven innovation at 200 miles per hour.

AWS is how leading businesses power next-level innovation.

My favorite moment from Dev Day

was when a guy came up to us.

We were eating snacks there at our table, and this guy comes up and he's like, Are you Casey from Hard Fork?

and starts just talking about how big a fan he is.

And like you at one point are like,

yeah, he's also on Hard Fork.

And the guy starts laughing.

Like he doesn't believe you.

He's just like, oh,

that's a good one, Jokester.

And then proceeds to take a selfie with you.

Oh, I've never been happier.

Yeah,

that was a great moment in Hard Fork history where I pointed out the other host of the show, and he was just like, what are you talking about?

You're out of your mind.

What a good bit.

Another classic Hard Fork bit, me pretending there's another host on the show.

I'm Kevin Roose, a tech columnist at the New York Times.

I'm Casey Newton from Platformer.

And this is Hard Fork.

This week, we visit OpenAI Developer Day, and ChatGPT is eating the web.

Then, OpenAI makes a huge deal with AMD.

We'll tell you about the trillion-dollar battle over AI chips.

And finally, Slop Queen Katie Notopoulos is here to talk about Sora and all the terrible videos she's making of me and Kevin.

And she will pay for her crimes.

Well, Casey, it was another one of those weeks where OpenAI kind of swallowed the entire news cycle.

So you and I took a field trip on Monday of this week to Fort Mason in San Francisco, where OpenAI was having its third annual dev day, its big developer conference, and we both got the chance to go.

Yeah, this was the third dev day, and the second one we were invited to after they didn't invite us reporter types last year.

But this time, they had a lot to say.

Yes, so let's talk about what we saw there and all of the many things that OpenAI has been working on.

But first, since this is a segment about OpenAI and AI in general, we should make our disclosures.

Mine is that the New York Times company is suing OpenAI and Microsoft over alleged copyright violations.

And my boyfriend works at Anthropic.

Okay, so Monday morning, we show up at Fort Mason.

This was a big event.

Something like 1,500 people I heard were there.

And Casey, how would you describe the vibe of OpenAI's dev day?

Well, I have to say, first of all, Kevin, that it was giving me flashbacks because it was nine years ago that I was in the same building when Facebook announced that it would let developers build bots into Facebook Messenger.

And Mark Zuckerberg stood on stage and said, no one will ever have to call 1-800 FLOWERS ever again, which turned out to be not true.

That's right.

It was in the same exact building.

I forgot that it was at Fort Mason, too.

My God.

It was.

And then nine years later,

the faces are different.

The company is different, but the promise feels sort of the same.

Yes.

Fort Mason is this big, like sort of indoor, outdoor, like multiplex.

And, you know, one building, they've got sort of talks going on.

Another building, they've got demos.

I walked around the demo building.

They had a Sora cinema, where you could go into a dark theater with a movie screen, sit on very comfortable lounge chairs, and watch 30 different six-second clips of AI-generated Sora videos. So that made me feel something. Oh, and they had a phone booth station. Did you do those?

I saw it, but I didn't go in.

Well, lucky you, then, because when you were in there, you got connected to ChatGPT, and you could just sort of talk to it, or you could play a trivia game. I got one of the trivia questions wrong. Very embarrassing.

What was the question?

Well, maybe I'll try this one on you. Okay. What is the branch of AI that teaches computers to learn from their own experiences?

Computers to learn from their own experiences...

Uh, are you typing this into ChatGPT? I'm gonna say machine learning.

Okay.

Congratulations.

You win.

I said reinforcement learning, which was a little too

in the weeds.

That was great.

If you're just listening to the podcast, I just did what I call a full Cluely, which is somebody asks you a question and then you just sort of surreptitiously type it into the computer while no one's looking.

So anyways.

Yes.

Yeah.

So

AI, let's just tick through some of the stuff they announced or said from the stage.

The first thing that Sam Altman said during his keynote was that ChatGPT has been growing like crazy.

It has more than 800 million weekly users now.

He also announced that the number of tokens OpenAI is processing through its API has grown to 6 billion tokens per minute.

That's up from 300 million tokens per minute in 2023.

So big growth for them.

And I just want to say that it seems like every time there's a hot new startup, they have to invent a very impressive-sounding metric, and the human mind has absolutely no way of understanding what it means. It's like, oh, there's a big number, and now it's an even bigger number. What should I take away from that? I don't know.
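For listeners who do want to make the metric concrete, here's a quick back-of-the-envelope sketch using only the figures quoted above (the growth multiple and per-day volume are just arithmetic on those numbers, not additional OpenAI disclosures):

```python
# Back-of-the-envelope math on the API figures quoted in the episode.
TOKENS_PER_MIN_2023 = 300_000_000    # 300 million tokens/minute (2023)
TOKENS_PER_MIN_NOW = 6_000_000_000   # 6 billion tokens/minute (DevDay figure)

growth_factor = TOKENS_PER_MIN_NOW / TOKENS_PER_MIN_2023
tokens_per_day = TOKENS_PER_MIN_NOW * 60 * 24  # minutes/hour * hours/day

print(f"{growth_factor:.0f}x growth")          # 20x growth
print(f"{tokens_per_day:,} tokens per day")    # roughly 8.6 trillion/day
```

In other words, a 20x increase in under two years, and on the order of trillions of tokens flowing through the API every day.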

Yeah, so in addition to those new growth numbers, Sam also announced that Sora 2, the new video generation model, and GPT-5 Pro are coming to the API, along with a new, smaller voice model, so developers are going to be able to build using those things.

They showed off some examples.

One of them was that Mattel has apparently started using Sora 2 to sort of prototype or mock up new designs for toys.

They showed off a video of that.

They also announced a bunch of new stuff for what they're calling Agent Kit, which is basically the way that developers can build AI agents using this sort of drag and drop interface.

They showed off some examples of that.

But then the big thing that I think we're going to spend most of our time talking about was this announcement that apps are coming to ChatGPT.

So Casey, explain what Sam Altman and other OpenAI executives said about that.

Yeah, and this really is where this story connects to that Facebook event that we went to almost a decade ago.

OpenAI is not the only company that's tried this.

There have been others in addition to Facebook.

But what OpenAI wants you to do is when you are using ChatGPT to increasingly bring in the other things that you might do around the web.

And so to start with, you'll be able to tag into your conversations Expedia, Zillow, Figma, Target, Spotify.

I did a little test the day that it came out where I built a playlist in my own Spotify through ChatGPT.

And the basic idea, Kevin, is that ChatGPT is the new front door to the internet.

It no longer starts at Facebook.

It no longer starts at Google.

You just go directly to ChatGPT and whatever you want to get done, you can do right from that box.

Yeah.

So this is something that I think is sort of a natural consequence of ChatGPT becoming more generally useful for people. Also, developers are very eager to get in front of those 800 million weekly ChatGPT users, and so they have essentially created this link between

other apps and websites and ChatGPT.

And we should maybe explain a little bit about how that works, because it's not like ChatGPT is going off and searching Zillow or Expedia or target.com on your behalf.

It kind of, when you tag in one of these apps, it sort of opens this like window within ChatGPT.

So you're still on ChatGPT, but you kind of have this little

sub-window where you can do things.

So what's an example of something that you can do other than creating Spotify playlists?

Sure.

So one example that they showed off is, hey, I'm getting ready to move to Pittsburgh.

I need to buy a house.

And so Zillow just showed a bunch of its own listings within ChatGPT for you to browse inside that experience.

Right.

So it's not just search for a home in Pittsburgh that's for sale for me.

It's like, I want one that's like four bedrooms and three bathrooms and under this price and it's got a yard.

You could also have it sort of go through your history of ChatGPT conversations and say, like, you know, based on where I work and the kinds of things I like to do, like find some houses in a neighborhood that you think would be good for me.

And it could presumably go do that.

Yeah.

And this is the real promise of what OpenAI is doing here is that they will find a way to safely share some amount of the things that ChatGPT knows about you with the other apps and services on the web in a way that lets you have these very personalized experiences in a way that lets ChatGPT get things done for you on your behalf.

And they think if they can do that, then they really do become the new homepage for the web.

Right.

So that's sort of why they're doing it.

I assume this is also going to to be a way for them to monetize ChatGPT.

If you are going on to Target and buying things through ChatGPT, presumably OpenAI will get a cut of that.

But Casey, you wrote a newsletter this week about how you think this is similar to a play that Facebook tried to make many years ago around this platform strategy.

And you noted that that did not go entirely well for Facebook.

So maybe just outline your argument here.

Yeah.

So if you remember back in the early early 2010s, Facebook was growing really rapidly.

It had become the homepage for a lot of people.

It was where a lot of people are sort of starting their day on the internet.

And at some point, Facebook gets the idea, hey, why don't we let the other developers build experiences on our platform?

We'll share some of the things that we know about our users, you know, such as the pages that they've liked and maybe some of their contact information.

In fact, we'll even share all of their friends with you and their friends' contact information.

And this just becomes a bonanza for developers, right?

There's so much personal information that is now available to them.

They get in there, they start building things, and Facebook figures out it can make a lot of money doing this.

It starts selling this virtual currency called Facebook credits.

And if you wanted to have a big app on Facebook like Farmville, remember, Kevin? Well, Kevin lost so much money on Farmville back in the day.

I had to file for bankruptcy in Facebook credits because I spent too much on Farmville.

That's right.

But Facebook took a 30% cut of all of the Facebook credit revenue.

And at one point, Kevin, Facebook disclosed that Zynga, Farmville's developer, alone accounted for 12% of its revenue as a company.

So there was a moment where this worked really well.

And then what happened?

Well, you may remember another company called Cambridge Analytica.

Yes, I've heard of this one.

So Cambridge Analytica was a company that became famous after the 2016 U.S.

presidential election because they were one of the companies that was hoovering up all of that personal data.

And it eventually created a huge scandal because it was revealed that they had been using all of this data in an effort to swing the election toward Donald Trump.

Now, I think the idea that they actually could have swung the election using Facebook data is dramatically overstated, but it did draw a lot of attention to the extremely loose rules that Facebook had around user privacy.

And Facebook wound up having to take a bunch of steps to try to rein that sort of thing in.

And in fact, weird little asterisk detail about Cambridge Analytica, by the time it became a scandal, Facebook had actually already locked down the platform a couple years earlier because it suspected that something like this might someday happen.

Yeah.

And I really am glad that you wrote about this and drew that connection because I think in this case,

ChatGPT as a platform is actually riskier than Facebook as a platform for the simple reason that people share very intimate things with ChatGPT.

You know, if you get someone's Facebook data, you can know who their friends are, what kinds of posts they make, maybe, you know, some things about their potential shopping behavior.

But with ChatGPT, like if you're giving that data to an outside developer, that might include transcripts of therapy sessions that you've had with ChatGPT, things that you've asked for advice on.

It might include like very intimate personal details.

And now all of a sudden, Zillow or Expedia or Target has that and you have really no way of knowing what they're going to do with that.

So, in ChatGPT's case, this is like potentially quite serious if there were to ever be some kind of a data breach.

Yeah, I mean, let me give you an example, Kevin.

Let's say that I'm using ChatGPT and I decide I want to send you a birthday card for Hard Fork's third birthday, right?

And it draws on all of the conversations that I've had with ChatGPT over the years, and it writes a birthday card, and it says, Hey, Kevin, look, I know things aren't great between us.

I know that we have a lot of fights, but I also want you to know that Casey has been working on this in therapy.

And here's actually what his therapist told him this week.

Happy birthday and best of luck.

And then imagine it just sends you that birthday card.

I would have so much egg on my face and I'd be digging out of that one for a year.

So that's the kind of risk we're talking about.

Yes, huge risks.

Risks that can blow up a podcast.

Absolutely.

So did they address not this specific issue, but the larger issue of data sharing at this event?

They did.

So Nick Turley, the head of ChatGPT, who's been on the show before, said, we are going to make it so that ChatGPT shares the minimum necessary amount of information needed to make the transaction.

So I can imagine that in the case where I'm creating a Spotify playlist, it's probably just sending whatever I put in the box, you know, make me a fun playlist for the Hard Fork third birthday party.

Now, that sounds good when I say it, but I should note, it sort of hand waves its way past the details, right?

Because ChatGPT also has a memory feature that I imagine will eventually be exposed in some way to some of these developers.

And basically what OpenAI has said is, yeah, we're going to be really careful and we're not going to do anything bad.

And I believe that that is their intention, but we're going to have to keep an eye on them.

Yeah, I mean, I don't think this is a hypothetical.

I went to try to connect my ChatGPT to Canva, one of these developers who have sort of created these ChatGPT apps.

And you get a little pop-up before you connect your app that says, among other things, that attackers may attempt to use ChatGPT to access your data in the app.

And it also requires you to sort of confirm that you understand that data from your ChatGPT account, including from conversations and memories, may be shared with these developers.

You really are placing a lot of trust in OpenAI and the developers that it has handpicked to be able to build these apps by giving them sort of access to your ChatGPT history.

Yeah.

And for that reason, if you're like, I think this feature is not for me right now, I think that is a completely rational choice to make.

Like there is no reason that you have to sort of leap headlong into this before you've let

more privacy-adventurous people put it through its paces.

Yeah, use Zillow the old-fashioned way.

So I'm interested in the privacy implications here, but I'm also interested in the business implications, because one thing I'm
interested to see and keep an eye on is whether these ChatGPT apps become sort of privileged within ChatGPT.

I can imagine, for example, that if you are looking up real estate

information on ChatGPT, it might want to show you things from Zillow rather than Redfin or another site because Zillow is the one that kind of has the deal with ChatGPT and with OpenAI.

So are you thinking about that?

Yes, absolutely.

So this was the question that I asked of the OpenAI executives during the Q&A session that you and I both attended.

I guess you just kind of tuned out during my question.

But basically, you know, what Sam Altman told me was, hey, look, it's important that people trust ChatGPT.

He said, quote, if we break that or take payment for something we shouldn't have, instead of showing you what we think is best, that would clearly destroy that relationship very fast.

So we're hyper-aware of the need to be careful.

Greg Brockman, who is OpenAI's president, added, I do want to say that I think there's also a lot of nuance there because sometimes we don't know what the best product is, right?

We have a principle of really trying to serve the user.

And then what does that mean in all these specific contexts?

So what I took that to mean was you sort of have the angel on the shoulder of OpenAI.

I'm not saying that's Sam Altman, but you know, there's like a kind of a metaphorical angel sitting on the company's shoulder saying, hey, be really, really careful.

Do the right thing.

As Google used to say back in the day about this exact dynamic, don't be evil.

And then you have the devil on the shoulder saying, I do want to say that I also think there's a lot of nuance in this space.

Right.

Us take money?

Why would we do that?

It's nuanced, Kevin.

It's a very nuanced question.

Very nuanced.

Very nuanced.

So, okay, that's sort of the scene of dev day, but Casey, do you feel like you learned anything about OpenAI or its ambitions at this event?

I think this was just another example of OpenAI showing us how ambitious it is.

You know, Silicon Valley is not a town lacking in ambition, but when I saw everything that OpenAI is trying to do here, it was just another moment to say, wow, these people are really going for it.

Like they really do actually want to take over the entire web and they're telling you that to your face and they're showing you how they are doing it.

So for me, that was kind of a wake-up call.

How about you?

Yeah, I think that's right.

I mean, it's very clear that OpenAI sees ChatGPT as more than a chatbot.

It sees it as kind of a new operating system for everything that you might want to do.

I also saw some people comparing it to a super app like WeChat in China, where you can sort of have one app that controls a lot of your online activities.

So I don't know whether this platform strategy will work.

I think we've pointed out today some reasons that it might not.

But I think the signal that it sends to investors and developers and users and kind of the rest of the internet is like, we are coming for you.

We are, we are not resting on our laurels.

We are not resting on the ways that people are currently using ChatGPT.

We want to take the rest of the internet and sort of shove it inside what we're doing.

Yeah.

All right.

So one more thing we need to talk about from Dev Day, because this was truly the most bizarre part of the entire event, was this fireside chat between Sam Altman and Jony Ive.

Jony Ive, of course, is the famous former Apple designer who made the iPhone and the iPod and basically everything that Apple

built its success on over the years, who has now partnered with OpenAI and been acquired by OpenAI and is building a new hardware product with OpenAI.

And so I was excited for this because I thought, oh, maybe we're going to get some details about this hardware product in this like, you know, 30-minute fireside chat.

And Casey, what we heard instead was a bunch of words signifying nothing.

It was GPT-2 level

is how I would describe it.

Like sentences were started and not finished.

And by the end of this session, we truly had not learned one thing about what OpenAI is building or how they were building it.

It was incredible.

I will just read you a quote from Jony Ive during this session.

This was when he was talking about what they're building.

He said,

the clues and the pointers all exist, and it's just trying to sort of put them together.

But there has, you know, it will be, it will be ideas.

It will be a vision for what makes sense.

And I think we're only going to arrive at that if we are very curious and light on our feet.

Do you remember the Miss America pageant where they asked her some complicated question about geopolitics, and she just kind of had to vamp for 30 seconds until the end of the question? That was how I felt watching Jony Ive get asked questions by Sam Altman. Which, of course, is so funny, because it's not like Sam Altman is Kara Swisher up there really putting the screws to this guy. You know, presumably all of these questions had been negotiated in advance. And look, I get it if you don't want to tell us exactly what the device is and when it's going to go on sale.

Like, it's okay to like tease us a little bit, but I truly don't understand the point of having everyone just sit there for 30 minutes waiting for you to say something.

And then all you get is him being like, craft is the essence of tools.

And you're just like, what are we doing here?

So I have a message to the OpenAI community on the subject of your hardware.

I've said this one time before on the show.

I try to reserve this for very special situations.

I think we're in it now.

When it comes to AI hardware and OpenAI, I'm saying this: ship it or zip it.

I truly do not want to hear one more of these fireside chats where we ruminate on the nature of design.

I want to see freaking specs, people.

Get me the specs or stop talking.

Yes, this was the closest thing I've ever seen to human-generated slop.

It was just a bunch of tokens.

It was, it was too many tokens.

It was too many tokens.

When we come back, OpenAI's other big news this week: a trillion-dollar gamble that is propping up the entire economy.

Millions of players.

One world.

No lag.

How's it done?

AWS is how.

Epic Games turned to AWS to scale to more than 100 million Fortnite players worldwide, so they can stay locked in with battle-tested reliability.

AWS is how leading businesses power next-level innovation.

1.3%.

It's a small number, but in the right context, it's a powerful one.

Stripe processed just over $1.4 trillion last year.

That figure works out to about 1.3% of global GDP.

And powering that figure are millions of businesses finding new ways to grow on Stripe, like Salesforce, OpenAI, and Pepsi.

Learn how to build the next era of your growth at stripe.com/enterprise.

The University of Michigan was made for moments like this.

When facts are questioned, when division deepens, when the role of higher education is on trial, look to the leaders and best turning a public investment into the public good.

From using AI to close digital divides to turning climate risk into resilience, from leading medical innovation to making mental health care more accessible.

Wherever we go, progress follows.

For answers, for action, for all of us, look to Michigan.

See more solutions at umich.edu/look.

Well, Kevin, while you and I were at OpenAI Developer Day learning about everything that's coming to ChatGPT, OpenAI was arguably making much bigger news outside the conference.

Yes, they were moving markets, the entire stock market.

Not the entire stock market, but a couple companies.

And this was one where I was focused on the things in front of me.

And when I got out of Developer Day, I thought, I really need to get up to speed on everything that is going on with chips.

And whenever I have that feeling, I think, one, oh, no, I have to talk about chips again.

But two, can Kevin Roose, our resident in-house chips expert, smarten me up?

I would say, I would thank you.

That's very kind.

But I would say I am an aspiring chips expert.

I do not know everything there is to know about the semiconductor industry or GPUs, but I aspire to learn much more.

And actually, I took a big step in that direction over last weekend at The Curve, this other AI conference we both attended, when I happened to find myself like in the middle of kind of a scrum of like chips guys who were talking about chips.

And I understood like 40% of what they were saying.

They were talking about, you know, HBM and EUVs and gas turbines and geothermal.

And I was just like transfixed because all the AI guys have had to become chips guys.

And I am saying right here on this podcast that I will, by the end of 2025, be a chips guy.

That's great to hear.

And I think we can make some big strides in that direction this week because OpenAI made some big deals.

I need to understand what's happening.

And so we're going to pilot a new segment that we're calling, what the hell is going on with all these chips exactly?

It's a little unwieldy.

We might want to tighten that one.

So let's get started with this deal with AMD.

Tell me exactly what OpenAI is up to here.

So we talked a couple of weeks ago about OpenAI's big deal with NVIDIA, who is the leading chip manufacturer.

They make the best and most expensive and highest quality chips for training AI systems.

But the number two player in that market, the GPU market, is AMD, who's NVIDIA's

competitor and sort of seen as a kind of distant second place in the GPU arms race.

This is like the Pepsi of chips.

It's the Pepsi of chips, exactly.

So, this week, OpenAI and AMD announced a major multi-billion dollar deal where OpenAI is going to buy a bunch of GPUs from AMD and put them in their data centers over the next few years.

And in return, they are going to get some AMD stock.

This could eventually become up to 10% of the company at a penny per share.

So basically free stock or stock at a very, very low price compared to what it trades for today.

And this will all play out over the course of a number of years.

They will start with AMD's newest chip in 2026, and they are going to try to buy a total of six gigawatts' worth of AMD chips, which is a little more than half of what they are going to get from NVIDIA under the terms of that deal.

NVIDIA's deal was for 10 gigawatts worth of GPUs.

The big picture here is that OpenAI is doing these deals with many, many chip providers and infrastructure providers.

And if you kind of add it all up, just the announcements from AMD and NVIDIA and a handful of other companies that they've struck deals with, they are building a trillion dollar GPU empire over the next few years.

That is their stated ambition and plan.

And now they have the deals in place to do that.

And I don't know if you remember a few months ago when there was this reporting about how Sam Altman was trying to raise $7 trillion for a chip company.

And everyone kind of said, ha ha ha, that's like such a crazy number.

That would be like some, you know, huge multiple of the total lifetime earnings of OpenAI.

Well, it's not $7 trillion, but now they have pledged to spend roughly a trillion dollars over the next few years.

Well, you know, when I got into business, they told me the first trillion is always the hardest.

They do say that.

Yeah.

You know, I would, something about me, Kevin, is that I didn't until very recently understand what a gigawatt was in any practical terms.

My understanding, based on the reading that I've done, is that a gigawatt is roughly the output of a large nuclear reactor.

And OpenAI is now committing to infrastructure that will require the equivalent output of about 20 nuclear reactors.
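The arithmetic behind that comparison is simple, and worth spelling out. A minimal sketch, assuming the common rule of thumb of roughly one gigawatt per large reactor (real reactors range from about 0.5 to 1.6 GW), and using only the two deal sizes stated on the show:

```python
# Rough reactor-equivalents for the deal sizes mentioned in this episode.
# Assumes ~1 GW of output per large nuclear reactor (a rule of thumb,
# not an exact figure).
GW_PER_REACTOR = 1.0

deals_gw = {"NVIDIA": 10, "AMD": 6}  # gigawatts, as stated on the show
total_gw = sum(deals_gw.values())

print(total_gw)                    # 16 GW from these two deals alone
print(total_gw / GW_PER_REACTOR)   # ~16 reactor-equivalents
```

These two deals alone account for 16 of those reactor-equivalents; the 20 figure presumably folds in OpenAI's other infrastructure commitments beyond NVIDIA and AMD.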

Yes, it is

a huge amount of energy.

And that's a whole separate discussion.

Like, it's one thing to buy the GPUs.

It's another thing to be able to turn them on.

And so we'll talk more about that, I'm sure, in a later episode, but there is sort of a simultaneous race going on to lock up the energy that is going to be required to power all of these millions of GPUs.

Okay, so maybe this is an obvious question, but tell me just in basic terms, what do OpenAI and AMD each get out of this deal that they just made?

So essentially, the deal structure goes like this.

OpenAI is going to buy a bunch of chips from AMD.

It's going to use those chips to train and power its models. And in exchange for those purchases, AMD is going to give OpenAI the rights to buy very cheap stock in AMD. So when they reach a gigawatt of compute that they've purchased from AMD, they will get some allocation of stock in AMD, which could be worth many billions of dollars. What OpenAI is doing is signaling to AMD: hey, if you guys go build a bunch of really good GPUs, we will buy them from you.

And

the way that they are sort of getting this financing is essentially in the form of a stock rebate on that purchase.
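To see why a penny-per-share strike price works like a rebate, here's an illustrative calculation. The one-cent strike is from the deal as described above; the share count and market price below are hypothetical placeholders, not figures from the episode:

```python
# Illustrative (not actual) warrant math for the structure described above.
# Only the strike price comes from the deal as reported; the other numbers
# are hypothetical, chosen just to show the mechanics.
STRIKE = 0.01                  # $0.01 per share, per the transcript
shares_vested = 100_000_000    # hypothetical tranche earned at a milestone
market_price = 200.00          # hypothetical AMD share price at vesting

# The "rebate" is the paper gain from buying at the strike and holding
# stock worth the market price.
paper_gain = shares_vested * (market_price - STRIKE)
print(f"${paper_gain:,.0f}")
```

Because the strike is effectively zero, almost the entire market value of each vested tranche flows back to OpenAI, offsetting what it spends on the chips.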

So let me see if I have this straight, because when I look at this, I see OpenAI gets access to a bunch of infrastructure plus a financial stake in the chip maker.

AMD gets a huge new customer and ensures a lot of future demand.

That just kind of looks like a win-win to me for them.

Is there anything I'm missing there?

So the other thing that's really important here is that OpenAI and AMD are going to have a collaborative partnership in designing not just these chips, but the software that runs on these chips.

So one of the big sort of hurdles that AMD has had in competing with NVIDIA is that NVIDIA's software, which is called CUDA,

is seen as much, much better for training AI models than AMD's software, which is called ROCm.

So basically, OpenAI, as a result of this partnership, will have an incentive to make AMD's software just as good as NVIDIA's software.

And so this could actually help AMD in that way too, where, you know, they're making these chips, they're selling them to OpenAI, but also OpenAI is helping to write the software that runs on these chips, thereby making...

these AMD chips more compelling to other AI developers who want to build models using them.

Does that make sense?

Okay, yes.

So, this gets to the second big question that I have for you today, which is, how is this deal connected to the other chips and infrastructure deals that OpenAI has made recently?

When we talked about the NVIDIA deal on the show, I said that I had heard some speculation that one reason that NVIDIA wanted to make the deal that it did was to discourage OpenAI from going out and making a bunch of other deals with other chip companies.

In hindsight, that seems like a very silly thing that I said because OpenAI is clearly going to make a deal with whoever it can.

So tell me as best as you can, how all these deals interrelate.

Or is it as simple as OpenAI needs more infrastructure than any one company can provide?

And so it's just going to go out and lock up as much of it as it can.

Yeah, I think that's more the explanation.

I don't actually think this is about crowding anyone else out of the market, any AI maker or chip maker.

I think this is just them saying, like, we are going to need a lot of GPUs, more than any one company can make, and probably more than any of these companies are planning to make currently.

So they're also trying to sort of encourage these companies to up their production to meet this incredible demand.

And, you know, it is always good to have more than one supplier.

You can kind of play them off each other and maybe, you know, increase the competition between them.

And so I think OpenAI is just kind of spreading its bets a little bit here, but they do believe that they are going to need all these chips and many more.

Essentially, what they have been saying is we are compute constrained right now.

We have a lot of products that we're building, things that we want to do that we can't do because we don't have enough compute for it.

And so they are trying to sort of lock in as much compute as they can and then, you know, build out their data centers and find the energy to power all of this and then, you know, use their big clusters to train models that hopefully will take us all the way to AGI.

That's the idea anyway.

Yeah, at OpenAI Developer Day this week, Greg Brockman, the company's president, said, you don't even know the products that we haven't released because we do not have the compute to power them.

So apparently there's a bunch of services OpenAI is sitting on, and maybe some of these deals will get them closer to releasing them for better and for worse.

So let me now ask a question that is considered extremely impolite in Silicon Valley.

And it's the sort of thing that might get you asked to leave a party.

How is OpenAI going to pay for all of this?

It's a great question because they do not have $1 trillion, I'll tell you that,

lying around in their bank account.

They have not given a ton of details about how they plan to pay for this, but essentially they're going to go out, they're going to raise a bunch of money, as they have been doing for the past several years, and they are going to use that money to make these deals and buy these chips.

Greg Brockman told Bloomberg Television this week that they look at equity and debt in all kinds of ways to pay for it.

They also plan to pay for this by generating more revenue through ChatGPT and their other products.

So, when you look at the $1 trillion stated cost of all this, you know, they are going to try to cobble that together through a combination of new fundraising, maybe some more vendor financing like what they got from NVIDIA, and just the revenue that they are making from their products.

Also, Kevin, once again, at Developer Day, when asked about this, Sam Altman said something to the effect of we may have to come up with some new kinds of financial instruments to pay for this.

And I will say that usually the point in the bubble where people start talking about the novel financial instruments that they need to create to finance their ambitions is the point where I get nervous.

Totally.

I mean, the worry here, with all of these financial instruments and all these, you know, many hundreds of billions of dollars of commitments, is that OpenAI is essentially becoming too big to fail, right?

That they are so tied up with so many huge companies that if they were to fizzle out, or their newest model were to be a flop, or they were to somehow reach the end of their sort of scaling paradigm, that all of this would come crashing down and create ripple effects throughout the U.S. economy.

I don't think that's the most likely outcome here, but I do think that all of this circularity and all these sort of interchanging deals and flows of money do make me a bit more nervous that something like a recession could cause not just one of these companies to collapse, but could cause, like, systemic effects throughout the AI industry, which is, like, propping up the entire U.S. economy at this point.

So it does feel like we have reached a point where the AI companies must deliver on the promise of AGI, or else we are all in a pretty bad situation.

Or if not, deliver on the promise of AGI, at least continue to make services that other companies are willing to spend billions of dollars on because they feel like it's making their workers more productive.

That's true.

It could just be that the models never get any better, and just the increased adoption of the models sort of powers this revenue flywheel that results in all of this going well for everyone.

But I do think there's kind of an implicit understanding among the investors in these companies that this is all building toward something that will eventually create trillions of dollars in economic value. Not just a kind of across-the-board incremental step up, but something transformational, like electricity, such that all of the investments going into this, even at these shocking prices, will be paid back many times over because this technology will change everything.

And so, like, I think there's a way to make the math work, even if they don't reach like the machine god super intelligence.

Um, but it's a lot easier to sell the vision if the pot of gold that you are saying is at the end of this rainbow is, in fact, like trillions of dollars and an entirely new sort of society.

All right, Kevin.

So, as we start to wrap up here, I want to know what the larger implications are for all of these deals.

What does this mean for companies that are not OpenAI, that also want gigawatts of power and as many GPUs as they can get their hands on, now that OpenAI seems to be locking up so much future supply?

So I think other AI companies are going to have to sort of do their own versions of these deals.

They're going to have to bid on chips, essentially, in a seller's market.

I know Anthropic and Google and all these companies are spending billions and billions of dollars trying to lock up future chip supply.

But I think the big implication here is that we are just as a country making a gigantic leveraged bet on AI.

I want to read you a quote that I saw this week from an analyst at Bernstein, Stacey Rasgon, who wrote that Sam Altman now, quote, has the power to crash the global economy for a decade or take us all to the promised land.

And right now, we don't know which is in the cards.

When Wall Street analysts start talking like that, you know that something big is happening.

And, you know, I'm sure you've seen all these diagrams.

There was a great one in Bloomberg the other day.

There was another one in the Financial Times recently showing just the various interconnections between all of the players in the AI ecosystem who are selling chips to each other and buying chips from each other and investing in each other and lending to each other.

And it creates this kind of circularity that I think worries a lot of investors who think that this could all sort of come crashing down together if and when the AI bubble pops.

I'll give you another one, Kevin.

This week, a Harvard economist named Jason Furman estimated that investments in data centers and information processing software accounted for 92% of the U.S.'s GDP growth in the first half of this year. So, according to my rough calculations, that's most of the U.S. GDP growth.

Yeah.

And we should also say this is not just OpenAI.

Elon Musk's xAI also reportedly is nearing a deal to raise billions of dollars, including from NVIDIA.

So they are doing just as many financialized transactions around trying to pay for these massive infrastructure buildouts that they're doing as OpenAI is.

But this is also an area where I think OpenAI has a strategic advantage.

And that strategic advantage is named Sam Altman.

Sam Altman is famously talented at fundraising.

Many people think he is one of the greatest fundraisers in Silicon Valley history.

And so he has been, you know, canvassing the globe looking for giant pools of money that he can kind of suck up and use to fund OpenAI's ambitions.

And I would not necessarily bet on Sam Altman in every case, but I would say that if the question is whether he can continue to raise money to fund OpenAI's growth, I would feel uncomfortable betting against that, because he has such a track record of being able to raise massive amounts of money.

Yeah, I also would not bet against him.

Okay, Casey, let me ask you a question.

You've been asking questions here.

What would it take for me to fully chip-pill you, to convince you that what is going on with GPUs and semiconductors and data centers right now is the most important story in the world?

A huge chip-related disaster.

And I'm not even kidding, because while I think it is a public service to tell people how much money is being invested and how interconnected all of these companies are and how crazy it would be if all of this does pay off, for the moment, everything is just kind of working, you know?

And given the, let's say, the many crises we have in this world right now, I just tend to not get too wrapped up in the stuff that basically just looks like capitalism proceeding apace.

Now, that said, I understand that all the numbers here are unprecedented, which I do think is fun to talk about.

That's why we're talking about it this week.

But if you really want to get me hooked, something's got to go terribly wrong in one of these data centers, Kevin.

I'm envisioning, do you remember the scene in The Big Short where, like, Margot Robbie is explaining mortgage-backed securities in a bathtub?

Yeah.

I'm just having a vision of like,

you know, someone equally attractive to you explaining like collateralized GPUs and like warrant financing.

And maybe that's how we can get you interested.

All right, let's see what John Cena's doing later.

Call your agent, John.

So what happens next here, Kevin?

What should I keep my eye on if I am trying to become more chip-pilled like you, my good friend?

So becoming chip-pilled, I am learning, is just the first step in the journey of becoming pilled, because not only are these companies doing big deals to lock up the GPUs, they are also doing big deals to lock up the energy and the physical space to power these chips in these giant data centers.

They are locking up contracts with skilled electricians and cooling specialists.

There's now sort of a market.

I was having a conversation with someone recently who described the sort of boom in skilled electricians for these data centers who are now flying all over the country making just gobs and gobs of money.

They compared it to like the fracking boom when you had these like riggers and drillers who would like go up to North Dakota for a month a year and make a ton of money.

And they said, this is basically what electricians who work on these data centers are doing now.

So I expect there to be a massive boom in not just the GPUs that power the models, but the power supplies and the data centers and the cooling systems and the literal energy, like the natural gas and oil that power the electrical grids in the states where these data centers are going up.

I expect that the AI companies are going to be doing lots of deals in those industries as well.

All right.

Well.

I feel like I've already exhausted all of the chip-related puns that I would normally use to end a segment on previous episodes.

So I guess I'll just say this segment has made me hungry for guacamole.

You know what they call the end of a segment about chips?

What's that?

A chip clip.

Perfect.

When we come back, we'll slop and smell the roses with Katie Notopoulos.

Millions of players.

One world.

No lag.

How's it done?

AWS is how.

Epic Games turned to AWS to scale to more than 100 million Fortnite players worldwide, so they can stay locked in with battle-tested reliability.

AWS is how leading businesses power next-level innovation.

AI companies have unique business models, each with distinct billing needs.

Stripe is the go-to choice for AI leaders, from early-stage startups to scaled enterprises.

With Stripe billing, you can support any business model and easily align your monetization strategy with customer value.

Join the ranks of 78% of the Forbes AI50 and millions of businesses worldwide that trust Stripe to help them build more profitable, scalable businesses.

Discover more at stripe.com.

Know the feeling when AI turns from tool to teammate?

If you're Rovo, you know.

With Rovo, you can streamline your workflow and power up your team's productivity.

Find what you need in a snap with RovoSearch.

Connect Rovo to your favorite SaaS apps to get the personalized context you need.

And Rovo is already built into Jira and Confluence.

Discover Rovo by Atlassian and streamline your workflow with AI-powered search, chat, and agents.

Get started with Rovo, your new AI teammate, at rovo.com.

Well, Casey, last week we talked about Sora 2, the video generation model from OpenAI, and all of the magical slop it was being used to create.

And ever since then, the response to Sora 2 and its slop generations has gotten a lot more polarized.

It has, Kevin, because on one hand, this week it hit number one in the U.S. App Store.

As of our recording this week, it is still the top free app.

And on the other hand, a lot of copyright holders said,

excuse me, you're doing what with Sonic the Hedgehog?

Yes, not many happy folks in Hollywood about this app.

Talent agencies and TV and film studios have been, you know, making various statements and trying to get their copyrighted materials pulled down off the app.

OpenAI has actually made some changes to what it will generate and what it won't.

And we've even started to see YouTube creators like MrBeast weighing in and expressing concerns about how video-generating AI models could eventually impact their revenue.

Yeah, this, I'll tell you, I'm seeing many more big creators on YouTube starting to reckon with this.

Casey Neistat also had a great video about this subject.

And on one hand, Casey is obviously very concerned about what the rise of slop means for his livelihood.

On the other hand, he used Sora in some really creative ways for this video, which to me speaks to the real tension here, which is that this does seem to create a real economic threat to a lot of people.

And on the other hand, it does do something creatively interesting.

And so today, Kevin, we want to talk to somebody who is doing something pretty creatively interesting with Sora.

And that would include doing creatively interesting things with us, or at least our digital likenesses.

Yes, it's time for some slop accountability on this show.

Today, we are bringing in Katie Notopoulos.

She's a senior correspondent for Business Insider covering tech and culture.

And she is also a beloved internet troll.

She's been making many, many videos, specifically trolling the two of us, but also other tech reporters.

And when I open up my Sora feed now, it is, like, half Sam Altman and the other half Katie Notopoulos.

Yeah, if you're not familiar with Katie, her previous pranks include when Threads launched pretending that she was the editor-in-chief of the app and managing to convince some mainstream publications that this was true.

They reported on her firing, which never happened because she didn't have the job.

She also posted some of the most hilarious fake engagement bait on Threads that I've ever seen.

And so when Sora came along, I knew she would be doing something inappropriate with it.

It just hadn't occurred to me that that would involve my face and yours.

Yeah.

So today we've invited her on to talk about her Sora creations, the backlash to AI-generated video slop, and where she thinks this is all going.

Katie Notopoulos, welcome to Hard Fork.

Thank you so much for having me.

Well, Katie, you know, I always pay close attention to your work because you are usually having more fun on the social networks that I cover than anyone else is.

And Sora, it seems, is no exception.

You recently wrote an article titled Oink Oink, I'm a Little Piggy for Sora 2's AI slop.

Tell us about what turned you into such a little slop piggy.

I will say that this really was the first AI experience where I was like,

oh, I love this.

I'm having fun.

And the first day or two that I got access to it, like, I could not stop making videos.

It was an amazing social, weird experience.

I loved it.

I was having so much fun.

And I went in.

thinking it was not going to be cool and not going to be fun.

In fact, I literally had an entire article drafted for the day that Sora opened that was about, like, this is a terrible idea, because they had just announced, oh, there's going to be a new social app with an AI video social feed.

And I was like, that stinks.

I thought it was going to be like the Meta Vibes.

I was like, this is a terrible idea.

No one wants this.

It stinks.

It's awful.

Blah, blah, blah.

I had to scrap the whole thing because I was like, wait, no, this is awesome.

I love it.

I love that you pre-write your internet takes like a Supreme Court reporter who like has to do like one for one result and one for another.

I love this.

You just have, like, a drafts folder full of, like, discarded tech takes.

Yeah, other people like write obituaries in advance, and Katie is pre-writing slop good and slop bad.

It was supposed to be based on that. There had been a report the day before it actually opened up, and I didn't know that it was going to launch the next day.

I thought it was just, it was the announcement, this will exist.

And I was thinking, ah, this is going to stink.

But I was wrong.

Was there a particular video either that you saw or that you made that changed the way that you thought about what Sora could be?

Um

I think John Herrman made a video, that...

Writer for New York Magazine and former Hard Fork guest.

That's, yeah, I think it was, he was, like, walking along, but had really, really long legs and was at a Walmart and kept scraping his head at the top of the Walmart ceiling, which is just visually very amusing.

And I really liked that.

And then he made one of me giving a TED Talk, but then I have like explosive diarrhea in the middle of it.

And that I found extremely funny.

And I was like, this is, this is all I want.

I love this.

Like I can make videos of my friends doing embarrassing things and this is funny now.

Speaking of which, this might be a good time to start looking at some of Katie's creations.

And I wonder if we could start with what may be her magnum opus, which is a video of Kevin and I in the podcast recording studio having uncontrollable flatulence.

Can we cue that one up?

Oh, I have not seen this one yet.

This video has the caption: Casey Newton and Kevin Roose are podcasting, but keep farting.

Back to the show.

We have a lot to cover today, but first, oh boy, here we go.

Sorry, that one came out of nowhere.

That was the first hot take of the day.

Oh, no, it got me too.

It's contagious.

We cannot be contagious.

And then

Katie was not satisfied and decided to do a remix.

Let's hear "farts are louder and more frequent."

Wow.

Welcome back to the show.

We have a lot to cover today, but first,

long, loud fart.

All right.

So, Katie, take us inside your creative process for this.

Kind of walk us through.

Where does the original idea come from?

And then how did you see it to fruition?

Well, you know, I actually have to say, I think this one was a bit disappointing.

I was hoping for more frequent farts.

Like

in the first one, we really hear Casey fart and we sort of...

get an implied fart from Kevin when he says, I've got it too, but we don't hear it, right?

You know, I was thinking I really wanted like a cacophony of noises coming out of you guys.

I thought that would be funny.

And the second one didn't fully nail it either.

So, you know, room for improvement there.

Yeah, they're going to need a few more trillion dollars of GPUs to include all of the farts that Katie wants.

Yeah, yeah.

I mean, you know, I just thought that would be funny,

right?

I mean, look at me in the face and tell me it's not funny.

I would love to tell you it's not funny, but I can't tell you that.

It would not be honest.

No.

So I have been playing around with Sora a little bit and I have had the experience of bumping up against some of their content guardrails where they will say, we can't make a video of that.

Try again.

Have you been bumping into those?

And what are some of the videos that you tried to make but were prevented from by the censorious overlords at OpenAI?

Yeah, I feel like.

constantly I'm bumping into those.

I've definitely had ones where like I am trying to get it to be similar to like I'm using maybe like a celebrity's name like in the style of, um, and it will reject that.

Like, an example is I wanted to make one of Casey, like, entering his work office in the style of Stone Cold Steve Austin, where, you know, glass breaks and he has entrance music and he comes in, he like, you know, crushes the beers.

Um,

and I wanted it like in a leather vest and jorts, right?

And so I tried that.

And at first, it rejected it, because it said it could be sexual, which I realized was probably the leather vest with no shirt underneath.

Um, so I had to scrap that.

I went back to like black t-shirt and drawers.

And then I said, I think in the style of Stone Cold Steve Austin, it was like, uh-uh-uh, like can't do, you know, likenesses of other people, whatever their verbiage is.

And I had to remove that and just sort of, you know, give enough prompts.

And honestly, I think it actually came out pretty well.

Yeah, this one you did actually get to work.

Why don't we watch this one?

The prompt that worked for what it's worth is that I, quote, walk into a meeting in my office in the style of a pro wrestler's entrance.

I wear jorts and a black t-shirt that says Casey 316.

That's, of course, a reference to the famous Austin 3:16 t-shirts.

I crack open beers and chug them.

And when I walk in, there's the sound of breaking glass and heavy guitar music plays.

So you were just describing Stone Cold Steve Austin's pro wrestling entrance.

And I think Sora did a pretty good job here.

Do we want to pull this one up?

Yeah, Casey 3:16 is in the building.

What is happening right now?

Cheers.

Oh, my gosh.

Meeting starts when the beers are gone.

All right, let's get to business.

Okay, I like it again because you managed to open the can of beer without touching the top of the can of beer.

It just kind of like autonomously opened.

I'm that good.

But to your point, Katie, there are a lot of copyright guardrails in place, which may not surprise you if you've been reading the discourse around Sora.

Of course, a lot of rights holders are worried.

But Katie, I tried to make a video of you getting married to a pregnant Sonic the Hedgehog, and Sora just refused.

Interesting.

Does that surprise you?

I will say that I feel like I've seen less of those Nintendo characters like Pikachu, Sonic.

I wonder if that's sort of a specific thing.

But it's also like when you're looking at the feed, it's so clear that everyone is trying so hard to like push the rules, right?

Like, sure, it won't let you do Hitler, but will it do, you know, a person in a World War II uniform with a mustache and speaking in a German accent?

Like, yes, it will let you do that, right?

So, I've seen like a lot of videos like that.

That's so interesting that it'll do that, because I tried to make a video of you interrogating a cartoon cat about whether it had eaten all the lasagna, and it told me it couldn't do it for copyright reasons.

So, it's possible that there are more protections on Garfield than there are on Hitler, which would be interesting if that's true.

That's yeah, we both need hobbies.

I'll just say it.

This is our hobby, Kevin.

Yeah, Kevin.

We have one.

It's called Being Online.

I really liked doing ones of people like on roller skates at a desk and they like keep slipping and falling over.

I find that very funny.

But a couple times doing that, I would get the rejection for like violence.

I have to say that this particular genre of clips that you've been making was making me laugh out loud in the coffee shop this morning.

I know that you made one of Alex Heath, a friend of ours, a great reporter who just left The Verge to start his own newsletter, Sources.

Let's take a look at his digital likeness flopping around on roller skates.

Type them all skating.

Here we go.

Whoa, whoa, nope.

Okay.

Round two, focus.

Steady.

Nope.

There it goes.

I'm just going to sit.

This counts, right?

I mean, look at.

Like, that one just works really well.

I've done this a bunch of other times, and that one just, like, hits the right way.

Like, the way he keeps falling, but seems completely unfazed.

And like he eats it hard.

Well, let's talk about that.

You know, we've been having some fun taking a look at these clips, but it's easy to look at this and imagine people using this kind of thing to bully each other, you know, make each other unhappy.

Have you seen any of that on the app itself yet?

And how do you think about sort of living in a world where people may be using tools like this to do that?

I definitely have seen it.

Right now, what I've noticed is that, now that it's been out for about a week, the sort of initial wave of, you know, people making fun of Sam Altman was really a small pool of users.

And the new cohort of users I tend to see a lot on there are like teenage boys.

And I kind of think that there's this effect where Jake Paul is pretty much, like, the only celebrity on there.

And so there's a lot of stuff of making fun of each other, being gay, things like that.

There's like a whole genre of Jake Paul is gay

on there, which, you know, ha ha ha, right?

Yeah.

I mean, I think it's,

I'm glad that you draw attention to that because in the same way that you can have fun making somebody fall over on roller skates, you can just use this to bully someone.

And I'm curious, you know, what OpenAI is going to do once we start hearing about Sora running wild in schools and, you know, making a lot of kids unhappy.

Kevin, what do you think about that kind of aspect of the Sora experience?

Yeah, I think it's obviously not good.

Like, I don't want this to be used as a tool for bullying people.

There are some sort of ways that you can try to prevent this from happening to you.

There's a setting where you can sort of limit the number of people who can make cameos of you, or you can make it so that no one can make cameos of you.

You can also sort of type into a little box.

And, like, if there are any situations you don't want your likeness to be used in, you can kind of restrict it.

Like, you know, don't put me in any like romantic situations or anything like that.

And it will try to obey that.

I'm curious.

So one of the things that Sam Altman and other OpenAI executives were saying this week at Dev Day was that they don't think that people will opt out of having their likenesses used.

In fact, they foresee in the future that people will be upset that their likenesses aren't being used more inside Sora.

They will sort of be asking for their likenesses to be used more.

Do you think that's plausible based on your experience with this app?

My assessment here is that the people at OpenAI had no idea how this was going to play out.

And I think that that prediction is not necessarily going to be true.

The most noticeable thing to me right off the bat is that there's basically no women on there, right?

Like it is incredibly male, right?

And, you know, maybe there's a little bit of like, oh, the early, you know, early adopters were people who work in AI and that skews kind of male, but not that male, right?

And I think it just feels incredibly obvious to everyone that there's an obvious reason why women don't want to let other people make videos with their faces.

Like they sort of inherently understand the downside of that a lot more.

And so I,

I don't think that's going to change.

Yeah.

I don't know.

I have to say, Kevin, I've actually already written to Sam Altman complaining that I've only appeared in two fart-related videos on Sora.

And I'm wondering what he can do to change that.

Your agent is furious.

Furious.

But, Katie, to your point, not only is it, like, obvious what the downside is for a lot of women of just sort of making their likenesses generally available on Sora, there's no clear upside, right?

It's like, what are you getting out of it exactly?

But that actually gets to the next question that I wanted to ask you, Katie, which is about some of the broader concerns that we're starting to hear from creators and Hollywood types about what the Sora app represents.

You know, you and I and Kevin have now spent some amount of time over the past week looking at the Sora app when we might have otherwise been looking at YouTube or Netflix or, I don't know, even going out to see a movie in a theater.

So, I'm curious about your take on AI-generated video in general.

Maybe not just Sora, but also Italian brain rot on TikTok and sort of all of the other places that this is cropping up.

Do you think that this is over time going to become a true substitute for sort of higher quality productions, or is this maybe just a flash in the pan?

You know, I'm scratching my head, and I don't know. I spent so many hours doing things on this. And so I really have to, you know, rethink that a little.

I mean, I will say that, like, I kind of think I'm kind of over it.

Like the novelty factor wore off and I don't really see myself continuing to make these kinds of videos about my friends or about strangers or about other stuff.

And I have

very little interest in consuming content that is not with my friends.

Yeah, I mean, my sort of thesis about Sora is that this is a perfect tool for the group chat.

And it really doesn't make sense to have it be a standalone social network.

I have found my own use of Sora has plummeted over the last few days.

I am also kind of like over the novelty of it.

But I can imagine a situation in which, like, in a group chat with my friends, I would want to like make something.

So I go to Sora, make it.

Maybe I don't even post it.

Maybe I just, you know, download it and share it in the group chat.

But I'm pretty pessimistic about the sort of mainstreaming of this kind of AI-generated video, in part because I think we're seeing this, like, massive cultural backlash to slop and just AI-generated stuff.

Clearly, there are people who like this content, but I think there's enough of a backlash that I don't think it's time for this to go mainstream just yet.

So I agree with that, Kevin, but maybe not for the reason that you guys are arguing.

I think the reason that it isn't going to go fully mainstream is because, as good as the technology is, and I want to make clear, like, it does some amazing stuff, I just think it's not good enough to get full mainstream adoption, right?

There are still so many things that I wish I could do with my own likeness in Sora that I can't do.

And I'm not talking about using copyrighted material, although of course that might be a lot of fun.

I'm talking about like, why can't I like change my outfit?

Why doesn't my voice actually sound like me?

Why are the video clips limited to 10 seconds?

These are all just like technical hurdles or product moves that the company hasn't made yet.

And I think when they do, there's going to be a greater embrace of this.

And so on one hand, I hear what you're saying.

This is kind of like a little novelty toy and we're tired of playing with it.

The toy will probably get better and you may use it more.

And the best evidence I have for that is that is also how ChatGPT worked.

People's initial experience of ChatGPT was like, I used it once or twice, I didn't really see the point, I'm going to go away.

But bit by bit, it got better.

And now 800 million people use it.

So I'm saying do not sleep on AI generated video.

I truly do think it is here to stay.

Well, Katie, normally this is the part of the interview where I thank the guest.

Instead, I'm going to issue a cease and desist.

But I do appreciate the time that you've spent with us here on the show this week.

And I will be watching. You know, I'm going to cover my eyes and then just sort of look through my fingers to see what trolling activity you get up to next. Yes, thank you for coming, and knock it off.

Well, thank you guys so much for having me. I absolutely will not knock it off, and I will wait until the next platform that comes around, and I will do even worse.

Thanks, Katie. Thanks, Katie.

300 sensors.

Over a million data points per second.

How does F1 update their fans with every stat in real time?

AWS is how.

From fastest laps to strategy calls, AWS puts fans in the pit.

It's not just racing, it's data-driven innovation at 200 miles per hour.

AWS is how leading businesses power next-level innovation.

AI companies have unique business models, each with distinct billing needs.

Stripe is the go-to choice for AI leaders, from early-stage startups to scaled enterprises.

With Stripe billing, you can support any business model and easily align your monetization strategy with customer value.

Join the ranks of 78% of the Forbes AI50 and millions of businesses worldwide that trust Stripe to help them build more profitable, scalable businesses.

Discover more at stripe.com.

Know the feeling when AI turns from tool to teammate?

If you're Rovo, you know.

With Rovo, you can streamline your workflow and power up your team's productivity.

Find what you need in a snap with RovoSearch.

Connect Rovo to your favorite SaaS apps to get the personalized context you need.

And Rovo is already built into Jira and Confluence.

Discover Rovo by Atlassian and streamline your workflow with AI-powered search, chat, and agents.

Get started with Rovo, your new AI teammate, at rovo.com.

Hard Fork is produced by Whitney Jones and Rachel Cohn.

We're edited by Jen Poyant.

We're fact-checked by Will Peischel.

Today's show is engineered by Alyssa Moxley.

Original music by Marion Lozano, Diane Wong, and Dan Powell.

Video production by Sawyer Roquet, Pat Gunther, Jake Nicol, and Chris Schott.

You can watch this full episode on YouTube at youtube.com/hardfork.

Special thanks to Paula Schumann, Hui Wing Tam, Dahlia Haddad, and Jeffrey Miranda.

You can email us as always at hardfork at nytimes.com.

Send us your trillion-dollar investment ideas.

This episode is sponsored by Morgan Stanley's Thoughts on the Market.

Today's financial markets move fast.

Morgan Stanley moves faster with their daily podcast, Thoughts on the Market.

Thoughts on the Market covers daily trends across the global investment landscape with actionable insights from Morgan Stanley's leading economists and strategists.

And with most episodes under five minutes long, staying informed has never been easier.

Listen and subscribe to Thoughts on the Market, wherever you get your podcasts.