OpenAI Calls a ‘Code Red’ + Which Model Should I Use? + The Hard Fork Review of Slop

“For OpenAI to realize its ambitions, it is not going to be enough for them to make a model that is as good as Gemini 3. They need to be able to leapfrog it again.”


Runtime: 58m

Transcript

This podcast is supported by Bank of America Private Bank.

Your ambition leaves an impression. What you do next can leave a legacy.
At Bank of America Private Bank, our wealth and business strategies can help take your ambition to the next level.

Whatever your passion, unlock more powerful possibilities at privatebank.bankofamerica.com. What would you like the power to do? Bank of America, official bank of the FIFA World Cup 2026.

Bank of America Private Bank is a division of Bank of America, member FDIC, and a wholly owned subsidiary of Bank of America Corporation.

Casey, how's it going? Good morning, Kevin.

I am doing well, as well as can be expected, given that I had a colonoscopy yesterday. Yes, I heard about this.
How did it go? Well, I got a clean bill of health.

I will say, though, there was one moment during the procedure that was sort of alarming to me. What was that?

Well, I had met, you know, the various nurses and the doctors, and everyone was so friendly, you know, and was introducing themselves.

But as they sort of put in the medicine to make me kind of go under, I noticed that there was one medical professional who was against the wall and she was scrolling through her phone.

And the last thought I had before I went under was, I really hope she's not looking up how to do a colonoscopy. You know what I mean?

Because she kind of had that look on her face, like, I need to jog my memory about what I'm doing here. And I thought, oh, God, I hope she already knows.
No, she was on TikTok.

She was live streaming your colonoscopy to her hundreds of thousands of followers. I had that thought.

I was like, could we get more or fewer viewers to the YouTube channel if we went live with my colonoscopy? They're always saying, be authentic. Exactly.
Bring your whole self to work.

Or yourself whole.

To work.

I'm Kevin Roose, tech columnist at The New York Times. I'm Casey Newton from Platformer.
And this is Hard Fork.

This week, OpenAI declares a code red: why the competitive landscape in AI has Sam Altman scared. Then, how we're using all the latest AI models.

And finally, we're heading back to the theater for the Hard Fork review of Slop.

Well, Casey, do you feel a little nervous energy, a certain frisson of tension in the air crackling through San Francisco these days? Absolutely, Kevin.

There's a chill on the back of my neck and an eerie silence as I walk down the streets of the mission. Yes, well, that is because OpenAI is in a code red.
Code red.

Now, as you will remember, a couple years ago on this show, we talked to Sundar Pichai of Google when they were in their own sort of code red period, which he said was not actually called code red, but someone over there was using that term.

And that was sort of when they were on their heels, taken aback by the surprise success of ChatGPT, and they were racing to get their own version of a chatbot out.

And they were sort of in a corporate state of panic about this. That was their code red.
Yes. But now we have a new code red, and it is at OpenAI.

Sam Altman reportedly declared a code red this week about some worrying trends they're seeing with ChatGPT usage. And I think in general,

beyond just OpenAI, there's just been a lot happening at the frontier AI companies that we should talk about.

A lot of new models coming out, a lot of discussions about the sort of state of AI right now. So I thought today we should just kind of get into it all, starting with this code red.

Yeah, let's talk about it because, you know, for listeners who may be curious, a code red is the second most dire state of emergency a company can declare, with number one, of course, being a Baja Blast.

So code red is just below that. Yes.
Yes, if we get to Baja Blast, I'm ducking and covering. Yeah, me too.
I'm leaving the city. I'm heading to the bunker.

So because this is a segment about AI, we should make our AI disclosures. I work for the New York Times, which is suing OpenAI and Microsoft over alleged copyright violations.

And my boyfriend works at Anthropic. Okay, so let's start with OpenAI.
Casey, what was in this code red memo? Yeah, so this was reported by The Information.

Sam apparently sent employees a memo on Monday. And interestingly, Kevin, your colleague Kashmir Hill had reported recently that OpenAI had declared a code orange.

So they are moving up the ladder of distress here.

But the upshot from this memo is that OpenAI is going to start devoting more resources immediately toward improving ChatGPT.

And they're going to be delaying work on some of the other projects they had going, including ads, AI agents, and Pulse, which is this daily digest feature that they launched a couple months ago.

So on one hand, it seems sort of obvious to me that they would be wanting to put a lot of resources toward improving ChatGPT. Like that would sort of seem to be the norm to me.

But on the other hand, if this actually does result in them pulling engineers off of other projects, well, maybe that shows that they are taking this seriously. Yeah.

Casey, why are they doing this right now? Why are they feeling so much urgency around bringing people back to ChatGPT?

I think there are two big reasons, Kevin, and their names are Gemini 3 and Opus 4.5.

Over the past few weeks, we have seen Google and Anthropic both release state-of-the-art models that in various ways challenge some of the core pillars of what OpenAI is trying to do. Yeah.

We know that just a few weeks ago, Sam had sent another memo to the OpenAI team on the eve of Gemini 3 coming out saying, hey, we may be heading into some rough waters here.

The belief was that Gemini 3 was going to be so good that it was going to cut into OpenAI's growth, both on the user side and the revenue side.

And that creates all sorts of problems for OpenAI, right?

This is a massively leveraged company, wholly dependent on subscription revenue, that is trying to build out a consumer product while competing, in Google, against one of the biggest and richest companies in the world.

So if you get to a point where Google's models are truly better and the costs of switching are quite low, then things start to get very difficult for OpenAI very quickly.

Yeah, I think that's really important. And I want to just underscore that because I think what's happening here is a combination of things.

One is that I think for a while, OpenAI and to a lesser extent, Anthropic were both sort of surviving on this moat of the model, right?

They had the best models in the world, and that was kind of what separated them from the rest of the pack.

If you wanted to work with a world-class model, if you were doing some kind of agentic software development or trying to do a lot of vibe coding or something, you really wanted the smartest model possible, and you were willing to pay $20 or $200, or in a business's case a couple thousand dollars a month, for access to that best model, because your alternative was Llama or Gemini or one of these other sort of second-rate models, and those models were not that good.

But Gemini, as we talked about with Demis and Josh on the show a couple of weeks ago, is good now. I would say it's at least as good as ChatGPT at many of the tasks I've been trying it on.

And it's really hard to imagine competing with Google, a company that last quarter did $100 billion in revenue.

Like this is a company that has more resources and money and engineering talent than anyone else.

And do you really think they're like sort of worried about how many $20 a month subscriptions they're selling? No.

Once their models are good, they're going to start subsidizing the hell out of them. And they're going to drive the cost very low.

And they're going to try to steal market share. And I think that's the sort of phase they're in right now: they're realizing, oh, we've caught up, we have something compelling, and we can just kind of drive these other companies' margins down by offering our thing very cheaply.

Yeah. Well, so let's talk then about a few other details from this memo and the kinds of improvements to ChatGPT that OpenAI now says it is going to be working on. The memo includes personalization features, so further customizing how ChatGPT interacts with you, and improving the behavior of the model.

I'm not quite sure what that means. Although one thing it did say was they want ChatGPT to refuse you less, and then improving speed and reliability.

I have to say, these are things that I just assume that OpenAI is always working on anyway, right? Like these don't feel like particularly big swings. They don't feel like a giant change in direction.

What they do seem to me, though, Kevin, is like the Facebook playbook, which is something we've been talking about on the show for a while now.

This is a company that has brought on a lot of people who used to work at Meta. And what kinds of things do they do over at Meta?

Well, they try to create a perfectly personalized custom feed for you. They try to give you exactly what you want, and they don't want to refuse anything that you ask for, right?

So this seems, in other words, like they are going for engagement first and foremost. And I think that has a bunch of interesting implications.
Yeah.

So I think it's too early to say that like OpenAI is screwed here, that ChatGPT is in a bad place. They're obviously still the sort of world leader.
They have the most name recognition.

I think they've gotten a kind of ubiquity among AI power users that is going to be very hard to unseat. What do you think about this decision? What do you think about this direction for OpenAI?

Do you think that they are right to be worried?

I think, look, if OpenAI flames out, all of us will be able to look back and identify 15 huge mistakes that they made, right?

It is just as possible that some of the same bets that they are making now may pay off. And right now we're in this moment of uncertainty.

But if you want to take the bear case, which a lot of people are making this week, here's what you can say. This company is massively leveraged, right?

They've made a ton of spending commitments into the trillions of dollars that rely on revenue that is not close to materializing.

And if you look at their product organization, they are not focused at all. They are trying a little bit of anything and everything.

One reason why we've talked about Sora, their video generator, so much on the show, is it seemed like such a weird departure from their core focus, right?

So you have this company that has its fingers in many, many different pots. Most of them are not generating revenue.
It has these massive spending commitments.

And now all of a sudden, some of the other labs seem like their models are leapfrogging them. So, yeah, you can take all of those facts and paint a potentially dire picture about the future of OpenAI.

Yeah, there's some interesting discourse this week. Someone was pointing out that OpenAI has not had a successful pre-training run in quite a while.

This was something that Sam actually brought up in one of his Slack messages to staff a couple weeks ago: they feel like Gemini 3 is a pretty amazing pre-train, which is the first step in the process of building a large language model, when you're feeding it a bunch of information.

And I think that the sort of conventional wisdom among like AI heads has been that like pre-training is kind of hitting a point of diminishing returns, right?

That we've sort of sucked up all the data, fed it into the models, made these models as big and efficient as they can be, and all of this sort of low-hanging fruit now is in the post-training phase.
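(To make that distinction concrete: pre-training is just next-token prediction over raw text, while post-training starts from those weights and optimizes for something else, like instruction-following. Here is a minimal sketch, assuming PyTorch; the toy model and random token IDs are illustrative stand-ins, not how any lab actually trains.)

```python
# A minimal sketch of pre-training as next-token prediction, assuming PyTorch.
# The tiny model and random "tokens" are illustrative stand-ins only.
import torch
import torch.nn.functional as F

VOCAB, DIM = 50_000, 512

model = torch.nn.Sequential(
    torch.nn.Embedding(VOCAB, DIM),   # stand-in for a real transformer stack
    torch.nn.Linear(DIM, VOCAB),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

def pretrain_step(token_ids: torch.Tensor) -> float:
    """One pre-training step: predict token t+1 from tokens up to t."""
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]
    logits = model(inputs)                                   # (batch, seq, vocab)
    loss = F.cross_entropy(logits.reshape(-1, VOCAB), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Post-training (instruction tuning, RLHF, etc.) would reuse these weights but
# optimize a different objective, e.g. preferred responses rather than raw text.
fake_batch = torch.randint(0, VOCAB, (8, 128))  # fake token IDs
print(f"pre-training loss: {pretrain_step(fake_batch):.2f}")
```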

So I think what we're seeing now is that OpenAI realizes that it has a problem with pre-training specifically.

And that is harder to fix than post-training. It's expensive.
You have to redo these training runs. You have to find whatever's messing up the pre-trains.

But that is, I think, where they are going to be focusing their research energy. Yeah, definitely something that OpenAI is concerned about.

So thanks to these memos that have been leaking out, we also know that OpenAI is training more models that it thinks will be better, will sort of catch up to the frontier or, you know, advance the frontier in some way.

And one of them is called Garlic and another one is called Shallotpeat. So make of that what you will.
They have a real allium thing going on. They sure do.

They're getting very close to being able to make a mirepoix.

I know what that is. Put a little carrot and celery.
Yes. You got a stew going.

Now, what do they say about those models, though?

Because I believe we saw some reporting in the information that said that at least they believe that this next series of models will bring them back to or maybe even a bit ahead of the state of the art.

Yeah, I've been talking to some folks over there.

They seem optimistic about these models, but it's also not clear yet whether they will be as good as they hope they will.

All kinds of things can get messed up in the late stages of training a model. And so I guess we'll just have to wait and see.

Let me add one more point about all this, though, which I think is important, which is the mere fact that OpenAI's current focus is just kind of clawing its way back to parity with its biggest rivals is a big part of the problem here.

Think about the position that OpenAI was in just about three years ago this week, just days after the launch of ChatGPT. The world was their oyster, right?

They had this massive head start over everyone, and they had been able to maintain that lead, even in the face of like historic turmoil, including the ousting of their CEO and then bringing him back, right?

And I think for months, I was honestly astonished that they had been able to release feature after feature that was keeping them so far ahead of the competition.

Now it does seem like the first moment after the release of ChatGPT, where maybe they're just starting to fall a little bit behind.

And Kevin, I have to say, for OpenAI to realize its ambitions, it is not going to be enough for them to make a model that is as good as Gemini 3. They need to be able to leapfrog it again.
Right.

They are not going to win by tying for first place. That's right.
That's right. All right.

Let's talk about some of these other AI models and some of these other companies that have been coming out with new things recently.

And I want to start with Gemini 3, a model that we've mentioned a couple times already today. We talked with Demis and Josh about it on our bonus show on the day that it came out.

We've now had a couple weeks to play around with the model and start using it. And I want to know your impressions.

So I think the number one observation I have about Gemini 3 is that it is just faster than the competition. And this matters a lot, right?

Often when I'm finished writing a column, I will ask both ChatGPT and Gemini to fact check it. ChatGPT's fact checking is usually more thorough and better than Gemini 3's even today.

But Gemini 3 is a lot faster. And in AI, speed matters a lot.
And the faster something is, the more often you use it. So I think that's been really powerful.

Now, do you fact-check the fact check? Do you have to, like, go in and sort of manually see if what the models are telling you is correct? Yeah.

So what I'll do is, you know, it'll basically just say, hey, you got this date wrong, or you got this name wrong. And then I go look it up myself.

And, you know, nine times out of 10, they have like caught my mistake. So I'm not just saying, like, tell me that everything in here is perfect.

I'm saying, can you find something in here that you think is wrong? And by the way, you know, think about where this was a year ago.

A year ago, we were checking the models for hallucinations. Now they're checking us for hallucinations.
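(For readers curious what that workflow could look like in code, here is a rough sketch, assuming the official openai and google-generativeai Python SDKs. The prompt wording and model identifiers are hypothetical stand-ins, not what Casey actually uses, and, as he says, every flagged item still gets verified by hand.)

```python
# A sketch of a two-model fact-checking pass, assuming the official
# openai and google-generativeai SDKs; model names are hypothetical.
import os
from openai import OpenAI
import google.generativeai as genai

PROMPT = (
    "You are fact-checking a draft column. Don't tell me it's fine; "
    "list specific names, dates, and claims you believe are wrong.\n\n"
)

def fact_check(draft: str) -> dict[str, str]:
    gpt_client = OpenAI()  # reads OPENAI_API_KEY from the environment
    gpt = gpt_client.chat.completions.create(
        model="gpt-5.1",  # hypothetical model identifier
        messages=[{"role": "user", "content": PROMPT + draft}],
    )
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    gemini = genai.GenerativeModel("gemini-3-pro")  # hypothetical identifier
    gem = gemini.generate_content(PROMPT + draft)
    # Every flagged item still gets looked up by hand, as described above.
    return {"chatgpt": gpt.choices[0].message.content, "gemini": gem.text}
```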

It's really true. But this is something that they're quite good at. Totally. Yeah, I really like Gemini. I've been a sort of quiet Gemini stan for a while now. I really liked 2.5, the model that preceded this, and I have been using Gemini as one of my two kind of daily-driver models.

We'll talk a little bit later about sort of how we're using this stuff, but I think this is a really powerful model. I've been doing some research for the book that I'm working on.

Gemini has been extremely helpful with that. Things like organizing timelines, pulling up research papers, putting things in sequence, finding things within large documents that I'm sharing with it.

I think this is just a really good model. And to me, it's not as interesting or fun to talk to as some of the other models.
I don't feel like it has much of a personality. No.

But it is a workhorse and it is fast. You're right.
And this is not even the fast version. They're going to be coming out with a flash version of this model at some point.
So I'm excited for that.

And I think they really cooked with Gemini 3. Yeah.
So on the occasion of the release, Google said that about 650 million people a month are now using Gemini.

OpenAI annoyingly reports weekly user numbers. They say they have more than 800 million weekly users of ChatGPT.
Interestingly, neither of these guys is reporting daily numbers.

And I think that's because most people still are not using AI daily, right? So that's why we're sort of in this weird middle zone. But here's the thing.

If Gemini has gone from, you know, zero to 650 million in this short of a time, there is every reason to believe that they can catch open AI, right?

And that even though ChatGPT is synonymous with AI for a lot of people, it is just turning out maybe not to matter as much as you might think. Right.

And I'm always a little suspicious of these Gemini numbers because I'm not sure whether they're just counting sort of people who sort of proactively go to the Gemini website or the Gemini app, or whether they're also counting people who like click on the little Gemini thing inside Google Docs or Gmail or something.

To me, that like indicates a little bit less intent, and maybe I take those numbers a little less seriously.

But that also, on the flip side of that, is like Google has this massive distribution advantage, right?

It does not have to convince people to go to a website that they are not used to going to or download a new app. It is already on billions of phones and devices.

People already have Google as their default homepage. They're already using Gmail.
They're already using all these other Google products.

And I think in a world where models are becoming more commoditized, or at least there are sort of more labs at the front of the pack, distribution is going to play a much bigger role. Absolutely.

Okay, now let's turn to Anthropic and their new release, Claude Opus 4.5. Casey, have you spent time playing around with this model? I have.

And I think this is a really, really good one. Now, famously, my boyfriend does work at Anthropic, so you should feel free to apply an 80% discount rate to everything that I'm about to say.

But here's what I'll tell you. Before 4.5, I was not really using Claude on a daily basis.
I was trying it every once in a while to see what it can do, as I do with all other models.

But for me, the daily drivers were absolutely ChatGPT and Gemini. Those were the most useful models.

When Opus 4.5 came out, I put it through a test that I've been giving every model forever, which is I would give it some sort of unpublished study that I might want to write a story about.

And I would say, write a column about this study in the style of Casey's Platformer, just to see what would happen. To this day, if you do this with ChatGPT 5.1, not good at all.

It just gives you a bunch of bullet points, stuff I would never do.

If you give it to Gemini 3, it kind of sort of is structured like something that I might write, but has a lot of obvious AI tells.

I did this for the first time with Opus 4.5, and it honestly sent a chill down my spine, because for the first time I was looking at sentences that looked like I could have written them.

In particular, it wrote a conclusion that I was like, I would write a conclusion that looks like that. So we talked a lot earlier this year about the concept of style transfer.

That was the Studio Ghibli moment, where all of a sudden you could make any image look like this, you know, Japanese anime. It was really kind of fun.

I've been waiting for the moment when that happens in text. This was a moment where I was like, oh my God, it is starting to happen, Kevin.

So that was the first thing I saw Opus 4.5 do that made me say, okay, they may have something here.
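(A rough sketch of what that test looks like in code, assuming the official anthropic Python SDK; the model string, prompt, and helper function are illustrative stand-ins, not Casey's actual setup.)

```python
# A sketch of the style-transfer test described above, assuming the
# official anthropic SDK; the model string and prompt are illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def column_in_my_style(study_text: str, writing_samples: str) -> str:
    """Ask the model to draft a column about an unpublished study in a given voice."""
    response = client.messages.create(
        model="claude-opus-4-5",  # hypothetical identifier
        max_tokens=2000,
        system="You draft columns in the voice of the writing samples provided.",
        messages=[{
            "role": "user",
            "content": (
                f"Writing samples:\n{writing_samples}\n\n"
                f"Unpublished study:\n{study_text}\n\n"
                "Write a column about this study in the style of the samples."
            ),
        }],
    )
    return response.content[0].text
```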

Yeah, I am not conflicted by being in a romantic relationship with anyone who works at Anthropic. So maybe apply less of a discount rate to what I'm about to say, but I love this model.

I am having so much fun with Claude Opus 4.5. It is one of my two daily drivers along with Gemini 3.
I've been using it for all kinds of book research, for preparing for podcasts and interviews.

I've been talking to it about all kinds of, you know, family things and medical things and parenting things.

And I just think there's like something special about this model that I have not felt since a previous version of Claude, Claude 3.5 Sonnet (New),

which was to that point my favorite model to talk to. And this is sort of bringing back that same feeling of like, oh my God, like this is an incredible experience talking to this thing.

Now, can you say, what do we know about what went into the making of 4.5 that might explain some of these gains that you and I are both feeling?

So interestingly, I think Anthropic actually underhyped this release. They didn't do a big like splashy thing about it.
They made some claims about how good it is at coding and agentic tasks like computer use. They also said that it was really good at deep research, and they called it the most robustly aligned model they've ever released.

But I think they really wanted to let the model do the talking and people are kind of amazed by this model.

A recent Hard Fork guest, Dean Ball, had a great post about Claude Opus 4.5 in which he said, this model is a beautiful machine, among the most beautiful I had ever encountered.

I won't go that far, but I will say that there are these sort of intangible, hard-to-quantify properties of models that you just kind of get a sense of when you use them a lot.

Yeah, I think that in particular, the Claude models have always excelled at having a kind of empathy for the user that stops short of sycophancy, right?

It felt like you were talking to somebody a little bit more like a therapist where there was like some sort of remove, and yet you also sort of felt like you were interacting with something that, you know, was like taking you very, very seriously and was like trying to treat you warmly.

And that just makes Opus, I think, good for a lot of things. I will say, recently I had this procedure. I've probably talked about it too much now, but here's the thing.

When you're about to have a colonoscopy, or maybe let's say you're going through the preparations for a colonoscopy, many gross things are happening in your body.

Your boyfriend doesn't need to know about them. Your friends don't want you to call them asking, you know, questions.
But you go to this model and you say, this specific thing just happened to me.

What do you think about that? And you just get back a response that is very warm and humane. And so for that reason, I thought it was really good.
Yeah.

I appreciate about Claude that it will tell me when I'm being ridiculous.

Like the other night, I was up way too late, asking it some, you know, banal question about Christmas shopping or something.

And at one point, it was just like, Kevin, it's after midnight. Go to bed.
Wow.

That gets at something that I think is really interesting about the Claude models. And I think it opens up what should be something fascinating to watch over the next year.

When you look at the Google and OpenAI models, those are in some large sense optimizing for engagement, right? We know they want you coming back to them every day.

Make this your sort of primary driver. We also know that Google is already testing ads in AI. We believe that OpenAI is going to launch ads as well.

I do think that kind of changes and probably perverts the incentives for what kind of AI systems you're going to have. I'm pretty confident Claude is just not going to do that.

I don't think ads are going to be in Claude in the next year. I don't think it's going to become an e-commerce engine.
It's just kind of going to stay the way that it is.

And so I think that gives Claude this really interesting opportunity in a world where everyone else is pushing for engagement, commerce, monetization. Anthropic's model is just very different.

They're building for the enterprise. Like claude.ai is almost an afterthought for them, right?

Because what they really want to do is sell an API to a company and charge them millions of dollars to do agentic coding. So Claude winds up being this kind of, I don't know, bonus child that they have that is really good at a bunch of things.

And I just kind of don't think it's at the same risk of being ruined in the next year that the other ones are.

Yeah, I mean, I think there is an interesting tension that you're identifying, which is that, on one hand, Anthropic, of the big frontier labs, is the most focused on these enterprise work use cases, specifically coding.

And that's where they make most of their money. That's like the fastest growing part of their business.

They're not really competing in the consumer space anymore, because I think they realize, you know, to their credit, that ChatGPT just has way more users and way more sort of purchase among ordinary users.

Yeah, they lost. They lost.
And

I think that could incentivize them over time to like make this thing more boring and less interesting to talk to, just sort of make it like a perfect, efficient coding coworker and to stop investing in some of this other sort of more soft like model behavior stuff.

But I really hope they don't because it is a joy to talk to an AI model that actually feels like it has, I don't want to say like a consistent personality, but like I really liked the way Dean Ball put it in his essay.

He said, Claude Opus 4.5 just feels like it's playing in the same musical key all the time, right? Like you can open a new chat with it. You can talk to it about something completely different.

And what comes back at you feels like it comes from the same place sort of almost philosophically as the thing that you were talking to about something completely different.

I mean, I think they are going to keep going in this direction because what are they trying to build? They're trying to build an AI coworker, right?

And they want that coworker to be humane and to play in the same key, you know, every time that you speak with it.

So I think you'll probably see them go less into personalization than these other companies do. So this is just really interesting.

Like, you actually just have two very different points of view about what an AI tool should be. And we're going to get to watch that play out next year.
Should we talk about the Soul Doc?

Let's talk about the Soul Doc. Okay.
So, a lot of the chatter about Opus 4.5 in the past week has been about what's come to be known as the Soul document. This was S-O-U-L, not S-O-L-E.

Or S-O-E-U-L, however you spell the city in South Korea. That's right.

I think you got it. No, S-E-O-U-L.
That's right. Yes.

This is something that actually came out because these kind of, you know, internet commenters were sort of... Freaks.
Freaks, yes.

These people who like love to jailbreak new models and sort of figure out all the hidden Easter eggs inside of them had discovered or claimed to have discovered this thing.

It wasn't exactly a system prompt, which is the thing that you tell the model before it starts responding to users. It was actually in the weights of the model.

So like part of the sort of pre-training process.

And it was this kind of fascinating document about Claude, sort of explaining what Claude is and what Anthropic is, and this weird position they occupy in the AI landscape, where they're very worried about the dangerous effects of this technology but are also racing to build it. It was basically a kind of biography of Claude and Anthropic, but inside the weights of the model.

And at first people didn't really know like, is this real or is this just sort of being hallucinated by the model?

Models are notoriously unreliable when you ask them about themselves and their internal workings.

But on Monday, Amanda Askell from Anthropic confirmed that this was based on a real document and this was part of Claude's training process. She said they are still working on it.

They intend to release more details about it soon. But this has become endearingly known within Anthropic as the soul doc.
And what a fascinating thing. It is a fascinating thing.

I mean, look, this is a company that fully believes the thing that they are making is going to become sentient, conscious, and will need to be treated with all the respect that you would afford another human being.

So they are sort of way out on a limb compared to their competitors getting ready for that.

And it really tells you a lot about the people that work at Anthropic that they are building soul docs for their AI models. I mean, I think it tells you what is coming.

I recently went to an AI consciousness conference, which was fascinating, and I'm going to be writing about it in my book.

There are now the seeds of this conversation happening among people at the big labs, who I think do understand what these systems are becoming. We're going to get hammered by the anti-anthropomorphization people for everything we're about to say, but they increasingly see these things as having some kind of inner awareness, some kind of ability to reflect on things that happen to them during their training processes, maybe some consistent emotions that they tend to express.

And there are lots of outstanding questions. I am not at all certain about what my sort of p(consciousness) is.

I think it's very low right now, but like people in serious jobs at serious companies are starting to think about the possibility, however remote, that these things are or may soon be conscious.

And I just think that's fascinating. Yeah, I agree with that.

What kind of threat is Anthropic, strategically, to OpenAI right now? I mean, I think right now it's primarily in the enterprise.

Like at the start of this year, Anthropic had less than $1 billion in annualized revenue.

As it comes to the close of the year, it has said that it is expecting about $9 billion in annualized revenue. So it did that by selling into the enterprise.

If you are a developer or you're a big consulting firm and you want to create these agentic workflows, most companies that are buying this software are buying it from Anthropic, or I should say maybe a plurality of them are buying it from Anthropic.

And so Anthropic has just become one of the fastest growing startups of all time because they've just created this massive opportunity.

If they were not on the chessboard, that $10 billion would probably be going to somebody else. And that would probably be some combination of OpenAI and Google, right?

So that's a significant amount of revenue that OpenAI is losing out on this year. I believe OpenAI is projecting to have about $20 billion in revenue this year.

So you can imagine how different the picture would look for them if they've been able to capture the enterprise market. And increasingly, you know, Anthropic is winning it.

There's a weird sense in which ChatGPT was actually the best thing that could have happened to both Google and Anthropic. I think at the time ChatGPT came out, it was this huge success.
It was like everyone was talking about it.

It sort of took AI into this new era. And I think for Google, the reason that was helpful is because it was the thing that woke them up, right?

They had been, you know, tearing themselves apart with all this bureaucracy and infighting, and they couldn't really get their act together for various reasons.

And ChatGPT sort of forced them to focus and bear down and like become more efficient and better at shipping these things.

And for Anthropic, it was sort of like, well, I guess we don't have to make a consumer chatbot now because that lane is already full.

And so I think they were able to kind of pivot into this interesting new direction that I think ended up being better for them than what they would have gotten if they had tried to compete with ChatGPT.

Yep, good take. Casey, is there any other big news in the AI world from the past week or two that we should talk about? I mean, maybe just real quickly, we've seen a couple of interesting departures.

Yann LeCun finally left Meta. I think everybody has been waiting for that ever since they installed Alexandr Wang as the head of Meta's superintelligence division.

Yeah, it's hard to be a Turing Award-winning godfather of AI who is reporting to a guy in his 20s. Yann is apparently going to be doing a new startup that is going to build world models.

Yann LeCun is one of the most famous LLM skeptics out there. He says that you cannot get to AGI using the approach that all the other big labs are using right now.

So it'll definitely be interesting to see what he comes up with. The other big move is John Giannandrea, who was the longtime head of AI at Apple.
He is stepping down from his position.

And that also, I think, was long expected because of all of the problems that Apple has had getting its AI efforts off the ground.

And in fact, Kevin, I think the fact that Giannandrea is leaving might just be a sign that Apple is low-key giving up on AI overall.

We know that they've signed a deal with Google to make Gemini the kind of core of their AI efforts. Maybe this just becomes the kind of thing where they don't have to build it.

They just buy it for cheap from someone else. They're reportedly only going to pay Google a billion dollars a year, something that they can very easily afford.
And maybe they'll be fine.

Yeah, I don't know how to read this exactly. I mean, you could read this as like they're giving up on AI,

but they also just brought in a guy from Microsoft to be their new head of AI. Let me tell you something.
When you bring in a guy from Microsoft, that is a way that you're giving up on AI.

No, actually, he was at Google for many more years before that. He was only at Microsoft for like four months, and there's an interesting story there that will have to be told someday.

But basically, I think you can read it as they are giving up, or they are sort of rebooting their AI efforts. They're saying, what we've been doing is not working.
We're going to bring in a new team.

We're going to start fresh and we're going to try to give this thing a go. Bro, if you're starting from scratch in December 2025 on your AI program, you're cooked.

You truly, no one has ever been more cooked. Come on.

When we come back, we'll continue this conversation and tell you how we've been using the latest AI models.

This podcast is supported by Bank of America Private Bank.

Your ambition leaves an impression. What you do next can leave a legacy.
At Bank of America Private Bank, our wealth and business strategies can help take your ambition to the next level.

Whatever your passion, unlock more powerful possibilities at privatebank.bankofamerica.com. What would you like the power to do? Bank of America, official bank of the FIFA World Cup 2026.

Bank of America Private Bank is a division of Bank of America, member FDIC, and a wholly owned subsidiary of Bank of America Corporation.

Know the feeling when AI turns from tool to teammate?

If you're Rovo, you know. With Rovo, you can streamline your workflow and power up your team's productivity.
Find what you need in a snap with Rovo search.

Connect Rovo to your favorite SaaS apps to get the personalized context you need. And Rovo is already built into Jira and Confluence.

Discover Rovo by Atlassian and streamline your workflow with AI-powered search, chat, and agents. Get started with Rovo, your new AI teammate, at rovo.com.

The University of Michigan was made for moments like this.

When facts are questioned, when division deepens, when the role of higher education is on trial, look to the leaders and best turning a public investment into the public good.

From using AI to close digital divides to turning climate risk into resilience, from leading medical innovation to making mental health care more accessible. Wherever we go, progress follows.

For answers, for action, for all of us, look to Michigan. See more solutions at umich.edu/look.

Okay, so there's lots happening here in the industry in Silicon Valley and San Francisco, but I want to end on a practical question that we get a lot from listeners to this show, which is like, look, what should I be using right now?

What is the best AI model? What is the thing that will give me the most advantages and annoy me the least? Like, if I can only subscribe to one model, or maybe two, what should they be?

So I don't think there is a great one-size-fits-all answer to that question, Kevin.

I think I could say confidently that you can use either ChatGPT, Gemini, or Claude for many things and probably be fine.

And there's probably some vast set of use cases for which all three of those models are roughly equivalent. Okay.
So that's going to be my answer for like the 80th percentile of our listeners, right?

But let's say you're moving up into the top 20 percent, like our top AI users, the real freaks out there. Okay.

Now I'm going to tell you, you are just going to want to experiment with these models all of the time.

I mean, just within the past few months, we've seen each of these companies release a very capable new model. And you want me to tell this top 20 percent, oh no, just stick with one of them forever?

No, you have to be mixing it up. Again, I just used Claude, a model that I had not found very useful at work, upon the release of its new model. And I said, oh my gosh, okay, the game just shifted again.
I want to bring up this metaphor I've been thinking about over the past day or so.

In 2023, the sci-fi writer Ted Chiang wrote this widely read and shared essay in The New Yorker called "ChatGPT Is a Blurry JPEG of the Web." Do you remember that? Yes.

And the argument that it made was a critique of ChatGPT, saying this thing really kind of sucks because it's just an amalgamation of everything that has ever been put on the internet.

There's kind of no soul to it, right?

But I thought about that metaphor of the blurry JPEG because when I used Opus this week and when I used Gemini 3 the week before, I had that sensation of, you know, when you're loading up a web page and it is loading up a JPEG and at first it doesn't load it in full resolution.

It kind of gives you that blurry version first. And then a few seconds goes by and then it shows you the higher resolution.
We are in a moment where the AI is getting higher resolution.

That was the feeling that I had when Claude was able to just create something that was writing sentences that for the first time felt like me.

It was like, okay, the blurry JPEG is getting a touch less blurry.

And so that's why I can't give you a single answer to which model should I use, because I think the answer to that is just going to be changing consistently over the next six months to a year.

And if you really care about this stuff, you're just going to have to try new things. Yeah, I mean, to your point, it is amazing how quickly this stuff is moving.

I was writing a book chapter the other day about the launch of ChatGPT.

And so I was going back and looking at some of the like initial reactions that people had three years ago to the launch of this product. And it was so bad.
It was so dumb by today's standards.

I could not believe how easily amazed people were by the fact that this thing could just like string together plausible sentences on any given topic.

And like we should say for the time, that was amazing. No chatbot had ever done that.

But looking back, just even with three years perspective, it is just incredible how much my personal expectations of these tools have been raised.

I would say, like, in the process of writing this book, these tools have probably saved me a year of my life, like a year that I would have had to spend going to libraries, pulling clips, doing research, stitching together ideas. Like, it is implausible to me that I would ever do any project like this again without these tools.
And I think a lot of people are feeling similarly in their own work. Yeah.

You know, I'll just say, I don't know if this fits, but there's this thought that I have a lot, because there's so much AI criticism out there.

There's a lot of, you know, anger, hostility, skepticism. I think a lot of it is warranted. We talk about it a lot on the show. But I've come to believe that there are fundamentally two different views of AI. There is what I call the California view of AI, which is: what can it do? And then there's what I call the New York view of AI, which is: what can't it do? And you see the what-can't-it-do view on social media a lot, you know, whenever an AI fails at some simple test, whenever it makes some terrible mistake, and we say, aha, you know, screw this thing. And then you have folks like us who I think are a little bit more impressed at what it can do.

The release of the models over the past few weeks has been a moment where I'm just glad that I have a default view of what can it do because it is changing people's workflows, jobs, lives in real time.

And I think that if your default is what can't it do, you're just missing a huge part of the story. Totally.

I have decided one principle that I am going to apply to my life going forward is that I'm not going to listen to opinions about AI from people who do not use AI.

Like, I think that if you are not grounded in having firsthand direct experience with these models for at least like, I don't know, five hours, 10 hours, something like that with like the newest models, you actually are talking about something that no longer exists.

Yeah, you're a historian. So that's one side of it.
It's just that these things keep getting better.

At the same time, I want to get your opinion on kind of this other like long timelines view that is coming into vogue in the San Francisco AI community.

Dwarkesh Patel recently did an interview with Ilya Sutskever, the famous AI researcher,

and they talked a lot about how there's this kind of, you know, not necessarily a slowdown happening, but these models are not as useful as people want them to be. They are not out there adding trillions of dollars to GDP. Companies are not able to fire half their workers and replace them with AI yet. And so that view is kind of

springing up within the San Francisco AI crowd at the same time as, like, I think the models actually are getting better at the things that you and I care about. So, how do you reconcile those things?

I think both can be true, that we are still on a trajectory where the first likeliest thing to happen is that AI will just solve coding and software engineering.

We will still have software engineers, but they will not be writing code by hand.

On our predictions episode later this month, one of my predictions may be that by the end of 2026, coding is just effectively solved.

This is just something that a lot of tools, even free ones, can kind of just do for you. But there's still a lot of other jobs out there.
There is still a lot of translation left to be done.

And not every job has as defined a rule set as coding does. So I think it can both be true that models are advancing in a way that is bringing us closer to automating software engineering.

And if you're an accountant, a lawyer, a doctor, AI still is just kind of something that is only momentarily useful.

And I think the question will be, what will it take to generalize whatever is needed to solve coding for every other job? And how long will that take? Yeah, I think that's right.

I think the race is still very much on. The models are still very much getting better.

It remains to be seen how soon or quickly that will kind of diffuse into products that actually make life look very different for people like you and me and for coders and lawyers and doctors and everyone else who uses these things.

Stay tuned.

When we come back, we're heading to the theater for the Hard Fork Review of Slop. Bring your theater binoculars.
They're called opera glasses.

Your theater binoculars.

This is why I have to keep it. I swear to God.

Unbelievable.

This podcast is supported by Bank of America Private Bank.

Your ambition leaves an impression. What you do next can leave a legacy.
At Bank of America Private Bank, our wealth and business strategies can help take your ambition to the next level.

Whatever your passion, unlock more powerful possibilities at privatebank.bankofamerica.com. What would you like the power to do? Bank of America, official bank of the FIFA World Cup 2026.

Bank of America Private Bank is a division of Bank of America, member FDIC, and a wholly owned subsidiary of Bank of America Corporation.

This podcast is supported by Bloomberg's Odd Lots podcast. Hey there, I'm Tracy Alloway.
And I'm Joe Weisenthal. And we are the hosts of the Odd Lots podcast.

Every week we bring you insightful conversations with the most interesting people in finance, markets, and economics. We talk about financial markets and the real economy.

We've interviewed everyone from Fed presidents to famous investors. Plus lumber traders, truck drivers, and egg farmers.
Yep, we always have the perfect guest for the most interesting topic.

The Odd Lots podcast from Bloomberg on Apple, Spotify, or wherever you get your podcasts.

The University of Michigan was made for moments like this.

When facts are questioned, when division deepens, when the role of higher education is on trial, look to the leaders and best turning a public investment into the public good.

From using AI to close digital divides to turning climate risk into resilience, from leading medical innovation to making mental health care more accessible. Wherever we go, progress follows.

For answers, for action, for all of us, look to Michigan. See more solutions at umich.edu/look.

Well, Casey, it's time once again for one of our favorite segments. That's right, the Hard Fork Review of Slop.

The Hard Fork Review of Slop.

This is, of course, our cultural criticism segment, where we bring very serious analysis to this new medium of AI slop that is taking over the world. And today, we have some more examples of slop for our listeners' critical consideration.

Let's get into it. First up, today, we have some holiday slop.
The holidays are a time when people around the world are gathering with their families.

And this year, they may encounter an Instagram video that shows a bunch of tourists going around a holiday market set up at Buckingham Palace. And Casey, let's play this clip from the BBC, please.

Why are people coming to Buckingham Palace to see a market that doesn't exist? In recent days, on social media, there have been AI-generated images of a Christmas market. They're fake, but that hasn't stopped people wanting to come here and experience a slice of the festive action. Tell us, why are you here? Oh, we've come for a Christmas market that's not here. Everyone's falling for this AI-generated advertising. So, yeah.

I was going to enjoy a mulled wine, and now I've got my Nando's chicken sandwiches. I am very disappointed.
We see the funny side of it, really.

We're going to go find alternatives. There are plenty of other Christmas markets across London.
But if you do want to go to Buckingham Palace, there is a gift shop.

First of all. BBC, thank you for what you do.
I love that reporter. He sounded so angry.
Delighted. He was just spitting mad at this whole situation.
It's fake. Yeah.

Well, look, this is wonderful.

It's, you know, it's giving me flashbacks to the famous Willy Wonka event that was also held in the UK in recent years where people did show up and there was a real event, but the AI advertising had made it seem much more grand than it was.

We've now moved to the next stage, which is that AI is just now advertising completely non-existent events for you to go to with your family.

Yes, whatever's going on in the UK, those people have to up their slop detection game.

I do think this opens up a very fun possibility, which is that they will actually now have to build a holiday market at Buckingham Palace to capture the obvious demand and the flood of tourists who are coming in to go to this non-existent holiday market.

Well, and just to stop a revolution. I mean, you could tell a lot of those people were pretty, you know, angry about what was not there.
Yes.

Yes, this is going to lead to a whole new dimension of fake it till you make it. Yeah.

I mean, look, this one is so interesting to me because on one hand, when you think about all of the different deep fakes you can make, few seem more innocuous than what if there was a Christmas market at Buckingham Palace?

That actually sounds like a lovely piece of slop that you could make, you know, and maybe share with a few friends.

But, you know, because we live in a nightmare information ecosystem where no one knows what's true and false anymore, you take this perfectly benign, you know, piece of content and all of a sudden people are showing up at Buckingham Palace.

So there are, you know, I'm sad to say, sorry to be a buzzkill, going to be much worse outcomes from this exact dynamic. This one is at least a little funny.

But, you know, if I were a platform like a TikTok, I might be thinking about, hmm, is it maybe bad for my platform that people are constantly looking at slop here and then going to non-existent events?

Because eventually some of that anger is going to come back on the platform. They're going to be cold and they're going to have to eat nanny's chicken, whatever she said.

I think she was having a cheeky Nando's. Was she having a cheeky Nando's? I believe she was having a cheeky Nando's.
Wow. Well, it all turned out fine for her.
It sounds like.

Okay, we have one more example of holiday slop this year, and that is holiday meal slop.

There was an article recently in Bloomberg titled, AI Slop Recipes Are Taking Over the Internet and Thanksgiving Dinner.

This was about the food bloggers who are noting to their chagrin that traffic to their websites has fallen off a cliff since people are increasingly turning to AI generated recipes.

But they are also discovering that some of these recipes don't make sense. Yeah, so there's really, you know, two stories here.

One is about the fact that people are turning to AI tools and getting back these recipes that are just nonsensical.

You know, these systems are not directly pulling from recipes; they're reconstituting them from a bunch of different things that they've seen online.

And that's not going to work out for you every single time. A lot of folks found that out the hard way over Thanksgiving.

There's a second story, though, which is all of the human beings out there who did the hard work of creating real recipes and then testing those recipes to make sure they work are now reporting the traffic to their websites is falling off a cliff.

And I just want to say this sucks. I hate this about AI.

I want people like Yvette Marquez-Sharpnack, who runs the Mexican food blog Muy Bueno and who posted photos of two different tamale recipes that people were making using AI tools that were just completely bogus.

Like I want her to be able to make a living. And instead, all the AI companies came along, they remixed the entire internet and they replaced it with what so far is worse.
So I hate that, Kevin.

Yeah, I think they should start selling these tamales.

I know a holiday market where they could sell them.

I love that you just waited through my whole rant so you could make your stupid joke. Listen, no, I agree.
I think this is a bad trend.

At the same time, my wife, who's a very good cook, has been using AI to do some of her own cooking recently, and it's produced pretty good stuff.

So I should say one man's slop is another man's treasure. Here's how we split the difference in my house, because we did Thanksgiving for 14 this holiday season. Wow, you have 12 kids?

It's amazing. No, actually, our families met for the first time, if you must know.
Wow. And it went great.
Thank you. We all had a great time.

Thanks to the families for coming up to the Bay Area for that, anyways. Point of the story, Kevin.

Uh, what we did was we took a great turkey recipe from Kenji López-Alt, one of the great cooks in all the world. Yes, we made his turkey and we used his recipe.

But when we had questions about what we were doing, then we did turn to the AI chatbot. We'd say, hey, should we maybe turn the temp up? Should we turn the temp down?

We used it to get guidance along the way, kind of split the difference there. That seemed to work out fine.
How did it turn out? We overcooked the turkey. But I'm not going to blame AI for that. I'm going to blame the fact that it was the first time we used our oven to cook an 18-pound turkey, okay?

Listen, my therapist says it all the time. We can only learn through experience, Kevin.

Okay, next slop. This one is not a holiday piece of slop.
This is an educational music piece of slop.

This was a great story that Katie Notopoulos wrote at Business Insider the other day about an Instagram account called Learning with Lyrics, which has been flooding Instagram with posts of AI-generated songs that basically explain topics that people might be curious about.

Things like, why are manhole covers round? How does Velcro work? A topic we covered on our 50 Iconic Technologies episode.

Why are giant steel coils transported on their sides instead of flat? Now, Casey, have you heard any of these songs? You know, I haven't had the chance, but I'm hoping I could change that right now.

Yes. This one is about how instant cold packs work.
Let's take a listen.

I'm curious how instant cold packs work.

One squeeze and it's freezing cold. Then, how can you create cold without a freezer? Instead of creating coldness, think of it like stealing heat.
The pack has two things inside.

A dry chemical like ammonium nitrate and small powder.
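(The song has the chemistry roughly right: dissolving ammonium nitrate in water is endothermic, so the pack absorbs, or "steals," heat from its surroundings. In rough form, with the enthalpy of solution quoted approximately:)

```latex
% Endothermic dissolution: the positive enthalpy means heat flows
% from the surroundings into the pack, which is why it turns cold.
\mathrm{NH_4NO_3(s)} \xrightarrow{\ \mathrm{H_2O}\ } \mathrm{NH_4^+(aq)} + \mathrm{NO_3^-(aq)},
\qquad \Delta H_{\mathrm{soln}} \approx +26\ \mathrm{kJ/mol}
```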

Now, this account is apparently the work of a Cal State Long Beach student named Cashin Tomlinson, who told Katie Notopoulos from Business Insider, quote, I've always been someone who's curious about stuff.

Which is just a perfect college student, quote, relatable king.

Also, I'm guessing that if he gets enough views, he could make some money, in which case Cashin could, in fact, cash in.

Now, here's what I'll say about this. I had a sort of strong negative reaction to all of the AI slop recipes that are making life harder for human food bloggers.
I'm actually fine with this.

If you are out there and you want to make a song about why giant steel coils are transported on their sites instead of flat, you're not actually competing with a human artist.

You actually have that lane to yourself. And if you want to use an AI tool to do it, I say, God bless.

Yes, it could be dangerous for WikiHow, which was the previous place you'd go to find answers to, like, stupid questions.

But WikiHow was one of the most disgusting websites ever created, just absolutely choked with ads. A website that actually hated all of its visitors and just wanted to collect ad revenue.
It's true.

Actually, this sort of rhymes with something that I've been really interested in recently, which is that I've heard that college students are now using these AI music generation tools to, like, make songs to help them remember things.

Because some people are just, like, auditory learners. And so if you're like, how do I remember, like, the steps in the Krebs cycle?

Now you can just go make a Taylor Swift song about it with one of these AI generators. Actually, don't do that because she'll sue you.
But you can make a generic pop song about this.

And that's maybe easier for you to remember than actually listing out all the steps. You know what? That's great.

That actually reminds me of something I did the other day, which is that my friends and I had come up with

a great idea for the first line in a gay Shakespeare sonnet. So, you know, there's some kind of rumors out there about Shakespeare's sexuality.

We said, what would he sound like if he was really, you know, gay? And so we came up with the line: shall I compare thee to a boots-down slay? Which just seemed like such a good line.

And so then I asked Claude to finish the sonnet. And you know what? It did a great job.
I don't even want to know what a boots-down slay is. I'll tell you when you're older.
Okay. All right.

Next up in the Hard Fork Review of Slop, this one comes to us from North Carolina, where a state senator named DeAndrea Salvador recently found herself in an ad by the Whirlpool Company for a line of appliances in Brazil, an ad she had never agreed to appear in.

The company had lifted a section from a TED Talk she gave back in 2018 and put it into their video about São Paulo and the energy efficiency of their appliances in Brazil.

And by the way, thank you for coming to her TED Talk.

Yes.

Let's play a clip from this. These kinds of dangerous incidents can take root when people are faced with impossible choices.
In the U.S., the average American spends 3% of their income on energy.

In contrast, low-income and rural populations can spend 20, even 30% of their income on energy. Okay, so that's the original TED Talk.
Now here's the Brazilian ad that they made out of this.

People are faced with impossible choices. In low-income communities in São Paulo, the average electricity bill represents 30% of their monthly income.
This is when energy becomes a burden.

Consul, the leading appliance brand in Brazil, created a new wave. Incredible.
So they didn't just lift this segment from her TED Talk and put it into their ad without her permission.

They actually AI-ified her voice and put her synthetic voice into their ad to make it talk about São Paulo. You know what? This whole thing is so crazy.

And just to add one more crazy twist. So the ad agency that made this ad is a subsidiary of Omnicom, which is one of these, you know, advertising giants.

There's only, you know, a small handful of them. They're one of the big ones.
And the subsidiary, DM9,

had submitted this slop ad for an award at Cannes Lions, the global advertising awards that happen every year.

And it won their highest award, the Grand Prix in the Creative Data category, as well as a Bronze Lion in the Creative Commerce section, Kevin. Incredible.
Very difficult awards to win.

And so, after all of this happened, they had to return the awards that they had won for making the slop.

That's incredible. That's incredible.
I do think that DeAndrea should be able to claim that she is a Cannes Lions winner, because she was placed in this video without her consent.

She really is. You know, so far on the slop review, we've had one thing that I think was very bad.
We've had one thing that I think was basically good.

And now we have this, which I just think is so incredibly stupid, I can't believe it.

Don't do this. Don't do it.

Definitely, if you're out there, Whirlpool Corporation, do not lift Casey Newton's voice from this podcast and put it into an ad in, I don't know, Chile.

If you want to know what I think about the energy situation in São Paulo, just call me. I'll tell you.

All right, what else is in the old slop funnel, Kevin? All right, we got a gaming slop example.
Have you heard about Bird Game 3?

Well, I've heard a little bit about the buzz, but I haven't actually seen the video yet.

It's more like a chirp than a buzz. Okay, thanks.
Thanks for that. We'll be right back.

It's called Bird Game 3, and it is apparently all the rage. It is going viral on TikTok.
One clip posted by someone named KingPigeon76 has racked up more than 13 million views over the past week.

Let's take a look.

That is so cheap. You just spammed pecks.
Who even plays pigeon? Easy claps, man. Get out of my game.
Whatever, dude.

Okay, so this is a video. It kind of looks like it is a bird fighting game.
So in this case, it is a clip of an eagle fighting a pigeon.

And, I would say, the pigeon overcoming the odds to beat the eagle in this game. Yes, so this has been going viral because this game doesn't exist.
There is no Bird Game 3. There's no Bird Game 2.

There's no Bird Game 1. None of these games are real.
But people are using these video generators like Sora and VO3

in Gemini to create these videos and then post them to TikTok as if this was a real game. And now people actually want to play the game.
They're like, this looks fun.

So I actually really like this, I have to say. Yeah.
Because to me, this shows slop being used for a purpose that I am just fond of, which is satire, right?

This is satire of the fact that over and over throughout the entire entertainment industry, we just see stupid sequel after stupid sequel.

You take the dumbest idea imaginable and then you make a third version of it, you know, 10 years after the first one comes out.

This is a very dumb idea, but it is being presented in a way that acknowledges its dumbness. And so, for that reason, I'm giving this one a thumbs up, Kevin.
Yeah, I like this one too.

And I think there's a sort of inevitable conclusion of this, which is that someone out there will see this and actually make Bird Game 3.

Well, let me ask you this: what bird would you main in Bird Game 4?

I would probably be a Peregrine Falcon because it is notoriously the fastest bird. What about you? I'd have to go with the Crested Titmouse.

Now,

what else is in the Slop queue?

That's it. That was this installment of the Hard Fork review of Slop.
Casey, what have we learned from these examples of popular slop?

Well, we are in this moment, somewhat to my surprise, where by the end of 2025, I think slop is becoming a medium like any other. There is good slop, there's bad slop, and, in the case of the food recipes, there's slop that makes me absolutely incandescent with rage. But you know, that is true of almost every medium, Kevin. Don't judge slop by its cover, as we're always saying here in the Hard Fork Review of Slop.

If you were going to give a message to impressionable youths out there who are thinking about making a career in slop, what would you tell them so they make slop that is good for the world, or at least neutral, as opposed to bad?

I would say if I have one parting message on this installment of the hard fork review of slop, it would be this. Slop in the name of love.

We'll be right back.


This podcast is supported by Bloomberg's Odd Lots podcast. Hey there, I'm Tracy Alloway.
And I'm Joe Weisenthal. And we are the hosts of the Odd Lots Podcast.

Every week, we bring you insightful conversations with the most interesting people in finance, markets, and economics. We talk about financial markets and the real economy.

We've interviewed everyone from Fed presidents to famous investors. Plus lumber traders, truck drivers, and egg farmers.
Yep, we always have the perfect guest for the most interesting topic.

The Odd Lots Podcast from Bloomberg on Apple, Spotify, or wherever you get your podcasts.

The University of Michigan was made for moments like this.

When facts are questioned, when division deepens, when the role of higher education is on trial, look to the leaders and best turning a public investment into the public good.

From using AI to close digital divides to turning climate risk into resilience, from leading medical innovation to making mental health care more accessible. Wherever we go, progress follows.

For answers, for action, for all of us, look to Michigan. See more solutions at umich.edu/look.

Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Jen Poyant.
We're fact-checked this week by Will Peischel. And today's show was engineered by Chris Wood.

Original music by Elisheba Ittoop, Rowan Niemisto, and Dan Powell. Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott.

You can watch this full episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, and Dalia Haddad.
As always, you can email us at hardfork at nytimes.com.

Send us your best AI slop.

Hear that? That's what it sounds like when you plant more trees than you harvest.

Work done by thousands of working forest professionals like Adam, a district forest manager who works to protect our forests from fires.

Keeping the forest fire resistant is synonymous with keeping the forest healthy. And we do that through planting more than we harvest and mitigating those risks through active management.

It's a long-term commitment. Visit workingforestsinitiative.com to learn more.