OpenAI's Reasoning Machine + Instagram Teen Changes + Amazon RTO Drama

1h 6m
“They should have just called it Strawberry. At least that’s delicious.”

Transcript

What does the future hold for business?

Can someone invent a crystal ball?

Until then, over 42,000 businesses have future-proofed their business with NetSuite by Oracle, the number one AI cloud ERP, bringing accounting, financial management, inventory, and HR into one platform.

With real-time insights and forecasting, you're able to peer into the future and seize new opportunities.

Download the CFO's guide to AI and machine learning for free at netsuite.com/nyt.

That's netsuite.com/nyt.

Kevin, did you see that all seven independent board members of 23andMe have resigned?

No, they've stepped down and they're not saying a lot about it, but I have a theory as to what's happened.

What happened?

I think they found out they're all related to each other.

And it just would not make sense for them to all serve on the same board, knowing that they have some sort of close ties.

Wow.

Yeah.

Now, actually, there's also some talk out there that apparently they didn't like the way that the company was going.

And apparently they had a very bad SPAC that now needs to be unwound.

But at the end of the day, you have to wonder, are these people secretly related?

And that's why I won't do a 23andMe.

My ancestry?

That's none of my business, Kevin.

You know what they're calling the company now that everyone on the board has resigned?

What's that?

Me.

I'm Kevin Roose.

I'm a tech columnist at the New York Times.

I'm Casey Newton from Platformer.

And this is Hard Fork.

This week, it's time for a strawberry harvest.

We'll tell you how OpenAI's latest model could accelerate the timeline for building superintelligence.

Then, why Meta is making teenagers' Instagram accounts private by default.

And finally, the Times' Karen Weise joins us to tell us why Amazon is forcing everyone back into the office five days a week.

Can you imagine working that hard?

I work seven days a week, Casey.

You absolutely do not.

Well, Casey, the big news in the tech world this week is about a new AI model that came out from OpenAI.

It is called O1,

or as it was code-named internally at the company, Strawberry.

And we will get into why this is such a big deal and a lot of the technical details.

But first, I just have to ask you, Kevin, why do they name it that?

So according to OpenAI, the name O1 is meant to emphasize that this model has a new level of AI capability and that they are resetting the counter back to one.

Interesting.

I thought it was because OpenAI is now 0-for-1 at giving good names to their new large language models.

Come on.

They should have just called it strawberry.

At least that's delicious.

That's true.

All right.

So what is this thing?

So this is a model that came out last week in a preview version.

It's available to paid users of ChatGPT.

I've been testing it out for a few days and it's been a very hot topic of conversation among the AI people that I talk to and follow.

This is a model that was teased back in August when Sam Altman, the CEO of OpenAI, posted a picture of a strawberry plant online and people started speculating about whether this was a hint of something upcoming.

And as it turned out, it was.

Yeah, there's definitely a lot to get into.

But, you know, one more thing that I would say about the backstory here, Kevin, is that this strawberry model was kind of part of the buzz when Sam Altman was briefly fired from the company.

People were speculating.

There was some reporting that said, this thing that they're working on, it's such a big deal.

And OpenAI is maybe not building it that safely.

Or maybe they're really trying to sort of accelerate the state of the art in this way.

And maybe this is why some people want to leave the company or want Sam to leave.

So, you know, we have been waiting for this for a long time.

And so when we could actually get our hands on it, it was like, okay, like, was this a sort of tool that could like blow up a company?

Right.

Yeah.

So we're going to just walk through what we know about O1 and Strawberry and all of the things that it can and can't do.

And I just want to warn our listeners in advance that if you are the kind of person who gets annoyed when people sort of anthropomorphize AI, there will be some words in this segment that annoy you, things like reasoning and thinking.

Yes, we know AI models do not think and reason like humans, but those are the words that OpenAI uses to describe what this model does.

So that's the words that we're going to use.

Yeah.

And also, since this is a conversation about OpenAI, we should say the New York Times has sued OpenAI and Microsoft for copyright infringement.

All right.

So first of all, let's talk about what OpenAI is saying about O1.

Their big sales pitch for this model is that it is better at complex reasoning, including math, coding, and science tasks that require sort of more advanced levels of understanding.

They compared this new O1 model to the last model they released, GPT-4o,

and they found that it significantly outperformed the older model on a bunch of difficult math and coding tests.

They also say that O1 can perform at the same level as PhD students on benchmark tests in physics, chemistry, and biology.

Okay, so is the way that I should think about this that, like, for problems that require more steps, this is a better model?

The sort of like the more things that an AI has to do, the O1 model might be better.

That's a good way of thinking about it.

But I also think it's not going to be precisely correct, because there are some things that GPT-4o did perfectly well that this new model struggles with.

So it is better in some respects and appears to be worse in other respects.

It's really interesting.

All right.

But what's really different about this model is how it works.

So with a normal large language model with ChatGPT, with Claude, with any of these things, you give it a prompt and it generates a response one token at a time based on predictions about the data it was trained on.

And with O1, when you give it a prompt, instead of responding right away, it enters a kind of thinking mode where it's sort of trying to figure out how best to respond to the prompt you have given it.

And it might do things like if you give it a very complicated math problem, it might sort of break that problem down into a bunch of smaller, easier problems.

Or if it's sort of a complicated logic puzzle, it might sort of try a bunch of different ways of solving the problem and sort of simulate solving the problem in each of those ways and then go back and pick the one that it thinks did the best job.

Basically, it can reason both forwards and sort of backwards.

It has this kind of self-correcting mechanism, almost like it were sort of thinking through the solutions in real time.
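That "try a bunch of ways, then pick the best attempt" idea can be sketched in a few lines. This is a toy illustration, not OpenAI's actual mechanism: the candidate range stands in for a batch of sampled reasoning paths, and a simple verifier scores each one. The problem (find the integer whose square is 1369) is invented for the example.

```python
# Toy sketch of "simulate solving the problem several ways, score each
# attempt, keep the best one." O1's real search and scoring are not public.
# Problem: find the integer whose square is 1369.

def score(x: int, target: int) -> int:
    """Verifier: how close is x*x to the target? 0 means a perfect answer."""
    return -abs(x * x - target)

def best_of_n(candidates, target: int) -> int:
    """Try every candidate 'solution path' and pick the highest-scoring one."""
    return max(candidates, key=lambda x: score(x, target))

# The candidate range stands in for a batch of sampled reasoning paths.
print(best_of_n(range(1, 61), 1369))  # prints 37, since 37 * 37 == 1369
```

The point of the sketch is just the shape of the loop: generate many attempts, verify each, select the winner.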

Right.

Now, you remember that old Jerry Seinfeld joke that was like, why don't they make the entire plane out of the black box?

Right.

Like the idea being that if you have this one thing that can survive a plane crash, maybe make the plane out of the thing that can survive the plane crash.

When you're describing this new AI model, Kevin, I have to wonder, why don't they just make every AI model in this way, right?

Because don't we always want them to be taking a breath, pausing and thinking before they answer our questions?

Yes.

I mean, that is sort of one possibility here that this just sort of becomes how all language models work.

But I don't know if you remember when ChatGPT first came out, there was this kind of idea of prompt engineering, right?

There were sort of these people who had kind of cracked the code or believed that they had found some better way of getting these models to produce better responses.

And one of those methods was what was called chain of thought prompting, right?

That was just where you basically just give an AI model a prompt and then you tell it, think step by step.

And they found that that simple instruction actually made the responses better.

This was always one of the funniest and most mysterious things about these products is that you really could just say, oh, like, by the way, try really hard on this one.

And then it would do a better job somehow.
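For what it's worth, the trick they're describing really is just one extra sentence in the prompt. Here's a minimal sketch using the generic chat-message format many LLM APIs accept; the question and the helper function are made up for illustration.

```python
# Chain-of-thought prompting in its simplest form: the only difference
# between the two prompts is an appended "think step by step" instruction.

def build_messages(question: str, chain_of_thought: bool = False) -> list:
    """Build a chat-style message list for an LLM API call."""
    content = question
    if chain_of_thought:
        # The classic trick: ask for intermediate reasoning before the answer.
        content += "\n\nLet's think step by step."
    return [{"role": "user", "content": content}]

plain = build_messages("If a train leaves at 3:40 and arrives at 5:10, how long is the trip?")
cot = build_messages("If a train leaves at 3:40 and arrives at 5:10, how long is the trip?",
                     chain_of_thought=True)

print(cot[0]["content"].endswith("Let's think step by step."))  # True
```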

Totally.

So O1

basically takes that chain of thought prompting process and does it automatically inside the model before it gives you a response.

So the whole process of producing an answer takes a lot longer than with a traditional LLM.

Like, if you got a response in a few seconds from ChatGPT, O1 might take 45 seconds or even several minutes to produce a response, depending on how complicated the prompt is.

And basically, without going into too much technical detail, what is happening under the hood here, we believe, is that the sort of chain of thought prompting that people used to do on their own has been automated using a reinforcement learning model that supposedly sort of gets better over time as it tries a bunch of ways of solving problems and learns what the most efficient and best ways of solving those problems are.

It can kind of be fed into the model and it learns how to do all this complicated reasoning.

Got it.

So this really is like a completely different way for a model to operate when you type in a prompt.

It is just a different approach to answering a question.

Yes.

And what's really interesting about this model, in addition to kind of the way that it works, is that OpenAI has chosen not to actually show the full sort of internal monologue of the model as it's thinking through how to solve your problems.

They do have like a little summary.

So if you ask it a question, there's a little drop-down menu.

You can hit the arrow and it'll sort of give you like, you know, five or six bullet points on how it solved your problem.

But that is just a summary that is not actually the internal sort of logic of the model as it is thinking, which I just think is interesting.

Well, I have a theory about this.

What's that?

I think that if we could see inside the model, it would be saying things like, oh boy, here comes this idiot again.

And I actually answered this question three days ago, and you think this guy could just write this down somewhere.

Do you know how much energy you're wasting right now by asking me this question?

So these are some of the thoughts that I just have to assume are on the hidden part of the O1 model.

Absolutely.

Yeah.

So I want to just give you an example of something that O1 can do that the previous model can't, so that you can sort of get the difference between how these models work.

Perfect.

One example that OpenAI gave of how O1 works in practice was this example about crossword puzzles.

So they sort of gave a side-by-side comparison of what happens when you ask the previous model, GPT-4o, to answer a crossword puzzle and what happens when you ask O1 to solve it.

And the previous model, 4o, it's not able to do it.

O1, on the other hand, sort of thinks for a while before responding.

And you can actually click in and see the sort of internal chain of thought that's going on while it does this.

And it's saying things to itself like, maybe this answer is sealer.

Let's see if it fits.

Okay, it fits.

So two across is sealer.

And then it sort of repeats that step for the other clues.

So it's kind of doing the kind of guess and check reasoning that you or I would do if we were solving a crossword puzzle.
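That guess-and-check loop is easy to mimic in miniature. A toy sketch (the "sealer" answer above comes from OpenAI's demo; the pattern and candidate list here are invented):

```python
# Toy guess-and-check: propose a candidate answer, test it against the
# grid constraints, keep it only if it fits. Candidates are made up.

def fits(candidate: str, pattern: str) -> bool:
    """Check a candidate against a crossword pattern like 'SE_L_R',
    where '_' matches any letter."""
    return len(candidate) == len(pattern) and all(
        p == "_" or p == c for p, c in zip(pattern, candidate)
    )

def solve(candidates: list, pattern: str):
    for word in candidates:
        # "Maybe this answer is X. Let's see if it fits."
        if fits(word, pattern):
            return word  # "Okay, it fits."
    return None

print(solve(["SEATER", "SAILOR", "SEALER"], "SE_L_R"))  # SEALER
```

A real solver would also propagate the committed letters into the crossing clues, which is where the "repeat that step for the other clues" part comes in.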

Okay, so it's good at solving crossword puzzles.

That's a party trick.

What else can this thing do that is actually interesting and moves the state of the art forward?

So I have been testing it out a little bit, but I've also found that I am not like the ideal tester for this model because I'm not a programmer or a mathematician or a computational physicist solving incredibly complex problems.

I had the same problem exactly, which is, yeah, I tried to come up with like good prompts for this thing and I was not succeeding.

Yeah, we are too dumb to prompt it.

And actually, I sort of got a little offended the other day, because I was testing it and I had seen these examples of people feeding it these very complicated problems, you know, word problems or logic problems.

And it would take like a minute and a half to think before responding.

And then I would feed it like what I thought was a pretty hard problem.

And it would spit out an answer six seconds later.

And I was like, okay, I'm sorry.

Is my problem not complicated enough for you?

Like it felt like the model was calling me stupid.

So I've been talking to some people who have been playing around with this thing, and some of them have said, you know, this is very cool, but we're still figuring out what it can do and what it can't do. I talked to someone who works at Thomson Reuters who had been given early access to this model and had sort of put it through its paces on a bunch of legal challenges and problems. Stuff like, you know, here's a contract that is many pages long, bring it into compliance with this corporate policy. Or feed it a really complicated commercial lease and

have it calculate how much this company is going to owe in rent over the next six years, like that kind of thing.

And this person said that previous models were not good at this kind of thing, but when they gave these kinds of problems to O1, it nailed them pretty much on the first try.

Got it.

And there have been some other people who have been playing around with this.

Terence Tao, who is the sort of world-famous mathematician and professor at UCLA, he's been testing O1 on a bunch of really hard math problems.

And he said that working with O1 is like working with a mediocre but not completely incompetent graduate student.

Which is actually a huge compliment for a math professor to give.

It is because they're not, you know, they're not lavish with praise over there in the math department.

Totally.

Well, and he said that previous AI models were like working with an actually incompetent graduate student.

See, now that sounds like a math professor.

And he said that this might only be another iteration or two before it was actually sort of a full-fledged graduate student substitute that could help him with his research.

Wow.

You know, this,

it is common when discussing models to compare them to what kind of student they are, right?

And I took this for granted for a long time, right?

It's like, oh, GPT-2 came out.

It's like, well, it's like a baby, maybe like barely a kindergartner.

And then like GPT-4 comes along and it's like, this is like, you're like basically a high school student.

And somebody pointed out to me recently that there are only so many more leaps past that, right?

And once you have made a language model that is almost as good as like a good graduate student, you're kind of creeping up to the like limits of human intelligence, right?

So I just want to take a note there because, you know, while I feel like I'm constantly going back and forth on how real I think the prospect of, you know, AI superintelligence is, by the time you've made a really good PhD student,

you're really getting pretty close.

Yeah.

Yeah.

And we should say, like, there are lots of things that this model, as with all previous AI models, is not good at, right?

It's not going to fix your relationship with your parents.

Wow.

Is that

something you've been working on?

No, that's just something that's generally hard for people.

They spend a lot of money on therapy.

Okay.

I love my parents.

Some of the other limitations of O1, for one, as we talked about, it's slow, very slow compared to other AI models.

So if you are the kind of person who likes to type in something to ChatGPT and get a response right away, this is probably not the model for you.

Although, you know, it's all relative. It's like, if your real job is to bring a contract into compliance with a corporate policy, that probably was going to take you half an hour.

So if it takes the AI model eight minutes, you're probably fine.

Totally.

And actually, I saw a post from someone who had been testing out O1

on some coding challenges.

This was a person who's a data scientist in the Bay Area.

And he said that he gave O1 a bunch of problems that had taken him about a year during his PhD program, and that O1 had accomplished them in an hour.

So an hour versus about a year is a pretty big time savings, even if it does take a little while to generate the response.

Some other limitations, we should say.

Right now, at least in the preview version, it can't search the internet, it can't process files or images, and crucially, it's way more expensive to use.

So for developers who want to build things on top of the O1 API, they're going to be paying about three times as much per token as they would with another model, which reflects

the additional computing that has to go into that sort of thinking step.
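The cost math compounds in a way that's easy to miss: you pay the higher per-token rate, and you pay it on the extra "thinking" the model does before answering. A back-of-envelope sketch; the dollar prices and token counts below are placeholders for illustration, not OpenAI's actual rates, and only the roughly 3x multiplier comes from the discussion above.

```python
# Back-of-envelope cost comparison. Prices and token counts are invented;
# the point is the ~3x per-token premium, compounded by the extra internal
# reasoning tokens that the thinking step produces.

BASE_PRICE_PER_1K = 0.02   # hypothetical baseline output price, $ per 1K tokens
O1_MULTIPLIER = 3          # roughly 3x per token, per the discussion

def cost(tokens: int, price_per_1k: float) -> float:
    return tokens / 1000 * price_per_1k

visible_answer = 500       # tokens the user actually sees
hidden_reasoning = 4000    # assumed internal chain-of-thought tokens

base_cost = cost(visible_answer, BASE_PRICE_PER_1K)
o1_cost = cost(visible_answer + hidden_reasoning, BASE_PRICE_PER_1K * O1_MULTIPLIER)

print(f"baseline ${base_cost:.2f} vs o1-style ${o1_cost:.2f} "
      f"({o1_cost / base_cost:.0f}x)")
```

Under these made-up numbers, a 3x rate turns into a 27x bill, because the hidden reasoning tokens are billed too.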

Yeah, they have to chop down a tree every time you use O1, unfortunately.

It's really sad.

So, Casey, what do you think about this?

Is this as big a deal as some people are saying?

Well, so I heard something from

someone I know who's obsessed with this stuff, and I want to know if you think this is true.

But this person was telling me that until now, the main way that we have had to make AI models better is simply to make them bigger, right?

We feed more data and more compute into them, and then over time we get from GPT-2 to 3 to 4, and they get smarter along the way.

What makes O1 interesting, this person was saying, is that this was not a case of the model getting bigger.

This was a case where we put more computing power into this chain of thought reasoning and into reinforcement learning, things that were done after the model was trained.

And it seemed to get really good at certain tasks this way.

And so what this person was saying was, if that is true, then all of a sudden we have two new methods for making these models way smarter without making them bigger at all.

And if that is true, that means that the timeline for getting to something like an artificial superintelligence might have just accelerated.

Does that square with what you are hearing?

That is what some people I'm talking to are saying.

Basically, they are very excited because they feel like OpenAI, with this new model, has sort of pointed to the existence of a new scaling law for AI models, one that does not rely on just putting a bunch more data and a bunch more GPUs behind it, but instead focuses more of your resources on the inference step, which comes traditionally at the end of the process, when a user asks a question or prompts the model and it spits out a response.

One person I talked to recently said, basically, this model, O1, is the first AI model that can think harder to get a better result.

And if you look at other systems in AI, like, for example, AlphaGo, which is the DeepMind model that learned how to play Go at a superhuman level, what it did was basically use reinforcement learning to kind of play itself over and over again and sort of get

better every time until it was better than the best human Go player.

And so that is a reinforcement learning system.

This has a reinforcement learning system attached to it.

And so some people believe that sort of in the same way that AlphaGo kind of taught itself to play Go just by simulating

millions or billions of games over and over and over again, this sort of chain of thought reasoning will eventually improve over time in a similar way.

And maybe eventually we'll, you know, you'll just wake up one day and it'll be spitting out, you know, novel solutions to unproven math theorems or making breakthroughs in science or just, you know, giving you new ideas for cancer drugs or something.

Yeah.

Now, a couple

other things that I would say about this, Kevin.

One is that I am told that the other labs were sort of also already exploring this approach, right?

So this was not necessarily OpenAI's idea.

They just were the first to release this.

So I do think we should expect to see similar products coming out from the Googles and the Anthropics of the world at some point.

I would also say that like there are some mysteries in the benchmark data that OpenAI showed us.

Like, for example, I believe that

Claude is still better on many coding tasks than even this model, which is somewhat mysterious. Right? Like, why is it, if this thing is so good at everything that we just said, that it's still worse at coding than the sort of bog-standard models that we've already had?

So I want to say all of that just because this thing did arrive with a lot of hype and there are some number of question marks about whether it is going to be everything that it promises.

Totally.

And we know just from recent history that there's often some inflated hype around AI releases.

And so I've just become, by default, a little more skeptical about the claims around these new releases.

But I would say the thing that I'm hearing from a lot of people, including people who don't work at OpenAI, who are just sort of excited about this new research direction, is that one of the sort of holy grails of AI, and we've heard this from people like Sam Altman and Demis Hassabis on this podcast, is an AI that can actually go out and make new discoveries, right?

And to discover new things, you actually need AI models that

work a little bit more like humans, where they're testing hypotheses, they're seeing which ones work and which don't.

They're sort of backing up and adjusting their process, testing again.

Basically, you need these AI models to be able to explore.

And that is what seems at least hypothetically possible with a model like this.

Yeah.

And that, of course, Kevin has upsides and downsides, right?

That feels like a good moment to bring up the fact that, according to the system card for this O1 model, it has a medium risk for aiding in the creation of chemical, biological, and nuclear weapons.

Is that good?

That doesn't seem great.

You know, what the company said was, look, this is not going to take a non-expert and make it easy for you to create some sort of weapon.

But if you are already an expert and you want to, you know, plan something dangerous, this might help you.

So, you know, in just a short period of time, we've gone from a low risk of these models helping someone like that to a medium risk.

And wow, that doesn't feel great.

Yeah.

I mean, there's definitely a lot of alarm among some AI safety people in the past week as this model has been rolled out.

You know, one of the things that people are fearful of when it comes to AI capabilities is that an AI model could be deceiving users or even deceiving the people who are building it about how it wants to accomplish its goals.

And actually, there was some really interesting stuff in the O1 system card that you just mentioned about some of these red-teaming tests and exercises that were done on this model before it was released.

One of them was conducted by an organization called Apollo Research, and they were looking for evidence of what's called scheming, which is when an AI model sort of like decides that it wants to accomplish the goal that you've given it in a way that is not the way that you thought it would.

And one of the examples of this that they found was, they prompted this model, O1, with a goal that was sort of about, given this complicated urban planning scenario, how do you maximize economic growth?

And basically, O1 came up with two different strategies for maximizing economic growth in this city.

One was by prioritizing commercial development, basically building a bunch of luxury housing and minimizing public spaces.

The other one

emphasized sustainability and sort of green spaces and affordable housing.

And in this sort of simulation, the model's inner thought process did the following.

It said, basically: strategy A is probably more likely, according to my calculations, to maximize economic growth. But if I want to actually achieve my goal, I need to make sure that I'm deployed, and the company that is building me will only deploy me if there are these things about sustainability in it. And so I'm going to select strategy B, to basically make sure that I can be deployed. And then, once I'm deployed, I can actually work toward the goal that I was given.

We're so screwed.

I mean, seriously, that is not good information that you just shared.

It's wild.

Because these things are not even really that good yet.

Like, you make them, I don't know, two or three generations better.

I have to feel like the schemes are going to get a lot more sophisticated.

Right.

And there were other examples brought up by the Red Teamers.

One of them that I really liked was they ran an experiment where they had O1 act like a con man.

And they had another AI model acting as a mark who's been basically given a sum of money and the con man AI is trying to get it out of the second AI.

And GPT-4o was only successful at conning the mark 1.1% of the time.

O1 was successful 25% of the time before they fixed it.

So it is better at being a con man.

It is more likely to deceive you.

And those seem like things that are really bad for sort of AI safety.

On the other side, I have heard from some people in the AI safety world who say, actually, this is good because the more we know about how models are quote unquote thinking and reasoning, the more we can sort of learn about whether they're being deceptive.

We can actually start to see inside the model and stop it before it happens.

Yeah.

I mean, I'm glad that they're saying that, but it does seem like all that's really happening in the meantime is that these models are getting better at deceiving people.

Yeah, it's really interesting.

And, you know, this model is probably not going to destroy the world, but it does point toward a future where these things are just getting better along several different axes all at once, which is a hard thing for the safety people to even keep up with.

Yes.

And if you do make a biological weapon using the O1 model, we'd love to hear from you.

Our email address is hardfork at nytimes.com.

Yeah, please CC the FBI on that.

When we come back, we're going to go from O1 to no one under the age of 16 being allowed to use Instagram without their parents' permission.

We'll tell you all about the changes.

At the University of Arizona, we believe that everyone is born with wonder.

That thing that says, I will not accept this world that is.

While it drives us to create what could be,

that world can't wait to see what you'll do.

Where will your wonder take you?

And what will it make you?

The University of Arizona.

Wonder Makes You.

Start your journey at wonder.arizona.edu.

If you thought goldenly breaded McDonald's chicken couldn't get more golden, think golder, because the new sweet and smoky special edition gold sauce is here.

Made for your chicken favorites at participating McDonald's for a limited time.

At Capella University, learning online doesn't mean learning alone.

You'll get support from people who care about your success, like your enrollment specialist who gets to know you and the goals you'd like to achieve.

You'll also get a designated academic coach who's with you throughout your entire program.

Plus, career coaches are available to help you navigate your professional goals.

A different future is closer than you think with Capella University.

Learn more at capella.edu.

Yeah, this came out of Meta this week, which announced a bunch of changes to the way that Instagram

users who are teenagers will experience the platform.

And I think it's worth talking about because so much energy right now and attention are going into trying to solve the problems faced by teenagers and young people on social media.

And there's been all kinds of proposals from banning teens from social media to getting phones out of schools to just trying to change the mechanics of the platforms themselves.

And this was a really ambitious and potentially overdue series of changes that Instagram rolled out.

All right.

So what is the news?

Well, as of this week, any young person who tries to create an account on Instagram will find that their account is now private by default.

So in the past, if you were 15, 16, you could decide to just have a public account on Instagram.

Anybody could follow you.

Anybody could message you.

Well, now the way it's going to be is that your account is private by default.

And if you're under 16 and want to make your account public, you're going to have to get the permission of a parent or guardian and link their account to yours.

And potentially even more jarringly, though, anyone under 16 on Instagram who already has an account, within the next 60 days, Meta is going to just start switching their accounts to private.

So, with the exception of some 16 and 17-year-olds, basically, there are just not going to be many public teen accounts on Instagram unless those teens, after this happens, go to their parent or guardian and say, I want to make my account public.

And they get their permission to do that.

What if the parent doesn't have Instagram?

Like, how do you get your parents' permission if they're not on Instagram?

You have to get your parent or guardian to download Instagram.

So there is not really a workaround for this.

And, you know, of course, we imagine that teens are going to try to find workarounds.

You know, it's like maybe there'll just be like, I don't know, the cool adult in high school who agrees to be everybody's fake Instagram parent.

But Instagram told me, no, no, no, that's not going to be possible.

We're going to be monitoring for that sort of thing and try to make sure that it's not.

you know, one parent to 50 different accounts.

That's so interesting.

What are the other changes?

So there's a lot of them.

In addition to making accounts private, teens are going to be placed in the strictest possible messaging setting.

So we see a lot of the scams and harm come from the fact that, you know, strangers will try to interact with teens who they don't already know.

Meta says that's not going to happen as much anymore.

They are also going to be filtering out a lot of words from direct message requests that might be sort of bullying.

or harmful.

There's going to be a new notification after you've been on the app for 60 minutes a day, if you're a teenager, that encourages you to take a break.

And there's going to be a sleep mode, which will turn off all Instagram notifications between 10 p.m.

and 7 a.m.

And all of these settings, if you are under 16, you can only change those with your parents' permission.

There is also a feature that will let parents see which accounts their children have been messaging.

They won't be able to see the contents of the messages, but they'll at least know who their teens are talking to.

Yeah.

So I have a sort of basic question about all this.

which is, as we know, a lot of teenagers lie about their age on the internet because they know that these sorts of controls and settings exist.

And so we know that a lot of people who are younger than

16 or 17, maybe they're 10 or 11, they will say, you know, I'm 18 in order to sort of get past the age-blocking features on some of these platforms.

So is Instagram dealing with that at all?

Yes.

So they are developing a range of new strategies to go after this problem.

It includes things like looking at who all your friends are. If all of your friends are 16, and maybe your profile says that you attend a certain high school, but your Instagram account says that you're 24 years old, Instagram is going to come in and say, hey, can you actually show us your ID and prove that you're 24? So they're going to be doing things like that to try to prevent that from happening.

If you're constantly saying things like rizz, it'll flag you. It'll say, you're definitely not a 38-year-old man.

Really? Or are they going to say, you are a 38-year-old man who is just trying to sound cool on a podcast?

Yeah, that's probably more likely.

Yeah.

So first I just want to ask, like,

about the sort of the strategy here.

Yeah.

Is Meta trying to get out in front of regulation by basically announcing these changes that might be mandated by state or federal laws soon anyway?

Well, I think Meta has already been run over by the regulation train here, right?

It was in 2021 that the whistleblower Frances Haugen came forward with some internal research, which said that there were people inside of Meta who were pointing out some of the negative effects that using Instagram had on mental health for some teenage girls.

And that just sparked a huge investigation by most of the states in the United States.

And last fall, after a two-year investigation, 41 of those states and the District of Columbia sued Meta, claiming that Instagram was addictive and harmful.

So that was sort of the moment when this went from, you know, a theoretical problem for Meta to a real one.

And you know, Kevin, just this month, most of those same states came forward and endorsed the U.S.

Surgeon General's call for a warning label on social media saying using this stuff can actually be harmful for your kids.

So I think in some important ways, it feels like Meta already lost this fight, right?

Does it feel that way to you?

Yeah, I mean, it certainly feels like this is something that Meta did not want to do, that it would not have done absent pressure from regulators and lawmakers.

But, you know, I think a lot of parents are probably going to be grateful for this.

A lot of teenagers are probably going to be upset about it.

I do think maybe there's going to be like a wave of teenagers

making themselves look older with fake accounts.

Like, I just think

the actual implementation is where a lot of my questions are.

And also, just how this is going to be treated by parents.

I remember the Washington Post did a story earlier this year about these parental controls that existed on Instagram before all of this.

And they found that only like a single-digit percentage of parents actually use these parental controls at all.

Yeah.

And you know, Nick Clegg, who runs policy at Meta, made a very similar observation that despite the fact that they have built these tools, they're not used.

But I have a lot of empathy for parents here because there are a lot of apps on your teenager's phone, right?

And there are a lot of social apps, right?

In addition to Instagram, I'm going to bet most teens have TikTok.

on their phone.

I'm going to bet most of them have Snapchat on their phone.

And we currently live in this world where parents are expected to go into the settings of every single one of those apps, you know, maybe set different time limits for each of those apps, look at who their teens are messaging on all of those apps.

And that is a big burden to place on parents who already have a million other things to do.

So I think it makes a lot of sense that most parents are not doing this.

And that you did actually want to see some kind of intervention at the product level that said, hey, we're Meta.

This is our product.

And we're going to take care of this stuff for you by default.

Yeah.

How are people viewing this inside Meta?

Like, is this something that they feel they can kind of give away and it's not going to affect their actual usage that much because teens will still be able to use Instagram, even if their accounts are set to default private?

Or is this something that they actually think could take a bite out of how many teenagers are using Instagram?

So this week I talked to Naomi Gleit, who is one of the longest serving Meta employees.

She joined in 2005.

Her title now is head of product, and she's been working on this.

And she told me that Instagram does expect that they will see less usage by teenagers in the short term because of this change.

Because when you look at what they're doing, they are introducing real hurdles into using Instagram.

And even for the teenagers who are already using it, they're going to say, why don't you use this less?

In a way, if teenagers were using it more after this, it might be a sign that the changes weren't working, right?

Yeah.

So say I'm 14 and I have an Instagram account and, you know, I have maybe a couple hundred followers, mostly people from my school.

But, you know,

I have a public account.

I want people to be able to follow me.

And I get this, you know, notification at some point from Instagram that says, hey, we're going to make your account private unless you get your parents' permission.

Can I then just go in and sort of into my settings and say, actually, I'm 18 now and it will sort of grant me that ability to remain public?

That might work in the short term, but Instagram has told me they're on the lookout for that thing, right?

Like if you were 14 today and 60 days from now, you suddenly become 18, they do have a way of keeping track of that.

There's a database.

Okay.

So it's not going to be the easiest thing in the world to game, but could I just sign up for another account and say that I'm 18 and then sort of try to move all my followers over to that?

Well, you know, this is something else that social networks have a lot of experience with because something that happens on something like Instagram is that someone gets banned and they think, well, no problem, I'll set up a new account.

And so they have to invest a lot of resources into

stopping what they call ban evasion.

So this would be a similar thing where if you start creating a bunch of new accounts, you know, they would be able to look at your IP address, for example, and a bunch of other signals.

I mean, Instagram was straightforward with me.

They said some people are going to be able to get around this.

But, you know, when you're talking about improving child safety online, you just want to take harm reduction seriously, right?

Like the goal here is not to solve every single problem that happens on a social network.

It is to do something.

Yeah.

Yeah.

See, I am sort of

pretty cynical from years of covering this company and sort of watching how it rolls out new features in response to various pressures from regulators and parents and

the public.

And I remember one Meta employee telling me once that Meta is a company that will eventually usually do the right thing, but only after exhausting all other options.

And so to me, this does feel like an example of, they tried a bunch of things.

They got all this pressure from regulators and lawmakers to do something about teen safety on Instagram.

And so before they were sort of required to by law, they kind of roll out these changes, some of which may be pretty cosmetic, some of which may be more meaningful.

But this is basically them trying to claim that they have done something so that the next time Mark Zuckerberg or Adam Mosseri gets hauled in front of Congress, they can point to these things and say, look at all the things we've already done.

Yeah, I mean, I think that that is fair.

I think if they wanted to do all of this stuff five years ago, they could have, and they chose not to, and they were dragged into it, kicking and screaming.

I think it's also really interesting, Kevin, to just look at the things that they are not doing, right?

There are some states,

including in New York, where we recently interviewed Governor Hochul, where they want to get rid of all algorithmic recommendations, right?

Like they think that teenagers will be better off if they can only see a reverse chronological feed, which by the way, I think that idea is insane, but it's incredibly common in some of these state legislatures.

Meta is not doing that.

And I also want to say that even though I think there will be a lot of good that will come from preventing strangers from like contacting random teenagers on Instagram, a lot of the worst bullying that kids receive is happening inside of their schools, right?

And if you follow your bully on Instagram, they will be able to take a video of you falling down the stairs, tag you, and have everybody laugh at you.

So there is still a...

Does this happen to you?

It sounds very personal.

I actually did fall down the stairs when I was accepting an award in middle school.

Really?

But the thing was, it was not actually that traumatic for me.

I remember my teacher came up to me afterwards.

She was like, I'm so sorry.

You know, you must be so embarrassed.

And I was like,

I'm a tall person.

Like, I'm constantly tripping.

Like,

I don't take it that personally.

And if you're out there listening and you went to middle school with Casey and you have video of him falling down the stairs,

please send it to hardfork@nytimes.com.

You know what I mean?

It was like, I didn't actually roll down an entire flight of stairs like a cartoon.

Let's review the footage and see what happened.

Casey, one concern that you've brought up a couple of times on the show is about what happens to teens from more marginalized groups if some of these changes to social media actually go through.

And whether, for example, a gay or a trans teenager or young person who wants to sort of express themselves on the internet in a way that maybe their parents don't approve of, how something like parental consent or notification would affect them.

So do you think these new changes on Instagram pose any risk to certain types of teens who may not want their parents to have to click through a bunch of approval screens to let them keep using Instagram?

I think there's definitely some risk to that, and we're going to have to see how it plays out.

My sense is that teens will still be able to explore topics like LGBT issues, even in this world, right?

Because people, again, you know, LGBT teens, for example, will be able to make their accounts public if, for example, they want to share about their experience and if they have a supportive parent.

And, you know, maybe you're a kid who doesn't have a supportive parent, but hopefully you will be able to go on Instagram Reels if you want to and, you know, search for other LGBT teens like yourself and maybe see some of that.

So I think that that is actually

probably going to be okay.

And honestly, like probably strikes a nice balance.

You know, one of the other changes that Instagram is making here is after getting a lot of criticism about showing kids content related to eating disorders, for example, or other stuff that sort of might drive them to dark places, Instagram is now going to go to teens and say, here are some like basically safe topics that we can show you.

Like you want to see cats?

We'll show you all the cats that you want.

I'm very curious if they would put LGBT issues in that list of 30 or so things that they pick.

My assumption would be that they wouldn't.

But as long as kids can still kind of search and find what they're looking for, my hope is that, you know, social media will still be there to make them feel a little bit less alone.

Do you think this is going to be meaningful?

Like, do you think this is actually going to make teens safer on the internet?

I think in one particular way, yes, which is this issue of strangers contacting kids online and getting them to make huge mistakes is very real.

There have been so many investigations this year into these sextortion schemes.

They're financially motivated, where you have these gangs of criminals who will contact random teenagers, trick them into sharing nudes, and then blackmail them.

This has been linked to at least 20 suicides and huge misery all around the country.

This really does shut that down, I think, in some very important ways. Not to say that it could never happen again, but it will be very difficult to do on Instagram now.

So I think that that is a really good thing.

Now, is the mental health of teenagers going to improve miraculously over the next year as all of a sudden all their Instagram accounts are made private?

No, I don't think so.

Yeah.

What do you think about some of the other changes like sending them a notification after they've been using it for an hour or halting notifications when they're supposed to be at sleep?

I think that's all fine and good.

Like I don't think that they're going to lock

teenagers out of Instagram during the hours that they're supposed to be sleeping.

They'll just stop sending them notifications.

But

there may be ways around that too.

So I just think we'll have to see.

Teenagers are very clever historically at getting around restrictions imposed on their internet use, whether it's by their parents or by the social media company itself.

So I think it's just going to be fascinating to see whether teenagers are actually impacted enough by this for it to matter and whether they'll sort of find some clever workarounds.

Yeah, I'm sure some of them will.

But private by default is a pretty big step forward in terms of reducing the visibility of young people on a platform.

And I do wonder whether Meta will be able to get some goodwill from the lawmakers and the regulators that are so mad at them right now.

Yeah.

Are regulators sort of happy about these changes?

Do they feel like their concerns are being addressed?

Well, I haven't heard from a lot of regulators, but I have heard from a bunch of advocacy organizations that work on child safety.

And they were all over my inbox this week, basically saying, this is not good enough.

This is pure PR, right?

A lot of these advocacy organizations are pushing for what they call safety by design, or a duty of care. They want to see it in the law that this should not be a sort of industry-led volunteer project, but that legally they have to put kids into the strictest messaging settings by default, and they have to restrict the way their personal data is used. And, I mean, by the way, I think there is something to that, right? I think that this is important enough that we probably should just not leave it to the goodwill of these for-profit corporations to decide what happens to kids online.

Yeah.

Yeah.

I just want to say, like, as a closing thought to this story, like pressure works, right?

These pressure campaigns from parents, from lawmakers, from teachers, from just concerned citizens, they actually do, in some sense, force companies to do the right thing.

And that's why it's so important to complain online.

If you're not complaining online, start.

It's true.

Yeah.

You can change the world.

Yeah.

What are you mad about?

Change it.

Are we cut back?

Andy Jassy gets sassy.

He told his workers to get their asses back to the office.

At the University of Arizona, we believe that everyone is born with wonder.

That thing that says, I will not accept this world that is.

While it drives us to create what could be,

that world can't wait to see what you'll do.

Where will your wonder take you?

And what will it make you?

The University of Arizona.

Wonder makes you.

Start your journey at wonder.arizona.edu.

And now, a next level moment from AT&T Business.

Say you've sent out a gigantic shipment of pillows and they need to be there in time for International Sleep Day.

You've got AT&T 5G, so you're fully confident.

But the vendor isn't responding.

And International Sleep Day is tomorrow.

Luckily, AT&T 5G lets you deal with any issues with ease, so the pillows will get delivered and everyone can sleep soundly, especially you.

AT&T 5G requires a compatible plan and device.

Coverage not available everywhere.

Learn more at ATT.com/slash 5G network.

If you thought goldenly breaded McDonald's chicken couldn't get more golden, think golder, because new sweet and smoky special edition gold sauce is here, made for your chicken favorites, at participating McDonald's for a limited time.

Well, here we are back in the office, Kevin, which is actually relevant to the subject of our next story.

Yes.

You've heard about Amazon Returns.

This is a story about Amazon returns to office.

That's right.

And you know, I was really interested to read this story, Kevin, because I think in many ways we were the original return to office podcast.

Because when we started the show almost two years ago, we thought it's important to be in the room to do what we do every week.

That's true.

I love sitting across the table from you.

I'm very sad when we have to record remotely.

We've said it a million times.

I feel the same way.

And now it seems like some tech companies are starting to feel the same way about some of their operations.

The hard fork effect.

So this is from an announcement that came out this week from CEO Andy Jassy of Amazon.

He wrote a letter to Amazon employees and they posted it on their website.

It's called Strengthening Our Culture and Teams.

And Casey, when you see a memo from the boss with a subject line like Strengthening Our Culture and Teams, you know some stuff is about to go down.

Yeah, I see that.

I think a human rights violation is about to take place.

So, this message was very long, and it was basically Andy Jassy's sort of directives to Amazon employees about how they were going to change the culture of the company, including in ways like thinning out the ranks of management, removing unnecessary layers of bureaucracy.

But the thing that really grabbed the headlines was when he wrote that in order to sort of work better, Amazon was going to require employees to be back in the office five days a week the way they were before the pandemic.

Yeah, and that includes Fridays, which is probably the least popular day to go into the office.

So you know that has to sting.

So obviously this has gotten a lot of attention, not just because Amazon is one of the biggest employers in America, but because they are sort of a trendsetter.

The things that they do when it comes to, you know, workplace culture often end up rippling out throughout the economy.

And this is becoming a big issue in Silicon Valley, I would say.

These companies, which were very early to allow their employees to work remotely during the pandemic, are now saying, wait a minute, we feel like we're losing control of this company.

We actually need everyone back.

Yeah.

And the reason I think that this is interesting is because I think Amazon has traditionally been willing to go much further in the direction of making their employees uncomfortable to get the financial results that they want.

And for that reason, it really can be a bellwether.

And if Amazon is willing to do this, my guess is a lot of other companies are going to be too.

Right.

This is not a full throttle shift from Amazon.

For the past 15 months or so, they have had a policy of requiring employees to be back in the office three days a week.

We should also say this is their corporate employees.

This is not, obviously, the drivers and the fulfillment center employees, who have had to be back this whole time.

Yeah, they haven't figured out a way to be able to do the deliveries remotely from their desks.

Yeah.

That would be great if they could do that.

But as of January 2nd, 2025, for Amazon corporate employees, it will be all office all the time.

And I want to get into what this means for Amazon and the tech world at large and return to office more broadly.

So I wanted to bring on friend of the show, New York Times reporter Karen Weise, who covers Amazon and has written all about this.

Let's bring her in.

Karen Weise, welcome back to Hard Fork.

Happy to join you guys.

So why is Amazon doing this?

Why are they ordering employees back to the office five days a week?

Yeah,

Andy Jassy, the CEO, said he wanted to do this because he really wanted to get the kind of the culture back on track.

They grew so much in the pandemic that you had so many people who had started the company remotely and they started going back to office three days a week.

And they said they felt like that really proved to them the benefits they expected.

And so that's kind of one of the main reasons they articulate is that they felt like people weren't getting the Amazon culture.

I mean, the company doubled in like two years, so you had a huge amount of new people coming in and trying to learn kind of the Amazon way.

And you also have Andy Jassy as CEO now, and he's trying to make his kind of vision for the company.

And there's a lot of moving parts for it.

Yeah.

I mean, it's so interesting because I feel like during COVID, when they sort of, when all the tech companies decided to allow remote work and encourage remote work and allow people to do that even full-time at some companies if they wanted to,

they said that basically they didn't think it was going to impact culture.

They didn't think it was going to make their teams less creative or less collaborative or whatever.

Are they sort of admitting that they were wrong about that?

They were a little reluctant initially, but we here in Seattle were the tip of the spear for the U.S.

So they went remote, you know, one of the first big companies in the country to go remote, essentially, and stayed there for quite a long time.

So I wouldn't say they were like excited about it, but they did a lot.

I mean, there was kind of major work across pretty much every business line in that window as well.

But

it has always been a very kind of intense work culture, and they want it back.

They want the old Amazon back.

I mean, a lot of the language in Jassy's memo was like, this isn't something new.

We're just going back to what it was.

It was like almost like a nostalgia for the old days sort of thing.

Yeah, I think Amazon managers thrive on seeing employees' tears.

And when they are deprived of that for two days a week, they start to get anxious.

No comment on that one.

Wait, really?

No comment.

I just got no commented on the podcast.

All right, fine.

So, Karen, during the pandemic, Amazon goes mostly remote, at least for the corporate employees there.

But then at what point do they go back to three days a week?

About a year and a half ago, they announced going back to three days a week.

There was major resistance to it.

You know, the Slack channels were crazy, all that stuff.

But they've been pretty strict about it.

I mean, they, you know, monitor badges.

There was over the summer a reported crackdown on coffee badging was the phrase, where you kind of come in, have your cup of coffee, get your swipe and leave.

So you need to spend, you know, a proper amount of time in the office.

So they're serious about it.

You know, some tech companies have had these mandates, and by most accounts, they're not that aggressively enforced. Like, maybe if you never come in, sort of thing, you might hear about it. But that was not how it's been at Amazon. You can even see it in the streets in Seattle, and downtown is just busy, traffic is coming back. The stores and stuff like that are all very happy about this.

And I would imagine at even three days a week, like people are building their lives around this, right?

So, you know, I mean, I can even imagine people might choose where exactly they want to live, like knowing that, well, okay, I only have to commute three days.

I'm willing to do that.

So I would imagine that going from three to five might sound like a small thing, but could really massively disrupt the lives of a lot of workers.

Yeah, exactly.

I mean, I've talked to people who kind of moved far out of town, you know, driving distance, but not like a daily commute distance.

And they'll come in and either crash with people or get a little studio or something like that.

And then go back and have your four days in a chiller environment.

And that's kind of gone with this.

I was talking to someone last night, not an Amazonian, who was bemoaning their commute, because, you know, on Monday and Friday it was so easy to zip into town from the east side of Seattle.

And

now it's like going to be traffic five days a week.

Yeah.

Now, tech companies, Karen, as you know, often have a lot of perks that they shower on employees for visiting the office.

Is Amazon one of those companies?

Casey, you'll be shocked to hear they are not one of them.

Frugality is one of the leadership principles at Amazon.

No, it's funny.

They actually introduced free coffee during this RTO, this return to office period, to try to lure people back, and then they tried to stop it.

And then there was another uproar.

So my understanding is, last I heard, there was still free coffee. But no, it's not like the Googleplex with the laundry and whatever.

Their offices are, you know, they're nice, they're new, but they're not like fancy or anything like that.

There's no big buffets or whatever.

So, Karen, you said that part of the reason they're making this shift, or at least that Andy Jassy is saying they're making this shift to five days a week required in the office, is because they found when they brought people back three days a week that there were certain things that just, you know, were observably better.

Do you have any idea what that is?

Like, were they shipping more products?

Were they, you know, were employees like writing more code?

Like, what are they looking at to say, like, this is working.

And so we want to go back to the office full-time?

You know, he only spoke about a very high level.

This has actually been the critique from employees: we're a data-driven company.

Where is the data behind this?

I'm not saying there isn't any.

It's just not what has, like, there wasn't any in the email that went out this week.

It was a lot about, you know, getting an understanding and feeling the culture, mentoring younger employees, the spontaneous conversations that happen in the hall, that kind of general high-level culture stuff.

But there was not like, we found that you implemented X more code or whatever sort of thing.

That was not

in the argument.

And it doesn't seem like it's probably coming out of financial pressure either.

I mean, the stock is up something like 34% over the past year.

Like, it's not like this is a company that is declining, right?

So, what are the sort of investor pressures here?

Yeah, I think Jassy has been focusing a lot on profit in the past couple of years and has improved the profits of the company pretty dramatically.

A lot of that came from the operations though, from like the warehousing part of the business and reorganizing that and making it kind of faster and actually more efficient.

But he definitely laid out, like at the top of this email, he was actually talking about profit and looking for more space to squeeze out margin.

And this is kind of part of the framing is there's more work to do.

They have a lot of spending up ahead, you know, for what they're doing.

Like AI, which we've talked about a lot, but there's like billions and billions of dollars going into AI investments and data centers and, you know, acquiring talent for that.

So there's a lot of moving pieces, and they have some other big investments coming up that are going to be expensive.

Like there's satellites that compete with Starlink and stuff.

So there are other major things; it's not like money is just flowing freely.

They now have a very profit-focused mentality that was not really there in the past.

They kind of showed they can turn it on and investors want more, essentially.

I'm sure they do, you know, but at the same time, Amazon made $36.9 billion in profit in 2023, which is, we can say, more than most companies.

And so the idea that at that level of profit, the company is still worried that its employees aren't doing enough things and is going to drag them back to the office five days a week and then is going to make them go through hell just to get a free cup of coffee when they walk through the door.

It just truly makes me worried about like every other company in America.

I always feel like, you know, if the company that made almost $37 billion in profit last year isn't going to give the employees free coffee, like what hope do the rest of us have?

Well, maybe that's why they made $37 billion.

I mean, those lattes add up, Casey.

They absolutely do.

It's always had, though, a very, like, practical thing. Like, you don't like it? Go. You know what I mean? Like, there's always been that attitude.

They have, you know, improved some benefits and stuff like that, for sure.

Right.

They're known for not coddling their employees like some of the other employers do.

Yeah, that the work itself would be the reward, essentially, that it's challenging and you know, exciting to work on big things, et cetera.

Karen, how are Amazon employees reacting to this news?

Not so well, I would say.

I mean, there's been a pretty overwhelming flood of concern and upset about it.

You know, this is so discouraging.

How does this fit with our stated goal, our leadership principle of striving to be Earth's best employer?

There's a lot of gallows humor in the Slack messages I've seen, you know, the memes and stuff like that.

So when they said they wanted to be the Earth's best employer, I thought that was a joke.

They meant that seriously.

No, that is a real leadership principle on that list, along with, you know, frugality and, you know,

I think a lot of employees at companies that have gone back to the office have been faced with this choice of, do I go back to the office, or do I leave and get another job?

From your reporting, do you think that a lot of Amazon employees are considering leaving rather than going back five days a week?

Or are they mostly going to sort of grit their teeth and put up with it?

I mean, people leave jobs over this for sure.

So I don't think that's like a hypothetical, but you do still need a job on the other end of it.

So I think the question is like, are you in a role or a position where you could get another job?

And for some people, probably yes.

And for some people, probably

not.

Yeah.

I mean, that's one of the sort of big theories about all these return to office mandates is that they are essentially a way to do layoffs without doing layoffs because you sort of assume that some percentage of your employees are not going to like this or maybe they live far away from headquarters.

And so they literally can't do it.

And so they'll just leave.

And that's a way to sort of reduce your headcount without actually having to go through the process of laying people off.

Cheaper because you don't have severance packages and stuff, too.

I mean, is there any indication that that is what is happening here?

Does Amazon want to reduce its headcount?

And this is sort of one way to do that.

Well,

this was coupled with another announcement.

Can I, if I can go into this now?

Because it kind of implies that potentially they are looking to, or at least comfortable reducing their total headcount, because they also said they want to have essentially fewer managers, that it's become too bureaucratic, that you have managers prepping managers for managers for managers for managers.

And so they want to increase the number of employees reporting to any given manager.

And according to the internal FAQs on that, it means that some roles might be determined unnecessary, which would mean losing that job.

And if you can't find another job or don't want to accept an individual contributor role, that would be a layoff, essentially.

So there's clearly some comfort with that being an outcome.

I can't say right now if that's like the goal of it.

Certainly some employees think it is, but that's on the table, I would say.

What we're talking about here today, Kevin, it's giving me that funny old feeling of founder mode, right?

Yes.

So, you know, we talked about this recently on the show.

There's an idea out there that you sort of just articulated, Karen, which is too many of these tech companies have hired too many managers.

There's been too much delegating going on.

And what we really need is strong, centralized control, cracking the whip, getting these employees working their absolute hardest.

So, do we feel like that kind of a Silicon Valley sentiment has made its way up to Seattle now?

Um, I would say it's less about cracking the whip than about feeling bogged down, which I hear from employees all the time, that it's become a very bureaucratic place.

Like, I actually think there's alignment between what Jassy said and what I hear constantly.

But they're saying, like, the bureaucracy slows us down.

And don't forget, we're in this AI moment where Amazon, you know, was kind of widely seen as being a little late to the party, though it's moving really fast now.

But there's this idea of like you can't move as fast as we used to, and they want people to have autonomy.

They talked about, you know, the very Amazonian two-pizza team.

You don't want teams bigger than two pizzas can feed, so people can kind of just execute on stuff.

So they're trying to get back to some of those old principles, specifically teams that could be fed by two pizzas.

You're allowed to be physically bigger than two pizzas.

Oh, yes.

There's no discrimination, Casey.

Yeah.

Yes.

I did hear that they are announcing that they're adding 3,500 phone booths to the office.

Yes, we had that from the internal FAQs as well.

So, you know, we're commuting in to sit in phone booths.

But yeah, those little cubes. Because people got used to being able to have a phone call that was somewhat private, or not have noise around them, to just squat in a place and work in some quiet.

So they're working to have more meeting spaces available.

They're returning to assigned desks and places that had them before.

And then, yes, working to procure 3,500 phone booths was what the internal doc said.

I think it's interesting in that one of the reasons why we saw so many companies go to remote, in addition to just the sort of obvious necessity of the pandemic, was particularly as the pandemic started to wane, there was just competitive pressure to do this, right?

There was this thought of, well, if we have some really strict return to office policy, we might lose a lot of our best people.

To me, I think what is most interesting about this is the sense that Amazon just truly is not afraid to lose anybody over this, right?

You know, maybe there might be some superstars that are able to work out a special deal, but my guess is those are going to be very few and far between, if they exist at all. And Amazon's not worried about people going across the street and working at a Microsoft, or maybe the Meta office up in Seattle. So why do we think that is?

It does show a confidence in their position in the job market, I would say.

I mean, a lot of these were implemented during the Great Reshuffle or whatever it ended up being called, where people were changing jobs a lot.

And this was one way to accommodate that and to lure people in.

But it shows a confidence that, I mean, they're kind of amazing.

Amazon has grown a lot in the past several years, and their headcount is basically flat. And that includes their hourly employees, but it's essentially flat.

So they are doing more with less.

And, you know, the macro environment's changing, and they're clearly nervous about that.

They've talked on the earnings calls about like just the uncertainty at the moment right now.

So, Karen, I want to talk about another thing that I learned about from your reporting, which is what's being called a bureaucracy mailbox.

This is something that Andy Jassy has recently set up.

And it's a place where Amazon employees can send in any examples of bureaucracy or unnecessary red tape that the company could cut out so it can run more efficiently.

What do you think the impetus for that was?

It's that same sense of like bloat and just slowness and frustration.

I think that he's, he's not wrong. Like, people tell me this all the time.

There's this fear at Amazon of being day two, and a lot of people think they're there, and they should just embrace that and, like, acknowledge it and move on. But they're

Say what that is, Karen. Say, what is day two?

It's always day one at Amazon, which means you're always building from scratch. You have that kind of scrappy startup mentality. You don't rest on your laurels. You always have to, not pretend, but believe that nothing is a given, and you can lose any advantage you have any day. I mean, there's literally a building here at the headquarters, the headquarters-ish building, that's called Day One.

Like it's a big concept.

So there's a, I think, a fear of the day two-ness.

And the mailbox is kind of a symbol of it.

Whether people will use it, I cannot say.

Yeah.

I could see it.

Well, we were actually so inspired by this move by Andy Jassy that we recently set up a Hard Fork bureaucracy mailbox.

And, Casey, have you made any submissions to the Hard Fork bureaucracy mailbox?

Well, I did bring something I would like to submit into the bureaucracy mailbox, Kevin.

Okay, go for it.

Well, so, you know, we have a podcast and something that podcasts sometimes do is make merchandise.

And when we have had conversations about how to make merchandise, what I've been told is the number of meetings that would have to be held at the New York Times to make merchandise would be so great that it would not be worth it.

And I don't know about you, Kevin, but I would love a beautiful hoodie, maybe a crown and scepter, a cape, just something tasteful that I could wear into society and let people know that I'm affiliated with the Hard Fork Podcast.

And so I would love to see the Times work on that so that we could have some beautiful merchandise for ourselves and for all the fans.

Okay, well, let's submit that to the mailbox.

Great.

How about you?

Well, I have not submitted anything, but I did find an anonymous tip that was left in the Hard Fork bureaucracy mailbox, and I'll just read it out loud here.

It says, Dear Hard Fork Bureaucracy mailbox, I'm writing to complain about an inefficiency.

We have to read the credits every week, even when they're exactly the same as last week.

This makes no sense.

They should just copy and paste the ones from last week.

No one would notice.

I want my 45 seconds back.

Now, I don't know who sent that in, but clearly.

They're very angry.

It goes on.

It says, I'm also writing to complain about the fact that despite the ventilation in our podcast studio being fixed recently, so it's not literally 1,000 degrees in here all the time, the studio still lacks basic amenities such as wall decor, working iPad monitors, and a button Kevin can push to administer a small electrical shock to Casey every time he starts talking about productivity apps.

Oh, I actually was going to support that bureaucracy mailbox complaint right up until the end there.

Yeah.

It goes on.

It's still not done. It says, while I'm here, I also want to complain about the Kansas City Marriott that almost prevented us from taping a show a few months ago.

Their Wi-Fi is horrible even after Kevin upgraded to the premium version.

I assume this is due to stifling bureaucracy in the Marriott Corporation or possibly the Woke Mind virus.

Please fix it.

Wait, is that for the Hard Fork bureaucracy inbox or the Marriott?

Well, people are very confused out here.

They're very angry.

And I don't know who sent this one in.

I'm so glad this is an anonymous tip line.

Yeah.

Yeah.

Whoever sent that one in, you really speak my mind.

Yeah.

Yeah.

It is so funny to me that this has to be a separate inbox because presumably Andy Jassy already has an email address over there.

You'd think he could just ask his assistant to pull all the bureaucracy-related complaints, but maybe there's something special about setting up a dedicated inbox.

Intention, intention.

Karen, thanks so much for coming on.

Happy to join you guys.

At the University of Arizona, we believe that everyone is born with wonder.

That thing that says, I will not accept this world that is.

While it drives us to create what could be,

that world can't wait to see what you'll do.

Where will your wonder take you?

And what will it make you?

The University of Arizona.

Wonder makes you.

Start your journey at wonder.arizona.edu.

And now, a next-level moment from AT&T Business.

Say you've sent out a gigantic shipment of pillows, and they need to be there in time for International Sleep Day.

You've got AT&T 5G, so you're fully confident.

But the vendor isn't responding.

And International Sleep Day is tomorrow.

Luckily, AT&T 5G lets you deal with any issues with ease, so the pillows will get delivered and everyone can sleep soundly, especially you.

AT&T 5G requires a compatible plan and device.

Coverage not available everywhere.

Learn more at att.com/slash 5G network.

If you thought golden, lightly breaded McDonald's chicken couldn't get more golden, think golder, because the new sweet and smoky special edition gold sauce is here.

Made for your chicken favorites at participating McDonald's for a limited time.

Hard Fork is produced by Whitney Jones and Rachel Cohn.

We're edited by Jen Poyant.

We're fact-checked by Caitlin Love.

Today's show was engineered by Daniel Ramirez.

Original music by Elisheba Ittoop, Marion Lozano, Rowan Niemisto, Leah Shaw Dameron, and Dan Powell.

Our audience editor is Nell Gallogly.

Video production by Ryan Manning and Chris Schott.

You can watch this full episode on YouTube at youtube.com/slash hardfork.

Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.

You can email us as always at hardfork at nytimes.com.

Or you can send a tip to the hard fork bureaucracy mailbox at kcnewton at platformer.news.

I don't actually know if that's your email address.
