Can California Regulate A.I.? + Silicon Valley’s Super Babies + System Update!

“In the United States, we have 50 laboratories of democracy and they're called states.”


Transcript

In business, they say you can have better, cheaper, or faster, but you only get to pick two.

What if you could have all three?

You can with Oracle Cloud Infrastructure.

OCI is the blazing fast hyperscaler for your infrastructure, database, application development, and AI needs where you can run any workload for less.

Compared with other clouds, OCI costs up to 50% less for compute, 70% less for storage, and 80% less for networking.

Try OCI for free at oracle.com/NYT.

Oracle.com/NYT.

Well, Casey, we have some news about the show to start the show with this week.

Yeah.

But first, I want to tell you a parable.

Oh, I love stories, Kevin.

So imagine for a second that you were a coffee drinker.

I know you're a tea drinker, but let's imagine that you're more of a coffee guy.

Sure.

And every week you go into your favorite coffee shop and you order a coffee and they say, this one's on us, Mr. Newton.

Oh, that's nice.

And you do this for weeks, maybe years go by, and they're just giving you free coffee.

And then one day you show up and they say, you know, this coffee, we love giving it to you, but it does cost us something to make and we have to pay our rent and there's salaries for the employees.

And so we actually want you to chip in a little bit for the coffee.

Well, that only seems fair, Kevin.

Yes.

So that is... a vague approximation of what is happening with this podcast and with all New York Times podcasts.

So The Times is creating a brand new audio subscription.

This is a way to support what we do here on Hard Fork and what our colleagues do on shows like The Daily, The Ezra Klein Show, The Run Up, and many others.

And here's how it's going to work.

If you are a new or current all-access or home delivery subscriber to the New York Times, the audio subscription will be included.

That'll give you full access to all of our episodes and all the episodes of all the other shows from the New York Times, as well as early access to shows from Serial Productions.

If you just want to get this new audio subscription, that will give you full access to all of New York Times audio, including all the episodes of Hard Fork.

And to be clear, if you want to keep listening to our episodes without a subscription, you can do that.

The most recent episodes of this show will still be available every week for free on Apple Podcasts, Spotify, or other podcast players.

But if you want to kind of go into the back catalog, you're going to have to subscribe.

If you want to sort of trace our journey as podcasters as we learned how to do that.

Yes.

So this is actually a better deal than the parable I told you about the coffee shop.

Because there, if we were to sort of apply the same logic to the coffee shop parable, it would be like, well, you can have a free cup of coffee, but if you want the coffee that's like old, you're going to have to pay for that.

And who wants old coffee?

No, not me, certainly.

Yeah.

Yeah.

So, if you want to support the work we do and the work that my colleagues at the New York Times do on all of our great shows, you can go to nytimes.com/podcasts to learn more about this new audio subscription.

Now I am having a little bit of a domestic issue with technology in my house.

What's that?

So I'm testing a new robot vacuum cleaner.

Now, is this Bruce Roose?

This is Bruce Deuce.

So this robot vacuum cleaner has some AI features built into it, including a voice activation feature, where you can basically say to the robot vacuum, like, start cleaning, or go clean the living room, and it'll go do that.

But the wake word, the kind of activation word that the company has given it is Rocky.

That's the Alexa of this particular vacuum.

And I have a dog named Roger.

And sometimes when I'm calling my dog, the robot vacuum will think that I'm saying Rocky and will sort of activate.

And I asked the company, I was like, can I like change the name of the vacuum so that it's not like accidentally going off when I call my dog Roger?

And they're like, no, you can't change the name.

So now I'm faced with a choice, which is do I just continue to put up with these like accidentally activating, you know, robot vacuums?

Or do you shoot your dog?

I was going to propose changing my dog's name.

But yeah, that's another potential solution. What do you think I should do?

I mean, this is, it's a rocky road you're on now, Kevin. I think you should get a different robot vacuum with a different wake word.

You know, that reminds me. You know what one of my friend group's favorite pastimes is? Just making up drag names. And so my boyfriend recently suggested as a drag name: Alexa, play Despacito.

And just knowing that saying that out loud could cause havoc in thousands of listeners' homes brings me great joy.

I mean, yes, if you are a listener listening to this and you have one of these devices, you are now no longer listening to the podcast.

You are listening to Despacito.

I think we should just mess with our listeners' smart home devices every week.

Rocky, clean the living room.

Alexa, turn the lights off.

I'm Kevin Roose, a tech columnist at The New York Times.

I'm Casey Newton from Platformer, and this is Hard Fork.

This week, the 18 new laws California just passed to regulate AI and one big bill that the governor vetoed.

Then, The Information's Julia Black on some big new advancements in fertility technology and why Silicon Valley is going baby crazy.

And finally, it's time for a system update.

Kevin, something you and I care a lot about is how will artificial intelligence get regulated?

Yes.

We know that it's getting better very quickly.

And the very people building it have said to us from the beginning, hey, if we're not careful, this stuff could get out of control in a hurry and it could hurt a lot of people.

Yep.

And a second thing that we know, Kevin, is that our federal government typically does not like to pass laws that regulate the tech industry.

Yes.

In fact, since 2017, the only major law that they have passed to regulate the tech industry is to try to ban TikTok.

And it's not actually clear that they will be successful at that.

Is that true?

That's the only major tech regulation?

Yes, nothing that has been signed into law.

Individual houses have passed a bill here or there, but in terms of what has been signed into law, no major regulation.

But fortunately, Kevin, and I don't know if you remember this from your high school civics class, but in the United States, we have 50 laboratories of democracy and they're called states.

Yes.

And in these states, laws are passed.

And Kevin, what if I told you that right here in the state of California, where many of the biggest AI innovations are taking place, lawmakers took a hard look at it this session and they said, we're going to do something about this.

Yeah, this has been a really interesting story to follow because it seems like while the federal government is sort of mulling and debating and, you know, they have this executive order now from the Biden White House saying, like, we're going to do something about regulating AI and we're directing a bunch of agencies to study it.

California just went ahead and said, let's, let's start regulating.

Yeah, they said, why wait?

And of course, in California, Democrats have total control over the legislature and the governor's office.

And so legislation just tends to move pretty quickly through the process here.

And today, I want to talk about it because there were a ton of, I think, pretty important AI regulations that did pass and one very important regulation that didn't.

And I want to get all your thoughts about it.

Yep.

All right.

So if you've heard anything about AI regulation in California over the past few weeks, it is probably that on Sunday, Governor Gavin Newsom vetoed a bill that has been very controversial called SB 1047.

And we're going to talk all about that.

But before we do, I want to talk about the bills that Newsom actually did sign because there were 18 of them.

And I think they go a long way toward addressing some of the most immediate concerns that people have about ways that AI could go wrong.

Yes, these were sort of the ones that didn't get all the attention, but may end up being important in the long run.

So what did Newsom sign into law?

So I'm not going to go into all 18, but here are some of the key planks of the legislation that he did sign over the past month.

One, there is a bill that makes it illegal to create or distribute sexually explicit images of a real person that appear authentic if they are intended to cause emotional distress.

So these are basically what is sometimes called revenge porn that is augmented with generative AI.

That is now explicitly illegal.

Newsom also expanded our existing child sexual abuse material statutes to include CSAM that is created or altered using generative AI.

So you cannot use these systems to create or alter CSAM.

There is a law that prohibits replicating performers' voices and likenesses in audio-visual work without their consent, including after they have died.

So the movie studios cannot just clone actors' faces and voices and use them to make new movies without paying someone for that.

There's a law that requires AI companies to post information about the training data they use, which of course has been a big question for us for a couple of years now.

Yes, this is one that actually caught my eye.

This was called AB 2013.

And this one actually feels like it could be a pretty big deal.

So starting in 2026, on January 1st, 2026, companies in California that want to make an AI system publicly available will have to basically tell people where they got the data to train that system.

And why haven't they been telling us that so far, Kevin?

Well, you know, people have lots of sources for this data, including some that they probably shouldn't be using.

So if you are a company that, say, scraped YouTube for data to train your model, you are now going to have to disclose that when you release that model.

And that could be a pretty big deal.

Yeah, well, you know, I'm freaking out because I've been training this huge model using exclusively data that I scraped from the New York Times.

And I heard that you guys are real sticklers about that.

So

I'll get our lawyer's attention on that.

Yeah, so that's a big one.

There is another law that will require watermarking for AI images.

The idea here is that if you are seeing these images out there, you should have a way to tell that they were created with generative AI.

There is a law that will require the disclosure of generative AI when it's used in a healthcare setting.

There's one that will make robocalls disclose when they are AI.

I also think this is important, right?

So if you're a business and you get a call from what you think is a person placing an order or something, my understanding is it will now have to say, hey, by the way, I'm an automated tool.

And then finally, the last one I wanted to bring up, they really want to fight against AI misinformation and deep fake election content.

So they're going to ban deceptive ads that are created with generative AI, and they're going to require platforms to label election content that is created with or altered by generative AI.

So there's obviously a lot in there.

And I'm curious, in addition to the training data thing, if anything in there stands out as, oh, yeah, that actually seems important or useful.

It all seems sort of marginally important.

I think so much is going to come down to enforcement and like what happens to companies or people who actually do use generative AI in these ways that are now prohibited in California.

And we'll see, I guess, as these laws start to come onto the books.

I mean, it seems like the theme of the bills that Governor Newsom did sign into law is basically, if you are using AI for something important, you have to tell people that you're using AI for something important.

And that seems like a pretty good, sort of relatively uncontroversial type of regulation to pass.

For most of these things, it's not saying you can't use AI in healthcare or in robocalls.

It's just saying you have to identify that it's AI.

Yeah.

And I think that that is a good idea because there can be a lot of upside in the use of generative AI.

I think for creative projects in particular, whether that's, you know, maybe you do want to make some kind of art with it or you want to use it to create an ad.

I think that's okay.

But there are just many cases where we want people to actively be disclosing that.

Yeah.

And I think we should also back up a step and say why we care about the regulations that are being passed at the state level in California.

Because most people listening to this probably don't live in California.

These laws won't apply to them right away.

But I think there's this feeling and a reality that, you know, in the absence of strong federal regulation on AI, it's going to be the case that the laws that get passed in California and other sort of early jurisdictions will sort of set the template for how AI will be regulated more broadly.

All of these AI companies, you know, many of them operate in California, are based in California, many of their employees live in California, many of their customers are in California.

And so, in the same way that California's vehicle emissions standards kind of became the national standard, because you didn't want to be selling one type of car in California and another type of car in all the other states, I think the fact that California is such a huge market for AI kind of makes their state regulations the de facto federal regulations.

That's right.

They're sort of raising the floor for all of the regulations here.

And so it really does matter what gets passed here.

All right.

So that's the stuff that actually did get passed.

And again, I think this goes toward the harms that we are likeliest to see right now.

You know, we're in an election right now, right?

So there's a lot of stuff that I think the state just wanted to deal with as these harms are starting to come into view.

But then there was Senate Bill 1047.

And I think it's safe to say, Kevin, this was the most controversial AI bill that we saw this year, probably in any state.

Yes, this was the big one.

SB 1047 was all I could hear about for several months this year.

People in the AI industry were really worked up about it.

It was sort of the subject of furious lobbying and posting and attempts to sort of sway the state lawmakers and Governor Newsom on this.

It has been such a big controversy inside the AI industry because unlike these other bills, which sort of regulate the use of AI, this was a bill that attempted to say, what should the regulations be on the models themselves, at the model layer of AI?

So, Casey, for people who have not been following the sort of internecine drama of California state legislation, what is or was SB 1047?

So the main requirements of this bill, Kevin, were that one, it required safety tests for models that had a development cost north of $100 million and used a ton of computing power.

And it created some level of legal liability for these models if they were used to create harm.

Right.

And harms in this bill were sort of defined pretty specifically as things that would like cause, you know, more than $500 million of damages or include like loss of life.

So we're not talking about like a model that's, you know, giving people the wrong answers on their homework.

This is like things that would really create catastrophic harms out in society.

Yeah, we've seen a lot of AI catastrophes over the past couple years, like the launch of Google Gemini, but this bill would not have covered that.

So look, after this bill got introduced, it then got watered down, Kevin.

You know, like initially there was a plan to create a new state agency over AI.

They got rid of that.

The liability requirements were actually a lot higher in the first version of this bill.

And in fact, there was even at one point a requirement that derivative models would be part of the liability regime here.

So that if you took Meta's Llama and you fine-tuned it and did something wrong with that, Meta would be liable for what you had done with the derivative model.

Yeah, and people got really worked up about that, especially in the open source AI community.

They basically said this bill would kill open source AI, because who in their right mind would create an open source model and release the weights to the public if they could be held liable if someone down the road took that model and did a huge cyber attack with it or something like that?

So, look, I think a lot of those fears were sort of ginned up for the purposes of rallying opposition to this bill.

If you actually look at the bill, the version that was sort of voted through was much gentler when it came to these sort of derivative models.

But yeah, that was a big sticking point for a lot of AI companies and investors who didn't like the bill.

So that bill then passed the state assembly and the state senate in August, and it went to Gavin Newsom, who took a few weeks to think about it.

But this past Sunday on September 29th, he vetoed the bill.

Yeah, and that was not shocking if you had heard what Governor Newsom had been sort of saying about the bill.

He had been sort of tentative whenever he was asked about whether he was going to sign it or not.

So many people expected him to veto it.

But there was still sort of a glimmer of hope among some of the AI safety folks that I talked to that he would sort of realize that this bill on balance was a good thing and would sign it into law.

And then we would have some regulation of these huge AI models.

Right.

And based on the pressure that he had been getting, you might have assumed that when he vetoed the bill, he would have said what the companies lobbying him said, which is this bill goes too far.

But in fact, that was not what he said.

No, no, there was this very strange statement that he put out after vetoing this bill that basically made the claim that what was wrong with SB 1047 was that it wasn't restrictive enough.

He basically said, you know, this bill would only apply to the biggest AI models and it wouldn't apply to smaller models.

And smaller models can be just as harmful as big models sometimes.

I mean, I should say basically no one believes him on this.

Like, of the folks that I'm talking to, they're like, this is not actually why Governor Newsom vetoed this bill.

He vetoed it because he was getting pressure from all these big companies and lobbyists, and he didn't want to do anything that could hurt the tech economy of California.

But that is what he claimed, which is that the bill did not go far enough.

Right.

But, you know, we should say that he also said, we are not done with AI regulation in this state.

He put together a group of people, including Fei-Fei Li, who is an early pioneer in AI research, along with an AI ethicist, a dean at UC Berkeley, and said they're going to work together to continue coming up with new guardrails for AI.

I believe he also encouraged lawmakers to bring him another similar improved bill in the next session.

So I understand a lot of AI safety folks are really disappointed right now, but at the same time, I fully believe that California will pass more AI regulations next year.

Yeah, I think so too.

So right now, many people in the AI industry are celebrating having sort of successfully killed this bill that would have applied some regulations.

I think, you know, a lot of the way that they killed the bill was sort of by misleading people about what was actually in the bill.

If you actually look at it, it sort of wasn't as tough.

It was much more sort of lenient than previous versions of the bill.

And I think it actually was sort of a light touch way of regulating these huge AI models.

But put that aside for a second, I think there's a potential that the tech industry will regret having killed this bill.

Why is that?

So I think there are two reasons.

One of them is this: we talked about these sort of two approaches to regulating AI, either at the model level or at the application level.

And what has happened in this most recent legislative session is that California passed a bunch of use-based laws about how AI models can be used, and the one bill that would have applied at the kind of foundational model level did not pass; Governor Newsom vetoed it.

And that is what the tech industry, the AI industry wanted for the most part.

But I actually think there's a world in which the use-based regulation of AI becomes much more annoying for them to deal with.

How so?

Because this is what we've seen happen in Europe, right?

Europe did take the kind of use-based approach to regulating AI with their AI Act.

And now all of the American AI companies hate doing business in Europe because it's a patchwork of different regulations.

There are 40 different rules that might apply based on how you're using your AI system.

You need to hire a bunch of compliance people and lawyers to sort of review everything that goes out to make sure it's not violating any of those dozens of different rules.

And I think there may be a point where the AI industry wishes that what it had gotten instead of this patchwork of little use-based regulations was sort of one or a handful of big, broad regulations that applied only to the companies that have the most money and the most resources and the most compliance people and the most lawyers to sort of sign off on all this stuff.

So that's one argument.

The other argument for why I think killing SB 1047 may be something of an own goal for the tech industry is that regulations around new and emerging technologies are typically written in the wake of crisis, right?

There is something that happens where, you know, people die or there are sort of catastrophic harms, and lawmakers rush in to write some bills.

The quality of those bills is generally not super high, but that's because what lawmakers are trying to do in those moments of crisis is just put a stop to the crisis.

I think what the tech industry had in SB 1047 was a chance to create regulation and rules for a new technology when there was no crisis that they were dealing with.

There was nothing sort of immediate.

They had months to kind of work out their objections, to propose amendments.

It was sort of a peacetime regulation.

And I think what may happen now is that we will get a broad AI regulation that applies to AI companies training these huge models, but we will get it at a time that is much less favorable to them because these AI systems will improve.

Something will go wrong.

There will be some huge cyber attack or some huge incident involving one of these AI systems.

Lawmakers will scramble to get some regulations on the books.

And I think the AI industry will be much less happy with the regulations that come out of this process.

Yeah, I think that that is a really smart and interesting point.

You know, I have to say that I have been of mixed mind about this bill, because on one hand, I do want to see harms stopped before they come to pass.

And on the other hand, I'm not convinced that California lawmakers really knew what harms they were solving for here, because I still don't know that we have a very clear line of sight from the models we have today to the catastrophes that everyone is predicting.

At the same time, Kevin, as I mentioned at the top of this segment, from the beginning, the founders have said to us, these models that we are making can, we think, eventually cause great harm.

And if that is the case, if you take them at their word, when lawmakers come along and say, okay, we're going to believe you and we're going to hold you legally liable if you cause great harm, and they throw up their hands and say, well, no, wait, hold on a second, you know, let's not get carried away here, there is something that I think damages their credibility. And I sort of feel like both things can't be true, right? That's what I'm saying here.

Yeah, and I think in this weird way, the fight over SB 1047 has exposed something really important and kind of counterintuitive, which is that a lot of the people who will say, I'm an AI optimist, are also the people saying this stuff will never get so powerful that it poses any threat to human life or to society or any of these catastrophic harms that people are worried about.

It is actually the doomers who are saying that this technology is going to be incredibly powerful and useful and maybe scary because it is improving at such fast rates.

So you have kind of this interesting arrangement where the people who are the most optimistic about the actual capabilities of the technology are also the people who are taking the risks more seriously.

Yeah, absolutely.

I want to make one more point, which is that I think that we've sort of been handed a false choice here, which is, well, do we regulate the uses of AI or do we regulate the models themselves?

And I think in practice, the answer is going to be both because we do this all the time, right?

We regulate guns very lightly in this country and we regulate the uses of those guns, right?

And I think something similar is just inevitably going to happen with these models.

And so to your point, yes, the industry should be thinking about what reasonable liability ought we have in a situation where these models are insanely powerful because they're never going to get away with only regulating it at the level of the application.

Yeah, I think that like regulation is coming.

AI is just too powerful and we regulate every other industry that has that kind of power.

And so AI is inevitably going to be regulated.

I think the question for the industry is how much regulation can they live with?

You know, I was at an event last week with a bunch of sort of lawyers and compliance people.

The Folsom Street Fair?

No, not that one.

This was an event at Berkeley.

And their point was basically, whatever happens, we would like for it to happen at the federal level.

Because, you know, at least if you regulate AI at the federal level, then there's sort of one law or one set of laws that companies have to follow. They have clarity. They know, like, I can use the same AI in California as I can in Texas as I can in Florida, and they don't have to sort of hire a bunch of people to cross-check all of the various state laws. And so their ask was basically, whatever happens on AI regulation, it should happen at the federal level. And I think that's something that I support, too.

I think what we're talking about now is a world in which the federal government does not do anything about AI regulation.

And so it's up to the states like California to do it instead.

That's the world we live in.

Well, let me end on this: which thing do you think made us safer from AI?

The regulations that Governor Newsom signed into law in California over the past month, or the fact that people just keep leaving OpenAI all the time, leaving it in apparent disarray?

I mean, look, I think that if you are a person who worries that AI is moving too quickly and you want it to slow down, you probably don't mind all of this drama that is going on inside the AI industry, because I think probably one effect of that is that it does actually slow things down if you're constantly losing co-founders and research leads.

And so maybe that's a good thing.

Maybe it's the OpenAI people leaving.

I think that that's true.

I think that over the past year, it feels like the main innovation in AI in Silicon Valley has just been people leaving OpenAI to start AI companies.

And there's just, it just takes time.

You know, it takes time to ramp those companies up and do your hiring and, you know, create your little wiki doc in Notion for everything.

So anyways, Kevin, it's a fascinating discussion.

I'm sure we'll have a lot more to say.

But in the meantime, if you are going to make deepfakes, don't do it in California.

When we come back, we're having a baby.

Or at least we would if the technology was good enough.

And it's not, but it's getting there.

We'll talk about the latest in fertility tech.

In business, they say you can have better, cheaper, or faster, but you only get to pick two.

What if you could have all three?

You can with Oracle Cloud Infrastructure.

OCI is the blazing fast hyperscaler for your infrastructure, database, application development, and AI needs where you can run any workload for less.

Compared with other clouds, OCI costs up to 50% less for compute, 70% less for storage, and 80% less for networking.

Try OCI for free at oracle.com/nyt.

Oracle.com/nyt.

Over the last two decades, the world has witnessed incredible progress.

From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.

Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.

Invesco QQQ, let's rethink possibility.

There are risks when investing in ETFs, including possible loss of money.

ETFs' risk is similar to those of stocks.

Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.

Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in prospectus at Invesco.com.

Invesco Distributors, Incorporated.

At Sutter, caring for women of all ages never stops because we know women have unique needs when it comes to health care.

That's why our team of OBs and nurses are committed to building long-term relationships for lifelong care.

From prenatal support to post-menopause guidance, we're here for every woman at every stage of her life.

A whole team on your team, Sutter Health.

Learn more at sutterhealth.org/womenshealth.

Casey, we're going to talk today about something that has been a very hot topic in Silicon Valley recently, which is babies.

Kevin, I don't know if I'm ready to have a baby with you.

I'd really prefer to take it a little more slowly.

We already have the podcast.

That's true.

That's our baby.

No, we are talking today about fertility technology because this has become a big topic within the tech world.

For years, there has been this conversation in the tech world about what's being called pronatalism, which is this belief spread by Elon Musk and others that declining birth rates in the U.S. and elsewhere are a big threat to the future of civilization.

And we should all be having way more children than we are.

And people are sort of taking this banner in the tech industry and saying, well, maybe some technology could help us here.

Yeah, well, and also just, you know, infertility is a huge problem for a lot of people, right?

A lot of people want to have babies and cannot.

You know, my boyfriend and I have been trying to get pregnant for many months now.

We're just getting absolutely nowhere.

So I'm excited that Silicon Valley is finally, you know, paying attention to this issue.

Yeah, I mean, this has been an area where investors, tech founders, startups are just spending an inordinate amount of time recently.

Investors have poured hundreds of millions of dollars into startups working on technology to help people conceive and have children.

These are things like very specific types of genetic testing that can test a whole embryo's genome rather than just for a few sort of specific things, sperm freezing and even sort of these longer-term moonshots like artificial wombs and something called IVG, which would basically allow you to make a gamete from non-reproductive cells, to basically take some other cells from a body and use that to create a child.

Which, among other things, would let same-sex couples create a child that contains their DNA.

So, you know, as we know, fertility and babies and birth rates, they are not just a topic of conversation in Silicon Valley, they are also playing a role in the presidential election as people debate, you know, what kinds of restrictions and laws there should be about women's reproductive rights and health.

So to talk about this subject, I wanted to bring in Julia Black.

Julia is a reporter for The Information, and she wrote a great piece a few months ago called Dawn of the Silicon Valley Super Baby, which was about sort of all this investment and energy going into fertility technology right now and what it would mean for the pronatalism movement and what it would mean for fertility as a whole.

Let's bring her in.

Julia Black, welcome to Hard Fork.

Thank you so much for having me.

I'm a big fan.

Oh, thanks so much.

So, can you just start by telling us how you got interested in this world of fertility tech?

Yeah, this started for me in a sort of unexpected way in 2022.

I got a tip that kind of changed my life, which was that Elon Musk had more kids than the public realized, and some of them were with people who had not been previously disclosed.

So I went out searching and found some secret twins that he'd had with one of his employees, Shivon Zilis.

And that just kind of sent me down this rabbit hole for the last couple of years where I've learned about Silicon Valley's interest in fertility tech in particular.

And then I've learned about some of the technologies that are coming out of that interest.

And let's try to sketch out that interest a bit.

So if you are a pro-natalist, what is concerning to you about declining birth rates?

So I think a lot of economists would agree that there is some concern around the fact that the majority of developed countries now are below what's called the replacement rate.

And when you structure that into an economy, something happens that they call a flipped demographic pyramid.

So you've got more old people, fewer young people, and young people are the people who drive the economy, who are putting into social security, who are taking the jobs, and old people are more of a weight on society.

So that's the general concern at its most basic.

I think a lot of people take this in some pretty wacky and sometimes dangerous directions.

The white supremacist movement has become particularly interested in these demographic shifts.

And some people, I think, would argue that it's not just that we all need to have more babies, but that certain types of people should be having more babies, which translates to white, educated, well-off people.

Right.

So there's like this very vanilla version of this, which is like having babies is good for the economy.

It's good for economic growth.

It's good to make sure that there are workers to take care of us as we age.

And then there is a sort of more racist, more eugenics-oriented version of it.

And yeah, so it's a lot in here.

Yeah, definitely.

I think that this fertility tech as a field has gone from this totally ignored, underdeveloped realm of technology to something that's finally attracting investor interest.

And yeah, it's also attracting some weird ideological interest.

So there's a few different elements bundled up in there.

Yeah, there was a quote in your piece that was sort of attributed to an anonymous Meta engineer who said something like, no one is having children naturally anymore.

And that's obviously not true.

Some people are still having children naturally, but it did sort of give a sense of like how pervasive this has become in the Bay Area where we live and in the tech world more broadly.

Totally.

And, you know, the information where I write, our audience is all Silicon Valley all the time.

So when you're talking about our small niche world, that's actually really true.

And that's the reason we put that quote in the story, because it really kind of exemplified what we're hearing across the board, which is like, this is like, you know, the way that a trendy purse like takes off within a certain niche group of people.

Like fertility tech is taking off.

Like these people are going to dinner parties, they're telling their friends, oh, we use this new pre-implantation genetic testing service.

You got to try it.

And they're trying it.

And then they're telling their friends.

And it's like within this very small and non-representative community, yeah, it really has taken off.

Yeah.

Your piece made a point that I hadn't seen before about the fact that crypto entrepreneurs, people who are interested in cryptocurrency, seem especially interested in fertility tech.

Brian Armstrong, the CEO of Coinbase, has invested in Orchid, one of these startups that is doing sort of embryo genetic screening.

Vitalik Buterin, the co-founder of Ethereum, has also invested in Orchid.

So what is the crypto-fertility connection?

Oh my gosh, this has been like such a head scratcher and obsession of mine.

And I've been trying to figure that out.

The best understanding I can come to is that the crypto world is also really interested in something called decentralized science, DeSci.

It's kind of taking off in places like San Francisco and New York, but also in special economic zones in Honduras and El Salvador. Honduras has a couple of these, like Próspera, where people are experimenting with new scientific techniques that might not pass muster with FDA regulation.

There's like a big backlash to the FDA in this crowd.

So that's my best guess is that like this is a world that overlaps very much with that kind of Bryan Johnson longevity obsession.

He's the guy who does the blueprint method.

Yeah, basically all of his energy and money go into de-aging himself.

Right.

Exactly.

So I think that another particularity about this kind of Silicon Valley audience, and maybe more so the crypto crowd, is like they just want the best of the best technology now.

They want to live in the future now.

And so they hear about these technologies that might be possible, but might feel far off.

And, you know, they're going to do everything they can to optimize.

And, you know, optimizing their own fertility or reproductive process is like, why not?

So I think especially with the genomic stuff, and especially as AI has helped genomics really advance quite quickly in the last couple of years, that particular crowd is quite drawn to these very futuristic possibilities.

I mean, I kind of get it.

Like I was telling Casey before we started this interview, my wife and I have gone through fertility treatment, and it seems like it should be a way to sort of bring precision and control to the sort of random process of conceiving a child.

But there's so much about it that is still kind of guesswork and let's try this thing and see if it impacts this thing.

And like, it's frustratingly inexact.

And I can see if you are a person who's spent your whole career trying to sort of optimize systems and eliminate chance and kind of program things, like this technology would seem like a way to sort of bring order to what could otherwise be a pretty frustrating and chaotic process.

Yeah.

And I think I'd also note that there's a whole gamut of new fertility related technologies that are coming to market.

Is that like a gamete pun that you just did?

God.

I did have gametes in my notes here.

I was going to talk about them later.

I was just going to say that there's this like spectrum of types of technology, and some of them are really far out there, really advanced, like artificial wombs and something that a company called Conception is working on to try to make gametes out of stem cells, which means like two men could, in theory, take skin cells, use them to create an egg and sperm, and reproduce using those cells.

Like it's really far out there.

That stuff is nowhere near possible at the moment.

But I'm just saying, like, you are seeing people in Silicon Valley working across the board on, you know, current possible things versus like totally out there futuristic technologies.

Yeah.

And it's sort of far out there in the sense that this stuff does not exist in any form that consumers can use, right?

But there are serious people investing in it.

Like Sam Altman is someone who has invested in this kind of, what they're calling IVG, this sort of using non-reproductive cells to essentially grow babies, right?

Yeah.

And I think another name I would bring up is George Church.

He is like an absolute pioneer of the genomics field, and he's got a hand in a lot of these companies.

And I had the chance to talk to him, and he was actually the one who kind of hinted to me that artificial wombs, which he's currently developing for his other company, Colossal...

Which is trying to bring back the woolly mammoth, right?

Exactly, exactly.

Which, you know, you kind of scratch your head and you're like, what's the market value of bringing back the woolly mammoth?

But then you watch.

I think somebody's going to try to command an army of them to take over some small nation.

Yeah.

I'm sure Elon Musk would be interested.

So, Julia, let's talk about these actual tools that you wrote about, some of the fertility tech that is on the market or will soon be on the market.

Talk to us about Orchid.

Yeah, so Orchid is a company I wrote about in July with my colleague Margaux MacColl, and they're doing something called PGT-P, which is pre-implantation genetic testing for polygenic disorders.

This is not totally new to the scientific realm.

In fact, we've been testing for things like Down syndrome, which is a pretty simple disorder that's easily detectable early on.

Then we also have done testing for monogenic disorders.

That's things like cystic fibrosis.

But now polygenic disorders start to get really complex.

This is stuff like schizophrenia and bipolar, diabetes.

And so what they're claiming to detect for is the risk factor for these diseases.

There is some speculation in the scientific community about how much of this is really possible.

And yet, this company has got a lot of investment and has now reached a lot of consumers and is expanding actually to be nationwide.

They have some partnerships with nationwide clinics.

So, yeah.

How much does it cost to get your embryos tested this way?

It costs $2,500 an embryo.

And that's on top of what you're already paying for IVF, right?

So, this is not something that's sort of like mass accessible yet.

Exactly.

But it is part of the core American competency of making healthcare more expensive for everyone at all times.

We've got that.

Kevin, for the price of a mid-range MacBook Pro, you could know everything about the genetic predisposition of your future child.

Seems like a small price to pay.

Yeah.

Well, of one embryo.

And of course, you're not just looking at one embryo.

The idea is that...

How many embryos are y'all implanting in a typical situation?

Well, you're probably just going to implant one, but the idea is that you want to test a range.

So let's say a couple during the IVF process creates eight viable embryos.

So then this company is offering to test each of those for $2,500 a pop.

So it adds up.

Yeah.

So then they're going to look at those tests and start to compare them and say, okay, this one is more predisposed.

They're not saying that this kid's going to have this disease, but more likely to have type 1 diabetes.

Whereas this kid maybe is more likely to have bipolar disorder.

So it gives you this like chart, which is supposed to be this risk picture of your child's future.

So actual, like, babies have been born using this pre-implantation genetic screening, right?

This is not theoretical.

There are children out in the world today who were born after being tested this way.

Correct.

So as embryos, they were tested, and then they were the embryo selected to be implanted.

How many babies has this been performed on?

Like, how widespread is this?

ORCID wouldn't give us a number, but I think a lot of parents are especially interested in this when they have genetic disorders that run in their families.

One really common one is the BRCA gene, which is responsible for breast and ovarian cancer in many cases.

You know, that's one use case example.

Something else we did discover through this piece is that Elon and Shivon, the parents who I originally discovered two years ago, did use this.

I don't know if they were using it to test for complex hereditary disorders.

I did speak with a few customers, as did Margaux, my colleague, who did tell us that IQ testing was something that they had been offered by the company.

The company did not confirm this themselves, but this is something we heard from a few people.

What does that even mean?

Like you can, there's some sort of gene that if it's present, you're likely to have a higher IQ as an adult than an embryo without that gene.

So remember that part when I said the scientific community is very skeptical of some of these claims?

That would be the chief one.

I think the idea that you can detect intelligence in an embryo, from what is a very complex picture of a combination of many genes, is very much up for debate.

So, I can understand why parents who had had that history with cancer that you just mentioned would want to know if their future child was going to be at risk for that and would be willing to, you know, pay a high price to try to avoid that.

At the same time, I imagine that there are other more kind of nice-to-have features that these parents might be testing for, or things that sort of stray a little bit closer to the eugenics line that we were, you know, talking about at the top of the interview.

So are these services able to sort of go in a bit of a darker or more concerning direction?

Or what have you heard about maybe potential misuses of the screening technology?

Yeah, I mean, I do want to be really clear.

If you go to Orchid's website right now, they lay out very clearly the services that they claim to offer.

I think it's something like 13 factors.

And again, it's the things like diabetes, bipolar risk.

They're not claiming publicly to detect IQ.

For whatever reason, several different customers brought that to us and said that it was part of the package they received.

Yes, though, I do think a lot of people, a lot of bioethicists, would argue that we enter this slippery slope territory where, you know, it's one thing, as you say, to make sure that your child doesn't die of some horrific rare disorder.

It's another thing when you start to get into the realm of, you know, characteristics of their appearance or their intelligence or behavioral traits.

And I think that even something like bipolar or schizophrenia is inching more towards, like, behavioral traits.

In fact, when I wrote a piece two years ago about a different company called Genomic Prediction, I spoke with this couple, Simone and Malcolm Collins, who again were doing some of this decentralized science DIY stuff.

They were taking the data that they got from one company, plugging it into another genetics company that was actually not supposed to be for embryos, but they were able to upload the data as if it was a person.

And they showed me their spreadsheets, and like, it really was just wild how many factors they claimed to be detecting.

They were talking about things like brain fog and propensity for headaches and, you know, mood disorders of various kinds.

And so, yeah, some people are starting to inch more and more into that territory of like these very complex characteristics that make up who we are as human beings.

And yeah, you would hate to see it fall into the hands of someone who wanted to detect for blonde hair and blue eyes.

Right.

Are they able to detect the propensity of an embryo to start a podcast later in life?

Yeah, no one's going to be able to do that.

We definitely want to keep that out of the gene pool.

So I just have a question about the politics of all this, right?

Because this is, we're at a moment where there's a lot of discussion on the national political stage about reproductive rights and abortion rights.

There are some Republicans who don't even think that we should be doing IVF, that that's sort of a bridge too far.

How do the people who are pushing for this kind of investment in fertility tech square their belief that this technology should be able to exist and be able to help people have more children with the very real possibility that some elected officials want to make this kind of thing illegal?

I'm thinking in particular about Elon Musk, who is backing Republicans, who would make it much harder for women to access reproductive health care of all kinds, but also wants there to be this population boom because people are able to have more children.

So how do they square that belief?

I think we're watching that play out before our eyes right now.

I mean, a phrase I think of a lot in covering Silicon Valley's kind of move towards the right is strange bedfellows.

Like you are getting these alliances that in so many ways don't make any sense.

A couple of weeks ago, I was at a conference, a tech conference in San Francisco, and they had the Heritage Foundation like up there with these AI founders.

And like, they did, in fact, have a panel on fertility, surprise, surprise.

And there are just so many incompatibilities that I really don't know how they're going to square them.

The IVF question is, of course, the main one that is going to come up in terms of very tangible policy very soon.

Yeah, I don't have much of an answer except, like, they're going to be in for a rude awakening, I think.

Yeah.

I mean, I guess the question that a lot of people have about this topic is like, how far are we actually from the kind of science fiction sort of Gattaca scenario where like you are a well-off person, you want to have a child, you kind of like go into the fertility clinic and you just kind of get like a menu.

And it's like, well, do you want your child to be six feet tall?

Do you want your child to have a high IQ?

Like that'll be another $500.

Like how far are we from the scenario in which people, or at least, you know, wealthy people with access to good reproductive health care, will have the ability to kind of select traits for their offspring.

Yeah.

So again, I think this comes down to two questions.

One is when is the technology going to get there?

And I think a lot of people in this field would argue sooner than you think.

And then the other is when is society going to get to a place where we actually want that and where our lawmakers actually make that possible?

And already it seems like it's starting to become a bit of a status symbol.

Like I think your story gets at this, where it's like friends love bragging to their friends that they just spent $20,000 on genetic screening for their embryos.

And what, you didn't do that for your embryos?

You know, so to me, I feel like part of that Gattaca world has actually already arrived here, Kevin, in at least some fashion.

Totally.

Well, and I think what people don't maybe realize unless they've gone through this kind of fertility process is, like, there is already a kind of report card that you get back when you have, you know, embryos, and they sort of tell you, like, this embryo has this grade. There's already a lot of choice on the frontiers of fertility today.

But what we're talking about here is sort of a very different possible future, in which it's not just screening out sort of the most debilitating and harmful genetic conditions, but it's truly getting down to the level of, like, you know, how tall do you want your child to be, do you want them to have a higher risk of bipolar. So something like that feels just fundamentally different from what exists today.

But maybe people, you know, 20 years ago were saying the same thing about the testing that now seems pretty commonplace today.

I mean, I will say this.

Women's bodies have always been this political battleground that attracts controversy, kind of no matter what.

There have been ethical debates over everything from IVF to epidural use.

So, you know, on one hand, like if it's going to have to do with women and reproduction, it's going to be controversial.

On the other hand, there are some very real ethical concerns here that should be addressed and should be regulated very thoughtfully. And I fear that, like with so many things happening with tech, like with AI, as we've all seen, the regulators are probably behind on this, probably not working at the same speed as Silicon Valley innovators. So, yeah, they probably need to do some catch-up.

Yeah, well, Julia, thank you so much.

This is really fascinating, and it's a story I hope we'll keep track of as it continues to develop.

Yeah, it was great to meet you.

Yeah, thank you so much.

When we come back, OpenAI's big fundraise and other big updates from the past week in technology news.

In business, they say you can have better, cheaper, or faster, but you only get to pick two.

What if you could have all three?

You can with Oracle Cloud Infrastructure.

OCI is the blazing fast hyperscaler for your infrastructure, database, application development, and AI needs where you can run any workload for less.

Compared with other clouds, OCI costs up to 50% less for compute, 70% less for storage, and 80% less for networking.

Try OCI for free at oracle.com/nyt.

Oracle.com/nyt.

I don't mean to interrupt your meal, but I love Geico's fast and friendly claim service.

Well, that's how Geico gets 97% customer satisfaction.

Yeah, I'll let you get back to your food.

Uh, so are you just gonna watch me eat?

Get more than just savings, get more with Geico.

At Sutter, breakthrough cancer care never stops.

Our teams of doctors, surgeons, and nurses are dedicated to you from day one of your diagnosis.

Our 22 cancer centers deliver nationally recognized care every day and every step of your way.

And we're located right in your community, ready to fight by your side.

A whole team on your team, Sutter Health.

Learn more at sutterhealth.org/cancer.

Well, Casey, from time to time, we like to update our listeners on some stories that we've covered in the past that have had some new developments, in a segment we call System Update.

What's happening in the news, Kevin?

So the first system update is that OpenAI, a company we've talked about once or twice on this show, has just completed a $6.6 billion fundraising deal that nearly doubles the company's valuation from just nine months ago.

The new round was led by Thrive Capital.

Lots of other participants in this fundraising round: Microsoft, NVIDIA, SoftBank, and MGX, which is the sovereign wealth fund of the United Arab Emirates. But notably not Apple, which backed away and declined to invest in OpenAI's most recent round, according to reports.

Yeah, and you know, there are many reasons why companies decide not to invest in things like this.

Maybe they didn't like the financials.

Maybe they had concerns about the product roadmap.

But of course, you can't help but wonder whether Apple looked at the steady stream of departures out of OpenAI over the past year and thought, maybe we don't want to put our eggs in that basket.

So Casey, you've been covering tech startups and fundraising for a long time.

How does $6.6 billion in fundraising at a $157 billion valuation compare to what other startups are raising?

So that is, we believe, the largest venture capital fundraise of all time.

OpenAI had previously raised $10 billion, but that was a sort of multi-year commitment.

So it's thought of somewhat differently, and it's a huge amount of money.

At the same time, Kevin, your colleagues Mike Isaac and Erin Griffith at the Times reported last week that OpenAI is expecting about $3.7 billion in sales this year and $11.6 billion next year.

So assuming that is the case, that is an insanely high growth rate, and it really only values the company at around 15 times or so its forward revenue.

And believe it or not, in Silicon Valley, companies often have much crazier multiples, right?

They're raising at a 50 or a 100x multiple on their expected revenue.

So as big a fundraise as this is, it's weirdly kind of in line with typical Silicon Valley valuations.
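To spell out that multiple using the figures above: a $157 billion valuation divided by roughly $11.6 billion in projected next-year sales works out to about 13.5 times forward revenue, which is in the "15 times or so" ballpark.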

Yeah, I mean, I think the more relevant fact here about OpenAI's financials is this company, despite having had huge success with ChatGPT and making billions of dollars in revenue, is still burning through cash at just a phenomenal rate.

So they're projected to lose something like $5 billion this year.

And that's in large part because it's just so damn expensive to keep building and training these models and paying all the people to do that.

Yeah.

And, you know, I went to a press preview for their developer day this week, Kevin, and they had some pretty lavish charcuterie boards that they had set out for us, a variety of beverages, and some macarons, which I'll say were delicious.

So with $6.6 billion now in the bank, I guess they will be able to afford much better and bigger charcuterie boards.

They can also probably afford a much larger settlement with the New York Times.

Oh, yeah, I almost forgot to make our mandatory disclosure, which is that The New York Times Company is suing OpenAI and Microsoft over copyright issues related to the training of their models.

But, you know, this sort of goes to one of the big questions swirling around this company right now, which is: is all of this growth ultimately going to result in big profits for their investors?

Or are they just kind of burning cash until they can't get any more cash?

You know, we've seen companies before that have been unprofitable for huge stretches of time.

Think about Amazon, or more recently Uber, which was unprofitable for most of its existence and then kind of found ways to become more profitable over time.

Or is OpenAI the kind of company that will just keep burning cash until they run out?

Well, it's clear that they have found some real consumer demand and have been able to answer that with products that people really like, right?

Like ChatGPT is probably the most successful new consumer brand launched on the internet since TikTok, I would say.

And more and more corporations are signing up to use its APIs and build custom enterprise versions of its software to perform various tasks, some of which can save those companies a lot of money.

So there is something very real here.

And we know that while generative AI is extremely expensive to run (there are big computing costs, energy costs, and so on), the growth rate suggests that at some point these numbers will pencil out, or at least that's what I think.

Yeah, I think so too.

And I think one other thing that we should talk about in relation to this fundraising deal is that OpenAI is apparently telling employees that they may now be able to do what's called a tender offer for their stakes in the company.

That's where they make you an offer, but they say it in a really sort of nice, gentle way.

They say, baby, what if we just gave you a few dollars for those sweet, sweet shares?

Exactly.

That's a tender offer.

Yep.

A tender offer actually means that employees could potentially cash out their shares in this company.

And as we know, OpenAI and other AI companies sometimes pay people phenomenal amounts of money to work there.

And so a tender offer could be pretty meaningful.

All right.

Well, that's enough of an update about OpenAI.

What else is in the news, Kevin?

So this next system update is about Reddit, a company we've talked about in the context of last year's big protests by moderators.

As you may remember, there were some changes to Reddit's API pricing for developers.

Moderators got very upset about this.

A bunch of them took their subreddits private in protest of the company's decisions.

It sort of threatened to hurt the site as a whole.

And as of this week, Reddit moderators, according to these new rules, will not be able to change the public or private status of their subreddit without first submitting a request to a Reddit admin.

This policy will apply to all community types on Reddit, and it is basically trying to take away one of the ways that users and moderators were able to stage a protest against the company last year.

Right, because if all of a sudden most of the subreddits are private, it drives traffic away from Reddit, which means less advertising revenue for them.

And so, you know, this was a really novel form of protest, I think, that Redditors essentially invented.

And the company has now gotten around to saying, yeah, we hate that and you can't do it anymore.

But you know what I love about this story so much, Kevin, is that in a way, this like mirrors the history of free speech in America, which was like, you know, at first it's like, go ahead, stage your protests, gather anywhere, say whatever you want.

And now it's like, oh yeah, you can have a protest, but you do need to apply down at City Hall.

And we're actually going to put you in the designated free speech zone and we're going to pelt you with tomatoes while you hold up your protest signs.

So Reddit is just sort of finally getting around to the same idea that many American municipalities have had, which is that despite what it says in the First Amendment, we hate free speech in this country.

Yeah.

Yeah.

Despite all your rage, Redditors, you are still just a rat in a cage.

That's actually kind of a catchy lyric, Kevin.

You ever thought about setting that to music?

I'll keep it in consideration.

Okay, next item on our system update.

This one is about deepfakes, and it's from a story in The New York Times with the headline "Deepfake Caller Poses as Ukrainian Official in Exchange With Key Senator."

This is a story about something that happened to Senator Benjamin Cardin, who is the chairman of the Senate Foreign Relations Committee, and it is a wild story.

Oh my gosh, yes.

Like this is sort of like if a Mission Impossible movie came out today, this is the sort of scene that you would expect to open it.

So this story was first reported by Punchbowl News, but the Times learned about it from an email that was sent by Senate security to lawmakers' offices and started to piece together some of what happened here.

Senator Cardin got an email from someone claiming to be Dmytro Kuleba, Ukraine's former minister of foreign affairs, asking him to meet over Zoom.

He got on Zoom, took this meeting, and saw someone who looked and sounded like Dmytro Kuleba, but this person started asking weird questions like: do you support firing long-range missiles into Russian territory?

The senator reportedly ended this call and reported it to State Department authorities, who confirmed that this was indeed a deepfake and not really the former Ukrainian foreign minister.

So, you know, this is something that people have speculated about for years, written sci-fi about.

But look, I think we're getting to a point now, Kevin, where like with the family members in your life or like close business associates, it's actually time to come up with a code word.

Totally.

Like have a code word.

And if you get invited to a Zoom and the conversation takes a sort of suspicious turn, and you know, certainly if ever any one of my friends asked me about long-range missiles into Russian territory, I would get suspicious.

And that's when you say, hey, what's the code word?

And if they don't know it, well, you know that you've been deepfaked.
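For the curious, here is a toy Python sketch of a slightly fancier version of that idea: a shared secret agreed on in person, verified with a fresh challenge so the secret itself is never spoken aloud on the possibly deepfaked call. This is an illustration of the general challenge-response technique, with made-up names and a placeholder secret; it is a sketch, not a vetted security scheme.

```python
import hashlib
import secrets

# Both parties agree on a shared secret in person, ahead of time.
SHARED_SECRET = b"agree on this in person"  # placeholder for illustration

def make_challenge() -> str:
    # The suspicious party reads a random nonce aloud on the call.
    return secrets.token_hex(8)

def respond(challenge: str, secret: bytes) -> str:
    # The other party answers with a short hash of nonce + secret.
    # The secret itself is never spoken, so an eavesdropper can't capture it.
    return hashlib.sha256(challenge.encode() + secret).hexdigest()[:8]

challenge = make_challenge()
print("Say this nonce on the call:", challenge)
print("Expected answer:", respond(challenge, SHARED_SECRET))
# If the caller can't produce the expected answer, hang up.
```

The point of the random nonce is that someone who records one call can't replay the same answer on the next one, which a plain repeated code word can't guarantee.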

But it is pretty wild to think that we have already arrived at the point where U.S. senators need to be on the lookout for this sort of thing.

Yeah, I thought we probably had another year or two before this would start to happen because among other reasons, like the video deep fake technology for like real-time video conferences just isn't that good yet.

But, as we know, people often pay only fleeting attention to Zoom calls.

Maybe you're doing something in another window.

Maybe you're not looking super closely at the lips and the mouth of the person you're chatting with.

And so this might just totally fool you, even if you are someone sophisticated who knows that this threat is out there.

Yeah.

And, you know, let me throw something else into the mix.

So one of the things that OpenAI announced at their developer day this week is the availability of the API for their real-time voice tool that made such a splash earlier this year when, of course, you'll remember that people thought it sounded a little bit too much like Scarlett Johansson.

Well, now other companies are going to be able to come and use those voices for their own purposes.

All you need in addition to that is some of the voice cloning technology that companies like ElevenLabs are working on.

All of a sudden, you're going to have extremely plausible phone calls that can do this exact sort of thing, with the added bonus that you don't have to create a realistic visual depiction of the person.

So I just want to make sure that we keep paying attention to this stuff because people are already running all sorts of scams with this and it just seems like it's going to keep getting worse.

Yeah.

Next system update.

Sonos has a plan to earn back your trust.

And here it is.

This is from The Verge.

As you will remember, back in May, Sonos got into a lot of trouble because it replaced its app, which allowed you to control your internet-connected speakers, with a new and much worse app that users said was basically impossible to use.

Among the features that were either missing or broken were things like local library support, alarms, queue management, whatever that is, and even some accessibility options.

Queue management is like the order of the songs that are playing, which is hugely important.

I'm constantly adjusting my queue.

Oh, well.

You know, because sometimes you want to hear a song a little earlier.

Sometimes you want to hear it later.

So I'm not a Sonos guy, but this was a big deal for you because you are a Sonos guy, and this was a big change that you did not like.

I have made a massive investment in Sonos, Kevin.

And when I heard they had a plan to earn back my trust, the first thing I thought was, did you think I trusted you?

Because I never trusted you, Sonos.

I've had my eye on you for a long time.

You know, the rollout of this app, Kevin, also sort of made me giggle a little bit, because while everyone else was complaining about the new app and the features it didn't have, I was like, the old app never worked for me either.

Like these people, here's the thing, they make wonderful hardware.

And when everything is firing on all cylinders, and when you actually manage to connect the Sonos to your Spotify and your house is rocking, there's nothing like it.

The problem is it is a very inconsistent system.

And everyone has been hoping that eventually they'd get their act together and finally bring all the pieces together.

And then we could just enjoy the very good hardware that they've created.

It does not seem like it should take years of work and many talented engineers to make a speaker that connects to the internet in your house and plays the songs from your Spotify.

Like that seems like a tractable technology problem, but it appears to have thrown this company into a state of chaos.

It does, but you know, look, I do think that there are real technical challenges with synchronizing the audio across multiple speakers.

I actually think this is where Sonos gets into the most trouble is if you're in a relatively large space, you have a limited Wi-Fi connection, you have a stream of music, and you need to route it to, let's say, five, six, seven speakers, and all of the music has to be in sync at all times.

That is a technical challenge.
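To make that concrete, here is a minimal Python sketch of the standard trick for this kind of problem: rather than telling each speaker "play now" (network delay varies per speaker), you agree on a start timestamp in the future against a shared clock. This is an illustration of the general approach, with threads standing in for speakers; it is not Sonos's actual protocol.

```python
import threading
import time

def speaker(name: str, start_at: float) -> None:
    # Each "speaker" waits until the agreed start time on the shared
    # clock, so earlier variation in message delivery doesn't matter.
    while time.monotonic() < start_at:
        time.sleep(0.001)
    print(f"{name} begins playback at t={time.monotonic():.4f}")

# The coordinator schedules playback 500 ms out, long enough for the
# schedule to reach every speaker before the deadline.
start_at = time.monotonic() + 0.5

threads = [
    threading.Thread(target=speaker, args=(f"speaker-{i}", start_at))
    for i in range(5)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Across real devices the hard part is that there is no shared time.monotonic(): each speaker keeps its own clock, so you first need NTP-style clock synchronization over a flaky Wi-Fi network, which is roughly the failure mode Casey is describing.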

And they have made some strides in fixing it, but then along comes this app debacle.

And well, yeah, they've been having a terrible year.

Well, Casey, you and other Sonos users will be pleased to hear that Sonos has a seven-point plan to earn back your trust.

And let me just read some of these points to you.

Point number one is unwavering focus on customer experience.

You know, I was so sad to see that focus wavering over the past year.

I'm glad that we will no longer be wavering.

Point two, increasing the stringency of pre-launch testing. That seems like a good idea.

Point three, approaching change with humility.

Have you noticed that so far all these are fake changes?

Like, none of these are real things.

None of these are fix the damn app.

Exactly.

But they also say they're going to commit to relentless app improvement and also extending our home speaker warranties.

Now, that is actually a real thing.

Yeah.

So if you have a home speaker that you bought from Sonos over the past year or so, they will extend the warranty for another year.

And that actually does seem like a nice gesture, even though, by the way, the problem is not that the speakers stopped working.

So extending the warranty doesn't actually really do anything.

But, you know, I guess it is something nice we got out of all this.

My favorite change that they proposed in this seven-point plan is actually number four, which says that they are going to appoint a quality ombudsperson, which is tech-company speak for a person that people can yell at when their speakers start to malfunction.

And I got to say, this sounds like the worst job in America.

I don't know, though. If it got you a direct email line to Patrick Spence, the CEO of Sonos, and you could just sort of forward him all the complaints that you were receiving?

That actually sounds kind of fun.

I'd be interested in that job, honestly.

I'd be like, Patrick, by the way, another 400 emails from people who can't get Spotify to work.

That would be fun to do.

Well, if this journalism thing doesn't pan out, they are hiring.

I want to be part of the solution, Kevin.

And the company also said that they are pegging employees.

They're pegging employees?

Oh, my God.

Wait, why did we save this for the end of the segment?

Gang, we got some breaking news.

Send out a New York Times push alert.

Sonos is pegging its employees.

Wow, I really stepped into that one.

Wow.

Just when they had a plan to earn back my trust.

Talk about an unwavering focus on the customer experience.

We're going to use this,

by the way.

Yeah, this is staying in.

This is staying in.

What I was trying to say is that the company has demonstrated its commitment to these changes by pegging executive bonuses to improving the quality of the app and rebuilding customer trust.

But now I almost don't want to say that, because what just preceded it had so much more energy.

I'm very glad you did because the number one thing I've been thinking this whole time is how our executive bonus is being affected by this.

Because if these people are not properly incentivized, Kevin, to maintain their unwavering focus on the customer experience, I don't know how we're ever going to solve this.

I think we should implement a quality ombudsperson for the Hard Fork podcast.

Sure.

Come at me, bro.

Yeah.

Yeah.

Is that you?

Well, look, let me just say, we do read the emails that are sent to us and we're hearing the feedback loud and clear.

Yeah.

Which is that you would rather be listening to a different podcast.

I don't know why you emailed that to us, but you did.

So thank you.

We did get one email this week that I thought was very nice, which was from a person who said, you know, because we did that whole thing on the show last week about the hot Kevin Roose from the Netflix documentary.

And there was someone who wrote in who said a very nice thing, which was that actually the real Kevin Roose is the hot Kevin Roose.

Oh, and I appreciate that.

That is very nice.

We do have nice listeners.

Yeah.




Hard Fork is produced by Whitney Jones and Rachel Cohn.

We're edited by Jen Poyant.

This episode was fact-checked by Ina Alvarado.

Today's show was engineered by Chris Wood.

Original music by Elisheba Ittoop, Marion Lozano, Diane Wong, Corey Schreppel, and Dan Powell.

Our audience editor is Nell Gallogly.

Video production by Ryan Manning and Chris Schott.

You can watch this whole episode on YouTube at youtube.com/hardfork.

Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.

As always, you can email us at hardfork@nytimes.com.

Or you can reach our quality ombudsperson at casey@platformer.news.

By the way, if you have feedback on the new Times audio subscription, and I know they're still changing it, write to the Ezra Klein Show at nytimes.com.
