Can California Regulate A.I.? + Silicon Valley’s Super Babies + System Update!
Transcript
Speaker 1 In business, they say you can have better, cheaper, or faster, but you only get to pick two.
Speaker 4 What if you could have all three?
Speaker 5 You can with Oracle Cloud Infrastructure.
Speaker 8 OCI is the blazing fast hyperscaler for your infrastructure, database, application development, and AI needs where you can run any workload for less.
Speaker 10 Compared with other clouds, OCI costs up to 50% less for compute, 70% less for storage, and 80% less for networking.
Speaker 11 Try OCI for free at oracle.com slash NYT.
Speaker 12 Oracle.com slash NYT.
Speaker 13
Well, Casey, we have some news about the show to start the show with this week. Yeah.
But first, I want to tell you a parable.
Speaker 14 Oh, I love stories, Kevin.
Speaker 13
So imagine for a second that you were a coffee drinker. I know you're a tea drinker, but let's imagine that you're more of a coffee guy.
Sure.
Speaker 13
And every week you go into your favorite coffee shop and you order a coffee and they say, this one's on us, Mr. Newton.
Oh, that's nice.
Speaker 13 And you do this for weeks, maybe years go by, and they're just giving you free coffee.
Speaker 13 And then one day you show up and they say, you know, this coffee, we love giving it to you, but it does cost us something to make and we have to pay our rent and there's salaries for the employees.
Speaker 13 And so we actually want you to chip in a little bit for the coffee.
Speaker 14 Well, that only seems fair, Kevin. Yes.
Speaker 13
So that is... a vague approximation of what is happening with this podcast and with all New York Times podcasts.
So The Times is creating a brand new audio subscription.
Speaker 13 This is a way to support what we do here on Hard Fork and what our colleagues do on shows like The Daily, The Ezra Klein Show, The Run Up, and many others. And here's how it's going to work.
Speaker 13 If you are a new or current all-access or home delivery subscriber to the New York Times, the audio subscription will be included.
Speaker 13 That'll give you full access to all of our episodes and all the episodes of all the other shows from the New York Times, as well as early access to shows from Serial Productions.
Speaker 13 If you just want to get this new audio subscription, that will give you full access to all of New York Times audio, including all the episodes of Hard Fork.
Speaker 13 And to be clear, if you want to keep listening to our episodes without a subscription, you can do that.
Speaker 13 The most recent episodes of this show will still be available every week for free on Apple Podcasts, Spotify, or other podcast players.
Speaker 13 But if you want to kind of go into the back catalog, you're going to have to subscribe.
Speaker 14 If you want to sort of trace our journey as podcasters as we learned how to do that.
Speaker 13 Yes. So this is actually a better deal than the parable I told you about the coffee shop.
Speaker 13 Because there, if we were to sort of apply the same logic to the coffee shop parable, it would be like, well, you can have a free cup of coffee, but if you want the coffee that's like old, you're going to have to pay for that.
Speaker 13 And who wants old coffee?
Speaker 14
No, not me, certainly. Yeah.
Speaker 13 So, if you want to support the work we do and the work that my colleagues at the New York Times do on all of our great shows, you can go to nytimes.com slash podcasts to learn more about this new audio subscription.
Speaker 13 Now I am having a little bit of a domestic issue with technology in my house. What's that? So I'm testing a new robot vacuum cleaner.
Speaker 14 Now is this Bruce Rus?
Speaker 13 This is Bruce Deuce.
Speaker 13 So this robot vacuum cleaner has some AI features built into it, including like a voice activation feature, where you can basically say to the robot vacuum, like start cleaning, or go clean the living room, and it'll go do that.
Speaker 13
But the wake word, the kind of activation word that the company has given it is Rocky. That's the Alexa of this particular vacuum.
And I have a dog named Roger.
Speaker 13 And sometimes when I'm calling my dog, the robot vacuum will think that I'm saying Rocky and will sort of activate.
Speaker 13 And I asked the company, I was like, can I like change the name of the vacuum so that it's not like accidentally going off when I call my dog Roger?
Speaker 13 And they're like, no, you can't change the name. So now I'm faced with a choice, which is do I just continue to put up with these like accidentally activating, you know, robot vacuums?
Speaker 14 Or do you shoot your dog?
Speaker 13 I was going to propose changing my dog's name. But yeah, that's another potential solution. What do you think I should do?
Speaker 14 I mean, this is, uh, it's a rocky road you're on now, Kevin. I think you should get a different robot vacuum with a different wake word. You know, that reminds me, you know what one of my friend group's favorite pastimes is? Just, like, making up drag names. And so my boyfriend recently suggested as a drag name: Alexa, Play Despacito.
Speaker 14 And just knowing that saying that out loud could cause havoc in thousands of listeners' homes brings me great joy.
Speaker 13 I mean, yes, if you are a listener listening to this and you have one of these devices, you are now no longer listening to the podcast. You are listening to Despacito.
Speaker 13 I think we should just mess with our listeners' smart home devices every week. Rocky, clean the living room.
Speaker 13 Alexa, turn the lights off.
Speaker 13 I'm Kevin Roose, a tech columnist at The New York Times.
Speaker 14 I'm Casey Newton from Platformer, and this is Hard Fork.
Speaker 14 This week, the 18 new laws California just passed to regulate AI and one big bill that the governor vetoed.
Speaker 14 Then, The Information's Julia Black on some big new advancements in fertility technology and why Silicon Valley is going baby crazy. And finally, it's time for a system update.
Speaker 14 Kevin, something you and I care a lot about is how will artificial intelligence get regulated? Yes. We know that it's getting better very quickly.
Speaker 14 And the very people building it have said to us from the beginning, hey, if we're not careful, this stuff could get out of control in a hurry and it could hurt a lot of people.
Speaker 13 Yep.
Speaker 14 And a second thing that we know, Kevin, is that our federal government typically does not like to pass laws that regulate the tech industry. Yes.
Speaker 14
In fact, since 2017, the only major law that they have passed to regulate the tech industry is to try to ban TikTok. And it's not actually clear that they will be successful at that.
Speaker 13 Is that true? That's the only major tech regulation?
Speaker 14 Yes, nothing that has been signed into law. Individual houses have passed a bill here or there, but in terms of what has been signed into law, no major regulation.
Speaker 14 But fortunately, Kevin, and I don't know if you remember this from your high school civics class, but in the United States, we have 50 laboratories of democracy and they're called states. Yes.
Speaker 14 And in these states, laws are passed.
Speaker 14 And Kevin, what if I told you that right here in the state of California, where many of the biggest AI innovations are taking place, lawmakers took a hard look at it this session and they said, we're going to do something about this.
Speaker 13 Yeah, this has been a really interesting story to follow because it seems like while the federal government is sort of mulling and debating and, you know, they have this executive order now from the Biden White House saying, like, we're going to do something about regulating AI and we're directing a bunch of agencies to study it.
Speaker 13 California just went ahead and said, let's start regulating.
Speaker 14 Yeah, they said, why wait? And of course, in California, Democrats have total control over the legislature and the governor's office.
Speaker 14 And so legislation just tends to move pretty quickly through the process here.
Speaker 14 And today, I want to talk about it because there were a ton of, I think, pretty important AI regulations that did pass and one very important regulation that didn't.
Speaker 14
And I want to get all your thoughts about it. Yep.
All right.
Speaker 14 So if you've heard anything about AI regulation in California over the past few weeks, it is probably that on Sunday, Governor Gavin Newsom vetoed a bill that has been very controversial called SB 1047.
Speaker 14 And we're going to talk all about that. But before we do, I want to talk about the bills that Newsom actually did sign because there were 18 of them.
Speaker 14 And I think they go a long way toward addressing some of the most immediate concerns that people have about ways that AI could go wrong.
Speaker 13 Yes, these were sort of the ones that didn't get all the attention, but may end up being important in the long run. So what did Newsom sign into law?
Speaker 14 So I'm not going to go into all 18, but here are some of the key planks of the legislation that he did sign over the past month.
Speaker 14 One, there is a bill that makes it illegal to create or distribute sexually explicit images of a real person that appear authentic if they are intended to cause emotional distress.
Speaker 14 So these are basically what is sometimes called revenge porn that is augmented with generative AI. That is now explicitly illegal.
Speaker 14 Newsom also expanded our existing child sexual abuse material statutes to include CSAM that is created or altered using generative AI. So you cannot use these systems to create or alter CSAM.
Speaker 14 There is a law that prohibits replicating performers' voices and likenesses in audio-visual work without their consent, including after they have died.
Speaker 14 So the movie studios cannot just clone actors' faces and voices and use them to make new movies without paying someone for that.
Speaker 14 There's a law that requires AI companies to post information about the training data they use, which of course has been a big question for us for a couple of years now.
Speaker 13
Yes, this is one that actually caught my eye. This was called AB 2013.
And this one actually feels like it could be a pretty big deal.
Speaker 13 So starting on January 1st, 2026, companies in California that want to make an AI system publicly available will have to basically tell people where they got the data to train that system.
Speaker 14 And why haven't they been telling us that so far, Kevin?
Speaker 13 Well, you know, people have lots of sources for this data, including some that they probably shouldn't be using.
Speaker 13 So if you are a company that maybe you scrape YouTube for data to train your model, you are now going to have to disclose that when you release that model. And that could be a pretty big deal.
Speaker 14 Yeah, well, you know, I'm freaking out because I've been training this huge model using exclusively data that I scraped from the New York Times.
Speaker 14 And I heard that you guys are real sticklers about that.
Speaker 13 So I'll get our lawyers' attention on that.
Speaker 14 Yeah, so that's a big one. There is another law that will require watermarking for AI images.
Speaker 14 The idea here is that if you are seeing these images out there, you should have a way to tell that they were created with generative AI.
Speaker 14 There is a law that will require the disclosure of generative AI when it's used in a healthcare setting. There's one that will make robocalls disclose when they are AI.
Speaker 13 I also think this is important, right?
Speaker 14 So if you got a call from, you know, you're a business and you got a call from what you think is a person that's placing an order or something, my understanding is it's now going to have to say, hey, by the way, I'm an automated tool.
Speaker 14 And then finally, the last one I wanted to bring up, they really want to fight against AI misinformation and deep fake election content.
Speaker 14 So they're going to ban deceptive ads that are created with generative AI, and they're going to require platforms to label election content that is created with or altered by generative AI.
Speaker 14 So there's obviously a lot in there. And I'm curious, in addition to the training data thing, if anything in there stands out as, oh, yeah, that actually seems important or useful.
Speaker 13 It all seems sort of marginally important.
Speaker 13 I think so much is going to come down to enforcement and like what happens to companies or people who actually do use generative AI in these ways that are now prohibited in California.
Speaker 13 And we'll see, I guess, as these laws start to come onto the books.
Speaker 13 I mean, it seems like the theme of the bills that Governor Newsom did sign into law is basically, if you are using AI for something important, you have to tell people that you're using AI for something important.
Speaker 13 And that seems like a pretty good, sort of relatively uncontroversial type of regulation to pass.
Speaker 13 It's not saying you can't, for most of these things, it's not saying you can't do AI in healthcare or in robocalls. It's just saying you have to identify that it's AI.
Speaker 14 Yeah. And I think that that is a good idea because there can be a lot of upside in the use of generative AI.
Speaker 14 I think for creative projects in particular, whether that's, you know, maybe you do want to make some kind of art with it or you want to use it to create an ad. I think that's okay.
Speaker 14 But there are just many cases where we want people to actively be disclosing that.
Speaker 13 Yeah. And I think we should also back up a step and say why we care about the regulations that are being passed at the state level in California.
Speaker 13 Because most people listening to this probably don't live in California. These laws won't apply to them right away.
Speaker 13 But I think there's this feeling and a reality that, you know, in the absence of strong federal regulation on AI,
Speaker 13 it's going to be the case that the laws that get passed in California and other sort of early jurisdictions will sort of set the template for how AI will be regulated more broadly.
Speaker 13 All of these AI companies, you know, many of them operate in California, are based in California, many of their employees live in California, many of their customers are in California.
Speaker 13 And so, in the same way that California's vehicle emissions standards kind of became the national standard
Speaker 13 because you didn't want to be selling one type of car in California and another type of car in all the other states, I think the fact that California is such a huge market for AI kind of makes their state regulations kind of the de facto federal regulations.
Speaker 14
That's right. They're sort of raising the floor for all of the regulations here.
And so it really does matter what gets passed here. All right.
Speaker 14
So that's the stuff that actually did get passed. And again, I think this goes toward the harms that we are likeliest to see right now.
You know, we're in an election right now, right?
Speaker 14
So there's a lot of stuff that I think the state just wanted to deal with as these harms are starting to come into view. But then there was Senate Bill 1047.
Speaker 14 And I think it's safe to say, Kevin, this was the most controversial AI bill that we saw this year, probably in any state.
Speaker 13
Yes, this was the big one. SB 1047 was all I could hear about for several months this year.
People in the AI industry were really worked up about it.
Speaker 13 It was sort of the subject of furious lobbying and posting and attempts to sort of sway
Speaker 13 the state lawmakers and Governor Newsom on this.
Speaker 13 It has been such a big controversy inside the AI industry because unlike these other bills, which sort of regulate the use of AI, this was a bill that attempted to say, what should the regulations be on the models themselves, at the model layer of AI?
Speaker 13 So, Casey, for people who have not been following the sort of internecine drama of California state legislation, what is or was SB 1047?
Speaker 14 So the main requirements of this bill, Kevin, were that one, it required safety tests for models that had a development cost north of $100 million
Speaker 14 and used a ton of computing power. And it created some level of legal liability for these models if they were used to create harm.
Speaker 13 Right. And harms in this bill were sort of defined pretty specifically as things that would like cause, you know, more than $500 million of damages or include like loss of life.
Speaker 13 So we're not talking about like a model that's, you know, giving people the wrong answers on their homework. This is like things that would really create catastrophic harms out in society.
Speaker 14 Yeah, we've seen a lot of AI catastrophes over the past couple years, like the launch of Google Gemini, but this bill would not have covered that.
Speaker 14
So look, after this bill got introduced, it then got watered down, Kevin. You know, like initially there was a plan to create a new state agency over AI.
They got rid of that.
Speaker 14 The liability requirements were actually a lot higher in the first version of this bill. And in fact, there was even at one point a requirement that derivative models would be part of the liability
Speaker 14 regime here. So that if you took Meta's Llama and you fine-tuned it and did something wrong with that, Meta would be liable for what you had done with the derivative model.
Speaker 13 Yeah, and people got really worked up about that, especially in the open source AI community.
Speaker 13 They basically said this bill would kill open source AI because who in their right mind would create an open source model and release the weights to the public if they could be held liable if someone down the road took that model and
Speaker 13 did a huge cyber attack with it or something like that. So, look, I think a lot of those fears were sort of ginned up for the purposes of rallying opposition to this bill.
Speaker 13 If you actually look at the bill,
Speaker 13 the version that was sort of voted through was much gentler when it came to these sort of derivative models.
Speaker 13 But yeah, that was a big sticking point for a lot of AI companies and investors who didn't like the bill.
Speaker 14 So that bill then passed the state assembly and the state senate in August, and it went to Gavin Newsom, who took a few weeks to think about it.
Speaker 14 But this past Sunday on September 29th, he vetoed the bill.
Speaker 13 Yeah, and that was not shocking if you had heard what Governor Newsom had been sort of saying about the bill.
Speaker 13 He had been sort of tentative whenever he was asked about whether he was going to sign it or not. So many people expected him to veto it.
Speaker 13 But there was still sort of a glimmer of hope among some of the AI safety folks that I talked to that he would sort of realize that this bill on balance was a good thing and would sign it into law.
Speaker 13 And then we would have some regulation of these huge AI models.
Speaker 14 Right.
Speaker 14 And based on the pressure that he had been getting, you might have assumed that when he vetoed the bill, he would have said what the companies lobbying him said, which is this bill goes too far.
Speaker 14 But in fact, that was not what he said.
Speaker 13 No, no, there was this very strange statement that he put out after vetoing this bill that basically made the claim that what was wrong with SB 1047 was that it wasn't restrictive enough.
Speaker 13 He basically said this bill, you know, it would only apply to the biggest AI models and it wouldn't apply to smaller models. And smaller models can be just as harmful as big models sometimes.
Speaker 13 I mean, I should say basically no one believes him on this. Like, of the folks that I'm talking to, they're like, this is not actually why Governor Newsom vetoed this bill.
Speaker 13 He vetoed it because he was getting pressure from all these big companies and lobbyists, and he didn't want to do anything that could hurt the tech economy of California.
Speaker 13 But that is what he claimed, which is that the bill did not go far enough.
Speaker 14 Right. But, you know, we should say that he also said, we are not done with AI regulation in this state.
Speaker 14 He put together a group of people, including Fei-Fei Li, who is an early pioneer in AI research, along with an AI ethicist, a dean at UC Berkeley, and said they're going to work together to continue coming up with new guardrails for AI.
Speaker 14 I believe he also encouraged lawmakers to bring him another similar improved bill in the next session.
Speaker 14 So I understand a lot of AI safety folks are really disappointed right now, but at the same time, I fully believe that California will pass more AI regulations next year.
Speaker 13 Yeah, I think so too. I actually, so right now, people in the AI industry, many of them are celebrating having sort of successfully killed this bill that would have applied some regulations.
Speaker 13 I think, you know, a lot of the way that they killed the bill was sort of by misleading people about what was actually in the bill.
Speaker 13 If you actually look at it, it sort of wasn't as tough. It was much more sort of lenient than previous versions of the bill.
Speaker 13 And I think it actually was sort of a light touch way of regulating these huge AI models.
Speaker 13 But put that aside for a second, I think there's a potential that the tech industry will regret having killed this bill.
Speaker 14 Why is that?
Speaker 13 So I think there are two reasons. One of them is,
Speaker 13 we talked about these sort of two approaches to regulating AI, either at the model level or at the application level.
Speaker 13 And what has happened in this most recent legislative session is that California passed a bunch of use-based laws about how AI models can be used.
Speaker 13 And Governor Newsom vetoed the one bill that would have applied at the kind of foundational model level.
Speaker 13 And that is what the tech industry, the AI industry wanted for the most part.
Speaker 13 But I actually think there's a world in which the use-based regulation of AI becomes much more annoying for them to deal with.
Speaker 14 How so?
Speaker 13 Because this is what we've seen happen in Europe, right? Europe did take the kind of use-based approach to regulating AI with their AI Act.
Speaker 13 And now all of the American AI companies hate doing business in Europe because it's a patchwork of different regulations.
Speaker 13 There are 40 different rules that might apply based on how you're using your AI system.
Speaker 13 You need to hire a bunch of compliance people and lawyers to sort of review everything that goes out to make sure it's not violating any of those dozens of different rules.
Speaker 13 And I think there may be a point where the AI industry wishes that what it had gotten instead of this patchwork of little use-based regulations was sort of one or a handful of big, broad regulations that applied only to the companies that have the most money and the most resources and the most compliance people and the most lawyers to sort of sign off on all this stuff.
Speaker 13 So that's one argument.
Speaker 13 The other argument for why I think killing SB 1047 may be something of an own goal for the tech industry is that regulations around new and emerging technologies are typically written in the wake of crisis, right?
Speaker 13 There is something that happens where you know, people die or there are sort of catastrophic harms and lawmakers rush in to write some bills.
Speaker 13 The quality of those bills is generally not super high, but that's because what lawmakers are trying to do in those moments of crisis is just put a stop to the crisis.
Speaker 13 I think what the tech industry had in SB 1047 was a chance to create regulation and rules for a new technology when there was no crisis that they were dealing with.
Speaker 13
There was nothing sort of immediate. They had months to kind of work out their objections, to propose amendments.
It was sort of a peacetime regulation.
Speaker 13 And I think what may happen now is that we will get a broad AI regulation that applies to AI companies training these huge models, but we will get it at a time that is much less favorable to them because these AI systems will improve.
Speaker 13
Something will go wrong. There will be some huge cyber attack or some huge incident involving one of these AI systems.
Lawmakers will scramble to get some regulations on the books.
Speaker 13 And I think the AI industry will be much less happy with the regulations that come out of this process.
Speaker 14 Yeah, I think that that is a really smart and interesting point.
Speaker 14 You know, I have to say that I have been of mixed mind about this bill, because on one hand, I do want to see harms stopped before they come to pass.
Speaker 14 And on the other hand, I'm not convinced that California lawmakers really knew what harms they were solving for here, because I still don't know that we have a very clear line of sight from the models we have today to the catastrophes that everyone is predicting.
Speaker 14 At the same time, Kevin, as I mentioned at the top of this segment, from the beginning, the founders have said to us, these models that we are making can, we think, eventually cause great harm.
Speaker 14 And if that is the case, if you take them at their word, then when lawmakers come along and say, okay, we're going to believe you and we're going to hold you legally liable if you cause great harm, and they throw up their hands and say, well, no, wait, hold on a second, you know, let's not get carried away here, there is something that I think damages their credibility about that. And I sort of feel like both things can't be true, right? That's what I'm saying here.
Speaker 13 Yeah, and I think in this weird way, the fight over SB 1047 has exposed something really important and kind of counterintuitive, which is that a lot of the people who will say, I'm an AI optimist, are also the people saying this stuff will never get so powerful that it poses any threat to human life or to society or any of these catastrophic harms that people are worried about.
Speaker 13 It is actually the doomers who are saying that this technology is going to be incredibly powerful and useful and maybe scary because it is improving at such fast rates.
Speaker 13 So you have kind of this interesting
Speaker 13 arrangement where the people who are the most optimistic about the actual capabilities of the technology are also the people who are taking the risks more seriously.
Speaker 14 Yeah, absolutely. I want to make one more point, which is that I think that we've sort of been handed a false choice here, which is, well, do we regulate
Speaker 14 the uses of AI or do we regulate the models themselves? And I think in practice, the answer is going to be both because we do this all the time, right?
Speaker 14 We regulate guns very lightly in this country and we regulate the uses of those guns, right? And I think something similar is just inevitably going to happen with these models.
Speaker 14 And so to your point, yes, the industry should be thinking about what reasonable liability ought we have in a situation where these models are insanely powerful because they're never going to get away with only regulating it at the level of the application.
Speaker 13 Yeah, I think that like regulation is coming.
Speaker 13 AI is just too powerful and we regulate every other industry that has that kind of power.
Speaker 13 And so AI is inevitably going to be regulated. I think the question for the industry is how much regulation can they live with? You know, I was at an event
Speaker 13 last week with a bunch of sort of lawyers and compliance people.
Speaker 14 The Folsom Street Fair?
Speaker 13
No, not that one. This was an event at Berkeley.
And their point was basically, whatever happens, we would like for it to happen at the federal level.
Speaker 13 Because, you know, at least if you regulate AI at the federal level, then there's sort of one law or one set of laws that companies have to follow. They have clarity. They know, like, I can use the same AI in California as I can in Texas as I can in Florida. And they don't have to sort of, you know, hire a bunch of people to cross-check all of the various state laws. And so their ask was basically: whatever happens on AI regulation, it should happen at the federal level. And I think that's something that I support too.
Speaker 13 I think what we're talking about now is a world in which the federal government does not do anything about AI regulation. And so it's up to the states like California to do it instead.
Speaker 14 That's the world we live in. Well, let me end on this: which thing do you think made us safer from AI?
Speaker 14 The regulations that Governor Newsom signed into law in California over the past month, or the fact that people just keep leaving OpenAI all the time, leaving it in apparent disarray?
Speaker 13 I mean, look, I think that if you are a person who worries that AI is moving too quickly and you want it to slow down, you probably don't mind all of this drama that is going on inside the AI industry, because I think probably one effect of that is that it does actually slow things down if you're constantly losing co-founders and research leads.
Speaker 13 And so maybe that's a good thing. Maybe it's the OpenAI people leaving.
Speaker 14 I think that that's true. I think that over the past year, it feels like the main innovation in AI in Silicon Valley has just been people leaving OpenAI to start AI companies.
Speaker 14 And there's just, it just takes time. You know, it takes time to ramp those companies up and do your hiring and, you know, create your little wiki doc in Notion for everything.
Speaker 14 So
Speaker 14
anyways, Kevin, it's a fascinating discussion. I'm sure we'll have a lot more to say.
But in the meantime, if you are going to make deepfakes, don't do it in California.
Speaker 14
When we come back, we're having a baby. Or at least we would if the technology was good enough.
And it's not, but it's getting there.
Speaker 13 We'll talk about the latest in fertility tech.
Speaker 1 In business, they say you can have better, cheaper, or faster, but you only get to pick two.
Speaker 4 What if you could have all three?
Speaker 5 You can with Oracle Cloud Infrastructure.
Speaker 8 OCI is the blazing fast hyperscaler for your infrastructure, database, application development, and AI needs where you can run any workload for less.
Speaker 10 Compared with other clouds, OCI costs up to 50% less for compute, 70% less for storage, and 80% less for networking.
Speaker 11 Try OCI for free at oracle.com slash nyt.
Speaker 5 Oracle.com slash nyt.
Speaker 15 Over the last two decades, the world has witnessed incredible progress.
Speaker 15 From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.
Speaker 15 Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment. Invesco QQQ, let's rethink possibility.
Speaker 15 There are risks when investing in ETFs, including possible loss of money. ETF risks are similar to those of stocks.
Speaker 15 Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.
Speaker 15 Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in the prospectus at Invesco.com. Invesco Distributors, Incorporated.
Speaker 13 At Sutter, caring for women of all ages never stops because we know women have unique needs when it comes to health care.
Speaker 13 That's why our team of OBs and nurses are committed to building long-term relationships for lifelong care.
Speaker 13
From prenatal support to post-menopause guidance, we're here for every woman at every stage of her life. A whole team on your team, Sutter Health.
Learn more at sutterhealth.org slash women's health.
Speaker 13 Casey, we're going to talk today about something that has been a very hot topic in Silicon Valley recently, which is babies.
Speaker 14 Kevin, I don't know if I'm ready to have a baby with you.
Speaker 14 I'd really prefer to take it a little more slowly.
Speaker 13
We already have the podcast. That's true.
That's our baby. No, we are talking today about fertility technology because this has become a big topic within the tech world.
Speaker 13 For years, there has been this conversation in the tech world about what's being called pronatalism, which is this belief spread by Elon Musk and others that declining birth rates in the U.S.
Speaker 13 and elsewhere are a big threat to the future of civilization. And we should all be having way more children than we are.
Speaker 13 And people are sort of taking this banner in the tech industry and saying, well, maybe some technology could help us here.
Speaker 14 Yeah, well, and also just, you know, infertility is a huge problem for a lot of people, right? A lot of people want to have babies and cannot.
Speaker 14 You know, my boyfriend and I have been trying to get pregnant for many months now. We're just getting absolutely nowhere.
Speaker 14 So I'm excited that Silicon Valley is finally, you know, paying attention to this issue.
Speaker 13 Yeah, I mean, this has been an area where investors, tech founders, startups are just spending an inordinate amount of time recently.
Speaker 13 Investors have poured hundreds of millions of dollars into startups working on technology to help people conceive and have children.
Speaker 13 These are things like very specific types of genetic testing that can test a whole embryo's genome rather than just for a few sort of specific things, sperm freezing and even sort of these longer-term moonshots like artificial wombs and something called IVG, which would basically allow you to make a gamete from non-reproductive cells, to basically take some other cells from a body and use that to create a child.
Speaker 14 Which, among other things, would let same-sex couples create a child that shares their DNA.
Speaker 13 So, you know, as we know, fertility and babies and birth rates, they are not just a topic of conversation in Silicon Valley, they are also playing a role in the presidential election as people debate, you know, what kinds of restrictions and laws there should be about women's reproductive rights and health.
Speaker 13 So to talk about this subject, I wanted to bring in Julia Black.
Speaker 13 Julia is a reporter for The Information, and she wrote a great piece a few months ago called Dawn of the Silicon Valley Super Baby, which was about sort of all this investment and energy going into fertility technology right now and what it would mean for the pronatalism movement and what it would mean for fertility as a whole.
Speaker 13 Let's bring her in.
Speaker 13 Julia Black, welcome to Hard Fork.
Speaker 16 Thank you so much for having me. I'm a big fan.
Speaker 13 Oh, thanks so much. So, can you just start by telling us how you got interested in this world of fertility tech?
Speaker 16 Yeah, this started for me in a sort of unexpected way in 2022.
Speaker 16 I got a tip that kind of changed my life, which was that Elon Musk had more kids than the public realized, and some of them were with people who had not been previously disclosed.
Speaker 16 So I went out searching and found some secret twins that he'd had with one of his employees, Shivon Zilis.
Speaker 16 And that just kind of sent me down this rabbit hole for the last couple of years where I've learned about Silicon Valley's interest in fertility tech in particular.
Speaker 16 And then I've learned about some of the technologies that are coming out of that interest.
Speaker 14 And let's try to sketch out that interest a bit. So if you are a pro-natalist, what is concerning to you about declining birth rates?
Speaker 16 So I think a lot of economists would agree that there is some concern around the fact that the majority of developed countries now are below what's called the replacement rate.
Speaker 16 And when you structure that into an economy, something happens that they call the flipped demographic pyramid.
Speaker 16 So you've got more old people, fewer young people, and young people are the people who drive the economy, who are paying into Social Security, who are taking the jobs, and old people are more of a weight on society.
Speaker 16 So that's the general concern at its most basic. I think a lot of people take this in some pretty wacky and sometimes dangerous directions.
Speaker 16 The white supremacist movement has become particularly interested in these demographic shifts.
Speaker 16 And some people, I think, would argue that it's not just that we all need to have more babies, but that certain types of people should be having more babies, which translates to white, educated, well-off people.
Speaker 14
Right. So there's like this very vanilla version of this, which is like having babies is good for the economy.
It's good for economic growth.
Speaker 14 It's good to make sure that there are workers to take care of us as we age. And then there is a sort of more racist, more eugenics-oriented version of it.
Speaker 14 And yeah, so it's a lot in here.
Speaker 16 Yeah, definitely. I think that this fertility tech as a field has gone from this totally ignored, underdeveloped realm of technology to something that's finally attracting investor interest.
Speaker 16 And yeah, it's also attracting some weird ideological interest. So there's a few different elements bundled up in there.
Speaker 13 Yeah, there was a quote in your piece that was sort of attributed to an anonymous Meta engineer who said something like, no one is having children naturally anymore.
Speaker 13 And that's obviously not true.
Speaker 13 Some people are still having children naturally, but it did sort of give a sense of like how pervasive this has become in the Bay Area where we live and in the tech world more broadly.
Speaker 16
Totally. And, you know, at The Information, where I write, our audience is all Silicon Valley all the time.
So when you're talking about our small niche world, that's actually really true.
Speaker 16 And that's the reason we put that quote in the story, because it really kind of exemplified what we're hearing across the board, which is like, this is like, you know, the way that a trendy purse like takes off within a certain niche group of people.
Speaker 16
Like fertility tech is taking off. Like these people are going to dinner parties, they're telling their friends, oh, we use this new pre-implantation genetic testing service.
You got to try it.
Speaker 16
And they're trying it. And then they're telling their friends.
And it's like within this very small and non-representative community, yeah, it really has taken off.
Speaker 13 Yeah. Your piece made a point that I hadn't seen before about the fact that crypto entrepreneurs, people who are interested in cryptocurrency, seem especially interested in fertility tech.
Speaker 13 Brian Armstrong, the CEO of Coinbase, has invested in Orchid, one of these startups that is doing sort of embryo genetic screening.
Speaker 13 Vitalik Buterin, the co-founder of Ethereum, has also invested in Orchid. So what is the crypto-fertility connection?
Speaker 16 Oh my gosh, this has been like such a head scratcher and obsession of mine. And I've been trying to figure that out.
Speaker 16 The best understanding I can come to is that the crypto world is also really interested in something called decentralized science, DeSci.
Speaker 16 It's kind of taking off in places like San Francisco and New York, but also in special economic zones in countries like Honduras and El Salvador.
Speaker 16 They have a couple of these, like Prospera, where people are experimenting with new scientific techniques that might not pass muster with FDA regulation. There's like a big backlash to the FDA in this crowd.
Speaker 16 So that's my best guess is that, like, this is a world that overlaps very much with that kind of Bryan Johnson longevity obsession.
Speaker 13 He's the guy who does the Blueprint method.
Speaker 14 Yeah, he's invested basically all of his energy and money into de-aging himself. Right.
Speaker 16 Exactly. So I think that another particularity about this kind of Silicon Valley audience and maybe more so the crypto crowd is
Speaker 16
like they just want the best of the best technology now. They want to live in the future now.
And so they hear about these technologies that might be possible, but might feel far off.
Speaker 16 And, you know, they're going to do everything they can to optimize. And, you know, optimizing their own fertility or reproductive process is like, why not?
Speaker 16 So I think especially with the genomic stuff, and especially as AI has helped genomics really advance quite quickly in the last couple of years, that particular crowd is quite drawn to these very futuristic possibilities.
Speaker 13 I mean, I kind of get it.
Speaker 13 Like I was telling Casey before we started this interview, my wife and I have gone through fertility treatment, and it seems like it should be a way to sort of bring precision and control to the sort of random process of conceiving a child.
Speaker 13 But there's so much about it that is still kind of guesswork and let's try this thing and see if it impacts this thing. And like, it's frustratingly inexact.
Speaker 13 And I can see if you are a person who's spent your whole career trying to sort of optimize systems and eliminate chance and kind of program things, like this technology would seem like a way to sort of bring order to what could otherwise be a pretty frustrating and chaotic process.
Speaker 16 Yeah. And I think I'd also note that there's a whole gamut of new fertility related technologies that are coming to market.
Speaker 14 Is that like a gamete pun that you just did?
Speaker 13 God.
Speaker 16
I did have gametes in my notes here. I was going to talk about them later.
I was just going to say that there's this like spectrum of
Speaker 16 types of technology and some of them are really far out there, really advanced, like artificial wombs and something
Speaker 16 that a company called Conception is working on to try to make gametes out of stem cells, which means like two men could, in theory, take skin cells, use them to create an egg and sperm, and reproduce using those cells.
Speaker 16 Like it's really far out there. That stuff is nowhere near possible at the moment.
Speaker 16 But I'm just saying, like, you are seeing people in Silicon Valley working across the board on, you know, current possible things versus like totally out there futuristic technologies.
Speaker 13 Yeah. And it's not, it's, it's sort of far out there in the sense that this stuff does not exist in any form that consumers can use, right? But there are serious people investing in it.
Speaker 13 Like Sam Altman is someone who has invested in this kind of, what they're calling IVG, this sort of using non-reproductive cells to essentially grow babies, right?
Speaker 16
Yeah. And I think another name I would bring up is George Church.
He is like an absolute pioneer of the genomics field, and he's got a hand in a lot of these companies.
Speaker 16 And I had the chance to talk to him, and he was actually the one who kind of hinted to me that artificial wombs, which he's currently developing for his other company, Colossal, which
Speaker 13 they're trying to bring back the woolly mammoth, right?
Speaker 16 Exactly, exactly. Which, you know, you kind of scratch your head and you're like, what's the market value of bringing back the woolly mammoth? But then you watch.
Speaker 14 I think somebody's going to try to command an army of them and to take over some small nation. Yeah.
Speaker 16 I'm sure Elon Musk would be interested.
Speaker 13 So, Julia, let's talk about these actual tools that you wrote about, some of the fertility tech that is on the market or will soon be on the market. Talk to us about Orchid.
Speaker 16 Yeah, so Orchid is a company I wrote about in July with my colleague Margaux MacColl, and they're doing something called PGT-P, which is pre-implantation genetic testing for polygenic disorders.
Speaker 16 This is not totally new to the scientific realm. In fact, we've been testing for things like Down syndrome, which is a pretty simple disorder that's easily detectable early on.
Speaker 16
Then we also have done testing for monogenic disorders. That's things like cystic fibrosis.
But now polygenic disorders start to get really complex.
Speaker 16 This is stuff like schizophrenia and bipolar, diabetes. And so what they're claiming to detect for is the risk factor for these diseases.
Speaker 16 There is some speculation in the scientific community about how much of this is really possible.
Speaker 16 And yet, this company has got a lot of investment and has now reached a lot of consumers and is expanding actually to be nationwide. They have some partnerships with nationwide clinics.
Speaker 16 So, yeah.
Speaker 13 How much does it cost to get your embryos tested this way?
Speaker 16 It costs $2,500 an embryo.
Speaker 13 And that's on top of what you're already paying for IVF, right? So, this is not something that's sort of like mass accessible yet.
Speaker 16 Exactly.
Speaker 14 But it is part of the core American competency of making healthcare more expensive for everyone at all times.
Speaker 13 We've got that.
Speaker 14 Kevin, for the price of a mid-range MacBook Pro, you could know everything about the genetic predisposition of your future child. Seems like a small price to pay.
Speaker 13 Yeah.
Speaker 13 Well, of one embryo.
Speaker 16 And of course, you're not just looking at one embryo. The idea is that.
Speaker 14 How many embryos are y'all implanting in a typical situation?
Speaker 16 Well, you're probably just going to implant one, but the idea is that you want to test a range. So let's say a couple during the IVF process creates eight viable embryos.
Speaker 16 So then this company is offering to test each of those for $2,500 a pop.
Speaker 14 So it adds up. Yeah.
Speaker 16 So then they're going to look at those tests and start to compare them and say, okay, this one is more predisposed.
Speaker 16 They're not saying that this kid's going to have this disease, but more likely to have type 1 diabetes. Whereas this kid maybe is more likely to have bipolar disorder.
Speaker 16 So it gives you this like chart, which is supposed to be this risk picture of your child's future.
Speaker 13 So actual, like babies have been born using this
Speaker 13
pre-implantation genetic screening, right? This is not theoretical. There are children out in the world today who were born after being tested this way.
Speaker 16 Correct. So as embryos, they were tested, and then they were the embryos selected to be implanted.
Speaker 13 How many babies has this been performed on? Like, how widespread is this?
Speaker 16 Orchid wouldn't give us a number, but I think a lot of parents are especially interested in this when they have genetic disorders that run in their families.
Speaker 16 One really common one is the BRCA gene, which is responsible for breast and ovarian cancer in many cases. You know, that's one use case example.
Speaker 16 Something else we did discover through this piece is that Elon and Shivon, the parents who I originally discovered two years ago, did use this.
Speaker 16 I don't know if they were using it to test for complex hereditary disorders.
Speaker 16 I did speak with a few customers, as did Margaux, my colleague, who did tell us that IQ testing was something that they had been offered by the company.
Speaker 16 The company did not confirm this themselves, but this is something we said, we heard from a few people.
Speaker 13 What does that even mean? Like you can, there's some sort of gene that if it's present, you're likely to have a higher IQ as an adult than an embryo without that gene.
Speaker 16 So remember that part when I said the scientific community is very skeptical of some of these claims?
Speaker 16 That would be the chief one: the idea that you can detect intelligence in an embryo from what is a very complex picture of a combination of many genes.
Speaker 16 Yeah, that's very much up for debate.
Speaker 14 So, I can understand why parents who had had that history with cancer that you just mentioned would want to know if their future child was going to be at risk for that and would be willing to, you know, pay a high price to try to avoid that.
Speaker 14 At the same time, I imagine that there are other more kind of nice-to-have
Speaker 14 features that these parents might be testing for, or things that sort of stray a little bit closer to the eugenics line that we were, you know, talking about at the top of the interview. So
Speaker 14 are these services able to sort of go in a bit of a darker or more concerning direction? Or what have you heard about maybe potential misuses of the screening technology?
Speaker 16
Yeah, I mean, I do want to be really clear. If you go to Orchid's website right now, they lay out very clearly the services that they claim to offer.
I think it's something like 13 factors.
Speaker 16 And again, it's the things like diabetes, bipolar risk. They're not claiming publicly to detect IQ.
Speaker 16 For whatever reason, several different customers brought that to us and said that it was part of the package they received.
Speaker 16 Yes, though, I do think a lot of people, a lot of bioethicists would argue that we enter this slippery slope territory
Speaker 16 where, you know, it's one thing, as you say, to make sure that your child doesn't die of some horrific rare disorder.
Speaker 16 It's another thing when you start to get into the realm of, you know, characteristics of their appearance or their intelligence or behavioral traits.
Speaker 16 And I think that even something like bipolar or schizophrenia is inching more towards, like, behavioral traits.
Speaker 16 In fact, when I wrote a piece two years ago about a different company called Genomic Prediction, I spoke with this couple, Simone and Malcolm Collins, who again were doing some of this decentralized science DIY stuff.
Speaker 16 They were taking the data that they got from one company, plugging it into another genetics company that was actually not supposed to be for embryos, but they were able to upload the data as if it was a person.
Speaker 16 And they showed me their spreadsheets and like, it really was
Speaker 16 just wild how many factors they claimed to be detecting. They were talking about things like brain fog and propensity for headaches and, you know, mood disorders of various kinds.
Speaker 16 And so, yeah, some people are starting to inch more and more into that territory of like these very complex characteristics that make up who we are as human beings. And
Speaker 16 yeah, you would hate to see it fall into the hands of someone who wanted to detect for blonde hair and blue eyes. Right.
Speaker 14 Are they able to detect the propensity of an embryo to start a podcast later in life?
Speaker 13 Yeah, no one's going to be able to do that. We definitely want to keep that out of the gene pool.
Speaker 13 So I just have a question about the politics of all this, right?
Speaker 13 Because this is, we're at a moment where there's a lot of discussion on the national political stage about reproductive rights and abortion rights.
Speaker 13 There are some Republicans
Speaker 13 who don't even think that we should be doing IVF, that that's sort of a bridge too far. How do the people who are pushing for this kind of investment in fertility tech square
Speaker 13 their belief that this technology should be able to exist and be able to help people have more children with the very real possibility that some elected officials want to make this kind of thing illegal?
Speaker 13 I'm thinking in particular about Elon Musk, who is backing Republicans, who would make it much harder for women to access reproductive health care of all kinds, but also wants there to be this population boom because people are able to have more children.
Speaker 13 So how do they square that belief?
Speaker 16 I think we're watching that play out before our eyes right now. I mean, a phrase I think of a lot in covering Silicon Valley's kind of move towards the right is strange bedfellows.
Speaker 16 Like you are getting these alliances that in so many ways don't make any sense.
Speaker 16 A couple of weeks ago, I was at a conference, a tech conference in San Francisco, and they had the Heritage Foundation like up there with these AI founders.
Speaker 16
And like, they did, in fact, have a panel on fertility, surprise, surprise. And there are just so many incompatibilities that I really don't know how they're going to square them.
The IVF question is,
Speaker 16 of course, the main one that is going to come up in terms of very tangible policy very soon. Yeah, I don't have much of an answer except like
Speaker 16 they're going to be in for a rude awakening, I think.
Speaker 13 Yeah.
Speaker 13 I mean, I guess the question that a lot of people have about this topic is like, how far are we actually from the kind of science fiction sort of Gattaca scenario where like you are a well-off person, you want to have a child, you kind of like go into the fertility clinic and you just kind of get like a menu.
Speaker 13 And it's like, well, do you want your child to be six feet tall? Do you want your child to have a high IQ? Like that'll be another $500.
Speaker 13 Like how far are we from the scenario in which people, or at least, you know, wealthy people with access to good reproductive health care, will have the ability to kind of select traits for their offspring.
Speaker 16
Yeah. So again, I think this comes down to two questions.
One is when is the technology going to get there? And I think a lot of people in this field would argue sooner than you think.
Speaker 16 And then the other is when is society going to get there to a place where we actually want that and where our lawmakers actually make that possible.
Speaker 14 And already it seems like it's starting to become a bit of a status symbol.
Speaker 14 Like I think your story gets at this, where it's like friends love bragging to their friends that they just spent $20,000 on genetic screening for their embryos.
Speaker 14 And what, you didn't do that for your embryos? You know, so to me, I feel like part of that Gattaca world has actually already arrived here, Kevin, in at least some fashion.
Speaker 13 Totally.
Speaker 13 Well, and I think what people maybe don't realize, unless they've gone through this kind of fertility process, is that there is already a kind of report card that you get back when you have, you know, embryos, and they sort of tell you, like, this embryo has this grade. There's already a lot of choice on the frontiers of fertility today.
Speaker 13 But what we're talking about here is sort of a very different possible future, in which it's not just screening out the most debilitating and harmful genetic conditions, but truly getting down to the level of, like, you know, how tall do you want your child to be? Do you want them to have a higher risk of bipolar? Something like that feels just fundamentally different from what exists today.
Speaker 13 But maybe people, you know, 20 years ago were saying the same thing about the testing that now seems pretty commonplace today.
Speaker 16 I mean, I will say this. Women's bodies have always been this political battleground that attracts controversy, kind of no matter what.
Speaker 16 There have been ethical debates over everything from IVF to epidural use.
Speaker 16 So, you know, on one hand, like if it's going to have to do with women and reproduction, it's going to be controversial.
Speaker 16 On the other hand, there are some very real ethical concerns here that should be addressed and regulated very thoughtfully. And I fear that, as with so many things happening in tech, like with AI, as we've all seen, the regulators are probably behind on this, probably not working at the same speed as Silicon Valley innovators. So yeah, they probably need to do some catch-up.
Speaker 13 Yeah, well, Julia, thank you so much. This is really fascinating, and it's a story I hope we'll keep track of as it continues to develop.
Speaker 14 Yeah, it was great to meet you.
Speaker 13 Yeah, thank you so much.
Speaker 14 When we come back: OpenAI's big fundraise and other big updates from the past week in technology news.
Speaker 13
I don't mean to interrupt your meal, but I love Geico's fast and friendly claim service. Well, that's how Geico gets 97% customer satisfaction.
Yeah, I'll let you get back to your food.
Speaker 13 Uh, so are you just gonna watch me eat? Get more than just savings, get more with Geico.
Speaker 13 At Sutter, breakthrough cancer care never stops.
Speaker 13 Our teams of doctors, surgeons, and nurses are dedicated to you from day one of your diagnosis. Our 22 cancer centers deliver nationally recognized care every day and every step of your way.
Speaker 13
And we're located right in your community, ready to fight by your side. A whole team on your team, Sutter Health.
Learn more at sutterhealth.org/cancer.
Speaker 13 Well, Casey, from time to time, we like to update our listeners on some stories that we've covered in the past that have had some new developments, in a segment we call System Update.
Speaker 14 What's happening in the news, Kevin?
Speaker 13 So the first System Update is that OpenAI, a company we've talked about once or twice on this show, has just completed a $6.6 billion fundraising deal that nearly doubles the company's valuation from just nine months ago.
Speaker 13 The new round was led by Thrive Capital, with lots of other participants: Microsoft, NVIDIA, SoftBank, and MGX, which is the sovereign wealth fund of the United Arab Emirates. But notably not Apple, which backed away and declined to invest in OpenAI's most recent round, according to reports.
Speaker 14
Yeah, and you know, there are many reasons why companies decide not to invest in things like this. Maybe they didn't like the financials.
Maybe they had concerns about the product roadmap.
Speaker 14 But of course, you can't help but wonder whether Apple looked at the steady stream of departures out of OpenAI over the past year and thought, maybe we don't want to put our eggs in that basket.
Speaker 13 So Casey, you've been covering tech startups and fundraising for a long time. How does $6.6 billion in fundraising at a $157 billion valuation compare to what other startups are raising?
Speaker 14 So that is, we believe, the largest venture capital fundraise of all time. OpenAI had previously raised $10 billion, but that was a sort of multi-year commitment.
Speaker 14 So it's thought of somewhat differently, and it's a huge amount of money.
Speaker 14 At the same time, Kevin, your colleagues Mike Isaac and Erin Griffith at the Times reported last week that OpenAI is expecting about $3.7 billion in sales this year and $11.6 billion next year.
Speaker 14 So assuming that is the case, that is an insanely high growth rate, and it really only values the company at around 15 times or so its forward revenue.
Speaker 14 And believe it or not, in Silicon Valley, companies often have much crazier multiples, right? They're raising at a 50 or a 100x multiple on their expected revenue.
Speaker 14 So as big a fundraise as this is, it's weirdly kind of in line with typical Silicon Valley projections.
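A quick back-of-the-envelope check of that multiple, using only the figures cited above (the $157 billion valuation and the roughly $11.6 billion in sales OpenAI reportedly expects next year):

$$\frac{\$157\ \text{billion (valuation)}}{\$11.6\ \text{billion (projected revenue)}} \approx 13.5\times$$

which lands in the ballpark of the "15 times or so" forward-revenue multiple described here.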
Speaker 13 Yeah, I mean, I think the more relevant fact here about OpenAI's financials is this company, despite having had huge success with ChatGPT and making billions of dollars in revenue, is still burning through cash at just a phenomenal rate.
Speaker 13 So they're projected to lose something like $5 billion this year.
Speaker 13 And that's in large part because it's just so damn expensive to keep building and training these models and paying all the people to do that.
Speaker 14 Yeah.
Speaker 14 And, you know, I went to a press preview for their developer day this week, Kevin, and they had some pretty lavish charcuterie boards that they had set out for us, a variety of beverages, and some macarons, which I'll say were delicious.
Speaker 13 So with $6.6 billion now in the bank, I guess they will be able to afford much better and bigger charcuterie boards.
Speaker 14 They can also probably afford a much larger settlement with the New York Times.
Speaker 13 Oh, yeah, I almost forgot our mandatory disclosure, which is that the New York Times company is suing OpenAI and Microsoft over copyright issues related to the training of their models.
Speaker 13 But, you know, this sort of goes to one of the big questions swirling around this company right now, which is: is all of this growth ultimately going to result in big profits for their investors?
Speaker 13 Or are they just kind of burning cash until they can't get any more cash?
Speaker 13 You know, we've seen companies before that have been unprofitable for huge stretches of time.
Speaker 13 Think about Amazon or Uber more recently, which was unprofitable for most of its existence and then kind of found ways to become more profitable over time.
Speaker 13 Or is OpenAI the kind of company that will just keep burning cash until they run out?
Speaker 14 Well, it's clear that they have found some real consumer demand and have been able to answer that with products that people really like, right?
Speaker 14 Like ChatGPT is probably the most successful new consumer brand launched on the internet since TikTok, I would say.
Speaker 14 And more and more corporations are signing up to use its APIs and build custom enterprise versions of its software to perform various tasks, some of which can save those companies a lot of money.
Speaker 14 So there is something very real here.
Speaker 14 And we know that while generative AI is extremely expensive to run, with big computing costs, energy costs, and so on, the growth rate suggests that at some point these numbers will pencil out. Or at least that's what I think.
Speaker 13 Yeah, I think so too.
Speaker 13 And I think one other thing that we should talk about in relation to this fundraising deal is that OpenAI is apparently telling employees that they may now be able to do what's called a tender offer for their stakes in the company.
Speaker 14 That's where they make you an offer, but they say it in a really sort of nice, gentle way. They say, baby, what if we just gave you a few dollars for those sweet, sweet shares?
Speaker 13
Exactly. That's a tender offer.
Yep. A tender offer also means that employees could potentially cash out their employee shares in this company.
Speaker 13 And as we know, OpenAI and other AI companies sometimes pay people phenomenal amounts of money to work there. And so a tender offer could be pretty meaningful.
Speaker 14
All right. Well, that's enough of an update about OpenAI.
What else is in the news, Kevin?
Speaker 13 So this next system update is about Reddit, a company we've talked about in the context of last year's big protests by moderators.
Speaker 13 As you may remember, there were some changes to Reddit's pricing structure for developers. Moderators got very upset about this.
Speaker 13 A bunch of them took their subreddits private in protest of the company's decisions. It sort of threatened to hurt the site as a whole.
Speaker 13 And as of this week, Reddit moderators, according to these new rules, will not be able to change the public or private status of their subreddit without first submitting a request to a Reddit admin.
Speaker 13 This policy will apply to all community types on Reddit, and it is basically trying to take away one of the ways that users and moderators were able to stage a protest against the company last year.
Speaker 14 Right, because if all of a sudden most of the subreddits are private, it drives traffic away from Reddit, which means less advertising revenue for them.
Speaker 14 And so, you know, this was a really novel form of protest, I think, that Redditors essentially invented.
Speaker 14 And the company has now gotten around to saying, yeah, we hate that and you can't do it anymore.
Speaker 14 But you know what I love about this story so much, Kevin, is that in a way, this like mirrors the history of free speech in America, which was like, you know, at first it's like, go ahead, stage your protests, gather anywhere, say whatever you want.
Speaker 14 And now it's like, oh yeah, you can have a protest, but you do need to apply down at City Hall.
Speaker 14 And we're actually going to put you in the designated free speech zone and we're going to pelt you with tomatoes while you hold up your protest signs.
Speaker 14 So Reddit is just sort of finally getting around to the same idea that many American municipalities have had, which is that despite what it says in the First Amendment, we hate free speech in this country.
Speaker 13
Yeah. Yeah.
Despite all your rage, Redditors, you are still just a rat in a cage.
Speaker 14 That's actually kind of a catchy lyric, Kevin. You ever thought about setting that to music?
Speaker 13 I'll keep it in consideration. Okay, next item on our system update.
Speaker 13 This one is about deepfakes, and it's from a story in the New York Times with the title "Deepfake Caller Poses as Ukrainian Official in Exchange With Key Senator."
Speaker 13 This is a story about something that happened to Senator Benjamin Cardin, who is the chairman of the Senate Foreign Relations Committee, and it is a wild story.
Speaker 14 Oh my gosh, yes. Like this is sort of like if a Mission Impossible movie came out today, this is the sort of scene that you would expect to open it.
Speaker 13 So this story was first reported by Punchbowl News, but the Times learned about it from an email that was sent by Senate Security to lawmakers' offices and started to piece together some of what happened here.
Speaker 13 Senator Cardin got an email from someone claiming to be Dmytro Kuleba, Ukraine's former minister of foreign affairs, asking him to meet over Zoom.
Speaker 13 He got on Zoom, took this meeting, and saw someone who looked and sounded like Dmytro Kuleba, but this person started asking weird questions like, do you support long-range missiles into Russian territory?
Speaker 13 The senator reportedly ended this call and reported it to State Department authorities, who confirmed that this was indeed a deepfake and not really the former Ukrainian foreign minister.
Speaker 14 So, you know, this is something that people have speculated about for years, written sci-fi about.
Speaker 14 But look, I think we're getting to a point now, Kevin, where like with the family members in your life or like close business associates, it's actually time to come up with a code word.
Speaker 13 Totally.
Speaker 14 Like have a code word.
Speaker 14 And if you get invited to a Zoom and the conversation takes a sort of suspicious turn, and you know, certainly if ever any one of my friends asked me about long-range missiles into Russian territory, I would get suspicious.
Speaker 14 And that's when you say, hey, what's the code word? And if they don't know it, well, you know that you've been deepfaked.
Speaker 14 But it is pretty wild to think that we have already arrived at the point where U.S. senators need to be on the lookout for this sort of thing.
Speaker 13 Yeah, I thought we probably had another year or two before this would start to happen because, among other reasons, the video deepfake technology for real-time video conferences just isn't that good yet.
Speaker 13 But, as we know, people pay fleeting attention to Zoom calls. Maybe you're doing something in another window.
Speaker 13 Maybe you're not looking super closely at the lips and the mouth of the person you're chatting with.
Speaker 13 And so this might just totally fool you, even if you are someone sophisticated who knows that this threat is out there.
Speaker 14 Yeah. And, you know, let me throw something else into the mix.
Speaker 14 So one of the things that OpenAI announced at their developer day this week is the availability of the API for their real-time voice tool that made such a splash earlier this year when, of course, you'll remember that people thought it sounded a little bit too much like Scarlett Johansson.
Speaker 14 Well, now other companies are going to be able to come and use those voices for their own purposes. All you need in addition to that is some of the voice cloning technology that companies like ElevenLabs are working on.
Speaker 14 All of a sudden, you're going to have extremely plausible phone calls that can do this exact sort of thing, with the added bonus that you don't have to create a realistic visual depiction of the person.
Speaker 14 So I just want to make sure that we keep paying attention to this stuff because people are already running all sorts of scams with this and it just seems like it's going to keep getting worse.
Speaker 13 Yeah.
Speaker 13
Next system update. Sonos has a plan to earn back your trust.
And here it is. This is from The Verge.
Speaker 13 As you will remember, back in May, Sonos got into a lot of trouble because it replaced its app, which allowed you to control your internet-connected speakers, with a new and much worse app that users said was basically impossible to use.
Speaker 13 Among the features that were either missing or broken were things like local library support, alarms, queue management, whatever that is, and even some accessibility options.
Speaker 14 Queue management is like the order of the songs that are playing, which is hugely important. I'm constantly adjusting my queue.
Speaker 13 Oh, well.
Speaker 14 You know, because sometimes you want to hear a song a little earlier. Sometimes you want to hear it later.
Speaker 13 So I'm not a Sonos guy, but this was a big deal for you because you are a Sonos guy, and this was a big change that you did not like.
Speaker 14 I have made a massive investment in Sonos, Kevin. And when I heard they had a plan to earn back my trust, the first thing I thought was, did you think I trusted you?
Speaker 14 Because I never trusted you, Sonos.
Speaker 13 I've had my eye on you for a long time.
Speaker 14 You know, the rollout of this app, Kevin, also sort of made me giggle a little bit because while everyone else was complaining about the new app and the features it didn't have, I was like, the old app never worked for me either.
Speaker 14 Like these people, here's the thing, they make wonderful hardware.
Speaker 14 And when everything is firing on all cylinders, and when you actually manage to connect the Sonos to your Spotify and your house is rocking, there's nothing like it.
Speaker 14 The problem is it is a very inconsistent system.
Speaker 14 And everyone has been hoping that eventually they'd get their act together and finally bring all the pieces together. And then we could just enjoy the very good hardware that they've created.
Speaker 13 It does not seem like it should take years of work and many talented engineers to make a speaker that connects to the internet in your house and plays the songs from your Spotify.
Speaker 13 Like, that seems like a tractable technology problem, but it appears to have thrown this company into a state of chaos.
Speaker 14 It does, but you know, look, I do think that there are real technical challenges with synchronizing the audio across multiple speakers.
Speaker 14 I actually think this is where Sonos gets into the most trouble is if you're in a relatively large space, you have a limited Wi-Fi connection, you have a stream of music, and you need to route it to, let's say, five, six, seven speakers, and all of the music has to be in sync at all times.
Speaker 14 That is a technical challenge.
Speaker 14 And they have made some strides in fixing it, but then along comes this app debacle. And, well, yeah, they've been having a terrible year.
Speaker 13 Well, Casey, you and other Sonos users will be pleased to hear that Sonos has a seven-point plan to earn back your trust. And let me just read some of these points to you.
Speaker 13 Point number one is unwavering focus on customer experience.
Speaker 14 You know, I was so sad to see that focus wavering over the past year. I'm glad that we will no longer be wavering.
Speaker 13 Point two: increasing the stringency of pre-launch testing. Seems like a good idea. Point three: approaching change with humility.
Speaker 14 Have you noticed that so far all these are fake changes?
Speaker 13
Like, none of these are real things. None of these are "fix the damn app."
Exactly.
Speaker 13 But they also say they're going to commit to "relentless app improvement" and also "extending our home speaker warranties."
Speaker 14
Now, that is actually a real thing. Yeah.
So if you have a home speaker that you bought from Sonos over the past year or so, they will extend the warranty for another year.
Speaker 14 And that actually does seem like a nice gesture, even though, by the way, the problem is not that the speakers stopped working. So extending the warranty doesn't actually really do anything.
Speaker 14 But, you know, I guess it is something nice we got out of all this.
Speaker 13 My favorite change that they proposed in this seven-point plan is actually number four, which says that they are going to appoint a quality ombudsperson, which is tech-company speak for a person that people can yell at when their speakers start to malfunction.
Speaker 13 And I got to say, this sounds like the worst job in America.
Speaker 14 I don't know. What if, like, it got you a direct email line to Patrick Spence, the CEO of Sonos, and you could just sort of forward him all the complaints that you were receiving?
Speaker 14
That actually sounds kind of fun. I'd be interested in that job, honestly.
I'd be like, Patrick, by the way, another 400 emails from people who can't get Spotify to work.
Speaker 13 That would be fun to do. Well, if this journalism thing doesn't pan out, they are hiring.
Speaker 14 I want to be part of the solution, Kevin.
Speaker 13 And the company also said that they are pegging employees. They're pegging employees? Oh, my God.
Speaker 14 Wait, why did we save this for the end of the segment?
Speaker 14
Gang, we got some breaking news. Send out a New York Times push alert.
Sonos is pegging its employees.
Speaker 13 Wow, I really stepped into that one. Wow.
Speaker 14 Just when they had a plan to earn back my trust.
Speaker 13 Talk about an unwavering focus on the customer experience.
Speaker 13 We're going to use this, by the way. Yeah, this is staying in.
Speaker 14 This is staying in.
Speaker 13 What I was trying to say is that the company has demonstrated its commitment to these changes by pegging executive bonuses to improving the quality of the app and rebuilding customer trust.
Speaker 13 But now I almost don't want to say it because what just preceded it had so much more energy.
Speaker 14 I'm very glad you did because the number one thing I've been thinking this whole time is how our executive bonus is being affected by this.
Speaker 14 Because if these people are not properly incentivized, Kevin, to maintain their unwavering focus on the customer experience, I don't know how we're ever going to solve this.
Speaker 13 I think we should implement a quality ombudsperson for the Hard Fork podcast.
Speaker 14
Sure. Come at me, bro.
Yeah. Yeah.
Speaker 13 Is that you?
Speaker 14
Well, look, let me just say, we do read the emails that are sent to us and we're hearing the feedback loud and clear. Yeah.
Which is that you would rather be listening to a different podcast.
Speaker 13 I don't know why you emailed that to us, but you did. So thank you.
Speaker 13 We did get one email this week that I thought was very nice, because we did that whole thing on the show last week about the hot Kevin Roose from the Netflix documentary.
Speaker 13 And there was someone who wrote in who said a very nice thing, which was that actually the real Kevin Roose is the hot Kevin Roose. And I appreciate that.
Speaker 14 That is very nice.
Speaker 14 We do have nice listeners. Yeah.
Speaker 13
I don't mean to interrupt your meal, but I saw you from across a cafe and you're the Geico Gecko, right? In the flesh. Oh, my goodness.
This is huge to finally meet you.
Speaker 13
I love Geico's fast-and-friendly claim service. Well, that's how Geico gets 97% customer satisfaction.
Anyway, that's all. Enjoy the rest of your food.
No worries.
Speaker 13
Uh, so are you just gonna watch me eat? Oh, sorry. Just a little starstruck.
I'll be on my way. If you're gonna stick around, just pull up a chair.
You're the best. Get more than just savings, get more with Geico.
Speaker 13
At Sutter, healing hearts never stops.
Speaker 13 Our specialists provide life-changing cardiac care for every heartbeat, every step of the way, and are dedicated to helping hearts love longer and beat stronger.
Speaker 13 Whether it's transplants, arrhythmias, or blood pressure management, pioneering heart care isn't just our purpose, it's our promise. A whole team on your team, Sutter Health.
Speaker 13 Learn more at sutterhealth.org/heart.
Speaker 13
Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Jen Poyant.
This episode was fact-checked by Ena Alvarado. Today's show was engineered by Chris Wood.
Speaker 13
Original music by Elisheba Ittoop, Marion Lozano, Diane Wong, Corey Schreppel, and Dan Powell. Our audience editor is Nell Gallogly.
Video production by Ryan Manning and Chris Schott.
Speaker 13 You can watch this whole episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.
Speaker 13 As always, you can email us at hardfork at nytimes.com. Or you can reach our quality ombudsperson at casey at platformer.news.
Speaker 14 By the way, if you have feedback on the new Times audio subscription, I know they're changing it, write to The Ezra Klein Show at nytimes.com.
Speaker 13
So, um, I was just parking my car, and then I saw you, the Gecko, huge fan. I'm always honored to meet fans out in the wild.
The honor's mine.
Speaker 13 I just love being able to file a claim in under two minutes with the Geico app.
Speaker 13
Well, the Geico app is top-notch. I know you get asked this all the time, but could you sign it? Sign what? The app? Yeah, sure.
Oh, that means so much. Oh, it rubbed off the screen when I touched it.
Speaker 13
Could you sign it again? Anything to help, I suppose. You're the best.
Get more than just savings, get more with Geico.