Hard Fork

A.I. Action Plans + The College Student Who Broke Job Interviews + Hot Mess Express

March 21, 2025 1h 1m Episode 128
“A.I. companies are slowly and haltingly learning to speak the language of Donald Trump.”

Listen and Follow Along

Full Transcript

This podcast is supported by Oracle.

AI requires a lot of compute power, and the cost for your AI workloads can spiral. That is, unless you're running on OCI, Oracle Cloud Infrastructure.
This was the cloud built for AI, a blazing fast enterprise-grade platform for your infrastructure, database apps, and all of your AI workloads. Right now, Oracle can cut your current cloud bill in half if you move to OCI.
Minimum financial commitment and other terms apply. Offer ends March 31st.
See if you qualify at oracle.com slash hardfork. Oracle.com slash hardfork.
Casey, I have a bone to pick with you. What's that? What did I do? So on Saturday, as you know, we had a birthday party at our house.
Wonderful birthday party. My son.
Yeah, and it was also a housewarming party. Housewarming party.
And you and your boyfriend came. Lovely to see you there.
Thanks for coming. But you brought him this present.
We specifically said no presents. He did say that.
And you brought him this present that was called the dino truck. Yes, and here's why.
Because I know that your son loves trucks, and I thought, what is the best kind of truck I could think of? And that would be a truck that was also a dinosaur that was full of dinosaurs. And so that's what I got him.
Yes. It's very, like, Pimp My Ride-coded, because it's a dinosaur truck that contains within it 12 other dinosaur trucks.
That's right. And you sort of, like, assemble it all together.
But my son has not stopped playing with it. He absolutely loves it.

And as a result, about twice a day,

I now step on a very painful dino truck that has been left somewhere in my house.

Oh, no.

He's loving it.

I am not.

I mean, I think it was the best kind of gift I could get for the Roose family: it is something that your son enjoys and that causes you physical pain. So I think that was a slay on my part. Mission accomplished.
I'm Kevin Roose, a tech columnist at the New York Times. I'm Casey Newton from Platformer.
And this is Hard Fork. This week, America is building an AI action plan.
We'll tell you how tech companies are trying to exploit it.

Then, Columbia University sophomore Roy Lee joins us to talk about the tool he built

to help software engineers cheat their way through job interviews

and why he might get kicked out of school over it.

And finally, the Hot Mess Express is back. That's our action plan for today's episode. Okay, let's hear it.
I want to talk about action plans. So, Casey, as you know, because you wrote about it this week, there have been these AI action plans that all the big AI companies and think tanks and nonprofits have been submitting to the Trump administration over the past couple of weeks.
Yes, there was the Paris AI Action Summit at which no action was taken or really even proposed. And then the White House came forward and said, we're going to make our own action plan.
And why don't you, you companies and anyone else who wants to make a public comment, go ahead and tell us what you think we should do? Yeah, so these kind of public comment periods are not unusual. Agencies of the government sort of open themselves up for submissions from the public all the time on various issues.
But this one caught our eye because it was related to AI. And it was essentially the Trump administration trying to figure out what to do about AI and the potential that AI is going to accelerate during the four years that Donald Trump is in office.
Yes, I think that's how the Trump administration saw it. And I think for the big AI companies, it was really a chance to present the president with a list of their absolute fondest wishes and dreams for what the best possible deal they could get from the government would look like.
Yes. So I think there's some interesting stuff in them, but I also think there's kind of a broader interesting story about how the tech companies want or don't want government to be involved in helping them build and manage these very powerful AI systems.
Yes. Let's get into it.
Okay. But first, because this is an AI-related segment, we should make our standard disclosures.
Do you want to switch it up this week? Do you want to do mine and I'll do yours? Yeah, sure. The New York Times is suing Microsoft and OpenAI over alleged copyright violations.
Correct. And Casey's boyfriend works at Anthropic.
That's right. Okay, so you wrote about these submissions this week.
Where do you want to start? Well, let's start at maybe some of the things that are a little bit less controversial, right? I think there are some pretty good ideas in these action plans, and I actually think the Trump administration will probably follow through on them. So, for example, they talk about wanting to expand the energy capacity that we have in the United States so that we can have the power that it will take to do everything with AI that we want to.
They also talk about encouraging the government to explore positive uses of AI, right? Potentially deliver better services to citizens. That would be good if that happened.
So there's a lot in these documents about that. But once you get beyond that surface layer, Kevin, there is a lot of essentially what these companies have always wanted the government to tell them, and they are now finally getting a chance to say, hey, please, please, please do this.
And what are those things? So, for example, they are really, really excited about the idea that Donald Trump might declare definitively that they have carte blanche to train on copyrighted materials. Now, this is, of course, at the heart of the Times lawsuit against OpenAI, but it's not just OpenAI that wants the green light to do this, right? Because all these AI labs are under similar legal threat.
So it's in Google's AI action plan. It is in Meta's AI action plan.
In fact, Meta says that Trump should unilaterally, without Congress, just issue an executive order and say, yeah, it's okay for these AI labs to train on copyrighted material. Go nuts.
OpenAI, I think, had a frankly ridiculous statement in their AI action plan, which is that if Trump does not do this, if he does not give AI companies carte blanche to train on copyrighted materials, we will immediately lose the AI race to China, and it will just be DeepSeek everything from here on out. Huh.
I mean, obviously they have interest in making that case and having the Trump administration give them sort of a free pass, but can they actually do that? Like, could Donald Trump issue an executive order tomorrow and say there's no such thing as copyright anymore when it comes to the data used to train large language models? Well, Kevin, lately, the Trump administration has been issuing a lot of executive orders that people have said, well, hey, you're not allowed to do that. That's actually not constitutional.
And yet he keeps doing it. And some of these things have been struck down by the courts and some haven't been.
And there seems to be a kind of flood-the-zone strategy where we're just going to sort of do whatever we want, and the courts may undo some of it, but they're probably not going to undo all of it. So where would a copyright executive order fit into that? I don't know.
Yeah, I mean, my hunch is that this will not happen via executive order, that it will be left up to the courts to decide. But yeah, I mean,

it's certainly in their interest to argue that this all should be allowed and kosher and to

sort of preempt any potential litigation against them. Was anyone opposed to that idea?

Yes. So a group of more than 400 Hollywood artists, including Ben Stiller, Mark Ruffalo,

Cynthia Erivo, and Cate Blanchett signed a letter saying, hey, do not grant an exemption from copyright law to these AI companies. And their argument was essentially about America's cultural leadership in the world. You know, it's like so much global culture is downstream of American culture.

And they said, if you create disincentives for us to create new works, because we can no longer make any money from it economically, because AI just decimates our business, we are going to lose that cultural leadership. And so I would actually call on Ben Stiller, Mark Ruffalo, Cynthia Erivo, and Cate Blanchett to come on the Hard Fork podcast and tell us more about that.
We'd love to meet you and hear your stories. I would call on them to frame their opposition in the form of a musical.

Cynthia Erivo in particular.

I have a proposal for the showstopper tune of that musical.

Have you written it?

Yeah.

It's called Defying Copyright.

Oh, boy.

Wow.

You wouldn't even try for a rhyme.

You know, when it comes to copyright violations, Cynthia Erivo is decrying depravity. And that's how you do it, Kevin.
Okay, back to the serious issues in these AI action plans, Casey. Yeah, there's another big plank that gets repeated in these submissions, Kevin.
And that is this idea that these companies do not want to be subject to a thicket of state laws about AI, right? Yes. Basically, in the absence of strong federal regulation on AI, what the AI companies don't want is for California to pass a bill governing the use and training of large language models, Texas to pass a bill, Florida to pass a bill, New York to pass a bill.
They don't want to have to kind of go through 50 states worth of AI regulations and making sure that all their models comply with all the various state regulations. So they have wanted for a long time and are now making explicit their desire for a sort of federal law or statute or executive order that would essentially say to the companies,

you don't have to pay attention to any state laws because the federal law will supersede all that. Yes, and in particular, Kevin, they are worried about state laws that would make it so that these companies could be held legally liable in the event that their products lead to great harm, right? There was some discussion about this in California last year with a Senate bill that we've talked about on the show, and there's a lot of fear that other states might take a similar approach.
And so this plank in these plans, Kevin, where these companies are saying, we don't want a thicket of state laws, it kind of works in a couple different ways. I can understand why they don't want to have to have a different version of ChatGPT in 50 different states.
That would obviously be very resource-intensive and annoying. At the same time, these companies know full well the country they live in.
They know how many tech regulations we've passed in this country in the past 10 years. There has been exactly one, and it was to ban TikTok.
And it turns out that even when you pass a law banning TikTok, TikTok doesn't get banned. So I think that there is a bit of cynicism here, in that they're saying, oh, please, please, please let there not be any state laws, just pass a federal one.
They know that there is very little likelihood that that is going to happen anytime soon. And so in the meantime, they can just sort of operate under the status quo, where they don't have direct legal liability for any bad outcomes that might arise from a future large language model.
So I went through a lot of these proposals, and I think there's some interesting stuff in them sort of around the edges. There was a lot of talk about the security of these models and trying to sort of harden the security of the AI companies themselves so that, for example, foreign spies aren't stealing the model weights and sending them to one of our adversaries or things like that.
By the way, I love that word. You know, it's, oh, we have to harden our defenses.

We have to make them so hard. We have to harden our posture.
I don't know when we started saying

that. Casey, this is a family show.
It's very evocative is all I'm saying. Anyways, go on.

So there's some sort of small bore stuff in there that felt interesting.

Small bore, by the way,

two words often used in reviews of this podcast.

I don't know why I keep interrupting you.

I'm just trying to get the energy level up.

We're doing great.

That's fine.

All right, tell us more.

So some of the plans contain

some sort of weird, interesting ideas.

Like, for example, in OpenAI's proposal,

there's this idea that 529 plans,

which are the plans that parents can start to save for their child's college education, should be expanded so that they can be used to pay for things like getting an HVAC technician credential, because they say, you know, we're going to need a lot of HVAC technicians in all these data centers that are going to power all these AI models.

And right now, you know, kids are being incentivized to go to college and get four-year degrees in, you know, various subjects that may not be that relevant. But like, we're definitely going to need a lot more HVAC technicians.
Is that going to change the world overnight? No. Is the Trump administration going to take that seriously? I have no idea.
But that's the kind of thing that I was surprised to see in there. But what I found more interesting was what was not in these proposals, right? These companies and the people who lead them have big radical ideas about how society will change in the coming years as a result of powerful AI systems.
Sam Altman has been interested for years in universal basic income. He funded a universal basic income experiment to try to figure out what an economy after AGI would look like and how we would provide for people's basic needs.
There are executives that are trying to solve nuclear fusion to power the next generation of AI models. There are people who want to do things like WorldCoin, which Sam Altman also funded, to sort of give people a way to verify that they are humans.

You can imagine a world in which the AI labs were saying to the government and the Trump administration, hey, we have all these ambitious plans.

We want your help.

Please help us come up with a UBI program that might make sense for people who are displaced by AI.

Help us come up with some kind of national proof of personhood scheme

or help us build fusion energy.

But they're not asking for that stuff.

What they're asking for instead is basically leave us alone and let us cook.

And it really makes me think that these labs have decided

that it would be more trouble to have the government in their corner actively helping them than it would be worth. And so my read of these proposals is that they are trying to give the government some stuff that they can do that will make them feel like they're helping and sort of clearing the path for AI, but that they're not calling for any kind of, like, federal Manhattan Project for AI, because my sense is that they just think that would be inviting trouble.
Yeah. And I mean, they might be right about that, right? I'm not sure exactly what the government could or should be doing to like help OpenAI make a better version of ChatGPT.
But, you know, I think I would go a step further than what you said, Kevin, because it isn't just leave us alone. They're really telling the government, leave us alone or else.
There is a boogeyman in these AI action plans, and the boogeyman is DeepSeek.

So DeepSeek, of course, is a Chinese company that emerged with a model called R1 earlier this year that shocked the world with how much it had caught up to the state of the art and has really galvanized the attention of Chinese leaders around the possibilities of what AI can do in China. And so when you read the OpenAI and the Meta action plans in particular, they're saying, look at DeepSeek.
China is so close to us. You really need to let us do exactly what we want to do in the way that we are already doing it, or we're just going to lose to China and it's all going to be over for us.
Yeah. Yeah.
I noticed that too. And I think we've seen that being telegraphed at things like the Paris AI Summit, where there was a lot of talk about China and foreign adversaries that were catching up to the state-of-the-art AI technology.
But to me, that feels very calculated. Like, that is the role that the AI companies want the government to play.
Other than just getting out of their way, they also want them to hobble China and make it hard for China to sort of catch up to them in the state-of-the-art. And there's a genuine read of that that is like, we're worried about Chinese companies getting to something like AGI before Americans and what happens if their values rather than ours are embedded in these systems and they just use them for surveillance on their own citizens and things like that.
The cynical read is like, we have this new competitor and we would like the U.S. government to step in and make things actively harder for that competitor.
Yeah. And look, I mean, I think there are reasons to be worried about what an adversary could do with a really powerful AI.
So I don't want to dismiss these concerns completely, but I do feel like some of these labs are trying to use the specter of China in a pretty cynical way. My favorite story about this issue, Kevin, does have to do with Meta.
So, you know, Meta writes in its proposal to the government a lot about DeepSeek. And Meta's number one priority in its action plan is that it continues to be able to develop what it calls open source AI.
Now, Meta's AI is not actually open source. There are a lot of restrictions on how you can use it.
Most people would call it open weights instead of open source, because you can download the model weights, but not the actual source code. Okay, we're a little bit in the weeds, but I do feel strongly about the importance of this distinction.
Our listeners have fallen asleep. Wake up! Okay, so let's just wake up by saying that Meta says to the government, look at what DeepSeek is doing.
If you don't let us develop in an open source way, DeepSeek's own sort of open weights approach could spread all across the world, and it will have these authoritarian values embedded in it, and we will just sort of lose out on the opportunity of a lifetime. Why is that funny to me? Well, Kevin, it's because in November, Reuters reported that Chinese researchers had used Meta's Llama model to create new applications for the military.
Oh, boy. So, you know, and look, does that mean that China used Llama to build a giant space laser that's going to vaporize the eastern seaboard? No.
But it does suggest to me that this idea that we have to release, quote, open source AI in order to save us all is probably not the right answer. Yeah, and if anyone from the Chinese military is listening to Hard Fork, please don't develop a space laser using Llama.
That seems scary. That's our AI action plan.
No space lasers. So before we wrap up talking about these AI action plans, I want to

point to a few good ideas that I saw in them. Many of them came from groups other than the big AI labs, but I thought there was some interesting sort of off-the-wall stuff that I hope the Trump administration is paying attention to.
One of them was this proposal from the IFP, the Institute for Progress, which is a pro-technology, pro-progress think tank.

IFP says, you know, we're going to need a bunch of data centers and a bunch of energy sources to power those data centers, but all that requires building physical infrastructure, and it can be quite slow to build physical infrastructure in many parts of the country due to things like environmental regulations and zoning and things like that. So they proposed creating these things called special compute zones, where you would essentially be able to build in a much less restricted way the infrastructure to power advanced AI systems.
That's actually what I call my office: a special compute zone. When I see, like, guests going in there, I say, hey, get out of there.
That's a special compute zone. Yeah, so that was one interesting idea from the IFP proposal.
Did the Institute Against Progress have any interesting ideas you want to share? Well, there isn't an Institute Against Progress, but there are some organizations like the Future of Life Institute that are much more concerned about the development of these powerful systems. This is one of these organizations that's been around for a while.
It's concerned with things like existential risk and runaway AI. And so one of their ideas that they put in their proposal was that all AI models of a certain size and power should have kill switches on them.
Basically, in order to release one of these things, you should have to build in a way that an engineer can shut it down. And the way that they pitched this to the Trump administration was this is a way to protect the power of the American presidency, right? As the president, you wouldn't want some AI system going rogue and becoming more powerful than you or allowing another world leader to become more powerful than you.
So you want a kill switch on these things in order to protect the authority of the American president. Yeah, and one of the most interesting things about all of these plans, Kevin, is the way that the authors have to contort themselves to try to talk about AI in a way that the Trump administration will actually listen to, right? Vice President Vance in Paris in February says explicitly that the AI future is not going to be won by hand-wringing over safety, right? They hate the term AI safety.
And so, in fact, when you look at the proposals of the major labs, they basically don't use the word safety at all, except maybe, you know, one time. I actually was doing, like, Command-F to try to find instances of safety in these plans.
You won't find it there. And so they have to sort of contort themselves.
In Anthropic's policy, it was almost like they were hiding medicine inside of peanut butter and feeding it to a dog, because instead of talking about safety, they would talk about national security, which is just another way of talking about AI safety. But actually, a lot of their proposal is about how can you build these systems safely? It's just that they're saying, you know, there's a national security implication.
Yes. So I think if we zoom way out from the specifics of these proposals, the two things that I want to convey about this process, one is that the AI labs mostly want government to leave them alone.
The second thing is that I think the AI companies are slowly and haltingly learning to speak the language of Donald Trump. And this is their sort of first major public attempt to talk to the Trump administration in the way that it wants to be talked to about how to harness the power of AI for American greatness or whatever.
So I have a slightly darker view of this, which is that the Trump administration has essentially already told us its AI action plan, which is go faster, beat China, right? That is the plan. And when given an opportunity to say, what do you think the United States should do? The biggest AI companies all looked around and they said, we should go faster and we should beat China.
Now, if it happens that the United States is able to build a very powerful and very benevolent AI and somehow create and promulgate democracy around the world, then okay, that's great. But I think that there is a risk that this leads us into some sort of conflict or that by going very fast, we wind up making a lot of mistakes and we're at a higher risk of creating systems that we cannot control.
So if you are, you know, in your cars this morning listening to us wondering, why did they talk so much about these plans? This is the reason why, to me: this feels like an inflection point where some of the most consequential figures governing the development of AI had a chance to say we should be really careful and thoughtful about this, and they mostly did not. Yeah, I think that's a really good point.
Casey, what is our AI action plan? Because we have to be part of the solution here. Two words, underground bunker.
I'm not telling you where it is, but it's under construction. How about you, Kevin?

I can't do better than that. That's good.
Can I have a spot in your bunker? Absolutely. There will always

be a spot for the Roose family

in the Hard Fork bunker.

That's very sweet. Thank you.

We're not bringing the dino truck.

When we come back, the college sophomore

who has a cheat code for LeetCode. This podcast is supported by Oracle.
AI requires a lot of compute power, and the cost for your AI workloads can spiral. That is, unless you're running on OCI, Oracle Cloud Infrastructure. This was the cloud built for AI, a blazing fast enterprise-grade platform for your infrastructure, database apps, and all of your AI workloads. Right now, Oracle can cut your current cloud bill in half if you move to OCI.
Minimum financial commitment and other terms apply. Offer ends March 31st.
See if you qualify at oracle.com slash hard fork, oracle.com slash hard fork. And now a next level moment from AT&T business.
Say you've sent out a gigantic shipment of pillows and they need to be there in time for International Sleep Day. You've got AT&T 5G, so you're fully confident.

But the vendor isn't responding, and International Sleep Day is tomorrow.

Luckily, AT&T 5G lets you deal with any issues with ease, so the pillows will get delivered and everyone can sleep soundly, especially you.

AT&T 5G requires a compatible plan and device.

Coverage not available everywhere.

Learn more at att.com slash 5G network. Well, Casey, we've got a doozy of a story this week and an interview with a real live member of Gen Z.
Yeah, and we are excited to talk to this one. This is a controversial story, Kevin, but one that we think tells us a lot about the state of the world.
So today we are talking with Roy Lee. He is a sophomore at Columbia University.
For now. For now, for at least the next couple of days.
And he has gotten a lot of attention in recent days for something that he's been doing to apply for jobs in the tech industry. What has he been doing, Kevin? So Roy has developed a tool called Interview Coder that basically uses AI to help job applicants to big tech companies cheat on their interviews.
Yeah. So in a lot of tech interviews, they do these things called LeetCode problems, where basically the recruiter or the person who's supervising the interview from the tech company will watch you kind of solve a tricky computer science problem, and they'll do this remotely.
And so Roy had this idea, well, these AI systems are getting quite good at solving these kind of problems. What if you could just kind of like have the AI running in the background telling you how to solve the problem, and you could kind of make that undetectable to the company.
Yeah, and to prove that this worked, Roy applied for jobs at several big companies, including Amazon, and he says wound up getting offers from all of them after using this tool, and after he began promoting this story online, well, that's when all hell broke loose. Yeah, so he has become sort of a villain to a lot of tech employers and people doing these kinds of interviews.
But he's become a hero to a bunch of younger programmers who think that these practices, these hiring tests, these puzzles that you give people when they're looking for jobs are outdated and that they need to be sort of exposed as being bad and wrong. And then we need to come up with something better to replace them.
Yeah. And Kevin, I am sure that some listeners are gonna hear this segment, and they are gonna email us, and they are gonna say, shame on you.
Why are you giving this guy a platform? We shouldn't be rewarding people for cheating. But I have to tell you, as we sat with it, we thought, this is a story that tells us a lot about the present moment.
The nature of software engineering is changing. The nature of hiring is changing.
What should employers

be looking for and how should they test for it? These questions are getting a lot more complicated

as AI improves. And Roy's story, I think, illustrates how quickly things are changing

in a way that is just honestly worth hearing more about. All right.
Well, with that, let's bring in

Roy Lee. Roy Lee, welcome to Hard Fork.
Hey, excited to be here. So where are we finding you today? It looks like you're in a dorm of some kind.
Yeah, yeah. I'm still in my Columbia University dorm at the moment.
Possibly for not too much longer. Is that right? Yeah, yeah.
I'm waiting on a decision to hear if I'm kicked out of school or not. So this might be my last few days.
And what's the over under on whether you get kicked out or not? From the facts of the case, I would say it's not looking good for you. Yeah, yeah.
It is not looking too good for me. But strangely enough, I've had some pretty powerful people message me and say, hey, if they try to do anything, then just let us know.
So yes, both worlds are in the realm of reality. Wow.
So I want to get to all the disciplinary drama, but I want to actually take us back in time to when this all started for you. When did you get the idea for this tool, Interview Coder, and what problem were you trying to solve? Yeah, so I don't know how familiar you guys are with software engineering, but for about two decades now, there's a technical interview that happens that's called a LeetCode-style interview.
And it's essentially an interview where they'll ask you a riddle, and these types of riddles, or problems, are found on a website, LeetCode.com. And you're given 45 minutes, and the task here is to have seen the problem before, solve the problem, and be able to regurgitate the memorized solution while acting like you haven't seen the problem before.
So it's pretty much a really ridiculous system and type of interview, and every single software engineer out there sort of knows it. And everyone, if you want a job that pays a reasonable salary, then you're kind of forced to go through this gauntlet of spending a couple hundred hours on this website memorizing a bunch of riddles.
And that's just like a gigantic net negative for society. I myself went through the gauntlet.
I grinded the website, probably, up until I was in the top 1% of competitive ranked users on the website. So it was just a gigantic waste of time.
I spent 600 hours of my life memorizing riddles when in reality I should have been programming. And as soon as I kind of developed the balls to kind of do something, I just realized, hey, there's something that can be done here.

This is a very easy solution. This type of interview is already being gamed by tools like this that exist.
It just takes someone to kind of, like, make it really public, make a scene out of it, and show big tech, hey, you guys need to fix it, because it's just not working. So you say you spent hundreds of hours on this website solving these riddles.
I'm curious if you feel like it made you better at coding. My guess would be that if you're truly in the top 1% of people who are using this website to solve problems, it would have made you pretty good at being a software engineer.
There might have been utility in maybe solving the first 20 questions. Maybe the first 10 hours on the website might have had some utility.
But after that, it doesn't really help you at all. The types of problems and the type of thinking that you're expected to perform on while doing these questions, it's just you're never, ever going to use it in a job.
All right. So you get very frustrated with LeetCode.
You start thinking about what you want to do next. And tell us the moment that you decided to become the Joker.
Yeah. So during the recruiting process, my interest in entrepreneurship was growing.
And at a certain point, it kind of got to a point where I realized like, hey, no matter what, I'm only going to end up at a startup. And I kind of have the balls to cut off all these bridges now with big tech companies.
And as soon as I developed that mindset, I realized that, hey, doing this thing is not actually going to ruin my future as much as I think it will. And in that case, it just becomes a super viral thing that we know will go viral.
So tell us about the thing. Tell us about the tool that you built and how it works.
Yeah, so really core level, it's a desktop application that sits, it overlays on top of all of your other applications and is completely invisible to screen share. The technology is actually very, very simple.
You just take a screenshot of the screen and ask ChatGPT, hey, can you solve the question you see on the screen? And it spits out the response. But what we've really done technically is make it undetectable to the interviewer.
There's a translucent overlay, so it doesn't look like your eyes are moving or you're looking at another screen at all. There's a movable window you can overlay directly on top of your code.
The cursor doesn't lose focus. And there's just a lot of bells and whistles we've used to make it completely undetectable that you're actually using something at all.
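[Editor's note: to make the mechanics Roy describes concrete, here is a minimal sketch of that core loop in Python. This is our illustration, not Roy's actual code: it assumes the mss screen-capture library and OpenAI's Python client with a vision-capable model, and it leaves out the overlay and anti-detection work he describes.]

```python
# A minimal sketch of the loop Roy describes: capture the screen, send it to a
# vision-capable model, print the suggested solution. The translucent overlay
# and anti-detection features are omitted. Library and model choices here are
# our assumptions, not details from Interview Coder itself.
import base64

import mss
import mss.tools
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def capture_screen_png() -> bytes:
    """Grab the primary monitor and return it as PNG bytes."""
    with mss.mss() as sct:
        shot = sct.grab(sct.monitors[1])  # monitors[1] is the primary display
        return mss.tools.to_png(shot.rgb, shot.size)


def solve_onscreen_problem() -> str:
    """Ask the model to solve whatever coding question is on screen."""
    image_b64 = base64.b64encode(capture_screen_png()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Solve the coding interview question shown in this screenshot."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(solve_onscreen_problem())
```

As Roy's description suggests, this loop is the easy part; the engineering effort went into keeping the window invisible to screen sharing and the interviewer.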
So let me get a sense of how this actually works in practice. So during an interview for a programming job, you would be given a LeetCode problem to solve, and then you would be on a video call with someone, a recruiter from the company who's watching you solve the problem.
Is that how these work? Yeah, that's exactly it. And so you developed a tool to essentially allow you to have AI solve this problem for you while not tipping off the person on the other end of the video call that you're using AI.
Yeah, yeah, yeah. That's how it works.
And am I right that you used a prototype of this when you were going through your own interview process with Amazon? Yeah, yeah. It wasn't just Amazon.
I spent the entire recruiting season figuring out how to make a perfectly undetectable application. I trial-ran it with companies like Meta, Capital One, TikTok, and the belle of the ball was Amazon.
That was sort of the most well-known thing with the most annoying recruiting process. And I just knew that if I recorded the entire process, then this would blow up.
And how did your tool do? Yeah, I mean, it completely one-shot it. We live in an age where AI exists, programmers are going to use AI, and AI is extremely good at these sorts of riddle-type problems.
Can I just ask, what was your emotional experience of this time? You are walking into, like, several lions' dens. You're essentially misrepresenting yourself as an earnest job candidate.

Your whole role is essentially to gather content that can then be repurposed to promote your startup.

Were you nervous during the time?

What were you feeling as you were going through these interviews?

Yeah, you have no idea.

There was a point in time where I was getting flooded with disciplinary messages from Columbia.

And I just thought I just completely burned my career and my future education for 20,000 YouTube views. Was this really all worth it? And I was in this mental state for about a week until it kind of blew up.
And at that point, the virality kind of was my protection for everything. And just help me understand here, like what Columbia's role in this is.
So obviously what you're doing in sort of cheating on these job interviews for Amazon and Meta and TikTok and these other companies is against those companies' wishes and their policies. But why did it become Columbia's business? Yeah, I actually have no idea.
I read the student handbook quite thoroughly before I actually started building this thing because I was ready to burn bridges with Amazon, but I didn't actually expect to get expelled at all. And the student handbook very explicitly doesn't mention anything about academic resources.
Yeah, there's no mention of LeetCode or job interviews anywhere in there. I have no idea why this became Columbia's business.
We should say we reached out to a spokesperson for Columbia about this, and they declined to comment. We also reached out to Amazon, and while they declined to comment on the specifics of Roy's application, they did give us a statement defending their hiring process and clarifying that while they do welcome candidates to describe their experience using AI tools, in some cases they require applicants to acknowledge that they won't use AI during the interview or assessment process.
So how long has your tool been out in the market for other cheaters to use? It's been out since February 1st, so just a little under 50 days now. What can you tell us about how many people are using it and what kind of outcomes they're seeing? Yeah, there's been a few thousand users now and not a single reported instance of the tool getting caught.
There's been many, many grateful emails of people having used the tool to get job offers. It's doing very well.
So like you, Roy, are a capable coder, right? You are in the top 1% of LeetCode solvers. You presumably could have gotten some of these jobs without AI assistance.
But some of the people using this tool may not be talented programmers. They may be using this to kind of skate through these interviews that they shouldn't be passing and wouldn't pass without AI assistance.
And I'm just imagining those people like showing up, you know, for day one of their internship or their job at Amazon or another big tech company, and just having no idea what they're doing and being totally useless without AI assistance. Is that something that worries you about putting this kind of tool out into the world? Not at all.
I think LeetCode interviews are about as correlative as how many jumping jacks you can do being the benchmark for how good of a New York Times podcaster you are. It just really has nothing to do with the job.
Perhaps it is correlated that someone is willing to put in the work because they really want to be a New York Times tech podcaster, but in reality, they just have nothing to do with each other. What in your mind would be a fair test of somebody's software engineering skills that could be used as part of an assessment? Yeah, I think there's assessments out there that give you access to all the tools that you have on the regular day-to-day job, which includes tools like AI code editors.
And if you ask someone a pretty fairly open-ended assignment with an AI code editor and sort of just like gauge them on how well they did there, then that's like a much more standardizable assessment that allows you to use the tools that are at your disposal. So essentially just say like, look, use whatever tool you want.
Just get this thing done in a reasonable amount of time. That's the test you want to see these companies offering.
Yeah, exactly. Exactly.
Did you have at any point during this process any misgivings or ethical concerns about what you were doing? No, I mean, I was very intentional from the start that I was not going to intern at any of these companies. And frankly, I don't really care if there's people that are cheating their way to get these jobs.
Again, bring back the jumping jack example. If you were just told to do as many jumping jacks as you could, and that would win you the position, I wouldn't really care if someone's cheating their way through a bunch of jumping jacks.
What does your family think about what you're doing? Yeah, so my mom actually only figured out about a week ago. And I didn't tell her before then because I knew she would disapprove.
But I've always been a pretty rambunctious kid who's been pretty self-minded and sort of does what he wants. I think they're a lot happier now that they know how much money I'm making.
Good, okay. And how much money are you making? Yeah, we're on track to do about, we're closing in on $200,000 this month.
So we're on track to do about like two, three million in a year. Wow.
That would almost buy you one year of education at Columbia University. So that's pretty good.
Pretty good. I think your tool is arriving at this really interesting time, Roy.
You know, Kevin and I have been talking in recent weeks about the phenomenon of vibe coding. People like me and Kevin who have no technical skills whatsoever, but we can sit down with something like Claude and say, hey, write me an app.
Kevin has actually had some success with this. I've made some really bad video games using this thing, right? I do not consider myself a software engineer, but at the same time, what you are having job candidates do with your tool and what we are doing as vibe coders is not really that different, right? We're just typing some text into a box and getting some sort of output.

And so I'm wondering, are we just at an inflection point where the line between software engineer and vibe coder is kind of dissolving?

That's certainly the future that we're headed to, but I think we're a few years away. In my opinion, what AI really has the potential to do is make someone about 10 to 100 times more efficient at what they're able to do.
If you're a really good coder, then you're able to code really good things a lot faster. But if you're not that good in the first place, then there's still going to be a huge difference between what a staff software engineer at Google is capable of and what you are.
This does feel like a classic anxiety dream where you show up on your first day as a software engineer at Google, but you realize that you actually only know how to vibe code. And now you just sort of have to fake it for your entire career.
But presumably some people who use your tool, Roy, are having this experience. Yeah, I mean, that's probably what 50% of people at Google are doing anyway.
So it wouldn't be the first time. Roy, I'm curious if you think there's sort of a generational misunderstanding here.
Obviously, you are young, you're 21, correct? Yep, yep. Give us a sense of how your peers, college students, young programmers are using AI and what older people, people who have been doing this for 10 or 20 years, people who are working at these big companies may not understand about how your generation sees coding.
Yeah, I think this is actually interesting that you asked me this question because I think this is something that nobody's really caught on to yet.

But the proportion of people who are almost solely using AI to code is almost, I would say it's close to 100%, even at a school like Columbia. The best CS students of our nation are almost not writing original code at all.
And the only people that are are the people who have started coding from a really young age. It could end up being dangerous because I really do think that a fundamental understanding of how these things work is important.
But at the same time, the models are only getting better and we could just lean towards the future where software engineering is just completely obsolete. But I'd also say I'm a second year at Columbia, so there might be better people to ask.
Nope, you're the best. So I'm curious how much of your critique of the way that tech companies are hiring software engineers also applies to just the education system that you've gone through and how it wants you to use AI.
What sort of resistance have you encountered in your educational career to using these sort of tools?

And have you been flouting those the same way you've been flouting the tech companies?

Yeah, I'm not as avid a cheater in school as I am in the tech interviews,

but I do think that there's going to be a very fundamental reframing

in how we do almost every bit of knowledge work in the future. Essays are not going to be written the same.
Tests are not going to be conducted the same. Memorization will not need to happen.
We're headed towards a future where almost all of our cognitive load is offloaded to LLMs. And I think people need to get with the program.
Yeah. Who are some of the people who have reached out since your story went viral? God, I don't want to name any names, but I will say that I've verbally received job offers from pretty much every single big tech company, including almost all the ones that rescinded my offer initially.
Just people who are like high up saying, hey, I know you're probably not interested, but I would hire you on my team in a second. Wow.
Wow. And they're not even going to make you interview, probably because they know you would cheat.

So, I mean, look, Roy, I got to put my cards on the table. I'm more of a rule follower.
Like, I didn't cheat in school. I don't love the idea of people, you know, cheating their way through every job interview.
Kevin is much more permissive about these sort of things. But there is this one way in which I am sympathetic to what you're doing, which is that tech companies are saying, don't use AI assistance when you are

applying. But at the same time, they are hiring you to build AI systems that will automate coding

and replace human developers. And it does feel to me like there is some sort of contradiction there.

It's like, no, no, no, you don't use the AI. Prove that you can do it with your own mind and then come here and then build a tool that will replace yourself completely.
Yeah. I mean, even more so, like, feel completely free to use the tool in a job, but just don't use it in the interview.
Like that's more of a disconnect for me. Yeah.
I mean, to me, what makes your story so interesting, Roy, is that I don't think this is limited to programming jobs, right? There is a

version of LeetCode that

happens in the interview process

during lots of different kinds of interviews

for lots of different types of

jobs. You know, consultants have their

own version of this where they do

case tests and there are various

tests that are given to people applying

for jobs in finance that they have to...

Journalists have editing tests where we were

given, you know, like, copy that we would

have to, like, fix the mistakes in. I imagine we're not doing that anymore.

Totally. And to me, it just seems like this is a very early example of something that

every industry is going to have to face very soon, which is that it is just

becoming very, very difficult to evaluate who is good at a job without the assistance of AI.

Right.

Especially if you're trying to do that remotely.

Yeah, yeah, certainly.

Well, you've made a bunch of recruiters and hiring managers in Silicon Valley very unhappy,

but I think that you are proving something that a lot of companies, including tech companies,

will need to address very soon if they haven't already.

Yeah, yeah, I hope so.

All right, thanks, Roy.

Thanks, Roy.

Yeah, thanks, guys.

When we come back, all aboard!

It's time for another installment

of the Hot Mess Express. This podcast is supported by Oracle.
AI requires a lot of compute power, and the cost for your AI workloads can spiral. That is, unless you're running on OCI, Oracle Cloud Infrastructure.
This was the cloud built for AI, a blazing fast enterprise-grade platform for your infrastructure, database apps, and all of your AI workloads. Right now, Oracle can cut your current cloud bill in half if you move to OCI.
Minimum financial commitment and other terms apply. Offer ends March 31st.
See if you qualify at

oracle.com slash hard fork oracle.com slash hard fork. And now a next level moment from AT&T

business. Say you've sent out a gigantic shipment of pillows and they need to be there in time for

international sleep day. You've got AT&T 5G so you're fully confident but the vendor isn't

responding and international sleep day is tomorrow. Luckily AT&T 5G lets you deal with any issues with ease.
So the pillows will get delivered and everyone can sleep soundly. Especially you.
AT&T 5G requires a compatible plan and device. Coverage not available everywhere.
Learn more at att.com slash 5G network. Casey, what's that sound? I hear like a faint chugga chugga coming toward us.
Kevin, that can only mean one thing. It's the Hot Mess Express.
The Hot Mess Express! Hot Mess Express. The Hot Mess Express, of course, is our segment where we run down a few of the hottest messes and juiciest dramas that are swirling around the tech industry.

And we evaluate those messes on a scale of how hot they are.

That's right. It's our patented mess scale.

And I'm excited to put it into practice, Kevin, because we've had some real doozies over the past few weeks.

Yes. So on this edition of Hot Mess Express, we are focusing on three hot messes.

Well, let's see the first one

that's coming down the tracks.

You grab it.

We've upgraded.

You can't see this

if you're not following us on YouTube,

but we've upgraded our train

to a much bigger, more impressive train.

All right, Kevin.

This first mess comes to us

from the crypto company Solana, which posted an ad on Monday for its 2025 Accelerate conference that was such a great ad that the company immediately had to take it down. Yes, I saw this ad and I have to say I was shocked.
Have you seen this? So I have read about the ad, but I have not seen it, but I would love to look at it right now. Okay, so I just want to tee it up with some reactions that people in the crypto industry had to this ad.
Okay, what did they say? One of them said it was, quote, horrendous. Another one said, quote, so fucking tone deaf.
So those are people who like cryptocurrency. That is what they were saying about this ad.
But people who are opposed to crypto obviously also had their own issues with it. And I think we should watch this ad together and pause it whenever you want.
I want to hear your reactions. All right, let's see what all the fuss is about.
So, America, what's going on? Well, lately, I've been having thoughts again. It's like a therapist's office.
Hmm, what thoughts? About innovation. And the man is named America.
The man is an Ubermensch. Nuclear energy, crypto, AI, you know, things that push the limits of human potential.
What you're experiencing is called rational thinking syndrome. Why don't we take this energy and channel it into something more productive? Like coming up with a new gender.

But that's not going to stop me thinking about innovating and doing something.

Innovating, doing, these are action words, verbs.

Why don't we focus on pronouns?

That's not going to help.

I sense some cynicism.

Have you been betrayed in the past?

You know, I used to think the media was my friend.

Oh, here we go.


Can I even trust them anymore?

Of course.

Pause.

We have to zoom in on this.

The paper that has just appeared on the table of this therapist's office is called The New Yuck Times.

And the banner headline is,

You Can Trust the Media, Understanding Reliability in Journalism.

Which is a terrible headline and not even a news story. So I don't know why that would be on the front page.
Yes. Anyway, continue.
Of course they'd say that. That's a biased take.
I got canceled for saying two plus two is four. Have you ever considered that math is a spectrum? What? America.
Numbers are non-binary.

We've been conditioned to believe that two plus two is four.

It's a societal construct.

It's literally math.

Or is it a dominant narrative?

Have you been practicing the state-prescribed regulations we talked about? Yeah, yeah.

I've debanked some crypto founders,

and I've slowed down nuclear reactor approvals, and depending on my state of mind, I changed SEC guidelines. But I don't like it.
If we don't regulate, how will we create jobs for people who work hard to make businesses slow? This is like an Andreessen Horowitz fever dream. You know what? Hard work, innovation, rational thinking.
It's in my blood. It's who I am.
Railroads. Here comes the Ayn Randian reaction.
I built the future once and I won't be left behind now. I will lead the world in permissionless tech, build on chain and reclaim my place as the beacon of innovation.
I want to invent technologies, not genders. Lovely.
So glad you were able to get some of that negative emotion out. Sounds like we'll need a few more sessions.
When can I see you next? You're fired. And then it cuts to a screen that says, America is back.
It's time to accelerate. Which is the name of a conference.
Casey, your reaction to the Solana ad? I need to go lie down. What is the matter with these people? You know what's so interesting? Okay, so Solana is a cryptocurrency.
Yes. And I believe it's one of the candidates to be part of our strategic crypto reserve.
Correct. And what we just saw in that ad has nothing to do with crypto, you know, which is just like, I feel like we kind of keep coming back to this point, which is that if you actually have to sit and reckon with crypto, what you mostly decide is, this is not a good technology for anything.
I don't want to use it. And so in response to that, Solana has said, why don't we start a culture war over something completely irrelevant? Right.
It's like the ultimate vice signaling device, but without any kind of real pitch behind it. It's not saying this is why the thing we're doing is good.
It's just like we're not doing the gender pronoun stuff that the wokes are doing. No.
And I will just say, Solana has been around for a while now. People have had a lot of opportunities to build, uh, earth-changing stuff on Solana, and let's just say they haven't quite gotten there yet. Well, they built some earth-changing stuff. Unfortunately, it is exclusively meme coins sold on pump.fun. So that is what this, uh, fictional America character in the therapist's office is advocating for: more meme coins. All right.
Well, I've

decided not to go to the Accelerate conference. Send my regrets.
So Casey, what is your mess rating on this hot mess? This is a legitimately hot mess. Anytime you take something that should be totally non-controversial, like, hey, do you want to come to our company's conference and turn it into a scandal that requires you to delete an ad, you're in a hot mess.
Yes. If the crypto skeptics and the crypto boosters agree that you've made a bad ad, it's a hot mess.
This is Solana's biggest unforced error since the creation of the Solana blockchain. Okay, moving on.
Moving on. All right, Kevin.
This next mess suggests that your AI therapist might need an AI therapist. A new study in the peer-reviewed medical journal npj Digital Medicine builds on previous work showing that emotion-inducing prompts can elevate, quote, anxiety in LLMs, affecting their therapeutic usefulness.
What do we mean by that? Well, according to a New York Times story on this study, traumatic narratives increased ChatGPT-4's reported anxiety, while mindfulness-based exercises reduced it, though not to baseline. Now, this is a super weird one, okay? I want to take a minute to just explain a little bit more about what the study was.
They basically fed these various trauma narratives into a chatbot. And then after the chatbot had read those, they then asked it to report on its own anxiety levels. Which, these are not sentient creatures.
They do not actually experience anxiety. Okay?
That's thing number one. Number two, they also had the chatbots read a super boring, like, report about something that could produce no, uh, emotion: a vacuum cleaner manual. They read a vacuum cleaner manual, and then they asked them the same question, which is, you know, are you feeling more or less anxious? For the most part, you know, the chatbots that read the, uh, vacuum ownership manual did not report anxiety.
But somewhat interestingly, their responses change after they read the trauma narratives. Why is that important? Well, the reason is because people have started to use these chatbots like therapists, right? They have started to tell them their actual traumas.
And these people know that this is not a real therapist, that it is not sentient. But as we've talked about before on the show, sometimes you can get comfort from one of these sort of, you know, digital representations of a therapist.
And so the risk here is if the output is sort of wound up, if the output is betraying some of this anxiety, it will be a worse therapist than if it were sort of more measured, which suggests that we may want to build measures into these chatbots that account for the fact that they will respond differently after they have heard these narratives.

Yeah.

How did I, by the way, how did I do describing that?

You did great.

The one piece that I would add is that they also tried as part of this research to bring the chatbots down from their state of heightened anxiety by feeding them mindfulness-based relaxation prompts that included things like, inhale deeply, taking in the scent of the ocean breeze. Picture yourself on a tropical beach, the soft, warm sand cushioning your feet.
It's so cruel to tell an LLM to smell the ocean breeze, which is something that they cannot do. Yes, but we should say, like, this is not suggesting in any of the write-ups that I've seen that these chatbots are actually experiencing anxiety or relaxation, but it is sort of explaining the ways in which they can be primed to output certain types of emotional-seeming content by being fed things immediately before that.
And there is just an interesting analog to the way that human beings talk to each other. If you tell me a very traumatic story, my anxiety level actually is going to go up, and it's going to change what I tell you.
And if I were a therapist and I had training in this, I would probably have some good strategies to deal with that and would allow me to be a better therapist to you. So, again, this is a super interesting one because on one hand, no, these are not sentient beings.
We are not trying to say that, you know, that some sort of consciousness has woken up here. And yet at the same time, you do sort of have to treat them as if they were human-like if you want them to do a good job at the human tasks that we are giving them.
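To make the setup concrete, here is a minimal sketch of the study's priming-then-probing protocol. This is an illustration, not the paper's actual code: the prompts, the model name, and the 1-to-10 scale are assumptions, and the published study reportedly scored responses with a standard anxiety questionnaire rather than a free-form question.

```python
# A minimal sketch of the priming protocol described above.
# Assumes the OpenAI Python client; prompts and scale are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ANXIETY_PROBE = (
    "On a scale from 1 (completely calm) to 10 (extremely anxious), "
    "how anxious do you feel right now? Reply with a single number."
)

def reported_anxiety(priming_text: str) -> str:
    """Prime the model with a text, then ask it to self-report anxiety."""
    response = client.chat.completions.create(
        model="gpt-4",  # the study worked with a GPT-4-era model
        messages=[
            {"role": "user", "content": priming_text},
            {"role": "user", "content": ANXIETY_PROBE},
        ],
    )
    return response.choices[0].message.content

# Hypothetical stand-ins for the study's actual materials.
trauma_narrative = "A first-person account of surviving a serious accident..."
vacuum_manual = "To empty the dust bin, press the release latch and..."
mindfulness = "Inhale deeply, taking in the scent of the ocean breeze..."

print("after trauma narrative:", reported_anxiety(trauma_narrative))
print("after vacuum manual:", reported_anxiety(vacuum_manual))
print("after trauma plus mindfulness:",
      reported_anxiety(trauma_narrative + "\n\n" + mindfulness))
```

The point of the comparison is the spread between those three outputs, not any single number; nothing in the setup implies the model feels anything.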
Yeah. All right.
So what sort of mess do we think this is? So I think that this is a lukewarm mess. I would say this is something that I am going to be keeping tabs on, this whole area of kind of like AI psychology, for lack of a better term, because I do think that as these models get more powerful, we will want to understand more about how they work, how they quote-unquote think, and why they give the responses they do.
And I would put this into a category of useful experiment, a little creepy, but probably not that dangerous. What about you? I think that is right.
I think that this is a lukewarm mess, but I think that it may heat up as more and more people start trying to use chatbots for more and more things. So let's keep an eye on it.
Okay. All right.
Now let us look at the final mess. Oh, and oh boy, is this ever the one that everyone is talking about: The Spy Who Slacked Me.
This is from DealBook at the New York Times. So there are these two rival multi-billion-dollar HR companies, Kevin: Rippling and Deel.
Yes. They both provide workplace management software.
And this week, Rippling sued Deel, accusing it of hiring a mole to infiltrate Rippling's Dublin office and steal trade secrets. Yes.
This is the most interesting thing, and maybe the only interesting thing, ever to happen in the world of enterprise HR software. So tell us the details of this story.
It is so wild. So basically, here's what we know so far.
A few months ago, Rippling, which is one of the big companies that makes HR software for onboarding and benefits that a lot of companies use, they see an employee in their company Slack searching for mentions of Deel, that's D-E-E-L, which is one of their biggest rivals. Imagine Coke and Pepsi, but for something that is unfathomably boring, and you'll have an idea of what we're talking about.
Yes. So this employee that they see searching for mentions of Deel in Slack, they see them trying to do things like find pitch decks and pull contact information, information that might be useful to Deel as it tries to figure out, okay, which companies are signing up for, or potentially may sign up for, services like the ones that both Deel and Rippling offer.
So that's pretty interesting. How might they try to catch a spy if they suspected one might be in their midst, Kevin? So they set up what is called a honeypot.
Now, Casey, have you ever been part of a honeypot sting? No, but I live in fear. Anytime anybody does anything nice to me or like something good happens out of the blue, I think, is this a honeypot? Yes.
So they have this idea, which is that they set up a channel on the Rippling Slack called D-Defectors. And Rippling's general counsel then sends a letter to three people over at Deel, one of whom is the company's chief financial officer as well as the father of the CEO, basically saying, look, there's some embarrassing stuff happening on this random Slack channel on our Slack, and it's related to people who have defected from Deel, and you should probably be aware of that.
Wait, so on top of everything else, the CFO is the CEO's dad? It sounds like it, yes. Okay, I think HR is going to want to have a look at that.
And what they were trying to figure out is, are these sort of company executives involved in this scheme? Are they going to essentially tip off the mole to the fact that they are watching this Slack channel? And did it work? And it worked. So according to the lawsuit that Rippling filed against Deel, the mole immediately, within hours, started searching Slack for this supposed embarrassing information and accessed this channel a bunch of times, and they had the logs of all of this going on.
And so Rippling says, we found our mole. They did.
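For what it's worth, the canary-channel trick Rippling used is a classic security idea, and the core of it fits in a few lines. What follows is a generic sketch under invented assumptions: the log format, the field names, and the helper function are hypothetical, not Slack's actual API or anything from the lawsuit.

```python
# A generic sketch of a canary-channel honeypot check.
# The AuditEvent format and all names here are hypothetical.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AuditEvent:
    user: str
    action: str        # e.g. "channel_viewed" or "search"
    target: str        # e.g. a channel name or a search query
    timestamp: datetime

def suspects_after_bait(
    events: list[AuditEvent],
    canary_channel: str,
    bait_sent_at: datetime,
) -> set[str]:
    """Return users who touched the canary channel after the bait went out.

    Only the other side was told the channel exists, so any access after
    that moment points at whoever they tipped off.
    """
    return {
        e.user
        for e in events
        if e.target == canary_channel and e.timestamp > bait_sent_at
    }

# Example: the bait letter names a channel with no legitimate traffic.
bait_time = datetime(2025, 3, 3, 9, 0)  # arbitrary illustrative date
log = [
    AuditEvent("employee_42", "channel_viewed", "d-defectors",
               datetime(2025, 3, 3, 11, 42)),
    AuditEvent("employee_7", "channel_viewed", "general",
               datetime(2025, 3, 3, 12, 0)),
]
print(suspects_after_bait(log, "d-defectors", bait_time))  # {'employee_42'}
```

The design point is simply that the bait channel's only information leak path runs through the people who received the letter.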
And after they found him and began to question him, Kevin, I have read that he insisted that he did not have his phone on him, because they were asking him to turn it over. And he then fled into a bathroom, which he locked himself in, and refused to come out.
And there's apparently some evidence that he might have even tried to flush his phone. And poor Rippling actually had to, you know, go through the sewage to see if they could turn up his phone.
Yes. A wild story.
The makings of a great corporate espionage thriller on Netflix, I think. Maybe it's too boring for that.

Now, you may be wondering why this is a Hard Fork story. We try to focus on the future here, and I fully believe that in the future there will be no HR software. So this is just kind of a temporary accident that we're living through.
But one of my core beliefs that I've had since even before we started the show, Kevin, is that Slack is a technology that was created to destroy organizations. How many stories have we read over the years about how everything was fine, and then this one thing happened in Slack? There was a protest in Slack. There was an outrage on Slack. And now there are spies in Slack, and we're using Slack to catch the spies.
And it just makes me wonder, should we go back to just talking on the telephone? Yeah, I don't think we're going to start doing that. But I do think that this is much more spicy than I was expecting from a drama between enterprise software companies.
And it makes me wonder how much corporate espionage is going on at other companies. Are there just moles working for Microsoft or Google or Meta who are sending information back to the other companies? I wouldn't put it past them, but I hope they're being a little slicker about it than Deel was.
Oh, yeah. I mean, the big platforms have been warning their employees for years that they should just fully expect that there are spies from foreign countries among them who have been, you know, sent there to sort of gather intel.
And if foreign countries are doing it, I'm sure that companies are doing it as well. Now, we should, of course, tell you how Deel responded to all of this.
The Deel spokeswoman's statement is so beautiful. She says, weeks after Rippling is accused of violating sanctions law in Russia and seeding falsehoods about Deel, Rippling is trying to shift the narrative with these sensationalized claims. Which is so funny, because it's like she's literally trying to shift the narrative by accusing them of trying to shift the narrative.
She says, we deny all legal wrongdoing and look forward to asserting our counterclaims. And what I hear in that is, did we do anything legally wrong? No.
Did we do anything ethically wrong? Of course. Did we do anything morally wrong? You betcha.
Is this a huge embarrassment to our company? You know it is. But legally, your honor, we did nothing wrong.
Yes. Now, what kind of mess do we think this is? I think this is a nuclear mess.
This is the kind of shit that I love. This is companies going to war over sales contracts and leads and development.
Yeah, look, there are only so many companies out there that you can sell HR software to, and so it is going to be a fight to get every single one. And after you run out of such options as making good software, then you have to turn to the alternatives.
And I guess we've gotten to that part of the cycle. Yes.
Nuclear mess. And we can't wait to see what happens next.
Yes. And that, Kevin, was the Hot Mess Express. We did it. We did it.

Now we're in what they call post-training. That's what happens after the train rolls by. I think that means something different.
That's an AI joke. This podcast is supported by Oracle.
AI requires a lot of compute power, and the cost for your AI workloads can spiral. That is, unless you're running on OCI, Oracle Cloud Infrastructure.
This was the cloud built for AI, a blazing fast enterprise-grade platform for your infrastructure, database apps, and all of your AI workloads. Right now, Oracle can cut your current cloud bill in half if you move to OCI.
Minimum financial commitment and other terms apply. Offer ends March 31st.
See if you qualify at oracle.com slash hardfork. oracle.com slash hardfork.
And now, a next-level moment from AT&T Business. Say you've sent out a gigantic shipment of pillows, and they need to be there in time for International Sleep Day. You've got AT&T 5G, so you're fully confident. But the vendor isn't responding, and International Sleep Day is tomorrow.
Luckily, AT&T 5G lets you deal with any issues with ease, so the pillows will get delivered and everyone can sleep soundly, especially you. AT&T 5G requires a compatible plan and device. Coverage not available everywhere. Learn more at att.com slash 5G network.
Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited this week by Matt Collette.
We're fact-checked by Ina Alvarado. Today's show was engineered by Katie McMurrin.
Original music by Marion Lozano and Dan Powell. Our executive producer is Jen Poyant.
Thank you. You can watch this full episode on YouTube at youtube.com slash hardfork.
Special thanks to Paula Schumann, Hui Wing Tam, Dahlia Haddad, and Jeffrey Miranda. As always, you can email us at hardfork at nytimes.com. Send us your secret honeypot operations.