DOGE's Website, Hacked

39m
This week we start with Jason's story about anyone being able to push updates to the DOGE.gov website. Then we talk about other stories about the DEI.gov and Waste.gov sites. After the break, Sam tells us all about some lawyers who got caught using AI in a case. In the subscribers-only section, we chat about a true crime documentary YouTube channel where the murders were all AI-generated.

YouTube version: https://youtu.be/wqjeDFk9LMo

Anyone Can Push Updates to the DOGE.gov Website

Researcher Captures Contents of ‘DEI.gov’ Before It Was Hidden Behind a Password

Elon Musk's Waste.gov Is Just a WordPress Theme Placeholder Page

Lawyers Caught Citing AI-Hallucinated Cases Call It a 'Cautionary Tale'

A ‘True Crime’ Documentary Series Has Millions of Views. The Murders Are All AI-Generated

Subscribe at 404media.co for bonus content.
Learn more about your ad choices. Visit megaphone.fm/adchoices


Transcript

Charlie Sheen is an icon of decadence.

I lit the fuse and my life turns into everything it wasn't supposed to be.

He's going the distance.

He was the highest paid TV star of all time.

When it started to change, it was quick.

He kept saying, no, no, no, I'm in the hospital now, but next week I'll be ready for the show.

Now, Charlie's sober.

He's going to tell you the truth.

How do I present this with any class?

I think we're past that, Charlie.

We're past that, yeah.

Somebody call action.

AKA Charlie Sheen, only on Netflix, September 10th.

In today's world, data breaches happen all the time, and even the most secure companies can't always protect their employees' personal information from ending up in the wrong hands.

That's where DeleteMe comes in.

DeleteMe is a service that removes your employees' sensitive information from hundreds of data broker websites.

Sites where hackers can find phone numbers and emails within seconds.

Rachel Tobac, CEO of SocialProof Security, says attackers use this data to target employees with phishing messages and AI-powered phone scams.

But DeleteMe makes it harder for these bad actors by scrubbing your employees' details regularly.

It's simple.

Attackers are lazy.

If it's too hard to find contact info, they'll move on to easier targets.

DeleteMe takes care of this for you, doing the heavy lifting so you don't have to.

And over time, they keep removing the information so it stays down, protecting your team from constant exposure.

If your business has a social presence or deals with clients, you need DeleteMe.

Visit deleteme.com slash 404media and start safeguarding your team's information today.

That's deleteme.com slash 404media.

Hello and welcome to the 404 Media Podcast, where we bring you unparalleled access to hidden worlds both online and IRL.

404 Media is a journalist-founded company and needs your support.

To subscribe, go to 404media.co, where you'll also get bonus content every single week.

Subscribers also get access to additional episodes where we respond to their best comments.

Gain access to that content at 404media.co.

I'm your host, Joseph, and with me are the 404 Media co-founders: Sam Cole.

Hello.

Emanuel Maiberg.

Hello.

And Jason Koebler.

Hey, what's up?

Definitely not a robot.

Yeah, we're definitely not a robot.

We had some audio issues.

I guess we'll see if you turn into a robot halfway through and we'll deal with it as it comes.

But right now, let's talk about the first story.

And it is one that Jason wrote.

Anyone can push updates to the doge.gov website.

There is some context to lay out here, but I think the funniest place to start is just: some people, you know, defaced the DOGE website. What did they write on it, Jason? And then we'll get into the how and the why, etc., but I think what they did first is probably interesting.

Yeah, so the doge.com website is...

Dot gov.

Oh yes, dot gov. Dot gov, yes. It is a website that didn't exist, like, at the beginning of last week. And Elon Musk went in front of, you know, he did that interview with media in the Oval Office behind the Resolute Desk with Donald Trump and his son.

And he got asked a question about transparency of what Doge is doing.

And he said, we're the most transparent organization in the history of mankind, something like this.

And then

he was like, just go to doge.gov to see what we're doing.

And if you went to doge.gov, there was nothing there.

It was just a blank website.

And then the next day it was updated to have just a stream of X posts from Doge.gov.

And then it was updated again to have this database of supposed cuts and structure of the government and things like that.

Yeah, you could click through and it would be like, oh, here's the army or whatever.

And it has this number of employees and this is the average salary they get and the average age of an employee, right?

Yeah.

And so some of these pages were defaced to read, quote, these 'experts', with 'experts' in its own quote marks, left their database open.

And then another one said, this is a joke of a .gov website.

And then I've seen a third one that said, this .gov is hosted on insecure Cloudflare Pages, which happened over the weekend.

So

that's pretty funny.

That happens.

It gets defaced, like a lot of websites do.

You know, there's always people trying to deface government websites or corporate websites or whatever it might be.

This one's like a little bit different

through the way they did it.

And I mean, I don't think we have to get too technical, but like, what was the issue here?

Like, was it, you know, a fancy vulnerability or was there something exposed?

Like, how was this being defaced?

Yeah, I found it to be pretty interesting.

I'm not a web developer.

So some of the technical details might be a little bit off here, but I spoke to two different web developers who independently and separately found this vulnerability and then messaged me about it within about an hour of each other.

And then some other folks sort of verified it after.

But basically,

like doge.gov is not hosted on doge.gov.

It's not hosted on a government server.

It's hosted on Cloudflare, which is an internet infrastructure company that does work with the federal government, but it has like a special

program for the federal government.

But it was hosted on a Cloudflare pages website.

And so essentially the page itself was not doge.gov.

It was some Cloudflare URL.

It was like a long string of URL.

And we shouldn't say what it was, but basically like these web developers

inspected the source code, found out where the page was actually being hosted, saw that it was just like a random Cloudflare page where code had been deployed to from like a GitHub or like some sort of code repository.

The database that was like powering this website was deploying to this Cloudflare page, and then that Cloudflare page was pointing to doge.gov.

Now, that's a little bit complicated, but basically it's where it's pulling the data from.

Yeah, from the Cloudflare pages, yeah.

And it was pulling the data from the specific databases that were on the Cloudflare page, and they were able to essentially find the API endpoints for these databases, which were left exposed, meaning they were able to find out where the database was pulling from, and they were able to push their own database records to the database that were then reflected on the live page.

So, like the TLDR is that they were able to edit

the database that was powering the live page by being able to push new entries to it.
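To make the shape of the flaw concrete, here is a hypothetical sketch of the pattern the developers described: a write endpoint that accepts inserts without any authentication check, feeding a table that the live page then renders. All names and entries here are invented for illustration; the actual DOGE.gov code was never published.

```python
# Hypothetical sketch of the class of flaw described above: an insert
# endpoint that never checks who is calling, so anyone who finds it can
# push records that the live page renders. Purely illustrative.

class UnauthenticatedTable:
    """An API-backed table whose write endpoint requires no credentials."""

    def __init__(self):
        self.rows = []

    def handle_post(self, record: dict) -> dict:
        # The missing step: no API key, session, or origin check here
        # before accepting the write.
        self.rows.append(record)
        return {"status": 201}

    def render_live_page(self) -> list:
        # The front end displays whatever the table contains.
        return [row["title"] for row in self.rows]

table = UnauthenticatedTable()
table.handle_post({"title": "Department of the Army"})         # a real-looking entry
table.handle_post({"title": "this is a joke of a .gov site"})  # anyone can add this
print(table.render_live_page())
```

The fix is equally simple in outline: the write path has to verify a credential before appending, which is exactly the step the exposed endpoints apparently skipped.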

I asked if they were able to edit like existing entries, meaning could they fuck with the data that's actually on the website?

And

neither one

said that they had tried.

And we are not allowed to ask people to go poking around government websites.

And so it's sort of unknown whether they could have done more damage than this sort of defacement that they did.

Yeah, to be clear, for legal reasons, we never ask people to go do this, but if two people independently find it and tell us about it, and I think only one of them actually did the defacement, right?

If one of them decides of their own volition to go and do that, well,

thank you.

That helps verify, but we're never going to ask you to do that, obviously.

So I do think I can talk a little bit about like how this came to be because it's not super sensitive.

But basically, I like

got a message with a link to doge.gov

that went directly to the page that had already been fucked with.

And so

that was them proving like, hello, I've already done this.

And then I talked to them.

I was like, well, how did you do it?

And they explained.

And then that was the same vulnerability that a different person had discovered.

And I asked that other person, like, have you modified anything?

And they said, no, because that's probably a crime, could be a crime.

And I don't want to do that.

But if I wanted to, I certainly could, because they found like the same thing that the other person did.

Yeah, it's pretty interesting verification.

It reminds me of some other cases I've had where

somebody broke into, I think, a stalkerware company, you know, this malware that abusive partners will put on people's phones and that sort of thing.

And they'd actually managed to get a ton of data from the company.

But one way they wanted to prove their access was by also doing a defacement.

And if I recall correctly, what they did was they defaced it.

They put my name on it, which I don't know.

Okay, thanks.

And then they pushed it to the Wayback Machine.

It was archived.

So I could go back and be like, oh, yesterday,

somebody, presumably these hackers, put my name onto this website.

So it is

useful for purely journalistic purposes when somebody sends you a link like that, for sure.
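The timestamping step Joseph describes relies on the Internet Archive's public "Save Page Now" endpoint, which captures a snapshot when you request the save URL for a target page. A minimal sketch (the .gov page below is made up):

```python
# Sketch: asking the Wayback Machine to snapshot a page, which is how a
# defacement can be timestamped as proof of when it was live.
# "https://web.archive.org/save/" is the Internet Archive's public
# Save Page Now endpoint; the target URL is illustrative.

SAVE_ENDPOINT = "https://web.archive.org/save/"

def save_page_now_url(target: str) -> str:
    """Build the URL that, when requested, triggers an archive capture."""
    return SAVE_ENDPOINT + target

# Requesting this URL (in a browser or with urllib) asks the archive to
# capture the target page at the current time.
print(save_page_now_url("https://example.gov/defaced-page"))
```

Once captured, the snapshot carries the archive's own timestamp, which is what makes it useful as independent verification.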

I would say,

maybe not ordinarily, but a lot of the time, a defacement just like wouldn't be a story.

There's a hacker who pings me every so often.

He's like, hey, look, I defaced Biden's website.

And it's like, I don't care.

Like, this doesn't matter.

That was obviously, you know, last year or something like that.

This one is different

probably

for multiple reasons.

I mean, why do you think it's important?

And maybe, what does it show us that the doge.gov website, which is supposed to be targeting fraud and waste and abuse in government and making it much more efficient... does it really tell us that, like, even their website was apparently held together by, you know, digital string, basically?

Yeah, I think there's a few reasons that it's interesting.

I think one,

the way that they did the defacement was kind of interesting to me, where they were able to push to this specific database, meaning, you know, potentially other sorts of information on the website could have been, could be changed in some way.

But I think more importantly, it shows that this group of coders who is going into every government agency and asking to examine source code, to get access to really sensitive systems, to get access to systems that they probably don't understand because they run on COBOL or they run on like old mainframes, things like that, seemingly was unable to push like a very simple website without

having these very basic vulnerabilities included in it.

One thing I also thought that was very interesting was

usually when a website is defaced and then there is an article about it, it's fixed very quickly, like within minutes often.

This was up for something like 18 hours, these defacements.

So that suggests to me that they had trouble finding them or they just weren't paying attention.

Like I'm not sure because it got a lot of

attention online.

And then

the other one that happened over the weekend suggests that they didn't close whatever vulnerability was allowing this to happen.

And that one was still up as of the time I checked

before this podcast.

And so they still haven't fixed it, which is pretty concerning.

I mean, maybe they don't care.

Maybe they ironically don't have the resources to do it.

I don't know.

It's impossible to know, really.

I think also,

like, right after we published our article, the Huffington Post published an article about how

Doge had published classified information about the staff makeup, like the number of employees at a specific government agency that, you know, is so secretive that the number of people who work there is classified.

And so that just suggests that they are not taking care when developing something like this.

Yeah.

Sam, you had a story that was somewhat related.

And it was researcher captures the contents of DEI.gov before it was hidden behind a password.

We're going to talk about another Jason story in this segment as well, but just briefly,

what's the deal there?

They put a password on DEI.gov, but before that, what was exposed or available or what's going on?

Yeah, so it was left un-password-protected for, like, let me see, a maximum of 30 minutes, is what this researcher who found this told me based on his scraping and archiving of the site.
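As a rough illustration, an automatic capture tool of the kind the researcher describes can be as simple as a polling loop that writes timestamped snapshots to disk. This is a hypothetical sketch; the watchlist URL, interval, and file naming are invented, not the researcher's actual setup.

```python
# Hypothetical sketch of an automatic page archiver like the one the
# researcher described; all URLs and paths here are illustrative.
import time
import urllib.request
from datetime import datetime, timezone
from pathlib import Path

WATCHLIST = ["https://example.gov/"]  # pages to monitor (made up)
ARCHIVE_DIR = Path("snapshots")

def snapshot_name(url: str, stamp: str) -> str:
    """Turn a URL and UTC timestamp into a flat, filesystem-safe filename."""
    return url.replace("https://", "").replace("/", "_") + f"_{stamp}.html"

def snapshot(url: str) -> Path:
    """Fetch one page and write its raw HTML to a timestamped file."""
    ARCHIVE_DIR.mkdir(exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = ARCHIVE_DIR / snapshot_name(url, stamp)
    with urllib.request.urlopen(url, timeout=30) as resp:
        path.write_bytes(resp.read())
    return path

def watch(interval_seconds: int = 600) -> None:
    """Poll every watched page on a fixed interval, forever."""
    while True:
        for url in WATCHLIST:
            try:
                snapshot(url)
            except OSError:
                pass  # page may be down or newly password-protected
        time.sleep(interval_seconds)
```

A loop like this is why even a 30-minute exposure window can end up permanently captured.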

So it was up for 30 minutes without a password, and he had been running an app that was capturing government websites, like, automatically, so it grabbed it in those 30 minutes, which is so crazy. And while it was up and, you know, exposed to the web like that, it had this long list of, I don't know, I mean, you know, it's like quote-unquote waste.

It's like what they're trying to track or whatever through Doge or what Elon says that he's trying to do.

And it was like, I couldn't even include all of it in the story, but the, what's in the story is really long.

So people should go check it out.

But it's like just a laundry list of random shit that like they are claiming is wasteful use of federal funds.

So it's things like $3.4 million for a Malaysian drug-fueled gay sex app.

No source on that, no reference to where they got that from.

$15,000 to queer Muslim writers in India.

It's like.

you know, it's just this random stuff that they're claiming.

You know, it's like $1.3 million to Arab and Jewish photographers.

Are they American Arab and Jewish photographers?

We don't know.

And more broadly, they keep making mistakes, basically.

Like I saw there was

a bit of confusion between 8 million and 8 billion because they missed a decimal place.

Yeah, kind of important.

That's a big difference.

So yeah, it was up.

And then they immediately gave it a WordPress template and that kind of hid all of that information.

And then that's kind of where the story comes in that Jason's talking about, where then people wrote about it being defaced. We wrote about it being a WordPress site, a WordPress template that looked, you know, random and generic. And then they were like, oh, and then they put it behind a password.

Yeah.

All three of these stories are very closely related because they're all new websites that have been spun up to track, yeah, like quote unquote government waste and also Doge's efforts to

cut things.

And so the types of things that were captured on

DEI.gov that Sam wrote about are some of the things that have now shown up on the Doge website as part of like the Twitter stream and things like that.

So

I guess I'll just jump into that, the third story very quickly, which is Elon Musk's waste.gov is just a WordPress theme placeholder page.

And so the three websites are doge.gov, dei.gov, and waste.gov.

And DEI.gov and waste.gov were both registered about a week after Donald Trump was inaugurated.

And there was never anything on waste.gov, to my knowledge, but Sam spoke to this researcher, who captured that information on DEI.gov.

And then I went to waste.gov one day, and all of the information there was about an imaginary architecture firm that was pulling directly from just like a WordPress template.

Yeah, it was clear that it was like

some sort of default landing page, essentially, and I don't think that's really what you're going to expect when you go to an alleged official government

website.

So what do you see?

Is it just like pictures of this made-up architecture firm or something?

I mean, it's like, if you register for any website ever, you can usually click through different themes.

And then the person who makes the theme, which is just like the layout of how the website is going to look, will try to demonstrate the features of that theme.

And they do it with placeholder language.

So in this case, waste.gov said, quote, a commitment to innovation and sustainability. Etude, which, like, E-T-U-D-E, the French word, is a pioneering firm that seamlessly merges creativity and functionality to redefine architectural excellence.

It's funny because the placeholder language for this imaginary architecture firm violates

Trump's executive order against DEI because it talks about how this imaginary architecture firm cares about diversity and cares about sustainability.

Right.

And it was live on a government website, which is in violation of the executive order, which is pretty funny.

Right.

It was too, it was too inclusive.

And then, I mean, did they realize their mistake and now the website is dead or locked?

Or like, what happened after, if anything?

Yeah, I mean, immediately after we published that article, they put it behind a password wall.

And that's when they put DEI.gov behind a password wall as well.

So like sometime in between when I wrote this article and published this article, they briefly exposed what was supposed to be on DEI.gov.

Then that researcher scraped it, and then it went behind a password wall.

And both of those websites are still behind a password wall as we're recording this.

So it's unclear whether they're going to

use them in any way.

But if you go to those websites right now, it just says this content is password protected.

To view it, please enter your password below.

That is really transparent from the most transparent entity, agency, organization, or whatever it was that you said earlier.

Yeah, and I know this segment turned into the web development of Doge or whatever, but I think it just shows how haphazardly it's being rolled out.

I mean, with all of the chaos across the U.S.

federal government, with the actual actions they're taking of, you know, dramatically downsizing workforces and then getting rid of essential employees and having to ask them to come back, and all of that sort of thing.

They can't even like run a website properly.

You know,

it's not a great look, I'd say that.

All right, let's leave that there.

When we come back, we're going to be talking about AI and lawyers,

and a particular set of lawyers who basically got caught using AI to hallucinate a bunch of different cases.

We'll be right back after this.

Today's episode is sponsored by BetterHelp.

We spend so much time looking out for red flags, but what about green flags?

The signs that a relationship is actually working.

Things like open communication, emotional support, and actually talking through problems.

Therapy can help you recognize green flags.

BetterHelp isn't just about working through challenges.

It's also about learning what healthy relationships look like, practicing those behaviors, and even embodying that green flag energy yourself.

Because the more you understand what good looks like, the easier it is to find.

I think of therapy as a great partner to help navigate my own growth in relationships.

BetterHelp makes it easy.

It's entirely online, so you can connect with one of over 30,000 licensed therapists from anywhere.

With such a large network of therapists, you'll find a wide range of specialties and therapy approaches.

And if it isn't the right fit, you can switch therapists anytime at no extra cost.

Discover your relationship green flags with BetterHelp.

Visit betterhelp.com/404media today to get 10% off your first month.

That's BetterHelp, H-E-L-P dot com slash 404media.

Hackers and cyber criminals have always held this kind of special fascination.

Obviously, I can't tell you too much about what I do.

It's a game.

Who's the best hacker?

And I was like, well, this is child's play.

I'm Dina Temple-Raston.

And on the Click Here podcast, you'll meet them and the people trying to stop them.

We're not afraid of the attack.

We're afraid of the creativity and the intelligence of the human being behind them.

Click Here, stories about the people making and breaking our digital world.

AI, machines, satellites, and more.

Click here and listen.

Click here every Tuesday and Friday, wherever you get your podcasts.

And we are back.

This is one that Sam wrote.

Lawyers caught citing AI-hallucinated cases call it a, quote, cautionary tale.

Not entirely sure where to start with this one, Sam.

Maybe we just do it with what did the lawyers admit to doing?

And then I guess we'll get into the implications of all of that as well.

Yeah, I mean, that's a good place to start because I also kind of worked backwards from there.

This is an article that we did in collaboration with Court Watch, which is Seamus Hughes's independent newsletter-slash-outlet where he digs up court records, and sometimes he sends us, like, the interesting ones and says, do you want to write this up?

So, yeah, the pitch for this one was basically just like these lawyers

got called out for using AI in

a filing.

And now they're like, oh, this is a cautionary tale, which is the headline.

But

basically, they had used... they don't say what LLM, what chatbot they used, whether it was ChatGPT or some other one.

And there are a bunch of, like, you know, LLMs specific to legal uses at this point, like tools that are rolled out for lawyers to do research and to use AI to draft, you know, responses, ideally not straight copy-pasting, but that's what these people did.

So yeah, they said in a filing that, let's see, I'm going to quote it directly: Our internal artificial intelligence platform, quote, hallucinated, the cases in question while assisting our attorney in drafting the motion in limine.

I've probably said that wrong, but it's a legal phrase.

This matter comes as a great embarrassment and has prompted discussion and action regarding the training, implementation, and future use of artificial intelligence within our firm.

This serves as a cautionary tale for our firm and all firms as we enter this new age of artificial intelligence.

So I thought, oh, they're actually warning everybody, because it's a cautionary tale for all firms.

For everybody.

You know, they're like,

yeah.

They're like, oh, this, I mean, the implication is like, this could happen to anyone.

And we got caught doing it.

And it's like, oops, we're so sorry.

So they're.

They were just testing everyone.

They're like, oh, yeah, yeah, yeah, yeah.

Whenever I slip on a banana peel, I get up and I was like, let this be a cautionary tale for everyone.

Yeah.

Don't slip on the banana peel, everyone.

Right, exactly.

And it's like, this is, somebody fucked up majorly.

So

obviously at that point, I'm paying attention.

I'm like, you don't really see lawyers immediately apologizing profusely very often.

What had happened was they had cited, like they say in their groveling apology, that they had cited,

I think it was like eight out of nine cases that they had cited in this document as like,

you know, if you read court documents and

complaints and case filings, a lot of the times they'll cite other cases that are similar and be like, this is the precedent that we're standing on legally.

Like these are similar cases where, you know, cases like ours won in the past.

So you should, you know, grant us what we're looking for in our case.

But they had made up eight of the nine that they cited, just they didn't exist.

Like they don't exist anywhere.

And obviously the judge looked them up.

They were like... because, I mean, so the judge is who caught them?

Yeah, the judge caught them.

I mean, actually, I think it might have been opposing counsel who caught them and told the judge, like, hello, these are fake, they don't exist. And then the judge was like, well, what are you going to say for yourself? And the reason that they are apologizing immediately is that there actually is a precedent for this happening in the past.

Yeah, it's not the first time.

Yeah. And maybe people listening are already aware of that, I think, and we'll talk about that in a sec. I think that the reason, and you correct me if I'm wrong, but the reason you covered it here is: well, first of all, it's very funny.

Second of all, lawyers admitting they're wrong, let alone groveling, that's probably newsworthy in and of itself.

And third, yeah, it's like, it's one thing for lawyers to use AI, with all the problems it has.

It's like another thing to get caught so publicly.

I think actually, just before we talk about previous instances you brought up: what was this case about exactly?

Because this was like a pretty ordinary case involving Walmart or something?

It was, so it was actually, once I kind of worked backwards like I said and kind of dug back to the original complaint, the original lawsuit, um, it was filed in 2023 against Walmart and this company called Jetson Electric Bikes, and Jetson makes hoverboards for Walmart, um, or for sale at Walmart. And what had happened was the plaintiffs, so the people who are being represented by these lawyers who made this huge mistake with the fake cases,

they had bought a hoverboard, one of the Jetson hoverboards from Walmart.

And they're claiming that this, the battery, the lithium-ion battery in the hoverboard caught fire while they were asleep and burned their house down.

So it's a really horrifying case.

And like, it's pretty serious.

Like, we've talked about this, we've written about this in the past with, like, scooters.

It's like scooters have this problem a lot.

It's like people's houses burn down because

these batteries malfunction sometimes.

So they're claiming, you know, injury and, you know, severe like burns.

Obviously, it's like their house.

Like, it's just massive loss that they have from this, that they're claiming from this hoverboard company.

So that's what they're bringing.

And this case has been going on since 2023.

So it's been, you know, a drag-out thing.

The docket is long.

And the thing that gets, like, attention is that the lawyers made this huge fuck-up, which is kind of sad.

It's like 'look at my lawyers, bro' situations. Like, this is a mess.

Your house burnt down and your lawyers are like fucking around with ChatGPT or something.

I was going to say, we unfortunately have a lot of experience with lawyers, both hearing from lawyers representing other people who are mad at us.

And we have our own lawyers who are very good and represent us.

And I feel like 90% of the time, what you need a lawyer to do is to like know the law and know the case law and reference specific cases and write long explanations for why what you did is perfectly legal or for why what someone else did is not legal and the idea that you would pay a lawyer who is not cheap ever

to just, like, have ChatGPT do it and then not even double-check the output is so crazy.

Like, I feel like you would have to dump them immediately.

Like, imagine if we had some sort of libel letter coming at us and our lawyers just like ChatGPT generated a response.

We would be livid.

We would probably try to sue them ourselves.

Yeah.

Yeah.

With a second set of lawyers who then also use ChatGPT, unfortunately, or something.

So, I mean, that's a horrible case.

And it's even more horrible, as we all say, that the lawyers were using ChatGPT or, well, sorry, it's not specifically OpenAI.

We don't know what it is, but some sort of LLM.

But what are some of the other instances of lawyers doing this then, Sam?

Yeah, so the most recent one was in 2024 when it involved Michael Cohen,

who, I mean, we're not going to get into Michael Cohen's resume right now, but a very famous lawyer.

He and his own lawyer.

So we're talking about, like, again, like a stacking of Russian dolls.

Yeah, exactly.

They had generated fake cases with Google Bard,

and they weren't fined or anything.

But the judge let them off, um, and he called

the situation embarrassing for them.

I think it's probably just like shaking my damn head at Michael Cohen's situation.

But then in 2022, and this is kind of why I think they probably took it so seriously and apologized immediately.

It's embarrassing, and also you can get, like, seriously sanctioned and big fines if you are presenting fake stuff to the court.

Um, so in 2022, this man had filed an action against Avianca Airlines, saying that he was injured by a serving cart during a flight.

And his lawyer was citing non-existent cases.

But instead of that legal team apologizing immediately and being like, please, please don't sanction us,

they doubled down and they were like,

we can defend the reason that these cases are in the filing.

They thought they could get out of it, I guess.

And they were fined five grand for that error.

And the judge wrote... and that's where I was kind of like, okay, maybe that's, I don't know. I mean, maybe there were other repercussions that I don't know about, but it's also just, like, highly humiliating.

It's like the judge is like

reaming you in official court documents.

Like the judge was like, they abandoned their responsibilities.

You know, they

stand by these fake opinions after judicial orders called their existence into question.

It's like, I don't know.

I'm like secondhand humiliated just reading a judge be super mad at lawyers.

So yeah, I mean, it's something that I think is going to come up more and more, honestly.

I'm surprised it doesn't come up more often.

Um, because again, this work is really tedious that lawyers are doing a lot of the time to present their cases.

But

yeah, it's, I mean, maybe it does happen a lot and we just aren't hearing about it most of the time.

But

It's, I think you're right, it's going to happen more, because I literally just typed in, like, 'legal ChatGPT,' and all of these companies come up that are offering these tools, which, I mean, I don't know how many customers they have, but presumably they see a market there.

And obviously some people are using it.

Jason, you wanted to bring something up?

Yeah,

I'm going to do a whirlwind roundup of like stuff that Motherboard reported on about Robo Lawyers.

And then also

I wrote one article for The Atlantic in my life, and it was in 2017.

I don't know. It was a freelance piece that for some reason Motherboard let me do.

And it was called Rise of the Robo Lawyers.

And it was about all of these startups that were trying to automate law.

And

like,

like every other industry, the legal industry has been like, we can AI-ify that.

And that was all the way back in 2017 when things like this were happening.

So, like, LexisNexis, which is a database program, it's just massive, has something, or at least did at the time, called Lex Machina that allowed you to try to predict whether you were going to win a case based on, like, other cases. It helped for, like, venue shopping, I believe, where it was basically like: a defamation lawsuit in this jurisdiction in Texas is more likely to succeed than if it's in North Carolina or something.

So it was being used by law firms to venue shop.

Then there was also a lot of really dystopian startups at the time.

I don't know if any of them are still around, but one was called Premonition that was

basically like you enter your legal documents and the ones that have been filed against you.

And then it predicted whether you would win or lose the case.

And then it was like, I guess it was supposed to

let you know if you were better off like pushing for a settlement versus like going to trial.

There was

a company called Legalist that allowed you to bet on the outcomes of lawsuits.

So basically it would use AI to determine whether a specific case was likely to succeed or not.

And then you could do commercial litigation financing, meaning you could pay the lawyers' fees to, you know, essentially help someone sue someone else.

And then you would get some of the winnings of that. And I think that one is gone, I need to go check, but that one was some weird idea. There's been a lot of chatbot-type lawyers as well. There's this company called DoNotPay that initially started off

like helping people fight parking tickets and was very successful at that because

many times you can just write any sort of form letter to contest a parking ticket, and you'll win just by attempting to contest it.

And then DoNotPay got into more complicated legal situations, and eventually was fined by the FTC, or was at least threatened by the FTC, for representing itself as a lawyer. Eight days ago: "FTC finalizes order with DoNotPay that prohibits deceptive AI lawyer claims, imposes monetary relief, and requires notice to past subscribers."

That was earlier this month.

Yeah.

Yeah.

And then, as part of this article, which was pretty good now that I'm reading it again, I talked to some law professors and people like that.

And one thing they did say is that the legal process is incredibly, incredibly expensive, as Emmanuel already pointed out.

And a lot of cases are quite straightforward theoretically.

Things like divorces, for example, are really, really expensive, or can be really expensive, but a lot of them are really simple, in the sense that the legal filings are not that complicated.

And so there's maybe some sort of role for, "here's how you fill out the forms properly," if you don't have a contentious divorce going on.

So you don't have to pay tens of thousands of dollars to a lawyer to do like a pretty straightforward sort of thing.

But like what Sam's reporting on here goes far beyond that, because it's a situation where the cases are complicated and the people have hired real lawyers.

And then the real lawyers are outsourcing that work to ChatGPT, and that's fucked up.

So I know that was like a weird tangent.

I'm good for at least one of those per episode, I know, but

this has been a dream of big law legal professionals for a long time: how do we outsource all of this work to a robot and still collect our, you know, extreme fees?

Yeah, they're not lowering their prices.

No, no, no, no. It's going to be the same crazy per-hour fee, and then it's just being done on some LLM backend or whatever.

I know, it's crazy. It's outrageous.

All right, we will leave that there. If you're listening to the free version of the podcast, I'll now play us out. But if you are a paying 404 Media subscriber, we're going to talk about a true crime YouTube channel, you know, that ran true crime documentaries, except all of the murders were AI-generated.

You can subscribe and gain access to that content at 404media.co.

As a reminder, 404 Media is journalist-founded and supported by subscribers.

If you do wish to subscribe to 404 Media and directly support our work, please go to 404media.co.

You'll get unlimited access to our articles and an ad-free version of this podcast.

You'll also get to listen to the subscribers section where we talk about a bonus story each week.

This podcast is made in partnership with Kaleidoscope.

Another way to support us is by leaving a five-star rating and a review for the podcast.

Here is one of those reviews from someone who used the username Leftist Tech Bro.

Good show.

Blunt to the point.

I like it.

This has been 404 Media.

We will see you again next week.

Bundle and save with Expedia.

You were made to follow your favorite band from the front row. We were made to quietly save you more.

Expedia, made to travel.

Savings vary and are subject to availability. Flight-inclusive packages are ATOL protected.