We're Not Ready for Chinese AI Video Generators

47m
We start this week with Emanuel's great investigation into Chinese AI video models, and how they have far fewer safeguards than their American counterparts. A content warning for that section due to what the users are making. After the break, Joseph explains how police are using AI to summarize evidence seized from mobile phones. In the subscribers-only section, we chat about an AI-developed game that is making a ton of money. But your AI-generated game probably won't.

YouTube version: https://youtu.be/_Sy_nw4gJVY

Chinese AI Video Generators Unleash a Flood of New Nonconsensual Porn

Alibaba Releases Advanced Open Video Model, Immediately Becomes AI Porn Machine

Cellebrite Is Using AI to Summarize Chat Logs and Audio from Seized Mobile Phones

This Game Created by AI 'Vibe Coding' Makes $50,000 a Month. Yours Probably Won’t

Subscribe at 404media.co for bonus content.
Learn more about your ad choices. Visit megaphone.fm/adchoices


Transcript

Packages by Expedia.

You were made to occasionally take the hard route to the top of the Eiffel Tower.

We were made to easily bundle your trip.

Expedia, made to travel.

Flight-inclusive packages are ATOL protected.

Hello, and welcome to the 404 Media Podcast, where we bring you unparalleled access to hidden worlds, both online and IRL.

404 Media is a journalist-founded company and needs your support.

To subscribe, go to 404media.co, where you'll get access to our articles as well as bonus content every single week.

Subscribers also get access to additional episodes where we respond to their best comments.

Gain access to that content at 404media.co.

I'm your host, Joseph, and with me are two of the other 404 Media co-founders.

The first being Sam Cole.

Hello.

And the other being Emanuel Maiberg.

Hello.

I think Jason's on a plane right now.

I left Jason behind in Austin.

We had a busy night.

Last night was our big takeover and after party.

Flipboard kindly hosted us for a shindig downtown for South by Southwest.

So yeah, it was a long night.

I'm a little, my voice is a little hoarse.

So I apologize.

It's got a little more fry than usual, and I'm a little hungover, but we're here.

Yeah, I just got in this morning, so it was an amazing time.

It was really good to see everybody. I get just as hungover from speaking loudly at a party as I do from the alcohol.

Yes, it's a whole thing. And yeah, I'm an inside cat.

I do not go to networking events much unless we're throwing them.

So, you know, it was amazing to see everybody and meet everybody, and I met a lot of podcast listeners, which was very cool.

Yeah, it was, it was a good time.

You heard that some people only knew about the event through this podcast, right?

Yeah, yeah.

A few people came up and said, oh yeah, I heard about this happening on the pod.

I was like, that's amazing.

You know, we plug it on the podcast, you know, because we know a lot of people listen.

But yeah, it was cool to see how many people actually are tuning in.

So, yeah.

Yeah.

And it brings up one thing I'll just say super briefly, which is that clearly some people only listen to the podcast rather than reading the website.

And at first, when we sort of realized that, I found that quite strange.

And then I realized, wait, I do that myself.

I listen to the Verge podcast every single week, twice a week, if they've uploaded it without fail.

But then I generally don't read the website just because I prefer to digest it that way.

So if you are one of those people who only listens to the pod, thank you very much.

We really, really do appreciate it.

And the pod is growing.

With that said, let's get to our stories because we have a bunch of complicated and interesting stuff to get through.

The first couple of stories, which we're kind of putting together, are written by Emanuel.

And the headline of this first one:

Chinese AI video generators unleash a flood of new non-consensual porn.

Emanuel, this is something you've been working on for a long time.

What is the top line of your investigation?

Is it about the guardrails of these Chinese developed AI models?

Like, what's the top line?

The top line of the investigation is that there are a bunch of AI video generators that are available via apps that you can get via your web browser or the app stores.

And I don't think many people know about them because they come from smaller, lesser-known AI companies.

And to explain for a bit why I have been working on this for so long, I think we need to travel back in time to, I think it was February of last year when OpenAI revealed Sora, which is their AI video generator.

And I think that's the first time that people saw kind of high quality AI generated video.

And that really blew people away.

It blew me away.

They kind of came out with those video samples out of nowhere.

And that tool wasn't available at the time, and it wasn't available for a long time.

It is available now if you pay.

They were more just showing it off, right?

Yeah, they were just showing, like, look at how powerful AI videos can be.

And they definitely delivered that message.

What I did that day is immediately go to the chat rooms where I monitor the communities that create and share non-consensual AI-generated pornography.

And I wanted to see how they reacted.

And obviously, they were kind of salivating over having access to those kinds of tools.

But it wasn't really a concern, A, because OpenAI didn't give them access, and they rolled it out very slowly and, additionally, very safely.

OpenAI is notorious for having pretty strong, some would say, overbearing, unnecessary guardrails around all their AI tools.

And that is true for Sora as well.

But what happened in between the time that Sora was announced and today is that a bunch of other companies in the AI market rushed to launch competitors.

And at first, these competitors seemed not nearly as good as what we saw from Sora.

But as everyone who's familiar with this beat by now knows, AI kind of develops at a very fast pace.

And now there are a bunch of AI tools that produce pretty damn convincing video.

And for reasons that we can speculate on here in a minute, they just have really, really, really bad AI prompt guardrails, right?

So people might remember one big story we did.

God, it was 2023, I think.

No, it was also 2024.

But that was around people using Microsoft Designer and kind of writing prompts that tricked it into generating non-consensual images of Taylor Swift.

And that loophole we've seen replicated in many AI tools since then.

But overall, the big AI companies realized that people were using this loophole and abusing it, and have gotten a lot better at having guardrails against that sort of abuse.

These newer AI companies, not so much.

And over the months, I've seen people find these AI tools, find the loopholes, and generate, at this point, mountains of non-consensual videos of celebrities.

So, yeah, it sounded like it almost started with the Taylor Swift stuff that we reported on, obviously, and you, and I think Sam on that piece as well, got more information.

And then everybody knew about it, because it went viral on Twitter or whatever.

Now you're saying these other, predominantly Chinese-developed AI models are basically being used for the same thing, but to a much, much larger degree.

It's not just some Taylor Swift stuff, it's all of these different celebrities.

When it comes to the video generators themselves, what are they doing exactly?

Do you give it a video and it auto-completes it?

Do you give it a photo and it animates that?

Do you give it a text prompt?

Like, what does the user put in to then make all of this stuff?

Yeah, so it's both.

You can generate something out of what appears to be nothing, but it's not really nothing, because it's pulling from huge data sets that the AI model was trained on.

But you can basically write a text prompt and generate a video that way.

Or, and I think this is important, you can do image to video, where you give it a still image, and then in the text prompt, you write how you want the AI tool to sort of animate that image.

And the latter is harder to moderate, because with the text prompt, you can fairly easily filter out terms that you don't want people to use.

And those can be names of celebrities, nicknames of celebrities, and a bunch of sexual terms, right?

That's like a fairly easy way to filter out a bunch of bad content.

You could do that with images, but that is much more complicated because then you need to train other AI models to recognize a person in the image or recognize nudity in an image.

And that just takes a lot more effort to filter out those kind of visual prompts.

It just appears that the AI tools that I've found, most of which are developed by Chinese companies, are not doing that very hard work of visually detecting images that are used to animate pornography.
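
To illustrate the gap Emanuel is describing: filtering text prompts can be as simple as a blocklist lookup, while catching a harmful input image requires separately trained vision models. A minimal Python sketch of the text side, with entirely hypothetical placeholder terms:

    # A minimal sketch of prompt-side keyword filtering, the "fairly easy"
    # moderation described above. The blocked terms are hypothetical
    # placeholders, not terms used by any company mentioned in this episode.

    BLOCKED_TERMS = {
        "example_celebrity_name",  # names and nicknames of real people
        "example_sexual_term",     # explicit sexual terms
    }

    def prompt_allowed(prompt: str) -> bool:
        """Reject a text prompt that contains any blocked term."""
        lowered = prompt.lower()
        return not any(term in lowered for term in BLOCKED_TERMS)

    # The image-to-video path has no text to scan: catching, say, a red
    # carpet photo of a celebrity would require separate face-recognition
    # or nudity-detection models, which is the "much more complicated"
    # work these tools appear to be skipping.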

Yeah, because it could be a red carpet photo of a celebrity or something like that.

It's like, what, are you going to ban all red carpet photos?

Well, then they'll just find another photo or something like that, right?

So what are people making exactly?

Is it like sort of lewd images where maybe they get a celebrity flashing their breast or something like that?

Or is it, um, you know, more full-on pornography?

Obviously, there's degrees here.

I mean, it's all nonconsensual and it all sucks, but like, what are people making broadly?

So I think if I went into these chat rooms and put all these videos into a spreadsheet and counted what is the most popular type of video, I would say it is probably videos of female celebrities taking their tops off.

The reason for that, I think, is that there's a very popular tool called Pixverse, which, to be fair, I think is used for non-harmful reasons by a lot of users.

But it's just an easily accessible tool.

You can get it via the Apple App Store.

So it's very easy to access.

And this community has figured out how to abuse it in this specific way.

They found the specific written prompt that you can use to create that kind of video.

And it's really easy.

So that is the most common one.

But what I saw is that people move from tool to tool depending on what kind of video they want to generate and what kind of vulnerabilities they're finding in each tool.

So Pixverse is good for that.

Then, there's this other tool I talk about later, which is actually from an American company called Pika.

And that one can, I mean, produce straight-up videos of oral sex fairly easily.

And it looks a bit janky, definitely looks weird, but I think it would also be fairly horrific to find someone doing this to your likeness.

Yeah.

And again, it just takes an image or something like that, depending on the tool, but it's very low effort if they have the prompt workaround.

What specific Chinese tools are we talking about?

And are they like little upstarts?

Are they like open source projects?

Are they well-funded operations?

Like, who or what are these tools?

And sort of where do they come from?

It's this new crop of AI tools that are doing this exact thing I talked about in the beginning, which is: OpenAI presents Sora, and everybody sees the potential in that.

And rather than be careful and release it very slowly and safely, their tactical move is to just get to the market first: get something that is honestly probably not as powerful as Sora, and definitely not as safe, but is really easy to access.

There is demand for this kind of tool, and just getting there first and giving people access is a better business strategy, or, like, the only competitive business strategy that they have.

They are well funded.

They all have millions of dollars in venture capital.

For example, this app that I talked about, Pixverse, it has some notable people, like the person at the head of that company used to be the head of machine vision at ByteDance, which is the company that owns TikTok.

So they're new companies.

It's like a new generation of AI companies.

We're seeing a similar type and scale of company in the U.S., but in this case, they happen to be almost exclusively Chinese.

Yeah.

And there's a line in the piece where you say 404 Media is not sharing the exact prompts that produce these videos.

Can you explain why?

I mean, I think it's obvious, but I think it's useful for people to hear why you don't include these exact prompts.

But then also, can you just mention, so you won't mention the prompts, but you'll mention the companies.

Like, is that because, well, they're massive multi-million dollar companies?

Like, why wouldn't we name them?

Like, what's the thinking there?

Yeah, I mean, as Sam knows very well, when covering this kind of thing, you're always trying to walk this very complicated line where you want to report about an important issue.

You want to name and shame basically these companies, and hopefully apply pressure not just on them to build better protections, but also build pressure on Apple and Google, who are making these apps accessible via the App Store.

But at the same time, we obviously don't want to teach people how to create very harmful content.

So we're kind of sharing the responsible parties here, but we're not sharing the communities where people will teach you how to do this, or the specific prompts that will generate those harmful videos.

Yeah.

Sam, what do you think of that balance between highlighting something because there's a public interest, while not amplifying the bad stuff?

Like, how do you figure that out in your head?

Yeah, I mean, it's just so hard to write about something, and especially to illustrate what's going on, without saying what's going on, like saying it plainly.

I think when we kind of beat around the bush, so to speak, we try to, you know, use euphemisms or descriptors instead of just naming what these companies are and saying, for example, like last week when we talked about Instagram gore, what's actually going on.

Saying exactly what people are seeing, without being cute about it, I think is really important as journalists.

So it's a hard calculus.

I mean, it's definitely something that I struggled with a lot early on, kind of figuring out when to be very blunt about these things and name the companies, and when not to.

And I think a lot of it comes down to, I think, a lot of people's reaction when they see, oh, you're talking about a Telegram group or an AI model or a tool or whatever it is: they're like, oh, I haven't heard of that.

So you're amplifying it.

It's like, no, actually, tens of thousands of people have heard of this.

You just haven't.

You know, lots of people are using this and this company is making a lot of money on those people.

And it's doing, in a lot of cases, real harm to a lot of people.

Just because you haven't heard about it doesn't mean it's not already a huge thing.

Maybe it'll become more of a mainstream thing and more pressure will be put on it to, you know, like Emmanuel said, be taken off the app store and things like that.

But that's also out of our hands.

It's not really part of our job to do that.

So, yeah, it's a tricky thing for sure.

It's something we think about, I think, every time one of these stories comes up.

Yeah.

I mean, I think the stakes of this example, I'll say, are lower, but almost the quintessential one I always remember is when Gawker first covered the Silk Road website.

You know, people were debating me, like, oh, you shouldn't do that because now people will know to go do it.

And it's like, I don't know, man, the revolution in marrying Bitcoin and the Tor Anonymity Network to allow the borderless online exchange of narcotics is probably something that's worth putting in an article.

And it applies here when it comes to like the unbridled use of this AI technology.

And on that, just to get back to the models we're talking about a little bit, Emanuel.

So it's mostly Chinese in this article.

There are some U.S. ones, or one, that you did mention.

Is it more that the new generation of ones without guardrails just happen to be Chinese?

And that's sort of why Chinese is in the headline?

Or is that sort of a commonality when it comes to Chinese companies, that they just don't have these guardrails? Like, how exactly does the China element play into it? Because, of course, you're always careful, and we all are here.

But recently, we had the DeepSeek stuff, and people lost their fucking minds over the Chinese element.

That's a little bit different because it's like when you're giving data to a Chinese company, blah, blah, blah.

But, like, is it more that they just happen to be Chinese here, or what's that?

So, I think there's two things that are happening.

Um, there are American companies that are doing this.

There is like American competition, but I think there is less.

And we've written about some of them.

I wrote a story about one app called Dream Machine.

Sam wrote a huge scoop about, I forget the name, the AI video generator that we got the training data on.

Runway.

Runway.

Runway.

Yes, thank you.

Yeah.

Runway.

We're running all our stories.

That's a long time ago.

A lot has happened.

Yeah.

So they exist.

But I think one thing that is definitely happening is just that the Chinese companies saw an opening.

We're in this great competition in the AI industry between the U.S., or the West, and China.

And it was just a place where they could get ahead a little bit.

And they did.

And I think that's one thing that is happening.

The other thing that I think is happening, and I haven't been able to prove this, and I didn't put this in the article, but I feel comfortable saying it here.

And I invite people who are listening who might be interested in safety or red teaming and might be able to teach me about this, honestly.

But I do suspect or I wonder if there's a language barrier problem here where the American companies are just better at building what we call like the semantic filtering, right?

Like the word-based filtering of prompts, where the Chinese companies, since they're initially built for Chinese markets and are prompted in Chinese, maybe have like fairly good filtering in Chinese, but the English language filtering is not as good.

I was wondering if that's one issue here, but I don't know that for a fact.

That's super interesting.

Yeah.

I think there's one more thing you wanted to mention on this story, Emanuel, before we move to Alibaba.

There were the apps and the models, right?

Right.

So just to transition here to the next story.

So far we've been talking about apps.

These are user-friendly, consumer-grade, anyone-can-use-them, advertised-to-the-average-user type of tools.

The other thing that is happening at the same time is kind of a rerun of what we've seen with this website called Civitai, and these more open models as opposed to apps.

So there's two Chinese companies, Tencent and Alibaba, which are like two of the biggest tech companies in the world.

And they have released essentially the video version of Stable Diffusion. Stable Diffusion also does video, but it is basically an open weights model: an AI image generation model that you can tinker with to customize it and make it better at producing specific types of images.

And Tencent released this tool called Hunyuan.

I hope that's how you pronounce it.

And Alibaba produced this other model called Wan.

And they're just the exact same thing.

They release all the documentation.

There is a GitHub where you can go and download the code and tinker with it.

And as soon as this happened, very rapidly, the exact same thing we saw with AI images happened.

The models were adopted by the Civitai community.

They were modified to create videos of highly specific sexual acts and fetishes, and then also videos of very specific small-time YouTubers and Twitch streamers, Instagram influencers.

And while Civitai at this point is pretty good at preventing you from posting non-consensual content to its website, it also makes it incredibly easy to, like, take this AI video model that has been designed to create videos of blowjobs, and take this other AI video model that's been designed to recreate the likeness of this Twitch streamer that I like.

And you kind of put them together and make non-consensual videos, which are also of much higher quality than what these apps that I talked about do.

But at this point, they are more difficult to produce.

You need to navigate Civitai, know how to run these models, and either do it locally on a fairly powerful GPU or rent GPU time in the cloud, and set up the workflow for that.

And it's not impossible.

Like, I could figure out how to do it.

Anyone can figure out how to do it, but it is several degrees more difficult than just downloading an app and clicking generate.

Yeah.

And I think just the last thing on that before we take a break: when Alibaba released this open video model, and it then obviously got used for porn, as you reported, what was the actual intention with releasing this?

Like, why did they want to release it?

And what were they hoping it was going to be used for?

That's a complicated question to answer, because it gets into this greater debate of, like, why is Mark Zuckerberg really releasing Llama as an open model?

Right.

So, the theory is that it becomes widely adopted across the world, and then question mark, question mark, question mark, monetize it somehow.

How that works out is kind of, like, above my pay grade.

But the plan is just to make it open so as many people as possible adopt it.

So, the technology is developed by a community along with the company, and has a lot of investment from that community.

And then you sell them something.

I don't know how that last stage works out, but it's just the open model of AI.

Step four: profit.

You don't need in-between steps.

That's just how it works, you know.

All right, we'll leave that there.

Really, really amazing stuff.

When we come back, we're going to be talking more about AI.

This entire episode is about AI.

I mean, we're going back to like almost our 2023 roots, you know,

but it's going to be about how police are using AI when it comes to analyzing seized evidence.

We'll be right back after this.

I don't know about you, but I like keeping my money where I can see it.

Unfortunately, traditional big wireless carriers also seem to like my money too.

After years of overpaying for wireless, I finally got fed up with crazy high wireless bills, bogus fees, and quote-unquote free perks that actually cost more in the long run, and switched to Mint Mobile.

Switch to Mint Mobile and you'll save in the short and the long run.

It's easy to switch.

You can keep your device, your phone number, and your money.

Say bye-bye to your overpriced wireless plans, jaw-dropping monthly bills and unexpected overages.

Mint Mobile is here to rescue you with premium wireless plans starting at 15 bucks a month.

All plans come with high-speed data and unlimited talk and text delivered on the nation's largest 5G network.

Use your own phone with any Mint Mobile plan and bring your phone number along with all your existing contacts.

Ditch overpriced wireless and get three months of premium wireless service from Mint Mobile for 15 bucks a month.

If you like your money, Mint Mobile is for you.

Shop plans at mintmobile.com/404media.

That's mintmobile.com/404media.

Upfront payment of $45 for three-month 5-gigabyte plan required, equivalent to 15 bucks a month.

New customer offer for first three months only, then full price plan options available.

Taxes and fees extra.

See Mint Mobile for details.

We're journalists, not business people.

At least we weren't until we started 404 Media.

Starting the business meant figuring out how to open up an online merch shop, which is something that I was honestly dreading.

Then I found Shopify.

Shopify is the all-in-one online and IRL store solution that you can get set up in minutes.

Nobody does selling better than Shopify, home of the number one checkout on the planet.

Shopify's not-so-secret secret, it's ShopPay, which boosts conversions up to 50%,

meaning way less carts going abandoned and way more sales going.

If you're into growing your business, your commerce platform better be ready to sell wherever your customers are scrolling or strolling.

On the web, in your store, in their feed, and everywhere in between.

Upgrade your business and get the same checkout we use with Shopify.

Sign up for your $1 per month trial period at shopify.com/media, all lowercase.

Go to shopify.com/media to upgrade your selling today.

Shopify.com/media.

This episode is sponsored by BetterHelp.

Therapy can feel like a big financial investment with a single session costing anywhere between $100 to $250.

For most people, that can be out of budget, but with BetterHelp, you can save on average 50% per session.

And more than anything, it's an investment into your mental health, and your mental health is worth it.

With therapy, you can learn positive coping mechanisms and learn how to set boundaries with yourself and others.

Therapy isn't just for those who have experienced major trauma.

It's also for anyone who wants help with navigating their everyday life experiences just a little bit better.

With over 30,000 therapists, BetterHelp is the largest online therapy platform.

The platform has served over 5 million people around the world.

And with BetterHelp, you can switch therapists at any time.

And if you're busy like we all are, you can join a session with just a click of a button.

Your well-being is worth it.

Visit betterhelp.com/404media today to get 10% off your first month.

That's BetterHelp, H-E-L-P.com/404media.

Okay, so this next story is from Joe.

The headline is: Cellebrite Is Using AI to Summarize Chat Logs and Audio from Seized Mobile Phones.

I think let's start with, what is Cellebrite? A notorious company in our little world, but for people who don't know, what is Cellebrite?

Yeah, so Cellebrite is an Israeli company.

I mean, it has U.S. subsidiaries and stuff, I'm sure, but it's predominantly an Israeli company.

And it's basically ubiquitous in the world of law enforcement.

So when a police officer seizes a mobile phone,

maybe that's at the border with CBP.

Maybe that's ICE when they arrest somebody.

Maybe it's a cop at a traffic stop.

What they'll often use is a piece of technology from this company called Cellebrite.

And, you know, it comes in lots of different forms.

But generally, the tool is called, I pronounce it, UFED.

I'm not sure if that's entirely correct, but I like that it has Fed in there, UFED.

And you plug the phone in.

If it needs to, it will crack or bypass the password or passcode requirement, and then it will download all of the data on the phone. In some cases it's able to get deleted chats, or chats that you thought were deleted but weren't necessarily forensically deleted.

And then it takes all of that information, and the police officer can, you know, safely store it for later. If there's, I don't know, a murder investigation and you have the phone of the victim, you want a forensically sound image of that phone. Or if someone's crossing a border, you just want to download everything on their phone and rummage through it without a warrant, because that's what authorities are able to do.

There's a funny sidebar about how Cellebrite started, which is that when you would go into the Apple store and you were an Android user but wanted to change to Apple iOS, Apple would have Cellebrite devices in there, because they could help transfer your data.

I don't think that's the case anymore, because I don't think that's necessary, but it's just funny. Especially around the San Bernardino case and all of the FBI versus Apple stuff in 2016, there was a lot of coverage of Cellebrite, because people were speculating that Cellebrite was the company that unlocked the phone for the FBI. But I think it was Kim Zetter at The Intercept who actually did a really good profile of Cellebrite, and how, yeah, you go into an Apple store and there's actually a Cellebrite in the back, probably.

But yeah, it's a very, very common tool and company across law enforcement, its main competitor being GrayKey, which has probably taken a sizable chunk of its customer base, I imagine.

Every once in a while, someone on Reddit posts a picture of the Cellebrite device at the Apple store, and it's like the Leo DiCaprio pointing meme.

Like, oh, there it is.

So Cellebrite is doing what every other company in the world is basically doing right now, and they slapped AI on it, which is what your story is about.

What does that mean for them?

What does it mean in this context for them to use AI?

Yeah.

So they've slapped AI into their product called Guardian.

Again, that's not the one that's actually getting the data from the phones, but after the cop has extracted the data, they upload it to this system, like an evidence sharing system.

This is a bad analogy, but it's almost like Google Docs for cops, where you can collaborate across the cloud live with each other on a piece of evidence.

It's like that, but for stuff that's been taken from mobile phones.

So you'll have all of the chat logs in there, the voice memos, the photos, all of that sort of thing.

And what Guardian is now capable of doing, with this slapped-on AI, is summarizing all of that material.

So rather than a police officer having to read through every single text message or listen to every single voice memo, Guardian can use AI to potentially summarize it.
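
To make concrete what a workflow like this might look like, here is a purely hypothetical sketch: transcribe voice memos, pool them with chat logs, and ask a model for a summary. This is not Cellebrite's actual code or API; every name here is invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class ExtractedItem:
        kind: str     # "chat" or "voice_memo"
        content: str  # message text, or a path to an audio file

    def transcribe(audio_path: str) -> str:
        # A real system would run a speech-to-text model here.
        return f"[transcript of {audio_path}]"

    def summarize(text: str) -> str:
        # A real system would call a large language model here; this
        # placeholder just truncates. Whatever a real model gets wrong
        # at this step ends up in the "summary of evidence."
        return text[:500]

    def summarize_evidence(items: list[ExtractedItem]) -> str:
        texts = [
            transcribe(item.content) if item.kind == "voice_memo" else item.content
            for item in items
        ]
        return summarize("\n".join(texts))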

Now, I haven't seen sort of a Guardian-produced AI report, and I would absolutely love to see one if anyone gets hold of one.

But that's the way they frame it: it can really speed up investigations, it can save cops time, it can do all of the promises of generative AI, essentially.

But this isn't a school kid trying to generate an essay.

This is a cop generating a summary of evidence seized from a mobile phone.

And that is very, very different, in my opinion.

You know, the stakes here are a lot, lot higher when it comes to sort of this use of AI.

Do we know what cops think about this new feature?

Yeah, so there's a couple of testimonials included with Cellebrite's announcement.

And I should say, like, this actually came from a press release from Cellebrite in February.

I'm not really in the habit of reporting on press releases, but it seemed like nobody picked this up.

You know, I was searching around for something, probably to do with Cellebrite, because of all of these leaked documents we get from GrayKey and Cellebrite, that sort of thing.

And then I came across this February press release saying, yeah, we're putting AI into this product.

So included in there are testimonials from police officers.

Now, this is very, very common.

We saw it with Ring when we did a ton of reporting back at Vice, where the cops would provide, you know, pretty positive reviews, and then, I don't know, they get cameras or whatever in exchange, right?

I've seen that with other surveillance companies and with this mobile forensics firm as well.

So there is a testimonial in there, and it comes from a pretty small police department who says they were piloting the AI capabilities.

And they said, quote, it is impossible to calculate the hours it would have taken to link a series of porch package thefts to an international organized crime ring. The Gen AI capabilities within Guardian helped us translate and summarize the chats between suspects, which gave us immediate insights into the large criminal network we were dealing with, end quote.

So there's a couple of things there.

The first is the translating; you know, I mean, that's not that revolutionary, obviously, and I don't have a super problem with cops using an automatic translation tool, as long as it's verified later on.

But then summarize the chats between suspects.

Now, again, clearly into Gen AI territory.

But also, they say they've linked these different cases together by summarizing the evidence.

Now, of course, maybe, if not most likely, they would have linked those together manually as well.

You know, I don't know, they're reading the chats or something, and the same money mule comes up, or the same safe house comes up across all of these different chats, whatever.

So maybe they would have found out about it anyway.

But they're saying explicitly here that they linked all of these together with this AI tool, basically.

Yeah, I think one of the reasons it is so easy for us to write articles about AI and make fun of AI is that one of the most common uses for it is summarizing large pools of information, and it so often gets it wrong.

And it just increases in severity, or, like, becomes less funny, depending on the context, right?

So, if it's a Reddit search, I mean, a Google search, and it's pulling some random comment from Reddit that tells people to put glue in their pizza, that's not great, but mostly it's like, ha ha, AI is stupid.

And then increasingly, we've been seeing more and more stories about lawyers who are using AI in court.

And then you get the AI hallucinating citations of cases that never took place.

That seems pretty bad.

And I sort of believe the cop quotes here about it being much faster to summarize just like massive amounts of chat logs and other information.

Like, sure, it's easier, but it's like, yikes.

Yeah.

And you got some pretty alarming quotes from civil liberties experts, like, speaking to this exact issue.

And what do they say?

Yeah.

And this is from Jennifer Granick from the ACLU.

And, you know, she brought up Fourth Amendment issues, which is like when you get a warrant, it's only supposed to be for a particular device and maybe even a subset of that device and that sort of thing.

And that's obviously a long-standing Fourth Amendment warrant issue.

But I think she brought up some other really, really interesting points, which were, you know, she said there could be a tendency to believe that an AI tool will successfully identify patterns which reveal criminal behavior more so, or better, than the human reviewer.

So you could end up trusting the AI more, because, oh, well, this is an AI tool sold by a law enforcement contractor.

Like, why would I not trust it?

You know, that said, and I think we'll get a little bit into this at the end, but Cellebrite says there is always a human in the loop, an H-I-T-L.

I think they made that acronym up, because I've never heard that.

And when Jason was editing it, he highlighted it like, oh my god.

So there's always a human in the loop, they say.

Okay.

But, I don't know, this is still crazy to me, the idea that it could introduce errors which then the police officer has to catch. And maybe they will catch them, but wouldn't it be better if they just did it from their own experience?

I'm not entirely sure.

But you brought up how it's easy to dunk on chatbots, and here the stakes are higher.

I mean, we're seeing more and more of this.

There was this BBC study.

When was this from?

From February.

And they tested, it seems quite methodically, ChatGPT, Copilot, Gemini, and Perplexity to summarize news stories.

So basically, it's kind of doing the same thing as here: summarizing some sort of corpus, some sort of body of work.

And it found that 51% of all AI answers to questions about the news had significant issues of some form.

Half of them are getting the shit wrong.

Like that is really, really crazy.

When you're talking about obviously news, that's very, very bad.

But then in this context as well, I don't know.

It's just alarming and concerning for sure.

And again, I haven't seen any errors in it, but I think the potential is absolutely there.

This is not going to be the end of AI and police work.

There are several stories that we're working on that will dig into this more in the future.

But what do you think about investigators, I don't know, the FBI, any law enforcement agency, increasingly adopting AI tools into their investigations, their reporting of crimes?

Yeah.

So there's a couple of examples.

One is, you know, the thing I go on about most in the entire world, which is how cops are increasingly compromising entire encrypted chat platforms like Sky, EncroChat, Anom, and then all of these other smaller ones, Exclu, Ghost, Matrix, blah, blah, blah.

The Dutch authorities in particular, they did create AI tools to surface content in those massive, massive data sets.

So if a criminal was talking to another one about cocaine, for example, the AI would surface that chat, tell an analyst, hey, here is a conversation about cocaine.

And in my reporting, speaking to those law enforcement officials involved in those sorts of investigations, it seemed that that sort of AI was especially useful.

I guess the good thing there is that it's very much limited to that data set.

But then, you know, it still brings up questions of, well, eventually those people are going to be prosecuted.

So it would still involve the human going and verifying the actual evidence against them.

That's one way.

The other one, which I think is probably a lot more relevant for more people, is that Axon, the law enforcement contracting giant, you know, it makes, what, Tasers, body cameras, basically everything for everybody, right?

And they have this relatively new capability called Draft One.

Yeah, Draft One.

And it uses basically ChatGPT, or OpenAI's model, to summarize the audio of body cam footage.

So it takes that audio, it listens to it, and basically summarizes what happened.

And examples are, oh, a man came over to the police officer and he said XYZ.

He described the suspect as blah, blah, blah.

And the idea is that police officers can be sort of more engaged in the moment.

They can be talking to the witness or the victim or the suspect and they can be really engaged in that conversation without having to like sort of remember bits and bobs and that sort of thing.

And when I was going through a lot of the Axon material, they were saying, or maybe the police officer was saying, that with the rise of body cameras, officers now speak more to the body camera than to the person in front of them, because they know they're being recorded.

So they say they almost repeat everything.

So it gets recorded on the body camera.

In a way, I don't know.

That sounds like a good thing.

There's more evidence, right?

But the idea of that AI is that the cop can just go and sort of do their job, and then Axon's tool, Draft One, will summarize it.

I mean, that made quite a splash in good and bad ways when it was announced several months ago at this point.

And it brought up basically the same concerns as the Cellebrite one here, which is that you're asking an AI to summarize stuff which is really, really important.

This is not trivial.

This is not some dumbass lawyer summary.

And to be clear, the stakes are pretty high there as well, but the judge is always going to catch them.

Here, it's a lot more asymmetrical, because, well, it's the cops generating the evidence and summarizing it with their AI.

You don't know if the victim or the witness or the suspect or anything necessarily gets a chance to challenge that in the moment, right?

It's a black box happening over there.

So, I don't know.

AI is going to continue to become a more and more relevant part of policing.

And I think we're going to start to see the side effects of that in the same way that we saw side effects with facial recognition, where more cops were using it.

And yes, it's an exceptionally powerful tool for them.

But, you know, the wrong people have been arrested because they happened to be black or something like that.

And these systems make mistakes.

Does that sound right, Emanuel?

That sounds horrible.

So yes, it sounds right.

It sounds extremely frightening and bad.

So yes, that sounds correct.

That is what we do on this podcast.

If you're listening to the free version of the podcast, I'll now play us out.

But if you are a paying 404 Media subscriber, we're going to talk about the rise of vibe coding and a game someone made with AI that's now allegedly making them $50,000 a month, which is crazy.

You can subscribe and gain access to that content at 404media.co.

As a reminder, 404 Media is journalist-founded and supported by subscribers.

If you do wish to subscribe to 404 Media and directly support our work, please go to 404media.co.

You'll get unlimited access to our articles and an ad-free version of this podcast.

You'll also get to listen to the subscribers only section where we talk about a bonus story each week.

This podcast is made in partnership with Kaleidoscope.

Another way to support us is by leaving a five-star rating and review for the podcast.

That stuff really, really does help out.

This has been 404 Media.

We'll see you again next week.

Starting a business can seem like a daunting task, unless you have a partner like Shopify.

They have the tools you need to start and grow your business.

From designing a website to marketing to selling and beyond, Shopify can help with everything you need.

There's a reason millions of companies like Mattel, Heinz, and Allbirds continue to trust and use them.

With Shopify on your side, turn your big business idea into reality.

Sign up for your $1 per month trial at shopify.com/specialoffer.