AI Slop Is Breaking the Internet as We Know It (Live at SXSW)
Use code "404Media" for 20% off an annual plan of DeleteMe: https://www.404media.co/r/5d94373c?m=e247fe06-53d6-4c05-9484-be3684d4f655
Find Brian's work at Blood in the Machine: https://www.bloodinthemachine.com/
Become a paid subscriber for access to bonus content: https://404media.co/membership
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Getting ready to step into your career era?
Set yourself apart with Adobe Creative Cloud Pro for students.
Hone your skills with apps like Photoshop, Illustrator, Premiere Pro, and more.
Powered with the latest in creative AI, students save over 55% so you can build a portfolio you're proud of.
Launch your future with Adobe Creative Cloud Pro for students.
Visit adobe.com slash students to learn more.
Hey, this is Jason.
I wanted to explain this quickly before we get started.
This is a recording of the live panel that we did at South by Southwest a few weeks ago, where I talked about the rise of AI slop with Sam and our friend Brian Merchant of Blood and the Machine.
If you're listening to the podcast version of this, the first few minutes might be kind of confusing because there's a TV display at the event where I'm showing a bunch of AI photos and reels that I've reported on over the last year.
You might want to skip ahead a few minutes because after the intro, it becomes more podcast-like.
Or you can check out the video version of this on our YouTube.
Our username is 404 Media.
I also wanted to thank Flipboard for giving us the space at South by Southwest to talk about this and DeleteMe for sponsoring our panel and party.
Let's get into it.
Okay, thank you everyone for coming.
I'm Jason Kebler.
I'm a co-founder of 404 Media.
It means a lot to us that you're here.
We're first going to talk to you about AI slop, and then afterward, we're going to talk a little bit more about 404 Media.
I know some of you are here to hang out with other people, which is very good.
But if you could be like further back, if you're going to talk to other people, that would be helpful.
Thank you.
Yeah.
So I'm here with Sam Cole.
Hello, Sam.
Hi.
Thanks, guys.
I'm a co-founder of 404 Media also.
Yeah, we're really excited to be here, obviously.
It's so cool to see you all.
Yeah, we're going to get into some weird stuff this afternoon, so I'm excited.
And this is Brian Merchant.
And I'll let Brian introduce himself.
Yeah, honorary 404 adjacent human.
And yeah,
fellow traveler in the tech journalism world.
I used to work with these two back in the Motherboard days.
We all worked at Motherboard at Vice for a stint before that ship went down in flames dramatically.
And yeah, couldn't be more pleased to be here with these heroes of modern journalism.
Let's give them another round of applause.
404 Media.
The stuff they've done with independent tech journalism is just astounding.
We need it, and it's the best.
So, this panel is going to be 45 minutes of Brian saying nice things to us.
So, I've been very obsessed with artificial intelligence and AI slop on social media.
So, I'm going to take you through a couple of slides very quickly just to give a lay of the land.
I have this thesis that AI slop is essentially a brute force attack against the algorithms that control the nature of our reality, which sounds very lofty, but I think it's actually what is happening.
I've been writing about AI slop for about 18 months.
Sort of started with Shrimp Jesus and, you know, AI-generated images that were going very, very viral on Facebook.
But it's not just Facebook, it's Instagram, it's TikTok, it's YouTube.
And there's an entire content creation factory apparatus around all of this, which we'll definitely talk about.
So I'm going to flip through just like a few things that I've seen over the last few years very quickly.
So these are like dozens of images of Jesus being made out of Coke bottles with four children.
We have some
Jesus and hot flight attendants over here.
And this is an Indian YouTuber who teaches people how to make AI spam to put on Facebook.
And so after like months of writing about this sort of thing, I became obsessed with figuring out where it came from.
And it turns out that all of these
like bizarre AI images are being monetized directly through Facebook itself, through the Facebook Creators Program, which gives a fraction of ad revenue depending on how viral an image goes.
And so there's like all these side hustle bros in India, Vietnam, the Philippines, some in the United States as well, who teach people how to create AI slop and spam it to Facebook.
See this?
And then there's this guy here who has become very famous on YouTube because he made this image, which is very small, but it's an image of a train that is made out of leaves.
And he was paid $431
for this image because it went mega viral on Facebook.
And now he's been on like 20 different podcasts talking about how he did it.
So there's like thousands and thousands of people all over the world who are just making stuff to put on the internet
in hopes that it goes viral.
At first, it was just like images of Jesus and like bizarre trains made out of grass.
And then Hurricane Helene happened and this image went extremely viral.
It is not real.
And here's the head of the Republican National Committee saying that they don't care if the image is real or not because the vibe
is such that it seems true.
And then if you scroll Facebook right now, you'll see tens of thousands of different images and videos of Elon Musk,
like inspiration porn, I would call it, where Elon Musk has invented a UFO or a $10,000 micro house.
And a lot of these images have, you know, 10, 15 likes, and then others will have
like 7 million likes and like hundreds of thousands of comments and things like that.
And clearly, very many people can't tell whether it's real or fake.
This is expanded beyond Facebook, of course.
It's expanded into libraries.
It's expanded into TikTok.
It's expanded into SEO spam, things like that.
Here are some AI-generated books that you can take out from the public library, including AI monetization of your faceless YouTube channel.
And
I say that this strategy is brute force because they're essentially, in my opinion, looking for weaknesses in the algorithm.
And when they find something that works, they create a lot of that.
And so there's multiple influencers who are teaching people how to do this.
They have, you know, Slack courses and Discord channels where you can pay them $50 a month to learn how to do this.
And this one is from a guy named Daniel Bitten.
He emails me every single day
automatically.
And he likens making, like going viral on YouTube shorts to being a serial killer because there's a pattern to the algorithm.
I think the important thing to look at here, though, is: now AI does 95% of the work, and you're ready to start feeding the algorithm what it's actually hungry for.
This is another email from a guy named Musa who flies in a private jet.
He is 17 years old.
I don't know if he owns the private jet or not, but he says that he's gotten rich by spamming these platforms.
And similarly, he's just saying, let the algorithm do its thing.
And
now I'm going to show you what my Instagram feed looks like very quickly.
And then I'll bring in Brian and Sam.
Some of these are gross.
I'll just preface, but I think you can probably handle it.
We have Jesus fighting the Grim Reaper here.
We have
whatever this is.
This is totally unedited, by the way.
This is like when I open Instagram and I scroll through reels,
this is what I'm getting.
We have this horrible tidal wave that killed all these people.
Things like that.
We also have a lot of wildfire content.
We'll start with this, but there's a lot of LA wildfire disinformation here.
This has 39,000 likes if you can't see.
There's a lot of how to build the pyramids as well.
Like this is how the pyramids were built in ancient Egypt with a gigantic man,
some mermaid content,
so on and so forth.
And the last thing I'll show you is AI influencers are very popular on Instagram now.
We have this with 519,000 likes, whatever is going on here.
We have this woman with three breasts.
She's pretty popular on Instagram as well.
And
I guess we'll just start there.
What do you guys think?
You like that?
So, anyways, what do you think?
Yeah.
No, but
thank you guys for staying.
Yeah, thank you for staying.
Thank you for not running away in horror.
You're very brave.
Yeah.
They're sweating, I can tell, but it's fine.
But Sam, you've written a lot about AI slop as well and how it can be dangerous.
You know, field guides to eating mushrooms
generated by AI, things like this.
Yeah, I mean, I feel like it's really funny to look at the slop.
First of all, I love that slop is the term for this.
I think maybe Jason coined that.
Someone coined it.
Possibly you.
Probably someone here.
Absolutely, I don't know.
But yeah, I mean, the AI slop stuff is really funny, but it's also, I mean, we're looking at this as journalists, obviously, so we're like, why is this happening?
Like, what is the kind of the
ecosystem, the perverse economy that's behind this?
And so there's the visual examples, but then there's also LLMs and AI slop of like a text-based form.
So
when we first started 404, one of our first stories was about AI-generated books on Amazon for mushroom foraging, which is obviously a very niche hobby that can get you in a lot of trouble if you do it wrong.
It's something that the New York Mycological Society had flagged
as being life or death, essentially.
It's, you know, if you eat the wrong mushroom in the wild, it could be really serious.
And these books were saying, you know, like, it's no problem, just cook it and eat it.
It's like,
no human is involved in the expertise that went into this book.
But obviously those books sell and people buy them and
Amazon makes money and the sellers make money and I think we're gonna get into a little bit about why these make money and how these kind of images make money.
But
yeah, I mean it's
it's scary in a way that is so absurd you have to laugh.
But you know, it's
Brian, you write a lot about AI-empowered automation, but have you thought about all the jobs that AI slop is creating for teenagers who want to get rich quick on Instagram?
Will anyone think of the teenage content farmers spamming AI slop to the content algorithms?
Yes.
No, I mean, I think that is a really
good point, and that it's easy to scoff at this stuff.
It's easy to write it off.
But just like Sam was talking about, if even a few of these guides to consuming, you know, mushrooms, or foraging for mushrooms, make it into the public's hands and people don't properly understand what it is, that it was created by these algorithms, it can be very dangerous.
And on those edge cases,
it is having sort of outsized effects on the broader economy, right?
Like, so by flooding, especially creative-adjacent or creative fields, with all of this AI slop, both image and text, it is slowly eroding the market for this stuff.
And, you know, it's not really anything new.
This is a process that began, you could argue,
back when people started
blogging or aggregating
internet content and sort of capturing some of the value that was put forward by reporters or writers or people sharing images online.
But
by being able to spam this out at such great scale and with such ease right now, it's very cheap, it's very easy to do this, we can't really take for granted some of the potential long-term impacts of this.
So I wrote a book called Blood in the Machine.
It's also the name of my newsletter that I kind of document a lot of this stuff going on now.
And you can draw a lot of parallels to what's happening with sort of creative labor now back with what happened to the artists and workers, the cloth workers in the Industrial Revolution, where these sort of technologies of mass production and automation enabled the cheap production of goods, the
eschewing of the skilled worker.
So there's quite a few parallels and the stuff doesn't need to be good, right?
It just has to be good enough a lot of times.
So there are different categories of slop, the stuff that Jason just showed us.
I'm glad I didn't have to watch that again.
Some of it's pretty grotesque.
But then there's the stuff that you probably see more of that's just slowly being substituted for other purposes.
Maybe it's on somebody's newsletter, a Substack.
Maybe it's being used on social media.
Maybe it's making its way into like a corporate presentation.
But so I think we have to both recognize that this stuff is ridiculous.
And I think most people
object to its artistic quality most of the time, but it can still have real impacts as it does sort of gain steam.
Yeah, I think that's the biggest thing I want y'all to understand is that as I've been writing about this, a lot of people say, do people really think that this is real?
Are the people engaging with it real?
And I spent a long time scrolling through Facebook comments and trying to determine whether the people commenting on these things were real or whether they were bots themselves, whether there was some sort of engagement mechanism here that made these things go viral, but there weren't really that many human beings in the loop looking at it.
And I've come to the conclusion that it's a mix.
I mean, there's definitely some automation happening here where, you know, maybe there's a thousand likes at the beginning that gets it going in the algorithm.
But I think that the target of this AI slop is not other human beings.
You know, the people who make it say exactly that.
The audience is not human beings.
The audience is the algorithm itself.
And the goal is figuring out how to game the algorithm so that it does show up in the feeds of human beings.
And if something is shown to 100 million people and a few of them like it and comment on it, like that is enough.
And I think, from what we know about the Instagram algorithm, you know, the Facebook algorithm, it's like, opening up a reel and seeing a man turn into a spider in the first three frames of the clip will make you stay on that, because you might say, hey, what the fuck is this?
And that's a signal to the algorithm that this is interesting content.
And I think that as writers, and we're not artists, but we work with a lot of artists.
It's like we do have to think sometimes about, you know, will this perform well on social media?
Will this perform well in the algorithm?
And let's say I spend a week working on a story that I report very deeply and put out there and, you know, it doesn't get much engagement.
It doesn't perform well in the algorithm.
Well,
you have these people making a thousand images in three minutes and they're able to iterate on that instantly.
And so it has the effect of not just flooding out human content, but also they're able to A/B/C/D/E test all of this stuff.
And then when something is working, they can just spam more and more and more of that and, you know, collect the fractions of a cent.
Yeah.
And importantly, right, the platforms don't care or they're happy to have this happen
because
it's more content and more engagement when it does fire off, right?
Facebook doesn't really care if the content that's going onto it is AI-generated or human-generated; in fact, it might preference the latter.
So, I mean, I'm curious, because you guys have reported on this a bunch.
And when you first did, I honestly, bless my naive soul, I thought that, you know, Facebook would respond to the outrage by sort of clamping down on it, but that hasn't happened really at all, has it?
No, not at all.
In fact, Facebook has tried to create its own, you know, AI agents.
They've talked about allowing users to create AI profiles.
Mark Zuckerberg has talked about this future where, you know, you might have your own Facebook profile, but then you'll have seven others that are your AI avatars that are out there spamming Facebook for you.
And
it's interesting because people ask me, like, what is the end game of this?
Like, why is Facebook doing this?
And I think from what we know about Facebook, and I always say Facebook because that's where it's worst, but the other platforms are the same, more or less.
They're all filled with this stuff.
We know that they are trying to advertise to people in as specific a way as possible.
They are trying to create psychographic profiles of you, get your demographic information, and then provide you the most optimized advertisement to you.
And I think what they are hoping is that AI content will be able to provide you extraordinarily specific content that is tailored to what Facebook thinks you are.
And so, I don't know, if that's like weird crabs turning into Jesus and you're really into that, there will be a never-ending feed of AI-generated crab Jesuses that you can scroll through, and they can target ads to you based on that.
The other thing that I wanted to say about how Facebook enables this, and how it is profiting off of it, is that they're an advertising business, and for a long time you've been able to make several ads on Facebook.
So if you were a brand, you could make like three or four different versions of an ad and you could test different messages
and then you could put money behind the one that is performing the best.
And very recently, they announced this program called Advantage Plus, where instead of making two or three ads and A, B testing different messages to different audiences, you know, maybe the call to action is slightly different on one, maybe the image is slightly different on another.
You can now do this with AI, generative AI, and you can make a thousand ads and target it to a thousand different segments and only put money behind the ones that are performing.
And so when I talk about like a brute force attack on reality, it's like, it's not just the people who are spamming Facebook.
It's that the companies themselves are making and enabling these tools for,
you know, advertisers to just hit you over the head with this stuff constantly.
And I don't think that's good personally.
Yeah, I mean, I guess it's something we've talked about
in private, but I would love to kind of work through it here too, is why this is bad for everyone.
It's like if you don't care about AI and you think, oh, well, the stuff is whatever, or if you're like AI is good in certain use cases, but I think we also, you know, we've kind of gone back and forth on that, but like,
why does this kind of trash the whole economy of creativity
at scale, I guess?
If you're working toward
an algorithm and appeasing the algorithm and getting engagement for an algorithm, how does that affect everyone?
And I know Brian's written quite a bit about this.
You know, it's like it's not just, oh, this stuff is ugly to look at.
It's actually affecting how we use these platforms and the way that we're contributing to them.
Yeah, I mean, it is.
And it does really seem like, I mean, there are a couple of caveats, too, that we could talk about in a second, because, you know, I think one of the things that's enabling this right now is the fact that the cost of
running an AI-generated
search or producing an image,
the compute cost and the energy costs of doing that right now aren't really priced into a lot of models.
Or if they are, if it's something, if it is like
a model that's being run by a big tech company, then they have such a huge war chest of capital that they can do that for a long time.
But an interesting question is,
eventually,
if those costs are factored in,
does it make sense anymore to continue to do this?
If you have to pay to generate images, even just a small threshold more, then does that sort of ruin the slop economy for those worst offenders that are spamming it out there?
I'd be curious to hear that.
Yeah, I mean, right now, all of the guides to doing this, because there's always a guide.
There's always like a guy at the top of the pyramid, if you will, who is teaching the people underneath how to do this.
And for a small fee, you can purchase their prompts and their tips and things like that.
Almost everyone is using free tools at the moment.
And by free, I mean free trials of generative AI tools, or things like Bing Image Creator, which is free.
And when you run out of generations on the free account, you can just make another account and have seven different accounts and do that.
And so it's kind of the Marc Andreessen Uber effect where,
you know, Uber was cheap for a very long time because it was being subsidized by the venture capitalists that were investing in it.
And that's the period that we're in with artificial intelligence right now, where the people investing in and building this technology are taking a huge loss on this stuff.
And presumably they don't want to do that forever.
So there may come a day when it's not profitable to do this.
I think the big question for me is like whether social media will be completely destroyed before that happens.
And what I just showed you, where I was scrolling my feed and that's what I'm seeing, I don't think that's most people's experience at this very moment.
I think that I've given enough signals that as a journalist who writes about this, like show me the weird AI stuff.
That's what I've told the algorithm that I want.
But as you saw, like a lot of these videos have hundreds of thousands, millions of likes.
And so that is starting to happen on Instagram and Facebook at scale, where people's feeds are being taken over by this stuff.
And I wonder when the collapse point comes where it is so hard to find other human generated content that, you know, these social media platforms are just
all AI, like more or less.
And it's happening on search as well.
I mean, Google is being flooded.
And of course, Google itself is putting AI overviews at the top of things too.
But the entire SEO industry is AI focused at this point too.
And I mean, it's just a reflection of the problem when you have just like such highly monopolized
sort of tech infrastructures and platforms that
we rely on, that maybe Facebook would be more responsive to sort of the abuse of
being flooded by AI images for a little bit of profit for some content farmers.
Maybe Google would be more receptive to improving its product and not serving us results that are going to tell you to eat poisonous mushrooms or whatever if
there were more viable alternatives or an ecosystem in which they didn't dominate 90% of the market share.
And I do think that, like, I do really, ultimately think this is a really toxic thing.
Because I kind of see it as, you call it brute-forcing, which I like.
I also think about sort of AI slop almost as kind of like the shock troops, right?
They're just kind of like, these are the most egregious examples of sort of AI generated stuff.
And then it kind of moves the Overton window and makes other AI
generated stuff seem a little bit more reasonable by comparison.
Here's a fun and topical example.
So I was the tech columnist at the LA Times until last year when I was part of the mass layoffs along with like 120 of my colleagues.
And then at the end of that year,
The owner, Patrick Soon-Shiong, who has not managed to keep himself out of the news much these days for a number of reasons, announced
a new feature called
AI Insights.
And it's a new feature that's on opinion pieces, or they call them Voices now.
But if somebody writes an opinion piece, now it features an AI that will determine or attempt to determine the political content or positioning of this piece.
So, you know, it'll say like, this is a moderate liberal piece or something like that.
And then it will take the initiative to offer some,
you know,
some counterexamples and some
yeah, buts, right?
Some both-sidesing, which is what all journalists love, when they have to both-sides their argument.
Anyways, this thing recently got in trouble because somebody wrote a column.
One of my former colleagues wrote a column saying,
my hometown of Anaheim,
you know, has this checkered past because at one point the city council was dominated by the KKK.
And we need to remember this sordid history so that it doesn't happen again.
And of course,
the AI bot comes out and says, well, maybe the KKK wasn't that bad.
Maybe it had some points.
Some people have said it wasn't a totally hate-driven movement.
And obviously, this caused some backlash.
They had to get rid of the AI thing for now, at least on that article.
But this is just an example.
Like, here's some AI-generated content that, I think we could call it slop.
It's garbage.
Nobody really wants it.
And yet the management of a company is trying to use it to sort of create more value.
In my last newsletter, kind of being cheeky, I said they've replaced me with an AI that defends the KKK. It's not quite that one-to-one, but you can look at the value propositions here, what the tech companies are trying to do to sort of plug the gap, or to see what they can get away with by generating content as cheaply as possible, or getting people to engage with content as cheaply as possible.
And it's not just big tech, right?
We've seen this at Sports Illustrated using AI slop.
We've seen it at some portions of USA Today, and we've seen it at CNET, the old tech company.
So it is like creeping up.
BuzzFeed fired all of its reporters and then said, we're going to try making some AI quizzes instead.
Again, so it's just this rise of this force that is, you know, I do think that it has this destructive capacity that needs to be recognized.
Yeah, I think that,
so Sam was the first person who ever wrote about deepfakes
ever, and she totally owned that story, you know, 2018, something like this, many years ago.
And when deepfakes came out, like face-swapping technology, Sam was reporting on how it was being used to make non-consensual pornography of celebrities and women and, you know, essentially do sex crimes to people.
And the immediate response from, like, politicians was one of, this is going to be really bad for national security, because what if someone makes Donald Trump saying that there's going to be a nuke launched or something.
And so I want to ask Sam, because I think, to Brian's point,
There's the intended use of this stuff, which is like, use it to make yourself more efficient, use it to make your workers more efficient, so on and so forth.
But it's like every time one of these things gets released, there's like the quote-unquote utopian
productive uses of this thing, which we could argue about ad nauseam anyway.
And then there's the way that people are actually using this.
And to this day,
the dominant use of artificial intelligence is to make non-consensual porn of women.
And Sam has done amazing reporting on that and just wanted to hear more.
Yeah, yeah.
I mean, I don't know, I feel like deepfakes are not quite in the slop bucket, but slop is kind of an outcropping of the deepfake phenomenon.
Yeah, I mean,
that's the story with the internet too, right?
It's like people had this utopian idea where it was going to be like an amazing thing for humanity and it was going to be nothing but good and egalitarian and beautiful.
And then people started using the internet in a really big way.
And, you know, hundreds and then thousands of people started getting online and actually interacting with each other.
And it kind of turned crazy quickly.
So I think the same thing happened with AI and with this technology.
Someone found a way to face swap really easily, which was something people were already doing using AI and it just exploded from there.
I think the experts that I talked to at the time said, oh, give it, you know, a few months, this will be a huge thing.
And it was maybe three weeks where people were making non-consensual, intimate images of celebrities en masse.
It was a huge thing.
And now I think we're seeing it get more awareness, I think, because it's happening in schools.
Kids are dealing with this now.
And I think that's something that we're going to have to kind of
have to reckon with legally and as a society, and as far as like internet regulations and things like that.
But also just, I think there needs to be some kind of introspection about how we got here,
why we let it get this far,
you know, who kind of
let it go to the point where, you know, 13-year-olds are making porn of their classmates using AI.
I think that's something that the adults in the room need to sit down and introspect about a little bit.
And I think that's something that we try to do when we write about this stuff in a really early stage.
It's like, let's talk about it now.
Let's kind of get the lay of the land now and figure out what's going on before it becomes so commonplace that a 12-year-old is getting arrested for it.
You know, it's such an explosion, so quickly.
So, yeah, and I mean, the KKK example is really crazy, but also it's just showing up in consumer ways too, I think, like the Amazon reviews that we wrote about recently, where reviews of Mein Kampf were getting five stars because people were like, it's a nice print of a very concerning book.
It's not five stars on the ideology within it, but then the AI said, oh, people say this is an easy and interesting read and a true work of art, and it's like,
I think AI is kind of slipping in, in these ways, and it's now cracking open to where lots of people are realizing it.
So you aren't in the depths of the internet anymore looking at deepfakes or looking at, you know, weird stuff; it's in the LA Times, you know.
Since you have been on the deepfake beat for so long, has the character of the way that people are producing or engaging with deepfakes changed sort of materially since the sort of reinvigoration, the Midjourney, ChatGPT, the more recent AI boom?
It's definitely easier.
It's so much easier and it's really realistic.
I think looking back at how it was done in 2018, 2019,
you needed hundreds or thousands of pictures of someone.
That's why celebrities were getting deepfaked.
Now you need one picture and you can make a full video.
It's not even just making an image.
This is something that our colleague Emmanuel writes about.
Even with a text prompt at this point, too,
which is something that everyone in this, I hate to call it community, but the community of people who make
porn of Taylor Swift will try immediately whenever a new AI model is dropped.
They will see if there are guardrails on this new tool that can be circumvented in some way.
And
often they can be, unfortunately.
Yeah, it's hard to guardrail around the human mind, I feel.
There's just no way.
Our event at South by Southwest was sponsored by Delete Me, which has been a sponsor of 404 Media for a long time.
Delete Me is one of the few products that I've used for years, and it's made a meaningful difference in my life.
Delete Me makes getting your personal information deleted from data broker websites really easy.
As someone who's been doxxed multiple times, that's really, really important.
With data brokers and bad actors increasingly selling your information and trying to dox you, it's great for you, your family, or your employees.
DeleteMe makes it harder for these bad actors by scrubbing your employees' details regularly.
It's simple.
Attackers are lazy.
If it's too hard to find your contact info, they'll move on to easier targets.
DeleteMe takes care of this for you, doing the heavy lifting so you don't have to worry about it.
They keep removing the information so it stays down, protecting you and your team from constant exposure.
That's what I personally like best about DeleteMe.
When I used to try to get my address deleted from different data broker websites, I would spend hours filling out different forms and trying to figure out how to get myself removed from any individual data broker.
I'd succeed, but then another data broker would just pop up and I'd have to start the entire process all over again.
With DeleteMe, it's set it and forget it.
So go to joindeleteme.com and use code 404media for 20% off of DeleteMe today.
That's joindeleteme.com, promo code 404media.
Now back to the event.
Let's open it up for some questions for a few minutes.
If anyone wants to talk AI slop,
yeah.
One second.
Before we do questions, I wanted to thank all of you for coming.
And I wanted to thank Flipboard for letting us do this.
A very, very big round of applause for Flipboard.
You guys are the best.
Surf is amazing.
So, check out their
app, Surf.
And then also,
we got some help from DeleteMe for this as well, which is a product that I actually use.
So, we have discount codes for that.
We have discount codes for subscriptions to 404 Media as well.
We'll talk more about that later.
But, thank y'all for coming.
I'll give you the mic so that we can hear it.
Hi, my name is Wesley.
And it seems the balance of power overwhelmingly lies with the advertisers.
So if they ever get sophisticated enough to be able to detect
and say that this is an issue for them because they're seeing maybe more views, but diluted conversions.
I know in marketing, there's a very strong metric of looking at the number go up.
But
when the views aren't translating to sales and they go to Meta and say, we need this to be taken care of,
does Meta make a new tier of verified human viewers?
Do they lower their CPM?
How does,
if this tide does turn and then the, because it's all about money.
And if they go to advertisers and say, we can sell you this tool for you to get a score of how many people are actually doing this, they'll eat that up, whether it's
accurate or not.
So what happens?
What does Meta do?
What does CNN do when they hear this?
I think that's a really amazing question.
And it's something I think about constantly. This is my opinion here, but there's a gigantic amount of fraud happening across not just social media, but also,
you know,
a lot of the AI slop we just looked at leads off-platform to websites that are laden with ads and also like other additional AI slop.
And a lot of the traffic to these websites is bots, and a lot of the people engaging with them are bots.
They're not people at all.
And I think that this is probably a massive scandal, one that people in the industry may understand, but that has not been reported properly to this point.
And it's something I've tried to do a little bit of reporting on, but it's very difficult.
It's just the internet is very vast and getting access to whether Facebook's views are real and the likes and comments are real and things like that is really tough.
And then at the same time, I think the ad industry feels like the advanced targeting that AI in general provides them and then the customization that generative AI provides them, where instead of showing one type of ad to
100 different types of people, they can show 100 types of ad to three different types of people and then put money behind the one that's working.
I think that, on balance, the advertising industry
is complicit in all of this as well.
And so I think that I'd like to say that maybe they could
hold them accountable if they're not happy, but I think on balance, they may be getting better performance because this is one of the few ways that AI is actually working.
Like if you look at Meta's earnings reports and things like this, they say, actually, people are spending more time on our platform now, and ad performance is up, and blah blah blah. Whether those numbers are real, I have no idea, but that's the narrative at the moment.
Yeah, it is. It's a really interesting question, and one that theoretically, again, should, you know, mean a great deal to advertisers trying to see a return on their spend. But once again, it's another case where the dimensions of the monopoly, the platform, are such that, where else are you going to advertise? What are your other options? You're quite limited. And Jason is exactly right.
Meta has been
on a tear right now.
And AI is one of the reasons that investors are citing as being behind its booming stock valuation, even though it hasn't historically been thought of as one of the leading companies in the AI space, even though it's got Llama and its open-source project and all that.
But precisely because
advertisers and investors are just tantalized by the prospect of of using AI to distribute ads and to do kind of a version of what we've been talking about today.
It's really sort of driving Meta's stock price up to this radical degree.
And I think that just underlines that the extent to which these AI technologies are useful or interesting, or have investors enthused, is because they're taking place on these monopoly platforms where the company can have so much control over them.
There's two related points I want to make very quickly.
One is, I don't think that people like generative AI.
Like by and large, I don't think this is something that people want, but because of the nature of monopolistic power, it's being shoved down people's throats.
It's being surfaced in the algorithm and you cannot escape it.
There was this video of a spider turning into a man in an airport that I saw earlier today that had been viewed according to Instagram 330 million times.
And we've been doing 404 Media for 18 months, and we've been pretty successful.
That is like 10 times, 15, 20 times as many views as all of our articles combined in that 18 months.
And, you know, the person making that spent one half of a second making it.
And that is, it's pretty demoralizing, to be honest, to see that sort of thing.
But these platforms have a scale such that it's mind-boggling.
And so I think that
we can siphon people off and bring them to our platforms.
And I think that increasingly we need to reach people through group texts and word of mouth and newsletters and the Fediverse and decentralized social media that's not dominated by algorithms.
But when you're talking about how many people are on the internet,
it's pretty tough.
And then the other thing, and I know I said it'd be quick, but it wasn't.
People have their guard up about artificial intelligence now.
You know, my partner's aunt reads 404 Media, and she sent me some AI-generated content that she saw on Facebook the other day and said, I don't want to get tricked by this AI.
And what she sent me was a video of a girl drawing,
doing a painting.
And it was real.
It was like it was real.
This woman was like painstakingly documenting what she was doing.
And I think that beyond just flooding out everything else, it's like people don't want to get tricked by this stuff.
And so
they'll see real things and they will say, well, that's bullshit.
It's AI.
And that has become like a mechanism for saying, you're fake news or you're making shit up.
And I think that that is very, very alarming.
So
with the idea that
people are creating more and more with AI or companies or whatever, and then the AI is also training on that data, I'm assuming,
because
so there's going to be so much more AI data out there than there is actual data developed by humans.
From what I saw on the internet, and I don't believe that everything's true because, you know, AI,
it talked about how AI training on AI-generated images actually degrades the images when it spits out new generated images.
What are your ideas in regards to, or any research that's out there in regards to where that's headed with all of the proliferation of AI-generated data or generated content?
I mean, I think the companies know that this is a problem, right?
Like the big companies, Microsoft and Meta and the ones making these tools, they kind of recognize it.
And it scares them, I feel.
I mean, I...
Isn't this something that's a good idea?
There's a professor, Hany Farid, at UCLA or at Berkeley perhaps, who's done a paper about this.
And, you know, I have some hope that this, what does Jathan call it?
Habsburg AI, basically
human centipede AI, where it just degrades over time.
It's eating its own tail.
I mean, I think that it's a good thought.
And I think that it...
without the models being altered,
you know, it's something that is happening and could happen.
But at the same time, I think the AI companies realize that this is a problem.
And I believe the way that DeepSeek was trained, a lot of the training was done on data that it had itself generated.
And so that's one of the reasons it was able to be a bit more efficient.
And so
I don't think that is something that we can count on to save us, I guess is what I would say.
I'm watching it worse.
Yeah.
As much as I'd like to see it happen.
I mean, yeah, it's a nice thought, but that's why they're so desperate for human data.
They're desperate for your data.
They're desperate for your creative output.
The AI companies are hiring people to just sit in a cubicle and type out new material.
Yeah, just gobble it all up.
So
are you seeing a push toward companies trying to use AI to
curate your AI feeds?
So like the way that Google is searching the web and saying, okay, I'm going to pull the things that you want.
And do you see like somebody, an AI that wants to be your gateway to your Facebook feed and your Instagram feed and all that kind of stuff and pulling that one step closer to you.
Hmm.
No, don't give them ideas.
I mean, they have, I mean, that's, it exists now.
I mean, yeah, I wouldn't say that's the same thing; the generative AI is something a little bit different, but you've had algorithmically curated feeds for years and years and years now.
And that's kind of incidentally why Jason has a dummy account where he was actively looking for the AI slop, and then it just serves him more and more. But I do think, I'm glad you bring it up, because it does illustrate a good point: once there's enough of the stuff that is palatable enough to the user, and they click on some of it, then we can only assume that it's going to serve more and more, and then it's going to slowly sort of accelerate the process. Right? Or maybe not so slowly.
I'll do this side of the room and then come back.
We see you, we'll get to you. I can't see your face, though. You already started talking about this, but I was wondering how you see human-created content
adjusting to try and get more attention
and
overcome all the AI slop.
Yeah,
how humans are trying to adapt to the algorithm.
Yeah.
I mean, that's something that I think of with YouTube, right?
So like the shock stuff on YouTube, the MrBeasts of the world,
altering your thumbnail to make your face be like, you know,
that classic.
People have been kind of tweaking their own behaviors to appease the algorithm for a minute, which I think is dark-sided.
I don't like it, obviously.
Yeah, and one thing I bet we can expect to see more of is those demands changing more rapidly.
Because as the algorithm gets spammed, you know, if you can create a thousand images in a few minutes and then fulfill the current, you know, demands or requirements of that algorithm,
then users are going to get tired of that pretty quickly.
So it's going to have to shift.
So we might not just have a more AI slop-filled internet.
We might have a more volatile one where things are
broken or glitchy or weird more often too.
Can you pass the mic to someone else?
Or do I have it?
Someone back there is on the way.
Someone back there has a microphone.
I thought this was a mysterious fourth mic.
Please support independent media so next time we can have someone running the mic too.
All right.
I spent most of my career as a journalist, which means I'm an optimist that's been continuously disappointed by life.
And I also came up through the social media, you know, growth era, and personally believe that it's objectively made the world a worse place.
And I would like to poll you, and then maybe the audience, on a scale from it'll be okay to no, we're completely fucked.
Where do you feel that we are?
In regards specifically with generative AI or
with media in general?
Oh,
maybe we can convene another panel after this one
and I can hold forth.
No,
you know,
I am also always an optimist.
I am perennially an optimist, even as conditions I think are the worst that they've ever been for journalism in my lifetime, for American journalism.
I think it's just an absolute train wreck right now.
And, you know, before the LA Times, I was at Medium, where all the editorial workers were laid off in one fell swoop.
Before that, I was at Vice, which went bankrupt and is now owned in parts by a private equity firm.
So, and that's just, you know, most journalists working these days have similar horror stories.
And that's pretty reflective of the, you know, the trajectory of the industry at large.
And it's gotten to the point where, you know,
if the trajectory continues unabated
and we don't fight, we don't push for policy interventions, we don't push back against these countervailing trends, then yeah, I think we're in truly disastrous crisis mode right now.
And I think, for journalists,
one way to fight back is to do
what these fine folks have done and start an independent media company that can play by different rules and can adapt.
I think local news is in truly, truly dire, you know, death-spiral territory right now.
And that's where, you know, you need new tools.
And I...
Brian, did you see that one of the most popular local news outlets at this point is an AI generated newsletter that goes out to like thousands of towns in the United States?
Awesome.
Yeah,
I did see that.
And meanwhile, the fucked meter is now, we've changed it.
Yeah.
Meanwhile, a city of hundreds of thousands of people, Salinas in California, is recently down to one editor. Their newspaper has one working editor, no full-time reporters.
And you can just kind of imagine what's going on at the city council meetings, and what's going on at, you know, the local real estate firm, things that just nobody knows about now, because so many of these responsibilities have been abandoned. And, you know, we do have to be more critical of these tools.
Media companies have had a bad track record of pairing up and partnering with the tech companies that have been eating their lunch for the last two decades.
And in my humble opinion, the whole pivot to video mentality and constantly trying to play by the rules, forming new licensing deals or partnerships with the AI companies, it's just going to lead us further down this spiral.
So, you know,
it's time to fight.
So, yeah.
Yeah.
I mean, like, I think we can pull this up out of that.
I think we can get it.
I mean, you're like, you're 100% correct.
But I think, and this goes back to the question about like how are people adapting to algorithms.
I think you can either try to adapt to the algorithm and try to appease the algorithm, which is the SEO game,
pivoting to video, whatever, in journalism, or you can extract yourself from it
for the most part, which I think is what people have started to do.
It's what we're doing in a large way with our email newsletter.
We want to talk to people directly.
We don't want to talk to them through whatever algorithm decides to feed them our content.
We want to talk to them because they care about what we're writing about.
And we see it happen with like people like Brian writing Blood in the Machine and people getting directly to their readers without the SEO game, the Google algorithm game.
And I think that's a really hopeful thing. I mean, I feel more hopeful about journalism than ever right now, ironically, even though the industry is in total chaos, because people are saying, you know what, let's do it a different way.
I think we can do it differently.
Everything that Brian and Sam said, and then also it's just like, look around, there's you know, 150 people here, 200, I don't know, a lot of people here.
So, a lot of people care about this.
I think that you look at what Flipboard is doing, you look at what Ghost is doing, you look at the incredible energy around independent media, but also decentralized social media.
And I think we're on the right path.
We have some folks from Wikimedia here as well.
Like, one of the things that gives me more hope than anything is just this, perhaps the greatest collaborative project humankind has ever done.
And so, I think, yeah.
So I'm optimistic despite all the stuff I just said and showed you.
Hey, so, put aside the automatic viewers of the AI slop for a second. What types of groups do we see being instrumentalized?
Like, can you think of any more that I can't?
So, for example, very young people, right?
The children's content was one big one.
And now they're talking about older people are being iPad addicted as well.
And that's becoming an issue.
And then maybe in the middle, like hipsters and stuff using AI, you know, ironically, blowing up Vance's head or whatever, you know, but then being consumed by people who are not in on the joke is another one.
Are there any other, like, human segments that are amplifying the slop or using it for a different reason?
So the question is just, like, is there a demographic that's more responsible for amplifying this stuff? And it is a lot of old people on Facebook. Not exclusively, by any stretch, but I've scrolled through just like so, so, so, so many
viral
AI slop Facebook images.
And, you know, it's people who spend a lot of time on Facebook.
And I think that it's no surprise that Facebook has shown itself to be most susceptible to this in the early days because it's more popular with an older demographic, which is not to say that younger people don't fall for it or don't promote it.
But I do think that there's maybe an AI literacy aspect to this, or like a news literacy aspect to this.
And it's just like the internet is a really, really complicated place to navigate at this point.
And Facebook itself is a really bloated and tricky like UI at this point.
Like, I don't know if you've been on Facebook lately, but it is a mess just to use at all.
And so I'm not surprised that people can't navigate it.
But I also think that because Facebook has been around for so long,
it's almost like looking into the future for these other platforms because it has had a longer time to decay than the other social media platforms.
And I think that
that's not to say the others aren't susceptible to it.
I think they very much are.
Yeah.
I think that's a good place.
One more, one more.
One more.
One more.
And then we're going to take a break and we'll be back.
Two more.
We're going to take a break and set up
our merch table, probably, so you fine folks can make some purchases.
Hi, I'm a social media founder myself.
And
so I've seen a lot of evidence for how many social media accounts on mainstream platforms are bots.
And not just bots, but really convincing bots.
And so you talked a lot about the real people behind this AI slop, but I'd love to know more about
how you can even differentiate between those accounts, and who's real and who's not, just because they have become so convincing, even though these AI videos in many cases are not.
Do you want to ask your question too?
We'll get those last two or three.
Yeah.
Yeah, we're going to do two at once.
Yeah.
Mine is more of a share.
So I created the Wikipedia article on AI slop, and it could use updating, because that is how AI learns about AI slop, mostly through ingesting Wikipedia.
So if people want to have impact on a conversation, the article is there.
Thank you.
What was the first question?
I can't hear anything up here.
I don't know why.
The question was how can you recognize AI slop more or less?
I mean more than that, but
you're absolutely right about that,
and the accounts creating it and the accounts engaging with it.
I could talk about this for like a half hour, so I won't.
But
come find me after that.
Yeah, come find me after.
So
for a while, it was very easy to tell what was AI generated just because it looked bad.
And there's this saying in the AI world that this is the worst that it'll ever be because it's always improving.
And I think that it is.
I mean, it's getting harder and harder.
As someone who's looked at, like, honestly, hundreds of thousands of AI slop videos and images, it's like you can't just look at someone's hand and see if they have six fingers at this point.
It's like they've gotten a lot better.
There are some that still have watermarks on them, which I think suggests that people don't actually care if things are AI generated or not to some extent.
And then, as far as the accounts engaging with it,
I wrote an article about why I think a lot of them are real.
And it's because I would go into their Facebook profiles, see whether they were posting only about AI slop or whether they had posted like real images of themselves or had commented with like other accounts that seemed real, things like that.
That's obviously not doable at scale, but I was so obsessed with the idea of, like, are these people real or are they not real, that I spent a couple weeks just clicking on a bunch of comments on AI-generated content and trying to figure out whether the people were real or not.
And the answer is some of them are.
All right.
Well, all this talk of AI slop and the decline of journalism, I'm ready for a drink.
I think we can call it here.
Again, once again, 404 Media, they've done something really amazing.
Support independent journalism.
Subscribe to 404 Media.
And since you've made it to the end, we can reveal now that every single article they've written has been produced by AI.
So, thank you.
All right.
Open bar.
Come find us.
If you have more questions, say hi.
We're all friendly people.
Thanks so much.
Yeah, thank you.
It means the world that you're here.
Thank you.
If you like this panel, we're looking to do more of this sort of thing this year.
So, if you're throwing an event or have a conference or talk you're interested in having us come out to, reach out to me at jason at 404media.co.
See y'all next time.