Why We Cover AI the Way We Do

1h 3m
This is a re-upload that was previously only for paying subscribers! It gives a lot more context on how and why we cover AI the way we do. Subscribe at 404media.co for more bonus content like this. Here's the original description of the episode:

We got a lot of, let's say, feedback on some of our recent stories about artificial intelligence. One was about people using Bing's AI to create images of cartoon characters flying a plane into a pair of skyscrapers. Another was about 4chan using the same tech to quickly generate racist images. Here, we use that dialogue as a springboard to chat about why we cover AI the way we do, the purpose of journalism, and how that relates to AI and tech overall. This was fun, and let us know what you think. Definitely happy to do more of these sorts of discussions for our subscribers in the future.



Transcript

Hey, Joseph here.

We are on holiday, as I hope you are as well.

This is a re-upload of an episode we recorded over a year ago, I think, now, but it was just for paying subscribers.

And we just wanted to give listeners of our free feed a taste of what they can get if they do become a subscriber.

It's a discussion about how we cover artificial intelligence.

As I'm sure you've noticed, we cover it in a bit of a different way from other outlets.

While they may be focused on the companies and the technologies, we're focused on the harms and the use cases and everything else that's happening to and by humans right now.

So I hope you enjoy this episode and subscribe at 404media.co

for more bonus content as well.

Thank you.

Sorry, that music went on for way, way, way longer than I thought.

I hear it.

Why does the bonus feed music go so hard?

Well, we save that exactly for the paying subs.

Joseph here, the person you heard speaking was Sam.

We also have Emmanuel and Jason.

This is a subscribers only episode of the 404 Media podcast.

If you're listening to this, you're a sub.

And thank you for that.

We're going to get straight into it.

You know, we've published a couple of stories over the past week that started or rather contributed to a lot of, let's say, debate around images and artificial intelligence and journalism and tech companies' responsibilities and all of that.

You know, here we're just going to talk a bit more casually about what we think of that reaction, the questions it brings up, all of that sort of thing.

Before we get going, let me just provide two very quick summaries of the two stories we're basically going to be talking about and around.

The first is one that we just spoke about on the weekly pod.

You can go listen to it there as well.

But that's by Sam, and that's called Bing is Generating Images of SpongeBob Doing 9-11.

Basically, people took Bing's AI image generation, which typically stops you from using prompts such as World Trade Center, 9-11, that sort of thing, because Microsoft doesn't want that sort of stuff on its platform.

But people, including Sam, have found workarounds by doing prompts for, like, Kirby flying a commercial jetliner toward skyscrapers, all of that sort of thing.

And then the second is one written by Emmanuel, which was, 4chan uses Bing to flood the internet with racist images.

It is what it says on the tin: the members of 4chan, instead of using Photoshop or maybe, you know, more established tools, would go in and use Bing's AI to generate racist images.

And then they were telling, you know, their community members, go post this on Instagram or wherever.

Right.

And if I read that on a tin,

I'm not buying, I'm not buying the product inside.

No, I'm just saying it's accurate.

I'm not saying it's a good tin.

I know.

It's just like, for baked beans,

I want to know what's in it, you know?

So we publish those pieces.

And then there's some reaction.

I think all of you guys are more familiar with what the actual reaction was.

So I don't know, Emmanuel, do you want to, like, jump in and summarize maybe what the reaction was exactly?

Yeah.

So

I think first it's important to say that the story and in particular the tweets about the story went very viral.

A lot of people retweeted it, but notably Mike Solana, who is a venture capitalist with a huge following, retweeted it.

And then Elon Musk, the main character

on the platform, responded to him.

So it went mega, mega viral.

And when you go that viral on Twitter, as you all know, as anyone who has experienced it before knows,

it just becomes a mess that has a general shape that you recognize.

People make the point of the article to you, assuming that you don't understand it.

People

thought that we were

being critical of the images themselves.

That we were saying that the images themselves were reprehensible,

the SpongeBob image specifically.

Right.

And I don't think we care about some

joke, to be perfectly honest.

Right.

Yeah.

I mean, a funny image, like a great, great image.

I mean, I mean, it's 2023.

We're long past the time when 9-11 jokes were even mildly controversial.

You know, like,

or they're controversial, but I definitely, I mean, I saw it and I was like, wow, we should use that image because it's very

gravy and funny.

And then obviously just

the whole range of racism and anti-Semitism.

I don't know, Sam, if you got any heat for being a woman, but it's like, whatever you are, they kind of are mean to you in that specific way.

So that's, that's just the nature of going viral on Twitter.


But the response that is specific to this story, I think in order to understand it, we have to explain that

the story is

entering an ongoing debate in AI.

And I want to try and like map out roughly what the debate is.

True.

And

there is a spectrum where, on one end, there are people who are concerned with AI safety, and on the other end, there are people who are concerned with AI ethics.

That's how people in the field describe the debate.

I find that kind of confusing.

Those are confusing terms.

A better way to think about it is just open source AI development versus closed.

That's roughly how the debate maps.

And on one far end, like the very far end of the spectrum, are these

effective accelerationist AI people.

And accelerationism, if you're familiar with the idea, is

whatever is happening, you just want to let it happen, whether it's good or bad, right?

You just want to like pedal to the metal, go faster, reach the logical conclusion of the process.

Right.

People may be familiar with that term more in the context of, like, you know, extremism and that sort of thing, but that's not what you're saying here, really.

But it's just, it's a similar dynamic.

It is related, right?

So like the origin, I think, is actually like kind of like a lefty idea, which is capitalism is this process.

We're not going to see the end of capitalism until we rush through the process and reach the conclusion.

And then we can do something else, right?

A communist, socialist kind of alternative.

And then, as you say,

that dynamic was picked up by extremist people who are like, we have to...

Like, this is kind of like the Boogaloo Boys thing, right?

It's like, and

the Charles Manson thing as well.

It's like, there is racial strife.

There's political strife.

Let's have it out.

Let's have a race war and we'll reach the end of the current state of America.

And then we can have something new in their eyes, probably something like a fascist alternative, right?

In AI,

the effective accelerationist crowd, which

has like prominent members, like the head of Y Combinator.

Which is a startup incubator.

Yeah.

Yeah.

Yeah.

And people are unabashedly effective accelerationists.

They have, they have the e slash ACC tag in their Twitter usernames.

And they just think that

the only way through

the upheaval of AI

is through it.

And it's just like, we have to just like develop it.

No regulation, no restrictions.

Just let AI do its thing.

And on the other end of that is some sort of like AI utopia, right?

Because we're going to get all the benefits of it, all the benefits of AI.

We'll go through whatever turmoil society is going to go through, and then we're going to reach some AI utopia.

That's one end of the spectrum.

And the other very far end of the spectrum is

the AI alarmist crowd

who believe that some form of AGI is going to kill us all imminently.

And for that reason, we have to pause the development of AI.

We have to restrict the number of GPUs that are able to work on it, or ban it altogether.

And also, critically, like make open source development of AI illegal because it's too dangerous for people

to have access to this technology.

Now,

those are the extremes of each side of the spectrum.

In between them are like totally good, cogent arguments for either side.

So, for example,

like

some AI researchers that I talk to who study the data sets that the AI tools are trained on and are trying to find the bias in AI,

they say that it's much better to have open source AI

because

the outcomes might be bad, but at least we're able to see into the black box, understand what is happening, and try to mitigate it, rather than just one company or private industry or whatever being able to control it.

Right, and behind closed doors, right?

So it's like OpenAI is doing its thing.

It's doing bad things.

And we can't even tell you why or do anything about it.

So that's a totally good argument.

And then like a logical argument on the other side is the AI safety people who are like, people are doing harm with this right now.

Like actual harm is being done to actual people right now because the tools are open source.

And this is something that we've reported on, right?

Like Stable Diffusion, the text-to-image AI tool.

People are using it as we speak to create like deep fake pornography, non-consensually, of like everyone under the sun.

It just, it is happening.

Specific people who did not give their consent.

It's not we're going, oh no, they're making porn.

It's that they're targeting specific people.

Right.

So that's the argument.

That's the spectrum.

That's the debate.

That's the largest conversation that is happening.

And then we publish our story.

And I think the way people read it, despite

both Sam and I making very clearly the argument that it's like we don't know what the solution is, and we are not saying that

either side is correct, it's a very complicated question.

And I don't have an answer for it.

Like, I'm not going to give you

my take on which is the correct side of the argument because I don't know.

It's also almost definitely somewhere in the middle.

Oh, yeah, for sure.

And I think we will get to that also, because the conversation is really stupid because people just assume that it can only be one extreme or the other.

But people just assume that we are on a side that is kind of like wagging its finger and saying that AI tools have to be put behind some lock and key, and average people shouldn't be able to use it.

And then, you know, people trot out these metaphors, like, oh, would you ban the printing press?

Would you ban word processors?

Which is the tweet that Elon loved so much.

With his little crying, laughing emoji.

Yes.

His divorced emoji.

Yeah. Um, so yeah, so that's kind of the state of the debate, and that's how we entered it, and that's what people assumed about the stories, even though it's, I mean, it's simply not what we believe and not what we wrote. Sam, what do you think about some of the reaction? Did Emmanuel miss anything there, or what did you see from your end?

Yeah, I mean, I saw a lot of people, like Emmanuel said, wishing for the articles that we wrote,

you know, begging us to write what we already wrote or, you know, just saying, like, I wish someone would tackle this in a way that's more nuanced.

And it's like, you didn't read past the headline and you looked at the pictures.

And that's very obvious.

You know,

we did write quite a bit of nuance into these stories.

We always do.

You know, we don't just post 200-word, "look at this"-type blogs.

We almost always try to provide some kind of context into why it's important to write about these things and also,

you know, how it's affecting real people.

I think my, um, and I wrote about this a little bit in our newsletter, the Behind the Blog thing, but my thing is, these ways that these companies are using AI are impacting the way we use the internet. Like, moderation affects the way people behave. That's very obvious, good or bad. But, you know, AI is being used in moderation pretty much everywhere you look at this point.

Whether or not they call it AI or they call it, you know, just like machine learning or whatever they want to say it is.

It's AI going after people.

It's going after more AI-generated content.

In Bing's case, it's AI going after AI.

While Microsoft is kind of like some kind of third party with their hands up being like, we don't know, there's the users and then there's the moderation tools.

And they're just like, we don't know what's going on.

Or, you know, that's what I kind of inferred from their non-response to me about user safety.

But

yeah, you can't say sex on TikTok without getting banned.

So people started saying segs with two G's.

Now that's a thing people say out loud.

Crazy.

You can't say suicide on TikTok.

So

people say S-E-G-G-S.

Yeah, like to be funny.

Okay.

Yeah, like ironically, but it's like you're explaining to me what people do to be funny.

I mean, it is to be funny, but it's also to avoid the algorithmic moderation.

Yeah, people say unalive instead of suicide.

That's the thing people say out loud.

It's not just like they're avoiding like written bans.

They're avoiding the AI scraping or like reading, listening to them without, you know, considering context and then banning them outright.

And I think that's kind of the frustrating thing is like people just don't know what is going to get banned and what's not.

And the way that these rules are applied is so uneven

that who knows?

It's like Emmanuel said in his piece: you can't generate a nipple on Bing, but you can generate massive amounts of horribly racist shit.

Something is wrong there.

Like, I don't think it's a controversial statement that that's a lopsided view of moderating your own platform.

And this is one of the biggest companies.

This is like the second biggest company in the world.

It has a huge vested interest in brand safety.

And it's dropping the ball massively because they wanted to roll out a shiny new toy.

You know, I mean, there was a lot of just, like, the word processor thing. It's one of those things that almost makes sense until you think about it for three seconds.

And then you're like, oh, this is a dumb argument to make just to dismiss somebody else's actual concerns.

So

on the nipple thing, just very briefly, like,

even if you don't take, again, I think the very justifiable position of like, well, these priorities are a bit out of whack, right?

You don't allow the generation of a nipple, but you will allow the generation of racist shit.

You know, look, even if you put that aside and you don't even put a value judgment on it, you can just be like Microsoft at some point made a decision that allowed this, you know, and like, maybe it was an accident by Microsoft.

Maybe it was a deliberate policy where they have a slide deck hidden in a vault deep inside Microsoft HQ where they're like, hey, we're going to get rid of the nipples or whatever.

Obviously, I don't think it's that.

I think it's more messy and probably like, you know, a lack of foresight on how these tools are going to be used.

But even at minimum, if you don't do the value judgment, it shows that decisions have been made that are impacting the output of this AI.

You know, and that's at a bare minimum.

Jason, you were going to say something.

I was just going to say,

both Sam and Emmanuel are geniuses and just said very smart things.

One thing I do want to say is

we're doing this podcast on the paid feed for a few different reasons. I think one of the reasons, though, is

a lot of the people who are criticizing us and who are jumping into this conversation, they're very loud and they're influential and they're famous, and we can talk about that in a minute. But it's like, I am certain that tens of thousands of people saw our articles and were like, that's fucked up, or, that's interesting, and then thought it was a good article and read it and continued on their way.

And I don't think that we want to get into a position where we are

defending ourselves against the loudest voices in the room every single time, like anyone criticizes anything.

Because

as Emmanuel said, anytime you go viral, there's going to be some criticism, but like it goes viral for a reason.

And that reason is because

people think it's interesting.

Like most times.

Like, we've been doing this for a long time.

And most of the time that we have something go viral, it's usually our best stuff or when we've hit on something like a hot button issue or something that people are talking about.

And

I am certain that the vast majority of people read these articles and were like,

Good article, thanks for doing it.

But then there was a small number of people who were like,

You're everything that's wrong with journalism.

This is terrible.

You want to censor, you want to censor us, blah, blah, blah, et cetera, et cetera.

And I think because some of the people who were making that argument

have like giant megaphones, it can seem like that's what the vast majority of people think.

But I want to like

state for a fact, I don't think that

most people are going to see this article about SpongeBob flying into the World Trade Center and think

404 media is trying to censor Microsoft and the creative tools being used here.

Like, I just don't think that that's the dominant position.

And so, we didn't want to spend an hour talking about it in our main podcast, but it is an interesting debate, and we're happy to have it.

It's a springboard for us to talk about some stuff that we don't necessarily always get to talk about, which is, you know, more behind-the-scenes stuff that our subscribers can listen to.

Yeah.

I mean, I think you just mentioned something about how, you know, lots of people will read it and just go on with their day, while others will, like, almost read a motivation into it.

And look, again, I didn't write these two stories, which is why I'm kind of like letting you guys just like more talk it out.

And you're a lot smarter than me on it.

But I'm super interested in the journalism side of things.

And that made me just think that, you know, sometimes I would publish a piece and people would be like, this isn't a scandal.

Or they go on the assumption that you can only write about something if it's a scandal, which is ridiculous.

You can just write about something because it's interesting.

You can write about something because it incrementally provides more information.

This, I don't think it's a scandal, you know, but I think it's incredibly interesting that Microsoft is allowing these things, even when it actually goes specifically against their policies.

And just those.

Joe, can I just like refine that?

I agree with that.

Let's put scandal aside.

Like, I'm not making any value judgment about scandal or not scandal.

Microsoft is one of the biggest companies in the world.

It employs some of the most talented computer scientists in the world.

It is a huge

center of technological power.

And

when it deploys something,

it has the ability to make

millions of people take notice of it and use it.

You earlier talked about Microsoft making choices.

And I think what is interesting and I think what is important is that

what Sam's story showed, and I think what the 4chan people have abused is that

it is not making choices as much as Microsoft thinks it is making choices.

It is deploying the tools and it has written all these rules for how the tools can be used,

but they don't work, right?

It's like even according to their own policies.

Yeah.

Their own policies don't work.

And something that I really liked about Sam's story, like a really good observation, is that it's like essentially

there's this new term that appeared in the past year, prompt engineer, right?

And people are prompt engineering the tool to do things that Microsoft doesn't want it to do.

Whether that is a scandal or not, you decide, dear reader, you know?

But this giant, powerful company deployed this thing that it can no longer control.

That to me is the interesting thing.

And as Sam pointed out, it is not clear that they will ever be able to control it, because while we can teach AI to understand language, it's not clear that we can make it understand cultural context, which is how people are hacking

the tool, right?

And it's already out there.

So it's like humans outsmarting the machines with

cultural context.

Yeah, it's true.

I mean, that is what's happening.

Yeah.

Let me just super briefly end that point, because yeah, the scandal thing isn't coming from me.

That's just what people have told me.

But it's more just that people assume that when you choose something to cover, you're taking a side.

And that's not what it is.

It's that we think there is a public interest in writing about that.

Sometimes the public interest is very high when we publish an investigation.

Sometimes I would call it a normal amount, which is the case here, where we were like, hey, this doesn't match Microsoft's own policies.

Like, that's kind of wild.

And sometimes it's lower, but still very beneficial. I don't know, you know, Jason often gets these really entertaining, funny FOIAs about police officers doing stupid things. And don't get me wrong, they're funny, but they also show, you know, maybe some systemic issue with the police or something like that. When you write an article, you're not going out to be like, heh heh, I'm gonna use this as ammo for my side. It's like, there's a public interest there, and that's almost the start and the end of it. And I think people forget that we're not going out trying to censor the AI.

You know, we're going out because there's a public interest in highlighting this information.

Well, people say it about any article you do, no matter what.

That's why I'm, like, making this point super clearly, because I'm tired of this shit.

Like the delivery robots story I did, where this delivery robot company was giving footage to the Los Angeles Police Department.

Vast majority of response to that was like, wow, I didn't know this.

Thank you for writing this.

This is interesting.

I see these robots all the time.

There are so many people, though, who were like,

you didn't know that.

Like this is, how is this news?

How is this new?

And I'm like, it's new because literally it was unknown.

Like, here's the specific case. Before, we could only assume or infer that, yes, there's cameras on these things, and maybe the LAPD would want the footage.

Now we're showing that it's happened.

And there's this huge contingent of people being like, oh, you didn't assume that, you know, if there is a camera, the LAPD is getting it?

And it's like, you could guess, yeah. Like, that's why I FOIAed it. I FOIAed it because I thought maybe this is happening, but now we're showing the public that it's happening. Yeah. And it's like, I'm very sorry that you personally do not feel that this was worth writing about, but lots of people did. Like, lots of people found new information from it.

And I'm sorry if we write similar, like, as you said, incremental-type stuff.

Sometimes people don't get it until the 30th time you write about it.

Sorry, maybe they didn't see the other 25 times that someone wrote about ring cameras and police or whatever.

And that's fine.

Not everyone consumes everything all the time.

I don't know what they want you to write.

Like,

that blog would just be like, I think that the cameras on the robots are watching me. We need to prove some things sometimes.

And the way to do that is like getting the actual documents or seeing other people do it in the wild or, you know, like, we can't just make shit up.

We can, but like, it's our website.

We can, but like, our lawyer wouldn't like that very much.

Yeah.

As someone who's helping to run a startup, we've used all kinds of software and things quickly get siloed and confusing.

Then we found Coda.

Rely on Coda to keep your team on the same page by bringing the best of documents, spreadsheets, and apps into a single platform.

Coda helps you collaborate by centralizing all of your processes and shared knowledge in one place.

It cleans up the clutter with its all-in-one solution that replaces a host of siloed productivity tools.

With Coda, you can stay aligned by managing your planning cycles in one location.

Set and measure OKRs with full visibility across teams, communicate and collaborate on documents, roadmaps, and tables instantly, and access hundreds of templates to get inspired by others in Coda's gallery.

Coda empowers your startup to strategize, plan, and track goals effectively.

Take advantage of this limited time offer just for startups.

Go to coda.io/404media today and get six free months of the team plan.

That's coda.io/404media to get started for free and get six free months of the team plan.

Coda.io/404media.

This holiday season, give a gift that lasts a lifetime.

It's Masterclass.

With Masterclass, your loved ones can learn from the best to become their best.

Masterclass is the only streaming platform where you can learn and grow with over 200 of the world's best.

That's why Wirecutter calls it, quote, an invaluable gift.

I'm trying to get a head start on my New Year's resolutions.

That means I'm learning how to eat intentionally from Michael Pollan, fitness and wellness fundamentals from Joe Holder, and yoga foundations from Donna Farhi.

I do a lot of my learning while I'm out and about, so I appreciate that I can take a masterclass on my TV, but I can also do an audio-only version while I walk my dog or exercise, or watch these classes from my phone when I'm at the doctor's office.

Plus, there's no risk.

Every new membership comes with a 30-day money-back guarantee.

Give your loved ones a year of learning with Masterclass.

Masterclass always has great offers during the holidays, sometimes as much as 50% off.

Head over to masterclass.com/404pod for the current offer.

That's up to 50% off at masterclass.com/404pod.

Masterclass.com/404pod.

Let's face it, after a night with drinks, I don't bounce back the next day like I used to.

I have to make a choice.

I can either have a great night or a great next day.

That is, until I found pre-alcohol.

Z-Biotics Pre-Alcohol Probiotic Drink is the world's first genetically engineered probiotic.

It was invented by PhD scientists to tackle rough mornings after drinking.

Here's how it works.

When you drink, alcohol gets converted into a toxic byproduct in the gut.

It's this byproduct, not dehydration, that's to blame for your rough next day.

Pre-alcohol produces an enzyme to break this byproduct down.

Just remember to make pre-alcohol your first drink of the night, drink responsibly, and you'll feel your best tomorrow.

I love seeing family over the holidays, but it also means that I drink a little bit more than I'm used to.

I've been taking pre-alcohol before some holiday parties, and I definitely noticed the difference the next day.

Even after a night out, I can confidently plan to live my life like normal, because I wake up and I feel good and I just get on with my day.

With the holiday season upon us, I know I'm going to be consuming a bit more alcohol than usual.

With pre-alcohol, I can stay on track and not let the season throw me off course.

Go to zbiotics.com/404media to learn more and get 15% off your first order when you use code 404media at checkout.

ZBiotics is backed with a 100% money-back guarantee, so if you're unsatisfied for any reason, they'll refund your money, no questions asked.

Remember to head to zbiotics.com/404media and use the code 404media at checkout for 15% off.

Let's bring it back around to AI a little bit, rather than the, here's 10 years of stored beef against people.

And I don't beef anymore.

I'm Namaste.

I'm Zen.

I can't wait till the next time you beef, and I'm going to take this clip.

No, absolutely not.

No, it's not going to happen.

So,

one

reader who wrote in was Tim Sweeney, founder and CEO of Epic Games.

You know, they make Fortnite, all that.

They have the Epic Games launcher competitor to Steam.

They have a lot of money.

Emmanuel can maybe explain in a minute what they do in AI, because I'm a little bit unfamiliar with that.

But let me read out

Tim's Twitter DM to me after he read some of this coverage, because again, it's a useful springboard to talk about some issues.

He says, the latest 404 AI articles have been irking me, so I thought about why.

At the core, they are written to be inflammatory while taking an implicit but not clearly articulated editorial position about who is to be considered morally responsible for content created by a human using AI prompting.

And then he provides some positions they could take.

The first would be, you know, it's the human who prompts the AI is responsible.

It's the creator of the AI, or maybe it's the government through regulation.

His message then continues: without clearly taking an editorial position, it feels like the debate is playing out in the Twitter court of bad analogies, capitalized.

And he says he would love to see a 404 editorial on the topic of responsibility attribution, which seriously weighs the implications of those three positions he laid out.

And he says this might drive some productive discourse about principles rather than Microsoft Word comparisons and the like.

I mean, what do we think of that?

The first thing I'll just say is that I don't know if it's necessarily our responsibility to always take an editorial position.

I mean, I know that maybe you guys'

articles are a little more voicy than mine.

I am like very,

you know,

almost boring to the point where I like don't really take a position beyond just publishing information.

And I would certainly disagree that your pieces are inflammatory.

But, you know, what do we think, maybe Emmanuel, with that message

and maybe that question?

What do you make of that?

I guess a few things.

First of all, I'm sorry, but I have to say this just in case people listen and they're like, why didn't you mention this?

But Tim Sweeney, I'm like a big video game nerd and like a history nerd, particularly this era that he came out of, and I have been familiar with him for many, many years and, like, played his games and used his products and all that stuff.

However, Tim Sweeney is a very rich man.

This is a very wealthy company.

He just fired 16% of his workforce despite being incredibly rich.

He acquired Bandcamp last year and has dumped it on this other company, Songtradr, just like a couple of weeks ago.

Those people don't know what their future is.

They were in negotiations for their first union contract.

They're locked out of their computers.

It's something that I've been trying to report on.

Just want readers to know that I'm aware of this context for this person specifically.

I think, first of all, also,

thank you for like writing a thoughtful

question.

You know, it's just like he took the time, he clearly read the articles and he articulated a position and question.

And I really respect that.

And I want to let readers know that, as I say in my Twitter bio, I read every email that I get.

And if you write me like a considerate email, even if you disagree with me, I will probably

engage with you.

Just I'm saying this is a good response, right?

Even if we disagree.

To the content of the response, I think it's really interesting that

people constantly talk about how they want unbiased reporting.

They want factual reporting.

But then at the same time,

what Tim is saying is that we should tell him what we think and also what he should think.

And I just don't think that's my job.

Like, I'm happy to engage with the questions here.

I will.

But I just want to note that it's like, all we did is report about something that is happening in the world and present it to the world.

And obviously, the world found it to be very controversial and reacted to it with like a variety of emotions.

And Tim is like, You're making me feel these emotions.

That makes me uncomfortable.

Please tell me what I should feel so I can be calm.

And it's like, I'm sorry, but it's like not a job requirement for me.

You know, it's just like, I'm okay with you feeling uncomfortable.

And perhaps maybe that is the point, right?

It's like we wrote an article that is making you kind of like consider these questions and be uncomfortable and realize that you're uncomfortable and uncertain what you should think.

And like we as a society probably like need to think about these issues and come up with solutions.

I don't have the solutions.

Sorry, that's just like my first reaction to it.

Sam or Jason, do you want to?

No, I think that's right.

I mean, I think that's, I also, I felt the same way about when I saw this message.

And Joseph shared it.

Like,

I didn't take it as, like, hostile. It was definitely a respectful message, but it was also like, you're asking for something that we can't do, we're not going to do, and we don't have any desire to do, which is to tell you how to think or how to feel about these things.

Um, a lot of people reacted that way to the SpongeBob thing in particular.

It's like because it is a pretty neutral image out of context.

And I think it's, you know, like that is just like the process of thinking through these things is doing them sometimes.

Like the process of thinking through

what Microsoft is doing or what generative AI is doing to the world is maybe just making a bunch of like

weird images of mpreg Sonic flying a plane toward New York City.

I don't know.

It's an interesting kind of thought experiment.

And it's also something people were doing in the real world that we,

our job is to report on that kind of thing.

So yeah, I think that's a really good response, Emmanuel.

I think that if you applied this to a different

subject,

what he

is asking, or, like, the sort of underlying thing here, in my opinion, is like,

let's imagine this article is about climate change.

Like, let's just imagine it was about climate change.

It could be like, the Arctic melted, a historic amount of ice fell into the ocean this year. Very bad. Actually, no, not "very bad." Just: a historic amount of ice fell into the ocean.

Then we go talk to a scientist who says that it's bad, and we talk to a scientist who says that actually it's good, or that it's neutral, or that it doesn't matter. And then you can read the article and you can make up your own mind, but you're presented two different ways to think. And that is largely how journalism has functioned for the last, like, 50 to 60 years. Before that, there was actually, like,

more of a slant.

Like I took a journalism history class.

So I know that it has not always been this both-sidesing of everything.

But the version of this article is like, you go talk to an AI safety person and you go talk to an AI ethics person.

One person says this is good and that you should be able to generate anything you want on Bing.

And the other person says, this is bad and it's dangerous.

And then you can like come up with an idea.

I think that we write about these things so often that we do get experts to weigh in on things all the time, and we will continue to get experts to weigh in on things.

But I also don't think that it's necessary for every article.

And

I guess to be fair to Tim, he's asking us to do that and then to tell him

which side wins.

And one, I think it's complicated.

And two, I think that people can see the image and see what we're writing about and think about it for two seconds and come to their own

decision and come to their own thoughts and feelings.

And I think that SpongeBob 9-11 is one thing.

I think that the follow-up, in which people are using the same tool to create super racist, super anti-Semitic stuff and then are spamming it to people on

social media that is not super well moderated.

It's like, that's bad.

I feel comfortable saying it's bad when 4chan makes content that dehumanizes Black people and that, you know, is super anti-Semitic, et cetera, et cetera. And when Microsoft enables that, it's like, it's not great. It's probably not what Microsoft intended. And I think that an intelligent reader,

I think Tim is intelligent, but I think also he should think about it for two seconds. It's not good. And I don't think that we need to talk to someone who can say

these images that make Black people look like animals are bad. Like, that was the intention of many of the images that 4chan was generating. And then the way that they're using them, the context, was also to harass, and in a very hateful manner.

And I think that we've written enough articles where I think our stance

that

hate is bad is not controversial and doesn't need to be spelled out at length in every article.

Yeah, like,

I'm going to talk very generally, and I feel like this is a fair characterization of the position, but I think it's a stupid position.

So, you know, maybe I'm going to miss a little bit here.

But and to be clear, I'm not, this isn't responding to Tim's question.

This is a more general point.

But it's when people say they want unbiased media, like we were mentioning earlier.

Yes.

And as you say, Jason, you have a climate change example of one person say good, one person say bad, yay, balanced journalism.

Look, come on, one of those people is full of fucking shit.

They are wrong.

It is a disservice to the reader to present both as equal positions, in the same way that it would be a disservice to your reader to be like, well, you know, Russia invaded Ukraine, but Ukraine, you know, maybe deserved it, or some other ludicrous position.

There's very clearly a good guy and very clearly a fucking bad guy, right?

And

that obviously also applies to people who are generating racist imagery with AI.

Like, it would be a disservice to even float the possibility that maybe racism in AI is actually good.

No, fuck that.

You're leading to an article that's actually going to make the person less informed.

It's going to make it more muddy, that sort of thing.

Well, the literal argument is that in a word processor, you can write racist things, or with your pen and paper, you can write racist things.

And yet we don't ban pen and paper and we don't ban the word processor.

So why would we ban?

Why would we restrict what AI can do?

Like that is what, that's literally Elon Musk and Mike Solana's argument.

Right.

Which does feed into what Tim's question more specifically is actually about, like the positions, right?

Which is, you know, is it the human who prompts the AI who's responsible?

Is it the creator of the AI?

I guess that's Microsoft, like in this case, or is it, you know, the government that should come in and regulate it?

We don't pretend to have an answer.

And we're not going to claim to have an answer.

But definitely in the pieces, it comes through that we are approaching this, even implicitly, from the frame that, well, this is Microsoft's tool, but crucially, it is violating its own terms.

You know, if Microsoft didn't have those terms, maybe the piece would actually be framed differently.

But it's approached through that lens of Bing's policies because Bing put the policies there.

And that's the entire public interest in this.

And it's highlighting that those policies aren't working, right?

If it was, I don't know, if we were more researchers focused purely on racism, or we were looking at research that showed, I don't know, people felt more attacked based on their race through a hyper-realistic AI-generated image than what somebody drew on a Post-it note.

You know, that's not the best example.

But then that would be the frame.

And what I'm trying to get to is that we're not going to pretend to have an answer, but I think the frame of an article does come from the facts of what's actually happening.

And we're going out and finding what those facts are, which is this tool doesn't gel with Microsoft's policies.

I also don't want to provide an answer, but I want to engage directly with the three options that Tim is offering.

So it's, is it the person who is using the tool?

Is it the creator of the tool?

Or is it the government who is responsible for kind of like controlling the output?

I don't know.

But I do want to highlight that

one of the responses that I've seen a lot is,

well, do you want the government to tell you what you can do with a drill?

Right, and the analogy is like, one is a tool, the other is a tool, and I can do whatever I want with the drill, and I can't do whatever I want with an AI tool.

That's not fair, it's not good, blah, blah, blah.

Like I said, it's like

Tim Sweeney, incredibly accomplished human being.

Elon Musk, you know what I mean?

Mike Solana,

not a fan personally, but it's like they're not like random people off the street.

Like they work in technology, they understand how it works, they had a lot of success there.

Some other people that I've seen comment on this, or, like, have this sentiment, one who is very notable, I thought, is Yann LeCun.

I hope I'm pronouncing that correctly, but he is a professor at NYU.

He is Meta's chief AI scientist.

It's like, these are some of the most accomplished computer scientists living.

You know, Yann LeCun, it's like, if AI is going to be as useful, as important to society as some people think,

then it's like he's probably in line for a Nobel, right?

Like, he already has a Turing Award.

Like, I'm just trying to say, very, very, very smart computer scientist.

That doesn't mean that they are like experts on everything.

And I think that becomes clear when you actually examine this drill analogy, or this pen and paper analogy, or this word processor analogy.

And it's that those things are actually regulated, right? Like, take a drill. If you operate a drill, depending on where you're operating it, it has already gone through a bunch of safety regulations. If you're working on a professional site, then you're under OSHA regulations about how you're operating a drill. There's a very complex web of regulation that impacts most of what we do in the world.

And that is a huge part of what civilization is: we have these conversations and we use different

regulatory bodies and organizations to make rules.

And it's not clear that the people who are making this argument know that.

It's like we are constantly swimming through

this huge sea of regulations.

And it's not always top of mind, but it's there.

And I think either like some people genuinely don't understand that.

And I think some people, if you zoom out, it just comes back to this

perspective in the tech industry and in Silicon Valley, where it's just like they're just expressing a libertarian tendency, right?

It's just like, we make that technology, don't tell us what to do.

This will make the world better.

The more you try to control us, the worse it will be.

And that is the actual

opinion that's being expressed and doesn't really have to do with what it would be best for AI specifically.

I guess if you think about the three options, it's like, the preferable option is that the user doesn't make racist imagery, which relies on the thinking that racism, hate, violence, et cetera, will be stamped out of society and civilization through some sort of collective fixing of human nature that happens naturally. It's a false premise. That one is not one of the three options, it's just not, because that one relies on a systemic changing of human nature. It would be nice, but it's not going to happen. And unfortunately, and rather depressingly, it seems like the least likely of all of them.

Well, I mean, it's the gun debate, where it's like, oh, the preferable one is that people stop shooting each other. Like, that's the preferable option, right? But it's not gonna happen.

Yeah.

Yeah.

I think just really briefly, we'll, we'll do a couple more things for Sam and then maybe Emmanuel.

I mean, I almost feel like it's useful not to spell it out super long or super explicitly, but just, you know, to say a little bit out loud.

I mean, what are your own motivations when it comes to covering AI tech in the way we do?

You know, you'll have other publications like, I don't know, Fortune, Forbes, whoever, covering, you know, the VC side, the capital side, totally makes sense for their audience.

For us, we're going in very much on the ground floor, looking at sometimes abuse, sometimes funny shit that people are doing with it, and then showing a broader, you know,

a broader importance for that activity there.

Sam, I mean, what is your like own motivation for covering AI tech in the way we do and maybe even this story?

Yeah.

I think someone on Twitter said something that made me laugh.

It was kind of tongue-in-cheek, but kind of true.

They were like,

the journalistic urge to just post funny shit is the beginning and end of this cycle.

Which, yeah, that's obviously, it's like you see funny shit happening online.

We cover online culture and technology.

Let's unpack what that is.

So

I do think that's obviously facetious, but looking at what people are doing with the tech, I don't know another way to do it.

It's like, these are the things that confound the VCs.

So, if you're just writing about what the businesses are doing and the VCs are doing and what their movements are in the investor world, that obviously affects a lot of things.

And it's very important to write about those things.

But if no one's writing about the crazy, weird shit that people are actually doing with the technology, we're missing a lot.

I wrote about the chatbots, the sex chatbots, the erotic roleplay chatbots that are jailbreaking and getting around filters for large

language models.

And that to me was a really interesting and fascinating and kind of like inspiring story to write because it was people taking

something that companies and corporations had blown tons of money on, not blown, but you know, they invested a lot of money. They invested millions of dollars in the development of these models.

And then people immediately were like, if you're not going to let me do horny shit on this, I'm going to go do it myself.

Stop me.

Right.

They can't.

And I think that's really where, that's where the fun stuff is.

That's where the scary and horrible stuff is.

And if you're not following that very ground level reporting,

yeah, you're missing a whole world of stuff going on that the VCs and the tech world are not paying attention to, obviously.

Yes.

Emmanuel, what about you?

Like, what is your motivation for coming in and covering AI, like in the way we do, or like from the angle that we do?

I think I said this a few times when we launched 404, but I'm very interested in how people use technology in unexpected ways, in the ways people subvert technology.

And

I can't think of a more perfect example

than Sam writing a story one day

about people subverting Bing Image Creator and saying, wait, Microsoft is not as in control of this as Microsoft thinks.

And then less than 24 hours later, somebody sends me an email and he's like, Hey, the same thing that Sam says is happening in a kind of funny, harmless way is actually being used in an extremely harmful way.

And it's just like, that's basically the thesis for

my approach to AI: people using it in unexpected ways.

Yeah, sure.

Jason, I think you had a couple of thoughts.

Yeah, I want to engage very quickly with what I think is the dumbest argument.

I'm very sorry, but I saw it,

which is like a lot of people are like mainstream media has fallen so far.

Like, they're generating outrage by making SpongeBob fly into the World Trade Center.

There's so much more important stuff happening.

One,

fuck you, like, straight up. We can write about anything we want. We're not mainstream media.

We're literally not.

Yeah, we're literally not that. We are not that. And it was a broader critique, but it's like, we're people who have a website that we made ourselves, we have no bosses, so we can write about whatever we want. That is why we did this, so that we can write about whatever we want, and you don't have to like it, that's fine. But the argument that goes hand in hand with this is, they're saying you wouldn't censor the word processor or the typewriter.

So why are you going to censor AI?

And the argument is that you shouldn't write about this stuff.

Like this is inherently a pro-censorship view.

It's like anyone can write anything they want on the word processor, except for these journalists who are writing something that I don't like as a billionaire.

And it's upset me as a billionaire.

I'm very upset that they wrote this.

It's like, it's stupid.

It's very dumb.

And it makes me, it's like, it makes us mad.

It makes me mad at least.

The other thing, and very quickly, it's like people are saying, what's the big deal?

SpongeBob is flying into the World Trade Center.

Kirby is flying into the World Trade Center.

I've written articles about Nintendo suing a high school student for throwing a Pokemon card tournament at his like local internet cafe.

It's like these companies do not have any sense of humor when it comes to their trademarks and their copyright and their intellectual property.

And because Microsoft is so big and so powerful and whatever, like I kind of doubt that Nintendo is going to sue Bing for allowing people to fly Mario and Kirby into the World Trade Center.

But let's say I drew Mario flying into the World Trade Center, and then I wanted to throw a party about it and charge $5 admission for people to look at my art of Mario flying into the World Trade Center.

It's like I would probably get a letter from Nintendo's lawyers.

We've written about Nintendo.

Should we put it on the merch store?

Yeah, exactly.

It's like, in what world?

Make it.

I don't care.

It doesn't offend me.

But someone go ask Shigeru Miyamoto.

Miyamoto, Miyamoto.

I don't know.

Sorry, very sorry.

I'll get to it.

Go ask him.

And we did ask them.

And then they're like, oh, you're snitching to Nintendo.

And it's like,

we wrote an article about a guy who lives in a shack in the Dominican Republic who owes Nintendo $6 million.

And it's like, he's fucked.

He's, like, going to live in poverty until he dies, because Nintendo sued him through the feds and took more pennies than he will ever make in his entire life.

It's like this, it's crazy.

These people care.

Like Nintendo cares.

Right.

They've also definitely seen it.

Yes.

Yeah.

You know,

we don't have to go deliver it to them.

Some alarm went off in their office immediately.

Sorry.

And the reason that we would deliver it is because there is just a public interest in knowing how massive companies are going to react to one another when this technology proliferates.

That's what we want to know.

Emmanuel, you were going to say something.

Just to add to what you said, Joe, it just like it'd be useful for readers to know if they're going to get sued, if they make an image of Mario using Bing AI.

Are they going to get sued?

Is Microsoft going to get sued?

Are both of them going to get sued?

Or is no one going to get sued?

It's like, that's a good question.

No, that is part of what Tim is asking, right?

Sorry, just

the same people who are getting very mad and are like, don't write about this, they're all also heavily invested in a specific outcome.

Elon Musk has an AI company.

Mike Solana, I'm sorry, I'm not read up on his portfolio, but I assume he has a bunch of dumb AI companies.

John Carmack just posted a very long tweet, like a way longer tweet than you should be able to tweet about how we have to fight for open source AI right now, or it's like we'll never win it back, blah, blah, blah.

And it's like, he might be right.

I'm not, maybe that is actually the correct path, but he also has an AI company.

It's like all these people are heavily invested in a specific outcome.

It's not that they're just like philosophers and here like, here's what's best for society.

It's just, they're making the argument that is going to make them rich.

Yeah.

When you think about it, Socrates, Plato, Nietzsche,

they didn't have a substack, you know?

So, like, I mean, what, and now what would it be like if they did anyway?

I don't know.

Well, the philosophical argument for AI acceleration, very briefly, there's a shorthand for it, which is called, like, AI-fueled techno-luxury communism.

And it's this idea that AI will make everything and it will do all the jobs and we'll have a universal basic income and no one will have to work and we'll all live in luxury and we'll all like profit from this.

What

real world events have occurred that make anyone think that this is the likely outcome of super intelligent AI?

Like, when have any of these rich people who have ushered this in fought for a stronger welfare state?

Right.

Like, that's the hope, that's the dream, nominally, philosophically, but it's like there's nothing to suggest that's actually going to happen.

Yeah, totally.

Uh, I think for the last question, just because somebody did ask me this

specifically on Bluesky, I don't want to put either Emmanuel or Sam on the spot, but maybe you have an idea on this.

Uh, they say that, you know, our site's articles on non-consensual AI images feature a lot of, you know, Discord servers, that sort of thing.

They ask, isn't non-consensual sexual images against Discord's terms of service?

And what are they doing about it?

Like, I don't know if you know the TOS off the top of your head, but maybe you do.

Okay, what is it and what are they doing about that?

It is against the TOS.

It is.

Deepfakes are against the terms of service, as of when we asked them if they were, in 2020 or whenever it was, 2018, 2019, when deepfakes were a thing.

Yeah, it's against it for sure.

Discord will take action on things when it rises to their level of awareness, which a lot of the time is through journalism.

So Discord doesn't know what's going on a lot of the time in a lot of these closed communities because they're closed and they rely on moderators to keep

people within the guidelines because it's in their interest to do so.

Like it's very much...

in their interest to keep their own community in line with what Discord wants because they're risking their entire server getting shut down and banned.

That's the short answer.

I mean, that's what they do about it, is they do something about it when they become aware of it, which is

rare.

When we tell them about it,

when we send in the link directly and ask them directly, I guess it's not enough for us to publish the story, which is the case on a lot of platforms.

It's like, I don't, I don't always hit them up.

Discord is my personal Anom.

It's my

app that I keep out there to keep track of all these dirt bags.

You know what I mean?

Yeah.

All right.

It's also not our, you know, we're not, we're not mods.

No, no.

And let me, let me just close with that.

In that, sort of similar to the, oh, why are you telling Nintendo thing: for years I was, well, I still am, hesitant to provide, you know, if I'm reporting on YouTube or Facebook or Twitter or whoever, I don't always want to give them the specific link to the material, because it is not my job to moderate for them.

In fact, whether they can find it without my assistance is part of the story, especially if we're approaching it through a content moderation frame.

When I will provide it is when

typically they sort of need that to provide an informed statement.

So I don't have a great example off the top of my head, but let's say I go and ping them about some sort of abuse and they will only be able to sort of make sense of what the hell I'm actually fucking talking about if they have a link to the specific thing and they can provide a statement.

So, more often than not, that is why we sometimes provide links and material.

I think that would apply there.

All right, with that, thank you all for listening to the subscribers-only podcast.

Please feel free to leave a five-star review on the free version.

You actually can't on the premium version, but that's all okay because you're already subscribing.

Uh, tell your friends about the podcast as well.

Uh, I will now let the subscriber music, which is really intense, just

play us out, I guess.

We'll just let that go.

Okay, here we fucking go.