AI Is Breaking Our Brains

51m
This week we discuss a new Microsoft study that finds using generative AI is "atrophying" people's cognition and critical thinking skills and the right's war on Wikipedia.

Articles discussed:

Microsoft Study Finds AI Makes Human Cognition “Atrophied and Unprepared”
Wikipedia Prepares for 'Increase in Threats' to US Editors From Musk and His Allies
You Can’t Post Your Way Out of Fascism
Learn more about your ad choices. Visit megaphone.fm/adchoices


Transcript

This podcast is supported by Progressive, a leader in RV Insurance.

RVs are for sharing adventures with family, friends, and even your pets.

So, if you bring your cats and dogs along for the ride, you'll want Progressive RV Insurance.

They protect your cats and dogs like family by offering up to $1,000 in optional coverage for vet bills in case of an RV accident, making it a great companion for the responsible pet owner who loves to travel.

See Progressive's other benefits and more when you quote RV Insurance at progressive.com today.

Progressive Casualty Insurance Company and affiliates. Pet injuries and additional coverage subject to policy terms.

In today's world, data breaches happen all the time, and even the most secure companies can't always protect their employees' personal information from ending up in the wrong hands.

That's where DeleteMe comes in.

DeleteMe is a service that removes your employees' sensitive information from hundreds of data broker websites, sites where hackers can find phone numbers and emails within seconds.

Rachel Tobac, CEO of SocialProof Security, says attackers use this data to target employees with phishing messages and AI-powered phone scams.

But DeleteMe makes it harder for these bad actors by scrubbing your employees' details regularly.

It's simple.

Attackers are lazy.

If it's too hard to find contact info, they'll move on to easier targets.

DeleteMe takes care of this for you, doing the heavy lifting so you don't have to.

And over time, they keep removing the information so it stays down, protecting your team from constant exposure.

If your business has a social presence or deals with clients, you need DeleteMe.

Visit deleteme.com slash 404media and start safeguarding your team's information today.

That's deleteme.com slash 404media.

Hello and welcome to the 404 Media podcast where we bring you unparalleled access to hidden worlds both online and IRL.

404 Media is a journalist-founded company and needs your support.

To subscribe, go to 404media.co.

As well as bonus content every single week, subscribers also get access to additional episodes where we respond to their best comments.

Gain access to that content at 404media.co.

I'm your host, Joseph, and with me are 404 Media co-founders, Sam Cole.

Hi, Joe.

Emanuel Maiberg.

Hello.

And Jason Koebler.

Hey, good to be here.

Was that your Jason voice?

Yeah, it was.

How was my Joseph voice?

It was perfect.

It needed to be a little bit more British, but pretty good.

Yeah, yeah.

As you can probably tell, Joseph is not here.

I'm Jason, but you know, you get the B team today.

We are going to start out with Emanuel's story.

Microsoft study finds AI makes human cognition, quote, atrophied and unprepared.

This is one of those stories that we published that

the response to was kind of mixed, I feel, where there was a bunch of people saying, hey, dumbasses.

Of course, we already knew that.

However, I thought it was quite interesting.

And I think that anytime that a company is pushing a technology, but then also publishing studies saying that their technology is causing some problems, I find it to be very interesting.

So what is this paper?

How did it work?

What did they find?

So I want to start by reading a quote from the paper itself, which I think really sums up the point.

And it says: A key irony of automation is that by mechanizing routine tasks and leaving exception handling to the human user, you deprive the user of the routine opportunities to practice their judgment and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise.

So,

that is what they found.

This is the gist of the paper.

I think that's very well phrased.

In terms of how they actually measured this, they recruited 319 knowledge workers from one of those.

This is an interesting story on its own, but there are these various platforms where people can sign up and volunteer to participate in this kind of research.

So they used one of those to recruit 319 knowledge workers.

And then those

workers reported

936 firsthand examples of them using generative AI.

And then the researchers provided them with a questionnaire to reflect on how

using AI

impacted or did not impact their critical thinking.

And the paper goes

on quite a bit on how they prepped the people who participated in the study, which I think is good because a lot of these terms are vague and complicated, but they explain like what their definition of critical thinking is, made sure that the people who participate in the study understand that and kind of explain what they're looking for.

And

a lot of interesting stuff to talk about here.

I want to start with like some of the actual data, which the study did not make a big deal out of, but I found to be probably the most terrifying thing about it, which is just what people report that they use AI for in their daily jobs.

So they cite some examples.

So there is a teacher who is using DALL-E, which is an AI image generator, to make a presentation about hand washing in school.

There's a commodities trader who is using ChatGPT to kind of train themselves on their job and become better commodity traders.

And there is a nurse who verified a ChatGPT-generated educational pamphlet for newly diagnosed diabetic patients.

So, that alone, I think, is already kind of shocking that all these professions are using generative AI like that.

But yeah, what

these people reported

and what the study found is that the more a worker relies on generative AI,

the less critical thinking they use in their job.

And also, there's a correlation

between

like workers who are more confident in themselves are less confident in the generative AI output and therefore feel like they have to use more critical thinking.

And the inverse is also true.

So if you have not a lot of confidence in your abilities, you just trust the generative AI more, rubber stamp it, and let it go.

Yeah, it's interesting.

They have like a few little excerpts in this study where they have quotes from people, and one of them is:

I can be confident that everything is spelt correctly.

I don't need to second-guess myself.

I can get the reassurance I need without having to bother another person to check it for me.

And that's obviously like a very small thing where they're just asking it to

like spell check something.

But I I did find it interesting to see like what people were using it for.

The other thing that this study says, I believe, if I understand it correctly, you wrote this story, but like as I understand it,

the people who are using generative AI are

rather than

being creative and

creating things from scratch, which is how we used to do things and how we still do things here.

They're like correcting the AI.

So they're sort of like managing the output of this other

technology.

And that's like a different type of work.

Can you elaborate a little bit on that?

Like it's a different type of work, more or less.

Yeah, maybe one of you can look up the exact word that they use, which I thought was really good.

But essentially, what you're referring to is the difference between

someone doing the job and someone's job being verifying the output of a generative

AI

tool, which is, yeah, like that's

here's the quote.

It says, the data shows a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight.

Oversight, right.

Yeah.

So that, and that's like, that's kind of a scary vision of the future where all these jobs, you're not actually doing the job.

You're kind of

making sure that the AI is doing the job correctly that you previously did.

You, Emmanuel, had this

like allegory more or less where you said, like, I don't remember anyone's phone numbers anymore.

And you just use, you know, they're saved in your cell phone.

And that's not the cognitive, you don't have that like cognitive load anymore.

You've like offloaded it to technology.

And that's not artificial intelligence, but it seems like,

you know, saying that someone's cognition is atrophied and unprepared, that seems quite bad.

But it does seem like it is not fully a dire picture that they're painting here.

Can you elaborate a little bit more on the phone number analogy?

Yeah, so I think you and I were texting about this study that we both saw over the weekend.

And I think the reason that we thought it was interesting is because the study itself, let me see what the actual, I'm going to read the title of the paper.

It's "The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers," which is obviously many words, but that is like a pretty spicy title for a paper from Microsoft, which is heavily invested in the success of generative AI.

And we can talk more about

what that means for all this to be coming from Microsoft researchers.

But to your question,

and the researchers point this out at the top, and this is something we talked about before we wrote the story, is that there is a need, we both felt the need to hedge here and like make this seem maybe less dramatic than even the title says, because there is a long history

of

various technologies that we offload cognitive tasks to.

It seems outrageous at the time, but then becomes completely normal and we all assume is good.

So the researchers point out calculators, right?

It's like calculators didn't used to exist.

People thought that it would make us bad at math.

And obviously, that is not the case.

Like math has advanced tremendously.

People are probably overall better at math than they used to be before the invention of the calculator.

People said the same thing about the internet itself.

This I didn't know, but they noted that Socrates really objected to writing becoming commonplace and that that would make people dumb

because they wouldn't have to remember stuff, I guess.

Um, so I did not know that. Right, me neither.

But it's like, yeah, we've all felt this.

And I think,

you know, the millennials in the audience probably remember a time before smartphones and not having Google Maps on your device.

And I talk about in the story how when I moved to San Francisco, it was right before smartphones caught on, and I would leave the house with like a little pocket map. And I would learn how the city works and public transportation works, and I would remember all that. And in a few years, I kind of knew the city very well. And since then, every time I moved to a new city, I never went through that process because I just had Google Maps, and I never learned.

You know, I really can't navigate New York that well without Google Maps. And is that bad? I don't know.

Like, it's probably not great that my navigational skills have atrophied, as they say in the paper,

but overall,

people are better at navigation.

And one could easily imagine generative AI,

should it catch on, should it work as well as the people who

run AI companies tell us it will, offloading various tasks and us getting completely used to it and then it becoming an overall positive.

Like

one can imagine that world even

if the prospect of offloading all these tasks right now

seems scary, especially if you're a nurse or whatever.

Yeah, I mean, I find it to be pretty scary regardless.

Like, I do, I take your point.

And I think just like as a, as a counterpoint, you know, I didn't memorize

my

partner's phone number until like six months ago.

And we've been together for seven years.

And that's very bad.

But I can still remember a lot of phone numbers from my childhood.

Like, and I, like, my next door neighbor's phone number, I still remember it.

And I do think that, like, the maps one is a good thing to bring up.

And I find that

when I purposefully try to navigate without Google Maps or something like that, I feel my brain working in a way that it doesn't normally work when I just blindly follow the map.

And

I also find that if I go somewhere without Google Maps one time and I have to figure it out myself, it like downloads into my brain in a way that doesn't happen if I do it like 30 times with Google Maps.

I'll go to the same place and just blindly follow the map and I'll have no idea like how I got there, what happened during the ride.

It's like that, there's an entire like blank space in my brain.

And I do wonder

if you start offloading all sorts of different things to, you know, generative AI in some way, like what, what that will look like over time when you just start taking like all of these things off, like what is left.

And the paper does bring that up.

Like it says effects on writing.

And they're like, well, people know how to write now.

And the people who use generative AI for writing who already know how to write,

they will

be able to edit what's coming back.

So they'll be able to do that oversight process.

But they sort of worry that in the long term,

like it says, however, there are concerns that novice writers may become overly reliant on these tools, potentially impairing their long-term skill development by bypassing critical writing processes, such as constructing logical arguments and understanding subject matter.

That seems like a real potential problem.

I just like, so, two things: one, which is addressed in the paper,

and that is

that goes back to what does it mean for this to come from Microsoft?

And

I think that is a company that is heavily invested in AI, recognizing potentially

a real and big

problem.

And eventually, at the end of this paper, coming around to a solution, right?

Saying, we recognize, and we see this with other things, right?

Like when we write about Google doing research about AI misinformation, the point of that at the end of the day is for Google to say, we recognize this is a problem.

This is a result of a technology that we are developing and deploying.

Here's what we suggest

we can do to mitigate this problem so we can develop this technology further.

And in this case, essentially, what Microsoft says is that the generative AI tools need to,

first of all, encourage critical thinking even though they are doing the work.

So they say the AI tool can suggest areas for user refinement or offer guided critiques, like invite the user to examine the output, which is not something we see AI tools currently do, right?

When OpenAI gives you an answer, it is kind of doing it in a definitive voice with authority.

And that's where things can get really hilarious, right?

When we talk about

Google's AI Overview telling people to eat glue, right?

That would seem less bad if Google in some way was like, hey, like

maybe you should look into this further and not just trust the output.

And then the other thing that is not addressed in the paper, but I think is something that has come up in our reporting a little bit.

For example, when I wrote about

the CEO of Suno talking about how nobody likes making music actually and we should just like let AI do it all.

And that is a more fundamental question of like, what do we actually want

AI to do?

Like, let's assume that generative AI will eventually be able to do all of our jobs.

Does that mean that we want it to do it all?

Are there things that we don't want it to do because we want to do them just because we enjoy them or we think we get better results, even though an automated tool can do it?

I think these companies that are developing AI know this is a problem.

And it's like Jason says, like, people replied to this story and they were like, oh, duh, like everyone knows this is a problem.

But something being studied is much more powerful than people just kind of anecdotally knowing something. And when I was writing about Anthropic, Anthropic recently put in all of their job applications, which they have like 150 different open roles right now, so it's a lot of job applications, or, you know, job descriptions, that they're telling applicants not to use AI to fill out the application.

And it's like they know,

they know that this is a problem.

And I was trying to find a study that kind of backs up the idea that everyone knows, which is that like using, relying really heavily on these conversational AIs

makes your brain kind of mushy after a while, if you're just kind of never using critical thinking.

And in their application, it says, you know, don't use AI assistance during the application process.

We want to understand your personal interest in Anthropic without mediation through an AI system.

And we also want to evaluate your non-AI assisted communication skills.

And Anthropic, if people don't know, makes Claude, which is a very popular conversational AI that you would use for that purpose.

Like if it was any other job application, you were like, I just need to whip off why I want to work at Anthropic.

I would use Claude to do that probably.

If I was going to rely on a chatbot to do those sorts of things.

But they're saying

we want to see your non-AI assisted communication skills, which I wonder is that going to be a sought after skill in the future if people don't have it, if people are losing that ability to like put together critical thought without the use of a computer,

is that going to be the new kind of sought after

skill in the workforce?

I don't know.

That feels like a very far away thing, but maybe it's not that far.

This study is not exactly about that.

And yet I found myself thinking the same thing.

Like, what does this mean for humans?

And what does this mean for jobs?

And what does this mean for, you know, cynically like companies who want new ideas and who want uh

to continue making money

And we talked about this before, but there was this period of time where a bunch of journalists were getting laid off, and as part of the culture war, there was like a type of software engineer who would go into people's Twitter mentions and say, learn to code.

Like

the way that it worked was someone would say, oh my god, I just got laid off from my job at Vice.

And then a bunch of like dickheads would respond to them, say, learn to code.

And saying that like software engineer was going to be the, you know, the only place that jobs would exist in the future.

And in this somewhat like interesting fashion, you know, software engineers are being automated right now by generative AI.

And I think there's a lot of creativity in software engineering.

There's a lot of problem solving in software engineering, but a lot of the sort of like commodified coding is being replaced by things like Copilot, you know, Microsoft Copilot.

You know, Mark Zuckerberg is talking about replacing software engineers with AI and turning a bunch of these coders into people who just like oversee

AI coding bots and things like that.

And then in this story we reported on last week, like the federal government wants to replace a lot of its coders with AI coding bots.

And meanwhile, meanwhile, there's like tons of journalists and writers who have been replaced by AI or who have, you know, are competing with, you know, AI that steals content and just, you know, regurgitates it out there.

But

like what we do and what a lot of really good journalists do is they find new information and share new information and share new ideas.

And

I wonder if like the only way that you'll have a sort of like knowledge work job in the future is if you can create new ideas with your own brain.

Because a lot of the little quotes in here about what people are using generative AI for are

like,

I just wonder if companies are getting what they paid for because

a lot of companies are asking people to do stuff and they're like, oh, I typed it into the AI and I gave whatever, whatever it spit out.

And so I wonder about like the sameness of everything that's going to start happening as

AI does more stuff in like an office setting, I guess.

I think

what is the OpenAI research tool?

Is it just Open Research that they recently unveiled?

Deep Research.

Deep Research.

That is the AI tool that I think is

closest to what our jobs

are.

And I haven't used it yet, but I will use it and report back and tell you if we're cooked or not. I suspect not.

I was watching the demo and initially, I was really freaked out.

I was like, damn, that's what I do.

But then I thought about some of what you, Jason, said, which is a lot of our stories are really, you know, the foundation

at its core is an original thought and also talking to people and getting information that inherently

is hard to get out of people.

And as far as I know, deep research is still unable to do that.

Before we move on, can I just very briefly talk about pot committed?

Yeah, please.

I wrote in the story, I used the phrase that Microsoft is pot committed to the rapid development of generative AI tools.

And I've never gotten more emails about a mistake in my article before.

And I was like, did I make this up?

And I didn't.

It's a real term.

Pot committed is a poker term.

I don't even play poker, but I like the metaphor, which is you're sort of so

deep into a bet that the only way forward for you is to keep placing the bet or to go deeper into the same bet.

And I was very surprised that people are not familiar with it.

So it's like in Hold'em, in Texas Hold'em, for example, like you get your cards, you make a bet.

Someone else calls, someone else raises, you call or you raise, and then like you see some cards, and then by the end of like the hand, your options are like fold and lose all the money that you've bet,

or

you know, you're pot committed, meaning you put in so much of your money that giving up is, like, gonna cost you at this point.

If you don't win the pot, you're done.

Yeah, so you have to keep going, you have to keep investing in it, because, like, even if there's like a low percentage chance of being able to win at this point. Like, you know, a card comes up in poker on the river or whatever, the last card, and you're not gonna win, or you only have like a 5% chance of winning or something like that. You still kind of just have to bet because

you won't have enough chips left to, like, play if you fold, right? More or less is how I understand it. Yeah, yeah, yeah. And I'm saying, I mean, that is basically my read on where Microsoft and a lot of the big tech companies are with AI.

They have put so many billions of dollars into it already that to back out at this point would be

They need us to accept it as a society.

Right.

That's what I always say that they're shoving it down our throats.

And I mean, that is what's happening, but it's like,

even though people are showing in many ways that they don't want to use it or they only want to use it in certain contexts or for certain things, it's like.

They're like, you will keep eating these vegetables

until you're healthy.

Even though we can see you turning purple or whatever. I just wanted to say it's a real term, I promise. Pot committed, it's a real thing, man. You wanted to say he's right, that's all. Yeah.

We had, like, an hour-long conversation about pot committed yesterday, though, afterwards. Sam, you knew it though, right? No, I didn't know it. I had to look it up, and then I asked my fiancé, who plays poker, and he didn't know it. But he's not very good. That means he's not good. That means he's not good at poker. He said, oh, I would never do that though, and I was like, okay, yeah, for sure. He only plays poker hands that he wins. I think we should do a 404 poker club.

We should.

Poker match for subscribers?

No, maybe not.

That's a bad idea.

We can bet Bitcoin.

Okay.

We'll take a quick break.

And when we come back, we're going to talk about Wikipedia.

Do you plan your vacation locations based on the local language?

With Babbel, language no longer has to be the barrier.

This year, speak like a whole new you with Babbel, the language learning app that gets you talking.

Learning a new language is the pathway to discovering new cultures.

So before you embark on a trip, embark on learning something new.

Babbel's quick 10-minute lessons handcrafted by over 200 language experts get you to begin speaking your new language in three weeks or whatever pace you choose.

And because conversing is the key to really understanding each other in new languages, Babbel is designed using practical real-world conversations.

Spending months with private teachers is the old way of learning languages, and nothing screams tourist like holding a phone translation app up to your face all day.

Babbel's tips and tools are inspired by the real-life stuff you actually need when communicating.

With a focus on conversation, you'll be ready to talk wherever you go.

Last year, I used Babbel to learn Italian before my trip to Rome, allowing me to better connect with the people I met there.

This year, I'm brushing up on my rusty Spanish before a trip to South America.

This year, get talking with Babbel.

Babbel is gifting our listeners 60% off subscriptions at babbel.com slash 404.

Get up to 60% off at babbel.com slash 404.

Spelled B-A-B-B-E-L dot com slash 404.

Babbel.com slash 404.

Rules and restrictions may apply.

I don't know about you, but I like keeping my money where I can see it.

Unfortunately, traditional big wireless carriers also seem to like keeping my money too.

After years of overpaying for wireless, I finally got fed up from crazy high wireless bills, bogus fees, and free perks that actually cost more in the long run and switched to Mint Mobile.

Now I pay just a fraction of the price with Mint compared to big wireless carriers, and you can be doing the same.

Say bye-bye to your overpriced wireless plans, jaw-dropping monthly bills, and unexpected overages.

Mint Mobile is here to rescue you with premium wireless plans starting at $15 a month.

All plans come with high-speed data and unlimited talk and text delivered on the nation's largest 5G network.

Use your own phone with any Mint Mobile plan and bring your phone number along with all your existing contacts.

Ditch overpriced wireless and get three months of premium wireless service from Mint Mobile for 15 bucks a month.

If you like your money, Mint Mobile is for you.

Shop plans at mintmobile.com slash 404 media.

That's mintmobile.com slash 404 media.

Upfront payment of $45 for three month 5 gigabyte plan required, equivalent to $15 a month.

New customer offer for first three months only, then full price plan options are available.

Taxes and fees extra.

See Mint Mobile for details.

Starting any business becomes so much easier when there's another business you can rely on to make things easy.

For our merch, that business is Shopify.

And for millions of online and brick and mortar stores, well, they rely on Shopify too.

Nobody does selling better than Shopify, home of the number one checkout on the planet.

Shopify's not-so-secret secret? It's Shop Pay, which boosts conversions by up to 50%,

meaning way less carts going abandoned and way more sales going

So if you're growing your business, your commerce platform better be ready to sell wherever your customers are scrolling or strolling, on the web, in your store, in their feed, and everywhere in between.

Businesses that sell more sell on Shopify.

Upgrade your business and get the same checkout 404 Media uses.

Sign up for your $1 per month trial period at shopify.com slash media, all lowercase.

Go to shopify.com slash media to upgrade your selling today.

Shopify.com slash media.

Hackers and cyber criminals have always held this kind of special fascination.

Obviously, I can't tell you too much about what I do.

It's a game.

Who's the best hacker?

And I was like, well, this is child's play.

I'm Dina Temple-Raston.

And on the Click Here podcast, you'll meet them and the people trying to stop them.

We're not afraid of the attack.

We're afraid of the creativity and the intelligence of the human being behind it.

Click here: stories about the people making and breaking our digital world.

AI machines, satellites, engine ignition.

Click here.

And listen.

Click here every Tuesday and Friday, wherever you get your podcasts.

This is a story by Jason.

The headline is, Wikipedia prepares for increase in threats to U.S.

editors from Musk and his allies.

I have been kind of seeing here and there on social media, because I follow a lot of Wikipedia folks,

talking about threats to Wikipedia existentially and also sometimes literally from Elon Musk and Silicon Valley and with everything that's been going on with those delightful

topics and news cycles.

So I have seen that there are threats.

I've seen that people are worried, but I haven't quite seen exactly what's going on.

So, can you kind of just walk us through what these threats are against Wikipedia?

Also, why would you threaten Wikipedia?

So wholesome, crazy.

Yeah.

So, Elon Musk has hated Wikipedia for, like, several years now.

And our friend Molly White, who runs Web3 Is Going Just Great and also a blog called Citation Needed, did a really big rundown of, like, the history of Elon Musk hating Wikipedia back at the beginning of January that you should go read.

But, like, basically,

his

thought is that Wikipedia is biased against the right,

and specifically, it's biased against Elon Musk and his companies.

And so, he's been calling it Wokipedia for a long time.

He said, stop donating to Wikipedia.

He's talked about trying to take over Wikipedia.

He's talked about trying to buy Wikipedia because he tries to buy stuff when he doesn't like it and then turn it into something in his image.

And

this all like boils down to the fact that,

you know, Wikipedia is a collective effort between millions of human beings across the entire world

and

also the most pedantic people you've ever met in your entire life.

And so the integrity of Wikipedia is very important to them.

It's not something that can be bought and it's not something where

It's like, you try to change a comma in an article and you can get into an argument for like four days about whether that comma should be there or not, um, in the talk pages and sort of in the back and forth of whether an edit is going to be accepted or not. And so Wikipedia has shown itself to be very resilient to efforts to change how a given article reads. And a lot of, not a lot of, but like, Elon Musk doesn't like what his Wikipedia page says, more or less, and he doesn't like what

the Wikipedia pages for his companies say either.

And

there are lots of companies that try to do paid Wikipedia editing, meaning they're just like, they're essentially like disinformation firms that try to go in and edit a Wikipedia page to make it look more

favorable to whoever the subject is or to like scrub it of

negative information more or less.

And

I haven't seen, like, a good... there probably is a good study, because there's a lot of studies about Wikipedia, but there's basically like,

that's very, very verboten in Wikipedia land, like being paid to edit things. Like, when people are caught, they're banned for life. It's, like, extremely bad.

And so Elon Musk, and more recently the Heritage Foundation, which wrote Project 2025,

have sort of upped the ante and said that they are going to start going after individual Wikipedia editors, meaning they are going to threaten specific editors with, like, harassment campaigns, you know,

more or less.

That's my value judgment put on that.

Like, you know, the Heritage Foundation didn't say we're going to do a harassment campaign, but they've talked about trying to identify individual editors based on like their IP addresses or based on

their usernames, like searching their usernames in hacked data sets, for example, to try to dox them and find out who they are and perhaps, like, bring lawsuits, like defamation lawsuits, against them and things like this.

And this is like really scary.

It's really scary for Wikipedia editors who are largely just engaging in this volunteer project in good faith.

And it's really scary for the integrity of Wikipedia as a whole because

it's a nonprofit organization.

The people who work for the Wikimedia Foundation are being paid, but the editors are not.

And so if editing Wikipedia starts becoming this like dangerous thing where you could get sued because of an edit that you made, then, you know, it throws the entire like

validity of the project into concern, I guess.

It's also such, I'm glad we're podcasting about this because

now that we're talking and I can put this like in the analysis bucket and not the,

I don't know, purely factual bucket, we can

speak about it in terms I feel are more appropriate.

And it's just very crazy that we're having this conversation about editing Wikipedia in the United States because this language and these strategies

come from

what we think of as authoritarian countries, right?

It just, it's bad for Wikipedia for sure, but it's also, I feel, like a really bad sign that editors in the United States feel like they are under threat for editing Wikipedia.

It's just not something that

I imagined

was on the menu here, you know?

Yeah, so I mean,

let's be real.

Like that, we're journalists in the United States.

The First Amendment is extremely permissive for us.

It protects us.

It protects like a lot of different types of freedom of speech.

It's like a very strong amendment, or it has been historically.

And it's been eroded away sort of by like via a lot of lawsuits.

You know, there have been examples of different media companies that have been more or less destroyed by these lawsuits, that even if they maybe could have won, stayed in court forever because there's, like, a rich person behind them, and they can be bankrupted in that way.

And that's for like larger media corporations, companies like Gawker and things that have had millions of dollars to spend on their defense.

If you start thinking about just like a random Wikipedia editor, you know, having a lawsuit financed against them,

that's very scary.

It's like a very scary situation.

And so

historically, it's like freedom of speech in the United States has been very strong.

Wikipedia operates globally and it operates differently in other countries than it does in the United States for that reason.

It's like always been quite easy to operate in the United States because of the First Amendment.

But

Wikimedia has just recently fought a lawsuit in India.

It's fighting a lawsuit in Germany.

And I believe it won the lawsuit in Germany.

I don't know the specifics of it, but that came up as part of this sort of reporting that they were fighting this.

And then there's, like, a lot of countries where it's either illegal to edit Wikipedia, it's illegal to access Wikipedia, or it's very dangerous to do so.

And you're right, they've like developed these methods for users there to continue to contribute to the project, but to do so anonymously.

And they're now rolling those features out to the United States.

So, like, I haven't said what those features are, so I'll say them now, which is: right now, if you're logged out of your Wikipedia account, your IP address is shown, and this is so that,

you know, if you're just, like, terrorizing a website, the administrators or, like, the moderators of that Wikipedia page can ban you or, you know, like, contact you in some way. So they are going to get rid of that. They are going to make it so that everyone who is logged out and starts editing Wikipedia pages is given a dummy username and that the IP address is not available publicly.

They also

are going to start deleting IP address information after 90 days, I believe it is.

And then, especially in what have traditionally been considered authoritarian countries, if you were editing pages that were considered to be controversial, meaning, like, pages about the government in these countries, things like that,

you could get a sock puppet account, and that sock puppet would be known by an administrator in another country, and you would be allowed to edit these pages,

but

it wouldn't be attached to your real Wikipedia username.

So, you could have like a Wikipedia presence where you're like editing only, you know, articles about the color red or something.

And then on the side, be editing, you know, articles about the dictator of your country or something under a separate name.

We usually use sock puppet in a negative context, but here we're talking about it's like a pseudonym.

It's just a way for you to have an identity that doesn't compromise your real identity, right?

Yeah, they called it legitimate sock puppets.

But yeah, that like

sock puppets on Wikipedia are a big problem generally where it's like someone creates a fake account and then starts, you know, doing disinformation or whatever.

And this is using it in a different context to allow people to edit politically dangerous topics.

So, you like, you could imagine someone wanting to use a sock puppet account to edit the Elon Musk page or edit the Tesla page or edit the Heritage Foundation page, for example.

And so they're going to roll these out more widely.

And this was announced at like two different

meetings on January 30th.

One, Jimmy Wales, the founder of Wikipedia, went to.

And then

Maryana Iskander is the CEO of the Wikimedia Foundation.

And she, she was at that same one and was sort of talking about this problem.

And so it's like, this is something that the highest levels of the Wikimedia Foundation are worried about at this moment.

I personally think that the community is not,

they don't think that this is enough is sort of the vibe that I got, although I hesitate to speak for the entirety of the Wikimedia community.

Like, this is something that people are really worried about.

And the solution here is not super clear.

One last thing I'll say is that every year,

Wikimedia has a conference called Wikimania.

You know, lots of terminology here, but they have basically a conference and a party.

And it's not always in the United States.

It's kind of rarely in the United States.

But the idea of holding a Wikimania in the United States came up.

And,

you know, the CEO of Wikimedia was like, we're not sure.

We're just not sure if we're going to do it in the U.S.

because it might be dangerous.

It might be dangerous to have it in the United States.

It's also like difficult to get visas to come to the United States now.

And it's just, like, I don't know, the idea that you wouldn't have an event like that in the U.S. because of the political situation shows kind of, like, how seriously at least this global organization is taking this moment, I guess.

Do you want to talk at all about, because we're talking about Elon Musk being a risk and the government potentially being a risk, but there's also a growing,

I don't want to call it a movement, but there is this, like, in conservative media, there's this growing notion that Wikimedia is, if not Soros-backed, then, like, a liberal, progressive, lefty project.

And it's like, I don't think a week goes by now that we don't see some piece about how biased Wikipedia is.

And that I think is also feeding into this and feeding into the fear of Wikipedia editors.

Yeah, yeah.

I mean, you're right.

This has been sort of like a years-long conservative,

I guess, project to delegitimize Wikipedia as an information source.

It's interesting, like on one of these calls, there was

a former Wikipedia editor who went on a rant.

So these things are like open to anyone, basically.

And so it's like the world's largest city council meeting that you could imagine.

And it's all like Wikipedia affiliated people.

So there was one guy who was like, you haven't answered my question for six years.

And I'm going to continue asking this question at every single one.

And he talked for like seven minutes straight.

And then they were like, okay, sir.

And then they were like, we'll get back to you.

And then there was another woman who came on and she started asking a question to Jimmy Wales and was basically like,

I have forked Wikipedia because I think that you're like a woke project, more or less.

And so she like cloned Wikipedia and then created a new one.

And she's like,

but my version isn't being indexed by Google.

Can you like figure out, figure this out for me?

And he was like, well, we have nothing to do with that, ma'am.

But I bring that up because, like,

it's been this narrative for a while.

There have been some studies about this, and they found that

the average like Wikipedia article is slightly left of center, like ever so slightly, but that it's generally fact-based.

You know, there's so many rules about citing your sources and things like that.

And so,

yeah, it's like the,

it's one of those things where it's like reality has a left-wing bias.

You know, that's a saying.

Also, like the United States in general is very conservative and Wikipedia is a really global project.

And so you have like, I don't know, those pesky Europeans with their like

center-left,

like, ideology infiltrating, like, articles about climate change, where it doesn't say, like, climate change is a...

So I don't know.

It's one of those deals, I think personally.

But

my personal opinion is that Wikipedia is like maybe the greatest example of cross-cultural global human cooperation ever done.

Like I don't really know what else compares to it.

It's a free resource that has just like

endless information about everything.

I don't know.

I don't even know what else you would like put up there with it.

And so like the idea that this

is now under threat is quite concerning to me.

Back when anyone could upload to Pornhub, that was a very rich tapestry.

But that's over now.

That's so true.

Yeah, they shut it down.

I was just looking at Elon Musk's Wikipedia page because I was like, what is on here?

You know, I don't look at that every day.

It's like, what is on here that he's so pissed about?

And it's very normal, very average.

Like, there's not even, it's like the public image section is about how he was like portrayed in The Simpsons or whatever.

But then you go in the talk page, which you can click at the top of any Wikipedia page, which I recommend everybody doing for anything mildly controversial because it's so fun.

And the first entry in the talk page is a discussion mentioning the oligarch characterization in the lead.

And it's someone making the argument that he should be characterized as an oligarch because

like he cites like seven different sources.

And then people are going to fight about that for who knows how long, weeks.

That's, you know, that's good eats on Wikipedia.

It's just the talk page is just like endless.

People are talking about the duck test.

So good.

I just love it.

I love that this exists.

They're talking about his family's wealth, South Africa.

Yeah, just

good shit.

But none of this is on the front of his page.

Like, what are you so mad about, bro?

Yeah, you really need to read Molly White's article.

It's on Citation Needed, and

the article is called Elon Musk and the Right's War on Wikipedia.

On December 31st, Elon Musk was mad

because

in a Twitter account called @BGatesIsAPsycho, he had

seen that Bill Clinton's Wikipedia page was edited to delete his connection to Jeffrey Epstein, for example, and then suggested that Bill Clinton himself was the person who did the edits.

And, you know, the text really just got moved to another part of the Wikipedia page.

And this spurred, like, a days-long Elon Musk rant against Wikipedia.

So it's like, even trying to explain this on a podcast, this is something where text is way better: links,

screenshots, images.

Like, so go check out what Molly White wrote because it's, it's really good.

Yeah, go down the rabbit hole.

She also put out a recent

video about how to edit Wikipedia if you're new, which I think is very useful and great.

Yeah.

Okay,

let's end it there for the free show.

If you're listening to the free version of this podcast, I'll now play us out.

But if you're a subscriber, we are going to talk about posting our way out of fascism and why you can or cannot do that.

You can subscribe and gain access to that content at 404media.co.

As a reminder, 404 Media is journalists founded and supported by subscribers.

If you wish to subscribe to 404 Media and directly support our work, please go to 404media.co.

You'll get unlimited access to our articles and an ad-free version of this podcast.

You'll also get to listen to the subscribers only section where we talk about a bonus story each week.

This podcast is made in partnership with Kaleidoscope.

We will see you again next week.