The Plan to Use AI to Purge Voter Rolls
YouTube version: https://youtu.be/JDAW53zv29g
Inside the Plan to Use AI to Purge Voter Rolls
Microsoft Provided Gender Detection AI on Accident
When Does Instagram Decide a Nipple Becomes Female?
Inside the Massive Crime Industry That is Hacking Billion Dollar Companies
Transcript
This episode is brought to you by Progressive Insurance.
Fiscally responsible, financial geniuses, monetary magicians.
These are things people say about drivers who switch their car insurance to Progressive and save hundreds.
Visit progressive.com to see if you could save.
Progressive Casualty Insurance Company and affiliates.
Potential savings will vary, not available in all states or situations.
Hello, and welcome to the 404 Media podcast, where we bring you unparalleled access to hidden worlds, both online and IRL.
404 Media is a journalist-founded company and needs your support. To subscribe, go to 404media.co,
where you'll get unlimited access to our articles and an ad-free version of this podcast,
as well as bonus content every single week.
Subscribers also get access to additional episodes where we respond to their best comments.
Gain access to that content at 404media.co.
I am your host, Joseph, and with me are 404 Media co-founders, Sam Cole
and Jason Koebler.
Hello.
On this election day,
my landlord has decided to have someone grind the driveway outside.
So I apologize in advance, but I guess, I don't know, it's sufficiently chaotic.
And hopefully you won't be able to hear it too much, but I apologize in advance for that.
That's his god-given right, to, you know, dig up your driveway during a podcast. In a democracy we allow that, we enjoy that. But yes, we'll see what we can do. Speaking of elections, the first story we have is one exactly about that, and we wanted to publish it on this day. The story is Inside the Plan to Use AI to Purge Voter Rolls. Yes, we did it, we found an AI angle for an election day story.
People didn't think it could be done.
We were able to do it.
So, Jason, first of all, because there's a lot of moving parts here, it's pretty complicated.
Let's just start super basic.
What is Eagle AI exactly?
I guess maybe to preface this about finding the AI angle, there is dispute as to whether this is like actually AI.
Like many other AI startups, this is something that has appended AI to its name and called itself AI, but it is like
unclear whether I would call it artificial intelligence.
It's basically like a sophisticated database program that was created in the aftermath of the 2020 election.
There have been many sort of like groups spun up to
essentially disenfranchise voters.
And one of the tactics is to challenge the eligibility of voters all over the country.
And this is like, to be clear, there is no evidence of widespread voter fraud or widespread,
you know, people voting twice, things like that.
Like, if people vote twice, they are often caught.
And some of the few cases where I have seen that, it has been like Trump supporters voting twice and getting arrested, things like that.
But basically, it's a Georgia organization that created this software that uses like national
address change information.
It uses newspaper obituaries.
It uses like felony arrest record databases and a whole bunch of other stuff, like property records, things like that.
And it combines all of that into a search tool that allows
people to automate the system of challenging voter eligibility,
which is maybe a little bit complicated.
But the way that it works, every state has like different rules.
But the way that it works in Georgia, which is where this article focuses on,
is there is like a secretary of state maintained voter roll, like
all the registered voters.
That is made publicly available.
Then what Eagle AI does is it ingests this gigantic roll of eligible voters and cross-references it with all these other databases and tries to detect people that it thinks are ineligible to vote.
And then somebody using this piece of software could then say, oh, look, you had a bunch of voters who shouldn't exist or shouldn't be allowed to vote or they're dead or whatever.
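To make the kind of cross-referencing being described here concrete, below is a minimal, hypothetical sketch in Python. It is not Eagle AI's actual code, which has not been published; the file names, columns, and match keys are invented for illustration, and the point is only how a join on weak keys can emit challenges at scale.

```python
# Hypothetical illustration of matching a voter roll against outside lists.
# None of this is Eagle AI's real code or data; the CSV files, columns,
# and join keys are invented to show the general technique being described.
import csv

def load_rows(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

voter_roll = load_rows("voter_roll.csv")          # columns: name, dob, address
obituaries = load_rows("obituaries.csv")          # columns: name, dob
moved_away = load_rows("change_of_address.csv")   # columns: name, old_address

# Index the outside lists by crude keys. Note how weak these keys are:
# two different people with the same common name and birthday will
# collide and produce a false "deceased" flag.
dead_keys = {(r["name"].lower(), r["dob"]) for r in obituaries}
moved_keys = {(r["name"].lower(), r["old_address"].lower()) for r in moved_away}

challenges = []
for voter in voter_roll:
    if (voter["name"].lower(), voter["dob"]) in dead_keys:
        challenges.append((voter["name"], "possibly deceased"))
    if (voter["name"].lower(), voter["address"].lower()) in moved_keys:
        challenges.append((voter["name"], "possibly moved"))

# A one-click export of this list is what a county election board
# then has to adjudicate, entry by entry.
print(f"{len(challenges)} challenges generated")
```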
And it's, as you say, it's almost like a data broker in some ways, right?
Combining all these different data sets and coming to, I'll say, a conclusion.
I don't know how strong that conclusion is going to be, but they're basically trying to streamline that process of calling out potential
voter fraud, I guess, right?
Yeah, I mean, that's exactly right, and it has so much in common with data brokers, the way that these things are sort of cross-pollinated and collated and so on and so forth. And a lot of the information is actually purchased from data brokers in some cases.
Some of it is government data, but it's also like information scraped from the web, bought from data brokers, so on and so forth.
And the interesting thing is like theoretically, you want to make it very easy to register to vote and to vote in the United States.
But we know that, because of various voter suppression tactics all over the country, predominantly in the South especially, it has become very contentious as to who is allowed to vote.
Like you have Elon Musk saying all these illegal immigrants, undocumented immigrants, although he's not using that term undocumented,
are voting, things like that.
And that is not actually happening, but there is this like widespread perception that all of these people are voting who should not be voting.
And the evidence, the like quote-unquote evidence that is being used in many cases are
like typos in voter registration information.
One thing that is mentioned in this audio that I obtained is that in Georgia, a lot of the voter registration, like if you live in an apartment, some of the addresses will say like apartment one.
And then the other ones will say like apartment hashtag one.
And it's like very simple things like this, where there's like, you know, the number sign in the address.
And they'll say, well, like that doesn't correlate with the person's government ID.
There's no like hashtag in the address here.
Like let's challenge that.
And so what Eagle AI is doing is making it very, very simple to generate these challenges in huge numbers.
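Here is a small, hypothetical example of the formatting problem being described: a naive string comparison treats "Apt 1" and "# 1" as different addresses, while even crude normalization shows they refer to the same unit. Again, this is an illustration of the failure mode, not Eagle AI's code.

```python
import re

def normalize(addr: str) -> str:
    """Crude address normalization: lowercase, treat 'apartment', 'apt',
    and '#' as the same unit marker, and collapse extra whitespace."""
    addr = addr.lower()
    addr = re.sub(r"\b(apartment|apt)\b|#", " ", addr)
    return re.sub(r"\s+", " ", addr).strip()

roll_address = "123 Main St Apt 1"   # how the voter roll stores it
id_address   = "123 Main St # 1"     # how another record stores it

# A naive exact comparison flags this voter as a "mismatch"...
print(roll_address == id_address)                        # False

# ...even though a normalized comparison shows it is the same place.
print(normalize(roll_address) == normalize(id_address))  # True
```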
Like streamlines it.
And basically.
It streamlines it.
But also what it purports to do is like
in Georgia, they have a challenge system where anyone can challenge the legitimacy of any other voter.
And so what happens is you have these groups that create these challenges and submit them to different election boards of each individual county.
And so you'll have like one person filing 10,000 of them or like 80,000 of them.
And there have been a few cases in Georgia where one person is filing tens of thousands of voter challenges collated from either Eagle AI or like other similar products that have been created over the last few years.
And
then like the Board of Elections has to go and determine whether these challenges are legitimate or not.
And the really interesting thing in this case is that Eagle AI is in some cases being used to generate these challenges with one click, like, you know,
click challenge.
And then there is a county in Georgia that voted overwhelmingly for Trump in both 2016 and 2020 that
like entered into a contract with Eagle AI to then use Eagle AI to review the challenges.
So the scary thing here is that you could have these challenges being generated using information that is almost always faulty in some way.
And then the government entity checking it is using that same software to check whether it's valid or not.
Right.
I mean, that's a mess.
And even though I don't know much about this specific tool beyond your reporting, I know that the data coming out of data brokers is not always accurate and it can be a mess.
And, you know, that's not ideal.
Yeah.
And to be clear, the Georgia Secretary of State, like the state, the guy in charge of all of the elections in Georgia told the counties, do not use this software.
Specifically, they said, don't use Eagle AI because we don't think that its data is accurate.
And,
you know, there was this county in Georgia that was considering using it.
And ultimately, they ended up not using it in this election because,
you know, it's a busy year, obviously, like there's a lot going on.
And they basically said, like, we didn't have time to get trained on how to use this.
But that doesn't mean Eagle AI wasn't used across the country to formulate these challenges.
Caroline Haskins, who we used to work with at Motherboard, has done some reporting on this and actually like an hour ago published another story on Eagle AI saying that
it's been used by private citizens in North Carolina to create like huge numbers of voter challenges.
And so we do know that this is being used right now.
Yeah, totally.
And you mentioned audio, and we'll get to that in a second.
Before we do that, who
actually made Eagle AI?
We've spoken about who uses it, and that can be, you know, counties and then private citizens as well.
But who made this tool exactly?
So it's a guy named Dr.
Rick Richards, who is a Columbia County resident, which is one of the reasons why they're like, oh, he like lives nearby, let's use it.
He's an epidemiologist.
I don't know that much about him, and I didn't focus on him that much in the article because a lot of the previous reporting on Eagle AI has been about how the tool was made and who this person is.
But he's part of a larger effort to,
I guess, cast aspersions on American voting lists.
There has been this really big push to
politicize voter rolls and who is on them.
And I will promise I'll keep this short.
I won't get into it that much, but there was an organization called ERIC, like E-R-I-C, which is still a non-partisan group that basically updates voter rolls across the country.
And states opt into using it.
And it is a group that, if you move from Georgia to Texas or Georgia to Maryland or whatever, helps update that information on the back end for secretaries of state.
In the aftermath of the 2020 election, Republicans started saying like, this is a Democratic PSYOP, more or less, and have run a big campaign against ERIC, saying that it's somehow biased in some way.
And as a result of that, a lot of
states have started pulling out of using ERIC.
Like they have been successful at politicizing it to the point that this is not used that much in that many states.
And the thing that is replacing it, or at least the thing that people are positioning as a thing to replace it, is Eagle AI,
which takes some of the information that ERIC had, like the, you know,
change of address information, things like that.
But they're also adding all of these other
like pretty scary
data sets into it and very often inaccurate data sets.
There were also like a lot of, in the meetings that I listened to, there was discussion of like
a lot of voters living at a single address, for example.
And by a lot, I mean like five.
And it's like there are many places in the United States where more than five, you know, 18-year-olds who can vote live in the same house.
Like that's common.
Another thing is Eagle AI
was like detecting anyone who lived at a nursing home as being,
suspicious is not the right word, but being flagged in some way.
Just because they're under the same roof, right?
Just because they're under the same roof.
And it's like, well, yeah, like hundreds of people can live at a nursing home and they are
probably mostly eligible to vote, like things like that.
And so there has been like a long history of people being flagged and removed from voter rolls inappropriately
because of some clerical error or because of some challenge that had to do with, well,
they showed up on some list somewhere or they have a very common name and there was a death notice for someone with that name, but there was actually two people who had the same birthday and the same
name and one of them died, something like that.
Like that stuff happens often.
And many like voter rights groups are saying, like, we need to have like a really robust voter list because
if you don't, you're going to purge people who
think that they're registered to vote and are registered to vote and should be allowed to vote
but can't because they've been removed from the list, either accidentally or as a result of like a
concerted effort to trim the voter lists.
Right.
Or through one of these malicious, or potentially malicious, challenges that could be powered by something like Eagle AI.
So you mentioned some audio you got and some meetings.
We will listen to a short clip in a minute, but just broadly, what is this audio of and how did you get it exactly?
Because it's sort of, that's the basis of this article.
Yeah, so I knew that the
Board of Elections in Columbia County was considering this.
And
a lot of counties make audio available just like on their websites of meetings.
And usually they make video available, to be honest, but Columbia County didn't make that available for some reason.
So I requested video of these meetings, and they said, oh, we don't have any video, but we're required by law to record it in some way. So they've been recording it on a cell phone for like a year.
And just these really important meetings on some person's phone, and they just
do the Voice Memos app on their iPhone and put it on the table. I mean, I think that's what's happening. Yeah. And it is really important.
Like I listen to it and there are many residents there who are like, don't do this.
This is really bad.
It's like, not, you know, people from nonprofits show up and say, like, this is bad for the following reasons, et cetera, et cetera.
And, you know, I think being on a board of elections is probably a very thankless.
I don't even know if they're paid job.
If they are paid, it's probably very little money.
And it's like really contentious, unfortunately, in the United States now.
And
all that being said, it's it's like the three people on the election board in Columbia County,
they,
I don't know exactly who they're voting for, what their politics are.
And they, they sort of like have the air of like, hey, yeah, we want to make sure that there's a good election.
But at the same time, each of them was repeating
these sorts of talking points from Eagle AI about there being dead people on the voter rolls, about there being people who shouldn't have voted that did vote in 2020, things like that.
And so some of the audio that we'll play here is just members of the board talking about,
as I see it, kind of repeating talking points that are popular among the stop the steal folks and then talking kind of like callously about how Eagle AI can be used to like quote unquote research voters without them knowing, which I think is also pretty concerning yeah for sure all right let's listen to some of that audio now just a minute or so to give you an idea of of uh what this article is based on
I think there's been some misunderstanding. I saw some comments that basically are not true, that said that the software will change things in the voter rolls, or that it will disenfranchise people, or it will cause people to feel that they're being singled out.
Well, first of all, they won't know they're being researched because they're going to be researched because something triggers the research.
A death notice,
a record in the National Change of Address database, something of that nature.
If you can look at a timeline of the way we handle
confirming whether voters are still eligible,
nothing will change in the steps that we take.
The only thing, in the little segment that is research, to reach the point where you decide what you need to do, Eagle AI would be inserted there as a tool to help with the research.
So, Jason, what does this audio overall show us?
I mean, of course, what is it, how many hours of audio did you get?
I got like 12 or 13 hours of audio because it was like 12 meetings over the last year.
I would say about two or three hours of that was talking about Eagle AI.
And then a lot of the other stuff was just like regular,
like, here's the polling place.
You know, this county was hit by Hurricane Helene.
And so they were talking about like, we're changing different polling places and things like that.
So there's a lot of that.
There were two like really, I would consider them contentious meetings where Eagle AI was the dominant
topic of conversation.
And one of them was October 2023, and one of them was December 2023.
I know that was like a while ago now, but
it's all been building up to this, right?
It's been building up to this, and they've been working like ahead of time trying to implement it.
And then the next time, from what I could tell, where Eagle AI was discussed at length was in the October 2024 meeting.
So the last meeting before the election
just last month.
And that's sort of when they were like, we think that Eagle AI would be useful for us.
Like we
want to use it, but we didn't use it because we've just been so busy.
And we like have a beta test of it, but we ended up not using it for this election.
Which doesn't mean they won't for a future one.
If there are any more elections in the United States.
It doesn't mean they won't for a future one.
It doesn't mean that it's not being used elsewhere in the country.
And that's, that's like why I wanted to get this story out because
it's being pitched really heavily all over the country.
And there are definitely election boards that are sympathetic to this idea or this cause.
And I think it's important that people know, one, like how it works and how it's being pitched.
And two, I guess there are people showing up to fight this sort of thing.
And I don't know.
I think that not a lot of people probably go to their local county board of elections meetings.
They have them every month, probably in most counties.
And it's like some of the people speaking out about this are pretty forceful about it.
And I think that, you know, that that's very good because it's a really like informed group of people that have come, have shown up to
like protest this more or less.
Yeah.
All right.
We will leave that there.
When we come back, we're going to talk about a piece of technology that is actually AI.
I mean, I'm pretty sure.
It's a Microsoft product that was basically taken off the shelves, and then people still had access, and there are a bunch of issues with that.
We'll be right back after this.
Black Friday and the holiday season is coming up.
It's a big time for any online store, and it's a good time to make sure that yours is set up in the right way.
404 Media uses Shopify to sell our merch, and it's one decision we made that has simplified everything for us.
Whenever I need to restock items, offer a new item for sale, or manage our inventory, everything is just a click or two away in Shopify's simple but powerful backend, which gives you everything you need to start and manage your store so you can spend less time researching and more time selling.
Upgrade your business and get the same checkout we use with Shopify.
Sign up for your $1 per month trial period at shopify.com slash media.
All lowercase.
Go to shopify.com slash media to upgrade your selling today.
Shopify.com slash media.
I love a great deal as much as the next guy, but I'm not going to voluntarily sit in traffic for hours just to save a few bucks.
It has to be easy.
No hoops, no BS.
So when Mint Mobile said it was easy to get wireless for $15 a month with the purchase of a three-month plan, I called them on it.
Turns out it really is easy to get wireless for $15 a month.
The longest part of the process was the time I spent on hold waiting to break up with my old provider.
Mint Mobile's simple, intuitive website makes it really easy for you to buy your plan, get on Mint Mobile, all while keeping your phone and your phone number.
To get started, go to mintmobile.com slash 404media.
There, you'll see that right now, all three-month plans are only $15 a month, including the unlimited plan.
All plans come with high-speed data and unlimited talk and text delivered on the nation's largest 5G network.
You can use your own phone with any Mint Mobile plan and bring your phone number along with all of your existing contacts.
Find out how easy it is to switch to Mint Mobile and get three months of premium wireless service for $15 a month.
To get this new customer offer and your new three-month premium wireless plan for just $15 a month, go to mintmobile.com slash 404media.
That's mintmobile.com slash 404 media.
Cut your wireless bill to 15 bucks a month at mintmobile.com slash 404 media.
$45 upfront payment required, equivalent to $15 a month.
New customers on first three-month plan only.
Speed slower above 40 gigabytes on unlimited plan.
Additional taxes, fees, and restrictions apply.
See Mint Mobile for details.
All right, and we are back.
Emmanuel wrote these two stories,
but I'm sure Sam and Jason can pitch in here.
The first one is Microsoft provided gender detection AI on accident.
Sam, so Microsoft previously had an AI that promised to detect certain things after looking at someone's face.
What was it looking for?
Exactly.
Yeah, so it was, this would be part of like Microsoft's Azure face
services, like their face capabilities.
And it looked at a bunch of stuff.
So
it was
inferring things like emotional states, parts of your identity, like your gender, age, whether or not you're smiling,
whether or not you have facial hair, what kind of hair you have, whether or not you're wearing makeup, things like that.
So
yeah,
they had this service that basically could look at you and try to evaluate whether
you were male or female and
like make a bunch of assumptions based on what it sees.
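For context, here is roughly what asking the old Face API for those attributes looked like before the 2022 retirement. The endpoint and key below are placeholders, and the attribute names are recalled from the v1.0 REST documentation, so treat the exact spellings as approximate rather than authoritative.

```python
import requests

# Placeholder endpoint and key; not real credentials. The gender and emotion
# attributes were retired in 2022, so this is a historical illustration.
ENDPOINT = "https://example-region.api.cognitive.microsoft.com"
KEY = "YOUR_AZURE_KEY"

# Attribute names as the pre-retirement Face API v1.0 documented them
# (from memory, so approximate): the service returned guesses for age,
# gender, smile, facial hair, hair, makeup, and emotion per detected face.
params = {"returnFaceAttributes": "age,gender,smile,facialHair,hair,makeup,emotion"}
headers = {
    "Ocp-Apim-Subscription-Key": KEY,
    "Content-Type": "application/octet-stream",
}

with open("photo.jpg", "rb") as f:
    resp = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        params=params,
        headers=headers,
        data=f.read(),
    )

# Each detected face came back with an attribute dict full of inferences,
# e.g. {"gender": "female", "age": 31.0, "emotion": {...}, ...}
print(resp.json())
```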
And it retired those capabilities in 2022 as part of this big, like responsible AI push.
It was this big, like, splashy
news item where Microsoft was saying, oh, we're going to put all these safeguards in for our facial recognition services because we realize that maybe they're not great or they're not working as intended or people aren't using them as intended.
Maybe they're harmful.
And
that was, that was one of them.
Yeah, we made the torment nexus and then we were like, whoops, my bad.
Yeah, you're supposed to make the torment nexus.
And they, of course, did.
Right.
I don't know.
I just find that so funny.
And I know that like the development of AI in a technological sense is so, so quick, right?
You know, every week, every month, whatever, we see something new.
But it's just interesting that 2022, or slightly before that as well, from a policy perspective feels like ancient history
at this point, where they're like, oh, we shouldn't have done that.
We shouldn't have released facial recognition that can identify your gender or emotional state; those things are bad.
And, like, you wouldn't release that today.
You see what I mean?
And I just find that interesting.
Yeah.
And it's like they, they kind of were like, oops.
And, you know, it was like this big positive thing.
It wasn't just like, oh, we did it bad.
It was we are responsible by,
you know, winding down this particular
use, this particular service, because, you know, we're so smart.
And that's such a pattern, I think, in AI in general: you introduce the thing
really fast and don't really think about the repercussions of it, and apparently have no one on your team that has ever experienced any kind of prejudice.
And then you say, Oh, we realized that actually we're so much smarter than everyone because we're responsible now and we're going to do the right thing.
And aren't we the bigger man?
It's such a like, it's such a vibe across the entire industry.
And Microsoft was doing that in 2022, which you're right.
It was
only two years ago, and it feels like an eternity ago.
So consider like
the state of AI even two years ago is nowhere near what it is today.
It's all changed
so, so, so fast.
So, and Microsoft even had, I was looking at this before the podcast because I thought it was interesting, they had like a
"uses" list and then a
"maybe don't use it this way" list on their documentation page.
And it's like avoid using for task monitoring systems that can interfere with privacy.
Avoid using
where,
you know, a human in the loop is needed. Don't use, carefully use, consider using in schools, things like that. It's just a list of things that people are definitely gonna do. It's like a suggestion list: if you have to say don't do it, people are doing it.
I think, on the same token, it should go without saying, but I will say it anyway to be crystal clear: gender detection, like facial recognition and gender detection AI, is a pseudoscience. For one, it is super not accurate, in addition to being
a horrible idea. Researchers who have researched it have shown repeatedly that, one, it's not accurate, emotion detection AI is not accurate. And two, if you're going to use it, the torment nexus, et cetera, if you're going to use it, the only ways that you can use it are to discriminate against people.
Like that is
what it exists for. Yeah, yeah, it's literally the purpose, because if you're trying to
pigeonhole somebody into an emotional state or a gender, that's just literally discrimination, you're picking something there. Yeah, exactly, it's like phrenology but with AI. Yeah, it's algorithmic phrenology, basically. It's
useless. The other thing is that, I mean, Sam basically just said this, but it's like once it's built, other people spin it off and can use it regardless of whether Microsoft
officially releases it or not.
There was a researcher at a school in North Carolina.
I don't have it up in front of me right now, but we covered it at Motherboard who made what they called a gender detection AI.
And what they did was fed it many, many, many stolen pictures from Reddit, from like transgender, like transition progress picture subreddits without any permission from anyone, built what they called a gender detection,
you know, AI.
And like, super bad.
It was really bad.
It became a very bad thing.
They, they didn't store the data set in any sort of like secure way, and it became an absolute mess.
And so it's just like
the reason that people build these things is to
enable discriminatory things.
Yeah.
And as you both allude to, this is why in part Microsoft said, okay, well, if we can't stop people doing bad stuff necessarily, we're just going to stop offering it.
And that's what they did in 2022.
But it turns out
that it was actually still active and accessible by some people who were using, as I understand it, basically an older version.
You know, they were almost like grandfathered into this earlier access.
So as well as that first piece, Emmanuel also published this other one, which is about a project which I find incredibly fascinating.
And the title of the article is, When Does Instagram Decide a Nipple Becomes Female?
And this is a project of Ada Ada Ada.
I hope I pronounced that correctly.
And what she did was basically document her transition
with photographic evidence every day, or at regular intervals, and process those through various AI detection systems.
So there is the Microsoft one in there.
I think there's an Amazon one as well.
And of course, Instagram, right?
Which is what the headline alludes to.
And it's, well, when does this computer system decide that I've transitioned, you know, in the eyes of this not caring machine?
When is my nipple now technically a female nipple?
And I guess, I mean, I mean, to us, it's kind of obvious, like, why you want to test Instagram specifically, because Instagram has a very long storied history with nudity and with nipples specifically.
So, Sam, how does Instagram treat
nipples exactly in its moderation efforts?
Yeah, I mean,
it's just like, it's,
this is such an ongoing, like, who-the-fuck-knows moment for Instagram.
It's like the nipple thing is gone on forever.
Technically, Instagram has policies against nudity and they
don't allow,
I mean, I'll just quote straight from them because it's such a funny line, photos, videos, and digitally created content that show sexual intercourse, genitals, and close-ups of fully nude buttocks.
I don't know how close you need to be, but close-ups.
And it also includes female nipples is what the line says in Instagram's terms of service, but breastfeeding, giving birth and afterbirth moments and health-related situations, as well as breast cancer awareness or gender confirmation surgery or things like that or acts of protest are allowed.
So it's like, when is, it's the
eternal question of the internet: when is a nipple female?
It's an old thing from Tumblr from like 2018.
It's like, female-presenting nipples was like the meta on Tumblr for a long time because they changed their terms of service to say female-presenting nipples were not allowed, but all the other nipples are okay.
It's like when do you define a nipple as being female?
And Instagram makes all of this way more confusing by applying it so irregularly and disproportionately.
It's like sometimes a nipple is okay.
Sometimes it gets you perma banned.
Sometimes it's like you'll see like Playboy models and browsers being like, you know, on Instagram showing almost total nudity.
And then you'll get banned from Instagram for showing like side boob.
It's so confusing and it makes no sense.
And it disproportionately affects trans people, fat people, people of color, and also sex workers and sex educators.
And anyone doing what Instagram says is okay to do on Instagram, like sex education and things like that.
So yeah, that's kind of, that's the environment that Ada Ada Ada was working in.
And I think that's so interesting.
And she found out about the Azure thing because she was using a couple of different facial recognition tools and services to track when those also detected.
Like when, when did they rate her as female?
When did they not?
It was all just kind of like in a spreadsheet for her, which I think is so nerdy and fun.
But yeah, that's that's Instagram's line on it.
And testing it is super interesting.
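Her workflow, as described, boils down to a simple logging loop: run each day's photo through the classifiers and write down what they said and when. Here is a minimal sketch of that kind of tracking; the classify() function is a stand-in, not her actual tooling or any specific vendor's API.

```python
import csv
from datetime import date
from pathlib import Path

def classify(image_path: str) -> dict:
    """Stand-in for whichever service is being tested (Azure, Amazon
    Rekognition, Instagram's nudity filter, and so on). A real version
    would call the classifier; this placeholder returns a fixed result
    so the logging loop runs end to end."""
    return {"gender": "unknown", "flagged_as_nudity": False}

def log_result(image_path: str, csv_path: str = "classifier_log.csv") -> None:
    """Append today's classification to a running spreadsheet-style log,
    so you can later see the date on which a given service 'flips'."""
    result = classify(image_path)
    is_new = not Path(csv_path).exists()
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "image", "predicted_gender", "flagged_as_nudity"])
        writer.writerow([
            date.today().isoformat(),
            image_path,
            result.get("gender"),
            result.get("flagged_as_nudity"),
        ])

if __name__ == "__main__":
    log_result("day_001.jpg")
```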
Yeah, I'll just read out a quote from her, which says, I'm really interested in algorithmic enforcement and generally understanding the impacts that algorithms have on our lives.
It seems like the nipple rule is one of the simplest ways that you can start talking about this because it's set up as a very binary idea.
Female nipples, no, male nipples, yes.
But then it prompts a lot of questions.
What is a male nipple?
What is a female nipple?
For those who don't know, back when we all worked at Motherboard, the technology section of VICE, before we quit to make 404 Media, me and Jason worked on a series of stories about Facebook and Instagram's content moderation.
We got leaked, I don't know, was it hundreds of pages of
very internal content moderation documents.
There was stuff in there about how Facebook has to contend with dick pics.
There was, technically you're allowed to have the face of a world leader with anuses for eyes, or something like that.
I don't know.
But the
anecdote that I
always come back to, and that I always say, because it's the only one I can specifically remember.
There were so many highly specific rules about how its content moderators should interpret this stuff.
And
here is what the rule was, as I understand it.
Okay.
It's like you are allowed to show an anus on Facebook or Instagram if it has been Photoshopped onto the face of a world leader as an act of political protest.
However, if there is a dildo or butt plug going into it, that is not allowed.
Right.
And you're not allowed to do it to people who are not famous.
And you're not allowed to do it to
celebrities if it is not, like,
clear what the political
statement you are trying to make is.
Yeah, this is like pages of documents with examples and like check marks or X marks next to them. I don't know if there is actually that, but that was the vibe, it was like allowed, not allowed. And the example was Kim Jong-un,
and a Trump one, there was definitely a Trump one as well. And the reason I bring that up is because that is a super specific, caveated example, right? You almost have like a flowchart of what is allowed and what isn't allowed. And I find it's interesting that when you then look at the nipple policy on Instagram, it's like, no, female nipple bad, male nipple good.
And it's like so simplistic, almost in the other direction.
Do you see what I mean?
Now, to be fair, I haven't seen an internal Instagram document or Facebook content moderation document in several years at this point.
And I don't remember one specifically about...
nipples.
So maybe there is nuance that we can't see.
You know what I mean?
But on the outside, people
are questioning it because it's basically a black box and everything you said, Sam, about how it impacts other people disproportionately.
Jason, I think you're going to.
I will just say very quickly that
you got those documents during a period where Facebook was trying very hard, like they were investing a lot of money in content moderation, and they were attempting to
have
like fair, like quote-unquote fair rules for everyone.
I just did another big story about Facebook content moderation for us.
About, you know, like, is Facebook even trying anymore?
I don't have internal documents on this, but I've talked to people who work at Facebook, recently left Facebook, so on and so forth.
I think it's fair to broadly state that
so much more is automated now.
Like, those were guides for human moderators who were reviewing things.
I suspect that they have just said fuck it and are like over-deleting things now because they don't super care about
being like fair to sex workers and women and so on and so forth.
I think that they're probably just over-deleting.
And I think that that's happening because this story that we just published got deleted by Instagram and Threads just for sharing the link.
Yeah, when we posted it to Instagram and Threads, they, well, what do you call a post on Threads, a thread?
Did they delete the thread, was that it, or was it?
I posted it on Threads. Yeah, I think everyone did, but I posted on Threads, and within 10 minutes it was deleted, and it said it was, like, you know, graphic nudity or whatever. And then
I appealed that, saying no, this should be allowed because it is, like, political speech, and it's also not graphic, and it's also, that's what the article is about.
And there's actually not a place on Instagram or Threads or Facebook where you can, like, write an impassioned appeal.
They give you like five little check boxes and you click one of the boxes, and I clicked that.
And then like five minutes later, they said,
we have rejected your appeal.
And on Facebook, they limited our account, which I don't think I even told you, but
first time hearing any of it.
Yeah.
I mean, it was, so, we're no longer allowed for the next month to create a group. It's funny, the way that they've started punishing you, it's like first they're like, you can't post on Marketplace anymore, then they're like, oh no. It's just like random little slaps on the wrist, and then they're like, now we've banned your account entirely. Right.
I mean, the groups one is interesting because of course they've had a big issue with groups in the past, and then they try to lean into it. I guess just the last question, Sam, is what do you think Ada Ada Ada's project shows us
about
either Instagram's continued moderation or analysis of nipples, I guess, or even just more broadly, because I feel like
it's still so messy across all of the different services.
It's like you're never really clear where you stand, is sort of my takeaway, but I'm just wondering what you think.
Yeah, I mean, it's,
it definitely just drives home the point that it's all so arbitrary and so confusing for users, and that Instagram is not really putting a whole lot of thought into this individually.
It's like Jason said, they don't really seem to care anymore about whether users, like the average user,
are having a good time at all,
or, you know, they're not looking carefully at each post that they're banning or each account that they're deciding to permanently ban, things like that.
Yeah, I mean,
I think these two stories together are just so, so fascinating and interesting.
I was reading,
Access Now is a nonprofit, like a digital rights organization, and they put out
a statement about facial recognition and how
specifically like gender and facial recognition is really fraught and bad
and reinforces these outdated stereotypes about things like race and gender.
Yeah, I mean,
they gave the example that
if you're trying to catch your flight and you get stopped by these facial recognition ticket services that are everywhere, to check into your flight or to get onto the plane, stopped because the computer thinks that you look too male to be female or you look too female to be male and it doesn't match your passport.
First of all, that's like humiliating and frustrating at
best.
And at worst, it keeps you from traveling or going places, or outs you to people around you, things like that.
So, yeah, I mean, it threatens all of us, things like this.
I think people kind of see these stories and they're like, oh, that's that's like a trans issue.
And it's like, it's really an everybody's problem issue.
And I think we've kind of seen that play out in the last year that these services and these surveillance products are everyone's problem.
It's just they're affecting people in marginalized positions first and more
severely.
And, you know, it's definitely, it's coming for all of us.
It's already come for all of us.
It's, you know, it's something that we're all feeling at this point.
So.
Yeah, I think it's just kind of illustrative of that.
I mean, she has kind of like a playful attitude about it, but
you know, it's a pretty scary and serious thing that she's illustrating through this.
For sure.
All right.
We will leave that there.
If you are listening to the free version of the podcast, I will now play us out.
But if you are a paying 404 Media subscriber, we're going to talk about the world of infostealers.
This is a really in-depth piece I did looking not really just at the malware, but the entire ecosystem and industry around it.
And we're also going to talk about our new partnership with Wired.
You can subscribe and gain access to that content at 404media.co.
As a reminder, 404 Media is journalist-founded and supported by subscribers.
If you wish to subscribe to 404 Media and directly support our work, please go to 404media.co.
You'll get unlimited access to our articles and an ad-free version of this podcast.
You'll also get to listen to the subscribers only section where we talk about a bonus story each week.
This podcast is made in partnership with Kaleidoscope.
Another way to support us is by leaving a five-star rating and review for the podcast.
That stuff really helps us out.
This has been 404 Media. We will see you again next week.
Packages by Expedia.
You were made to occasionally take the hard route to the top of the Eiffel Tower.
We were made to easily bundle your trip.
Expedia, made to travel.
Flight-inclusive packages are ATOL protected.