Do You See What I See? Building AI for All of Us
Transcript
How can we help make stronger communities happen?
At JPMorgan Chase, we invest in the businesses making goods right here at home so the businesses can create more jobs for more people and more goods can get to the communities we love.
Make momentum happen.
Learn more at jpmorganchase.com/impact.
Welcome to Assembly Required with Stacey Abrams from Crooked Media.
I'm your host, Stacey Abrams.
If you're a regular listener, you've heard me mention artificial intelligence a few times.
Since one of our goals on the show is to understand the challenges we face and the tools at our disposal, AI ranks near the top of both.
And for all of the promise that AI holds, it's crucial that at this early stage in its development and deployment, we create ground rules to ensure a truly world-altering technology does not grow unregulated and unchecked.
And don't just take my word for it.
Tonight a stark warning that artificial intelligence could lead to the extinction of humanity.
It comes from dozens of industry leaders, including the CEO of ChatGPT creator OpenAI.
The experts signed the statement, which says mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks such as pandemics and nuclear war.
That letter was signed by 350 industry leaders in 2023.
And yet here we are entering 2025, still waiting for meaningful steps to be taken by our government on AI regulation.
To add to our necessary worries, the incoming Trump administration doesn't seem like it's full of people taking that threat seriously.
Trump recently announced that his new crypto and AI czar, a brand new title, would be investor David Sacks, a longtime critic of what Silicon Valley tends to see as government meddling in tech.
Sacks has also invested in Elon Musk's AI company, xAI.
So, two of the most influential people in the president-elect's ear about the topic of AI are also the two people who profit the most from unbridled, unregulated AI development.
Shocking.
Now, we could go on forever listing the conflicts of interest and the devastating intentions.
But today, we're going to play an interview we recorded just before the election.
Its focus is on one facet of AI, which is sadly more relevant than ever given who's coming to Washington for the reasons I just mentioned.
Hi, camera.
I've got a face.
Can you see my face?
No glasses face?
You can see her face.
What about my face?
That is Dr. Joy Buolamwini, a poet, researcher, and computer scientist, in her 2016 TED Talk.
In it, she shows the audience how a technology for identification functionally cannot see her.
Buolamwini is Black, and it's only when she puts on a white mask, something you can buy at a Halloween store, that the technology detects her sitting in front of the camera.
In her research at MIT, Dr. Buolamwini demonstrated that the algorithms behind facial recognition technology literally couldn't detect her because they've been developed by people who don't look like her.
It's one of the many areas of AI that need a deeper look, the prospect for racial and gender bias.
She gave this phenomenon a name, the coded gaze.
The coded gaze refers to the priorities, preferences, and sometimes prejudices of those who have the power to shape technology.
In addition to the lack of AI regulation, we must also consider the overt hostility that Trump, Musk, and others have shown to the potential harm that AI can create for communities already under siege.
This is particularly alarming given Project 2025's visceral hatred for diversity and its stated intention to strip protections from marginalized communities.
As we prepare for the global AI funding boom to continue unabated, for the Trump administration to take a hands-off, if not aggressively hostile approach, and for the technology to inch closer to ubiquity, we need to understand how to hold AI and the companies, agencies, and organizations that use it accountable.
So please take a listen to this fascinating conversation with the incredible Dr. Joy Buolamwini.
Joy, thank you so much for joining me here today on Assembly Required.
Thank you for having me.
Could not be more excited.
Well, I appreciate that.
What I love about the work that you do is part of the reason for this podcast.
So we have this formula on Assembly Required.
You know, what is the problem?
Why is it a problem?
And then how can we address it?
And one of the reasons I wanted to have you as a guest is because there's this AI conversation happening that for some is absolutely a problem.
And I don't think that's quite right.
You've talked about data being destiny.
You said data is destiny.
Can you explain what you mean in the context of sort of defining the moment we're in when it comes to AI?
Absolutely.
So people are having all kinds of conversations about AI.
What is AI?
And right now, when we're looking at AI, it's really about giving computers abilities we've associated with humans in the past: the ability to communicate, the ability to create things.
And the question is how?
How are these abilities being given to machines?
And the how is this approach called machine learning, where machines are learning patterns from humans and that pattern comes from data sets.
So let's say you want to learn the pattern of a face.
Instead of trying to code each individual way any face might look, the alternative is to say, here's a data set of faces. Or think of it as your diet, really, right? And in some cases you have a very bland diet. And so when we're talking about data being destiny, and we're talking about artificial intelligence that's using machine learning, right now machines are learning from the data.
And oftentimes the past dwells within our data, past discrimination, past exclusion.
And so depending on whatever data we're feeding the machines, this becomes the diet.
And so when I say data is destiny, it's saying that if we have data that isn't reflective of the world, or if we have data that's actually entrenching inequalities, that those are the patterns AI systems are destined to learn and then to reproduce and then to amplify.
So that's what I mean by the phrase of data is destiny.
And we don't want it to be that data is destining us to discrimination, but that will happen if we're not actually intentional about our data diet.
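To make the data-is-destiny idea concrete, here is a minimal, hypothetical sketch, not from the episode: a toy detector learns its "pattern of a face" from a skewed data diet, and the under-represented group is rarely detected even though training looked fine on average. The groups, counts, and threshold are invented for illustration.

```python
# Illustrative sketch only (not from the episode): "data is destiny" as a toy
# detector trained on a skewed data set. All groups, counts, and thresholds
# here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def make_group(center, n):
    """Generate n two-dimensional feature vectors clustered around a center."""
    return rng.normal(loc=center, scale=0.5, size=(n, 2))

# Two groups whose examples cluster in different regions of feature space.
# The training "diet" is skewed: 95 examples of group A, only 5 of group B.
train = np.vstack([make_group([0.0, 0.0], 95), make_group([3.0, 3.0], 5)])
# The world the deployed system meets later is balanced.
test = {"A": make_group([0.0, 0.0], 500), "B": make_group([3.0, 3.0], 500)}

# "Learning the pattern": here, just the mean of everything in the training diet.
centroid = train.mean(axis=0)

def detected(points, radius=1.5):
    """Toy detector: a point counts as detected if it lies near the learned centroid."""
    return np.linalg.norm(points - centroid, axis=1) < radius

for group, samples in test.items():
    print(f"group {group}: detection rate {detected(samples).mean():.0%}")

# Because the learned pattern comes almost entirely from group A, group B is
# rarely "seen" at all, even though performance on the training diet looked fine.
```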
That's a perfect way to lead into the next part of the conversation, because I've known about you for a while and I have really respected the work you've done, but I had a good chance to dive into your fuller story this summer when I was writing my new Avery Keene novel, and it's about AI and healthcare.
And in fact, your book, Unmasking AI, became a foundational text for how I understood my characters and their motivations.
So can you tell your listeners about your first encounter with what you term the coded gaze?
What is it?
How does this connect to data being destiny?
And why was it so important for you to tell this story?
Yes, so some of our listeners might have heard of the male gaze, might have heard of the white gaze, maybe the post-colonial gaze.
Okay, to that lexicon, I'm adding the coded gaze, and it's really about power.
Who has the power to decide the priorities, right? The preferences of the machines that are being created, of the AI systems that are being deployed, and also at times, whose prejudices get embedded into these machines.
And so, my own visceral first encounter with the coded gaze that I truly remember was back when I was a graduate student.
And I was taking a class where we read science fiction.
And then, from that science fiction, you made something you probably wouldn't make otherwise if not for that class in your six-week assignment, right?
So, I thought, oh my goodness, what would be a fun thing to create?
And I wanted to shape-shift, change my body in some way.
Notice we had six weeks, right?
So I was like, hmm, that might be a little bit difficult to alter physics, biology, but what if I could change myself in a reflection?
Right.
And so I started looking at different materials and I found something called a half-silvered mirror, which is think of a one-way mirror.
And it had a really interesting property where if you had something black behind the mirror, it would look just like a regular mirror.
But if light shone through, that light would pass.
So, using that, I realized, oh, I can create like a filter you would see on your phone,
but instead of it being on a video feed, it could actually be in the mirror.
So, I mean, I was at MIT Media Lab.
It's just the kind of place where I wasn't doing this for anything other than, why not?
Let's see if it can work, right?
And so I actually got the effect to work.
And I was using Serena Williams' face because GOAT, GOAT check, that's who I want to be, right?
So I have Serena Williams' face.
I have it.
I think there was a shot of her from Platon, the photographer, and it has this beautiful black background and just her core features.
So that was exactly what I needed.
And I'm having her eyes line up with mine and her nose.
So it's looking good.
But now it's kind of like just having a cutout where you have to put your head right where the face is to get the effect. So I thought, all right, engineering chops, let's go one step further. So this time I said, all right, if I can add a webcam and have some software that can track the location of my face, then when I move in the mirror, Serena's face moves with me. So I get to stay the GOAT, right?
You know, I get to continue having that effect.
So that's what I was playing with. Now, what happened was, once I started playing with the software to track the location of my face, it actually wasn't finding my face.
And I was frustrated. So first I actually drew basically a smiley face on my palm, and I held that up to the webcam, and it detected the smiley face on my palm as a face, but not my dark-skinned face, right? So at this point I'm thinking, okay, anything's up for grabs, right?
And in my office, I happen to have a white mask because it was around Halloween time.
And I grab this white mask.
Before I even put it all the way over my dark face, the mask has been detected as the human face.
And so it was literally that moment of coding in a white mask, coding in white face at what's supposed to be this epicenter of innovation at MIT, right?
I thought I'd arrived at Tech Mecca, woohoo, you know, and here I am wondering, well, why is it that it can detect the face I drew on my palm?
This white mask that is very much not a human, but it has enough of the features, but not my actual face.
And that's when I started asking, are machines as neutral as I had hoped them to be?
And that's what started the research.
Well, you followed up on that initial seminal finding by building research projects.
And you looked at other examples, including what I found to be fascinating research on politicians, especially women.
What did your research teach you about how to understand this current political moment?
That's a great question.
So when we're talking about data is destiny, going back to the start of the conversation, I was curious why my face wasn't being detected, but I also knew that I couldn't just focus on my face alone.
So I started collecting many different face data sets.
And I started collecting face data sets of women in power and men in power, right, based on the representation of women in parliaments around the world.
So when it came to the top 10 in the world, Rwanda was actually number one.
Policies make a difference.
When you say parity is required in the constitution, lo and behold, you see a difference.
All right.
You also had Nordic countries, right, moving towards egalitarian ideals.
The U.S.? No, we were not top 10, top 20, yeah, we were not even in the conversation.
And so, in going through that, it was actually a reflection of what I called power shadows within the data set.
Because I was looking at these data sets and the majority of them were men and the majority of them were individuals with lighter skin.
And the question is, well, why? And it goes back to our actual methods of creating these data sets that then train the systems, which then have the data-is-destiny situation. And so when you look at how we got the data sets, oftentimes for face data sets at that time, it was generally scraping images of public officials, elected officials. And so there, what do we get? We get that power shadow of the patriarchy, stark and clear.
And we're still getting that power shadow of the patriarchy in terms of who is expected to be a leader, who looks like the pattern we've seen in the past, and what happens when somebody breaks that pattern.
Oftentimes it breaks the system in certain ways.
And that's what my research was showing.
So I noticed that, okay, we have these skewed data sets.
So I collected something a bit more inclusive.
And because I had a more inclusive data set, I could actually start testing AI models from some of the leading tech companies in ways they hadn't been publicly tested before.
Because this is what was happening.
You had data sets that were largely male, and I would also say lighter-skinned, so pale and male data sets.
And people were patting themselves on the back because they had done well on that data set, right?
But let's say that data set was 90% either light-skinned individuals and/or men, right?
So then when you had a test of the real world, suddenly the results don't look so great because the actual measures for success were misleading.
And I think thinking about this political moment, we've also had a situation where the measures of what successful leadership look like have been misleading because
we're saying success means you've been elected versus success being you've changed society for the better.
When you fill up with Phillips 66 and Conoco, you're ready to go to the library and get a library card instead of paying for overpriced.
And you're ready to go to the amusement park on a weekday when there's no lines and go
on a six-mile hike you instantly regret.
Only two more miles to go.
Go here, go there, go anywhere with Phillips 66 and Conoco.
With Plan B emergency contraception, we're in control of our future.
It's backup birth control you take after unprotected sex that helps prevent pregnancy before it starts.
It works by temporarily delaying ovulation, and it won't impact your future fertility.
Plan B is available in all 50 U.S. states at all major retailers near you.
With no ID, prescription, or age requirement needed.
Together, we got this.
Follow Plan B on Insta at Plan B OneStep to learn more.
Use as directed.
Halloween's almost here.
And with Target Circle 360, you can get everything you need delivered to your door right when you need it.
Out of their favorite candy?
Ordered.
Need snacks for a party?
Done.
Looking for a backup costume because your kid changed their mind?
Handled.
With Target Circle 360, Halloween fun is covered.
Join now and get same-day delivery all season long.
Only at Target.
Membership required, subject to terms and conditions, applies to orders over $35.
You are an incredibly gifted wordsmith.
So we've got the coded gaze, we've got power shadows.
And as I said earlier, I've known about you for a while, mainly because of your Algorithmic Justice League.
Another fantastic turn of phrase.
It appeals both to my inner nerd and my outer activist.
And to what you were just saying, when you founded the Algorithmic Justice League, you took on Amazon, IBM, Microsoft, and you specifically called out the bias in their facial recognition technology.
And you forced not only a conversation, but real change.
So I want to talk to you a little bit about that.
So first, can you talk about how you decided to launch the AJL and some of your early wins?
Oh, sure.
So Algorithmic Justice League, as I was starting to see these issues within data sets and within AI systems, I'm thinking we need to do something, right?
And we need to do something that's not just going to be about researchers.
We're going to need the storytellers, right?
We're going to need the artists.
We're going to need the activists.
We're going to need the academics, the authors, all of us, just anyone.
I say, if you have a face, you have a place in the conversation about AI, right?
And so for me, Algorithmic Justice League was this way of putting up an umbrella, saying there needs to be some sort of movement for algorithmic justice, and people who are focusing on these issues.
Because when I started it in 2016 and I talked about it on a talk that ended up on the TED platform, it was very much this concept that AI is coming.
And we are overconfident, as my friend Kathy O'Neill likes to say, overconfident and underprepared.
And so my thought was we need to get prepared and we need to get ready.
So let's form an Algorithmic Justice League and also have some fun while doing it. So we definitely lean into the levity, but alongside the levity there is accountability.
And so what I understood as a student at the time, and particularly being a Black woman, was if I just said we have these issues with tech, I did not believe my single story alone would be enough.
I understood I needed to bring receipts, and those receipts were called algorithmic audits.
Algorithmic audits being we've tested the systems, you can test for yourself, these are the issues that we're seeing.
And so the research that I'm probably most known for from MIT was called Gender Shades.
And in that research, I created that more inclusive data set.
and started testing AI systems from IBM, Microsoft, Face++, a billion-dollar tech company in China at the time, and later Amazon.
And I wanted to know how accurate were they when it came to guessing the gender of a face, right?
And long story short, not as accurate for some faces versus others.
Shocking.
What was shocking to me was that I was doing this research as a graduate student, right?
Showing some of the largest gaps in accuracy in commercially sold AI products.
These are arguably tests that the companies could have run internally.
And so I was really curious as to why it hadn't been made a priority.
And later, when the research came out, I heard from senior scholars. Again, I was still a grad student at this point, right?
They're like, oh, we've known about that issue.
So my question was: well, if this is an open secret, why aren't people talking about it, right?
And also, if you're talking about it, but you don't have the empirical evidence, it's easy to say, oh, that's just a one-off.
And so, the research that I did as a graduate student at MIT was to say, here's the evidence to back it up, and here's the evidence on some of the latest AI models that are coming out from some of the largest tech companies.
So, you also can't say, oh, it's just in the research arena, or, oh, they're using old methods and things of that nature.
And so it was really important to me to say that this is where we are in a moment where there's so much excitement about what AI can be.
And here you have Amazon labeling Oprah's face male.
Here you have IBM, right, describing the Williams sisters, you know, with all of these tropes in terms of misgendering.
You had IBM describing Ida B. Wells as a man in a coonskin cap.
All kinds of tropes being propagated by these AI systems.
And so this is why I did the study.
I called it the counter demo.
And I called these counter demos something that's part of a larger exploration of an evocative audit.
So we're just talking about audits, testing AI systems, right?
So the test that I did showing IBM, Microsoft, later on, Amazon, we have gender disparities, we have skin type disparities, and disparities at the intersection, right?
So to be more specific, you might have error rates of maybe 0 to 1% when it came to lighter-skinned males, but you would have error rates over 40% in some cases when it came to darker-skinned women, in commercially sold products.
So this is what I mean by disparities, to put numbers behind it.
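To show the kind of bookkeeping an audit like this involves, here is a minimal, hypothetical sketch: predictions compared against ground truth, broken out by skin type, by gender, and at their intersection. The records and the resulting rates are invented for illustration; they are not Gender Shades data.

```python
# Hypothetical sketch of subgroup-audit bookkeeping: compare a system's
# predicted labels to ground truth, grouped by skin type, by gender, and at
# the intersection. Records and error rates are invented, not real results.
from collections import defaultdict

records = [
    # (skin_type, gender, true_label, predicted_label)
    ("lighter", "male",   "male",   "male"),
    ("lighter", "female", "female", "female"),
    ("darker",  "male",   "male",   "male"),
    ("darker",  "female", "female", "male"),   # a misclassification
    ("darker",  "female", "female", "male"),   # another one
    ("darker",  "female", "female", "female"),
]

def error_rates(rows, key):
    """Return the error rate for each subgroup picked out by key(row)."""
    wrong, total = defaultdict(int), defaultdict(int)
    for row in rows:
        group = key(row)
        total[group] += 1
        wrong[group] += int(row[2] != row[3])
    return {group: wrong[group] / total[group] for group in total}

print("by skin type:   ", error_rates(records, lambda r: r[0]))
print("by gender:      ", error_rates(records, lambda r: r[1]))
print("intersectional: ", error_rates(records, lambda r: (r[0], r[1])))
# The intersectional breakdown is where the largest gaps tend to surface: in
# this toy data, the darker-skinned-female subgroup errs on 2 of 3 records,
# while lighter-skinned males err on none.
```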
And so I realized those numbers were important, but the numbers without stories didn't quite connect.
So we had these performance metrics, but I wanted to go from performance metrics to performance arts.
And so that's why I started things like AI, Ain't I a Woman?, which is a spoken word poem.
And these evocative audits I was doing, they were counter narratives because they were counter demonstrations.
Because if Amazon can't get Oprah, maybe we don't want them selling facial recognition technologies to police.
Well, that is a perfect segue, because you were in grad school when you began this engagement, this counter demonstration.
And for a lot of folks who are in this political moment, who are in any of these social justice moments, there's a fear, especially someone who's coming out of MIT, there's a worry that if you are so insistent here, these are potential future employers, or at the very least, these are massive companies that could block you from future success.
Yet, like the women you named, you persisted.
Why?
How did you reconcile your personal fears of loss with your sense of responsibility?
In some ways, my aspiration to be a poet, literally to be a poet of code helped.
Because my goal when I finished MIT was not necessarily to get a job at a tech company, nor was it to have a research lab that would be funded by one of these tech giants.
But there is a lot of corporate capture.
Many of the research labs that are out there are funded by tech dollars.
So I was in some ways really fortunate to be in a situation where the funding for my lab allowed me to ask hard questions.
And because I had that privilege, I remember as a grad student taking a class over at Harvard Graduate School of Education with Karen Brennan and Karen was asking me, what will you do with your privilege?
And I had this privilege of having the megaphone that was being at the MIT Media Lab at the time, these technical skills, right, honed over time, and also this poetic ability
to communicate.
But once I started seeing like the real world implications, you have police departments adopting facial recognition.
At the time, we were assuming people would be falsely arrested.
That happened.
Now, think drones, guns, facial recognition.
Bad if it works and bad if it doesn't.
Maybe now you have the drones targeting people you want eliminated.
Right.
So it's not even questions about accuracy because we know accurate systems can be abused.
But I want to be very honest, and I get into this in the book, Unmasking AI.
I struggled with speaking up about these issues and also being one of few in a space. There's this question of, man, we just got in here, and it took a lot to get to this space, and there's still so much more you want to do. And I was warned. I was told, if you do this sort of research, you really risk being canceled or being pigeonholed.
And then I was shown kind of the bones of graduate students who tried this sort of thing.
I was not encouraged.
Yeah.
I was not encouraged and I was discouraged, but also many of their warnings turned out to be true.
And you still do this work.
So when people are thinking,
you know, the local police are using facial recognition technology or I'm walking through, you know, the airport and TSA wants to take my picture.
We know that there is a tendency in human behavior to comply.
We don't want to stand out.
We don't want to risk.
We don't want to get in trouble.
And so part of what you are saying is people have to strike this balance between compliance and risk.
And that there's a risk that they won't be seen as they are, that they will be unfairly targeted.
So I want to talk to you about two pieces to that.
One is, can you tell us where this technology is being used where we may not know it?
And as a part of that, how do you move through the world knowing what the risks really are and knowing how you had to push through that instinct towards compliance to move to resistance?
In terms of areas AI is being used that you might not see: one of the things with the initial work of the Algorithmic Justice League is we start with your face.
We start with the white mask.
It's in your face, literally.
But then there are the algorithms, algorithmic gatekeepers you don't see, the algorithms that determine if you get a kidney or not, a kidney transplant or not, right?
You have the algorithms determining who's hired or fired, algorithms filtering out resumes.
Amazon had to shut down an internal recruiting algorithm because they kept seeing that resumes with women's colleges were being downgraded, right?
But maybe if you played lacrosse, that was a bonus. Because why? Data is destiny. They trained on the data of who had performed well and had other signals that weren't just about your ability to perform the job, but how much you fit those who had had the role before. And so those are some of the ways, I think, it can be even more insidious, because you can't see it at work. And so that's why it's even more important to resist these systems that are harmful where you can see it.
So, one place you can see it is at the airport, at the TSA checkpoints.
So, for domestic flights, this is supposed to be a pilot, but to your point, when a TSA officer says step up,
you step up generally, right?
And then not only do you have that power dynamic going on, you have people behind you, social pressure.
I don't know what time you got to the airport, but you might have some time pressure as well.
Financial pressure, these tickets aren't cheap, you know?
So by the time you're going through all of this, right, is this your moment to resist?
Or do you just comply so you can try to fly?
And it can be difficult to make that choice.
And sometimes people ask, well, if I've said, if I haven't complied in the past, is it too late?
I absolutely think every time you say no to having your face scanned by TSA, it's actually a vote for biometric consent.
And why I say this is so many people don't even know you have a right to opt out because of the way it's been implemented.
Step up next, step up next.
And it's requiring you to go against the typical process, right?
And so it's making it even less likely.
And I also understand, I just recently went through TSA.
I had this viral moment where my hair was being inspected down to the scalp.
It wasn't just, we're going to pat you down; it was, we're going to become an esthetician and see if you have flakes in your scalp, or that kind of thing.
And so I personally experience what it is to be singled out and feel harassed in these sorts of situations.
So I completely understand when people say, for where I am right now, it feels too precarious.
Like there's no judgment on that.
But I also feel, for others who have that sense that this is something I can do, maybe you are in a position of more privilege.
And in some ways, because of the work that I do with the Algorithmic Justice League, that's why I got a direct apology from TSA.
Sorry, Dr. Buolamwini, about your recent experiences, right?
Had I been a general traveler, would that have happened?
So in some ways, I do think if you feel you're in a more privileged position, right, it's even more of an imperative to opt out for those who have been robbed of that choice, or even your former self, robbed of that choice.
And then the government doesn't have the best track record for biometric data.
There have been leaks, right? Data has been stolen.
So even if you're like, solidarity is nice, but just for your own peace of mind, you don't actually know where all of that data is going.
And they might say, we've deleted the photo.
This gets interesting.
This is where I put on my tech hat.
You've deleted the photo, but you might have other information you got from the photo before it was deleted.
So just like you have a fingerprint, you can also have a face print.
They haven't been clear about what they're doing with the face prints, right?
So they can be, they can truthfully say they've deleted the image, but it doesn't mean they've deleted all of the data associated with it as well.
And that data can be compromised.
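A small, hypothetical sketch of the distinction being drawn here: an image can be deleted while a template derived from it, the face print, lives on. The embedding function below is a made-up stand-in, not any agency's actual pipeline.

```python
# Toy illustration (not any agency's real system): the photo can be deleted
# while a derived "face print" is retained. The embedding function below is a
# made-up stand-in for a real face-recognition model.
import hashlib
import numpy as np

def face_print(image_bytes: bytes, dim: int = 8) -> np.ndarray:
    """Stand-in for a face-embedding model: derive a fixed-length vector from
    the image bytes. Real systems use neural networks, not hashes."""
    digest = hashlib.sha256(image_bytes).digest()
    return np.frombuffer(digest[:dim], dtype=np.uint8).astype(float)

photo = b"raw pixels captured at the checkpoint (placeholder)"
template = face_print(photo)   # the "face print" kept in a database

del photo                      # "we've deleted the photo" can be literally true...
print(template)                # ...while a matchable template still exists
```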
Which is a perfect segue, because in addition, as you pointed out, this is an issue of privacy.
You know, this use of technology will feed data sets and train algorithms based on who's using the data and what they're trying to learn from it.
Talk a little bit about how you navigate being a scientist who believes in this technology.
And I want to give you a second to talk about that.
What's the benefit of this technology?
And then where do you draw the line of, as you said, you know, how do you decide consent?
And how do you decide how we train things to be better if we don't give them information?
It is very contingent on what we mean when we say AI, right?
So if we're talking about AI, it can mean so many different things.
So when I say broadly, I support the development of AI, this is to say, yes, I support developing technologies where we can explore our capacities as humanity or find ways to address ongoing problems.
So when you have AI being used for things like AlphaFold, let's predict these protein structures so that we can actually do better drug development.
This is literally what my dad does.
He's a professor of medicinal chemistry and pharmaceutical sciences.
So when I was a little girl, going to his lab and seeing the protein folding structures on his computer, this is something that now AI systems are doing that can help him do his work better and then help us be healthier.
So I'm not against those uses of AI.
And I also think there are so many data disparities.
Like let's think about women's heart health.
You know, less than a quarter of research participants for clinical trials are women, but cardiovascular disease, heart attacks, and so forth, they're the number one killer of women.
And so I support an organization, Bloomer Tech, where they're actually, they've developed these bras that can monitor women's heart health.
And so now you're getting this very intimate data that's filling a vital data gap, right, when it comes to health.
But here is what's so important, agency.
You decided to put on the bra, right?
Where we're not having agency is where you have surveillance uses of AI and you don't have a real choice.
And so I think it is possible to opt into a society where we're using AI to solve real and present problems while we're opting out of surveillance.
We don't have to have discrimination or surveillance or the surrender of our data as the entry ticket into innovation.
That just, we can do so much better, right?
And so when I hear, it's like, oh, we want to solve all of these problems with AI, and we're going to collect your creative data without consent or compensation.
It's a disconnect for me, because the mechanism you're using doesn't actually support the outcome you claim.
So let's be clear about the outcomes we're looking for or what our aspirations are and then make sure we have ethical AI pipelines and pathways to get there, which we have to push for.
How can we help make stronger communities happen?
Well, at JPMorgan Chase, we invest in what's working, in businesses that create more jobs, in hospitals delivering care where it's needed the most, in workers building new buildings, bridges, and roads, connecting what we need to where we live.
Make the green grass grow all around, all around, make the green green grass grow all around.
Make momentum happen.
Learn more at jpmorganchase.com/impact.
Halloween's almost here.
And with Target Circle 360, you can get everything you need delivered to your door right when you need it.
Out of their favorite candy?
Ordered.
Need snacks for a party?
Done.
Looking for a backup costume because your kid changed their mind?
Handled.
With Target Circle 360, Halloween fun is covered.
Join now and get same-day delivery all season long.
Only at Target.
Membership required, subject to terms and conditions, applies to orders over $35.
This episode is brought to you by Progressive Commercial Insurance.
Business owners meet Progressive Insurance.
They make it easy to get discounts on commercial auto insurance and find coverages to grow with your business.
Quote in as little as eight minutes at progressivecommercial.com.
Progressive Casualty Insurance Company.
Coverage provided and serviced by affiliated and third-party insurers.
Discounts and coverage selections not available in all states or situations.
The AJL has put out this framework for understanding and improving what AI could be that calls for equitable and accountable AI.
So I want to do a little bit of unpacking.
So number one, what is inclusive AI?
So inclusive AI might sound enticing, right?
It's like, let's make sure, let's get all the data.
Let's make sure everybody is part of AI, broadly speaking.
And at face value, it probably sounds good, right?
But how are you being included, and included into what?
So, if you're being included into surveillance, if you're being included into exploitation, you might want to say no.
And if you're being included without your consent, I didn't want to stand in this line, right?
I didn't want those socks you're selling, you know.
And so, for me, the limitation of inclusive AI goes back to that question of agency, particularly because some of the ways in which you might quote unquote make AI inclusive can mean violating people's privacy or disregarding consent.
One of the examples I write about in the book: shortly after my research, Gender Shades, came out, and people are seeing, oh, we have some power shadows in these data sets.
You had companies like Google saying, we will make more inclusive data sets.
So let's hire a subcontractor.
And guess what?
Subcontractor, they're on the streets of Atlanta getting face data, face photos from homeless people.
So this is why inclusive AI isn't the banner that we go under, but we're actually asking, well, what does it mean to be accountable?
And what does it mean to be equitable?
And part of what that means is agency, right?
People have a voice and a choice.
There's affirmative consent.
If I don't know what's going to happen with this data, or you're just telling me to go through airport security and to scan this QR code to go find more, that is not informed consent, nor can you really affirm your consent in a coercive sort of situation.
And so that's why we shifted from this language of inclusion and also even this language of ethical, right?
You mentioned that I like to play with words.
And what we were seeing with the term ethical AI was a lot of what some would call ethics washing.
Yes, we're ethical, but what are your practices?
So this leads to accountability because you can say ethical all you want, but we're going to say, okay, how did you collect the data?
Did you get consent?
Was there meaningful transparency?
And the biggest thing for accountability that we seldom see is redress.
Oops, made a problem.
It might not be a small oops.
It might be somebody's in jail, right? By mistake, right?
And so being accountable means not just saying we did our best, good luck, right?
It means if something goes wrong, you have to
address and affirm that it's gone wrong.
So part of accountability is saying, yes, we messed up, but you have to also take a step further and look for redress as well.
So you've got some countries like Kenya that have said very hard limits on how their national information can be used.
And you've got the Wild West that is the United States.
Who is doing it kind of right?
I would kind of say the European Union: the passage of the EU AI Act is the most comprehensive AI governance legislation that we've seen, one that actually puts in guardrails and also a risk-based framework for determining what kind of AI systems can and cannot be used.
And I think there's a lot to learn from Europeans in that regard.
I also think even within the US, we do have frameworks.
They have not yet been put into law, but they are reference points.
We have the Blueprint for an AI Bill of Rights, which talks about the need for safe and effective AI systems, the need for notice and explanation, and most importantly, meaningful alternatives and fallbacks.
We did a campaign around the government's use of facial recognition for access to government services.
It meant some veterans weren't able to access their services.
A white man in Colorado, slow internet connection, you know, not the best webcam, not able to get something that they actually should be getting.
And so I think it's so important that as people are excited about the possibilities of tech, we don't lose the reality of infrastructure, right?
It's not being accessed by everybody in the same way.
And so part of what a government, a federal government, local government, state government should also have in place are these meaningful alternatives, for sure.
Okay, so you have been a perfect guest.
You helped me set up the problem, explain why it's a problem, and now we got to solve it.
So what are three things that a listener can do today or, you know, in the next few months?
What can they do to address the concerns of inequitable and unaccountable AI in their daily lives, short of getting into a fight with the TSA?
You can join me on that.
I will say writing Unmasking AI, and even the documentary Coded Bias on Netflix, showed me the power of sharing your story.
So I share that story of an AI failure.
I'm literally coding in a white mask.
And I think sometimes we underestimate the importance of sharing what we've experienced.
But when you share what you've experienced, other people say, oh, me too, right?
I'm not alone in that.
And you also build an evidentiary record.
So they can't gaslight us and say, oh, what you think is happening is not happening when you have all of these stories.
So, we get these kinds of stories submitted to AJL all the time at report.ajl.org, right?
My child was flagged as cheating with AI, but it turns out that it's English as a second language, right?
Not actually cheating.
Those stories are important because it makes it easier to hold companies accountable.
So, the first thing I would say is: please, please don't underestimate the power of your lived experience.
If you are encountering AI bias,
some people will say, Look, I tried to generate an image using generative AI and I got racist stereotypes.
I tried to use a filter to professionalize my photo and my skin was whitened or lightened.
All of those examples are so important.
So share your story, capture that counter demo.
And I think the other thing that I have found to be
really helpful is educating your community.
Because so often, I've seen this time and time again: if you're not a tech bro, you don't need to go to MIT and have a PhD in AI to be part of this conversation.
And when you have more voices saying, no, we actually want it a different way, that's how you build power.
And so building power by sharing these stories with the communities you're part of, I saw this in such a strong way when we were working with the Brooklyn tenants.
Their landlord installed a facial recognition entry system.
They didn't want it.
And some of them started organizing and they found some of my research.
And I had taken a lot of time to try to make it as accessible as possible, explainer videos, walkthroughs, all of this, and they were using it to speak to their elders, right, about what was happening so they could also be educated about it.
So I would not underestimate the power of sharing what you're learning right with the communities you care about.
This was such a pleasure, Dr. Joy Buolamwini.
Thank you so much for spending time with us here on Assembly Required.
Thank you so much for having me.
Each week, we want to leave the audience with a new way to act against what can feel inevitable, an opportunity to make a difference, and a way to get involved or just to get started on working on a solution, in a segment we like to call our toolkit.
At Assembly Required, we encourage the audience to be curious, solve problems, and do good.
Now, Dr. Joy gave us a great primer on biometric agency.
Number one, tell your story.
Number two, capture your images.
And number three, share with your community.
I would say watch the Netflix documentary, Coded Bias.
It is a fantastic entry point into understanding the ins and outs of artificial intelligence and the challenges of the coded gaze.
Prepare for all the awkward holiday conversations you'll be having by boning up on AI right now and you can wow your audience.
Number two, check out Dr. Joy Buolamwini's book, Unmasking AI: My Mission to Protect What Is Human in a World of Machines.
It is out in paperback November 19th.
I love this book.
Pick it up.
Now for solving problems.
At AJL.org, you can find the Take Action tab, which has multiple ways to participate in the movement for equitable and accountable AI.
AI is with us, so let's make sure it works for us.
You can share your story of when you confronted the coded gaze.
You can sign up for their newsletter.
You can make a donation and you can help the organization's work in educating more of us on the future of AI.
To check out their most recent campaign, look up the Algorithmic Justice League's hashtag, Freedom Flyers.
There, you'll find information on knowing your rights with facial recognition at the airport, as well as opportunities to share your experience with it.
And to do your bit of good, you can spread the word about what you've learned on social media by tagging @AJLUnited, and tell your friends to join us here on Assembly Required, where they too can satisfy their inner nerd and their outer activist.
If you want to tell us what you'd like to learn more about or hear about from us, send us an email at assemblyrequired at crooked.com or leave us a voicemail and you and your questions and comments might be featured on the pod.
Our number is 213-293-9509.
Thank you all so much for joining me on this journey to create Assembly Required with Stacey Abrams.
However, with the holidays upon us, we're not going to be meeting next week.
Instead, we will see you again on January 9th.
Assembly Required with Stacey Abrams is a crooked media production.
Our lead show producer is Alona Minkowski, and our associate producer is Paulina Velasco.
Kirill Polaviev is our video producer.
This episode was recorded and mixed by Evan Sutton.
Our theme song is by Vasilius Vitopoulos.
Thank you to Matt DeGroote, Kyle Seglund, Tyler Boozer, and Samantha Slossberg for production support.
Our executive producers are Katie Long, Madeline Herringer, and me, Stacey Abrams.
How can we help make healthier communities happen?
At JPMorgan Chase, we invest in community health care where it's needed most, so doctors can see more patients closer to home and help them grow and thrive.
Make momentum happen.
Learn more at jpmorganchase.com/impact.