Episode 62: AI Slop and the New Fascist Aesthetic with Roland Meyer

1h 14m

Why have our new right-wing overlords taken such a shine to chintzy, shiny AI slop? What is persuasive about these phony, artless, slightly desperate images? How do they originate, and how do they circulate? For this episode, Moira and Adrian are joined by Roland Meyer, who is a professor of digital cultures and arts at the University of Zurich and the University of the Arts in Zurich, Switzerland. If you're trying to picture the kinds of images they're discussing, it might be helpful to check out Roland's huge thread on Bluesky. And if you're trying to follow along with our discussion of specific images, we have collected a bunch of the examples we discuss in the episode here.

Listen and follow along

Transcript

Hello, I'm Adrian Daub.

And I'm Moira Donegan.

Whether we like it or not, we're in bed with the right.

So, Adrian, today we are talking about the aesthetics of AI and their weird, creepy, very uncanny gender politics, which they have inserted into our culture.

So, AI imagery has become a real fixture in like certain kinds of social media.

I see it mostly on Elon Musk's Twitter, which I guess now we are all calling X.

And while it's fair to say that AI slop, this like particular kind of imagery generated by these machineries, is pretty pervasive, I think it's also clear that there are certain populations online that gravitate more towards it, that find it persuasive, or that find it to be like a useful tool for their communications and political projects, right?

And there are others who pretty much reject it.

And there are some places online where like using AI imagery is in fact like taboo, right?

So I'd go so far as to say that the use of AI imagery is becoming itself a kind of signaling tool.

So whether you use it or whether you don't, it says something to your audience about you and about where you stand.

And some camps are really partaking of this technology very liberally in creating images to promote their project.

Yeah.

And it's very telling that like it's very specifically tied up also with certain platforms.

That is to say, Elon Musk loves AI and AI generated imagery and seems to really want to center it on, as you say, the platform he now calls X.

But I mean, I'm guessing that the Venn diagram of In Bed with the Right listeners and Truth Social users is two separate circles.

You never know.

We might have some fans.

Yeah.

Some publicly confused people.

Yeah.

Yeah.

People who got the gift subscription.

Yeah.

But Trump also loves this shit.

Like on Truth Social, it is something that he, well, he frequently goes to.

So I think like one thing that the right, and like the MAGA, Trumpist right, really loves to use AI for is not exactly like disinformation, right?

It's not exactly meant to deceive.

Sometimes, but usually it stands in more as a kind of like wish fulfillment or like fantasy illustration.

Yeah.

So I wanted to send you guys this tweet from the White House that came out, I think last week.

A White House-issued tweet that features this AI-generated image that shows Donald Trump both as a king and also apparently as Time's Person of the Year, which I imagine for a king would be a little bit superfluous, but he seems to want both.

And it's meant to both congratulate Trump or like shape the world he's actually living in and also sort of illustrate a desired future, right?

And it has, well, we'll talk about the actual aesthetics of these images in quite some depth later, but very noticeably, it simulates brush strokes.

It has this kind of, I would say, 1950s, 1960s Robert Moses look to it.

And behind it is a kind of Manhattan skyline that seems to be, frankly, from another era. That's not what it looks like now, I don't think.

This is very 1950s.

Yeah, there's like a rosiness and hyper-reality on like Donald Trump's face in this image.

Like what I have said about AI before is that a lot of the aesthetic references that this imagery creates seem to be a cross between like the nostalgia and sort of unreality of like a Norman Rockwell painting and the like vulgar hyper-reality of like pornography, right?

It's like drawing from very ideologically specific image pools for its references.

Yeah.

There's a famous line from Walter Benjamin that I keep coming back to when I look at images like this.

Benjamin said that fascism consists of the introduction of aesthetics into politics, to which communism responds by politicizing art.

And yeah, there's a very specific aesthetics, I think, at work here.

And I think this line offers a clue as to why AI slop specifically is becoming so ubiquitous during this particular historic moment.

And to discuss that, we have a guest today, which is very exciting.

Our guest today is Roland Meyer, who is professor of digital cultures and arts at both the University of Zurich and the University of the Arts in Zurich, Switzerland.

And he's been doing amazing work on the aesthetics of AI-generated images and their uses.

And I found his work on this absolutely essential in understanding what the hell I'm even looking at here.

Now, one thing I should say, Roland both teaches at a University of the Arts, meaning he teaches practicing artists, and is partly trained as an art historian.

So today we're not going to be talking so much about the economics of AI slop.

We'll get to that at some point.

But it's about the uses that it's being put to, what audiences it gathers around itself, and what those audiences get from it.

So welcome to In Bed with the Right, Roland Meyer.

Yeah, thank you for having me.

It's exciting.

Roland, you told us before we started recording that you're actually a Patreon subscriber.

Yes, I am.

A big fan of the podcast since almost the beginning.

And yeah, love what you do here.

It's really great, and I love supporting it.

Well, we really appreciate it.

Adrian is like leading our domination of the German language media market, I think.

He's just going to be like our crusading spearhead to convert all German speakers into feminist radicals, for which I'm very happy.

But it also means that it brings us these wonderful people like you, Roland, who I'm so excited to talk to.

Thank you.

Yeah, so maybe let's start.

This is almost like a Philomena Cunk interview.

What is AI slop?

Is every AI generated image slop?

How do you stake out your topic?

Is there a taxonomy of AI?

What is it that you're looking at?

What are we looking at when we look at AI images?

I think not necessarily every AI-produced image is slop.

Obviously, slop is what you see on social media platforms: very quickly generated, meme-like AI images, clickbaity images.

So there are of course artists who are using generative AI, different kinds of models, different kinds of tools, in very ambitious ways.

There is a whole discourse on what that means and whether it's correct or not.

But there are different uses of so-called generative AI, and slop is kind of the cheapest way of using it, for very quick reactions most of the time.

So these images are both used as reactions to current events, like people producing images of the Hollywood sign in flames, because they have the urge to somehow visualize what is happening and are not content for some reason with the images that already exist en masse.

And these images are also meant to solicit reactions from online audiences.

So to get clicks, to have people sharing them. And that's also a business model, in part, that has been described for the kind of Facebook AI slop, the Shrimp Jesus and so on. These are actually produced mainly in the Global South, by people who make a living producing AI images that both people and bots react to. But it's the people who actually make the money for them, because they are then sent to phishing websites or shown advertisements and so on.

So that would be AI slop.

And it's, I think, an ever-growing niche within AI-generated image production.

So on a totally pedestrian level, maybe walk our listeners through it, those who've been spared some of this stuff.

I'm told that we have listeners who are not on X and who are therefore maybe not seeing these things constantly.

How do these images come about?

You already alluded to who actually makes them.

What's the life cycle of such an image?

By the time someone sends it to me and is like, hey, can you believe what Trump now posted?

At that point, how old is this image, right?

Like, and how sticky are they, right?

Like, memes we know have extreme stickiness.

How sticky are these images?

Have they been circulating around far-right message boards for weeks by the time I see them?

Or have they been, as you say, crapped out by a content farm in like Albania 12 hours ago by the time I get them in my feed?

I think, but that's a bit of speculation on my part because I haven't done empirical research on that.

I think the life cycle is, as you say, a bit shorter than like memes, which have this kind of ongoing attractiveness because they are a kind of template that can be used to produce endless variations.

But you have the kind of same thing also with AI-generated images.

You have all these kind of meme cycles where people react to images that have been shared widely, and then they produce endless variations of that, also very much with right-wing or neo-fascist imagery. So there was a whole wave in Britain last year of people posting this kind of content that would show how a Britain that never was, in some kind of nostalgic past, is now under attack, of course, by foreigners, migrants, Muslims. And then people used this kind of template and used the hashtag Remember England to post ever more absurd versions of an England that actually never existed anywhere, with Britons on the moon and people having the Union Jack on everything you can imagine, and this kind of nostalgic patriotic imagery running wild.

So there was a kind of meme-like image reaction chain that I think went on for quite a while.

But it's not like these now classic memes that are with us for decades and give us a template to express certain ideas still, although we all know them.

Also, you think of the Balenciaga Pope, that was a big thing for a few weeks, but now nobody would make another Pope AI meme, I guess, or hardly anybody.

Right.

So one thing I think this is driving towards, you know, AI isn't just about a technology.

It's about many things, right?

Like it's not about the possibility of generating these kinds of images.

As you're saying, this technology is, well, I guess I don't even know how old it is, but it's not that old.

And yet already we have a pretty good kind of implicit taxonomy of like the kinds of things that it is used for, the kinds of people it appeals to, the kinds of stories it's used to tell.

And I think that's key, right?

Like when you look at AI, you're not looking just at a technology and saying, well, this can be used for anything.

This actually helps certain kinds of content proliferate, right?

I would say so, but I think it's important also to look at the technology on a very kind of basic conceptual level and think about, okay, what does it actually do, how does it function, and what is it based on.

And for me, one way of thinking about AI-generated images is as a kind of pattern recognition in reverse.

So in pattern recognition, object recognition, facial recognition, you kind of label things in images, right?

You train these systems to label every cat image out of millions of images of cats and dogs and whatever.

So you have to train them with images that are already labeled as cat images.

And now you can turn around this process and say, okay, give me a cat image.

I give you a million cat images, produce me another cat image that kind of looks like all the cat images that you've been trained on.

And this, I think, then also explains what makes this attractive, what it can accomplish, and where it fails.

Because what it does is it basically starts with noise, with an image where you can see nothing, and then it tries to find patterns that it has learned from billions and billions of images scraped from the web.

Many people say stolen from the web, whatever.

But it tries to find the patterns that are already associated with a certain label, with a certain text, with a certain description, with a prompt, and then tries to amplify that.

And that is kind of a process that goes step by step iteratively, and it makes the image ever more readable and ever more legible.

In every step, it's becoming more of a cat image.

It becomes more of an image that you can read as the visualization of that concept.

And of course, these concepts come from all over the web.

They come from social media, they come from our whole visual culture, from the whole archive of digital visual culture.

And all the stereotypes, all the clichés are very much baked into that.

And not only are they baked into that, they are amplified within that process because the whole process is a kind of optimization of the image.

The image becomes ever more like what you prompted.

So you get a visualization of these formulated written concepts in the form of a visual cliché that is drawn from billions of images.

That makes it attractive for some purposes, but there is already very much a kind of ideological bias in that.

And I think we have to talk about that.
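To make Roland's description of "pattern recognition in reverse" concrete, here is a deliberately tiny, hypothetical sketch. It is not a real diffusion model and not any actual library's API; the four-"pixel" cat images, the scoring function, and every name in it are invented for illustration. The idea it demonstrates is the one from the conversation: a recognizer scores how well an image matches the statistical pattern learned from training data, and generation runs that backwards, starting from noise and nudging the image toward the pattern at every step.

```python
import random

# Toy illustration of "pattern recognition in reverse" (all names and
# data hypothetical). A recognizer scores closeness to the learned
# pattern; generation starts from noise and amplifies that pattern.

def learn_pattern(training_images):
    """'Training': the pattern is just the pixel-wise mean, the cliche."""
    n = len(training_images)
    return [sum(img[i] for img in training_images) / n
            for i in range(len(training_images[0]))]

def recognize(image, pattern):
    """Recognition score: negative squared distance from the pattern."""
    return -sum((p - x) ** 2 for p, x in zip(pattern, image))

def generate(pattern, steps=50, strength=0.1, seed=0):
    """Start from pure noise; each step nudges pixels toward the pattern,
    making the image ever more legible as 'a cat image'."""
    rng = random.Random(seed)
    image = [rng.uniform(0, 1) for _ in pattern]
    for _ in range(steps):
        image = [x + strength * (p - x) for x, p in zip(image, pattern)]
    return image

# Three hypothetical "cat images" of four pixels each.
cats = [[0.9, 0.1, 0.8, 0.2], [0.7, 0.3, 0.6, 0.4], [0.8, 0.2, 1.0, 0.0]]
cliche = learn_pattern(cats)
out = generate(cliche)
```

The toy makes the amplification point visible: after enough steps, the generated image is closer to the statistical mean than any real training image is, which is exactly the optimization toward cliché being described.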

Yeah, there's a really interesting point that an AI researcher at Stanford once made that I keep flashing back to.

He said, look, if you train something on the past, it will repeat the patterns of the past.

If you think about the patterns of our past, he's like, it's not shocking that this thing is pretty racist, because you fed it on what's available and what's especially freely accessible, which tends to be older things.

And so, yeah, it repeats biases that are baked into the digital record that we've assembled over the last 30 years.

And the generative quote-unquote AI promise is basically, as you say, pattern recognition in reverse.

A lot of these, right?

DALL-E, Stable Diffusion, Midjourney, are just literally that: text-to-image generators, meaning they break down my prompt into, as you say, the patterns that they will then turn into an image based on what they've been trained on.

Right.

And then the danger, I think, that you guys are dancing around, but I'm just going to make it really explicit, is that the technology enables the solidifying of these conceptual categories that then constrains them in the future, right?

If we're confined to generating from what we have recourse to in the past, that is a set of conditions that is prohibitive for innovation.

Yeah, and it has this kind of cliché.

I mean, I think the fact that we're getting to the word cliché is not an accident here, right?

Like when we look at statistical distributions, right, like these things are not trained on Michelangelo pictures because Michelangelo pictures are likely a very small fraction.

It'll be dwarfed by pictures that we've uploaded on social media, meaning you're going to get convention and cliché almost by design.

Yes.

And it also means that the kind of imagery that is uploaded most is the kind that's going to be reproduced most, right?

Like just in terms of sheer quantity, whatever sort of pictures are online in this training data set, which is like most of the internet now for a lot of these technologies, that is what's going to feed into this style of imagery and inform it in the future.

So like whatever dominates our internet now or whatever dominates our visual space now, that is what is going to be reproduced in the future.

Absolutely.

But in a way, it's even worse, because these models, like Midjourney and Stable Diffusion, are kind of fine-tuned to a certain aesthetic. There is a great paper by Jer Thorp and Christo Buschek, who really got into these data sets. There are special data sets for the aesthetic refinement or fine-tuning of these models. These are much smaller data sets with images that have a very high aesthetic score, and that score is a prediction by an AI of which images are most attractive to people.

But people, in that case, are the people who produced the training data for that kind of predictive AI, people who already rated images online. And they could show, yeah, well, the people behind these ratings are a very small demographic of mostly white, young, male, North American, middle-class, very online guys, and their aesthetic expectations are baked into these models as a kind of standard aesthetic of what makes not only a cat image, but a beautiful cat image.

And that is not only kind of the statistical mean of what's on social media, but a very kind of specific aesthetics.

And that I think also accounts for the specific kind of glow and this kind of filter aesthetics that you see a lot, the shininess, the game-like and fantasy-like image worlds that are predominant in this kind of image production.

So there is like the whole web in all its aesthetic forms that informs this, but also a very small subset curated by the preferences of a very small group.
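A minimal sketch of the curation mechanism Roland describes, with the caveat that everything here is assumed for illustration: this is not the actual LAION or Midjourney pipeline, and the feature names, thresholds, and data are made up. The point it shows is structural: a score predictor trained on one small group's ratings decides which images enter the fine-tuning set, so that group's taste becomes the model's "standard aesthetic".

```python
# Toy sketch (hypothetical mechanics, not a real pipeline): a predictor
# trained on one demographic's ratings filters the fine-tuning set.

def aesthetic_score(image, rater_taste):
    """Predicted rating: similarity to what the raters already liked."""
    return -sum((t - x) ** 2 for t, x in zip(rater_taste, image))

def build_finetuning_set(images, rater_taste, threshold=-0.05):
    """Keep only images the predictor scores above the threshold."""
    return [img for img in images
            if aesthetic_score(img, rater_taste) > threshold]

# Features might stand for, say, (glossiness, saturation) of an image.
rater_taste = (0.9, 0.9)  # shiny and saturated: one small group's taste
corpus = [(0.9, 0.8), (0.2, 0.3), (0.85, 0.95), (0.5, 0.5)]
curated = build_finetuning_set(corpus, rater_taste)
# Only the shiny, saturated images survive into the fine-tuning set;
# everything that diverges from the raters' taste is filtered out.
```

That recursive narrowing, the whole web feeding the base model but a tiny rated subset steering its final look, is one way to account for the uniform glow and shininess discussed above.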

Yeah, which the internet has been for a long time, but which was never, I think, quite as visually spectacular, right?

This was true for Yelp, or even Google Maps, which initially mapped places that Google programmers tended to like, right?

Like you could find your fancy coffee shop.

You couldn't find the handicapped accessible soup kitchen or whatever.

That wouldn't be on it because they didn't care.

Part of what we see in AI slop, I think, is exactly the fact that the democratizing functions of the internet were never quite what they were cracked up to be.

This was always the playground for the kinds of people that, yeah, as you say, look at the Elon Musk Roman centurion picture and go, oh, isn't that nice?

I want to print this one out for myself.

That's a spoiler, Adrian.

You're jumping ahead of me.

Yes, yes.

So I just wanted to ask you personally how you came to study these.

I mean, it is a funny...

So I'm a literature professor, and I try not to engage with large language models, because it's basically taking the thing I love and spend most of my time analyzing, and giving me a bizarro-world, sloppified version of it.

Some colleagues find it deeply interesting.

And I'm like, no, it's literally like if I had a stroke and then I wrote a bunch of texts, it would look kind of like this.

And, you know, I don't need to explore that.

How did you, as someone who's, as you say, partly trained in art history, gravitate to these images?

What drew you to them?

Yeah, basically my background is in what in German is called Bildwissenschaft.

So that would be like a parallel project to visual cultural studies.

So opening up the field of art history towards popular images, scientific images, like a whole range of media images and so on.

But with a kind of very specific twist in the German discourse, in that it focuses very much on what an image is, like in the singular. What does the image do?

What's the power of the image?

And what always interested me more is, okay, but what do many images do?

How do people operate with images?

What kind of functions do images have?

Especially in a context like surveillance or identification.

So that's actually what I wrote my PhD on.

And then of course you get into a history of facial recognition and its early beginnings.

And for me what then interested me is how with social media you get large amounts of images, huge image populations that become a kind of resource of information for the training of facial recognition systems.

So Facebook was among the first companies developing a quite efficient facial recognition software, because they already had these masses of labeled facial images, of course, of all their users.

And they also provided a whole environment of surveillance that these algorithms could then be used upon.

And so that interested me.

How do images become this kind of resource of information?

And then when these things became popular, like three years ago, for me, it was the next step of that.

Like you have the surveillance capitalist business model that extracts information from large populations of images, and now it is turned into the production of ever more images.

And that kind of fascinates me.

Not so much the single images and what it means, but really these kind of operations that go on with images in a networked environment.

I mean, it's interesting that you mentioned scientific images.

I've been fascinated with those myself, especially when it comes to things like phrenology, craniometry, and all these 19th-century pseudosciences, or the way evolutionary science in the 19th century operated with images where the facts underlying them were frequently true, but the visual presentation made suggestions that just are not scientific.

The whole thing is about plausibility, about how images make things plausible to people.

And that seems to me, I can see how you get from that to AI, because part of what AI is doing is sort of pushing the frame, or moving the Overton window, on what seems plausible, doesn't it?

Yeah, and there is also a kind of even more direct link, because in, like, 19th-century phrenological, physiognomic image making, think of Francis Galton and his composite portraiture, there is this fascination for photography as a way of capturing data that can then be used for statistical purposes, to then visualize, for example, the mean criminal, the kind of ideal face of the criminal.

And you have this completely hallucinatory but very influential idea that you can use photography as a statistical practice.

And that is basically what pattern recognition AI and also generative AI are: a kind of statistical view of images that then becomes productive.

And of course, it's also used for the same kind of phrenological, physiognomic, race science purposes now.

But also this kind of logic of labeling, discriminating people and then trying to get their ideal statistically valid kind of objectified image that also drives some uses of generative AI.

And I think we will also discuss some examples of that.

Yeah.

So I think maybe to make explicit something we've been sort of alluding to a bunch.

What makes AI slop hard to talk about is that AI-generated visuals indeed look like most of our images, right?

They in some way highlight the conventionality.

They make conventions obvious by overdoing them, right?

You write in one of your wonderful threads on Bluesky that AI images match but overfulfill our stylistic expectations.

And I think it seems to me that our time just generates these insane amounts of images, and we've become, on the whole, basically less sensitive to how standardized they really are, right?

And that they discipline our gaze in certain ways, that they get us to expect certain things, supply certain things.

And I think what you were saying about phrenological images, what you were saying about physiognomic images is exactly that.

These were used basically to train people in scrutinizing others in real life in certain ways.

And AI, it seems to me, derives a lot of its visual plausibility from the fact that so many of our images were normed to begin with.

That is to say, this is not really Midjourney screwing with our perceptive apparatus.

It's Midjourney exploiting the fact that the previous 10 years of internet pictures have become so normed, so statistically graphable, basically.

Have you guys ever heard of Norma and Norman?

The statues?

No.

Oh my God, they're great.

Norma and Norman were these two statues of the statistically, perfectly average man and woman that were brought around the country to like county fairs in the early 20th century in the U.S. as part of like eugenics programming, basically.

And like the idea was that the average was in fact aspirational, right?

And fairgoers at these county fairs that would have these like eugenics pavilions would be encouraged to model themselves after Norma and Norman.

And of course, those statistical data sets that were used to create the average of Norma and Norman, in fact, excluded like everybody black, for instance.

They were like statistically very tricky.

But there's something similar happening with AI, right?

In which this homogenizing force is trained on a lot of exclusions.

And I think there's something about our real world even now that's furnishing this kind of homogeneity of images that AI then like feeds off and mimics, like how every actress in Hollywood has the same face now, because they're all undergoing these standardizing cosmetic procedures that homogenize their facial structures across what would otherwise be a lot of just natural or ethnic or familial difference, right?

So like AI's stylistic repetitiveness, I think we can see that as just a continuation of the same meta trend.

But I also think this might be a function of the maturing technology, right?

Because one of the earliest observations that I had about AI imagery was like the uncanniness of things it got wrong, and its like tendency to fail at its attempts to imitate.

The extra fingers.

Yeah, like the extra fingers, the extra teeth, the weird like misplaced shadow of an eye like next to the nose or something, like its excess of the features that it was unable to standardize, right?

And that was something where the technology's attempt to like be very loyal to repetition and reproduction was also ironically what created this monstrosity or these monstrous images, right?

Yeah, I mean, I always think of the fact that, is this true or is this apocryphal, that when he cast Peter Lorre in the film M, the director Fritz Lang apparently looked at Cesare Lombroso's compendium of criminal faces, and he never told Lorre.

Lorre's like, how did you find me?

He's like, oh, no reason.

And he'd gotten him out of a catalogue.

He's like, this guy looks like the criminal face.

It's, I mean, it might be apocryphal, but it is this really interesting thing where standardizing practices and our visual media sort of mutually reinforce each other, right?

Where one lends credibility to the other, and then that lends credibility back onto real life.

So maybe we should look at an actual image.

There's so much to choose from.

But we thought we'd look at first, this is one, I don't know if you, this is one you've written on before.

This is from a Twitter blue check user.

No shock there with a reply from Elon Musk from November 2024.

Ironically, it is a picture of Elon Musk with very floppy hair that does not look transplanted at all, in front of a NATO flag and in a, I would say, Roman centurion uniform, although I think we'll be nuancing that later on, with the caption, thank you, Elon, for making the West great again, crossed-swords emoji, fire emoji.

And Elon Musk replied, I've a centurion.

Cool.

Yeah, what do we say about this?

Let's do some art history here, people.

No, what I found fascinating about both Musk and Trump is how they love AI.

I think that's because AI loves them also, because it's so easy to make a Musk image, because there are so many images of him already in the training data set.

It's quite easy to produce a Musk-like face.

Much easier than, I don't know, a portrait of Adorno, for example, which you don't really get.

You get a kind of, I don't know, old, middle-aged philosopher guy who maybe has some likeness to Adorno, but he's not Adorno.

But with Musk, you always get Musk, you always get the Pope, you always get Trump, of course.

But you get him here in an extremely exaggerated, masculine form that obviously also flatters his idea of how he should look.

So it's him becoming his own cliché, mixed with these kinds of gender clichés that are baked into the technology, which are here performed or shown in the image.

And then it's combined with extremely readable, legible, very obvious symbols, put together in a way that you can almost read the prompt from the image.

I mean, there is no ambiguity in this image.

It is kind of the visualization of Musk in a centurion uniform standing in front of a NATO flag.

That's it.

Yeah.

One thing that I think is worth talking about is the breastplate, which I think betrays another thing.

As you say, you can kind of read what this AI was trained on, which is it's a superhero costume.

His pecs seem enormous on this thing, which is true of the bat suit, I think, historically.

Again, I'm not a specialist on Roman armor, but my guess is this is trained on Marvel.

This has real Thor and/or Captain America vibes rather than Russell Crowe in Gladiator vibes, let's say.

Or maybe both.

I think the point is that it's a kind of synthesis, a combination of the two, and that for these models, all these kind of visuals exist in the same space.

There is no categorical difference between a 19th-century history painting or a film still from some Marvel blockbuster.

they all are sources of repeatable visual patterns.

And if they are similar enough and if they are tagged with similar enough text, then they form the space of possibility in which these technologies operate.

So I think the important point to be clear about with these images is that they are based on this kind of huge archive of visual culture that is completely messy and completely flat.

Everything is kind of in a neighborhood to everything else, without any kind of categories, of high and low, or of the historic and authentic versus pop culture. That doesn't play a role in the logic in which these models operate.

Yeah.

So my guess is whoever made this, as you say, we can read the prompt.

We can imagine what the prompt was.

It was probably: Elon Musk, Roman centurion uniform, in front of gold-embroidered NATO flag, for some reason.

But my guess is that this person ran this a bunch of times and picked the ones that they like.

Because it's very noticeable.

I mean, again, like from a physiognomic standpoint, this feels like it was designed for exactly the thing that happened, namely that Elon Musk would react to it, right?

The man famously has a thing about his hair.

His hair is gorgeously floppy in a way that poor Elon Musk's hair hasn't been since his mid-20s.

His chiseled jawline is absolutely what Elon Musk wants himself to look like, very, very obviously, if you watch his repeated surgical interventions.

That's the face he's aspiring to.

Meaning, it almost seems like whoever generated this image basically kept hitting return until they got an image that's like, oh, Elon's going to love this, right?

As opposed to one where he might look the way a 50-something-year-old ketamine user really does look, who hasn't had a good night's sleep in 10 years and whose body is starting to catch up with his rotten soul.

It does feel like it's both, as you say, completely statistical, where everything is the same, but then ultimately, it's almost like a dating site image, right?

This image was put out there in order to ensnare exactly one guy, right?

And that's what it got.

It wanted that reply, which is probably all that user needed to monetize their blue check, right?

So they get money for this engagement to get 100,000 retweets.

And the 20 seconds they spent making this thing paid off, right?

So like independent of the politics and the like instrumentalization of these images, like just for a second, something I've noticed about like this new genre or like generation of AI imagery is that they look realistic and unrealistic at the same time.

And Roland, you have talked about this as a kind of like platform realism.

Could you tell us what that means?

Yeah, that's actually a term I borrowed from a colleague of mine, Jakob Bilking, and then tried to run with it.

And it's an attempt to synthesize a couple of observations.

So first of all, these images are made for being shared on online platforms.

And that's a lot of what their purpose is.

They are very much based on online platforms, on the content that is produced there and that is then fed into the training of these models.

And as I already said, the kind of aesthetics of these images also relies on feedback mechanisms that come from platforms.

So every image that is clicked, that is shared, that is liked, that is upscaled, tells these companies what users expect, what kind of imagery, what kind of aesthetics, and that can be then turned into these kind of recursive feedback loops that produce a very certain kind of aesthetics.

And the realism part, I think there is a couple of things there.

The one is that most of these images are neither photographic nor painterly, but somewhere in between.

So they imitate a painterly practice that is already imitating photography in a way.

So you even mentioned Norman Rockwell, that seems to be the kind of the standard reference almost for a lot of these images, or somehow realist painting in a 19th century tradition.

I took this kind of realism notion more than from socialist realism.

There is a very interesting observation by Boris Groys on socialist realism: that, you know, in the Stalin era, the images of socialist realism were not meant to depict the realities of really existing socialism, but these were images where

you saw people that looked like real people, but the people were actually incarnations of concepts, of historic forces, of social classes, and so on.

So kind of ideas dressed up as people.

And we have talked about that.

It's the same with AI.

You have this kind of visualization of concepts.

So that's not new with AI.

You already have that with stock images, for example.

So stock images also meant as visualizing certain ideas, values, concepts, and whatever.

But the difference now is that you have this kind of generic imagery, like stock images, but you can generate it within a second for the concepts you choose and choose to recombine.

So you won't find in a stock image library an image of Elon Musk as a centurion, but as long as you can type it, you can get it.

So you get a kind of instant stock image in this kind of...

strangely painterly, realist, quasi-photographic style that is attuned to certain expectations of realism in terms of, okay, you get the right amount of fingers on your hand and you get lights and shading that is somehow plausible, but still it looks too shiny, too glossy, too whatever.

But that obviously also caters to certain aesthetic expectations.

That whole package I call platform realism, this kind of generic aesthetic of a second order that is already based on images that are already generic in a way, like stock imagery and social media platform content.

This like slippage between the imaginary and the sort of like recourse to what's already been produced brings us, I think, to AI's use by like proponents of the trad movement, like various social and gender traditionalists.

And this is really like our meat and potatoes, Adrian.

This is the shit we've seen.

Yeah, we made it through half an hour without mentioning the trad wives.

I was getting worried for us, but yes, that's where my mind went to.

Because there's like this deeply traditional visual vocabulary, right? And like, duh, that's because this tech is trained on these homogenized, highly conventional sets of images. But then it's also strange and uncanny because it's stripped of context, it's stripped of actual human meaning-making practices, and it's stripped of like what we might think of as the elements that add authenticity to history, right? So Adrian, you and I talked about this one. It's actually a short video that was posted on Twitter. We talked about it in our very fun conversation with Matt Bernstein for his podcast, A Bit Fruity.

And it's a video I saw originally posted on X.

It's about six seconds long by a Twitter user named Elijah Schaffer or Schaefer,

who says, This type of content awakens in a man something so primal that not even an OF model in lingerie could compete.

And then, Adrian, what is this video depicting?

So it's a, I'll go back to your description of it, which was fantastic.

It is a young woman slash girl, more shading into girl, I would say, in traditional garb, kind of grays and a little bit of purple.

Yeah, there's an apron and puff sleeves involved.

Yeah.

It's like a pinafore situation.

Yeah, demurely holding a bowl of four eggs with a bunch of chickens behind her.

And I think your joke was, well, yeah, because she is an OnlyFans model in an apron.

This is AI, right?

I'm almost certain that it is.

I believe it's either AI or it's partaking of so many of the conventions and aims of AI imagery that it almost doesn't matter.

But it looks like AI.

The movements are very strange.

Either the camera did something odd or this is, in fact, an AI image.

It's not a way you would actually move if you were in a body that you inhabited for your own purposes.

Maybe if you are what looks like a trafficked Slovenian teenager who has been slapped into an apron and put in a yard with a bunch of chickens, maybe you move in that stiff and self-conscious and unnatural way.

If you're in this image, reach out to us and be like, that's how I walk.

Yeah, like, this is you blink twice if you need help.

But like,

we've talked about how AI has trouble with like periodization and time, right?

It doesn't traffic in history.

It traffics in like pastness or like old timiness.

I came across a description by a writer named Mike Caulfield, who characterized Midjourney specifically as producing a kind of history slop that, quote, sucks history through a straw.

I think that's pretty good.

That's good.

And so, this like Elon Centurion image, which Musk clearly read as Roman, the armor actually looks more like something you'd see in like a Warhammer video game.

Right.

And Adrian, you and I have talked about this concept from Alexandra Minna Stern, who talks about right-wing aesthetics as partaking of something she calls archeofuturism.

So, like, these people like Musk, this like apron girl with the eggs, they don't look like actually period.

They don't look like archaic or like historically accurate.

They look like old-timey and like futuristic at the same time.

Yeah, there is this really interesting kind of meeting of the aesthetics.

And I think that, Roland, you would probably say this is about the fact that video games are overrepresented in the image databases on which these things are trained.

That's my guess from what we've been saying so far.

That there is this very strange combination.

It's not that the AI doesn't care about period, it seems to compulsively combine the futuristic and the archaic.

Does that seem right?

I'm not sure it does it necessarily.

I think that is an aesthetic choice of people using it for that purpose.

And it makes it very easy to produce these kinds of images because,

as we've talked about, within the, what they call, latent spaces of these models, there is this kind of range of possibilities of what images can be produced.

Everything is in some way in the neighborhood of everything else.

So it can be recombined.

And if there is a plausible way of synthesizing that, it can be done more or less.

And of course, gaming aesthetics very much influence this.

Superhero aesthetics are very much into that.

And what AI does is maybe just pointing out how close these aesthetics already are in the visual culture of the last two decades, and overamplifying these kinds of similarities between pseudo-historic interpretations of the past and superhero gaming fantasy worlds, because there is no real difference between them already in the image world that these models are fed with.

And then it's over-exaggerated the closeness and likeness and the recombinability of these aesthetics.

Yeah.

That's such a good point.

I mean, the brightness of the AI images resembles the compulsive brightness of Marvel movies, doesn't it?

Marvel movies have that sheen.

They're not AI-generated, but like they sort of ass-backwards fell into this aesthetic anyway.

I don't know if you guys have watched the show The Franchise, which is basically making fun of Marvel.

There's a joke in the first episode where basically the studio wants the film to look brighter.

And the director, played by Daniel Brühl, accidentally ends up blinding his two lead actors.

And the director's excuse to them as they're like screaming in agony is like, well, the studio wanted more lighting because the culture demands a saturated aesthetic.

So in a way, it's the culture that blinded you.

And so we can say, like, yeah, it's the culture that that made them look this shiny and weird.

I think that it is.

These images, you can see like there are all kinds of filter aesthetics layered upon them.

And this certain kind of glow, how they radiate from within, that is already present in like preset Instagram filters, this kind of vignette effect that you have a very glowy, shiny central part of the image and it gets darker at the edges.

That you can see very much also in AI imagery.

But what adds to that is a kind of technical effect, I would say, that these images, unlike like CGI imagery or architectural renderings or gaming engines, they are not based on a model of a virtual world where you have virtual light sources and then you can compute the shadows and the lighting and so on.

They are just trained on images, a lot of them already synthetic images, and then they try to imitate their look.

So the lighting in AI images is extremely exaggerated, but it's not a simulation of light.

It's a simulation of a flat surface on which lighting effects are distributed.

And I think that also makes for that effect, that it's a

kind of unrealness of this glowy surface of AI images.

And also, I think they are very much optimized to be looked at on tiny mobile screens lit from behind, as every image is in a way today. So that's nothing specific, but I think it shows even more in AI imagery.

It's interesting because what AI images do is like they simultaneously mistake these different aesthetic trends, like the Roman centurion outfits for this video game bug-carapace thing that Elon is wearing.

And this other image that he posted himself, this was not one that he responded to, but this seems to be one that he made or at least posted on his own where he says SPQR, which is like the civic motto of the Roman Republic.

And then he's got this picture of himself in armor.

Elon Musk seems to really like pictures of himself wearing like some sort of like warrior armor when actually he can barely even wear like a blazer.

Like this is a very like a t-shirt forward guy.

But in this one, it looks like it's very video game-y, right?

It looks like a bug carapace made of metal.

And like these are categorizations that don't just like misattribute their distinctions, but actually like in practice eliminate them, right?

Because like for Musk and his purposes with this photo, that like is centurion armor.

And so the relevance of the distinction becomes a bit like hard to pin down, at least in the discourse that these images like generate around themselves.

So it's like doing for visuals what being on Twitter all day does for words, right?

Like if you're not careful, the context collapse can erode you like intellectually and psychically, so that you start saying things like, you do not under any circumstances have to hand it to ISIL, to like your grandma when you finally log off.

So it's not just the mistake of the categories.

It's actually their conceptual collapse, like the elimination of these distinctions altogether, and also the eradication from the visual vocabulary of anything that doesn't fit into this like homogenized ideal.

An interesting point about these models is how like every historical aesthetic, and historical can also mean very recent, becomes some kind of nameable and repeatable style.

And you can produce everything in the style of, and if you're more ambitious, you can combine lots and lots of styles, both styles of kind of individual creators, artists, and so on, but also a Polaroid photograph is also kind of a style. Every visual appearance, every look becomes a style and can then be recombined in prompting these kinds of images. And that's extremely interesting for me as a kind of art historian, how this category of style becomes extremely expanded and completely flattened, as well as the idea of history. In art history, style was very much bound to history.

And now it's like a whole resource of visual patterns that can be freely combined.

I think it has a lot to do also with stock photography, as already mentioned, but also with mood boarding.

So you have these aesthetic practices of recombining certain vibes, certain moods, and fusing them together.

So in mood boarding, you have them spread out and now you can curate vibes and mood and synthesize them into one image that looks like a single image but actually is a synthesis of untraceable influences and images that come before that.

So maybe we should switch lenses very briefly and talk about it sociologically.

That is to say, did something like Midjourney sort of migrate to the right over time?

Or were far-right fora and sort of RETVRN accounts earlier adopters than others?

I think that's a great research question, actually, for some kind of media-sociological recent history.

So I can only give an anecdotal kind of impression, and I have the impression that quite early on, in 2023 when it started, it was at least not dominant, or the far right was not the kind of dominant user group of these tools yet.

I think that started very much around early 2024.

At least then I noticed it.

So first it was kind of nerdy people trying that out, and there was a lot of debate: there were big clashes online between people coming from all kinds of artistic backgrounds who very much hated it, and others who very much found joy and fun in it.

But I think it was not obviously politicized yet.

But I think that started like some months or a year after that.

And now it's very obvious that AI slop, as we say, is kind of the aesthetic of digital fascism.

But I think that is a development.

But it would be interesting to trace how that actually evolved.

Well, I mean, in some way, we're having two creator economies sort of playing off against each other.

One is the creator economy that generates a Roman centurion image and hopes Elon Musk retweets it and then makes a little bit of money off of it.

And then there are these genuine creator communities, I would say, on places like Tumblr who are deathly afraid of this stuff because it's going to destroy what little income they still get by drawing fantasy RPG character art or by illustrating, yeah, a Kickstarter or making pornographic images of people's favorite cartoons.

I mean, like all respectable forms of work, but all threatened by AI.

And I do think that there is a kind of, there are two different kinds of class politics, I think, also smashing into each other here.

And I think as people have gotten more triggered by AI images, just on a purely visceral level, being like, this is offensive to me as a creator.

It's become one more way to trigger the libs, hasn't it?

It's just like, there's a good reason why Elon Musk loves this shit.

Part of it is that he looks great in it and he doesn't look great in real life.

But part of it is clearly also that like he knows we're going to hate it, right?

It's there for the anti-fans as well.

And we can just be like, can you believe what this fucking guy just shared?

I think that's a good point and a very fitting observation because I shared AI-generated images in, I don't know, 2023, and I more or less stopped it and now only share other people's AI-generated images and comment upon them.

In part because, of course, it triggers a lot of people.

And rightfully so, there is a lot to hate about that and that made it more and more attractive for the right.

I guess that's exactly what's happening.

I mean, on Bluesky you can see that I'm blocked by people because I shared AI-generated images, but of course right-wing accounts are not afraid of being on the block list of some creators or fantasy artists or illustrators or whatnot.

That's kind of a badge of honor for them.

This is a dynamic where it is clear.

Now it's a statement if you use it more or less.

Although people still try to use it for, let's say, progressive purposes or for making fun of the right, but it's the question of whether that actually works.

And it's a minority.

Yeah, like for instance, there is the video of Volodymyr Zelensky punching Donald Trump in the face.

It's not all in one location, but you're right.

It's a good question whether or not that actually works.

Visually contesting this kind of neo-fascist rhetoric that really has become coextensive with AI slop.

Can I ask you briefly about the...

you mentioned in the beginning these kind of hashtag remember England pictures and you put them in our planning doc here.

Is the second one supposed to be a parody of the first?

Maybe I'll describe what I see.

I see the houses of parliament with a British soldier returning from what looks to be World War I.

Well, it's supposed to be Dunkirk, I guess, 1940, facing a bunch of Muslim women, right?

And it's very clearly playing into all kinds of racist, you know, great replacement myths.

But then on the right, we get one: my daughter asked me if I remember when English boffins put a man on the moon.

Of course I do.

We all do.

And then there's a picture of a cross of St. George on the moon, which has a moon in it for some reason.

Two lions, a girl on a bicycle.

What's happening there?

Is this person making fun of the first, or is this, do people not care what they're putting up anymore?

It's making fun, as far as I understood it, of a whole wave of kind of right-wing nationalist imagery that played upon this idea of, oh, remember the old England before the foreigners came, and that also already had these lions in there sometimes, also with a strange kind of racist animal kingdom-like imagery.

And they took these elements and kind of recombined them in a way that it got more and more absurd.

It had this hashtag, remember England, and remembered all the things that never happened, and made fun of this idea of a past that could be recreated by AI and that should be remembered and defended against the immigrants.

Part of why I was interested in the lions is I was wondering whether you sometimes can see in the visual space kind of representations of fuck-ups in the text that it's based on, which is to say, could this have started out as saying the three lions, as in the soccer jersey, but DALL-E just spat out three physical lions?

I don't know.

I mean, like, because, right, like the English flag plus three lions, that is what the soccer team wears, right?

So I have wondered about whether or not, like, they just like lost a lion somewhere along the way.

Yeah, I mean, I think this also like touches on or brings us to AI's very tricky depictions of race, because a lot of these models don't do very well with depictions of people of color, right?

They either tend to homogenize non-white people's features into those of like an idealized white person, so that often has like a skin-bleaching effect, or they will depict them as just like flat-out racist caricatures, right?

So I think in a lot of cases for the users of AI, this is like a feature, not a bug, right?

Because like if AI functions for the right as a kind of like visual wish fulfillment, it's pretty clear that one of those wishes is for an all-white world, or at least for a world in which non-white people are in clear subordinate positions.

And that's worth pointing out that there's the famous example of the, I believe, hand dryers that would only react to white skin, right?

This is a classic thing that like how these models are trained and who they're trained by does tend to encode very real biases in their outputs and tend to reiterate invisibilities and lacunae in whatever the record and whatever data this thing is trained on.

And will reproduce those and sort of make them our future, make the past mistakes basically our future biases.

Right.

But I think that's in some sense by design for a lot of the people who are using this technology most enthusiastically.

And that might bring us around to like the elephant in the room in basically all discussions of AI imagery, which is AI's use in generating pornography, particularly like non-consensual pornography or deep fake porn.

And when we talk about AI, we're basically like mostly talking about porn, especially when we're discussing like videos and moving images.

So like researchers at the AI monitoring company Sensity estimated that 90%, nine-oh, of deepfake videos, so not the still images, but the videos that are posted online, are pornography. And that of those, 95% feature images of real, non-consenting girls and women.

So this is like something that is just now a new tool for much older forms of sexual abuse, right?

Like this technology has become very easy to use, very cheap or free to use, very easy to find online.

And for a man or really a boy to make a pornographic video of somebody he wants to target or humiliate, he really only needs like a couple of photos of a woman or a girl's face.

You can do this with like three or four still images, which can be used to make like not a perfect, but like a fairly convincing and certainly very like uncanny, disturbing pornographic image and video of her, right?

So this is like a pretty standard part of the job of like any woman who has like any kind of public role, right?

So it famously happened to Taylor Swift.

It happened to AOC.

But you don't need to be like really famous.

Like this has also happened to me, for instance.

Wow.

Yeah.

Did you not know that?

No.

I wrote an article about the use of AI pornography in like school harassment, because apparently it's a really big deal in like middle and high schools now.

Like fairly ubiquitous part of teen girls' experience of school is that classmates and friends, male classmates and friends will like make these images of them and share them among themselves.

So now like you don't, as a woman, you don't get a choice anymore about whether or not you're going to participate in pornography, right?

Like it will be made of you.

And that is like a form of misogynist harassment and coercion that this technology has just enabled.

It's just basically like a high-tech manifestation of what are like basically low-tech, more conventional forms of sexual harassment.

So AI imagery becomes wish fulfillment, but it's not only a way to like gratify the solitary impulse of the mind, right?

It's also a way to affect somebody else's status in the real world.

Be that if you want to like flatter Elon Musk by making him look a lot more muscular than he really is, or if you want to humiliate your classmate in the sixth grade by showing her going down on some random man, right?

Right.

It is a way that these other kinds of images can be used to encompass people who would not be in that category of image, right?

Because like the AI porn videos that they make are trained on real porn videos, which are ubiquitous, but they are kind of repetitive.

So it's very, very easy to create generative AI of that image and just put on another face.

So then there's this other situation where AI is being used to create images of women who do not exist.

And those women that it summons into being are like very specific, right?

They all kind of look the same.

They look young.

They are like somewhat like uncannily clear skinned.

They have like an almost Pixar, like plasticky quality.

They always have long hair.

They always have very big eyes that to me look just a little too close together, like a predator's eyes, like a cat's, that are like right on the front.

And I'd like to take you guys through a strange little artifact I found, which is an AI-generated article featuring AI-generated images called The Most Beautiful Person from Every Country.

And we'll note that all of these AI-generated fake persons are very young and that they are all women.

So you can see like what AI thinks every country looks like.

Like the most beautiful, famous, like creepy AI person in Denmark is wearing like a parka with like a fur-lined hood because it's cold in Denmark, right?

The Australian... sorry, why is the Australian one wearing a dirndl?

It's Austria.

No, okay, that makes more sense.

Like the Ireland one is, it has red hair.

It's stuff like that.

It's like ethnic stereotypes.

Is the United States one just Taylor Swift?

It does seem to be Taylor Swift.

I had that thought, too.

That's so weird.

I mean, the Germany one just honestly just looks like a porn star, if I can be for real here.

They all look like porn stars.

Oh my gosh.

Yeah, the German one looks like she's wearing like a beauty in the beast outfit.

It's like lederhosen, and then there's a mountain that I guess is an Alp behind her.

She looks like Dr. Schneider from the third Indiana Jones movie.

She is a Nazi.

I don't want to accuse a person who doesn't exist of being a Nazi, but she a Nazi.

All of these women are Nazis.

Like, that's just...

They're eugenics fantasies, right?

Yeah.

Like, it's really telling about AI's use for like enforcing like male supremacy and female subordination, right?

It's like these are the idealized women who do not exist.

And then the AI's use for women who do exist is to like degrade them by forcing them into porn, right?

So what do you guys notice about these images?

I mean, they appear to be the same woman.

Let's put it this way.

You can definitely tell the kind of norming effects that Roland was talking about.

It's very noticeable that for African women, they're extremely fair-skinned.

The noses look deeply pointy and European to me.

It feels like trying to simulate variation within a data set that clearly didn't have as much variation as you'd expect or want.

And the other thing is they all wear the kind of same expression.

And I'm wondering, like, is this because we all do the same face when we selfie or something like that?

It's like open, guileless, naive,

but vacuous too.

It's blank and patient.

Yeah.

Yeah.

Available without being sort of come hither.

It's not a come on, but it's like, yeah, vacant and expectant.

Like we are being called into the image to complete it in some way.

Does that sound right?

Yeah, I think so.

Yeah, yeah.

As I mentioned, it's always the same face.

It's a very westernized face in most of the cases, right?

And then they are dressed up in these pseudo-ethno-style settings, also with the backgrounds, which are mostly blurry, but sometimes not, with the German example especially.

Yeah.

And also the Austrian example, kind of having this von Trapp family setting that makes them, again, readable.

So you have what the AI produces when it's forced to make something readable as an image of a German beauty or whatever.

It kind of relies on these visual stereotypes that are obviously already associated with German-ness in that case or Austrian-ness.

And the same for most non-Western contexts, but also Canada.

It's really strange.

I think Canada's is the creepiest, which is stiff competition.

Let's all look at Canada for a second.

Yeah, let's all point and laugh.

The poor Canadians.

Yeah, we're sorry, guys.

No, the Canadian is probably like the most Norman Rockwell-ish.

It is a very young-looking like teenager with curly red, like shoulder-length hair and like a button-nose that's like all reddened from the cold outside.

And she looks like she's about to have about 14 white children with the viewer.

You know, I think that's partly something else that this kind of content offers.

It's like offering a prerogative of consumption of women with a global remit, right?

They will all be tailored according to what we imagine your preferences are.

None of them will surprise you, and they will all be available for your like ingestion.

Yeah, she looks like Pixar Emma Stone.

She's in some sort of like maybe coffee shop situation, like Roland said.

It's a blurry background.

It's Christmas.

It's always Christmas in Canada.

All of these are very weird.

They're all disturbing little fascist artifacts.

They are.

But I would just quickly comment on the kind of whole idea of this kind of clickbait content, because what I first noticed was this formulation: most beautiful person according to AI. And I think that's the kind of standard formula for a lot of clickbait content that was produced using AI. So, let's look at what AI says this looks like. So it's like an authority that is addressed, like a judge in this kind of beauty contest, which is in this case kind of mixed with race science of sorts. But AI becomes this authority that can tell you something about the recurring patterns that are behind what we see.

So it tells you the objective statistical mean and that's the truth about whatever Germany, Iceland and Canada.

And of course that's complete bullshit, but I think that's something that you can see right-wing accounts really buy into. They really love this idea that AI, pseudo-objectively, on a statistical basis, with kind of algorithmic means, caters to their stereotypes, visualizes them, affirms them, gives them: yeah, that's how they look, that's how a German woman should look. And everybody who doesn't fit into that pattern, who doesn't match, and that's nearly everybody actually, is not as real as these strange kind of AI-produced fantasies.

And that's scary.

Yeah, like, not to put too fine a point on it, but I feel like I've heard about the people who established what a quote-unquote perfect German woman looked like.

And I don't know that I'm thrilled with their ideas being, you know, algorithmically reproduced like this.

And we should mention that this idea of AI as this kind of judge on high, outsourcing our own responsibility and our own judgments to the AI and to its authority is, of course, what Doge is doing currently to the US government.

That's why people have to write these inane five things you did every week.

It's meant to train an AI to cut people's jobs so they can wash their hands of it and be like, well, the AI said it, right?

Like you're instituting a kind of higher level that you can appeal to in order to simply vindicate your baseline prejudices and fucked up priorities.

And I think this is very much what's happening here.

Yeah, Doge kind of started, or at least the earliest visualization of Doge I know, with an AI image of Musk, where you can see him with the sunglasses and these big golden letters, D-O-G-E, with the last dot missing, because it's an AI-generated image and whoever made it was happy enough that it got the letters correctly spelled.

And Department of Government Efficiency, that was a tweet, an AI image in September 2024, as this kind of meme thing visualized by AI, and now it's the tool of a fascist coup.

We live in hell.

Yes.

So I wanted to bring up an image that I know you and I have both thought about a lot.

And I think, Moira, I remember you remarking on it too when it went through everyone's feed. It's not so much about the dangers of AI, but the dangers of kind of AI critiques.

There's a simplistic critique of AI that just says this is fake.

And I think that's one big thing that we've been kind of circling around without ever saying it, which is to say, to just say, hey, it's fake, is actually not very interesting.

It's how these images come about, what they say about the person who's using them.

And these visual icons have to be analyzed as visual icons.

But there is a kind of debunking energy around AI.

And I wrote on my Substack about this early last year, when this very impactful AI-generated image with the slogan "All Eyes on Rafah" made its way around mostly Instagram.

It was a very, very popular Instagram post.

This was not, I think, by and large, Twitter or X.

And it's very interesting.

I thought there was a lot of good discussion around it.

CNN had an entire article about, well, what does it mean to produce AI slop about a war zone where like you could presumably just take a picture, right?

And they had experts saying like, well, yeah, you shouldn't do that.

You owe people the actual image.

People need to be confronted with the actual image.

There were other people who said, well, Instagram under Mark Zuckerberg will downvote and will, in fact, throttle content that is too upsetting, which a lot of stuff, like real stuff from Rafah, would have qualified as.

Ergo, you have to go with AI slop in order to make this visual point.

But it was very funny that my impression was that, especially in the German-speaking world, which sort of didn't want to have the conversation whether all eyes ought to be on Rafah or not, there you had sort of a different discourse.

Like these dummies are reproducing this image thinking it's real.

And I should say, if you haven't seen this image, there's no way in hell that anyone who is not currently having a stroke would look at that and be like, that's a real picture, right?

But we got these explainers from pretty reputable German newspapers that are like, you know, of course, like the real Rafah does not have any high alpine peaks right in the background.

And the tents are not arranged to spell out the slogan, all eyes on Rafah.

And you're like, yeah, I think people know that.

Like, it's a little weird.

So there's something about, there is a kind of fixation on AI that can also obscure what people are doing with images.

I do think it might be fun to talk about this example because this was a kind of, for all intents and purposes, a use of AI for a very different kind of political message.

Yeah, I mean, it is so strange that this image should be debunked, because its whole message is obviously literally readable in the image.

That's what it says.

The whole purpose is to spread the image as a carrier of actually a text.

And it's not about what it shows, but really only about what it says.

And then debunking, I think, has become a standard reaction to viral content.

There are kind of two standard reactions online if something gets really ubiquitous and you see it everywhere.

And people are always urged to react towards images, also with images, and either...

they produce memes and variations and make fun of it, or they get this kind of forensic gaze and try to find clues that it's somehow manipulated, that there's something wrong with the image.

And they find it also in images where it's so obvious that it's a synthetic image, but they relish this kind of fun in finding clues of manipulation, which AI images very much lend themselves to because they have these kind of strange, weird little details, if you look longer.

In this case, the details are very obvious.

Yeah.

But it's a, I think it's a standard reaction mode, this debunking, and I don't think it leads anywhere.

Yeah, exactly.

The question, what is this image trying to do and do I agree with it or not is a much more interesting one in some way than saying, oh, this is AI, right?

And as you mentioned, what does the spread and the global distribution of this image tell you about the global distribution networks of images today, and how Meta is moderating content, and which kind of content becomes visible and which kind becomes invisible?

That's an interesting thing about this image.

It's not a question of whether this actually shows a realistic scenery of Gaza. Obviously not.

Yeah.

So you did bring a couple of images, and some of them I don't even know.

And so I thought it might be fun to have you, as we close our discussion, walk us through this.

I must admit that I have seen Balenciaga Pope, but I do not know anything about this image.

I love Balenciaga Pope.

I'm sorry.

I just want to shout out to how hilarious this is.

It's an image of Pope Francis.

I think it's about like a year or two old.

I think already two years.

Oh, wow.

And it is of him wearing a papal white, sort of like streetwear-ish, like very long, very puffy parka coat.

And he's got a like large crucifix necklace and his little like tonsure hat and what appears to be like a takeout coffee.

I don't think the Pope has had takeout coffee like since the Argentinian junta days.

But it is a fun image whose humor comes from the contrast of like high and low culture, right?

Like the Pope in streetwear.

Yeah, it's like a clash of concepts, right?

You can also in this image very much see the prompt behind it.

I mean, not the actual prompt, but a supposed prompt, the Pope in a puffy white jacket or whatever.

And then you see it and it kind of surprises you, but it also matches your expectation of what that would look like in real life, although you've never seen it.

That's, I think, part of the fun.

What I wanted to show this image for is that it really, really provoked a lot of reactions in this kind of look-at-that-detail mode.

So there is this coffee cup, there is a cross, there is something about the eyes.

People found all kinds of AI glitches in that image.

And whole online journal articles were written about why we don't have to believe this image, and what clues we can find in it.

And then a whole genre of online quizzes sprung out of that where you can guess if an image is AI generated or not, which is in itself a very interesting kind of genre because it always tells you, yeah, the non-AI generated images are really true and authentic, and only the AI generated images are the fake ones.

And now we all have to learn how to spot these little clues and details.

And also from the Pope AI image, a whole wave of other AI-generated popes flooded the web.

So there were these memetic kinds of reaction chains to that image also.

It's now a classic, I think, of this genre.

Yeah.

Early, early history of AI slop.

And I do think that, yeah, anything that involves a pope doing fashionable things is exciting.

I always think of the ads for the, remember when Jude Law was the young pope?

And everyone was like, oh, the tagline better be this pope fucks.

This pope could still get it.

When he was pope, Benedict had those red Prada shoes that were very stylish.

Did not take that poverty vow very seriously, I don't think.

No.

So the second one you brought us is actually a throwback to our last episode, which is by the AfD youth organization in Baden-Württemberg.

So this is a very fake woman with the caption: real women love their Heimat, their homeland.

Real women reaffirm their femininity.

Real women are right wing.

It's a choice, combining that with a very much not real woman.

Yeah.

And yeah, I think it's so telling.

I mean, for me, it's the best example of what the reality these right-wing accounts want to see in these images actually is.

It's the reality, as we spoke, of gender cliches, of a world where everyone kind of matches a certain already established pattern and formula in the most stereotypical way.

So I think they mean that, in a way, seriously.

That's what real women are supposed to look like for them, and that of course means that everyone who doesn't look that way is less real. And I think there is very much a threat in this image. It's also very serious.

Yeah, I also think that there's something interesting about, right, like, where does the reality lie? Is the reality of what a woman looks like every woman you see, or is it the category of woman in some sense? This was the appeal of phrenologists and of physiognomists in the 19th century.

They're like, well, no, the individual face is not that telling, but the human face as, as you say, a statistical composite tells us what we are really like, right?

In the aggregate, the real portrait of the human species, or in this case, the German woman, emerges, right?

Like, where does reality lie?

In the concrete or in these kind of abstractions?

Yeah, an abstraction from online content in that case.

An abstraction even from fantasies, kind of hypercharged through these technologies, but presented as something with the aura of statistical objectivity, in a way.

Yeah.

Might that be a good place to wrap up?

We can have a call to be statistically irregular.

Be statistically irregular, as you already are as somebody who listens to In Bed with the Right.

The first way to be ungovernable is to look real weird.

I'm already successful on this score, so I'm very excited.

I'm already fucking with AI slop by just like pointing my phone camera at myself.

I was like, whoa, got to count for this one.

Yeah.

Roland, thank you so much for being with us.

I learned so much.

I think we had a really great conversation.

Yeah, thank you so much for having me.

That was great.

I really enjoyed it.

All right.

Well, thank you all for tuning in and thank you for being part of our mid-journey through AI slop.

As I say, we'll keep coming back to this topic.

It is a really fascinating one.

And it is shocking just how quickly this stuff has taken over as the sort of lingua franca of the international far right.

Thank you as always for listening and we'll see you next time.

See you next time.

In Bed with the Right is made possible by hundreds of listeners who support us via patreon.com.

Our episodes are produced and edited by Mark Yoshizumi and Katie Lau.

Our title music is by Katie Lyle.