Big Tech Embraced Fakeness in 2025

AI slop anyone?


Runtime: 30m

Transcript

Speaker 1 Hey, you're listening to the On the Media midweek podcast. I'm Brooke Gladstone.

Speaker 1 This week, we wanted to cozy up by the fire, grab a mug of hot cocoa, and review an absolutely insane year for artificial intelligence.

Speaker 1 Silicon Valley's heavy hitters, Google, Meta, Apple, and OpenAI to name a few, have poured billions of dollars into the industry, creating products for people, businesses, and even the government.

Speaker 1 But there's also been a growing tug at the AI Superman's cape, reports of unchecked chatbots giving dangerous advice, scammers armed with better tools, and political deepfakes increasing in quality and quantity.

Speaker 1 So to figure out the status of AI now, to try and trace what's happened in the last 12 months, we went all the way back to January 17th, 2025.

Speaker 2 We're going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms.

Speaker 1 Meta CEO Mark Zuckerberg.

Speaker 2 First, we're going to get rid of fact-checkers and replace them with community notes similar to X starting in the US.

Speaker 2 After Trump first got elected in 2016, the legacy media wrote non-stop about how misinformation was a threat to democracy.

Speaker 2 We tried in good faith to address those concerns without becoming the arbiters of truth.

Speaker 2 But the fact-checkers have just been too politically biased and have destroyed more trust than they've created, especially in the U.S.

Speaker 3 The trigger for a lot of this rollback, and for Mark Zuckerberg doing what some people have referred to as a hostage video, him sort of at camera, was of course that Donald Trump gets re-elected.

Speaker 1 Craig Silverman is co-founder of Indicator, a publication dedicated to understanding and investigating digital deception.

Speaker 1 He argues that this video marked the beginning of a swift but forceful transition to a less moderated and less regulated AI world.

Speaker 3 It becomes very clear to these platforms that they need to make sure they get out of the way so that whatever content he wants to have out there can be out there.

Speaker 3 And the same from his supporters and from like-minded politicians leading countries around the world, because I think Trump made it clear he will come after them.

Speaker 3 He mused about putting Zuckerberg in jail. So I think that's the big trigger. And because most of these platforms are based in the U.S., what happens in the U.S. cascades around the world.

Speaker 1 But it wasn't just Meta pulling back on fact-checking. In January, Google told the European Commission it wouldn't integrate fact-checking into its search bar or YouTube.

Speaker 1 And in February, researchers found that Google had given up its data void warnings, those small notices, sometimes gray banners, sometimes red bars, saying that, hey, you know, there really aren't good, credible results for your search.

Speaker 1 They don't exist anymore. Why not?

Speaker 3 Before Trump was re-elected, the Republicans labeled content moderation and fact-checking as censorship, and they had been pretty effective once they took over the House.

Speaker 3 And so the platforms in a world where Trump is the president and the Republicans control the House and the Senate, they realize we have to move away from this stuff, whether we think it's censorship or not.

Speaker 1 I mean, it was, oh, you with your fact-checking. It's like, you're a liar.

Speaker 3 It's a strange thing, the idea that fact-checking doesn't count as free speech, because fact-checking inherently, it's not removing anyone else's speech, it is critiquing it, it is fact-checking it.

Speaker 3 But Meta was the one who chose to take a fact check from someone they paid to produce it and to put a label on the content that had been fact-checked. That was Meta's decision.

Speaker 3 The fact checkers actually had no control over what Meta did or didn't do with the fact checks.

Speaker 3 And content moderation, trying to enforce the rules that the platforms themselves or the laws of a country have created, can in its worst scenarios be censorship. But on a day-to-day basis, this is what they are supposed to legally be doing.

Speaker 3 And so the reframing of fact-checking and speech of a journalistic nature as censorship was a pretty good judo trick. And it was effective in the U.S.

Speaker 3 I'm Canadian and so I'm going to use a hockey reference here, which is that a lot of people say the NHL is a copycat league.

Speaker 3 The team that wins the Stanley Cup one year, everybody tries to sort of copycat their roster. And it probably happens in other sports leagues.
And I think it's true with tech platforms.

Speaker 3 Look at how much they have all invested in running and building the same types of AI products. And it's the same thing around the so-called censorship.

Speaker 3 They're all copying each other because they think that is the safe thing to do, but also they think maybe that's where the next big boom is coming from.

Speaker 1 Fact-checking as censorship is one half.

Speaker 1 The other half is about business and the fact that these companies are actually pushing technologies that create fakes and that are actively used to deceive, not just entertain.

Speaker 3 Yes, this is no longer just a scenario where you have people coming on to Facebook or TikTok or what have you and sharing and spreading false content.

Speaker 3 And the platforms like Meta, like TikTok, they are building their own AI tools.

Speaker 3 They have a business imperative and a share price imperative to get people to be adopting these AI tools and showing that they are actually getting traction.

Speaker 3 And so as part of that, you know, one of the things you can do with a lot of these AI tools is impersonation.

Speaker 3 One of the things you can do is generate massive amounts of slop, false claims about celebrities.

Speaker 3 And they are allowing this to happen at a scale using their own tools. And they are in many cases directly paying people cash every month based on the AI content they have created.

Speaker 1 Well, give me an example of that.

Speaker 3 So the craziest example for me was this guy, a dad who lives in Maine.
He goes by the online moniker Busta Troll.

Speaker 3 And for many, many years, really about a decade or more, he has run what he calls satire pages and associated websites where he just shares 100% false made-up quotes attributed to celebrities, political figures.

Speaker 3 It's like memes where he'll share an image and it's got a photo of Gavin Newsom with some crazy quote he never said that makes him look like an idiot.

Speaker 3 And it's red meat for the right-leaning audience on Facebook. And then there'll be a link to an article he wrote with the same hoax claim that's written like a news story.

Speaker 3 And his model used to be that he would get people to go to the website and he'd earn money from ads on the website, just like, you know, the teens and the young men in North Macedonia that I wrote about doing pro-Trump stuff back in 2016.

Speaker 3 But this has changed because this year he got accepted into Meta's content monetization program, where they will pay you based on the engagement your content gets.

Speaker 3 So he is getting paid by Meta now every single month for how viral his hoaxes are.

Speaker 1 Until March, OpenAI banned photorealistic images of real people.

Speaker 1 That ban is over?

Speaker 3 Impersonation is in. OpenAI makes the change in March where these photorealistic things representing real people, public figures, they're okay with.

Speaker 3 And they said, we can't maintain a database of all the public figures in the world. It's just not possible.
So we'll just try to prevent people from doing harmful representations.

Speaker 3 And then months later, they launched Sora, which is a dedicated app that does amazing deep fakes of anyone.

Speaker 3 Google also rolled back some of its policies on impersonation because it started building some really powerful image and video generation models.

Speaker 3 So in the early days of this technology, these companies said, listen, we're going to make sure that impersonation and deepfakes are really reined in.

Speaker 3 At a certain point, they just said, you know what, let's let it go. You know, this has really, really serious consequences.

Speaker 3 And the most common use that is really dangerous around this type of technology is for scams, where someone can impersonate another person. They can impersonate Elon Musk.

Speaker 3 They can impersonate a famous movie star or politician. And they can convince people that there's an amazing investment offer or something like that.

Speaker 3 And there are people who are literally losing their entire life savings and having to sell their homes because they get sucked in by these things.

Speaker 3 Impersonation is also stealing money from corporations.

Speaker 3 People are cloning the voices of executives or cloning the faces of executives and calling up other people in the company and getting them to transfer money.

Speaker 1 And this stuff is legal.

Speaker 3 As long as you are not infringing on someone's personal rights, putting defamatory things in their mouth, typically we don't see people suing around this. There is a case right now in California.

Speaker 3 Andrew Forrest, who's often nicknamed Twiggy, he made his money in mining in Australia. He's suing Meta because his face, his voice have been used in hundreds of thousands of scam ads over the years.

Speaker 3 It's not an accident, I think, that it's a billionaire suing them and actually seeing this through, because that's the kind of money you need to actually get past Meta's formidable legal teams, which file a lot of motions for dismissal and other things.

Speaker 1 I'm not going to say that this year there were some really positive signs, but maybe there were less negative ones.

Speaker 1 TikTok didn't make any grand proclamations in the style of Zuck about amending its misinformation guidelines. Bluesky actually put extra money into verifying the people behind accounts.

Speaker 1 Meta, along with TikTok, beefed up their community notes features.

Speaker 3 TikTok was notable because I don't think they did as public of a bending of the knee.

Speaker 3 And it's strange because obviously they're tied up in this process where the government has been trying to force TikTok to be sold for some time to get rid of its China-oriented ownership.

Speaker 3 And yet, they managed to navigate this in a way without doing a big video saying, hey, we're rolling back this, we're rolling back that. They still actually work with fact-checkers around the world.

Speaker 3 And I should note, Meta outside of the U.S. continues to work with fact-checkers.

Speaker 3 So that's continued, and it's kind of positive in that sense.

Speaker 3 You know, I think community notes is a really important thing, and I don't think in and of itself doing community notes is a bad thing. Let's explain how that works. Basically, you can sign up to be part of the community notes program, and if you see something online that you think is inaccurate or requires a bit of context around it, you can submit a note. If enough people from different political points of view who are registered in the program actually think it's a helpful note and vote that it's helpful, it will show up as a label on that piece of content, just like the fact checks kind of used to on Meta. So it is collaborative, it is open, it's participatory, and it's supposed to also have some things built in to combat bias a little bit.

Speaker 3 And all of that's positive.

Speaker 3 The problem is when we at Indicator looked into the state of these community notes programs, the big one on X and then the big new one on Meta, we found that there's some real problems.

Speaker 3 Meta hasn't invested much in it. There's very few people in the program.
There's very few notes being applied.

Speaker 3 As a replacement for fact checkers in the US, it seems like it's really far away from doing that. And then there's X, which is the template, and X similarly doesn't seem to be investing a lot in it.

Speaker 3 There are fewer and fewer notes being rated helpful. So people who are participating are actually not seeing the end result.
And they've opened it up to bots now.

Speaker 3 So it won't be very long before most of the notes that are getting appended are actually coming from automated, AI-oriented systems.

Speaker 3 And so I think as a model, it's turning into more of a fig leaf than an actual real good faith effort.

Speaker 1 Now, as you went through the year, you say the next phase saw, quote, agents of deception bake AI into their workflows. Who are these agents? What is this workflow that is making use of AI?

Speaker 1 And here, I want you to pause for a moment on a word that's come up a lot, and I'll be surprised if it isn't in the Oxford Dictionary word of the year, which is slop.

Speaker 3 Yeah, so let me put them into a couple of buckets here. Let's put a hustler bucket over here, and then let's do sort of state-backed or propaganda, politically oriented stuff on the other side.

Speaker 3 And sometimes the hustlers do the politics because it makes the money, but let's divide them.

Speaker 3 So, on the politics side, or on the state operations side, it's a cliche to mention it, but Russia has been doing this for a long time.

Speaker 3 They've had operations masquerading as European or American news organizations spreading false articles to sort of push a particular propaganda narrative, attack Zelensky in Ukraine or what have you.

Speaker 3 They've looked at AI and they said, oh, now we don't need to use Photoshop and take a lot of time to create a whole bunch of fake headlines or fake front pages or fake videos.

Speaker 3 We can actually generate these at scale. They can put out more crap much faster than ever before and really overwhelm any defenses that are out there on these platforms.

Speaker 3 The state-backed operations, it might be a video claiming that there's a villa that Zelensky owns that's worth $20 million, right?

Speaker 4 Ukrainian president Vladimir Zelensky bought a $20 million mansion on the Florida coast in 2020.

Speaker 4 Boasting 200 feet of ocean frontage, the 13,000 square foot mansion features six bedrooms and 10 bathrooms, each with stunning ocean views.

Speaker 3 A lot of the state-backed operations are now churning out, in some cases, thousands and thousands and thousands of articles or pieces of content a day.

Speaker 3 And that stuff could end up getting hoovered up into AI models where the AI models start spitting out the exact same propaganda narrative with disinformation.

Speaker 3 So the snake is like eating its tail and regurgitating it all for bad things.

Speaker 1 And the hustlers?

Speaker 3 I have a soft spot for the hustlers, I have to say. They're clever, they're creative, they're very innovative.
They actually embody a lot of what Silicon Valley talks about, which is rapid iteration.

Speaker 3 Just build and put your head down. So for example, I did a story this year about one kind of AI-infused study app.
And their strategy for marketing was they don't really pay for ads.

Speaker 3 What they do is they recruit young women, in some cases at least one high schooler, and have them create brand-new TikTok accounts. All they do is just post after post after post talking about this amazing study hack or this trick they found that in the end leads to this one app, which I'm not going to mention to give them free advertising. Almost none of these, and there were thousands of them, said they were an ad. The people's bios did not say they were a paid creator for this company. And on top of all of these young creators churning this stuff out, they also had a channel where they had filmed a bunch of confrontations between students and professors.

Speaker 3 A student yelling at a professor or the professor yelling at the student. And they uploaded these to generate engagement.

Speaker 3 And all of them again included a mention of the product without saying that these were ads.

Speaker 1 Were these fake arguments?

Speaker 3 Fake arguments.

Speaker 5 I said this to you a week ago, and you're still using ChatGPT. All your work looks like shit, it's wrong, and you're all getting the same goddamn answers.

Speaker 5 I emailed this to you, you upload your lectures, your readings, whatever the hell you want, and then you actually get insights from the AI tutor.

Speaker 3 One of the most frequent people featured as a TA or a professor is actually like the head of growth marketing who recruits all the influencers. And they're very proud of this strategy.

Speaker 3 They have boasted online about how many views they got and the fact that they're paying absolutely zero in advertising.

Speaker 3 And I brought these thousands and thousands of undisclosed ads to TikTok. By the way, the FTC has rules around this, and they're clearly outside the rules of what the FTC has set.

Speaker 3 And TikTok didn't remove any of them.

Speaker 1 So tell me, why do you have a soft spot for this hustler-generated slop?

Speaker 3 Well, my weak spot is sometimes they come up with stuff that's like genuinely clever, where I'm like, oh, you're terrible, but wow, you really figured that out.

Speaker 3 One of the things that stood out to me this year was that Andreessen Horowitz, one of the most reputable, most important venture capital firms in Silicon Valley, let alone the world, invested a million dollars in a company that is building bot farms for you to run ads on TikTok with.

Speaker 3 And it invested $15 million, with other investors, in Cluely, a company whose first product was to help programmers cheat on the job interview coding tasks that they get assigned when they're applying for a job.

Speaker 3 You know, if you think about it, Andreessen Horowitz is potentially funding an app that people used to deceive some other Andreessen Horowitz-funded companies into hiring programmers who weren't as good as they actually claimed to be.

Speaker 3 And I haven't seen that before.

Speaker 1 You reported in June for The Guardian on another example of what this can look like. There were 26 channels on YouTube dedicated to AI-generated videos about the P. Diddy trial.

Speaker 1 The videos would have an uninvolved celebrity in the thumbnail and then some salacious headline about some fake testimony or a fight. These videos racked up 70 million views, and none of it was real.

Speaker 3 There was a golden age of Diddy Slop this year, which, again, a phrase I never thought I would say.

Speaker 3 But people realized that there was a lot of attention around the Diddy trial that was going on. And they also realized that the average person didn't really know who was testifying.

Speaker 3 And so you had folks who would generate thumbnails and, in some cases, entire long videos claiming that someone had just showed up and testified. P. Diddy's mom testified against him, or The Rock showed up and testified against him.

Speaker 3 And it would be in the thumbnail of the video, it would be in the title of the video, and then there would often be a bait and switch where the video would actually be a lot of AI-generated stuff and clips of news reports.

Speaker 1 Wait a minute, I saw something that I totally believed, and now I wonder whether it's completely wrong, which was that they found a recording of Prince condemning P. Diddy and saying that it was disgusting what was going on and he was actually afraid to say anything about it. Now I think maybe it was totally made up.

Speaker 3 It might be. Like, I don't know the Prince one specifically, but Prince has been dead for a while, unfortunately.

Speaker 1 Yeah, this was sort of found in his house or something.

Speaker 3 Yeah, I mean, I feel like.

Speaker 1 Oh, my God, I'm such an idiot.

Speaker 3 I feel like we're in Slop Town right now. Yeah.
But look, any of us are susceptible at any moment, if it's the right message or piece of content delivered at the right moment.

Speaker 3 It is not an intelligence thing. It is not a class thing.
It is not an education thing. Any of us at any time can be persuaded of something.

Speaker 3 And it's really important to kind of operate in this insane information environment with that element of awareness and humility because we are not too smart.

Speaker 3 And as much as we can joke about some of this crazy slop stuff with these amazing tools and with the lack of the kind of oversight by the platform, someone will create something that works great for me and works great for you.

Speaker 3 And we're going to watch it and we're going to take it in.

Speaker 3 And there's less of a chance that something is going to intervene and say, hold on a second, you may want to know this about this thing that just loaded in front of you.

Speaker 1 But here's the real problem. And this goes back to the earliest days of photoshopification.

Speaker 1 Ultimately, if you develop the skepticism that you think is going to protect you, you're going to start not just disbelieving the stuff that's false, but also the stuff that's true.

Speaker 3 You know, a lot of people who were taken in by the QAnon conspiracy theory, a lot of those people believe they were engaged in really deep, serious internet research and that they were actually the ones doing the fact-checking and the media literacy.

Speaker 3 And so simply to say to somebody, hey, you should check that, or hey, don't believe everything you read, like, that's not actionable advice.

Speaker 3 We actually do need to be sort of equipping people with like, if you see something, here might be the three steps you take.

Speaker 3 You might go and look at that thing in front of you, but then you might go and search online and look for a wide variety of different sources to see if there is some alignment.

Speaker 3 And make sure those sources aren't all from the same kind of point of view or the same location to see if they're not all just in one place.

Speaker 3 You know, you mentioned earlier that Google stopped putting up data void warnings saying, hey, you're searching for something where we're not seeing a lot of good quality information.

Speaker 3 That's the place where the people who are trying to misinform or trying to jump on and get traffic to monetize, that's where they jump in.

Speaker 1 And you say the whole thing is the quintessence of easy money, but obviously the social cost can be alarmingly high. Case in point is this moment on Joe Rogan's podcast.
Can you describe it?

Speaker 3 This fall, there was a video that started spreading, pretty popular amongst sort of right-wing social media users, where it showed former vice presidential nominee Tim Walz.

Speaker 3 He's on an escalator, he's singing, he's dancing.

Speaker 3 Don't you wish your girlfriend was hot like me?

Speaker 3 Don't you wish your girlfriend was freak like me?

Speaker 3 Don't you?

Speaker 3 And he's got a white t-shirt on in black letters that says, fuck Trump.

Speaker 3 And what happened is that they basically took his face and superimposed it on the body of a creator who had actually originally shared this video, which again, a very easy thing to do.

Speaker 3 Joe Rogan is talking about it as if it's real and he ends up getting corrected by his producer. But his reaction isn't like, oh, sorry.
Whoops, I fell for that.

Speaker 3 He did not feel the guilt and the shame that you did. What Rogan did was basically explain it away and he said, do you know why I fell for it?

Speaker 3 It's because I believe he's capable of doing something that dumb.

Speaker 1 So truthiness lives, just as Colbert discussed it back during the Bush administration. If it feels true, it is true, because there's no such thing as facts anymore.

Speaker 3 Shame is no longer a factor in all this stuff.
And I think it's almost like tech and the platforms in some ways caught up this year.

Speaker 1 And we're basically like, yeah, why should we do things that the president is not restraining himself around? 404 Media's Jason Koebler wrote last month that America's polarization has become the world's side hustle.

Speaker 3 It's 100% true. A perfect example from this year was I found a whole network of foreign-run pages that only spread 100% false hoaxes and AI-generated images about global celebrities.

Speaker 3 And it was in English and other languages, but they hit upon a lot of culture-war topics, like trans stuff and political stuff, and what have you.

Speaker 3 And the first time that I brought those pages to Meta, they removed almost all of them.

Speaker 3 But then, after I brought a second group of more than 100 pages to Meta, and this was after they had gotten rid of the fact checkers and after they had sort of rolled back some of their moderation policies, Meta actually removed almost none of them and basically said this doesn't violate our policies anymore.

Speaker 1 So let me get this straight. Scammers can use AI to generate tons of ads and videos quickly to either sell or to pretend to sell users something.

Speaker 1 Tech companies like TikTok and OpenAI are creating tools to integrate AI into videos because that allows people to make content faster, more speed, more videos, more engagement.

Speaker 1 VC firms are investing either in the AI tools themselves or bot armies that high bidders can deploy to manufacture online content and maybe manufacture consent.

Speaker 1 It's all juiced by Meta and the like, who embrace these ads and reward people for using these kinds of tools, because that's even more money in their pockets. So what am I missing?

Speaker 3 Well, I think that's a pretty good, heartbreaking, disappointing summary.

Speaker 3 And I think at the end of the day, if people are wondering kind of the why piece of it, it's that the business imperative now for tech platforms is to show that you are a leader in the creation of AI tools and the advancement of AI tools.

Speaker 3 You need to show that your users are using them and that there is value from what you are creating.

Speaker 3 And so they don't want to have policies that, from their perspective, put them at a disadvantage to competitors who maybe are allowing impersonation and are allowing this and allowing that.

Speaker 3 The race for AI supremacy has led to some of these rollbacks and the pressure from the new administration has led to the other piece of it.

Speaker 3 And so that is the recipe that led to the laundry list of things that you have just mentioned. This is where we are going into 2026.

Speaker 1 But then, what do you make of the report that you cite from The Economist saying despite all that, adoption of AI amongst businesses is down? Here's a quote.

Speaker 1 The employment-weighted share of Americans using AI at work has fallen by a percentage point and now sits at 11 percent.

Speaker 1 Adoption has fallen sharply at the largest businesses, those employing over 250 people.

Speaker 1 What's more, The Economist predicts that, quote, from today until 2030, big tech firms will spend $5 trillion on infrastructure to supply AI services.

Speaker 1 And to make those investments worthwhile, they will need, according to JPMorgan Chase, on the order of $650 billion a year in AI revenues, up from about $50 billion a year today.

Speaker 1 People paying for AI in their personal lives will probably buy only a fraction of what is ultimately required. Businesses must do the rest.
Looks like they're not gonna.

Speaker 3 You know, this goes back to our conversation about the hustlers. They are early adopters.

Speaker 3 So the scammers, the hustlers, the bleeding edge marketers, the tech entrepreneurs, these are categories of people who are absolutely adopting this kind of stuff and pressure testing it and using it.

Speaker 3 And it's not to say that average people aren't.

Speaker 3 My wife uses ChatGPT to ask questions far more than I am comfortable with, but there is absolutely a gap here between the amount of investment, the valuations on the companies in the AI space, how much the platforms are pushing on this, and where things stand today.

Speaker 3 If they don't close that gap by getting actual businesses and people in their daily lives to integrate this stuff, then a lot of this is going to have just supercharged scams and fraud and impersonation and slop, not actually translated into the business value that would lead to a sustainable AI boom.

Speaker 3 This is part of the argument some people make that there is inevitably going to be some kind of a bust around this.

Speaker 1 So, looking ahead to next year.

Speaker 3 Oh, God, no.

Speaker 1 There's been a ton of press about the potential popping of the AI bubble, that it's reminiscent of the dot-com sparkle and fizz of the 2000s.

Speaker 1 Now, let us consider that AI right now is propelling our economy. If that weren't in the equation on the stock market, it would be completely flat.

Speaker 1 The New York Times reporter David Streitfeld, writing of the dot-com bubble compared to the AI bubble, said this week, for all the similarities, there are many differences that could lead to a distinctly different outcome.

Speaker 1 The main one is that AI is being financed and controlled by multi-trillion dollar companies like Microsoft, Google, and Meta that are in no danger of going kaput.

Speaker 1 So you've been watching this for a year. Any thoughts on the financial stability of this industry?

Speaker 3 It's absolutely true that Meta isn't going to go bankrupt if AI and its superintelligence team doesn't pay off the way they want.

Speaker 3 It actually could end up being, for Meta, like their big bet on the metaverse. Do you remember that? Yeah.

Speaker 3 He spent around $70 billion saying the future is the metaverse, and that has amounted to almost nothing.

Speaker 3 But I do think a company like Meta and Google, they're really so big and they have such good foundational, strong advertising businesses that they can make a bunch of bad bets.

Speaker 3 There will be some carnage. There are a lot of AI startups that have been funded.

Speaker 3 There are people who left senior positions at OpenAI, started new companies with multi-billion dollar valuations right away. And those are the things that may go away.

Speaker 3 But at the end of the day, from my specific perch, and my specific bias of wanting to see some better kind of guardrails around this technology, which honestly I use and enjoy at times, if the business hype reduces a little bit, maybe that leads to them actually saying, okay, so what do we have here?

Speaker 3 And what are actually some reasonable rules?

Speaker 3 Right now, I feel like a lot of these companies feel like they can't put too many constraints on their models and on the use of them because that means they might lose.

Speaker 1 Craig,

Speaker 1 all this is dizzying. What in God's name are we supposed to do about it?

Speaker 3 There isn't an easy answer. I think in spite of all this, which seems very overwhelming, you can sort of feel disempowered and small.

Speaker 3 But the truth is that we individually are the atomic units that these big tech platforms need. They need our attention and they need us to spend time on them.

Speaker 3 And so I really encourage people to think very consciously about where you are giving your attention and where you are spending your time.

Speaker 3 I'm not going to tell you to get rid of all the social media on your phone. I have it on my phone too.

Speaker 3 But think about the fact that anytime you slow down on a piece of content and watch it or like it or share it, that sends a signal to the system and it might get more people shown the same thing.

Speaker 3 So being conscious of all of this stuff, all of these threats, all these risks, and thinking about what you reward with your attention. It's valuable.

Speaker 3 So take your power, put your attention where you feel good about it. Control where your eyes go and what you listen to, and patronize the stuff that you think is worth it.

Speaker 1 I think I'm going to run some patriotic music underneath that, Craig.

Speaker 3 Please let it be O Canada or hockey highlights.

Speaker 1 Perhaps O Canada will do. Thanks again so much.
It was great to have you back.

Speaker 3 Thank you so much for having me, Brooke.

Speaker 1 Craig Silverman is co-founder of Indicator, a publication dedicated to understanding and investigating digital deception.

Speaker 1 Thanks for listening to the On the Media Midweek podcast. Tune into the big show on Friday for the lowdown on how AI shaped 2025.
I'm Brooke Gladstone.