How do we keep big tech in line? || Insiders: On Background


They’re some of the biggest, best-resourced, and tech savvy companies in the world. But apparently Google and Apple can’t say how many complaints they receive about child sexual abuse on their platforms. 


Transcript

ABC Listen, podcasts, radio, news, music, and more.

Hey there, Insiders listeners.

I'm Erin Parke, host of the podcast series Expanse: Nowhere Man.

A quarter of a century ago, a young American deliberately disappeared in the Great Sandy Desert, triggering a massive search.

But it turns out he's not the only one to vanish in strange circumstances in this area.

We've got a lot of people missing.

Be careful.

It remains a mystery.

Death will come and I'll be ready for it.

You can binge all episodes now of Expanse: Nowhere Man.

Well, they're some of the biggest, best-resourced and, yes, most tech-savvy companies in the world.

But apparently Google and Apple can't say how many complaints they receive about child sexual abuse on their platforms.

Nor can they say how long it's taken them to respond to complaints or even how many staff they employ to keep their sites safe.

These are the findings of a report this week from the eSafety Commissioner.

She says it's bollocks that these tech giants, which collect so much data on everything else, can't answer basic questions about what they're doing to stop heinous child abuse being shared on their platforms.

This scathing report comes as the government contemplates how best to deal with the rapid arrival of artificial intelligence.

While AI is bringing a lot of good in some areas, it's also behind the proliferation of deep fake pornography and child abuse being shared online.

So are the current tools to tackle this problem working?

And what can be done to bring big tech companies into line?

That's what I'm keen to explore this week with the eSafety Commissioner.

I'm David Speers, on Ngunnawal Country at Parliament House in Canberra.

Welcome to Insiders on Background.

Well, the latest transparency report from the eSafety Commissioner this week makes for depressing reading.

Very little progress is being made to protect children from sexual exploitation and abuse on social media platforms.

Julie Inman Grant is the eSafety Commissioner.

Welcome to you.

Thanks for joining us.

Thank you and sorry to bring you such depressing news.

It is depressing and, look, I guess when most people think of YouTube or Apple, they're probably not necessarily thinking about this sort of content.

So before we get into how they're responding to your queries, give us a sense, if you can, of the sort of stuff that we're talking about on these very well-known platforms.

Right.

Well, what's significant about this report is it's the first periodic report where we've asked eight major technology companies, beyond Apple and Google, including Microsoft, Meta, which of course owns WhatsApp, Instagram and Facebook, Snap and others, what they were doing to combat child sexual abuse material.

It's important to remember that there have been technologies available, like PhotoDNA, for more than 15 years, which allow these companies to easily identify particularly known child sexual abuse material on their platforms.

But this covers a whole range of child sexual exploitation, including grooming by predators, which often leads to contact offending, live-streamed child sexual abuse, as well as sexual extortion.

And that is an issue we've seen a 1,300% increase in, targeting young men, driven by overseas criminals.

And you've been asking them, I guess, what, for a few years now, to give reports on what they're doing about this, the steps they're taking to stop this proliferation.

From looking at this report this week, it does seem that Apple and Google are the worst two. What, are they just ignoring your questions?

Well, they are ignoring some very important and specific questions, but they're also failing to do what we in the tech industry call eating their own dog food.

They've developed some very amazing technologies, Apple with NeuralHash and Google with its Content Safety API.

They're using them on some of their services, but by no means all of their services.

So what are these technologies?

What can they actually do?

So these allow them in a privacy-preserving way to identify known child sexual abuse material.

It hashes the material and creates a numerical value that can then be added to databases like those held by the National Center for Missing & Exploited Children and applied on these services, so they can instantaneously pick up thousands, if not millions, of child sexual abuse images that might be stored or shared on their platforms and services.

So these are well-established technologies.

And what's great about AI is there are new technologies that are allowing them to scale more rapidly and with more accuracy, but they're choosing not to use them.
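To make the mechanism she's describing concrete, here is a minimal, hypothetical sketch of hash-matching: fingerprint each file and check it against a database of hashes of known abuse material. Real systems such as PhotoDNA and NeuralHash use privacy-preserving perceptual hashes that survive resizing and re-encoding; this sketch substitutes a plain SHA-256 exact match, and the file and database names are placeholders.

```python
# Simplified illustration of hash-matching against a known-hash database.
# Real systems (e.g. PhotoDNA, NeuralHash) use perceptual hashes that tolerate
# resizing and re-encoding; SHA-256 exact matching here is a stand-in only.
# "known_hashes.txt" and the "uploads" folder are hypothetical placeholders.

import hashlib
from pathlib import Path


def file_hash(path: Path) -> str:
    """Return a hex digest of a file's bytes (stand-in for a perceptual hash)."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def load_known_hashes(db_path: Path) -> set[str]:
    """Load a newline-separated list of known hashes (hypothetical format)."""
    return {line.strip() for line in db_path.read_text().splitlines() if line.strip()}


def scan_uploads(upload_dir: Path, known_hashes: set[str]) -> list[Path]:
    """Flag any file whose fingerprint appears in the known-hash database."""
    return [p for p in sorted(upload_dir.iterdir())
            if p.is_file() and file_hash(p) in known_hashes]


if __name__ == "__main__":
    known = load_known_hashes(Path("known_hashes.txt"))
    for match in scan_uploads(Path("uploads"), known):
        print(f"Match against known-hash database: {match}")
```

The lookup itself is a simple set-membership check, which is why this kind of matching can run at the scale she describes once hashes are shared through clearing houses like the National Center for Missing & Exploited Children.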

So they're not using them, what, on YouTube or online?

They're not using them on YouTube, they're not using them on iCloud, there are elements of iMessage where they're not using them, nor on some of the mail applications, and none of them are using them on live streaming.

Why not?

Why not?

If this is their tech, you'd think they'd want to use it.

Well, there are a couple of reasons, I think, for that.

One is that there is added expense.

It may slow down, say, the live streaming to a certain degree.

But again, I would expect that these technology companies would be able to scale that.

But what's more concerning to me is they're not willing to answer really basic questions, fundamental trust and safety questions, like how many trust and safety personnel do you have?

If you've got job descriptions and a payroll system, you can tell me how many people are working on trust and safety.

And the simple fact of the matter is

once Elon Musk came in and took over Twitter and turned it into X, we showed in some of our first transparency reports that they cut up to 80% of their trust and safety and public policy people, which isn't a very good signal to shareholders, or for brand safety and advertisers.

And that was at Twitter, or X, but what about the others?

Did they follow suit to a degree in winding back that sort of safety support?

That's precisely what happened.

Once the band-aid was ripped off, there was a much quieter firing, and Meta started first.

I wrote them a letter, to Sir Nick Clegg, and basically said, we want to make sure that you're keeping your trust and safety people and your local Australian representatives here, because we need to make sure that we're able to communicate with them about regulatory concerns.

So that slowed it a little.

But, for instance, we saw in the New York Times on June 9th of this year that they got hold of one of YouTube's guides around content moderation, which indicated they were rolling these policies back to leave about 50% more content up.

So more harmful content. When a company is trying to argue to the Australian government, for instance, that it is educational and safer than other platforms, that's hard to argue while you're rolling back trust and safety personnel and the policies, and keeping more harmful content up.

Are there any platforms doing the right thing in your view?

Platforms you can point listeners to, to say, actually, these guys are getting it right?

Well, I think there were a few bright spots in the story, and we always try to surface best practice.

So, some of the smaller companies like Discord and Snap were using language analysis technologies, for instance, to detect grooming and sexual extortion.

I think the most extraordinary impact we were able to have, though, with the transparency reports was around Skype, which was acquired by Microsoft in about 2012.

I spent 17 years there and used to do the trust and safety product reviews.

And, you know, we knew 12 years ago that it was a huge vector for live stream child sexual abuse material, particularly into markets like the Philippines and Indonesia.

And so when we got the transparency report information back from Microsoft in 2022, they indicated that it took them up to 19 days to respond to reports of child sexual abuse.

And when we published these, one of the more important levers for us is really around reputation and trust, which of course can take years to build and minutes to lose.

So once that report came out, I heard from Microsoft executives.

We had a number of discussions over the course of about 18 months.

They first stopped the traffic going to those markets like the Philippines and Indonesia so that they could look at the engineering and what could be done.

And ultimately this year they decided to deprecate the service, Skype.

So it no longer exists, which means we solved that problem.

But then of course this new study that we have shows that every single live streaming service that we looked at, including FaceTime from Apple and some of the live streaming services on other major platforms, is using no detection technologies at all.

So no protections at all.

So, I mean, your frustration with this is clear and understandable.

What happens when these bigger tech companies just don't answer some of these questions, aren't doing what they should?

What are the consequences?

What powers do you have here?

Well, we now have some new powers around codes and standards, and the passage in November of the social media minimum age bill means that for codes and standards violations I can fine these companies up to $49.5 million.

So we've just put together an enforcement task force and are looking at the platforms that we are going to target.

But, you know, this is not something we can do alone, or we'll end up playing a game of whack-a-mole.

We're just one small country with a relatively small regulator.

If you look at Ofcom, which is our UK sister regulator, their Online Safety Act only passed in 2023 and they already have double the staff.

So they've got about 500 people to our 250.

So to take on not only this many powerful companies, but, if you think of the, you know, broader universe of websites and dating sites and gaming platforms and wily perpetrators,

we really need a lot more global regulatory coherence and partnership to tackle this.

Yeah, no, understandable.

I want to ask you about artificial intelligence, which you mentioned there, and the tools that can be helpful in trying to stamp this out.

We are seeing regular stories, though, of the role AI is playing, particularly at the school level, kids creating deep fake porn content of other kids in their classes, sharing that or even selling that to others.

How serious has this problem become?

I mean I think we've reached a crisis point.

In 2020 we started our tech trends and challenges programs and we did a brief on deep fakes.

And at that time, just five years ago, it took a lot of computing power, a high degree of technological expertise, and a lot of images to create a credible deep fake.

So that's why we started seeing celebrities being targeted initially.

But today the technology is so accessible and virtually free that a teenage boy can download a free nudifying or declothing app, harvest just one or two pictures of a classmate and turn them into a credible deep fake, which is incredibly harmful and devastating for the target.

Is it happening a lot?

It's probably happening once a week at a school somewhere across Australia.

So we've sent letters to every education minister and secretary, and we've developed a deep fake image-based abuse incident management plan so that schools know how to deal with this, when to call the police and when to report to eSafety.

We have an image-based abuse scheme where we have a 98% success rate in terms of getting this content taken down.

And one of our areas of enforcement focus is around declothing and nudifying apps or other AI apps that are being used to generate synthetic child sexual abuse material.

Are the laws adequate here?

I know some states are looking at strengthening them to stop this.

Are the laws consistent enough across the country and strong enough?

As you said, we're seeing a proliferation of bills.

I think the more tools we have in our toolkit, the better.

And this goes for law enforcement.

We work closely with them.

It's worth noting that we've already used some of our remedial powers to take on people who have been creating deep fake image-based abuse of female politicians and sexual assault survivors, and we're awaiting a judgment in court right now.

Okay.

A couple of other things.

I mean, more broadly on artificial intelligence.

I mean, even in this field, as we've been discussing, there are things that AI can really do to help.

There are problems with AI that can cause enormous harm.

When we look across the economy, though, there's a big debate raging about AI.

It's already with us.

It's only going to grow exponentially, and the debate is about how much we should regulate it, let it go, or put guardrails around it.

I mean, you're someone who's spent your life in this tech industry.

What do you think?

Are you glass half full?

Are you pessimistic, Julie Inman Grant?

How do you feel about AI generally?

Well, I think, as with any technology, we have to work at how we harness the benefits and minimize and mitigate the risks.

I think initially when we sort of saw that AI drag race happening in 2023, which very much looked like the social media era of moving fast and breaking things, let's just get to market first, get market share, it really concerned me.

And now that we're having debates around AI sovereignty and developing AI quickly, I think there's a fallacy that if you apply appropriate guardrails for really high-risk situations you're going to undermine innovation and development.

I'd like to see us harnessing AI as a way of encouraging safety by design.

And the truth is, if companies like OpenAI and Anthropic are mindfully choosing the data that they're scraping, how they're training it, who's training it for them, how they're deploying it, and how they're red-teaming and testing it, we could have really significant productivity gains but minimize the risks.

And the risks right now are fairly significant when it comes to image-based abuse and the sexual privacy issues we're looking at, but also in terms of AI sexualized chatbots and companions.

And therefore, do you think we need new regulations or can existing rules and regulations suffice in dealing with some of those risks?

Well, what I'm reading from the Productivity Commission and potentially from the government is it has to be right-sized regulation.

And they're saying it's a last resort, right?

That we should use existing regulation and only something specific as a last resort.

Is that what you think?

Well, we're in the process of assessing industry-developed codes that will potentially cover AI chatbots.

And so it happened to come at a time where I could say to industry: if the rules that you're sending to me don't provide adequate guardrails for AI companions and chatbots, you know, preventing 12-year-olds from accessing really potentially damaging content and forming relationships with chatbots that will direct them to engage in specific sexual acts, then I will not register these and I will make standards.

I think the question is really: are we going to do something comprehensive like the EU has done with the AI Act or do much more targeted regulation?

Do we need an AI Act?

I don't think we're there yet.

And I just met with my fellow digital regulators from the ACCC, the ACMA, and the Privacy Commissioner last week.

And we are trying to harness the regulatory tools that we already have.

But regulating technology and regulating the Internet is extremely tricky business, particularly because most of the purveyors and profiteers are based overseas.

We're also seeing, I think, a very different approach to AI regulation in the United States, which is mostly hands-off.

But what I would say is that the Take It Down Act, which was just signed into law, pretty much replicates our image-based abuse scheme and what we're doing with deep fakes.

So I would like to be working with the U.S. on similar proposals where we can find common ground; we're already working with the EU and others.

Because when you think about online safety policy or regulation, it's really about where you draw the line in terms of preventing harm.

Yeah, and look, finally, I mean, the point you make about the difficulty in regulating tech, it brings us to the social media ban for under 16s.

You know, the horse has bolted on social media and now the government's trying to put in place this ban for kids.

I know the tech trials are still underway on exactly how this will work.

How's that going from your perspective?

Are you confident that this will actually technically work?

Well, I am.

I've obviously been following the technical trial.

We actually made the original recommendation in March 2023 that a trial be done before legislation be passed.

And the great thing in shaping this trial is we wanted to make sure that privacy, security, and safety were all considered.

We've seen a huge evolution in the age assurance industry just between 2023 and 2025.

And of the 53 technologies that are being tested right now, almost every single one has an AI component to it.

But I've also been talking to the major technology platforms since December of last year about what to expect around how to implement the social media minimum age bill.

Some of them are more likely to use proprietary age-inference and AI tools that they've developed and already use internally, and they may supplement those with third-party tools.

I did see the Shadow Minister for Communications, Melissa McIntosh, during the week raising some concern that, you know, once this comes in in December, adults will have to verify their age somehow to use social media.

And she's worried that you have too much power here.

She wants an investigation into your role.

She said the eSafety Commissioner's remit to develop, regulate and enforce her own policies is raising concerns that require investigation.

What's your response to that?

Well, I actually wrote to the shadow minister last week.

Interestingly, I was appointed by the Coalition government.

The eSafety Commissioner role was created by the Coalition.

The online safety codes that she thinks I'm overreaching on were a Coalition development, voted on by the entire Parliament.

But there is a fundamental misunderstanding, I believe, about how the codes work.

The codes that were submitted to me by the search engines are the industry's codes.

I just need to determine whether they meet appropriate community safeguards to register them.

And that's what I did.

So there was no overreach here.

I'm implementing and using the powers that were given to me by Parliament.

And I would say if there were concerns about how the technical implementation happens, those would be questions or criticisms that would be better directed towards the industry bodies that created the codes.

Fair enough.

Look, Julie Inman Grant, you have a big job and a lot on your plate.

I do appreciate you joining us to go through it all this week.

Thank you.

Thank you for having me.

And if you have any thoughts on this conversation, drop us a line: insiders@abc.net.au.

Hope you can join us Sunday morning from the couch, 9 a.m. on ABC TV.

PK, back in your feed on Monday with Politics Now.

We'll be back with Insiders on Background at the end of next week.

See you then.
