CZM Rewind: The Academics That Think ChatGPT Is BS
In a paper released earlier this year, three academics from the University of Glasgow classified ChatGPT's outputs not as "lies," but as "BS" - as defined by philosopher Harry G. Frankfurt in "On BS" (and yes, I'm censoring that) - and created one of the most enjoyable and prescient papers ever written. In this episode, Ed Zitron is joined by academics Michael Townsen Hicks, James Humphries and Joe Slater in a free-wheeling conversation about ChatGPT's mediocrity - and how it's not built to represent the world at all.
Paper: https://link.springer.com/article/10.1007/s10676-024-09775-5
Michael Townsen Hicks: https://www.townsenhicks.com/
Joe Slater: https://www.gla.ac.uk/schools/humanities/staff/joeslater/
Original Air Date: 7.18.24
YOU CAN NOW BUY BETTER OFFLINE MERCH! Go to https://cottonbureau.com/people/better-offline and use code FREE99 for free shipping on orders of $99 or more.
BUY A LIMITED EDITION BETTER OFFLINE CHALLENGE COIN! https://cottonbureau.com/p/XSH74N/challenge-coin/better-offline-challenge-coin#/29269226/gold-metal-1.75in
---
LINKS: https://www.tinyurl.com/betterofflinelinks
Newsletter: https://www.wheresyoured.at/
Reddit: https://www.reddit.com/r/BetterOffline/
Discord: chat.wheresyoured.at
Ed's Socials:
https://www.instagram.com/edzitron
See omnystudio.com/listener for privacy information.
Transcript
Cool Zone Media.
Hey, everyone, it's me, Ed Zitron, and we're doing a rerun episode this week.
Sadly, the in-studio recording we did with Ashwin Rodriguez and Victoria Song had a technical fault, and there's nothing we can do.
It sucks, but we're doing a rerun this week.
We're doing The Academics That Think ChatGPT Is BS.
And this is one of my favorite episodes ever recorded.
It changed how I do interviews writ large.
It's with these three academics, Michael Townsen Hicks, James Humphries, and Joe Slater, who wrote a paper using the actual Frankfurtian definition of bullshit to argue that ChatGPT is bullshit.
It's so much fun.
It's one of my favorite episodes I've recorded.
I will have a monologue for you this week as well.
I do apologize for not having something new for you this week.
You'll still get the monologue on Friday though.
Thank you for your time, your patience, and of course for listening to Better Offline.
Hello and welcome to Better Offline.
I'm your host, Ed Zitron.
In early June, three researchers from the University of Glasgow published a paper in the journal Ethics and Information Technology called "ChatGPT Is Bullshit."
And I just want to be clear: this is a great and thoroughly researched and well-argued paper.
This is not silly at all.
It's actually great academia.
And today I'm joined by the men who wrote it, academics Michael Townsen Hicks, James Humphries, and Joe Slater, to talk about ChatGPT's mediocrity and how it's not really built to represent the world at all.
So, for the sake of argument, could you define bullshit for me?
So, you are bullshitting if you are speaking without caring about the truth of what you say.
So normally, if I'm telling you stuff about the world, in a good case, I'll be telling you something that's true and like trying to tell you something that's true.
If I'm lying to you, I'll be knowingly telling you something that's false or something I think is false.
If I'm bullshitting, I just don't care.
I'm trying to get you to believe me.
I don't really care about whether what I say is true.
I might not have any particular view on whether it's true or not.
Right.
And you distinguish between, like, soft and hard bullshit.
Can you also get into that as well?
Can you also identify yourselves as well?
Sorry, yeah. I'm Joe.
So the soft bullshit, hard bullshit distinction is a very serious and technical distinction.
Right.
So we came up with this because bullshit, in the technical, philosophical sense, comes from Harry Frankfurt, recently deceased, but a really great philosopher.
And he talks about the amount of bullshit that there is in popular culture and just in general discourse these days.
Some of the ways he talks about bullshit seem to suggest that it needs to be accompanied by a sort of malign intention.
Right.
Like I'm doing something kind of bad.
I'm intending to mislead you about the enterprise of what I'm doing.
Yeah, maybe about who you are or what you know.
So you might be trying to portray yourself as someone who is knowledgeable about a particular subject.
Maybe you're a student who showed up to class without doing the work.
Maybe you're trying to portray yourself as someone who's virtuous in ways you're not.
Maybe you're a politician who wants to seem like you care about your constituents, but actually you don't.
So you're not trying to mislead somebody about what you're saying, the content of your utterance.
You're trying to mislead them instead about like why you're saying it.
That's what we call hard bullshit.
And it's one of the things Frankfurt talked about.
Yeah.
So Frankfurt doesn't make this hard bullshit, soft bullshit distinction, but we do, because sometimes it seems like Frankfurt has this particular kind of intention in mind, but sometimes he's just a bit looser with it.
And we want to say that ChatGPT and other large language models, like they don't really have this intention to deceive, because they're not people.
They don't have these intentions.
They're not trying to mess with us in that kind of way.
but they do lack this kind of caring about truth.
Well, I'm James, by the way.
I suppose we strictly don't want to say that they aren't hard bullshitters.
And we just think if you don't think that large language models are sapient, if you don't think they're kind of minds in any important way, then they're not hard bullshitters.
So I think in the paper,
we don't take a position on whether or not they are.
We just say if they are, this is the way in which they are.
But minimally, they're soft bullshitters.
So they're kind of soft bullshitters, as Joe says. Soft bullshit doesn't require that the speaker is attempting to deceive the audience about the nature of the enterprise; hard bullshit does. So if it turns out that large language models are sapient, which they're definitely not, like, that's just tech bro hype.
Yes, that's nonsense.
Yeah.
But if they are, then they're hard bullshitters and minimally they're soft bullshitters or they're bullshit machines.
So you also make the distinction in there, the intention.
So the very fabric of hard bullshit, that you intentionally are bullshitting to someone.
You kind of make this distinction that the intention of the designer and the involved prompting could make this hard bullshit.
Because with a lot of these models, and someone recently jailbroke ChatGPT and it listed all of the things it's prompted to do, could prompting be considered intentional?
That Sam Altman, CEO of OpenAI, could be intentionally bullshitting?
I think he is.
And yeah, this, again, I think is something, I don't know what the kind of hive mind consensus on this is.
I'm sort of sympathetic to the idea that if you take this kind of purposive or teleological attitude towards what an intention is, an effort to do something, then maybe they do have intentions.
But again, I think in the paper we just sort of wanted, I mean, it's a standard philosophical move, right?
Just sort of go, look, here's all this stuff, as uncontroversial as we can make it.
Now we can hit you with the really controversial shit that we wanted to get to.
So in the paper, we sort of deliberately went, maybe you might think it has intentions for this reason.
We kind of have no judgment on this officially.
I'm sympathetic to this sort of view that you're putting.
I think you're kind of sympathetic to this as well, Mike, right?
Yeah.
Yeah.
So I'm Mike.
There are a few ways that you can think of ChatGPT as having intentions, I think.
And we talk about a few of them.
One is by thinking of the designers who created it as kind of imbuing it with intention.
So they created it for a purpose and to do a particular task.
And that task is to make people think that it's a normal sounding person, right?
It's to make people, when they have a conversation with it, not be able to distinguish between what it's outputting and what a normal human would say, right?
Right.
And
that kind of goal, if it amounts to an intention,
is the kind of intention we think a bullshitter has, right?
It's not trying to deceive you about anything it's saying.
It doesn't care whether what it's saying is true.
That's not part of the goal.
What the goal is to do is to make it seem as if it's something that it's not, like specifically a human interlocutor.
And one source for that goal is the programmers who designed it.
Another is the training method.
So it was trained by being given sort of positive and negative feedback in order to achieve a specific thing, right?
And that specific thing is just sounding normal.
And that's so similar to what our students are doing when they try to pretend that they read something they haven't read.
There is something very collegiate about the way it bullshits, though.
It reminds me of when I was in college.
I went to Penn State and Aberystwyth, two very different institutions.
Both kind of in the middle of nowhere.
Yeah, both very sad.
But the one thing you saw with like students who were doing like B-plus homework is they were using words they didn't really understand.
They were putting things together in a way that really was like the intro body conclusion.
There was a certain formula behind it.
And it's just, it feels exactly like it.
But that kind of brings me to my next question, which is, how did you decide to write this?
What inspired this?
I should field this one, because whenever Mike tells the story, you get a sort of sanitized version.
We were in the pub whinging about student essays.
Perfect.
Yeah.
Like, Mike, you were not long here, right?
You were not long here.
I just started.
Yeah, not long in post.
A whole bunch of us sort of went to the pub on a notional "let's welcome our new members of staff" sort of event, and inevitably, within about two pints, we were all pissing and moaning. I think it might have been Neil that kind of prompted this. No, I don't think he was there. Yeah, okay, fair enough, we talked to him about it after, right? In any case, it sort of came up. We were talking about this sort of prevalence of ChatGPT-generated stuff and kind of what it said about
how at least some students were kind of approaching the assessments. And, you know, I forget who, someone sort of just went, offhandedly, yeah, but it's all just Frankfurtian bullshit though, isn't it? And we sort of collectively went, oh, hello, because obviously, all having a background in philosophy, we've all read On Bullshit.
You know, we all went, hee, hee hee, we get to say bullshit in allegedly serious academic work.
So the start of it was we'd had this experience of having to read all of this kind of uncanny valley stuff.
And when prompted, we all went, oh, it really is like Frankfurtian bullshit, in a way that we can probably get a paper out of.
Yeah.
And at that point, I think we were in the department meetings, talking about how to deal with ChatGPT-written papers.
And there were discussions going on all over the university including kind of in high levels in the administration to come up with a policy and
we specifically wanted a stronger policy, because our university is very interested in cutting-edge technology, right? And so they wanted the students to have an experience using, and figuring out how to use, whatever the freshest technology is. Which, as they should. As they should, right? But not for essays, right? That's the worry we had. We thought, you know, if we're not very clear about this, the students will be using it in a way that will detract from their educational experience.
Right.
And at the same time, it was becoming more widely known how these machines work, like specifically how they're doing next token or next word prediction in order to come up with a digestible string of text.
And when you know that that's how they're doing it, I mean,
it seems so similar to what humans do when they have no idea what they're talking about.
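To make the next-word-prediction idea concrete, here's a minimal sketch in Python of the general technique being described: a toy bigram model that picks each next word purely from frequency statistics. The corpus and the sampling scheme are invented for illustration; real systems like ChatGPT use neural networks over subword tokens, not raw word counts, but the objective is the same, a plausible continuation rather than a true statement.

```python
import random
from collections import Counter, defaultdict

# Toy "training data" - stands in for the web-scale text a real model ingests.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count which word follows which: a bigram model, the crudest
# possible form of next-token prediction.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Emit text by repeatedly sampling a statistically likely next word.

    Nothing here tracks truth or the world; the only objective is a
    plausible-looking continuation of whatever came before.
    """
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat . the dog"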
And so when we were talking about it, it just seemed like an obvious paper that someone was going to write.
And we thought, that had better be us.
You know, eventually people are going to see this connection.
And it's a great paper.
I think it worked very well.
I mean, I think that it's the kind of thing where when I pitch this to other philosophers, it doesn't take them very long to just agree, to be like, ah, yes, that's right.
It's an interesting philosophical concept as well, because the way that people look at chat GPT and large language models is very much machine do this, but when you think about it, there are other ways where people are giving it consciousness.
I just saw a tweet just now where someone talked about saying please with every request.
It's like, no, I will abuse the computer in whatever manner I see fit.
But it's
curious because I think more people need to be having conversations like this.
And one particular thing I like that you said, and I actually would love you to go into more detail: you said that ChatGPT and large language models, they're not designed to represent the world at all. They're not lying or misrepresenting. They're not designed to do that. What do you mean by that?
I mean, kind of as background, I do philosophy of science, but my thoughts about something like ChatGPT are largely inspired by the fact that I also teach a class called Understanding Philosophy Through Science Fiction. And in that, we, like, talk about whether computers could be conscious, and
I don't know what you guys think, actually. I think they could, right?
I just don't think this one is. And part of the reason I think they could, but this one isn't, is that I think in order to sort of represent the world, or have the kinds of things we have that are like beliefs, desires, thoughts that are about external things, you have to have internal states that are connected in some way to the external world, usually causation. We're perceiving things, information is coming in, then we've got some kind of state in our brain that's designed just to track these things in the external world, right?
Right. That's a huge part of our cognitive lives, just tracking external-world things, and it's a very important part of childhood development, when you figure out how to track it. It's semiotics, right? Yeah, Daniel Chandler at Aberystwyth taught me semiotics. It's like a perception of the world. And this is like theory-of-meaning stuff. So yeah, semiotics is like the theory of signs. How is it that a sign can be both the representation of the thing and the thing itself?
That can happen.
Yeah, but not always.
Sometimes it's just the representation of the thing.
And there's a lot of philosophy is about figuring out how brain states or words on a page can be about external world things.
And a big part of it, at least from my perspective, has to do with tracking those things, keeping tabs on them, changing as a result of seeing differences in the external world.
And ChatGPT is not doing any of that, right?
That's not what it's designed to do.
It's taking in a lot of data once and then using that to respond to text,
but it's not remembering
individuals, tracking things in the world, in any way perceiving things in the world.
It's just forming a sort of statistical model of what people say.
And that's kind of so divorced from what
most thinking beings do.
It's divorced from experience.
Yeah.
I mean, as far as I can tell, it doesn't have anything like experience.
Yeah, and that's one of the things that this, I think, sort of, in one way, comes down to, is that if you sort of push this sufficiently far, someone is going to go, ah, isn't this just bio-chauvinism?
Aren't you just assuming that unless something runs on like meat, it can't be sentient?
And this isn't something we get into in the paper, partly because we didn't really think it was worth addressing.
But the sorts of things that seem, never mind consciousness, right, to be necessary in order for something to be trying to track the world, or to be corresponding to the world, or to form beliefs about the world, ChatGPT just doesn't seem to meet any of them.
If it does turn out that it's sapient, then ChatGPT has got some profoundly serious executive function disorders.
But of course it's not sapient, right?
So we don't have to worry about it.
But kind of, it's not the case that we've got some blundering proto-general intelligence that's trying to figure out how to represent the world.
It's not trying to represent the world at all.
Its utterances are designed to look as if it's trying to represent the world.
And then we just go, well, that's just bullshit.
This is a classic case of bullshit.
Yeah, it seems to be making stuff up. But making stuff up doesn't even seem to describe it. It's just throwing shit at a wall very accurately, but not accurately enough. It's got various guidelines that allow it to throw shit at the wall with a sort of reasonably high degree of accuracy. No, you're right. I mean, one of the things that a human bullshitter could at least be characterized as doing is they'd have to try and kind of judge their audience, right? They'd have to try and make the bullshit plausible to the audience that they're speaking to. And ChatGPT can't do that, right? All it can do is go on a statistically large enough model, and it looks like Z follows Y follows X.
It's not kind of got any, well it doesn't have any consciousness at all, of course, but it doesn't have any sensitivity to the sorts of things that people are in fact likely to find plausible.
It just does a kind of brute number crunching.
Sorry, it's more complicated than that, but I think it boils down effectively to kind of number crunching, right?
Kind of data and contexts in which it does probabilistic planning of... what planning?
It doesn't plan.
That's the thing.
It's interesting.
There are these words you use to describe things that when you think about it are not accurate.
You can't say it plans or thinks. I think, one thing, between when we came up with the idea and when we finished writing the paper, we spent some time reading about how it works and how it represents language and what the statistical model is like. And I was
maybe more impressed than James
about that because it is like doing things that are similar to what we do when we understand language.
It does seem to kind of locate words in a meaning space, right?
Right.
And connect them to other words and, you know, show similarity and meaning.
And it also does seem to be able to, in some way, understand context.
But we don't know how similar that is for a variety of reasons, but mostly because it's too big of a system and we can only kind of probe it and it's trained indirectly, right?
So it's not programmed by individuals.
And
even though that's kind of a very impressive model of language and meaning, and may in some ways be similar to what we do when we understand language,
we're doing a lot more.
Things like planning, things like tracking things in the world, just having desires and representing the way you want the world to be and thereby generating goals doesn't seem to be something that it has anything, any room for in its architecture.
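As a rough illustration of what "locating words in a meaning space" amounts to, here is a hedged sketch: treat each word as a vector, and measure similarity of meaning as the cosine of the angle between vectors. The three-dimensional embeddings below are made up for the example; real models learn vectors with hundreds or thousands of dimensions from their training text.

```python
import math

# Hand-invented 3-D "embeddings", purely illustrative; real models learn
# their vectors, with hundreds of dimensions, from training data.
embeddings = {
    "cat":        [0.9, 0.8, 0.1],
    "dog":        [0.8, 0.9, 0.2],
    "philosophy": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 means 'close' in meaning space."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))          # high, ~0.99
print(cosine_similarity(embeddings["cat"], embeddings["philosophy"]))   # lower, ~0.30
```

Nothing in this picture connects a vector to the actual animal or subject it names, which is the point being made: similarity of use is not the same as tracking the world.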
This is something, at the time of it, you were talking about how, in some ways, it learns language the same way that we do.
I mean, it's got no grasp of expletive infixation, right? This is one of Chomsky's... What does that mean? Just for... not for me, I definitely know. Yeah, yeah, no, of course. Um, if I give you the sentence "that's completely crazy, man" and tell you to put the word "fucking" into that sentence, there's a number of ways in which any language speaker is going to do it. They'll just go, yeah, like, of course that's where it goes, right? Right. But it seems that we've got a grasp on this incredibly early on, in a way that doesn't look like it's the way that at least most of us are taught language, right? We get quite harshly told off when we try and do expletive infixation.
Yes.
So this, I think, would be one of those cases where you could do a sort of disanalogy by cases, right?
You present ChatGPT a sentence and say, insert the word fucking correctly in this sentence.
And I don't think it would be very good at it.
I think it would be.
I think it would be good.
You reckon it would.
I mean, we probably could test it.
We could.
But we shouldn't.
Not right now.
Yeah.
One of the things that I thought was like...
kind of interesting about how it works is that it does learn language probably differently from the way we do, but it does it all by examples.
You know, so it's looking at all these pieces of text and thinking, ah, this is okay, that's okay.
And one kind of interesting thing about how humans understand language is that we're able to kind of understand meaningless but grammatical sentences.
It's not clear to me that ChatGPT would understand those.
That's another Chomsky example.
So, you know, Chomsky has this example that's like,
what is it?
The green
woozles sleep furiously?
It's colorless green ideas, I think.
Sleep furiously, right?
And that's a meaningless sentence, but it's grammatically well-formed.
And we can understand that it's grammatically well-formed, but also that it's meaningless.
Because ChatGPT kind of combines different aspects of what philosophers of language, logicians, linguistics people see as like different components of meaning.
It sees these as all kind of wrapped up in the same thing.
It puts them in the same big model.
I'm not sure it could differentiate between ungrammaticality and meaninglessness.
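One way to picture the distinction being drawn here, under entirely invented toy rules: a syntax check can accept Chomsky's sentence while a crude semantic check rejects it. Both the grammar and the "sensible pairs" table below are hypothetical miniatures for illustration, not anything a real language model contains.

```python
# Toy illustration of grammatical-but-meaningless, using invented rules.
NOUNS = {"ideas", "dogs"}
ADJS = {"colorless", "green", "hungry"}
VERBS = {"sleep", "bark"}
ADVS = {"furiously", "loudly"}

# Invented "selectional restrictions": which adjectives sensibly apply
# to which nouns. Deliberately sparse, purely for the example.
SENSIBLE_PAIRS = {("hungry", "dogs")}

def is_grammatical(sentence: list[str]) -> bool:
    """Accepts the pattern Adj* Noun Verb Adv? - pure syntax, no meaning."""
    i = 0
    while i < len(sentence) and sentence[i] in ADJS:
        i += 1
    if i >= len(sentence) or sentence[i] not in NOUNS:
        return False
    i += 1
    if i >= len(sentence) or sentence[i] not in VERBS:
        return False
    i += 1
    return i == len(sentence) or (i == len(sentence) - 1 and sentence[i] in ADVS)

def is_meaningful(sentence: list[str]) -> bool:
    """Crude semantics: every adjective must sensibly apply to the noun."""
    noun = next(w for w in sentence if w in NOUNS)
    return all((adj, noun) in SENSIBLE_PAIRS for adj in sentence if adj in ADJS)

s = "colorless green ideas sleep furiously".split()
print(is_grammatical(s))  # True  - well-formed syntax
print(is_meaningful(s))   # False - "colorless ideas" fails the toy semantics
```

A model that wraps syntax and semantics into one statistical blob has no separate checks like these to come apart, which is the worry being voiced.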
So we're doing real-time science right now.
I just prompted it: put the word fucking into the following sentence in the correct way. "Man, that's crazy."
And I did it six times, and I would say 50% of the time it got it right.
And it did: "Man, that's fucking crazy"; "Man, that's crazy, fucking"; "Man, that's fucking crazy"; "Man, that's crazy, fucking."
My favorite is
man, comma, that's crazy, comma, fucking.
To be fair,
I think you were right.
Very unreliable.
You take the commas out of that last one and you've got a grammatical sentence.
Yeah.
Of course, in Glasgow, you can also start with fucking, man, that's crazy.
West London as well.
But the thing is, though, it doesn't know what correct means there.
Yeah, no.
When it's trained on this language, when it's trained on thousands of internet posts it stole, it's not like it reads them and says, oh, I get this.
Like, I see what they're going for.
It just learns structures by looking, which is kind of how we learn language.
But it kind of reminds me of like when I was a kid and I'd hear someone say something funny, I'd repeat it.
And my dad, who's wonderful, would just say, that doesn't make any sense.
And he'd have to explain.
Because if you're learning everything through copying, you're not learning.
You're just memorizing.
Yeah.
Yeah.
Exactly.
There's a, I don't know if you have already talked to somebody about this.
But there's a, you know, a classic argument from Chomsky against behaviorism.
Right.
Behaviorism is the view that we learn everything
through stimulus and response, roughly.
That's not exactly it.
But I'm not a philosopher of mind, so I can get away with that.
So Chomsky says, look, we don't get enough stimulus to learn language as quickly as we do just through watching other people's behavior and copying it.
We have to have some inbuilt grammatical structures that language is latching onto.
And there's been some papers arguing that ChatGPT shows Chomsky was wrong because it doesn't have the inbuilt grammatical structure.
But one interesting thing is it requires 10 to 100 times more data than a human child does when learning language, right?
So Chomsky's argument was we don't get enough stimulus.
And ChatGPT can kind of do it without the structure, but it's not quite doing it as well.
And it gets like orders of magnitude more input than a human does before a human learns language, which is kind of interesting.
And it still can't do something as basic as putting the word fuck in.
So it's right, it still can't do that.
Yeah, it doesn't even see, and it doesn't have the knowledge to, say, request more context, because it doesn't perceive context.
And that's kind of the interesting thing.
So there was another paper out of Oxford, I think, that talked about cognition and ChatGPT and all this.
And it's just, ChatGPT features in no way any of the things that the human mind is really involved in, it seems.
It's mostly just not even memorization because it doesn't memorize.
It's just
guessing based on a very large pile of stuff. But this actually does lead me to my other question, which is: you don't like the term hallucination. Why is that?
Hallucination makes it sound a bit like I'm usually doing something right. I'm looking around, seeing the world as something like what it really is. And then one little bit, for a visual hallucination, one feature of my visual field, actually isn't represented in the real world.
It's not actually there.
Everything else might well be, right?
Imagine I hallucinate, there's a red balloon in front of me.
I still see Mike, I still see James, I still see the laptop.
One bit is wrong, everything else is right.
And I'm doing the same thing that I'm usually doing.
Like my eyes are still working pretty much normally.
I think I, and this is the way that I usually get knowledge about the world.
This is a pretty reliable process for me
and learning from it, right?
And representing the world in this way. So when we talk about hallucinations, this suggests that ChatGPT and other similar things, they're going through this process that is usually quite good at representing the world, and then, oh, it's made a mistake this one time.
But actually, no, it's bullshitting the whole time, and, like, sometimes it gets things right. Yeah.
Just like a politician. Imagine a politician that, uh, bullshits all the time, if you could possibly imagine it.
Sometimes they might just get some things true.
And we should still call them bullshitting because that's what they're doing.
And this is what ChatGPT is doing every time it produces an output.
So this is why we think bullshit is a better way of thinking about this.
Or one of the reasons why we think bullshit is a better way of thinking about this.
I also kind of think that some of the ways we talk about ChatGPT,
even when it makes mistakes, lend themselves to overhyping its ability or overestimating its abilities.
And talking about it as hallucinating is one of these, because
when you say that it's hallucinating, as Joe pointed out, you're giving the idea that it's representing the world in some way and then telling you what the content is.
And it has perception.
Yeah, exactly.
Like it has perceived something and like, oh no, it's taken some computer acid and now it's hallucinating like imaginary things.
And yeah, just as you say.
And that's not what it's doing.
And so when the kind of people who are trying to promote these
things as products talk about the AI hallucination problem, they're kind of selling it as a product that's representing the world, usually checking things, and occasionally makes mistakes.
And if the mistakes were, like Joe said, a misfiring of a normally reliable process, or, you know, something that normally represents going wrong in some way, that would lend itself to certain solutions.
And it would make you think there's an underlying reliable product here, which is exactly what somebody who's making a product to go on the market will want you to think, right?
But if that's not what it's doing, in a certain sense, they're misrepresenting what it's doing, even when it gets things right.
And
that's bad for all of us who are going to be using these systems, especially since, you know, most people don't know how this works.
They're just understanding the product as it's described to them using these kind of metaphors.
So the way the metaphor describes it is going to really influence how they think about it and how they use it.
Yeah, just to sort of cap that off, if I can, one of the responses to some comments has been to sort of say of us, look, you whinge about people anthropomorphizing chat GPT, but look, if you call it a bullshitter, you're doing exactly the same thing.
And I mean, there might be some extent to which it's just really hard not to anthropomorphize it.
I don't know why I picked a word that I can barely say.
Like, we've been doing it constantly throughout this discussion, right?
When we were talking through the kind of through the paper, we kept talking about chat GPT as if it had intention, right?
As if it was thinking about anything.
That might be another reason to call it bullshit.
We go, look, if we have to treat it as if it's doing something like what we do, it's not hallucinating.
It's not lying.
It's not confabulating.
It's bullshitting.
And if we have to treat it as if it's behaving in some kind of human-like way, here's the appropriate human-like behavior to describe.
I also think the language in this case, and one of the reasons they probably really like the large language model concept,
language gives life to things.
When we describe the processes through which we interact with the world and interact with living beings, even cats, dogs, even we anthropomorphize living things, but also when we communicate with something, language is life.
And so it probably works out really fucking well for them.
Sam Altman was saying a couple of weeks back, maybe a month or two, he was saying, oh yeah, AI is not a creature.
It was something he said.
And it was just so obvious what he wanted people to do was say, but what if it was?
Or are people saying this is the creature?
And it almost feels like just part of the con.
Yeah.
Yeah, yeah.
It fully is.
I hadn't thought about that as a reason for them to go for large language models as a way of kind of, I don't know, being the gateway into
more investment in AI.
Fake consciousness.
Yeah, yeah.
But I had thought about like...
how this might have been caused by just like deep misunderstandings of the Turing test.
Right, go ahead.
No, I want to hear this one.
Yeah, yeah.
So that, like, the Turing test, I think this is closer to what Turing was thinking, but the Turing test is a way of getting evidence that something is conscious, right?
So,
you know, I'm not in your head, so I can't feel your feelings or think your thoughts directly, right?
I have to judge whether you're conscious based on how you interact with me.
And the way I do it is by listening to what you say, right?
And talking to you.
And
Turing sort of was asked, you know,
how would you know if a computer was conscious?
So, you know, we think that our brains are doing something similar to what computers do.
That's a reason to think that maybe computers eventually could have thoughts like ours.
Right.
And some of us think that.
I think it's possible.
I think it's possible.
Great.
Yeah.
I didn't know if they thought it was possible because not everybody thinks it's possible.
It's possible.
I just don't know how.
Yeah, exactly.
So Turing was kind of, you know, thinking like, how would we know?
And one way we would know the obvious way is to do the same thing you do to humans, talk to it and see how it responds.
And that's actually pretty good evidence if you don't have the ability to look deeper.
But it's not constitutive of being conscious.
It's not what makes something conscious or determines whether they're conscious or in any way like grounds their consciousness, right?
Their ability to talk to you is just evidence.
It's just one signal you can get.
And that's the way to think of the Turing test.
So as a result of people thinking in a kind of behaviorist way, thinking, ah, passing the Turing test is just all it is to be a thinking thing.
There have been, at least since the 90s, attempts to design chatbots that can beat the Turing test, right?
And popularizations of these attempts and run-throughs of the Turing test that talk as if, oh, if a computer finally beats the Turing test, I should say what the Turing test is, right?
Yeah.
The way Turing suggested the test works is you have a computer and a person
both chatting in some way with a judge, and the judge is also a person.
Right.
And if the judge can't tell which of the people he's chatting with is the human, then the computer's won the Turing test, because it's indistinguishable from a human, right?
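As a sketch of just the protocol Mike describes, and nothing more, here is the imitation game reduced to a few lines of Python. The judge, the interlocutors, and the round count are invented stand-ins; the point is that the test returns a behavioral verdict, a piece of evidence, not a fact about consciousness.

```python
import random

def turing_test(judge_ask, judge_pick, human, machine, rounds=3):
    """The imitation game as a bare protocol.

    All four arguments are callables standing in for people or programs.
    Returns True if the judge fails to identify the machine - behavioral
    evidence about the machine, not proof of a mind.
    """
    slots = {"A": human, "B": machine}
    if random.random() < 0.5:                      # hide who is who
        slots = {"A": machine, "B": human}
    transcripts = {label: [] for label in slots}
    for _ in range(rounds):
        for label, speaker in slots.items():
            q = judge_ask(label, transcripts[label])
            transcripts[label].append((q, speaker(q)))
    guess = judge_pick(transcripts)                # judge names "A" or "B"
    return slots[guess] is not machine             # True: machine "passed"

# Trivial stand-ins, purely so the sketch runs end to end.
ask = lambda label, transcript: "How do you feel today?"
pick = lambda transcripts: random.choice(["A", "B"])   # a coin-flip judge
print(turing_test(ask, pick, human=lambda q: "Fine, thanks.",
                  machine=lambda q: "Fine, thanks."))
```

Notice that nothing in the return value depends on what is inside the machine, which is exactly the limitation discussed next.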
So people have taken this, and it's been popularized as a way of determining sort of full stop whether something's conscious.
But it's just a piece of evidence, and we have a lot more evidence.
Like we know a lot more than Turing did about how the the internal functioning of a mind works.
Functionally, what it's doing, how representation works, how experience and belief works, how they are connected to action, and how they're connected to speech and thought.
And
once you know all that stuff, you have a lot of other avenues to get more evidence about whether the thing is conscious.
And whether it passes the Turing test is just like a drop in the bucket compared to these, especially if you know what its internal functioning is.
The other notorious problem with the Turing test, and I think to be fair, Turing did mention this, if not in the original, then kind of later on.
One problem with the Turing test is that it's like the Voight-Kampff test in Do Androids Dream of Electric Sheep?, right?
Plenty of humans would fail the Turing test.
Yeah, it is a piece of evidence, but it was never, as Mike says, it wasn't supposed to be constitutive.
It wasn't like, if you can do this thing, you're conscious kind of full stops.
It was supposed to be, here is a thing that might indicate the being you're talking to is conscious.
Happily, as Mike says, we've got loads and loads of other evidence.
So these guys have made a machine that's just designed to do one thing, and that's pass the Turing test.
I can give you one more annoying example.
Are you familiar with François Chollet, the Abstraction and Reasoning Corpus?
So this is going to make you laugh.
So he's a Google engineer, and he created this thing called the ARC, the Abstraction and Reasoning Corpus, to test whether a machine was intelligent.
And someone created a model that could beat it.
And then he immediately went, okay, you can't just train the model on the answers to the test. This is why people, well, I say people, a fairly small subset of weird nerds, but this is why a small subset of weird nerds have been, for the last 20-odd years, emphasizing artificial general intelligence, right? And the kind of, what we'll call it when something really is a thinking being, is when it's not specialized to do one and only one task, but rather when it's, you know, capable of applying reasoning to multiple kinds of different and disanalogous cases. On the one hand, it does seem a little bit like the guy flipping the table and going, oh, for fuck's sake, no, you've won, I'm changing the rules.
But on the other, I think he's got a point, right?
Yeah, if you're training a thing to do very specific things,
like, you know, activate certain shibboleths, then unless you're some kind of mad, hard behaviorist, then yeah, that doesn't demonstrate intelligence.
It is one thing that might indicate intelligence.
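Chollet's objection can be shown in miniature with a made-up task in the spirit of ARC (the grids and the rule here are invented, and far simpler than the real benchmark): a solver that has memorized the training answers fails a held-out grid, while one that has inferred the rule generalizes.

```python
# A made-up task in the spirit of ARC: infer the transformation from
# example grids, then apply it to a new one. The rule (mirror each row)
# is invented for illustration.
train = [
    ([[1, 0], [2, 3]], [[0, 1], [3, 2]]),
    ([[4, 5], [6, 0]], [[5, 4], [0, 6]]),
]
test_input, test_output = [[7, 8], [9, 1]], [[8, 7], [1, 9]]

def memorizer(grid, seen=dict((tuple(map(tuple, i)), o) for i, o in train)):
    """'Training on the answers': perfect on seen inputs, lost otherwise."""
    return seen.get(tuple(map(tuple, grid)))

def rule_inferrer(grid):
    """A solver that has abstracted the rule: mirror every row."""
    return [list(reversed(row)) for row in grid]

print(memorizer(test_input) == test_output)      # False - returns None
print(rule_inferrer(test_input) == test_output)  # True  - generalizes
```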
It's the same problem with ChatGPT.
It's built to resemble intelligence and resemble consciousness and resemble these things, but it isn't.
It's almost like it's meaningless on a very high-end philosophical level.
I find the whole generative AI thing deeply nihilistic.
I do too.
I mean, one thing that connects to this is how bad it is at reasoning.
And this is kind of good for us, especially in philosophy, because our students, when they use it to write papers,
the papers have to have arguments.
And ChatGPT is very bad at doing reasoning if it has to be sort of an extended argument or a proof or something like that.
It's very bad at it.
I think also that if there's one thing kind of we learned from ChatGPT,
it's that this is not the way to get to artificial general intelligence.
I was gonna ask, do you think that this is getting to that?
No, partially because it's so subject specific, right?
It's trained to do one task and it takes quite a lot of training to get it to do that task well.
It's bad at many of the other tasks.
that we think are connected with intelligence.
It's bad at logical and mathematical reasoning.
I understand that OpenAI is trying to fix that.
Sometimes it sounds like they want to fix it by just connecting it to a database or a program that can do that for it.
But either way, what you have with these kinds of big Bayes net models is something that is
really good at whatever you train it to do, but not going to be good at anything else.
You know, it's fed a lot of data on one thing.
It finds patterns in that.
It finds regularities in that.
it represents those.
The more data you feed it, the better it'll be at that.
But it's not going to have this kind of general ability.
And it's not going to grow it out of learning how to speak English.
Have you heard the Terry Pratchett quote? It's quite early on, he's talking about Hex, the kind of steampunk computer they make at Unseen University, and it just has this offhand line.
A computer program is exactly like a philosophy professor.
Unless you ask it the question in exactly the right way, it will delight in giving you a perfectly accurate, completely unhelpful answer.
So, if you subtract the justified shot at philosophy lecturers, it's there.
Basically, what intelligent things do is go, you can't have meant that, you must have meant this.
ChatGPT just goes,
I will take the question as read.
I mean, of course, it doesn't have an I, there is nothing like that there, I've anthropomorphized it again, but it's the same thing, and it's trained to do incredibly specific things, and you get the same problems as any program, like garbage in, garbage out.
So have you found a lot of students using ChatGPT?
Because this is a hard problem to quantify.
Is it all the time?
I mean, it's a lot.
I wouldn't say all the time.
Jay, you said you thought that there were more Cs and Ds this semester?
Yeah, so I think there are quite a few and sometimes it is difficult for us to prove.
And if we can't prove it, then at our university, then, well, shucks, they kind of get away with it.
We can interview them, but unless we have the proof that you could take before, like, a court, then we're not able to really nail them for it, which is a bit of a shame.
So it's suspicion.
Yes, we suspect that a lot of them are.
I'm very... I'm 100% on some of them.
I just know.
What are the clues?
I want to hear from all of you on this one.
Like, what are the signs?
It's the uncanny valley.
Like, I will not shut up about this, right?
So, you know,
like you, of course, and I imagine a lot of your listeners as well, the uncanny valley thing in respect of humans is that there's a kind of a point up to which humans seem to trust artificial things more the more they look like a human.
So like we'll trust a robot if it's kind of bipedal or if it looks like a dog, but after a point, we really rapidly stop trusting it.
So basically, when it starts looking like Data, some deep buried lizard brain goes, danger, danger, this thing's trying to fuck with you somehow.
Yeah, right.
And ChatGPT writes exactly like that.
It writes almost, but not entirely, like a thing that is trying to convince you that it's human.
I mean, the dead giveaway for me is that it never makes any spelling mistakes, but it can't format a paragraph to save its life.
Normally, you would expect someone who didn't know how to kind of order their sentences to misspell the occasional word.
Chat GPT spells everything correctly and doesn't know what like subject-object agreement is.
It's like it's hacked.
So can you just define that, please?
Subject-object?
Subject-object agreement: the beer that I drink, right?
Not the drink that I bear, right?
Right, because it's interesting, it almost feels like there is just an entire way of processing and delivering information as a human being that we do not understand.
There is something missing.
I mean, for me, like, it's very good at summarizing, but when it responds, it responds in really trite and repetitive ways.
So, like, you'll get a paper that summarizes a bunch of literature at length very effectively and then responds by saying,
well, you know, this person said this, that is doubtful.
They should say more to support that, you know, which is basically saying nothing, right?
And that's pretty common.
It also does lists.
It formats things in kind of like a list.
And even if the students don't want it to, and try to make it look less like a list, the paper still reads like a list.
Oh, because someone has asked, give me a few thoughts on X.
Yeah.
Yeah.
Yeah.
That's very depressing.
What about you?
What's up?
Things like the introductions you get.
You'll get, this is an interesting and complex question that philosophers have asked through the ages.
And this is one of the things we shout at students not to write from first year.
And then you'll get this garbage right back at you.
And then at the end, it'll be...
oh, overall, this is a complicated and difficult question, um, with many nuances, it's a world of contrasts. One of the things we tell them: don't ever fucking do this.
This is terrible.
And right back at you, in perfect English, the sort of English you'd expect a really good student might have written, but clearly they can't be a good student, because otherwise they'd have listened to our fucking instructions.
I also, I had some papers that I suspected were ChatGPT this year, but they were already failing.
So I didn't think...
Yeah, I didn't think it was worth it to pursue them, you know, as a plagiarism or ChatGPT case.
So it's never good papers then.
It's never like an eight, like a first.
No, absolutely not.
So I think that part of what goes on is you can get a passing grade with a ChatGPT paper sometimes in the first couple years, when the papers are shorter and we're not expecting as much.
But then when you move into the what we call honors level here, which is like upper level classes in the U.S., like third and fourth year.
I got a first at Aberystwyth.
I know.
Yeah, exactly.
Great.
Well done.
Well done.
You would not have gotten it with ChatGPT because you get dropped in these classes where we expect you to have gained writing skills, minimal ones, in your first two years.
And then we're going to build on that and have you do more complicated stuff.
And ChatGPT doesn't build on that, right?
It just stays where it was.
So you go from writing a kind of C grade passing paper to E or F grade paper.
And it's also more obvious because the papers are longer, and ChatGPT can write long text, but it gets very repetitive, and noticeably repetitive, right?
And so you're kind of lost.
Like you haven't done the work of figuring out how to write on your own.
And the tool that you've been using is not up to the task that you're now presented with.
And so I think I have seen a few papers that I was suspicious of, but the papers that I was certain of were ones that were like senior theses.
Very clearly, the person just had no way of writing coherently.
That's insane at that stage.
Yeah.
I mean, it's fun because there's, I mean, one of the things that we get told about is like, oh, students have got to learn how to use, you know, AI and large language models, plagiarism machines, responsibly and in a kind of positive way.
Well, if they're using them in a way that means they don't learn how to write, then it's not positive, is it?
Like, yeah, it's fucking hard to write a good essay.
Yes, it is fucking hard to write.
That's why we practice.
That's why we have editors.
That's why we do this collaboratively.
And if you're using this as a sort of, oh, I don't know how to write, well, tough shit.
You're never going to know how to write.
That doesn't seem to me to be a positive use of any of this.
Well, that's the thing.
Writing is also reading.
The consumption of information and then bouncing the ideas off of your brain, allegedly.
Yeah,
I worry about, like, because ChatGPT is good at summarizing.
So I worry that one of the uses people will think, ah, this is a pretty good use for it, is summarizing the paper that they're supposed to read.
and it will do that effectively enough for them to discuss it in class, if they're willing to, yeah.
Right.
but they're not going to pick up a lot of the nuances and a lot of the kind of, like, stylistic ways of presenting ideas that you get when you actually do the reading. And it's so frustrating as well, because, like, for this, for example, I've just gone back to printing things off.
I don't read PDFs anymore because I feel like you do need to focus.
There's some evidence that reading physical copies makes you engage more.
Sorry, I'm very old-fashioned, I guess.
But no, it's true, though. And also, reading that on the PDF, I wouldn't have given it as much attention.
But also, going through this paper, you could see what you were doing.
Like you could see that you were lining up: here are the qualities that we use to judge bullshit.
But also, summarizing a paper does not actually give you the argument.
It gives you an answer.
So what do you actually want students to do instead?
Because I don't think there's any reason to use ChatGPT for these reasons.
Like it doesn't seem to do anything that's useful for them.
I don't have, no, I don't actually have any use for ChatGPT that I can put to my students and say, here's what I think you should do with it.
We are like kind of developing strategies for keeping them from using it.
So like building directly on what you're saying, like in my class next year, I'm going to have the students do regular assignments, which are argument summaries and not paper summaries.
So the idea is they have to read the paper, find an argument, and tell me what are the premises, what's the conclusion.
And that's something that ChatGPT is not good at, right?
But it's also something that will give them critical reading skills, which is what I want to do, right?
So yeah, I think that I've mostly been thinking about ways to keep them from relying on it, because I think that often if they rely on it,
they'll
put themselves in a worse position.
Yeah, when it comes to future work.
They won't develop the skills that they're going to need.
And the skills that we tell them and their parents, they're going to get with their college degree, right?
It almost feels like we need more deliberacy in university education because I was not taught to write.
I just did a lot of it until I got good enough grades.
And Daniel Chandler, great mentor, but I've had tons of them.
And it almost feels like we need classes where it's like, okay, no computer for this one.
I'm going to print this paper out and you're going to underline the things that are important and talk to me about them.
Almost feels like we need to rebuild this because, yes, we shouldn't be using ChatGPT to half-ass our essays, but at the same time, human beings are lazy.
Yeah.
I mean, for me, I also prefer to read off the computer, but I often read PDFs because I'm terrible at keeping files, right?
Physical, like, you know, I'm not going to keep a giant file drawer with all the papers that I've read and then written my liner notes in.
You guys can see in my office, they're just piled around.
Like you can't see this end.
That's academia.
Yeah, but I just have piles of paper with empty coffee mugs everywhere that the camera is not facing.
But it's the terrible system.
So at least on my computer, if I'm like, oh, I read that paper like a year ago, what did I think?
I can click on it and see my own notes.
And I do think that there's something to keeping those records and kind of actively reading in that way.
I don't know how I ended this without telling you how to make students do that.
But no, you started with the correct answer, which is don't use ChatGPT.
Yeah, yeah.
I actually,
I've got a certain amount of sympathy with like, just keep writing till you get good at it.
But I realize as a lecturer, that can't be my official position.
And I certainly think it's the case that Glasgow has got better over the last few years about going, oh, actually, we do need to give you some kind of structuring and some buttressing on here's how to write academically, here's how to kind of do research.
And I think that's all to the good.
It's worth saying this started happening well before ChatGPT started pissing all over our doorstep, so they don't get to claim that as being a benefit. Um, there was the whole Wikipedia panic when I was in school. Yeah, yeah, it's the thing about Wikipedia, right? It's like, I used to say this to my students: Wikipedia, one of the best resources. It's absolutely fine as a starting point for research. Yeah, absolutely no problem with it whatsoever. But if you're turning in an honours-level essay, I want you to go and read the things it's referencing. Yeah, that's right. And, um, yeah, I think these things are often great as sources. My worry about ChatGPT is that it's not great as a source.
It's just like we've been saying, it often gets things wrong.
And it often, it'll make up sources, whereas Wikipedia will never do that.
I don't think.
There are some famous hoaxes, but they get edited out fast.
They get caught, yeah.
Joe, have you got any positive things to say about ChatGPT?
Positive things to say.
Big fan.
So I know some people who have used these kinds of things productively, not in ways that our students would, but I know some mathematicians who have been using it to do sort of informal proofs and things like that.
And it does still bullshit, and it bullshits very convincingly, which makes it very difficult to use for this kind of purpose.
But it can do some interesting and cool things, you know, that I think some people in that sort of field have found useful.
And also, we've mentioned this before: like, if you've got a bibliography in one style and you want ChatGPT to put it into a different one, it's good for that, and it's good for things like code and data processing, doing certain things.
And I also think, I don't know, I'm not sure how I feel about what I'm about to say, but yeah, I'm going for it. It is a somewhat positive thing, maybe, for ChatGPT, which is that we often have students who have really interesting ideas and well-thought-out arguments, but for whom English isn't their first language, and the actual writing is kind of rough, and you have to push through reading it to get to the good idea, which is often really there and quite, you know, creative and insightful. So I do wonder if there's a way to use it so it just smooths off the edges of this kind of thing. But I worry that if you tell students to do that, they'll just... whereas if they can develop the language skills, they often get really good by the end. Yeah, what are you going to say, James?
Yeah, you can see me getting agitated. I think Mike's correct that this is a kind of possible use, but I think, and this is why I'm getting visibly agitated here, the fact that students either need to, or feel they need to, use this speaks to a deeper issue, right?
To a social issue, to a political issue, to an issue about how universities work.
If a student is having problems with English, then there's a number of like explanations or a number of kind of responses, right?
One response is that, like, Glasgow is an English-teaching university.
If someone's English isn't good enough to be taking a degree, then plausibly they shouldn't have been let in.
And why have they been let in?
Well, because of money.
Or alternatively, if someone's having problems with English for like whatever reason at all, there should be support here.
There should be kind of tutors.
There should be people who can help with English.
But again, that will cost university money.
So of course that doesn't happen.
It doesn't happen anywhere.
It doesn't happen anywhere near the extent it would have to happen in order for this to be a general policy.
Yeah, I think it could be better.
But I do think that universities often have, like, a writing center or a tutoring center that you can send students to.
Oh, yeah, I mean, but they don't have the sort of spread or the staff that would be needed for this to be used instead of ChatGPT to sand the edges off.
Yeah, I think my worry especially would be... well, this is my first year here at Glasgow, but I think they probably have a good writing center.
Universities I've been in in the past, I felt very confident sending students to the writing center when they have these problems.
But I think James is completely right that we don't want the universities to see this as a way to get rid of the writing center.
And that's 100% a risk given the financial problems that universities are facing.
And maybe they're already not using the writing center as much as we'd like, given the quality of papers we sometimes get.
But they're often good.
I think another thing, as far as this is like a social problem, is that when grading, I myself try to grade in terms of the ideas and argument, because this is philosophy, and not the quality of the English, right? But not everybody does that. So I kind of think that another part of this is figuring out how we want to evaluate the students and what we want to privilege in that evaluation.
Yeah, sure. So then again, that becomes a problem about what people are checking for. Not, let's take this arse-backwards approach to marking, which is like, how fancy is your English? Ah, fancy English is good English, have an A. Rather, we should be checking for different things, right? So again, the blame lies differently in that case, but it still becomes a question that's not solved technologically.
Yeah, it almost feels like large language models are taking advantage of a certain kind of organizational failure.
Oh, yes, what an idea.
Crazy that the tech industry is manipulating a part of society that was weak.
I have a kind of related tangent here, which is: what are the use cases that OpenAI was expecting but didn't want to emphasize? Because for everybody in universities, as soon as this came out, the first thought was students are going to use this to cheat. And certainly, like, the people at OpenAI went to college, right?
And that's what I hear about them.
So they must know.
Well, Sam Altman dropped out.
Oh, right.
Yeah, I'm sure he really understands the value of a secondary education.
He's like, I've got to write all these fucking essays.
Anyway, maybe he was thinking, I would have loved to have a computer write my essays.
I'll devote my life to it.
But I mean, I'm sure that they like recognize these bad use cases, right?
But they're doing nothing to mitigate them as far as I can see.
And like another one that's very related is like,
you know, I'm sure you've heard of this, Ed, phishing, right?
A lot of, you know, corporations get attacked and get hacked not by someone cleverly figuring out a back door to their system, but by somebody doing social engineering.
Yeah, asking for somebody else's password.
And one of the biggest barriers to that is that a lot of the people who are engaging in phishing aren't from the same country as the company they're targeting, right?
So they're not able to write a convincing email or make a phone call that sounds like that person's supervisor.
But with a tool like this, you could 100% write that email, right?
It's going to make it a lot easier for these kinds of illicit schemes to work.
There has been a marked increase, according to CNBC, in what you just brought up: a 1,265% increase in malicious phishing emails since the launch of ChatGPT.
Great stuff.
I mean, if I could have thought of that, imagine what a criminal could do.
Right.
But also, weren't the people at OpenAI thinking about that?
Like, fucking.
They don't care.
Yeah, yeah.
We've all seen Jurassic Park, right?
Yeah, yeah.
They were so busy thinking about what they could do, they never thought about whether they should.
Yeah, this is the kind of problem with the move fast and break things mentality.
Like, there are obvious... I mean, I think I might be the only person here who was raised in the U.S., but we had Future Problem Solvers, where you, you know, think about a future problem and what bad consequences there could be of some technology and how to solve them, usually through social cues.
If I could do that in fifth grade,
I would expect these people to have thought through some of the bad consequences of the technology they're putting out.
And, you know, some of those are cheating on tests, and they don't seem to have worried about that.
And another one is phishing.
They don't seem to have worried about that, you know.
And biases in algorithms, right?
So this, again, you know, comes as no surprise to you, Edward.
It turns out, with a lot of the facial recognition systems, they were incredibly racist. Going back to Microsoft's Kinect, they could not see Black people.
Yeah, yeah, yeah.
And kind of CCTV stuff that basically, unless it was presented with a blindingly white Caucasian, went, I don't know, right?
But, like, the sort of stuff where these large language models are trained on certain sets of data, and they're trained on certain assumptions, and they're shitting shit out, right? And particularly if people think that it's actually doing any kind of thinking, and if they kind of cargo-cult it, we again get a kind of social problem multiplied by technology, feeding back into a social problem.
And, sorry, these guys have heard me whinge about it so much in the last year or so, but I'm profoundly skeptical of technology's ability to solve anything unless we know exactly the respect in which we want to solve it and how that technology is going to be applied.
You know, like, sure, experiment with bringing back dinosaurs, but don't tell me that it's going to save the healthcare system unless you can demonstrate to me, step by step, how that big old T-rex running around on Isla Nublar is going to save anything. They just try and blind people.
Actually, bringing back dinosaurs would be just good in itself. That would be great.
Yeah, all right, this isn't the best example, but I already had... I had Jeff Goldblum in my head and I had to go to the Jurassic Park example.
Fellas, this has been such a pleasure.
Oh, right, are we out of time? All right, don't put that in the recording.
Nah, it's fine, we won't edit it in post.
Will you give us your names?
I'm Mike Hicks, but my papers are written by Michael Townsen Hicks, and I'm a lecturer at the University of Glasgow.
My website is townsenhicks.com.
It'll be in the podcast profile.
Don't you worry.
All right, just plugging my paper.
Plug it.
My name is Joe Slater.
I'm a university lecturer in moral and political philosophy at Glasgow.
I'm James Humphreys.
I'm a lecturer in political theory at the University of Glasgow.
And even if I wanted to, I couldn't give you my website because I don't have one.
Fuck you, Mike.
Everyone, you've been listening to Better Offline.
Thank you so much for listening, everyone.
Guys, thank you for joining me.
Thanks for having us.
Thank you for listening to Better Offline.
The editor and composer of the Better Offline theme song is Matt Osowski.
You can check out more of his music and audio projects at mattosowski.com.
M-A-T-T-O-S-O-W-S-K-I dot com.
You can email me at ez@betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter.
I also really recommend you go to chat.wheresyoured.at to visit the Discord and go to r/betteroffline to check out our Reddit.
Thank you so much for listening.
Better Offline is a production of CoolZone Media.
For more from CoolZone Media, visit our website coolzonemedia.com or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
This is an iHeart podcast.