Predicting everything
The Royal Society recently announced the shortlist for their annual Science Book Prize – and nominated is science writer and journalist Tom Chivers, author of the book Everything is Predictable. He tells us how statistics impact every aspect of our lives, and joins Marnie as a studio guest throughout the show.
A drug – lecanemab – that can slow the progression of Alzheimer’s disease has recently been approved for use in the UK, but the healthcare regulator NICE has said that it won’t be available on the NHS. But what is behind this decision, and what makes creating an Alzheimer’s drug so difficult? Professor Tara Spires-Jones from the University of Edinburgh talks us through the science.
And could ‘smart paint’ supersize our fruit and veg? Reporter Roland Pease heads over to the experimental greenhouses of Cranfield University’s crop science unit to see if the technology works.
The Paralympic Games are now underway in Paris, with athletes competing across 22 different sports. But as competitors have a range of different impairments, how is it ensured that there's a level playing field? Professor Sean Tweedy from the University of Queensland calls in from Paris to explain how athletes are sorted into categories for competition.
Presenter: Marnie Chesterton
Producers: Sophie Ormiston and Ella Hubber
Editor: Martin Smith
Production Co-ordinator: Andrew Lewis
Transcript
This is BBC Inside Science, first broadcast on the 29th of August, 2024.
I'm Marnie Chesterton.
Hello, coming up in the next half hour, everything is predictable, which means you might have already guessed that we're talking about why it's so hard to find Alzheimer's drugs, how to scientifically judge a Paralympian's level of disability, and a new light tweaking paint that boosts your greenhouse growth.
And I'm joined in the studio this week by a guest, science writer and journalist Tom Chivers.
Hello.
Hello, hello Marnie, how are you?
Good.
You are the author of the book Everything is Predictable, which was recently shortlisted for the Royal Society's annual science book prize.
Did you predict that?
Absolutely not.
No.
Partly I was incredibly pleased out of professional spite because my podcast co-host Stuart Ritchie also got nominated a few years ago and he lost.
So I'm just hoping to, you know, go one further.
But yeah, also it's just it's a lovely thing.
It's a lovely thing.
It's a recognition of a book that I'm...
I'm quite proud of.
So, you know, hopefully it does well.
Yeah.
Underrated professional spite as a motivator.
Well, maybe I'll tell you, but I started writing the book out of spite as well.
So the book's about Bayes' theorem.
And I started writing it because I once wrote a piece about Bayes' theorem which used the word "obscure" in it.
And the internet got furious at me for three days.
You know: "It's not obscure! I'm a professor of biostatistics and I use Bayes all the time."
So I immediately pitched a book about Bayes to sort of shut them all up.
Okay.
So you're appealing to the stats Bayes community with this book?
The idea of the book is to show how powerful a tool Bayes' theorem is for thinking about the world.
So Bayes' theorem is basically the maths of prediction.
It's the maths of how you take information in and use it to make new predictions about the world, and it underlies everything.
So what was Bayes' theorem?
What is this?
What is Bayes' theorem?
What is this mathematical tool?
It's a one-line equation developed by a guy, Thomas Bayes, Reverend Thomas Bayes, in the 18th century.
And it tells you how likely something is to be true.
I could probably get the actual equation right if I really tried.
It's fine.
Let's not, shall we?
Equations on the radio.
Equations on the radio.
Even Radio 4.
Yeah, too much.
Too much.
Okay, fair enough.
It's sort of the probability that you might have done at school.
So when I roll three dice, how likely am I to see three sixes, you know, that sort of stuff?
Three sixes on three fair dice.
But that's telling you how likely you are to see some event given some hypothesis, like the dice being fair.
But what I think scientists and statisticians want to do with maths is say: I've seen this event, I've got this new data, and I want to say how likely it is that my hypothesis is true. What Bayes' theorem does is tell you how you do that: it tells you how to use this new information to work out how likely your predictions, your best guesses, are. And his big insight was that you have to use the information you already had, so you're adding it to your priors. Does that make sense?
That seems remarkably common sense. So rather than saying, I don't know, how likely is it to rain at the moment, you look at the annual rainfall and you use that plus maybe the weather prediction for that day, and that gives you a good probability.
Well, the classic example is sort of medical testing, right?
If you do a test for some disease, some condition, whatever it is, and you have it, the test will 99% of the time correctly tell you that you have it.
And if you don't have it, it will 99% of the time correctly tell you that you don't have it.
You do the test, you get a positive result.
How likely are you to have it?
Most people, I think, would instinctively say 99% then.
But what Bayes' theorem tells us is the correct answer is you don't know.
You have no idea how likely you are to have it because
you haven't taken into account how likely you were in the first place.
I've been trying to work out a way of making this intuitive for people.
The best I've come up with so far is imagine I did that test, me, Tom Chivers, and I get a positive result.
And yeah, how likely am I to have the condition?
And then you might say 99%, but what if I then say, ah, but it was a pregnancy test?
You might say, okay, perhaps it is more likely that Tom Chivers, a 43-year-old man, was the one in a hundred where the test got it wrong than that he is actually pregnant.
So that's all it is.
You're right.
It's totally common sense.
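Tom's test example can be put into numbers. Here's a minimal sketch, with assumed figures (a test that's 99% accurate both ways, and a couple of illustrative base rates):

```python
def posterior(prior, sensitivity=0.99, specificity=0.99):
    """Bayes' theorem: P(have the condition | positive test result).

    prior: how likely you were to have the condition before testing.
    """
    true_pos = sensitivity * prior               # have it, test says so
    false_pos = (1 - specificity) * (1 - prior)  # don't have it, test is wrong
    return true_pos / (true_pos + false_pos)

# If you were 50/50 beforehand, a positive result really does mean ~99%.
print(posterior(0.5))    # 0.99

# But if only 1 in 1,000 people have the condition to begin with,
# a positive result means far less than 99%.
print(posterior(0.001))  # ~0.09, i.e. roughly a 9% chance
```

Most of the positives come from the 1% error rate applied to the huge majority who never had the condition, which is exactly the "you have to use your prior" point.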
It feels common sense, I think, because we are Bayesian animals, right?
Yes, yes, absolutely.
So
what our brain is constantly doing is taking in messy, noisy information from the outside world and using it to build a model of the world and help us make decisions and predictions about the world.
So it feels like we're seeing the world through a sort of clear window,
take vision.
Actually, we know that's not true.
We're getting a little sort of spots of light coming through our eyeballs.
Our retinas are curved and lumpy.
They're constantly moving around.
So what our brains are actually doing is making predictions of the world and then updating those predictions with new information.
One of the cases I'm making in the book is that you can describe almost all intelligence as basically prediction.
How good you are at predicting the world is pretty synonymous with how well you understand it, I would say.
Now, Tom, you're staying with me for the whole programme because you've written about a couple of the topics that we're talking about today.
We also have a podcast version of Inside Science where we're allowed to talk more about stats.
Can you stick around for that?
I would love to.
I would love to.
I could bore on for ages about Bayes' theorem.
Love to.
Excellent, excellent.
Now, Alzheimer's, it's a devastating form of dementia which puts a huge emotional cost on sufferers and their loved ones.
And it also has a financial cost
both for carers and the wider NHS.
But potential drugs are emerging and one, lecanemab, which can slow the progress of Alzheimer's disease, has recently been approved for use in the UK.
But as you will have heard on the news, the UK healthcare regulator NICE has said it won't be available on the NHS.
Here on Inside Science, we wanted to check in with the state of research into Alzheimer's drugs.
Why has it taken so long for us to get to this stage?
What makes creating an Alzheimer's drug so difficult?
Joining me to talk about this is Professor Tara Spires-Jones, who's been researching Alzheimer's disease for over 20 years.
Welcome, Tara.
Hello.
So,
what's the state of Alzheimer's treatment at the moment?
Is lecanemab the first drug to slow progression of the disease?
Yes, lecanemab is the first drug to slow progression of disease that's approved in the UK.
There are another couple of drugs that are almost exactly the same that are approved in other parts of the world.
But it has been a real turning point in the fight against Alzheimer's disease, because these drugs, like lecanemab, are antibodies, and they actually remove one of the toxic proteins that accumulates in the brain in Alzheimer's, and they do significantly, although modestly, slow disease progression.
And nothing we had before could do that.
Do we know how these drugs are working?
These drugs work by removing this toxic sticky amyloid from the brain.
So you've probably heard of plaques and tangles before, but what happens in the course of Alzheimer's disease is you get two different proteins that abnormally clump in the brain.
One's called amyloid beta or amyloid, and the other's called tau.
And these drugs are antibodies that actually are directed to amyloid beta, and they remove the amyloid from the brain.
So, people who have Alzheimer's disease, or are even in the very early stages, will have a large amount of amyloid in their brains.
They get infusions of these drugs, and that actually pulls this toxic amyloid directly out of the brain.
And are these amyloid proteins a symptom of Alzheimer's, or are they the cause?
Well, how long do we have, Marnie?
There's a bit of a debate about this.
Amyloid is one of the proteins that we know is always in the brain when people have Alzheimer's disease.
There are very rare familial forms of Alzheimer's that are directly caused by changes in amyloid due to a gene change.
But for the more common sporadic form or more usual form of Alzheimer's, we know only that amyloid is a very early event, and that in animal models, it does cause things like memory decline and causes the death of synapses, the connections in the brain.
So, most of us in the field agree that amyloid is an important part of the early stages of Alzheimer's disease.
But Tom, you told me something really interesting about what pointed towards the amyloid theory, and it's about Down syndrome individuals who have an extra copy of chromosome 21.
Yes, as I understand it, the rate of dementia or Alzheimer's in people with Down syndrome is extraordinarily high, and that does seem to be something in their genes that leads to this, to the overproduction of these things.
So
that seemed like a useful indicator.
I will ask, I would love to know: the other drug that I heard made some progress was aducanumab, a couple of years ago, but there was a lot of backlash, because as I understood it, the US Food and Drug Administration's own statisticians disapproved of the approval, and it wasn't clear that it actually worked, and there were some awful side effects.
So is this one that we're approving now a clear benefit, is it?
Yeah, lecanemab did pass its phase three clinical trial. Aducanumab was controversial because it did not meet its primary endpoint in its final big clinical trial, and despite that it was approved by the FDA. And, as someone who's interested in stats you'll love this, they'd done some post hoc, non-pre-approved stats tests to say, well, in this subset of people it worked. But the bottom line across all of these drugs, and now there are three, is that they act in a very similar way, and they seem to be, at the end of the day, having very similar outcomes: they do all slow disease progression, they don't slow it much, and they do come with potentially dangerous, although rare, side effects.
There was bleeding on the brain and things for some patients, wasn't there?
Yeah, brain swelling and brain bleeding and death in a few cases that were directly attributed to the drug.
And that's in lecanemab?
Yes, in all three of these drugs.
But yes, lecanemab has these potentially dangerous but rare side effects.
And part of the issue with lecanemab is that people have to be monitored very carefully to watch for these effects.
They have to be chosen very carefully because they have to be in the early stages of disease for this drug to work.
And that's difficult in and of itself.
It requires cerebrospinal fluid tests or PET scans, which are radioactive scans.
And then once people are getting the drug, they need an injection or infusion into their blood vessels every two weeks.
And then they have to be monitored very closely for these dangerous but rare side effects.
Yeah, so it's on average five months extra that you gain.
But it sounds like most of that five months is spent in medical facilities making sure that the drug doesn't misfire and cause catastrophic results.
Yes, and that's why even though it's going to be disappointing for many people to know that a drug has been proven to be safe and effective and accepted by the regulator of that kind of level, the MHRA, it's still not going to be available on the NHS.
But it's a tough decision, right?
Because you're only getting this 30% slowing of disease progression.
You're not getting any better.
You're just getting worse more slowly.
And as you say, you're spending an awful lot of time being monitored in hospital settings and it's a risk.
So even if it was approved on the NHS, it would be something you would want to think about carefully.
Is this worth it?
And potentially there are other things out there, right, Tara?
Exactly.
So the good news about this, even though it's a mixed bag, with lecanemab working but not working very well, and being expensive, is that it's really opened the door, because now we have this proof of principle.
We can slow disease progression.
And thinking back, we've known about this disease since Alois Alzheimer in the early 1900s, and have had absolutely nothing that can change disease progression until very recently.
But now that that's in place, it gives a lot of enthusiasm to the field.
We're getting smart people joining us.
It gives enthusiasm to the companies who want to invest money and to the funders to bring us more funding.
Because we learned from things like COVID that if you throw a lot of money at a problem, you can make progress quickly.
And beyond that, we have some research coming through that's amazing.
So Tom mentioned Tau.
So, this amyloid cascade hypothesis is that amyloid happens early, and then downstream of that, you get tau pathology, and then you get the brain death.
And the accumulation of the sticky tau tangles is linked much more closely or predicts much more strongly the symptoms and the cognitive decline.
And there are lots of things coming through the pipeline to target that.
There are lots of things coming through the pipeline to target the brain inflammation that's associated with disease.
So, it's hugely promising and optimistic on a scientific front, even if lecanemab is not the cure that we'd all hoped we would be having by this time.
Tara Spires-Jones, we're going to have to leave it there.
But Tara, is there anything that our listeners can do?
I'm sure everyone's interested, to lower their risk of developing this disease.
Absolutely.
So I would like to leave people with the thought that science works, scientific research is working, and we're getting closer and closer, as you mentioned.
But in the meantime, you can do things to reduce your own risk.
You can lead a brain-healthy lifestyle.
You can get involved in dementia research.
You can support research by working with dementia charities or encouraging your elected officials to fund us.
So there are lots of things you can do.
Thanks, Tara.
Thank you.
It's BBC Inside Science with me, Marnie Chesterton, and science writer Tom Chivers is with me.
Tom?
Yes.
Do you think that we'll have effective treatments before either of us gets old enough that the Alzheimer bomb goes off in our heads?
Honestly, no.
I just,
I should be more optimistic.
And Tara made me more optimistic there.
She seemed very positive about things, but I just, I get nervous when, as Tara was talking about, it seemed like somewhat shady research practices were going on with the aducanumab stuff.
And there's so many questions to be answered.
And also, I'm pretty old.
Spring chicken.
No, no, no.
It makes sense because...
We don't understand fairly fundamental things about how the brain works.
And it feels like this is trying to fix something on an aeroplane before having built the aeroplane.
Yes, I think that's exactly it.
I mean, I think we have now got sort of clear diagnostic criteria for Alzheimer's, but I feel the brain research is still very much in its infancy.
Tara made me feel a lot more positive about it listening to all that.
But before I came in, certainly, I would have been much more sceptical.
Okay.
Well, we will talk about this more because I want to talk about that dodgy research you mentioned in the podcast.
One, it's fascinating, and two, it also links back to statistics.
Science, doesn't it?
Yes.
And radio listeners can find that on the podcast of Inside Science on BBC Sounds.
Next up, can science supersize your strawberries and tomatoes by solar boosting them?
Chemists at Cambridge and Bath Universities think they can.
Result?
A smart coating that tweaks sunlight to the plant's preference.
Reporter Roland Pease headed over to the experimental greenhouses of Cranfield University's Crop Science Unit to see if it works.
We're just remarking that some of the growth, like the leaf growth, there's so much more leaf growth in some of the coated
ones than in the uncoated.
Yeah, I think so.
Not surprising because you get more red light, it tends to tell the plant to actually grow more.
That was Neil Horheen, chair of Lambda Agri, the startup sponsoring the trials, and with him was botanist Shonee Maguaza, watching over the plants, and Petra Cameron, Professor of Energy Materials at Bath University.
Yeah, so these are our eight greenhouses
with our coating.
So you can see the ones with the coating because they look slightly opaque and they've been spray coated with our active material.
So you can see lots of quite impressive looking tomatoes in these greenhouses.
Much bigger than mine on my allotment, I have to say.
Mine are tiny.
So definitely good.
Can we have a look in?
Can we go in that one?
Yeah.
It's a tight squeeze.
Yeah, we have a tight squeeze.
Oh, there's a ripe one.
Yeah, you started type in the message.
I have to say, it looks like there have been a swarm of birds or something doing something on here.
No, not quite.
I was being unfair.
It was more like clear glue had been smeared over the outside of the greenhouse glass.
So it's a paint which has our active material in it. It's also slightly light-scattering, and that's on purpose, because the plants quite like the diffuse light that comes into the greenhouse.
I mean, a lot of gardeners will put whitewash on their glass just to reduce the intensity.
It's just the glare and the intensity, and diffuse light just means the light's scattering about everywhere, so it's not the bright, direct light that you get in summer, which can burn the plants and be quite bad for them.
But the difference is, rather than just scattering the light, you're converting
some of the useless light.
Yes, so the coating is on the outside of the greenhouse, and it's taking UV light and it's turning it into red light, and that should help these tomatoes grow better because, obviously, these plants are green, they photosynthesize, they use a lot of red light, that's what they want.
Shirley, you're in charge of the growing here, so is it working?
Yeah, it's working.
It's just that now we're still collecting data.
We haven't analysed all of the results yet, but the growth has been fantastic.
Outside once again, Petra was taking a closer look at the condition of her magic paint.
So, what's the magic in this?
Oh, we're making molecules which strongly absorb light.
So, similar to the kind of dye molecules that you'd use to dye your clothes, for example.
So, it's a very similar kind of system, but they're very specific types of molecules because we want them to be stable and last a really long time.
I mean, you're talking about clothes.
The familiar thing for me is when you go into a disco, long time since I've done that, and the UV lights are on, and your gin and tonic fluoresces.
Or your white shirt.
It's the same.
It is the same.
So it's the same kind of principle.
So your fluorescent gin and tonic has quinine in it, which is a molecule which takes UV light and re-emits it, and that's why it's glowing in the disco.
It's exactly the same principle that we have here.
We're taking UV light, but this time from the sun, the molecules are absorbing that UV light very strongly and then re-emitting it, in this case, as red light.
So a lot of the science has been behind making sure we're taking just the right part of the spectrum, taking enough of that light to make an impact on the red light intensity inside the greenhouse.
And that's quite a challenging thing to do, actually.
I mean, I have to say, when you first told me about this, I imagined I was going to step inside here
and come in, go on in.
I thought I was going to be bathed in red, but it's quite a subtle effect.
I can't see it, actually.
The plants can tell it's there, but by eye, our eyes are just not good enough.
But the plants are happy.
The plants are happy, and that's the main thing.
Well, they look very happy, but the point is to do serious science, the measuring of their growth and the quality of the crop, which is Shirley's job.
Starting this week, we're measuring photosynthetic rate for all of them.
How do you do that?
There's a piece of equipment called the LI-COR LI-6800.
You just clamp the leaves and then it gives you the whole data set: stomatal conductance, transpiration rate, photosynthetic rate.
So, you're actually going to get complete measurements of all this sort of biochemistry.
Yes, and then we're also going to the laboratory to measure the sweetness of the food.
And the flavour.
There's some really interesting evidence that, depending on what colours of light you give them, they actually taste different because you get different metabolites being formed inside the tomatoes.
And it's really interesting.
I mean, it seems a lot of change just for a small change in the spectrum.
Shirley might disagree with me here because she is the expert on the plants.
But I think there's still a lot that isn't understood about how plants take the different colours of light and then use them and develop the fruit, develop the flavour.
After these tomatoes come out, there's going to be ever-bearing strawberries going in here.
But hopefully, with our kind of light management system, we can actually really improve the flavour of some of the strawberries coming out as well.
Well, if you want a blind tasting, I'll come back for the strawberries.
I'm going to take one of each, from the coated versus the control.
I'll eat as many as you want.
Perfect.
Selfless guinea pig Roland Pease there.
Finally, the Paralympic Games are underway, and over the next 10 days we'll watch disabled athletes show their extraordinary abilities in more than 20 sporting events.
But if you've watched it before, you may have wondered how, when presented with a range of different impairments, officials managed to keep a level playing field.
Well, before the games, the athletes are sorted into myriad categories, and Inside Science wanted to know how on earth you judge someone's disability in a scientifically accurate system.
If one swimmer is faster than another, are they less impaired or are they just better?
I called up Professor Sean Tweedy, part of the International Paralympic Committee's Classification Research and Development Centre, who is currently in Paris, and I asked what this classification system even is and what it's for.
We want to have competitive, high-level sport for people with disabilities and we need a way that allows everybody to compete that doesn't just give favouritism to people who have got less severe impairments.
And each class will comprise athletes who have impairments that cause about the same amount of disadvantage in the sport.
So within every single Paralympic sport, there's a subclass system?
Correct.
There are 22 Paralympic sports, and every one has a classification system that's unique to that sport.
And the object of that is to break up the eligible athletes so that
the impact of impairment on the outcome of competition is minimised and it's not just the person with the less severe impairment that wins the race or wins the competition.
So how did the classification system work back in the day?
I mentioned it's changed.
How has it changed?
It started out as medical classification where classification systems were not sports specific.
If you had a spinal cord injury at a certain level, you went into a certain class and that class was the same whether you were doing athletics, swimming or basketball.
But as the movement grew and people with more and more different types of disabilities came in, it became evident that that wasn't a sustainable way of breaking people up.
So there was a move towards more functional classification systems and function ended up being rather unsatisfactory as well because function is also affected by training.
That needed to be taken into account and so now we're trying to come up with evidence-based methods of classification.
That sounds really difficult because if you've got someone who has an impairment and they've worked really hard and they've managed through training to largely compensate, how do you judge that?
The sport that I learnt the most about is athletics.
So if I use that as an example, one of the first things that we do is take a training history.
For athletes who are very new to the sport, we take into account the fact that there's going to be room for improvement, and that'll be factored into how the class is allocated.
So what kind of other factors do you take into account when you're judging this?
So we assess their impairments, we then look at them using two different types of motor tasks, motor tasks that the athletes have trained really hard to become proficient at, and then we do a set of novel tasks.
Oh, stuff that's nothing to do with what they've trained for.
Exactly.
So things like touching yourself on the nose really rapidly, rubbing your hands together, that's another way that we can get a bit of a read on how well trained an athlete is.
If the athletes are doing really well in the sport specific tests, but then rather poorly in the generic tests, then we might say, well, look, we've got somebody who's a really well-trained athlete here.
And we put all those pieces together using clinical reasoning and logical case building to come up with their class allocation.
Do you get pushback from the athletes when you put them into a particular class system?
It would be fair to say that not everybody's thrilled with the class they get allocated, but it's really
much more the exception than the rule.
And since we're talking about classification systems, I have to ask you about another classification change, which is the first openly transgender athlete who is competing in a female category, having previously competed in the male category.
What do you say to critics who say that this is an individual who has gone through male puberty and has the advantages that that imbues?
Oh, gee whiz, that's... it's a tough one.
There are features of sex that have a direct influence on sports performance and there are features of sex that don't.
And I think that there are aspects of the framework that we use for getting athletes into classes that we could apply very usefully to the sex classification question, but we need an evidence base.
And at the moment, we're not in a position where we can state that, really.
The classifications still aren't entirely evidence-based, are they?
So are you still striving for change?
Is there room for improvement in this system as new research comes out?
Yes, absolutely.
The big step forward, I think, can be using artificial intelligence.
It's a project we've actually got funded from the Queensland Government in Australia now.
You can actually get film of them performing all their tests of impairments from a number of different angles, and you can use that as the basis for training your AI to come up with a process that becomes more and more efficient at allocating people to classes and to make sure that we've answered a very difficult question in the best possible way.
Also, if they don't like their categorisation, you can blame the computer.
There's a nice element of impartiality, is I think what you're trying to get through there.
That's what I meant to say.
Yes, that's it.
My thanks to Sean Tweedy from the University of Queensland.
Tom Chivers is still with me.
Tom, Sean mentioned AI making things more accurate, and we're back to Bayesian predictions, aren't we?
Yes, we are, always.
It's everywhere, you see.
But this time, an AI may be judging an athlete's level of impairment better than a human with a clipboard.
So I've heard of similar things in ideas for football refereeing and things, because obviously every decision the referee makes is to some extent subjective.
You know, the ball hits the arm, is it intentional, this sort of stuff.
And the idea would be that the AI would train on so much data that it would get a sort of best sense of what the median referee would guess in that and better establish a sort of what we'd almost call objective or sort of collective subjective, I suppose, approach to it.
And you're right, it's absolutely Bayesian because before you give the AI any training data, it would make...
rubbish predictions, and then you give it more information each time, and its predictions would converge around some tighter average, and hopefully get better.
But whether it works or not, we'll find out, I guess.
Tom, because this is the podcast of Inside Science, we're not limited by Radio 4's schedule, which means that we can just stay talking about Alzheimer's and stats until the producers kick us out of the studio.
Fantastic.
So, if I would just read the entire book.
Could you?
I think the trouble with writing a book about stats is really the numbers, the equations.
But you give some really lovely examples that bring things home to people in the book.
Yeah, so
my horrible, dirty little secret, which I do reveal in the book, is I hate equations.
I hate them.
I've written three books basically about maths, and I hate equations.
And whenever I run across one in the book, I sort of grind to a halt.
I see a sigma symbol or whatever, and my brain just stops.
But you go on to write about them anyway.
No, no, no, no... yes, but what I have found is
Bayes' theorem, which has these sort of important-looking vertical lines and all this sort of stuff, and it looks very serious and equation-y, it's just multiplication and division.
My 10-year-old son, my eight-year-old daughter, could do all the bits of it, and it's just what's important about it is the concept, is the conceptual nature of it rather than the mathematical.
And while it spits out these very counterintuitive results sometimes, if you actually walk through the numbers, if you walk someone through the numbers, imagine you tested 100,000 people, you know, 1,000 of them have the disease to begin with, all that sort of stuff.
Most people can follow it.
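That walkthrough can be written out as plain counting. Here is a minimal sketch in Python using the conversation's numbers (100,000 people tested, 1,000 of whom have the disease); the test's sensitivity and false-positive rate are illustrative assumptions, not figures from the book:

```python
# Bayes' theorem as plain counting: "imagine you tested 100,000 people".
# The prevalence (1,000 in 100,000) comes from the conversation; the
# sensitivity and false-positive rate below are illustrative assumptions.

population = 100_000
have_disease = 1_000                     # base rate: 1% of those tested
sensitivity = 0.90                       # assumed: P(positive | disease)
false_positive_rate = 0.05               # assumed: P(positive | no disease)

true_positives = have_disease * sensitivity                          # 900
false_positives = (population - have_disease) * false_positive_rate  # 4,950

# P(disease | positive) = true positives / all positives
posterior = true_positives / (true_positives + false_positives)
print(f"P(disease | positive) = {posterior:.1%}")
```

Even with a seemingly accurate test, the positive group is dominated by false positives from the much larger healthy population, so the posterior comes out at roughly 15%, which is the counterintuitive result Tom describes.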
So, like, one of the examples you give near the beginning of the book is
about
the prosecution of O.J.
Simpson.
Oh, yes.
Well, that's a really interesting example, I think, because it's the defence saying, hey, O.J. Simpson was a domestic abuser, but only one in 2,500 domestic abusers go on to murder their wives.
"Only" very much in inverted commas there.
Showing that actually this is incredibly rare.
Yeah.
Whereas Bayes would turn it the other way around.
Well, absolutely.
So I'm going to go back a bit before this.
There's a common thing in the law called the prosecutor's fallacy, which is literally just not being a Bayesian.
And that is, there was an awful, awful case: Sally Clark. She had two kids who died in their first weeks of life of sudden infant death syndrome.
And she was basically convicted on stats.
Yes, exactly.
A statistician said the odds of one of these is about one in 8,500, therefore the odds of two is one in 73 million.
And that's wrong in a variety of ways, most notably that these aren't independent things.
Obviously, if you have one cot death in your family, you're more likely to have another.
So it was flat wrong from the beginning.
But you can't just say there's a one in 73 million chance that this would happen by chance, and therefore there's a one in 73 million chance that she is innocent.
It's flat wrong.
It's the equivalent of saying only a one in eight billion chance that a given human is Taylor Swift and thinking that's the same as there's only a one in eight billion chance that Taylor Swift is human.
They are totally different questions.
So that was called the prosecutor's fallacy.
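The fallacy can be made concrete with a quick sketch. Both rates below are illustrative assumptions (the real figures were heavily contested at the time): to get from "probability of the evidence given innocence" to "probability of innocence given the evidence", you have to compare the two rare explanations against each other.

```python
# Prosecutor's fallacy, sketched with ILLUSTRATIVE rates (not the trial's):
# P(evidence | innocent) is not P(innocent | evidence). To flip the
# conditional, compare the competing rare explanations for the same evidence.

p_double_sids = 1 / 130_000       # assumed: two cot deaths in one family
p_double_murder = 1 / 1_000_000   # assumed: a mother murdering two infants

# P(innocent | two deaths) = P(two SIDS) / (P(two SIDS) + P(double murder))
p_innocent = p_double_sids / (p_double_sids + p_double_murder)
print(f"P(innocent | two deaths) = {p_innocent:.0%}")
```

Under these assumed rates the posterior actually favours innocence: however rare a double cot death is, double infanticide is rarer still, and that comparison is what the "one in 73 million" framing skips entirely.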
What happened in the O.J.
Simpson trial was the opposite thing.
It was the opposite way around.
Instead of failing to take into account the base rate, what happened there was the defence said, yes, only one in 2,500 domestic abusers go on to murder their wives.
But he said, well, that's not what we're interested in.
He's not just a domestic abuser.
What we're interested in,
we've had some more information, which was she was murdered.
She was murdered.
Therefore, you work backwards from the number of women who were murdered and ask: how many of them were murdered by husbands who domestically abused them?
Exactly.
And
it's a lot higher.
So a failure to take that sort of stuff into account completely messed up the prosecution.
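Tom's "work backwards" point can be sketched the same way. The 1-in-2,500 rate is from the conversation; the cohort size and the background murder rate are illustrative assumptions:

```python
# The defence's "1 in 2,500" answers the wrong question. The right question
# conditions on the new evidence: the woman WAS murdered. The cohort size and
# background murder rate below are illustrative assumptions.

abused_women = 100_000                            # hypothetical cohort, one year
killed_by_abuser = abused_women / 2_500           # defence's own rate: 40
killed_by_other = abused_women / 20_000           # assumed background rate: 5

# P(abuser did it | victim was abused AND murdered)
posterior = killed_by_abuser / (killed_by_abuser + killed_by_other)
print(f"P(abuser did it | victim was murdered) = {posterior:.0%}")
```

Conditioning on the murder flips the picture: among abused women who are murdered, the abusive partner is by far the likeliest culprit, which is the reversal Bayes' theorem formalises.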
Can we talk about the history of Alzheimer's research?
Because I'd like to link it back to
it's a paper from almost 20 years ago.
Okay.
And it was a paper that dealt with something called amyloid beta star 56.
Oh, yes.
As I recall, there was an awful lot of research into this particular form of amyloid plaques, amyloid proteins, and it got millions and millions of dollars' worth of funding.
You know, it was really, it seemed to correlate incredibly well with Alzheimer's.
Because there are loads of different hypotheses bubbling about at this point.
Like, is there a pathogen in the brain?
Is it fat accumulation?
Is it sort of mitochondria that are damaged?
But this one paper seems to suggest there was a particular kind of amyloid.
Not yet, because there are many kinds of amyloid proteins.
And this one paper
was amyloid beta star.
Yeah, okay, okay.
56.
56.
Yeah, yeah.
Brilliant, yeah.
And then it turned out not to exist?
Yes, so no one could replicate it.
No one could replicate the paper.
And no one has ever been able to.
In the last five or ten years, I have become quite, you know, quite dispirited about the state of a lot of science, because so much of it is based on poor statistics, which...
seems like such an abstruse topic to a lot of people, but actually is, for exactly Bayesian reasons, fundamentally why we can't trust a lot of scientific results, because people have gone and tested unlikely hypotheses with weak, underpowered statistical tools, and they end up basically mining noise.
But this,
I don't think it was ever shown to be fraud. I think a lot of people got worried that it was.
And it really undermined the amyloid hypothesis, which is
the idea that Tara was talking about earlier on in the show, that these plaques are causative,
they don't just correlate with Alzheimer's.
As I said, I think since then, more evidence has come back.
The fact that lecanemab and things do show some impact on the progression of the disease suggests that it's got some causative role, you know.
But that really undermined it quite a lot.
And there was
like
literally millions and millions of dollars spent and years.
There was a groupthink of
this is the one theory and everyone should put all the research money into this.
Yes, yeah.
Well, at one point people were told they wouldn't get funding if they couldn't make their pet project about the amyloid hypothesis, and it's a tragedy, because this is a horrible disease. If it put back research by ten years, and if the cure comes ten years late, that's millions of people who suffer horribly because of it.
Yeah.
Well Tom, that was fascinating, but I have got the call in my ear now from the producers saying that they are kicking us out of the studio.
Good ear. And I hardly got to talk about statistics at all.
It's almost like we planned it that way.
No.
So drawing this to a close, you've been listening to BBC Inside Science with me, Marnie Chesterton.
The producers were Sophie Ormiston and Ella Hubber.
Technical production was by Mike Mallon and the show was made in Cardiff by BBC Wales and West.