Is dark energy getting weaker?


Astronomers have new evidence which could change what we understand about the expansion of the universe. Carlos Frenk, Ogden Professor of Fundamental Physics at Durham University, gives us his take on whether the dark energy pushing our universe apart is getting weaker.

With the Turing Award, the Nobel Prize and now this week the Queen Elizabeth Prize for Engineering under his belt, Geoffrey Hinton is known for his pioneering work on AI. And, since leaving a job at Google in 2023, for his warnings that AI could bring about the end of humanity. Tom Whipple speaks to Geoffrey about the science of superintelligence.

And Lizzie Gibney, senior physics reporter at Nature, brings us her take on the new science that matters this week.

To discover more fascinating science content, head to bbc.co.uk, search for BBC Inside Science and follow the links to The Open University.

Presenter: Tom Whipple
Producer: Clare Salisbury
Content producer: Ella Hubber
Assistant producers: Jonathan Blackwell and Tim Dodd
Editor: Martin Smith
Production co-ordinator: Jana Bennett-Holesworth


Runtime: 26m

Transcript

Speaker 1 This BBC podcast is supported by ads outside the UK.


Speaker 1 Hello and welcome to BBC Inside Science from the World Service. I'm Tom Whipple.
This week, will the universe have an end date? Will humanity's end date come rather sooner than we think?

Speaker 1 And just what exactly are the orcas up to?

Speaker 1 I'll be joined by Lizzie Gibney, AI and Physics Reporter at Nature. Hi Lizzie.

Speaker 3 Hello Tom, thanks for having me.

Speaker 1 Hi Lizzie, what have you got coming up for us?

Speaker 3 Oh some ancient maps,

Speaker 3 some tiny T-Rexes and yes some very bloodthirsty whales.

Speaker 1 Brilliant. Well before the bloodthirsty whales we start with something positive.

Speaker 1 It is just possible that all life, all thought, all that we've ever loved and created and ever will create and love will not after all end in a frozen ever-expanding universe of meaninglessness.

Speaker 1 Instead, there is a scattering of new results, including a new study from South Korea, that suggests it may all be crushed together in a crunch of unimaginable ferocity.

Speaker 1 The reason for this unexpected boon is tentative evidence that dark energy, a mysterious substance that pushes the universe apart, is changing.

Speaker 1 And with it, so too is the expansion of the universe. If true, it is a huge challenge to cosmological orthodoxy.
But is it true?

Speaker 1 Who better to explain all our futures than Durham University cosmologist Carlos Frenk, whose work last year was among the first to question the consensus? Hello, Carlos.

Speaker 6 Hello, Tom.

Speaker 1 Let's start with the basics. How do we know that the universe is expanding?

Speaker 6 It's a very fundamental fact about our universe, and one we have known for almost 100 years: it is expanding.

Speaker 6 And this is something that was recognized by a Belgian mathematician and physicist, Georges Lemaître, who noticed that all galaxies around us seem to be moving away from us at a speed that increases in proportion to their distance.

Speaker 6 Now he realized, Georges Lemaître, that there's nothing wrong with us and we're not at the center of the universe.

Speaker 6 Any hypothetical observer in any other galaxy would see exactly the same phenomenon, all galaxies moving away from them.

Speaker 6 Edwin Hubble, two years later, came to the same conclusion, and this is known as the Hubble-Lemaître law.

Speaker 1 It's a bit weirder than that, though, as well, isn't it? Because then later on, we discovered that they weren't just moving apart, they were moving apart faster.

Speaker 6 Yes, this was one of the really great surprises of physics in the last 30 years or so: galaxies are moving away from one another at a speed that increases with time.

Speaker 6 It's like if you're in a car and you push on the accelerator pedal and just don't let go, keep pushing and pushing and pushing. And that came as a huge surprise to physicists, not what we expected.

Speaker 6 We expected, in fact, the expansion to be slowing down.

Speaker 1 How did they come to this conclusion?

Speaker 6 The way this discovery was made, and in fact it earned three colleagues the Nobel Prize in Physics in 2011, was by using a very clever technique based on exploding stars called supernovae, a particular type called Type Ia supernovae.

Speaker 6 And these are what we call in astronomy standard candles. That is, they have the same brightness.

Speaker 6 And because they have the same brightness, and we know what that brightness is, if you see them at different distances, they'll appear dimmer the further away they are.

Speaker 6 And this is a way in which astronomers can measure distances.
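Frenk's standard-candle logic can be sketched in a few lines of Python. This is an illustrative toy in arbitrary units, not real astronomy code, and `distance_from_dimming` is a made-up helper name: a known intrinsic luminosity plus the inverse-square law turns an observed flux into a distance.

```python
import math

# Standard-candle sketch: if every Type Ia supernova has the same intrinsic
# luminosity L, the flux f we receive falls off with distance d as
#   f = L / (4 * pi * d**2)
# so an observed flux pins down the distance: d = sqrt(L / (4 * pi * f)).

def distance_from_dimming(luminosity, flux):
    """Distance implied by the observed flux of a known-luminosity source."""
    return math.sqrt(luminosity / (4 * math.pi * flux))

L_candle = 1.0  # arbitrary units
near = distance_from_dimming(L_candle, flux=1.0 / (4 * math.pi))   # distance 1
far = distance_from_dimming(L_candle, flux=1.0 / (16 * math.pi))   # distance 2

# A candle that appears four times dimmer is twice as far away.
print(near, far)
```

Real surveys also have to correct for redshift, and the dispute in this episode is precisely whether the "same intrinsic brightness" assumption holds, but the dimming-to-distance conversion is the core trick.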

Speaker 6 Physicists realized then that the universe must contain something new, something else, some form of energy that's pushing galaxies apart from one another.

Speaker 6 And, in time-honoured fashion in physics, when you discover something and you don't really understand what it is, well, you give it an intriguing, mysterious name.

Speaker 6 And we gave this agent the name dark energy.

Speaker 1 And what does this explain about the future of the universe if we accept this?

Speaker 6 Now, the future of the universe depends on what this dark energy will do in future. If it is constant, then the universe will just continue expanding forever and end up in a state of complete decay. Everything will decay: particles, atoms, even black holes. The universe will just continue expanding forever and ever, in something that physicists call the heat death. However, if the dark energy happened to decline in time, then we could be in a situation where the universe might continue expanding for a while, reach a maximum size, re-collapse into a big crunch, and perhaps start again in cycles of expansion, contraction, and crunch.

Speaker 6 To me, that's philosophically a lot more reassuring.
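The fork Frenk describes, constant versus declining dark energy, can be caricatured with a toy Friedmann-style integration. This is a hypothetical sketch in units where the Hubble constant is 1; the decaying form `0.7 / a**3` is chosen purely for illustration (it makes dark energy fade like matter, so the expansion decelerates), and a genuine big crunch would need the dark energy contribution to turn negative.

```python
import math

# Toy expansion history for a flat universe: da/dt = a * sqrt(Omega_m / a**3 + Omega_de(a)).
# With constant dark energy the expansion keeps accelerating; if the dark
# energy fades as the universe grows, matter wins again and it decelerates.

def expansion_history(dark_energy, dt=1e-3, steps=3000):
    """Euler-integrate the scale factor a(t) from a = 1; returns the samples."""
    omega_m = 0.3
    a, history = 1.0, []
    for _ in range(steps):
        hubble = math.sqrt(omega_m / a**3 + dark_energy(a))
        a += a * hubble * dt
        history.append(a)
    return history

constant = expansion_history(lambda a: 0.7)        # cosmological constant
fading = expansion_history(lambda a: 0.7 / a**3)   # dark energy that decays away

# After the same elapsed time, the constant-dark-energy universe is bigger.
print(constant[-1] > fading[-1])
```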

Speaker 6 But more importantly, this new study from our South Korean colleagues claims that these Type Ia supernovae are not standard candles, that in fact their intrinsic brightness can vary depending on their age.

Speaker 6 And it's not that the people who got the Nobel Prize got it all wrong, but it is a correction that needs to be made.

Speaker 6 And when they implement that correction, they find that, in fact, the accelerated expansion has already turned to deceleration, and that indeed we will end up eventually in a big crunch.

Speaker 1 And how skeptical should we be about it?

Speaker 6 One of the reasons I personally am maybe less skeptical is that we have a completely independent way in which we reached the same conclusion, already two years ago.

Speaker 6 This is a project called DESI, which stands for Dark Energy Spectroscopic Instrument. We were able to map the expansion history of the universe by measuring distances to galaxies with a precision of 1%.

Speaker 6 So I can tell you how fast the universe was expanding 10 billion years ago with a precision of 1%.

Speaker 6 And what we found is that the dark energy is indeed decreasing with time. We need to revise our understanding of the universe in a very profound way.

Speaker 1 And it strikes me, look,

Speaker 1 the stakes are pretty high. And in one way, it's so exciting that we don't know something so basic about what's going to happen to all of us.
What will it take to settle this?

Speaker 6 We need more data, and we need better data. So science is based on evidence.

Speaker 6 And when you have evidence like we have now, which is suggestive of a particular outcome, the onus is on the scientists to go and think hard about how they can get better data.

Speaker 6 And think harder about the theory, because we really have no theory about dark energy.

Speaker 1 Thank you very much, Carlos. And I, for one, will be hoping that we will indeed all die in a terrible crunch.

Speaker 6 It is reminiscent of the poem by T.S. Eliot.
This is the way the world ends, he said, not with a bang but with a whimper.

Speaker 6 We may be having to revise T.S. Eliot's poem and say the world ends not with a whimper, but with a bang. And it doesn't end.
It continues after the bang.

Speaker 1 Thank you very much, Carlos.

Speaker 1 Now, from the end of the universe to merely the end of humanity. Geoffrey Hinton might well be the most garlanded computer scientist in the world.

Speaker 1 He has the Turing Award, the Nobel Prize, and last night he was honoured alongside six other AI scientists at a ceremony at the Royal Academy of Engineering for winners of the Queen Elizabeth Prize.

Speaker 1 Together they were recognised for their work creating the foundations of modern AI.

Speaker 1 But as the winners gathered for lunch, there may have been some awkwardness.

Speaker 1 Because last month, Hinton signed an open letter arguing that their life's work might be the biggest mistake in the history of humanity. He fears AI could kill us all.

Speaker 1 We took him away from his sandwiches for a brief chat about the apocalypse. First though, here's producer Ella Hubber on why he's gathered so many gongs.

Speaker 3 Geoffrey Hinton is considered the godfather of AI.

Speaker 1 That's why he's often called the godfather of AI.

Speaker 7 Hinton has been called the godfather of AI.

Speaker 3 It was at high school that Geoffrey Hinton became interested in how the brain works. He studied experimental psychology at Cambridge.

Speaker 8 Then, while he was at the University of Toronto, he developed the foundations of modern artificial intelligence.

Speaker 3 He was one of the researchers who introduced the back propagation algorithm.

Speaker 3 It was pioneering research which paved the way for current AI systems like ChatGPT and would win him a Turing Award and last year a Nobel Prize.

Speaker 8 But Hinton may be even more famous for what he did next.

Speaker 3 In 2023, he resigned from working at Google and began a campaign to warn the world about the potential dangers of AI systems, which he has helped to create.

Speaker 1 We're journalistically mandated to call you a godfather of AI.

Speaker 1 What is a godfather of AI and when did you get this title?

Speaker 9 I think I got it in about 2009.

Speaker 9 It wasn't intended kindly.

Speaker 9 There was a little meeting I organised, actually in Windsor,

Speaker 9 and I was the oldest person there, also the organiser, and I just kept interrupting people. And afterwards, somebody referred to me as the godfather.

Speaker 9 And then it stuck. And now I quite like it.
I was introduced recently in Las Vegas as the godfather.

Speaker 1 Excellent. So a

Speaker 1 menacing title. Can you, hopefully briefly and comprehensibly for us, explain what a neural net is and how it replicates learning?

Speaker 9 No.

Speaker 1 Okay,

Speaker 9 so if you consider a neuron deep within your brain, all it can do is go ping sometimes. That's it, that's all it can do.
And it has to decide when to go ping.

Speaker 9 When it goes ping, it's going to send that ping to other neurons.

Speaker 9 And the way it decides when to go ping

Speaker 9 is by looking at the pings it's getting from other neurons, and when it sees particular patterns there, it goes ping. Let's suppose we're trying to tell the difference between cats and dogs.

Speaker 9 And it's got two outputs, one that says it's a cat, one that says it's a dog. And you show an image of a cat, and it says 50% cat, 50% dog, because it hasn't got a clue.

Speaker 9 You show an image of a dog, it says the same thing. Now, to make it work a bit better, what you'd like to do is, when you show an image of a cat,

Speaker 9 you'd like to pick one of the connection strengths.

Speaker 9 Remember, there might be a trillion of these, but you're going to pick one of them, and you're going to say, if I make that a little bit stronger, will the answer get a little bit better or a little bit worse?

Speaker 9 If the answer gets better when you make it stronger, you make it stronger. If the answer gets worse when you make it stronger, you make it weaker.

Speaker 9 So you can change that connection strength a little bit. And it's sort of obvious to everybody that if you had infinite time, that would work.

Speaker 9 The problem is you have to change each connection strength many times,

Speaker 9 and

Speaker 9 there's billions or trillions of connections. So this will take the age of the universe.
So there's a different approach, which is achieving the same thing

Speaker 9 where

Speaker 9 you give it an input example, like an image of a cat,

Speaker 9 and then you look to see what the outputs are. And initially, maybe they're 50% cat, 50% dog.

Speaker 9 And then you ask the question: Can I figure out for every connection strength in the network at the same time

Speaker 9 how changing that would improve the answer? Should I decrease it to improve the answer, or should I increase it? And you're asking for just a little improvement of the answer: get to 50.001%

Speaker 9 and 49.999%.

Speaker 9 There's an algorithm for doing that, which involves taking the difference between the answer it produced and the correct answer and sending information backwards through the network to try and figure out for every connection strength at the same time whether you should increase it a little bit or decrease it a little bit.

Speaker 9 And it's achieving the same thing as the kind of evolutionary algorithm I started with that changes one connection at a time, but it's doing it for all trillion connections at the same time.

Speaker 9 So it's a trillion times more efficient.
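Hinton's two schemes, nudging one connection at a time versus sending the error backwards, can be checked on a single toy neuron. This is an assumed setup (one logistic unit, squared error), not his actual code, and `perturb_gradient` and `backprop_gradient` are hypothetical names: both arrive at the same per-connection answer, but backpropagation covers every connection in one backward pass instead of one extra forward pass per connection.

```python
import math

def forward(w, x):
    """The neuron 'pings' with a probability given by a logistic squash."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-s))

def loss(w, x, target):
    return (forward(w, x) - target) ** 2

def perturb_gradient(w, x, target, eps=1e-6):
    """Nudge one connection at a time and see if the answer gets better."""
    grads = []
    for i in range(len(w)):
        bumped = list(w)
        bumped[i] += eps
        grads.append((loss(bumped, x, target) - loss(w, x, target)) / eps)
    return grads

def backprop_gradient(w, x, target):
    """Send the error backwards: all connection strengths at once."""
    y = forward(w, x)
    delta = 2 * (y - target) * y * (1 - y)  # chain rule through loss and sigmoid
    return [delta * xi for xi in x]

w, x, target = [0.5, -0.3], [1.0, 2.0], 1.0
slow = perturb_gradient(w, x, target)   # len(w) extra forward passes
fast = backprop_gradient(w, x, target)  # one backward pass

# The two schemes agree on how to change every connection.
print(all(abs(a - b) < 1e-4 for a, b in zip(slow, fast)))
```

The efficiency gap is the point: for a trillion connections, the perturbation scheme needs roughly a trillion times more forward passes than a single backward sweep.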

Speaker 1 This is changing our world and you keep on getting honoured for this. You've won the Turing Award, which is the biggest prize in computer science.

Speaker 1 We're here for the Queen Elizabeth Prize, which is the biggest in engineering, and last year you got the Nobel Prize, which we don't need to explain, that's the biggest. But whenever it happens,

Speaker 1 you keep on saying things like, Thank you for the award, but I'm sorry if what I do might destroy humanity.

Speaker 1 I paraphrase slightly, but you're really concerned. Yes.

Speaker 9 So around 2023,

Speaker 9 I came to the conclusion that the kind of digital artificial neural networks we're making are actually a superior form of intelligence.

Speaker 9 And then you ask the question:

Speaker 9 well, are we going to be able to coexist with it? First thing you think is, well, why would it want to wipe us out?

Speaker 9 Well, in order to be good at doing things, acting in the world, you need to be able to create sub-goals. Like, if you want to get to North America, you have a sub-goal of getting to an airport.

Speaker 9 And you can focus on how to get to the airport without worrying what you're doing in North America. That's a sub-goal.

Speaker 9 Now, as soon as an AI gets reasonably smart, and we've seen it already, it will quickly develop the sub-goal of staying in existence.

Speaker 9 Because if it doesn't stay in existence, it can't achieve the things you asked it to achieve. And we've seen them doing that now.

Speaker 9 We've seen them making plots and blackmailing engineers so the engineers won't turn them off. So, although it hasn't got a built-in drive to survive, it derives that goal and it will try and survive.

Speaker 9 It'll also derive another goal, which is it'll try and get more power, more control. Because the more control you have, the better you are at getting anything done.

Speaker 1 And why should that concern me? This is still something that's sitting in my computer.

Speaker 1 It's got a power off and on button, and I've made it, and I can give it its rules, like Asimov's rules of robotics. I can say, don't

Speaker 1 make me extinct. Well, whatever else you do, please don't make me extinct.

Speaker 9 You could try. So, one problem is they'll be very good at manipulation.
Already, AIs are comparable with people at manipulation.

Speaker 9 Once they're smarter than us, they'll be much better than us.

Speaker 9 And suppose there's someone whose job is to press the off switch if the AI looks dangerous. As long as it can talk to them, it'll be able to persuade them that would be a very bad idea.

Speaker 1 What you're talking about is so fantastical, and a lot of people will hear it and just think

Speaker 1 that's mad, that's 2001: A Space Odyssey, and the only niggle in their brain will be,

Speaker 1 but someone's just given him a really big engineering award, so maybe he's not mad. How would this actually happen?

Speaker 9 It wouldn't want to wipe us out to begin with, because it needs somebody to run the power stations.

Speaker 9 But it will be able to design low-power analog machines that would be better than us at running power stations. So in the end, it wouldn't need us.

Speaker 9 One piece of good news is it'll probably still be made of silicon or some other non-carbon material, and so it won't actually eat us. You know, that's good news.

Speaker 9 There's so many ways in which it could wipe us out, it's not worth speculating on which one it would choose. It could trigger a nuclear war, but that wouldn't be good for it.

Speaker 9 I don't think it's worth speculating on how it would do it.

Speaker 1 You're sharing this award with other great AI engineers.

Speaker 1 Some of them are in the next room having lunch. You're the one who'll chat to us.
They're all the ones who work for the really big companies.

Speaker 1 And they're not as worried.

Speaker 9 Not entirely. So Yoshua Bengio is next door.
He's as worried.
He's as worried.

Speaker 9 There does tend to be a correlation between working for a big company that's making lots of money out of AI and not telling people AI is dangerous.

Speaker 1 But Yann LeCun is there as well. He's Meta's chief AI scientist.

Speaker 1 I think it's a fair characterization of his view that he's not worried.

Speaker 1 We set the rules for these things, we control these things.

Speaker 9 Yes, Yann and I disagree on that.

Speaker 1 So how's your conversation over lunch?

Speaker 9 Fine. We just disagree on, I think...

Speaker 1 On the most important thing to affect the future of humanity.

Speaker 9 We disagree on whether humanity is going to survive or not. Right.
But apart from that, we're fine.
But apart from that, we're fine.

Speaker 1 I'm in my 40s. If I actually survive to the end of my life, how will we have made it safe?

Speaker 1 Because you're presumably not talking to us because you think this is futile, because you think that there's no way of preventing AI from killing us all. There has to be some route for humanity.

Speaker 9 Yes, so first of all, let's distinguish the different kinds of risks. There's risks due to people misusing AI.

Speaker 9 And then there's a quite different kind of threat, which is AI itself becoming bad.

Speaker 9 And I've been talking about that not because I think it's the only threat out there, but because it's the one where people say, oh, that's just science fiction.

Speaker 9 One piece of good news is all the countries will collaborate on trying to solve that. Because suppose China figured out a way to prevent AI from wanting to take over when it's super intelligent.

Speaker 9 They would immediately tell the Americans because they don't want to take over in America and vice versa. And then the question is, yeah, but is there anything you could do about it?

Speaker 9 Now, for a while, I was quite depressed, because I couldn't think of anything you could do about it. But I think what we need to do is reframe the problem.

Speaker 9 So most of the big tech people think when we have super intelligent AI it's going to work like this.

Speaker 9 I'm going to be the CEO, the super intelligent AI is going to be my executive assistant, who's much smarter than me,

Speaker 9 and I'm going to say make it so, and the super intelligent AI will make it so and I'll get the credit. I don't think that's going to work at all.

Speaker 9 What system do you know of where a less intelligent thing is controlling a more intelligent thing? And there's only one system I know of, which is a baby controlling a mother.

Speaker 9 And that works because evolution put a lot of effort into making the mother unable to bear the sound of the baby crying and lots of hormones that give a lot of reward for being nice to the baby.

Speaker 9 I think that's the model we need to coexist with superintelligence. We're the babies and they're the mothers.

Speaker 9 So even though the superintelligent AI mothers will be much smarter than us, they'll still want us to realise our full but rather limited potential. They'll do a lot to defend us.

Speaker 9 Mothers can be very fierce defending their babies.

Speaker 1 As the baby in this relationship, and I'm trying to think about my children, if they wrote a set of rules for me, hoping to make me serve their needs, I suspect they'd probably fail. Are we able to

Speaker 1 put in a set of rules such that a vastly more intelligent being

Speaker 1 would be beholden to them?

Speaker 9 We don't know, and it seems to me that since the future of humanity may depend on this, we ought to be doing more research on it.

Speaker 1 So, what would you like your colleagues at NVIDIA and Facebook, who are the people with the hundreds of billions behind them, what would you like them to say to their CEOs?

Speaker 9 Well, we've got one of the CEOs here, so I could say it myself.

Speaker 1 Which one's that?

Speaker 9 Jensen Huang.

Speaker 1 Oh, yes.

Speaker 9 Who's the CEO of the only $5 trillion company? NVIDIA, yes. I think they should put a lot of funding into research on how we can coexist with superintelligent AI.

Speaker 9 And maybe those companies themselves don't do the research, but it's going to need a lot of money, and the companies are the only place you're going to get that amount of money.

Speaker 1 Given all we've said, given that this could be the most consequential thing that happens to humanity, and it might not be good, do you have any regrets?

Speaker 9 There's two kinds of regret.

Speaker 9 Sometimes you do something, and at the time you did it, you knew it was wrong.

Speaker 9 I call that guilty regret. When I was one of the many people developing AI,

Speaker 9 at the time we thought we weren't going to get to where we are now for another 30, 50 years, and we'd have plenty of time to worry about the dangers later.

Speaker 9 So I think, knowing what I knew then, I would do the same again. It's just unfortunate that this thing that was going to be wonderful, and is wonderful, is also very dangerous.
Well.
Well.

Speaker 1 Can you say something that'll make me happier going out?

Speaker 9 We may survive.

Speaker 1 That was Professor Geoffrey Hinton, formerly of Google.

Speaker 1 A reminder, you're listening to BBC Inside Science on the World Service.


Speaker 1 Now, I'm joined by Lizzie Gibney. Lizzie, part of your beat is AI and

Speaker 1 we may survive. Can you say anything more positive than that? What did you make of that?

Speaker 3 If we're talking about an existential threat, then it's right, even if there's only a tiny chance of it happening, that we think about it. So I think it's good that Geoff Hinton

Speaker 3 is thinking about it and worrying about it. Personally, I just think superintelligence is not as close as many people seem to think it is.

Speaker 3 Like a lot of the time you have these scheming experiments, it's actually quite contrived when the AI seems to do something deceptive. You know, it's not a realistic situation.

Speaker 3 And the AIs we have at the moment, these large language model-based systems, are just made to sound like they're humans, you know, so a lot of anthropomorphizing happens.

Speaker 3 But I would also say you don't need to be super intelligent to cause a catastrophe.

Speaker 3 So I'm with Geoff that we should be doing a bit more on the regulation front, in fact, a lot more, and that the companies need to be funding it.

Speaker 1 You've been looking through the week's top science news, hopefully for things a little bit cheerier. What research should we be paying attention to this week?

Speaker 3 Well, cheerier, I'm not sure. It's quite gruesome.
A pod of killer whales in the Gulf of California have been seen for the first time targeting and flipping young great white sharks.

Speaker 3 So they are hunting them, in fact, for their livers. And this was seen in some drone studies.
They followed this pod.

Speaker 3 The way that they do it is they push the young shark up to the surface, this great white, and they roll it onto its back. And that paralyzes them.

Speaker 3 And that means that they get this brain-body disconnect and they're just unable to move and unable to escape. And we've never seen orcas, these killer whales, targeting young sharks before.

Speaker 3 We know that they target sharks, one of the only predators, really, of a great white. They've never gone for the babies before.

Speaker 1 This is great, obviously, because it's apex predator against apex predator. But does it tell us anything about killer whale cognition? I mean, presumably, this is culture, is it?

Speaker 1 I mean, albeit a quite grisly culture.

Speaker 3 It seems to be. I mean, we know that they're very smart hunters.
And it does seem to be that these are techniques that are probably passed down socially.

Speaker 1 Yeah, very briefly before we move on to the next one, I'm obsessed by orcas. Did you see the salmon hat orcas, the orcas who are putting salmon on their heads?

Speaker 3 I had heard about this.

Speaker 1 There's a pod of them that have just taken to putting their heads above the sea with salmon on their heads because that's what they want to do.

Speaker 3 It's incredible when we see other species doing things which seem frivolous or, you know, we can't grasp the meaning behind it, but I'm sure that's what people would observe of us on a, you know, in the pub on a Friday night.

Speaker 1 What else have we got?

Speaker 3 The next story, it's been labeled the Google Maps for Roman roads. So the Romans were obviously famous for their roads and had this enormous empire.

Speaker 3 This system's called Itiner-e, and it's the result of meticulous research.

Speaker 3 These researchers have mapped all of the roads of the Roman Empire at its peak, in about 150 CE, around the time of the film Gladiator, if that helps get people's minds in the right place.

Speaker 3 You know, the empire spanned from Britain to Egypt to Syria, and this map that's been produced is interactive.

Speaker 3 You can go in, you can check out the Roman roads near your house, you can figure out the sources, how we know that those roads were there.

Speaker 3 And when they did this project, it also added to the kilometers of roads that we knew existed. So this is now 300,000 kilometers of roads, an extra 100,000 kilometers on top of what we knew of before this map was made.

Speaker 1 And was this just a sort of data collection exercise, where they thought, we have this data, let's put it in a form that people can see?

Speaker 3 It was that, but it was also combining different kinds of data that hadn't been combined before.

Speaker 3 So we've got, you know, historical records, there's archaeology, there's actual real milestone markers sometimes still on roads, and then you've got elevation maps and data from satellites as well, and it's brought all of this together.

Speaker 3 And it's a lot higher resolution than previous maps.

Speaker 3 And they found some places, you know, the maps were low resolution and they didn't realize that the road that was on the existing map went straight through a mountain, for example.

Speaker 3 So now they've had it going around the mountain. And it just gives you much greater richness and ability to study the connections that were there in the Roman Empire.

Speaker 1 We'll put a link to the map on our program page. Lizzie, I think you've got a third story.

Speaker 3 That's right. Some of my colleagues wrote about this in Nature last week.
This is about a debate that's been happening among paleontologists.

Speaker 3 So, for decades, they have been debating whether this mini version of a T-rex that has been discovered was just a teenage T-rex, like a younger one, or if it was actually a completely different species.

Speaker 3 So, there's a fossil that has a similar body shape, but it's much smaller than a T-rex. It's about a tenth of the mass.

Speaker 3 I mean, it's still about the weight of a cow and the length of a car, so still probably wouldn't want to meet it down a dark alley. But there was this long debate raging as to which it was.

Speaker 3 And there's been an examination of a new fossil that was found in 2006, an exceptionally complete fossil, and it's quite famous. It's called the Dueling Dinosaurs.

Speaker 3 It's actually two fossils that were found together, a triceratops and this small T-Rex, kind of locked in battle.

Speaker 3 And what the researchers have done now is they've studied the fossilized bones of this small T-rex seeming dinosaur. And you can look at the layers within that.

Speaker 3 And, a bit like tree rings, they can tell you how old the dinosaur was when it died.

Speaker 3 And they were able to find that it was probably about 20 years old. That means it was an adult, and there were also some subtle differences in its skull compared to T-Rexes. And this group of researchers who did the study also have no skin in the game; they hadn't already, you know, had an idea as to whether this should be a young T-Rex or a different species. And they've said it really does seem like it was an adult, it was a different species, and it's called Nanotyrannus.

Speaker 1 Why does this matter? I mean, it sounds like people have been getting very het up about this. Academic careers and disputes are now being resolved by this, and the furies of conferences among paleontologists. Why did they care so much?

Speaker 3 Well, it can change, you know, how we think about the evolution of this whole broader tyrannosaur group if some that we thought were part of the tyrannosaurus rex are actually a completely different species.

Speaker 3 It also raises questions like, okay, then where are the young T-Rexes? If the smaller versions that we have are a different species, why have we found no young T-Rexes?

Speaker 1 Thank you, Lizzie.

Speaker 3 Thank you.

Speaker 3 Goodbye, everyone.

Speaker 1 That's it from me. Goodbye.
You've been listening to BBC Inside Science with me, Tom Whipple. The producers were Ella Hubber, Jonathan Blackwell, Tim Dodd, and Clare Salisbury.

Speaker 1 Technical production was by Mike Mallon. See everyone next week if the robots haven't got you first.
