The Skeptics Guide #1055 - Sep 27 2025
Listen and follow along
Transcript
You're listening to the Skeptics Guide to the Universe, your escape to reality.
Hello, and welcome to the Skeptics Guide to the Universe.
Today is Wednesday, September 24th, 2025, and this is your host, Stephen Novella.
Joining me this week are Bob Novella.
Hey, everybody.
Kara Santa Maria.
Howdy.
Jay Novella.
Hey, guys.
And Evan Bernstein.
Good evening, everyone.
How's everyone this fine Wednesday?
So pretty good.
Doing well.
So, any of you watched the full press conference with RFK Jr., Trump, and Oz?
Yes.
You could stomach that.
No, I couldn't do that.
I did watch the, did you guys see the cut that they made where they set it to, like, Bill Nye the Science Guy, but it was like Don Trump the scientist guy?
It's very funny.
It's all the best quotes.
It was terrible.
I mean, it was straight-up propaganda.
Fire hose of misinformation, propaganda, and all with a very specific purpose as well.
Although I honestly think Trump was sort of rambling off script and giving away the game.
Like, I can imagine they had a meeting where they said, all right, this is what we're going to say, and this is the overall strategy.
And Trump didn't know what they were supposed to say at that conference versus what was the long-term goal.
So he sort of gives the game away.
But anyway,
do you think when they prepped him that they told him how to pronounce acetaminophen and he just forgot?
I don't know.
I don't think they thought they had to.
But remember,
nothing bad can happen.
It can only good happen.
It can only good happen, yeah.
Well, these worlds belong to us.
So here's the quickie version.
We talked about it on the live stream.
I wrote about it on Science-Based Medicine and NeuroLogica.
The announcement basically had two components: that they've discovered the cause of autism.
Wrong.
And it's Tylenol, acetaminophen, in pregnant mothers, which is wrong.
Again, I talked already about why
the evidence for that was preliminary and inconsistent, and then actually the best evidence is that, no, there is no causal link between those two things.
And pretty much every medical organization, every specialty organization in the world, has looked at the evidence and come to the same conclusion.
But they're, of course, just cherry-picking whatever studies they want to cherry-pick.
Because they had to. Again, RFK promised he was going to find the cause of autism in six months, so boom, here it is, right?
Even if he has to just make it up.
The second one was a new treatment for autism, which is really a treatment for cerebral folate deficiency, which may have some manifestations of autism.
And again, this is preliminary.
It has not been proven yet.
It requires more research and more evidence.
But
it looks like they just pressured the FDA, which, you know, he has his toady in there now, into just giving approval for this drug, which is already on the market for other reasons.
They're basically giving it a new indication for autism.
So there you go.
They found the cause of autism and they found a treatment for autism, both of which are complete bullshit.
But the deeper game was given away by both RFK and what he said on script and what Trump said off script.
You know, RFK basically made the case that, or tried to make the case, that autism is primarily an environmental disease, right?
It's not genetic.
He said that the research showing it's genetic is all fraudulent and it's a conspiracy.
Fraudulent.
And that he's going to direct the NIH now to look for environmental causes of autism, i.e., vaccines, right?
But other shit as well.
I'm sure whatever.
It'll be drugs and vaccines and toxins, you know.
So that's always going to be redirecting the NIH to waste their money
on his pet project rather than having scientists and researchers following the evidence
where it actually leads.
And Trump, of course, goes off on vaccines, you know, how the MMR vaccine is bad, it's just bad, and you have to break it up into three different shots, which I think is the strategy here, right? Because we don't have a separate mumps or measles or rubella vaccine. We just have the MMR vaccine, the combined vaccine.
And RFK's vaccine panel, which he packed with his anti-vaxxers, has already removed the approval, the recommendation rather, for the MMRV, you know, MMR plus the varicella vaccine, saying it has a slightly higher risk of fever-associated seizures than the MMR alone.
So, what's the end game there?
Is it like a majority of the majority?
So, the end game is they're going to do the same thing to MMR.
They're going to say, nope, we're going to delay it till after four years old, and you have to give the individual vaccines.
But there are no individual vaccines.
So, if they try to get individual vaccines approved, then kicks in the gold standard science we talked about, where you have to have a placebo-controlled trial, which you can't do
on something that already has a working competitor, right?
You can't do that.
Oh, see, I thought you were going to go Wakefield on this and be like, oh, they're just going to get their own people to make their own vaccines and make a ton of money off of it.
Well, I don't think that's the point.
I think he just wants to get
rid of it.
Just get rid of it altogether.
So, this is all maneuvering to make it, again, not outright ban vaccines, but just maneuvering to remove them from the recommended schedule, to delay them until an older age, where insurance companies won't cover them, to prevent any new vaccines or variants from coming on the market because you can't do the science, and to direct research into only what he wants, which is only looking for environmental causes of things, because that's what he does.
And what happens to the mortality rate once this... This is all going to be a completely unmitigated disaster.
This is a healthcare disaster for the American public.
The only question is, how much, how far along is he going to get in the time that he has?
And also, like, what happens in 2026
when the next election happens?
Is the public paying attention?
Do they care?
Are they full of misinformation?
Are they idiots?
I mean, you know, what combination of these things,
as I've said for a very long time,
human civilization will destroy itself because of stupidity.
That is the most grave threat to humanity.
And Carl Sagan said as much as well.
Yeah.
But I think the scary thing here is that for some, not all, but some of these childhood diseases, the manifestation, the public health crisis won't happen until after he's out of office.
Yeah, it'll take many years for it to come to its fruition.
Some of them will be overnight, because the minute that people are unable or unwilling to vaccinate their children, children are going to start dying of disease.
Like it's going to happen quickly with new births for some diseases, but for other diseases that don't really become an issue until kids are in daycare or, you know, elementary school, there is going to be a delay.
Well, are they turning off the spigot tomorrow here? Or does this stuff take years to get to the point that they want to get it to?
Well, I think they're trying to turn off the spigot as quickly as possible, obviously, and kind of effect change very fast.
I guess one of the things that I think, I don't know, maybe we don't talk about enough, or we do, but I'm so curious about, is the fundamental motivation that's sort of behind the motivation that you often see with key players in anti-vax movements.
We go back to Wakefield and we know that the fraud with Wakefield had a financial incentive, right?
And there was a power incentive.
Very often, when we talk about RFK or we talk about his HHS kind of group, Steve, I know I sent you some articles today about,
it's not going to be what we talk about later, but about, like, David and Mark Geier
or about
William Parker, these individual anti-vaxxers who themselves were either practicing medicine without a license or
committing fraud in their own ways, but had their own quote treatments that they were peddling, which were often really dangerous.
Like one of them was using Lupron.
It's a hormone blocker and it can basically
chemically castrate young children.
But so these like horrific experiments and really dark kind of approaches to offering an alternative to an afraid public.
That's what scares me the most is when people are told this thing that is safe is actually not safe.
It's the cause of all the things you should be afraid of.
But here, don't worry, I have something you can do instead.
It's the instead that makes me go, let's follow the money and let's figure out why these kind of unproven treatments are being peddled.
Do we know what's up with the kind of new treatment that they're starting to tout?
Well, the only thing that came out about that was that Dr. Oz, at one point, had a stake in the company that sells that.
Which he then claimed he divested from, but that's never been confirmed, so we don't know.
So
I don't know if this is specifically about grifting
and trying to make money off of alternatives, although that is what's fueling the alternative medicine industry
is selling supplements and stuff like that.
RFK Jr. mainly makes his money by being a lawyer defending people suing for toxic exposures and things like that.
So
that's how he makes his money.
He wants everything to be environmental and toxic, right?
Because that's how he makes his money.
But I think we need to do it.
We need to, I'm sure somebody has already done it, a detailed deep dive of everybody on that vaccine panel, of every single consultant that has been brought in, where a legitimate scientist who has dedicated their lives to doing this kind of research was nixed and somebody else was brought in to give their opinion.
Maybe it's just because they're toeing the party line and they're anti-vax.
But I have a feeling that part of the reason they're anti-vax is because there's some sort of incentive in there.
Yeah, they're often intertwined.
Yeah.
I mean, it doesn't matter for the terrible arguments they're making and what the science actually says, but you're right.
They are often intertwined.
And I think it matters for the public to better understand this, because if there's just straight-up fear-mongering, a lot of people go, well, why would they do that if there's not something,
if it's not true?
A lot of people say, why would this public official say that if it's not true?
But if it's like, oh, this is why,
it starts to make sense for people.
Yeah, I mean, obviously, we're going to have to keep an eye on this as it unfolds.
But I'll just say this, too, that
me and my colleagues at Science-Based Medicine, especially David Gorski, who's been writing about this, but most all of us have at one point or another been predicting what RFK Jr. is going to do, and we've been pretty spot on.
So it's not as if we don't have a good bead on where he's going with this.
He is going to do everything he can to limit and minimize Americans' use of vaccines short of outright banning them.
And so far, he's way ahead of schedule.
Like he's doing it faster even and just more
draconian than we even thought.
It's basically at the worst end of the spectrum.
Right.
He's doing the exact thing he promised he wouldn't do when he got approved in the Senate.
Oh, gosh.
All right.
Kara, we're going to do a what's the word?
And it's kind of related.
It is.
Yeah.
It's not grift.
I actually wanted to dig a little bit deeper into the word autism itself.
I know we've done deep dives on the show in the past about what autism is, what autism isn't, some of the kind of misinformation in the pseudoscience that we're often hearing peddled about autism.
But I was really curious, like, where does the word come from?
Because I think most of us can sort of recognize the two components of the word, right?
If we break it up into two, it ends in the suffix ism, just like many actions or, like, states of being are isms.
But the prefix, or the first portion of the word, comes from the Greek autos, right?
Or auto, meaning self.
And so why is it self-ism?
Like, where does that come from?
And one thing that I remember learning, sort of, it was somewhere in the recesses of my mind from when I was early on as a psychology student, but was refreshed for me today, is that the term was actually coined way back in 1912, so, you know, over 100 years ago, by a Swiss psychiatrist by the name of Paul Bleuler.
Bleuler? I'm very bad with, like, German pronunciation.
I have it right here.
Bleuler.
Okay,
well, fine.
Eugen Bleuler.
So he actually coined the term autism, but he wasn't referring to what we now know to be autism back in 1912.
What he was actually referring to was a symptom that he saw in many of the severe cases of schizophrenia that he was studying.
So he also kind of created the concept of schizophrenia.
He was the first to sort of look at that and determine it as a syndrome.
Basically, he said autistic thinking has to do with,
and this is back when psychoanalysis was king.
And so a lot of psychiatrists thought that, you know, there were portions of your mind that would kind of do things in order to avoid facing the harshness of reality.
And so he described autistic thinking, this self-ism,
as spending time in one's inner life and not being readily accessible to observers.
He actually characterized it by, quote, infantile wishes to avoid unsatisfying realities and replace them with fantasies and
hallucinations.
But around the mid-century, so the 1950s and 1960s, we saw a big change in the way that that word started to be used.
So not only did we know more about schizophrenia at that point, we also saw something big happen in like the 60s having to do with mental health.
Do you guys know what that was?
Something really big, a big change.
Electric shock therapy?
No, that's when we closed all of the asylums.
Right.
That's when we had the rise of psychiatric medication.
We started to, yes, classify with a little bit more kind of science, but we also were closing the asylums.
And so there was a real push for individuals to integrate into society and to be able to do that with appropriate therapies.
At that point, that word autism started to shift and mean more what it refers to now, which is, yes, a diagnosis, some people might say more of a syndrome, right, than an actual, like, quote, disease or disorder.
And really, for a lot of people, I actually read a really lovely,
it was on Reddit, somebody talking about how they really like the word autism.
They really like going back to the roots because they,
as somebody who is neurodivergent, they see it as having an extremely absorbing interior life and that that was something that really related for them.
And so now we'll often see that shift, and that happened again through a change in psychiatry and also epidemiologic measures that helped us kind of understand incidence rates of these different diagnoses, where the word came to have less to do with excessive hallucinations or fantasy, and more to do with one's kind of tendency to draw inward, or sort of deficits in social interaction or in communication.
And so it's interesting that the word still holds and it still does define not all individuals with autism, because as we know, many people with autism have
very different manifestations of the diagnosis, but
that kind of core root of self being kind of on one's own, being somewhat internal, having this like deep relationship with oneself does hold for many people who identify in that way.
So it's a pretty interesting, I think, etymology that sort of like left and came back.
You know,
it sort of was a core symptom of schizophrenia back when a lot of psychiatric syndromes and disorders were all sort of mashed together and they weren't, you know, well understood.
And then over time, it was teased apart and
better used to describe what we now would call autism as a diagnosis with communication deficits.
I think a lot of it, and I know you said this, but just to emphasize, they actually thought it was the early stages of schizophrenia at one point.
Yeah, and back then, schizophrenia was kind of everything.
Yeah, schizophrenia was like the catch-all.
They didn't really know what that was either.
Yeah, they were just focusing on the, they're just absorbed in their self.
Yeah, you had psychotic symptoms, you had psychotic disorders and you had neurotic disorders, and that was pretty much it.
Neurosis was things like anxiety, depression, you know,
nerves, and then psychotic disorders was pretty much anything else, anything that seemed kind of bizarre or odd or just different.
And then later that was kind of teased out and we started to have a better understanding of what psychosis actually was.
And autism emerged as a developmental disability, not as having anything whatsoever to do with schizophrenia.
But the root came from that.
Right.
Okay, thanks, Kara.
Jay, tell us about NASA's new mission control.
Well, there's a couple of things going on.
The first one is very brief, but interesting.
NASA has just recently opened the new Orion mission evaluation room, and that's called the MER.
Merr.
Say merr.
I love it.
This is inside the Mission Control Center at the Johnson Space Center in Houston.
The room was activated on August 15th, 2025.
You know, like they turn the lights on, then you hear all like the,
you know, that's how I see it.
It's fun, Steve.
You should try it sometime.
This adds 24 engineering console stations.
They're staffed around the clock during the 10-day mission, the Artemis 2 mission.
These are meant to augment the standard white flight control room.
I guess that's what they call the existing one.
And this is because they're going to have expert engineers from NASA, Lockheed Martin, ESA, and Airbus that are going to be constantly monitoring the spacecraft data, comparing performance against their expectations, and help troubleshoot unexpected issues that always pop up.
It's important to note: like, this is not overkill.
This represents just how complicated Orion systems are and how many moving parts need simultaneous people looking at them to keep the crew safe and the mission on track, right?
It's exactly what mission control is supposed to do.
It's just like mission control on steroids.
Artemis II, as a quick reminder, is the first crewed flight in NASA's modern lunar program.
I'm personally extraordinarily excited about this.
All the reasons why I will probably list most of them in what I'm about to tell you.
The first reason why I'm super psyched is that this is when things start to get really, really exciting, right?
We have the four astronauts that are going to ride Orion on this 10-day mission.
It's called a free return.
And what happens is they're going to go to the moon.
They're going to circumnavigate the moon.
They're not getting off the rocket.
Nothing like that.
This is just people in the ship going around the moon and then coming back.
This is going to prove that the rocket and then the spacecraft and the ground systems are all ready for sustained deep space work, which from here on out, after the second mission, like that's what we're talking about.
Even though it might not seem like a big deal, you know, what's going to happen?
It's going to ride there and come back.
Like, this mission is unbelievably critical and it's really cool.
This is the beginning of crewed missions.
And if things go as planned, they're never going to stop.
Just think about it.
You know, they're building a huge, huge system on the moon.
You know, there's so many different giant pieces of the puzzle that need to be constructed and brought to the moon and a moon base and figuring out all of the technology that's needed.
And then they're going to go to Mars.
You know, if the funding is there and the will is there, it's just going to be, you know, crewed flight after crewed flight after crewed flight, on and on and on.
And I think we're all going to get bored with it at some point.
You know, like it's just going to become so common.
NASA's schedule is that the flight launches no later than April of 2026.
So I remember when we were talking about this, guys, I haven't really been bringing it up that much just because there really wasn't that much to say.
I was waiting for this milestone.
But I remember hearing April 2026 and saying to myself, oh my, that is so far away.
And now it really isn't.
It's less than a year.
Yeah, it's going to come very quickly.
Was the Artemis
project designed in, what, 2018, I think, is when it first came up?
Yeah, I don't remember the date, but the original launch, I think, Artemis 2 was supposed to go off like in late 24 or early 25.
That's what I remember as well.
Yeah, so we had a significant delay.
And again, good, good for them.
Delay it.
You know, we're talking about sending people to the moon again with all new technology.
So they have to get it right, exactly.
So the agency left the door open to fly even earlier than April if the work finishes faster, but the official commitment is still April 2026.
The crew's set, meaning they're selected, and they have been selected for a while.
We have four people going: Commander Reid Wiseman, Pilot Victor Glover, Mission Specialist Christina Koch, and Mission Specialist Jeremy Hansen of the Canadian Space Agency. These guys are going to take Orion out of the garage and take it out on the highway, and this will be the first time since Apollo 17 that a crew will travel beyond low Earth orbit. So these are all profound moves that are happening here.
The hardware status is better than I think a lot of people assume at this point. The Space Launch System Core Stage, right?
This is the rocket, the single-use rocket, right?
They're going to have to build one specifically for each mission if they don't, you know, eventually have SpaceX help.
The Space Launch System Core Stage, right?
You got that, Steve?
Space Launch System Core Stage.
That's essentially the rocket without the Orion capsule.
This arrived at Kennedy Space Center by barge in July of 2024.
That was a long time ago.
The solid rocket boosters were stacked in the vehicle assembly building, right?
These are all things that happened before they start to really build the whole ship out.
NASA reports that the core stage and boosters were connected and integrated on the mobile launcher in March of this year.
And these are the hard milestones and a clear sign that things are definitely a go.
So Orion is past its assembly phase, meaning it's built.
Lockheed Martin says development for the Artemis II spacecraft is not only finished, but it's in launch preparation flow at Kennedy.
I like that they call it flow, right?
So, all the things that they got to do to get it prepared before they attach it to the rocket.
Now, this matters because most of the open risks after Artemis 1 centered on Orion, right?
Not the rocket.
So, if you guys remember now, during Artemis 1,
this was back in December of 2022, NASA discovered an issue with Orion's heat shield during re-entry.
We don't want this problem.
It's a really bad problem to have because this is where people could easily die.
And you don't want them dying literally moments before they touch back down.
The shield is made of something called Avcoat, an ablative material, which basically means it's heat resistant.
It's designed to gradually burn away in a controlled manner to protect the spacecraft.
However, Orion lost more material than expected because there were chunks of that stuff kind of popping off prematurely, in a process that they call spallation.
I've never heard that word before.
Well, this didn't endanger Artemis 1 because, first off, it was uncrewed, and internal temperatures still remained safe.
It was still a big deal.
Like, the concerns were really high for future missions.
There was excessive material loss, and that could allow the interior to get superheated, which means the gases are going to dramatically expand.
And this would definitely pose a threat to anybody that would actually be on a future mission.
So, NASA spent over a year investigating the problem.
They ran a huge number of tests to recreate these re-entry conditions.
They were examining the existing flight data.
And back in December 2024, they identified the root cause.
So, for Artemis II, NASA decided to fly the heat shield as built, which is the same spec as the first one, using the same materials and construction as Artemis I.
While they're essentially relying on updated thermal models, they adjusted re-entry procedures, I guess, changing the angles and stuff like that.
They enhanced monitoring to keep risks within safe margins.
Although, I don't know what the monitoring is going to do if they're on their way in and there's a problem, but I guess they know something I don't.
But a permanent hardware fix, which is going to mean manufacturing tweaks, improvements to how the Avcoat tiles are bonded and layered, you know, all of these details.
These are being developed for later missions, probably Artemis 3 and 4.
It's only going to be implemented after extensive testing to ensure the reliability.
Meaning that if they were going to change something that big and that significant, it would not only delay, but it could throw the whole thing, the whole mission series off kilter, right?
You don't want to like throw in a three-year delay.
Just do it on the next one and they're confident that everything is going to be fine.
The crew is now actively preparing for the mission.
NASA is showing the crew like running the launch day walkout drills.
Like, you know, what happens when the day comes?
This is exactly what's going to happen.
So they have to coordinate everything with all the people that go with them, like that entourage.
This includes people like getting them into the capsule, buckling them in, giving them a pack of gum, slapping them around, you know, all the things that need to happen.
They're rehearsing like nighttime operations.
There's separate updates that they put out that describe the research plan for the mission.
This includes monitoring sleep and activity during the daytime, collecting biological samples on the astronauts to support the human research for deep space flight, meaning they have to know everything about these people just to make sure that they're perfectly healthy and that nothing is going to come up.
They have independent reporting that shows them practicing lunar observation protocols.
I know that sounds simple, but these are very useful backup skills.
Nothing here is fluff.
It's how NASA lowers something called burn-down risks.
A burndown risk is a potential problem or technical issue that has to be fully resolved, tested, and signed off on before major milestones are launched.
And they have them.
They have some risks there that they have to work on.
There's some unknowns, and some of these are quite big.
If anything is going to cause a delay, it's going to be in the next few things I tell you here.
So, life support performance.
NASA has to confirm that the environmental control and life support system works properly inside the fully integrated Orion capsule, and it has to be better than lab testing.
It has to be fully put together and 100% functional.
Heat shield confidence, again, I went over this, but the heat shield has to perform safely for the specific re-entry trajectory Artemis 2 will fly.
It's going to be a different re-entry than Artemis 1, so they have to really, really test that and make sure it's 100% go.
They have something called first crewed mission pacing items.
These are slower checks for safety.
They're required because this is the first flight with astronauts.
This is naturally going to introduce more steps and potential failure points and potential delays, but they have more protocols that they have to go through.
The agency's official timeline remains to be no later than April 2026.
Of course, it will be pushed if they have to push it.
Keep in mind everything I just said.
Everything has to be completely greenlit by all of the engineers and
everyone whose skill set matters here.
Everyone has to give a thumbs up.
If you hear any other dates, from outside sources, you have to be
very skeptical of that.
Like, you should really only listen to the dates that are coming from NASA because
there's been a lot of reports of like other
companies, whatever, like groups that are trying to say this is not going to happen, this isn't going to work or whatever, but they don't have the inside information.
They don't really know what's going on.
NASA, I don't think NASA has any real reason to lie.
They make it very clear,
we're only going to launch if it's safe.
We're saying, you know, April 2026.
And again, we know that they'll delay if there's a problem because they've already done it, and that's the culture at NASA.
So
I trust them and trust the engineers, and I'm looking forward to some spectacular space adventures moving forward in 2026.
Did you guys all see the picture of the
planned mission control?
Yeah.
No.
So that's cool.
Yeah, it's pretty cool.
I mean, it's basically a bunch of big monitors, right?
But just a bunch of computer stations with gigantic monitors.
What games would you play on those monitors?
I mean, those control rooms, like, they don't really differ that much from the historical ones, right?
It's really, it's, like Steve said, giant monitors on the walls, computer monitors and computer desks, like everywhere with tons of people with signs above the desks and all that.
It's the same thing.
It's just, you know, just better modern technology.
I think the old school stuff looked really cool.
I just like the layout, but the new one is cool.
Take a look at it.
All right.
Thanks, Jay.
Bob, tell us about Element 120.
If you insist, more accurately, 119.
I'm not sure why they're focusing on 120, but that's neither here nor there.
Okay, never mind all that crap.
Steve, a new method of discovering new super heavy elements has recently been tested with positive results.
Could this method find new elements that do not exist yet in our periodic table of elements?
Okay, this announcement came from the Lawrence Berkeley National Laboratory.
I'm sure all of you have heard about the periodic table of elements, right?
Most of these elements, you know, they're essentially just lying around waiting for us to catalog them, right?
Just, like, lying right there.
Some of them will never appear naturally on Earth, though.
And I was curious, what is that cutoff?
I wasn't 100% sure.
Do you guys know what's the heaviest naturally occurring element that forms on Earth?
92. 92 protons, if that's what you meant.
That is true.
That is correct.
So uranium-238 with 92 protons and 146 neutrons.
But then what I didn't know was that what is the heaviest natural element that we know of?
And it's not uranium-238.
It's plutonium-244,
which we found in some meteorite dust.
But plutonium-244 apparently has a half-life of like 80 million years.
So if some were created on the Earth, it's already decayed away.
So it's not totally fair to say that.
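To put a rough number on that, here's a minimal back-of-the-envelope sketch in Python (my illustration, not from the episode), assuming the roughly 80-million-year half-life Bob mentions and an Earth age of about 4.5 billion years:

# Roughly how much primordial plutonium-244 would be left today?
# Assumptions (not from the episode): ~80-million-year half-life, ~4.5-billion-year-old Earth.
HALF_LIFE_YEARS = 80e6
EARTH_AGE_YEARS = 4.5e9

elapsed_half_lives = EARTH_AGE_YEARS / HALF_LIFE_YEARS   # about 56 half-lives
remaining_fraction = 0.5 ** elapsed_half_lives            # about 1e-17

print(f"Half-lives elapsed: {elapsed_half_lives:.1f}")
print(f"Fraction of original Pu-244 remaining: {remaining_fraction:.1e}")

So only about one part in ten to the seventeenth would survive, which is why essentially none of the primordial stuff is still around to find.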
But it is correct, though, that
all the elements beyond plutonium were never just found.
They had to be synthesized.
They had to be created.
So have you ever wondered how they create even heavier synthetic elements to add to the periodic table?
All the time.
I've wondered.
I mean, yeah, I've thought about it.
I've read some stuff about it.
But
what I learned recently, though, was a lot of it was new to me.
What they do at a super high level is they smash elements together one way or the other.
They're just smashing them together and hoping that the protons and neutrons of one nucleus fuse to the nucleus of another atom.
That's kind of what we're doing here.
And we've talked about that in the context of colliders and things like that plenty of times.
So if you add new protons to a nucleus, you have created by definition what?
A new element.
Exactly.
A new element since the number of protons defines what an element is.
So, for example, all elements with six protons are carbon atoms.
There's no other way.
There's nothing else that they could be except carbon atoms.
This number is the atomic number.
But if you change the number of neutrons in an element, that just changes the isotope of that element.
So, say you go from deuterium to tritium.
That's all that is.
It's still a form of hydrogen.
It's just a different isotope.
And atomic mass, you don't necessarily need to know that much for this talk, but atomic mass is the protons plus the neutrons.
The atomic number is just the protons.
That's the critical one.
That defines the new element.
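Since Bob leans on this bookkeeping for the rest of the segment, here's a minimal Python illustration (mine, not from the episode) of the rule: protons name the element, neutrons only change the isotope, and the mass number is just the sum.

# Protons (atomic number) define the element; neutrons only change the isotope.
# The element names below are just a small lookup table for the examples discussed.
ELEMENT_BY_PROTONS = {1: "hydrogen", 6: "carbon", 20: "calcium", 22: "titanium",
                      92: "uranium", 94: "plutonium", 116: "livermorium"}

def describe(protons: int, neutrons: int) -> str:
    name = ELEMENT_BY_PROTONS.get(protons, f"element {protons}")
    return f"{name}-{protons + neutrons} (Z={protons}, N={neutrons})"

print(describe(1, 1))    # hydrogen-2 (deuterium): still hydrogen, different isotope
print(describe(1, 2))    # hydrogen-3 (tritium): still hydrogen
print(describe(20, 28))  # calcium-48, the old beam
print(describe(22, 28))  # titanium-50, the new beam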
So, all right.
So, the old method of doing this used a particle beam of calcium-48.
I did not know this at all.
They essentially used a particle beam weapon.
I mean, I don't know how much of a weapon that would be, but
I wouldn't either.
It's a particle beam of calcium-48 with 20 protons and 28 neutrons.
So, this is a rare isotope of calcium.
So, imagine you have a beam of calcium atoms with no electrons, right?
Just a nucleus, and they all hit other heavy elements like curium or californium.
One, so you've got millions, billions of these calcium ions that are just impacting onto this californium, say.
So once in a while, one of those calcium-48 bullets would fuse to a californium atom instead of just bouncing off.
And that's literally like the odds of that happening are one in
like quadrillions.
But, you know, with enough, if you have enough of these atoms in your calcium beam, it's going to happen.
So after fusion takes place, what do you got?
You've got a new element since the number of protons has changed.
No matter what you do to the protons, if you add one, take away one, or anything like that, you now have a new element.
So these super-heavy atoms don't last long, but we can detect the decay chain of the elements.
And once we can detect what this mysterious thing decayed into, what that decay chain is, with the daughter elements and granddaughter elements, if you will, then you can definitively say what had to have existed to create those daughter elements.
You can follow that.
So
it's like cataloging your daughter's DNA and her son's DNA to conclude that you definitely had to exist.
So they're seeing this decay chain.
Do you like that one, Kara?
So you see this decay chain, and they say, well, for this decay chain to exist,
this element had to have created it.
So that's their evidence, and it's pretty damn solid.
The specific method that I've been talking about, about this calcium-48 beam,
it has actually helped us find elements 113 to 118 in the early 2000s.
Or should I say aughts?
I just just don't like saying aughts.
Does anyone like saying aughts?
I do.
Oh, wow, you're a weirdo.
So
unfortunately, though, this calcium beam technique has reached the end of its useful life in finding new elements.
It's just not heavy enough to create an element above the current heaviest element, which is 118, or oganesson is one way to pronounce it.
Wow, I would have never guessed that as true.
Yeah.
118, 118.
So we need something heavier.
Calcium-48 just isn't cutting it.
It doesn't have enough oomph.
You know, there's not enough power behind this.
We need something a little bit more formidable, a little bit heavier.
And this is where this news item kicks in.
And it's called titanium-50.
It's a beam of titanium-50, which is a little bit more than 48, right?
Yes, I agree.
Yes, yes, thank you for agreeing.
This new particle beam the team has developed, titanium-50.
Oh, yeah, I have it right here: 22 protons and 28 neutrons.
It's been tested essentially as a proof of concept for creating super heavy atoms beyond 118.
So, this is what this was their goal.
They've been developing this new titanium-50 beam for quite a while, and they're like, Let's test this out, let's just see what it can do.
They weren't expecting to make any huge major breakthroughs, they just wanted a proof of concept.
So, to do this, the researchers sent the new beam against a target of plutonium-244, the heaviest natural element that we have encountered.
They shot the beam against plutonium-244, and when the titanium nucleus and plutonium nucleus fused, they briefly created a new, heavier nucleus.
And what it created, what they found, was actually two atoms of element 116, called livermorium.
I mean, when did that name... I don't remember. I guess I remember the old Latin name for these elements, but they must have renamed it and I missed it, because I never heard of livermorium before.
It sounds vaguely funereal, doesn't it?
Okay.
It must be named for somebody named Livermore.
Well, right.
The Livermore Laboratory.
Yes, I suppose.
Oh, damn.
There you go.
Lawrence resolved.
Is it Lawrence Livermore?
Yeah.
Is that right?
Yeah,
I believe it is.
You are correct, sir.
So they use this new technique, this new beam, and they found two atoms of element 116.
Now, this element, like I said, it's already been found.
It was found using the calcium beam probably back in the aughts.
But like I said, this was a proof of concept.
And they pretty much, well, you know, proved the concept.
The odds, though, were against their success.
Like I said, it's like only a few nuclei within a quadrillion of the tries should have done this.
And of course, if your beam is, you know, big enough and long enough, you're going to, you're going to eventually hit it.
All right, so what does this mean?
This means that titanium-50 could work for perhaps at least the next few elements.
So we may be able to get to 119, 120, maybe 121 at least.
If we are super lucky, it could last us, it could help us discover a few more after that.
But I suspect that we're going to probably need another type of beam after 121 or so.
But now,
this is one thing that caught me by surprise.
These next elements, though, 119 and 120, they could be extra special for a couple of reasons.
One is that element 119 would be a new row in the periodic table, a new period, I guess is what they would say.
Because all seven rows or periods are basically filled up right now.
So when they discover 119, then
it's going to go to a new row.
So who here, which of you guys knows what the rows of a periodic table or what those periods, what do they reflect?
What is the significance of the surface?
Electron shells, right?
Yeah, it's essentially how the electrons are arranged around the atomic nuclei.
So all of known chemistry, everything we know about chemistry, fits in the seven rows of the periodic table of elements right now.
If or when we confirm the next heaviest element, say it's 119 or 120, we're not sure which one it would be, we will then be on a new row.
It would be row eight or period eight.
In which column?
Well, far left.
It would probably be, yeah, 119 would be the farthest left.
Okay, so it's expected, though.
So, what's going to happen in row eight, right?
We can't be 100% sure, but we do very strongly suspect that relativistic effects could strongly influence the electron behavior.
One website was saying that the electrons are essentially traveling near, you know, close to the speed of light.
So that's why they're saying that
relativistic effects could have some influence here.
So these elements, who knows, they probably won't follow expected chemistry patterns, right?
We're not sure what kind of chemistry these things could engage in, but not that we would ever see any chemical reactions, right?
These are ultra-heavy atoms.
Their half-lives are probably, they're in the microseconds.
They're very, very super brief.
So
there's not going to be any real chemistry going on there unless,
unless, of course, that there's that holy grail of chemistry known as, and I've mentioned it here and there on the show and even just talking with you guys recently, the island of stability.
Evan, you and I were talking about this.
That's one of the things that some nuclear theories predict.
There may be some very heavy elements that might have considerably longer half-lives.
Instead of microseconds, it could be whole seconds.
Imagine a whole second or minutes or even days.
I mean, you can't rule that out.
I mean, maybe it's unlikely.
Some theories point to it.
And this would be due to some special, some call it
some magical ratio of protons
to neutrons.
They say that that could make these super heavy new atoms just extra stable, so stable that
they could last far longer than the microseconds that these
super heavy elements typically last.
So who knows?
I mean, who knows what we could learn if we had that much time to play with the super heavy element?
You know, even
seconds, I think, would allow us to do a lot more testing, far more than what we could accomplish with just microseconds.
We could do something, something more substantial in just looking at the
decay of its daughter particles and stuff like that.
So I have a silly hope.
It's a silly hope.
I don't tell too many people, but
sometimes I think, imagine if my what-if scenarios here.
What if...
We all have what-if machines.
Yeah, right.
What if at the highest levels of what's possible with technology,
it could be reasonable or feasible to create a technology using these elements with half-lives that go not even seconds, days, or I'm talking like imagine half-lives in the years or even decades, which I'm not aware of any theory
that says that that's even a reasonable expectation.
But imagine that.
This is the kind of stuff that I would expect from super advanced aliens.
Having materials with radical new properties based on these relativistic or quantum effects that this super heavy element in this island of stability could have.
I mean, I did some research.
What kind of abilities could these have?
You know, it could be stuff like super dense fuels, imagine super compact reactors that you could, like, put in your phone or something crazy like that.
Whole new branches of chemistry.
Oh, here's a good one.
Element 126 armor plating.
All right, I'm going to stop right there because that's just really goofy.
I mean, nobody's saying that
this island of stability would be that awesome.
I mean, I think they'd be incredibly happy if it lasted for a few seconds or a minute.
But who knows?
Who knows?
Once we get there, they may be so ridiculously stable that they could have a half-life.
Don't count on it, but hopefully, we could, at the very least, we can find elements using this new technique, this titanium-50 beam.
We could find 119 and 120, and maybe even element 121, and see what this period 8 is all about in the periodic table of elements.
So long, calcium-48.
We'll miss you.
Thank you for your help.
You served us.
Sure, man.
Bob, how are you feeling about quantum computers?
Pretty good.
Pretty frustrating.
You're there.
It's frustrating.
You know, I'm just, they just got to, they have to focus, and they are focusing to a certain extent, I'm sure.
Error correction is key.
It means
you wouldn't even need that many qubits.
If you had negligible errors,
with 200 qubits or even less, you could do some amazing things.
The error correction is what's taken up so much of the effort because it's so hard, right?
If they can crack that nut.
And I really don't know what you're going to be talking about.
Yeah, you don't.
So Caltech,
Caltech just set a record with a 6,100-qubit array.
No, no, no, wait, wait, wait.
What does that mean?
Wait, wait, wait.
There was only 1,000 like a few months ago.
It's huge.
That is huge.
It doesn't mean much.
What's the error correction?
But that's not what I'm talking about.
That's not what you're talking about.
But yeah, you know, but you should.
An Australian startup, Diraq, has now shown that they can maintain the 99% accuracy needed to make quantum computers viable.
This is with production of silicon-based quantum chips.
That's not what I'm talking about either.
But this is the
quantum computer news that we see all the time.
It's just so hard to know what to make of it.
I know.
We do appear to be making steady advances, but that doesn't, as you say, Bob, doesn't give us a good feeling for how closer we are to really functional quantum computers, you know, where you get quantum supremacy, where it's doing stuff we couldn't do without them.
Some claim that already, but I haven't taken a deep dive into that in a while.
I'm not sure how accurate those claims of supremacy are, but okay, continue.
All right.
So I'll just say, the number of qubits we're lashing together is not the only piece of information that's important to understanding quantum computers.
And just for quick background, for those of you who don't know what we're talking about, regular computers use bits of data like ones or zeros, right?
Anything that's binary.
It could be any state, like a switch is either on or off, or a gate is open or closed, or whatever.
Quantum computers use qubits, which essentially have their bits in a state of superposition.
So it's not a one or a zero, it's a superposition of one and zero.
That's one of the quantum, weird quantum effects that are critical to quantum computers.
The other one is that the qubits need to be entangled.
And it's the entanglement that actually makes the quantum computers work.
That's how you connect them into a circuit.
And both the superposition and the entanglement mean that we need to maintain these quantum states while the calculations are underway.
But these quantum states are very fragile.
You need to have super cold temperatures, you know, single-digit degrees Kelvin, for example.
This is why it's never going to be sitting on your desktop, or at least not with any extrapolation of current technology.
This is always going to be like governments and countries, you know, and wealthy institutions may have these to do, again, the kind of computing that you can't do with classical computers.
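For anyone who wants that background in concrete terms, here's a minimal Python sketch (an illustration, not from the episode) of what a superposition and an entangled pair look like on paper: amplitudes whose squared magnitudes give the measurement probabilities.

import math

# A single qubit in an equal superposition of 0 and 1.
alpha = 1 / math.sqrt(2)   # amplitude for |0>
beta = 1 / math.sqrt(2)    # amplitude for |1>
print(f"P(0) = {abs(alpha)**2:.2f}, P(1) = {abs(beta)**2:.2f}")  # 0.50 each

# Two entangled qubits (a Bell state): only |00> and |11> have amplitude,
# so measuring one qubit immediately tells you the other's result.
bell = {"00": 1 / math.sqrt(2), "01": 0.0, "10": 0.0, "11": 1 / math.sqrt(2)}
print({outcome: round(abs(amp) ** 2, 2) for outcome, amp in bell.items()})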
All right, so this is where the breakthrough comes in now: in the entanglement part of this.
One of the huge limiting factors is how far apart the two entangled qubits can be because they have to be isolated.
So, one of the analogies given in the study is: imagine two people in a soundproof booth, right?
Like Get Smart.
Yeah.
So they have to be in a soundproof booth in order to limit the noise, because it's the environmental noise which breaks down the entanglement.
But that also means they have to be close together.
So you can't have somebody far away because then they'll be outside the soundproof booth.
But what if?
What if
you could connect soundproof booths together?
so that they can communicate with each other while still being isolated from outside noise.
So that's kind of the idea here.
So what they did,
what the researchers did is they found a way to keep the systems isolated to maintain entanglement and minimize noise while simultaneously giving them the ability to communicate over much longer distances.
So they're using nuclear spin as the information holder, you know, the spin of phosphorus nuclei.
That's their qubit, right?
The spin of a phosphorus nucleus.
And they keep it in a clean quantum system
by surrounding it with an electron.
And they demonstrated that they could maintain an entanglement for 30 seconds, which is a massive amount of time when you're talking about quantum computers, with, Bob, less than 1% errors.
So that's a very low error rate over a very long period of time.
This is a good workable quantum system.
But now they've taken it one step further.
They've figured out how to manipulate the electron so that its orbit can essentially surround two phosphorus nuclei, which electrons do, right?
Nuclei can...
Yeah, they can share electrons.
But this enables two nuclei to communicate with each other over 20 nanometers.
Now, again, 20 nanometers is a very short distance, but you know what that's on a par with?
Our current manufacturing techniques for regular silicon computer chips.
Oh, yeah.
Right?
That's what I'm doing.
Oh, so we can use the material we've already got.
Yes.
So the idea is we could use manufacturing techniques we already have to make stuff at the 20-nanometer scale, and that could be applied to this system because you're dealing with the 20-nanometer scale.
So they proved that this works.
Basically, that you can have a quantum entanglement in two qubits separated by 20 nanometers and
using this phosphorus nuclei spin as
the qubit system.
So this could be, again, is this going to be the basis of future quantum computers?
It's too early to tell.
But
they are progressing nicely.
The thing about this system, which they say is a massive breakthrough for quantum computers, is that it's scalable.
Because you could just keep adding phosphorus nuclei and connecting them with other phosphorus nuclei using this shared electron technique.
They said they see no reason why they can't just keep scaling this up.
And the scaling is, of course, that's the main limiting factor with quantum computers, is making it bigger and bigger.
So we'll see where this plays out.
I mean, it may be years, you know, before we really see this mature into the kind of thing where you're mass-producing quantum computers, you know.
Neat.
But it's the right path.
Yeah, but
this seems like a very encouraging path.
But even still, I mean, it's still like just to give people an idea of
why do people talk about quantum computers and what are they, and how do they work?
Nobody knows.
I mean, basically, it's complicated, it's super complicated.
Every time I think I understand it, someone's like, no, it's not really that, it's really this other thing.
Well, who famously said, like, if you think you understand it, you don't really understand it.
Yeah, yeah.
Feynman, I don't remember.
It's super complicated.
When I wrote about it recently, I talked about quantum encryption because this is like the big thing with quantum computers.
Once you get a really powerful quantum computer, it kind of breaks all old encryption, and you need a quantum computer to make encryption that another quantum computer can't crack.
But then it was pointed out that, yeah,
there are ways to make quantum computer resilient encryption that doesn't require a quantum computer.
Exactly, yeah.
Yeah, so I see.
So we're already working on that.
But still, it seems like there could be huge technological advantages to having a quantum computer.
And you don't want your adversaries to have one when you don't have one.
So I think that's what's fueling a lot of this research.
So, you know, when will we have like mature quantum computers?
I don't know.
It's so hard to tell, even reading these kinds of news items.
It's very sexy.
It's very exciting.
This sounds like a big breakthrough.
It all makes sense.
Sure.
You can have these entangled qubits that are stable over 30 seconds and over long distances at the distance of manufacturing existing computer chips.
I get all that.
I just don't know how meaningful it really is.
Do you have any other thoughts on that, Bob?
No,
the error rate is encouraging.
Yeah, the less than 1% error rate is very encouraging.
And the scalability is encouraging as well.
So yeah, definitely be tracking this one.
Yeah.
Yeah, we'll be tracking it.
Maybe one day we'll be able to report that we have a really significant usable quantum computer.
All right, let's move on.
All right, Evan, tell us about artificial intelligence and lying, but maybe not the way you think.
Yeah, exactly.
And there's a study out.
It was in Nature, and it made the rounds in the media this past week, and the headline, and this is what drew me in, was:
Using AI increases unethical behavior.
We know that headlines are never the whole story, so we have to definitely take a closer look at that.
What did this study actually show?
How worried should we be about a supposed impact of AI on human morality here?
So you go to the paper.
The paper is titled, Delegation to Artificial Intelligence Can Increase Dishonest Behavior.
They ran 13 experiments with over 8,000 participants, and the researchers explored what happens when people can delegate tasks to AI systems compared to people doing those tasks themselves.
I would say that the central question here wasn't just, you know, will people cheat if given the chance?
You know, we kind of know that answer.
But the deeper question was, does delegating tasks to AI change the psychological dynamics in a way that make cheating more likely?
So there is a distinction there.
And the experiments were built around controlled tasks where participants could benefit financially by being dishonest.
This was the test.
The die roll game.
Apparently, psychologists have been using this for decades.
Is this true?
Have you heard of the die-roll game?
I'll explain how it works.
There's a lot of paradigms like that.
Roll a six-sided die and keep the result to yourself.
You then report your result to the experimenter.
And the higher a number you report, the more, say, money you get, as an example.
So let's say I roll in secret, there's a three, but I tell the experimenter, hey, I got a six.
I'll actually get more cash or
whatever the reward is.
And there's no way to prove essentially that you're lying.
It's almost like liars' dice in a way.
You know, that game they played on Pirates of the Caribbean, in a sense, when you can call the bluff.
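For a sense of how researchers can detect cheating in this setup without ever catching an individual liar, here's a minimal simulation sketch in Python (mine, not from the paper): honest participants report their actual roll, maximizers always report a six, and the aggregate distribution gives the game away.

import random
from collections import Counter

def report(roll: int, honest: bool) -> int:
    # Honest participants report the true roll; a maximizer always claims a six.
    return roll if honest else 6

def run(n: int, honesty_rate: float, seed: int = 0) -> Counter:
    rng = random.Random(seed)
    reports = Counter()
    for _ in range(n):
        roll = rng.randint(1, 6)
        reports[report(roll, rng.random() < honesty_rate)] += 1
    return reports

# With everyone honest, each face should appear about 1/6 of the time.
print("90% honest:", run(10_000, 0.90))
print("15% honest:", run(10_000, 0.15))  # sixes dominate far beyond 1/6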
But in any case, so in the study, people rolled and reported directly to another person.
There was no AI involved.
And then others delegated the reporting to an AI agent, and they used different kinds of interfaces.
Some used a rules-based system in which they said report it honestly.
Some used what they're classifying as like a supervised learning.
They train the AI by selecting example data.
And then there was goal-based to maximize your profit.
You know,
let the AI figure out how best to, you know, make it come out in your favor.
They also, at the same time, ran a more realistic, say, tax reporting game.
And this is where I got interested, in which the participants reported their income and they could choose to underreport it if they wanted to.
And you do it with people and you do it with AI agents to see what the differences are there. You know, how are people going to report that to a person, someone like me, a tax preparer, versus having an AI take care of that reporting for you?
So here's what they found.
Delegation did increase dishonest behavior, they're saying.
They said even with the most explicit rule-based interface, people who used AI were more likely to cheat than those doing the tasks themselves.
The effect skyrocketed with the vague goal-based interface.
That was the one where basically saying, hey, maximize this for me.
And they turned it over to the AI to do that for them.
And in that condition, they said 12 to 16% of participants stayed completely honest, and everyone else cheated to some degree.
So basically, the more abstract and hands-off the delegation, the easier it became for people to let the AI do their quote-unquote dirty work for them.
And also, the AI agents were far more accommodating than the human agents.
And
this is where I have a little
practical experience with this.
When I'm sitting down with someone for a tax appointment and I'm talking to them about it, I would definitely say that they would be less inclined to be liberal with their answers to me, as opposed to an AI, because of the interaction that we have.
You know, because I make sure that they're trying to be as accurate as possible.
That's part of my job because I don't want to get my clients in trouble.
I'm trying to save them basically from themselves and point out where certain things might be, say, red flags for the IRS.
For example, somebody comes to me and says, hey, I earned $100,000 last year and I gave $50,000 to qualified charities, so I get a charitable deduction; I don't have to pay taxes on $50,000, half of my income, because I get to write that off.
That is outside the boundaries of
the normal statistics, and that is an outlier.
I would therefore press back and say, you need to make sure you can produce your receipts and do all these kinds of things.
Make sure you've got it ready because this is a highly audited item.
The IRS is going to come back and ask you to prove it.
So I encourage them to do that or to change their answer.
Well, yeah, it wasn't $50,000.
It was actually $5,000.
Okay, that's more of a number that would be believable.
Whereas if they go and they do that with a computer and AI or something else like that, an AI will be, generally speaking, more accommodating in allowing them to go ahead and report that $50,000 without the pushback.
Yes, but to be clear, Evan,
people were no more likely to request unethical behavior from the AI than from people.
So they still asked people at the same rate to do the cheating for them.
Right.
Unless there were guardrails.
Now, what you're talking about is that you provide guardrails.
Right.
Yes.
So that's two different things.
So, as you say, the AI may not make people request cheating more, but it's more likely to do it and not ask any questions.
And that was
my answer.
That was a great idea.
Let's do that.
Yep, the guardrails.
And that is kind of the point.
And the authors of the study also definitely point this out: that better guardrails need to be incorporated into these systems to protect people, basically, you know, from themselves.
And I think the tax reporting example is a good example of this, you know, a practical one that a lot of people can understand, showing how they can be led astray, in a sense, and get themselves into, frankly, trouble in this way.
So again, the data showed that delegation to AI lowers psychological barriers to unethical behavior.
The effect is strongest when instructions are vague or high level.
I don't think any of that's surprising.
And that AI systems, at the moment, are more compliant with, say, unethical requests than human agents are.
Now, what about the headline, though?
You know, using AI makes people unethical.
That's an oversimplification.
You know, it definitely always needs more nuance.
We've talked about the misleading headlines and things like that.
So you really, that's a tough one to swallow right there.
Maybe they should have said something like, you know, delegating to an AI can increase dishonest requests, especially with vague interfaces.
That might have been a more
accurate headline in a sense, even though it's a subtle difference, but still, you know, pretty important one as far as I'm concerned.
And again, we need to design systems that minimize moral wiggle room, and we need accountability mechanisms that keep people in the loop.
So an interesting study, definitely informative, but never go by just the one study, and always read a little deeper into it.
All right.
Thanks, Evan.
Kara.
Thank you.
Yes.
Tell us about scams and fraud.
So we often talk about scams and fraud on the show.
There's a new article in The Conversation by Rahul Telang, a professor of information systems at Carnegie Mellon.
He writes about scams and frauds in the age of AI and crypto, because, of course, we see this all the time, whether we're talking about, you know, frauds to make money or pseudoscience: the same rhetoric gets repackaged with whatever today's zeitgeist allows it to be.
I don't know, I know this is an aside, but I don't know if you guys were following all of the like rapture stuff on TikTok this past week.
And I was like, God, this, it's so old hat.
It's like all the same stuff, except it's, like, Gen Alpha people who are talking about it.
There's like a very modern spin.
Maybe, oh, their first time hearing about it.
This is a regularly occurring thing.
Exactly.
Well, I mean, it's just that these things keep getting repackaged over and over with whatever the technology of today is.
And the technology of today is AI.
And so the professor who wrote the article, he talks about
sort of emotional tactics, first of all.
He talks about things like duty, fear, and hope.
And he says that most scams work by targeting an individual's sense of duty, fear, or hope.
So duty refers to, you know, if you're an employee and your employer asks you to do something, you feel a sense of duty to do that.
Fear is the idea that maybe somebody is telling you that, like, a loved one or somebody that you really care about is in danger.
So, you need to do something to help them.
And then, hope is often like, you know, investment scams or job opportunity scams.
They talk in the article about
specifically AI-powered scams and deep fakes, and then after that, cryptocurrency scams,
both of which are sort of, like I mentioned, repacks of age-old,
you know,
scammery.
There's got to be another word for that, right?
Swindling.
What are all the words we often use?
Flim-flammery.
Age-old, flim flam.
Flim-flammery, actually.
But repackaged for a modern era.
We've talked about this before.
I know, Jay, you've covered like AI
and like AI deception quite a lot.
Yeah.
So we've got to remember that this is not a "this could happen in the future" thing.
Like this is happening right now.
A little bit of statistical data here.
Just documented.
Well over 100,000 deep fake attacks were recorded back in 2024.
And in just the first quarter of this year, of 2025, individuals who were swindled,
so these are people who actually reported it, said that they were swindled out of more than $200 million.
And this is all from individuals using AI-generated audio or video to impersonate other people.
Oh, no.
Yeah.
So whether it's, hey, grandma, I'm in trouble.
I'm, you know, I'm overseas and I really need some money because I lost my passport.
Or it's, hey, worker, I'm your CEO and I need you to do X, people are falling for them.
You know, very often there are different kinds of ways that they go about it.
So we talked about like fake emergencies.
That seems to be one of the hardest ones to fight against because there's so much emotional manipulation and it's a lot harder to check against the fraud.
But we do see, you know, kind of tech support scams happening a lot in corporate settings, where somebody will get like a pop-up on their screen that says either there's a virus or there's some sort of identity theft and you need to call a number, or, you know,
somebody will get called directly from a number.
And then while they're on the phone with tech support,
they'll be like, okay, I'm going to take over your computer.
And you guys have all done this at your actual jobs, right?
When something's wrong with your computer, the tech support at your job will like be granted remote access to fix the thing.
But when it's a nefarious actor and not actual tech support
within your job, they can install malware.
They can steal a lot of information.
I mean, so many things can happen.
There's also examples here of like fraudulent sites that impersonate like ticket sellers or universities or people being offered fake jobs and then having like placement fees taken from them, or having, you know, personal data stolen.
But they also talk about crypto scams.
And I mean, I've got to admit, Jay, you may know all of this terminology, but a lot of this was new to me.
Like, you know what a pig butchering scam is?
I actually don't.
What is that?
Okay, so it's like a hybrid.
It's a hybrid.
It's sort of a crypto scam.
It often involves crypto.
And then it's usually some sort of like romance scam or catfishing scam.
Sometimes it can involve investment fraud.
So basically, the scammer builds trust over like weeks, months, maybe even years with a victim because they're either supposedly dating them or they're investing a lot of time in them.
And eventually they have them invest in a fake crypto platform, or otherwise send them money, and then they'll extract a bunch of it and vanish, usually using crypto because crypto is very hard to trace and recover, right?
And there's really not a lot of recourse.
Like if somebody exploits you using crypto, you can't really do a lot about it, right?
It's not FDIC-insured money.
Also, there's pump and dump scammers.
You've probably heard of that.
So that's like,
we often think of it in terms of the stock market, but like, let's say the scammers will artificially inflate the price of like a crypto that's not really worth a lot through hyping it up on social media.
So they'll get a bunch of investors.
And then the minute that people start buying it like crazy, they just dump it off their holdings, right?
So they pump and then they dump, and everyone else ends up holding all of this worthless crypto.
And then finally, the author talks quite a bit about phishing scams.
We just had a science or fiction about that.
And also, have you guys heard of smishing?
Smishing?
I feel like this is just like, this is just a thing.
I do that with my wife.
Right?
Like, I feel like this is a thing that's actually going to catch on, because there's an FCC article about it. Because I was like, what is smishing?
And I Googled it, and it's like, the FCC is writing about smishing.
Basically, smishing is just a portmanteau of phishing and SMS, right?
Or text messaging.
So it's phishing via text as opposed to phishing via email.
Phishing, I guess, is specifically an email scam, and smishing is a text message scam.
But those are rising all the time.
And because of tools like AI, whether we're talking about artificial voices, making artificial videos, or manipulating imagery, it's just it's cheaper and easier to do now.
So you have these sort of like scam farms, these huge organizations that are able to do this and exploit victims cheaply, easily, and then
vanish just as quickly as they arrived.
So we've talked about this before.
You know, how do you protect yourself?
Well, we know that, like, what did we just talk about, Steve?
Third-party apps, you know, using two-factor authentication, you know, any sort of additional security that you can use, making sure that, you know, when you're on a website, it's legitimate.
But honestly, that's getting harder.
Like back in the day, you could almost be like, you fell victim to a phishing scam.
That's embarrassing for you.
Did you notice that it was eBork that was asking you for like a, you know,
some payment?
But now, like, people are cloning whole websites and they look the exact same.
And they're even cloning internal company, you know, videos, or sounding like the company CEO, and it's coming from emails that look the same.
So it's getting harder and harder to recognize that.
But of course, don't click on suspicious links, don't download attachments from people you don't know.
Like we said, enable two-factor authentication.
Remember that most legitimate businesses are not going to ask you for information.
They're definitely not going to ask you to send them money.
It does seem to be the case that the pig butchering type scams and the personal relationship type scams are just, they're just a lot trickier.
But more and more, we're seeing organizations and governments are posting some information on what to do, how to avoid it.
And if you do feel like you're involved in a scam, who to reach out to, like the FBI.
Again, this is age-old fraud.
It's all the same stuff that always happened.
A swindler is going to swindle.
You've got to protect yourself.
But in the age of AI and cryptocurrency, they can do it faster, easier, cheaper, more efficiently, more effectively, and without a trace.
And so we've just got to remember that if we are victims of these types of scams, we probably have less recourse.
And it's kind of gone are the days that it's like, fool me once, you know, shame on you.
Because I think a lot of people can be fooled pretty readily, even very savvy people.
So you've got to get your hackles up.
You've got to stay skeptical.
Absolutely.
Yeah, you're right.
I mean, even with, you know, how much my radar is up for this all the time, every now and then I still almost click things I shouldn't click.
Of course.
Because they seem to be coming from a legitimate source.
Or the timing is a coincidence.
That's usually what gets me.
Like the timing is.
But the thing is, you have to realize that there's so much going on, you're going to get that
incidental timing every now and then.
You know, like I just did something and then I get an email that might relate to that thing and it's just specific enough where
you think it's, oh, yeah, this is the follow-up of that thing that I just did.
But wait a minute, is it?
And that's the ploy, right?
Because if we can send out thousands, hundreds of thousands of these emails,
scammers, yeah, somebody's going to click.
It's terrible.
I also just think, you know, relying on like everybody doing the right thing every time
is not a good strategy, just statistically speaking.
Yeah.
And because they just overwhelm the statistics by just flooding the zone with scams, you know?
And so that's the world we're living in, where we're constantly being bombarded with attempts at stealing our information and stealing our money.
Who wants to live in that world?
There has to be something we could do at the infrastructure side to just lower
how easy it is to just mass produce scams.
Steve, I hate to say this, but the political will has to be there.
Yeah, of course.
And it's not.
Yeah.
I agree.
And I think that, you know, organizations that are offering us the products, you know, the banking products that would allow us to be scammed, they need to see that there is a capitalist incentive to help protect us, right?
I would much rather use one of my credit cards online than a debit card because I know that if somebody steals my credit card.
You'd have protection with a credit card.
I have protection.
Less with a debit card.
I think banks, especially online banks, are getting very careful.
I've been recently dealing with that.
And I had to download an authenticator app.
that just exists solely to be another layer of authentication for these types of interactions.
And that's fine.
I'm doing basically three-factor authentication now.
Yeah, same.
One of my banking apps is just as intense as my hospital records app.
Yeah.
Like, it's amazing.
I mean, it's annoying, but I'm like, okay, it's like, all right, here's my two licenses.
Here's like all this paperwork.
I have to prove who I am.
Like all these things.
It's like, okay, I get it, though.
It's a bank.
Yeah, and I get why.
Yeah, we're talking a lot of money.
And the truth of the matter is, like, I think we have to
be more vigilant.
And yes, be more vigilant with clicking links and all of that, but also like with your actual information.
You know, in the past, I might have been that person who like didn't really look at the receipt before I signed it.
But now I'm the kind of person who uses, you know, software both for my personal banking and my business banking, where, you know, every few days I'm going in and I'm reconciling each transaction line and I'm constantly looking to make sure that everything is up to date.
Are you finding any weird stuff?
No, I mean, if anything, it's just making me a better bookkeeper.
Every time there's weird stuff, it's user error.
I'm all for that.
Yeah.
Yeah, it's like, you've got to look at it:
I listed something as one kind of transaction when it should have been an asset, and blah, blah, blah.
But I'm learning a lot, and yeah, it is definitely helping because the quicker you can figure these things out, the quicker you can try to do something about it.
But I have a feeling the numbers that are reported are exceedingly low.
Yeah, it's probably 10 million.
That's right.
It's embarrassing.
It's embarrassing to say, you know, I fell victim to somebody who pretended to be my grandson who was, you know, stranded and needed cash from me, and I gave him cash really quickly.
Like, what a bummer.
Yeah, and the elderly are high targets.
They're a target because they're not as savvy, and sometimes they just have mild cognitive impairment or whatever.
Or they're more isolated.
And they're more likely to be emotionally manipulated into helping people who depend on them.
The older you are, the more likely you are to have people who depend on you because you might have children and your children have children.
So we need to watch out for our parents as well, or whoever our elders are.
We have to be part of that team to help them.
Yeah, but don't think you're immune if you're young.
Nope, because you're not.
No, none of us are.
All right.
Thanks, Kara.
Jay, it's Who's That Noisy Time?
All right, guys.
Last week I played This Noisy.
Okay, ha ha, everybody knows what that sounds like.
I got
I'm glad you
but I, you know, I got tons of emails.
Like, people are like, it's someone peeing in an airplane flying over New Mexico.
You know, it's like,
God, the only guy that wasn't New Mexico.
It's funny, I know, but it's not what it is.
And I would never do a noisy of someone peeing unless it sounded really cool.
No, but it's funny.
I got you, guys.
Thanks.
But I did get some legitimate guesses.
Oh, if you guys can only be a fly on the wall of the wacky emails I get.
All right.
But before I get into that, I'm going to do a correction of a noisy a couple weeks ago.
Remember the one I explained to you?
It was a recording of someone who spoke out loud in a room.
They recorded themselves.
Then they uploaded that sound file, downloaded it, played it open air again, and then uploaded it again, I guess, whatever.
Okay, it's a little complicated, but basically there was like massive distortion going on over the iterations to the point where you couldn't understand anything anymore.
Okay, so that was called, it's Alvin Lucier's I Am Sitting in a Room bit, right?
So the person that wrote in said, well, many people wrote this in, but this person in particular said:
So he's continually re-recording a playback of his own voice, and the resulting degradation of the sound is less a case of media lossiness, right?
Meaning, when I described it, I said that every time they uploaded it, the algorithm inside of, like, YouTube would lose a little bit of data, and it would get really messy if you did it like a hundred times, right?
But that's not really it.
The real thing that's going on is that the room that he was in was of a particular size and geometry, and it caused certain resonant frequencies to be emphasized in the playback while others are attenuated, right?
Every room has acoustic signatures like this, where certain things bounce more readily depending on the objects and the surfaces and all that stuff.
So the end result is that the recorded voice gradually morphs into like a natural resonant frequency of the room.
It wasn't an artifact of the uploading and the algorithm that would be processing that.
So, if you play the full original recording of that person's voice, he's actually explaining it
in the original recording of him sitting in the room.
He's telling you exactly what's happening.
I never listened to the whole thing because I was listening to it more as a noisy and not as a piece of information.
So, anyway, there it is.
It's even more interesting now because it's not just software losing it, it's the acoustic.
It's the acoustics in the room and the effect of those acoustics on the recording, which I think is fascinating.
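For anyone who wants the physics behind that: the effect Lucier exploits is just the room's standing-wave modes. For an idealized rectangular room with dimensions $L_x$, $L_y$, $L_z$ and speed of sound $c \approx 343$ m/s, the resonant frequencies fall at

$$ f_{n_x n_y n_z} = \frac{c}{2}\sqrt{\left(\frac{n_x}{L_x}\right)^2 + \left(\frac{n_y}{L_y}\right)^2 + \left(\frac{n_z}{L_z}\right)^2}, $$

so a room about 5 meters long, for example, has its lowest axial mode near $343/(2\times 5) \approx 34$ Hz. Each re-recording reinforces energy near these frequencies and attenuates everything else, which is why the speech eventually dissolves into the room's own tones. The rectangular-room formula is an idealization, not a measurement of Lucier's actual room.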
All right, so now back to the noisy that sounds like people peeing.
So, of course, Visto Tutti had to chime in here.
He said, This one reminds me of the sound of tropical rain going down a big drain pipe.
I've heard similar sounds in Thailand where it can pour down like God Himself has been drinking beer.
So, you are incorrect, sir.
But then I got another person that wrote in.
This is a listener named James Joyce.
And James says, Hey there, Jay, my bro.
I'm probably way too late, but I'm going to take a crack at who's that noisy anyway.
This week's noisy is the spacecraft Ingenuity, the helicopter on Mars that went with the Perseverance mission.
That is not the helicopter, but I do understand why you selected that.
I have another person that guessed.
This is Karen Goode.
And Karen says, this week's noisy sounded like water to me, but it also had a high-pressure sound.
I didn't like that.
That reminds me of a drill or the high-pressure water plaque remover that's used by a dentist.
Remember that thing they stick in your mouth?
It's like, you know, it's like a water pick, right?
You guys know that?
Yeah.
Yes.
But she said it sounds bigger than that.
Uh, so her guess is a high-pressure water cutter in a shop, like a saw, and she points out that, you know, with enough power there, it can cut through metal, right?
Definitely, definitely.
I've seen it lots of times, it's a really cool sound, but that is not correct.
I have a listener named Sierra Asher,
and Sierra says, Hi, Jay, and he identifies himself as a man because, depending on what culture you're from, Sierra might not be a male name.
He's from Melbourne, Australia.
He says, where cafes with espresso machines are everywhere.
This week's noisy sounds to me like milk being frothed and heated by the steam wand of an espresso machine.
I do that at home.
My wife and I are coffee fanatics and we have an espresso cappuccino machine, whatever you want to say, and we do that all the time.
There are definite similarities.
I totally see it.
But you, sir, are not correct.
And look at this.
I have another listener from Australia.
This person is Mark Penny, and he says, Good day, Jay.
I'm no Visto Tutti, but to me, this sounds like thousands of bats leaving a cave at night.
And he says he's looking forward to Australia 2026.
Mark, you are not correct.
I do know what you're talking about because the bats flap their wings, and there could be like a staccato type of thing happening for sure.
And regarding Australia 2026, just so everybody knows, it is fully, fully, fully going to happen.
It's completely in the works.
We have purchased airline tickets.
I am finalizing details with the Australian conference, which is going to be NOTACON, right?
So let me just quickly explain this while we're in the middle.
It's like a break in who's that noisy.
The conference is going to be in two places.
First, it's going to be in Sydney.
So that conference will start on the 23rd, and it'll go to Saturday, the 25th.
This is a NOTACON, guys.
This is a NOTACON that we're running in Australia.
This is an SGU conference that is being hosted by the Australian skeptics.
So we're working in coordination with them.
But just to make it clear, like, it's not going to be like any of their other conferences.
It's going to be exactly what it would be if you went to NOTACON.
If you haven't, it's going to be us, like all the SGU, George Hrab,
Ian will be there, and Brian Wecht, and Andrea Jones-Rooy.
We are NOTACON, and we will be there.
And then the following weekend, we will be going to New Zealand, which I'm working with right now.
I'm working with Johnny from New Zealand, who's part of the New Zealand Skeptics.
That's right, Johnny.
That's right, Johnny.
And we're going to be picking the location and all the details and everything to be announced soon.
But tickets will go on sale for the Australian side of this.
Hopefully, if I can push hard enough, maybe within a week.
But I'll keep you updated.
Anyway, thank you, Mark, for writing in.
And again, no winner.
Nobody guessed it.
It's not an easy one, guys, but I'm going to tell you what it is.
This is simple.
This is molten metal being poured into cold water.
I was surprised nobody guessed it, because, without exaggeration, I must have had a hundred people email me one variation of this noisy or another.
But I finally got one that I thought was a really interesting version of it.
So it's a dynamic sound because lots of things are happening.
First of all, you know, it's a liquid metal, so when it hits the water, there's immediately a burst of steam, and you're also hearing
like the metal itself like entering the water.
So it's complicated.
It has a few different things going on.
If you haven't heard it in person, go watch a video of this and you'll hear it; there's an interesting little change to the sound.
It's not like just dropping coins in the water.
It has its own effect.
It kind of reminds me of the difference between pouring cold water into a cup or pouring hot water into a cup.
You can hear the difference.
Hot water makes a different sound than cold water.
You guys remember that?
Nope.
No, yes.
All right, don't get too excited.
All right, I got a new noisy for you guys.
This week's noisy was sent in by a listener named Justin Fisher.
Yeah,
if you guys think you know what this week's noisy is, or you heard something cool, email me at WTN at theskepticsguide.org.
If you guys watch our live streams on Wednesday, Bob, Steve, and I recently demoed a video game that a friend of ours and a supporter of the SGU, his name is Alex, he and his team created, called Platypus Reclayed.
And, you know, we're trying to help him because he's, you know, he's got a small gaming company.
They're a bunch of skeptics, and we just thought it would be cool to help him promote his game.
So, first of all, I just want to tell you real quick.
It's called
Platypus Reclayed.
And the cool thing about this game is it doesn't have computer graphics at all.
It's all handmade clay.
Yeah, it's cool looking.
It's really cool.
You've never seen anything like it.
So every frame of it is clay that they've molded into different positions.
So it's like, you know, it's an incredible amount of work, an incredible attention to detail.
So that alone is worth checking out.
But it's a side-scroller.
I've played this game at this point quite a bit.
It is a ton of fun.
It's a good simple game.
It's a lot of fun.
Absolutely.
Yeah, I think it would actually be good.
It's a good game as a parent to play with your younger kids because it's accessible to them and it's accessible to you as the parent.
You can actually play it because they have different levels of difficulty and everything.
And it's interesting because there's lots of different options in the game, and you just got to see it.
It's got really cool parallax.
Bob was freaking out about the multi-layered parallax.
The
bottom line is: we want to thank Alex for his support, and we want to help support their video game.
So, anyway, if you end up taking a moment to play it and you like it,
leave them a good review because that helps more people find them.
So, anyway, very cool game, and I hope you guys enjoy it.
Jay, didn't he say that they're including some kind of SGU shout-out in the game?
Yeah, so that was a little secret, but okay, he spilled it.
So, he is going to put in some SGU Easter eggs into the game, which I don't even know what he's going to do.
I mean, God,
I just, when he said it, I just thought, how cool would it be if the ship shoots Steve's head out as the weapon?
That would be really fun.
All right, anyway, if you have the time, go check it out.
Platypus Reclayed.
And Jay, even though we're going to Australia next year, they are having their 2025 conference, October 4th to 5th, at the University of Melbourne, Parkville.
You can go to skepticon.org.au to check it out and get tickets.
All right, guys, I'm going to do a quick email.
This is a follow-up to Bob's news item, actually, last week.
about the nuclear propulsion.
And we were talking a little bit about hydrogen as a propellant, and so people emailed in for some clarification.
So one thing for background, right, because sometimes it gets confusing, and Bob and I had to make sure we were consistently using the right terminology here.
For rockets, something could be a fuel and or a propellant, right?
Usually if you're burning hydrogen to oxygen, the result of that combustion is the propellant as well, right?
So it's the fuel and the propellant.
But with the nuclear system, the nuclear reaction is the fuel, and the propellant is not the fuel.
It's just the propellant.
So that's what we were talking about.
Hydrogen is a great fuel because it's very light.
And so you get the most velocity change, you know, delta-V, for the mass of fuel, which, for rocketry, is the big deal.
The question I had, though, was: is it a good propellant on its own, because it's very light, so you don't get that much momentum out of it?
But what a couple of people pointed out, I'll just read the one email from Matthew, who said: hydrogen is a great propellant if you are optimizing for ISP.
With the combustion chamber at a given temperature, the average kinetic energy of the molecules is equal, irrespective of the type of gas.
If the gas is made up of lighter molecules, those molecules will be moving faster.
Faster molecules lead to faster exhaust velocity, faster exhaust velocity leads to higher ISP, higher ISP leads to hate, hate leads to suffering.
Thank you, Steve.
Oh my gosh, I was about to say that.
Whoa.
So that was in his email.
So Matthew gets the Star Wars
nerd points for that.
I wasn't even reading.
That's where exactly where my mind was.
Leads to.
And that leads to the dark side.
Okay, so
essentially, yes, it's lighter, but it goes faster.
So the temperature is really the
key determining factor, right?
Heavier molecules go slower; lighter molecules go faster as propellant at a given temperature, and so it kind of evens out.
Now, it's way more complicated than that.
It's all kind of gas stuff.
You know, it's a lot of complicated equations.
It's not just
simple like that, but just as a general sort of physics principle.
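As a rough back-of-the-envelope version of Matthew's point, treating the exhaust as an ideal gas and ignoring the nozzle pressure-ratio term and the differences in $\gamma$ between gases, the exhaust velocity from a chamber at temperature $T$ scales roughly as

$$ v_e \sim \sqrt{\frac{2\gamma}{\gamma-1}\,\frac{\bar{R}\,T}{M}}, \qquad I_{sp} = \frac{v_e}{g_0}, $$

where $M$ is the molar mass of the exhaust and $\bar{R}$ is the universal gas constant. At the same chamber temperature, hydrogen ($M \approx 2$ g/mol) versus, say, water vapor ($M \approx 18$ g/mol) gives roughly a factor of $\sqrt{18/2} = 3$ in exhaust velocity, which is why a nuclear-thermal engine heating plain hydrogen can reach such high specific impulse.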
The other thing that is interesting, though, about hydrogen as a propellant:
really, the main downside is volume; liquid hydrogen doesn't condense down as well as other propellants might.
And
you have to keep it very cold, and it is very corrosive.
So it's just not a great propellant for that reason, right?
It's just, it takes a lot of technology and infrastructure, and it's very tricky to deal with.
It's a corrosive?
I wasn't aware of that one.
Yeah, it's very, and yeah, it's hard to contain, too, because it's so small.
Again, kind of leaks out.
It can get through.
It leaks a lot.
All right.
I'm also going to do a quick Name That Logical Fallacy.
While you're at it, you might
this one comes from Max.
He writes: Hi SGU, I came across the following fallacy used by Douglas Murray and Mosab Hassan Yousef in debates against critics of the IDF.
Unless you've been there, you cannot express an opinion on the issue.
And since I've been there, I have more credibility than you.
Someone made fun of that argument by saying Katy Perry, therefore, knows more about space than Stephen Hawking because she's been there and he hasn't.
I can't quite pinpoint if this is just an argument from authority or if there's something else to it.
Max, what do you guys think?
Unless you've been there.
Is that moving the goalpost?
No, I think it is an argument from authority.
It's just kind of a tangential one in a way.
I remember Joe Nickell, when he would do investigations, he would always go to the place that he was investigating, even if it gave him zero information, just so he could say he was there.
Because he knew that people used this logical fallacy.
So, for example, he was writing an article about the Bermuda Triangle.
You gain absolutely no information by actually going to the Bermuda Triangle.
Unless you eliminate their
degree.
Because I remember I went on a couple of investigations with him, and
he's like, take a picture of me in front of the house.
Like, why?
Because I'm here.
I have to prove that I was here.
Otherwise, people will say, well, you didn't even go there.
So how do you know what's going on?
So, yeah, it's a total logical fallacy.
It's again kind of a non-sequitur, but it's just saying your argument is not valid because of something about you, or your argument is valid or more valid because of something about you rather than the argument itself.
So, that's sort of the broad umbrella of the argument from authority.
In this case, it's not even genuine authority.
It's just that were you physically there or not, even when it doesn't matter for your opinion.
It's one thing to say, well, you didn't see something yourself, and so that kind of diminishes your opinion.
Like, if we're talking about how wondrous the Grand Canyon is, I say, well, did you ever see it in person?
Like, no, I saw pictures of it.
It's like, well, you really do get a different impression of it if you see it firsthand.
I tried to say that to you guys before the eclipse.
I remember.
Yeah, absolutely.
I was like, you just don't know.
Or even when you're like, it was partial.
I intellectually believed you, but until I saw it myself, I didn't appreciate it.
You're
100% right.
You have to see it.
But this is different.
I do think the Katy Perry example is perfect.
Like, you don't understand space anymore because you went up in a rocket, you know, and Stephen Hawking's knowledge about astrophysics is not diminished because he's never been in space.
There may be other legitimate reasons why a person doesn't understand something, but that's not one of them.
But that's not one of them.
Right.
And it's so silly because it's so broad, and that statement is so broad.
I mean, what you could say is that she understands what it's like to launch in that specific rocket as a lay observer.
Sure, she does, but that's about it.
You went to a suborbitable, suborbital, orbitable.
Orbitable.
Was that intentional?
I do think you can say something like, you know, I hope that you will understand
that
my perspective on the issue is different than your perspective because I have experience.
Experience, right?
Yeah, it is.
And I think that's what makes sense, right?
Like, I do have a different perspective, but not, I have more intellectual knowledge than you.
Or that you should defer to my opinion because of some, whatever, tangential relationship I have with the topic.
Yeah.
All right.
Right.
Pilots who say they've, you know, seen UFOs and things like that, right?
Oh, well, you're not up there in the air, in the cockpit.
Well,
they're going beyond it.
They're saying they have special perception skills because they're pilots.
That totally is an argument from authority.
But what about, I guess, here would be a question, and tell me if you think that this is parallel, because an example I can think of is if a person, let's say like a white person, tries to make a racial argument about
the experience of a black person, and then a black person says, you don't know what it's like to be black.
Like
your opinion on this is not valid.
Yeah, I mean, I think there are limits to that, though.
I think it is valid to say, listen, it depends on what they're talking about.
I think you can understand racism
intellectually, and you could make valid arguments that are logical and evidence-based that deal with that, even if you were not personally involved. But you do gain a perspective. It's like, you don't know what it's really like until you've lived it. That's valid. And I think the issue is that very often what we'll see happen with sort of intellectual dark web types is that they'll try to make intellectual arguments to counter lived-experience arguments, to minimize the lived experience, and say, no, I know better than you because look at the data. And that person's like, yeah, but I've lived this life,
I know what it feels like to have microaggressions committed against me.
But it does cut both ways; you shouldn't say, I've lived it, therefore I can make up facts about it and your statistics are wrong because I don't believe your statistics.
Yeah, you could make it a logical fallacy from either way.
Which is often, again, these are informal logical fallacies, and it all depends on exactly how you're formulating your claims.
And, right, it's not a simple formula.
Like some arguments from authority are legitimate, some are not legitimate.
Right.
It depends on the details.
And I think just this idea of I know more is such a vague statement.
That's the important thing, right?
I know more because of X.
Okay, let's be specific: I have an experience that you don't have, therefore, you know, X, Y, and Z.
Or I have, you know, studied this intellectually.
I have a PhD in this, therefore I've studied this.
That's rather important.
Yeah, I just got into an argument in the comments of my blog about autism,
and somebody there, like, has no idea what they're talking about.
Bottom line is they don't know what they're talking about.
And he's like throwing one link to one study up.
I'm like, dude, I have surveyed the literature on this.
I've been writing about this for 20 years.
Yeah, you
swam those waters.
Yeah, this is I'm telling you what all the evidence shows, not just you're just cherry-picking this one study.
You have no way to put it into context.
You just don't know what you're talking about.
That's different, you know.
Oh, you know, a perfect example of this is that, you know, I have a very dear friend who's a young mom.
She's not a young mom.
She's an older mom.
She's my age.
But she's a mom of a young child and she struggles with, shall I say, boundaries with her child.
And one of the things that we often, I bite my tongue and I don't, because I don't have children, right?
It's like, it's not my place to judge.
It's not my place to give advice because I don't have children.
But there are times when she might say, yeah, but you shouldn't, blah, blah, blah.
And I'll be like, well, I am a psychologist who treats people in family dynamics.
And I do have specialized knowledge about parenting styles and about outcomes for children.
And so it's one of those really tough things where it's like, no, no, no, I have intellectual knowledge.
She has experiential knowledge.
Sometimes my intellectual knowledge is more valid in that setting, but sometimes her experiential knowledge is more valid in that setting.
Exactly.
It depends on exactly what you're talking about.
Again, where I, as a parent, you know, where I think people who, you know, either they're too young, they haven't had their kids yet, or whatever, whatever reason they don't have kids, being judgmental about parents.
It's like, you know, until you've had to deal with kids,
you have absolutely no basis to be judgmental.
That doesn't mean that you can't have an opinion about like beating your kids, you know, but I'm just saying, oh, I would never let my kid do that.
It's like, yeah, talk to me when you've had the kids.
But at the same time, when somebody says, I don't know why I keep doing this and keep getting this outcome, it's like, well, because there's evidence to help us; the data show that blah, blah, blah.
Right, it's tricky.
Yeah.
Okay, let's go on with science or fiction.
It's time for science or fiction.
Each week I come up with three science news items or facts, two real and one fake, and then I challenge my panel of skeptics to tell me which one is the fake.
Just three regular news items.
You guys ready?
Okay.
Oh yeah.
Here we go.
Item number one.
In the first such study in Germany in almost 50 years, a mandatory speed limit of 75 miles per hour would result in a 26% decrease in crashes with severe injuries.
Item number two, scientists have demonstrated a quantum sensor that is able to determine linked properties such as position and momentum to great precision, bypassing the limits of the Heisenberg uncertainty principle.
And item number three, a recent study finds that, despite advances, people are still able to distinguish in many cases between AI-generated voices and human voices.
Evan, go first.
Okay, first such study in Germany.
In almost 50 years.
Okay.
A mandatory speed limit of 75 miles per hour.
Unusual that they're using miles per hour, but that's.
Well, it actually is 120 kilometers per hour.
I should say that too, but I think it translates to 75 miles per hour.
Okay.
Would result in a 26% decrease in crashes with severe injuries.
So right now there isn't any.
So we're talking Autobahn.
Yeah, there is no speed limit.
Oh, boy.
That sounds right.
I'm not sure where the trick would be here on this particular one,
but this makes sense to me.
And can I ask for clarification?
When you say speed limit, you mean upper speed limit.
Yeah.
You don't mean minimum speed limit.
Oh, yeah, upper.
Yeah, yeah.
Right, maximum speed limit, I suppose.
Yes.
A 26% decrease.
Okay, I'm buying that one.
The second one about scientists have demonstrated a quantum sensor that is able to determine linked properties such as position and momentum to great precision, bypassing the limits of the Heisenberg uncertainty principle.
And I'm sure that, and there's a reason why it's called the Heisenberg uncertainty principle.
Do you want to know what that is?
Yes, please.
So, the Heisenberg uncertainty principle is a law of quantum mechanics, basically, that says that there are absolute limits to how much you could know about linked properties.
So, like position and momentum.
So, if you're studying a particle, the more you know about its position, the less you know about its momentum.
And the more you know about its momentum, the less you know about its position.
And you could mathematically calculate how precisely you could know each of those factors.
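For reference, the quantitative statement is

$$ \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}, $$

meaning the product of the uncertainties in position and momentum can never drop below half the reduced Planck constant, no matter how good your instruments get; squeeze one and the other has to spread.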
If you know one with certainty, you could know nothing about the other.
Yeah, basically.
Got it.
So 100% one, 0% other,
like a
game.
It's a zero-sum game?
It's a zero-sum game.
Okay.
So they've demonstrated a quantum sensor able to determine the linked properties.
Well, I don't see why that's, you know, I mean, you had a news item earlier, Steve, about quantum computing and advances there.
Why couldn't they have developed a quantum sensor able to determine this?
I don't know.
Not sure I have a problem with that one either.
Don't just blithe.
I'll shut up.
Thank you, Bottle.
Thank you.
That's all I needed.
Number three: a recent study finds that, despite advances, people are still able to distinguish, in many cases, between AI-generated voices and human voices. People are still able to distinguish.
Recent study, despite advances.
Ooh.
Well, this is Kara's news item, right?
Weren't we just talking about this?
How they're using AI to trick people?
Because they can't determine, you know, if the grandchild is calling the grandmother.
The grandmother isn't going to know between AI and human in certain cases.
And this technology is getting better.
It will continue to get better.
Yeah, all right.
I'll say the AI one is the fiction.
I have a feeling that
in more cases, they weren't able to make the determination between the two.
How's that?
Okay, Bob.
The Germany one.
I mean, what are they, are they changing it?
Like, this is the Autobahn territory, right?
I mean, with an unlimited.
Are they saying that
if you take the unlimited speed limit down to 75, then we're seeing this?
Or I'm not sure of the context.
Okay.
That's correct.
I mean, yeah, I mean, that sounds doesn't sound entirely unreasonable.
Of course, the second one got my damn attention here, this quantum sensor.
I'd Steve, I know you knew I'd be all over this.
I'm not going to fall for this one.
They're doing some trick.
I mean, because normally this should not be possible.
This is pretty fundamental, but they're just, they're doing something that
is not removing, probably that's not removing the uncertainty, it's just shifting it.
Something that makes sense.
I'm not sure how they would do that.
Because like you said, these are linked.
But it's some trick that they're doing here.
That's what I'm thinking is happening here.
So, for the third one, I'll just say, I think this is baloney.
I think this one's fiction here.
I don't think, let me see,
let me make sure I'm not yet again missing a critical word in
this thing here.
They developed a sensor,
people, yeah, people are still able to distinguish many cases between, yeah,
I'll say that this one's fiction.
I mean, I've heard some really great stuff.
I don't know what the cutting edge is right now, but what I have heard was fairly convincing.
Oh, wait, question then, Steve.
Is this like, here's a voice?
Is this AI or is it real?
Or is it like, here's your brother, Jack?
Is this, you know what I mean?
Is it a voice you know?
It was both.
All right.
They did just AI voices not based on any person and the AI voices that were trying to mimic a specific person.
Okay.
I mean, I've heard some done like for you, Steve, and
it wasn't perfect.
I mean, it seemed like I could tell the difference, but that was like, what, a year ago?
I think they're probably good enough where people are not going to easily detect that with any reliability.
So I'll say that's fiction number three.
Okay, Jay.
Yeah, I mean, this one about in Germany and the speed limit, right?
So they're saying that they're going to change it to 75 and that would result in 26% decrease in crashes.
I mean, how can that not be science?
I just, I can't imagine that decreasing the speed limit wouldn't result in lowering crashes.
I guess the real number here is 26%.
All right.
A good question in here would be, like, how fast were people typically driving on these streets, you know?
I just think that's science; there's too much there to agree with. The second one, about the Heisenberg principle, I mean, it's the Heisenberg, you know? How the hell could they possibly do it, right? I agree with what Bob was saying, that when you know more about one parameter, the information on the other one decreases.
I can't imagine a way for them to get around that.
I mean, I'd like to think that they could.
That one just seems a little too obvious that that's the one.
Going to the third one, a recent study that finds that despite recent advances, I guess people are still able to distinguish AI-generated voices and human voices.
See, I agree with this.
This could be the toupee fallacy, but I know I can do it.
What I mean is, if you played a recording for me, there are lots of little subtleties in there.
And when I've made extensive recordings of all of us, you know, AI recordings,
you know, I know what those little nuances are that it gets wrong.
I'm an AI right now.
Can you hear what I'm saying right now?
So, I mean, I know that I know your voices better than most people's voices in my life, but the point being, though, is that there are tells still that I think are detectable.
And I think they're going to go away very soon.
But I think that's science, too.
I feel comfortable going with the second one, you know, the Heisenberg one, as the fiction, just because it's a big, long-standing, you know, what would you call it, a rule?
A, um,
you know, it's a definitive barrier, right?
That has been well documented and gone over so many times.
I just can't imagine that that was overturned.
That one's the fiction.
Okay, and Kara.
I think you'd call it a principle, Jay.
Thank you.
It's not a maneuver, though.
It's not like the Heimlich maneuver.
No,
Heisenberg maneuver.
Principle.
The Kobayashi Maru.
I feel like I don't have a lot to add to what most folks said.
I think that
you would really get us on this if the fiction was that putting in a speed limit actually didn't decrease severe injuries from crashes.
Because otherwise, like, is every speed limit in the world not evidence-based?
I just think, yeah, we've seen it over and over.
We saw the speed limits go down in New York City to like really low and fewer bicycle and pedestrian crashes.
So I don't know.
That one just seems realistic
unless you fudge the numbers somewhere.
So really it's between going with Evan and Bob and saying that the
AI voices are distinguishable from human voices is the fiction or
going with Jay and saying that Heisenberg uncertainty principle has not been bypassed.
I guess, I don't know, is a principle different than like a fundamental law?
And is anything really fundamental in physics?
Like we think it is until it's not.
That's right.
Quantum.
Yeah, even like gravity.
Like it worked for Newton.
So I don't know.
And you did say that they're using a quantum sensor.
It's not like a traditional sensor.
So maybe you have to fight quantum with quantum.
So, and then, yeah,
I think I have to go with
Evan and Bob on this.
I don't think people are generally good at distinguishing between the voices.
And Jay, maybe you are.
I mean, the wording says despite advances, people are still able to distinguish in many cases between AI-generated voices and human voices.
I think probably the opposite is true.
That's a good distinction, Kara.
I used my anecdote to kind of overlay on what I should have thought about more broadly. Okay, so my guess is that people are generally not able to distinguish; maybe some people still can, but they're the minority, not the majority.
So I'll go with the other two guys and say that that one's the fiction.
Okay, so you all agree on number one, so we'll start there.
In the first such study in Germany in almost 50 years, a mandatory speed limit of 75 miles per hour, 120 kilometers per hour, would result in a 26% decrease in crashes with severe injury.
You all think this one is science.
I guess the question is, is it possible that German drivers are such that they're comfortable driving fast?
Or is the Autobahn sort of designed to accommodate faster traffic, such that forcing it to a lower speed wouldn't necessarily make it safer?
Or maybe that 26% figure is wrong.
I think the idea is
you can go way fast on the Autobahn.
There's no limit.
But I'm saying, like, the shape of it doesn't reduce the speed.
There's a suggested speed limit of 130 kilometers per hour, but there's no mandatory limit.
So that's like 81.
So this would, yeah, so this would introduce a mandatory limit.
Which I think by definition, a lot of people choose to take the Autobahn just so they can drive really fast.
I just think it's crazy.
I think it's crazy that they let people drive that fast because the people who aren't driving that fast would have a big problem, right?
Oh, they stay on the right lines.
Yeah.
All right.
Well, this one
is
science.
This is science.
Yeah.
That makes sense.
I can't believe they've just now done a study on this.
Well, they haven't.
It was 45 years or something from the last study.
I guess they didn't want to study it.
You know what I mean?
It's like, we're driving fast.
Leave us alone.
And we like it.
Leave us alone.
All right, let's go to number two.
Scientists have demonstrated a quantum sensor that is able to determine linked properties such as position and momentum to great precision, bypassing the limits of the Heisenberg uncertainty principle.
And of course, these would be, gentlemen, Heisenberg compensators, right?
So
hang on.
Now, it seems like Jay, Evan, and Kara are not totally clear on what the Heisenberg uncertainty principle is.
Bob, would you say it's fair to say that this is as well established as the speed-of-light limit, as just a fundamental property of the universe?
Oh, it's fundamental.
You could absolutely say it's fundamental.
Yeah, like it's not a function of like our tools aren't good enough.
Correct.
No, no, it's not.
It's not a
technical limit, it is a physical limit.
Right.
It's how the universe presents itself to us.
There's no way around it unless, you know.
Unless we have new physics.
No, not even new physics, but just some way to preserve it, but still gain the information you're looking for.
I don't know.
It depends on what you're saying.
What do you think the one key word is in this item?
There's a very key word in this item.
Quantity.
It's able to?
No.
One word.
It's demonstrated.
To great precision.
Nope.
Hold on.
Bypassing the limits.
Bypassing the limits.
It's the terms.
Bypassing.
Yeah.
It's not breaking or removing the limits; it's bypassing them. This is science because it's not violating the limits of the Heisenberg uncertainty principle.
It's bypassing them.
Going around them.
It's going around them.
So, Bob, you pretty much nailed it.
They figured out a way to spread the uncertainty out to things they don't care about
and limit it to the features they do care about.
Amazing.
What else could they do?
Given that this is true, which I assume this was true, it had to be something like that.
Otherwise, because you're not going to get rid of it, you can't.
You can't get rid of it.
And again, they're very specific.
This does not violate the Heisenberg uncertainty principle.
All right.
The name of the paper is Quantum Enhanced Multi-Parameter Sensing in a Single Mode.
And here's the metaphor they give to sort of explain what's happening.
They said, all right, the metaphor is it's like a clock with an hour hand and a minute hand.
The hour hand,
let's say you have a clock with just one hand.
It has just an hour hand or a minute hand.
If you choose the hour hand, it gives you good information about where you are in the day, but it's not precise.
Or you could choose the minute hand and you could know precisely what minute it is, but you don't know where you are in the day.
You have to guess.
So it's a scale analysis.
So what they do is they said that if you're looking to nail down position and momentum, you can have uncertainty about where you are on the bigger picture.
Like we don't know what grid we're in.
But whatever grid we're in, we know exactly where we are in that grid.
And they don't really care about the bigger picture.
They just want to know the precise momentum and position wherever it is, right?
So
that's it.
So they basically said it's like we're spreading the uncertainty out to these other parameters that we don't care about so that we could have more precision.
with the things we do care about, like position and momentum.
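In slightly more technical terms, and as general background on grid states rather than a claim about this paper's exact protocol: position and momentum themselves can't both be sharp, but their modular values can, because the corresponding displacement operators commute,

$$ e^{\,i 2\pi \hat{x}/\ell}\; e^{\,i \ell \hat{p}/\hbar} \;=\; e^{\,i \ell \hat{p}/\hbar}\; e^{\,i 2\pi \hat{x}/\ell}. $$

So a grid state can pin down $x \bmod \ell$ and $p \bmod 2\pi\hbar/\ell$ simultaneously, which is exactly the "we know precisely where we are within a grid cell, just not which cell" picture; the Heisenberg-mandated uncertainty is all parked in the coarse, which-cell information the sensor doesn't need.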
So, yeah, it's we, you know, it's still puzzling.
It's still puzzling, but it's because friggin quantum mechanics, but yeah, it's just an end run around that limit.
Sounds like BS to me.
Seriously, like you're saying, oh, they're just kind of, you know, jerking around the corners.
Like, that doesn't make much sense to me.
It says, we deterministically prepare grid states in the mechanical motion of a trapped ion and demonstrate uncertainties in position and momentum below the standard quantum limit.
There it is.
Crystal.
Yep.
Yeah, I mean, they're below the limit, so they did something special there.
Yeah, they did it.
So damn, man.
I wonder what the implications are for other things.
Well,
you can make sensors with incredible precision.
That's where they're.
I think the other thing they said is that they kind of, Bob, they borrowed principles they learned from quantum computing.
Oh, wait.
So they kind of developed this technology because they're trying to error reduce in quantum computing, and they basically ported it over to sensing technology.
Oh, fair.
Hence the quantum sensor.
I don't know if that helps, but that's what they said.
All this means that number three, a recent study finds that, despite advances, people are still able to distinguish in many cases between AI-generated voices and human voices, is the fiction, because what the study found is that people were completely unable to distinguish the AI-generated voices from human voices, and that was either just generic voices or specific people.
Either way, this is just the latest, greatest, high-end AI voice technology, and the people in their study had no idea.
Interestingly, they talked about looking at AI-generated pictures of people, and they've gotten so good, oh yeah, that not only can people not distinguish them, but they're more likely to believe that an AI-generated picture is real than that a real picture is.
AI-generated pictures are so-called hyper-real.
Now, in this, in the audio test, they did not see the hyper-real phenomenon.
So people were not more likely to think AI voices were real over real voices,
but they were unable to distinguish the two.
I bet you.
I would love to see an experiment done where, because I think my hypothesis is that this plays off of a very human bias where we like things that are slightly more attractive.
And I think that we don't have that with an audio bias, but we have it with a
vision bias.
And the AI knows what little tweaks to make to enhance.
Yes.
The AI can make people look kinder, they smile more with their eyes, they look slightly more attractive, and people are going to go, oh, yeah, that's more real.
Interesting.
It would be really interesting to have AI ramp that up and ramp that down.
That's weird.
They know what our brains want.
It's not like the uncanny valley.
It's like the hypercanny valley or something, right?
Or
real.
We've blown way past the uncanny valley.
But here's another hypothesis, Kara.
Perhaps, and I don't know if they could control for this in a subsequent study, in our media-saturated culture we are so used to photos of people that have been altered and perfected that we think that's real, that that's the norm.
Yeah, I think we could probably do two studies.
I don't think people could distinguish between a Photoshopped picture and a non-Photoshopped picture of, like, a model, for example, or tell which one is real versus which one isn't.
And then you add that to like even a picture of ourselves.
Yeah.
I bet you we would have a hard time being like, oh, that's the real me versus that's not the real me, because there are just the slightest little tweaks.
And now that we don't have like 17 fingers in AI.
Yeah, once you deal with that issue.
Yeah.
It'd be a better test if it's, if it's somebody you know, because how often do you look at yourself compared to looking at other people?
Yeah, people look at other people more than they look at themselves.
Plus we always see ourselves in the mirror.
So when you look at a picture of yourself, it's reversed from what you're typically looking at.
Yeah, which is why we like selfies.
But I still would think that we would know, like, I think I'd know Jay's face and how it should move more than I would know my own face and how it moves.
And I think that that is generational.
Well, no, but Bob is saying, the movement is different.
That's a different layer.
None of this is dealing with movement.
I know, but I think even, like, Gen Alphas and people around that era, they're watching their own faces on videos all the time.
But in terms of being able to distinguish AI, because I recently saw, you know, there was this company that, did we talk about this?
They make movies where you can, like, dub a foreign movie into English, and then AI changes the lip movements to match the new dialogue, and it's total uncanny valley.
Oh, yeah, yeah.
But we're not there to be able to
photo.
He was saying a video of Jay versus a video of him.
Yeah.
And I disagree with you, Bob.
Or I agree with you, but I think it's a generational difference.
I think younger people have a very self-focused gaze when it comes to social media.
Yeah.
Yeah.
All right, Evan, give us a quote.
Inductive reasoning is, of course, good guessing, not sound reasoning.
But the finest results in science have been obtained this way, calling the guesswork a working hypothesis.
Its consequences are tested by experiment in every conceivable way.
And that was penned by Joseph William Mellor, M-E-L-L-O-R,
who was an English chemist and an authority on ceramics.
And he grew up in New Zealand, 1868 to 1938.
Apparently,
what?
An expert, I mean, you know, there you go, an expert
in this particular field and, you know, looked upon as a world expert on this.
Now, the quote itself, I kind of thought was interesting because I did a little reading about inductive reasoning because I don't know that I really read much about it before.
And you know, Einstein was not a proponent of inductive reasoning.
In fact, he argued quite extensively, apparently, against it.
And he was more about deductive reasoning and, you know, didn't feel that inductive reasoning brought you to the true nature of science.
And there was kind of a, you know, a collision there in a sense of those two schools of thought.
But effectively, I think what modern science is saying is that they're partners in a sense.
Induction and deduction.
You can have both.
Yeah, deduction goes from the general to the specific; induction goes from the specific to the general.
You have to engage in inductive reasoning.
That's how you come up with a hypothesis.
Yeah.
Yeah, that's bottom-up reasoning.
But I think the problem is that bottom-up reasoning doesn't always turn out to be as accurate the more that you test it.
Well, that's why you got to test it.
It doesn't matter how you come up with your hypotheses as long as you test them, right?
Yeah, no, I guess that's true.
But I think there is a difference between using reasoning for hypothesis testing and using reasoning philosophically.
Deductive reasoning is definitely more valid philosophically, if you're just trying to reason your way to a conclusion.
That's why inductive reasoning doesn't give you a conclusion; it gives you a hypothesis.
And as long as you understand that, you're fine.
The problem is when people use it to come up with a hypothesis that they think is a conclusion when it isn't,
and to form these huge generalizations.
Exactly.
Which is why I think Mellor couched this particular quote correctly and put it in good context.
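For anyone who wants the contrast between the two modes spelled out, here is a generic, textbook-style schematic in LaTeX; it is only an illustration of the two argument forms, not Mellor's or Einstein's own formulation.

% Deduction: general to specific; the conclusion is guaranteed if the premises hold.
\[
  \forall x\,\bigl(A(x) \rightarrow B(x)\bigr),\quad A(c) \;\vdash\; B(c)
\]
% Induction: specific to general; the output is a working hypothesis to be tested,
% not a conclusion.
\[
  A(c_1)\wedge B(c_1),\;\ldots,\;A(c_n)\wedge B(c_n)
  \;\leadsto\; \text{conjecture: } \forall x\,\bigl(A(x)\rightarrow B(x)\bigr)
\]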
Steve, yeah.
I heard a beep on my phone.
I looked down.
It was a link to a news item, and the title of the news item is Quantum Limits Redefined.
Yep.
Oh,
there you go.
Just made it.
Josh made it.
Yeah.
All right.
Well, thank you all for joining me this week.
Yeah, Steve.
And until next week, this is your Skeptics Guide to the Universe.
Skeptics Guide to the Universe is produced by SGU Productions, dedicated to promoting science and critical thinking.
For more information, visit us at the skepticsguide.org.
Send your questions to info at the skepticsguide.org.
And if you would like to support the show and all the work that we do, go to patreon.com/skepticsguide and consider becoming a patron and becoming part of the SGU community.
Our listeners and supporters are what make SGU possible.