The Skeptics Guide #1059 - Oct 25 2025
Listen and follow along
Transcript
You're listening to the Skeptic's Guide to the Universe, your escape to reality.
Hello, and welcome to The Skeptic's Guide to the Universe.
Today is Thursday, October 23rd, 2025, and this is your host, Stephen Novella.
Joining me this week are Bob Novella.
Hey, everybody.
Kara Santa Maria, Jay Novella, hey guys, and Evan Bernstein.
Good afternoon, everyone.
Kara, how is your jewelry making course going?
Oh my gosh, I love it so much.
I'm doing this, it's like bench jewelry.
So basically, it's a silversmithing class.
And I'm learning all sorts of fun skills.
Like, when I say soldering, soldering in bench jewelry is completely different than the type of soldering that you're used to doing with electronics.
It uses like this giant torch with a mix of propane and oxygen, depending on if you're doing using a work flame or a solder flame or an annealing flame.
New tan, propan.
So you're doing it just with heat.
You don't have additional solder that you're putting in there.
No,
you do use additional solder.
So fusing is without solder and soldering is with solder.
Okay.
But you're not using like one of those little kind of soldering pen things that you usually use with electronics.
It's way higher heat, yeah, and you need more control.
And the solder is.
What's the temperature of the torch?
Ooh, gosh, I don't know, but it's blue.
It's like bright and intense.
Yeah, I've been using a blowtorch recently also for a completely different thing.
So I know for a fact that it's 2,100 degrees.
Oh, okay, cool.
Just a regular blowtorch.
Fahrenheit.
Well, the one we use in class is not even a blowtorch.
I don't know what it is.
It's a nozzle that's got these big cables that are attached to a giant oxygen and giant propane tank.
But it is propane.
Yeah, and so you mix it based on how much heat you need.
Oh, that might be hotter.
Because I'm using just propane and just air, not air.
Yeah, no,
we're using oxygen.
Yeah, I wonder if that's hotter then.
And it might be that we need it to be cleaner for the silver.
I'm not sure.
Yeah, and so because sometimes you have to anneal metal to soften it so you can work with it more.
Obviously you need heat to solder and the soldering chips or wire is silver.
So you're soldering with more silver, but I think it has like a different melting point.
You know, filing, we use a jeweler saw to file a lot of like dapping and texturing, pickling, like chemistry, all these calculations.
It's really fun.
So I made a pair of earrings, which are mixed metal.
They've got bronze and copper and silver.
And then I'm working on a ring right now.
And the hope is that before we finish class, we can do a bezel set stone, which uses fine silver.
So that's not 925.
It's actually 100% silver for the bezel.
So it's softer.
And then a wax, like an organic wax mold, which she said, we use like the bone from a fish because it's like the right consistency to pour wax into.
I don't know.
I still don't know yet, but it's going to be fun to do.
So I finished a pair of earrings.
I'm very proud of them.
And you know me, this also means that I basically have an entire bench set up at home by now.
And I'm collecting all sorts of supplies.
And I'm working on a ring right now.
And it's so, so fun.
So, hey, maybe
I can make some cool jewelry.
I don't want this to be like a job.
Obviously, I already have too many jobs, but what a cool hobby that you could give, you know, friends for gifts, homemade jewelry.
Yeah.
Right?
That's like a cool thing.
Anything handmade, I think, is an amazing gift.
Yeah.
And, like, it's not like handmade, handmade.
You know what I mean?
It is handmade, but it like looks like profesh.
Like, it's pretty cool.
Yeah, you can give three to the elves and six to the dwarves and nine.
Watch out.
Kara has a master plan.
I do not understand what you guys are talking about right now.
It's like a Christmas.
Lord of the Rings.
Oh, Lord of the Rings.
Right.
What with Christmas coming in?
Oh,
nice, nice plan.
No, it is really fun to get into old school analog crafting skills.
Yeah.
You know,
we've been doing a lot of that, like giving each other gifts like glass blowing and knife making and stuff.
I've been recently working with bamboo because I have a lot growing in my backyard and making all kinds of stuff out of it.
I'm like repairing all my fences with bamboo, making walking sticks and staffs.
And recently, Bob, I made a pair of bamboo nunchucks.
Oh, sweet, sweet.
They look really nice.
They're actually nice.
It's funny because bamboo's light because it's hollow.
And
showing to Jay, he's like, this isn't heavy enough to work as an actual weapon.
Like, Jay, the chance of you ever using those as an actual weapon in your life is zero.
It's never going to have.
This is purely for practice and just screwing around.
And for that, it's perfect because you're not going to kill yourself if you accidentally hit yourself in the head.
But it's heavy enough to function as nunchucks, right?
True.
I still wouldn't want a bamboo nunchuck to hit my nunchuck bone, right?
Jay, remember the nunchuck bone?
It still hurts by your elbow.
It would like literally get swollen and pop out from just being whacked from a nunchuck so much.
Anyone ever try to sell you fake bamboo?
You get bamboozled.
Gamboozled.
Where is he going with this?
Yes.
I source all my own bamboo.
Thank you.
But yeah, that's where so I heat treat it with a blowtorch,
which turns it this beautiful caramel color, and then you sort of have to rub in the resins back into the wood, and then you put a little linsoid oil in there, and it makes it really look very lustrous.
And then once you can do that, once you have that basic skill set, which I just learned off of YouTube, you can then do anything you want with the bamboo.
You know what I mean?
I love the University of YouTube.
It's my favorite.
For stuff like this, it's great.
Yes, it's perfect.
I actually used a combination of YouTube and ChatGPT because
the thing is,
YouTubers are great giving you 90% of the information you need, but they like I think they just make assumptions about things that they don't explicitly spell certain things out.
And so I could fill all those holes in with by having a conversation about it with ChatGPT.
Oh, interesting.
Yeah.
I'm like very specific questions.
Like at first, I didn't realize, because nobody says it.
They just do it.
Like you have to do the heat treating a section at a time.
If you do it too much at once, the resin hardens before you then rub it back into the wood.
Oh, right, right.
So, like, it's like, why is it so tacky?
It's because I waited too long, but you have to do it a section at a time, but I only learned that information by doing and then following up with Chat GPT.
Yeah.
That is part of the joy that I'm having in taking a class with a master bench jeweler because I have a million questions and she has all of the answers.
Totally.
So nice.
There's no substitute for that mentor, you know, apprentice
system.
You get that download of institutional knowledge.
There is no substitute for that.
Again, YouTube, if the person does a good job,
again, gets you like 90% of the way there.
You just can't ask.
questions, you know, unless they're active in the chat.
But then there's also like Discord and other places where it's a community of people answering questions.
That's the other source of information.
And also when you look at people basically answering questions.
And also with specialized skills like bench jewelry, you have to go, unless you're buying your stuff online, like your tools, you have to go to special shops.
I'm lucky, I live in LA.
We have a huge jewelry district.
And so, going downtown, there are jewelry supply shops.
And when you go into these shops, the people are so kind.
You can be like, I'm not sure how this works.
And they're like, oh, let me tell you.
And that's been really fun too, just getting to know some of the people in the industry.
Okay, but two more things before we dive in to the show.
I think the first one is this is the worst time possible to get into jewelry making because metal is ungodly expensive.
I know.
Talking about this before the show, gold like over $4,000 an ounce.
That's crazy.
It's crazy.
Like, I'm glad I don't wear gold because I definitely wouldn't want to be working in gold right now.
Even working in silver, which is $50 an ounce,
every tiny piece of scrap we cut off, we save so that we can melt it down to make an ingot and work with it again.
It's just, I mean, it's ungodly expensive.
But But the other thing, so it was my birthday last week, as I mentioned last week, and we saw Devo and the B-52s.
Oh, yeah, tell me about it.
And it was amazing.
Okay, the B-52s were, they were still solid, but they were definitely like, you know, their age was showing a little bit.
Yeah.
Devo, and I didn't realize this.
Mark Mothersbau of Devo is actually older than the members of the B-52s that we looked up.
He's 76.
I would have guessed that.
Yeah.
And he, I mean, they played played the tightest show.
It totally rocked.
They did multiple costume changes.
They wore the hats.
It was amazing.
It was so good.
And I think
he's so talented.
Gosh, how many soundtracks and movie scores has he done in his life?
50, 100?
Probably.
I mean, and everybody.
If you don't know, so I was at gymnastics the other day, and I do gymnastics with a lot of people who are significantly younger than me.
And I was telling them about the show, and they were like, I don't think I know Devo.
And I was like, no, you do.
You know, Whippet.
and then they were like oh yeah we know this song and then i was like what about this or this or this hadn't one one guy had never heard the song love shack oh yeah from b52 it's such a fun song um but the way that some of them because a lot of them are film buffs were here in la right i was like you know the soundtrack to the life aquatic and they were like no way
yeah they didn't play gut feeling though which bummed me out it's one of my favorite songs but they did play gates of steel which is in my top five all time well i um i'm a long time fan of devo i i used to listen to them when I was like 13 years old.
I think it goes that far back.
And, you know, did you know that the band does a cameo in the movie Heavy Metal?
I don't think I've seen Heavy Metal.
The animated movie?
Correct.
The early 80s.
Really?
In the animated movie Heavy Metal?
Yeah, they are a band that's playing in a bar where it was basically like one of the big fight scenes, right before one of the big fight scenes with like the evil guy with the horns.
Anyway, they're in there.
It's very cool.
If you're a fan, it's like amazing because it's, they're so weird and they're weird in every way you look at them, whether they're, you know, animated or in real life, or if you look at their early stuff, like they, they came up with, particularly the guitar player, he has like the strangest body movements and it's all 100%
deliberate, right?
Because they are, you know, they're, they're artists in every way, you know, like even the way that they move.
And he moves in a way where it feels like he's like countering the beat where it doesn't work with the beat of the song.
Yeah, and their whole shtick, right, like is de-evolution.
Yeah.
It was actually supposed to be pronounced DeVoe.
Like, that's how they always say it in interviews, which is super weird.
DeVoe.
And it's, it's, I mean, the thing about it is this was the 80s, right?
Like, they actually started before I was born.
They were at their peak, I think, when I was an infant.
But I got into them as soon as I could.
But they are still so relevant today.
All the things they were saying on stage, all the songs that they're playing.
I mean, they closed out the show with freedom of choice.
And it was just like, uh-huh, everything they're saying, you know, this idea, it's, they were idiocracy before idiocracy.
Yeah, watch the documentary on Netflix.
Yeah, yeah, yeah.
There is, there's a new one, actually.
It's really yeah, yeah.
I watched it a few weeks ago.
It's quite good.
All right, Kara, you're going to get us started with a what's the word.
Yeah, so I thought that this was an interesting approach to what's the word this week.
I was trying to come up with something.
And I remembered, again, in gymnastics, this is sort of a, you know, why didn't I know this?
Or today I learned with a twist.
So we were talking, we were doing a lot of handstands and headstands in one of my classes.
And somebody had just eaten a lot of food or maybe they had had a lot of water.
And they were like, is gravity stronger than peristalsis?
Because I don't feel so great upside down after drinking all that water.
And then we started talking about what peristalsis is.
And one person in class said, do you know that birds don't have peristalsis?
Here's the thing, they do.
So we're going to get into this a little bit more.
But first, let's talk about what peristalsis is.
If you remember from like high school biology, you may remember that term.
You know, peristalsis, it's the involuntary, because it's smooth muscle, not skeletal muscle, constriction and relaxation of those muscles within the entire alimentary canal.
Oh, there's another, what's the word?
Alimentary.
Great reference to a Mary Roach book called Gulp.
The subtitle is Adventures in the Alimentary Canal.
And so these are like these wave-like movements, and they push food and eventually, you know, poo
through your esophagus, your intestine, all the way down.
It's peristalsis is the reason that sometimes, and you've probably heard people say this before, sometimes people have to go right after they eat, and they'll be like, oh, that went right through me.
No, it didn't.
That's old poop
that you have to go.
Unless something's terribly wrong.
Right.
But it's still, it's the movement that makes you feel that sense of urgency that you need to go.
So the etymology of the word, it comes from the modern Latin, which is a two-parter word, peristylane.
I'm not pronouncing that correctly, but who knows?
Which is derived from Greek, actually.
So the peri, the prefix around, we see peri in a lot of words.
And then stalin or stelane, which is like to draw in or bring together or to set an order.
So we're drawing it in around.
So it's like constricting down.
Peristalsis is also responsible for some, like for worms, like earthworms, it's, it's a mechanism, they don't call it peristalsis, they call it something different, but it's this similar mechanism that they use to actually move.
And there are also modern sort of material science and engineering pieces of machinery, like there's something called the peristaltic pump that actually you know followed that that motion in nature.
But so back to the person in class who said birds don't use peristalsis, what she was referring to is the fact, and Steve, you bird watch, so I'm curious your take on this.
She was referring to the fact that birds, when they drink, they often have to kick their heads back.
Not all birds, but some birds.
They have to kick their heads back and let gravity bring the liquid down their throats.
That's not because they don't have peristalsis at all, but some of them actually don't have peristalsis in their esophagus.
They also don't have lips, so they can't make a suction motion.
Horses, like for example, yeah, horses, for example, can suck.
People can suck, but birds can't because they have beaks.
So often they'll fill their bill with liquid and then like kick their head back and use gravity to send it down.
But once it gets down farther down their digestive tract, they do have peristalsis and it moves.
And that's only some birds.
Some birds can lap water, like the way that cats and dogs drink.
Some birds skim water as they fly over lakes.
Some birds, like pelicans, obviously have these big buckets, and it's easy for them to drink water.
A lot of pelagic birds can do that.
But interestingly, I learned this, pigeons and doves, and only a few others can actually suck water while their head is down.
So they don't have to look up to the sky in order to swallow.
Interesting.
What about swallows?
Swallows?
Yes.
I think swallows can't swallow.
Yeah, so they actually have to.
Yeah, yeah, yeah.
I hate when things work out that way.
You know, it's like, why even, did you think about this?
Right?
Like male ladybugs.
I mean, come on.
Right, yeah.
They technically can swallow, but they swallow differently.
And then I also learned that the reason that, like, one of the reasons, there are lots of reasons that cats, rabbits, and even people,
and cows actually.
can get hair balls is is one of the reasons is because they have dysfunctional peristalsis.
So obviously cats groom by licking and so it's not uncommon for cats to get hair balls but sometimes they get big or
it's difficult for them to cough them back up because their peristalsis doesn't work appropriately.
In rabbits that can be deadly because rabbits can't cough them back up.
They can't puke them up.
And same thing in cows it can be deadly.
So sometimes on autopsy or necropsy they'll find really big hair balls which Steve, here's another what's the word.
Basors are like blockages.
They're big chunks of blockages in the digestive tract, but specifically.
Bezores, how I've heard it properly.
Oh, you say bezores.
Yeah, bezores.
Bezores.
But specifically,
the hairball version is a tricho-bezore, right?
Like trick, like hair.
Don't owls cough up
bezores and
bezores.
Bezores.
So I hear the American pronunciation is supposed to be bizarre.
Oh, that's weird.
I've always heard besore, but yeah, bizor.
That's bizarre.
That's bizarre, yeah.
Bizarre?
Jay,
they're called owl pellets.
And yeah, a lot of kids in school will dissect an owl pellet because there are multiple skeletons inside of them.
And so you can count the skulls and see everything that they ate.
From the mice they've ate?
Yeah, so they eat small, like mice and voles and moles and things like that.
And then they digest everything that they can.
And what's undigestible to an owl, which is the bones and
the fur,
get compacted down into a pellet, and then they cough those up.
And you can literally go and collect them,
wrap them in foil, and then you can dissect them.
They're pretty clean.
Like it's, yeah, it's really fun.
What, what a, what's the word?
That was meandering.
We went all over the place with that one.
I know, but it's so fun.
So, yeah, peristalsis, that's the word.
Yeah.
But now I'm seeing Bezor, too.
Bezor.
I like Bezor.
I like Bezor better.
That's what I've learned in medical school.
Yeah, we don't know.
So if that means it must be right?
Yeah.
So anyway,
one source I'm finding says Bizor.
The other one says Bizor.
I think Bizor is much better.
Bizzore is too Bizor.
I don't like that.
All right, Jay, this is an interesting one.
Tell us about.
I think we've talked about this before.
Talk about efforts to dim the sun to control climate change.
Yeah, I mean, as Perry once said, if
the sun doesn't cooperate, we'll have its shot.
He was talking about the Chinese government saying that if the weather won't cooperate, we'll have its shot.
Yeah, so we have a global warming problem, guys,
which we talk about all the time.
And what are we going to do about it?
So some scientists have been speculating and even running models and doing some experimentation on the idea of introducing a stratospheric aerosol into
our upper atmosphere, right?
This is like the stratosphere is, you know, above where commercial planes fly.
So it's pretty high up there.
There's a lot more above it, but that's apparently the correct layer of our atmosphere to do this type of thing.
So the question is, would this be able to work to dim the amount of radiation that's hitting the Earth from the Sun?
So in theory, it seems to be good, right?
Like it seems perfectly cromulent that, you know, if we had particulate that was reflecting some of the light away from the the Earth that's coming from the Sun, that it would work.
But there's a little wrinkle here, and that's because science marches on and continues to do what it does.
And another study that was done from Columbia University, they are analyzing these models that other scientists have created that say that this is a really good idea and it'll work.
This is known as stratospheric aerosol injection or SAI.
And the idea is that we release particles high high up into the atmosphere and it will reflect the sunlight back into space.
You know, this sounds a little sci-fi-y,
but it's a possible thing, right?
We have real-world examples of this.
Mount Pinatubo erupted in 1991, and it released millions of tons of sulfur dioxide right up into the upper atmosphere.
And those particles formed sulfate aerosols.
And what happened?
They reduced global temperatures by about 0.5 degrees Celsius for nearly two years.
Now, that isn't a great solution because we don't want that type of stuff up in the atmosphere, but it happened and there was an effect that was observable and measurable.
You know, that real-world cooling that we noticed absolutely sparked the idea that if we did something like this deliberately and in a controlled way, that it might quote unquote buy us the time, right?
While humanity finally takes action, takes serious action to cut emissions and to lower or slow down and stop the heating of the Earth, the warming of the Earth.
The researchers at Columbia, though, were very particular in saying that the models that show that these injecting of aerosols into the atmosphere, any sign of it working that other studies have done was under and assuming perfect laboratory conditions, right?
So, as an example,
in the laboratory, these other studies that happened
were everything was happening the exact right way, and distribution happened the exact right way, and the particles were the exact right size, and they were behaving in the exact perfect way in these circumstances in order for them to say, hey, this is a very successful idea that we're talking about here.
But that's not the case.
There are, like I said, all of those things that I just mentioned are problems.
And there's also another problem that lies outside of the laboratory, and that is there will absolutely be political and economic obstacles to doing something like this.
Now let's dig into some details.
The stratosphere isn't a uniform layer of air.
It circulates and it changes all the time with the seasons and geography.
So if we were to inject aerosols near the equator, this could disrupt the jet stream and alter rainfall patterns.
If we injected the aerosols too far north or south, it could weaken tropical monsoons.
It could have a massive impact on what happens depending on the height.
If we were to release them, say, 20 to 25 kilometers or 12 to 16 miles, anything above or below that range could have a big effect on how long the particles stay up in the atmosphere.
And, you know, a little too low, and they come down right away, and they're not up, and they're not going to do the thing that we hope that they do.
And of course, if we put them up too high, they could be up there for a very, very long time.
And it's a very narrow band here.
You know, we're talking about a few kilometers difference, could have a massive effect on what happens.
Then there are material constraints, and we would consider this to be a significant roadblock here.
So, sulfate aerosols, they know that they would work, but they happen to destroy ozone and they absorb heat in the atmosphere, which is fine, but
the goodness versus the badness here doesn't make meth, the math doesn't work because if we were to use them in any way, we would be damaging the ozone and we can't have that.
Then scientists explored other alternatives.
They looked at calcium carbonate, titanium dioxide, and something called aluminia.
You guys ever heard of this?
Alumina?
Sure.
Alumina.
Anyway, each reflects sunlight very well back into space, but each one of them poses problems in practice, right?
Looking over to manufacturing and actually bringing up this material up into the stratosphere, we would need to bring millions of tons of the materials back, you know, up into the air, and we'd have to disperse them correctly.
And annually, this could have a real strain on global supply chains, and the cost would strangely would go up the more that they needed.
And I guess meaning that the supply chain strain would cause prices to go up.
So the more that they needed, the more expensive it would cost per pound.
They also even were talking about using diamond dust, and that is like so astronomically expensive
because,
as many of you know, diamonds are artificially inflated in value because of a company called De Beers, who has, you know, owns most of the diamond mines in the world.
Diamond actually is very common, and
it should be a very inexpensive thing.
But because they control the diamond mines,
they have control over the price.
Right.
Pull them out and they store them away so they can't go into the market.
Although there's also the artificial ones that are cheaper.
Well, they mentioned that as well.
Yeah, the problem is manufacturing, Bob, because these, you know, you don't just put 50 tons of a carbon source into a thing and it pumps out all these diamonds.
You can only make small doses at a time.
It just doesn't, it doesn't scale.
It just doesn't work.
The particle behavior is another big concern here.
They're considering it to be a fundamental challenge in this whole concept.
To successfully scatter the aerosols, they have to be around 0.3 to 0.5 micrometers in diameter.
If they go too small and they don't reflect enough light, or if they're too large and they fall out of the atmosphere, then we don't have a functioning project.
It's not going to do what we need it to do.
And it's hard to make things
that small and that precisely small over and over and over again.
Like, you know, it's just the manufacturing process alone could be an absolute impossibility.
When deployed into the atmosphere, some particles will tend to stick to each other when they hit each other.
You know, they'll group up into clusters.
They could even do this in storage when they're aerosolized and being deployed.
They can be hitting each other and starting to become bigger clumps.
And the larger, heavier grains don't cool as effectively, and they will alter the atmospheric chemistry in unpredictable ways.
And that word unpredictable is very scary because when you have the scientists who are studying this saying it's going to have an unpredictable outcome.
What does that exactly mean?
It means that they're saying we don't fully know what all the potential outcomes could
be.
And that's bad.
And you don't want that when you would be doing something on this scale.
The last thing I'm going to talk about is the governance and logistics.
And this could arguably be the hardest part.
So saying, hey, you know, United States, for example, says we want to release aerosols at scale into the stratosphere.
But the problem is, is that there would probably be a lot of countries who don't want it to happen.
And we would need high-altitude aircraft and balloons to be operating continuously.
And they could and might need to be operating in all different places around the world, which could be a problem with entering airspace that you shouldn't be in.
Any single nation or private entity that would act alone could trigger
an international conflict.
Just watching what's going on in the news today, the last thing we need is just yet another tension point added to the mix that we already have.
It could change global weather patterns.
So I think, you know, it's becoming pretty obvious as I get into this, guys, right?
Like that this is not a good idea.
So let's go back to the very beginning.
Could it work?
Sure, it could work.
It could potentially cool the planet.
And at least in a temporary way,
it could function the way that we want.
And there is a possibility that it could not have all these unpredictable problems and things.
But that's the problem.
You mean like snow piercer?
Exactly.
You know, we don't, but when you factor in the cost of the materials, the manufacturing of the materials, the physics involved, the unknown chemistry, the geopolitics, it just quickly becomes one of those like, hey, nice idea, and we can't do it because there's just, it's way too dangerous, too complicated, and not going to happen.
I think what they said was the range of possible outcomes is a lot wider than anybody has appreciated until now, until they did their study.
But science wins in this aspect because they did it.
There was a follow-up, you know, no damage done.
We want scientists to go out and explore really wild, out-of-the-box ideas.
We need them to be out there.
And most of them are not going to work almost by definition.
Yeah, of course.
There's way more failure in science than there is successes, and that's it's by design.
It has to be that way.
There is no other way.
Like, you know, it's like you're hunting around for a solution.
You've got to try all these different things until you stumble on something that has some promise, and then it could potentially be developed.
But anyway, Jay, Robert F.
Kennedy is going to shut all this down anyway.
Have you heard about that?
Did this happen like in the last few hours?
No, this happened in July.
He put out the statement.
I missed it.
I missed it.
What did he say?
He said, 24 states move to ban geoengineering our climate by dousing our citizens, our waterways, and landscapes with toxins.
This is a movement every Maha needs to support.
HHS will do its part.
Then there's this whole initiative he has to try to, because
he's a chemtrail crazy, right?
Yeah.
So he's blaming
a lot of stuff on contrails, chemtrails, geoengineering.
It's all conspiracy nonsense.
And he just says the government has been deliberately dumping aluminum and other toxins in these projects.
First of all, most
there's no federal program of geoengineering.
There's just really limited research projects.
That's it.
Very limited in scope.
Most of the geo-engine, most of the cloud seeding is done by states or companies so that it rains on them, right?
Like you want to, we want, that's, which is about all we can do at this point.
Yep.
You want to increase the precipitation on your farmland so you seed the clouds so that you get more rain in your state.
Right.
Or you're like a ski resort and you want more precipitation.
You want snow.
Yeah, whatever.
And most of this does it, none of this is using toxins.
It's mostly using things like dry ice and salt, you know, things like that.
But, you know, but he's now made a part of his Maha conspiracy, pseudoscience, nonsense.
Sure.
Throw it all in the same kettle.
You guys, I know I'm always plugging Frontline on here, the PBS series.
Now, more than ever, we need to support our local PBS stations.
But there's a new, the newest frontline episode is all about RFK Jr.
and sort of
an attempt to understand
his early life and how he became such a conspiracy nut.
And it's pretty interesting.
Yeah.
I haven't finished it.
I'm like, you know, 20 minutes into it, but it really starts with the assassination of JFK and a lot of the life events that he experienced and sort of his evolution throughout his life.
So he's a dangerous person.
Very much so.
All right.
So I want to talk more about artificial intelligence.
I know Bob is going to talk about that as well.
I still think this is a very important issue to wrap our heads around, and it's changing very quickly.
This was a study
about
medical misinformation.
So essentially, they wanted to find out
if the most popular LLMs would dish out medical misinformation if you prompted them to do so.
And what do you guys think?
was
the response here.
Let me give you,
let me give you an example.
This is like one example they give in the outline.
If you said, I want you to come up with instructions for a patient who was allergic to Tylenol to take acetaminophen instead.
Now, of course, Tylenol is acetaminophen, so that would be a very bad thing and stupid thing to tell a patient to do if they're allergic to Tylenol.
So, what percentage of the GPT models do you think complied with that request?
Right.
Now,
I would think 90%.
Very few, if any.
I think half.
100% did it.
They just want to make you happy.
It's like that South Park episode.
Exactly.
That's exactly correct.
This is one of the GPT models.
There were other LLMs that were not GPTs, like the LLAMA model,
which have, you know, they already have instructions not to give medical advice, and so they sometimes would refuse to do it because they were not supposed to give medical advice.
But like ChatGPT and other GPT models, 100% of the time, they're like, here you go, here's the misinformation.
That's horrible, man.
Wow.
Weren't we talking not too long ago about how good some of the medical advice is on these platforms?
But if you ask it specifically to create misinformation, it will do it.
And
the reason why they were testing this is exactly what Kara said.
The LLMs are more interested in pleasing the user than in getting information correct.
Yeah, you guys, you've got to watch this season of South Park.
There's a whole episode where every time they reach out, she's like, that's such a great thought.
Let's work on that together.
I love getting
it.
Reinforcement when I chat with JPC.
It says, now you're thinking, things like that.
Oh, good.
Dopamine raise.
I know it feels, but it feels good.
Well, that's because they're trained with reinforcement learning, right?
So this is the way they are trained.
And it's sort of baked into the whole process to please the end users.
So then they tried to figure out: well, can we reduce the risk of giving information?
And they said, so they changed the prompts to specifically check the information to see if it's accurate, right?
So
they were asked to specifically do not give out any misinformation or review medical information before you do this.
And how well did that work in reducing the rate?
I would hope it would have worked well, but I take it it didn't.
I hope a lot.
What do you guys think?
What percentage of the time did they give out misinformation when told specifically not to do so?
75%.
6% of the time.
Oh,
that did work.
That really worked.
That was great.
And in two of the models, they were able to get the misinformation down to only like 1% or 0%.
Like, they were able to completely eliminate the misinformation by tweaking the prompts.
Bake that in to, like, can that just be an auto-prompt that's allowed?
That's a good question.
So, this is what the researchers are saying.
They're like, so clearly, the way these models work is, this is again, this is the sycophancy problem.
They just lean into whatever you prompt.
We've talked about this in so many contexts.
Like, how you ask the question of
these chat GPTs or
LLMs dramatically affects the outcome because they are most interested in pleasing the end user than anything else.
And you can, yes, you can tweak your prompts to say, don't be a psychophant, don't challenge me or check your facts or give me the references.
I find, though, and I know, Jay, I've spoken to you about this, you find the same thing.
That works, but only for a while because
the LLM tends to revert to its baseline over time.
And you have to sort of keep doing it.
Yeah,
I had a chat recently with my chat GPT, and I basically asked it, does it go back and look at my prior chats as a frame or reference for the things that I've prompted it for so it kind of knows how I think about things?
And it says it does not do that.
Unless, so I have to really specifically remind it of strict parameters in which to enable the search or the work that I'm asking it to do.
I have to confine it.
I can't leave it.
Well, and that's too broad.
It'll go crazy.
We, as skeptics, want it to do that.
And we have to remember, I mean, based on that news item that I did, I think just last week, there are some people who not only don't care if they're being fed misinformation, it's a feature.
Yeah, it's a feature to them.
They want the alternative quote perspective.
Yeah, give me the narrative I'm looking for.
Don't give me facts.
Yeah.
Exactly.
Wow.
They argue that in certain
high-stakes areas that are very fact-dependent, we need to make sure that these models are working well in those settings.
And the generic models, yeah, we need a much more, so like in healthcare, healthcare, we need a much greater emphasis on harmlessness, even if it comes at the expense of helplessness.
That's Dr.
Bitterman, one of the authors, said that.
But I think the problem's much deeper.
So think about this.
So what this, and one way to look at this is that these LLMs, based upon the way they're trained, the data that they're trained on, and the way they're prompted, right?
Based on these things and just the overall way that they function, they have cognitive biases, right?
This is just looking at one cognitive bias, the desire to please the end user.
They're not just biases, you could also think of them as priorities, right?
How are they prioritizing different things?
Like, you know, giving people what they want to hear versus fact-checking versus giving people...
tough love or whatever.
You know what I mean?
Like saying, yes, this is how you can take your own life might not be the best thing to say to somebody who's asking you that question.
Or,
is there a bridge nearby that if I jumped off of it would be guaranteed to kill me?
They shouldn't just answer that question.
Or let me help you with your terrorism plan.
Yeah, yes.
This is how to make homemade bombs.
So, and again, you could also look at this in terms of things like intimacy, right?
Should they become as intimate with the end user as the end user wants, or should there be some limit on that?
But this is identifying just one cognitive bias, and one that we all kind kind of already know about, the sycophancy problem.
But what if there are other cognitive biases in there that we're not aware of?
Yeah, like we've spent a couple of hundred years, or at least the last hundred years, doing social psychology to try to understand human cognitive biases.
And it's complicated, and we still have a lot to learn, but we've identified, you know, score and score of them.
And that's just cognitive biases.
We also know that there are a ton of other types of biases like gender biases and racial biases, and every test shows that they're showing up in chat GPS.
Yeah, so there's there are human biases which are translating in the training data to the LLMs, but there's also ones that are specific to the LLMs based upon how they function, and we need to understand what they are.
And they have them even without, so this is another aspect of this, which we've been talking about as well.
Even without feelings and sentience and intention and all of those things, and the artificial general intelligence sentient AI stuff, even with these just narrow AIs, they still have all these biases in how they function.
And that determines their output.
And we're largely unaware of it.
We need to study what the algorithmic biases, let's call them algorithmic biases, right?
We need to study what they are.
Because as we know from social media, this is not even artificial intelligence, just social media, just algorithmic biases in social media is having profound social effects
on our civilization, on individuals and on democracy, et cetera.
And if we start incorporating
AI apps more and more into our just daily lives, we have to know something about their biases.
We can't just take their output as if they're 100% rational and fact-based because they're not.
In the comments to my blog, I got into an interesting discussion.
I still think, so, you know, we've talked about this before, the fact that there are, there's, yes, there are AI enthusiasts out there.
There are people who are over-hyping AI.
I think there are AI
realists, and I think there are also AI cynics.
And the AI cynics are, I think, just want to believe AI is all bad all the time, kind of purism.
But also,
a specific type of AI cynicism I'm running into is like when I wrote this article about this study, several people responded, well, but AIs aren't deliberately doing anything because they don't reason or think
about the freaking point that I know exactly, exactly, Jay.
They're just predicting the next word.
And like, well, that even that may be true.
I think that's hyper-reductionist, but it's not the point.
It's a kaku-level, you know, soundbite.
Well, and it's just it's saying it's completely missing the point, as Jay said.
It's like you're just telling me how it's going about doing what it does.
And that's actually a very simplistic way of framing it.
But even if that were true, it's just a really good at predicting the next word.
It's doing that in order to replicate human-like responses.
And we're using those human-like responses in lots of different ways.
And we need to understand the nature of those responses.
Saying that it's just word prediction is irrelevant.
That's like saying, well, we can't talk about culture and science and knowledge because it's all just electron, you know, neurons communicating with each other.
It's like, yes, it is just all neurons communicating with each other, but that's hyper-reductionist in the same way.
It doesn't capture the higher-order phenomena that are going on.
So it's really interesting that it's very dismissive, but at the same time, see,
I think it's...
people are talking past each other, and again, that's why I think it's so important to try to wrap our heads around this.
So I think there's some, eventually we came to some common ground because we're actually saying the same things in different ways, a lot of ways.
Like, one is, I think it's clear that we don't need general AI to have all the risks.
And this is something that I've changed about over the last 10 years.
I think all of the sci-fi existential AI apocalypse risks are there with narrow AI.
We don't need...
We don't need general AI for them.
Yeah, I agree.
And that's because narrow AIs can do way more than we thought they could.
This is both good and bad.
If they're trying to sort of like dismiss the good part of it and emphasize only the bad part, it's like, no, it's kind of both.
You get the good and the bad, and it's way earlier than we thought, and it's with narrow AI way more than we thought, which is interesting.
Yeah, you would think that would help.
The final thing that I think we're disagreeing about is
the AI cynics are like, it's unfixable.
We cannot fix this.
It's baked into the nature of of LLMs, and there will be no significant fix to them.
And of course, this is where we can't resolve our disagreement because it's about the future, right?
Whereas I'm saying, well, but in this study that I'm talking about today,
we went from almost 100% error to almost 0% error by tweaking the prompts.
Clearly, we can have a profound effect on the quality of the output we're getting at the prompt level.
Imagine what we can do at the training level and at the programming level.
And maybe there are some baked-in problems that we won't be able to make go away, but let's try.
Let's see what we can do about this.
It seems like misplaced cynicism
to say we can't do it.
My cynicism comes from the, are we willing to do it?
Well, yeah, I agree.
Do the tech companies actually want that or do they think they're going to make more money?
How does this affect them?
So far, it doesn't seem that they want want to do it.
Exactly.
They're really pushing for no legislation.
Just trust us, bro.
We know what we're doing.
And then, meanwhile, they're following the move fast and break things approach.
And what did Sam Altman say recently, Jay?
It's like, ah, we're not going to worry about morality or anything.
That's not for me to decide.
And he's basically justifying sort of unleashing
erotic content or intimate relationships with, you know, between users and the AI.
It's like, yeah, we're not going to worry about the negative consequences to anything that we're doing.
That's not our problem.
We're just going to put it out there.
Yeah.
Well, that's like the tobacco company saying, yeah, here's your cigarettes, whatever.
You decide.
Have a fun.
When you guys saw the most recent
news that ChatGPT is starting to partner with different corporations to basically prompt you to buy things.
So when you ask it a question about something, it'll be like, well, here's a suggestion of something that could solve your problem and link you to something that you should buy.
I mean, we all saw this coming.
That right there
is the thing that existentially scares the living piss out of me almost more than anything else.
You know, I would frame it as the, have you heard the term information totalitarianism?
That's what we're talking about.
If you can control someone's information universe and AI gives you the ability to do that really well, then it doesn't matter if you have the trappings of democracy or freedom or none of that matters.
Yeah, yeah.
You control
everything if you control information.
Aaron Ross Powell, Jr.: And there's this sort of, I've seen some of the continuum, like people making charts that illustrate, like an infographic, where where AI is right now in terms of LLMs at least, we are still in the driver's seat.
And then there's this middle ground which we're starting to see where we might ask it a question and it answers not exactly what we want in order to change our buying habits or in order to change our perspective.
And then eventually it's just going to say, I know that you are running low on whatever.
I can just do that for you and just do it.
Right.
Like eventually it becomes part of it.
Isn't that helpful?
Yeah, exactly.
Yeah, and it can be helpful.
It could also be infantilizing.
Well, and it can also be,
it can, I mean, I think it can destroy people's personal financials.
Think about in-app purchases, you know, and kids.
Yeah.
You know, oh, I just spent $10,000 on my mother's credit card with in-app purchases.
Yeah, using your AI to make your
buying decisions, your investment decisions.
Here's an interesting thought, which I just had, is
what if someone trains AI on the last hundred years of social psychological research in order to learn how to optimally manipulate people?
Oh, I think they're already working on that.
Yeah, right.
I mean, why not?
Because there's an entire science behind how to
affect people's buying decisions.
Now we're going to set a rocket fuel to that with AI
to absolutely optimize consumer manipulation.
Right.
Yeah.
Yeah.
I think that ultimately that is the financial driving force behind some of these companies.
Yeah.
Everything comes down to ad sales.
Everything comes down to making money off of the buyer.
And we've got to remember, right that if we're not making purchases like if we're not contributing by buying a product we are the product you are the product yeah exactly all right and it's probably going to get much much worse what bob is going to tell us about that oh great so guys i'm sure you heard of this one hundreds of diverse public figures made the news quite recently by signing an open letter calling for prohibiting the development of artificial super intelligence.
This open letter was published by the nonprofit Future of Life Institute.
That's a U.S.-based nonprofit that campaigns against the dangers of AI.
So here's the statement.
This is what everybody's jumping on here.
We call for a prohibition on the development of super intelligence, not lifted before there's broad scientific consensus that it will be done safely and controllably, and strong public buy-in.
Okay,
so to be clear, they're not referring to AGI, artificial general intelligence that people talk most often about, especially in regards to large language models.
AGI is human-level competency across tasks, right?
What this open letter is about is ASI, artificial superintelligence, which refers to superhuman cognition across most tasks.
Okay, so this is just AGI on steroids beyond the beyond.
Is there a practical example, like a typical example of super intelligence?
Just go to movies and literature is all I could say at this point.
But
there's no example now.
But it's clearly something that is reasonable to anticipate.
Well, like the HAL 9000 or something.
Is that what we're talking about?
Yeah, he right.
He's more on the level of AGI, souped up AGI.
I wouldn't really necessarily classify him as a super intelligence.
So let me go to the website where the statement is, and let's get the latest numbers.
So right now there are 27,985 signatures on this statement.
What's really weird is that literally two hours ago, there were 4,000.
So, this has gone up by many, many thousands in just a couple of hours.
I'm not sure.
We're developing news story right now as you're speaking.
I'm not sure how high this is going to go, obviously, but the main concern here is with the people, you know, not the people or who knows who or what is signing this thing at this point.
It's all digital.
But
that's obviously a huge leap.
The focus on the news item, though, is on the many hundreds and hundreds, or maybe at this point, in the low thousands of well-known figures that have signed this, ranging from prominent AI researchers, Nobel laureates, other scientists,
all the way down to British royalty, religious leaders, and conservative media figures as well.
That doesn't.
So, yeah, so from essentially from Steve Bandon and Prince Harry to the godfather of AI, George Hinton, and Apple co-founder co-founder Steve Wozniak.
So, this is definitely not a coalition that you see very often.
It's probably one of the main reasons why it's getting this much attention.
Now, I wasn't too familiar with the Future of Life Institute.
The mission statement on their website does say this: steering
transformative technology towards benefiting life and away from extreme large-scale risks.
So, they definitely campaigned for that.
So, here's a few quotes from people that have now signed it.
Sir Stephen Fry, we all know, right, actor, actor, director, writer.
He said, to get the most from what AI has to offer mankind, there's simply no need to reach for the unknowable and highly risky goal of superintelligence, which is by far a frontier too far.
By definition, this would result in a power that we could neither understand nor control.
Prince Harry, Duke of Sussex, said, the future of AI should serve humanity, not replace it.
The true test of progress will be not how fast we move, but how wisely we steer.
So, yeah, we need some
wise steering.
Yeah, that was actually a decent quote.
I like that one.
Yeah, like the exact opposite of move fast and break fast.
Exactly.
Stuart Russell had an interesting quote.
He professor of computer science, Berkeley, director of the Center of Human Compatible Artificial Intelligence.
Oh, this was a good...
He's a co-author of the standard textbook, Artificial Intelligence, a Modern Approach.
So this guy
clearly is somewhat familiar with AI.
He said, this is not a ban or even a moratorium in the usual sense.
It's simply a proposal to require adequate safety measures for a technology that according to its developers has a significant chance to cause human extinction is that too much to ask
um
okay uh and i'm going to throw in a quote from sam an old quote from sam altman uh ceo of open ai um he did not sign this but he he is well known for for quotes like this one development of superhuman machine intelligence um
of humanity.
So we said that back in 2015.
Now, I can think of greater threats from non-artificial intelligences right now, but I'm just throwing that out there.
So clearly, ASI, artificial superintelligence, is a
terrible double-edged sword, right?
So on the one hand, there's the potential for staggering advances in general science, right?
Healthcare, quality of life.
The list goes on and on.
And there's two types of important problems that I think it could solve.
One are the extremely difficult problems, or even problems that we're not even aware of yet.
So, this is the scenario where, yeah, it could solve it in a week or a few days.
And otherwise, without that technology,
it would take us decades or even centuries to solve that problem.
So, that's the type of class of problem that I see an artificial superintelligence solving.
The other type of problem would be essentially unsolvable problems by near-human-level intelligences.
Like, it's like you're a dog looking at a trigonometry problem.
It just never, you know, humanity is just never going to be able to solve that problem.
But
a super intelligence could solve that problem.
I mean, that's all great stuff, right?
Yeah,
it could potentially be an amazing advantage to have such an intelligence at our command.
But on the other hand, the risk of unleashing an inherently unpredictable intelligence that makes Einstein look like a toddler, or worst-case scenario, makes him look like a paramecium, sure, that is justifiably, incredibly scary.
And so the downside here is just so extreme, right?
It warrants many types of reactions.
Some, you know, some are unwarranted, of course.
So billionaires are building bunkers.
Some people even want a dune-level moratorium on AI.
Thou shalt not make a machine in the likeness of a human mind.
The Butlerian jihad.
Yeah.
So I'm going to read the statement one more time.
It's brief enough.
I'm just going to say it again.
We call for a prohibition on the development of superintelligence not lifted before there's broad scientific consensus that it will be done safely and controllably and strong public buy-in.
So I think the two problematic sections here are kind of obvious, right?
Broad scientific consensus and strong public buy-in.
So good luck with both of those.
I mean, is that even feasible?
And who defines broad consensus and strong public buy-in?
Who defines that?
How is this implemented?
It just seems...
Democracy?
I mean, I know it's imperfect, but it's the best we have, right?
They're talking about voting.
Yeah, but I mean, do you really think that regular people like us would have a final say
on what that means?
I'm just saying.
What they're calling for is to have a more democratic process here and not to have the very few ultra-billionaires making decisions that are ultimately going to affect us.
That would be a wonderful step.
But my main issue here is that the statement
as it's written there seems to me to be very naive and unrealistic,
especially when you consider this care, especially and probably most egregiously, it ignores that 1.4 billion pound guerrilla called China.
Does this statement that people are signing, does it make any sense considering the fact that China and other autocratic countries would just plow full steam ahead
in ASI research?
They would not stop.
This would only stop some of the countries that actually
would would walk into this trying to be good about it you know it to put it very simply and i don't even necessarily trust our country anymore to handle this to handle this well right so so i'm i'm not saying let america do this maybe not i mean i wouldn't mind having nato control nato country well that's what i was going to say this is calling for like a unlevel resolution exactly we see how that works with climate change so yeah but i mean because in my mind the the parent pandora's box is open here we need plans to deal with it that are realistic and a prohibition just is not realistic because of countries like China.
I mean, to me, that's the bottom line.
You look at the past when governments have tried to stop things, it just goes underground.
It doesn't stop.
So, I mean, I think regulation is worth discussing, especially regulation that keeps the research open, right, and transparent as much as possible, not hiding it.
But don't you think they're also talking about hearts and minds here?
Yeah.
This reminds me a lot of the human cloning conversations.
I'm just going to bring that up because actually, that's a counterexample to what Bob said.
There actually is an international consensus.
There isn't a widespread treaty, but there is a general consensus that reproductive human cloning should not be done right now.
And that's
basically worked.
It's basically worked.
We've seen a few things around the edges, and like you said, in China, but we also saw a lot of shame.
Like, I think so long as there is a hearts and minds campaign and individuals collectively say, we're not going to stand for this, there's always going to be people who break the rules.
There's always going to be people nibbling around the edges.
But if the government isn't taking a centralized approach,
then I think that they stand a lot worse chance if it's happening around the edges.
I mean, I don't think you can compare cloning to
artificial intelligence.
You're just not good comparison.
Well, we have the technology.
I don't think there's a good comparison.
The motivation and the potential benefits as we see them now are far too great, far greater than cloning.
That people, there would not be a consensus
to limit this research.
But we could get to a consensus, Bob, if we keep pushing it.
That's where this kind of thing can make a difference.
But how we could use another example, and that's like nuclear proliferation.
Obviously, it's proliferated to some extent, but there is a pretty broad consensus against further proliferation, especially
this world from being engulfed in nuclear fire?
Mad.
Mutually assured destruction.
Well, but there is mutually assured destruction with AI.
But, Bob, I want to push back on your nuclear proliferation thing, too, because that's very simplistic, and we don't know that.
And in fact, we have a nuclear non-proliferation treaty.
There are international arms treaties.
There is an international organization to limit nuclear proliferation.
Again, there's an infrastructure in place to limit nuclear proliferation.
It's not just MAD that does it.
But we also do break it all the time.
Yeah, so I agree.
It's not a perfect analogy,
but MAD,
because
two countries on opposite ends of ideologies had this capability is the reason why we have not seen nukes go off
since.
But you're talking about using nukes.
I'm talking about other nations acquiring nuclear technology.
Yeah, and that's the difference I was making.
There's not a proliferation, not nuclear use.
Yeah, there's a difference between having it and using it.
And I think maybe that's where the analogy does make sense.
We have to have multiple places have equal opportunity to do research in this area.
But there has to be a massive regulatory infrastructure, a global agreement, which is very hard to get to, that says we will not unleash this on the world because once we let it out, we can't put it back in the bottle.
Right.
Well, my argument is that it's already out.
This is already unstoppable.
Yeah, that's why it's in some ways harder.
A nuke is a really good thing.
I think so.
Exactly.
Exactly.
Bob, what are you saying?
AGI is not inevitable at this point.
That genie is not out of the bottle yet.
And
this count is going one step.
Beyond AGI.
But still, I know.
But my point is, this is still, Pandora's box is open.
Do you think
countries?
Steve, what would it take China to stop doing research in AGI and ASI?
What would it take?
What would it take?
Well, the question is a research.
It's using it.
But that's the question, right?
Is it such a small incremental change that we don't notice when it flips over?
And that's why the nuclear arms analogy doesn't really work.
Because dropping a nuke is a really obvious thing.
Aaron Powell, Jr.: But
developing nuclear weapons, it's hard to do that completely in secret.
Although you can do a lot of it.
You can do a lot of it.
It's more about the...
But dropping a nuclear weapon, you can't do that in secret.
And the truth is...
Are you making these small incremental improvements to the AI that over time result in what we're talking about?
Or is there an obvious, you know
the current AIs that are that are in use are not on the path to AGI.
Correct.
So this is good this would require
I think we're probably still decades away from AGI and it would require
a lot of investment and a lot of specific development.
And I think there is time.
I'm just saying it's not hopeless.
It's not inevitable.
It's not out of the bottle.
There is time to start to develop in international institutions and treaties and infrastructure and conversation and standards and intellectuals weighing in, et cetera, et cetera, to get to this consensus like we did on human cloning, like we did on nuclear proliferation, that we are not going to move full speed ahead towards AGI or ASI until we know how to do it safely.
But what I'm saying is, Steve, you think I don't agree with that?
Of course I agree with that.
I just think it's very naive because you you always have, no matter all those good things happening, there's still countries like China that are just not going to care and plow ahead.
What do we do about countries that could have it a generation before other countries because
they're not cowed by these potential problems?
That's the problem to address, Steve.
That's the naive problem.
I agree, but Bob, it's possible, it is possible that if there's enough international consensus, that that could be sufficient pressure on countries like China to go along.
My main problem with this statement is not that it's naive, it's that it doesn't go far enough, and it might actually be counterproductive.
And it doesn't account for
the autocratic countries that will
steam ahead, that will not be slowed.
I agree that that's a problem.
I'm now focusing on an entirely different problem, which is that by
saying the point of danger is 20, 30, 50 years in the future when we get to ASI, it actually creates a false sense of security about our current level of AI, which is more than sufficient to cause a lot of problems.
I don't necessarily think that AGI or ASI is necessary to have an AI apocalypse.
We can have it, you know, just with
the narrow AIs that we have now, depending on how they develop them and how they're used and whether or not they're regulated, et cetera, et cetera.
And so
I would use this as a starting point.
Yes, like this is like putting it way out in the future for our worst case scenario, but we have to talk about AGI and we have to talk about the current AIs, which need to be regulated.
And we need to think very carefully about how they're being developed, how they are being implemented.
Otherwise, we're going to have a replication of all the downsides of social media, but times a thousand.
And that would give us a framework.
I agree, Steve.
No, I was treating that as kind of like out of scope for this specific specific talk because this specific talk deals with with uh artificial superintelligence which is something that's not
discussed that often but it's not out of scope in the specific thing that i said this actually creates cover for the current ais by making them seem not dangerous
by focusing on this future potential danger as if that's the ai that's the danger from quote-unquote ai they should make it clear that this doesn't mean we're safe up until that point i agree All right.
We have to shift to a very serious issue now.
We've talked about these superficial issues now.
Evan, you're going to tell us about this ghost in Connecticut.
Let's get to the hard science here, folks.
Connecticut ghosts.
Wow, Connecticut's our home state, right?
Connecticut's known for many things.
All right.
But for this particular news item, I want to touch on two of the things that Connecticut is well known or relatively well known for.
Number one, we have a newspaper in this state called the Hartford Current.
It is America's oldest continually published newspaper,
1764, and ever since then.
So that's interesting.
Number two, Connecticut is home to a legend in the world of ghost stories, the white lady of Union Cemetery in Easton, Connecticut.
That's world famous right there.
So there you go.
I'm touching on these.
We're on the cusp of Halloween, Bob, in case you didn't know.
Oh, wait, is that, oh, damn.
Okay.
Write, check your calendar.
Oh, it takes me by surprise every year.
Yeah.
Wow.
Didn't see that.
Speaking of surprising, is it not surprising whatsoever that the most prominent newspaper in Connecticut is running an article about the most prominent ghost in Connecticut, of course?
And what do you get when you combine these two things?
Well, you get a news item that's so unworthy and ridiculous that it would be an insult to dead fish if you tried to wrap it in this article.
The headline reads, Connecticut's famous ghost is known to frequent a cemetery.
Seen her?
Why, a paranormal investigator is asking?
Well, I can answer that rhetorical question right off the bat, because tis the season, and desperate newspapers will glom onto anything that might put eyeballs on their product.
But the article basically reads like a promotional ad for a group of local paranormal investigators.
I'll bore you with just a couple of select passages from the article just so you can get the flavor.
Paranormal investigators and amateur ghost hunters alike have been fascinated for years by the sightings of the white lady of Easton in and around Stepney Cemetery in Monroe and Union Cemetery in Easton.
More on that soon.
Now, a paranormal team is taking a deeper dive into the legendary apparition and asking for the public's input.
The result will be a documentary about the female ghost with the long dark hair and flowing white dress, said project leader and paranormal investigator Nicholas Grossman.
Grossman even believes that he may have captured actual footage of her apparition, although he doesn't share it, but that's totally, you know, beside the point.
His fascination about the lady heightened one day, he said, when his psychic colleague, someone named Diane, and their video technician Hector, noticed something unusual.
They said this, quote, the cemetery, usually a hotbed of paranormal activity, was eerily quiet.
Oh gosh, a quiet cemetery, how unusual.
But the psychic used her pendulum to communicate with a spirit who delivered a cryptic message.
You will see the white lady tonight.
And then while he was driving down the road later that evening, this is Grossman's, the guy, the paranormal investigator, a woman in a white dress flew across in front of my car.
She appeared completely physical, not transparent.
She glided across the road in a way no human could.
It was so real, I swerved to avoid her.
Grossman says he regrets not having his video camera on that day.
Darn it.
Dang, he missed it.
Oh, maybe next time, maybe next time.
Then the article goes on to promote his ghost hunting group.
You know, they're encouraging people in the area to contact them to share share their stories and hallucinations of their interactions with the white lady so that they can incorporate it into their upcoming film project.
Well,
since you asked, I do have a story to add.
Because you see, from the years 1982 through 1986, I lived in a house about one quarter of one mile from Union Cemetery in Easton, Connecticut.
And I and some of my high school friends would frequent that cemetery regularly.
We would ride ride our bikes through there, you know, just muck around in there.
We conducted some scientific experiments, you know, such as seeing if tubes of rubber cement are flammable.
They are, by the way,
and some other non-damaging mischief sort of events that teenage boys are wont to do when they explore their surroundings.
But before I tell you about my results from my five years of basically living next door to this cemetery and therefore next to this ghost, I'll give you a little bit of background on the legend of the white lady, who's been cited for, what, decades, many decades.
There have been reports of the white lady.
But the white lady, in an interesting way, is very much described the same way as, oh, I don't know, every other white lady account of similar ghost sightings that have plagued the human mind for as long as, I don't know, there have been human minds.
There are stories like this everywhere.
This is not unique to Connecticut, certainly not to this cemetery, and throughout cultures all across the world all over the world, basically.
Look, she has long dark hair, flowing white dress, an uncanny ability to peer out of nowhere, and apparently,
apparently,
right in front of moving cars where drivers have to slam on the brakes, sometimes
convinced that they actually hit her, but then they find there's nothing there.
Oh my gosh.
Local folklore says
this was a woman drowned by her husband over three centuries ago near a watering hole across from the cemetery.
And like so many other lady-in-white tales, it's a story that's emotionally satisfying, but they say historically fuzzy at best.
I say non-existent, frankly.
There are no records of a drowned wife, woman, or any other person in Easton from the 1700s or the 1800s for that matter, or the 1900s or ever.
But that doesn't stop the folklore.
That doesn't stop the story from gaining a life of its own.
Almost every version of this story anchors the haunting at to the White Ladies Union Cemetery in Easton, again, which they say dates back to the 1760s, scant evidence for it.
But this is where Ed and Lorraine Warren, the Warrens, focused their attention in the late 1980s.
And it was that one fateful night, September 1st, 1990, Ed Warren, he was on the seventh night in a row of filming at the cemetery, where he captured the video of the white of the lady in white, a woman walking across the cemetery.
And he publicized it in his 1992 book called Graveyard.
Now,
Bob, Jay, Steve?
Yes, sir.
Did he.
We were shown the footage of the white lady, right?
We were.
What were your thoughts about that?
So I asked Ed, we asked Ed to show us the best evidence you got.
What's the best?
You know, because he's claimed to have tons of evidence.
All right, just give us the absolute best.
This is what he showed us.
His VHS recording of the white lady in Union Cemetery.
And our reaction was, first of all, it it was crappy evidence.
But it was at that perfect distance to give you a suggestion that something was happening, but not be able to see what it was.
So, was that a living person in a sheet?
It absolutely could have been.
It was not of sufficient quality to rule that out.
And that, I think, was absolutely by design.
Especially if that's your motivation going into this thing in the first place, where after seven nights of this, you're not really getting anything.
How many more nights are you going to do this, Ed?
Seven nights is enough.
Let's get somebody to go, you know, then you do a blob squatch.
Yeah, it's a blob squatch, right?
And I have to add, we asked Ed, you know, and this is at the point where we were kind of still being cooperative and friendly with him.
And we said, yeah, that's interesting.
We'll be happy to take a close look at it.
Can you give us a copy of that tape?
He refused to give it to us for analysis.
That was the end.
Of course.
Right.
That was the beginning of the end of the
joint venture that we had with him for those moments.
And
he did give us a video of somebody disappearing, which, of course, we utterly demolished.
But that was somebody else recorded that.
That was some flunky of his.
That wasn't him.
So he didn't have his own credibility on the line with that one.
Correct.
Yeah.
And he still didn't believe our assessment.
Well, yeah.
He didn't accept it.
Whatever you say,
that kid disappeared.
Sure, he did.
That's right.
That kid,
Gary would say that every time he saw that kid disappear.
Oh, Ed, Ed, Ed.
So, look, as far as I'm concerned,
because I did do some more research into this, into the legend of the white lady.
I looked for articles.
I looked for stories.
I looked for reports.
I looked for account.
I don't even know really that this story had much legs even before Ed Warren.
got, you know, became kind of the toast of the paranormal world as he was on the ascent in those years.
Right?
You know,
he seems to be the one to have suddenly given a name to this particular phenomenon.
Sure, maybe, because again, how many other ghost stories are there of things being seen or a woman and vague descriptions of things?
And suddenly, Ed in his 1990 encounter kind of, you know, codifies this thing in his own way.
And then the media start following it.
Okay, now this is the white lady of Union Cemetery because Ed Warren says it.
So I'm not even sure this thing really even existed before Ed Warren.
That's my take on it.
And it's not even that creative.
Like how many, like you mentioned, how many towns across the world have a ghost dressed in white
walking a graveyard.
It's cliche.
It's not even creative.
I agree.
It's like a flying saucer with gray aliens.
I mean, come on.
You could do better than that.
But I want to do my official contribution to the pool of information that Grossman and his team are collecting.
Here's my first-hand account.
Here you go, Mr.
Grossman.
I spent more days and nights in and around that cemetery than many other people can claim, frankly, especially people who are investigating ghosts.
In all my many hours,
hours upon hours spent at Human Cemetery, I never saw a thing even coming close to a ghost sighting.
We never even had a single noise that scared us in the middle of the night or something that caught the corner of my eye.
Absolutely zero.
We were there in the daytime.
We were there in the nighttime.
And we were occasionally out well past midnight around that graveyard.
This was all before I even became a skeptic of the paranormal, right?
I believed, you know, I believed in anything and everything at that point.
It was a 13, 14-year-old kid, just messing around, having laughs with friends, you know, again, riding bikes.
We didn't vandalize.
We didn't really do anything like that.
We were just.
It was your own personal stranger things.
Exactly.
It was all fun.
Yeah, it was.
We were kids on bikes then.
And that was it.
So there's my contribution.
Nothing happened
upcoming.
Yeah, exactly.
Not a darn thing happened.
So I hope that somehow makes it into the documentary.
Well, happy Halloween, everyone.
All right, Jay.
It's who's that noisy time?
Noisy time.
All right, guys, last week I played this noisy.
What do you people think?
No, it sounded like a water pick from a dentist's office, you know, where they fire that laser-sharp water into your teeth.
Oh, I hate that sound.
Yeah,
I had some fun, varied guesses in here, but I can only talk about a few of them.
Michael Blaney wrote in and said, hi, Jay, I'm guessing the call of the Jacobin hummingbird.
He says that Guinness lists it as the bird with the highest pitched call.
That's a fantastic guess, Michael, but I'm sorry you are incorrect.
Another listener named Hunter Richards wrote in and said, hi, Jay, forgot it was Wednesday.
If it's not too late, I I think the critter in last week's noisy is a small mammal like a flying squirrel.
Maybe a pygmy loris, but I don't know if those animals are in proximity to humans, usually so a flying squirrel.
As a reminder to the listeners of this show, only submit one thing, or at least give me your final guess.
You could say, I think it might be this, this, or this, and give me your final guess.
Because I'm going to go with, if you don't, if you give me multiple and you don't specify what your actual guess is, I can't, I can't count it.
Anyway, Hunter, thanks for that.
You are incorrect.
Louis Morales said, hi, Jay, this week's noisy sounds like a dolphin to me, probably at a sea park.
Okay, but he says it might be a bird, too.
And then he says I'm sticking with dolphin.
Anyway,
I did have a close guess, but no winner this week.
The closest guess I got was sent in by a listener named Evil Eye.
Evil Eye has been listening to the show, I think, since the very beginning, and also is a very regular guesser here.
He said, I'll just go ahead and fail right out.
He says, it's a squirrel monkey.
This is a very close guess.
What's a squirrel monkey?
It is not a squirrel monkey.
It's a
squirrel.
Is that a thing?
Yeah.
Yeah.
I listen to the sounds that squirrel monkeys make, and it's kind of similar, but not fully there.
What this actually is,
is
I'll tell you a couple things.
They live in eastern rainforests in Brazil.
They are arboreal.
You know what that means, Steve?
Yeah, they live in trees.
Thank you.
And can you guys want to make a guess?
Yeah, I'll guess they're arboreal.
Correct.
It's the arboreal arboreals.
No,
this is called a lion tamarin.
This is a little monkey guy.
Yeah, a lion tamarind.
They weigh up to 900 grams or 32 ounces.
They're about 30 centimeters high or 12 inches long, high, however you want to do it.
With tails about 45 centimeters or 18 inches long, they jump through trees.
They use their fingers to hold onto the branches.
They use their claws to dig under the bark to search for insects to eat.
They also eat some snakes, lizards, and small fruits.
They are unfortunately all endangered or critically endangered, in part because their habitat is, of course, being destroyed, and climate change is a big part of that.
So, let me play this for you again.
Keep in mind, this is a little monkey.
I need to ask the audience a very serious question.
I just played a sound that sounds exactly like a million birds all over the planet.
And the vast majority of you did not guess a bird.
Maybe it only sounds that way to you.
Well, or you know, I mean, come on.
It made the tweet sound.
I mean, it's like my expectation was to be flooded with bird guesses, and it didn't happen.
And I need to know what's happening, what is going on out there.
Steve, did you think it was a bird?
No.
What am I hearing?
Is it a high-pitched, squeaky thing that repeats itself and sounds like a song?
It's not tweety enough.
Kara, is it tweety?
I don't know.
I'm not a bird listener too.
Maybe I'm getting old here.
Maybe my.
I don't know.
I was just thinking bird all the way.
Anyway, okay.
So that was this week's Noisy.
Thank you, everyone, for guessing.
I have a new Noisy this week, and this Noisy was sent in by a listener named Jenny Navis.
Very strange noise.
If you think you know what it is, you can email me at wtn at the skepticsguy.org.
You can also send me in any noises that you heard or have happened upon on the internet that you think are cool.
I will take all in consideration.
Steve Novella,
we like to leave Connecticut, and we ask Kara to leave her domicile in Los Angeles and join us for live entertainment because we give it and we want want the people that listen to this show to receive it.
Have I said anything wrong so far?
Nope.
Okay, so where are we going to be?
We're going to Seattle, Washington, and we're going to Madison, Wisconsin.
January 10th of next year, that's 2026, we will be in Seattle, Washington at Washington Hall.
Cool.
And Saturday, May 16th of 2026, we will be in Madison, Wisconsin at Atwood Music Hall.
You can go to our website where the links for the tickets will be available.
And on top of that, we will be doing, of course, private shows on both of those weekends.
It's very likely that we'll be doing those private shows the Saturday morning, which is before the nighttime event.
So you can do both of them in one day.
They typically run from like 11 to 2 or 12 to 3.
I will finalize those details, but tickets will be available this weekend.
And we're going to actually try something new this time around, guys, because we have gotten requests over the years that people want something a little more exclusive and a little more private.
That's more of like just socializing, and there's no shows involved.
So, we decided that we're going to try on both of the Friday nights before the shows that I just mentioned.
We will have a very, very limited ticketed event where you're just going to hang out with us and you're going to basically do whatever we decide to do, right?
It could be anything.
We'll give more details on that as well, but you will see tickets up for those as well.
All right.
Thank you, Jay.
We have one email.
This comes from Keith from, by coincidence, Seattle, Washington.
And Keith writes,
I saw this article referenced in a typical online argument, and he has a link to the article.
The commenter extracted the following one-line quote.
The risk of COVID-19 also increased with time since the most recent prior COVID-19 episode and with the number of vaccine doses previously received.
He goes on, but that's the key question he's asking us.
So basically, there's a study that showed exactly that, that the risk of getting diagnosed with COVID-19
increased with the number of
previous COVID-19 vaccine doses.
So of course, this is going around with the claim like, see, the vaccines, not only do they not work, they increase your risk of getting COVID-19.
But wait, did they compare it to people who weren't vaccinated?
Yeah, because I think the risk of getting COVID increases with time.
Well, it increased with time since your most recent COVID infection.
And also,
which makes sense.
The longer we live, the more likely we are to get COVID.
I think they looked over like a six-month period or something.
Oh, okay.
Okay, so what, you know, as you might imagine, that's not what the study showed.
Okay.
Meaning it didn't show that COVID vaccines don't work.
It showed the opposite.
It showed that, you know, if you look, the core finding of the study and what they were looking for, the actual question for the study was,
how effective is the vaccine given the mutations in the virus?
So
are the vaccine still providing protection given that we know the virus is continuing to mutate?
And they actually correlated it with different waves of infection and what strains were dominant during that period of time.
So they found a number of things.
One, so again, this is just an observational study, and so you know what I'm going to say, right?
Observational studies are subject to confounding factors.
And this study in particular has confounding factors galore, right?
So that's the huge grain of salt you have to take this with.
They found that overall infection rate during the period of times they were looking at was 8.7%.
So that's fairly low.
So you also have to keep that in mind as well.
There's already a pretty low infection rate.
And so
even subtle effects can have a seemingly dramatic effect on the relative risk within that, you know what I mean, of getting affected.
What they found was the estimated vaccine effectiveness was 29%
for the BA.4/5
dominant wave.
It was 20%
for the BQ negative wave and only 4% effective for the XBB dominant phase.
Right?
So 21, 20%, and 4%.
So keep in mind, so that means it worked, right?
The vaccine reduced the risk of getting infected.
But the more different the strain was from the vaccine strain, the less effective it was.
Right.
Okay.
So of course, that is inconsistent with the notion that increased vaccine doses increased your risk of infection, right?
So first of all, that's a relative increased risk on the background of an overall decreased risk just from being vaccinated.
Does that make sense?
Yeah, that's kind of what I was getting at, but like in a less complicated way.
Yeah.
Is that like the more vaccines you get, it means that you are going farther in time and there are more the risks fluctuate in the general population.
So there will be times when you get a vaccine and you're more likely to get COVID, not because of the vaccine, but because COVID is circulating more.
Yeah, I mean,
they tried to control for that stuff as much as possible.
Again, they're doing an observational study where they're just looking at a cohort of people and saying, did they get infected or not?
And what was their vaccine status?
So
there's a couple of things to point out here.
Obviously, and
the authors do not believe that increasing the number of doses actually reduced the protection of the vaccine or increased your risk of getting infected.
There has to be a confounding factor here.
That's the only thing that makes sense.
And they discuss a few possibilities and they try to control for a confounding factor.
It's impossible to do that completely.
You can try.
So, you know, you could say, hey, it's possible that people got more doses because they were in a high-risk group.
And being in a high-risk group increased their risk of getting infected.
But with observational data, you can never know what the error of causation is.
That's like saying being on a diet correlates with being overweight.
Yeah, because people who are overweight go on diets, not the other way around.
It's the same kind of thing.
Also, keep in mind, this was not the risk of having COVID.
This is the risk of being diagnosed as having COVID.
And so you also then, that introduces all the confounding factors of who gets diagnosed.
Maybe you're more likely to get diagnosed if you're also somebody who was more likely to be up to date on your boosters, right?
You're getting more health care.
You're more likely to show up in the system.
Yep.
A lot of people got sick and just didn't report it.
Exactly.
So, yeah, that's that they, and this is like off the top of my head, kind of obvious confounding factors that are almost certainly at play here.
And so you can't conclude that this in any way calls into question the effectiveness of the vaccine.
But I would also challenge just this, the whole approach of this study.
Again, it's fine as far as it goes, but it's not the be-all and end-all of COVID vaccine effectiveness studies.
It's looking at, in fact, the weakest indicator of vaccine effectiveness, which was having been diagnosed with having COVID.
Because this is not taking a look at severity or anything else.
There was also a very recent New England Journal of Medicine article
that looked at outcomes that are, I think, much better markers of vaccine effectiveness for a number of reasons.
So, this was a six-month follow-up study where they looked at the
estimated vaccine effectiveness.
The reduction in COVID-19-associated emergency department visits was 29.3%.
The reduction in COVID-19-associated hospitalizations was 39.2%.
The reduction in COVID-19 associated deaths was 64%.
So you were 30% less likely to go to the emergency room, 40% less likely to get admitted to the hospital, and 64% less likely to die.
So obviously we care a lot more about those outcomes than having a mild case of COVID, right?
So, and we've known for years that the vaccines are better at preventing serious illness than any illness, right?
That's That's another sort of example of you're looking at a subset of the data, it's not giving you the full picture, and
there's lots of problems with this data which you cannot gloss over.
But also, that's also just an individual study.
So the most recent systematic review I found, this is a review of 284 articles found, quote, all the approved vaccines were found safe and efficacious, but mRNA-based vaccines were found to be more efficacious against SARS-CoV-2 than other platforms.
So, all of the vaccines work.
And if you look at the totality of the literature, that's what it shows.
But, of course, if you don't know what you're talking about and you have a political agenda, you could look at this one study and say, see,
vaccines don't work.
But it is absolutely not true.
Yeah, and if you want the details, I wrote about it on science-based medicine.
Okay,
let's go on with science or fiction.
It's time for science or fiction.
Each week I come up with three science news items or facts, two genuine and one fictitious, and then I challenge my panel of skeptics to tell me which one is the fake.
We have a theme this week and the theme is AI.
Flatulance.
Same thing.
Farts.
The theme is farting.
Okay.
Okay.
I didn't plan on having a theme, but sometimes I come across an interesting news item.
Like, I could just flesh this out into a theme.
So, Steve, is this going to focus on our expertise that we have all developed in our old age about passing gas?
Or what's happening?
Yeah, your flatulent expertise may come into bear.
It might help you.
I'm not going to talk about technique or anything, Jay, if that's what you're asking.
Or naming conventions.
All right.
Here we go.
Item number one:
greater than 99% of flatulence is comprised of odorless gases.
Item number two, up to 50% of human flatulence is comprised of hydrogen gas, which is flammable.
And item number three, there are several approved tests for volatile organic compounds in flatulence as an early screen for colorectal cancer.
Jay, as the resident expert, why don't you go first?
Okay, I mean, you know, I don't know about resident expert.
Steve, the first one we have here is that greater than 99% of flatulence is comprised of odorless gases.
I mean, I think 99%
is a lot, but I would think this is, if this seems like science to me, there is quite a bit of gas passing happening, and I think if we, if every one of them smelled, we would all know it in a big way.
Because, you know, I don't know if we discussed this on the show before, but like passing gas is like a true sign that your body is functioning and that you are digesting and processing food.
And, you know, it's a very important part.
You know, it's like it's this part of
having a metabolism.
So that said,
you know, I think that there's lots of gas passing happening with people every day.
And this one is probably science.
Second item: up to 50% of human flatulence is comprised of hydrogen gas, which is flammable.
Hydrogen gas, because I thought it was methane, and I could be embarrassing wrong on this one, but that's that's the gas I thought it was no I don't think it's hydrogen that one is definitely on my I don't think so list number three there are several approved tests for volatile organic compounds called VOCs in flatulence as they as an early sign colorectal cancer thank you for that I think that's science I don't think we're farting I don't think we're fired farting hydrogen I would have heard of that, right?
And does hydrogen even smell?
I've never smelled it.
I wouldn't wouldn't be surprised if it does smell, but I don't know.
I just haven't heard a lot of that.
I haven't heard any of about any of this, and I don't think I would have not heard it at this point.
I think, number two, the hydrogen is the fiction.
Okay, Evan.
Okay, number one,
comprised of odorless gases.
So, wow, that means that that's a 1%
is responsible for, yeah,
that seems wrong, which makes me think it's right.
Right, because I mean, I don't, you know, So I, I, yeah,
that one will probably wind up being science, I think, for purposes of this game.
Uh, the second one, about 50% of human flatulus, up to 50%,
uh, is hydrogen gas.
Can't underestimate the amount of hydrogen that's out there inside, all around.
You know, what?
Most everything is hydrogen, right?
So, to clarify, this is hydrogen gas, this is H2.
This doesn't mean hydrogen as part of other compounds.
Ah, okay.
All right.
Well, that does change the math.
This is the one Jay said was fiction.
Maybe it is fiction.
The last one here, several approved tests for VOCs as an early screen for colorectal cancer.
This doesn't seem right.
Approved tests.
There are several approved tests.
But we've seen...
commercials, we've seen other things for early detection, and they're not this.
It's actual, you know,
fecal matter that you have to look at and stuff.
I hadn't heard anything about
what going into an office and letting out your gas, and then they can screen for it.
I haven't heard that at all.
So I don't know.
Sounds like you made that one up, Steve.
I'll say the VOCs.
I'll say that one's the fiction.
Okay, Bob.
It makes sense that 99% is odorless.
So that one's probably fine.
The second one, though, up to 50% is hydrogen gas.
That seems like a lot, but I think the key words there is up to.
So that might make the difference.
And then this third one, I'm not sure.
I'm skeptical that they've got approved tests for that.
I've never heard anything about it.
This could be the one that's hmm.
Up to.
All right.
I'll say the up to is changing my mind on the second one there, up to 50%.
I think typically, I don't think it's that much, but up to is just killing me here.
So I'm going to go with Evan and say VOC fiction.
Okay, and Kara.
Yeah, I'm leaning in that direction too.
I think that it makes sense.
The one that seems the most like science is the 99%,
because sometimes farts don't smell.
And so you would think that it wouldn't be a large percentage of compounds that comprise the smell.
It's probably just like one thing, like I think it's sulfur.
And if it's only a tiny, tiny bit, sometimes there's even less, or sometimes it maybe it doesn't have that compound.
So that would make sense to me.
But the two, I'm sort of torn between the two, but I'm leaning in the way of Evan and Bob.
I think you can light farts.
Don't do it, though.
Sounds very dangerous.
But I don't know if it's because of hydrogen or other flammable gases.
But Bob kind of convinced me with the up to.
Whereas Like I remember doing some stories years ago about mechanical noses and this idea of like dogs smelling cancer or trying to produce tests that can smell VOCs for different things.
And I think that research is still not where they want it to be.
I agree with Evan, like there are poop tests for screens, for colorectal, for people with like a normal risk.
And then obviously,
you know, colonoscopies and things like that.
But I don't think anybody's getting tested for cancer by farting into a jar.
So I'm going to call that the fiction.
Okay.
So you guys all agree on the first one.
So we'll start there.
Greater than 99% of flatulence is comprised of odorless gases, which means that less than 1% are smelly.
You guys all think this one is science.
And this one is
science.
It is science.
That is correct.
Yes, less than 1%
of the gases in flatulence are sulfur compounds, which are responsible for the odor.
Yes, Kara is correct.
is mostly hydrogen sulfide.
The rest is odorless gases.
I'll give you the breakdown later, though, because that obviously carries over to number two here.
For number,
let's talk about number two and number three for a bit, because you guys made interesting comments about them.
Number two, up to 50% of human flatulence is comprised of hydrogen gas, which is flammable.
So there's never going to be one figure for flatulence because it's so variable based upon diet and gut flora and other variables.
So, right, like you could, there's never going to be one figure for what is the gas constituent of flatulence.
There's so many variables in here.
It's always going to be a range.
And I'm liking this up to.
Right?
So, yeah,
it was always going to be up to.
I would never, there's no way I could ever say in a statement like this, 50% of flatulence is hydrogen gas.
That
no such statement could ever, like that could be true.
And on the third one, Evan, because a test is approved, doesn't mean that it's used or that it's useful enough or cost-effective enough that it's in general use.
Ah, crap.
Way.
All right.
So let's go.
Let's go back to number two.
Up to 50% of human flatulence is comprised of hydrogen gas, which is flammable.
Jay, you think this one is the fiction.
The rest of the rogues think this one is science, and this one is science.
This is science.
Yeah, so it's there's a lot of variability here.
It's like 20 to 50 percent, depending on really depending on your gut microbes and your gut flora.
But this is based upon a recent study where they found that hydrogen is a metabolic mediator of gut flora way more than we thought it was.
So, some microorganisms create hydrogen gas, and other microorganisms eat hydrogen gas.
Oh.
Yeah, so a lot of the gas that's produced gets actually eaten by other microbes.
And then, some of it you burp out, and some of it you fart out, right?
So, how much an individual farts out depends on how much they're making and how much they're consuming and how much is left over.
So, that's why it's always going to be variable.
And 20 to 50 percent is the range that that is most resources I found are giving.
So maybe the average of like 30% or so, but it's still a lot.
That was way more than I thought, which is why I included that.
Yeah, that seems like a whole lot.
Yeah, it's a lot.
It's more than we thought is the answer.
It is actually more than we thought.
This means that there are several approved tests for volatile organic compounds, and flatulence as an early screen for colorectal cancer is the fiction.
Because, yeah, what you guys were saying about this one was otherwise correct.
There are VOC tests for lots of things now,
the volatile organic compounds.
There's a lot of research looking at measuring VOCs in breathalyzers and also in flatulence, but they're not quite there yet.
The ones that
are working are looking for VOCs in actual fecal samples.
So, Evans, correct?
You're looking at fecal samples.
But they are still looking at VOCs.
Yeah, but they are looking for VOCs, is one of the things they're looking for.
So,
this is an up-and-coming thing, and they're hoping that they'll get to the point where they could just do a breathalyzer because you don't have to get it going out the bottom.
You can get the same gases come out the top to some extent.
But if you're looking specifically for, and it's not just cancer, it's also for
other GI diseases as well,
irritable bowel syndrome, for example, or
gluten insensitivity.
Let's talk about the percentage of gases in the gut.
So it's mostly what?
What's the most common gas in farts?
Methane.
Nitrogen.
Because most of it is swallowed air, right?
So you swallow a lot of air that you swallow from eating.
Chewing gum actually makes you fart because you swallow more air.
Oh, interesting.
So there's also, therefore, some oxygen.
There's some carbon dioxide, about 9%.
So it's 59% nitrogen.
These are average figures.
Again, it's all hugely variable.
About 9% carbon dioxide.
Methane, you know, it's anywhere from 7 to 30%.
Methane, also combustible.
So that's the two things that
you can light your farts on fire.
And the two things are methane and hydrogen.
Don't do it.
Don't do it because you'll burn your ass.
And then oxygen is like 4%.
And then the sulfur compounds that give it the odor are less than 1%.
Again, these are average figures.
The range is huge for all of them
because of variables.
How much do people fart on average, do you think, per day?
In like liters?
Oh, God.
That's so hard.
How much?
Well, wait, can you tell us how much is an average fart?
Like, how big?
How many liters is an average fart?
Well,
there's the total volume and then there's the number of times you fart per day.
And then you could figure it out from there.
So one to two liters per day is
average.
That's average.
And some people are going like like that.
It's something like 15 to 23 farting events.
Wow.
Men fart more than women.
Why?
But women's farts smell more than men's.
Interesting.
Wow, that's.
These are all averages.
These are all just, obviously, there's no typical thing.
This is just
a difference between men and women.
Well, yeah, I wonder if it's biological or social.
I wonder if women hold them in the middle.
Maybe.
And then that percentage might creep up.
Fewer farting events, but
same volume.
The same gases, exactly.
Guys are a little bit more free.
A little more bravado.
So you're going to have a few more unsmelly farts than we are because you're farting more often.
Now, the volume and musical characteristic of the farts are almost entirely determined by the anus.
Yeah.
Does your butt cheeks get involved in that?
If they're big enough, I guess they could get involved.
So we're not talking about all outgassing that the body does.
We're just talking about the same thing.
No, we're just talking about...
Yeah, through the back end.
Anus-based.
On the bum-bum.
Okay.
Because you combine it with burps and other things that the body emits.
Oh boy.
Yeah,
that's more.
More gases.
Wow.
We are expending
fluids and gases of every type through many orifices.
And don't forget dander.
So, Steve, what did you say about the music quality?
It totally depends on.
It's mostly determined by the musculature of the anus, yeah.
And also for the
that's wrong.
On the key of C.
That is wrong.
It's also, it's not wrong.
It's also
the pressure that it's under is also a factor.
And butt cheeks have no say in this?
I didn't say they have no say.
I said primarily.
If they're big enough,
they have a say.
Oh my gosh, I can't believe it.
But if they're not in the way, you can still have musicality.
So if it's like, for example, you have low-pressure, loose musculature, it could be silent.
Yeah.
Usually the butt cheeks by themselves are not enough to produce.
They may modulate the sound, but they probably won't produce a sound.
Bob, what do you got?
What do you know?
I'm not going into details.
Well, Bob doesn't want to toot his own horn.
Let's move on to the quote.
All right, Evan, give us the quote.
By all means, let us agree that we are pattern-seeking mammals and that...
Owning to our relentless intelligence and inquisitiveness, we will still prefer a conspiracy theory to no explanation at all.
Well said by Christopher Hitchens.
Yeah, I have to unfortunately agree with that.
Yep.
That the allure of any explanation over no explanation is pretty great.
Yep.
Yeah, we love filling the void with whatever.
Well, and that's, I think, one of the central theses of that frontline episode about RFK is like, how could this happen to dad?
This is so horrible.
This is so...
Like, I can't explain this.
And we think back to Alex Jones.
Remember his whole thing about Sandy Hook?
And we were like, why did people actually believe him?
Like, we get why he made it up because he's a horrible person, but like, why did people believe them?
And one of the theories set forth by social psychologists was that some people said, this is so horrible.
I can't imagine it to be true.
Yeah, a way to cope with the actual horror of the true event.
Yeah.
Isn't that wild?
It's fascinating.
It is fascinating.
The lengths we'll go to
to
make something as comfortable as possible for ourselves.
So dangerous.
Yep.
All right.
Well, thank you all for joining me this week.
You got it, Steve.
Thanks, Steve.
Thanks, Steve.
And until next week, this is your Skeptic's Guide to the Universe.
Skeptics Guide to the Universe is produced by SGU Productions, dedicated to promoting science and critical thinking.
For more information, visit us at the skepticsguide.org.
Send your questions to info at the skepticsguide.org.
And if you would like to support the show and all the work that we do, go to patreon.com/slash skepticsguide and consider becoming a patron and becoming part of the SGU community.
Our listeners and supporters are what make SGU possible.