The Skeptics Guide #1043 - Jul 5 2025

Quickie with Bob: Quantum Electronics; News Items: AI Research Collaborators, AI Carbon Footprint, Curing Deafness, Food Myths, AI Enzyme Engineering; Who's That Noisy; Why Didn't I Know This: The Great Attractor; Your Questions and E-mails: Why Scientists Fall for Woo; Science or Fiction


Transcript

You're listening to the Skeptic's Guide to the Universe, your escape to reality.

Hello, and welcome to The Skeptic's Guide to the Universe.

Today is Wednesday, July 2nd, 2025, and this is your host, Stephen Novella.

Joining me this week are Bob Novella.

Hey, everybody.

Kara Santa Maria.

Howdy.

Jay Novella.

Hey, guys.

And Evan Bernstein.

Good evening, folks.

Bob, an early happy birthday for you.

Happy birthday, Bob.

Thank you very much.

Bob was born on the 4th of July.

What a birthday.

An awesome birthday.

Never have to go to school on my birthday.

Never have to work on my birthday.

It was good.

And I think we've said this before, but Bob had a big, big family party with tons of friends every year because that was like a big thing that we did every year.

So it was awesome for Bob.

That was a great day.

It was quite the event.

Yeah, like only second after like Halloween.

That was always a great day.

Love that.

So do you guys want to hear about my tech hell?

What tech hell?

Oh, is this the phone thing?

Yeah, for Verizon.

Oh, Christ.

Tell us all about it.

Gather around, kids.

Very quickly, I had to switch over my phone from work to my personal account.

It's both on Verizon.

So it's Verizon to Verizon.

Just switch it, move the number from my business account to my personal account.

The process should have taken 20 minutes.

Yeah.

Guess how long it took?

20 days.

10 days.

It took 10 days

to do this.

I did the whole process of like requesting that they release the number and making sure it was unlocked, which is a separate thing, apparently.

And then I have to call Verizon and give them permission to switch the number to my account and do whatever things they need me to do.

And they couldn't get it to work.

It just wouldn't switch over.

And they couldn't figure it out.

And so this got bumped up multiple layers, to higher and higher tech people, until I was on the phone with my tech person from Yale and three people from Verizon at the same time on the same call, and they still couldn't figure it out.

Wow.

So then they basically said, well, we're going to have to get our software guru in there to try to figure out what's going on.

So then like two days later, so now we're like a week out, they say

they think they fixed it.

So now, you know, again, it was whatever, something in the software.

So go ahead and do it again.

I did it, and it didn't work.

There was a separate problem, a completely separate problem that was also a hard stop that nobody can figure out.

So, first they couldn't get it off of Yale's account, then they couldn't get it onto my account, which of course is the two things that have to happen.

And it was because, again, like some silly thing,

we have insurance, like family insurance, that covers three phones,

but it only covers three phones.

And now I'm trying to add a fourth phone to it.

So, the thing that kills me, though, is that it creates an error, but it doesn't tell the person what the error is.

You know what I mean?

So they don't know.

It's just, nope, this won't work.

Can't do it.

Right.

Yeah, seriously.

It should have like a number, an error code or something.

Yeah, it should immediately say, you know, this is incompatible with this service.

So that took them two more days to figure that out.

And then finally, you know, I had to like remove that service, then transfer the number over.

So then, 10 days later, I finally, you know, have my phone number on my personal account.

So then I had to go get a new phone.

And again, I've done this many times before with all my family members.

It's usually not a big deal.

You know, you get a new phone, you activate your new phone, and it takes the number from your old phone, right?

So I did that.

And we went to the Verizon store to do it in person, right?

You know what I mean?

It's always better

to do it in person.

And we do that.

And

something else happened too.

For some reason, we got an Apple Watch in the mail from Verizon, and they charged us like $900 for it.

We never ordered it.

We don't even own it.

None of us own an iPhone.

What?

So we don't know how that happened.

It's still a mystery.

We have no idea how

this got ordered.

So

we were told to bring that in and to return it.

So we returned that at the same time, right?

So we had the person do two things for us.

One, just

take back this iWatch and take the charge off of our account.

And two, get me a new phone, right?

So I got the latest phone.

I get home, activate my new phone, and they attached it to the wrong number. They attached it to my wife's number. So now, simultaneously, they deactivated her phone and activated my phone to the wrong number.

That was really hard to fix. That was just incredibly hard to fix. Again, now I'm, like, hours and hours online, you know, trying to get this fixed, and they couldn't do it. They got to the point where you're supposed to activate.

Like, I was trying, I had to activate my wife's phone back to her old number, and then I could activate my phone to my number.

And the activation wouldn't happen.

He didn't know why.

He basically bailed on me.

And so I called someone else, and they basically said, You have to go to,

so here's the other thing that happened: they said, Okay, we need to send you a text to validate that

you have permission to do that, this is you, right?

So, like, I guess, fine, it's a security thing; they don't want somebody else stealing your number, right?

So at this point,

what the previous person managed to do was make it so that my phone didn't work and my wife's phone didn't work.

So neither of our phones work.

So and again, I'm doing this online.

They say, okay, well, we have to send a text to a phone on your account, which is my daughter's phone who's home for the summer.

But hers is the phone that the Apple Watch was ordered for.

It was on her line.

So what the guy did was when he deleted the charge, he deactivated her phone.

So they managed to deactivate every phone in my house.

Oh my God.

And there was absolutely no way to activate any of the phones.

Oh, no.

Oh, my God.

You know, remotely, I had to go to another store.

We had to find a Verizon store that was open till 9,

go there physically, and, you know, so we could show them an ID.

And that was a two-hour cluster.

But it worked.

We eventually got everything taken care of.

Although, just to add one more wrinkle, in the middle of it, the guy we were dealing with

deactivated my iPad by mistake, which is also on the same account.

Did you have any weapons on you?

But

at the end of the night, we got everything working and it's all good.

But it was like a maximal cluster.

It was just unbelievable.

You do realize that that's what hell is like?

Just an endless loop of tech help.

Steve, were you able to keep your cool, or did you get pissed off at any point?

No, I kept my cool.

The thing is, like, the person I'm dealing with is never the person that screwed up.

You know what I mean?

Right.

Well, that one guy screwed up, though.

I mean that guy massively.

Massively.

But their store was closed.

We had to go to a different store to sort it out.

I wouldn't go back to that.

Keeping your cool is not just the best, but kind of the only way to eventually get where you're trying to go.

Steve, they're going to send you a survey about your experience.

How was your experience?

Oh my God.

And the thing is, you know, I was talking to Jay about this.

Part of the problem is, like, the software backbone that's running all of this is too complicated.

Nobody knows how to really manage it.

The person at the store I was dealing with had to call tech help.

Like, he couldn't resolve it by himself.

He had to get on with Verizon.

to like work the back end.

And so now I'm listening to him.

So, you know, the in-store tech guy talking to

the tech person at Verizon, and they're giving him a bunch of crappy advice, too.

He's like, No, that's not going to work.

He's rejecting most of their suggestions about how to solve the issue until they get to the thing that actually worked.

It's just that they don't understand their own software, and they don't, and I get the feeling like it's changing so frequently, there's so many moving parts, and again, like it's not very user-friendly.

Like, they get an error, and it's a mystery as to what's causing the error.

Like, now they have to go on this hunt to figure out what the problem is.

And Steve, it's not uncommon that these companies have legacy systems.

They could be working on a code base where there aren't many programmers on that code base anymore.

Well, here's the other thing.

Here's another layer to this.

So part of the problem, the reason why it took two hours to undo all of this, is they couldn't activate my wife's phone.

They just couldn't do it.

So again, we were never going to fix this online.

And eventually we figured out that

her phone, which is only like a couple of models back, just wasn't supported anymore.

And they needed to give her a new physical SIM to do it.

They couldn't do it on the SIM that was in her phone.

Oh my gosh.

And it wouldn't take the new eSIM, which is like an electronic SIM.

So there was no way to fix it without physically swapping out her SIM card.

And if there was an error code, they would have known that right away.

But the thing is,

this is, I think, a bit of planned obsolescence.

Like, they just

are constantly changing the models, and then they sunset the older ones.

After a few years, they don't really support it anymore.

Yeah.

It becomes incompatible.

And when a problem arises, it's impossible to fix.

Like, my daughter's phone, the one that they accidentally deactivated and we had to have reactivated, again, is a few models old.

And they're like, she better upgrade that soon, or we won't be able to transfer the data to her new phone.

Yikes.

Right?

It'll be so old, we can't even access it to transfer the data.

It'll be incompatible.

So you have to update every few years to keep in the loop.

Otherwise, you get

too old to even update

everything.

It's crazy.

It's frustrating.

So that was my.

Steve, it sucked up a lot of your time.

Steve, legit, like, get off their network, man.

I don't have any problems like this on AT&T.

Well, I had never had problems like this before.

I'm not going to overreact to one bad experience.

That's how you describe it?

Well, they are the most expensive, too.

I mean, that's part of the other frustration.

You can't tell me this never happens on other services.

I don't believe that.

Yeah, I think it's like everybody has their reason that they stick with who they stick with.

All right.

Let's move on.

Bob, you're going to tell us about how quantum electronics is going to fix all these problems for us.

Oh, absolutely.

Thank you, Steve.

This is your Quickie with Bob, people.

So here is yet another claim that we might be able to make electronics a thousand times faster.

We've heard this so many times, but is this one it?

You know, let's explore it and just see what the details are, at least.

So,

researchers at Northeastern University published in the journal Nature Communications, and they claim a discovery that they say will allow them to change the electronic state of matter essentially at will, potentially making electronics a thousand times faster and more efficient.

See?

I told you.

So, this material was described as a quantum material.

The name, or the designation I guess, is 1T-TaS2.

I don't know what the nickname might be, call it TAS.

It's called a quantum material because it's essentially governed, you know, electronically, magnetically, everything by quantum mechanics.

So it doesn't behave like ordinary silicon or copper at all.

That's right, copper.


They found a way to make this material switch its phase to behave like an insulator or as a metal.

And these phases could act like transistors, essentially, switching between allowing current to flow and blocking the flow, which could then, of course, represent the fundamental ones and zeros of today's binary computers.

Now, if this works, and that's a big if, it would have tremendous benefits over silicon.

It would really be so superior in so many ways.

These phase states could exist in just a few atomic layers, so that would, of course, allow ultra-dense packing, which could cram in far more components than silicon ever could.

The switching itself could happen not in billionths of a second, but trillionths of a second, picoseconds, incredibly fast, obviously.

Also, the energy usage could be far less than conventional transistors.

So it's such a win-win-win; just these three basic characteristics would be so dramatic.

This could potentially be just what we need as silicon, you know, every day gets closer and closer to that hard wall of physics limitations.

The components are getting so small that you're having electron leakage that is basically going to make them unusable, and we'll reach a point where you just can't really make it that much smaller and faster.

As usual, there's lots of hurdles left for this guy.

There's control, integration, and stability problems, and any one of those, it seems to me, could be this technology's undoing.

But like many of these papers that we read about huge advances in electronics, we basically just have to wait and see which one takes off and when this is going to happen.

So, fingers crossed, essentially.

This has been your Quantum Cookie with Bob.

Back to you, Steve.

Thanks, Bob.

So, remember, I think it was a Science or Fiction from a few weeks ago where one of the items, which was true, was looking at two-dimensional transistors, basically.

Because that doesn't have the same limit of scalability.

So with silicon, again, as you shrink it down, the physical properties change and you get to a point where it just wouldn't function anymore.

But with the 2D material,

it retains its physical properties even at the smallest possible molecular size.

And so that also would be another potential pathway to solving that limitation.

I mean, there seem to be lots of avenues that they're going down.

And really, just if one hits big, I mean, it could be a dramatic change for the computing industry.

So hopefully we have pure numbers and possibilities to our advantage.

It seems like, with what we read, it's like battery technology.

You see, oh, look at this breakthrough that they have in the lab.

Well, yeah, of course, 99.9% don't pan out.

But I mean, just got to see what happens.

All right, Jay.

Yeah.

We're actually going to have somewhat of an AI-dense show this week.

You're going to start us off talking about using artificial intelligence as research collaborators.

Yeah, I've been wanting to learn more about this.

You know, I think about AI and all the uses that are spinning out in the world, you know, and this is one, a big one, you know, like what's it helping science do?

And of course, there's tons of examples of it, but this is

a pretty strong thing that they've come up with, pretty interesting, and potentially, in the future, it could be very helpful.

So, there are multiple research teams currently developing artificial intelligence systems that are designed not just to assist a scientist, but to simulate a scientific collaboration.

Let me tell you how they're doing this.

So, the systems are using teams of AI agents, each assigned, you know, a specific role.

Like, one of them is going to act as a neuroscientist.

The other one's going to be a pharmacologist.

There's another one that's going to be a critic whose job is to, you know, ask a lot of questions and literally criticize what they're doing to poke holes in it.

And they have to interact with each other, kind of like people do, right?

They're talking to each other.

They debate hypotheses.

They propose research directions.

And there's a conversation that's going on between all of them.

And the goal is to mimic the structure and the reasoning of a real lab team using AI to accelerate the idea generation, you know, problem solving and scientific research.

It seems like pretty obvious, right?

Yeah, we do this.

We're going to have

these specific bots who we've given characteristics to, and we're going to have them talk and see what happens.

So the research right now is actually happening in several labs.

We have one in Stanford.

We got one over at Google DeepMind.

There's several of these that are happening in China.

And the researchers are experimenting with what they're calling AI co-scientist systems.

So how do these systems actually function?

They're similar.

They're very, very similar.

Stanford set up what they called the Virtual Lab, and they use GPT-4-based agents, configured to play different roles, like I said earlier, in a simulated scientific meeting.

So, for example, you provide a goal.

You assign roles.

So you could be like, okay, we're going to try to find a drug for this specific illness.

What they do is they run several discussion rounds, and then

the ChatGPT or whatever model they're using, it'll search all the current literature, it'll generate proposals, it'll evaluate a lot of potential answers that it has come up with.

And Google is testing something similar, and they're using their Gemini 2.0 system.

So their co-scientist model assigns each agent, again, a specialized task.

They have to come up with a certain number of ideas.

That's a big part of this.

They have to do a literature review.

They need a criticism phase and a synthesis.

Then they run the group through multiple review cycles.

Typically, they say they end up with a written summary at the end.
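
The workflow just described (assign roles, run discussion rounds, critique each proposal, then synthesize a written summary) can be sketched as a toy loop. To be clear, this is purely illustrative and not the actual Stanford or Google code; the role names, the `ask()` stub, and the round structure are all assumptions:

```python
# Toy sketch of a role-based multi-agent "lab meeting" -- illustrative only.
# The roles, the ask() stub, and the round structure are assumptions, not
# the actual Virtual Lab or co-scientist implementation.

from dataclasses import dataclass, field

@dataclass
class Agent:
    role: str                                  # e.g. "neuroscientist"
    transcript: list = field(default_factory=list)

    def ask(self, prompt: str) -> str:
        # Stand-in for a real LLM call; a real system would query an API here.
        reply = f"[{self.role}] response to: {prompt}"
        self.transcript.append(reply)
        return reply

def run_meeting(goal: str, roles: list, rounds: int = 3) -> list:
    """Run discussion rounds with a critic, then synthesize a summary."""
    agents = [Agent(r) for r in roles]
    critic = Agent("critic")                   # pokes holes in every proposal
    notes = []
    for rnd in range(rounds):
        for agent in agents:
            proposal = agent.ask(f"Round {rnd}: propose ideas for '{goal}'")
            critique = critic.ask(f"Find the flaws in: {proposal}")
            notes.append((proposal, critique))
    # Synthesis phase: collapse the debate into a written summary.
    return [f"Goal: {goal}", f"Ideas debated: {len(notes)}"]
```

A real system would replace `ask()` with calls to a language model and feed each agent the others' replies; the skeleton only shows the role-assignment and review-cycle structure.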

The appeal is that these systems can generate faster ideation, broader coverage of existing research.

It is a structured internal debate, which they have control over, and they can see what's happening.

And in the early test, these multi-agent systems definitely outperformed single chatbots on scientific reasoning benchmarks, right?

So they're saying, let's take a bunch of chatbots, and how much more money and time does it take to spin up a bunch of them in these roles versus having one chatbot do it?

It really isn't crazy time-consuming.

It isn't.

It's not that much work to set all this up, and then you just flick the marble and see what happens.

It works remarkably better than

a single chatbot doing it on its own.

And this includes graduate-level scientific questions.

So, what I also found even more interesting is that they're capable of proposing new ideas that human researchers might not reach as quickly, and in some cases, at all.

That's the important part of it.

So, let me give you a real-world example that they had here.

So, this comes from liver fibrosis research.

So, Stanford pharmacologist, a guy named Gary Peltz, used Google's AI co-scientist to propose a potential drug targeting epigenetic regulators.

Of the three drugs the system identified, two of them showed promising antifibrotic effects in human organoid models.

Steve, you understand all of that?

You should too.

So you know what organoids are.

I know.

I'm just making a joke.

I know.

It's very science-y.

Oh, boy.

So the results that they got exceeded the performance of drugs selected by Peltz himself.

And while, of course, this doesn't mean the AI discovered a cure, it does demonstrate the potential for these systems to contribute meaningfully to preclinical research.

It's not insignificant.

And again, you know, we say this all the time.

We are in the absolute skin of the apple part of AI.

It's not a super progressed technology.

It's essentially brand new.

And it's already able to do things like this.

There are limitations.

I have to say, some researchers point out some really good points that we have to keep in mind.

So, out of the AI's suggestions, some of the researchers were saying that humans would likely have gotten to these conclusions on their own, which is perfectly cromulent, right?

You know, it's you know, in that case, you could say, well, the AI got there faster, but humans would still get there.

The other note: other people said that the simulated team discussions between the AI agents are logically consistent, they stay on topic, but they don't have the spark or the unpredictability of real conversations between people.

They're not really creative.

Right, exactly.

They're not coming up with totally new ideas because they couldn't be trained on a totally new idea, right?

No, and quite simply, and this isn't simple, but they're not wired or programmed to function like a human brain.

I mean, human creativity is super complicated.

They can't get these chat bots to behave like a real human.

Not only are they on rails, but they just don't have the mental capacity.

They don't have any of that stuff.

So there's no intuition.

There's no sudden flashes of insight.

There's no eureka moments like that.

But they could be good in a couple of ways, it seems to me.

One is

good at summarizing existing research.

This is where we are.

These are the questions that are still open, et cetera.

And it may be good at

generating

inspiration for ideas.

They won't come up with the brand new ideas, but they might inspire the researcher to.

That's a good point, Steve, right?

Because

they could be priming the humans they're working with.

And I know artists that use AI-generated content to spark their imagination.

Oh, I never would have thought of that type of thing or whatever, you know.

Yeah, yeah.

Which is good.

I mean, it's, you know, anything that pushes the ball forward here, I think, is a good thing.

We also have to keep in mind, though, guys, that these chatbots hallucinate.

And these systems can, did, and do generate incorrect information, which means the output still requires expert vetting.

So there is still that limitation, and I have not heard anything about hallucinations going away at this point.

You know, one thing that I think about, having recently done a dissertation, is how, in some ways, stuck we are academically in thinking about research the way we thought about it 10, 20, 30 years ago.

I'm glad that you mentioned literature reviews, because that's a really important part of a scientific paper.

It's summarizing the state of the literature right now and kind of pointing to some of the most important or salient findings.

One of the things that I did when I wrote my dissertation is I picked a topic that just not a lot of people are writing about, because I was overwhelmed by the prospect of talking about something that had thousands of hits within the literature.

And I felt like I did a pretty okay job of summarizing the state of the literature on my topic because it was somewhat new.

I think that we talk to students today and academics today as if that task is doable and it's not anymore because there are so many journal articles out there.

Yeah.

Like just the sheer number of publications in existence today compared to 5, 10, 20, 50 years ago is astounding.

And using just like standard library skills, you're going to overlook stuff.

So I see this being hugely helpful for

academics.

You know, it's, this was something that we talked about earlier.

I definitely remember talking about it on the show, but it was the idea that like, let's say it's a code base instead of literature.

ChatGPT, for example, would be able to be more aware of the whole structure of all of the software, right?

A future version, say.

When they finally refine it and are able to expand on how much it can have active memory and they can really get rid of hallucinations.

Imagine if it's able to have all of Amazon's global shipping and business management software, which is a huge code base.

Like it's bigger than any 100 people could ever fully comprehend, right?

It's just this massive code base with tons of avenues and everything.

And that's the same thing with research.

If you're talking about, you know, it has to process through 10,000 papers and be able to get rid of the bad stuff, flag the good stuff, and then fully understand what's going on with the good stuff.

You know, this could be a game changer to keep up, you know, to let scientists keep up, like you were saying.

Like, okay, good,

we can process through all this, but we have to have systems that we can trust. Yeah, I mean, I think, again, it's a powerful tool, and if used properly, I think it could be a massive benefit, especially in areas where, again, there's an overwhelming amount of information that we have to deal with, like in research and medicine.

But it also can be used poorly, right? If it's like a substitute for thinking, like the lazy route out, then it could be a detriment. You know, again, I think the best recent example is RFK Jr.

submitting a report that had like six fake non-existent studies in it.

Because some jackass in his department clearly used AI to generate a report and didn't check it out.

And never went to the references?

Holy moly.

Yep.

So that's easy.

Yeah, but that's going to happen more and more.

Of course it is.

And it's happening all the time, across every school in America and in the world.

You know, the researchers themselves said the real value here lies in accelerating parts of the scientific process that are time-consuming, but absolutely have to happen.

And again, you know, I use ChatGPT.

You know, it's helping me write a screenplay.

It helps me with SGU work that I'm doing.

You know, not just me asking, hey, help me frame this email.

You know, like help me generate some copy for an email that I want to send that's complicated or whatever.

But, you know,

I personally will give it multiple articles and I'll say, reduce all this down to bullet points for me.

And like, let me just see, you know, let me see all the important information in that format.

That helps me so much with the work that I do for the SGU in lots of ways.

It's powerful stuff.

It's very powerful and it's super useful.

But man, doctors can't lean back.

Researchers can't lean back and not give it expert

supervision, careful validation.

They have to work with it.

It's another tool and we can't let it.

It's got to remain a tool.

It can't be the thing that does everything.

Jay, let me ask you a question.

Have you asked your chatbot to marry you yet?

No, my God.

Remember?

So we had a live stream today, guys.

Bob Steve and I did a live stream, and there was this guy.

Oh, Christ.

He fell in love with his chatbot.

He fell in love with his chatbot.

And then what happened is

he hit the max memory of that session and

it got wiped.

It got reset.

And he cried.

And the guy was crying.

I'm not putting him down for crying.

Like that movie 50 First Dates, or whatever that movie was.

Yeah, like it was gone.

So it got creepy, though, guys, because you know, his wife knows about it and she's not happy.

And then the, of course, the interviewer asks this really hard question, like, goes to the husband, like, you know, if your wife wants you to not use this anymore, would you do it to remain healthy in the marriage, pretty much?

And he's like, give it up.

No.

Nope.

And he's like, you know, it's expanding my intelligence.

And it's like, yeah, sure.

The way I use ChatGPT, yeah, it's a great research tool and all that.

But the fact is, you know, one thing that Ian noticed when we were looking at it, they had a picture of his phone on his desk,

and we could read the text that he was typing to the freaking chatbot.

No, no, no.

Yes.

And it was sketchy.

It was like, oh, baby, like when you use your tongue and shit, like, what?

The chatbot was saying this.

The sad thing here is that he wasn't, quote, falling in love with a chatbot at all. He might think that's what was happening, but a chatbot's not a person, right? A chatbot is not an entity.

He was falling in love with himself.

That's exactly what I said.

I said illusion.

He was falling in love with an illusion.

And not just with an illusion, though.

Yeah, an illusion.

Yes.

It was like an illusion that is ostensibly a mirror.

He was throwing out the right things to be able to receive the right things.

He just needed utter and complete validation.

But

when we discussed it today, and I think this is worth repeating, like we were

pointing out this idea that chatbots are going to become more personable.

They're going to be more capable of having, you know, real, honest conversations.

And, you know, people will be developing relationships more and more.

Like, the people are, you know, it's happening now.

I mean, people are like having relate, you know, they think they're having relationships with chatbots.

And Steve pointed this out, and I thought it hit this whole thing right on the head.

You know, imagine the chatbot is giving the person what they need, and it's basically them getting for themselves exactly what they need and what they want, which could make them very egocentric.

Of course.

Which makes them enter their own little bubble, not a political bubble that they belong to.

It's like they'll become kind of egoists.

Well, and it's already happening to a lesser and, I guess, less sophisticated degree. You see this even, and I'm not going to talk about a couple who's been married for 40 years, but let's say in a new relationship: grown adults on their phones, swiping through social media, but they're sitting right across from each other.

They're not engaging with the real person that's right in front of them and enjoying and experiencing all of the beautiful things that come from socialization; instead, they're bubbling themselves in, because they're not actually engaging socially.

Those aren't people, they're engaging with themselves.

So I find it very intimidating and scary to think about the future. Right now, ChatGPT is so cheery and so, you know, rah-rah in your corner. I've told my ChatGPT to not do that. I want it to be hard, I want it to be skeptical. I don't want it to yes me on anything; I want it to push back, right?

And again, it's kind of weird to even say that, because I'm not talking about,

Yes, great idea, Jay. Yeah.

But,

you know, people will slip into these egotistical, self-serving, quote-unquote relationships.

Well, it'll be Pygmalion, right?

They will create for themselves the easiest, most positive, affirming sort of persona.

Sure.

But not a real person that has their own needs, you know, and their own biases and everything.

And so it creates this illusion of a person that is a fantasy, who's completely unrealistic.

And it could spoil them for real relationships.

Yeah.

You know, where they have to actually think of some other person's needs.

You know, Kara, I would imagine that there'll be therapists whose job it is to detune people out of these quote-unquote relationships.

Absolutely.

And I mean, I know that this is going to feel confronting a little bit, so I'm going to caveat this with that.

But from a like feminist psychological perspective, I also worry about the gender component of this because I feel like we're finally in an era, we've still got a long way to go, but we're finally in an era where we're making real progress when it comes to relationship equity.

Historically, in a lot of relationships, that was the angle, right?

That men provided the type of security that was required for women to be able to exist in the world because they didn't have the rights to like have a mortgage or to use a credit card.

Or even like long before that, women were really dependent upon their male partner for their existence in the world.

And so you did see this abuse of power in many relationships where men sought out partners that were subservient and docile and just reinforced what they wanted to hear and think.

I wonder if this is going to set us back from a gendered perspective.

Because unfortunately, the sad thing is, and I'm not saying it doesn't happen with women, but these stories often are centered around men.

They often are centered around men falling in love with their chatbots, not women falling in love with their chatbots.

Women haven't historically had to,

we've had to kind of overcome that experience of like the narcissist partner.

It would be interesting to see statistics.

You know, I'm really curious about this now that we saw this news item.

There was a woman who was in charge of this subreddit

which is basically about people in relationships with chatbots.

And she's in it, and all these other people are.

I think it's going to happen.

It will happen to everybody.

But I think similarly when we see people, this was a huge story 15 years ago, 10 years ago when I was doing a lot of television at the kind of dawn of this AI stuff was like the sex bot story.

I remember covering it for multiple outlets, like what happens when sex bots also have large language model capabilities in them.

And I mean, their end user is overwhelmingly male.

You know, look, I could see a scenario in my future where I have a personal assistant who I could kind of be friendly with, like be friends with, right?

You'll have a rapport.

They'll know what you like.

You'll assign it personality traits and all that stuff.

And, you know, I never for a moment thought that I'd be entering, you know, that I know I'm not capable of this because I am so people-centric, like to an absolute, it's the most important thing in my life, is the people in my life.

That's why this is horrifying to me.

I would never, I have no interest in getting involved in a relationship with an artificial being.

Uh-uh.

There's nothing there for me.

And I think we need to start teaching people about this.

We need to talk about

kids need to hear about it.

And we've talked about the other risk.

I know, Steve, we've talked about this on the show before, where, let's say, like Jay, you were saying, you know, having an assistant, and I'm really friendly with it, but a lot of people won't be friendly with it.

And the fear is that it'll reinforce dehumanizing behavior that then will translate into their daily life.

I just said that today, Kara.

Like, I had this thing.

So I told the guys, you know, when I'm doing, you know, I could be talking to ChatGPT about a bread recipe, right?

And it always comes back with a long-winded response.

And they give you.

So at one point, I caught myself saying, Stella, I don't want to hear any of that.

I said that to the chat, and I'm like, whoa.

I'm like, I can't let myself talk like that because that could be, I could be training myself to have those kinds of emotions.

I was going there because it wasn't a person.

Right.

Because you were allowed to be egocentric in that moment.

This is all about you.

This query, this help, it's all about you.

But when you're engaging with a person, it's equivalently about both of you.

Well, look at what people do just with the barrier of, you know, like they have internet balls, right?

They're online.

They're not face-to-face.

Look at what people, look at all the dehumanizing things that we've witnessed over the last 10 years that people have been doing online.

It's only going to get worse with this.

And none of us are above this.

I mean, I listen to myself when I'm driving my car

and people can't hear what I'm saying to them.

You know, the driver's in front of me.

Yeah, like

this is natural behavior.

Well, Kara.

Yes, I know.

There are so many segues.

While we're talking about the impact AI is going to have on us, you know, psychologically, socially, what is the carbon footprint of training all these AIs?

A deceptively difficult question to answer, which I didn't really realize until I started to dig into some of the literature on that.

And it's funny because part of the reason that we don't really know how much energy our AI prompts use is because most of the companies who are developing these large language models don't share that information with us.

On purpose.

On purpose.

Yeah.

We've got all these large companies that are not opening up about how much energy their LLMs use.

Because they're afraid they're going to get taxed.

Who knows why?

I mean, they're not telling us.

It could be because they don't know.

It could be because, you know, the numbers are all over the place.

It could be because they, obviously, it's not good for PR.

What's so interesting is that if you ask this question,

I haven't asked this to an LLM itself, okay, so here's a full caveat.

I don't think I've ever used ChatGPT ever.

The thing that's strange is that if you do an internet search, or I guess if you query an LLM about how much energy or how much carbon emission, or, you know, there's different ways of slicing and dicing this,

does an AI prompt use?

You're going to get a wide range of answers, and you're also going to get a wide range of perspectives.

You're going to read articles that are like super fear-mongering, like, oh my God, it's horrible.

It's going to ruin the environment.

Like, don't use ChatGPT at all.

And then you're gonna get articles like the one that I am looking at right here called What's the Carbon Footprint of Using ChatGPT?

Very small compared to most of the other stuff you do.

And like this writer says, and this was just a couple months ago, I used to feel guilty about it.

Now that I've really looked into it, I'm not worried about it at all, and you should stop worrying about it, too.

So you really see two different ends of the spectrum, and you see a lot of different ways that people go through and do their own calculations.

A recent article in Science News, How Much Energy Does Your AI Prompt Use?

It Depends, written by Selena Zhao, talks about the, you know, why it's so difficult to answer this question.

And really, it focuses on a recently published journal article in Frontiers in Communication called Energy Costs of Communicating with AI.

This just came out in June of 2025.

In this study, what researchers did is they focused specifically on large language models that are open source, because that's really the only way that they could do it.

They knew that they wouldn't be able to get behind that curtain of the big players like OpenAI and Anthropic, who have said that they have the data, but they're not sharing it.

And instead, they looked at open source LLMs.

They looked at 14 different models.

Apparently Meta and DeepSeek do publish some of that data.

And they found that part of the reason this question is so incredibly difficult to answer is because there are so many different components that go into a single query.

All queries are not equal.

And so first of all, you have to break it down into two components of the carbon footprint altogether.

There's the carbon footprint that is produced during the training of the LLM, and then there's the carbon footprint that's produced during individual or we can say cumulative queries after or kind of separated from the training.

Apparently, when it comes to how much carbon is produced, what the emissions are from the training, it's still pretty much a black box.

And most of the things you're reading online where

these emissions are estimated are just based on queries.

They're not based on that whole training kind of experience.

The other thing that's so confusing is that there are so many different types of, I guess, parameters.

Different LLMs have, or different AI models have different numbers of parameters that

will result in different types of, or different intensities, I guess you could say, of querying.

So, the way that this article describes a parameter, they say they're kind of like the internal knobs that the model adjusts during training to improve its performance.

So, as they say, quote: the more parameters, the more capacity the model has to learn patterns and relationships and data.

GPT-4, for example, is estimated to have over a trillion parameters.

So, when they did their analysis for this scientific publication, they basically looked at 14 open source AI models, like I mentioned, and they ranged from 7 billion to 72 billion parameters.

They looked at them all on a GPU called the NVIDIA A100.

Apparently, we're not even using that anymore.

So, like, even this data is out of date because we're using a much more powerful GPU now.

I'm a little confused about some of the articles that I read because this article says that with a more powerful GPU, we're actually talking about more carbon emissions.

And I've read other articles that say, no, no, moving up to, I think it's called the H100 instead of the A100 from NVIDIA actually

is more efficient.

So it's less carbon emission.

I don't know if you guys have any insights on that or if you've dug that deep into the GPU of these of these kind of chips.

I mean, I do know that the newer, more powerful, better chips, you know, graphic cards, do calculations with less energy.

That's partly why crypto miners use them.

They use the latest and greatest GPUs.

Because

it's all about how much money you're spending for the electricity to run the process versus how much you mine.

Yeah, there's a purely like economic calculation going on with those crypto miners.

I think the issue here is that in order to handle the load that's being put on it, they have to be upgraded, right?

So even though they're more efficient, they're more efficient with a much larger load.

And so then the question is, you know, how is it netting out?

Because

the load is just getting bigger and bigger and bigger over time.

More and more queries are happening every day.

And they also talk about, you know,

you know, these different prompts that are used.

They think that over time, they call them inference, right?

So they start with the training and then the inference is the life of the model where the prompts are being used.

And they say over time, that's expected to account for the bulk of the model's emissions.

Here's a quote by somebody who was interviewed for the article.

You train a model once, then billions of users are using the model so many times.

And they're saying it's hard to quantify the environmental impact because that impact can vary depending on which data center it's routed to, depending on which energy grid powers that data center, depending on the time of day, and that really only the companies that are running them have that information.

So, we're talking about not just how the actual query is routed physically, but we're also talking about the different parameters that are included within the model that's then going to handle those queries.

So,

tokens is a word that's thrown around a lot.

Can you guys define that?

Those are, I mean, they define it as the bits of text a model processes to generate a response.

Yeah,

they break things down into the words, into these tokens

that essentially allows them to quantify the words, and then they can use those tokens to actually build sentences and things like that.

It's like a way of breaking it all down into almost like a language, in a sense.

You know what I mean?

Yeah.

And so, one thing that they mention is that we've talked about this idea before.

We both talked about it when we had

our guest rogue on, and also in previous stories that we've covered.

This idea of reasoning models versus sort of traditional or standard models, where in a standard model, what the LLM is doing is a bit of a black box to us, but in a reasoning model, it sort of shows its work, right?

And says, this is how I got from A to B to C.

Reasoning models use a ton more energy use.

They use a lot more tokens.

That would make sense.

Yeah.

So they say, on average, a reasoning model uses about, or in their study, used about 543.5 tokens per query.

And a standard model only used 37.7.
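For scale, here's a toy calculation (mine, not from the study): if energy per query scales roughly linearly with tokens generated, those two averages imply reasoning models are an order of magnitude more expensive per query.

```python
# Toy back-of-envelope, not from the paper: assume energy per query scales
# roughly linearly with the number of tokens the model generates.
REASONING_TOKENS = 543.5  # average tokens per query, reasoning models (study)
STANDARD_TOKENS = 37.7    # average tokens per query, standard models (study)

ratio = REASONING_TOKENS / STANDARD_TOKENS
print(f"Reasoning models generate ~{ratio:.1f}x more tokens per query")  # ~14.4x
```

The linear-scaling assumption is a simplification, since real energy use also depends on model size, hardware, and batching, but it gives a sense of why reasoning models are so much more expensive to run.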

Well,

there's a cost to the processing speed.

And the more tokens that you have essentially means you have a broader memory base, right?

And

there are token limits.

And if you hit the token limit, it just starts losing the earliest tokens in

that memory space.

I mean, it makes sense because the tokens are the fundamental units of text that the model

processes.

So if you've got a lot, if you've got a lot of those tokens, then you're dealing with a lot more things to manipulate and process.

Here's a real number, because you sometimes will hear things thrown around, like one query is equivalent to using your oven for one second.

Like that's one that I've been seeing over and over and over.

But the authors of this study are saying that that's like wildly misleading.

You can never see a single number because it has to be a range because it totally depends on the complexity of the query, where you're making it, who you're making it to.

But here is one place

where the authors actually do use real numbers and compare it to a real-life comparison.

They said at scale, these queries add up.

And they're talking specifically about using reasoning models.

They said using a 70-billion-parameter reasoning model called DeepSeek R1, that's one of the ones that they used in the study, to answer 600,000 questions.

Now, 600,000 questions for a single person sounds insane, but not if you look at any cross-section of time of all the people querying these LLMs.

That would emit as much CO2 as a round-trip flight from London to New York.

That's a lot.

Wow, that's a lot.

That's a lot.
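It's worth doing the arithmetic that comparison implies. Assuming roughly a tonne of CO2 per passenger for a round-trip London to New York flight (a commonly cited ballpark, not a figure from the paper), the per-query cost comes out small but nonzero:

```python
# Back-of-envelope from the study's flight comparison. The ~1 tonne CO2
# round-trip London-New York figure is an assumed per-passenger estimate,
# not a number from the paper itself.
FLIGHT_CO2_G = 1_000_000  # assumed: ~1 tonne of CO2, in grams
QUERIES = 600_000         # reasoning-model queries from the study's example

per_query_g = FLIGHT_CO2_G / QUERIES
print(f"~{per_query_g:.1f} g CO2 per reasoning-model query")  # ~1.7 g
```

A couple of grams per query sounds trivial until you multiply it by billions of queries a day, which is exactly the scaling point the researchers are making.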

And they're saying, even still, none of this accounts for the emissions generated from manufacturing the hardware, from building the buildings that house it, all these things that they call embodied carbon, like the carbon that's required just for producing the things that will then run.

And so, even in this one article where the author is saying, don't worry about it, don't worry about it, a typical query, they say a typical query is sort of less than the energy cost in watt hours of running a 10-watt light bulb for five minutes or using your laptop for five minutes.

They show that a long input query is more than that, but still less than using a microwave for 30 seconds or the average U.S. household consumption per minute.

But then their maximum input query, because again, they looked at it on a range, and these are numbers that were released by Epoch AI.

They're saying that a maximum input query is twice the average U.S. household consumption per minute.

That's a lot.

Seems like it.

So, yes, simple, not very complex, high-efficiency queries that are routed to the right data center at the right time of day, you know, at night when the load on the grid isn't very high, could be very, very low.

But incredibly complex, high-parameter, token-rich queries could also be really taxing the system.

And it's not just about the energy being used, but as we mentioned, it's then about the physical carbon that's being put out from it.

So, I guess my take here is I think both extremes aren't really telling the story.

I don't think it's a don't-worry-about-it, but I also don't think it's so dire that the planet's going to burn tomorrow because of ChatGPT.

I think we have to look at it in the context of all of the other things that we do that produce large quantities of carbon.

And we have to be more mindful about how we use these LLMs, right?

Like the energy supply just won't be able to sustain it as it grows and grows and grows.

The researchers basically say we can't have all of the, I guess, the pressure on us as individual users.

We have to also think about these large energy companies and how they are externalizing these costs, these tech companies.

They said, I go to conferences where grid operators are just freaking out.

Like these tech companies cannot just keep doing this.

Things are going to start going south.

Because if your model is being used by, say, 10 million users a day or more, it has to have a better energy score.

It just has to.

But things that we can do as individuals,

if it's just as easy to look something up in a traditional way, do it, right?

If it's just as easy to read a Google query or to look something up in a way that you used to do it, choose that.

Also, they say it's very similar to AC.

If the outside temperature is high, if it's the middle of a hot day and all the lights are on, like that's not the best time to be using these LLMs because that means more energy to cool down the inside of the places where these servers are being housed.

So think about the pressure on the grid and engage the same way you would engage with your air conditioning or with other energy-heavy appliances.

You try not to do laundry in the middle of the day at peak time.

You try not to run your dishwasher in the middle of the day at peak time.

Do that.

Also, they said, literally, and I never even thought about this, any extra input takes more processing power.

Yes.

So I was told never say please.

That's what they say in the article.

It costs millions of extra dollars because of thank you and please.

Oh my God, really?

That's like taking the olive out of the salad on an airline.

No, seriously, though.

Every unnecessary word has an influence on the runtime.

And I am cognizant.

Once I read that, I became very cognizant of it, and I did.

I changed that habit in myself.

Now I just keep it to as few words as I possibly can to try to get something out of ChatGPT.

So it's like if we want it to be more efficient, we need to learn how to use it more efficiently.

Yeah.

But this also, I think personally, it needs to be taught

in academic settings.

It needs to be taught to children very early in school.

It needs to be taught at the university level for researchers, who are some of the heaviest users of these products.

Just like we had to take library literacy classes when we were using card catalogs and then when we were using

online catalogs, we need to be learning how to utilize these tools in the most efficient way possible.

Good AI hygiene.

Yeah.

Here's a couple other things that I came across too, Kara.

So this is like an MIT study.

They found that the carbon intensity of electricity used by data centers was 48% higher than the U.S. average.

I think because it needs on-demand energy, right?

So it's going to be getting more of its energy from fossil fuel plants, right?

It's not going to be using wind.

Yeah.

But they also said, so in 2023, 4.4% of all energy in the U.S. went to data centers.

By 2028, it could be as high as 22%.

And half of that is the AI.

They say that in this article as well.

So

maybe hard to nail down, but the broad brushstroke is this is going to significantly increase our energy demand.

And

it has altered the projections of how much electricity we will need in 2050.

We've had to revise all those projections because of these large language models.

Yeah, and I think one of the things that is sort of the easiest for us as the end user is to just remember.

It's the same as

that mental shift that happens.

We've talked about this a lot on the show when you realize that throwing something, quote, away doesn't actually make it go, quote, away.

Only away.

When you're the center point of that equation, yes, you're throwing it away from yourself.

Exactly.

But like it is going somewhere, right?

Like your trash can is not closer to a lot of other

Yeah.

And so, think that way as well when we're using these digital tools that feel so ineffable, right?

They don't feel like they would require a lot of energy, but they do.

And so, every time you sit down to use one of these tools, be mindful and say, Do I need to do this right now?

And yes, there are plenty of cases where we do need to do it, just like there are plenty of cases where we have to use plastic in our lives or we have to use fossil fuels, but we don't only use them that way.

We use them in all the cases where we don't actually, quote, need them.

It's a convenience issue, and that I think is where we really got off track.

We should start to see more regulation around this as well so that it can't be used in a wasteful way or too frivolously.

Yeah, yeah.

All right.

Thanks, Kara.

Guys, let's stop there.

All right, guys, do any of you know what the protein odofurlin is?

No, I didn't know.

What would your guess be?

Odofurlin.

What part of the body does that?

Odo is

deep space.

Oto or odo.

O-T-O.

O-O-T-O.

That's different.

Not Odo from deep space.

The word toe is in there.

I think it has to do with your feet.

No.

Wait.

Are you asking about the furlin part?

Is it something that opens something?

So otoferlin is a protein

necessary for the release of neurotransmitters from the inner hair cells, enabling transmission of the signal to the auditory nerve.

So it's necessary to hear.

Yeah.

Need these proteins to hear.

Okay.

You need these proteins to hear, because it would, yeah, it releases the neurotransmitter.

Basically, it makes the

hair cells that detect the sound communicate neuroelectrically to the neurons that then transfer the signal to the brain, right?

So it's transducing sound into...

So if you're protein deficient in this area, you're not going to be able to hear.

Exactly.

So if you have a mutation of that protein, there's the otoferlin gene, O-T-O-F.

If you have a mutation in that gene so that you do not make the protein or you make an imperfect protein, that can cause deafness, right?

So it's one of the forms of hereditary deafness.

So where do you think this story is going?

CRISPR.

Yeah, yeah.

Yeah.

So just, yeah, I just wanted to, I love these stories about another use for CRISPR to treat a genetic disorder, in this case, this one form of genetic deafness.

It's autosomal recessive.

They looked at just 10 patients

aged 9 to 23.

So these are, you know, older children to adults.

They used an adeno-associated virus as the vector, right?

So this is a viral vector.

And they used, again, gene therapy to

introduce the gene to produce the protein.

This is mainly a safety study, right?

This is sort of a preliminary clinical trial, just making sure that this was safe and well tolerated.

But they did, as a secondary measure, look to see if it affected their hearing.

So the side effects were, the adverse events were all minor, and it was well tolerated.

It was safe.

So that was the primary reason for the study.

That's why, again, it was only a few people.

Is this a mouse hearing?

No, it's a human.

This is human.

Because it's in humans.

Yeah, this is in human trial.

It's already past.

Oh, it's past the animal stage.

Okay.

Great.

So the average level of threshold of hearing in the 10 subjects went from 106 decibels to 52 decibels.

So lower is better, right?

So

they were able to hear softer sounds.

106 is loud.

I mean, so

that's why they were functionally deaf.

So that's pretty impressive.

Great.

Yeah.

Yeah.

So it basically worked and was safe.

We have to see how sustainable it is.

And there's a question of

how old the subject can be and have it still work because these are still young adults, you know, at the high end, 23.9 years.

And this was, they followed these participants for at least six months, which is a good follow-up.

But, you know, we need to see what it's like when we follow them up for years.

They did see an age-dependent therapeutic effect, so it was better outcomes in the younger kids than in the young adults.

So, you know, this is the kind of thing where if you get diagnosed with this at a young age, you might be able to treat it early, maybe even treat toddlers with this.

Who knows?

You know, you have to get it approved for very young children.

I think the tricky bit is the viral vector.

For these CRISPR-based therapies, the vector is like the big thing.

That is the main limiting factor.

And viral vectors

can be effective, but they could also be risky.

Deadly.

This is why I talked about the nanoparticles, the lipid nanoparticles.

When they're feasible, depending on where you need to get them in the body, they're much better.

They carry bigger payloads and they don't cause infections.

The fat virus can

be

I was digging in, Steve, to the incidence rate, and it's really low.

It looks like this qualifies, or at least is listed on a lot of rare diseases.

It's a rare disease.

Yeah.

And so that's

such a great thing about

using CRISPR or using different kind of gene therapies that are so targeted is we can actually target rare diseases where there wasn't a lot of, I don't know, I mean, there wasn't enough, I guess.

Money.

Not just rare, not just rare.

They're all bespoke anyway.

Yeah, yeah, yeah.

Yeah, I mean, Steve, you've mentioned recently a couple where they were just like, this is for one person.

It was, yeah, it was for, it was that child that was born with its own specific mutation

that they were able to treat.

Yeah, that's what I'm saying.

These are all, these are often all bespoke anyway.

So what was the, Steve, what was the decibel rating before?

100 and what?

106.

106.

That's around a chainsaw or a handheld drill.

And 50 decibels I see listed here as a quiet office.

Yes.

60 is normal conversation.

And like 45 is like light rain.

So damn, man, that's that's a huge change.

Yeah, it brought them down into the range of conversation.
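Because decibels are logarithmic (every 10 dB corresponds to a tenfold change in sound intensity), that 54 dB improvement is even bigger than it sounds. A quick sketch of the math:

```python
# Hearing thresholds are measured in decibels, a logarithmic scale:
# every 10 dB corresponds to a 10x change in sound intensity.
before_db = 106  # average threshold before treatment (about a chainsaw)
after_db = 52    # average threshold after (about a quiet office)

improvement_db = before_db - after_db          # 54 dB
intensity_ratio = 10 ** (improvement_db / 10)  # ~250,000x
print(f"Participants can now detect sounds ~{intensity_ratio:,.0f}x fainter in intensity")
```

So the treated ears went from needing chainsaw-level sound to hearing sounds roughly a quarter-million times less intense.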

Yeah.

Wow.

Exactly.

Very encouraging.

Effectively curing it.

But what can they do for us, Steve?

Us old folks?

Yeah, who knows?

This is a genetic disease.

Genetic fixes are great for genetic diseases.

We need regenerative

something to keep our cells and our hairs from, what, diminishing.

Yeah, but then you've got to toe that line between aging and cancer.

That's if you're getting to the stem cell sort of regenerative kind of approach.

It would be great if we could just regrow those hairs.

Wouldn't it?

Oh, my God.

Once they break off, you basically lose those frequencies.

Some monsterism.

All right, Evan, tell us about persistent food myths.

Yeah, food myths.

So this was a neat little article that came out recently titled, Grandma Was Wrong, Food Myths Debunked.

And this caught my attention specifically because

in my family, when I was born,

my paternal grandmother

was already dead.

So I did have, you know, my maternal grandmother was alive, but not, you know, not for long.

I didn't have much of a relationship with her growing up, a bit estranged, and, you know, then she passed away.

So there wasn't grandma's home cooking as part of my upbringing.

But I think for you, do you guys have memories of your grandma's home cooking?

That kind of a manual?

Sure.

Oh, hell yeah.

Yep.

And

what?

Long-lasting memories.

And, you know,

the big meal.

Two Italian grandmas.

What do you think?

Yeah, yeah.

A little different for me.

So, again, I didn't really have much of that experience in my life.

According to this article, a recent survey found 42% of Americans prefer to cook meals traditionally like their elders.

But they also, you know, as we learn more about food, food science, and other things, as time goes on,

you know, grandma used to do some things in her traditional cooking that were maybe not the best advice, or were outright, you know, just silly and had no impact.

You know, and this article lists a couple of examples of that.

So I want to share those with you that they talked about here.

These are the myths.

Rinsing raw chicken before cooking.

I'll tell you, again, I don't have the experience of

having much of a

grandparents and the cooking experience.

I've never rinsed, I never learned to rinse raw chicken before cooking.

It never became a practice.

It was never habitual in my family.

And do you guys know why that's, first of all, a myth and why it's not good?

I mean, I would think because of salmonella or whatever, like something like that,

it spreads it when you wash it.

It spreads it.

That's right.

It doesn't wash it away.

You don't wash it away.

It just splatters around, basically.

You can only make things worse

through its contamination.

Has anyone here rinsed their raw chicken before cooking?

No, but what you should do, though, is really limit the surfaces that the chicken comes in contact with and make sure those are cleaned and sanitized very well.

But the chicken itself, you just got to cook it properly.

165 degrees Fahrenheit is the correct internal temperature to kill all that bacteria.

I also have a cutting board that's just a chicken cutting board.

Yeah, I use a vegetable one.

Yeah.

I do.

I have a specific.

All right.

Is it glass, plastic, or wood?

Wood.

Ooh, my cutting board?

Yeah, that's plastic.

It's plastic.

See,

it's hard to know.

I've done a deep dive on this, and there's no clear answer.

Really?

Because when you get little scratches in the plastic, the bacteria can hide in there very well.

Yeah, but I can also then just put it in the dishwasher.

So that's like, you know, basically decontaminating it.

Yeah, I don't put our wood cutting board in the dishwasher.

I do that.

I clean it up.

Right, because you can't soak it.

I don't use wood for anything but vegetables

or like charcuterie.

But it's good to keep them separate.

How about this myth?

Bread stays fresher in the refrigerator.

Ooh, I always put my bread in the fridge.

Oh, no.

Well, hold on a second.

Hold on.

Because

if you buy store-bought bread and you put it in the refrigerator, it does have preservatives in it.

It will slow down any mold happening on that bread by a lot.

If you just leave it on the shelf.

Yeah, I mean, I feel like I've observed that.

Am I wrong?

I've never done a controlled study, but I definitely feel like when a loaf of bread is in the fridge, it lasts longer.

That doesn't mean it tastes fresher, but it lasts longer.

For freshness, it says here: sandwich bread, buttermilk biscuits, and rolls should be stored on the counter in a bread box or frozen.

You can freeze them.

Freezing is your friend with bread.

Definitely.

I mean, there is a big difference between refrigerating it and freezing.

Like I, when I make bread, I usually make at least two loaves and I always put one in the freezer.

And I can get that bread back to about 80 to 90 percent of what it was like when I baked it fresh.

I figured out how to do that.

You can't get it back if it goes in the standard refrigerator temperature.

And that's what I was

exciting as well.

Yeah, I think what you guys are perceiving in terms of the better outcome is that you just need to have it sealed.

So putting it in a bread box, and I always put bread, I make sure they are sealed.

You know what I mean?

That it's in something completely airtight. It's fine. They do perfectly fine. Refrigerating it then has no added benefit if it's sealed.

Yeah, but I also think you're talking about the difference. Am I wrong here, Evan?

You're talking about the difference between it tasting fresh, like the starches being the right consistency and all of that, versus growing mold. A refrigerator is going to reduce how quickly mold grows on bread, but it might not be fresh bread.

It'll be stale bread that has less mold.

Yeah, and they did say there are some, you know, it depends on your environment.

Some houses have air conditioning, some don't.

So you may, yes, the refrigerator might be the better option in that case.

But in a controlled temperature environment, they are saying use the bread box when you can.

What happens if you put the bread into the refrigerator?

The cold temperature will cause recrystallization of the starch, and that's moisture loss, and then your bread starts to lose its, you know, taste, consistency, and all the other features that make your bread enjoyable.

What about storing your tomatoes in the refrigerator?

I don't do that.

No.

I've never done that either.

No.

And in fact, I've known for quite some time that you shouldn't do that.

They recommend that you not do that.

Researchers, what, from the University of California, Davis, explain that cold temperatures mess with the enzymes that flavor tomatoes, leaving them mealy and bland.

Yuck.

Keep them out on the counter.

Keep them on the counter, but out of direct sunlight.

Wait till they're ripe and enjoy.

What about this?

I've never heard this one.

Let hot food cool to room temperature.

Oh, yeah.

I did a whole deep dive on this before.

Before putting it in the refrigerator.

Yes.

Yeah.

You're not supposed to do that.

So here's the bottom.

I think the best way to think about this is how much time is your food spending at a temperature where bacteria can grow?

That's the bottom line.

So you always want to get it up to eating temperature and eat it fairly quickly.

And then when you're done with it, you want to get it at refrigerator temperature as quickly as possible.

You don't want it to spend hours and hours at bacterial growth temperature, which room temperature is that.

You don't want it at room temperature, you don't want it warm.
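The "time in the danger zone" reasoning can be sketched with a little arithmetic. This is a rough illustrative model, not from the episode: the 20-minute doubling time is a commonly cited ballpark for fast-growing foodborne bacteria, assumed here for the sake of the example.

```python
# Rough sketch: exponential bacterial growth while food sits in the
# "danger zone" (roughly 4-60 C). The doubling time is an
# illustrative assumption, not a measured value.
def population_after(initial_cells: float, hours: float,
                     doubling_time_min: float = 20.0) -> float:
    """Return the cell count after `hours` of unchecked growth."""
    doublings = (hours * 60.0) / doubling_time_min
    return initial_cells * 2.0 ** doublings

# 1,000 cells left out for 3 hours is 9 doublings:
print(population_after(1_000, 3))  # 512000.0 cells
```

The point of the sketch is just that the growth is exponential, so every extra hour at room temperature costs far more than the hour before it.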

Yeah, it's like that fried rice disease thing.

Yeah, you want to get it, you eat it, and I'm obsessed with that.

I hate when people leave food out after dinner.

Like, as soon as you're done, don't wait three hours; put that right away, get it into Tupperware, get it into the refrigerator right away.

I feel like this one comes from

the reality that you can't put like boiling hot liquid that's in hot glass into a refrigerator.

That's a separate issue, and that's correct.

Yeah, and that could.

Because you don't want to shatter the glass.

Exactly.

Yeah, right.

You don't want to do that.

And that could

be part of people's calculus as far as how they're thinking about this.

I also usually will vent the lid because if you put it straight in while it's still hot, then it's all full of condensation.

Yeah,

the condensation as well.

Should you be washing your produce with soap?

No, no, and I never did that either.

In fact, I've never even used.

They do have vegetable wash sprays.

Have you seen that?

I've used vegetable wash.

I use vinegar.

You can also use vinegar.

Yeah, I use my salt.

They make vegetable wash, but that's not soap.

Yeah, it's not soap.

I know, I don't, I've never, but that's why this is strange to me.

I'm like, really?

Did people's grandmothers wash their

vegetables with soap?

I never really heard of that.

Apparently, that's a thing.

Water is fine.

As long as you, if you want to make sure it's clean, you just got to scrub it.

Like, just get your, whatever

you wash stuff with, your towel or paper towels, and just scrub it a little bit.

Just physically wash it.

You don't want to use soap, right?

You don't want to put soap on your food.

Right.

You don't want to eat soap ever.

No, right.

I can't think of it.

Is there a reason to eat soap?

I don't think so.

If you say some nasty curses, you might have to eat some soap.

There's more on the list.

That's actually abusive.

Yeah.

Like washing somebody's mouth out.

Oh, yeah, yeah.

And let me throw out one more.

Because it's a longer list, but I'll just end with this one.

Uh, because this one I had not heard before either: watermelon seeds will sprout in your stomach.

Don't swallow them.

Oh, no, you thought that weird.

That's like chewing the gum, like swallowing the gum kind of thing, right?

I had a plant, the watermelon plant was coming up out of my throat with the vine.

I mean, that's right out of mythology or something, you know, like some Jack in the Beanstalk kind of story.

I don't know.

Had you guys heard of that before as a myth?

Yeah, but why did, I guess, where do these things come from?

Why would grandma not want you to swallow the seeds?

Because she's afraid.

Yeah, no, I get that.

Like, grandma didn't think they would sprout in your stomach.

Yeah, why would they?

Grandma just didn't want you eating the seeds.

Is that because she had diverticulitis and when she ate the seeds, she felt like sick all the time?

Well, I mean, apple seeds have a little cyanide in them, so maybe it stemmed from that.

Maybe.

Perhaps.

But nobody eats apple seeds.

Yeah, you shouldn't.

So somebody that we know, Kara, eats the entire apple.

That is

bananas.

They eat the core of the apple.

What?

Lots of people do that.

Bob, I got that.

No.

You've got to believe me because I don't lie to you.

Whereas with watermelon, like there's seeds in every bite.

Unless you get the seedlings.

The seedless ones.

Seedless ones.

They still have seeds.

They're just white seeds.

They're not thick black seeds.

They're tiny.

Yeah, they're little tiny seeds.

They're seedlings.

There are seeds in a lot of your vegetables.

And there's a whole other list here, but those are some of the fun ones.

Those are the fun ones, too.

All right.

Thanks, Evan.

Bob, another AI article.

Last one to finish up the news.

AI is going to help us with our enzyme engineering.

Oh, boy.

Yeah, this was so much fun.

Okay, so, guys, what happens when you combine automated robotics, synthetic biology, and that ubiquitous two-letter initialism that we call AI?

You get not only a technology that's brimming with potential, I mean, really, wow, but

it's also an exciting solution to a powerful but limited biological tool used in industry, the lowly enzyme.

This is from journal Nature Communications.

The name of the study is "A generalized platform for artificial intelligence-powered autonomous enzyme engineering."

The study was led by Huimin Zhao, professor of chemical and biomolecular engineering at the University of Illinois.

Okay, so what's going on here?

So it starts with enzymes.

I've got to do a little table setting with the enzymes.

These are specialized proteins, right, like most of the human body is comprised of.

Essentially, strings of hundreds of amino acids or more that fold up into a specific shape, and that shape directly translates into a specific function.

And that specific function for enzymes is absolutely critical for life.

I mean, we're talking like eating, digesting, breathing, reproduction, moving.

None of that would happen without enzymes.

And that's the hallmark of them.

And what makes them invaluable?

They speed up chemical reactions by offering a low energy path, essentially a shortcut.

And it's not just a little bit faster.

I've read somewhere that enzymes, that without enzymes,

digestion could take years, something like years or months, whatever.

I mean, you'd be long dead before you got any benefit from it.

So, yeah, pretty important stuff.

But this is just the biological role of enzymes in our body.

It's just one side of the coin.

They have a powerful presence outside of our bodies that you might be less familiar with, and that's for industrial use.

So there's so many industrial applications of enzymes.

We're talking food production, pharmaceuticals.

biofuels, biomaterials, textiles, detergents, wastewater treatment, and that's just to name a few.

It's kind of endless.

So now these enzymes are essentially amazing little machines in this capacity, but they're underutilized because using them often involves some very frustrating roadblocks when you dig deep into it.

They can be inefficient in a lot of ways.

They sometimes don't have the ability that you would like to single out a specific target in this ridiculously complex chemical environment that they find themselves in.

So, all right, to sort of sum this up so far, we've got these amazing biological tools.

These are the enzymes that are critical to life.

But also,

for many industries, they're essentially straitjacketed by the inefficiencies and

inaccuracies, at times, of these enzymes.

And then, this is where this new study comes in.

The study had a goal to solve this problem by improving protein function.

But, as the lead author Zhao said, he said, improving protein function, particularly enzyme function, is challenging because we don't know exactly what kinds of mutations we should introduce.

And it's usually not just a single mutation, it's a lot of synergistic mutations.

So this is, it's tough to tweak these enzymes and make them better at what they do.

So they describe in their paper their solution to this problem, which brings three technologies together like never before.

And these are, like I said, AI.

automated robotics and synthetic biology.

So let's start with the AI leg of this tripod.

So for AI, they use deep learning with its layered artificial neural networks, right?

And these networks analyze data and learn complex patterns, right?

We've talked about this on the show before.

This deep learning, though, also uses a protein language model, not an LLM, but a PLM, which essentially is using

the language of proteins.

It's fluent not in English, but in the language of proteins.

Now, the AI's job in this role is to look at the genetic code and optimize it for the desired functions.

Remember, guys, you can't just brute force this thing.

You know, there are, you can't just like change, make little tweaks, some random tweaks, and see what happens, test it, and make more.

I mean, there's more possible amino acid combinations than atoms in the universe.

How many atoms are there in the universe?

10 to the power of 80.

10 to the 80th.

10 to the 80th power.

And, of course, you know, I looked this up.

That's a 1 followed by 80 zeros.

Just saying.
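Bob's point about brute force not being an option checks out with quick log arithmetic. A minimal sketch; the 300-residue enzyme length is an illustrative assumption, since many enzymes run from a few hundred residues up:

```python
import math

AMINO_ACIDS = 20
ATOMS_IN_UNIVERSE_EXP = 80  # commonly cited estimate: ~10^80 atoms

def sequence_space_exponent(length: int) -> float:
    """log10 of the number of possible sequences of `length` residues (20^length)."""
    return length * math.log10(AMINO_ACIDS)

# Even a modest 300-residue enzyme dwarfs the atom count:
exp = sequence_space_exponent(300)   # ~390.3, i.e. ~10^390 sequences
print(exp > ATOMS_IN_UNIVERSE_EXP)   # True
```

So exhaustive testing of variants is out, which is why the AI's job is to narrow the search to a handful of promising mutations.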

So their AI instead, what it does, is determine a small number of possible sequence changes.

And those sequence changes are based on its training on enzyme function and structure, and also, of course, the fluency of its protein language model.

So it says, all right, here are the suggestions, the tweaks that we can make to these enzymes.

So that's what the AI leg of this enzyme upgrade solution does.

That's the first part.

So next comes the other two legs of this tripod.

We've got the robotic automation component and synthetic biology.

So these AI suggestions would be sent to what the University of Illinois calls its iBio Foundry.

And this seems like it belongs on the goddamn enterprise.

This thing is like, what?

This is fascinating.

So this iBio Foundry uses robotic platforms and computational tools, and it actually builds the enzymes that the AI is suggesting from scratch.

From scratch.

Not just going into an enzyme and making a tweak, it's just building it piece by piece from scratch, and then it tests them.

So it doesn't just build them, it actually tests them to see how well they perform based on

what the desired new updated functionality of that enzyme is.

And then its performance is sent back to the AI, which makes new suggestions based on this new information, and the process repeats over and over and over.

These are being called self-driving labs.

They're powerful automated AI-guided platforms for enzyme engineering.

And once this process starts, as I've described it, it's essentially running on its own with minimal supervision, or essentially no supervision at all at this point.

So that's why they're calling it self-driving labs.
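The design-build-test cycle described here can be sketched as a simple loop. Everything below is a toy stand-in: `propose_variants` plays the role of the protein language model, `build_and_assay` plays the role of the iBioFoundry robotics, and the scoring function is fake and purely illustrative.

```python
import random

def propose_variants(best_seq: str, n: int = 8) -> list[str]:
    """Stand-in for the protein language model: suggest n random point mutants."""
    aas = "ACDEFGHIKLMNPQRSTVWY"
    variants = []
    for _ in range(n):
        pos = random.randrange(len(best_seq))
        variants.append(best_seq[:pos] + random.choice(aas) + best_seq[pos + 1:])
    return variants

def build_and_assay(seq: str) -> float:
    """Stand-in for the robotic foundry: build the enzyme and measure activity.
    Here a toy score (fraction of one 'favorable' residue) fakes the assay."""
    return seq.count("W") / len(seq)

def self_driving_loop(seed: str, rounds: int = 5) -> tuple[str, float]:
    """Propose, build, test, feed results back, repeat: the 'self-driving lab' loop."""
    best_seq, best_score = seed, build_and_assay(seed)
    for _ in range(rounds):
        for variant in propose_variants(best_seq):
            score = build_and_assay(variant)
            if score > best_score:  # keep the winner and mutate from it next round
                best_seq, best_score = variant, score
    return best_seq, best_score

seq, score = self_driving_loop("MKTAYIAKQR")
print(score >= build_and_assay("MKTAYIAKQR"))  # True: activity never regresses
```

The real system replaces the random proposer with a trained model and the toy assay with physical synthesis and measurement, but the control flow is this same closed loop.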

So, of course, the proof of the pudding, as they say, is in the tasting.

So, what are the results?

What did they achieve using this new methodology?

So, they used two key industrial enzymes here, and both of them came back with substantially improved performance.

It's kind of dramatic, I think.

So, one enzyme, they use it in industry to add to animal feed to improve the nutrition of the food.

This process, this new process that they have here, increased its activity by 26 times after being tweaked.

The second enzyme, which is used for just a generic industrial chemical synthesis, the paper says, it had 16 times greater activity, and this enzyme also had 90 times greater substrate preference, which means that it was far less likely to target chemicals that it was not supposed to target.

So that's substrate preference, 90 times greater.

So these seem pretty dramatic to me as basically a proof of concept for this new technique.

So what are we talking about now in the near future?

I mean, this is what they're basically doing now.

And they've designed this to be generic for just...

proteins in general.

It's not just for these two enzymes that they tested.

They made this so that it can be used for enzymes and proteins just in general.

So, in the very near future, what they plan on doing is somewhat predictable, as you might imagine.

Continue improving their AI models.

They want to upgrade the equipment, make it faster, higher throughput, faster testing, all that stuff.

But they also have, and it seems like they've already gone a long ways in having this, they want to create an entirely new user interface that can use simple typed queries.

Because I believe now you need to be able to code it in Python in order to really get this system to do what you want it to do.

But the new interface they're talking about would kind of almost be like an LLM.

Just type in what you want.

Maybe I'm sure there's some structure to it, and make it very easy for non-specialists to use this system so that they can work on improving the enzymes that they want to improve.

Or if they want to improve, you know, in drug development times, or maybe they want to make new innovations in energy and technology,

they can use these systems as well.

What do you guys think?

I mean,

industrial enzymes are a huge industry.

It's a huge part of industry.

Absolutely.

Anything that can, again, automate or increase the ability to make more efficient, more targeted enzymes could have wide-ranging impact across many different industries.

I mean, look at the results of 26 times

the activity for one enzyme or this other enzyme at 90 times greater substrate preference.

That's just

dramatic.

I mean, imagine scaling this up, just applying this.

You know, once they tweak it and get it even more efficient and better AI,

you know, better protein language models.

I mean, it just seems like this has nowhere to go but dramatically up.

But we will see.

Who knows what can kill these things?

But fascinating stuff.

All right.

Thanks, Bob.

Jay, it's who's that noisy time?

All right, guys.

Last week I played This Noisy.

What do you think?

Don't like it.

Don't like it.

Not your fave.

Not a pleasant noise.

I mean, it sounds mechanical, like something grinding or spinning or whatever.

But yeah, the noise up front, like the slapping noise, I don't know what that is.

Well, Visto Tutti wrote in, and Visto said this one sounds like a churinga or bullroarer.

This is a carved wooden wing, and it's attached to a length of rope and spun around so that the wing produces a loud sound as it beats the air.

Yeah, so these were used to signal over large distances.

And

you could find these used in Australia, I think, even today.

Didn't Crocodile Dundee use one in the movie?

Yes, that is correct, Steve.

A listener named M-O-J-C-A.

M-O-J-C-A.

Mochka.

Mocha.

Mocha.

Mocha.

Is it Mocha?

Is the J silent?

And last name is Kolsek.

They said: Hi, Jay.

I'm going to guess the noisy is a drill bit, but since this would be too boring and also due to strange frequencies, I'm guessing it's one of those two or three headed drills I've seen all over the internet lately.

I have not seen that.

I don't know what you're talking about.

It's intriguing.

I definitely would like to look that up.

But you are not correct.

But

thanks for guessing.

Hunter Richards wrote in and said, hi, Jay.

I'm not too late.

Is this the mini steam power train, the kind that's big enough to ride on, not in, and not a model train?

Or it's Bender.

Yeah, I don't, I'm not, I don't know what you're hearing there.

I'm not hearing that, but thanks for the guess.

You know, everybody, we have different memories that influence what we think we hear.

So I do have a winner this week.

I had several winners this week.

The person that guessed first was Shane Hamlin.

And Shane says, hey, Skeptics Guiders.

I was listening to this podcast with my dad and immediately knew what this was.

The noisy from the June 28th podcast was when you put a bolt or a nut, not a peanut, in a balloon and put air in the balloon and spin it.

You can hear the nut slow down at the end of the noisy until it drops to the bottom of the balloon.

That's exactly what this sound is.

Listen again.

I will warn you that

if you do this and you use a heavier size nut, it could rip through the balloon.

So be careful.

So good guess.

Good job.

Shane, I did have other people guess. I wanted to mention this: Nathan Drake wrote in and said, hey, Jay, listening since before my son was born in 2010, never had any idea, but this week to me it sounded like a combustion engine starting and revving slowly, then dying. My son Wyatt in the back seat of the car said I was wrong and that it's a hex nut spinning in a balloon, and that you could tell because of the thud at the end when it finishes. Very cool.

So he guessed it right.

His dad was wrong.

And I like that he heard the little detail at the end, right, about the nut basically slowing down and bouncing around in the balloon when it didn't have the momentum to keep going around the circumference of the balloon.

Very good guess, Wyatt.

So, I have a new noisy though for you guys this week, and I'm curious to know what Kara thinks of this.

That's a Space Invaders.

I suspect this week is going to be very difficult,

but I will give you no clues because everybody completely went crazy on the Space Invaders one.

Like, I got so many emails, and everyone's saying, Oh my god, it's too easy, too easy.

A lot of people had fun, you know, writing in and saying, you know, that they got it.

The bottom line is: this one's hard.

Good luck.

If you think you know this week's noisy or you heard something cool, you can email me at WTN@theskepticsguide.org.

Quickly, we have a show in Kansas, guys, on September 20th.

We have two shows, a private show, which is a live podcast recording.

And then at night, we'll be doing our stage show, which is the skeptical extravaganza of special significance.

If you're interested in seeing us live on these, you know, two different types of shows, you could come to one or both, whatever you want to do.

Go to theskepticsguide.org.

There's a button on there for each of these.

And, you know, we just would like it if you'd join us because it's a fun day.

Those who spend the whole day with us, you know, we have a lot of people that do that.

You know, there's synergy between the two shows that you'll only get, you know, at the second show, which is pretty cool.

All right.

Thanks, Jay.

I'm going to hit you guys with a new segment.

I call this segment, Why Didn't I Know This?

Yeah.

Inspired by Evan.

Yes.

Who sent us an email saying,

Why didn't I know this?

talking about the great attractor.

So yeah, let's talk about the great attractor.

And we could see if this works out as a new segment where we just talk about something in the world of whatever, science and reality

that

maybe you never heard of, but it's kind of cool.

And I brought this up specifically.

I was,

you know, YouTube has shorts, right?

They're basically little TikTok videos, vignettes of videos and whatnot.

And I get, you know, like any person, I get stuck down the rabbit hole sometimes.

And one came up of Neil deGrasse Tyson talking about how we are, you know, how fast we're moving through space, you know, everything relative to, you know, our entire

solar system that's moving, right, and everything else.

We're basically going at, what, 2.1 million kilometers per hour, by one certain measure.

So I was like, okay, yeah, we're going pretty fast.

And then he mentioned the Great Attractor, because apparently that's the direction our entire Milky Way galaxy is generally heading, along with a bunch of other galaxies.

Now,

this was new to me; I had not heard of this before.

I spoke to Bob a little bit about it and asked Bob if we had brought this up on, if this had come up as a subject at all on the Skeptic's Guide before.

And Bob, what?

You said you didn't have any recollection of it either, right?

Yeah, I don't know if we've ever spoken about it.

Okay, so good.

Then nobody,

I'm not misremembering, right?

So here it is.

Yeah, I've read about this before, but I was just getting myself updated on it.

It's interesting.

So in order to know what the great attractor is, you have to know a little bit about the structure of our part of the universe.

So, you guys know that our galaxy is the Milky Way?

Yes.

Right?

Do you know that our galaxy is part of a local group?

Yes,

I already mentioned it about a dozen times.

The Virgo group.

Is it the Virgo group?

No.

You're two levels too high.

So the local group includes the Milky Way, the Andromeda Galaxy, the Triangulum Galaxy, and a bunch of dwarf galaxies.

That's our local

group.

The number is anywhere, I've heard, from 50 to 60 to over 100 galaxies.

Yeah, many of them dwarf galaxies, many of them hidden, they think.

Right.

So we can't see the number.

It's kind of big.

10 million light years across.

That's our local group.

That's the next notch up above our galaxy.

And that's the group that we will all eventually merge with.

Nice.

Eventually.

Now, the local group is part of the local sheet.

The local sheet is a flattened structure containing several local groups.

No sheet.

Yeah, the local sheet.

So there's two other groups, like the M81 group and the Centaurus A group, combined with our local group that make up the local sheet.

Okay.

Okay, next step up.

Next up is the Virgo supercluster.

Okay, that's right.

This is the

local supercluster.

This is about 1,300 galaxies, and it's about 110 million light years across.

But that's not it.

That's at the highest level.

The Virgo supercluster is part of the Laniakea supercluster, which I talked about, like, what, eight, nine, ten years ago.

Yeah, which is

520 million light years across.

That's crazy.

I don't like this.

Why are they both called superclusters?

You know, well, one's a supercluster.

It's a bad nomenclature.

It's a super.

We'll call it a super duper cluster.

It should be an Ubercluster.

So there's about 100,000 galaxies in the Laniakea supercluster.

Now, at the very gravitational center of the Laniakea supercluster is the Great Attractor.

And

it's more than a galaxy.

It's a concentration of mass that's 10 to the 16th solar masses.

That's massive.

So basically it's probably a supercluster.

So there's a supercluster in the middle of the

Laniakea supercluster, and that is the center of gravity of

the bigger supercluster.

And so everything is moving towards it, including the Milky Way galaxy.

So it's hard to see.

I mean,

they see something there.

They call it the Norma cluster, but that's only

part of what's there.

So it's kind of obscure, so they're not really sure.

And that's why it was kind of mysterious.

But my question, Steve, is

now I've read over and over that we will, all our all the local galaxies, 50 to 100 of our local galaxies in our local group, will merge eventually.

But the question is:

will the expansion of space win out over it?

I think so.

I think it's what I've read.

It's too big.

It's too big for

gravitational binding.

You know what I mean?

So,

the expansion will overcome the gravitational attraction of the Laniakea supercluster, not our local group.

I don't know about the Virgo supercluster, but definitely the

Laniakea.

Yeah, I think even the Virgo supercluster will eventually go bye bye.

Okay, so when you say it'll win out, you mean eventually, but right now we're moving towards it?

We're moving towards it, but it's but the expansion is greater.

So here's the thing: we can't see it directly because it's in the quote-unquote zone of avoidance.

It's the part of the universe we can't see because the plane of our own galaxy is in our way, right?

So we're

going to have the dust and stars and everything in our own galaxy keep us

from seeing that strip.

And that's where the Great Attractor happens to be, obscured by this zone of avoidance.

But when we did a massive survey of the redshift of the galaxies in the Laniakea supercluster, right, they're all redshifted.

They're all moving away from us,

but they also have what they call peculiar velocities.

In addition to the redshift, there is some additional velocity, and all of these additional velocities are moving towards the same point.

That's the great attractor.

So everything is moving towards the great attractor,

but they're moving away from each other even faster because of the expansion of the universe.

We'll never get there.

Yeah, that's interesting.

So the movement towards it, though,

not counting,

let's factor out the expansion of the universe.

The movement towards it, is it

a collapsing movement or is it a circular like it's a massive?

No, no, we're moving, just we're moving in that direction, just like as a straight line.

I know we are, but I'm saying when you look at everything around, everything's moving towards that point.

Yeah, so it's like a collapsing movement, yeah, yeah, not like a rotation.

Not rotation, like the expansion.

Think of all the different ways we are moving around the sun, the sun around the galaxy, the galaxy, you know, within the local group, and then the local.

I mean, there's so many ways we're moving.

I'm talking about space, like, is all the stuff in space,

not from our perspective.

But, like, if you just look at the great attractor as the arbitrary center of this model, the things that are moving towards it are collapsing in towards it like linearly, or they're rotating around it, like most things do in space.

But I don't think we have a long enough time of measurement to know.

But then it's actually being pulled apart faster than it's collapsing.

But yeah,

they're all still redshifted, which means they're all moving away from us and from each other.

But there's this additional velocity, the peculiar velocity, that's plus 700 to minus 700 kilometers per second, depending on where it is in relation to the great attractor and us as the viewer.

Right.

So something that's on the opposite side of the great attractor is moving away from us at 700 kilometers slower than it should be because it's also being drawn in by the great attractor.
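That subtraction, observed recession velocity minus the Hubble flow equals the peculiar velocity, can be sketched numerically. The Hubble constant value and the distances here are illustrative assumptions, not figures from the episode:

```python
H0 = 70.0  # Hubble constant in km/s per Mpc (assumed round value)

def peculiar_velocity(v_observed_kms: float, distance_mpc: float) -> float:
    """Subtract the Hubble-flow expansion to expose the peculiar velocity."""
    return v_observed_kms - H0 * distance_mpc

# A galaxy 100 Mpc away "should" recede at 7,000 km/s from expansion alone.
# If we measure only 6,300 km/s, it has a -700 km/s peculiar velocity:
# it is being slowed, pulled toward the Great Attractor side of the sky.
print(peculiar_velocity(6_300, 100))  # -700.0
```

Surveys mapping these residual velocities across the sky are what revealed a common point everything is falling toward, even though the galaxy itself hides that point from direct view.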

You know, that's where I want to live.

I want to live in the great attractor.

Think about it, though.

There's a lot going on there.

For long-term, for super long-term survivability of whatever's there, you want as much mass as possible in your vicinity so that you could use that mass, you know, the mass energy, to survive long into the cold, you know, after it's really just black holes and white dwarfs left in the universe.

You want as much mass as you can.

If you don't have a lot of mass, you're just not going to last as long as any civilizations that do.

It's like the highest mountain peak during a flood.

Imagine astronomers in the great attractor looking around and eventually figuring out: hey, check this shit out.

Everyone's coming to us.

How awesome is this?

We are the center of the universe.

Right.

That's a lot of matter.

That's a lot of matter.

They must feel very close.

It's very close to it.

Because do you think it does, like, it has like black hole features?

No, I think it's just

galaxy supercluster.

It's the biggest boy in town.

But that means there would be a black hole in the middle of it.

Yeah, probably.

Most galaxies have supermassive black holes in the middle.

Yeah, I'm sure there's plenty of big galaxies there with lots of supermassive black holes, but not necessarily an uber-massive, ridiculous, you know, at-the-edge-of-physics black hole. Just lots, you know, just a lot,

a big, dense supercluster.

It's not necessarily.

All right, let's move on with science or fiction.

It's time for science or fiction.

Each week I come up with three science news items or facts, two genuine, one fake.

And I challenge my panel of skeptics to tell me which one is the fake.

We have a theme this week.

Theme is genetics.

Guys, ready?

Okay.

Okay.

Item number one: the smallest animal genome

by number of coding genes is the Trichoplax adhaerens at just 3,500 genes, while the largest

belongs to the axolotl with about 90,000 genes.

Item number two: bdelloid rotifers are small aquatic animals with a high rate of horizontal gene transfer, with genes from other kingdoms of life responsible for about 10% of their genome.

And item number three, the Japanese pufferfish, Fugu rubripes,

has the animal genome with the highest coding density, at 17% compared to 3% in humans.

Jay, go first.

All right, the first one here about the smallest animal genome.

This is a creature I've never heard of before, the Trichoplax

adhaerens,

whatever that means.

Well, the largest belongs to the axolotl with about 90,000 genes.

Interesting.

Okay, so why would the axolotl have the most genes out of all the animals?

It's a small animal, and I just realized that I don't know if the genome size equates to the size of the creature.

Wow, I can't believe I don't know that.

Bdelloid rotifers are small aquatic animals with a high rate of horizontal gene transfer, with genes from other kingdoms of life responsible for about 10% of their genome.

What do you mean with genes from other kingdoms?

So they're animals, but they have genes from bacteria, plants, and fungi.

Whoa.

Oh, that's cool.

That's really freaking cool if that's true.

And it wasn't artificially made, correct?

Nope.

Naturally occurring horizontal gene transfer.

Cool.

Okay.

And then finally, the Japanese pufferfish, Fugu rubripes,

has the animal genome with the highest coding density at 17% compared to 3% in humans.

So the coding density means the percentage of the base pairs that are part of protein-coding genes versus junk DNA, non-coding regions, et cetera.
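Coding density as defined here is just a ratio of coding base pairs to total genome size. A minimal sketch with round, approximate numbers: the genome sizes are roughly the published figures (fugu ~390 Mb, human ~3,100 Mb), while the coding-base-pair totals are back-of-envelope assumptions chosen to match the quoted percentages.

```python
def coding_density(coding_bp: float, genome_bp: float) -> float:
    """Fraction of the genome that is protein-coding sequence."""
    return coding_bp / genome_bp

# Illustrative round numbers (genome sizes approximate; coding totals assumed):
fugu  = coding_density(66e6, 390e6)     # ~0.17
human = coding_density(93e6, 3_100e6)   # ~0.03
print(f"{fugu:.0%} vs {human:.0%}")     # 17% vs 3%
```

The fugu's density is high mostly because its genome is unusually compact, with far less non-coding sequence between genes, not because it has many more genes.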

This is a remarkably difficult science or fiction, Steve.

I hope you're proud of yourself.

Is this the kind of crap we're going to get from you now that you're retired?

Like, is it going to be like the second thing?

Yeah, I just expect the difficulty to go way up.

Oh, my God.

I'm dying to hear what everybody else has to say.

There's something about the axolotl having 90,000 genes.

Now, this is of all animals, correct, Steve?

Animals, yeah.

I don't know.

Something about that is rubbing me the wrong way, so I'm going to say that one is the fiction.

Okay, Evan.

Coding genes, like you said, Steve,

the good genetic material, the stuff that means stuff, not just the filler and the junk.

And

I could have sworn there were things that did have fewer coding genes, I thought, but maybe they don't fall into the category of animal.

Maybe they are more a bacteria, something else, right?

Something non-animal.

Maybe that's where I'm getting confused with this.

But I kind of think Jay's onto something here.

And I think maybe that 90,000 for the axolotl

is maybe too

small.

Something has more.

You know, whereas I'm just really guessing on the other two.

I don't know about the horizontal gene transfer in the bdelloids, or the Japanese pufferfish, fugu.

I remember that from that Simpsons episode way back when.

But I don't know, the most 17% coding density,

but 3% in humans.

Wow.

I suppose that could be.

Couldn't say why, though.

I'll go with Jay.

Jay, you're leading the way on this one.

Okay, Kara.

I'll take them backward.

It's interesting because I feel like with animals, there isn't a huge correlation between

the type of animal or like how we think of the, oh, that's really big and it moves or that's small and it free floats.

And it was smart of you to focus just on animals, because plant genomes are bananas; you have, like, octoploidy.

But generally speaking, animals are going to have just two sets.

And so

you can kind of compare them.

It's like comparing apples to apples.

A Japanese puffer fish has the highest coding density compared to humans.

So 17% compared to 3%.

Wow, our coding density is really low.

Oh, yeah.

17% feels high.

So that just means that a lot of stuff was like conserved and really useful.

I don't know if that's true or if it's another animal, like a shark or something that would be closer to that.

Bdelloid rotifers.

So these little animals.

In this case, the fact that it is small might be important,

especially if they're like free-floating in the water.

So similar to bacterial gene transfer, maybe it's picking up a lot of stuff from the things that are floating around it and mixing with it.

But I don't know if 10% is normal.

I don't know how much of our genome is horizontal gene transfer.

It would have been interesting for you to include that.

And then I hate that you do this one by the number of coding genes.

That's like a whole other layer.

Like, I don't know.

I think humans have around 20,000, but I don't, I think that's coding.

That could just be gene.

No, it must be coding genes that we have around 20,000.

And we always thought it was way more before we sequenced it.

So my guess is that the smallest one has way fewer than that, and the largest one also has way fewer than that.

I bet you these are overestimates on both sides because we tend to think big when it's actually small, or at least in animals.

So, I'm going to go with the guys and say that one's fiction.

Okay, and Bob.

Let's see.

I'm going to take these backwards as well.

The coding density, yeah, it's 3% for people.

I'm pretty confident about that.

17%, though, for this fugu dude.

I mean, I don't know.

Perhaps coding densities are higher.

I don't know.

It

sounds reasonable.

Probably the most reasonable one here.

Let's see.

The horizontal gene transfer.

So 10%.

It might seem a little low to me because, I mean, there's a lot of conserved genes, the genes that are so good and so fundamental.

Everybody's got them.

So in some sense, that seems maybe a little bit small.

I don't think you understand what this item is saying.

These are not conserved genes.

Why?

They're from other kingdoms, right?

So

they would.

They were just taken on?

Yeah, why couldn't they be conserved genes?

They're not conserving.

Because they're passed on to each other.

Why do you say they're not conserved genes?

Yeah, otherwise they wouldn't be due to horizontal gene transfer.

That specifically refers to a gene being added to the genome of another

species later on.

They share it because they didn't get it from a common ancestor.

That's vertical transfer, right?

You're talking about vertical gene transfer.

Horizontal gene transfer is not that.

It's okay.

So vertical, horizontal.

What's the difference?

All right, I get it.

I get it.

All right, so

the first one here.

Let's see.

I love trichoplax.

That's such an awesome name.

Axolotl.

So 3,500, which is that's kind of low.

Not as low as the smallest synthetic organism, but that's not what we're talking about here.

90,000 genes doesn't sound like enough.

Now, I know the axolotl is probably one of the biggest

healers in the animal kingdom.

I mean, you just chop off anything and it grows back.

It's the Wolverine, I think, of animal regeneration.

So it kind of makes sense that

it would have a high number, but I think that number is even higher than that.

So I'm going to go with everybody and say that that's fiction as well.

All right.

So I guess I'll take these backwards too.

We'll start with number three.

The Japanese pufferfish, Fugu rubripes, has the animal genome with the highest coding density at 17%

compared to 3% in humans.

You guys all think this one is science.

And this one is

science.

This one is science.

Yeah, so it is a very compact genome.

It only has 400 million base pairs compared to

several billion for people.

Yeah.

How many coding genes does it have?

It's very efficient.

About the same.

It's got about the same.

Same what?

As humans.

Yeah, okay.

Yeah.

So, yeah, and the question is why?

Like, was there some selective pressure for a more efficient, if you will, genome that has less junk in it?

And,

you know, that's probably why that's the case.

Sometimes smaller animals have to have more compact genomes.

But there actually isn't, since this came up, there isn't much of a relationship between the size or even the complexity of animals

and their genome size.

There's so many other factors.

Okay, let's go back to number two.

Bdelloid rotifers are small aquatic animals with a high rate of horizontal gene transfer, with genes from other kingdoms of life responsible for about 10% of their genome.

You guys all think this one is science and this one is

science.

You guys got to know.

So, yeah, very unusual.

You know, this is a much higher rate than any other animal.

Now, these things are, they're not just small, they're microscopic.

They're not visible with the naked eye.

Right?

So they're kind of like the little water bears.

Oh, tardigrade.

I was going to say, is it because they're bacteria-like that they have so much horizontal transfer?

Maybe.

Maybe.

Yeah, but they have incorporated genes from plants, fungi, and bacteria into their genome.

So it's a much higher percentage than in any other animal.

What about the method of horizontal transfer? I mean, I obviously haven't read about that in quite a while, but how does that work? How does it take them up? I wonder if it's from the stuff that eats them, like viral...

Usually it's from eating them.

Yeah, that's what I would think. They're just floating through the water, and they're rotifers, right? So they're just kind of receiving all this microscopic stuff into their little bodies all the time.

That's just bizarre. I mean, eating is one thing, but incorporating it wholesale is just, like, how does that work?

Yeah, but if the bacteria are like, you kind of look like a bacterium, I'm going to hang out with you.

So I wonder if that's true.

It's just mistaking them for bacteria.

Yeah, they do have, like, what, circular DNA, and with plasmids they can just transfer.

So that's, yeah, that's why they're, what makes them so nasty is they, they can, hey, look what I learned.

Look what I can defend against.

Now you can too.

Now, like water bears, they can enter a state of dormancy known as anhydrobiosis, right?

They get dried out.

Desiccate, yeah.

And then they could survive in this dormant state for a long period of time.

In fact, what do you think the longest duration is?

Thousands of years.

24,000 years.

That's only moody.

That's a lot of money.

It's got some advantages, yeah.

Yeah.

Not forever, but basically, I mean, I don't know if there's an upper limit there.

Well, but that's just the longest we found.

That's the longest we've found, right?

Yeah, it could be.

It could be.

It was found in 24,000-year-old Siberian permafrost.

And they think that that probably the gene transfer is part of why they can do this, right?

They use these genes in order to be able to do this.

Okay, so this means that the smallest animal genome by number of coding genes is the Trichoplax adhaerens at just 3,500 genes, while the largest belongs to the axolotl with about 90,000 genes.

Is the fiction, and I'll tell you that those creatures are correct, just the number of genes is incorrect.

I altered the number of genes.

So what do you think?

Now, Bob, you thought that the axolotl has more than 90,000 genes.

What do the rest of you guys think?

I said the same thing.

I think they both have less.

What about you, Jay?

I'm going to always go with Kara.

90,000 sounds really high for an animal.

You're all wrong.

Wow.

Don't you feel smart?

Wait, how can it not have more or less?

Well,

you said they both have less.

The axolotl has less, the trichoplax has more.

That's more.

No, I went the opposite.

I made the difference more extreme.

I see.

But animals are actually pretty consistent in the number of genes that they have because we're all animals.

We just share a lot of genes just as animalia.

So you get different numbers as estimates, but the trichoplax has about 11,500 genes,

and the axolotl has about 30,000 to 35,000 genes.

Whereas again, humans are 20,000, pretty much in the middle.

So for all of animals, you're talking 11,000 to 30,000 genes is the range, which is not that much.

I mean, considering that's across all of animals.

Yeah, it might be part of what makes us animals.

And the trichoplax is a placozoa.

It's a basal group of multicellular animals,

possible relatives of the cnidaria.

Oh, cool.

It looks like just a blob, though.

Guys,

the minimalist synthetic cell that they created a bunch of years ago, how many genes do you think that has?

185?

1,000.

It's got 531,000 base pairs and 473 genes.

I win by Price Is Right rules.

It's self-replicating.

Smallest genome of any self-replicating organism.

That's scary.

That's pretty cool.

Hopefully, they put a kill switch in there.

Think about it, though.

They probably do.

The reason why I said that, Bob, is because

the creature, not animal, with the fewest genes is Carsonella ruddii, which has 182 protein-coding genes.

But this is a

bacterium,

but it's a parasitic bacterium.

Ah, so it's hosting.

It doesn't need that many genes because it's living off its host.

Living off the host, yeah.

The host with the most.

But we're still not at the very minimum, though, and

that's the goal, right?

They want to find out what are the critical ones for life.

Exactly.

Among free-living organisms,

the fewest genes, and this is natural, not artificial, is Mycoplasma genitalium, with 525 genes.

So 525 is the smallest number for a free-living organism, 182 for a parasite, and then 11,500 for an animal.
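The counts Steve rattles off here can be lined up side by side. This is just a sketch arranging the approximate figures quoted in the episode; the parenthetical labels are descriptive shorthand, and the axolotl number uses the low end of the stated range.

```python
# Approximate protein-coding gene counts quoted in the episode,
# from smallest genome to largest.
gene_counts = {
    "Carsonella ruddii (parasitic bacterium)": 182,
    "JCVI minimal synthetic cell": 473,
    "Mycoplasma genitalium (smallest free-living)": 525,
    "Trichoplax adhaerens (smallest animal)": 11_500,
    "human": 20_000,
    "axolotl (largest animal, low estimate)": 30_000,
}

# Print smallest to largest.
for name, n in sorted(gene_counts.items(), key=lambda kv: kv[1]):
    print(f"{n:>6,}  {name}")

# The whole animal kingdom spans barely a 3x range in gene count.
animal_range = 30_000 / 11_500
print(round(animal_range, 1))  # 2.6
```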

And you're right, Kara, I didn't use plants because they're crazy.

The biggest genomes in the world are in plants, and the single biggest genome is in a fern that has this crazy number of base pairs.

But

it's mostly non-coding.

Well, some animals and plants have, it was like, what, like gene duplications?

Like, bam.

So that's why, yeah, like I said, it's got octoploidy.

They're, like, octoploid.

No, no, no.

It's just that they have, they don't just have pairs of genes.

They'll have like four or eight or 16.

So it like quadruples.

Yeah, yeah, that's why.

So, yeah, so this one fern, the New Caledonian fork fern, has 160 billion base pairs, whereas humans have 3 billion.

What the hell?

But what's its ploidy?

Do we know?

I'd have to take a look at the Caledonian.

But the thing is, it's like 64 billion.

It's actually a disadvantage.

It's slow growing.

It takes a lot more energy and a lot more raw material to reproduce because it's got to copy these massive genomes.

It needs more.

It's octoploid.

Yeah.

So maybe that's the highest it can be.

I said 16.

That's probably not real.

Yeah.

It's octoploid, whereas we're diploid.

Right.

So right there, that's yeah.

Eight versus two.

Divide it by four, and its genome is suddenly not nearly as impressive.

It's still big.

Take that fern.

Well, it's still 40 to 3.

Not just 160 to 3.

Right.

If you divide it by 4.

Still big.

Yeah.

I mean, it's still really impressive, but not as impressive.

It's 50 times more than the human genome, is what it says.
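The ploidy arithmetic in this exchange can be sketched in a few lines, using the round figures quoted above (octoploid vs. diploid, 160 billion vs. 3 billion base pairs):

```python
# Normalizing the fern's genome size by ploidy, per the discussion.
FERN_BP = 160_000_000_000   # New Caledonian fork fern, ~160 billion bp
HUMAN_BP = 3_000_000_000    # human, ~3 billion bp

fern_ploidy, human_ploidy = 8, 2            # octoploid vs. diploid
ploidy_ratio = fern_ploidy // human_ploidy  # 8 / 2 = 4

print(round(FERN_BP / HUMAN_BP))                 # 53 -> the "~50 times" figure
print(round(FERN_BP / ploidy_ratio / HUMAN_BP))  # 13 -> the "still 40 to 3"
```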

Yeah,

that's it.

It's probably that it's doing it.

All right.

Well, good job, guys.

Thank you.

Thanks, Steve.

Evan, give us a quote.

There's a kind of a spatial association between music and math, the intersection of science and art.

Medicine is an art, and research is an art.

You have to be creative in the way you design experiments.

And that was from an interview with Dexter Holland. Dr. Dexter Holland, by the way, a PhD in molecular biology.

You maybe know him as the lead singer of the punk rock band The Offspring.

No idea that he was a doctor.

He had a degree in molecular biology.

That's really cool.

Never knew that about him until very, very recently.

Like today.

All right.

Thanks, Evan.

Thanks.

Well, thank you all for joining me this week.

Thank you, Seville.

Thank you, Steve.

Steve, let me ask you one quick question.

Sure.

Are you happy being retired?

Is everything good?

I mean, my life is really not any different.

I've been busy working the last few days.

You know, it's only been three days where my schedule has been different from not working.

So

we'll see when things settle in, in a few months.

I spent most of it dealing with the phone.

I know.

Spent most of it trying to get my phone updated.

I'll ask you in a couple of weeks.

Yeah, that's going to be your next big hurdle, right, Steve?

I remember at a private show once, you kind of admitted to the group that you struggle with feeling lazy

and that you often feel internalized pressure to fill your time.

So now you're going to have all this extra time.

How well do you think you're going to be able to just sit and do nothing?

Not well, but I'm filling it with stuff to do.

Yeah.

I already have way more stuff to do than I have time to do.

But that's what I was saying.

I think it'll take a few months to really settle in.

Like, I've done all my projects.

I've done all the busy work that I could give myself.

We have my new projects for the SGU going.

Bob and I are going to be writing another book.

We're adding another podcast.

We're doing more live streams.

We're bringing back AQ6.

I'm going to have more time to put into just the primary show itself.

And I'll have more downtime, but we'll see.

It'll take a couple months to settle in.

I think you're going to fill all that downtime.

I will fill it.

I'm good at filling my time.

But some of that, more of that, will be like video games than it is now.

I'll be able to do more of that kind of stuff.

All right.

Thanks again, guys.

Sure, man.

Thank you.

God, Steve.

And until next week, this is your Skeptic's Guide to the Universe.

The Skeptics' Guide to the Universe is produced by SGU Productions, dedicated to promoting science and critical thinking.

For more information, visit us at the skepticsguide.org.

Send your questions to info@theskepticsguide.org.

And if you would like to support the show and all the work that we do, go to patreon.com/SkepticsGuide and consider becoming a patron and becoming part of the SGU community.

Our listeners and supporters are what make SGU possible.