Improving Science & Restoring Trust in Public Health | Dr. Jay Bhattacharya

4h 27m
My guest is Dr. Jay Bhattacharya, MD, PhD, Director of the National Institutes of Health (NIH) and Professor Emeritus of Health Policy at Stanford University. We discuss which scientific questions ought to be the priority for NIH, how to incentivize bold, innovative science, especially from younger labs, and how to solve the replication crisis and restore trust and transparency in science and public health, including acknowledging prior failures by the NIH. We discuss the COVID-19 pandemic and the data and sociological factors that motivated lockdowns, masking and vaccine mandates. Dr. Bhattacharya shares his views on how to resolve the vaccine–autism debate and how best to find the causes and cures for autism and chronic diseases. The topics we cover impact everyone: male, female, young and old, and, given that the NIH is the premier research and public health organization in the world, they extend to Americans and non-Americans alike.

Read the episode show notes at hubermanlab.com.

Thank you to our sponsors

AG1: https://drinkag1.com/huberman

David: https://davidprotein.com/huberman

Eight Sleep: https://eightsleep.com/huberman

Levels: https://levels.link/huberman

LMNT: https://drinklmnt.com/huberman

Timestamps

00:00:00 Jay Bhattacharya

00:06:56 National Institutes of Health (NIH), Mission

00:09:12 Funding, Basic vs. Applied Research

00:18:22 Sponsors: David & Eight Sleep

00:21:20 Indirect Costs (IDC), Policies & Distribution

00:30:43 Taxpayer Funding, Journal Access, Public Transparency

00:38:14 Taxpayer Funding, Patents; Drug Costs in the USA vs Other Countries

00:48:50 Reducing Medication Prices; R&D, Improving Health

01:00:01 Sponsors: AG1 & Levels

01:02:55 Lowering IDC?, Endowments, Monetary Distribution, Scientific Groupthink

01:12:29 Grant Review Process, Innovation

01:21:43 R01s, Tenure, Early Career Scientists & Novel Ideas

01:31:46 Sociology of Grant Evaluation, Careerism in Science, Failures

01:39:08 “Sick Care” System, Health Needs

01:44:01 Sponsor: LMNT

01:45:33 Incentives in Science, H-Index, Replication Crisis

01:58:54 Scientists, Data Fraud, Changing Careers

02:03:59 NIH & Changing Incentive Structure, Replication, Pro-Social Behavior

02:15:26 Scientific Discovery, Careers & Changing Times, Journals & Publications

02:19:56 NIH Grants & Appeals, Under-represented Populations, DEI

02:28:58 Inductive vs Deductive Science; DEI & Grants; Young Scientists & NIH Funding

02:39:38 Grant Funding, Identity & Race; Shift in NIH Priorities

02:51:23 Public Trust & Science, COVID Pandemic, Lockdowns, Masks

03:04:41 Pandemic Mandates & Economic Inequality; Fear; Public Health & Free Speech

03:13:39 Masks, Harms, Public Health Messaging, Uniformity, Groupthink, Vaccines

03:22:48 Academic Ostracism, Public Health Messaging & Opposition

03:30:26 Culture of American Science, Discourse & Disagreement

03:36:03 Vaccines, COVID Vaccines, Benefits & Harms

03:47:05 Vaccine Mandates, Money, Public Health Messaging, Civil Liberties

03:54:52 COVID Vaccines, Long-Term Effects; Long COVID, Vaccine Injury, Flu Shots

04:06:47 Do Vaccines Cause Autism?; What Explains Rise in Autism

04:18:33 Autism & NIH; MAHA & Restructuring NIH?

04:25:47 Zero-Cost Support, YouTube, Spotify & Apple Follow & Reviews, Sponsors, YouTube Feedback, Protocols Book, Social Media, Neural Network Newsletter

Disclaimer & Disclosures


Transcript

Since 2012, there's been no increase in American life expectancy.

From 2012 to 2019, literally,

it was, well, not literally, but almost entirely flat life expectancy.

Whereas the European countries had advances in life expectancy during that period.

During the pandemic, life expectancy dropped very sharply in the United States, and only just last year did it come back up to 2019 levels.

In Sweden, the life expectancy dropped in 2020 and then came right back up by 2021, 2022 to the previous trend of increasing life expectancy.

Whatever those investments we're making as a nation in research are, they are not actually translating into meeting the mission of the NIH, which is to advance the health and longevity of the American people.

Because they kept saying, we don't care.

And so it's almost like big segments of the public feel like they caught us in something

as scientists and we won't admit it.

And they're not just pissed off.

They're kind of like done.

I hear it all the time.

And again, this isn't the health and wellness supplement taking, you know,

anti-woke crowd.

This is a big segment of the population that is like, I don't want to hear about it.

I don't care if labs get funded.

I want to know why we were lied to or

why the scientific community can't admit fault.

I just want to land that message for them because in part I'm here for them and get your thoughts on

what you think about, let's start with lockdowns, masks, and vaccines, just to keep it easy.

And

what do you think the scientific community needs to say in light of those to restore trust?

So first, let me just say I don't think I'd be the NIH director unless that were true, unless what you said is true.

Otherwise, I'm not the NIH director.

So I was a very vocal advocate against the lockdowns, against the mask mandates, against the vaccine mandates, and against the sort of anti-scientific

bent of public health throughout the pandemic.

I've also argued that the scientific institutions of this country should come clean about our involvement in very dangerous research that potentially caused the pandemic.

The so-called lab leak hypothesis.

Welcome to the Huberman Lab Podcast, where we discuss science and science-based tools for everyday life.

I'm Andrew Huberman, and I'm a professor of neurobiology and ophthalmology at Stanford School of Medicine.

My guest today is Dr. Jay Bhattacharya.

Dr. Jay Bhattacharya is a medical doctor and a PhD and the director of the National Institutes of Health.

Prior to that, he was a professor of medicine at Stanford University.

And I should mention that he did all of his formal academic training at Stanford: his undergraduate, master's, PhD, and medical school training.

Today, we discuss the past, the present, and the future of publicly funded research in the United States.

The National Institutes of Health is considered throughout the world the crown jewel of basic and medical research, explicitly because the basic and clinical research that it has funded has led to more treatments and cures for disease than any other scientific enterprise.

Basic research is focused on making discoveries without any particular treatment or disease in mind when that work is done.

It is absolutely clear, however, that basic research provides the knowledge base from which all treatments and cures for diseases are eventually made.

Today, Dr. Bhattacharya shares his vision of which aspects of NIH are especially effective and which need revising and improvement.

We discuss how scientific ideas are evaluated for funding and what can be done to create more funding for more ambitious projects leading to treatments and cures.

This is a very timely issue because despite its strengths, the NIH has gained a reputation over the last two decades for favoring safer and less bold work and therefore leading to fewer discoveries.

We also discuss what will be done about the so-called replication crisis.

The replication crisis is, as the name suggests, the inability to replicate certain findings.

Dr. Bhattacharya shares with us new initiatives soon to take place that are designed to verify findings early and to incentivize replication so that the knowledge base built by NIH science is accurate.

As some of you may know, Dr. Bhattacharya stepped into a very public role during the COVID-19 pandemic when he co-authored the so-called Great Barrington Declaration, which argued against lockdowns.

He was also quite vocal against mask mandates, and he addressed vaccine efficacy versus safety, especially for young people.

Those stances, of course, were very controversial, and he explains the logic for his stance on those topics.

That discussion leads into a very direct conversation about vaccines more generally, not just COVID-19 vaccines, but also measles, mumps, rubella vaccines, and the very public and controversial issue taking place right now about vaccines and autism.

We also discuss drug prices and why Americans pay 10 times or more for the same prescription drugs sold in other countries and the relationship of that to public health.

I want to emphasize that the issues we discuss today will impact everybody.

If you're a scientist, they certainly impact you.

If you're a physician, they impact you.

And if you're young, if you're old.

If you're a patient, if you're healthy.

If you're American or if you're outside the United States, they will impact you.

Dr. Bhattacharya was incredibly generous with his time and his answers, directly answering every single question I asked.

Nothing was cut.

As a consequence, it's a lengthy podcast, but I felt it was very important to get into the nuance of these issues so that you, the listener, can get real clarity on where things stand and where they are headed.

As a final point, my graduate student training, my postdoctoral training, and my laboratory, first at the University of California, San Diego and then at Stanford, where it is now, were funded by the NIH.

So you'll notice throughout today's episode that I'm very impassioned by the issues at hand.

At the same time, I strive to include questions that I keep hearing from my followers on social media and from listeners of the Huberman Lab podcast.

Some of those come from ardent supporters of the NIH, and others, as you'll see, are more skeptical or even critical of the NIH.

I strive to represent all those voices during today's conversation.

I certainly have my own opinions and stance on many of those issues, and I do voice some of those throughout today's episode, but again, I try to be thorough and broad-encompassing.

As you'll see, Dr. Bhattacharya cares deeply about basic science and the future of medicine and health in this country and throughout the world.

He is our appointed leader in this enterprise of scientific discovery and public health, and I'm grateful to him for taking the time to share his vision and for his willingness to listen to a wide range of voices, including critical ones, on these literally life-sustaining topics.

Before we begin, I'd like to emphasize that this podcast is separate from my teaching and research roles at Stanford.

It is, however, part of my desire and effort to bring zero-cost to consumer information about science and science-related tools to the general public.

In keeping with that theme, this episode does include sponsors.

And now for my discussion with Dr. Jay Bhattacharya.

Dr. Jay Bhattacharya, welcome.

Thank you for having me, Andrew.

I've been wanting to do this for a very long time.

We are colleagues at Stanford, although now you've formally moved to Washington to be the director of the National Institutes of Health.

But you've played such an essential role in shining a light on certain aspects of public health, mostly that happened during the time of the pandemic, related to lockdowns, vaccines, etc.

We'll talk about that.

But now you are in the chief position of directing research dollars and the initiatives of what is arguably the most important health organization in the entire world, not just in the United States.

So thank you for taking the position.

Thank you for being here.

And the first question I have is:

for those that are not familiar, what is the not just stated mission of the NIH, but what is the really essential mission of the National Institutes of Health?

So let me start with the stated mission, because the stated mission is something entirely worthwhile.

Anyone who listens to it should say, yeah, we should do this.

It is to support research that advances the health and longevity of the American people.

And of course, the research that we do doesn't just advance American health, it advances the health of the entire world.

For

a very long time, the NIH, the National Institutes of Health, has been the premier biomedical organization supporting research that translates into almost every drug that you take.

The NIH has had some role in developing almost every one of them. And in, you know, all the debates over what's the right thing to do to get good sleep, what's the right thing to do for your diet, the NIH has played some role.

And for American Biomedicine, it's the essential institution.

It supports the careers of a very large number of biomedical scientists around the world,

specifically me.

I mean,

I got NIH funding for most of my career.

I was a reviewer for the NIH,

a scientific reviewer for grants.

It's an absolutely essential organization.

Yeah, I agree.

My lab

ran on NIH money primarily.

So thank you, taxpayers, American taxpayers.

And I think for most people, when they hear that word health and what you just said about the mission statement for NIH, there is this assumption that most of the work being done at or funded by NIH is human clinical studies or even mouse studies that are testing a particular drug, a dose response curve, you know, what's the lethal dose of this?

What's the half-life of that?

But as you and I both know, much of what NIH does is fund basic research, research for which we don't have any clear idea, maybe even the foggiest of ideas, that there could be a potential upside for human health.

Things like what controls the pigmentation patterns of the noses of Doberman Pinscher dogs.

I bet you we could find that grant.

So when we, maybe not anymore, but when we step back and we look at basic versus applied, aka clinical, research, what percentage of the NIH budget, which we'll talk about in a moment, is directed toward basic research and what percentage is directed toward clinical studies or the testing of some drug, what we call pre-clinical trials, testing in mice or non-human primates, et cetera?

So there's big fights over exactly what that demarcation line is.

So I'm not going to commit to a single number.

What I will say is that a substantial part of the NIH portfolio appropriately focuses on basic science.

Basic science meaning fundamental biological

facts that can be used in many, many, many

drug studies,

other research, where you don't necessarily know specifically in advance when you're doing it

what the applications are going to be.

The NIH very appropriately funds that work, especially work that's not patentable, right?

Because no drug company has an incentive to do that work, and yet it's vital.

Let me give an example, just to put some meat on the bone, of something that the NIH didn't fund, but that would have been within the mission of the NIH to fund.

Let's just take the research that led to the understanding of

the structure of DNA as a double helix, right?

James Watson, Francis Crick, Rosalind Franklin, all those folks

in England

in the 1950s.

Well, that work is not patentable.

It's hard to imagine someone

trying to patent the double helix structure of DNA, right?

So that means that it's not going to be in the interest of any specific company to support those scientists that discovered that.

And yet it's vital to almost everything we do in biology.

The NIH very appropriately funds that kind of work, work that is not in the interest of any particular company to do.

It solves a market failure, if you think like an economist.

The market failure is that there's no incentive for the private sector to do that kind of basic work, and yet that basic work really advances human health, right, in ways that are sometimes unpredictable.

And so

it's correct and right that the NIH continues to fund that kind of basic science work, as well as the applied work where you take the advances and say, okay, well, here's a drug that might work to treat this disease, right?

That kind of work also is appropriate for the NIH to fund.

There's an interesting

dividing line where the question is, like, what should be left to the private sector to do?

So the private sector tends to fund large-scale clinical trials at sort of the tail end of the development process.

Sometimes they'll fund

earlier clinical trials.

But the

private sector has an incentive to fund those kinds of studies because that gives them exclusivity, patents, things like that.

So, why should the taxpayer pay for that when there's already private actors that are willing to pay for that?

So, there's this interesting dividing line.

You want the NIH work to be translated so that patients can have it.

So, that means the private sector has to be involved to some degree, certainly has to be using the products of the NIH

research.

But that dividing line is fuzzy and controversial.

Same thing with the line between basic and applied.

As I said earlier,

there are huge, like almost religious wars over where that dividing line is.

Are you a basic scientist or are you an applied scientist?

So the exact numbers don't make sense to me, given those almost religious wars over that dividing line.

But the fundamental thing, which is we have to fund basic work, that I believe in pretty strongly.

Well, as a...

basic scientist, I'm not a clinician,

but I worked on clinically relevant issues in my lab related to restoration of vision in blinding diseases like glaucoma, things related to anxiety, et cetera.

I also know that we have some beautiful cases, as you pointed out, of basic research leading to important, I will say, cures to serious diseases.

And there was no thought at the beginning of that basic research that the outcome would be related to human health.

I'll just briefly mention a couple.

I want to ask more questions than I want to speak.

But

my scientific great-grandparents, David Hubel and Torsten Wiesel,

did the early work defining the structure and function of the visual system, first in cats, then in monkeys.

Eventually, it was clear the same was true of their findings in human work.

And early plasticity, changes in the visual system if, say, there was a cataract or a droopy eyelid or a divergent eye, strabismus, or a convergent eye, you know, what we call cross-eyedness, things of that sort.

And we know on the basis of that work that children need corrective surgeries early or else the brain is forever blind to the perfectly fine eyeball if the eyes aren't correctly aligned.

Okay, in other words, the old practice of, oh, you don't want to put kids under anesthesia, it's too risky, et cetera.

The work of Hubel and Wiesel saved the vision of millions and millions of children in the U.S. and abroad.

People with cataracts have those cataracts removed early, and on and on.

And I would also say as a second example that much of the basic work on cell biology that took place in the second half of the last century, you know, where are the mitochondria?

What's in the mitochondria?

Electron microscopy.

Let's talk about all the folds in the mitochondria.

Let's talk about the Golgi, all that basic cellular biology that is the stuff of textbooks was,

as we say, necessary, perhaps not sufficient, but necessary for the development of essentially every existing cancer treatment.

But the cell biologists that did that work weren't thinking about cancer until much later in that work.

So those are just two examples that I would argue NIH had funded a tremendous amount of.

And the reason I'm setting it up this way is because I think nowadays, part of the reason you're here, is that we are potentially looking at a redirecting of a significant amount of the research dollars that taxpayers provide to the NIH, and the NIH provides to labs, away from basic research, which understandably has some people concerned.

That said,

in order to translate things from the lab to the clinic, we also need to think about translational work.

So I just put that out as kind of an offering to elaborate.

Andrew, I have no intention of implementing that,

of shifting the balance between basic and applied.

I think, as I said, basic science work and applied work are both tremendously important parts of the NIH portfolio.

And the question is, to me, what's scientifically important and interesting

in terms of accomplishing the NIH mission, which is, again, advancing the health and longevity of the American people.

Both basic work and applied work can

contribute to that mission.

And in fact, I think any large-scale scientific

institution that seeks to support the mission that the NIH has has to have both in it.

So I don't have any intention of gutting basic science.

I mean, I personally, I do epidemiology, health policies, health economics, statistics.

That's very, very applied.

But I have great admiration for my colleagues like you who do basic science work.

I think it's what advances and fuels the next generation of advances.

So

it's going to stay part of the NIH mission as long as I'm the director.

Thank you.

I and many others will be very relieved to hear that answer.

I think there is this fear that the new administration is going to eliminate basic research somehow and replace it with only applied research and clinical studies.

And that somehow, and this is not my belief, that there's going to be some private interest related to that and it's all going to get co-opted in some kind of cloudy way.

What I'm hearing from you is that is not the direction that NIH is going to go.

In fact, I have not heard anyone inside the administration tell me to do that or suggest that as the appropriate path.

I've just, I mean,

everyone I've spoken to about my vision has said, yes, that makes sense.

Great.

I'd like to take a quick break and acknowledge one of our sponsors, David.

David makes a protein bar unlike any other.

It has 28 grams of protein, only 150 calories, and zero grams of sugar.

That's right, 28 grams of protein, and 75% of its calories come from protein.

This is 50% higher than the next closest protein bar.

David protein bars also taste amazing.

Even the texture is amazing.

My favorite bar is the chocolate chip cookie dough, but then again, I also like the new chocolate peanut butter flavor and the chocolate brownie flavor.

Basically, I like all the flavors a lot.

They're all incredibly delicious.

In fact, the toughest challenge is knowing which ones to eat on which days and how many times per day.

I limit myself to two per day, but I absolutely love them.

With David, I'm able to get 28 grams of protein in the calories of a snack, which makes it easy to hit my protein goals of one gram of protein per pound of body weight per day, and it allows me to do so without ingesting too many calories.

I'll eat a David protein bar most afternoons as a snack, and I always keep one with me when I'm out of the house or traveling.

They're incredibly delicious, and given that they have 28 grams of protein, they're really satisfying for having just 150 calories.

If you'd like to try David, you can go to davidprotein.com slash huberman.

Again, that's davidprotein.com slash huberman.

Today's episode is also brought to us by 8 Sleep.

8 Sleep makes smart mattress covers with cooling, heating, and sleep tracking capacity.

One of the best ways to ensure a great night's sleep is to make sure that the temperature of your sleeping environment is correct.

And that's because in order to fall and stay deeply asleep, your body temperature actually has to drop by about one to three degrees.

And in order to wake up feeling refreshed and energized, your body temperature actually has to increase by about one to three degrees.

8Sleep automatically regulates the temperature of your bed throughout the night according to your unique needs.

8Sleep has just launched their latest model, the Pod 5, and the Pod 5 has several new important features.

One of these new features is called Autopilot.

Autopilot is an AI engine that learns your sleep patterns to adjust the temperature of your sleeping environment across different sleep stages.

It also elevates your head if you're snoring, and it makes other shifts to optimize your sleep.

The base on the Pod 5 also has an integrated speaker that syncs to the Eight Sleep app and can play audio to support relaxation and recovery.

The audio catalog includes several NSDR, non-sleep deep rest, scripts that I worked on with Eight Sleep to record.

If you're not familiar, NSDR involves listening to an audio script that walks you through a deep body relaxation combined with some very simple breathing exercises.

NSDR can help offset some of the negative effects of slight sleep deprivation, and NSDR gets you better at falling back asleep should you wake up in the middle of the night.

It's an extremely powerful tool that anyone can benefit from the first time and every time.

If you'd like to try 8Sleep, go to 8Sleep.com slash Huberman to get up to $350 off the new Pod5.

8Sleep ships to many countries worldwide, including Mexico and the UAE.

Again, that's 8Sleep.com slash Huberman to save up to $350.

I'd like to talk a little bit about

something that most people perhaps are not familiar with in terms of its acronym, but is a very important issue, which is this notion of IDC, indirect costs.

So my lab ran on NIH grants for many years, and my lab and other labs would apply for grants.

If we were fortunate enough to get one of those grants funded, we might receive, let's say, a typical grant would be a million dollars over the course of four years, so $250,000 a year for four years.

But then in addition to that, my home university, Stanford, would get some percentage above that, not a percentage taken out of that million.

I would still get the million to spend on mice, antibodies, graduate student salaries, et cetera.

But some additional amount, a percentage of that 1 million, and I think at Stanford it's roughly 55 percent, so let's say another $550,000 or so, would be given to the university for so-called indirect costs.

This is not something that just happens at Stanford.

This is typical of every single NIH grant that I'm aware of.

And the indirect costs pay in principle for administrative handling of the grant and

the various infrastructure things related to the mouse care, keeping the lights on, having a janitor empty the trash at night, these sorts of things.

IDC, as it's called, has become a hot-button issue for two reasons.

One, as soon as the new administration came in, the Trump administration came in

just this last year, they cut the IDC rate across the board. No matter whether it was, say, 55% at Stanford, 75% at other places, or as low as 30% at some, they said, nope, we're not paying this anymore.

The National Institutes of Health, in other words the taxpayers, will pay up to, but no more than, 15%, one-five percent, above any given grant.

I'd like your thoughts on that, because this weaves into some bigger issues that relate to a lot of the sentiment that, you know, why should taxpayers be paying for these universities to run, especially when universities, some, not all, have large endowments.

Right.

So actually, let me just preface my remarks by saying that there was litigation against that 15%, which essentially said the government couldn't impose that 15%.

So it's been blocked?

Yes.

So

right now, the rates are whatever they were.

They're not the 15% based on that court order.

I can't comment on the litigation, as a result of the fact that I'm now a member of the government; I'm not allowed to do that.

But I do want to talk about the broader issues related to indirect costs.

And I want to put it in a broader context.

So the context is this.

So in the mid-40s,

Vannevar Bush, who was like one of the main science administrators in the United States,

he made

an argument that the federal government should partner with universities in

organizing the scientific infrastructure of the United States.

That the universities were tremendously important parts of the scientific infrastructure, and the federal government had an appropriate role in supporting the universities of the country to do scientific research of interest to the American people.

So the indirect cost structure kind of came out of that commitment.

And frankly, it makes sense to me.

It's appropriate that the federal government have some role in deciding how to support the universities of the country to be organized around research that is in the American interest.

The question is, how much should it be?

How should it be structured?

In what way?

Those are the key policy issues that we're really talking about.

We're not talking about, should there be

some federal support for the universities.

The question is, how?

Let me just step back and talk about

the current structure, the way it works,

because it's really non-intuitive, right?

So, first,

you're a brilliant scientist.

You apply to the NIH.

You get

a grant that gives you $1 million a year.

I'll just make a clean number, right?

So, a million dollars for the next five years.

The federal government is going to give you money to run your lab and do all this kind of stuff.

You work at Stanford.

Stanford has a 55% indirect rate.

So, that's on top of the million dollars a year.

The administrators at Stanford then will get $550,000, right?

So for the million dollars of work, the taxpayers will pay roughly $1.5 million to Stanford a year. Now, as you said correctly, that half a million dollars will go to the fixed costs of doing research, the stuff that's not specific to the lab you're running or the people you have to hire to do the work that you propose: the building, the maintenance, the staff who haul the biohazard waste away, all of that. And it's not just you; there are other folks using the same materials, like radioactive materials.

And so it can support many, many research projects, not just one, right?

So it's funding that kind of work, right?

So again, that's a legitimate use of that money.
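
To make the arithmetic here concrete, here is a minimal sketch in Python using the illustrative figures from this conversation (a $1 million per year grant in direct costs, a 55% negotiated indirect cost rate, and the proposed 15% cap mentioned earlier); actual rates vary by institution and agreement:

```python
# Illustrative sketch only: the indirect cost (IDC) arithmetic described above,
# using the example figures from this conversation.
direct_costs = 1_000_000   # annual direct costs awarded to the lab
idc_rate = 0.55            # institution's negotiated indirect cost rate

indirect_costs = direct_costs * idc_rate       # paid to the university, on top
total_taxpayer_cost = direct_costs + indirect_costs
capped_indirect = direct_costs * 0.15          # under the proposed 15% cap

print(f"Direct to the lab:            ${direct_costs:,.0f}")
print(f"Indirect to the university:   ${indirect_costs:,.0f}")
print(f"Total taxpayer cost per year: ${total_taxpayer_cost:,.0f}")  # ~$1.55M
print(f"Indirect under a 15% cap:     ${capped_indirect:,.0f}")
```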

So, right.

Here's the way that the economics of this work.

In order to get fixed cost support, you have to have brilliant scientists like you that can win NIH grants.

If you don't win NIH grants, Stanford doesn't get the 550.

But in order to attract brilliant scientists, you have to have the infrastructure where the scientists can do their work.

So it's a ratchet, right?

So in order to have the money, the infrastructure support, fixed-cost support, you have to have scientists.

In order to have the scientists, you have to have the infrastructure.

It's a ratchet that essentially makes it so that we concentrate the federal support, the money, in a select few universities.

There are winners and losers.

And so

the scientific infrastructure of the country is concentrated in a relatively few universities, mainly on the coast.

And there are brilliant scientists in other places that are not at those select few universities that have trouble getting NIH grants, even though they're brilliant scientists.

It draws the federal support away

in a structure that essentially says

lots and lots of states, lots and lots of institutions are going to have trouble getting the infrastructure support that they need in order to have the scientists come there.

So

that's the basic economics of

the way indirect costs actually work.

And so the question is, is that the right structure?

There's also questions about, you know, like, so for instance,

your science involves, you're a basic scientist, your science involves lots and lots of fixed costs, right?

Radioactive disposal, all the stuff.

The research I did, you know, epidemiology, health policy, statistics,

it's basically a computer.

Me with a data set and a computer, I can hire some biostatisticians to help me.

We call that a carpet lab.

Yeah.

And so like,

does the university need the same indirect cost support to support my fixed costs as it does yours?

And the answer is obviously no.

And yet that's the structure we currently have.

So there are policy questions to be answered: have we structured the indirect cost support in the right way?

Are we inducing the right incentives?

Can the American taxpayer be sure that we're auditing the use of the indirect costs appropriately?

Those are the policy questions I think that are at issue in the indirect cost fight.

Again, I won't get into the litigation.

I'm not allowed to actually comment on that.

So

I wanted to abstract it to a higher level, because I think the policy question is

not should the federal government support universities to do this kind of research, to have sort of the facilities.

The question is, how should it be distributed across the country?

To what extent should the researchers get it versus the

administrators get it?

And then on the back of that, there are also other research funders that have very different indirect cost recovery rates at the same university.

So, like, you know, I think Gates Foundation is, I don't know the exact number, like 15%,

something on that order, whereas

the NIH is 50%

to the same university.

It looks funny.

The question is, I mean,

sometimes I've heard, well,

the

Gates Foundation puts more of the money into the directs, right?

So maybe they'll charge you for

the rental cost of the building or something.

I don't know exactly.

I'm very familiar with foundation versus NIH money, and it differs by foundation.

But typically a university, and I've been at two, I'm tenured at Stanford, but my lab started off at University of California, San Diego, a public university.

Typically, when foundation money comes in,

the university imposes a minimum of about 8% administrative costs just for handling, like just to do the paperwork, to pay the admins that do the handling.

There's something very important in what you're bringing up.

There are actually two issues.

So I want to backtrack to one issue to make sure that we, that people really understand this, because I realize that some of this might sound a little bit down in the weeds, but it's just so important.

The first thing that I really want to

draw out from earlier in our conversation is that you pointed out that the current model of the NIH is that taxpayer dollars pay for the basic research and for the exploration of whether or not the findings from that basic research will help treat disease. Then, if there's any technology, device, drug, whatever, it is brought to the public through the private sector.

Put differently, the taxpayers fund the research and development, but they don't capture any of the upside from the private companies that make money selling you the SSRI, or selling you the, hopefully someday, novel Alzheimer's treatment.

We don't yet have a satisfactory treatment for Alzheimer's, as we'll get into.

So

the general public who are not basic scientists, in other words, if I take off my hat as a basic scientist and I say, yeah, I'm a taxpayer.

I give a significant amount of my income to the state of California and to the federal government.

I like science.

I certainly would like to live a long, healthy life.

And I hope some of that science helps me do that.

But I'm going to have to buy back the results of what I paid for.

That's where I think a lot of the general public sit.

And I'm not saying they don't like, appreciate, and respect science and scientists.

But to any rational person, you don't need a degree in economics to say, that kind of sucks.

I'm paying. And it's made worse:

if I want to read a paper that was published with the work that I provided for my tax dollars,

I have to buy that from the journal.

By the way, that changes in July.

Okay.

Yeah, that's a huge issue.

That's one of the decisions I made.

So $34.

Listen, I've been grateful to publish in Nature and Science.

You know, these are like Super Bowl rings for scientists.

I'm sure it's part of the reason I got tenure at Stanford.

And I had great fun doing the work and I believe in the work.

It stood the test of time.


But

were I not an employee of Stanford, which pays for the subscriptions to those journals, I would have to buy the work back using my tax dollars that funded the work.

This is crazy.

This is like me giving you the money for the supplies to build a home.

You get to live in the home.

I don't even get to see the home.

I have to purchase a ticket to see the home.

That's how irrational it is from the perspective of somebody who's just not understanding the pipeline

of basic to applied research.

So let's just, I want to return to that briefly because this relates, in my opinion, directly to IDC.

So that's a crazy picture for anyone that doesn't understand how one piece relates to the next, relates to the next.

And now that I'm in public, I'm in media, I'm public facing, what I've come to learn is that the general public is very smart.

Max Delbrück was right.

You know, assume infinite intelligence and zero knowledge, but it's very hard for people to connect more than two or three dots.

They're busy.

So we could talk all day about how this leads to that, leads to this, and the brick-in-the-wall model, and then there's this treatment.

And they're like, I'm paying for this stuff, and I can't even read the paper about it, let alone glean the positive benefits without paying through the nose.

Yeah, so, so a couple of things.

Let me go backwards, because there were many issues you brought up.

So first, the journal thing.

My predecessor, Monica Bertagnolli, who was the NIH director, the National Institutes of Health director, before me, she made a decision, a really great decision, essentially to say if the NIH supports a scientist's work, and then that work leads to a journal publication,

that publication ought to be available free to the public immediately upon publication.

You're not allowed as an NIH-funded scientist to publish in a journal that doesn't have that as a policy.

That policy was due to go into effect in December of this year.

I think it's a policy because I agree with your analysis entirely.

If the American taxpayer pays for the research, why shouldn't the American taxpayer be able to read the research for free?

Because they already paid for it.

Why do they pay a second time on the back end after the research

is published?

And it's not like it's free if you're a university employee.

The university has to purchase a very costly subscription to the journal in order for a faculty member to read the papers.

Now, I'm lucky enough I can access pretty much any paper in the world, but that's because Stanford spends millions and millions of dollars.

And it's made worse.

I forgot the one real stinger in this.

When you publish a paper, you use taxpayer dollars to pay the journal

thousands of dollars to publish it, then they sell it back to the general public.

Nature charges $12,000 for, like, the major journals.

So that's a racket.

Right.

Yeah.

Sorry, I realize I'm talking more than I'm asking questions.

No, no, this is, I mean, like, I'm agreeing with you.

So, like, Monica Bertagnolli, the previous NIH director, made a policy that those papers have to be available to the public for free, starting in December of this year.

I made a decision.

One of the first things I did was I said, why wait till December?

Let's just do it in July.

Great.

Thank you.

And so starting in July, what you just said will no longer be the case.

Americans and everybody will have access to the papers that the Americans already paid for,

if they're NIH funded, for free.

Thank you.

On behalf of, literally, and this isn't a political statement, on behalf of myself and every other American citizen, thank you.

We've been paying for this research forever

and have had to pay to get it back.

I mean, it's not like journal editors make that much money, but the journals make a fortune.

So Macmillan Press, Elsevier, I've done my homework on this.

We're talking billions of dollars in income.

And the marginal cost of publishing now is effectively zero.

It's just you put it online, right?

And there's, there's, I mean, yeah, there's some costs for maintaining the web page and all that,

and there's some editorial staff.

But

the level of investments that the public have been making for the NIH

to then be asked to pay $30, $50, $100 for the papers themselves that are published, I mean, it's just insulting.

And actually, it impedes the progress of science because it makes it so that there's this barrier where regular people can't get access to the things that the scientists are talking about.

So there's like this public transparency aspect of it

where the scientists ought to be engaging with the public about their ideas.

The idea that we are just living in this ivory tower, and only we get to decide what's true and false, and then we impose it on the public.

During the pandemic, we saw the folly of that model.

So it's, I think, a small step forward, but an important one.

I think you're being humble, and I'd like to point out that I think it's a big step forward because it's not just

a token to the public for all their dollars over the last, how old is the NIH?

100 and some years.

100 and some years.

It's really what should have happened a long time ago.

So thank you very much.

And I guess thank you to Monica as well for initiating this, but thanks for accelerating that.

I think when people start to understand

how the NIH works a bit, and they understand this IDC thing, this indirect cost thing, the question comes to mind: you know, how much of the costs of running science at a university, public or private university, should the public be responsible for?

I mean, that's a kind of really interesting question.

Yeah, I mean, I think,

so let me tie it back, as you said, these are all interlinked topics.

Let me tie it back to something else you just said earlier, which is,

okay, so the NIH funds your work.

Your work then results in

maybe not necessarily you, but somebody else who uses your work to like create a product that they patent and they make a lot of money off of.

They

sell it to the public.

At least indirectly or sometimes directly, those patents are funded by American taxpayers.

Well, the NIH also has a big intramural program, that is, scientists who work directly for the NIH.

They make some advances and sometimes those advances result in patents.

And those patents then result in products that are sold

above marginal costs.

And they're paid for, again, by American taxpayers, because the patent protects against entry into those markets.

So the question is,

how much should the American taxpayer be funding for this kind of work?

Should private actors be allowed to make money off of this research the American taxpayer funded?

And the question,

as an economist, I'll say the question is complicated.

And the reason it's complicated is you might say, okay, well, there should not be a patent at all, right?

It shouldn't be patented at all.

There was a law called the Bayh-Dole Act in the mid-80s, I forget the exact date,

that essentially said that NIH-funded work ought to be patentable.

And the reason was that it's the last mile problem.

Like you have some fantastic basic science research that has some like fantastic biomedical results

that there's no way to patent, right? Then there's no commercial interest to develop it into a product that advances health.

The wisdom of the Bayh-Dole Act was to say, well, look, if you allow there to be patents on the last mile,

then now we've created a commercial interest to take the basic science advances and translate them into something that actually benefits people.

Now, the price is going to be higher, at least while the patent is still in place, but then eventually the patent will go away, and then

the thing will be available to the public at large to accelerate the transition from the basic science investments we make to things that actually benefit the public very directly.

In a sense, there's a trade-off there, right?

So you're trading off the fact that for a while there's products funded by the American taxpayers that are at higher prices than it kind of would be in a purely competitive market for the fact that you get more rapid access to the benefits of that investment.

So that's the basic trade-off at play.

And that's why I say it's complicated.

When I joined UCSD and when I joined Stanford, I signed something saying if I make a discovery here that translates to an important device or drug,

that the university is going to capture some of that upside.

And Stanford is a place where there's, let's just say, a history of people going into biotech and neurotech, in part because of the influence of the engineering school.

There's actually a great joke about Stanford that a former president of Stanford told me, which is there's only two kinds of Stanford faculty, Stanford faculty with companies and Stanford faculty with successful companies.

A discussion for another time.

But

it's commonplace for faculty at Stanford to have companies, to split their time between the university and their companies.

But most places, like most of the NIH grants that I reviewed when I was on study section reviewing grants, most of the great work I would hear about at meetings, came from people at universities who were really focused on charting the cell types in the retina, understanding the activity patterns in the brain during sleep and how it relates to neuroplasticity.

Very few of them were involved with companies in a serious way, let alone had their own companies.

So the taxpayers, who make up the majority of our listenership, are giving money to universities, and the universities are spending that money, making discoveries.

I think most of the time that the university and the scientists who do that work are not capturing the upside.

The general public isn't capturing the upside.

They're actually paying for the upside.

So it's a little bit like the journal situation.

That's why I brought that up.

It's a little bit like the journal situation all over again, where

we're, as taxpayers, funding a lot of this and then have to buy it back over and over again.

Okay, so there's one other complication about the United States versus the rest of the world.

So let's just put that aside for just a second.

Let's get back to that.

Before I get there, I want to say in response that,

in fact, when you take a medication or when you have some health advice that actually works,

often NIH research was involved somewhere in the path leading up to it.

And there are huge returns to that, right?

If you have a drug that

treats your disease well,

you know,

your congestive heart failure, and now you have a drug that

allows you to live longer,

in a way that allows you to live more fully,

or if you have diabetes and you slow the progress of the disease so it doesn't result in your kidneys failing,

you're going blind or whatnot.

Those are advances that are really worthwhile.

And even if the price is higher than marginal cost,

it still could be very worthwhile.

So you take metformin, it's a very cheap drug now, but once upon a time it was a patented drug, and you prevent the progress of type 2 diabetes.

That's a big advance, right?

For creation.

So

the value that you get from the NIH-sponsored research then is potentially very, very high in terms of improving your health, even more than the marginal price for the drugs that you end up paying or the products or the advice or whatever it is.

So you're saying it was a good investment for the taxpayers.

Even for the taxpayer, right?

Now, I wanted to put aside

the business about international,

like the U.S. versus the rest of the world.

Now I want to bring that to the forefront.

It is also true that American taxpayers and Americans pay somewhere between 2 and 10 times more for the same product, the same drug product, as people in Europe pay.

Why is that?

There are, again, a lot of complicated reasons for that. But let me just make a very, very simple observation.

There's something in economics called the law of one price.

When there's a market in one country where the price is 10 times more than in another country, what you would expect is somebody to go buy the goods from the other country, from the cheap country, pay the cheap price, then go resell them in the country that has the high price.

And now what would end up happening is that you'd get an equalization of the price.

As long as there's sort of like the capacity to move across

and essentially close this arbitrage opportunity through competition, you'd see those price differences collapse.
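
As a toy numeric sketch of that law-of-one-price logic, with hypothetical prices and a made-up adjustment rate (this is not a model of actual drug markets, only an illustration of how resale would pull two prices together):

```python
# Toy illustration of arbitrage closing a price gap between two markets.
# All numbers are hypothetical and chosen only for illustration.
us_price, eu_price = 300.0, 50.0   # hypothetical prices for the same drug
step = 0.25                        # fraction of the gap closed per round of resale

for _ in range(10):
    gap = us_price - eu_price
    # Resellers buy where the drug is cheap and sell where it is expensive,
    # pushing the high price down and the low price up.
    us_price -= step * gap / 2
    eu_price += step * gap / 2

print(f"After arbitrage: US ~ ${us_price:.2f}, EU ~ ${eu_price:.2f}")
# The two prices converge toward a common level when resale is possible;
# persistent 2-10x gaps imply that this kind of resale is being blocked.
```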

And yet, for decades, Americans pay two to ten times more for the same product, often made in the same manufacturing facility, than Europeans do.

And again, there are complicated reasons why, but it has to do partly with the way that American health insurers interact with drug companies.

Drug companies essentially use Americans as a way to fund their research and development efforts.

That's what they say.

The higher prices that we pay fund the last mile research that the drug companies do to test the new products.

Are you saying the last mile research is the most expensive because it's the late-stage clinical phases?

So right before we go into humans at large,

we want to know if anyone's going to drop dead.

That's the argument that they make, that the drug companies make, is that, well, yes,

the Americans are paying this high price.

It's really worth it to do that.

And then they go to Europe, and Europe says, well, we're not going to pay those high prices. If you're going to market the drug in France, in Belgium, in Germany or wherever, you can do it, but you're going to have to charge us essentially marginal cost.

So if I understand correctly, the United States taxpayer is funding the late stage and most expensive research and development that the drug companies do.

They sell the drugs to us at a premium and they use the difference between the real cost and the sort of allowed cost abroad to make it very cheap overseas.

In other words, we are paying for

the insurance, so to speak, that the drugs that are marketed in Europe and elsewhere are safe.

Yes.

So the taxpayers in the United States are funding the basic research and the clinical late stage research

for the entire world.

Yes, in large part.

I mean, like, Europe does have some institutions

that invest in

basic research.

So it's not entirely zero.

And there are, of course, private foundations that do it.

But through the NIH, that's the single largest investment in basic science research in the world,

and also applied research.

And also,

by paying higher drug prices in the United States relative to the rest of the world, we are funding

the Phase III trials, all the research and development efforts that happen at the tail end of the research pipeline that the drug companies do.

So

essentially, American taxpayers are the piggy bank for the world for almost all of this research pipeline.

Wow.

Okay.

What is being done to bring drug prices down in the United States?

I heard this recently as a press release from President Trump that drug prices in the United States are soon to come down.

Knowing what I know now, based on what you just told us, the immediate question becomes, who's going to pay for that late-stage safety research?

I mean, it's not expensive because it's fun to do expensive research.

It's not expensive because they're still exploring the basic chemistry of these molecules or functioning of the devices.

It's expensive because you have to make sure that people aren't going to drop dead or form some other worse pattern of illness through the use of these drugs.

And that means a lot of human subjects and many, many measures.

It's not just one endpoint.

Like, did it lower blood sugar?

It's like, did it lower blood sugar?

And also, did you blow a gasket in here, you know, some capillary in a critical part of your brain?

So, I mean, this is very expensive work.

It still needs to be done, is what I'm saying.

Who's going to pay for it?

Okay, so

let me just take a couple of cuts at this.

So, first,

like that phase four surveillance that happens after the drug's been marketed.

That's typically the FDA that conducts that work.

The NIH can fund some of it, but it's mostly the FDA that tracks the safety and efficacy of drugs in broader populations after the drug has been approved for use.

So again, American taxpayers are paying for that.

The phase three studies, the large-scale clinical studies to check the effectiveness of a drug and, again, the safety profiles in larger populations,

That's typically the drug companies paying for that, right?

In principle.

But then American taxpayers pay for that with

higher drug costs.

President Trump, in the last couple of weeks, issued an executive order essentially saying we have to make the other countries of the world pay their fair share of this.

He put an executive order in place with various mechanisms.

If you wanted, we can talk about some of those mechanisms

that will reduce the difference in price between what the U.S. pays and what the rest of the world pays.

What will likely happen is that Europe will pay

a slightly higher price, again, funding the research and development efforts to do that last mile of research.

The U.S. will pay a lower price, and so the world will share that R&D burden more equally than we currently do.

Currently, it is American taxpayers on whose shoulders that burden of R&D falls.

What President Trump has said is that

that is not an equilibrium that should hold, that there ought to be policies that allow us to equalize those prices.

And the kind of mechanisms used

include things like

including drug price discussions in trade negotiations,

so, linking it to the tariff policies he's implemented,

allowing re-importation of drugs.

So the idea is that

let's say

I'm in Europe and I'm charging basically nothing for some drug and you're the United States.

Someone can come to me, buy the drugs from Europe or Canada or wherever, bring them to the United States, resell them at a much cheaper price,

and make a little bit of money, but that then would equalize the price. And there are various other mechanisms to try to bring the United States much closer to the prices in the rest of the world.

It's not that the R&D won't happen, it's just that the prices everywhere will be more equal so that the burden of R&D is shared more equally across the developed world.

What's to say that these other countries won't simply say, no, we're not going to absorb more of the cost?

People don't like to see prices go up.

They're comfortable with seeing prices go down for obvious reasons.

And I can think of one example, maybe not the most critically important example in most people's minds.

There's a class of drugs that was released last year, or about last year, called the DORAs.

These are drugs that encourage sleep

by suppressing the wakefulness mechanism as opposed to promoting the sleepiness mechanism in loose terms.

They have much lower abuse potential than a lot of other sleep medications.

And given the essential role of sleep in mental and physical health, you know, and I'm a strong believer that behavioral tools, sunlight, et cetera, are critical, but some people truly struggle with, you know, clinical grade insomnia and it's extremely detrimental.

It's widespread, these drugs are very expensive, $300 a month or more in the United States.

Knowing what I know now, just the idea that some of that $300, let's say, let's make up a number, $200 of those dollars is to cover the research costs so that in Northern Europe it can be available for $50 a month.

I mean,

that borders on upsetting for me.

Yeah, it is upsetting.

And I think I understand why President Trump issued that executive order.

It's upsetting for me, too.

Like, it makes no sense that the American taxpayer should bear the burden of these R&D expenditures when there are lots of rich countries in the world.

Why shouldn't it be more equally distributed?

The question is, like, what will happen?

How the drug companies respond to the executive order and how our allied nations respond to the executive order is open still.

I don't know what it's going to look like.

But what I can say is that the current equilibrium is not sustainable.

The American taxpayers, once they understand what's actually been happening, this is decades long,

they're going to say no.

And so the way that it plays itself out, it's hard to project exactly.

But what I do know is that the government currently is making every effort to make sure that those prices get more equalized.

I think, just take it from the perspective of a European citizen, right?

Someone who is a French citizen, or a Spanish or Portuguese or English citizen, a citizen of Great Britain.

For them, allowing these prices to be more equalized, so that they share the burden, essentially creates an interest for the drug companies to focus on the kinds of health conditions that they have.

Most of the research now, since it's paid for by Americans,

the drug companies are focused on problems that Americans have.

It aligns the interests of the drug companies to think more broadly about what they should be investing in

to include the health problems that Europe has.

Is it true that, I've heard this before, 90% of the psychoactive drugs, like the antidepressants, the SSRIs, and related things in the world are prescribed and consumed in the United States?

Again, I don't know the specific number, but it is pretty substantial.

I think as far as drug profits go, I think it's like two-thirds or three-quarters of all drug profits are had in the United States.

And are most of those for the sort of Adderall and psychotropic type stuff?

No, so the whole thing.

Sorry, I don't know if psychotropic is the correct term.

And I'm going to get beaten up by people if I don't get this right.

Let's just say psychoactive, excuse me, I meant to say psychoactive

drugs like SSRIs, which by the way, in my view of the literature, they're not always bad, but we hear that they are bad in some instances or many instances.

But for the treatment of clinical-grade OCD, the SSRIs have been a tremendous tool.

They haven't cured OCD in every case, but they've been a tremendous tool.

So I want to make sure not to demonize them.


So I don't know the specific numbers for psychoactive drugs,

but

as an industry as a whole, it's the United States that drives drug company profits, that pays for drug company profits.

I think it's like two-thirds or three-quarters.

I forget the exact number.

And so what are these American problems?

So it's obesity? Are they obesity-related issues?

Yes, obesity, depression. I mean, those are a lot of them.

I mean, the United States is, I think it's like Mexico is now above us, but for a long time we were the most obese nation in the world, you know, the most obese big nation in the world.

So like the diseases related to obesity.

Now, admittedly, the European countries have those problems too, but just to a lesser degree.

The drug companies, their research and development efforts naturally go to where they're making the most money.

And so what this will end up doing is it'll align the drug company incentives to focus on the problems that Europeans have at slightly higher levels relative to Americans.

Now, these are all rich countries, so it's not like there are unique diseases that happen in Europe that don't also happen in the U.S.

It's a question of relative levels of investment, right?

And so, you know, I don't think that's necessarily bad. Like an excessive investment in just the things that Americans have at scale doesn't necessarily translate to better health for Americans.

So you can see this: since 2012, there's been no increase in American life expectancy.

From 2012 to 2019, it was, well, not literally, but almost entirely flat life expectancy,

whereas the European countries had advances in life expectancy during that period.

During the pandemic, life expectancy dropped very sharply in the United States, and only just last year did it come back up to 2019 levels.

In Sweden, the life expectancy dropped in 2020 and then came right back up by 2021, 2022 to the previous trend of increasing life expectancy.

Whatever investments we're making as a nation in research are not actually translating into meeting the mission of the NIH, which is to advance the health and longevity of the American people.

We've had some tremendous biomedical advances that have now allowed us to treat diseases that were previously untreatable, which is great, that's a good thing, but as far as the broad health of the American public goes, it hasn't actually addressed the chronic disease crisis that we face or the crisis in longevity that we face.

The next generation of kids, our kids, are likely to live shorter, less healthy lives than we have lived as parents, as American parents.

And that, I think, is an indictment of this entire industry.

We've focused on managing illnesses and treating illnesses, trying to hold on, especially with chronic diseases, and as a result, we're failing at it.

Europe, on the other hand, is seeing expanded life expectancy.

This change of trying to equalize drug prices, aligning our portfolio of NIH investments to meet the health needs of the American people, it's a long-needed corrective.

You asked whether it will succeed.

I hope so.

That's the reason I took this job.

I'd like to take a quick break and acknowledge our sponsor, AG1.

AG1 is a vitamin mineral probiotic drink that also includes prebiotics and adaptogens.

As somebody who's been involved in research science for almost three decades and in health and fitness for equally as long, I'm constantly looking for the best tools to improve my mental health, physical health, and performance.

I discovered AG1 back in 2012, long before I ever had a podcast, and I've been taking it every day since.

I find it improves all aspects of my health, my energy, my focus, and I simply feel much better when I take it.

AG1 uses the highest quality ingredients in the right combinations and they're constantly improving their formulas without increasing the cost.

In fact, AG1 just launched their latest formula upgrade.

This next-gen formula is based on exciting new research on the effects of probiotics on the gut microbiome and it now includes several clinically studied probiotic strains shown to support both digestive health and immune system health, as well as to improve bowel regularity and to reduce bloating.

Whenever I'm asked if I could take just one supplement, what that supplement would be, I always say AG1.

If you'd like to try AG1, you can go to drinkag1.com slash huberman.

For a limited time, AG1 is giving away a free one-month supply of omega-3 fish oil along with a bottle of vitamin D3 plus K2.

As I've highlighted before on this podcast, omega-3 fish oil and vitamin D3K2 have been shown to help with everything from mood and brain health to heart health to healthy hormone status and much more.

Again, that's drinkag1.com slash huberman to get a free one-month supply of omega-3 fish oil plus a bottle of vitamin D3 plus K2 with your subscription.

Today's episode is also brought to us by Levels.

Levels is a program that lets you see how different foods affect your health by giving you real-time feedback on your diet using a continuous glucose monitor.

One of the most important factors in both short and long-term health is your body's ability to manage glucose.

This is something I've discussed in depth on this podcast with experts such as Dr.

Chris Palmer, Dr.

Robert Lustig, and Dr.

Casey Means.

One thing that's abundantly clear is that to maintain energy and focus throughout the day, you want to keep your blood glucose relatively steady without any big spikes or crashes.

I first started using Levels about three years ago as a way to try and understand how different foods impact my blood glucose levels.

Levels has proven to be incredibly informative for helping me determine what food choices I should make and when best to eat relative to things like exercise, sleep, and work.

Indeed, using Levels has helped me shape my entire schedule.

I now have more energy than ever and I sleep better than ever.

And I attribute that largely to understanding how different foods and behaviors impact my blood glucose.

So if you're interested in learning more about Levels and trying a CGM yourself, go to levels.link slash Huberman.

Right now, Levels is offering an additional two free months of membership when signing up.

Again, that's levels.link, spelled, of course, L-I-N-K slash Huberman, to get the additional two free months of membership.

Well, I really appreciate that you explain so clearly what's going on with this drug price differential and who's paying for it.

I was not aware of that.

Perhaps I should have been, but I was not aware of that.

And as we talked about a little bit earlier, most of the general public, even those trained in science, engineering, and mathematics, can connect two or three dots, but they're also very busy.

And the general public, like I said, I believe, are smart, but it has to be spelled out very clearly the way you did for people to really understand.

I'm a health economist, actually.

Right.

Well, I think,

and I mentioned that in my introduction, but I think it is very important for people to understand that you look at things through the lens of science and medicine, but also epidemiology and economics.

You know, there's a saying in laboratories, which is that, you know, just adding more money doesn't improve the science, but it certainly allows you to take bigger risks in service to health and discovery.

And without money, no science gets done.

I mean, no money, no science.

You can't pay graduate students, postdocs, et cetera.

I don't want to spend too much time on the structure of basic laboratories, although that's my leaning.

I could spend hours talking to you about what's going to happen with the universities, et cetera.

We'll come back to that.

But there is one piece that we opened up earlier that I think it's important that we close the hatch on, which is the notion of indirect costs being now, well, it's pending litigation, but leveled to a lower number, 15%, if the administration has its way, or back to the variable rates depending on the university, if this lawsuit has its way.

And here's what I hear a lot, to just put in the simplest of terms.

Stanford, Harvard, UT Austin, big universities, often the private universities, have big endowments.

So money that's been given by donors, some might have come in through tuition.

It's been invested.

They sometimes will spend the interest.

But as you and I both know, no university likes to spend the endowment.

Just like no one really likes to spend their savings, right?

People like to spend the interest they make on their investments from their savings.

Nobody likes to spend their savings, universities included.

The general public tells me all the time, not just on X, but on all platforms and whenever I interact with the public, why should we pay for research at these universities that have these large endowments?

To which I say, now it's true, Stanford has a very large endowment, Harvard as well, UT Austin and other places.

But many universities, fine universities, superb universities throughout the United States do not have extremely large endowments.

And as you pointed out, there's excellent work, important work, I should say, being done at those places.

So to cut the IDC to 15% for everybody, I can see where they'd say, well, why don't they just dip into their savings, the endowment?

But if you're, I'm not going to name names, but if you're at a smaller public university, in particular in certain areas of the country, not on the coasts, unless you're at like a WashU in St. Louis or UT Southwestern, and they've got riches, if I'm honest, they have a lot of money, there isn't a savings account to go into.

The buildings don't look the way they do at these other universities.

You don't have these impressive lawns and thousands of gardeners, which we're so blessed to have at places like Stanford and Caltech that have tons of money.

So to cut the IDC across the board for everybody isn't just sort of trying to restore order to the rich.

I do think it potentially punishes the less wealthy universities and important research.

I say that in service to them and frankly, just being at Stanford, it wouldn't be right for me to be like, oh, yeah, 15%, we'll dip into the savings.

It doesn't quite work that way if you're at a public university.

Well, I think you're hitting on the exact policy question, the right policy question.

The question is, how should the federal investment in fixed cost of research be distributed?

Right now, it's distributed in a very unequal way, where the top universities

have access to that money because they have scientists that can win NIH grants.

It's a funny thing, because if you think of it as

support for the fixed cost of research, you have to have scientists who are good at getting support for the marginal cost of research in order to get the fixed cost of research.

But if they're fixed, why would you do that?

Why wouldn't you have the money be spread more equally across, right?

The endowment money is another more complicated question.

I think that

the endowment monies often are focused on particular projects.

There are restrictions on it.

But you're absolutely right.

It does make a buffer for some of the bigger universities that allow it to survive the vicissitudes of NIH funding or the economy more so than for universities that don't have that endowment.

But from the federal perspective, the key thing is

how should the funds be distributed across the universities?

There's a program called the IDeA program that the NIH, the National Institutes of Health, has.

And I apologize because I don't remember the acronym, but I'll tell you what it does.

It says, for research institutions

in the 25 states that are in the bottom half of the distribution of NIH funding, it gives them a leg up in being able to get access to this federal funding

for the fixed cost of research.

I think that's a great program because what it does, it says, look,

the federal government shouldn't just be funding the top universities.

It doesn't make sense

from the point of view of trying to get

the biggest bang for the buck in scientific knowledge.

And this isn't a narrow thing, it's an important thing.

I think scientific groupthink happens when scientists are all just on the coasts.

And

the only scientists you interact with are scientists who already agree with you.

Geographic dispersion of scientific support allows richer conversations about science, allows different scientific ideas to develop, simply because it's more geographically dispersed.

It combats scientific groupthink.

There are other reasons too, as you said. Are there excellent scientists in universities that aren't, you know, the Stanfords, Harvards, or whatever, who, if you gave them an environment where they could do their work, would make tremendous advances, right?

So I think for lots of reasons, it makes sense to do that.

I don't want to comment on the specific 15% since it's subject to litigation.

I will say that the key policy issue is exactly the thing you said.

How should the money be distributed for fixed costs of research across the universities?

Like one system you could imagine would be where like different universities compete on costs.

So a university that is able to more inexpensively provide a square foot of lab space,

fully supported with radioactive disposal and all other stuff,

maybe the NIH ought to be giving money to that university more than a university that has to provide it at much more expensive rates.

That's not the current system, but you can imagine a system like that.

So I think this fight over this 15%, I think it's a great time now to rethink how the NIH and the federal government support the research infrastructure of the country.

It's for the first time in, I think, 40 years that this thought is now part of the public consciousness.

And I don't think, I've not seen anybody who says that we shouldn't have federal support for universities.

The question is, how should it be structured and to what extent?

Those are, I think, legitimate questions for public policy debate.

Yeah.

Well, before moving on from funding and the relationship between tax dollars and universities, I want to ask one more question.

Then we'll move into issues of public health specifically.

But having been on study section, I realize I never explained what study section is.

Study section is when a group of scientists convene,

it used to be in different cities or virtually, and they review grants.

Typically, the people who review the grants are expert or near expert in a given area, typically three primary reviewers, a bunch of people vote on the grant.

And to make a long story short, whether you get money to do research from the federal government, aka the taxpayers, is voted on by a jury of your peers.

This has distinct advantages, in my opinion, because real experts or close to experts are evaluating your work, and they either have to advocate for it or they actively try and kill it.

From the perspective of a reviewer, you're given 12 grants and you know that only three of those can be funded or so.

And so you literally have to advocate for the one or two that you feel most strongly about and you...

find ways to legitimately make sure that the other grants are

not scored as well.

And you evaluate each one on the basis of its merits.

But you go into those study sections knowing, like, goodness, like this grant, I sure would like to see this one.

And this other work is kind of pedestrian.

It's kind of like all the others.

Now, this is a great model in principle.

However, you talked about groupthink.

It lends itself very well to people who are very good at grant writing, which is important, grantsmanship is important, continuing to get money.

And in particular, new ideas, ideas that are outside the vein of what a researcher has been doing for the last five, ten years, it doesn't promote those, doesn't promote chasing new concepts, new hypotheses.

It tends to make science move very slowly and very incrementally.

And so that's one issue.

However, I realize I'm weaving two questions together, but there's what you described before, that the majority of science that's funded is at these universities on the coasts, so there's a geographic effect, a groupthink effect.

What about the rest of the country and these other places?

The study sections, the people who review the grants, intentionally include people from throughout the country.

It's related, in fact, I think, to the distribution of

the electoral bodies and people who lobby in Congress.

So in other words, there's no study section on a given topic, say Alzheimer's, where you see people only from the coasts; you also see somebody from the Midwest, somebody from the desert Southwest.

There's always been geographic coverage in

the people who decide which grants get funded.

So there's just a historical component here.

But so the question is a very straightforward one, which is:

given that a jury of peers decides what gets funded,

that checks off the box of are they experts?

Yes, more or less.

But it also means that nothing really that new can get funded.

Yeah, I mean, I think you've hit on a real problem, which is, I think,

let me contrast it with Silicon Valley, right?

So in Silicon Valley,

you're an angel investor or a venture capitalist, and you invest in a portfolio of 50 projects,

and 49 of them fail, and the 50th succeeds, it becomes Google or Apple or something.

That's a very successful portfolio.

The process of how we at the NIH review grants embeds in it a certain conservatism, a desire to make sure that every grant that's funded succeeds.

You can have a portfolio where every grant succeeds, but then the portfolio as a whole is not as productive as it ought to be.

Because how do you make every grant succeed?

Well, you just fund incremental work that you know will work.

We call that turning the crank.

There was a professor at the Salk Institute, a superb institution down in San Diego, who said to me, you know, there are two kinds of science.

There's the kind of science where you really test a really bold hypothesis, and most of the time it will be wrong.

But if you hit something, it's apt to be spectacular, maybe even open up an entire field, maybe cure a disease.

This has happened before many times over.

Or there's the science that will get you funded where you turn the crank.

You look at a different protein in a pathway that is marginally interesting, but is predictable in terms of its ability to create papers.

Students need papers, postdocs need papers.

Most of them don't want to go on to be lab heads.

So they just kind of need papers and a PhD.

And you learn something along the way.

And hey, you might stumble on something really interesting.

But it's kind of like stand on one foot, stand on the other, spin around.

Without money, there is no science.

So you could understand why people would be incentivized to do this kind of more incremental, I'll just call it pedestrian, kind of like, really?

They're showing this again.

You go to the meetings and it's like, they've been doing this stuff for like 15 years, but they keep their NIH grants.

And then at the end, they go, we were funded for 30 years.

I've had this, when people brag about having the same grant for 30 years, I just go, oh my goodness, you should be embarrassed.

Now, how about seven different grants over the course of 30 years?

Yeah.

And tell me that one of them led to something interesting.

But don't kid yourself into thinking that having a grant, an R01 that lasted 30 years with five renewals, it's like, I look at a lot of those careers of some of my senior colleagues, and I'm like, you made the interesting discovery in the third year of the first iteration of the grant.

The only thing you've proven is that tenure keeps people around too long.

And this is coming from a tenured professor.

Yeah.

So

I was formerly a tenured professor until recently.

But you gave it up by choice.

We should guess that.

Okay.

So

before the pandemic in 2020, actually for a decade before, I'd been working on measuring the innovativeness of scientific portfolios.

I had a paper that was published on the eve of the pandemic asking how innovative is the NIH portfolio in particular?

And so let me just describe the methodology because it's easy to understand, right?

So take every single paper published in biomedicine in 1940.

Take all the words and word combinations in it and just list them.

Then you do the same thing for all the papers published in 1941

and subtract off all the 1940 words and word combinations.

What you're left with are the unique words that were introduced into the biomedical literature in 1941.

You do this for '42, '43, '44, into 2020, and what you get is a history of biomedicine that comes right out of the words that were actually published.

You can do this because computers, right?

And so

you have an age for every single idea that was introduced in biomedicine that just comes out of this automatic process.

You go back to the papers and ask, how new are the newest ideas in the papers when they were published?

Right, so just to take a concrete example, polymerase chain reaction in 1982, '83 was a new idea.

And so if you were Kary Mullis publishing a paper with the words polymerase chain reaction in 1982, that's a paper that's relying on new ideas.

If the newest idea in your paper in 2020 is polymerase chain reaction, well, that's an idea that's almost 40 years old, 40 plus years old, right?

And now it's in the methods section.

Barely.

Right.

Right.

Because it's just like Xerox, right?

You just barely mention it, right?

So

the point is that

you can use this method to ask how new are the ideas in every single biomedical paper that's ever been published.
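As a rough illustration of the method being described, here is a minimal Python sketch. It is a simplification: it tracks single words rather than words plus word combinations, and the corpus, texts, and function names are invented purely for demonstration.

```python
def first_appearance_years(papers):
    """papers: iterable of (year, text) pairs, e.g. abstracts.
    Returns a dict mapping each token to the first year it appears."""
    first_seen = {}
    for year, text in sorted(papers, key=lambda p: p[0]):
        for token in set(text.lower().split()):
            if token not in first_seen:
                first_seen[token] = year
    return first_seen

def age_of_newest_idea(paper_year, paper_text, first_seen):
    """Age (in years) of the most recently introduced token in a paper."""
    intro_years = [first_seen.get(tok, paper_year)
                   for tok in set(paper_text.lower().split())]
    return paper_year - max(intro_years)  # 0 means a brand-new idea this year

# Toy corpus (texts invented just to show the mechanics)
corpus = [
    (1982, "polymerase chain reaction amplifies dna"),
    (1990, "we used polymerase chain reaction on samples"),
    (2020, "polymerase chain reaction amplifies dna samples"),
]
first_seen = first_appearance_years(corpus)
print(age_of_newest_idea(2020, corpus[2][1], first_seen))  # 30: newest token ("samples") dates to 1990
```

The analysis described in the conversation runs over the full biomedical literature and uses word combinations as well, but the bookkeeping is the same idea: date every term by its first appearance, then score each paper by how recently its newest term entered the literature.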

So we did that.

Me and my colleague Mikko Packalen at the University of Waterloo,

we asked, and then we asked, for NIH-funded papers, has the age of the ideas in the paper shifted over time?

And the answer is yes.

Papers that were published in the 1980s with NIH support tended to work on ideas that were one, two, three years old.

Papers published in the 2010s were working on ideas that were seven, eight years old.

At the same time, in the 1980s,

the age at which you could win a large grant at the NIH, they're called R01s.

I mean, you know all about that, but the reason why these large grants are important is because they are the ticket, first, to getting funding so that you can actually test your ideas and do the experiments you want to do, but also they're the ticket to getting tenure at fancy universities.

In part, I should say, because R01s, these large grants, carry large amounts of IDC, indirect costs.

Let me put it differently.

If a professor comes to a university and does absolutely groundbreaking work, but does it entirely on

foundation money, which carries very little indirect funds to provide to the university, there's a chance they'll get tenure, but

very small chance.

Professors that have R01s stand a much higher probability of getting permanent employment at that university, called tenure.

There are ways to lose tenure, but in principle, it's academic freedom.

Tenure was never really about a job for life.

It was really about the freedom to explore ideas.

It turns out there's some subtleties in that.

There are some subtleties in that.

But I think it's so important for people to understand, so much so that when I heard about this perhaps reduction in IDC to 15%, my first thought was,

whoa, that's a big cut.

My second thought was, who will get tenure and who won't get tenure?

Now it will have to be based on the merits of the work.

Now, there is a correlation, right?

People who do spectacular work tend to get grants.

People who get grants tend to get more money and then you can explore more, et cetera.

And the dirty secret in all the R01 stuff is that everybody knows that the R01s are used to fund the next bout of research, but what you propose in an R01, sorry to break it to everyone, is work that's already completed.

This is the inside secret of every scientist.

Oh, every scientist, because you want to say, look, I can do this.

I mean, I've had R01 support also.

I mean, I guess.

You show them the preliminary data.

This is what I'm going to do for the next five years.

But the dirty secret is, this is what I already did for the past five years.

I get the money.

I do the next thing.

This is the shell game that every scientist learns to play, because otherwise, as you say, you get it in the neck, which is grant speak for you're done.

You have to, you can't take students or postdocs.

You got to fire your technicians.

You close your lab, and you become what's called dead wood.

So there's a game that's being played, and it's not a dirty game, but it's this kind of don't ask, don't tell game.

Everyone knows that people are doing this.

And look, scientists are good people.

I want to be very clear.

They're just trying to survive.

Most scientists, I think.

I believe most scientists are trying to get it right.

I think that local culture can contaminate things.

And this grant, this need to be funded.

I'll grant you most of them.

Okay.

Yeah.

And, you know, I'm here in part as an advocate for the public and in part as an advocate for the science community.

I can't split myself.

So Andrew,

of course, right.

But with lower IDC, who will get tenure?

I mean, who will get tenure?

What's it going to be based on?

Yes.

I mean, that background is really helpful.

But here's a fact.

In the 1980s, the age at which scientists won their first large grant, R01, was mid-30s.

Okay, I got mine.

Let's see.

I started my lab when I was 35.

My first R01 I got when I was 37.

But I started my lab in 2011.

Right.

From 2011 to 2020, you were young for R01 funding.

I was, yeah.

Right.

A typical scientist was in their mid-40s before they got their first R01.

I didn't have a family. I worked 90 hours a week.

Right.

So the point is that young early career scientists take much longer now to be able to get support to test their ideas out than they did in the 1980s.

This is important for innovation because it turns out that, this is another paper that I published before the pandemic.

It turns out that it's early career scientists that are most likely to try out new ideas in their work, in their published work.

Right?

So, in fact, this is depressing for me, a man with gray hair, but it's monotonic.

Like the first year after your PhD is when you're most likely to have newer ideas in your papers.

And then every year after that, for every single year of chronological age, the age of the ideas you tend to work on tends to increase by about a year.

Well, the late Ben Barres, my postdoc advisor and beloved colleague at Stanford, who unfortunately passed away in 2017, he was 60 when he died, roughly.

He used to say, he's like, nobody does anything after they get full professor.

And I was like, that's crazy.

We have Howard Hughes investigators, people that win Nobels.

He goes, all the critical work is done early.

I said, what about you, Ben?

You're there.

He's like, oh, yeah, I'm done.

You know, this was before he knew he was dying.

You know, I mean, this is the dirty secret because when you're young, you're hungry.

Given the space from your previous mentors,

you're going to go for it because you have to go for it.

And

if nothing else comes of today's discussion, already a lot has come of today's discussion, I want to put in a really strong vote for encouraging.

I'm going to catch so much heat for this, but the older labs talk about funding the next generation of science while taking most of the pie for themselves.

I really believe, like, if I could just, I'm not going to beg, but I am going to advocate hard for this.

We need young labs to be able to do that.

This is an open door.

In my Senate testimony before I became the NIH director, this is a major initiative that I said I want.

I mean, I think that early career,

let me put

probably too sharp a point on it, right?

So right now what we do is we take the careers of young scientists and effectively put them at the service of older scientists, more established scientists.

So the early career scientists are essentially doing the work of the older career scientists.

So you have to have postdoc one, postdoc two, postdoc three before you have any chance of getting an assistant professor job where you could test your own ideas out.

Essentially, the labor of young scientists is devoted to the ideas of older scientists in the current system.

That wasn't always true.

And the NIH

has played a role in that.

And it's part of the reason why we have had essentially this sort of

more incremental progress than I would have hoped for.

When I

did my PhD and did my MD in the early 90s and then

into the mid-90s, I envisioned a career where there would be huge advances in science that I would spend my entire career thinking about and chasing, right?

And there have been some huge advances.

But frankly, I have the sense that there have been fewer of them than I would have expected as

the 1990 version of me.

Especially in the biomedical sciences, because I think we see the expansion of AI, we see the expansion of computer science, et cetera.

I could not agree more.

I actually think some of the programs, like the post-bac programs at NIH, I don't want to destroy this program by saying this, but these are where people finish college and they decide to do two years of research before they decide to go to graduate school.

This, in my mind, delays and kind of drains the initiative of a lot of people. Look, there's nothing more beautiful than someone graduating college who's still excited about biomedical science, taking that energy.

Usually they don't have a lot of other commitments yet.

I think we should fund them so they can have a healthy life.

They don't need to have a lavish lifestyle, but a healthy life, and spend as many hours as is reasonable in the lab making discoveries to get through their PhD, do, like it used to be, a short postdoc, start a lab and hit the ground running in their 30s, and get major funding to be able to test new ideas.

It's not just the Silicon Valley model.

It captures everything we know about brain plasticity.

Their brains are still plastic.

They're full of energy.

They're full of dopamine naturally.

And I'm not saying that everyone past 60 is like dead wood, old wood.

There's some amazing work being done, but it's very top-heavy.

And of course, no one wants to give up their lab.

I know people in their 70s and 80s, they don't know what to do.

If they retire, they think they'll, I don't care, get a hobby, let the next generation in.

Actually, there's one good result. One result that made me a little bit comforted was in this paper that I did with Mikko Packalen on age and the trying out of new ideas.

And that is that teams of young scientists, first author relatively young, teaming with a mid-career or later career scientist as a senior author, that combination is most likely to try out newer ideas in their work.

It's like you kind of need the...

So keep the old folks around.

By the way, I'm turning 50 in September, so I'm nearing these numbers.

You're still a young man, I'm not sure.

All right.

Well, I have plenty of, I'm very passionate about this, in part because some of my former graduate students and postdocs are now professors at universities, working extremely hard on extremely interesting questions.

But I know they would be pursuing even bolder questions related to immune system function and autism, related to

visual repair to cure blindness.

I mean, these are not trivial issues that they're trying to pursue.

They deserve and their peers deserve the majority of the taxpayer dollars for discovery because I think that therein lie the discoveries.

And there is this culture in academia of people kind of pinning awards on each other as you go up the ladder.

Some of those awards are nice.

A good friend of mine was just elected, he's a member of the National Academy of Sciences.

He called me.

I said, congratulations.

I was like, this is fantastic.

And he said, feels good, but like, you know,

I want to be in lab.

I want to be in clinic.

I mean, that's what's important.

The titles are, in the end, they're meaningless.

I've seen so many colleagues die.

Like their offices get cleaned out within a week.

They're gone.

And so the discoveries that young scientists make with tax dollars, to me, is the most important and beautiful thing that can happen.

I mean, we'll soon migrate into a discussion about public health, but I'm so relieved to hear, A, that journals are going to be accessible to the public, and B, that you feel this way about young scientists, because I got nothing against the old.

I'm not an ageist, but...

Let's face it,

youth is when discovery happens.

I think let's bring this back to something you

brought up earlier and I haven't yet addressed, which is how we evaluate science at the NIH, right, these study sections.

They're inherently, as you actually alluded to, they're inherently conservative, right?

So just to put a real fine point on it, I think in the 2010s there was a policy that in order to be an active member of a study section, a standing member of these grant review panels, you had to have an active R01, a large grant, an active large grant.

Think about that, right?

So I am a scientist.

I'm really well accomplished in my field.

I have a large grant.

By every measure of scientific success, I'm a

success.

And now I'm sitting judging young scientists pitching their ideas, some of which, if they turn out to be true, maybe undermine my ideas.

I mean, it's really hard to like open your brain and say, oh, okay,

I'm going to support a project that might undermine my entire career.

I mean, everything we know about cognitive bias

supports what you're saying.

There's another aspect too, which is, you know,

letting go of one's own ideas, especially if your funding and your ability to pay your people depends on them, is tricky.

There's another kind of thing, and this is not just inside baseball.

If you're on study section, your grants are evaluated differently.

A lot of people are on study section because you get what's called a special, where people you know, and you know who they are, a small team of people that generally like you and you like them, you even can suggest names for who's going to review your grant.

Being on study section helps you get grants.

You have to get one first in the open water of a regular study section.

But I hope what people are starting to understand is that the system isn't corrupt.

It's just structured in a way that doesn't favor bold innovative change.

And those words bold innovative change are thrown around a lot.

I was part of the National Eye Institute's Audacious Goals Initiative.

We'd get into a room every year, we'd sit around.

How are we going to cure blindness?

What are we going to do about retinitis pigmentosa, macular degeneration?

And then

everyone went back to doing the same work they were doing before.

And so a lot of times these phrases get thrown out there, websites get put up, and like nothing changes.

When I talk to the public about science,

like there's a couple of modes.

Like now, post-pandemic, a lot of it is just purely cynical.

But there's another mode of thinking about scientists that are just sitting around thinking deep thoughts, making big advances.

But in fact, what you're saying, and I agree with, is true, and it's not entirely cynical, but like the fact is that there's a sociology to science, right?

So

there's a sort of like careerism inside science.

And sometimes it can lead to good, right?

You know, your competition with other scientists to like make the next big advance.

But I think in the current way we structure incentives in biomedicine,

very often we discourage that kind of sharp innovation.

We encourage essentially incremental advances so I have a safe scientific career for the rest of my life rather than take a big scientific risk where like I might fail,

but if I succeed,

I cure macular degeneration, I cure type 2 diabetes,

or whatever.

The structure of this, essentially, if you want to put it down as the key problem, is that in biomedicine, academic biomedicine, we are too intolerant to failure.

If you have a big idea that doesn't work, essentially you're out.

That's not true in Silicon Valley.

In Silicon Valley, a failed startup doesn't mean that you can't get another shot at making a successful startup.

Silicon Valley does not punish failure that sharply.

And that is the key to its success.

Whereas in biomedicine, the current version of it we have now, we punish failure way too sharply.

Yeah.

No, I completely agree.

And

I should definitely point out, I never had trouble getting grants.

So I'm not coming to this with any cynicism.

I moved on to podcasting and I still teach and closed my lab out of a joy of what I'm currently doing.

It wasn't that I couldn't fund myself.

I did see excellent grants get killed.

I also saw some excellent work progress.

I definitely agree with this analysis that you did.

Thanks for doing that paper.

I'll take a look at it.

We'll put a link to it.

That work

early in one's career tends to be the really innovative stuff.

There's just something about the younger brain that is more ambitious.

There's higher risk-taking.

And unfortunately, now there's so much pressure to get funding for IDC reasons and to get tenure that oftentimes young investigators will lean toward the more pedestrian, turn-the-crank type of science, get tenure, and then think they're going to go do something bigger, but typically that doesn't happen.

I am very relieved to hear that young investigators, young scientists, new ideas are going to be prioritized, hopefully where it really matters, like brass tacks.

I think early career R01s should be bigger than late career R01s. It should be inversely related to the size of a laboratory. I think smaller universities should get a bigger piece of the pie, I do, if the work is up to par, right? You don't just want to give them money just because.

But I imagine if R01s were, I don't know, 50, 75% bigger for new investigators.

Maybe they weren't four years or five years, maybe they were six years.

You could really take a run at something or multiple things.

And then maybe older investigators who've had grants for a while, you don't want to turn them out to pasture too fast.

You want to pivot them slowly.

I'm kind of joking.

But maybe their R01s should be smaller and they should be more selective about what they're doing because with a lot of grants top heavy in the older generation, they can kind of just spread it around.

Well, that postdoc went back overseas and this, that didn't work out.

I hear a lot about a lot more kind of

quiet exit type failures as opposed to we tried really hard.

We thought this signaling pathway was going to be the thing.

It wasn't.

Close that hatch, pivot quickly to the next thing.

There's a few things we could, I mean, like one of the nice things of being the NIH director, there's lots of smart people who've given me fantastic suggestions, especially for this specific problem, which I think is the key, probably the most important thing I'm going to be dealing with.

That plus the replication crisis, which we talked about, right?

And I'm not sure exactly what

the exact portfolio of things we do will fix this, but we have to support young scientists, early career scientists.

We have to punish failure less.

And we have to change the incentives around so that people want to test the big thing.

the big thing that translates into advances

for some of the most intractable health problems we face.

And if we don't do that, we at the NIH are going to look back and say, well, the NIH portfolio of investments the American taxpayer made has not paid off, just at a macro scale.

I mean,

you can frankly say this for the last, at least since 2012, we have had no increase in life expectancy in the United States.

The NIH portfolio, in that sense, did not pay off.

I think I heard, and I think it was the former director of NIH in a public forum at the end of last year, it was November of last year, I tuned in for that,

said that we've developed more treatments to extend the life of older people or at least to limit their suffering somewhat.

So cerebrovascular disease, cardiovascular disease, things related to dementia, small differences to keep them alive longer, but the real dearth of meaningful treatments sits around younger populations who are dying deaths of despair or whose health is really just in dire condition due to obesity, diabetes, and mental health issues.

So in other words, young people are getting sicker earlier and staying sicker, and older people are

getting sick but holding on to some remnants of health longer.

And most of the treatments are geared toward the older population.

Is that true?

Yeah, that's true.

That's exactly right.

That's a terrible situation because it essentially is not preparing for the future.

Right.

So what we have is a system, is a sick care system.

The advances we've made have allowed people to stay sick longer.

It hasn't translated to longer life, right?

It just, it's, there was a hope, I think, when I first started doing research in 2001

in population aging,

there was this idea of a compression of morbidity.

That is, you live a long life, and the time you spent really sick and disabled was compressed at the very end of your life.

Rather than spending a long time disabled and sick, and you die after having spent like a decade or more very sick.

The idea was that advances in our culture would produce results so that you live a long life and you only spend a few months really sick at the end of your life.

That hasn't panned out.

In fact, we have

very little increase in life expectancy.

And for many, many people, unfortunately, a very long period of time in a state where the quality of life is not that high, not that good.

Dementia, chronic disease leading to, you know, say diabetes leading to all kinds of kidney failure, macular degeneration, you name it, peripheral vascular disease, heart disease.

You end up with a situation where all of these amazing biomedical advances that we've had over the last decades have not translated to actually improving the health and well-being and longevity of the American people.

I think that

the biomedical infrastructure, research infrastructure of the country has to translate over for results

for real people, for the American people.

Otherwise, people can ask us,

why are we doing what we're doing?

It can't just be that we're doing cool things.

I mean, not that we're not doing cool things.

A lot of cool things are getting done.

But if they don't somehow eventually translate over,

again, I don't mean to distinguish basic science work.

I think basic science work is really important.

But eventually it has to translate over, or else people will say, well, why have we made these vast investments?

The key thing is,

if we're not actually improving health as a result of the research we do, then we haven't accomplished our mission.

And

the research agenda of the NIH, as we've talked about, it's like we talked about international relations and drug pricing as determining in part what scientists work on.

We've talked about how politics determines the agenda that scientists work on.

So you talked about HIV, right?

So the political focus on HIV led to the vast investments the NIH has made in HIV with some positive effect, actually a lot of positive effect.

And then also the sociology of the professions, the scientific profession, determining it. These are all complicated things that result in the portfolio.

But if the portfolio ultimately doesn't meet the health needs of the American people, then it's not doing what it's supposed to be doing.

Part of my job is to make sure that it does meet those health needs.

The Make America Healthy Again movement,

that's what it's asking for: that the health institutions of this country actually meet the health needs of the people where they are.

And in large part, we've not successfully done that in this country for decades.

Otherwise, we wouldn't have this major chronic disease crisis we're currently facing.

And so, that's, you know, it's a complicated question.

It's not like, you know, it's not just solved by funding one grant or making specific decisions.

It's about the incentives of the system at large, about creating incentives so that scientists turn their ingenuity toward those health needs rather than just advancing their careers incrementally.

I'd like to take a quick break and acknowledge one of our sponsors, Element.

Element is an electrolyte drink that has everything you need and nothing you don't.

That means the electrolytes, sodium, magnesium, and potassium in the correct amounts, but no sugar.

Proper hydration is critical for optimal brain and body function.

Even a slight degree of dehydration can diminish cognitive and physical performance.

It's also important that you get adequate electrolytes.

The electrolytes, sodium, magnesium, and potassium, are vital for functioning of all the cells in your body, especially your neurons or your nerve cells.

Drinking Element dissolved in water makes it very easy to ensure that you're getting adequate hydration and adequate electrolytes.

To make sure that I'm getting proper amounts of hydration and electrolytes, I dissolve one packet of element in about 16 to 32 ounces of water when I first wake up in the morning, and I drink that basically first thing in the morning.

I'll also drink Element dissolved in water during any kind of physical exercise that I'm doing, especially on hot days when I'm sweating a lot and losing water and electrolytes.

Element has a bunch of great tasting flavors.

I love the raspberry, I love the citrus flavor.

Right now, Element has a limited edition lemonade flavor that is absolutely delicious.

I hate to say that I love one more than all the others, but this lemonade flavor is right up there with my favorite other one, which is raspberry or watermelon.

Again, I can't pick just one flavor.

I love them all.

If you'd like to try Element, you can go to drinklmnt.com slash Huberman, spelled D-R-I-N-K-L-M-N-T dot com slash Huberman, to claim a free Element sample pack with a purchase of any Element drink mix.

Again, that's drinklmnt.com slash Huberman to claim a free sample pack.

This is a perfect segue for a discussion about the replication crisis.

It's a perfect segue because up until now, and still now, the independent investigator model, for those that aren't familiar, is: Andrew Huberman gets hired as an assistant professor who might get tenure at a university.

And then the so-called Huberman lab, before it was a podcast, it was also an actual laboratory space, physical space,

has to come up with a set of ideas that hopefully pan out.

You get funded for, you get tenure, and then you can pursue new ideas.

But it's an independent kind of startup of its own.

My neighbor, two doors down in the hallway, works on something else.

One of the major issues I believe that led to the so-called replication crisis is that it is very difficult, even with the best of intentions, for two laboratories to do the same work in an identical way.

Five minutes longer on a countertop at room temperature might change an antibody that could lead to a different outcome.

I mean, there are so many variables.

The solution to this is collaboration.

Instead of having independent investigators, you have clusters of laboratories, hopefully distributed throughout the country, working on the same problems, collaborating.

There are grants of this sort, but here's the problem, as you point out, it's a sociological issue.

The graduate student in my lab needs a first author paper if they want to eventually get their own lab.

The postdoc in another laboratory doesn't want to be a middle author with 20 other authors.

To continue to flesh out the world of science with scientists, the independent investigator model works.

Those independent laboratories are naturally going to come up with different answers, talk about them at meetings, and maybe there'll be some convergence of ideas.

But

wouldn't it be beautiful if laboratories collaborated to try to solve important problems related to public health and everyone was incentivized through

perhaps not easier, but more plentiful funding to do the research, salaries that these people can live on reasonably while they're graduate students and postdocs, and maybe even laboratories that are more structured around a problem, so it's not called the Huberman Lab.

It's called the Laboratory for Curing Blindness.

And there's another laboratory for curing blindness at WashU and another one in the University of Illinois.

And we all collaborate and we try and cure blindness as opposed to making it all about the principal investigator, the independent investigator.

The rock star model of science

kind of works and it kind of is part of the problem, in my opinion.

I agree with you about collaboration in the following sense.

So science is a collaborative process, but the incentives within science

for individual advance

can often lead to a sort of a structure that

elevates careers without necessarily producing truth.

So

let me flesh this out.

Very tactfully put.

Okay, so there's a colleague of ours at Stanford named John Ioannidis.

He wrote a paper in 2005, absolutely brilliant scientist, I think the most highly cited living scientist in the world, right?

So

he wrote a paper in 2005 with the title, Why Most Published Research Findings Are False.

I mean, when you make a title like that for a scientific paper, it better be convincing.

And in just a few pages, it's an utterly convincing paper.

And it's not because scientists commit fraud.

That's not the reasoning behind it.

Because science is hard.

It's hard in exactly the way you just said, Andrew.

So you publish a result, you believe it to be true, you have some statistically significant result at some level.

You know, we say P equals 0.05, what does that mean? That some percentage of the time the result will be a false positive, even though you believe the result is true and it's been peer-reviewed by your colleagues.

The peer review actually doesn't involve, as you know,

the peer reviewers

taking your data, rerunning your experiments.

It doesn't mean any of that.

They just read your paper, looked for logical flaws, didn't find any, and then they recommend to the editor to be published.

So the peer review is not a guarantee that it's true.

You have some statistical significance threshold that your data meet.

Even with that, some percentage of the time, the published result's going to be false.

Now, if you think of science as a priori hard, any result that you publish is most likely going to be a false positive result.
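This is the core arithmetic behind that claim. A minimal sketch of the expected share of true findings among statistically significant results, using illustrative values for the prior probability that a tested hypothesis is true, the false-positive rate, and statistical power (the function name and the numbers are assumptions for demonstration, not figures from the paper):

```python
def share_of_true_findings(prior_true, alpha=0.05, power=0.8):
    """Expected fraction of statistically significant results that are
    actually true, ignoring bias and multiple-team effects."""
    true_positives = prior_true * power          # true hypotheses detected
    false_positives = (1 - prior_true) * alpha   # false hypotheses that slip through
    return true_positives / (true_positives + false_positives)

# If 1 in 10 tested hypotheses is actually true, roughly a third of
# "significant" findings are still false positives.
print(round(share_of_true_findings(prior_true=0.10), 2))  # 0.64
# If bold hypotheses are true only 1 in 50 times, most positives are false.
print(round(share_of_true_findings(prior_true=0.02), 2))  # 0.25
```

The lower the prior odds that a tested hypothesis is true, and the lower the power, the larger the fraction of published positive results that turn out to be false, even with no fraud at all.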

So-called negative results aren't incentivized.

They're very hard to get a good paper published for showing that something isn't true.

It happens.

I had a paper published in Science, which argued that at least one aspect of a theory was not true.

It was a very prominent theory.

Turns out other aspects of that theory were true.

So sometimes it happens, but no self-respecting graduate student or postdoc who values their life is going to say, hey, I want to go in and try and disprove the hypothesis of one of the more famous people in the field.

In fact, fact, I didn't set out to do that.

It just so happened that's the way it landed.

And no one shows up in graduate school and says, you know, I love these papers.

Let's replicate them.

Yeah.

No.

Right.

So

let's get back to that because that's, you're absolutely right about the incentives.

But before we get to that, to the incentives, and we'll analyze that, let's just put a fine point on the nature of the problem.

The published biomedical literature, something that I've searched basically every day for the last 30 years, 40 years, oh my God, 40 years.

That published biomedical literature, most of the time that I'm reading papers in that literature, the papers I'm reading, even though they say their result is true, it's likely not true.

Look, I had a professor in medical school who once told me, this is one of my favorite professors, he told me, look, half of what we're teaching you is false.

Well, okay, so I'm glad you're pointing this out.

I asked a very prominent neurosurgeon,

perhaps one of the most prominent neurosurgeons in the world.

I said, what percentage?

Someone else asked him, but I was right there.

What percentage of information in medical school textbooks do you think is false?

And he said, half.

And then the second question was, what do you think the implication is for people, for human health?

And he said, incalculable.

Right, exactly.

And that's true of the biomedical literature as well, right?

So the published, peer-reviewed biomedical literature is not reliable, is the bottom line.

So a lot of the things that we think we know, even with some fair degree of certainty, are probably not true.

And

the question is, like, which half?

Well, we don't know the answer to that question.

It's probably a mix.

Parts of papers are probably true, and other parts are not.

Right.

It's not like all the unreliable papers come from one lab.

Well, there are those labs, but they don't last long.

And this is done even with pure goodwill and no fraud at all.

And the reason is a combination of the fact that science is hard and the incentives we created for publication.

Those two together mean that the biomedical scientific literature is not reliable.

I've talked with drug developers who tell me that before they make vast investments in a phase three randomized trial, or even phase one or phase two studies, they conduct independent replication efforts on the basic biomedical literature to see if it actually is true.

Now, those are private replication efforts, so the drug developers know which parts of the literature are true and which are false, but the scientific community at large doesn't know.

We've set up a system of publication that guarantees that much of what we think is true is not true.

That's a major problem for science.

And it's linked to this idea that you have to publish or you're out.

It's linked to this idea that if you fail, if you publish failure, you're out.

It's linked to this sort of reward that we give to scientific volume, like the number of papers we publish, and to scientific influence.

That's what citation counts are.

There's a number, I'm sure you know this, Andrew, I'm explaining it to the folks who are listening, something called an H-index.

So if you go to a site called Google Scholar,

every scientist listening to this, I'm sure, has gone and looked at their Google Scholar page.

They have a little card at the top right that essentially looks like a baseball card to me.

And it has a few statistics.

And if you're not a scientist, you won't necessarily know what those statistics are.

But what they are are things like the H-index: if you have an H-index of 10, that means you have at least 10 papers published in peer-reviewed journals with at least 10 citations each.

But you don't have 11 papers with at least 11 citations each.

So in order to get a high H index, you have to have both a lot of papers and a lot of citations to those papers.

It's a funny number because you can imagine, just to bring back Watson and Crick, imagine Watson and Crick, the only paper they ever published was the structure of DNA.

Good papers.

The double helix.

Let's say it has a million citations.

Not peer-reviewed, but good papers.

That's a fantastic paper.

And was never peer-reviewed.

Right.

But a million citations.

And imagine it was their only paper.

Well, they have one paper with at least one citation, but they don't have two papers with two citations, so their H-index is one.

Or you could have a million papers in the Journal of Irreproducible Results, each with one citation. You have at least one paper with at least one citation, but you don't have two papers with at least two citations, so your H-index is still one.
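
For listeners who want the H-index definition above stated precisely, here is a minimal sketch in Python. The function name and the two toy citation lists are illustrative, not anything referenced in the conversation.

```python
def h_index(citations):
    """Return the H-index: the largest h such that at least h papers
    have at least h citations each (the definition described above)."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # still have `rank` papers with at least `rank` citations
        else:
            break
    return h

# The two hypothetical cases from the conversation:
print(h_index([1_000_000]))      # one hugely cited paper -> H-index of 1
print(h_index([1] * 1_000_000))  # a million one-citation papers -> H-index of 1
```

Both extremes come out the same, which is the speaker's point: the metric only rises when volume and influence rise together.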

Or you could write a lot of reviews because reviews get cited like crazy.

Yes.

Okay.

So now, what you have is an incentive for scientists embedded in Google Scholar that says, look, you have to publish a lot of papers.

You have to have a lot of influence because that's what a citation is.

It's a measure of influence.

You go to scientific meetings in order to

sort of shop your ideas around, right?

And so we reward scientists for the influence that they have, and we reward scientists for the volume of papers they publish.

What we don't reward scientists for is

honesty about their failures.

We don't reward scientists for pro-social behavior, like the sort you suggested, where you collaborate and you share your data openly and honestly.

In fact, we punish scientists for that. Right now, if somebody comes to me and says, Jay, I want to replicate your work, and I've trained myself not to think this way, but it's really hard not to given the structure we're in, I'm going to think of that as a threat.

What if they don't find what I found?

Now I'm a failure, right?

The failure to replicate is seen as a failure of the scientist rather than a reflection of the fact that science is hard and it's difficult to get results that are true, even with the best of will.

And we punish scientists for that.

So we essentially reward scientists for a set of things that creates incentives for the replication crisis to happen.

I see.

Right?

So the solution to the replication crisis is to address those things, measure the pro-social things that scientists could do, recreate the incentives away from simply influence and volume.

I'm not saying you shouldn't reward influence and volume.

I'm saying you should reward a fuller set of things.

It's like in baseball: if you reward a hitter for home runs but you don't also measure strikeouts, you're going to get a lot of strikeouts. You may get a lot of home runs, but that may be bad for the team in total.

So you want a full set of statistics measuring the things you actually want scientists to do in order to solve the problem.

So let's say we had statistics that said, look, do you share data with others in your published research work? And we put that among the baseball-card statistics in Google Scholar, right?

Let's say we ask, is your work subject to replication? Actually, if your work is subject to replication, if you have ideas that are worth looking at by other scientists, that's a success no matter what they find.

How frequently do you publish your false results, or results that turn out to be not true?

Imagine we had those statistics.

We would have a fuller picture of the capabilities of scientists and the outcomes of scientific work, and we would reward the pro-social things that would solve the replication crisis.

And so what you have now is a real problem that has not been addressed.

We've known about this now for decades, but it has not been addressed adequately.

There have been a number of efforts by the NIH over the last couple of decades to try to address it, but they haven't solved the problem.

Well, I feel like the issue that really cracked this open, the reason the general public might have heard of the so-called replication crisis, is this idea that there were some findings in the field of Alzheimer's research that were false, but they were wrong potentially for the wrong reasons.

As a scientist, you learn it's okay to be wrong for the right reasons, meaning your measurement tool was inaccurate, but it was the best you had at the time, and you thought it was accurate; then a better tool comes along, you get a different measurement, new result.

Okay, so you were wrong for the right reasons.

But you're not fudging data, you're not hiding data.

There is this idea that in the field of Alzheimer's research, that somebody might have fudged data, made up data, and that the field kind of went along with it.

That's not my understanding of what happened.

My understanding is that somebody fudged data, and then nobody went back to check the primary data in that paper.

And as a consequence, many years down the line, a number of subsequent findings were nested on a false finding and the whole thing tumbled like a house of cards, more or less.

The process you just described is the replication crisis playing itself out, right?

So you make investments built on a house of sand,

on a foundation of sand,

and you eventually get fancy drugs that are supposed to prevent you from getting the disease that you're trying to prevent. In this case, prevent you from progressing to where you can't remember the names of your kids and you can't live a normal, full life as your memory goes away.

The drugs don't work for those things.

And your question is why?

They're built on the best science going all the way down.

It turns out the best science all the way down is not replicable.

The fraud aspect of it is important, but it's actually not the most important part of it. It's almost just an afterthought, right?

Ask yourself, why have there been so many scandals? One brought down the former Stanford president.

In the NIH, again, just within Alzheimer's, there was a director of neuroscience who apparently had 100 or more papers with Photoshopped, fraudulent images.

So the question is, why have so many prominent scientists been brought down because their work has been shown to be fraudulent?

It's not a moral failure on the part of any individual scientist.

The structure of incentives we've created produced those behaviors.

We created them, is what you're saying.

Yes.

We said you will get advances in your career if you publish a lot of papers and have a lot of influence.

And if you admit that you were wrong about something, your life is over.

Your career is over.

Yes.

I think one of the most beautiful things in science was when Linda Buck, co-recipient of the Nobel Prize with Richard Axel for the discovery of the molecular structure of olfactory receptors, retracted, I think it was three papers from her laboratory.

A postdoc

either was sloppy or fudged data.

She retracted the papers because the papers were wrong.

People told her, this stuff doesn't replicate.

Not only did it not hurt her career, it helped her career.

She was right about the olfaction work that got her the Nobel Prize, but she was willing to admit a mistake.

Someone in her laboratory made a mistake.

Ergo, she needed to retract those papers.

What happened in the case of our former colleague, well, still my colleague since you're not at Stanford anymore, was, let's just put it this way: in every major laboratory that's publishing at a phenomenal rate, inside the field, there is always discussion.

Postdocs talk, graduate students talk constantly, and people know which work is solid and which isn't; something just gets said at meetings, like, no, nobody believes that, and that gets passed around, so then no one follows up on it.

But it's rare that somebody goes and whistleblows the way that those papers got whistleblown.

And then the right thing to do, in my opinion,

is you correct or retract the paper.

If you make a mistake, you correct the mistake.

There are ways to do that.

People publish corrections all the time.

Or you retract the paper if it's wrong.

I think that the system, as you pointed out, has made it feel very dangerous for scientists who are approaching the pinnacle of science, within reach of Nobel Prizes, winning Laskers, winning international awards, as was the case in all these instances, to admit that they were wrong.

Andrew, it's all up and down the system.

Imagine you're a postdoc and you have to get your paper retracted.

You retract your paper.

Yeah, you're essentially starting over or leaving science.

Yeah, you're leaving science.

It's existential.

So

the problem of fraud in science then is a symptom of the broader problem of the replication crisis rather than the main driver of it.

So the right solution then is not to root out the fraud.

The right solution is to change the incentives of science so that we, as scientists, engage in pro-social behavior.

Pro-social in this case, meaning behavior that rewards truth rather than rewards volume and influence alone.

Music to my ears.

How is NIH going to do that?

So we were talking about the innovation crisis.

That's a much more complicated crisis.

This one, actually, I think, is doable within the context of the NIH.

I think you have to do three things.

So, first,

you have to make it a viable career path to engage in replication work in creative ways.

To some extent, there's some of this with like meta-analysis.

Meta-analysis is the science of analyzing

the scientific literature to ask what the scientific literature as a whole says about a particular question.

That's what meta-analysis is.

And so there are people who make careers on meta-analysis.

And so that's, in a sense, a kind of replication work.

Studying studies.

Yes, studying studies, right?

But it's really difficult to make a career out of doing replication work as a general matter within science.

You can't win a large grant at the NIH currently where you say, oh, I'm going to do meta-analysis.

I'm going to do replication work, which means then you're not going to get tenure at a top university.

Because you can't win the large grant that you're required to get in order to earn tenure.

So you're not going to focus on replication work as a young scientist, even if you were very good at it, even if you could think creatively of how to do it at scale.

But it is discovery, right?

Like I think we need to reframe it, right?

Replication is kind of a dirty word.

It shouldn't be.

But years ago, gene arrays first became available, where you could look at gene expression in cells or tissues. Now you do single-cell sequencing and deep sequencing, and this has really evolved.

None of those, dare I say, are experiments.

You're not testing a hypothesis.

They are hypothesis-generating experiments.

You get a bunch of genes, and you go, well, that one's much higher in the cancer cell, and that one's much lower in a non-cancerous cell.

I think I'm going to go do like a knockout of that gene or overexpress that gene.

I mean, that's testing hypotheses.

But there is work that's necessary but not sufficient. And what you're describing in terms of meta-analyses, a.k.a. replication, maybe should be recast, simply because branding matters, it shouldn't, but it does, and incentivized as discovering whether or not discoveries are actually discoveries.

What's more important than that?

Yeah, essentially saying, is the scientific literature true? Assessing the truth of the scientific literature, right? That's what that is.

And that's a real, fundamental, actual advance, right?

Exactly the way you say.

But we don't reward it.

The NIH doesn't reward it.

That will change.

Well, drug companies, it occurs to me, should be incentivized to support it, because it will save them, perhaps extensively; perhaps they won't have to do it as extensively themselves, because if the work that they're getting down the funnel has been checked multiple times by multiple laboratories, they have increased confidence that molecule A, B, or C does what it's claimed to do.

Sure, they'll test it again because they're about to put dollars behind it.

No one wants to put dollars behind something that they aren't absolutely sure is true.

But you'd like the funnel to be narrower.

Yeah, I mean,

and right now they test it.

They do the replication work.

Drug companies, before they make those investments, do the replication work, but it's private so that only they know which results are true and false in the literature.

So if the NIH does it, the knowledge about which results are true becomes public, which makes the entire scientific literature much more reliable as a basis, not just for drug discovery, but also for individual behavior.

Which health behaviors should I, I mean, what food should I eat

to make myself healthier?

Well, that.

No one can agree on that.

I know, but part of the reason why.

You can only agree on what you shouldn't eat.

And even there.

Yeah.

I mean, I shouldn't eat the Skittles with the prolonged.

I heard that processed foods are bad, but the other day I saw, you're not going to believe this, that there's a kind of emerging movement in one sector of the media claiming that the demonization of highly processed foods is a conspiracy theory. That's a perfect example of sort of what we're talking about more generally, which is that language matters.

You can throw something in the trash bin very quickly by calling it a conspiracy theory, until somebody, or a group, makes the effort to bring it out over and over again and determine if it indeed is one.

You can also throw something in the trash bin very quickly if you just call it just a replication study or a so-called negative result.

A negative result says this particular pathway, molecule, mechanism, et cetera, is not doing what we hypothesized it would.

That's a real advance in scientific knowledge.

Absolutely.

Without question.

So the reason why we don't have consensus on what the right thing to eat is, well, first, it's a more complicated question than just science, but part of it is that the scientific literature around it is not replicable.

And those studies are really hard.

Get people to eat the same thing.

You know, people are probably sneaking Skittles.

People lie about what they eat.

By the way, I don't like Skittles.

I was more of an M&M person, just for the reference, whatever.

Let's just leave that aside.

Well, it's clear that the new administration champions healthy, unprocessed foods, but every once in a while you'll see one of them consuming it.

I've cut down on the Skittles since I've joined the MAHA movement, whatever.

Or the M&Ms.

Okay, so

let's go back to what we were talking about.

You asked, how do you fix this, right?

So one is you give large grants to people in the scientific community who do replication work in creative, important ways,

scalable ways.

You farm out to the scientific community the question of what results in the scientific literature really need replication,

the key sort of rate-limiting-step kind of results that we need to know are true to advance science and advance human knowledge about questions of health.

So you award large-scale grants to those scientists.

So now all of a sudden their status is lifted compared to where they were before, which is down in the basement.

Will there soon be an institute or a set of grants set aside specifically for meta-analyses, to help resolve some of this so-called replication crisis?

I'm planning to do that.

Fantastic.

I don't think you'll get any pushback on that.

However, every dollar spent one place is a dollar not spent elsewhere.

Yes, but at the same time, making the entire scientific literature more reliable is money well spent.

That is my belief as well.

Second, you have to have a place where you can publish this work.

Right now, if you send your replication result to the New England Journal of Medicine or Science or Cell or Nature, they will not look at it at all.

The NIH can stand up, and will stand up, a journal where these replication results can be published and made searchable in an easy way, so that when you have some scientific paper, you can ask yourself, is this something that other people have found?

You can go to the scientific journal that we're going to stand up, search it very easily, and ask, where are the other papers that look at the same question and what do they find, and get a summary of it.

This is a little bit like community notes on X.

In a way, but it's the scientific literature producing the community notes.

Right.

These are formal papers with method sections and credentials.

Not just anyone doing this work.

It's part of the community of people that are looking at this question in rigorous ways, right?

So the point is that you'll have kind of a Cochrane collaborative.

Cochrane is this group in the UK that grades scientific evidence on a whole bunch of different health questions, in a way where they elevate rigorous randomized controlled studies as the highest level of evidence, and then N-of-one kind of studies as the lowest level, with a whole bunch in between.

And they'll produce reports that say, well, there's weak evidence to suggest this is true.

There's excellent evidence to suggest this is true.

There's no evidence to suggest one way or the other on this.

Their summaries are very, very nuanced.

You should be able to do that, but with the published replication work as the core of it.

And a scientific journal put out by the NIH, a high-profile journal, will then make publishing replication work a high-prestige scientific activity.

And the journal could also publish negative results.

I tested this idea out, it didn't work; it gets published in the journal, and now it's discoverable.

It's no longer the threshold of you have to have a significant result in order to get your result published.

You just publish the result because it's interesting and true, even if it was a negative result, right?

The journal that the NIH will stand up will plug a hole in the literature where we don't reward failure, where we punish it. Instead, we would reward it: the constructive failures get published and communicated to the public and to the scientific community at large.

We reward replication work.

Fund replication work, create a place where it's publishable and essentially rewarded.

And then third, this is probably the most important,

measure pro-social behavior by scientists.

Make it part of the suite of statistics we use to measure scientific productivity.

Not just publication, not just influence, but also do you share your data?

Is your work, has it been subject to replication?

Do you cooperate with those replication efforts?

Do you yourself engage in replication efforts of others?

And make that part of the suite of statistics we measure for scientists to measure their productivity.

And now all of a sudden replication becomes something you want to participate in, even if you yourself are not doing it.

It fundamentally alters the culture of science so that it rewards truth.

Scientific truth is determined by replication, right?

By independent research teams, rather than influence.

It's hard to think about as scientists.

We think about

scientific truth as, well, you published in the New England Journal, or you published in Science or Cell or Nature or whatever.

That's truth, peer-reviewed papers.

But in fact, the ground truth of science is determined by something really much more humble than that.

It's by replication.

We need to reward the things that produce the ground truth rather than the things that reward just pure influence.

And we don't do that.

It's hard, and it's almost impossible for scientists who have grown up in a community of people that rewards influence as the primary measure of success to think about what it would be like if we were to reward truth.

But I think that if we do these three things,

it'll completely transform the nature of science.

Why would you want to commit any fraud?

You're not going to get a reward for it.

Yeah, you get a published paper, it might even be in a top journal, but no one's going to replicate it.

You won't want to share your data with people because they'll find out you committed some fraud.

All the incentives to commit fraud will just dissipate.

It'll be liberating for scientists to be able to focus on the things we actually care about, which is learning true things about the world, the reasons why we went into science to begin with, rather than this sort of competitive process of trying to climb up a ladder that doesn't necessarily produce any truth.

Amen to all of that.

I feel very blessed that I had a graduate advisor who said something that, at the time, sounded wild.

She unfortunately passed away young as well, but she said, you know, why would any scientist make up data?

It's crazy, right?

You're trying to figure out what's true.

So that essentially means they're willing to lie to other people about their data and to themselves in some sense, right?

The other thing is I'll never forget revising a paper with her.

And I remember thinking like, oh, well, we have this, this.

And she said, whatever we do, we can't give the reviewers what they want.

And I thought, that's a weird statement.

All you ever hear is, you know, you got to give the reviewers what they want.

But it's a very dangerous statement.

And the reason she was saying, don't give the reviewers what they want, is that you have to stay, you know, wholeheartedly committed to what you know and observe to be true.

And you were closest to the data, so you would know.

The other thing that I learned from her, and this relates to what you're saying, is that it not only is okay, but it should be encouraged to publish papers in an array of journals.

You know, I think the pressure to publish in high-profile journals in order to get a really great job is so great that it leads some postdocs, as it did in some of the cases we were talking about earlier, to either make up data or to throw away data that didn't fit in order to please the boss.

Then the boss gets pulled into it.

Then the boss tries to dissociate.

This has been going on for so long.

I feel very blessed that I was encouraged to publish some papers, if they had a chance, in Science and Nature, but other papers in fine journals like the Journal of Neuroscience, where the accuracy and in some sense the volume of data was also encouraged.

You could put a lot more data there.

But now with online publishing in electronic formats, there's no limit to the amount of data you can put.

So you can no longer use the excuse, well, you know, the high-profile journals, you only can have four figures.

So I think everything you're saying is very reassuring and should be reassuring to people. It's music to my ears, frankly, and I think it will be music to the ears of graduate students and postdocs who feel this immense pressure to make a major discovery to make the lab head happy, so that then they can get promoted to getting a job, because most of the job process is powerful PIs picking up the phone and saying, I've got this postdoc, you should hire them. That's a lot of it; it's not all of it, but that's a lot of it. So having an elder that supports you is huge. The other thing that I'm just so relieved to hear is that the system has been around a long time.

And it sounds like from what you described, it worked really well up until about the 90s, mid 80s, 90s, and that at some point something happened, something changed.

And I don't doubt that scientific fraud took place a long time ago, and there wasn't replication then either. But I feel like some of the pure essence of science that you were alluding to earlier, people tackling new issues, isn't really there anymore; it's more survivalist, careerist now than it is about the spirit of discovery, which is really about the spirit of finding out the truth.

So

any reflections on this notion that we're sort of in a more careerist mode of science?

I think part of it is just the sheer funding levels have been so high.

Well, I mean, over the time period you're talking about, there was a doubling of the NIH budget. There were all kinds of increases in the sheer volume of research. I think those are worthwhile investments, right? But we have such high volume and such high levels of funding relative to what we had in the 80s.

Are you saying that we have too many scientists?

No, I'm not saying that.

What I'm saying is that we have to create structures that are appropriate for the volume that we have.

Producing at this volume is a fundamentally different problem than we had in the 80s.

So the structures that we had in the 80s, where we rewarded publication in peer-reviewed journals as the measure of success, might have worked to create incentives for pro-social behavior in the 80s, but they don't work at the volume and the funding levels that we have now.

And so we have to change the structures we have so that, given this volume of investment, people have the right pro-social incentives. We have to change how we structure the incentives in science to recreate the kind of pro-social incentives that we once had.

All right.

Now that we're through the easy stuff, let's get to some of the harder stuff.

I'm just kidding.

You have a tough job, my friend.

Let's talk about some of the recent changes in NIH funding that most people have heard about.

And then we will segue to the barbed wire topics of vaccines and lockdowns.

But before we do that,

I heard, or at least my understanding was, that when the new administration came in, they essentially went through and looked for the letters DEI and for the word transgender

and basically halted or

eliminated some lines of funding to particular labs.

I also saw on social media, and I didn't validate this, that some studies that were focused on transgenic, not transgender, but transgenic mice, which are a very common tool in biomedical research, got flushed in that process, so that maybe it wasn't a clean vetting of transgender versus transgenic. Look, every administration, every person makes mistakes, so I'm not trying to highlight mistakes, but I think this blew up. And it would be great, because you have an opportunity here to reach a lot of people, to just clarify what the rationale was for eliminating grants that had a DEI or transgender component.

And then we can talk about this, what appeared to be a mistake.

Yeah, so, first, let me just talk about the mistake.

First of all, most of this happened before I became NIH director.

It's like early April is when I started. I think much of the...

When did you start?

Uh, April 2nd.

Right, so don't come after Jay for anything that happened prior to that.

It was actually quite frustrating to be on the outside looking in and going, I can't do anything, I can just look.

Yeah, they were waiting for you to step in so you could take a responsibility.

Yeah, anyway, obviously, so they could blame you for something.

I don't mean to say that; I'm still responsible for addressing this going forward.

So, on the transgenic question, I actually don't know specifically about that. That's obviously a mistake. Transgenic mice are a key tool for discovery. If that was cut, I think we...

Maybe it was a wording in a public address from the president. I don't know that they actually eliminated grants simply for studying transgenic mice.

I know that grants focused on,

look, years ago I studied sex differentiation in the brain and body.

So, not all studies where you give a male rodent estrogen or a female rodent testosterone are studies of transgender biology.

Those hormones are active in both sexes.

And there are a lot of grants that you can imagine that got flushed that were studying hormones and sex differences.

My sense is there were some false positives like this, and

I've worked very hard to make sure that those are corrected.

There's an appeals process that I've set up for researchers who were stuck in this with a false positive.

We've restored a whole bunch of grants like this.

Great.

Where it's good science, but it got caught up in this DEI-related refocusing of the NIH portfolio away from politicized ideologies and more toward things that actually advance health.

So let me just address DEI specifically.

First, and this is really important for me.

In my own research, I focused a lot on the health and well-being of vulnerable populations.

A lot of my research is focused on the health of minority populations.

And there are legitimate scientific questions where somebody's race or sex matters pretty fundamentally to the biology.

And so, of course, as the NIH, we have to be able to look at that.

Yeah, some mutations only exist in certain races, or breast cancer and the BRCA mutation are much more common in women. I mean, you can't pretend this stuff doesn't exist.

Correct.

And so like that's part of science.

And the NIH absolutely supports that kind of research still, despite all of the changes in DEI.

So let me give you another example of an NIH success: the research on sickle cell anemia, right?

The strategy, this gene-editing strategy, essentially is to switch the cells so that they express the fetal hemoglobin rather than the adult hemoglobin that has this problem that causes sickling.

That's a fantastic result.

It's going to, I think, result essentially in a cure for sickle cell anemia.

Amazing.

Right?

Amazing.

And

it's a thing that affects African Americans much more frequently than

it does white Americans.

It's just based on the genetics of the thing.

So the NIH

has in the past and will continue in the future to focus on research that advances the health and well-being of minority populations.

It absolutely must, right?

If the mission is to improve the health and longevity of the American people, that includes African Americans, it includes Native Americans, it includes women, it includes minorities, it includes people of all different sexual orientations.

All of that is still part of the portfolio of the NIH.

I want to distinguish that from DEI.

DEI, I think, is something where,

just to give you a sense of this, right, so in 2020, I was quite upset with Stanford, with the way that it was, we can talk about this maybe later in the podcast or a different podcast, but I'd grown disillusioned with the kind of academic freedom that I as a scientist enjoyed at Stanford, despite being a tenured professor.

And so I applied for a job outside of Stanford. I applied to a university, and one of the things they had me fill out was essentially a DEI loyalty oath, right? Where you had to state essentially your commitment to the DEI ideology.

Which was, I mean, maybe we put, as you would say, a finer point on it, because these words, diversity, equity, and inclusion, they're words, but what are...

Maybe

what are they really talking about?

That you're committed to having a lab where you include a certain number of people of different backgrounds?

Or is it just sort of saying, I care about these groups?

The key thing is race essentialism.

That what makes you you is your race.

First and foremost; there may be other things about you that matter, but the most important thing about you is your race, and nothing else matters on the same scale.

That essentially is the heart and soul of the DEI.

So just to give you another, again, a concrete thing, the idea that structural racism is responsible for the health outcomes of the minority populations primarily.

Now, if you think about that, you say, okay, well, you know, it may be true, you may think it's true, you may think it's not true, depending on who you are, what you're listening to.

But all I'll say is that I cannot think of a scientific experiment to do that would, in principle, falsify that idea.

Now, I can think of experiments to do that would say, okay, well, look, minorities

are more likely to live in food deserts.

So the food they get access to easily makes their health worse.

That's a scientific hypothesis.

You can test it.

You can imagine the result being not true or true depending on the data you find, right?

That's a scientific question.

That's not DEI.

That's a scientific question about the health outcomes of minority populations.

You can test scientifically.

Whereas the idea that structural racism is responsible for the health outcomes of the minority populations of the country, that's not actually scientific in the same sense.

You mean there isn't a clear variable to focus on?

There are a lot of variables that could support or refute that idea.

I don't think so.

I think the problem is one of

the demarcation between what is science and not science.

I think it's like a structural.

So Karl Popper had this demarcation. He's a philosopher of science in the 20th century, probably one of the most important philosophers of science in the 20th century.

He had this demarcation criterion that said, look,

is your scientific hypothesis in principle falsifiable?

So the structure of the atom involves certain hypotheses about what you can and can't observe about the momentum and the position of an electron at a particular time. That's falsifiable. If there are falsifiable questions, you can do an experiment that in principle could have falsified the Heisenberg idea, right?

Versus, for instance, Freudian psychology. He made the point that there was in principle no scientific experiment outside the system that could falsify the Freudian idea. Everything was inside the system, so it's not scientific.

Yeah, I see exactly where you're coming from.

I will just push back a little bit in service to the conversation,

which is for descriptive work in science, there's no hypothesis.

Billions of dollars of NIH money went to gene arrays and single-cell sequencing.

Those were hypothesis generating experiments.

Could you falsify those experiments?

Okay, a given cell, let's say a cancer cell and a non-cancer cell from the same tissue, express gene list A and gene list B.

Could you falsify those lists?

Well, you could run it again and get a different list, but at some point you're running statistics on those.

And did you falsify the first one?

Not really.

So anything descriptive, like an electron micrograph, for instance, of a nerve cell, you see lots of stuff.

Wow, the mitochondria are there, the vesicles are there.

Now I get a more powerful microscope.

And I look and I go, oh, what I thought was one thing is actually two things.

Did I falsify it?

In some sense, yes, but I actually just separate it with a better tool.

So a lot of descriptive science upon which like many of the great truths rest, including the double helix, right?

Crystallography to find the double helix structure, it's still a double helix.

Thank goodness, as of this morning, I think it's still a double helix.

No one's proposed different yet.

Most science isn't subject to this idea that you could like just falsify it with a counter hypothesis.

Or I would say a lot of science doesn't quite work that way.

Now, what you're describing is a merge of sociological phenomena and scientific principles.

And so maybe I'll just pose the question a little bit differently in an area that falls squarely in your court.

Up until, I think pretty recently, maybe still now, but I think this was eliminated.

If I had a grant from the NIH and someone was potentially coming to my lab who was an underrepresented minority, I could call up my program officer.

That's not a parole officer, by the way, but they're kind of similar in that they control a lot of your life.

And I could say, hey, listen, I've got a terrific young scientist coming to my lab. I don't even need to say that. I'd say, hey, I've got a scientist who wants to come to my lab who's an underrepresented minority, and they would say, great,

we will now add funding to your grant specifically to fund that person. I mean, they have to be what we call above the bar, they have to be capable of doing the work, et cetera. That has been eliminated. I'm neither advocating for that nor fighting against it, but that's something that lands squarely in your camp.

And it is clearly DEI.

It's not a question of whether or not they're the best person for it.

It's just more taxpayer money specifically to fund a researcher

who would not otherwise have the opportunity, that's key, because they are an underrepresented minority.

Okay, so let me, you have two items there.

Let me address them both.

Per usual, yeah.

So the question about

like hypothesis driven science, right?

So like inductive versus deductive science, the NIH funds both, and it should fund both, right?

So

the idea of a scientific project demonstrating

differences based on race or some other variable that's biologically relevant for some health outcome without necessarily having a hypothesis, that is good science often.

Women get breast cancer more often than men.

So there's nothing wrong with that.

And

there's no policy of the NIH not to fund that now.

In fact, the NIH still funds and will continue to fund exactly that kind of science, right?

Because it's still science.

It's part of the scientific method.

Whereas the claim that structural racism purely causes the health problems of a minority population, I don't believe that is science.

That's more of a psychology question than a biosciences question.

I don't even think it's a psychology question; it's not a scientific psychology question.

I don't think it's science.

I just think it fails the demarcation problem.

Again, that's the falsifiability issue, right?

So there's no problem then with hypothesis-driven science if it's actually focused on health problems that matter, rather than just purely trying to demonstrate, you know, sociological outcomes that are outside the purview of the NIH to try to address, right?

Okay, so let's leave that aside. Before we do that, there's an old saying that I learned from a very famous, excellent scientist, also deceased. He used to say... a great dead scientist. All my advisors are dead, so the joke in my field is you don't want me to work for them. Oh my gosh. Okay, but I didn't have to deal with competing with my mentors, and I did not have to deal with disappointing them or pleasing them.

So there you go.

But I would do anything to have them back.

Truly, they were wonderful people.

I was very blessed.

But there's a saying, which is, a drug is a substance that when injected into an animal or person, produces a scientific paper,

which is basically to say that there are many studies where, when you introduce a variable, you're sure to get a difference.

Like if I want a paper, I give a drug to a person and I measure the amount of rapid eye movement sleep because basically every compound alters rapid eye movement sleep, usually for the worse.

It's kind of wild.

An aspirin will do it.

You know, I don't want to discourage anyone from taking aspirin, but it's so easy to tease out effects when you just introduce a dramatic variable.

So I think that's what you're referring to.

Yeah.

And it's not junk science, but it's not.

It's not great science.

Yeah.

I mean, right, for instance, you don't have a control group. You're like, okay, what's the point of it?

Yeah.

Okay.

So let's just leave that aside.

So some of it is good science, some of it is not good science, some of it is not science.

The DEI shift, in terms of funded science, has been to try to excise from the portfolio things that are purely ideological boondoggles.

Can you do that?

Give me an example of some of these grant titles, since those grants no longer exist.

I don't want to

single anybody out.

So I don't want to do it.

But just sort of of a general flavor.

I mean, I'm having a hard time.

Structural racism is the cause of worse cardiovascular disease in African-American populations. So something like that.

That would be an example.

It's not actually a specific example.

Again, I don't want to point anything.

No, it's a

thematic example.

Exactly.

So that would be an example, right?

Now let's talk about the support for underrepresented minorities and the set-asides.

The position of the administration is that we should follow the civil rights laws of the country.

The civil rights laws of the country say that we shouldn't be discriminating against people based on race.

When you have

an institution like the NIH that essentially says, we are going to consider your race when we decide whether we are going to give you support,

you can understand why, for a large part of the American public, they say, well, why are you doing that?

With their tax dollars.

With their tax dollars, right?

And actually, I should say, like, from the perspective of a minority student,

it's actually quite condescending.

Like, I believe very fundamentally, based on lots and lots of experience with some excellent students I've had, that minority students, if they make the right investments of time and effort, can become excellent scientists.

Sure.

There's no barrier to that in the scientists themselves. The only barriers are the structural problems with the incentives scientists have to make those investments in young careers and so on.

But those are common across race.

I think that if you solve those problems so that we invest in young scientists, not just at the level where they're competing for NIH dollars, but even before, where everyone has access to the kinds of resources that URM scientists used to differentially have, first, you're going to end up with a better set of scientists who are more capable, and you're also going to have minority scientists represented in proportion to the desire people have to become scientists.

There's no field of human endeavor where you say, well, I have to have exactly the right proportion of race.

I mean, if that were true, then what you'd have to have is Indians and Chinese represented all the time. That's almost 3 billion of the 8 billion people on earth. Justice isn't that kind of race-essentialist representation.

Justice is: do people who want to make the investments to become scientists have the capacity and the resources that we, as a society, provide so they can become excellent scientists?

That has to be the case, right?

And by shifting the investment portfolio toward this race-essentialist thing, all that matters is whether you're a URM, an underrepresented minority.

It doesn't matter if you're an excellent scientist.

It doesn't matter so much.

It may matter some, but that's not the key thing.

It doesn't matter if you have a fantastic idea that challenges entire fields.

All that matters is what's your race.

It moves the emphasis in science away from what really matters in science.

Like, what are your ideas?

Are they advancing human knowledge?

Are they translating into health for

large populations?

Are they true?

Are you working in things that advance our knowledge and reliability of the entire scientific literature?

I mean, those are the things that matter, really, for scientists, right?

Why are we caught up then in this idea that somehow we can address, I mean,

I want to be very, very clear.

There are real problems

that minority populations have faced based on the history of the country.

There are real injustices that have happened as a consequence of them.

But we're essentially asking the scientific institutions of the country to somehow solve these deeper problems of essentially cosmic injustice in ways that we don't actually have the capacity to do.

And in some ways that, A, distorts the investments we make and, B, causes large chunks of the American people to distrust us.

Say, look, you're not really focused on the things that really will improve my life.

You're interested in

sort of cosmic justice

rather than actual science.

I think it's the right thing to do to say, let's focus on the mission.

The mission is how do we advance, how do we make investments in research that advance the health and longevity of the American people?

And I don't believe there's any place for this sort of race essentialism in it.

So

you've talked about the DEI

topic slash issue from the perspective of which science does or does not get funded.

Okay, so testing structural racism as a theory, a non-falsifiable theory, is not something that the NIH is going to continue to support.

We are also discussing DEI in terms of which scientists get to be called scientists and which ones get funded.

I suppose the universities decide who they hire and then NIH plays a major role in deciding who gets funded.

So if I understand correctly, as now,

the funding of a given grant can't have anything to do with somebody's race or background.

To which I say, why not just make it blind to who the investigator actually is?

Now, I realize when people write grants, they say, previously we've shown, or my lab does this, but why not just eliminate identity entirely and just say, what is the best proposal on the table?

Let's fund those proposals.

When we talked about earlier, we talked about early career scientists and providing support to them.

That's essentially along the same lines, right?

So we're saying we're going to de-emphasize the track record of scientists in deciding

which scientific projects to fund.

That's essentially what you're saying when you say we're going to fund early career scientists, because the early career scientists tend to have less of a track record.

I agree with that.

I think the key thing is the ideas.

Are the ideas powerful?

Are they promising?

Are they worthwhile in terms of being able to translate to improved health

for populations, right?

So

I don't know if it's possible to get rid of some elements of identity.

Like, you know, you kind of want to make sure that they've had training as a scientist.

Sure.

Well, they could check some boxes.

I'm not here to solve every

aspect of the mechanics, but

Relevant identity, like, your race is not relevant to whether you have excellent scientific ideas.

I've learned from people of all races scientific ideas that have changed how I think about the world.

And it doesn't matter; race was not the key element in deciding whether they had a great idea or not. What really mattered was the idea.

Now, it may be the case that some people, based on their background, will be more likely to have an idea in a particular field than people with a different background, right?

So allowing people of lots of different backgrounds to have their say

matters, right?

But rather than focusing on the race, focus on the idea.

Is the idea important?

Is it likely to translate to improved health for populations?

Well, having sat on a fair number of study sections over the course of like more than 10 years, either as an ad hoc or regular member, I don't recall ever

feeling in the room or anyone explicitly saying we need to fund this grant because it comes from somebody who's an underrepresented minority.

There were grants that came from underrepresented minorities, some of which were terrific grants and some of which didn't get funded because they weren't as terrific.

So are you telling me, and it's been a little while for me, not a long while, but

that there has been a recent pattern.

I'm not trying to seed the question, but are you telling me that some grants were getting funded specifically because of the identity of the person writing the grant?

I always thought grants were funded or not funded on the basis of the science in them.

And

I never saw that to not be the case.

I mean, I think there are markers of that that were increasingly emphasized.

And you already mentioned one, actually, Andrew.

You said you could call up your program officer and say, look, I've got a great postdoc who's a URM, which essentially means an underrepresented minority, and would you like to fund him?

And the answer would be yes.

No, there was a pool of money. It actually ran in the other direction. It was well communicated from NIH that if we had someone who was an underrepresented minority who wanted to join on our grant, there was additional money to be had.

I think there was a website.

It told us this.

And, you know, okay, well, with NIH as it stands now in the new administration, it's clear where their stance on DEI is.

I am relieved to hear that for grants that might have been caught in the filter of this recent change but that did not qualify for what you're describing, there's an appeals process, because I think that shocked some of us in the science community. We're like, oh my goodness, there could be terrific grants that just got the axe. So there's an appeals process to fix that.

I think let me just make

an analogy to something that happened during my career.

I think it was around 2010 that the NIH put out a priority statement that said it was not going to fund health economics research, more or less.

It was in the wake of Obamacare.

There was a whole fight over cost-effectiveness research.

And cost-effectiveness research became this political football, and the NIH said, look, we're not going to fund this kind of work anymore.

Actually impacted my career.

Some of the work I'd done previously had to do with like the relative cost effectiveness of various drugs or whatever.

And so I had to pivot away from that research if I wanted research support from the NIH.

It actually impacted my career quite negatively.

There's priorities.

And the thing is,

I don't want to argue the wisdom whether that was right or wrong to do.

I personally think it was wrong, but let's just leave that aside.

I think the thing is like it's normal for the NIH to put out priorities that reflect the sort of social circumstances that are around us.

Here, I think what we have is a shift to priorities that focus on the quality of the ideas and the science being done rather than the racial identity of the people doing the science.

And I think fundamentally, it's more healthy, both because we'll end up having a set of scientific ideas that are more likely to be replicated and more likely to translate into advances for health.

And also, it's better from a sort of racial and social point of view, because it de-emphasizes things that are mostly irrelevant to the progress of science.

It shouldn't matter if you're

a minority student, a very promising minority student, or if you're a very promising non-minority student for the NIH to support you.

Both should get support.

It shouldn't make any difference whether you're minority or not.

And for the American public at large, there's this sense of unfairness, right? Let's move aside from the NIH and to Harvard University and the case that it lost over admissions.

I'm sure you remember this case, right?

Where

Asian students were found to be at a disadvantage in admissions into Harvard.

They had, actually,

the facts of this case are really shocking, right?

So, what happened was

Asian students who applied to Harvard and non-Asian students would be evaluated by alumni interviews, where the alumni would evaluate their personality.

Asians and

African-American kids both had roughly the same average personality score as evaluated by an interview with alumni.

Then the Harvard admissions officers would also assign similar kinds of scores, based essentially on personality.

But the admissions officers had never met the kids.

And Asian kids routinely had much lower personality scores than African American kids that applied.

That's what led the Supreme Court to say that was an illegal act of discrimination by

Harvard against Asian kids.

I think this focus on race,

I can understand it because we have a history where race

has been the crux of so much pain and suffering and injustice in this country.

We have a legacy of slavery that goes back centuries.

We have laws that discriminated against African Americans, like the Jim Crow laws.

We have this painful legacy of slow progress in civil rights that goes back generations, centuries.

So I understand that that's the backdrop.

I'm not naive about that.

What I'm saying is that using the NIH to solve that problem is an inappropriate use of taxpayer funds.

And actually, I think it makes things worse for those problems than better.

And in particular, and for me as the director of the NIH, this is the most important thing.

It doesn't allow me to meet my mission.

The mission is to

support research that advances the health and longevity of the American people, all of the American people, whether you're minority, whether you're American Indian, no matter who you are, we should be doing research that advances your well-being.

And that means to me,

I shouldn't be using the NIH for the sort of cosmic justice purposes for which the NIH is poorly suited, but instead we should be using the NIH for the purpose for which it is well suited, which is to advance science that advances the health and well-being of the American people.

Yeah, I can see the parallels to something like the space program, where

the space program is incentivized to try and figure out the best way to meet the

specific goals of the program that year and in subsequent years.

And

if the public thought that taxpayer dollars were being diverted to a social justice issue in order to advance the space program in that way, as opposed to getting to Mars or whatever it is, they'd object.

Maybe that's a bad example.

It's so specific to Elon, but you get the idea.

So it's very clear, based on what you've said, that you believe the best way to serve everybody in the country in terms of health and longevity is to make the discoveries, verify those discoveries, and then distribute the devices, therapeutics, and behavioral tools from those discoveries that will allow for the health of all Americans.

And anytime someone says all Americans, it sounds like a political statement.

I realize that.

And to leave aside social justice issues en route to that goal.

That's what I'm hearing.

Yeah, I mean,

except to the extent that the social justice issues can be articulated as clean scientific hypotheses that actually matter,

right?

So like, you know, race differences in biological variables.

It's a fact that matters.

Sure.

Certain mutations run in certain populations, or confer certain advantages, right? So the NIH still supports that kind of research, but again, that's in service of the scientific goal, not in service of some social justice goal that the NIH is ill-suited to achieve.

Yeah, as somebody who worked in vision science for many years, I can tell you glaucoma is much, much more common in darker-skinned populations.

There's certain areas of the world where glaucoma is at an outrageously high percentage of the population.

And it's not lost on people that there's a genetic inheritable component, and some of the treatments might

need to be tailored to those specific populations.

My grandfather went blind from glaucoma. Get your pressures checked, everybody.

Take your drops, get your pressures checked.


I'd like to pivot slightly to some issues related directly to public health.

We have a kind of fork in the road here as to whether or not we focus on issues of public health from the recent past, for which you became best known,

aka COVID and the lockdowns, or whether or not we focus on public health issues that are more relevant now.

I was told by many, many people who are not scientists but care a lot about science

that,

quote, until the scientific community acknowledges two things, they don't want to give another dollar to science.

Those two things are, one, the replication crisis.

We talked about this.

And by the way, I think your plans to deal with that are fantastic.

I love this idea.

And I think many students and postdocs will be excited to be part of the correction process that will evolve science.

And the second one is an admittance of error in our past.

I want to be very clear, I'm not saying this to protect myself. I have plenty of work to do no matter what. But these are not my words; the words were: the scientific community did us wrong.

The lockdowns were unfair to, in particular, working-class populations.

We were told one thing about masks, then told another.

We got a kind of loop-de-loop of foggy speak, political messaging about vaccines and what they would or wouldn't do.

And basically, I hear from a lot of the general population, not just people on the MAGA, Maha, whatever you want to call it side, but also a lot of stated Democrats and people who are truly in the center, that they lost trust in science and scientists, and they will not consider restoring that trust until scientists admit that they made some mistakes.

And it took me a while to hear that message because I'm like, hey, listen, I have friends trying to cure blindness, cure Alzheimer's, use brain machine interface to cure epilepsy and get paralyzed people to walk.

And you're talking to me about something that happened, but I finally had to just stop and listen because they kept saying, we don't care.

And so it's almost like big segments of the public feel like they caught us in something as scientists, and we won't admit it.

And they're not just pissed off.

They're kind of like done.

I hear it all the time.

And again, this isn't just the health-and-wellness, supplement-taking, anti-woke crowd.

This is a big segment of the population that is like, I don't want to hear about it.

I don't care if labs get funded.

I want to know why we were lied to, or why the scientific community can't admit fault.

I just want to land that message for them, because in part I'm here for them, and get your thoughts on

what you think about, let's start with lockdowns, masks, and vaccines, just to keep it easy.

And

what do you think the scientific community needs to say in light of those to restore trust?

So first, let me just say, I don't think I would be the NIH director unless that were true, unless what you said is true. Otherwise, I'm not the NIH director.

So I was a very vocal advocate against the lockdowns, against the mask mandates, against the vaccine mandates, and against the sort of anti-scientific

bent of public health throughout the pandemic.

I've also argued that the scientific institutions of this country should come clean about our involvement in very dangerous research that potentially caused the pandemic.

The so-called lab leak.

Yeah.

Right.

So

let's just stay focused on lockdowns.

And just I want to make the scientific case that they were a tremendous mistake.

And that that was known at the time they were a tremendous mistake.

And let me just focus on one aspect of it.

We'll broaden out to the other lockdown measures, but let me start with just the school closures.

So

what the public at large

now sees is that American kids, especially minority kids,

are

two years or more behind in their schooling.

We decided during the pandemic that children ought to learn to read as five-year-olds or six-year-olds remotely, on Zoom.

We decided that in-person schooling didn't matter anymore.

My kids in California were kept out of school, public school, for

a year and a half.

If they saw the inside of a classroom, it was

with plexiglass, separated from their friends, eating lunch, isolated, alone.

The message to American school kids was essentially, your school doesn't matter.

Your future doesn't matter.

American public health embraced that entirely.

In Sweden, they didn't close schools for kids under 16 at all.

Closing schools was not the policy there; Anders Tegnell, the head of Swedish public health, explicitly made keeping schools open a priority.

In the summer of 2020, the Finns and the Swedes compared their results.

The Finns had closed schools in the spring of 2020 and

the Swedes had not.

And they found that there was no difference in health outcomes for COVID.

The teachers in the Swedish schools actually had no worse outcomes than other workers in the population.

And on the basis of that evidence, and the fact that we know that closing schools, even short interruptions in school, harms the future health and well-being of kids, which we knew for a fact based on a vast literature that existed before the pandemic, many schools around Europe opened up in the fall of 2020.

The scientific evidence was abundant and clear, even by late spring 2020, that the closure of schools was a tremendous mistake for kids.

And yet, when I wrote the Great Barrington Declaration with Sunetra Gupta of Oxford University and Martin Kulldorff of Harvard University in October 2020, I faced

vicious attacks by the scientific community and the medical community for being unscientific about school closures.

Were there threats to your job at Stanford?

Yes.

Like real threats or something?

Or just people saying we're going to take away your job?

Okay, in March of 2021, I was part of a roundtable with Governor DeSantis, a policy roundtable, where he asked me whether there was any evidence that masking children had any impact on the spread of the disease.

And the answer is there's not a single randomized study that looked at kids.

The U.S. was an outlier in recommending that kids as young as two years old get masked.

In Europe, like 12 was the age.

There were no studies.

In response to that, 100 of our colleagues signed a secret petition, essentially, effectively asking the president of the university to silence me.

Were you contacted by the university administration?

No, I found out about the petition from a couple of my friends who leaked it to me.

And then I went to the press and said, look,

you should go ask the President about this.

And then he put out this mealy-mouthed statement about academic freedom, but also, essentially, that it's really important that we obey public health authorities or something.

So like

political.

Like boilerplate speak.

Yeah.

And in 2020, I'd been subject to all kinds of sort of attacks on

me.

I mean, I don't want to relitigate this history, but I'll just say that Stanford failed the academic freedom test.

It didn't hold a scientific conference on COVID with alternative viewpoints,

with viewpoints that were anti-lockdown until 2024 when I organized it.

Even though I asked to have a conference in 2021 and 2022.

But your job security wasn't threatened

in a direct sense.

No, that's not true.

No one came along and said, hey, quiet down or else you're going to lose your job.

So in that sense, you had academic freedom from the start.

That's not true.

So I was asked to stop going to the press in 2020, by the university, by the dean of the medical school.

My academic freedom was pretty directly threatened.

I wrote and published a study on

measuring antibodies in the population, a study that's now

replicated dozens of times around the world.

And

I was essentially ordered to redo that study.

They interfered even before I had sent the paper in for publication.

When I say they, I mean

the administration of the medical school.

I mean, my academic freedom was pretty directly attacked.

And I wrote a piece called "How Stanford Failed the Academic Freedom Test."

You can go

read it if folks would want to read about it.

Again, I don't want to relitigate the past.

No, I ask, listen, I'm not trying to dig for dirt.

I ask because,

well, I never saw a petition cross my email path.

I did see a petition pass through my email about Scott Atlas, who was in our Department of Radiology. He's a physician, as you know, and was appointed to Trump's coronavirus task force.

And then there was a petition basically asking to take away his job.

I don't know what it was, but that passed through.

But I see a lot of petitions pass through my email.

And as everybody knows and the press has pointed out, I'm not great at email and communications.

But I guess the reason I ask is academic freedom means many things.

Like, can you tweet what you want to tweet?

I guess I don't call them tweets anymore.

At the time, could you tweet what you wanted to tweet?

Could you continue to do the science that you were doing?

Did you continue to collect a salary?

It sounds like you were able to keep your job, but there was some pressure to not communicate your ideas.

Is that about right?

Yeah.

I mean, there was a threat to my job as well. I think the issue here is that there's a sense of positive and negative academic freedom.

A negative academic freedom means there's no active attack on me and my capacity to do work.

I think Stanford failed that as well.

Like there was an active attack on me.

So for instance, there was a poster campaign all around campus with my face on it, essentially accusing me of killing people in Florida for advising President DeSantis, Governor DeSantis, that there was no evidence that masking children benefited anybody.

Right?

And essentially it was a threat.

It was like

at the same time I was getting death threats.

from people. The former head of the NIH wrote an email to Tony Fauci four days after we wrote the Great Barrington Declaration calling for a devastating takedown of the premises of the Declaration.

And then that resulted in essentially press propaganda pieces, the New York Times and elsewhere,

essentially

mischaracterizing what the Great Barrington Declaration said, which was to protect older people better and open schools,

let kids go to school.

Essentially mischaracterizing it, in a propagandist way, as saying we wanted to let the virus rip.

And that led to death threats against me.

At the same time, there's this poster campaign all around campus.

I called the campus police, and I told the folks in the department, in the medical school, that this was happening.

And

their response was to send me to a counselor to reduce my online presence.

Stanford absolutely failed during the pandemic.

In 2020,

the former president,

John Hennessy, approached me wanting to organize a discussion, some sort of panel where different perspectives about how to manage the pandemic, the lockdowns and so on, could be had.

And even he couldn't get this organized.

Hennessy couldn't?

No.

Hennessy is one of the most beloved presidents of Stanford.

I have tremendous admiration for him, but the pressure was absolutely enormous.

The fact that he approached me at all was actually a credit to him.

He's one of the few officials at Stanford who approached me during the pandemic to try to allow me to have...

I mean, you know, I might have been right or wrong. It turns out I was right. But the principle is that Stanford should have had those debates in 2020. We had prominent faculty, people like John Ioannidis, Scott Atlas, and others, Mike Levitt, who were opposed to the lockdowns, and yet we couldn't get a hearing.

Yeah, Levitt reached out to me at one point. You know, as I've been criticized for before, with this podcast I mainly focused at that time, when we launched in 2021, on ways to deal with anxiety, circadian rhythms, and sleep, because people were dealing with those issues.

I'm not a virologist, so I couldn't talk about virology or epidemiology.

But I...

Andrew, it wasn't on you to like

put us on a platform.

It was on the Stanford University administration to organize discussions and debates on the most important topics of the day.

And that included in 2020, were school closures the right approach?

I read enough comments and get enough calls and emails to know that when people hear this, their minds will go to questions about what the incentive was, financial or otherwise, for Stanford to not allow you to have these discussions.

Or let's broaden the discussion to any university, for that matter, right? I mean, Stanford's not the only university on the planet where a panel or a discussion about these issues could have been held.

Well, we have a health policy department.

What's the purpose of it, if not to like

impanel the most important debates about health policy of the day?

So what do you think was going on?

I mean,

the vaccine technology was developed at multiple sites, right?

I think Stanford had something to do with the development of the technology.

There were other universities that were involved in the development of the technology as well, right?

And I think in the back of this conversation, I know what's buzzing, so let's just be direct here. Eventually there was a vaccine mandate. If you wanted to keep your job, unless you had a religious or medical reason, you were told you had to take the vaccine. People did what they did. I know colleagues that falsified cards, I know colleagues that got nine vaccines, and everything in between. But there were mandates. So to be clear, you were opposed to the lockdowns?

Yes.

And you were opposed to vaccine mandates. Were you also vocal about that? Because that's, I mean, that's even touchier.

Yes. I was an expert witness in a number of cases on the vaccine mandates, including one that reached the Supreme Court and overturned the OSHA vaccine mandate.

So, yeah, I mean, I was vocally opposed to the vaccine mandates.

I was vocally opposed to the mask mandates.

On the lockdowns, I was vocally opposed to the school closures.

I emphasized the harm that the lockdowns did to the world's poor.

So, in April of 2020, there was a UN report

that calculated that 100 million people would be subject to starvation as a consequence of the economic dislocation caused by the lockdowns.

I was opposed to that.

I think, as for the idea that the lockdowns were the right strategy, well, lockdowns at the scale we had were unique in world history. Lockdowns of such a length and such a scale were no part of any previous pandemic plan or any previous pandemic management experience.

And it was very clear to me, with my background in health policy, that we were going to harm the poor, we were going to harm children, and we were going to harm the working class at scale.

The lockdowns were a luxury of the laptop class.

And that's what I was advocating at the time.

The universities, it wasn't just Stanford, you're right; in fact, there were almost no universities that impaneled these kinds of discussions until well into 2022.

So what do you think happened?

Do you think that there was a fear?

I'm not seeding the question, leading the witness, whatever, but do you think that there was a fear among the academic and science community that if anyone, if it were allowed for people to

speak out or consider different aspects, positive or negative, about lockdowns or vaccine mandates, that somehow their existence would be at risk?

Like that this got to an issue bigger than the lockdowns and bigger than vaccines?

because I do.

I think that this whole issue was really a question of whether or not

we consider scientists experts.

The word expert has become a very touchy thing.

Like, who gets to be called an expert?

Who designates which experts are really the experts?

I mean, it's all, you know, all you have to do is accuse someone of misinformation and suddenly their expert card is taken away, even if they hold a position in a given area that they've...

I've been a tenured faculty member at Stanford School of Medicine

for decades, right?

I've been a full professor with a long scientific background, like history of published papers in some of the top medical journals, the top

statistics journals, the health policy journals, and so on, economics journals.

And that wasn't enough.

The problem is, okay, let me just say one version of this; there are other aspects at play.

Like, for instance, I think people were genuinely scared, scientists were genuinely scared for their own mortality, especially in the early days of the pandemic. And that clouded the way they thought about it.

Especially since there are a lot of older scientists.

I'm not trying to pick on older, but there are a lot of them.

Yeah.

Yeah.

And older people were dying more, right?

Absolutely.

Yeah.

I mean, the most important epidemiological fact about COVID was that there was this very steep age gradient in the mortality profile. Young people, very low mortality risk; older people, much higher mortality risk.

What was the rate of mortality among people 70 to 85 years old, roughly?

5 to 7 percent, somewhere in there.

Okay, so not a trivial number.

No, it's huge, like 1 in 20 to, you know, 1 in 14 or so.

And that was directly from COVID itself, not from some secondary variable.
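To make that conversion explicit, a mortality rate of 5 to 7 percent in that age group corresponds to

\[
\frac{1}{0.05} = 20 \qquad \text{and} \qquad \frac{1}{0.07} \approx 14.3,
\]

that is, roughly one death for every 14 to 20 infections.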

So, okay, so

but I want to leave aside the personal fear, although I do think that played a tremendously important role in the thinking of scientists, especially since scientists as a class tend to be part of the laptop class, right?

People who

have the economic resources to shield themselves

for extended periods of time without any threat to their

livelihood.

That's not true for most of the world, but that's true for scientists.

So let's leave that aside, and let's just focus on

what I think was a core dynamic, right?

So

there's two norms, two ethical norms in science.

And

they competed with each other.

In science, free speech is an absolute must.

If you have an idea that's different from mine, you should be able to express it.

And then

we can test each other's ideas out.

We can maybe devise an experiment to decide between us.

And whatever the experiment says, we'll say, okay, you're right and I'm wrong.

And I'll buy you dinner or something.

That's good.

That's how science advances.

through this process of people talking to each other. Having free speech, the ability to come up with ideas, articulate them, and defend them, is absolutely fundamental to the progress of science.

Public health has a different ethical norm.

Public health has an ethical norm of unanimity of messaging.

This ethical norm has, as its moral basis, that the

communications that public health puts out

are grounded in consensus science.

So, for instance, if I, as an emeritus professor at Stanford and the director of the NIH, go out and say, smoking is good for you.

Well, I've committed an ethical sin, right?

I've done something really deeply wrong because the scientific basis for the idea that smoking is a terrible thing for you, it really harms your health in concrete ways,

that's

I mean, that's just like rock solid in science.

So,

the idea that I, as a person who works in public health, shouldn't go out and say smoking is good for you, that has a good ethical basis rooted in science.

The idea that closing schools is good for you,

the idea that wearing a cloth mask prevents you from getting COVID, the idea that immunity after COVID recovery doesn't exist,

the idea that

the vaccine will protect you from getting and spreading COVID forever, none of that was rooted in science.

And yet, the public health authorities of this country decided that they were going to enforce the same kind of ethical approach, the sort of ethical constrictions

on those topics as they do to smoking.

When you say none of it was rooted in science, are you saying the science was mixed or there was literally no science?

There was literally no science.

So, for instance, the idea that cloth masks prevent you from getting and spreading respiratory diseases.

There were a dozen randomized trials on flu before the pandemic, and there was a Cochrane report looking at the literature on masking and influenza. And it concluded that the evidence was weak at best that this kind of cloth masking in population settings actually prevents the spread of influenza.

I heard a number of people say, like, what's the big deal about wearing a mask? There was also that argument. It's not the same thing as a vaccine; it's just a mask. You could argue about over-inhaling carbon dioxide, not seeing smiles, losing social interaction. Listen, I'm just opening this up for the sake of consideration. So why did the masks become such an issue? Was it because it was a mandate? Is that what it really was?

The mandate mattered. But I'll say there were harms, some of which were recognized, some of which were not. So, for instance, I heard from parents of hearing-impaired kids that the mask wearing impaired the ability of the kids to learn to lip read.

So it seems illogical.

I heard that. But it's also true that if you adopt and embrace

public health messaging that's self-evidently not rooted in science, you're going to undermine the public trust in science and in public health.

I will say, based on these voices that I hear from a lot, that's what they're asking for.

They're asking for the exact message that you're delivering now, which is,

I'll state it differently.

They want to hear the scientific community say, we messed up.

Yeah, and we should.

We should absolutely say that.

So for instance,

you wear a mask while you walk into the restaurant, you sit down to eat, and you take your mask off.

And that

protects people from getting and spreading COVID.

How?

Like everyone could see that.

You didn't need to be a scientist to see that.

That was obviously

ridiculous public health messaging.

It was a weird time.

And let's just say, you asked, could this public health messaging be dangerous?

Well, yeah, imagine someone who's like 80 years old.

They have a lot of chronic conditions.

It's the height of the pandemic, like July 2020 or something, or June 2020.

And they're told, if you wear a cloth mask, you're safe.

They go out in public and take risks that they otherwise would not have taken on the idea that they're safe wearing a cloth mask and they get COVID.

The recommendation, not rooted in science, actually could end up killing people and probably did.

Right.

So none of these things are just, basically, well, it's low cost. It may be low cost to somebody who's not particularly bothered by mask wearing, but it can still nevertheless end up causing harm.

And I think it kind of did.

Why weren't there panels of scientists as opposed to one individual,

Tony Fauci?

By the way, I invited him on the podcast,

did not get a response.

This was a long time ago.

I thought if I was going to hear about it, you know, these issues from anybody at that time, it made sense to contact him.

And he apparently wasn't interested.

We would have, of course, done it remotely.

Why wasn't there a panel?

So my feeling is when you have an individual,

it changes the whole discussion.

But when you have a panel that looks kind of like the United States, and this isn't for diversity reasons per se, this is about the fact that a collection of smart people is always better than one person, in my opinion.

And they could come to some sort of consensus or maybe even disagree publicly.

I think panels would have been better.

Well, I think

let's leave aside Tony Fauci.

Because I think he was

a very important figure and of course was basically a major spokesperson for the public health point of view.

But there was essentially a

groupthink at scale.

It was impossible to organize a panel

with the kind of diversity of opinion that was needed.

There were many more. I know this from the set of people who signed the Great Barrington Declaration: tens of thousands of scientists and doctors who disagreed, but they were afraid to stick their heads up for fear of getting them chopped off.

It's not an accident that Stanford didn't allow a scientific panel with my kind of point of view about the effects of lockdowns until 2024.

The idea was that we needed to have unanimity of messaging.

And if you had prominent professors at Stanford, Harvard, Oxford, or elsewhere saying that the lockdowns were a bad idea, which they were, right,

that then you were going to undermine public compliance with the orders that were being put out.

You know, just

a quick diversion, how do I know

that the lockdowns are a bad idea?

If you ask which country had the lowest all-cause excess deaths in all of Europe, all-cause excess deaths meaning deaths from all causes, excess meaning, given the age structure of the population, how many people you would have expected to die even if there wasn't a pandemic versus how many actually died.

Which country in Europe had the lowest all-cause excess deaths?

It turns out it's Sweden, which didn't follow the lockdown.
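In formula terms, for a given period and population, that measure is simply

\[
\text{Excess deaths} = \text{Observed deaths} - \text{Expected deaths},
\]

where the expected number is a baseline projected from prior years, adjusted for the age structure of the population.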

So the lockdowns were not a

necessary policy in order to protect human life.

And they weren't sufficient to protect human life either, right?

So you had

sharply locked down countries like Peru that had tremendous deaths.

So the lockdowns were neither necessary nor sufficient, and they caused collateral harm at scale to the poor, to the working class, to children, harm that we're still paying for, that people are still suffering from, the long tail of the lockdown.

For years in the United States, from 2020, 2021, 2022, the deaths from overdoses of drugs were like

100,000 people died a year.

This past year, it was 80,000.

We declared success.

We went down 20,000.

Before the lockdowns, it was maybe 20,000 deaths a year.

And that was a catastrophic failure.

So

the problem here is that the scientific community embraced an ethical norm about unanimity of messaging and then enforced it on fellow scientists and then cooperated with the Biden administration to put in place a censorship regime that made it impossible even for legitimate conversations to happen.

So after the COVID vaccines came out, there was a community of people who were legitimately vaccine injured.

The Biden administration went to Facebook and told them, essentially ordered them, that you need to shut down the patient groups that are discussing the vaccine injuries.

Or else what?

The threat was usually implied, or else essentially destruction of your company.

President Biden goes on national TV, says, and he has a complete right to do this.

He has the right to do this as president, to say, look,

Mark Zuckerberg is killing people.

He did that.

He actually did that.

And then, quietly behind the scenes, they pressured Facebook to censor

patient groups that were discussing their vaccine injuries, even in private groups.

And no one was putting their stuff out on X then called Twitter?

X did the same thing, right?

So I joined Twitter in August of 2021.

My first thing I posted was the Great Barrington Declaration.

The day I joined Twitter, I was put on a blacklist to suppress...

the spread of my ideas on Twitter.

And almost certainly...

That's confirmed. I mean, I'm not questioning the validity of what you're saying.

I saw it with my own eyes.

But that was confirmed by the so-called Twitter files.

Yeah.

So the Twitter, when Elon bought Twitter,

he opened up the

databases, invited me to go see them at the Twitter headquarters.

I saw with my own eyes, I saw my face and it said the word blacklist on it.

Which meant what, that when you would post, no one would see your post?

Well, it was a shadow ban type.

It was a trends blacklist.

So yeah, it was a shadow ban.

It would make sure, I didn't know I was on this.

It just made sure that only my followers, strict followers, would see the post and nobody else had any chance of seeing it.

I mean, the whole reason I joined Twitter in the first place was to

engage with people that didn't know my ideas.

And the blacklist made sure that my ideas were not seen by those people.

So this is part of the reason why I think podcasts like the Joe Rogan podcasts

became such a lightning rod for this discussion.

What's interesting is that,

remember they used to put a little tag on podcasts.

You know, it would say, this may contain misinformation.

What they forgot, whoever was imposing that, because I don't think it was

from the podcast houses themselves, but whoever directed that.

The federal government.

Yeah, forgot about the 90s when there were explicit lyrics in albums.

And they would say, Warning contains explicit lyrics.

And everyone goes and clicks on those or listens to those.

They sort of forgot human psychology.

That's the beauty of the American people.

We are.

We like rebels.

Yeah, exactly.

It's so pinheaded, it's almost unimaginable.

Like we basically, the public health authorities of the country

and the government around it decided that it knew best,

that

it was going to control the conversations

of the public at large, essentially propagandize them.

The real question is why?

And

people are probably thinking, ask them about big pharma.

Ask them about the amount of money that Tony Fauci made. You hear these theories, right? But most biomedical scientists running labs at universities aren't going to make a dime from pharma. If you saw their salaries, most people would be unimpressed by those salaries, and if you look at the salaries relative to their hours worked, you would be even less impressed. So sure, some people stood to get really rich, but I can't imagine that's the reason.

So why?

So the question becomes why?

Why all this suppression?

Why all this groupthink?

What were people so darn afraid of?

I think, let's put ourselves back in 2020, 2021.

I think that while, again, I'm not naive, I do think monetary factors played a tremendously important role.

I don't think that's a good thing.

Who was making money besides that?

I don't think they were the central reason.

I agree with you about that.

I think the central reason is that the scientists that supported the censorship efforts, the scientists that embraced the sort of omertà around opposing lockdowns, that supported the vilification of fellow scientists who disagreed with them, were doing it because they thought they were doing good.

They thought they were doing good.

Yes.

I think essentially what happened was that

rather than thinking like scientists, they were thinking like propagandists.

And in this case, they were public health propagandists.

They thought that their job as scientists

was to echo public health propaganda rather than act like scientists and ask questions about the messages

that the public health authorities were putting forward.

I'm going to push back a little bit, in fairness.

Perfectly valid hypothesis, and you were at the center of this and I wasn't.

But

many of these people are very, very smart people.

I mean, we can talk about universities as like these places, but these are places made up of people.

And while not everyone is brilliant at these places, some of them are truly brilliant people.

And they are,

dare I say enough, on a sort of a left-brained-ish left-brained-ish spectrum-y type phenotype where they're not pulled into emotional issues the same way that we might think they are.

And so it's hard for me to imagine that really smart people

would

join a dialogue that didn't consider all aspects.

And yet that's exactly what happened, Andrew.

Like, think about that, right?

So, like, I mean, I've thought about that quite a bit.

I don't think it had anything to do with being smart or not smart.

I think

there were a lot of really smart biologists in the Soviet Union.

When Lysenko

told Stalin that

Mendelian genetics was a capitalist plot and that Lysenkoism was the way forward, a lot of excellent biologists, for fear of being sent to Siberia, kept their heads down and said nothing, even in areas that were directly in their field.

So it was fear of being ostracized and shamed by one's community.

And to take just a few examples.

Like, so you mentioned, I think I mentioned earlier, Scott Atlas, who's a colleague of mine and friend.

In 2020, the faculty senate of Stanford voted to censure him. Stanford has a history of censuring only three professors ever in its history.

One was a man named Edward Ross, who was a eugenicist in the early part of the 20th century. He was one of the leading eugenicists in the country, and Jane Stanford hated him and worked to get him removed from the faculty.

He was fired.

He was.

Or resigned or left.

I'm not sure exactly, but he was let go.

I think he was assistant professor.

Then Bruce Franklin, who was an English professor at Stanford, I think he worked on science fiction, but he was an anti-Vietnam war activist.

And he brought essentially a terrorist group to campus.

There had been massive public focus on it, so he was given a chance to defend his points of view. He eventually was censured by Stanford.

For being anti-Vietnam War or for bringing terrorists.

For bringing the terrorists onto campus.

Yeah, I mean, bringing terrorists onto campus is bad.

Well, in any case, there was kind of due process around both of those things.

Like they got their say.

Scott, his major sin was he

advised President Trump during the pandemic.

And he advocated for keeping schools open, again, consistent with what was happening in Sweden, and for protecting older people better because they were at a higher risk of dying if they got COVID.

That was his sin.

He was seen next to President Trump.

And that led the faculty senate of Stanford, for something they haven't taken back, to issue a censure of him that has, if you look at it, religious language.

They declared him anathema.

They effectively excommunicated him.

His family essentially was ostracized by their neighbors.

He lives on campus.

It was an absolutely disgusting act.

And it was meant not just at Scott, but generally to send a signal to anyone who agreed with Scott to keep their head down.

And it succeeded.

He's at Hoover, right?

He's at Hoover now, but he was formerly at the medical school as the head of neuroradiology.

He's a very accomplished scientist and has a textbook on neuroradiology, leading textbook on neuroradiology.

For a decade, he'd been an advisor to presidential candidates on health policy.

So he understood from a broader point of view, he also comes from a working class background.

So it was guilt by adjacency.

Yeah.

But it was aimed at silencing opposition to the lockdowns.

And it worked in large part.

I lost count of people from inside Stanford and around the country who would write to me saying, I'm glad you're speaking up on these issues.

Please keep it up.

I don't want to do this because I don't want to risk my job.

Well, you weren't completely alone.

So Levitt has a Nobel Prize.

And

you had some buddies who were pretty smart and pretty powerful.

I mean, they don't give Nobel Prizes to anybody.

No,

Mike is incredible.

He's a very brilliant man.

But

Stanford, in that sense, was better off, right?

We had a sort of underground that opposed the lockdowns, very prominent scientists like John Ioannidis, Mike Levitt, Scott.

There were people at places like Harvard and Oxford.

At Harvard, there was Martin Kulldorff.

At Oxford, there was Sunetra Gupta.

There were folks all over the world.

But

institutionally,

the universities of the world made it almost impossible.

You had to essentially decide, and this is what I decided in 2020, that I did not care about my career anymore, that I owed it to the people who were being harmed by the lockdowns to speak up more than I owed it to myself to preserve my career.

And that's why I continued to speak even after, even with the death threats, even with the vilification,

and even with, essentially, the failure of my own institution to protect my academic freedom.

I did decide I was willing to give all of that up.

That's why I kept speaking.

Given your experience, and given this thing that I hear, that people want to hear scientists admit that they are at least sometimes wrong, maybe not even about a specific instance in which they were wrong, will the NIH, perhaps you, be making a statement on behalf of scientists? I mean, you have the opportunity to address the entire world.

Here, you're doing some of this, obviously.

But will this be part of the messaging of

the NIH?

Like, we need to revise what we think of when we talk about academic freedom.

We need to revise what we actually do.

And,

you know, God forbid there's another pandemic.

We need to really be ready for the kind of discourse that is going to unify people as opposed to divide people.

You know, after a patient dies, often in a hospital, there'll be a conference where the doctors who managed the patient

will

bluntly say to each other, often behind closed doors, what went wrong.

And the goal isn't to actually point fingers.

The goal is to figure out what happened so that you don't make the same mistakes.

We haven't really had that conversation as a country or as a world

over the pandemic, and yet the harms from it still persist.

I think

what I would love to do as NIH director is,

I mean, I want to reform the scientific community so that

the values that I thought it had, which is the values of free discourse

and academic inquiry,

curiosity, those are central to the way we function going forward.

We want to make sure that

those values are at the center, because you can't do science if you don't have that.

So you just think about science in the Soviet Union under Lysenko, right?

There was no real biology going on if you couldn't say Mendelian genetics was real.

No, I actually can imagine that the small-scale example that I'm familiar with of a laboratory meeting where you discuss someone's data is the perfect microcosm for what we're talking about, where you sit back, someone presents their data, and the idea is to challenge the data.

The idea is for everybody to try and punch holes in it, make helpful suggestions.

And sometimes, sadly, at the end of that meeting, you end up sitting there with the postdoc or graduate student and you're discussing what the next project ought to be because that one is just an utter failure.

Or you're discussing something much more interesting than you ever thought was possible in the data set that neither of you could have thought of because you needed some fresh eyes on it.

But you can't have a culture in a laboratory where people can't oppose the person, quote-unquote, in charge.

I mean, this is so important.

If you can't tell the lab head, no,

I think you're wrong.

If you can't say that, the lab can't progress.

The culture of American science

has gotten away from that ideal.

And in fact, it has this weird,

this ironically weird thing where like on small matters you can have that kind of discussion, but on large matters you cannot.

And

that actually is anathema to science.

Like that actually means that we cannot as scientists address the most important questions of the day

without fear of essentially getting our heads cut off.

We had this conversation about DEI earlier.

Wasn't it uncomfortable? Like, I felt myself being uncomfortable saying what I believe is true, because I know this is one of those issues where, as a scientist, if you start talking about it, you better talk a particular way or else you're going to get your head chopped off.

Yeah, I mean, all these topics are uncomfortable, frankly.

I, you know, in part because I see them through a lot of different lenses.

The audience lens, my role as a basic scientist, my

role as a podcaster, you know, the quote-unquote field of podcasters completely transformed this kind of discussion and public health.

It's really healthy.

We can have these conversations openly and in public.

I mean, maybe I'll get my head chopped off again, but like, you know, I've got to talk about

it once.

I think you're safe.

I mean, maybe I have to remind you, you are the director.

It is an incredible thing if you really think about it, right?

Given your position in 2020 and 2021, 22, 23, sorry, you're now

at the top of the pyramid.

It is hierarchical.

And

I believe your intentions are pure and good.

I do.

I think it's important to have checks and balances, but I really believe that you want to do right by people.

I feel that's a felt thing.

But yeah, it's a remarkable arc that you're now in the position to make major decisions for the entire enterprise of science.

What I would love to do is I would like to make the lives of scientists who disagree with me easier.

I want them to be able to disagree with me.

I want to create a culture of science focused on developing truth rather than obeying

tops of hierarchies.

If I can accomplish that, that would be a major thing in my view.

Well, I think that's a magnificent sub-vision for the NIH.

I think it's super important that all voices are heard.

It's kind of interesting.

We have these discussions about diversity and inclusion, but like all voices need to be heard in the context of analyzing data.

And certainly the revision of the entire structure of the science enterprise, as you point out, is sociological, it's financial.

There are

a lot of different aspects to this.

Vaccines are a very hot-button issue these days, in part because Bobby Kennedy has been associated with the anti-vax movement.

I've heard

him

say with his own words that he's not anti-vax, but he's suspicious or very concerned about certain vaccines.

Let's just start with a very basic question.

You're an MD.

Do you believe that there are any vaccines that are useful?

Yes.

Okay.

Well, I think it's just, let's build up from there.

Do you believe that some vaccines save lives?

Yes.

Okay.

Many vaccines save lives.

Do you believe that some vaccines that are given to children save lives?

Yes.

Do you believe that some vaccines are known to be harmful and yet still given?

Let me say the specific one.

I think the COVID vaccine for children in particular, I don't think is net beneficial for kids.

But you said not net beneficial.

Does that mean it's harmful?

Net harmful.

You believe that the COVID vaccine is net harmful, especially for young men.

Can you define the age cutoff there?

We can argue about this.

Like, there's a scientific debate, but I think it's pretty clear that, I don't know, between age 12 and 30 or something, for boys and young men, the COVID vaccine is probably net harmful.

Again, with boys who have no other

underlying conditions and all that,

not obese, no heart conditions.

Well, I mean, even for the obese, you have to look at the numbers. I mean, there's lots of debates and fights over this in the scientific literature.

So

I hesitate to actually give you a specific age threshold.

But I think just as a general matter, there exists groups for whom the COVID vaccine was net harmful, specifically young men.

Do you think there's any reason to think that

the adjuvants,

essentially what the vaccines are suspended in, not the vaccines themselves, are potentially harmful?

I've heard this.

I am personally not aware of any strong evidence for it.

I think these are the kinds of things that ought to be investigated, but it's very difficult to investigate, just because of the sort of political aura around vaccines, where if you ask,

if you really do investigate it and find something

that the public health authorities don't like,

you're going to have trouble.

I think there, I don't know the answer to that question

from a scientific point of view.

Let's start with COVID vaccine and dig a little further into that.

The COVID vaccine was promoted slash mandated, certainly was mandated at Stanford,

but was promoted as the best line of defense for avoiding infection and reducing the symptoms of infection and reducing the probability of death.

That's what I heard.

What is the evidence for or against that statement now, given what we know about who took it, who didn't take it, and transmission and death rates?

Okay, so can we go back to December 2020?

Sure.

Because then I'll answer your question, I promise.

Answer all the other questions you have.

So

in December of 2020, there were a couple of really important randomized trials published regarding the COVID mRNA vaccines.

Could you describe what one of these looks like?

Because I'm not trying to slow your roll here, but some people get vaccines, some people don't get vaccine, and you look at who gets sick and who lives and who dies.

Yeah, basically.

So

the large-scale randomized trials

flipped a coin, said 20,000 people, I forget exact numbers, get the vaccine.

20,000 people get a placebo or

something

placebo-like.

And then

you follow them for a certain number of months, and you ask, which group's more likely to get COVID?

Have a diagnosed version of COVID.

Which group's more likely to die?

Which group's more likely to be hospitalized?

And if the vaccinated group is less likely to get COVID, you report that.

If not, then you report that.
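As a minimal sketch of how that two-arm comparison gets summarized into the headline "efficacy" number, here is the standard relative-risk calculation; the counts are purely illustrative placeholders, not the actual trial data:

```python
# Minimal sketch of the standard two-arm comparison behind a reported
# "vaccine efficacy" figure. The counts below are illustrative only.

def vaccine_efficacy(cases_vaccine, n_vaccine, cases_placebo, n_placebo):
    """Efficacy = 1 - (attack rate in vaccine arm / attack rate in placebo arm)."""
    attack_vaccine = cases_vaccine / n_vaccine
    attack_placebo = cases_placebo / n_placebo
    return 1.0 - attack_vaccine / attack_placebo

# Hypothetical example: 20,000 participants per arm, 10 symptomatic cases in
# the vaccine arm and 100 in the placebo arm over the follow-up window.
print(vaccine_efficacy(10, 20_000, 100, 20_000))  # ~0.9, i.e. "90% efficacy"
```

Note that a figure like this summarizes only the endpoint actually measured (symptomatic infection over the follow-up window), which is the limitation discussed next.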

There were randomized trials then published for several high-profile vaccines that were used during the pandemic in December of 2020,

I guess, November of 2020.

So

the mRNA vaccines from Moderna and Pfizer, the Johnson & Johnson vaccine, the

AstraZeneca vaccine, probably the four most important ones used in England and the United States, or Great Britain and the United States,

Europe.

Okay, so what did those studies show?

For the mRNA vaccines,

in fact for all of these studies,

they were run, these are studies that were done, again, randomized, like high-quality studies,

large numbers of patients,

but they were tracked for about two months.

Right, so you can't say from the randomized trials in December of 2020 what's going to happen after two months.

because the trials themselves only track patients for about two months.

What they showed was that among patients who had never before had COVID, right, because they excluded them from the analysis of

efficacy,

among patients who never had COVID before,

the patients who were randomized to the vaccine had lower rates of getting COVID

in those two months, I'm sorry, symptomatic COVID in those two months than the people who were randomly assigned to the placebo.

Okay.

The mRNA vaccines had more deaths in the treatment arm than in the placebo arm, but the size of the samples were such that you couldn't say that that was a statistically meaningful result.

Okay.

Couldn't say it, right?

Because it's, and it's, and that made sense.

The death rate from COVID was something like, you know, three, four out of a thousand.

You would have had to enroll populations

in the hundreds of thousands or millions in order to get a significant result about deaths.

And age range really matters here.

Yeah, so that the vaccine trials tended to focus more on younger people.

It had some older people in it, but it didn't.

If I had designed the trial, what I would have argued for is to have the older population more represented because that's who was dying from COVID.

And then having the prevention of death or hospitalization as the primary end point.

Instead, the end point was prevention of symptomatic COVID for two months.

Okay.

Now,

they didn't ask whether you got COVID, actually,

because they're people who got COVID and never had any symptoms.

So they didn't ask in the trial about prevention of transmission.

They could have, right?

So for instance, the people who were in the placebo arm, you could ask whether their household members

had

COVID at higher rates than the household members of the people who were in the treatment arm.

Compare the household members and ask.

They didn't ask that.

So

what could you infer from the trial?

You could infer that people who had the vaccine were much less likely to get symptomatic COVID for those two months.

That's all you could say.

You couldn't say they reduced death rates, because they didn't actually in the point estimate, and there was not, again, any statistically significant difference.

In the AstraZeneca and the JNJ vaccines, if you combine those, it turns out that you actually did get lower death rates in the vaccinated arm than in the placebo arm.

The JNJ vaccine had lower death rates, statistically significant, once you combine the trials.

Was the JNJ vaccine an mRNA vaccine as well?

No, it's an adenovirus vector vaccine.

And it was the single shot.

Yeah, and it was like the AstraZeneca vaccine, similar technology, adenovirus vector vaccine.

Okay, so, but again, those were only two months long.

And

the death rate difference, the trials were not statistically powered to find one, although they happened to find one once you combined the adenovirus vector vaccine trials. And for the mRNA vaccines, you couldn't say from the randomized trials one way or the other.

Okay, so that's the information base we had in December of 2020.

I wrote an op-ed in December of 2020 with Sunetra Gupta, where I argued that that is sufficient to say we should recommend that older people get the vaccine,

but that we shouldn't give it necessarily to young people.

The reason was that young people died at very low rates relative to older people if they got COVID.

And so the thing you were protecting them from was less of a risk to them than was for older people.

And so the benefit-harm calculation would tilt toward vaccination for them: if you have something that's a big threat, and you have something that is known to prevent symptomatic infection for two months, then it probably prevents death in the older population. I can't say that for sure from the trial, but I can extrapolate; it's an extrapolation, right?

Seemed like a reasonable extrapolation in December 2020.

Then it makes sense to give it, even if there are side effects, which are not known in the trial.

The trial is only

tens of thousands of people.

If you give it to billions of people,

you're going to find out side effects you didn't know about, right?

But even with these unknown side effects, it seems like, based on the benefit-harm expectation, it makes more sense to give it to older people, whereas for younger people, the benefit-harm calculation runs in the opposite direction.

There are unknown harms, some harms you actually saw in the trial itself, but you don't know what you'll see once you give it to billions, and for young people the benefit is small. So what I wrote is, you should recommend it for older people and then lift the lockdowns.

That's the op-ed I wrote.

And published in the Wall Street Journal.

Instead, what public health authorities decided to do was to take the vaccine and say that we could use it to eradicate COVID.

They implied it.

They didn't exactly say that, but

they would say things like, well, if

80, 90% of the population gets the vaccine, then we will achieve herd immunity.

as if it were some permanent state, rather than a transitory state having to do with the fraction of the population that is currently immune. Herd immunity is a clear mathematical construct in epidemiological models of disease spread.

The public health authorities that were talking about 70, 80, 90 percent were using it essentially as a synonym for disease eradication, which it is not.
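In the simplest textbook SIR-type framing, that construct is a threshold fraction of immune people above which each infection generates, on average, fewer than one new infection:

\[
p_c = 1 - \frac{1}{R_0}
\]

For example, a basic reproduction number of \(R_0 = 3\) gives \(p_c \approx 67\%\). Crossing that threshold means transmission declines; it does not mean the disease is eradicated, and the threshold shifts as immunity wanes or the virus changes.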

Was this message only in the United States, or was this message kind of uniform across the world?

Yeah.

Now, I don't know if it was uniform. Like, for instance, I don't think Sweden ever mandated the vaccine.

Right.

With the exception of Sweden, I just sort of...

A few other places.

Because for one public health system, let's assume that with the public messaging they were a bit out over their skis, so to speak. But for Northern Europe to do that, and for Brazil to do that, and for Australia to do that, it sounds like there had to have been a collaboration of kind of massive scale.

It's a little hard to imagine everyone collaborating in some sort of secret agenda that extends across international borders.

Well, just to push back on this: in November, December of 2020, when the news about the vaccine came out, right, there was a sense of joy that we'd been liberated, like science had delivered us from this deadly plague.

It was definitely exciting.

Yeah.

And so like, and there was this sense of hope, right?

That

sort of like large numbers of people around the world, I think, shared.

Public health authorities shared that sense of hope.

But that, I think, partly led them to extrapolate far beyond what the data actually showed and make promises to the public that were not in the randomized data that were available at the time.

The companies that made these vaccines, are they American-based companies?

I think AstraZeneca is a UK company.

J&J is an American company.

Pfizer, I think, is an American company.

Yeah, Merck.

For some reason, I thought Pfizer was over on the Merck.

Moderna.

Moderna has German roots, I think.

I'm not sure.

BioNTech is German.

Moderna is American.

I'm not sure exactly.

Because many of the people that are suspicious about vaccines or skeptical about vaccines argue that it's all financial incentives.

I mean, there was a lot of money generated from that.

Billionaires were created out of this.

And in fact, the NIH

is collecting patent royalties

from the licensing of the technology that went into the vaccines.

Still now.

Yeah.

But the development of the vaccine, aka Operation Warp Speed, was a Trump program, right?

Yeah, President Trump authorized

the program in order to accelerate the development and testing of the vaccines.

I remember seeing him getting the injection on the news.

So I think people forget that because of MAHA and this sort of assumption that vaccines and MAHA are diametrically opposed, in some sense that, you know, MAHA and Bobby Kennedy are,

to my knowledge, it's the first time that anyone's forcing a look at vaccines with the kind of level of detail that they are

doing it or going to do it.

People assume that the Trump administration is not aligned with vaccines, but the Trump administration initiated Operation Warp Speed, correct?

Yes.

Yeah, the idea that Bobby or President Trump is anti-vax is ridiculous.

This is frankly

at odds with what the data actually show.

Okay, let's go back to the COVID vaccine because I think the story is really important.

Public health authorities on the basis of an extrapolation that they should not have made decided to

essentially promise the public that if they got the COVID vaccine that they would not ever get COVID again.

That was the implicit

public health messaging.

You can become free.

Just take this shot.

You become free.

You no longer have to worry about lockdowns and

mask mandates or whatnot.

It very quickly became clear that that was not true.

So I remember seeing the outbreak of cases in Gibraltar, which was like 95%,

90 plus percent vaccinated.

And I was looking at that going, why is Gibraltar, I think they were using the AstraZeneca vaccines, like,

why are they seeing this huge spread of COVID?

I saw data from,

well, I forget which country.

It was mostly using the Chinese vaccine, the Sinopharm, which had a more traditional technology.

Again, with a huge outbreak of cases in like February or March of 2021,

Then Israel.

Country after country that had been heavily vaccinated, seeing large outbreaks of cases.

And

that meant that the extrapolation was false: the idea that the vaccine was going to stop you, after two months, from getting COVID and spreading COVID was not true.

Instead of acknowledging that fact, public health officials decided that the problem was the unvaccinated.

And they embraced the idea that you have to force people to get vaccinated for the public good.

So they doubled down.

It was like July, August 2021, that

the Biden administration decided to use OSHA, to use CMS.

OSHA is the Occupational Safety and Health Administration, the workplace safety agency.

And then there's CMS, the Centers for Medicare and Medicaid Services, to mandate the vaccine for populations that they had control over.

And when we talk about mandates,

were there criminal charges or civil charges if somebody didn't get it?

Just lose your job.

Yeah, you just lose your job.

I recall at Stanford, there was an insistence that everyone get vaccinated, but that if people had religious reasons to not get vaccinated or some special health reason, that they could essentially not get it.

Stanford made it difficult to not get vaccinated, but possible.

Like if you had religious exemptions, they made it possible.

Other universities made it much more difficult.

So for instance, my colleague and friend, Martin Kuldorf, who was a tenured faculty member at Harvard University, got fired

because he didn't take the COVID vaccine, even though he'd already had had COVID and recovered.

He is currently still fired?

Yeah.

So there were consequences for not getting it.

Yes.

Because we hear this word mandates, right?

But I don't recall anyone coming around to my house

and insisting.

I just recall that if I need to go a certain place, I needed a vaccine card signed.

And so

essentially it's a widespread restriction on your basic civil liberties.

That was the consequence, including potentially your employment.

And other countries were even worse.

Like, so Canada, you couldn't go on public transportation.

You couldn't fly if you weren't vaccinated.

You couldn't go to a restaurant if you weren't vaccinated.

That's true in New York City, by the way.

You had to bring a vaccine card.

Yeah.

And if you didn't have one, you couldn't go in.

Wow.

Essentially,

the regime was to ostracize people who decided that they didn't want or need the COVID vaccine.

Even though there was no scientific evidence demonstrating that if you had the COVID vaccine, you were less of a threat to other people as far as spreading COVID than if you hadn't had the COVID vaccine, specifically compared with people who had already had COVID and recovered and weren't vaccinated.

Actually, there was quite good evidence from studies in Israel, especially, that

you were less of a threat than someone who'd never had COVID and was vaccinated three or four or five months after the vaccine.

Evidence out of Qatar showed a...

pretty

sharp reduction in the efficacy of the vaccine against getting COVID by four, five, six months after the vaccination.

And what, if any, evidence was there that the COVID vaccines, any of them, caused any specific harm in adults?

Right.

So, in young men specifically, like adults as old as 35, 40 years old, there was evidence of

heart inflammation, myocarditis.

Transient myocarditis?

Yes, but also more severe

myocarditis post-vaccine.

There was, I mean, that was clear, clear evidence.

Why just boys, do we know?

I don't fully understand the biology of that.

A reason to do sex-specific studies.

And I'm in favor of that.

Couldn't pass up the opportunity.

Interesting.

So

was there any evidence that the vaccine had long-term detrimental effects that we're still looking at now?

You know, you hear this stuff, you see it circulating.

You hear more about long COVID.

We should talk about long COVID.

But

is there any evidence that the vaccine caused long-term issues for people?

I think it's

likely that there are some people

who have particular immunological responses. There's also evidence that the production process for some of the vaccines involved using DNA plasmids, which may persist

in producing some of the products of the vaccine.

Frankly,

I mean, I've looked at the literature and there's a lot of controversy around it, and I have not made up my mind fully on the extent of it.

What I will say is that it's very difficult to ask questions about long-term effects of a vaccine just generally.

You can't run a randomized trial.

That's done, right?

That vaccine trial was effectively terminated when the placebo arm was vaccinated in January of 2021.

And so you're not going to tell from the randomized studies about the long-term effects.

So now you're left with observational studies where you need to have a real control group constructed properly.

And

it's been difficult to get the public health authorities who are supposed to do this to actually do this at scale.

I've seen some of this.

Like I think the FDA put out a report of babies getting the vaccine, having epilepsy or seizures at slightly higher rates.

I think it was a report in 2022.

There are claims online I've seen about cancer, but I haven't seen anything where people have done very careful control groups.

I don't know.

I'm not leaving out the possibility.

I'm just saying that the kind of studies that I would like to see done, rigorous studies that have

control groups, even in observational settings, it's hard to find them in the literature.

And whenever they are in the literature, they seem to get attacked.

Sometimes for reasons that make sense, sometimes for reasons that don't.

It's very difficult to address this from a purely scientific point of view because the literature itself seems like it's poisoned.

Do you believe long COVID is a real thing, or is this something that people have constructed?

No,

I think it's real.

So I do think that the extent of it is, again, unclear,

but it's very clear that there are some.

So for instance, I saw a study, I think it was in 2021,

from France, where

they looked at people who previously had COVID and previously never had COVID.

And they were measuring subsequent long COVID rates after infection,

comparing the

matched people who previously never had COVID versus those who did.

And in kids, the rates of measured long COVID, which back in that study I think was like,

did you have one of some number of symptoms in the WHO list of long COVID symptoms three months after the COVID infection?

And in the matched study, the rates were roughly the same for kids.

But for adults, it was higher for the people who'd had COVID before.

And it was like, I mean, so I don't know the exact rate, but it is certainly a real phenomenon.

I mean, I've met people who've had it.

Same thing with vaccine injuries.

Like I've met people who have vaccine injuries who report having had

concrete, discrete injuries after they've been vaccinated.

And I believe them.

I mean, I think that

I generally tend to believe patients when they say things about themselves, and especially when they have no incentive to dissemble about it.

And yeah, so I think that these are real phenomena that we need to address

with open minds.

Will the NIH and or CDC be making public statements about some of what you just described, that the messaging around vaccines was,

in your view, inaccurate?

Well, I'm still saying this.

And I've been saying this.

I think that...

But, I mean, you're saying it here, and

we hear you, but in your new role, like at the level of

a country of 300-plus million people, like,

hey, folks, you know, we've looked at this, and

I wasn't in charge then, but here's the deal.

I mean,

in my role, I have to focus on stuff going forward more than the past. The past, I think, is worth addressing, but it has to be a broader look than just me coming out and saying my opinion about it.

This podcast is fun, but that's not what the purpose is.

So I'll just give you a specific thing.

My colleague

Marty Makary, who is now the commissioner of the FDA, has issued a new framework for evaluating COVID booster shots.

So rather than just requiring a showing that the COVID booster, the new variant COVID booster, whatever it is in the future, produces antibodies

either in lab animals or in humans in order to approve the vaccine for use, now going forward, the boosters have to show some efficacy in preventing COVID and preventing deaths and hospitalizations

in order to get approved.

That's an evidence-based framework to essentially say, if you're going to sell the vaccines, at least show in humans that it actually works for something we care about.

If you produce antibodies and it doesn't translate to reduction in morbidity or mortality, then

why recommend it or why approve it?

Some people might want to take the vaccine to reduce symptom severity, not just to avoid death.

At this point, there's not evidence of that.

If you've already had COVID and recovered, there's no evidence that the boosters would do that at this point.

I mean, again, like I want to distinguish,

that's why I wanted to start with December 2020.

At that point, you know, we knew about these large-scale studies of the vaccines, which were new.

And I wanted to distinguish what we knew and didn't know.

The boosters are a different vaccine, and they don't have the same large-scale studies behind them.

They've been approved on the basis of relatively small-scale studies asking whether they produce antibodies,

not

things that clinically matter to people.

Is it going to prevent me from getting sick?

Is it going to prevent me from being hospitalized?

Is it going to prevent me from dying?

The boosters don't have that kind of evidence behind them.

And so

I think it was just a couple weeks ago, the FDA decided that it was going to ask the manufacturers to produce much better evidence for the boosters before it was going to approve them.

It shouldn't just be a routine thing.

This is not a flu shot.

The framework, the regulatory framework that governs flu shots,

is based on decades of experience with flu vaccines.

Are you a fan of the flu shot?

I mean, I've had lots and lots of flu shots in my life.

Really?

Yeah.

You get it every year?

Generally, yeah.

And it's designed to guard against most of the most common strains of flu that year.

Is that how you're going to?

Yeah, I mean, sometimes they guess wrong, it doesn't do much, and sometimes they guess right, and it does better.

But I've generally gotten it. I mean, I don't think I got it last year.

Too busy, I guess.

But you don't, it sounds like you don't have any specific safety concerns about the flu shot for otherwise healthy adults.

Is that right?

Yeah, I mean,

as a scientist, I want

the safety of these vaccines evaluated in a rigorous way.

I wholeheartedly support that.

And if the data show that there are

bad outcomes, then I'm going to say that, right?

But as a general matter, the flu shot, the technology used for it is, I mean, it's a traditional technology that has a long history behind it.

And the regulatory framework, while I do think that like the production of antibodies is, I think that's actually still the standard for the flu shot,

it makes some sense, right?

The flu strain that circulates is a different one every year.

And if you required this like long-term clinical trial for the flu strain that's currently circulating, by the time you actually recommend it, it would be useless.

Now, you can say that's true for COVID as well,

but we don't have decades-long

experience with the safety profiles and also the efficacy profiles.

And the flu shot, it's hit or miss, right?

Sometimes it works and sometimes it doesn't.

What we need is an excellent universal flu vaccine, which there's still a lot of research to try to get.

But

I think the key thing I want to convey is: if you are in favor of vaccines, you should not be treating this as a religious matter, where if you believe the vaccine is good, therefore you're a good person, and if you believe the vaccine is bad, you're a bad person.

You should be treating this the same way we treat other

drugs that we recommend to the population at large.

Evaluate the benefits, evaluate the harms in rigorous ways, including randomized studies.

Understand patient nuances.

It might be right for some patients and wrong for others.

If you're going to say something, don't extrapolate beyond what the evidence actually shows,

right?

Or else you risk losing the trust of the public, especially the public that would potentially most benefit from the thing.

What I'm arguing for is an actual, honest, evidence-based evaluation of vaccines.

And that's essentially what Bobby Kennedy is asking for, right?

So that's what he's asked me to do.

Not for vaccines generally,

but

for the COVID vaccine, that's essentially the policy.

Now,

the problem that we have in public health is that

you asked me earlier about, do I think there are certain vaccines that are worthwhile?

And the answer is yes.

I do think that.

I think that if we have a public health authority that's gotten it so deeply wrong about this one vaccine

where people lost their jobs over it.

People got injured and they were silenced over it.

People were essentially made to feel, you know, like,

you remember like in 2021 where people would disinvite family members from Thanksgiving if they weren't vaccinated.

Yeah, or worse.

People were kind of excommunicated from families and workplaces.

Yeah, essentially we created

a class of unclean people as a matter of public policy.

you can understand why people who went through that would say,

given that the vaccine didn't turn out to stop you from getting and spreading COVID, why should I trust you on anything else?

That's where we currently are.

The way forward isn't to force people to say, look, you must acknowledge how great science is on these other things.

The way forward is to be utterly honest about what we know and don't know.

and treat people as partners rather than as subjects.

So in keeping with that,

there's perhaps no issue more sensitive than the vaccine autism issue.

My understanding of the current literature as it stands is that the Andrew Wakefield data,

this British physician

who was really the first to popularize the idea that vaccines could in his words, cause autism or were highly correlated with autism.

Those data were essentially retracted by the journals.

He lost his medical license.

And my understanding is there was evidence of fraud,

that he either made up data or contorted data.

I've had guests on this podcast, including a colleague from Stanford, Karen Parker, who works on autism, who verified that indeed

the frequency of autism is vastly increased in recent years in ways that cannot just be attributed to improved sensitivity of tests, etc.

One in 32 births is the current number.

And so

you can understand why parents who love their kids more than anything and would do anything for their kids are understandably concerned about any possibility that vaccines could increase the probability of autism.

My stance as a scientist is,

well, if the data are robust that vaccines don't cause autism, then

run a proper trial.

The Wakefield data are clearly contaminated, if not by outright fraud, certainly by story and narrative.

I mean, there's just no way that those data are going to be resurrected.

And I don't think they should be resurrected, right?

I mean, unless there's something I'm not aware of, he said too many things that weren't true.

And whatever happened is, you know, history.

So what is the evidence, if any, that a vaccine, some specific vaccine, causes autism?

And is the NIH and CDC and the new administration going to take a serious

second look at this?

Yeah, so I don't want to comment on the Wakefield

situation because I don't know the ins and outs of it.

All we know is what happened.

He lost his medical license.

We're talking about one study, right?

I believe that replication matters.

And so, like, there are, I think, on the MMR vaccine, some excellent studies that fail to find

a correlation

or a causal link between

MMR vaccination,

measles, mumps, rubella, a vaccine that's really, I think, important for kids,

and autism.

Like, there's a massive Danish study that tracks patients who were vaccinated, kids who were vaccinated, matched with patients, similar patients who were not, tracks them

for a year or longer, for years, and finds no difference,

fails to find a difference in autism rates.

There's people who've, I mean, there's all kinds of, if you look

online and elsewhere, there's all kinds of fights over that.

But to me, that's pretty good evidence.

for the MMR vaccine.

For some of the other vaccines,

there has been less of a focus on asking

whether autism is correlated with them.

Such as polio vaccine.

I don't know this literature, so I shouldn't comment, but

I don't remember seeing a study specifically asking whether the polio vaccine is linked to autism.

When I was growing up, every kid got the polio vaccine.

Measles, mumps, rubella, and DPT,

and a couple others.

Like there were probably four or five vaccines, as I recall.

I think that there's good evidence on the MMR vaccine failing to find a link with autism.

And I don't know the full extent of this literature, so I shouldn't comment too much, but when I've looked, I haven't seen quite the same level of evidence for some of the other vaccines; again, they just haven't looked.

As a general matter,

I think it's unlikely, just from a biological point of view, to be the main reason why the rise in autism, which is now well documented, as you talked about,

has occurred.

So to me, the question then is, thinking about autism,

you want an answer for parents: well, what does cause it?

What has led to the rise in the prevalence of autism?

The honest answer is, I don't know.

We focused now in this conversation on just one potential

cause, vaccines.

To me, it's unlikely that vaccines are the reason for the rise in autism.

But there are many other potential hypotheses for the rise in the prevalence of autism that I've seen.

You know, alterations of the gut microbiome I've seen.

Retinoids.

There was a paper out of Pasko Rakic's lab at Yale years ago looking at the migration of cells in the cerebral cortex in developing fetuses, primate fetuses, but it's a great model.

And he was exploring the idea that ultrasound was altering cell migration, which may lead to changes in circuit connectivity.

Never really got followed up on because that would be wild.

It would be wild.

It would be wild.

I'm not suggesting that ultrasound causes autism, but there were a lot of interesting ideas early on that I thought ought to be explored.

Well, so the point is that unless you know the etiology, it's very difficult to talk about the treatment.

Now, of course, autism

has a very wide range of clinical presentations, right?

You have kids who have

some social awkwardness, but otherwise are well-adjusted,

have no problems.

Think

Sheldon from Big Bang Theory or something.

Or many of our colleagues.

Yeah, our colleagues, maybe me, I don't know.

And then you also have kids who have very severe disabilities, a lot of biologically

sort of driven co-occurring conditions, apraxia,

difficulty toilet training, for example.

They will never live independently.

Right.

And so you have a very wide range of outcomes.

It's very possible that the biology is very different for folks along the spectrum.

And unless you understand the etiology, it might be different etiology for kids in different parts of the spectrum,

then you're never going to have good answers.

both for prevention and also

for therapies.

So

it's that question that Bobby Kennedy has asked me to answer, or try to get an answer to, and that President Trump has asked to get an answer to.

And I think it's appropriate because if you ask me what is, I mean, we just talked about vaccines as a potential cause.

I think it's unlikely to be the cause, but

you can see my mind is open,

depending on the levels of evidence I've seen.

Now, this is not my area.

I should say this, like I'm saying this as someone who's now like tried to wade into it some just to get a sense of it.

But as I've waded into it, it's very, very clear that there is not a scientific consensus answering the question of what causes the rise in autism or what is the etiology of autism.

But it seems important to encourage a spirit of open discourse about these other potential causes, right?

I mean, and I'm not suggesting, by the way, that ultrasound causes autism.

I want to be very clear, but if you read scientific papers focused on brain wiring, and you make the not-so-outrageous leap that autism has something to do with brain wiring, maybe gut and brain and a bunch of other things, you come across a number of very interesting preclinical model hypotheses that hopefully will be tested at some point.

Well, there are environmental exposures to various kinds of chemicals, tens of thousands of chemicals

in the environment.

There are

events that happen in utero, potentially.

There are

nutritional issues, potentially.

I mean, you name the hypothesis, I've seen it.

I'm just trying to wade into this literature as someone from the outside.

And it's just, it's bewildering.

And I can't even imagine what it would be like for a parent looking at this.

Oh, it's got to be.

Right.

So, and to me,

when there is no scientific consensus on an important question that actually impacts health,

the answer is let's do excellent science on it.

Now, I've seen a lot of excellent science about how to manage autism.

Lots of fights over whether psychotherapy is the right approach, or behavioral modification.

There's lots of fights over that.

Do we address the co-occurring

biological conditions?

How do we

address that?

Is it different?

I mean, I've seen lots of literature around that, which strikes me as

more advanced

and sort of closer to the right

answers, although again, there's lots of controversy even there.

On the etiology of autism, it strikes me that the literature is not all that far advanced. There are lots and lots of competing hypotheses, and the data are conflicting on many of them. I could give you my most promising one, but it would mean nothing, really. The right thing to do in that setting is to have an open-minded investigation to try to address this problem.

And now the question is, why haven't we had that so far?

And I'll tell you, I think the reason we have not had the kind of open-minded, deep investigation by the scientific community at large on the etiology that the parents deserve, the kids deserve, is because it's dangerous to ask that question if you're a scientist.

All of a sudden, you're going to be accused, often incorrectly, of being an anti-vaxxer.

And that's the end of your scientific career.

That kind of sort of suppression of scientific curiosity means that we won't have an answer to this question.

Right.

So

what I've done is I've organized an initiative inside the NIH

to address this question of the etiology of autism.

Not limited to vaccines.

No.

Wide-ranging.

It includes

basic science work.

It includes epidemiological work.

It includes environmental exposure work, it includes all of that, and we'll bring together data sets that we'll make available to the researchers.

We'll have a competition among scientists, just like the normal NIH way, with peer-review panels to ask

who should get the awards.

We'll have a dozen or more scientific teams asking the question,

what is the etiology of autism?

We'll have that.

I think that normally it takes a year or longer to set up a thing like this.

By September, we'll have

an open competition for these scientific projects.

And you know, we can't rush science, but I'm hoping within a relatively short period of time, you know, who knows how long exactly, depends on how the science works, we'll have a much better understanding of the etiology of autism than we have at this current moment.

Fantastic.

I mean, just fantastic.

I mean, regardless of where one sits.

On the vaccine discussion.

On vaccines, I say one thing.

Now, I don't want to, as the NIH director, I don't want...

to put my thumb on the scale on any part of these potential etiologies, right?

As I already said, I'm not particularly an expert in this area.

And so, you know, if we were to put my thumb on the scale, it would be not from the point of view of expertise.

It would just be the point of view of like, I just happened to read the literature and I was impressed by X, Y, or Z.

But if I were to put my thumb on the scale, I think it would make it more difficult, A, for scientists to ask the question honestly, because they want to impress the NIH director or something.

And then B, for the public to trust

the result at the end.

I want an open-minded process.

So this is why, like, I was asked, well, if you don't believe that these vaccines cause autism, why would you allow people to ask that as a part of the research agenda?

And my answer is there are a lot of people, especially in the public,

and even some scientists who disagree with me.

And I want them to have their say.

I want an honest conversation.

I think that if you have an honest evaluation, you're not going to find that vaccines are the primary reason for the rise in autism.

It's going to be something much more fundamental and complicated.

But I don't want the results to be

disbelieved because I put my thumb on the scale.

I eagerly await the results of the unbiased studies.

Yeah.

I really do.

And thank you for spending that time explaining what that initiative is going to look like.

And I'm

delighted to hear that it's not emphasizing one particular hypothesis.

The other thing about the initiative, it's very important to understand.

We're working with autistic parents, we're working with the autism community, right?

A lot of times, when scientists study things, we put ourselves above it, and it's like we're examining amoebas or something

on a slide.

When you do population research, you have to work with the communities that you're actually trying to help.

And that's exactly the spirit of this.

We're going to work with

communities of autistic kids and parents, and we're going to apply rigorous research methods with control groups and just the normal

sort of high-quality approach;

the term of art now is gold standard science.

Like we're going to apply gold standard science to this and subject it to the same kind of replicability standards I want all science subject to.

Can we expect that the National Institutes of Health, which indeed is plural, Institutes,

NIMH for Mental Health, the National Eye Institute, et cetera,

will be restructured in some way

in part to reflect the MAHA movement, Make America Healthy Again?

And by the way, no one told me to ask that question.

I'm asking it out of genuine curiosity.

There are these theories.

I'm not part of any of it. Politically,

I'm a free agent.

Because

the budget is limited.

It's not an infinite budget.

Depending on how the IDC thing goes, there may be more or less money to devote directly to the laboratories around the country.

And given that fixed amount of money,

you can't do everything.

I love the way you're encouraging innovative exploratory science that's rigorous with open discourse.

But can we expect that the institutes of the National Institutes of Health will take on some new names, maybe a new institute starting to emerge?

I mean,

it's really Congress that determines that.

There's a process.

The administration has put forward its suggestion for a reorganization.

I think it's down to eight institutes from 27 institutes and centers.

Congress over the past decades has had several suggestions for how to do this.

It's one of these things where I could focus my efforts on things that I think are going to make

big changes,

or I could focus my efforts on like reorganization efforts.

I'll do what Congress, the administration asks of me.

But from my point of view, we'll let that fight happen as it happens and we'll respond to it as it happens, rather than that being where I'm active.

I think the key thing is not

the structure of the institutes to me.

The key thing is the content of the research and the standards we hold ourselves to in the research.

Those are the things I want restructured.

That's really the fundamental

question for me as an NIH director.

If I can accomplish

some of the things we've talked about during this podcast,

having replicability be the core of deciding what scientific truth is, refocusing the portfolio so that we enable early career scientists to test their ideas out,

that we aim big, and that we address the key health problems that Americans face.

If we can do those things,

I'll consider myself a success.

Well, Dr.

Bhattacharya,

you have a tall task and you're clearly ready for it.

I want to thank you for taking time out of your extremely busy schedule.

Those aren't just words.

You are extremely busy to come here and have this discussion and to

tackle head-on questions that were not all easy questions.

Some of them quite difficult, actually, because there's a lot of...

nuance, a lot of different lenses one can look through.

It's clear to me that you're a data guy.

You love data.

And it's also clear to me that you like dissent.

Maybe because you've been in the position of.

That's not always true, right?

Okay, well, yeah,

it sounds like it's in your nature.

I didn't know the younger you, but

I love that you encourage dissent.

I do believe that great science emerges from discourse that includes sometimes even just outright arguments, provided it doesn't get physical or cruel,

that are aimed at getting at the truth, if it's possible, getting at the truth.

And it's also very clear that you care about exploration.

And I am, I must say, especially warmed by your enthusiasm for protecting and promoting the science of young investigators, meaning

in the first 10 years of having their labs, as well as trainees.

I think I'm not trying to speak in nomenclature.

This is so important.

It's vital that we're going to be able to do it.

It's so important.

And yes, there are some older labs doing some wonderful work, but even they will eventually retire and die.

We all do.

And the younger generation of scientists in this country, it's so key.

And so I just really appreciate you coming here to share.

I do want to check back with you in a year or two,

see how things are going.

And

science and public health really need

you to really get behind discovery and the mission statement of the NIH.

So thank you for coming here today.

You didn't have to do it.

And I look forward to more discussion.

Andrew, thank you so much for having me.

Really a pleasure.

Thank you for joining me for today's discussion with Dr.

Jay Bhattacharya.

To learn more about Jay's previous work and to find links to his current post at the NIH, please see the show note captions.

If you're learning from and/or enjoying this podcast, please subscribe to our YouTube channel.

That's a terrific zero-cost way to support us.

In addition, please follow the podcast by clicking the follow button on both Spotify and Apple.

And on both Spotify and Apple, you can leave us up to a five-star review.

And you can now leave us comments at both Spotify and Apple.

Please also check out the sponsors mentioned at the beginning and throughout today's episode.

That's the best way to support this podcast.

If you have questions for me or comments about the podcast or guests or topics that you'd like me to consider for the Huberman Lab podcast, please put those in the comments section on YouTube.

I do read all the comments.

For those of you that haven't heard, I have a new book coming out.

It's my very first book.

It's entitled Protocols: An Operating Manual for the Human Body.

This is a book that I've been working on for more than five years and that's based on more than 30 years of research and experience.

And it covers protocols for everything from sleep to exercise to stress control, protocols related to focus and motivation.

And of course, I provide the scientific substantiation for the protocols that are included.

The book is now available by pre-sale at protocolsbook.com.

There you can find links to various vendors.

You can pick the one that you like best.

Again, the book is called Protocols: An Operating Manual for the Human Body.

And if you're not already following me on social media, I am Huberman Lab on all social media platforms.

So that's Instagram, X, Threads, Facebook, and LinkedIn.

And on all those platforms, I discuss science and science-related tools, some of which overlaps with the content of the Huberman Lab podcast, but much of which is distinct from the information on the Huberman Lab podcast.

Again, it's Huberman Lab on all social media platforms.

And if you haven't already subscribed to our Neural Network newsletter, the Neural Network newsletter is a zero-cost monthly newsletter that includes podcast summaries as well as what we call protocols in the form of one to three page PDFs that cover everything from how to optimize your sleep, how to optimize dopamine, deliberate cold exposure.

We have a foundational fitness protocol that covers cardiovascular training and resistance training.

All of that is available completely zero cost.

You simply go to hubermanlab.com, go to the menu tab in the top right corner, scroll down to newsletter, and enter your email.

And I should emphasize that we do not share your email with anybody.

Thank you once again for joining me for today's discussion with Dr.

Jay Bhattacharya.

And last but certainly not least, thank you for your interest in science.
