Mindreading with Jean-Rémi King

57m
What would it take to actually read someone’s mind? Neil deGrasse Tyson and co-hosts Chuck Nice and Gary O’Reilly explore the science and ethics of decoding thoughts with Jean-Rémi King, a neuroscience researcher at Meta’s Paris lab.

Listen and follow along

Transcript

Hey, you wondering how you can invest in yourself and work towards a goal that will last?

Rosetta Stone makes it easy to turn a few minutes a day into real language progress.

You've heard me talk about Rosetta Stone in the past, and you know that I love the anytime, anywhere bite-sized lessons that allow me to go ahead and continue to learn Spanish so that I can know exactly what my mother-in-law is saying about me.

Maybe you're gearing up for a trip to another country, or maybe you're going to reconnect with family roots.

Or maybe you just want to impress people with the fact that, yeah, you know another language.

Well, now Star Talk Radio listeners can grab Rosetta Stone's Lifetime Membership for 50% off.

Visit RosettaStone.com slash StarTalk to get started and claim your 50% off today.

Jas La Order Miamigos.

Rosettastone.com slash StarTalk.

You ever walked around a neighborhood and wish you could see inside somewhere that was available for rent?

Well, let me just give you a tip.

Don't climb up on the ledge and look in the window.

People will call the cops.

Well, maybe you've walked past a place for rent and you wished you could peek inside.

Maybe even explore the layout, envision the natural light streaming through the windows, or plan where your vinyl record collection would go.

Well, at apartments.com, you can.

With tools like their 3D virtual tours, you can see the exact unit you could be living in, all from the comfort of your couch.

And if you end up wanting to see it in person, you can book a tour online without having to speak to a leasing rep.

Really envision yourself in your new home with apartments.com.

The place to find a place.

Gary, you're taking us inside the brain again.

I know it's the inner space, and it's fascinating.

Is it as fascinating as outer space?

You'll argue it's not.

I knew you were both going to say that.

Are you reading our mind?

Absolutely.

Okay.

An expert tells us the future of AI and reading your mind.

Coming right up on Star Talk Special Edition.

Welcome to Star Talk.

Your place in the universe where science and pop culture collide.

Star Talk begins right now.

This is Star Talk.

Neil deGrasse Tyson, your personal astrophysicist.

And I see to my right, Gary O'Reilly.

That must mean it's Special Edition.

Yes.

Gary.

Hey, Neil.

How are you doing man?

I'm good.

Former soccer pro.

Allegedly.

Allegedly.

Are you better here than you were when you were playing soccer?

As I get older,

I do get better.

Good answer.

Just hey.

As you get older, you get

older.

That is what happens to me.

That's about it.

So I'm looking at the title you propose: Reading Your Mind.

Ooh.

And I thought this was a science show.

I know.

Start the seance now.

Right.

Buckle along.

Okay.

I'm getting an M.

An M.

There's a, you have a relative somewhere.

Somewhere in the hemisphere.

Right.

You had a mother.

Okay, sorry.

That's just it.

You're not.

All right.

AI will be driving our cars, our trucks, our trains soon enough.

And probably, if not already, it will help us solve our everyday problems.

It already is.

It has.

Exactly.

And it'll probably solve some of our big problems.

It may even help us tidy up some of the mess we've made over the years.

But surely it's never going to be able to read our minds, is it?

Well, actually, yeah, it can.

And

our guest today leads a research team using AI to decode the language of our brains.

But before you start shouting at your devices, stop and think about the positivity that could come with this as a tool.

But those who can think but not speak will get a voice.

So, for that, and if that happens, that would be truly amazing.

So, the ethics of that, too.

Absolutely, that's what I'm talking about.

Okay, so if we would introduce our guest, delighted to.

Thank you.

Jean-Rémi King.

Cool.

Jean-Rémi King.

Oh, you're going to be saying that for hours, aren't you?

From Paris, France.

Did I say all that right?

Absolutely.

Perfect accents.

Welcome to Star Talk.

Welcome to my office here at the Hayden Planetarium.

Thank you very much for having me.

And you work for Meta.

That's right.

Facebook, basically.

Absolutely, yes.

But Meta.

I mean,

I think it's not just one singular thing.

It's not a thing anymore.

It's Meta.

Right, yeah.

All right, you work for Meta in Paris.

You have a background in neuroscience.

I love neuroscience.

We have neuroscientists on the show all the time.

We really do.

We're all in the situation when we have a neuroscientist.

And describe to us what your goals are.

Aside from world domination.

So

we have a lab at Meta which is called FAIR, for Fundamental AI Research, which is structured as an academic lab in a sense.

The goal is really to

understand more about the principles of artificial intelligence.

And within that lab, I'm working with a team that interfaces two disciplines, neuroscience on the one hand and AI on the other hand, trying to both better understand how the brain works and also to perhaps improve AI algorithms in light of these principles.

How do you have any idea at all how the brain is processing information?

So we have tools for this, of course, in neuroscience.

Tools.

Interesting.

Tools you put on people's brains?

This is not hammers and chisels.

Tools.

That's a euphemism for something and I want to know what.

Sure, yeah.

You have a really a wide battery of tools that you can use.

The one that we tend to- On human brains.

On human brains, yeah.

So the one we tend to use the most in the team are non-invasive neuroimaging techniques.

So from

magnetic resonance imaging, like the big scanner you have in hospitals,

to electroencephalography.

These are like the small nets that you can put on people's.

These little caps that you put on your head with all of the

electrodes.

It looks for fields.

And each of those.

Electromagnetic fields that come through your skull.

That's right.

So each of those work with different principles.

So for EEG, electroencephalography, and MEG, magnetoencephalography, you measure the fluctuations of electric and magnetic fields, which are elicited by neural activity.

So typically what...

Thoughts.

Yes.

The biological instantiations of thoughts, yeah.

So is every

brain?

Would you have precisely what?

The biological thoughts.

In what's the word?

thought.

So does that mean that every action in the brain has an electrical counterpart?

Or the firing of a synapse is an actual electrical...

You know, actually, you have a lot going on in the brain which is not electric or doesn't lead to electric fields.

In fact, even the neurons which are firing, not all of them are being measured with EEG or MEG.

And we tend to only measure those that are spatially aligned.

So in the cortex, which is the part of the brain which is folded,

you have a lot of neurons, which we call pyramidal cells, that tend to be positioned in the same way.

So when they discharge electricity, the electric field can build up over space because they actually are aligned spatially.

So it strengthens the signal.

Yeah, if they were facing any direction.

You get some cancelling.

Yeah, you just average down to zero, basically.

Because they're all

aligned with one another, then you can measure these electric fields at a macroscopic level, even with electrodes that are positioned on the scalp, so not

inside the brain.
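
To make the alignment argument concrete, here is a minimal, purely illustrative numpy sketch (not from the lab's code, and with made-up numbers): identical unit dipoles that all point the same way add up linearly, while randomly oriented ones only grow like the square root of their number, which is why only the spatially aligned pyramidal cells produce a field large enough to measure at the scalp.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 10_000

# Aligned case: every cell's dipole points the same way, like cortical pyramidal cells.
aligned = np.tile([0.0, 0.0, 1.0], (n_cells, 1))

# Random case: dipole orientations drawn uniformly over directions.
random_dirs = rng.normal(size=(n_cells, 3))
random_dirs /= np.linalg.norm(random_dirs, axis=1, keepdims=True)

# The measurable field at the scalp scales with the vector sum of the dipoles.
print("aligned |sum| =", np.linalg.norm(aligned.sum(axis=0)))      # ~ n_cells
print("random  |sum| =", np.linalg.norm(random_dirs.sum(axis=0)))  # ~ sqrt(n_cells)
```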

So if we're in an fMRI and you're offering images.

Functional magnetic.

Exactly, yes.

So that's where you are, actively, you're awake, talking to the person while they're messing with your brain.

Well, they're not messing with your brain.

They'll offer you an image and that then gets picked up through the data.

But while you're offering an image to a patient, there's other noise.

But you declared something he hasn't declared yet.

Can we get him to say it first?

Okay.

When you read the brain, what do you see?

We see a lot of noise.

But maybe

I didn't say my brain.

I say

when you read a regular brain, what do you see?

You see a lot of noise.

But just a clarification on the fMRI.

So fMRI is a different type of technology that does not pick up electric and magnetic fields like EEG and MEG.

It actually picks up a proxy of neural activity, which is the deoxygenation or the blood flow in the brain.

So when neurons are active, they consume oxygen, and so you have a change in the vascular flow, which you pick up with fMRI.

So you're getting the geography of the brain as to what's happening and where.

Absolutely.

And this is a very different type of signal that you would measure with EEG and MEG.

And it's very slow.

So of course the blood flow doesn't change every millisecond, let's say.

And so, you have a very different type of signal that you would observe depending on the device of choice, whether it's fMRI or EEG or MEG or intracranial recordings, when you can have access to these types of signals.

Intracranial means you actually have probes inside the brain, inside the brain.

Absolutely.

So, this is very common.

People say, go ahead and do that.

No, so

you do have patients, typically patients who suffer from intractable epilepsy, who need to have the part of the brain which generates the seizures to be removed.

And before doing this, it is common to have a procedure where, well, the neurosurgeons and the epileptologists decide to put electrodes inside the area which is believed to be pathological, in order to be sure that this is indeed the brain region that should be removed.

Right.

You don't want to cut out the wrong part of the brain.

Absolutely.

And so these individuals typically would stay about a week in the hospital during which the signals can be analyzed by a neurologist.

And during that week, you can ask them whether they would like to participate in, for instance, an experiment that involves, I don't know, story listening or watching a movie.

I mean, we're already in your brain.

So why not?

I mean, we're already in here.

You know, it's like when the mechanic goes, listen, I got to go in there anyway, so I might as well get the calipers done on the brakes, you know?

Yeah,

right.

So when you're decoding the brain waves, whether it's blood flow or the magnetic fields, and you said there's noise,

how is your algorithm filtering that out?

And how is it breaking down?

Because you said there's different data

and the way the data comes from an fMRI is different from the way it comes from an MEG.

So how can you explain how the algorithm is reinterpreting that?

Sure.

So maybe just to start with, the reason why I said that when you look at it, it looks like noise is because these signals are impacted not just by neural activity but by a lot of different factors.

So for instance, magnetic fields are constantly evolving.

I shouldn't try to say this in front of you guys because you know more about this than I do.

But we are in a flux of magnetic fields all the time.

And the magnetic fluctuations that are being generated in the brain are extremely small, orders of magnitude smaller than the objects that surround us and move around when they have metallic parts.

And so the signals that can be picked up are basically contaminated by all of these things.

So when you look at the raw data, it's very difficult to guess anything, actually.

You would probably need to start to do the very same task again and again to try to average out

the noise and start to see what is the average brain response.

So you're really looking for patterns more than anything else.

Then what better than to use AI to recognize patterns?

That makes perfect sense.
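
A toy numpy sketch of that averaging trick (illustrative numbers only, not real recordings): a fixed evoked response is buried in much larger sensor noise, and averaging repeated trials improves the signal-to-noise ratio roughly as the square root of the number of trials.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 0.6, 600)                       # 600 ms of simulated recording
evoked = 1e-6 * np.exp(-((t - 0.3) ** 2) / 0.002)  # a small, repeatable brain response
noise_sd = 5e-6                                    # sensor noise much larger than the signal

def average_trials(n_trials):
    """Simulate n_trials repetitions of the same task and average them."""
    trials = evoked + rng.normal(0, noise_sd, size=(n_trials, t.size))
    return trials.mean(axis=0)

for n in (1, 10, 100, 1000):
    avg = average_trials(n)
    snr = evoked.max() / avg[t < 0.1].std()  # residual noise estimated from the pre-response window
    print(f"{n:5d} trials -> rough SNR ~ {snr:.1f}")   # grows roughly like sqrt(n)
```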

Wait, so let's back up for a minute.

I understand you can look inside someone's brain and see the image that they're seeing as though you were somehow their eyes behind what the brain processed.

Do I understand this correctly?

The goal is to try to understand how the brain represents perception.

In the case of this experiment you're alluding to, the individuals are typically watching images one at a time.

Each image lasts for about a couple of seconds.

But you did this on mice before you did it on humans.

But you only saw big chunks of cheese.

Are you saying this because I'm a French researcher?

We do not work with non-human animals in our team, but of course in neuroscience you have a wide variety of approaches and a lot of people are indeed working on the visual system.

Rather in macaques than in mice; mice are not so great for vision.

But yes, there are a lot of different things.

They're bad parents, if I remember correctly.

Well, I'm not an expert in this, but I think they do see things, but they don't count on vision as much as we do.

Right.

So what you're really doing is you're measuring these signals as a person is seeing something.

And that, what you're measuring, once you filter it, you're able to determine that this is the pattern.

And if we match that from person to person,

what are you measuring against is really my

question.

So you really have two types of things, right?

You have the images that you present to the participants, and you have the brain responses to those images.

And the whole goal is to try to find the linking function between the two.

Okay, so you could use the same person actually and just replicate that over and over again.

If you keep seeing the same pattern, then you know from this pattern that represents

a sports car or a this or a that.

So you don't have to

be able to pressure the signals.
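
To make the "linking function" idea concrete, here is a hypothetical decoding sketch, not the team's actual pipeline: simulated sensor patterns for two image categories, and an off-the-shelf scikit-learn classifier trained to predict which category was shown from the brain pattern alone, scored against the 50% chance level.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Hypothetical setup: 200 image presentations, a 306-channel MEG pattern per image,
# and a label saying which of two categories (say, "face" vs. "house") was shown.
n_trials, n_sensors = 200, 306
labels = rng.integers(0, 2, size=n_trials)

# Simulated data: a weak category-dependent pattern buried in sensor noise.
category_pattern = rng.normal(size=n_sensors)
X = rng.normal(size=(n_trials, n_sensors)) + 0.1 * np.outer(labels, category_pattern)

# The "linking function": a linear decoder from brain pattern to stimulus category.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, X, labels, cv=5).mean()
print(f"decoding accuracy ~ {accuracy:.2f} (chance = 0.50)")
```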

So wait a minute, here's the real rub, though.

The human brain varies from person to person,

not in its general regional response to stimuli, but it does vary in how we actually perceive things.

So, how do you make sure that what you're measuring in one person is actually going to be what you're measuring in another person?

Like, if I were to lose my sight, my occipital lobe would go like dead,

but other parts of my brain would take up that activity.

And so you would be measuring a completely different data set because I'm blind.

But in my mind, I would still be seeing stuff.

So.

I think you're highlighting something which is actually an open question at the moment, which is the inter-individual variability of neural representations.

Yeah, that's what I mean.

And so, up until recently, most of human neuroscience research was really trying to focus on what was common across individuals.

So typically the very sort of standard experiment is you take 20 or 40 participants like you and me and you make them do a task for about an hour in the scanner.

And then you try to see whether their brain responds similarly to the same stimulus.

For instance, if you present half of the images with faces and half of the images with houses, is it the case that the brain area that responds to faces is similar across individuals?

And the result is that there is a surprisingly common structure across individuals in ways which raise questions.

For instance, you have an area in the brain called the fusiform face area, which is an area that responds specifically to faces.

And this area tends to be located in a similar part of the brain for every individual, which is fine.

You can say, okay, maybe genetically this was pre-programmed.

We have some neurons in the brain which are specifically tuned for this.

But it also is the case for reading, for instance, for orthography.

So if you present words,

you can find that indeed some parts of the brain are specifically responding to letters, or the letters that you know, or the words that you know.

And this tends to be in a brain region which is similar across individuals.

But this cannot be genetically programmed, right?

Because words are something that emanates from culture.

This is a recent trait.

So trying to understand why the same high-level representations end up being represented in the same place in the brain is a major question.

Now, having said that,

the field is shifting towards

more and more focus on individuals.

And we do realize that indeed the representations are very specific to some extent to individual brains.

And that so far we may have emphasized too much the similarity across individuals and not paid enough attention to the individual specificities.

But if you have to calibrate against the individual for the individual's thoughts, then you can't just come up to a stranger and know anything about them.

So we would, so for instance, we would know that auditory inputs, so sounds that come into your ear, tend to be processed in the same brain regions at first, right?

It's not that the ear is connected to a random part of the cortex.

It tends to arrive ultimately in the primary auditory cortex.

And this would be common to most people except if you have brain lesions or a variety of pathologies.

And that would be the same for vision and that would be the same also for the sense of numbers, for instance.

If you have a sense of magnitude, this is typically hosted in the parietal cortex, and this tends to be the same across individuals.

But as soon as you want to get more specifics, you want to really try to get a more fine-grained level of representation, then this becomes really specific to individuals and it's difficult indeed to transfer the knowledge that we observe from one participant to another.

Tron Aries has arrived.

I would like you to meet Ares, the ultimate AI soldier.

He is biblically strong and supremely intelligent.

You think you're in control of this?

You're not.

On October 10th.

What are you?

My world is coming to destroy yours, but I can help you.

The war for our world begins in IMAX.

Tron Aries, rated PG-13, may be inappropriate for children under 13.

Only in theaters October 10th.

Get tickets now.

The best business-to-business marketing gets wasted on the wrong people.

Think of the guy on the third floor of a 10-story apartment block who's getting bombarded with ads for solar panels.

What a waste.

So when you want to reach the right professionals, use LinkedIn ads.

LinkedIn has grown to a network of over 1 billion professionals and 130 million decision makers and that's where it stands apart from other ad buyers.

You can target your buyers by job title, industry, company, role, seniority, skills, company revenue.

So you can stop wasting budget on the wrong audience.

It's why LinkedIn ads generates the highest business-to-business return on ad spend of all online ad networks.

Seriously, all of them.

Spend $250 on your first campaign on LinkedIn ads and get a free $250 credit for the next one.

No strings attached.

Just go to linkedin.com slash startalk.

That's linkedin.com slash startalk.

Terms and conditions apply.

Huge savings on Dell AI PCs are here and it's a big deal.

Why?

Because Dell AI PCs with Intel Core Ultra processors are newly designed to help you do more faster.

It's pretty amazing what they can do in a day's work.

They can generate code, edit images, multitask without lag, draft emails, summarize documents, create live translations.

They can even extend your battery life so you never have to worry about forgetting your charger.

It's like having a personal assistant built right into your PC to cover the menial tasks so you can focus on what matters.

That's the power of Dell AI with Intel Inside.

With deals on Dell AI PCs like the Dell 16 Plus starting at $749.99, it's the perfect time to refresh your tech and take back your time.

Upgrade your AI PC today by visiting dell.com slash deals.

That's dell.com slash deals.

This is Ken the Nerdneck Zabara from Michigan and I support Star Talk on Patreon.

This is Star Talk Radio with Neil deGrasse Tyson.

If we step back into the offering an image to a patient, how accurate now is your algorithm in terms of replicating as much of that first image and how much does the algorithm say, well, I'll take a calculated guess at filling in the blanks?

That's a very difficult question.

How many blanks are there, right?

To fill in, yeah.

Because the

metric that we use for evaluating how well we reconstruct the images in this case is not well posed.

So if you take for instance a pixel level, you want to compare how good your image, the image that you managed to decode from brain activity, is compared to the true image.

You may get every individual pixel wrong because perhaps, I don't know, the color is slightly off and the objects are slightly to the left or to the right.

And so you would have a very bad decoding metric.

But if the image has the same content, if it's, I don't know, the true image has a horse and you also decoded a horse, you don't want to say that this was a terrible reconstruction.

You want to say, well, it's maybe not...

pixel accurate, but it tends to have the right concept.

And so there is, for now, a difficulty in even quantifying the quality of the reconstructions.
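
A toy illustration of that evaluation problem (invented images, not actual reconstructions): a candidate reconstruction that has the right concept but is shifted by a few pixels scores noticeably worse on a strict pixel metric, while a crude shift-invariant "concept" summary still counts it as correct, and random noise fails both.

```python
import numpy as np

# Toy 64x64 "images": the true stimulus is a bright square; one candidate reconstruction
# is the same square shifted a few pixels (right concept, wrong pixels); the other is noise.
def square(offset=0):
    img = np.zeros((64, 64))
    img[20 + offset:40 + offset, 20 + offset:40 + offset] = 1.0
    return img

true_img, shifted_img = square(0), square(8)
noise_img = np.random.default_rng(4).random((64, 64))

def pixel_mse(a, b):
    """Strict pixel-level metric: penalizes any displacement or color shift."""
    return float(np.mean((a - b) ** 2))

def concept_summary(img):
    """Crude shift-invariant 'semantic' summary: bright area and its width and height."""
    mask = img > 0.5
    return int(mask.sum()), int(mask.any(axis=0).sum()), int(mask.any(axis=1).sum())

for name, img in [("shifted square", shifted_img), ("random noise", noise_img)]:
    print(f"{name:14s}  pixel MSE = {pixel_mse(true_img, img):.3f}  "
          f"same concept? {concept_summary(img) == concept_summary(true_img)}")
```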

However, what is striking is to see that when you have a lot of hours per participant, typically 20 to 40 hours per participant, of them just watching

images in the scanner,

and you have a very good scanning technique, like an ultra-high field.

Yeah, this would be a huge amount of data for neuroscience, not for physicists.

The universe is very big.

I was going to say, they're only mapping the entire universe,

of which your brain is a part of.

So once you have a lot of data per individuals, then you can really start to reconstruct what they perceive in a surprisingly accurate way.

However, going beyond perception currently remains very difficult.

Okay, so if you've offered an image to a patient,

you get a certain set of data back, depending on the subject matter of that.

What's the difference if the patient is asked to imagine

an image?

And do you get a pattern?

Yes, yeah.

We're talking mind's eye, for want of a better term.

Right.

So in the case of perception, this is where the most progress has been made.

So when you watch an image or when you hear a sound, it is becoming increasingly easy to decode what the person has seen or has heard.

However, when you do the same type of tasks, but on imagination, you can get performances above chance level from a statistical point of view.

But frankly,

it's not very convincing to anyone who doesn't want just to look at the stats and just wants to see the reconstruction.

And the reason for this is, well, there are two reasons.

The first reason is that the signal-to-noise ratio in imagination is much lower than in perception.

So when you look at the brain signals, on average, they are weaker when you try to imagine, let's say, an apple than when you actually see the same apple.

So people have vivid imaginations, though.

And still?

And I don't think we know this.

So I think this, trying to evaluate whether the people, for instance, who claim not to have any visual imagination indeed do not have representations that would be decodable at all, is still open.

Because I just learned days ago that a colleague of mine, he went around the room and said,

picture an apple in your head.

Picture an apple.

Okay.

Picture an apple.

He can't picture an apple in his head.

And I know.

Right.

Is this some rare...

Not even the computer?

He cannot conjure an image on command in his head.

We all thought of apples: red apples, green apples.

But any image on demand.

Well, he used that as a simple one.

So I didn't know this was an issue.

Yeah, I think this is actually quite common.

I am not an expert in this, but I think that the term is aphantasia, I think.

This is something which is more than 5% of the population, I think, that claim not to be able to imagine visually objects in the world.

So do they also become artists?

I don't think artists are restricted to just imagining objects in the head.

You have musicians that may not engage in this monologue.

How much further, in terms of a percentage, do you think your research is going to take us toward how the brain interprets images?

This is a very uh this is a very difficult question.

Again, the, um... Sorry, the question was easy.

Is your answer this tool?

Probably, yeah.

I don't know about our research specifically, but what is clear is that there is huge progress being made thanks to AI, but not as a tool like you would see in other sciences.

So for instance, in, I don't know, in biology,

in cosmology, in sciences where you have a lot of data, you use AI as tools.

You have a lot of numbers, you don't know how to crunch them, you train a system to do whatever you're looking for, and it helps you process this data.

In neuroscience, we also do this in the pattern matching that

we discussed earlier.

But we also use this as a modeling framework because the AI system, in a sense, is also trying to do something that we do.

We train AI system to perceive the world, to try to recognize objects, to reason upon the world,

to discuss with us

in a linguistic form.

And so this creates basically systems that can then be used as models of how the brain works.

This is really

accelerating the, I think, the understanding of how the brain functions.

So you talked about linguistics there.

If you presented a sentence to a patient, then you're going to have that sort of perceptual stage, where they perceive the sentence, they see the sentence.

Then you go through what they call a lexical stage and then a contextualization stage.

That all makes sense.

Good.

That's basically how we communicate.

I know, but are you able to get the algorithm?

I don't know.

Are you able to get the algorithm to feel the nuances of the brain and actually see how that breaks down?

Is that just the future?

I'm waiting for this answer.

Maybe I can say

how we do this in the first place, right?

So we can have individuals like you and me, and I'm often a subject of my own experiments, going to the scanner and reading a sentence, right?

And so you flash

a sentence word by word, once upon a time, and so forth.

And for each millisecond, you can see, okay, what is the brain activity now?

What is the brain activity now?

So you end up with an activation pattern associated with each moment of time and that you can time-lock to words or to syllables, phonemes.

And then you can do this same approach in the AI algorithms.

You can present a sentence and deep learning networks nowadays have activation patterns inside of them which are known to be difficult to interpret.

But nevertheless we can do the same trick.

We can time lock the activations of the deep nets in response to words, syllables and so forth.

And then we can do the comparison between

the activations of the AI systems to the activations of the brain.

And we don't know what these two things represent, but we can still try to do correspondence, to try to see whether they tend to be similar in the geometrical structure that

they hold.

And what we observe is that this helps us decompose the stages of processing that you mentioned.

So we can first see that you have algorithms that are trained to do visual processing but know nothing about

words,

about language, and that you can map onto the activations of the perceptual system.

And then you can do the same type of comparison with an algorithm which this time is not trained to recognize images or pixels or to transform pixels but is trying to analyze words and combine them together.

And you will see that the activation patterns of these algorithms that are processing

things at a language level and not at a perceptual level, they do have activation that corresponds to other brain regions and other time moments.

And so we can try to do this sort of one-to-one correspondence between the model and the brain to try to understand the structure of these representations.
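
A minimal sketch of what such a model-to-brain comparison can look like in practice, on simulated data with hypothetical dimensions (this is not Meta's code): activations are time-locked per word, a ridge regression maps the model's activations to the brain responses, and the similarity is scored as the correlation between predicted and held-out brain data.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Hypothetical shapes: one row per word in a story, time-locked to each word's onset.
n_words, n_model_units, n_sensors = 1000, 768, 64
model_acts = rng.normal(size=(n_words, n_model_units))        # deep-net activations per word
true_map = rng.normal(size=(n_model_units, n_sensors)) / np.sqrt(n_model_units)
brain = model_acts @ true_map + rng.normal(size=(n_words, n_sensors))  # noisy simulated "brain" data

# Fit a linear map from model activations to brain responses on a subset of words...
train_X, test_X, train_y, test_y = train_test_split(model_acts, brain, random_state=0)
mapping = Ridge(alpha=100.0).fit(train_X, train_y)

# ...then score the similarity as the correlation between predicted and held-out brain data.
pred = mapping.predict(test_X)
scores = [np.corrcoef(pred[:, s], test_y[:, s])[0, 1] for s in range(n_sensors)]
print(f"mean model-to-brain correlation across sensors ~ {np.mean(scores):.2f}")
```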

And where exactly in that process

do you get

the language model to,

I'll say,

mimic

perception and the nuance that we have, which is experientially based.

So when you look at once upon a time, there is an activation

pattern, right?

Right.

And you can replicate that activation pattern in the AI.

But what you can't do in the AI is replicate all the different things that once means to you.

I went to the movies once.

Really?

Only once you went to the movies?

Once upon a time.

I know that as the beginning of all fairy tales.

So it brings in a completely different contextual meaning.

So where along that line of comparison do you get to interject

what we do that machines don't, which is intuit and find nuance?

Right.

That's that's a great question.

And maybe I should emphasize one thing, which is that when we do this comparison, we don't actually train or tune the algorithms to resemble the brain.

We don't actually try to inject this knowledge.

Okay.

We just have these AI algorithms that we can use off the shelf, open source models, either produced by our colleagues or by the rest of the scientific community.

And these algorithms, they're not trained to mimic the brain.

They're trained for whatever other purposes, to be chatbots and to recognize cats from dogs in images.

But what we observe empirically is that training these algorithms tends to make them generate representations which are comparable to those that we have in our brains.

Okay, first of all, that is scary AF, okay?

I mean, it's fascinating and it's really cool, but it's also kind of scary.

Tell them what AF means.

It's scary as

but the reason why it's a little scary is because,

on the one hand, it kind of diminishes us as this crowning jewel in all of creation, with the zenith of intellect that we believe ourselves to be.

We'll be the zenith of anything.

Right.

That's what, yeah.

That's what we're doing.

Couldn't we do with a little bit of humbling every now and again?

I don't know about you.

No, no, here, no, no, here's how you get out of that.

Here's a good idea.

go ahead.

We are so brilliant, we created something more brilliant than ourselves.

So

I wouldn't say this quite yet, because AI is really limited in many ways today, in spite of the hype.

I understand the emotional reaction, but frankly, I also think that there is a source of marvel here, right?

For the first time, we have AI systems or systems that we train for a task, right?

The task is surprisingly arbitrary or even mundane, right?

For instance, trying to predict the next word given the preceding words, that sounds like

this is what all LLMs do.

Exactly, yeah.

Large language models.

Thank you, yes.

Large language models.

And this simple task pushes the algorithm to generate hidden latent representations which resemble those that we have in our own heads.

And that suggests something to me which is very profound, right?

Which is that there exist general principles that push these systems, biological and artificial systems, to generate a similar computational path, a similar set of representations.
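
For readers curious what those hidden latent representations are operationally, here is a short sketch, assuming the Hugging Face transformers library and GPT-2 purely as an example model: it extracts one activation vector per layer and per token for a sentence, and these are the kinds of vectors that analyses like the ones described above compare against brain recordings.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load an off-the-shelf next-word-prediction model (GPT-2, just as an example).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

inputs = tokenizer("Once upon a time", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One activation vector per layer and per token, time-locked to the tokens,
# analogous to time-locking brain activity to word onsets.
hidden_states = outputs.hidden_states              # tuple of (n_layers + 1) tensors
print(len(hidden_states), hidden_states[0].shape)  # e.g. 13 layers, shape [1, n_tokens, 768]
```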

So

is there a similarity

between the brain and how it processes data in its architecture, and that of a large language model? Is it learning in a very similar way to the human brain?

Because as I understand it, the original idea of neural nets as invoked in computers was an attempt to mimic what we thought our brain wiring was doing.

And we learned that that's not really how our brains work.

So it's just dangling there now as its own thing with its own utility, but it's no longer a biological analog.

Yeah, so the histories of AI and neuroscience are intertwined quite a lot, but for a long time these links were metaphorical.

Like the idea of a neural network was, I think, a useful concept, but the goal was not to be as close to the brain as possible.

In fact, it's really a huge simplification, this idea of artificial neurons, as compared to what was already known at the time.

And you've had these bridges between the two disciplines

for many decades.

What is different now is that this comparison is not just like conceptual, like kind of loose, it's very precise.

We can quantify the extent to which the activation patterns in the brain and the activation patterns in these AI systems do look alike or not.

And

even though these systems are not built for that purpose.

Now,

having said that, I also feel the need to mitigate these results because this is a tendency that we have, but we also see a lot of edge cases where this does not work.

So typically if you take the very best model, the largest model, this similarity tends to break down.

So we do have cases where what we call the convergence of representations between AI systems and the brain is not monotonic, is not systematically the same.

All right.

One thing I haven't sort of got to grips with: the speed at which you get an image back of the brain, how quickly that is, and how quickly you're able to then process that data back through an algorithm.

In the head?

Yeah.

In the head it's quite slow actually.

So when you look at reading for instance, you flash a word onto your retina.

This takes about 70 to 100 milliseconds to really blow up the visual cortex in the occipital lobe.

And from there you'll get another 50 milliseconds for this visual information to be processed.

A millisecond is a thousandth of a second.

A thousandth of a second, yeah.

So, fifty thousandths of a second would be five hundredths of a second.

Yeah, that's correct.

I'm not good with math, especially not in my native language.

We got messing in my head if you're not good with math.

But so, yeah, around 100 milliseconds, this is really when the activity peaks in the visual cortex for the sensory processing, let's say.

And then this information is being analyzed into edges that will eventually construct

the representations of letters and of morphemes of words.

And this is around 200 milliseconds, so one fifth of a second.

And then it takes another 200 milliseconds, so around 400 milliseconds, before the semantic part of words really rises in the brain and is broadcast to a wide variety of brain regions.

And so this process is relatively slow.

It takes about half a second for you to analyze the sensory input.

How fast would machine learning do it?

Or if, let's say, an OCR,

how fast would it know?

Yeah, in terms of inference, the machine would be much faster.

It would be just a few milliseconds.

A few milliseconds to do the whole process, we take like half, a full half a second.

Absolutely.

So we're basically like, duh.

Well,

at the inference stage...

So we're giving AI ideas about what to do with us when it becomes our overlord.

What is fast is what we call the inference stage, right?

So

once you already train the algorithm, using it is actually very fast.

What is typically slow is loading the information onto the graphic card.

But once it's there, it's actually very fast.

However, training these algorithms is ridiculously slow.

If you want to train an LLM today, a large language model today, you need trillions of words, which represents many, many, many lifetimes of just reading all of the text that we've created in humanity.

That takes us back to what we were talking about earlier.

So, in order for it to know, it has to see all the words.

In order for us to know, we just have to see like a word and then something similar.

And we're like, oh yeah, it's that.

You know, so that's what we're, for instance, a ball.

If you show us a ball, you can show us one ball.

We've never seen a ball before in our life.

And you show us a ball.

And then you show us a basketball and we'll say, that's a ball.

And then you show us a baseball and we'll say that's a ball. But the machine is like, well, I have never seen that before. So that's the difference.

Yeah, this is one of the many differences.

I mean, when we emphasize similarities, we emphasize the similarities because we are in a field of differences; everything is different.

The architecture is different, the type of data that they receive is different, the training,

of course the physical embodiment and sensation are different.

This is also highlighting why we are all the more surprised and interested in the fact that in spite of all of these differences we can still find similarities in the way they process information.

Wow.

And how fast, when you've got these sorts of caps that you're putting on, are they sensitive enough to be able to operate at that sort of speed?

So it depends on the device.

So with functional magnetic resonance imaging, fMRI, you get a snapshot of brain activity approximately every two seconds.

So a lot can go on within two seconds.

However, if you take magnetoencephalography, you can get a snapshot every millisecond, so you'll get a much more well-resolved signal in time. But the spatial resolution now is much lower, so you tend to have a blurry image, let's say, of brain activity. So you have a trade-off between these different technologies.

Wow. So it's kind of like cosmology: the better your looking tools get, the easier it is.

General truth of science.

It's just going to be so much easier for you to figure out anything you need to figure out.

It's just a matter of we got to be able to see it faster and then know more clarity.

I got a question.

When I think of the brain, I think of it as this organ,

and there are these parts of the brain that are similar from one person to another, even if there's differences in detail.

Are we to believe that the brain knows in advance how it would divide up its territory?

Or are we all just socialized the same way?

We all grow up in a civilization, and so we all have the same influence on our developing brain for it to take the shape the way it does.

So this is a very profound question.

There is a tension.

I'm not a historian of science, but there is a tension in the field that dates back to philosophy, between empiricists, people who think that the structure of representations comes from the data to which you are exposed, and rationalists, the people who would rather emphasize the importance of innate representations and innate structures. So you have, for instance, Plato on the one hand, if we take this back to ancient Greece; that would really be the rationalist point of view, with this idea that there exist innate representations in ourselves and ultimately we can approximate them with reasoning. Whereas other people, and I think that the whole study of AI is really on the extreme empiricist side, say let's just take a blank system and press a lot of data onto it, and ultimately this system will manage to perform a task.

And what is interesting nowadays is to see that irrespective of whether the representations are innate or acquired through

exposure, not necessarily culture, but even just sensory data,

they seem to at least have

some similarities.

This is what I think is interesting in the case of this comparison between AI and the brain for language.

The brain is obviously structured very differently

to these AI algorithms.

And obviously, there must be some innate structuring in our brain.

This is why only humans have language, in the sense of being able to combine words together

in order to reason and to communicate.

This is not a problem.

You remind me of the New Yorker comic, two dolphins swimming together.

Right.

They're in a water show.

Oh, okay, like the SeaWorld technology.

SeaWorld and they're swimming together.

One says to the other.

Of the humans.

They face each other and make sounds, but we're not sure they're actually communicating.

Right.

That's pretty funny.

But there are a lot of experiments.

Their brain's bigger than our brain.

Yeah.

For some of them, not all of them.

But so the reason why there is, I mean, there's been a lot of experiments on behavior with dolphins, but also with apes, to try to see whether they would be able to combine concepts.

And there are some experiments that show that in some edge cases they are able to do this.

But for now, we don't have any evidence suggesting that you have any other species that can learn this vast amount of concepts and be able to combine these concepts together in order to produce a sentence or to understand a sentence, a new meaning that they've never heard before.

So this ability must be to some extent input in our genome

and be

an innate structure.

Well, it has to be.

I mean, we're the only...

And it's so funny because it's disassociated from everything else that we are and have, language.

For instance, I can be deaf, dumb, and blind, and you can teach me any language.

I don't have to have an actual reference like everybody else does.

So, I mean, we are truly unique in the way that we do communicate.

I don't know if, I mean, I'm sure other animals communicate too.

I'm not sure.

Yeah,

all animals communicate, but we're very unique in that if I don't know how to communicate with somebody I meet from halfway across the world, we will find mediums that allow us to know each other's language.

Right.

And this is coming back to the whole empiricist versus rationalist tension.

This is why there is something very interesting here.

So we established that the human brain must have some genetic or innate properties for it to acquire language.

And this is why it differentiates itself in part

from other animals.

And we also know that these deep learning algorithms, they have very few of what we call inductive biases.

The architectures that we use in deep learning, they are remarkably blank and versatile.

And so it's really the data with which they are trained that pushes them to build the representation that they have.

And nevertheless, it seems to be comparable, at least to some extent, to those of the brain.

Not in every way, but in some ways.

And so that suggests that no matter where you come from, whether you come from this really rationalist type of approach to cognitive science or much more from an empirical, empiricist

approach,

there seems to be some sort of convergence between these two approaches.

What I want to get into now is the application of your research, where it could go as we progress with this.

Now I said in the opening about people who can think but can't speak.

Is there an opportunity with this research to give them a voice, to have their understanding made public, made known?

Right.

So you have indeed a lot of patients who suffer from an inability to communicate typically because of a brain lesion, so either a traumatic brain injury or anoxia, that will lesion

the part of the brain which is responsible for, for instance, motor control.

So they will be paralyzed or

lose the ability to make facial movements.

And there are now a few teams that have shown that it is possible to put a set of electrodes in the motor cortex and to use these neural signals to feed an algorithm that can then be used

to do a brain-to-text translation and allow

the individuals to regain communication abilities.

So, this is already something which is happening with invasive approaches, so with electrodes

which are implanted with neurosurgery.

One of the goals, of course, is to try to see whether it would be possible to push this approach with non-invasive devices, which do not require brain surgery, in order to rehabilitate communication in patients, but also perhaps to diagnose.

So sometimes you have patients who do not respond, but they are awake.

It's a paradoxical state which occurs sometimes after a coma.

And in these patients, you want to know whether they don't communicate because they're not conscious of the environment or whether they don't communicate because they are fully paralyzed, for instance.

Or they just don't like you.

And perhaps

they just don't want to, which is actually an issue, right?

If you have lesions to the parts of the brain that are intrinsically linked to motivation, that could also be a cause. So having devices that would allow us to communicate, but even to allow us to know whether indeed they are conscious or not of the environment, is certainly of prime use.

Are we likely to find that sort of ability in the near future or are we having to wait?

For invasive electrodes, this is already happening.

Yeah, you know our boy Elon, that's what he wants to do. Neuralink. Yeah, I want to put a link, I want to put a chip in everybody.

But everyone... you can do that through the vaccines.

Yeah, that's... well, no, that's Bill Gates. Let's get our billionaires straight, okay? Elon wants to put a chip in your head, I mean, an electrode in your head. Bill Gates already did it.

Tron Aries has arrived.

I would like you to meet Ares, the ultimate AI soldier.

He is biblically strong and supremely intelligent.

You think you're in control of this?

You're not.

On October 10th.

What are you?

My world is coming to destroy yours, but I can help you.

The war for our world begins in IMAX.

Tron Aries, rated PG-13, may be inappropriate for children under 13.

Only in theaters, October 10th.

Get tickets now.

This episode is brought to you by Progressive, where drivers who save by switching save nearly $750 on average.

Plus, auto customers qualify for an average of seven discounts.

Quote now at progressive.com to see if you could save.

Progressive Casualty Insurance Company and affiliates, national average 12-month savings of $744 by new customers surveyed who saved with Progressive between June 2022 and May 2023.

Potential savings will vary.

Discounts not available in all states and situations.

You know, between work and workouts, who has time to read every label in your food and supplement routine?

Guess what?

Now you don't need to.

Thanks to Thrive Market.

Their on-site filters do it for you.

Organic, low-sugar, high-protein, you name it, they can filter it.

Then your pantry staples arrive fast and here's the best part, to your door.

Their groceries are high quality and no junk.

Over 1,000 sketchy ingredients restricted so you can shop worry-free.

You know, summer is a time of letting go, but it's time to get back to your regular routine.

And Thrive Market makes healthy grocery shopping effortless and affordable, giving members clean, high-quality groceries delivered to their door. No label confusion, no stress, no markups. Looking to cut out artificial dyes, processed sugars, or try out some vegan alternatives? Thrive Market has a ton of on-site filters so you can easily filter your preferences.

For me, it's all about not messing up my workouts.

So what do I want?

High-protein, tasty snacks that I can snack on without getting my trainer upset with me.

Go to thrivemarket.com slash StarTalk to get 30% off your first order and a free $60 gift.

That's thrivemarket.com slash StarTalk.

I think the limits of the technology, as Jean-Rémi has pointed out, will be reduced because of AI, and they'll find solutions sooner.

But it's the ethics of being able to potentially sort of decode the brain's messages and then reverse engineer it so that you can read someone's mind.

It's the ethics of that being possible because I think that's going to freak not just Chuck out.

You know, I'm going to be honest though.

Isn't that already happening when you look at metadata that's taken from our phones and our location and other phones that are around us?

Couldn't you pretty much tell what I'm thinking?

Well,

I don't know.

But what I can say is these are certainly topics that come up very often.

And there are several things to say.

The first thing is that...

What is possible today in terms of decoding brain activity is really limited to specific cases like perception and motor control.

And the reason for this is because

we know what the person sees, so we can attach the image to the brain patterns.

However, as soon as we try to do this in imagination, for instance, as we mentioned before, then things become drastically more difficult, not just because of an inability of the algorithm to work, but really because a signal is just not there.

That means it's not likely that you will anytime soon read someone's dreams.

Until you get a signal booster.

From a statistical point of view, for fundamental research,

there is research on the science of dreams.

However,

all of the evidence points to the fact that it will be very difficult, with the state of knowledge that we have, to have a device which can read your mind in the way that people think, like with your train of thought and all this.

And the reason for this is because even with the largest multi-million dollar type of devices that are being used, the signals remain extremely noisy and it's very difficult to go beyond this.

So the physics of the signal that we pick up is really the main constraining factor, not the AI algorithm part.

So the AI algorithms can be used as a useful tool, but in terms of the signal that you can pick up, this is...

...not enough to generate the input necessary for them to do a good job.

Yeah, the data that can be collected with these devices remains extremely, extremely noisy.

And so from that point of view, the risks seem limited.

Now, this is the current state of affairs, but our role here as scientists is also to say

what is possible, what is the state of the art, and to share this through the research, through open sourcing and all this.

That's the reason why we do this work.

In science, you're always limited by your signal to noise.

Absolutely.

You'd have to add up days, weeks, months of measurements to pull a signal out of that noise.

But then, this only works if you have the same signal that comes up again and again.

Whereas when people think of mind reading, they think of reading the mind at a given instant.

You don't think of the same thing again and again and just repeat this until your noise averages out.

So this is why, currently, all of the evidence suggests that there is not a systemic risk.

However, technology continues to evolve, and we want to make sure that the risks are limited. And this is also why we engage in this kind of discussion, of course, to ensure that the discussion does not just happen within the scientific community but with the rest of the...

So when is the time to make that determination? Is it now, before you actually have the equipment to measure this, or...

The determination as to the ethics, like codifying the ethics themselves, guardrails going forward. When do you come up with those guardrails? Because if you come up with them after you're able to do it, it's, you know, the horse is out of the barn, as they say.

Yeah, absolutely. So this has already started, right? There is already a lot of regulation on what you can and cannot do.

For instance, I work in France, and so we have the GDPR in Europe that constrains the way the data that is being collected from brain imaging can be used.

In France, for instance, you're not even allowed to do neuromarketing.

You're not allowed to use brain data for marketing purposes.

So this discussion is obviously already engaged.

And along the way, we need to continue and update these decisions with the state of knowledge that continues to evolve.

Yes.

What was the movie? Minority Report.

Yeah, where they had these sort of pre-cogs.

Pre-cogs, yeah, precogs.

I mean, that's a great movie. That's everyone's default thought as regards this research and where it leads to, and I think that's what scares them. And I think they'd be grabbing not just for the guardrail. I mean, do you look...

Well, precogs, you were not digging out of their head what they saw from the past; you were digging out of their head what they foresaw in the future.

In the future.

Right.

So that was different.

So they would see you committing a crime that you haven't yet.

That you haven't committed yet, but you were definitely going to commit.

And then they'd just go arrest you.

Right.

They started doing that.

It's talking about immigration in America right now.

Oh, how interesting.

You pre-arrest people.

You just pre-arrest people.

So what is the end game for Meta in this?

I actually don't know why Meta hired me in the first place.

I can only tell you what we're trying to do

within our team.

So the goal here is, well, the goal is well posed, right?

We have now some preliminary evidence suggesting that you have similarities between AI systems and the brain, and that suggests something which to me is very intriguing: that there exist these general principles that shape the information processing in AI systems and the brain. So discovering what those laws are, and also trying to understand what is missing in AI systems for them to be as intelligent, as efficient as us, remains a major topic of research.

So this is why we're pushing on this frontier to better understand the brain and make better AI algorithms.

So if you're able to achieve that, people are going to feel an invasion of privacy.

They're going to feel thought security becomes potentially compromised.

I mean, you said you've, you know, there's discussion over the ethical point of view.

Are we looking at those sort of features as well?

So, yeah, it's the same topic that we briefly discussed before.

We have an ongoing discussion to try to see whether AI and neuroscience developments are changing the risks associated with, for instance, mental privacy.

As of now, the discussion is ongoing, but I don't think we have a change in the technology that

changes the risk.

What we observe is that it is possible to decode brain activity in certain cases, typically for motor control or for visual perception, but it is not possible to decode what you are thinking at a given moment or your train of thoughts or to extract your password from brain activity.

The reason for this is because

the signal that we have.

That's the thing that just takes the password out of your head.

Right.

Like the readers that they have now that steal your credit card shit right through the, uh, radio frequency.

All of the physics on which we base this analysis prevents us from working outside of the lab, right?

So with an MRI, you need to plunge someone into a very high magnetic field.

This is not something that can be translated for, I don't know, consumer products.

But what you could envision is a dystopic future where the state who has the power and the money to actually have a machine that could read your brain and during an interrogation extract information from you that you don't want to give up, you know, basically like, you know, you violated my mental privates.

So, you know, that's actually foreseeable based on just what we talked about today.

Yeah, I mean, if we go down to the dystopic possibilities, I suspect that the state will not need an MRI to force you to give away your password.

But it is an important point.

Good point.

Just like, I'm not going to get in the MRI machine.

I refuse.

Just like, yeah, okay, yeah.

This baton says different.

But still, if the risk does exist, we should try to characterize it, to understand what the path to this risk is.

And this is part of the scientific enterprise too.

Cool, man.

You've got some new research that you're about to release into the public domain.

Can you sort of expand upon that for us please?

Sure, yeah.

So so far we've done this comparison between AI systems and the brain with adult participants.

And to some extent this is frustrating because there is something which is missing here in the picture, which is the learning process, right?

So in the case of language, we don't just want to understand how the brain processes language, but we want to understand what makes it able to acquire it so efficiently.

Like with just a few words,

we acquire language.

The average number of words that we hear is typically around a few thousand per day, a few tens of thousands per day.

And if you compare this amount of data to the data which is input for the training of AI models, this is really a droplet of information compared to the oceans of data that these algorithms use.

And so what this means is that, fundamentally, the architectures or the training principles that we use for AI are really, really mediocre, right?

We need to understand much better how you can get to a system that learns much more efficiently.
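
A rough back-of-the-envelope comparison using the numbers mentioned here (all values are loose assumptions, not measurements): even two decades of hearing tens of thousands of words a day adds up to orders of magnitude less text than an LLM's training corpus.

```python
# Rough arithmetic, nothing more: child language exposure vs. LLM training data.
words_per_day_child = 20_000          # "a few tens of thousands" of words heard per day
years_of_exposure = 20
child_total = words_per_day_child * 365 * years_of_exposure   # ~150 million words

llm_training_words = 2e12             # "trillions of words" for a modern LLM

print(f"child hears   ~{child_total:,.0f} words")
print(f"LLM trains on ~{llm_training_words:,.0f} words")
print(f"ratio         ~{llm_training_words / child_total:,.0f}x more data")  # ~10,000x
```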

So if you train your AI on children,

you may end up learning how we actually learn or acquire language, but then you're also going to have AI saying things like, I hate you so much, I hate you, you never let me do anything.

But certainly it would be important to understand, not necessarily to train AI models with this data, but to understand the principles that allow young children to acquire language so efficiently.

This is one of the big marvels of our species, and this is certainly what we're trying to avoid.

Yeah, so this is actually work with a hospital, the Rothschild Hospital in Paris, that has a unit for epilepsy

and young patients down to two years old.

So you have these patients who suffer from intractable epilepsy, again the same kind of patients as we mentioned before, who have electrodes that are implanted inside the brain in order to identify the location that is generating the seizure, and who can stay for about a week in the hospital and listen in that context to audiobooks.

And then we can time-lock the brain responses to each individual word to try to understand how the representations of language are processed in

these young patients and how this evolves with age.

Let me see if I can offer a perspective here.

I'm as big a champion of AI as the next person, but I still enjoy being human.

And

whatever I can do to distinguish being human from a machine, I will embrace, leaving me to wonder whether the true creativity of what it is to be human may actually lurk within the noise that can never be read by a machine.

The first person to paint an impressionistic representation of reality,

could a machine have had that first thought?

Or is that a human being rummaging within the noisy confusion of our own brains, pulling out something that no one had done before and no one had imagined before,

and in the end, genuinely creating that which is human and can never be machine?

I just wonder.

That is a cosmic perspective, and that was beautifully said. Except the first impressionist was just some dude who was nearsighted.

It's all just fuzzy. It's not just fuzzy, it's just how he saw the world.

People were like, what an incredible interpretation. He's like, what are you talking about?

So, Jean-Rémi, thank you for visiting.

Oh, yeah.

All right, Chuck, always good to have you, man.

Always a pleasure.

All right, Jean-Rémi.

Pleasure, Neil.

Thank you.

Thanks to you and Lane and others for coming up with these topics.

Oh, they keep coming up with them, so we're going to keep finding them.

We chase them down.

Yep.

All right.

This has been Star Talk Special Edition, Neil deGrasse Tyson, your personal astrophysicist.

Do keep looking up.

And then.

Did you know that the United States produces 13 million barrels of crude oil every day, enough to fill 800 Olympic swimming pools?

Oil and natural gas are refined into gasoline, diesel, and jet fuel and used to make unexpected everyday essentials like shoes, cell phones, even life-saving medicines.

People rely on oil and gas and on energy transfer to safely deliver it through an underground system of pipelines across the country.

Learn more at energytransfer.com.

Honey, do not make plans Saturday, September 13th, okay?

Why, what's happening?

The Walmart Wellness Event.

Flu shots, health screenings, free samples from those brands you like.

All that at Walmart.

We can just walk right in.

No appointment needed.

Who knew we could cover our health and wellness needs at Walmart?

Check the calendar Saturday, September 13th.

Walmart Wellness Event.

You knew.

I knew.

Check in on your health at the same place you already shopped.

Visit Walmart Saturday, September 13th for our semi-annual wellness event.

Flu shots subject to availability and applicable state law.

Age restrictions apply.

Free samples while supplies last.