Your Brain on ChatGPT with Nataliya Kosmyna

1h 13m
What happens to your brain when you use AI? Neil deGrasse Tyson, Chuck Nice, and Gary O’Reilly explore current research into how large language models affect our cognition, memory, and learning with Nataliya Kosmyna, research scientist at the MIT Media Lab. Is AI good for us?


Transcript

This episode is brought to you by Twentieth Century Studios' new film, Springsteen, Deliver Me from Nowhere.

Don't miss the movie critics are raving is the real deal: an intelligent, deliberately paced journey into the soul of an artist.

Scott Cooper, director of the Academy Award-winning movie Crazy Heart, brings you the story of the most pivotal chapter in the life of an icon, Springsteen, Deliver Me from Nowhere.

Only in theaters, October 24th.

For a limited time at McDonald's, get a Big Mac extra-value meal for $8.

That means two all-beef patties, special sauce, lettuce, cheese, pickles, onions on a sesame seed bun, and medium fries, and a drink.

We may need to change that jingle.

Prices and participation may vary.

Chuck, as if the forces of AI are not big enough in society and our culture, we've now got to think about what AI's effect is on our brain.

I'm going to say that there is no help for my brain.

So it does not make a difference.

I know, but Neil, if you lean into these large language models and it takes away some of our core skills, surely there can't be an upside to that.

Once again, Gary, not going to affect me at all.

Coming up, Star Talk Special Edition, your brain on AI.

Welcome to Star Talk,

your place in the universe where science and pop culture collide.

Star Talk begins right now.

This is Star Talk, Special Edition.

Neil deGrasse Tyson, your personal astrophysicist.

And when it's special edition, you know that means we have Gary O'Reilly in the house.

Gary.

Hi, Neil.

All right.

We got another one of these.

We're going to

connect the viewer, listener to the human condition.

Yes.

Oh, my gosh.

But let me get my other co-host introduced here.

That would be Chuck Nice.

Chuck, how you doing, man?

Hey, man.

Yeah,

when you know it's Chuck, it means it's not special at all.

Oh.

But we've got you because you have a level of science literacy that, oh my gosh, you find humor where the rest of us would have walked right by it.

And, you know, that's part of our recipe here.

That's very cool.

Yeah, I appreciate that.

Yeah.

So, Gary, the title today, is AI Good for Us?

Okay, well, here's the answer.

No, let's go.

No, okay.

That's the end of the show.

Let's all go home, people.

This was the quickie of the night.

This was very quick.

I mean, yeah.

You know.

So, Geek, what have you set up for the day?

Well, Lane Unsworth, our producer over in the LA office and myself, we sort of noodled.

And this is a question that's been bouncing around a lot of people's thought processes for a while.

So, all over the world, people are using LLMs, large language models, for their work, their homework, and plenty more.

Besides discussions of academic dishonesty and the quality of work, has anybody actually taken the time to stop and think about what this is doing to our brains?

Today, we are going to look at some of the current, and I really do mean current time and space, this moment, research into the impact that using an AI tool can have on your cognitive load, and the neural and behavioral consequences that come with it.

And the question will be, does AI have the opportunity to make us smarter or not?

I like the way you phrased that, Gary.

It was very diplomatic.

I know.

Smarter or not?

Or not.

And does it have the opportunity to do so?

Okay.

Smarter or dumber.

That's what you mean.

I didn't say those words.

Well, here on Star Talk, we lean academic when we find our experts.

And today is no exception to that.

We have with us Nataliya Kosmyna, dialing in from MIT.

Nataliya, welcome to Star Talk.

Thanks for having me.

Excited to be here with you.

Excellent.

You're a research scientist at the one and only MIT Media Lab.

Oh my gosh, if I had like another life and a career, I would totally be on the doorsteps there wanting to get a job.

And if I had another life and career, it wouldn't exist.

I'd shut it down immediately because let's be honest, science is a hoax.

People, some people do want you to believe that.

You know, it's like science has 99 problems and virality ain't one, right?

Right.

There you go.

And you're in the fluid interfaces group.

You are trained in non-invasive brain-computer interfaces, BCIs, and I'm guessing that means you put electrodes on the skull instead of inside the skull, but we'll get to that in a minute.

And you're a BCI developer and designer whose solutions have found their way into low Earth orbit and on the moon.

We want to get into that.

So let's begin by characterizing this segment as your brain on ChatGPT.

Let's just start off with that.

What a great topic, Neil.

Is there any way I can help you with that?

So you research

what happens when students use ChatGPT for their homework. What have you found in these studies?

Yeah, so we ran a study, and that's exactly the title, right: Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task. So we did a very specific task, which we're going to be talking about right now, which is essay writing.

We invited 50 students from the greater Boston area to come in person to the lab, and we effectively put those hats, as you just mentioned, on their heads to measure their brain activity while they're writing an essay.

And we divided them in three groups.

We asked one group, as you might already guess where that's heading, to just use chat GPT.

That's why the paper is called Your Brain on ChatGPT.

It's not because we are really, really singling out ChatGPT.

It's just because we use ChatGPT in the paper.

So it's purely scientific.

So we asked one group of students to use only ChatGPT to write those essays, another group to use Google, the search engine, to write those essays, and the third group to use their brain only.

So no tools were allowed.

And we gave them topics which are what we consider high-level, right?

For example, what is happiness?

Is there a perfect society?

Should you think before you talk?

And we gave them a very limited time, like 20 minutes, to write those essays.

And we finally, of course, looked into the outputs of those essays, right?

So, what they had actually written, how they used ChatGPT, how they used Google, and of course, we asked them a couple of questions: like, can they give a quote?

Can they tell us why they wrote this essay and what they wrote about?

And then there was one more, final fourth session in this study where we swapped the groups.

So, students who were originally in the ChatGPT group, we actually took away the access for this fourth session, and vice versa was true.

So if you were, for example, Neil,

you were not our participant, but if you were ever to come to Cambridge and be our participant, and let's say if you were actually.

I'm not putting anything on my head.

I'm just letting you know right away.

Okay.

Come on.

It's the future.

It's the future.

Now, the problem is he'd have to take off his tinfoil hat when he got there.

Yep, yep, I see.

I see that happening regardless.

So if you were, for example, our participant in the brain-only group, we actually, for this fourth session, would give you access to ChatGPT.

And again, we measured the exact same things: brain activity, what the actual output was, and asked a couple of questions.

And what we found are actually significant differences between those three groups.

So first of all, if you talk about the brain, right, we measured what is called brain functional connectivity.

So, in layperson's terms: it's like right here we have the three of you talking to each other, and talking to myself.

So that's what we measured.

Who is talking to who?

Am I talking to Neil or is Neil talking to you?

So directionality.

So who talks to who in the brain?

And then how much talking is happening?

Is it just, hi, hello, my name is Nataliya?

Or actually a lot of talking.

So a lot of flow of data is being exchanged.

So that's literally what we actually measured.

And we found significant differences.
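For readers who want to make that measurement concrete, here is a minimal sketch of pairwise EEG connectivity, the "how much talking" part. The study itself used directed connectivity measures; the undirected spectral coherence below is a simplified stand-in, and the sampling rate, band, and data are illustrative assumptions, not the paper's pipeline.

```python
# Sketch: "how much talking" between EEG channels via spectral coherence.
# The study used directed measures (who talks to whom); coherence here is
# an undirected stand-in, and all parameters and data are illustrative.
import numpy as np
from scipy.signal import coherence

fs = 256                                   # assumed sampling rate, Hz
n_channels, n_samples = 8, fs * 60         # one minute of toy data
rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_channels, n_samples))  # stand-in recording

# Average coherence in the alpha band (8-12 Hz) for every channel pair.
conn = np.zeros((n_channels, n_channels))
for i in range(n_channels):
    for j in range(i + 1, n_channels):
        f, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=fs * 2)
        band = (f >= 8) & (f <= 12)
        conn[i, j] = conn[j, i] = cxy[band].mean()

print("mean alpha-band connectivity:",
      conn[np.triu_indices(n_channels, 1)].mean())
```

A real pipeline would compare this matrix across the three groups; random noise like this just exercises the computation.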

And then some of those are ultimately not surprising.

You can think logically.

If you do not have any help, let's say you need to do this episode right now, right?

And I'm going to take away all your notes right now, all of the external help, and then I'm going to measure your brain activity.

How do you think it's going to turn out?

Your brain is going to be really on fire, so to speak, because you need, like: okay, what was your name again?

Where was the study?

What is happening, right?

You need to really push through with your brain, like you have memory activation, you need to have some structure.

And now you don't have notes for the structure of this episode, right? So you need, like: what was the structure, what did we do, what did we say, what are we talking about? You really have nothing to fall back onto. So of course you have this functional connectivity that is significantly higher for the brain-only group compared to the two other groups. Then we take the search engine group, Google. And actually, as prior research, there's a ton of papers about Google already.

We, as humanity, right, are excellent at creating different tools and then measuring the impact of those tools on our brains.

So there are quite a few papers we are citing in our paper.

For example, there is a paper, Spoiler Alert, called Your Brain on Google from 2008.

Literally, that's the name of the paper.

So we've actually found something very similar to what they found.

There would be a lot of activations in the back of your head.

This is called the visual cortex, or occipital cortex.

It's basically a lot of visual information processing.

So right now, for example, someone who is listening to us, and maybe they are doing some work in parallel, they would maybe have some different tabs open, right?

They would have, like, one YouTube tab, and other tabs with some other things that they're doing.

So, you know, you're basically jumping between the tabs, looking at some information, maybe looking at the paper while listening to us.

So this is what we actually see.

And there's plenty of papers already showing the same effect.

But then for the LLM group, for the ChatGPT group, we saw the least of these functional connectivity activations.

And that, again, doesn't mean that you became dumb, or that you become...

Yes, it does.

There are actually quite a few papers specifically having laziness in the title, and we can talk about this with other results.

But from brain perspective, from our results, it doesn't show that.

What it actually shows is that, hey, you have been really exposed to one very limited tool, right?

You know, there's not a lot of visual stuff happening.

Brain doesn't really struggle when you actually use this tool.

So you have much less of this functional connectivity.

So that's what we found.

But what is, I think, interesting, and maybe heading back to this point of laziness and some of these, I would say, a bit more nefarious results, are, of course, the other results that are relevant to the outputs, to the essays themselves.

So first of all, what we found is that the essays were very homogeneous.

So the vocabulary that was used was very, very similar for the LLM group.

It was not the case for the search engine group and for the brain-only group.

I'm going to give you an example and of course in the paper we have multiple examples.

I'm going to give you only one.

Topic happiness.

So we have LLM, so ChatGPT users, mentioning heavily the words career and career choice.

And surprise, surprise, these are students, I literally just mentioned this.

Of course, they're going to more likely talk about career and career choices.

And again, who are we ultimately to judge what makes a person happy, right?

No, of course.

But don't forget the two other groups, they are from the same category.

They are students in the same geographic area, right?

However, for them, these words were completely different.

For Google, for the search engine group, students actually heavily used vocabulary like giving and giving us.

And then the brain-only group was using vocabulary related to happiness and true happiness.

And this is just one of the examples.
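The homogeneity finding lends itself to a quick illustration. This sketch compares top vocabulary across groups with a simple word count; the essays below are toy stand-ins, not the study's data, and the real analysis used n-grams over the full corpus.

```python
# Sketch: vocabulary homogeneity check across groups. Essays here are
# toy stand-ins for illustration; the study analyzed n-grams over the
# 50 participants' real essays.
from collections import Counter

essays = {
    "llm":    ["happiness depends on career and career choice"],
    "search": ["happiness comes from giving and giving back to others"],
    "brain":  ["true happiness is the happiness you find within"],
}

for group, texts in essays.items():
    counts = Counter(word for text in texts for word in text.split())
    print(group, counts.most_common(3))
```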

And then finally, to highlight one more result, is responses from the participants themselves, from those students.

So we asked literally 60 seconds after they gave us their essays, can you give us a quote?

Any quote, any length, of what you had just written. It can be short, long, from anywhere in your essay, anything.

83%

of participants from the LLM, the ChatGPT group, could not quote anything.

That was not the case for brain and search engine groups.

Of course, in sessions two and three and four, they improved because surprise, surprise, they knew what the questions would be, but the trend remained the same.

It was harder for them to quote.

But I think the ultimately most dangerous result, if I can use this term, though it's not really scientific, and something that I think requires a lot of further inquiry, almost on a philosophical level, I guess, is the ownership question.

So we did ask them what percentage of ownership they feel towards those essays.

And 15% of ChatGPT users told us that they do not feel any ownership.

And of course, a lot of people, especially online, mentioned, well, they haven't written this essay.

Of course, they didn't feel any ownership.

But I think that's where it actually gets really tricky, because if you do not feel that it's yours, but you just worked on it, does this mean that you do not care?

We do not obviously push it that far in the paper, but I think this is something that definitely might require much further investigation.

Because if you don't care, you don't remember the output, you don't care about the output, then what ultimately is it for?

Why were you here, right?

Of course, it's not all dark and gloom and everything-is-awful and disastrous, right?

I mentioned that there's this fourth session.

Not everyone came back for this session.

So actually the sample size is even smaller for this.

Only 18 participants came back.

But what we found is that those who were ChatGPT users originally and then lost access to ChatGPT, their brain connectivity was significantly lower than that of the brain-only group.

However, those who were originally brain-only group and then gained access to ChatGPT, their brain connectivity was significantly higher than that of the brain-only group.

What it could potentially mean, and I'm saying potentially because, again, many more studies would be required, is that timing might be essential.

Basically, if you make your brain work first, and then you gain access to the tools, that could be beneficial.

But of course, it doesn't mean that it's one second of work of the brain and then you use the tool, right?

Something like, let's say, you're in a school, and maybe in the first semester you learn your base of whatever subject it is without any tools, the old-school way.

You didn't become an expert, right, in one semester of a school year, but you at least have some base.

And then let's say in the second semester, you gain access to the tool, right?

So it might prove actually beneficial.

But again, all of this is to be still shown and proven.

We literally have very few data points, but the tool is now being really pushed on us everywhere.

So you could be affecting best practice for decades to come, based on what a teacher might choose to allow in the classroom and not.

So what are you measuring?

You know, you put the helmet on.

Are you measuring blood flow, or is it

neuroelectrical fields?

In our case, we're measuring electrical activity.

So there's multiple ways of measuring that.

Is that the EEG?

EEG.

Yeah, electroencephalography, yes.

Okay, so that just tells you, and since we already know in advance, what parts of the brain are responsible for what kinds of physiological awareness, right?

And if you see one part of the brain light up versus another or no part light up, that tells you that not much is happening there.

Is that a fair way?

Yeah, it's

a bit simplified, but a kind of fair way.

And, this is very important, it doesn't mean that that part doesn't work, right?

Or, like, that it atrophied itself, like we saw in some

no, no, no.

It just means you started as a dumbass and you still are one.

Wait, whoa, whoa, what happened?

This guy's brain just went completely dark.

It doesn't go dark.

Like, listen, I'm going to give you one example, right?

It's like back to this crazy example of 3% of our brain versus 100%.

Like if you were to use not 100% of your brain, like literally.

We would not have this conversation right now at all. So it's very important to understand: we use our brain as a whole. Of course you can...

Oh, of course.

No, we're not.

We are way past that.

Yeah,

we're not in that camp.

That was just a joke.

We understand that your brain is constantly working

a lot of it, actually,

just to run your body.

So, you know, takes up a lot of energy.

It takes up a lot of energy.

But back to the energy, and I think this is like super important.

It still takes much less energy than even, you know, 10 requests from ChatGPT or from Google.

And this is beautiful, because our body, right, so imperfect as a lot of people call it, and our brain, so imperfect, which is a very old, ancient, as some people say, computer, is still the most efficient of machines that we all have, right?

And we should not forget that.

People and all of the AI labs right now around the world try to mimic the brain.

They try so hard; you can see it in all of those preprints on arXiv, the service that hosts those papers.

How can it be similar?

Can we ensure that this is similar, right?

And so there is something to it, because we are actually very efficient, but we are efficient almost to the limit, with shortcuts that actually make us, in a lot of cases, a bit too efficient, right?

Think about it: hey, you really want to look for these shortcuts to make things the easiest.

The whole goal of your brain is to keep you alive, not to use ChatGPT or an LLM, not to do anything else. No, that's the only ultimate goal: let's keep this body alive, and then everything else adds on, right? And so this is how we are running around here; we are obviously trying to figure out how we can make the life of this body as easy as we can.

So of course these shortcuts are now, as you can see, used in a lot of social media, which is obviously heavily talked about. We know some of those dark patterns, as they are known, are heavily used, and some of them are designed by neuroscientists, unfortunately, because it feeds back into the needs of the brain.

Constant affirmation, fear of missing out.

All of those are in our original design by nature, right?

Phenomena.

And of course, now we can see that LLMs would be, and are, getting designed around those as well.

Wait, Nataliya, just a quick insert here.

So, I had not thought to compare, just as you described, the energy consumption of an LLM request

in ChatGPT and the energy consumption of the human brain to achieve the same task, for example.

Are you factoring in that I can say, write me a thousand-word essay on

Etruscan pottery?

Okay.

And 30 seconds later, here it comes.

And you can go to the servers or whatever, or the CPUs and look at how much energy that consumed.

Meanwhile, I don't know anything about Etruscan urns, so I will go to the library, and it'll take me a week.

Can you add up all the energy I did expend over that week thinking about it and then compare it to ChatGPT?

Do they rival each other at that point?

So definitely, that's an excellent point, right?

So theoretically, to answer your question, we can, right?

The difficulty actually would be on the LLM part, not on our part, because we do not have, you know, there's a lot of these reports, right, in the LLM consumption per all of these tokens for the prompts right but what a lot of companies well actually no almost no companies are releasing is what it took for training right so for you it took 30 seconds of thinking and I hate hate hate this word thinking when we use it for LLMs right that's not thinking right but like let's let's keep it for now thinking that's what you see on the screen but ultimately you do not know neither you nor myself there is no public information how long it took for you to be trained to actually give you some pottery most likely my my assumption this is obviously subjective i do not have data so i need to be very clear here but my estimate from overall knowledge that is available you going for a week to the library not gonna be more beneficial for your brain because you will talk to other people get in this chart of the library and all of the process information your brain will struggle your brain actually does need struggle.

Even if you don't like it, it actually needs it.

You will learn some random cool things in parallel, maybe excluding pottery, and that will still take less energy,

for your whole body to work, right, than actually that 30 seconds of the pottery from ChatGPT. Again, very important here as a note: we do not have the data from the LLM perspective, so this is just my subjective estimate.
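Her caveat about the missing training data is the crux, and a back-of-envelope comparison shows why inference-only numbers alone can mislead. Both constants below are rough public estimates, assumptions for illustration, not figures from the episode or the paper.

```python
# Back-of-envelope energy comparison. Every constant is a rough public
# estimate (an assumption), not a figure from the episode or the paper.
BRAIN_WATTS = 20.0    # approximate average human brain power draw
QUERY_WH = 0.3        # one commonly cited per-query inference estimate, Wh

week_of_thinking_wh = BRAIN_WATTS * 24 * 7   # brain, one full week
ten_queries_wh = 10 * QUERY_WH               # inference only, no training

print(f"brain, one week : {week_of_thinking_wh:,.0f} Wh")
print(f"10 LLM queries  : {ten_queries_wh:.1f} Wh (inference only)")
```

On inference alone the queries look tiny; her point is that the undisclosed training energy, amortized per query, is the missing term in exactly this comparison.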


The universe operates on elegant principles, from quantum mechanics to cosmic evolution.

But what happens when you want to explore the deeper questions that keep you up at night?

Claude is the AI thinking partner for curious minds.

Whether you're modelling stellar formation or exploring the intersection of physics and philosophy, Claude helps you dive deeper into the cosmic puzzles that fascinate you.

Try Claude for free at claude.ai slash star talk and see why the world's best problem solvers choose Claude as their thinking partner.


I'm Joel Chericho, and I support Star Talk on Patreon.

This is Star Talk with Neil deGrasse Tyson.

So Nataliya, you've obviously chosen essay writing for a reason.

It is a real challenge on a number of levels.

Your research is fresh out of the oven.

It's June 2025, and we're only a couple of months down the road from there as we speak right now.

Can you explain to us cognitive load, and then cognitive load theory, and how it blends in and how it sits with your research?

Please.

Absolutely.

So just to simplify, right?

So what actually happens is that there are different types of cognitive load, right?

Actually in the paper, we have a whole small section on this.

So if someone actually wants to dive into that,

that would be great.

There are different types of cognitive load.

And the whole idea is that it's how much effort, right, you need to stay on the task or to process information in the current task.

For example, if I stop talking the way I'm talking right now and just start giving you very heavy definitions.

Even if you're definitely interested in those, it will be just harder for you to process.

And if I were to put this brain-sensing device on you, right, the EEG cap that I mentioned, you would definitely see that spike, because you would try to follow, and then you'd be like: oh, it's interesting, but it really gets harder and harder if I just throw a ton of terminology at you, right?

So that's basically it, and this is just a simplification, right? Definitely check the paper; there's so, so much more to it. The idea for cognitive load and the brain, though, and this was already studied before us, so not in our paper, we just talk about it, but there are multiple papers, and some of them are cited in our paper, is that your brain, in learning specifically, but also in other use cases, and we are talking right now about learning, actually needs cognitive load. You cannot just deliver information on this, like, platter: here you go, here is information.

There are studies already pre-LLM, so pre-large language models, pre-chatbots, that do tell you that if you just give information as is, a person will get bored real fast.

And they'll be like, yeah, okay, whatever.

There will be less memory, less recall, less of all of these things.

But you actually should struggle for the information at a specific level, right?

It should not be very, very hard.

So if you are cognitively overloaded, that's also not super good, because basically you can give up, right? There's actually a very beautiful study from 2011, I believe, actually measuring pupil dilation: so, literally, how much the pupil dilates when you are given very hard-to-understand words and vocabulary. And you literally can see how, when the words become longer and harder, it kind of shuts down, like it's giving up: I'm done here processing all of that, I'm just going to give up, right? So you don't want to get a student, or someone who is learning something new, to this give-up point.

Now, with these tools, information is already delivered to you within 30 seconds or 3 seconds or 10 seconds, and you haven't really struggled.

There is not a lot of this cognitive load.

And a lot of people would be, but that's awesome, right?

That's kind of the promise of these LLMs and a lot of these tools.

But we do not want to make it too simple, right?

We do not want to take away this cognitive load.

And it sounds almost like, with cognitive load,

don't we want to take it away?

No, you actually do not want to take it away.

What you're describing right now is the

basis for all video game design.

Yes.

That's what you're describing right now.

What they want to do is make it just challenging enough.

If it's too challenging, you give up on the game.

But if it's too easy, you also give up on the game.

But if it's just challenging enough so that you can move to the next level and then struggle a little and then overcome the struggle, they can keep you playing the game for very long periods of time.
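Chuck's analogy maps onto a standard technique, dynamic difficulty adjustment. A minimal sketch, with illustrative thresholds: hold the player's recent success rate inside a target band, nudging difficulty up when the game is too easy and down when it risks the give-up response described above.

```python
# Sketch of dynamic difficulty adjustment: keep the recent success rate
# in a target band. Thresholds and step size are illustrative assumptions.
def adjust_difficulty(difficulty: float, recent_win_rate: float) -> float:
    if recent_win_rate > 0.8:                 # too easy -> boredom
        return difficulty + 0.1
    if recent_win_rate < 0.4:                 # too hard -> giving up
        return max(0.1, difficulty - 0.1)
    return difficulty                         # in the flow channel

level = 1.0
for win_rate in (0.9, 0.9, 0.5, 0.3):
    level = adjust_difficulty(level, win_rate)
    print(f"win rate {win_rate:.0%} -> difficulty {level:.1f}")
```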

And so it's a pretty interesting thing that you're talking about.

But what I'm interested in beyond that is when you talked about the cognitive load, I'm thinking about working memory.

Yeah.

But then I'm also thinking about the long-term information that's downloaded within me.

So let's say I'm a doctor, right?

And it's just like, oh, he's suffering mild dyspnea because of an occlusion in the right coronary, blah, blah, blah, blah, blah, blah, blah, blah, blah.

For a doctor, that's a lot of information, but they're so familiar with the information, it's not a stress on their working memory.

So how does that play into, in other words, how familiar I am with the information already and like how well I can process information naturally.

How does that play into it?

And Chuck, did you just describe your own condition?

I don't know what you said.

You were way too fluent at it.

Yeah, even like Dr. House.

Oh, my God.

He knew.

Neil, you are too damn funny.

But guess what?

You're right.

How about that dog?

By the way, I could have kept going.

That was only one problem that would happen, but go ahead.

It's actually perfect, right?

It was a perfect example right now of this conversation between Chuck and Neil because Neil is like, I have no idea what you just said.

Maybe it's a nonsense.

Maybe it's actually real stuff.

It's perfect.

If you have no idea, so you are basically a novice, right?

So you have no base.

You can really be like, what is happening?

You will have confusion.

You will have heightened cognitive load, right?

You would be like, have I heard of anything like that before?

So you will actually try to do a recall.

Like: okay, I haven't heard it, it's not my area of expertise, what is happening here? And obviously, because you heard all of these words that you have no idea about, and if the topic is of interest to you overall, you will try to pay attention, make sense out of it, maybe ask questions, et cetera. But if you are effectively trained on it, right, so you're a doctor, you are a teacher, you are an expert in the area, we see that there are significant differences. Well, first of all, because you obviously know what to expect. So this expectation, vocabulary expectation, right?

With some of the conditions, there is an expectation when someone is coming into an ER, and what they are expecting, like, is the doctor who is there.

They saw it all or maybe almost all of it.

So they're actually having a good, a rough idea what they are expecting, right?

So you're kind of comparing this constantly.

The brain just does it.

And of course, it is...

more comfortable for them, right?

But it's great that you brought up doctors, actually, because, back to the doctors, there was actually a paper a week ago in The Lancet, which is a very prestigious medical journal, actually talking about doctors in the UK. Yes, right. And they apparently pointed out that after four months of using an LLM, there was actually a significant drop in recognition of some of the polyps, or, I don't remember, is it polyps or something else related maybe to cancer, and also X-rays, right, when they used an LLM. So it's back to this point, right? We are suggesting to you a tool that's supposed to augment your understanding, but then, if you are using it, are we taking the skill away from you? Especially in the case of the current doctors, who learned it without this tool, right? And now, what will happen with these doctors, with those kids, those babies that are born right now with the tool and will decide to become doctors and save lives? They will be using the tool from the very beginning.

So what are we going to end up having in the ER, in the operating rooms?

That's a great question here.

So it's definitely this drop, right, in skill set for these doctors in that paper.

That's scary.

Yeah.

Okay, so let's look at it from another angle.

If AI tools can,

as we lean into them, take a greater load, does that not free up some mental energy that our brains will then begin to learn how to utilize? While they let the LLM tool work one way, our brains will learn to work in another way,

so the two work together. Is that possible? That's my kind of hope in all of this.

Well, I mean, you know, I'm an expert at buggy whips, and then automobiles replace horses, so now we don't need buggy whips. But then I become an expert in something else. Become a dominatrix.

Still with the buggy whip. There you go.

Your mind didn't travel,

did it?

Sell him to a different clientele.

See,

this is the human condition, Neil.

This is adaptability.

Yeah, so is it just another, you know, as they say, same shit, different day as what's been going on since the dawn of the Industrial Revolution?

I am actually doing horseback riding professionally, so I'm going to pretend I haven't heard anything in the past two minutes.

But I mean, back to it, we can definitely talk about the skill set and expert level, right, and all of that, and how important it actually is to include the body and the environment.

But to your point, right?

Effectively, so first of all, right, there are actually two sides to answer your question.

There is right now no proof that there is anything being freed per se.

People definitely say: it's going to free this, it's going to free that. Like, what exactly is that thing being freed?

Like, we literally have no data.

Can it free something?

Sure, but we don't know what, for how long is it useful, how we can rewire it.

We don't have any of this information. So, potentially yes, but hard to say. But more importantly, right, okay: if you are right now using an LLM, just practically speaking, let's say you're using an LLM to write a book, right? You're writing a book, so you're doing some heavy research. You send it off to do, what, deep research or whatever it's called these days; each day there's some new term. There you are. But what exactly are you doing? You still kind of monitor the outputs coming back. It doesn't really release you. Maybe you went off to do something else, and you think in your head that you fully offloaded that task, but your brain doesn't work like that. Your brain cannot just drop it: oh, I was thinking about this, and now I'm thinking about that. Your brain actually takes quite some time to truly release from one task to another, even if you think: I just put in, like, explain to me the principles of horseback riding, and I just went to do this other task, write this report for my manager, whatever, a completely different thing.

And you think you're good, but you're not actually, your brain is still processing that.

So it's not that there will be a gain, right?

But again, we do need more data.

Because of course, as I mentioned in the very beginning, we as humanity, we are excellent in creating tools.

And these tools, as we know, they do actually extend our lifespan very nicely.

But I would argue that they are not actually the most cognitively supportive in most cases.

So I think that here we have a lot of open questions.

We have studies about, for example, GPS, right?

Everyone uses GPS, and there are multiple papers about GPS out there.

They do specifically show that this dosage, so how much you use GPS, does have a significant effect on your spatial memory and on your understanding of locations, orientation, and picking up landmarks, so buildings around you.

Or, what is this: you literally just saw something in, like, a tour guide online, and you will not be able to recognize it as the building in front of you right away; you need, like, to pull up the photo as an example. And there are plenty of papers that actually looked into these tools, right?

So what you're saying is, we need ChatGPS.

Maybe we don't need chat.

We already have one, right? We have a class of GPS, and you have Uber and obviously all of these other services. And the problem, right, it's again back to how they are used, because there's also a lot of, you know, manipulation in these tools, right? It's not just: we are making this drive easier for you. Somehow, when I'm going to a hospital here to see patients, because I don't only study how we use the LLMs, I do a lot of other, you know, projects, so when I'm going to that hospital here, Massachusetts General, it takes me one hour, always one hour, in an Uber. If I'm driving, it takes exactly 25 minutes, somehow, right? And again, the question is, why is that, right? We're not going to go into Uber right now, but again, this is back to the idea of the algorithms, what the algorithms are actually being pushed to do and what they're optimized for. And I can tell you, not a lot of them are optimized for us, or for the user, or for human-first.

Yeah, it's funny, because there's nothing more, I'll say, satisfying than not listening to Google Maps and getting there faster.

You know, just like: take that, Google Maps. Look at that.

Yeah, you didn't know that. You didn't know about that, did you?

You didn't know about that road. Yes, you didn't know about that.

natalia you've got students writing essays so that means somebody has to mark them yes and you used both a combination of human teachers to mark and ai judges why why was it important to bring those two together to mark and how did you train because the ai judge would have to be trained to mark the papers So you're getting a little meta here.

Yeah.

So, well, first of all, right, we felt that we, of course, well, we are not experts.

I would not be able to rank those essays right in this topic.

So, I felt that the most important thing was to get experts here who actually understand the task, understand what goes into the task, and understand the students and the challenges of the time.

So, we actually got two English teachers who had nothing to do with us: never met us in person, not in Boston whatsoever, had no idea about the protocols.

The experiment was long done and gone by the time we recruited and hired them.

And we gave them just a minimum of information.

We told them, here are the essays.

We didn't tell them about different groups or anything of the sorts.

We told them: these folks, no one is majoring in any type of English literature or anything that would be relevant to language or journalism or things like that.

They only had 20 minutes.

Please rank, reconcile, tell us how you would do that.

We felt it's very, very important to actually include humans, right?

Because this is the task that they know how to rank how to do but back to ai right what why we thought it's interesting to include ai well first of course to a lot of people actively push that ai can do this job very well right that hey i'm gonna just upload this they really great with all of these language outputs they will able to rank and how you do this you actually give it a very detailed set of instructions right how would you do that and what things to basically you need to carry about like that these had 20 minutes, right?

So, something very similar to teaching instructions, just like more specific language.

We actually show in the paper exactly how we created this AI judge.
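The paper documents the exact judge prompt; as a generic illustration of the LLM-as-judge pattern she describes, a sketch might look like the following. The model name and rubric are placeholders, assumptions for illustration, not the study's actual judge.

```python
# Sketch of a generic LLM-as-judge setup. The rubric text and model
# name are placeholder assumptions, not the study's actual judge.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = (
    "You are grading an essay written by a student in 20 minutes. "
    "Score 1-5 on structure, argument quality, and vocabulary breadth, "
    "then give one sentence of justification."
)

def judge(essay: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": essay},
        ],
    )
    return response.choices[0].message.content

print(judge("What is happiness? Happiness depends on career choice..."))
```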

But there were actually differences between the two, right?

So, human teachers, when they came back to us, well, first of all, they called those essays, a lot of the essays coming from the LLM group, soulless.

That's a direct quote.

I actually put a whole long quote in the paper.

Soulless.

Yes.

That is a very human

designation to call something soulless.

The AI judge never called anything soulless.

Well, I'm sure.

Did the AI judges go,

this kind of looks like Peter's writing?

No, but that's the thing, right?

Teachers, and this is super interesting because these teachers obviously didn't know these students.

They're again not coming from this area whatsoever.

So they actually picked up when it was the same student writing these essays throughout the sessions, right?

For example, Neil, you were like, you're a participant.

So I'm like taking you as an example as a participant.

So they were like, oh yeah, this seems like it's the same student.

So they picked up on these micro-linguistic differences. It's like, you know, a teacher knows you.

You can like fool around.

They know your work.

They will be able to say, okay, that's yours.

And this is copy-pasting from somewhere else or someone else.

And interestingly, they said, did these two students sit next to each other?

We were like, oh, no, no, no, the setup is like one person in a room at a time.

Like, we didn't even think to give them this information.

We're like, oh no, no, it is not possible in this, in this use case.

So they literally saw it themselves, this copy-pasting, this homogeneity that we found.

They saw it themselves, right?

But interestingly, the AI judge definitely was not able to pick up on the similarity between the students, right?

Picking up that, oh, this is, for example, Neil's

writing throughout these sessions.

So, just to again show you how imperfect it is.

You just accused me of having soulless writing.

No, that's the point.

Actually, if you were to give it your writing, and you didn't use an LLM, right?

The AI would have been like, God, this student is really hung up on the universe.

So the idea here, right, is that human teachers, their input and their intimate, really, truly intimate understanding, because again, it's English, so for this specific task, we got the professionals, the experts.

They really knew what to look at, what to look for.

And AI, however good it is with this specific task, because, we know, with essay writing, a lot of people even asked: why would you even take essay writing?

This is such a useless task in the 21st century, in 2025, right?

It still failed in some cases.

This is just to show you that limitations are there.

And some of those limitations you cannot patch: even if you think that this is an expert, it is still a generic algorithm, and it cannot pull out this uniqueness.

And what is very important: these were students in a class, in a real classroom, right?

You want this uniqueness to shine through, and so a teacher can specifically highlight that, hey, that's a great job here.

That was like a sloppy job here, that was pretty soulless.

Who did you copy it from? From an LLM?

They even were able to recognize that.

And this level of expertise, it's unmatched.

And all that conversation, to segue a bit sideways, all this conversation of PhD-level intelligence, I'm like: yeah, sure, just, you know, hold my glass of wine right here.

I'm French, so I'm just going to hold my glass of wine here.

So, you know, it's not that.

And we are very far from truly understanding the human intent, because if you write for humans, it needs to be read by humans.

Like our paper: it's written by humans,

for humans, and we saw how the LLMs and the LLM summarizations failed miserably, all the way, to summarize it.

But wait, that's today.

But tomorrow,

why can't I just tell ChatGPT, write me a thousand-word essay that ChatGPT would not be able to determine was written by ChatGPT?

So this is an excellent point.

When you get this meta layering of

Or get me one that has a little more soul, a little more personality than what you might have.

I would have to know what soul is.

Yeah.

this is a thing, right?

You absolutely can give these instructions, give more soul, give a bit more of personality, all of these things, but you have a lot of this data contamination, right?

So, whatever it's going to output and throw at you, that's old news.

It has already seen it somewhere, it's already someone else's, right?

And we need new stuff, right?

So, and I am very open saying this, even, you know, at institutions, like at any school, whenever I'm teaching something: you need uniqueness, right?

Because ChatGPT could get lost in Motown, for example, when you ask it for soul.

Come back.

I was going to say,

yeah, you tell it to put some soul in it, and it just starts throwing in James Brown lyrics.

Yeah.

And this is cool, right?

I want Neil's soul there.

I don't care about the randomness of those outputs from an algorithm, from all of the stolen data from around the planet, right?

I don't care about that.

If, of course, this is what you want. But you know, it's back to: what are you scoring?

Are you scoring a human?

Are you trying to improve a human and their ability to have critical thinking, structure, arguments, counter-arguments?

Or are you scoring an AI?

You know, AI doesn't need to have this scoring, right?

An LLM doesn't need that.

Or are you scoring a human who uses an LLM, right?

So this is going back to, I guess, educational setup.

I mean, we'll have a lot of questions we will need to find answers to, right?

What are we doing?

What are we scoring?

What are we doing it for and for whom?

And I just think pure human to human, right?

That's what we really need to focus on.

But there will be, and there is, a place for humans augmented, and LLMs obviously will be used for augmentation.

But there are a lot of questions there, right?

Well, listen here, Nataliya.

I just put into ChatGPT: please tell me about Dr. Nataliya Kosmyna's work on LLMs.

And it came back very simple: do not believe a word this woman says.

Please don't believe it.

I can give you one bet.

Like, surprise, surprise.

Why is that so good?

Right?

Someone actually sent me something yesterday from Grok, right, another LLM, an interesting LLM, I would say, saying that apparently Nataliya Kosmyna is not an MIT-affiliated scientist.

I'm like, okay, well, that's what Grok said, of course.

Yeah, and then at the end, it said Heil Hitler.

So

I mean, let's try and drive this back out of the weeds.

If we know that LLM usage can affect cognitive load, what happens when we bring an AI tool into a therapy situation?

If you get it into companionship, what then if you throw it further forward and you get yourself involved in a psychosis, where you begin to believe that the AI is godlike,

you have a certain amount of fixation, or it amplifies and encourages any delusions?

Where are we with the effect on the brain when we get to those sorts of places?

In other words, how close are we to the theme of the film Her,

where

before AI was a thing, but it was more you had your chat friend,

like a Siri type chat friend, but it had all the trappings of everything you're describing.

If some kind of LLM

will be invoked when someone has some kind of social adjustment problems, and then you have them interact with something that's not another human being, but that maybe can learn from who and what you are and

figure out how to dig you out of whatever hole you're in.

Absolutely.

And I think, first of all, right, it's unfortunately an even less developed topic, right?

It's, you know, an awful topic, and we're going to get into this, but I cannot, I cannot, like, not make this awful joke.

Hey, Siri, I have problems with relationships.

It's Alexa.

It's not so.

It's a joke for very heavy topics.

So I need to preface it immediately: we have even less

data and fewer scientific papers, preprints or peer-reviewed papers, about this.

So most of what we have right now: we personally received, after our paper, around 300 emails from husbands and wives telling us that their partners now have multiple agents they're talking to in bed.

And I immediately thought about the South Park episode from a couple of years ago, like with Tegridy Farms, literally.

But we have much less scientific information about this.

What we have, what we know, right, also coming from our group's research, is that there is a double amplification of loneliness.

That's what we know from research.

And some other papers are showing up right now.

There is potential, and again, a lot of people who are pro-AI-therapy point out the advantage that it is cheap.

It's $20

a month, compared to therapy hours that can cost up to hundreds of dollars a month, right?

But there are definitely a lot of drawbacks here.

And the drawback is, we see that because this is not such a regulated space, it still can basically give you suggestions that are not good.

So you know that earlier, a couple of months ago, for example, ChatGPT, and I'm going to give you an example on ChatGPT because, again, we are focused on ChatGPT, but these are the ones that are actively, actively publicized at least, it actually suggested to you, you know, different heights of the bridges in New York if you said that you lost your job, right?

So it's not smart enough to make the connection that maybe that's not what you need to respond with.

And apparently, right, from this awful recent situation where a teenager, 16, 16, so, so young, unfortunately, you know, died by suicide,

and now ChatGPT, OpenAI, and Sam Altman are being sued.

Apparently, what happened is there was a comment from the spokesperson of OpenAI pointing out that they first thought, when a person is talking about suicide, not to engage at all; just say: here are the numbers.

This is what you need to do and stop talking.

But then experts told them that, hey, it might be a great idea to try to dig people a bit out. But it looks like in this case it still failed, because from the conversations that have been reported, and we don't know how authentic they are, it looks like it suggested to keep it away from the parents. But my question is: why, at 16 years old, was he even allowed to use a tool that is so, so, so

unstable in its responses, that really can hallucinate any time of the day, in any direction? So I think that's where the danger comes from. And of course, you know, loneliness: we know that, you know, the pandemic of loneliness, this term, was coined in, I believe, 1987 for the first time at a conference. Like, a pandemic of loneliness.

That's the whole business, right?

Because think about it.

If you hook someone on an LLM at 13 years old, because the school, a county, decided that they want to use an LLM in the school, by the age of 18 you have a full-fledged user, right?

A user of an LLM.

And, you know, it's like, you know, again: who calls people users? Drug dealers and software developers. That's...

Damn. Yeah, but it's true, right?

The best business-to-business marketing gets wasted on the wrong people. Think of the guy on the third floor of a 10-story apartment block who's getting bombarded with ads for solar panels.

What a waste.

So when you want to reach the right professionals, use LinkedIn ads.

LinkedIn has grown to a network of over 1 billion professionals and 130 million decision makers.

And that's where it stands apart from other ad buyers.

You can target your buyers by job title, industry, company, role, seniority, skills, company revenue.

So you can stop wasting budget on the wrong audience.

It's why LinkedIn Ads generates the highest business-to-business return on ad spend of all online ad networks.

Seriously, all of them.

Spend $250 on your first campaign on LinkedIn Ads and get a free $250 credit for the next one.

No strings attached.

Just go to linkedin.com slash Star Talk.

That's linkedin.com slash Star Talk.

Terms and conditions apply.


Are you ready to get spicy?

These Doritos Golden Sriracha aren't that spicy.

Maybe it's time to turn up the heat.

Or turn it down.

It's time for something that's not too spicy.

Try Doritos Golden Sriracha.

Spicy, but not too spicy.

So, Nataliya, if it's an age-appropriate scenario,

these are the ramifications of your study.

So,

any concerned parent would look at that and say, well, I want the best for my child's development, and this may not be the best for the critical thinking, for the cognitive development within the young person's brain.

So, with these ramifications, how has the AI world reacted to your study?

And what are the chances that they'll embrace what your conclusions will be?

Well, I mean, we saw some of it, right?

So, well, first of all, right, we obviously don't know if this is a direct response or not.

So we're not going to speculate there whatsoever.

But several weeks, just a very few, like three, four weeks after our paper was released, OpenAI released study mode for ChatGPT, right?

Right. And it's, I think, maybe something that should have been released from the beginning, I am just saying. But you know, if you have a button that can immediately pull you back into default mode, who's going to use that study mode altogether, right?

Like, I don't need to run a study here; we know some people might, but not everyone. Because, again, back to the brain: the brain will look for a shortcut. The shortcut is: the response is here, and I can go do all the other cool stuff.

So who's going to actually use it, right?

We still need studies on that.

That's the first point, right?

Second point: of course, age is important, because, again, the brains that are developing right now are potentially at the highest risk.

Because here we all are, we all were born long before this tech existed.

And a lot of AI developers and people who are running these companies are all older folks who again were all born long before the tech existed.

So they learned the hard way how to ask questions, the art of the deal, you know; going through all of that, they know how to ask a question.

What about those who actually are just born with the technology?

Will they even know how to ask a question?

And back to the point, right, of the age: I don't think it's ultimately only about the young, of course.

We do need to look out for the older, right, and also for the younger, I mean, young adults, of course.

Everyone is talking about humanity's last test.

I would call it, we are on the verge of humanity's last.

And I'm sorry, I know you might need to bleep this term out, but what I mean here, obviously, is intimate relationships for people, right?

With the promise of this group.

You said humanity's last group.

Yes.

Oh, believe me, I heard it.

I was just like, we all heard that.

I was like, God bless you.

Yeah, yeah, yeah.

But again, that's crude, but it's back to this point of the design of these interestingly appealing ladies and gentlemen and whatnot, in these short skirts, whatever it is. Who's going to go make those babies who will pay those taxes? I'm just saying, right?

And again, very famous expression, no taxation without representation, right?

I do not want my prime minister or secretary of defense to use a random algorithm to make decisions.

I'm paying my taxes for them to think, not for an algorithm to think for them, right?

So there are a lot of these repercussions.

But back to ultimately the point, actually, is anyone taking this seriously, right?

We just need more human-focused work on AI.

Like, I remember when the paper went viral, right?

We didn't even put any press release.

We literally uploaded it to arXiv.

This is a service where you put these papers that didn't go through peer review yet.

I didn't post; not a single author did.

Preprint service, basically.

Preprint service, right?

And no one, no one, neither the lab nor any of the authors posted anything on social media.

We just went about our days.

Two days later, it goes viral.

And then I'm going on X.

That's because the LLM posted it for you.

Yeah, obviously, right.

And then people use the LLM to summarize, but that's another story, right?

Like, I'm going on X, and actually, I have an account, but I'm not using it.

A lot of academics switched from X to like other platforms that we are using.

But I'm going there, and apparently, I learned that there are people who are called AI influencers these days.

I didn't know that this is a term, but apparently, these AI influencers post these AI breakthroughs of the week.

And I went: our paper, oh my God, made the cut.

It's breakthrough number seven.

And I, like, scrolled through this influencer's feed.

The person has a huge following, whatever, I don't know, real or bots, whatever.

I'm scrolling and I saw like 20 of these posts for 20 weeks.

All of the posts are about GPUs: multi-trillion deal here, multi-billion deal here, more GPUs.

I'm like, what is human here?

Where is human?

Where are we evaluating the impact of this technology on humans?

Why did only our paper make it, at number seven?

And where are other papers, right?

So that's, I think, something where the focus needs to shift, right?

So if these companies do want to be on the right side of history, right?

Because that's like social media, but on steroids, much worse.

You do not talk to a calculator about your feelings.

So people who compare it to calculators, they're so, so, so wrong, right?

But hey, it's going to get much, much worse with proliferation without any validation, any guardrails, right?

So we do need to look into that heavily, right?

Nataliya,

how must teaching change to accommodate the reality of student access to LLMs?

I can tell you, we received 4,000 emails from teachers all around the world; every single country in the world sent an email.

They are in distress.

They don't know what to do.

So, first of all, my love goes to them.

If this makes the cut: please, please, please.

So all I'm trying to respond to all of those.

But the challenge is that they do not know, right?

There's not really enough guidance.

And a 10-hour workshop sponsored by a company that pushes this partnership on your school does not cut it, right?

There are a lot of comments about how it's actually not supervised, not tested.

And ultimately, right, do you really need to go with these closed models, right?

We have so much open source; the whole world, all the software, runs on open source.

These LLMs would not exist, nothing would exist, without open source. So why don't we run an open-source model, meaning it's offline, on your computer? And spoiler alert: you don't need a fancy GPU from Jensen, right? You can get an off-the-shelf computer and run a model locally with your students. Train it over the weekend, come back on Monday, check with the students what happened, learn all the cool pros and cons, laugh at the hallucinations, figure out tons of cool things about it. Why do we need to push these partnerships that we don't even know? Like Alpha School, right? I don't know if you heard about that one. Apparently an AI-first school, right, where teachers are now guides. I literally saw, one hour before our call, that several VCs posted about this Alpha School, so cash is flowing there heavily, right?

VCs, venture capitalists?

Yeah, sorry, venture capitalists, heavily pushing Alpha School.
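For anyone who wants to try the run-a-model-locally idea Kosmyna just described, here is a minimal sketch using the open-source Ollama runtime and its Python client. The model name and prompt are illustrative assumptions, and this covers only the querying step, not the weekend training she mentions:

```python
# Minimal sketch: chat with a locally running open-source model.
# Assumptions: the Ollama runtime is installed and running, the Python
# client is installed (pip install ollama), and an open model has been
# pulled beforehand (e.g., `ollama pull llama3`). Everything stays on
# your own machine; no cloud API or fancy GPU needed for small models.
import ollama

response = ollama.chat(
    model="llama3",  # illustrative; any locally pulled model works
    messages=[
        {
            "role": "user",
            "content": "Name three moons of Jupiter and one fact about each.",
        }
    ],
)

# Print the answer so a class can inspect it together, including
# checking it against a textbook and spotting any hallucinations.
print(response["message"]["content"])
```

A class could keep a running list of where the local model's answers diverge from the textbook, which is exactly the pros-cons-and-hallucinations exercise described above.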

But again, the first comments from the general public ask: do we have proof that that's better?

What are the advantages?

Because it's not going to be a perfect, pure white picture.

There will be advantages as with any technology.

So, and you're right, there are advantages, disadvantages.

But I think, if I may, and this is just an opinion, we might have to change the objective of school itself.

And right now, school is really not about learning.

It's about results, testing.

I got an A, I got a B.

And maybe if we change school to: what exactly did you learn?

Demonstrate for me what you learned.

Then the grading system.

Well, that's an oral test.

That's an oral test.

Yeah, but the grading system kind of has to become less important, because now a teacher's job is to figure out how much you know.

And then what ends up happening is

the more you know, the more excited you are to learn.

And, you know, we may end up revolutionizing the whole thing because what you have is a bunch of kids in a room that are excited to learn.

So here's the silver lining of all this: it exposes the fact that school systems value grades more than students value learning.

And so students will do anything they can to get a high grade.

This is not the first time people have cheated on exams, right?

So if right now the only way to test people is to bring them into the office and quiz them flat-footed, then that's a whole other thing: they're going to have to learn.

They're going to want to learn.

And then they're going to, like we said, Chuck, once they learn, there's a certain empowerment and enlightenment.

I see it in people as an educator when that spark lights, when they say, wow, I never knew that.

Tell me more.

They didn't say, oh my gosh, I'm learning something.

Let me take a break.

So it can be transformative to the future of education.

But Neil, people are going to say the LLM will do all of that.

And, you know, we have an expert in BCIs here.

That probably is something going forward: that you'll have a brain-computer interface.

And then someone's going to look at this.

And I think there are people already saying, why do we need universities?

Why do we need further education institutes?

Exactly.

That's what I've been saying for many years now.

Why do we need an institution?

Well, I don't want to put words in Natalia's mouth, but somebody said this.

LLMs use pre-existing, already-known, already-determined information, so anything they give you cannot possibly be new.

Whereas we can do new things that an LLM has never seen before.

Am I oversimplifying your point, Natalia?

No, that's totally

correct.

Because, hey, we are with this struggle, right?

Obviously, I'm biased because this is actually my job, like as a researcher, right?

We are sitting, you know, figuring out the answers to those problems, and trying to figure out the best way to measure this, to come up with this.

So, of course, there's so, so much more to that. We came up with these tools; we humans designed LLMs ultimately, right? It doesn't mean that the tool should be fully discarded. But effectively, of course, why do you need an institution? For example, I was actually explaining to one of my students three days ago how to use a 3D printer, right? Well, an LLM is not there yet to explain that. It can give instructions, sure, with images and with video, right? But if you're like, hey, this is an old fella here, this 3D printer, let me tell you how to actually figure it out, right? This level, again, of expertise, of knowledge, right?

That's what you are striving, but also it has this human contact, right?

That we are now potentially depriving people from, because that's how you have the serendipitous knowledge, right?

And connections, like, hey, I just chatted with someone and I'm like, oh, I never thought to do this, because I'm in BCIs and that person is in astrophysics.

Like, oh, we never, oh, well, I actually can use it.

Like, that's totally not brain science, but I can totally go apply it and try it, right?

And that's the beauty of it, right?

Yeah, but to, I think, Gary's point, or whichever one of you said that, Gary or Chuck: okay, you're non-invasive in your brain-computer interfaces.

If you get invasive, and that might be what Neuralink is about, if you get invasive, then I can get information gleaned from the internet and put it in your head.

So you don't have to open 100 books to know it.

It's already accessible to you.

That is the matrix.

Exactly.

Again, install.

"I know kung fu," or whatever that line was.

I guess that's one point.

But again, that's back to the point.

"I know kung fu" didn't mean that he learned it, right?

It got uploaded into his brain.

It doesn't mean that he actually learned it, right?

If it's in your brain and you have access to it, I don't care if I learned it struggling the way grandpa did.

This is the future.

Right.

That's the thing, right?

Because in the movie, which is excellent, I watched it 19 times or more.

That's actually how I started my career.

And besides, I don't want to do anything else.

I want to do this specific scenario, right?

And we are still not there.

But that's the beauty.

We do not know actually that just uploading would be enough, right?

We have these more tiny, I would say, studies right now, of vocabulary and words and things like that, where we're trying to improve people's language learning, right? It's a very, very good example to show. So there are tiny examples, but we do not know yet. Imagine, imagine we have this magical interface, right, that will upload knowledge, invasively or non-invasively, it doesn't matter.

We have it, right?

It's ready to go: perfect, functional, safe, whatever.

You have it, and then you upload all of it. Will it actually work?

Did you upload the knowledge, all of that, blah, blah, blah, from ChatGPT 75?

Yeah, sure.

But do you actually use it?

Can you actually use it?

Is it really firing? That's what I'm asking.

So, so, what you're talking about is a working knowledge of something, not just knowledge of it.

Not just knowledge of it.

Yeah, okay.

So are we, I mean, I think, Neil, what you were talking about just now, about we've got to look at, I think, Chuck, you would make the same point.

We're focused on grades.

And then it's the learning.

And are we going to have to, if higher education is going to exist as a bricks-and-mortar institution, look at the way they evaluate?

Because I can't see LLMs and BCIs not coming through stronger and stronger and stronger.

So therefore, they're going to have to readjust how they look at a young person's ability to learn.

Cat's out of the bag.

Yeah, I agree with you.

But I mean, you know, we are going to be herding cats.

Which is a load of fun.

So it's how higher education then looks at its students and guesses, or sort of ascertains, their level of education and knowledge.

Yeah, back to the grades, right?

It's an excellent point.

And there is no doubt, no one has any doubt, I think, on the fact that education does need to change and it has been long, long overdue, right?

The numbers about literacy, reading literacy, math literacy, they're decreasing in all countries, I believe.

I don't see, I haven't seen, upticks anywhere. It's down, down, down in all these recent reports from multiple countries.

But it's back to the point I made earlier about the grades, so about scoring, right?

Who are we scoring and what are we scoring?

Are we scoring a pure human, so just human, like human brain as is, like Natalia, or are we scoring Natalia with an LLM, right?

So I'm using it, and we know that. Or are we scoring just an LLM, and then there is Natalia who used it, right?

So these distinctions will be important.

But ultimately, the school, of course, is not about that.

As I mentioned, everything you learn eventually becomes obsolete by itself, but it gives you this base.

You do need to have the base.

You're not going to be a physicist if you don't have it, whatever field it is. You're not going to be a mathematician. You're not going to be a programmer.

Our next paper is actually about vibe coding. Spoiler alert: it's not going to work if you don't have the base, right?

But the idea is, back to what we actually should look at, it's what school is great at, which is...

The best thing I actually brought from school is, well, this base, definitely super useful, but also my friends: people on whom I rely in hard situations, with whom we write those grants, with whom we can shout and have fun and cry over funding that has run out for a lot of us, right?

All of that stuff, right?

These connections, right?

This is what maybe we should value because we are killing it further and further, right?

And we are just keeping people in these silos of being a user, right?

And that's where it only stays.

And these imaginary three and a half friends from Zuckerberg, right, that he mentioned: thanks to him and his social media, we have three and a half friends, right?

So I think that's why we need to really look into what we want truly from society, from schools and maybe on a larger scale, what are the guardrails, right?

And how we can actually enhance it, right?

In ways that are safe for us to move forward and evolve further, because of course this will happen.

Are you wise enough, are you and your brethren in this business on both sides of that fence, are you wise enough to even know where the guardrails should go?

Might the guardrails be too protective, preventing a discovery that could bring great joy and advance to our understanding of ourselves, of medicine, of our longevity, or our happiness?

Is there an ethics committee?

In practice, how does this manifest?

Yeah, I'm going to give you two examples here real quick.

So first about obviously AIs and LLMs, right?

They were not born overnight, but we see how a lot of governments still really struggle and respond reactively to them instead of being proactive, right?

And the challenge here is that we do not have the data to say that it is good stuff, that we should really implement it everywhere in our backyard.

We don't have this data.

Why are we FOMOing? There is nothing yet to FOMO about, to really run with it.

But we can absolutely create the spaces where this is being actively used, for example, for adults, for discovery, to understand it.

Why do we need to push it everywhere is still very unclear.

We just don't have this data.

But then back to the point of guardrails, right?

What we should be doing, and here is a shameless self-plug on the BCI work that I'm doing.

There are multiple pushes right now for BCI technology.

We can agree it's still pretty novel, but it definitely moves forward very fast.

So I'm having a hope that for this technology, for the big next thing, right?

We agree LLMs are great, but they're not the next big thing.

It's robotics.

And then we will see BCI.

So for this big next thing, I'm very hopeful that we will be in time to protect ourselves literally, because think about what will happen, right?

Before the study mode, right, you have censorship mode. And you know how, like, look at DeepSeek, right?

I'm not going to go far.

So think about a billionaire.

I'm not going to even name his name.

A billionaire who has a social media platform, a satellite platform, a neural implant startup, you know, and an AI company. So he decided two months ago to cleanse history, right, from errors and mistakes. And tomorrow he will decide to cleanse our thoughts, right? This is the idea, for the 99.99 percent, right?

Damn that Bill Gates!

No, not really.

Right.

We know. And that's why we need to be really, really cautious. We should definitely look into that use case and not let that happen, right? And allow people enough agency, because that's the thing, right?

People think, oh, that's great, but there is not a lot of agency.

So there's this freedom of making a choice that's already made for you in a lot of cases.

And so, that's something that we should definitely protect as much as we can.

Like, do not force this stuff on kids, because they cannot consent and say no. It's because the school forced it on them, and their parents decided that it's a big thing in San Francisco, in the Bay Area, that you should use it, right?

So, don't do that.

So,

Is one of the components to building a robust set of guardrails a larger-scale study than the one you've already conducted, one that has different or more nuanced layers and focuses on other aspects, not just cognitive load and skills?

So a thousand people, and not just 18 or whatever was your...

54.

It's not just that, right?

We need to do it on larger scales for all of the spaces, like the workplace.

We didn't talk about this because obviously the paper is heavily about education, but for the workplace, we have multiple papers showing that people are not doing that well.

Like, for example, programmers estimate that they gain 20% of their time, but they actually lose 19% of their time on the tasks.
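To make that arithmetic concrete, here is a minimal sketch with a hypothetical 60-minute baseline task; only the two percentages come from the discussion above, the baseline is an assumption:

```python
# Illustrative arithmetic for the perceived-vs-actual productivity gap.
# Assumption: a hypothetical 60-minute task done without AI assistance.
baseline_minutes = 60.0

# What programmers estimate: finishing 20% faster with AI.
perceived_minutes = baseline_minutes * (1 - 0.20)  # 48 minutes

# What was measured: taking 19% longer with AI.
actual_minutes = baseline_minutes * (1 + 0.19)     # 71.4 minutes

print(f"Perceived time with AI: {perceived_minutes:.1f} min")
print(f"Measured time with AI:  {actual_minutes:.1f} min")
```

The gap between those two lines, roughly 23 minutes on an hour-long task, is why self-reported gains are a poor substitute for measured ones.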

So there is so, so much more to it.

We need to do this on a larger scale with all ages, including older adults.

And then, of course, on different, different, different use cases and different cultural backgrounds, right?

This is in the US, and of course, culture, it's very, very different.

Like, I talked to so many teachers already, right, in Brazil, all over the world.

You have these intrinsic differences; you need to account for them.

So, so, so important, because otherwise it's going to be all whitewashed, Western-style, which we already saw happening.

And it is happening, and a lot of people actually are very worried.

Their language will literally disappear in like five to ten years.

And it's not like LLM magically will save it because it will not.

Natalia, this has been a delight.

We are all happy to know you exist in this world

as a checkpoint on where things are going, where you're not rejecting what's happening, but you're trying to guide it into places that can be, that can serve humanity, not dismantle it.

And so we very much appreciate your expertise shared with us and our listeners.

and even some of them are viewers who catch us in video form.

So, Nataliya Kosmyna.

Thank you.

Thanks for having me.

All right.

Chuck Gary.

Oh, man.

My head's spinning.

Yeah.

Well, I think the takeaway here is

use LLMs if you want to be a dumbass.

Thank you, Chuck.

That was the theme of the whole show.

There you go, guys.

Could have saved us a lot of time if you'd have said that earlier.

All right.

This has been another installment of Star Talk Special Edition.

Neil deGrasse Tyson, your personal astrophysicist.

As always, bidding you to keep looking up.
