How to decode a thought
To buy tickets to our upcoming live show in New York, go to http://vox.com/unexplainablelive
For more, go to http://vox.com/unexplainable
It’s a great place to view show transcripts and read more about the topics on our show.
Also, email us! unexplainable@vox.com
We read every email.
Support Unexplainable by making a financial contribution to Vox! bit.ly/givepodcasts
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Listen and follow along
Transcript
Support for this show comes from 1Password.
If you're an IT or security pro, managing devices, identities, and applications can feel overwhelming and risky.
Trelica by 1Password helps conquer SaaS sprawl and shadow IT by discovering every app your team uses, managed or not.
Take the first step to better security for your team.
Learn more at 1password.com/podcastoffer.
That's 1password.com/podcastoffer.
All lowercase.
Most AI coding tools generate sloppy code that doesn't understand your setup.
Warp is different.
Warp understands your machine, stack, and code base.
It's built through the entire software lifecycle from prompt to production.
With the powers of a terminal and the interactivity of an IDE, Warp gives you a tight feedback loop with agents so you can prompt, review, edit, and ship production-ready code.
Trusted by over 600,000 developers, including 56% of the Fortune 500, try Warp free or unlock Pro for just $5 at warp.dev/topcode.
There are lots of stories about mind-reading.
Stories about people who can eavesdrop on your thoughts.
I can read every mind in this room.
Stories about aliens who can communicate telepathically.
You can read my mind, I can read yours.
Even stories about machines built to make thoughts more transparent.
We'll be measuring the tiniest electrical impulses of your brain, and we'll be sending impulses back into the box.
But one thing these stories all have in common is that they are just stories.
Until pretty recently, most mainstream scientists agreed that reading minds was the stuff of fiction.
But now.
New research shows that tech can help read people's private thoughts.
They're training AI to essentially read your mind.
In the last few decades, we've been able to extract more and more things from people's minds.
And last May, a study was published in the journal Nature Neuroscience that got a lot of play in news outlets.
In that paper, a group of Texas scientists revealed that they've been able to translate some of people's thoughts into words on a screen.
This thing that you could call mind reading.
But do we want machines reading human minds?
I'm Byrd Pinkerton, and on this episode of Unexplainable, how much can these researchers actually see inside of our heads?
Will they be able to see more someday soon?
And what does all this mean for our privacy?
I reached out to one of the co-authors on this paper to get some answers to these questions.
It's this guy named Alex Huth, who researches how the brain processes language.
And Alex has a word of caution on the terminology here.
A lot of people call this mind reading.
We don't use that term in general because I think it's vague.
And, like, what does that mean?
He prefers a more descriptive word, which is decoding.
So basically, when the brain processes language or sounds or emotions, whatever, it generates this huge flurry of activity.
And we can capture that activity with a variety of tools.
So like electroencephalography, for example, which is EEG.
That reads electrical impulses from the brain.
Or fMRI machines will take pictures of our brain at work as we react to the things that we experience.
But then researchers like Alex have to decode the cryptic signals that come from these machines, right?
And in Alex's case, their lab is trying to parse exactly how the brain processes language.
So for them, decoding means taking the brain responses and then trying to figure out like, what were the words, what was the story that elicited these brain responses.
So how do you do that?
That is what this paper from May was all about.
Step one in their process of decoding the mind is, and I swear I am not making this up, listening to lots of podcasts.
So we just had people go in the MRI scanner over and over and over and over and listen to stories.
That was it.
Alex and his fellow researchers, they took seven people and they played them a variety of shows.
This is the Moth Radio Hour from PRX.
So Moth Stories, right?
The Moth Radio Hour.
And also the Modern Love Podcast from the New York Times.
So we're just listening to tons and tons of these stories, hours and hours and hours.
Which sounds kind of fun.
Right?
It's like, it's not that bad.
It's a dream experiment, really.
So that's it for this episode of the Moth Radio Hour.
But then things got a little less dreamy because the researchers actually had to decode all this very fun data to kind of match up words and phrases from these podcasts to the signals coming from the brain.
Which might sound easy, but unfortunately, fMRI has one small problem.
Which is that what it measures sucks.
fMRI measures blood flow in the brain and the amount of oxygen in that blood.
It turns out that when you have a burst of neural activity, if your neurons are active, they call out to nearby capillaries and say like, hey, I need more energy.
So let's say you hear the word unexplainable, for example.
A bunch of neurons in different parts of your brain will fire and call for energy, which comes to them via blood.
And over the next three-ish seconds, you see this increase in blood flow in that area.
And then over the next five seconds, you see a slow decrease.
But it's not like your brain is only firing one thought at a time and then kind of waiting for blood flow to clear an area, right?
It's potentially hearing lots of words, even whole sentences, in that eight-to-ten-second period.
Like maybe it's hearing, thanks so much for listening to Unexplainable.
Please leave a review.
And all those words could trigger activity in the brain, which leaves researchers like Alex with this very messy, scrambled picture to decode.
Because that means that every brain image that we measure is really some mushy combination over stuff that happened over the last 10 seconds.
So, if every brain image that you see is like a mushy combination of 20, 30 words, like how the hell can you do any kind of decoding?
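To make that timing problem concrete, here's a toy simulation. It's my own sketch, not anything from the paper: the gamma-shaped response curve and the two-words-per-second speech rate are illustrative assumptions, but they show how a slow blood-flow response smears a couple dozen words into every single sample.

```python
import numpy as np

# Toy hemodynamic response: rises for ~3 seconds, decays over the next ~5.
# (A made-up gamma-like bump, not a calibrated HRF.)
dt = 0.25                                  # seconds per step
t = np.arange(0, 10, dt)
hrf = t**2 * np.exp(-t / 1.1)
hrf /= hrf.sum()

# Words arriving at roughly 2 words per second, for 30 seconds.
n = int(30 / dt)
words = np.zeros(n)
words[::int(0.5 / dt)] = 1.0               # one word event every 0.5 s

# The fMRI signal is (roughly) the word train convolved with the slow HRF,
# so each sample blends every word from the preceding ~10 seconds.
bold = np.convolve(words, hrf)[:n]

support = np.count_nonzero(hrf > 0.01 * hrf.max()) * dt   # seconds of smear
print(f"each sample mixes roughly the last {int(support * 2)} words")
```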
For a while, the answer was: you could not really do very much decoding.
This was a huge roadblock to this research until around 2017, when we got the first real seeds of something you've almost certainly heard about in the news.
This chatbot called ChatGPT.
It's a large language model, AI, trained on a large amount of text across the internet.
The language model that powers ChatGPT is much more advanced than what Alex's team started using.
They were working with something called GPT-1, which is like a much more basic model that came out in 2018.
But this model did help Alex and his team sort of sort through the mushy, noisy pictures that they were getting from fMRI scans and sharpen the image a little bit.
It was still hard, like even with a language model helping him, it took one of Alex's grad students, this guy named Jerry Tang, years to really perfect this.
But finally, after some testing, some retesting, checking their work, they were successful.
They could pop someone into an fMRI machine, play them a podcast, scan their brain, and decode the signals coming from their brain back into language on a screen.
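For intuition, here's a drastically simplified sketch of that guess-and-check idea. The two stub functions below are placeholders I made up, not the paper's actual models; in the real system, the proposer is a GPT-style language model, and the checker is an "encoding model" fit on many hours of one person's scans.

```python
import numpy as np

def lm_propose(prefix, k=5):
    """Stub: a language model would return k plausible next words here."""
    return ["she", "said", "drive", "home", "car"][:k]

def encoding_model_predict(words):
    """Stub: predict the fMRI response we'd expect if the subject heard `words`."""
    rng = np.random.default_rng(abs(hash(" ".join(words))) % 2**32)
    return rng.normal(size=100)            # fake 100-voxel prediction

def decode(observed_bold, n_words=8, beam_width=3):
    """Beam search: keep the word sequences whose *predicted* brain
    response best matches the brain response actually recorded."""
    beams = [([], 0.0)]
    for _ in range(n_words):
        candidates = []
        for words, _ in beams:
            for w in lm_propose(words):
                seq = words + [w]
                pred = encoding_model_predict(seq)
                score = -np.sum((pred - observed_bold) ** 2)  # similarity
                candidates.append((seq, score))
        beams = sorted(candidates, key=lambda c: -c[1])[:beam_width]
    return " ".join(beams[0][0])

print(decode(observed_bold=np.zeros(100)))
```

The key move, as the paper describes it, is that the decoder never reads words straight off the scan; it generates guesses and keeps the ones the brain data can't rule out.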
The decoding here was not perfect.
Like, for example, here's a sentence from a story that the researchers played for a subject.
I don't have my driver's license yet, and I just jumped out right when I needed to.
Their decoder interpreted the brain scans and came up with this.
She's not even started to learn to drive yet.
I had to push her out of the car.
Again, the story that was played.
And she says, well, why don't you come back to my house and I'll give you a ride.
The decoder.
I said, we will take her home now.
The story?
I say, okay.
The decoder.
And she agreed.
So as you can hear, in the decoder's translations, pronouns get mixed up.
In other examples that the researchers provide in their paper, some ideas get lost, others get garbled.
But still, overall, the decoder is picking up on the kind of main gist of the story here, and it's not likely that it was just lucky.
Like it does seem to be reading these signals.
And that would be amazing enough, but the researchers did not stop there.
Jerry designed a set of experiments to test kind of how far can we go.
For example, they wanted to see if they could decode the signals coming from someone's brain if the person was just thinking about a story and not hearing it.
So they ended up having people memorize a story and then instead of playing them a podcast, they just asked them to think about the story while they were in an fMRI machine.
And then we tried to decode that data.
And it worked pretty well, which I think was kind of a surprise, the fact that that worked.
Because this meant that this tool wasn't just detecting sort of what a person was hearing, but also what they were imagining.
Which is also interesting because it suggests that there's some kind of parallel potentially between hearing something and just thinking about it.
Like our brains are doing something similar when we listen to speech and when we think about it.
And the researchers found other interesting parallels too.
Like they tried this other experiment.
Which was just weird and I still think it's kind of wild that it worked.
We had the subjects go in the scanner and watch little videos.
Silent videos with no speech, no language involved.
They were actually using Pixar shorts.
And again, they collected people's brain activities while they were watching these things and then popped that activity into their decoder.
And it turns out that the decoded things were quite good.
For example, one video is about a girl raising a baby dragon.
And in the decoding example that they give, there are definitely moments that the decoder is way off.
Like at one point, something falls out of the sky in kind of a surprising way, and the decoded description is, quote, My mom brought out a book and she was like, Wow, look what I made.
Which is not super related.
But other moments do sync up pretty well.
Like at one point, the girl gets hit by a dragon's tail and falls over.
And the decoded text is, quote, I see a girl that looks just like me get hit on her back and then she is knocked off.
And that was, that was wild.
It also potentially says something really interesting about the brain, right?
Like that, even as we watch something that doesn't involve language at all, on some level, our brains seem to be processing it into language, sort of descriptions of what's on screen.
That was like exciting and weird.
And I don't know that I expected that to work as well as it did.
Now, this research is part of a longer line of work.
Like other researchers have been able to do stuff that's sort of similar to this by implanting devices into the brain, for example.
They've even been able to use fMRI machines to reconstruct images and sounds that brains have been thinking about.
But Alex and his lab, they've really taken an impressive step towards decoding part of this sort of messy chaos of freewheeling thought that runs through someone's head.
And that's kind of wild.
You know, the first response to seeing this was like, oh, this is really exciting.
And then the second response was like, oh, this is actually kind of scary too, that this works.
It's especially unsettling, at least to me, from a privacy perspective.
Like, right now, I can think pretty much whatever I want, and nobody can probe those thoughts unless I choose to share them.
And to be clear, it's not obvious that this technology is going to change that.
There are a lot of barriers in place right now, keeping our brains private.
Like, these decoders have to be tailored to one individual brain, for example.
You can't take the whatever many hours of another person sitting in the scanner and use it to predict this person's brain responses or decode this person's brain responses.
So unless you're currently in an fMRI machine having your brain scanned, and you also recently spent many hours in an fMRI machine listening to podcasts, you probably don't need to worry too much that someone is reading your thoughts.
And even if you are in an fMRI machine listening to, I don't know, this podcast, you could still keep your thoughts from being read because Alex and his team tested whether someone had to cooperate in order for the decoder to work.
Like if they actively try to make it not work, does it fail?
And it turns out that yes, it does fail in that situation.
Like if a subject refuses to listen and does math in their head, for example, like takes a number and keeps adding seven to it, the decoder does a really bad job of reading their thoughts as a result.
Like its answers become much more random.
Still, barriers like this, like the need for a bespoke decoder for each person's brain or the ability to block a decoder with one's thoughts.
That's definitely not a fundamental limitation, right?
That's definitely not something that's like
never going to change.
Maybe it won't.
Maybe that'll still be necessary, but that doesn't seem like a fundamental thing.
And certainly it's something that we could potentially improve in the future.
Alex says this is just a fundamental unknown at this point.
Like he doesn't see a way with our current technology to build a true mind reading device that works across every brain.
We don't even know if that's possible.
But this is also the very beginning of this research.
Like, again, he used the earliest version of the language model that powers ChatGPT to do this, this thing called GPT-1.
But we are now on GPT-4, and it's a lot more powerful than its predecessors.
So, who knows how much more powerful a decoder like this could become using that more advanced technology?
Maybe,
and again, this is a maybe, but maybe it'd even be possible to do this kind of decoding with simpler machinery.
Like, you might not need a big hulking device like an fMRI machine.
Maybe you could use a wearable device like EEG that records electrical signals from your brain.
It might be impossible.
We don't know.
I don't think it's going to work with EEG, but
10 years ago, people would say, I don't think this is going to work with fMRI, and it does work with fMRI, so who knows.
So what does all this mean?
I don't think, and Alex doesn't think, that we're going to wake up tomorrow and find that our innermost thoughts are available for anyone to read.
But I also don't think that we should say, you know, oh, this decoding stuff, it'll just remain a scientific curiosity.
We can ignore it.
You know, we'll never live in a world where some amount of brain decoding is taking place.
And I think that...
Because I spoke to an ethicist who says that we should be thinking very hard about what brain decoding could mean for all of us in the future.
Especially because some people already live in a world where admittedly a much lower level form of mind reading, but still a form of mind reading is part of their day-to-day.
That's after the break.
Support for this show comes from Robinhood.
Wouldn't it be great to manage your portfolio on one platform?
With Robinhood, not only can you trade individual stocks and ETFs, you can also seamlessly buy and sell crypto at low costs.
Trade all in one place.
Get started now on Robinhood.
Trading crypto involves significant risk.
Crypto trading is offered through an account with Robinhood Crypto LLC.
Robinhood Crypto is licensed to engage in virtual currency business activity by the New York State Department of Financial Services.
Crypto held through Robinhood Crypto is not FDIC insured or SIPC protected.
Investing involves risk, including loss of principal.
Securities trading is offered through an account with Robinhood Financial LLC, member SIPC, a registered broker-dealer.
As a founder, you're moving fast towards product market fit, your next round, or your first big enterprise deal.
But with AI accelerating how quickly startups build and ship, security expectations are also coming in faster, and those expectations are higher than ever.
Getting security and compliance right can unlock growth or stall it if you wait too long.
Vanta is a trust management platform that helps businesses automate security and compliance across more than 35 frameworks like SOC 2, ISO 27001, HIPAA, and more.
With deep integrations and automated workflows built for fast-moving teams, Vanta gets you audit ready fast and keeps you secure with continuous monitoring as your models, infrastructure, and customers evolve.
That's why fast-growing startups like LangChain, Writer, and Cursor have all trusted Vanta to build a scalable compliance foundation from the start.
Go to vanta.com/vox to save $1,000 today through the Vanta for Startups program and join over 10,000 ambitious companies already scaling with Vanta.
That's vanta.com/vox to save $1,000 for a limited time.
The mind of this young researcher is as frantic and busy as a,
say,
as a city.
So far, we have been talking about technology that can look at a bunch of brain data and translate it to tell researchers what a subject is hearing or thinking.
It's amazing, but at least for now, it involves a lot of clunky technology, a lot of time, and a lot of cooperation from the person whose mind is being decoded.
So most people are probably not going to have machines spitting out all their exact thoughts anytime soon.
But don't let that comfort you.
Nita Farahany is still concerned.
She is a bioethicist who studies the effects of new technologies, what they mean for all of us legally, ethically, and culturally.
And recently she published a whole book about tools that read the brain.
I was somebody who had already been following this stuff for a long time.
And as I dove into the research for the book,
I mean, I was like, what?
Really?
Nita is less focused on fMRI research, trying to get at exact thoughts.
And instead, most of her book focuses on different brain reading tools, these tools that are becoming more and more commonplace.
Everyday wearables, primarily that are reading electrical activity in the brain.
Basically, when you think, or when your brain sends instructions to your body, your neurons give off a little electrical discharge.
And because hundreds of thousands of neurons are firing in your brain at the same time, you can use brain sensors to pick up the broader signals that are happening.
This is electroencephalography or EEG, this technology we've mentioned before.
It's less precise than something like fMRI.
Like it doesn't tell you where in the brain the signals are coming from, but it also doesn't require you to sit in a loud machine for hours, right?
Like EEG devices can take readings by being applied to the head.
And also when the brain sends signals out into the body, like say into the wrist, other sensors can measure the electrical activity of the muscles that happens as a result.
And they can be miniaturized and put into earbuds and watches and headphones.
Because the level of detail is lower, there isn't a way, at least right now, to kind of use EEG readings to do what Alex can do with an fMRI machine, right?
To decode brain activity into words running through people's heads.
But these devices can detect things like alertness, tiredness, focus, or reactions to stimuli.
And these readings aren't always very precise.
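For a flavor of how an alertness reading might work, here's one common signal-processing recipe, sketched on synthetic data. It's not any particular product's algorithm; the band cutoffs and the theta-plus-alpha-over-beta ratio are just a textbook-style drowsiness proxy.

```python
import numpy as np
from scipy.signal import welch

fs = 256                                   # assumed sample rate, in Hz
rng = np.random.default_rng(0)
eeg = rng.normal(size=fs * 30)             # stand-in for 30 s of one channel

# Estimate the power spectrum, then compare power across frequency bands:
# relatively more slow-wave (theta/alpha) activity than fast (beta)
# activity is a rough, classic proxy for drowsiness.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

theta, alpha, beta = band_power(4, 8), band_power(8, 13), band_power(13, 30)
drowsiness_index = (theta + alpha) / beta  # higher suggests lower alertness
print(f"drowsiness index: {drowsiness_index:.2f}")
```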
But as Nita dove into her research, she found that these devices are already being used in all kinds of contexts.
It would be like, oh, imagine if it was used in this way.
And then I would find an example of it being used in that way.
And I'm like, what?
You know?
Some of the uses or potential uses for these EEG tools are actually kind of promising.
Like they could help people track their sleep better, potentially track cognitive deterioration.
Nita says they could maybe help people with epilepsy get alerts about changes in their brain that could mean a seizure, and they could help people measure their own pain more accurately.
But they also have a lot of uses that feel a little closer to invasions of privacy.
So for example, these wearable EEGs can be used to measure recognition, like when your brain sees something, any kind of stimulus, like a house or a face or a goose, say.
Your brain reacts to the stimulus, and it reacts differently if you recognize it versus if you don't recognize it.
It does this super fast, like even before you're consciously aware of it.
And if you recognize that goose or face or house, your brain then fires a signal that says, I know that goose or face or house.
And because an EEG reader can then detect that signal, a researcher named Dawn Song, along with some collaborators, showed that this can be used in pretty concerning ways.
What they did was, as people were playing video games wearing EEG devices, subliminally they flashed up images of numbers, and they were able to go through and figure out recognition of numbers without the person even knowing that the numbers were being flashed up in the video game.
And just by doing this, just by supplying sort of subliminal prompts and then measuring reactions, these researchers were able to get some pretty personal data.
Things like your PIN, even your home address, through this recognition-based interrogation of the brain.
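The trick here rests on a well-known brain response often called the P300, a positive blip roughly 300 milliseconds after you see something meaningful. Here's a toy illustration of the epoch-averaging idea behind such probes; the bump I inject and every number in it are made up for demonstration, not taken from the study.

```python
import numpy as np

fs = 256                                    # assumed sample rate, in Hz
rng = np.random.default_rng(1)

# Fake data: for each flashed digit 0-9, forty 1-second EEG snippets,
# time-locked to the moment that digit appeared on screen.
epochs = {d: rng.normal(size=(40, fs)) for d in range(10)}

# Pretend the subject recognizes "7": inject a P300-like bump ~300 ms in.
t = np.arange(fs) / fs
epochs[7] += 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05**2))

# Averaging the snippets cancels background noise but not the stimulus-
# locked response, so the recognized digit stands out around 300 ms.
window = slice(int(0.25 * fs), int(0.40 * fs))
scores = {d: trials.mean(axis=0)[window].mean() for d, trials in epochs.items()}
print("digit the brain seems to recognize:", max(scores, key=scores.get))
```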
That same recognition measurement has also been used in criminal investigations.
Police have interrogated criminal suspects to see whether or not they recognize details of crime scenes.
This is not a new thing.
Like as early as 1999, a researcher in the U.S. claimed that he could use an EEG lie detector test to see if convicted felons recognized details of crimes.
This has been used by the Singapore Police Force and by investigators in India as evidence in criminal trials.
And there are lots of arguments that the data that comes from these machines is not good enough or reliable enough to base a criminal conviction on.
But whether or not this technology really works, if people believe the results of an EEG lie detector like this, it can have really serious consequences.
And not just in the court system.
Like an Australian company came up with a hat that monitors EEG signals of employees.
There's already a lot of employers worldwide who've required employees to wear devices that monitor their brain activity for whether they're tired or wide awake, like for commercial drivers.
It's also big in the mining industry.
Caps like this have been worn by workers, not just in Australia, but across the world.
And while that might seem worthwhile if it prevents accidents, some places have started monitoring more than just tiredness.
Like there are reports of Chinese companies rolling out hats for their employees.
Testing for boredom and engagement.
Even depression or anxiety.
The reporting around these suggests that EEG is way too limited to do a great job at reliably detecting those kinds of emotions.
But again, these tools don't need to work well to have professional or privacy consequences.
There's risks on the side of like if it's really accurate and, you know, what it reveals.
And then there's risks on it not being perfectly accurate and how people will use or misuse or misinterpret that information.
I think this workplace stuff is especially startling to me because when I first started reading about these EEG devices, I thought, okay, like I will simply never purchase a watch that monitors my brainwaves.
Like problem solved.
Yeah.
I mean, so
most people's first reaction to like hearing about this stuff is like, okay, I'm just never going to use one of those.
You know, great.
Like,
thank you for letting me know.
I will avoid it at all costs.
Right.
But if you have to have one of these for work, like that takes away that element of choice.
Or similarly, Nita told me about this EEG tool in the works right now that lets you type just by thinking.
And if something like that becomes the default way of typing, then maybe having a brain monitoring tool like this also becomes the default.
Like having a cell phone.
Technically, you can live without one, but it is logistically difficult.
It just becomes inescapable.
And
people are like generally outraged by the idea that most companies require the commodification of your personal data to use them as like free services, whether that's a Google search or that's, you know, a Facebook app or a different social media app.
And then they seem to forget about it and do it anyway.
And so like, there's all kinds of evidence that people trade their personal privacy for the convenience all the time, right?
This is why Nita says that we should think seriously about the implications of technologies like these EEG readers right now,
as well as the implications of more advanced thought reading technologies, like the fMRI-based ones that researchers like Alex are working on.
It's
really exciting to make our brains transparent to ourselves, but once we make our brains transparent to ourselves, we've also made it transparent to other people.
And
like at the simplest level, that's terrifying.
So I mean, I think from my perspective, there is nothing more fundamental than the basic sanctity of our own minds.
And what we're talking about is a world in which we had assumed that that was inviolate and it's not.
All this made me wonder, like,
should we shut all this down?
Like, should we stop trying to find ways to read minds and just tell researchers like Alex Huth to stop working on stuff like his fMRI brain decoder?
For Alex, it's tricky because this research isn't like working on the nuclear bomb, for example.
Like, it's not a tool that is pretty much only good for killing people.
I think it's more like, I don't know,
computers themselves.
We have shown that computers can be used for bad things, right?
Like, they can be used to surveil us or collect data about us as we browse the internet.
They're also very good.
They're used in all kinds of ways.
They're very good.
Similarly, like if EEG devices are used to monitor brainwaves and then detect problems like Alzheimer's or concussions, that would be a win.
And if the fMRI work in Alex's lab helps us understand the fundamental workings of the brain, how our mind processes language, I think that's good.
And other versions of brain reading tech are being used to help people with paralysis communicate.
I think in the same way that it's like something can be big and have implications in a lot of different ways.
It kind of matches that mold rather than like nuclear bomb mold.
But he does worry.
After his paper came out, Alex actually reached out to Nita to ask about the ethical implications of his work.
And he was not particularly surprised when she told him that decoding minds could lead to pretty concerning consequences for privacy.
Yeah, I mean...
I've been reading her book, so I think I kind of knew what page she was on.
The thing that did surprise him was when he started asking her about some further experiments his team was considering.
Like, right now, for example, Alex says their decoder can pick up the story someone is hearing, but not the stray random thoughts they're having about that story.
Like, incidental thoughts?
It's not clear whether or not it's even possible to pick up those kinds of thoughts.
But when Alex was talking to Nita, he asked her, should he try and figure out if it's possible?
Like, should he try and probe deeper into people's minds?
Are there things that we shouldn't do?
Like, is this a thing that we shouldn't do?
He thought she'd say, Alex, shut it down.
Like, stop going deeper.
But she didn't.
If we don't have the facts, it's very difficult to know what the ethics should be.
Her view was,
you know, her community, the
ethicists, philosophers, so on, lawyers, they need data.
They need information to do what they do.
And they need information like, is this possible or not?
You know, unless you know what you're dealing with, how do you develop effective countermeasures?
How do you develop effective safeguards?
So she was like, you should do that.
Like, you kind of have a responsibility to do that.
So now Alex is in a kind of an odd position.
It's a little weird.
It's a little weird.
Like feeling maybe we have a responsibility to do these things now that are creepier, because, I don't know, so we can see, like, what the limits are, and we can talk to people about that openly, instead of somebody just going and doing it and hiding it away. I don't know.
I don't know either, but I do understand this argument that it's important to figure out the unknowns here. Some of this stuff still feels kind of like science fiction to me, and it's hard to know really how far this tech will advance or how transparent it could make our brains. But I do think there is at least a case here for mapping things out, right?
To understand what the limits of this technology might be so that we can put safeguards in place if we need to.
Nita Farahany is the author of The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology.
If you want to hear more from her, Vox's Sigal Samuel did a great interview with her on The Gray Area podcast.
Look for Your Brain Isn't So Private Anymore.
And Sigal also has a great text piece about mind decoding on our site, Vox.com.
You can find out more about Alex Huth's work by looking up the Huth Lab at the University of Texas at Austin.
This episode was produced by me, Byrd Pinkerton.
It was edited by Brian Resnick and Meredith Hodnott, who also manages our team.
We had sound design and mixing from Christian Ayala and music from Noam Hasenfeld.
Serena Solin checked our facts and Manding Nguyen's favorite fruit is mango.
This podcast and all of Vox is free in part because of gifts from our readers and listeners.
You can go to vox.com slash give to give today.
And if you have thoughts about our show or ideas for episodes that we should do in the future, please email us.
We are at unexplainable at vox.com.
You can also leave us a review.
Both would be very much appreciated.
Unexplainable is part of the Vox Media Podcast Network, and we will be back next week.
This month on Explain It to Me, we're talking about all things wellness.
We spend nearly $2 trillion on things that are supposed to make us well.
Collagen smoothies and cold plunges, Pilates classes, and fitness trackers.
But what does it actually mean to be well?
Why do we want that so badly?
And is all this money really making us healthier and happier?
That's this month on Explain It To Me, presented by Pure Leaf.
Support for this show comes from Capital One.
With the Venture X Business card from Capital One, you earn unlimited double miles on every purchase.
Plus, the Venture X Business card has no preset spending limit, so your purchasing power can adapt to meet your business needs.
Capital One.
What's in your wallet?