Making Sense: How sound becomes hearing
This is the first episode of our new six-part series, Making Sense.
You can find more of Diana Deutsch’s auditory illusions at https://bit.ly/3Mdh6H4, Matthew Winn's research at http://www.mattwinn.com/Research.html, and Mike Chorost's writing at https://michaelchorost.com
For more, go to http://vox.com/unexplainable
It’s a great place to view show transcripts and read more about the topics on our show.
Also, email us! unexplainable@vox.com
We read every email.
Support Unexplainable by making a financial contribution to Vox! bit.ly/givepodcasts
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Most AI coding tools generate sloppy code that doesn't understand your setup.
Warp is different.
Warp understands your machine, stack, and code base.
It's built for the entire software lifecycle, from prompt to production.
With the powers of a terminal and the interactivity of an IDE, Warp gives you a tight feedback loop with agents so you can prompt, review, edit, and ship production-ready code.
Trusted by over 600,000 developers, including 56% of the Fortune 500.
Try Warp free or unlock Pro for just $5 at warp.dev slash top code.
Support for this show comes from 1Password.
If you're an IT or security pro, managing devices, identities, and applications can feel overwhelming and risky.
Trelica by 1Password helps conquer SaaS sprawl and shadow IT by discovering every app your team uses, managed or not.
Take the first step to better security for your team.
Learn more at onepassword.com slash podcast offer.
That's onepassword.com slash podcast offer.
All lowercase.
For a lot of people, figuring out what you're meant to do with your life is a long, winding process.
But for some lucky ones, a career path becomes clear in an instant.
Well, I've always been very interested in music.
I spent all my time playing the piano and composing and so on.
For Diana Deutsch, that moment happened back in the 50s, but it didn't go exactly how she imagined it.
My music teacher performed on the BBC Third Programme in the mornings.
She was playing piano in a trio and I was asked to be a page turner.
Essentially, Diana would be turning the pages of the sheet music so her teacher wouldn't have to stop playing.
So I went up to BBC House and I was all of 16 at the time and very excited about doing this.
Diana had always dreamed of being a musician, so even just turning pages on the BBC felt like the big time.
What happened was I turned the first page, no problem.
I turned the second page, no problem.
When it came to the third page, unfortunately, my hand jerked and all the pages flew down onto the floor.
The poor lady had to, while playing the piano with one hand, pick up the pieces with the other.
Yeah, it was a terrible experience.
Diana came face to face with her dream and she knew with complete clarity that it wasn't for her.
It certainly made me realize that being a performing musician was probably not a good idea for me.
Instead of aiming for a career as a performer, Diana got into researching the psychology of music, particularly how different people perceive sounds.
And she was one of the first people to study this by generating synthesized tones using enormous mainframe computers.
One day in 1973, she was experimenting with playing two sequences at the same time.
And I had no idea what would happen, but I thought it would be interesting to try.
You can actually hear exactly what Diana heard back then, but only if you're listening on headphones.
So if you have a pair around, now would be a good time to put them in.
I started off with a high tone alternating with a low tone in one ear.
And at the same time, a low tone alternating with a high tone in the other ear.
High low on one side, low high on the other.
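For the curious, the stimulus Diana describes is simple enough to sketch in Python. Her published octave illusion used tones around 400 Hz and 800 Hz, about a quarter second each; those values, and the numpy-only approach here, are illustrative assumptions, not her original lab setup.

```python
import numpy as np

SR = 44100                 # sample rate in Hz
LOW, HIGH = 400.0, 800.0   # an octave apart (assumed values for illustration)
TONE_SEC = 0.25            # duration of each tone

def tone(freq, seconds, sr=SR):
    """A plain sine tone at the given frequency."""
    t = np.arange(int(sr * seconds)) / sr
    return np.sin(2 * np.pi * freq * t)

# Left ear: high, low, high, low...  Right ear: low, high, low, high...
n_tones = 8
left = np.concatenate(
    [tone(HIGH if i % 2 == 0 else LOW, TONE_SEC) for i in range(n_tones)])
right = np.concatenate(
    [tone(LOW if i % 2 == 0 else HIGH, TONE_SEC) for i in range(n_tones)])

# Stack into (samples, 2) so it can be written out as a stereo WAV
stereo = np.stack([left, right], axis=1)
```

Played over headphones, each channel really does alternate high and low; the single-high-tone-on-one-side percept is entirely the brain's doing.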
And what I heard seemed incredible.
I heard a single high tone in my right ear that alternated with a single low tone in the left ear.
Both ears were getting high-low sequences, but she wasn't hearing them in both ears.
She only heard high tones on the right and low tones on the left.
Just as a kind of knee-jerk reaction, I switched the headphones around
and it made no difference to what I perceived.
The high tones remained in my right ear and the low tones remained in my left ear.
If you have headphones on, flip them around.
There's probably no difference.
I went out into the corridor and pulled in as many people as I could.
And by the end of that afternoon, I must have tested, oh, I don't remember how many, but probably dozens of people.
And most of them heard exactly what I heard.
Diana literally couldn't believe it.
I was beside myself.
It seemed to me that, you know, I'd entered another universe or I'd gone crazy or something.
It just seemed that the world had just turned upside down.
I'm Noam Hassenfeld, and this is Making Sense, a new series from Unexplainable about the weird, perplexing, enormous unknowns of our senses.
We're starting by trying to make some sense of sound.
What are we actually hearing when we're hearing?
How much of it is the real world, and how much is constructed in our brain?
All knowledge must come through the senses.
All that we perceive, and all of the awareness of our daily existence.
Light. Double rainbow, oh my god! Sound, listen to me, listen to me. Touch.
Squishes.
Odors.
You.
And tastes.
What are your thoughts concerning the human senses?
As meat and wine are nourishment to the body,
the senses provide nutriment to the soul.
All that we perceive, see, all the awareness.
Hearing, all knowledge must come through the senses.
I have an incredible sense of touch.
All that we perceive, tasting, all the awareness.
Smelling, all knowledge must come through the senses.
Doesn't make sense.
Sound a weave off.
Before we get to all the unknowns, let's start with what we do know about sound.
Sound is rapid changes in air pressure that happen when something is vibrating.
Matthew Winn, audiologist, University of Minnesota.
So you can think of it in the same way that you think of a wave in a pond.
None of the water particles move very far.
They just sort of bob up and down, but they set a whole wave into motion.
And it's like a domino effect moving through space.
This pressure wave travels through the air.
And then, you know, a whole chain of events will set into motion in your ear.
The wave passes through the ear canal.
The eardrum vibrates back and forth.
And a few little bones amplify that vibration, sending it deeper toward the cochlea, this spiral-shaped organ in the inner ear that's covered with thousands of hair cells.
The cochlea is where the sensory cells are that pick up the sound and turn it into something the brain can use.
Pressure waves become electrical impulses, which are eventually interpreted as sound.
So this sounds like a long, complicated process, but it's extremely fast.
I mean, there's no sense that's faster than hearing.
Your ear can do this whole process thousands of times per second.
All of that, the pressure waves, the ear vibrations, the transformation to electrical impulses, that's the simple part, the part we know.
The complicated part is pretty much going to take up the rest of this episode.
Because there's a difference between the pressure waves that enter our ears and what we actually end up hearing.
If we actually perceived every different sound that came in, we would be utterly confused.
Take Matthew's voice, for example.
Even in the room that I'm in right now, I'm just in a room in my house, there are echoes all around me, because anytime you have a flat surface, a table, a wall, a computer screen, anything, the sound will in fact reflect off of it.
All of these echoes bouncing around should theoretically make sounds really hard to locate in space.
And so if we hear that and then hear another echo coming from the wall on my right, and then I hear an echo coming off the ceiling and then my table,
how would I know which direction the sound is coming from?
It's coming from all directions.
But our brain has an answer.
Thankfully, our brain knows
sounds only come from one direction, and that's the only way the world makes sense.
In order to function in the real world, our brain makes a guess.
It perceives that first wave of sound coming in, and then for every subsequent reflection of that sound, it's like saying, okay, I can suppress you. Which is why a lot of people aren't even aware that there are echoes: our brain is so good at suppressing them.
Our brain essentially edits our auditory experience.
The way I like to phrase it is that the brain is being nudged in a direction rather than just straight out reading the world.
Which is exactly what Diana stumbled across that day in the 70s when she was flipping her headphones back and forth.
It just seemed that the world had just turned upside down.
These days, auditory illusions aren't as unheard of as they used to be, but Diana's a big reason why.
She's now a psychology professor at UC San Diego, and she's been using computer-generated sounds to study the brain's editor for decades.
With that first illusion she discovered, Diana thinks two parts of your brain are disagreeing, the parts that determine pitch and location.
That's why you hear a high tone on one side and a low tone on the other, even though they're really on both sides.
And after finding that first illusion, Diana couldn't stop thinking about it.
Of course, I didn't sleep much that night.
This can't be the only illusion that does this kind of thing.
Diana started wondering whether she could design other illusions to learn more about the brain's internal machinery.
In the same way as, you know, if a piece of equipment, such as a car, breaks down, you can find out a lot about the way the car works just by fixing what went wrong.
So she started brainstorming.
I was sort of half asleep and I was imagining notes jumping around in space and
by the next morning they had sort of crystallized into what I named the scale illusion.
The scale illusion.
Just like before, this illusion consists of two tone sequences, one in each ear.
So there's one channel alone.
Some high notes, some low notes.
And then the other channel alone.
Some more high notes, some more low notes.
And then you hear them together again.
If you're listening on headphones, you're probably hearing all the high notes on one side and all the low notes on the other, even though those notes are actually jumping from left to right.
That's your brain editing the sounds.
It's separating them to reflect the way the world usually is.
In the real world, one would assume that sounds that are in a higher pitched range are coming from one source and sounds in a lower pitched range are coming from another source.
So that's what the brain assumes is happening here.
The brain reorganizes the sounds in space in accordance with this interpretation.
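As a rough sketch of how a stimulus like this can be built: interleave an ascending and a descending major scale between the two ears, so that each ear's sequence actually jumps around. The note durations and equal-tempered tuning here are assumptions; this illustrates the trick, not Diana's exact stimulus.

```python
import numpy as np

SR = 22050
TONE_SEC = 0.25

def tone(freq, seconds, sr=SR):
    """A plain sine tone at the given frequency."""
    t = np.arange(int(sr * seconds)) / sr
    return np.sin(2 * np.pi * freq * t)

def midi_to_hz(m):
    """Equal-tempered frequency from a MIDI note number (A4 = 440 Hz)."""
    return 440.0 * 2 ** ((m - 69) / 12)

up = [60, 62, 64, 65, 67, 69, 71, 72]   # C major ascending (MIDI numbers)
down = list(reversed(up))               # and descending, simultaneously

left, right = [], []
for i, (u, d) in enumerate(zip(up, down)):
    # Successive notes of each scale alternate between the ears,
    # so neither ear hears a smooth scale on its own.
    if i % 2 == 0:
        left.append(tone(midi_to_hz(u), TONE_SEC))
        right.append(tone(midi_to_hz(d), TONE_SEC))
    else:
        left.append(tone(midi_to_hz(d), TONE_SEC))
        right.append(tone(midi_to_hz(u), TONE_SEC))

stereo = np.stack([np.concatenate(left), np.concatenate(right)], axis=1)
```

Most listeners nevertheless report one smooth high melody on one side and one smooth low melody on the other.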
Just like removing echoes, this kind of brain editing would normally help you make sense of the world.
But Diana's illusion is explicitly designed to fool the brain into making a wrong guess.
And not everyone's brain makes the same guess.
Left-handers as a group are likely to be hearing something different from right-handers as a group.
Right-handers tend to hear high tones on the right side, but for left-handers, it's more complicated.
They're likelier than other people to hear high tones on the left or in even weirder ways.
All of this reorganization, the way the brain edits our hearing to help us navigate the real world, it's sometimes called top-down processing.
Top-down processing occurs when the brain uses expectation, experience, and also various principles of perceptual organization to influence what is perceived.
Instead of bottom-up processing, which is sensing the world and then having that travel up to the brain, top-down processing means that our brain is influencing how we hear.
To some extent, our brain is hearing what we are expecting to hear.
In a sense, a lot of what we perceive isn't actually us hearing sound waves hit our eardrum.
It's a prediction of what those waves should be.
To illustrate this, Diana uses something called the mysterious melody.
This is a well-known tune, but the notes are presented in different octaves.
For all the non-music folks out there, an octave is basically the distance from one note up to the next note with the same name, like from one C to the next C.
In this illusion, the notes stay the same, but which range they're played in changes.
So instead of playing Do Re Mi in the same range with all the notes next to each other,
you could play Do Re Mi with the notes jumping into a different range.
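The "mysterious melody" manipulation itself is just octave scrambling: keep each note's name, move it to a random octave. A tiny sketch, with the melody transcription and the octave range as assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Opening of "Yankee Doodle" as MIDI note numbers
# (one plausible transcription, assumed for illustration)
melody = [67, 67, 69, 71, 67, 71, 69]

def octave_scramble(notes, rng):
    """Keep each note's pitch class but move it to a random octave."""
    scrambled = []
    for n in notes:
        pitch_class = n % 12              # the note name, e.g. G, A, B
        octave = rng.integers(4, 7)       # pick one of three octaves
        scrambled.append(pitch_class + 12 * octave)
    return scrambled

scrambled = octave_scramble(melody, rng)

# The pitch classes (note names) are untouched; only the octaves move.
assert [n % 12 for n in scrambled] == [n % 12 for n in melody]
```

On paper the melody is unchanged, yet listeners generally can't name it until they know what to listen for.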
So Diana takes a well-known tune, doesn't change the melody, just changes the range.
And the question is: can people recognize this melody?
And in fact, people can't recognize the melody.
Now, listen to a simplified version of the same sequence.
In this case, all the notes are in the same octave.
Same range.
You know what it is.
Yeah, indeed, it's Yankee Doodle.
And a lot of times when people go back and listen to the scrambled version, they can hear Yankee Doodle in there.
When you have a frame of reference for what you're hearing, when you have an expectation, it actually changes what you're hearing.
Illusions like this tend to circulate around the internet every once in a while, like this one where depending on which word you're thinking of, you might be able to hear either Laurel or Yanny.
Laurel.
Laurel.
Remember last year, that Laurel versus Yanny thing everybody was going nuts over?
Well, there's a kiddie version of it making the rounds right now.
This is from Jimmy Kimmel's show.
And he starts by pulling up a clip from Sesame Street, of all places.
I move it to follow you.
Move the camera.
Yes, yes, that sounds like an excellent idea.
All right.
And pay attention to this, because tell me if you hear Grover say one of two things.
That sounds like an excellent idea, or that's an effing excellent idea.
Are you ready?
Yes, yes, that sounds like an excellent idea.
Kim, what did you hear?
It's a finger.
You heard that.
Yes, I did.
It's the first time I heard it, I didn't hear a curse word at all.
And then the next 12 times I watched it, the F-word was all I heard.
But just in case you want one more go at it, here's Grover maybe making a lot of parents upset.
Yes, yes, that sounds like an excellent idea.
This type of misperception is true to an extent with all our senses.
We've all seen visual illusions, or you might remember the debate around the dress.
But Diana eventually found that the various ways our brain edits the world, they're not just due to hard-coded differences, like whether you're right or left-handed.
Brain editing can vary from person to person based on life experience.
To prove this, she asked listeners to determine whether a pattern is going up or going down.
For people who know a bit of music theory, this interval is a tritone, which is exactly half of an octave.
So to get from note to note, you travel the same distance whether you're going up or down.
If you don't know that much about music, all you need to know is that this is a particularly ambiguous pattern.
But Diana does something really interesting in her experiment here.
She plays the melody in a bunch of registers at the same time.
So you might have an extra hard time figuring out if it's rising or falling.
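Tones played "in a bunch of registers at the same time" are typically built like Shepard tones: octave-spaced sine components under a fixed spectral envelope, so the register is deliberately ambiguous. Here's a hedged sketch; the envelope shape and base frequency are illustrative assumptions, not Diana's exact stimulus.

```python
import numpy as np

SR = 22050

def shepard_tone(base_hz, seconds=0.5, n_octaves=6, sr=SR):
    """A tone built from octave-spaced sine components under a bell-shaped
    spectral envelope, so only the pitch class is well defined."""
    t = np.arange(int(sr * seconds)) / sr
    center = np.log2(base_hz * 2 ** (n_octaves / 2))  # envelope peak (assumed)
    out = np.zeros_like(t)
    for k in range(n_octaves):
        f = base_hz * 2 ** k
        # Gaussian weighting over log-frequency
        w = np.exp(-0.5 * ((np.log2(f) - center) / 1.5) ** 2)
        out += w * np.sin(2 * np.pi * f * t)
    return out / np.max(np.abs(out))

# D and G sharp: a tritone apart, i.e. exactly half an octave,
# a frequency ratio of sqrt(2)
d = shepard_tone(73.42)
g_sharp = shepard_tone(73.42 * 2 ** 0.5)
pair = np.concatenate([d, g_sharp])
```

Because the pattern ascends and descends by exactly the same distance, whether the pair sounds rising or falling is left entirely to the listener's brain.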
And sure enough, you get huge differences from one individual to the other.
And this is something that really does surprise people.
I hear it going up.
And Diana found that other people hear it going up, but some people hear it going down.
What's truly mind-boggling is that Diana's found that the difference in how two people perceive this pattern, it might come down to where you grew up.
Believe it or not, when Diana compared two groups, people from southern England and people from California, she found that the English people tended to hear this pattern as rising,
whereas the Californians heard that same pattern as falling.
Diana's hypothesis is that based on where you grow up, you tend to hear different pitches as low or high.
It has to do with the pitch range of the speech to which you have been most frequently exposed, particularly in childhood.
So if you hear that first pattern, which goes from the notes D to G sharp as falling,
you probably hear this second pattern, which goes the exact same distance from the notes A to D sharp as rising, or vice versa.
But ultimately, the mechanics of all this are still pretty much a mystery.
Scientists don't really know how all this brain editing happens.
I mean, we know that the brain does that, but we don't really know how.
In a sense, it's almost like we're all listening to a play performed in our heads just for us.
There's a script, the entire world of pressure waves bouncing around, but how we actually hear it all is up to the performers.
In so many ways, our brain dictates how we hear the world.
But even though we don't know exactly how our brain does this, there are times when harnessing that brain magic starts to become a lot more important.
It was like my hearing was pouring out of my head like water out of a cracked jar.
Coming up after the break, one man's quest to hear his favorite piece of music again.
That's next.
Support for this show comes from Robinhood.
Wouldn't it be great to manage your portfolio on one platform?
With Robinhood, not only can you trade individual stocks and ETFs, you can also seamlessly buy and sell crypto at low costs.
Trade all in one place.
Get started now on Robinhood.
Trading crypto involves significant risk.
Crypto trading is offered through an account with Robinhood Crypto LLC.
Robinhood Crypto is licensed to engage in virtual currency business activity by the New York State Department of Financial Services.
Crypto held through Robinhood Crypto is not FDIC insured or SIPC protected.
Investing involves risk, including loss of principal.
Securities trading is offered through an account with Robinhood Financial LLC, member SIPC, a registered broker dealer.
As a founder, you're moving fast towards product market fit, your next round, or your first big enterprise deal.
But with AI accelerating how quickly startups build and ship, security expectations are also coming in faster, and those expectations are higher than ever.
Getting security and compliance right can unlock growth or stall it if you wait too long.
Vanta is a trust management platform that helps businesses automate security and compliance across more than 35 frameworks like SOC2, ISO 27001, HIPAA, and more.
With deep integrations and automated workflows built for fast-moving teams, Vanta gets you audit ready fast and keeps you secure with continuous monitoring as your models, infrastructure, and customers evolve.
That's why fast-growing startups like Langchain, Ryder, and Cursor have all trusted Vanta to build a scalable compliance foundation from the start.
Go to vanta.com slash vox to save $1,000 today through the Vanta for Startups program and join over 10,000 ambitious companies already scaling with Vanta.
That's vanta.com slash vox to save $1,000 for a limited time.
Move the camera!
Yes!
Yes, that sounds like an excellent idea!
Unexplainable, we're back.
And we've been talking about the mysterious way our brain filters, edits, and even reconstructs the world that we hear.
For some people, this kind of brain magic can be interesting to highlight as a party trick, but for others, it can be way more important.
Okay, testing one, two, three, testing.
This is Mike Chorost.
So it's like you take the word chorus and just add a T at the end.
Mike's a science writer who was born with severe hearing loss, but he was able to use hearing aids.
And starting from when he was 15, he became obsessed with Bolero, the famous piece by Maurice Ravel.
It was this riotous melange
with such a fascinating drumbeat underneath it all.
It really thrilled me and fascinated me.
He particularly loved the way the melody would gradually evolve over the course of the piece.
Each repetition is on a higher level, it's louder, the resonance is deeper, until it's finally reached a climax.
So it's a very auditorily overwhelming piece of music.
He would listen to Bolero over and over and over.
It was kind of my piece of music that I would come to again and again and again to test out new hearing aids.
So it's always been an auditory touchstone for me.
And then one day in 2001, the limited hearing he still had started disappearing.
I was standing outside a rent a car,
and I suddenly thought that my batteries had died.
My hearing aid batteries.
Suddenly, the traffic on a nearby highway started sounding different.
It was just that sound that you associate with cars going by.
You know, vroom, vroom, vroom.
But all of a sudden, it sounded more like,
whoo,
who.
As if somebody had dumped a whole bunch of cotton onto the highway.
Pretty soon, Mike found out he was quickly losing what was left of his hearing.
It was like my hearing was pouring out of my head like water out of a cracked jar.
So after about four hours after that initial realization, I was essentially completely deaf.
It was just such a shocking experience.
But Mike was eligible to receive a cochlear implant.
It's a surgically implanted device that can offer a form of hearing in some deaf people.
Many people in the deaf community prefer to communicate using sign language or lip reading rather than using a cochlear implant.
But for some people, especially people who've lost their hearing later in life and want to continue using their native spoken language, cochlear implants can be helpful tools.
The cochlea is this tiny spiral-shaped organ inside your head.
And a cochlear implant is a string of electrodes that's carefully inserted inside that spiral organ.
This is Matthew again, the audiologist who actually works with cochlear implant users to help them understand their experience.
There's this external part that looks like a hearing aid, but is not a hearing aid.
It's a microphone and a computer that analyzes the sound and sends instructions to those electrodes that are inside the ear.
The implant essentially bypasses a lot of the ear.
It directly activates the cochlea, which then passes an electric signal onto the brain.
But cochlear implants don't just reproduce normal hearing.
Mike says that reducing sound to digital ones and zeros and beaming them directly into your brain, it can sound strange.
It was shocking.
It's not at all what I expected.
When Mike's implant was turned on, the first thing he did was listen to his own voice.
And my voice sounded really weirdly high-pitched.
I almost sounded like,
you know, it was that kind of sound.
It was like listening to a demented mouse.
Matthew actually gave me a program he uses as an audiologist to simulate various types of cochlear implant sounds.
So here's a general idea of what it might have sounded like to Mike.
It was very upsetting.
I thought the world would sound pretty much like I heard with hearing aids, just fuzzier.
I was completely unprepared for the huge difference in pitches.
Because of the way the implants are designed, they tend to make everything seem a bit high-pitched.
So when you send a signal to any part of the cochlear implant, the brain will interpret that as a high-pitched sound, even if it's a low pitch.
Which is why everything can sound all mousy.
But the interesting thing is, within just a day or two, I started to hear low pitches again.
And part of that was my brain adapting to it.
My brain was saying, okay, this is my voice.
I know it's supposed to be a low pitch.
However, right now I'm hearing it as a high pitch.
Never mind that.
Because I know it's a low pitch, I'm going to interpret it as a low pitch.
Essentially, Mike's brain was editing the world for him.
So very quickly, my brain started figuring out: okay, the world sounds really weird, but I'm going to try to fit that into my preconception of what the world is supposed to sound like.
He was taking command of his own top-down processing.
So within hours, I stopped sounding like a demented mouse to myself.
And then Mike started training.
I got the audio books of the Winnie the Pooh books.
And I remember the first time I put the tape into the cassette player and played Winnie the Pooh and Some Bees.
I think that's the one.
I couldn't make it out at all.
It was just complete gibberish.
But he also had the physical book.
So he read along with the tape.
So I was able to start matching up the weird input that I was getting with the words on the page that told me what that input meant.
What about a story?
Said Christopher Robin.
Could you very sweetly tell Winnie the Pooh one?
This is what the S sounds like.
This is what the phoneme poo sounds like.
Winnie the Pooh.
So it is a process of remapping.
According to Matthew, this process of brain remapping is a pretty normal experience for cochlear implant users.
Any good audiologist would say to someone, if they're thinking about a cochlear implant, that when you first get it and it first is activated, you probably won't understand much at all.
But over the first six months, maybe the first year, your brain learns to reorganize how it associates sound with meaning.
Training's more accessible these days.
It's certainly not as DIY as it was for Mike 20 years ago.
But this kind of improvement can still be hard to believe.
A lot of the people that I've worked with will say, now when I listen to my spouse, it sounds like her voice, which baffles all of us who work in this field, because if you look at how the ear is being activated, there's no explanation.
I mean, not to be too on the nose, but it's unexplainable, right?
So there's no way that that could possibly be true.
And yet a lot of people say it.
Tweaking settings on the implant does make it work better, but that doesn't account for most of this incredible improvement.
A lot of the success of the cochlear implant is really a testament to how strong the brain is working, rather than a reflection of the high quality of the sound input.
Our brains have an almost uncanny ability to predict language and fill in gaps, even when we hear something muffled or distorted.
But while cochlear implants work pretty well for speech, they don't work nearly as well for music.
Music is just a much more complicated kind of sound.
You need to distinguish melodies and harmonies and textures and most fundamentally, pitches.
And an implant only has a small number of electrodes.
You have to simplify all the frequencies and you can think of it as like pixelating the sound.
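That "pixelating" idea can be sketched with a crude, numpy-only spectral vocoder: split the spectrum into a handful of bands, throw away the fine structure within each band, and refill it with noise. Real cochlear implant simulations, like the ones Matthew uses, track band envelopes over time; this static version is only an assumption-laden illustration of the loss of detail.

```python
import numpy as np

SR = 16000
rng = np.random.default_rng(1)

def vocode(signal, n_channels=8, sr=SR):
    """Crude vocoder sketch: keep only each band's overall energy and
    replace the band's contents with noise at that level."""
    spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(rng.standard_normal(len(signal)))
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    # Log-spaced band edges from 100 Hz to Nyquist (assumed layout)
    edges = np.logspace(np.log10(100), np.log10(sr / 2), n_channels + 1)
    out_spec = np.zeros_like(spec)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (freqs >= lo) & (freqs < hi)
        if not band.any():
            continue
        # Carry over only the band's total energy, not its fine structure.
        level = np.sqrt(np.mean(np.abs(spec[band]) ** 2))
        band_noise = noise_spec[band]
        rms = np.sqrt(np.mean(np.abs(band_noise) ** 2)) + 1e-12
        out_spec[band] = level * band_noise / rms
    return np.fft.irfft(out_spec, n=len(signal))

# A toy "speech-like" input: a 220 Hz carrier with slow amplitude modulation
t = np.arange(SR) / SR
speech_like = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
pixelated = vocode(speech_like)
```

With only eight bands, melody and harmony mostly survive as coarse spectral shape, which is why speech fares better than music.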
Making this even harder, because the cochlea is filled with fluid, it's hard to use electrical pulses to stimulate the exact part that codes for the right frequency.
Instead, the pulses kind of spread out around the part that codes for that frequency.
Let me make an analogy.
Suppose you're playing a note on the piano.
You can be really careful and hit the exact key you want, or you can be kind of crude and put your whole hand down on the piano.
Like you're going to be in the right ballpark of the note, but you're not going to hit the exact note very clearly.
So a cochlear implant is more like putting your whole hand down on the note.
It's not a very precise frequency you're hearing.
When you take all of this into account, translating music with a cochlear implant can seem almost impossible.
The current design of cochlear implants isn't set up really for music.
It's set up to understand speech.
But I'm wanting my Bolero back.
Even though Mike's brain had learned how to edit those high-pitched tinny sounds to understand speech, music still wasn't the same.
It just sounded awful.
I'm like, oh my god, you know.
It was really shocking because I was like, even if it gets twice as good as this, it's still going to be awful.
Even if it gets three times as good as this, it's still going to be awful.
It was really bad.
Mike upgraded the hardware of his cochlear implant.
He upgraded the software.
He even volunteered as a guinea pig for some tests on new equipment.
So I would put on a set of headphones.
I would hear the set of beeps and boops.
I'm like, okay, which song is that?
I was like, I don't know.
It's like, could anybody know?
And for me, this was a very deeply frustrating kind of experiment, because I know Twinkle, Twinkle, Little Star.
I was like, that doesn't sound like Twinkle, Twinkle, Little Star to me.
How could this sound like Twinkle, Twinkle, Little Star to anybody else?
Researchers I spoke to told me that some cochlear implant users just don't enjoy music that much.
It's certainly harder to get used to than speech.
And because patients are often told to focus more on improving listening to speech, music can get left by the wayside.
But appreciating music through an implant can sometimes be presented as an insurmountable obstacle.
You can see this in the movie The Sound of Metal, where a musician gets a cochlear implant after losing his hearing.
and then goes to this performance, listening to the song you're hearing right now.
In this scene, the movie shows what other people at the performance hear, and then it gradually shifts perspectives to highlight what the main character hears through his cochlear implant.
The performance is so upsetting for the main character that he ultimately takes his processor off.
He essentially decides not to use his implant anymore.
You can find a lot of simulations online like this.
So I asked Mike if these kinds of simulations, or even ones like the simulations I created of a distorted voice or a distorted Bolero for this episode, seem like accurate representations of what music sounds like through an implant.
I think you have to be extremely careful when listening to these simulations because
basically what those simulations are telling you is: this is what the software is giving to the user.
Okay.
That's not the same thing as what the user hears.
These are two very different things.
You know, when I listen to these simulations, and I have listened to them, it does sound a lot like what I heard on day one.
It does not sound like what I hear in year 20.
For Mike, this was a combination of training himself with careful listening, but also tweaking the settings of the implant.
Because with a lot of practice and effort and time, the experience of listening to music can improve.
Yeah, I would listen to music over and over again,
and I would try tweaking different settings.
And I would go to my audiologist and I would say,
these pitches sound really fuzzy to me.
Can you do something about that?
And so she would tweak how much electricity went to different electrodes.
And so this was an iterative process that went on and is still going on.
After years of upgrades, tweaks, training, Mike's noticed some real improvement, but not for all music.
Most of the music that I enjoy is music that I heard with hearing aids.
It's familiar to me.
Mike does listen to some new music, but preferring familiar music, it's a pattern that Matthew notices with his patients too.
And I think it's a testament to the brain filling in those gaps, conjuring the memory of what the sound quality should be.
The implant sort of gives you just enough that the brain can put together the whole puzzle.
And of course, Mike is listening to Bolero again.
Well, it sounds good.
I really enjoy it.
But there are things that I know that I'm missing.
I know that I'm still not getting some of that intensity and the purity where the music is reaching for a crescendo in each of its iterations.
So I know I'm missing that.
In a sense, Bolero is so familiar, it's almost like language for Mike.
Bolero sounds really good to me because I know exactly what it's supposed to sound like.
This new Bolero is certainly different from the version he remembers, but Mike loves the new version.
Even though the input I'm getting of Bolero is incomplete and I can hear that it's incomplete, it is still a source of pleasure to me.
Ultimately, we don't really know exactly how our brain is able to do this.
It can almost feel like magic, how it filters out echoes, how it shifts high tones to one ear and low tones to the other, how it can take a tinny, noisy input and rebuild a new version of Bolero.
We do this very complex calculation, but I don't think that we really know exactly how it's done.
Psychologist Diana Deutsch again.
There are an awful lot of things about our hearing that we don't understand,
and what we hear is often quite different from what in point of fact is being presented.
But we do know that the brain is constantly editing, shaping and building the world that we hear.
Our brain, our life experience, our familiarity with a piece of music, it all shapes how we hear and what we hear, which raises a pretty fundamental question.
When an orchestra performs a symphony, what is the real music?
Is it in the mind of the composer?
Or is it in the mind of the conductor who has worked long hours to shape the orchestral performance?
Is it in the mind of someone in the audience who's never heard it before and doesn't know what to expect?
And the answer is surely that there's no one real version of the music, but many.
And each one is shaped by the knowledge and expectations that listeners bring to their experiences.
It's the idea that, to a very real extent, our brains conjure different individual realities inside our heads.
On the one hand, it's a clear reminder to be humble, and not just about hearing.
No matter how certain we are, what we perceive isn't unfiltered reality.
So it's worth questioning ourselves at our most stubborn moments.
At the same time though, how cool are brains?
I know they're this perfect reminder of our own subjectivity and humility, but I also just can't get over the fact that our brain puts on this fireworks show every day,
and that a lot of people using a cochlear implant can tap into this almost magic ability to translate a few electrodes into this new, emotionally satisfying experience without scientists really knowing how the whole thing works.
There's so much we still don't understand about the brain and how it tries to make sense of the world, and it just makes me that much more excited for everything we're going to learn along the way.
This is just the first episode of our Making Sense series.
Next week, touch and its evil twin, pain.
Think of yourself if you have a toothache or some other problem: if someone holds your hand, or pats your back, or gives you a hug, that actually relieves it.
Gentle human touch can be very good.
After next week, we'll be talking about more perplexing sense mysteries like how scientists still don't really know how smell works, how many tastes there could be, why some people can't see images in their heads, and even a sixth sense.
This episode was edited by Katherine Wells, Meredith Hodnot, and Brian Resnick.
It was produced and scored by me, Noam Hasenfeld.
Cristian Ayala handled the mixing and sound design with an ear from Efim Shapiro.
Richard Sima checked the facts.
Tori Dominguez is our audio fellow.
Manding Nguyen is keeping things sunny.
And Byrd Pinkerton is dreaming of bioluminescence.
If you want to check out more about Diana Deutsch and auditory illusions, we've got a link in our show description where you can find more illusions to listen to and a ton of info about the illusions she's discovered.
To read more about some of the topics we cover on our show or to find episode transcripts, check out our site at vox.com/unexplainable.
And if you have thoughts about the show, you can always email us at unexplainable@vox.com.
Or you could leave us a review or a rating, which we would love.
Unexplainable is part of the Vox Media Podcast Network, and we'll be back in touch with episode two of our Sense series next week.
This month on Explain It To Me, we're talking about all things wellness.
We spend nearly $2 trillion on things that are supposed to make us well.
Collagen smoothies and cold plunges, Pilates classes, and fitness trackers.
But what does it actually mean to be well?
Why do we want that so badly?
And is all this money really making us healthier and happier?
That's this month on Explain It To Me, presented by Pure Leaf.
Support for the show comes from Mercury.
What if banking did more?
Because to you, it's more than an invoice.
It's your hard work becoming revenue.
It's more than a wire.
It's payroll for your team.
It's more than a deposit.
It's landing your fundraise.
The truth is, banking can do more.
Mercury brings all the ways you use money into a single product that feels extraordinary to use.
Visit mercury.com to join over 200,000 entrepreneurs who use Mercury to do more for their business.
Mercury, banking that does more.