Tune Tech: Distortion, sequencers, Auto-Tune, and more

25m
From electric guitars to samplers to drum machines and beyond, the music we love is only possible thanks to the technology used to create it. In many ways, the history of popular music is really a history of technological innovation. In this episode, we partnered with BandLab to unpack four inventions that changed music forever. Featuring author and journalist Greg Milner.

Follow Dallas on Instagram, TikTok, YouTube and LinkedIn.
Join our community on Reddit and follow us on Facebook.
Sign up for Twenty Thousand Hertz+ to get our entire catalog ad-free.
If you know what this week's mystery sound is, tell us at mystery.20k.org
Visit bandlab.com/download to start creating and sharing music anytime, anywhere.
Buy Greg’s book Perfecting Sound Forever: An Aural History of Recorded Music.
Episode transcript, music, and credits can be found here: https://www.20k.org/episodes/tunetech
Learn more about your ad choices. Visit megaphone.fm/adchoices


Transcript

You're listening to 20,000 Hertz.

What makes a song great?

Of course, the writing, the performance, and the arrangement are all important.

But there's another huge factor that's really easy to miss, the technology behind the music.

In some ways, technology is like an invisible instrument.

That's 20,000 Hertz producer Andrew Anderson.

We don't always notice the role it plays, but without it, songs just don't sound the same.

There are so many examples of new inventions that transformed the sound of music, from magnetic tape to electric guitars, to drum machines, and beyond.

Developments like these can change the course of music history, and sometimes they can even change the world.

Let's get into it.

Music recording began back in the late 1800s, and due to the limits of technology, these recordings sounded pretty rough.

As an example, here's a track from 1888 called The Lost Chord.

But over the next hundred years, recorded music became a closer and closer replication of live sound, thanks to inventions like reel-to-reel tape, multi-track recorders, and high-fidelity microphones like this one.

As time went on, musicians expected their instruments to sound as pristine as possible when captured on record.

Here's a tune by the Benny Goodman Sextet from the early 40s.

By modern standards, it sounds pretty vintage, but you can hear that recording quality had already come a long way since the 1880s.

But then in the 1950s something strange started to happen.

All of a sudden, you had these sounds that were just dirty and messed up.

That's journalist and author Greg Milner.

Greg literally wrote the book on the history of music technology and he says that the 1950s were a turning point.

Musicians just found ways to like mess it up to create sounds that were interesting if not sounds that were actually good based on normal standards of fidelity.

During that time, you had artists like Big Mama Thornton, Howlin' Wolf, and Gene Vincent using abrasive guitar tones that hadn't been heard before.

You ain't nothin' but a hound dog.

Those dirty guitar sounds are some of the first examples of intentional distortion.

Now in music, there are a few different kinds of distortion.

One of the most common is called harmonic distortion, which adds overtones to the original sound.

These extra frequencies make the sound feel richer and more powerful.

Another common type of distortion is clipping.

This is when the signal gets boosted so much that the top of the waveform gets completely flattened.

This makes it sound squashed and harsh, like when you crank the volume all the way up on a cheap speaker.
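If you want to see what clipping actually does to a waveform, here's a minimal Python sketch using NumPy. The tone, gain, and threshold values are arbitrary choices for illustration, not settings from any particular record.

```python
import numpy as np

# A minimal sketch of clipping distortion: generate a clean tone, boost it,
# then flatten everything past the clip threshold. The frequency, gain, and
# threshold are arbitrary values chosen purely for illustration.
sample_rate = 44100
t = np.linspace(0, 1, sample_rate, endpoint=False)
clean = np.sin(2 * np.pi * 220 * t)             # a pure 220 Hz tone

gain = 8.0                                       # boost the signal hard
clipped = np.clip(clean * gain, -1.0, 1.0)       # flatten the tops of the waveform

# The flattened waveform now contains overtones (660 Hz, 1100 Hz, ...) that
# weren't in the original tone -- the "richer, more powerful" sound above.
spectrum = np.abs(np.fft.rfft(clipped))
freqs = np.fft.rfftfreq(len(clipped), d=1 / sample_rate)
strongest = sorted(freqs[np.argsort(spectrum)[-5:]])
print(strongest)                                 # roughly [220, 660, 1100, 1540, 1980]
```

The clean sine has exactly one frequency in it; once its tops are flattened, a whole series of new overtones appears, which is both the "harmonic" and the "clipping" part of the story.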

There is some debate over the first recorded song to use distortion, but the most popular candidate is Ike Turner's Rocket 88, released in 1951.

The legend is that the amplifier fell off the truck on the way to the recording session and it created this messed up sound that Ike Turner really liked.

Stories like this were pretty common back then because at the time there weren't any distortion devices for sale.

So if you wanted that gritty sound you had to improvise.

For example, Dave Davies from The Kinks got creative with his amp to make the distorted sound on You Really Got Me.

Here's Dave in an interview with VH1.

I came across this little amp in the shop, and I just got a razor blade and started to cut the cone of the speaker.

I don't know why.

And I plugged it in and it made

that amazing sound.

But in 1962, Gibson changed the game with the Maestro Fuzz Tone guitar pedal.

It's mellow.

It's raucous.

It's tender.

It's raw.

It's the Maestro Fuzz Tone.

The Fuzz Tone took that broken amp sound and turned it into a pedal that you could switch on and off as needed.

Now, let's listen to some of the unbelievable effects that you can create with the Fuzz Tone.

Wasn't that something?

Well, it was definitely something, but it turned out most guitarists didn't want that sound on their record.

What the Fuzz Tone really needed was a hit song to take it into the big time.

And in 1965, it got exactly that.

Played by Keith Richards using the Fuzz Tone, I Can't Get No Satisfaction took distortion into the mainstream.

But strangely enough, Keith never meant for that guitar line to be used in the final mix.

Instead, he recorded it as a placeholder for horn parts that were supposed to be added later.

Keith thought the distorted guitar sounded gimmicky, but when the rest of the band heard it, they liked it.

So, they took a vote.

All those in favor of keeping the guitar part?

Aye.

Aye.

All those opposed?

Nay.

And the rest is history.

The crunchy tone of Satisfaction sparked a never-ending quest for more and more distortion in rock music.

From Jimi Hendrix

to Black Sabbath

to Slayer.

But of course, distortion isn't just limited to hard rock and metal.

At this point, it's a standard part of a musician's toolkit, whether it's a pop star like Britney Spears,

or a rapper like Tyler the Creator.

So, why does distortion sound so good?

On a scientific level, distortion makes instruments seem louder.

For example, here's a guitar line that's totally clean.

Now, if I play that same part with distortion on it, it sounds louder, even though the average volume is exactly the same.

But more than that, distortion is really about a feeling.

The way I think about it with distortion is it creates friction and traction.

You know, it's a way for the sounds to really take hold.

It's a way that both sounds so right and so wrong.

It doesn't sound the way that music should sound, and in that sense, it becomes this whole new truth of the way music can sound.

Distortion had a massive impact on music because it allowed musicians to create sounds that simply weren't possible before.

But not long after, another invention came to the fore, one that meant musicians technically didn't have to play at all.

The sequencer.

Sequencers come in all shapes and sizes, but basically, they're a type of computer that tells instruments, like synthesizers or drum machines, what to play.

All you have to do is program the notes you want to hear,

the order and speed to play them in,

and then push play.
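As a rough illustration of that idea, here's a toy step sequencer in Python. The pattern, tempo, and note numbers are made up purely for demonstration; a real sequencer would send these triggers to a synth or drum machine instead of printing them.

```python
import time

# A toy step sequencer: a programmed pattern of notes, played back in order
# at a fixed tempo, looping until you stop it (Ctrl+C). The pattern, tempo,
# and note choices are made up for demonstration only.
pattern = [45, 45, 57, 45, 48, 45, 57, 48]    # eight steps, as MIDI note numbers
bpm = 120
step_length = 60 / bpm / 2                     # eighth notes at 120 BPM

def midi_to_hz(note):
    """Convert a MIDI note number to a frequency in hertz."""
    return 440.0 * 2 ** ((note - 69) / 12)

try:
    while True:                                # the loop can "go on forever"
        for note in pattern:
            print(f"trigger note {note} ({midi_to_hz(note):.1f} Hz)")
            time.sleep(step_length)            # hold until the next step
except KeyboardInterrupt:
    pass
```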

It's one of the earliest examples of being able to organize sounds in a way that seems perfect.

You can actually put things together, make things link up in a way that has very little to do with live performance, and you can essentially make something go on forever.

Now, sequencers have actually been around for a really long time.

For example, the famous Big Ben clock at the Houses of Parliament in London plays a pre-programmed sequence of notes every quarter hour, known as the Westminster chimes.

These days it's powered by electricity, but when it first chimed in the 1800s it was basically a sequencer powered by clockwork.

Those self-playing pianos you see in old westerns are also an early type of sequencer.

The data is actually stored on a roll of paper punched full of holes, and that tells the piano which notes to play.

Here's one playing The Entertainer.

However, electronic sequencers didn't come along until the 1950s, and the music that was made with them tended to be, well, pretty weird.

Here's a piece by Raymond Scott from 1962 that he created with a homemade sequencer.

Believe it or not, this was actually meant to help babies fall asleep.

Electronic sequencers first became commercially available in the late 1960s, around the time that synthesizers hit the market.

But at first, they were really expensive, so they weren't used much in pop music.

Here's a rare example from the band Tonto's Expanding Head Band, released in 1971.

It wasn't until the mid-1970s that sequencers became affordable for working musicians, and one of the first people who really put them to good use was an Italian composer and producer named Giorgio Moroder.

Giorgio was a disco pioneer who wrote songs like Donna Summer's 1975 hit, Love to Love You Baby.

But a couple of years later, Giorgio saw a movie that changed his mind about how music should sound.

And that movie was

Here's Giorgio in an interview with The Guardian.

I went to look at the movie of Star Wars, which had a scene called La Cantina, where they supposedly played the music of the future.

And I didn't think it was the music of the future.

It looked like, but it didn't sound like.

So I thought the only way to do it is to do it with the computers, only computers.

Giorgio set about bringing his vision of futuristic music to life.

He started by writing a traditional bass line on a bass guitar, which would have sounded something like this.

Then he took that part and programmed it into a sequencer, which then triggered the notes on a synthesizer.

That meant he could speed it up, so it sounded more like this.

And finally, he added some delay so that each note would be played twice.
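For the curious, here's a rough sketch of that delay step in Python. The delay time and echo level are illustrative guesses, not the settings used on the record.

```python
import numpy as np

# A rough sketch of the delay trick: mix the sequenced part with a delayed
# copy of itself so every note is heard twice. The delay time and echo level
# are illustrative guesses, not the actual settings from the session.
def add_delay(signal, sample_rate, delay_seconds=0.125, echo_level=0.8):
    delay_samples = int(delay_seconds * sample_rate)
    delayed = np.zeros_like(signal)
    delayed[delay_samples:] = signal[:-delay_samples]   # shift a copy later in time
    return signal + echo_level * delayed                # dry note plus its echo
```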

And suddenly it sounded

like, oh, that's a whole new,

that was the key moment.

The song was Donna Summer's I Feel Love, and that precise sequenced groove inspired generations of electronic artists from Eurythmics

to the Chemical Brothers

to LCD Soundsystem.

Today, almost all pop music uses a sequencer in one way or another, whether it's for a drumbeat, a bass line, a synth pattern, or something else.

Basically, if a song is made on a computer, it's almost certainly going to involve a sequencer.

It's so much a part of music, it almost is hard to separate it from music itself.

It's so ingrained in the ethic of music, it's a good example of something that's become so prevalent, it's almost hidden in plain sight.

In a way, sequencers made computers seem more human, because now computers could play music.

But then another invention would do the exact opposite, making humans sound almost like computers.

And while its original intention was to perfect human performance, the end result was something very different.

That's coming up after the bridge.

This is a Beach Boys song called Our Prayer from 1966.

The harmonies are so perfect that they almost sound artificial.

For a long time, this kind of pitch-perfect singing was only possible with years of training and natural talent.

But then in the 1990s, a musician and engineer had an idea that would make this kind of sound, or at least something similar, accessible for everyone.

Andy Hildebrand was a flute player.

He actually went to university on a music scholarship, but then he later earned his doctorate in electrical engineering, and he was doing work for the oil industry.

Andy's job was to use sound waves to find oil underground.

He'd fire very low frequency sound waves at the ocean floor and then listen for the echoes.

By analyzing those echoes, he could then tell if there might be oil or not.

Before long, Andy realized that same technology could be used in music.

Here he is remembering that moment in an interview with the Smithsonian Institution.

I had a luncheon at a trade show with my distributor and my distributor's wife, and we were talking about what project do we do next to make money.

And she says, well, why don't you make me a box where I could sing in tune?

And I said, well, that's a lousy idea.

So I didn't do a thing.

But Andy kept thinking about it and eventually had a change of heart.

About eight months later, I thought, well, that might actually be a good idea.

And I knew exactly how to do it because of my geophysical technologies.

And by the same trade show, 12 months later, I could demonstrate it with a live singer.

It worked in real time.

Andy called his invention Auto-Tune, and before long, almost every major recording studio had his software.

Essentially, Auto-Tune analyzes the pitch of a voice and then raises or lowers it to the exact in-tune pitch.

So if a singer is flat,

you can fix it.

You could even choose how many milliseconds it took to shift the note you selected.

For a rap song, since each word is so short, you could get away with a really fast change.

But for a slow ballad with long, drawn-out notes, you'd set it to a longer time so that the changes would be more subtle.

Technically, the dial could go down to zero milliseconds, meaning an instant pitch change.

But Andy figured no one would want to do that, since it would sound so unnatural.
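To make the idea concrete, here's a heavily simplified Python sketch of pitch correction with a retune-speed control. Real Auto-Tune's pitch detection and resynthesis are far more sophisticated; the frame rate, retune time, and example values below are assumptions made purely for illustration.

```python
import numpy as np

# A heavily simplified sketch of the pitch-correction idea: take a detected
# pitch track (one frequency estimate per frame), snap each value to the
# nearest semitone, then glide toward that target at a chosen retune speed.
def correct_pitch(pitch_hz, frame_rate=100, retune_ms=50):
    pitch_hz = np.asarray(pitch_hz, dtype=float)

    # Convert to MIDI note numbers, round to the nearest semitone, and
    # convert back: that's the "exact in-tune pitch" for each frame.
    midi = 69 + 12 * np.log2(pitch_hz / 440.0)
    target_hz = 440.0 * 2 ** ((np.round(midi) - 69) / 12)

    # A retune time of 0 ms snaps instantly -- the robotic "Cher effect."
    if retune_ms == 0:
        return target_hz

    # Otherwise, move only a fraction of the way toward the target each frame,
    # so long notes get corrected gently and the result sounds more natural.
    alpha = 1.0 - np.exp(-1000.0 / (retune_ms * frame_rate))
    corrected = np.empty_like(target_hz)
    current = pitch_hz[0]
    for i, target in enumerate(target_hz):
        current += alpha * (target - current)
        corrected[i] = current
    return corrected

# Example: a slightly flat A (435 Hz) gets pulled up toward 440 Hz.
print(correct_pitch([435.0] * 10, retune_ms=50)[-1])
```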

But then in 1998, this happened.

I can remember hearing for the first time just thinking, wait, I know that voice, but that doesn't sound like anyone.

So it's like, who is that?

Believe by Cher was a massive international hit, and it didn't take long for other musicians to start copying that hyper-auto-tuned vocal sound.

It was all over Kanye West's album 808s & Heartbreak.

It even made the crossover to alternative music with Bon Iver.

And perhaps most famously, T-Pain made that robotic sound his trademark.

In fact, Auto-Tune became so popular that Andy Hildebrand won a Grammy for his invention.

Although, during his acceptance speech, he acknowledged that not everyone loves him.

I uh

But for Greg, those stylized robotic vocal sounds are what make Auto-Tune so interesting.

The people who invented it thought of it as something that would be a way to perfect something that was supposedly imperfect.

But instead, it was used essentially as a musical instrument, right?

As a way to make something completely different and sound sort of crazy.

However, when Auto-Tune is used to make a performance pitch perfect, it can cover up the beautiful imperfections that make a voice unique.

Yeah, I mean, the human voice is amazingly expressive, right?

It's more so maybe than any instrument.

I mean, if you can just like put a little catch in your voice and it changes the emotion of it completely.

And so that's always what I found sort of disappointing about when Auto-Tune took over.

If you take away the real artistic uses of Auto-Tune, you're left with example after example after example of people using it to make vocalists sound almost superhuman, to sound more perfect than they could ever be.

And there's a sort of way in which your brain knows that.

Distortion, sequencers, and Auto-Tune all brought brand new sounds to the masses.

But there's one type of software that truly democratized music making.

The Digital Audio Workstation, commonly known by its acronym, DAW.

Now, digital audio workstation sounds like a pretty technical term, but the idea is simple.

Putting a recording studio inside your computer.

With a DAW, you can record multiple tracks, play with things like panning and EQ, and add effects like chorus or delay.

And you can do it all without ever messing with tape, which can be hard to work with, takes up space, and degrades over time.
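As a bare-bones illustration of working "in the box," here's a tiny Python mock-up of a mix: two mono tracks, each panned, summed onto a stereo bus entirely in memory. The track contents and pan positions are placeholders, not anything from a real session.

```python
import numpy as np

# A bare-bones "in the box" mix: two mono tracks, each panned with
# constant-power panning, summed onto a stereo bus entirely in memory.
# The track contents and pan positions are placeholders for illustration.
sample_rate = 44100
t = np.linspace(0, 2, 2 * sample_rate, endpoint=False)
bass = 0.4 * np.sin(2 * np.pi * 110 * t)        # stand-in for a bass take
lead = 0.3 * np.sin(2 * np.pi * 440 * t)        # stand-in for a lead take

def pan(track, position):
    """Constant-power pan: position runs from -1 (hard left) to +1 (hard right)."""
    angle = (position + 1) * np.pi / 4
    return np.stack([track * np.cos(angle), track * np.sin(angle)], axis=1)

mix = pan(bass, 0.0) + pan(lead, 0.5)            # sum the tracks onto the stereo bus
mix /= np.max(np.abs(mix))                       # normalize so the bus doesn't clip
print(mix.shape)                                 # (88200, 2): two seconds of stereo
```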

DAWs actually have a surprisingly long history, with the first one, the Soundstream, released in 1977.

It could record four channels of digital audio and had built-in effects.

You could even edit the audio using an oscilloscope, which was kind of like an early computer monitor.

One of the first musicians to use the Soundstream was our disco hero from before, Giorgio Moroder.

Here's his 1979 song, E=MC², which was recorded entirely on a Soundstream.

Soon enough, DAWs started to replace traditional recording setups for a number of reasons.

One of them is adaptability.

It does for sounds what the word processor did for writing and for words.

Everything becomes infinitely flexible.

You can build on top of a rough draft.

And that was really powerful.

And most importantly, DAWs made music recording far more accessible.

It put music solely in the mind of the music maker.

It no longer mattered what you could do in a studio because you could come up with ways of doing it at home in your bedroom.

That was, in a way, I think, the last frontier.

Now any space could be a place to make music because physical space didn't even really matter anymore.

By the early 90s, there were quite a few affordable DAWs that could run on your home computer, like Cakewalk, Pro Tools, and Cubase.

Before long, artists like Nine Inch Nails, PJ Harvey, and The Beastie Boys were making entire albums using DAWs, or "in the box" as it's called within the industry.

Here's the song Where It's At from Beck's 1996 album Odelay, which was recorded almost entirely in the home of his producers using a DAW.

But it took a few more years before a song that was recorded, mixed, and mastered entirely in a DAW reached number one on the US Billboard charts.

And that song was...

These days, the majority of music is created entirely in the box.

And that allows artists to really focus on details in a way that wasn't possible in the past.

For example, when Billie Eilish and her brother slash producer Finneas create a song, they actually assemble the main vocal from dozens of takes.

Here they are explaining that process in an interview with David Letterman.

Here is the vocal take for Billie's song Happier Than Ever.

We got up to like 87 takes.

So, pay attention.

When I'm away

from you,

different take, right?

So, this is all one take.

Different take. A different take.

Different take.

Different take.

Really?

Yeah.

Different take.

I wish it wasn't.

Different take.

And you would never know!

And as the internet took over, DAWs opened the window for more and more collaboration.

Today, you can work on a recording, sync it to the cloud, then someone on the other side of the world can pop it open and make their own changes.

For instance, during the pandemic, a group called Trip the Witch recorded an entire album remotely without ever meeting each other in person.

As for the future, the only certainty is that technology will continue to inspire musicians to come up with new sounds.

I think in a way that technology changes music more than music changes technology.

And the reason I say that is because there are many instances of technology changing music in ways that the people who built the technology had no idea were going to happen.

Our relationship with technology is messy and exciting and constantly evolving.

I mean, there's a night and day difference between how we use technology today versus 30 years ago.

And that complex relationship comes through loud and clear in the music we make.

I mean, obviously I'm biased and I would never tell everyone that they should be a music geek, but I think it really behooves people to think about what music sounds like and why it sounds like that and how maybe it sounded like 10, 20, 30, 40, 100 years ago even and why music has changed and what it says about what we want out of not just music but sort of out of life itself.

20,000 Hertz is produced out of the sound design studios of DeFacto Sound.

This episode was written and produced by Andrew Anderson.

It was story edited by Casey Emmerling.

With help from Grace East.

It was sound design and mixed by Justin Hollis.

Thanks to our guest, Greg Milner.

Greg's book, Perfecting Sound Forever, takes an even deeper dive into this topic and is available wherever you buy books.

Thanks also to BandLab for partnering with us on this episode.

To learn more, visit bandlab.com.

I'm Dallas Taylor.

Thanks for listening.