Hello, All --
Rather than dissect a song, I want to talk about ears and how they interpret aural messages. I will be using "In My Time of Dying" to illustrate my examples. All time references come from the box set version of IMTOD.
First, I want you to listen to the opening guitar figure. I hear three distinct tones happening: the "main" one, the really thick, phased sound that's slightly left of center; a cleaner, "ring-y" sort of sound on the far left; and a very distorted, buzzy sound on the far right. All three are almost certainly the same performance captured with different sounds, which is a great way to create a wide stereo image. Usually, when Page doubles a part, he actually performs a second take of it with a different sound. Here he has split his guitar signal, sent it to different destinations, and recorded each of them. Neat.
Consider this: suppose you plug a Y-cord into the guitar and send it to two mixer channels, then record the two identical sounds. Then suppose you pan those two tracks hard left and right. Do you end up with a stereo sound? No, actually, because the signals are identical in depth (i.e., reverb) and tone. No matter how you pan those two tracks, in your head you're only going to hear one guitar source. If this is confusing, think of it this way: you're standing on the sidewalk and a friend of yours calls to you. His or her voice reaches both of your ears, but that voice obviously comes from only one mouth. You hear that voice coming from a certain direction because your ears receive the sound at slightly different times and at different volumes, which your brain has learned to interpret as directional.
I'm sure you've all noticed that your ears have lots of ridges and curves and knobbly bits that are loads of fun to scrub with Q-tips. These ridges and curves reflect sounds and change their phase relationships and give your brain a ton of information to work with so your brain can tell where a sound came from. Very useful for staying alive in a world full of threatening beasties.
Ever notice how babies react to sounds? Suppose you have a young baby crawling on the carpet in the living room. He's drooling, he's cooing, he's just being a baby. If you're on one side of the baby and you make an interesting sound, the baby will likely look over in your direction. But if you're behind him, and he's facing away from you, the baby will look around for the source of the sound. The baby hasn't yet learned to interpret sounds coming from behind.
Obviously, our brains learn this kind of thing without much conscious help from us. Over time, enough sounds reach you from behind, and you turn and finally locate them, and your brain thinks, "Aha! Behind me again!" And once you've identified something coming from behind, your brain goes back and realizes that the sound was different in tone (and phase relationship) from a sound coming from the front, because the sound bouncing off the curves and knobs in your ear picked up phase differences that sounds from the front don't have.
When I was a freshman at UCLA, a friend of mine was both a musician and an ROTC cadet. One day he was at the range, practice-firing an M-16. He must've forgotten his earplugs or something, because the loud bangs permanently degraded the hearing in his right ear. Big bummer, especially for a talented musician. Let that be a lesson to any would-be ROTC candidates. Anyway, a couple of days after this, I saw him on campus and called out to him, and he turned around every which way looking for me. Like a baby, his brain hadn't yet learned to deal with his new hearing situation.
Still with me? Good.
Headphones screw up your ears' ability to determine direction in three dimensions because the sound is pumped directly into your ear canals. Your curves and knobs don't get a chance to affect the sound -- or rather, every sound coming in is affected the same way, which negates the whole process. So, with headphones on, sounds pan across a two-dimensional field from left to right. Sounds can be given depth by using reverb, which your brain (or mine, anyway) has decided to interpret as being in front of me. What I mean is that the reverb depth is a field to the front, not to the back. This might be different for some of you. I can sometimes "gestalt" the sound and send it behind me, but that takes effort that doesn't add anything to the music, so why bother? And no matter which direction you perceive the reverb going, you'll never hear music coming from above you or below you while wearing headphones.
Now, getting back to our two-identical-tracks example, each of your ears is hearing exactly the same sound. Because your ears have only relative volume levels to work with, you're going to hear this as a single source panned somewhere in the stereo field. If the left and right channels are at equal volumes, you'll hear it in the middle of your head. If the right side is louder, the sound will be placed towards the right side of your head, and so on.
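If you like seeing this stuff spelled out, here's a toy Python sketch of panning by relative volume alone -- the names and the constant-power pan law are my choices for illustration, nothing to do with anybody's actual gear:

```python
import math

def pan(sample, position):
    """Place a mono sample in the stereo field by relative volume alone.
    position: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    A constant-power pan law keeps the perceived loudness steady."""
    angle = (position + 1.0) * math.pi / 4.0  # maps -1..+1 onto 0..pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

# Equal levels in both channels: the source sits in the middle of your head.
center_l, center_r = pan(1.0, 0.0)

# All the level in one channel: the source sits hard on that side.
hard_l, hard_r = pan(1.0, 1.0)
```

Equal levels, middle of the head; louder on one side, and the source slides toward that side. That's all the information your ears get from two identical tracks.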
The question is, what happens if you record the same part to two different tracks, but with two different tones? Ahh, well, here it gets a little tricky. I wasn't even sure myself, so I experimented by recording the same part simultaneously with two different sounds, using my four-track. Guess what? Within each of the two sounds are regions of tone that are the same -- and they appear to be in the middle of your head. The parts that are dissimilar appear towards either side. In my experiment, the buzziness of the right side appeared all the way to the right, the sparkle of the clean left side appeared all the way to the left, and in the middle were the common tones.
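There's a tidy way to see why the shared tones pile up in the middle: split the stereo pair into "mid" (what the channels share) and "side" (what differs). Here's a toy Python sketch -- the frequencies and names are made up purely for illustration -- where each channel is a common tone plus something unique. The common tone cancels completely out of the side signal, which is why it images to the center:

```python
import math

RATE = 44100  # assumed sample rate, samples per second

def tone(freq, n):
    """n samples of a freq-Hz sine wave."""
    return [math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

n = 1024
common = tone(440.0, n)       # tone content the two sounds share
left_only = tone(2000.0, n)   # "sparkle" unique to the left channel
right_only = tone(120.0, n)   # "buzz" unique to the right channel

left = [c + u for c, u in zip(common, left_only)]
right = [c + u for c, u in zip(common, right_only)]

# Mid/side split: mid carries everything the channels have in common,
# side carries only the differences between them.
mid = [(l + r) / 2.0 for l, r in zip(left, right)]
side = [(l - r) / 2.0 for l, r in zip(left, right)]
```

The side signal works out to exactly (left_only - right_only) / 2 -- the common tone is gone from it, so the only thing left to sit at the edges is the dissimilar stuff.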
Now, if you took a Strat and split the signal into *three* parts, sent the right side to a Twin Reverb via a compressor, the left side to a Pignose, and the center to a small combo (like a Fender Princeton) by way of a phase shifter, you'd have created the sound of In My Time of Dying.
Surprise! I'll bet you thought I'd forgotten that this was a post about 'Page's Studio tricks.'
Notice how each of the three sounds brings out different characteristics of the notes that Page chooses. The warm center sound emphasizes the lows, the clean Twin Reverb brings out sparkly highs, and the buzzy Pignose brings out some of the quieter parts because it's running flat out and compressing the hell out of the signal.
The verses pretty much remain in this configuration (the left-right-center setup). Notice that in the chorus parts, as at 1:33, the clean Twin Reverb sound is removed to make room for the drums and bass, but another phase-shifted guitar comes in to replace it. And within the quiet verse parts, Page pumps up the reverb to emphasize certain notes, like at 2:32.
An interesting characteristic of phase shifting is that if you phase-shift one signal, a non-phase-shifted counterpart sounds as though it's phased, too. You can hear this during the verse section from 2:34 through 3:00. That clean sound off on the right sounds phased because of its proximity to the actually-phased warm sound next to it.
For a discussion of phase-shifting and other time-domain effects, come back for Page's Studio Tricks VI, which I'll post tomorrow.
As in OTHAFA, WIAWSNB, and Levee, Page makes use of gentle panning. There's a difference here, though. Notice that as the warm, phased sound moves right, it really merges with the Pignose sound, like two water drops coming together as one. Cool. That's what I mean when I say the common tones merge together.
Yes, I know the two sounds are distinct during much of the song, but I think I know why. Page probably applied a very short delay to one of the guitars here -- very short, as in a couple of milliseconds. Your ears, fine instruments that they are, can't distinguish a few milliseconds of delay in isolation. A single sound delayed two or three milliseconds won't sound any different than a sound with no delay. But if you have two identical sounds (or sounds with common tones) and you delay one of them a few milliseconds, your ears *will* hear them as two separate entities. This is a common studio trick, used by everyone from Les Paul onward, to take a single track and make it sound like two. By delaying one sound a few milliseconds and panning the pair hard right and left, your ears hear the sound on the outside of the spectrum, not in the middle. Clever, timeless, and useful.
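The delay itself is trivial to sketch in code. Toy Python again -- 44.1 kHz and 2 ms are my assumed numbers, not anything documented about the session. A few milliseconds is just a few dozen samples of padding: way too short to hear as an echo, but enough to split the pair apart in your head:

```python
RATE = 44100  # assumed sample rate, samples per second

def delayed(signal, ms, rate=RATE):
    """Return a copy of the signal pushed later by ms milliseconds,
    with silence (zeros) filling the gap at the front."""
    pad = int(rate * ms / 1000.0)
    return [0.0] * pad + list(signal)

dry = [0.5] * 100          # stand-in for a recorded guitar track
wet = delayed(dry, 2.0)    # the "doubled" copy, 2 ms (88 samples) late

# Pan dry hard left and wet hard right and you hear two guitars,
# even though there was only ever one performance.
```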
Anyway, as for IMTOD, notice that the guitar piles into the middle during the loud parts, as in 3:51, then back out again at 4:00. Page maintains this pattern throughout this section of the song (and the others like it). This panning gives this part of the song the feeling of a "call and response," even though Page played it continuously. Nifty.
Notice how combining the two in the middle (piled with one another) changes the way they sound. This is caused by phase-cancellation, which I'll be talking about in Studio Tricks VI.
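You can see phase cancellation in miniature with one more toy Python sketch. I picked 441 Hz because its period is exactly 100 samples at my assumed 44.1 kHz rate: delay the sine by half a period and the sum cancels to silence; delay it by a whole period and the sum doubles:

```python
import math

RATE = 44100  # assumed sample rate, samples per second

def sine(freq, n, shift=0):
    """n samples of a freq-Hz sine, delayed by shift samples."""
    return [math.sin(2 * math.pi * freq * (i - shift) / RATE)
            for i in range(n)]

freq = 441.0                       # period = exactly 100 samples
a = sine(freq, 400)
half = sine(freq, 400, shift=50)   # half a period late: out of phase
full = sine(freq, 400, shift=100)  # a whole period late: back in phase

cancel = [x + y for x, y in zip(a, half)]  # sums to (near) silence
boost = [x + y for x, y in zip(a, full)]   # sums to double amplitude
```

A real guitar track has energy at lots of frequencies at once, so piling two of them together cancels some frequencies and boosts others -- hence the changed sound when the guitars meet in the middle.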
And that awesome transformation of sound as Page launches into the guitar solos? Same thing: phase cancellation. See part VI tomorrow.
I really doubt that Page sat there thinking about the workings of the human ear when he recorded this, or any other Zeppelin track. I just thought that would be interesting information.
Something McCue said to me led me to this: when it comes to recording, the creation of interesting sounds is 90% the ability to envision something cool to the ears and 10% the technical ability to achieve it. Once you know what you're trying to do, the rest is easy. And along the way toward achieving *the* sound you're looking for, happy accidents happen all the time. I would guess (and this is purely a guess) that Page was searching for a good sound for the track by running into the three amps and comparing them, and he tried all three and said, "Cool. Hey, gimme a smoke, mate!"
That goes for me, too. Gotta letter some comic books.