Deep Note

“Deep Note” is the iconic sound at the beginning of THX-certified movies.  You can hear it here.  Awesome to the nth degree.

What really fascinates me is the brute-force way Andy Moorer developed it.  Although synthesizers had been on the music scene for more than two decades, Moorer used a mainframe to control multiple oscillators to generate the sound.  The code to manage the oscillators was 20,000 lines, hand-coded over four days.

The story of how it came about is a great read.  I am lifting it from several places on the web, so you can Google “thx deep note” for relevant source references.  I don’t know what the original source was, or I would provide an appropriate reference.

From the source:

There are various theories on the web about how the THX sound was created – some people say it was a Yamaha CS-80, others that it was a Synclavier.  I emailed Andy Moorer to ask how it was really made.  The short answer was “On a big-ass mainframe computer at LucasFilm”. But I thought I should give you the long answer here in full, just because it feels like Andy’s writing his own history for the first time…

I’ve never written the THX story down (nobody ever asked). So, here’s the whole story:

I was working in what was then called the “Lucasfilm Computer Division” that existed from roughly 1980 to 1987 or so. It spawned several companies, including Pixar and Sonic Solutions. I was head of the audio group. In about 1982, we built a large-scale audio processor. This was in the days before DSP chips, so it was quite a massive thing. We called it the ASP (Audio Signal Processor).

At the same time Tom Holman was also working at Lucasfilm. He had developed what is now called the THX sound system. It was to premiere with Lucasfilm’s “Return of the Jedi.” They were making a logo to go before the film. I was asked by the producer of the logo piece to do the sound. He said he wanted “something that comes out of nowhere and gets really, really big!” I allowed as to how I figured I could do something like that.

I set up some synthesis programs for the ASP that made it behave like a huge digital music synthesizer. I used the waveform from a digitized cello tone as the basis waveform for the oscillators. I recall that it had 12 harmonics. I could get about 30 oscillators running in real-time on the device. Then I wrote the “score” for the piece.

The score consists of a C program of about 20,000 lines of code. The output of this program is not the sound itself, but is the sequence of parameters that drives the oscillators on the ASP. That 20,000 lines of code produce about 250,000 lines of statements of the form “set frequency of oscillator X to Y Hertz”.

The oscillators were not simple – they had 1-pole smoothers on both amplitude and frequency. At the beginning, they form a cluster from 200 to 400 Hz. I randomly assigned and poked the frequencies so they drifted up and down in that range. At a certain time (where the producer assured me that the THX logo would start to come into view), I jammed the frequencies of the final chord into the smoothers and set the smoothing time for the time that I was told it would take for the logo to completely materialize on the screen. At the time the logo was supposed to be in full view, I set the smoothing times down to very low values so the frequencies would converge to the frequencies of the big chord (which had been typed in by hand – based on a 150-Hz root), but not converge so precisely that I would lose all the beats between oscillators. All followed by the fade-out. It took about 4 days to program and debug the thing. The sound was produced entirely in real-time on the ASP.
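The description above maps onto surprisingly little code.  Here is a toy sketch in Python of the control logic as Moorer describes it: random drift in a 200–400 Hz cluster, then a 1-pole smoother pulling each oscillator toward a note of the final chord.  This is not his program, and every constant (the smoothing coefficient, the tick counts, and the chord intervals on the 150-Hz root) is my guess:

```python
import random

# Toy reconstruction of the Deep Note control logic (all constants guessed).
N_OSC = 30                                        # "about 30 oscillators"
CHORD = [150.0 * m for m in (1, 2, 3, 4, 6, 8)]   # chord on a 150 Hz root (intervals assumed)

def one_pole(current, target, coeff):
    """1-pole smoother: move a fraction of the way toward the target."""
    return current + coeff * (target - current)

freqs = [random.uniform(200.0, 400.0) for _ in range(N_OSC)]
script = []                       # the "score": one statement per oscillator per tick

for tick in range(300):           # arbitrary number of control ticks
    converging = tick > 150       # at some point, jam in the chord targets
    for i in range(N_OSC):
        if converging:
            target = CHORD[i % len(CHORD)]
            freqs[i] = one_pole(freqs[i], target, 0.1)
        else:
            # random drift, clamped to the 200-400 Hz cluster
            freqs[i] = min(400.0, max(200.0, freqs[i] + random.uniform(-2.0, 2.0)))
        script.append(f"set frequency of oscillator {i} to {freqs[i]:.2f} Hertz")
```

The real score also smoothed amplitudes, randomized the smoothing times, and deliberately stopped short of perfect convergence to keep the beats between oscillators; none of that subtlety is captured here.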

When we went to sync up the sound with the video (which I hadn’t seen yet), we discovered that the timings were all different. I readjusted the times, generated a new score, and in ten minutes, we had the sound synced up with the video perfectly.

There are many, many random numbers involved in the score for the piece. Every time I ran the C-program, it produced a new “performance” of the piece. The one we chose had that conspicuous descending tone that everybody liked. It just happened to end up real loud in that version.

Some months after the piece was released (along with “Return of the Jedi”) they lost the original recording. I recreated the piece for them, but they kept complaining that it didn’t sound the same. Since my random-number generators were keyed on the time and date, I couldn’t reproduce the score of the performance that they liked. I finally found the original version and everybody was happy.

The ASP was decommissioned in 1986 and later sold for scrap.

Lucasfilm recently (2015) resynthesized the THX Deep Note using “modern” synth technology and mixed it in current cinema audio formats at the historic Skywalker Ranch. Due to technology limitations of the time, the original piece is a four-channel recording that has been adapted over the years for various trailers in up to a 5.1 mix. Without the technical hurdles he faced thirty years ago, Andy Moorer has developed variations of the ‘THX Deep Note’ in three different lengths (30 seconds, 45 seconds, and 60 seconds), all in stereo, 5.1, 7.1 and Atmos mixes. Faithful to the original character, the resulting audio is intensely more complex, taking the audience on an epic sensory journey unlike anything they’ve experienced before.

Waves Q-Clone

I have been using Waves Q-Clone for about a year now, and thought it was time for a blog post.

As I understand it, Waves developed this plug-in some time ago (at least 10 years).  I’m not sure it ever caught on, but it sure is a useful tool.  When it originally came out, it cost more than $1k; now it can be purchased for under $100.  I bought it for $49, though it typically sells for $99.

Here is a link to the Q-Clone manual for reference and clarification.

Q-Clone enables you to model hardware EQ and apply those models to your audio track.  For example, if you have a guitar track that needs EQ, you can send the track through the hardware, tune the hardware, and  capture the applied equalization in Q-Clone.  That Q-Clone “impulse” can be used on the relevant track or saved and reused subsequently on other tracks.

Functionally, there are two components — Q-Clone and Q-Capture.  Q-Capture is loaded on its own track, sends an impulse through the signal chain, receives the impulse back, and models the equalization.  Q-Clone is loaded into a track as an insert, and the impulse/model is loaded into the plugin.  The impulse is fixed and cannot be tweaked without resampling.
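I don’t know what Q-Clone actually does under the hood, but the general capture-and-apply idea is simple for a linear EQ: its impulse response fully describes it, and convolving a track with that response applies the curve.  A rough sketch of the concept; the “hardware” here is a stand-in one-pole filter, not a real unit:

```python
import numpy as np

# Sketch of impulse-based EQ modelling (not Waves' actual algorithm).
# A linear, time-invariant EQ is fully described by its impulse response:
# feed an impulse through the hardware, record what comes back, then
# convolve any track with it.

def capture_impulse(hardware_eq, length=64):
    """Send a unit impulse through the (here simulated) hardware chain."""
    impulse = np.zeros(length)
    impulse[0] = 1.0
    return hardware_eq(impulse)

def apply_model(track, impulse_response):
    """Apply the captured EQ to a track by convolution."""
    return np.convolve(track, impulse_response)[: len(track)]

def fake_hardware(x, a=0.3):
    """Stand-in 'hardware': a crude one-pole low-pass, for demonstration only."""
    y = np.zeros_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = a * s + (1 - a) * acc
        y[i] = acc
    return y

ir = capture_impulse(fake_hardware)
processed = apply_model(np.random.randn(1000), ir)
```

This also illustrates why a compressor in the chain defeats the scheme: convolution can only reproduce linear behavior, so any level-dependent character simply is not in the captured impulse.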

I really like Q-Clone, and use it continuously.  For hardware EQ, I have a Manley Massive Passive and a Crane Song IBIS.  I prefer hardware to software in all situations.  Hardware has an ambience and 3D depth that software lacks.  I have not tried software emulations of the Massive Passive, and don’t really care to.

I find the Q-Clone mimics the tone of the hardware very well, but lacks the musicality, or extra dimensionality.  This makes sense, as the EQ settings are really just math.  The extra secret sauce must be harmonics created in the signal path that are not apparent to Q-Clone.  This is probably what we refer to as “character”.  Q-Clone models only EQ, so if you have something like a compressor in the signal path, it does not model the compression, but only the resulting EQ.  Most compressors do impart a sound, so this would affect the Q-Clone model.

While I prefer the sound of hardware to software EQ, what I really like about using Q-Clone against hardware is turning real knobs on the hardware.  More importantly, hardware designers have a philosophy about how the hardware works.  In the case of EQ, there are multiple bands that interact and influence each other.  Frequency and bandwidth overlap.  So it’s not simply a question of “turn this knob and EQ band X will increase.”  Q-Clone is indifferent to how the EQ gets to where it is; it only models the result.  But how it gets there is largely what differentiates hardware.

I use Q-Clone to set up my EQ for track recording.  When I record my own playing, I don’t have the luxury of tweaking the EQ from the control room.  So I record a scratch track with flat EQ, run the track back through Q-Clone, monitoring within the context of the mix, set up the EQ, and THEN record the final track through the hardware.  This works really well for me.  In the process, I always save the model which I sometimes reuse.

To that end, here are some Q-Clone profiles I created, mostly using the Massive Passive.

To summarize, Q-Clone works really well for me, given a substantial investment in great hardware EQ.  If you don’t have the hardware, it will be hard to find libraries of impulse samples.  The product comes with some impulses, but because they are static, they would likely be of limited use.

Removing Incidental Noise

Incidental noise on a track is always a problem.  In a recent recording session, my acoustic guitarist shifted a couple times on a creaky chair at a particularly inopportune time — right at the end of a track on the final chord.  Here is what it sounded like:

Removing the offending noises with time-based edits (i.e. cutting the piece out, muting the section, or using parametric EQ to diminish the sound) would result in an audible glitch.

Using Sony SpectraLayers, I was able to isolate the sound and remove it without affecting the rest of the track.

Here is a screen capture of the creaks:


And closer detail:


By selecting the offending bits, copying them to a new layer, and reversing the phase of the layer, the sounds are cancelled out of the resulting audio:
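The cancellation is just arithmetic.  A miniature of the trick, assuming the copied layer lines up with the original sample-for-sample:

```python
import numpy as np

# Phase-cancellation in miniature: a track containing a wanted signal
# plus an isolated noise, a copy of just the noise with its polarity
# flipped, and the sum of the two layers.
t = np.linspace(0, 1, 44100, endpoint=False)
music = np.sin(2 * np.pi * 220 * t)          # the part we want to keep
creak = np.zeros_like(t)
creak[10000:12000] = 0.5 * np.sin(2 * np.pi * 1500 * t[10000:12000])

track = music + creak                         # the recorded take
layer = -creak                                # selected creak, phase reversed
fixed = track + layer                         # creak cancels, music survives
```

The hard part, of course, is what SpectraLayers does for you: isolating just the creak in the first place, since in a real take it overlaps the notes in both time and frequency.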


Here is the before and after audio:

Friggin’ amazing!

p.s., per request, here is a link to the original wav file, prior to the “fix”.

Pitch tuning

I would be surprised if there is a single major release that does not have pitch tuned vocals, even if just a tweak here and there.  Some singers are able to exist only because of pitch tuning and killer videos.

In the “old days,” before pitch tuning (circa 2000), vocalists had to hit the notes with their own vocal cords or find a different career.  According to Wikipedia, Auto-Tune was first deployed as an effect on Cher’s 1998 song “Believe.”  Once in the hands of producers and engineers, it rapidly proliferated, and it is now an essential tool in any pro or serious amateur studio.

The Wikipedia entry is a good read, and talks about musicians protesting tuned vocals.  Frankly, I would be surprised if anyone protesting has not had some tuning done to smooth out a vocal part they missed.

In my opinion, as with much technology, pitch tuning has diminished the art.  A mediocre vocalist can “sing” in perfect pitch.  Style supersedes substance.  But, pitch tuning is deeply and permanently embedded in music culture.  So, like it or not, live with it.

Notwithstanding, I recently gave a presentation on recording techniques, which took me briefly onto the topic of pitch tuning.  I decided to put up a blog post because most non-recording musicians do not understand how it works.  As an example, I provided the following screen capture to visualize a vocal line as recorded by the singer.


(click on the image for a larger view) The column on the far left shows the perfect-pitch notes (e.g. A, A#, B, C …).  The alternating horizontal bands across the image are the pitches, essentially the black and white keys on the piano.  Top to bottom is actually a continuous range of frequencies, conveniently grouped into the alternating bands (e.g. the center of middle A = 440 Hz, middle C = 261.63 Hz).

The blobs in the middle are the notes that the singer sang.  The fatter they are, the louder they are.  Length is the duration of the note.

As you can see, some of the notes sit right in the middle of the band, whereas some straddle two bands.  The notes between bands are the ones that need to be corrected.  Correcting them is dead simple — just drag the note to its optimal position.  Bazinga – perfect pitch.  It’s crazy how easy it is.
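Under the hood, dragging a note to the center of a band is just snapping its frequency to the nearest equal-tempered pitch.  A sketch of that math, assuming A4 = 440 Hz:

```python
import math

def snap_to_pitch(freq_hz, a4=440.0):
    """Snap a frequency to the nearest equal-tempered semitone.

    Each semitone is a factor of 2**(1/12), so we count semitones
    from A4, round to the nearest whole one, and convert back.
    """
    semitones = round(12 * math.log2(freq_hz / a4))
    return a4 * 2 ** (semitones / 12)

snap_to_pitch(450.0)   # a slightly sharp A4 snaps back to 440.0
snap_to_pitch(262.0)   # a slightly sharp middle C snaps to ~261.63
```

Real pitch correctors are far gentler than this: they preserve vibrato, glide between notes, and apply only a fraction of the correction, which is why a full hard snap is what produces the famous robotic “Cher effect.”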

Plus, you can shorten or lengthen a note, make it louder or softer, smooth out the modulation or quiver, etc.  Perfect notes.

The software will allow you to take much more creative liberties with the notes, such as copying and stacking to create harmonies, or resetting to extreme positions, such as full octaves.  However, the more extreme you go, the more you introduce residual noise, aka artifacts, which are not desirable.

The technology will improve so as to enable cheap, real-time, zero-latency pitch tuning, i.e. when you sing into the cheap little karaoke box you bought, your pitch will be perfect.  Amazing, yes?

Open tuned chords

Just a quick clarification.  I am not referring to open tunings, in the sense of playing a song in a guitar tuned to open-G, open-D, DADGAD etc.  I am talking about tuning a guitar to an explicit open tuned chord and playing it, if that makes sense.  Let me explain.

A bit of context.  I remember reading an interview with producer Mutt Lange.  He was talking about some recording techniques, and mentioned recording a song where each chord was played on a guitar tuned to that chord.  So, a sequence of chords, say Am – G – F – D7, would be played with the guitar only playing the chord in the sequence for which it was tuned.  Awkward, but intriguing.

His thinking was that open strings sound different from fretted strings, which is definitely true and easy to validate.  Getting it all right in a song with a lot of quick changes would be a challenge, but a portion of a song in which each chord lasts a couple of beats at a medium tempo would be easy to do.

So, I decided to give it a try.  I tuned a strat to Am, and capoed on the second fret to give me a Bm (so not totally open, but I didn’t want to tune up to a Bm).  Here is the chord with a bit of vibrato:

And the chord in context, in my song A Delicate Balance:

I like the sound, and intend to use it again.

And for completeness on the topic of open tunings, I really like playing in them.  I regularly play in open-G, open-D, DADGAD, drop-D and double-drop-D.  Definitely something you should try.

Backwards tracks

One of the sounds that I have always been fascinated with is backward tracks.  Backward tracking is a tape technique that has been around as long as tape itself, but it only became popular in the fifties, according to Wikipedia.  Being the Beatles fan that I am, my first experience was on Revolver in 1966.

The technique is simple.  Flip the tape over so the feed reel becomes the take-up reel.   In the digital world, DAWs and audio editing tools allow you to reverse the track.
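At the sample level, reversing is literally reading the array back to front.  A sketch for a mono 16-bit WAV, using only the Python standard library plus NumPy:

```python
import wave
import numpy as np

def reverse_wav(in_path, out_path):
    """Reverse a mono 16-bit WAV file, sample for sample."""
    with wave.open(in_path, "rb") as f:
        params = f.getparams()
        samples = np.frombuffer(f.readframes(f.getnframes()), dtype=np.int16)
    with wave.open(out_path, "wb") as f:
        f.setparams(params)
        f.writeframes(samples[::-1].tobytes())
```

For stereo or 24-bit files the framing is fiddlier (you must reverse whole frames, not individual bytes), which is why letting the DAW do it is the sensible route.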

Reversing a track is easy.  Getting it to work within the context of a song is finicky, but not so bad.  I put together a mix of the section where I want the reversed track to show up, paste it into a new project, reverse it, and then record what will become the reversed track against it.  It takes some practice, but the track should follow whatever changes are in the composition, the tempo and such.  A bit strange to listen to and record against, but you get used to it.

Here are some examples of reversed tracks.  First is the track played normally, next is the reversed track:

  • from Revolver, the lead break from “I’m Only Sleeping”
  • also from Revolver, the lead break from “Tomorrow Never Knows.”  I think this sounds fabulous.
  • from Sgt Pepper, the inner groove* at the end of the record after “A Day in the Life.”  It is quite a jumble.  It is also sped up.  I slowed it to 74% of the original.
  • The last recording I did, “A Delicate Balance” uses a backward guitar track, panning left and right.  I included the intro to that song.

*additional notes on the inner groove track.

On the original inner groove track, they seem to be saying, “I never could see any other way.”  On the reversed and slowed down track, you can clearly hear the vocal saying “and we’ll all be there to seek your pleasure.”  It sounds like they recorded one word at a time and then put them together to form a sentence.

I think what they did was record the initial track, “I never could see any other way,” played it back in reverse, and decided it sounded like “and we’ll all be there to seek your pleasure.”  They then tracked that phrase at a slower speed, reversed it, and played it back at normal speed.

One other funny thing about the Sgt Pepper inner groove track is a high pitch just before the inner groove begins playing.  You can see it here:


John thought it would be amusing to include something that only dogs could hear, so they added a 16k sine wave.   The vast majority of humans cannot hear that high.  But on the slowed track, the pitch is now quite audible at about 10k.  You can hear it at the tail end of the reversed track.

Mid-side recording

I was recently wondering about mid-side recording, and decided to try it out.

The technique uses a figure-8 microphone placed with its “null” side facing the source, so the front and rear lobes of the mic face left and right, perpendicular to the source.  This channel is recorded in mono, but it captures the left-minus-right difference — essentially the stereo information.

The file is then copied and phase reversed.  Placing the two files in a DAW and hard panning them left and right, the resulting stereo output is wide, but the middle is empty because of the phase cancellation between the two identical, phase-reversed tracks.

Adding another track, e.g. a vocal recorded with a cardioid-pattern mic, and mixing it in the middle gives you a complete MS stereo recording.  With an MS decoder applied to the resulting stereo track, you can widen or narrow the stereo soundstage and push the mid channel up or down.
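The decode itself is simple arithmetic: left = mid + side and right = mid − side, with the side level controlling the width.  A sketch:

```python
import numpy as np

def ms_decode(mid, side, width=1.0):
    """Decode mid/side to left/right; width scales the side signal.

    width = 0 collapses to mono (mid only); larger values widen the image.
    """
    left = mid + width * side
    right = mid - width * side
    return left, right

def ms_encode(left, right):
    """The inverse: recover mid and side from a stereo pair."""
    return (left + right) / 2, (left - right) / 2
```

Encoding and decoding are exact inverses, which is why MS is a lossless way to store stereo; the width knob on an MS decoder plug-in is just the scale factor on the side channel.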

It’s a pretty nifty technique that sounds great.  Here is a recording I did using the mid-side technique, of Stephen Stills’ “So Begins the Task”.  There is a lot of presence to the guitar and width across the soundstage.  The vocal sits nicely in the middle, surrounded by the guitar.

Here is a discussion on mid-side on Gearslutz.

For additional reference, Sengpiel Audio has a neat page that visualizes the various techniques for mic placement, e.g. ORTF, XY, Blumlein …


Ooo Eee Oo Ah Ah

A recording technique that has long been used for interesting and amusing effect is to change the speed of the playback.  Faster would raise the pitch, slower would lower it.  In some cases, artists slowed the tape speed down during tracking to make instrument parts easier to play or to hit higher notes, and then sped up for the final mix. Other times they would adjust the tape speed during mastering to get a sound they liked more.  The timbre of the notes changes.  Guitars played back at even slightly faster speeds sound chimier.  Vocals at slightly slower speeds have a deeper resonance.

One of my favorite Beatles songs, which coincidentally used this technique, is “Rain,” where the master was slowed down.  The guitars sound sublime, especially that little guitar break at 2:33.

In thinking about the Beatles’ use of the varispeed technique, it occurred to me that the backing vocals on Magical Mystery Tour sounded like they were pitched upwards.  And indeed they are.  Here are some parts, with the final version followed by the slower playback that would have been used during tracking:

On a tape machine, this was easy to do, as you just needed to flip a switch for preset speeds (typically 30ips, 15ips, 7.5ips and 3.75ips), or change the voltage to the capstan motor (varispeed) for continuous increments.

In the digital world, pitch tuning (Auto-Tune, Melodyne, …) gives you an exceptional ability to change notes.  But this does not sound the same to me, and more extreme changes result in artifacts that sound bad.  Changing the playback speed for recording or mastering takes a couple of extra steps.

The “time stretch” function in your DAW or audio editing software serves the same purpose.  In Wavelab, time stretch lets you define how much to stretch the track (%).  It allows you to keep the pitch constant, or to let the pitch change consistent with the change in track length, i.e. longer stretch would lower the pitch.
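Speed change with pitch following along is just resampling: play the samples back at a different rate, and the pitch scales by the same factor.  A linear-interpolation sketch (rate 0.5 plays at half speed, an octave down; a real DAW would use a much better interpolator):

```python
import numpy as np

def varispeed(samples, rate):
    """Resample so playback is `rate` times faster.

    Pitch scales by `rate` and duration by 1/rate, just like changing
    tape speed: rate 2.0 is twice as fast and an octave up.
    """
    n_out = int(len(samples) / rate)
    positions = np.arange(n_out) * rate          # where to read in the original
    return np.interp(positions, np.arange(len(samples)), samples)
```

This is the digital analogue of the capstan-voltage trick: one operation changes pitch and tempo together, whereas the pitch-constant mode of a time-stretch tool has to do far more work, which is where the artifacts come from.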

All that to say, as a little exercise, I took a run at Alvin and the Chipmunks, to see what the real voice behind the little critters actually sounds like.  Here is the amusing result at half-speed, from the original 1958 recording of Witch Doctor that started it all for Alvin:

Finally, the most famous chord of all time …

… the opening chord of “A Hard Day’s Night” has been successfully dissected.

In this radio interview with Randy Bachman on CBC’s Guitarology program, RB talks about meeting Giles Martin, son of George Martin, at GM’s private studio at Abbey Road. In the studio, GM has access to digitized copies of all the Beatle’s multitrack source tapes.

After pondering what he would like to hear, RB is given soloed, track-by-track playback of “the chord”.  When it is all put together, HE NAILS IT!  Give it a listen …

Here is the breakdown he describes:

  • Track 1: George on Rickenbacker 12-string  GCFACG
  • Track 2: Paul on Bass playing D
  • Track 3: John on 6-string xxDADG

The notes being picked up are:  A-C-D-F-G

  • From a G perspective: 1-2-4-5-b7
  • From an F perspective: 1-2-3-5-6
  • From a D perspective: 1-b3-4-5-b7
  • From a C perspective: 1-2-4-5-6
  • From an A perspective: 1-b3-4-b6-b7

The closest name for it would be a Dm11 or an F6add9.  Whatever you call it, you can’t make the chord sound right with only one hand (chording) and one guitar.
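The interval breakdown above is easy to check mechanically.  A quick sketch:

```python
# Express every sounded note as an interval from each candidate root,
# in semitones, then map semitone counts to interval names.
PITCH = {'C': 0, 'C#': 1, 'D': 2, 'D#': 3, 'E': 4, 'F': 5,
         'F#': 6, 'G': 7, 'G#': 8, 'A': 9, 'A#': 10, 'B': 11}
NAME = {0: '1', 1: 'b2', 2: '2', 3: 'b3', 4: '3', 5: '4',
        6: 'b5', 7: '5', 8: 'b6', 9: '6', 10: 'b7', 11: '7'}

NOTES = ['A', 'C', 'D', 'F', 'G']    # the notes picked up in the chord

def intervals_from(root):
    semis = sorted((PITCH[n] - PITCH[root]) % 12 for n in NOTES)
    return [NAME[s] for s in semis]

for root in ['G', 'F', 'D', 'C', 'A']:
    print(root, intervals_from(root))
```

Running it reproduces the five perspectives listed above, which is a nice sanity check that no interval was miscounted.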

What a sound. This is TOO COOL!!

Sound engineer humor

One of my favorite sites is Gearslutz, where people, mostly sound-engineer types, talk about … gear. Most of the time, the conversations are about experiences with specific pieces of recording equipment or techniques. Some can be offbeat and really amusing, like this one — He Is…the Most Interesting Gear Slut!

Basically, assembling all the collected wisdom, knowledge and opinion about gear, talent and luck, and packaging it as hyperbole. Here are some samples:

  • He mixed the entire Hotel California record in one day on headphones in a room AT Hotel California and then left without checking out
  • Word clocks sync to him
  • He records a whole band perfectly with one mic, in one take, on one track, on tape — and mixes it to surround sound … telepathically
  • He pronounces Moog correctly
  • He’s so forward thinking that the last time he played guitar was tomorrow
  • He can tune a piano and tuna fish
  • He’s won Grammys for songs he almost worked on
  • He once wrote a concerto for dog whistle
  • He thought he’d made a mistake once, but he was mistaken
  • He can hear, pan, eq and add effects to the sound of one hand clapping
  • At a lecture, he once uttered, “just do it” and walked off the stage. Nike tried to sue him for using the catch phrase, but ended up being sued themselves by him as he had already developed that exact shoe style for a song intro that required someone running into a house. The album was “Nike Runner” and the title song was “Just Do It”. He did however let Nike keep making the shoe pump that he had invented for the compression effect on that intro. It eliminated sock issues by compressing foot sweat.

… well, *I* think they are really funny. 😎

Full-spectrum ears, great gear & a passion for audio