Deep Note

“Deep Note” is the iconic sound at the beginning of THX-certified movies. You can hear it here. Awesome to the nth degree.

What really fascinates me is the brute-force way Andy Moorer developed it. Although synthesizers had been on the music scene for more than two decades, Moorer used a mainframe to control multiple oscillators to generate the sound. The code to manage the oscillators was 20,000 lines, hand-coded over four days.

The story of how it came about is a great read. I am lifting it from several places on the web, so you can Google “thx deep note” to find the relevant sources. I don’t know what the original source was, or I would provide an appropriate reference.

From “musicthing.blogspot.ca”:

There are various theories on the web about how the THX sound was created – some people say it was a Yamaha CS-80, others that it was a Synclavier.  I emailed Andy Moorer to ask how it was really made.  The short answer was “On a big-ass mainframe computer at LucasFilm”. But I thought I should give you the long answer here in full, just because it feels like Andy’s writing his own history for the first time…

I’ve never written the THX story down (nobody ever asked). So, here’s the whole story:

I was working in what was then called the “Lucasfilm Computer Division” that existed from roughly 1980 to 1987 or so. It spawned several companies, including Pixar and Sonic Solutions. I was head of the audio group. In about 1982, we built a large-scale audio processor. This was in the days before DSP chips, so it was quite a massive thing. We called it the ASP (Audio Signal Processor).

At the same time Tom Holman was also working at Lucasfilm. He had developed what is now called the THX sound system. It was to premiere with Lucasfilm’s “Return of the Jedi.” They were making a logo to go before the film. I was asked by the producer of the logo piece to do the sound. He said he wanted “something that comes out of nowhere and gets really, really big!” I allowed as to how I figured I could do something like that.

I set up some synthesis programs for the ASP that made it behave like a huge digital music synthesizer. I used the waveform from a digitized cello tone as the basis waveform for the oscillators. I recall that it had 12 harmonics. I could get about 30 oscillators running in real-time on the device. Then I wrote the “score” for the piece.

The score consists of a C program of about 20,000 lines of code. The output of this program is not the sound itself, but is the sequence of parameters that drives the oscillators on the ASP. That 20,000 lines of code produce about 250,000 lines of statements of the form “set frequency of oscillator X to Y Hertz”.
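To make that concrete, here is a minimal sketch (in C, since the score was a C program) of the general idea: a program that emits parameter statements for the oscillators rather than producing audio itself. The oscillator count, the frequency range, and the output format here are my assumptions for illustration, not Moorer’s actual code.

```c
/* Minimal sketch of a "score" program: it prints a stream of parameter
 * statements that a synthesis engine would interpret, instead of sound.
 * Oscillator count, frequency range, and format are assumptions.
 */
#include <stdio.h>
#include <stdlib.h>

#define NUM_OSC 30   /* roughly the number of oscillators the ASP ran in real time */

int main(void)
{
    for (int osc = 0; osc < NUM_OSC; osc++) {
        /* Start each oscillator somewhere in a 200-400 Hz cluster. */
        double freq = 200.0 + 200.0 * rand() / (double)RAND_MAX;
        printf("set frequency of oscillator %d to %.2f Hz\n", osc, freq);
        printf("set amplitude of oscillator %d to %.3f\n", osc, 1.0 / NUM_OSC);
    }
    return 0;
}
```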

The oscillators were not simple – they had 1-pole smoothers on both amplitude and frequency. At the beginning, they form a cluster from 200 to 400 Hz. I randomly assigned and poked the frequencies so they drifted up and down in that range. At a certain time (where the producer assured me that the THX logo would start to come into view), I jammed the frequencies of the final chord into the smoothers and set the smoothing time for the time that I was told it would take for the logo to completely materialize on the screen. At the time the logo was supposed to be in full view, I set the smoothing times down to very low values so the frequencies would converge to the frequencies of the big chord (which had been typed in by hand – based on a 150-Hz root), but not converge so precisely that I would lose all the beats between oscillators. All followed by the fade-out. It took about 4 days to program and debug the thing. The sound was produced entirely in real-time on the ASP.
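The “1-pole smoother” is the key trick here: each oscillator’s frequency glides toward whatever target is jammed into it, at a rate set by the smoothing time. The sketch below illustrates that idea only; the update rate, time constants, and chord note are assumptions for illustration, not values from the original piece.

```c
/* Sketch of a one-pole smoother on oscillator frequency, illustrating the
 * "jam the chord target, then shorten the smoothing time" move described
 * above. All numeric values are assumptions, not the original parameters.
 */
#include <stdio.h>
#include <math.h>

/* One-pole smoother: the value drifts toward the target with time constant tau. */
static double smooth(double current, double target, double tau, double dt)
{
    double a = 1.0 - exp(-dt / tau);   /* larger tau -> slower convergence */
    return current + a * (target - current);
}

int main(void)
{
    double dt = 1.0 / 100.0;          /* update control parameters 100 times per second */
    double freq = 300.0;              /* somewhere in the 200-400 Hz cluster */
    double chord_note = 150.0 * 8.0;  /* e.g. a note three octaves above a 150 Hz root */
    double tau = 5.0;                 /* long smoothing time while the logo materializes */

    for (int step = 0; step < 1000; step++) {
        if (step == 700)
            tau = 0.05;               /* logo in full view: snap quickly to the chord */
        freq = smooth(freq, chord_note, tau, dt);
        if (step % 100 == 0)
            printf("t=%.1fs  freq=%.1f Hz\n", step * dt, freq);
    }
    return 0;
}
```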

When we went to sync up the sound with the video (which I hadn’t seen yet), we discovered that the timings were all different. I readjusted the times, generated a new score, and in ten minutes, we had the sound synced up with the video perfectly.

There are many, many random numbers involved in the score for the piece. Every time I ran the C-program, it produced a new “performance” of the piece. The one we chose had that conspicuous descending tone that everybody liked. It just happened to end up real loud in that version.

Some months after the piece was released (along with “Return of the Jedi”) they lost the original recording. I recreated the piece for them, but they kept complaining that it didn’t sound the same. Since my random-number generators were keyed on the time and date, I couldn’t reproduce the score of the performance that they liked. I finally found the original version and everybody was happy.
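That reproducibility problem is the classic pitfall of seeding a random-number generator from the clock: every run is a new performance. A tiny illustration, unrelated to the original code:

```c
/* Why the lost performance could not be re-created: a clock-based seed gives
 * a different random sequence on every run, while a fixed seed is repeatable.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    srand((unsigned)time(NULL));   /* keyed on time and date: unrepeatable */
    printf("time-seeded draw: %d\n", rand());

    srand(12345);                  /* fixed seed: the same "score" every run */
    printf("fixed-seed draw:  %d\n", rand());
    return 0;
}
```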

The ASP was decommissioned in 1986 and later sold for scrap.

In 2015, THX resynthesized Deep Note using modern synth technology and mixed it in current cinema audio formats at the historic Skywalker Ranch. Due to technology limitations of the time, the original piece is a four-channel recording that has been adapted over the years for various trailers in up to a 5.1 mix. Without the technical hurdles he faced thirty years ago, Andy Moorer has developed variations of the THX Deep Note in three lengths (30, 45, and 60 seconds), all in stereo, 5.1, 7.1, and Atmos mixes. Faithful to the original character, the resulting audio is far more complex, taking the audience on an epic sensory journey unlike anything they’ve experienced before.
