Fixing Headphones Using Electronics

So here’s the problem with headphone listening in a nutshell: the sound in the right channel is heard only in the right ear, and the sound in the left channel is heard only in the left ear. What’s missing in headphones is the sound traveling from each channel to the opposite ear, arriving a short time later because of the extra distance traveled, and with a bit of high-frequency roll-off from the shadowing effect of the head.
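To put a rough number on that missing time delay, here is a small back-of-the-envelope sketch (not from the original article) using Woodworth’s classic spherical-head approximation; the head radius, source angle, and speed of sound are illustrative assumptions.

```python
import math

# Back-of-the-envelope estimate (illustrative assumptions, not figures from
# the article): how much later does a sound from one side reach the far ear?
SPEED_OF_SOUND = 343.0   # m/s, air at room temperature
HEAD_RADIUS = 0.09       # m, a typical adult head

def interaural_delay(angle_rad: float) -> float:
    """Woodworth's spherical-head approximation: the extra path length to
    the far ear is r * (angle + sin(angle)) for a source off center."""
    return HEAD_RADIUS * (angle_rad + math.sin(angle_rad)) / SPEED_OF_SOUND

# A source fully off to one side (90 degrees off center):
print(f"{interaural_delay(math.pi / 2) * 1e6:.0f} microseconds")   # ~675 us
```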

Way back in the late 1950s, an engineer named Ben Bauer, who understood the problems with headphones, figured out that he could use an analog filter to feed a little of each channel across to the opposite channel with pretty close to the right time delay and EQ change. His hope was to get the sound from headphones to appear to come from outside the head. Unfortunately, headphones weren’t so good in those days, and electronics were not nearly as sophisticated as they are now.

Though his circuit did make headphone listening more natural sounding, the improvement wasn’t large enough to offset the disadvantages. His circuit was also fairly expensive due to the number of large-value capacitors and coils, and it was inefficient enough to require a full-size power amp to drive the headphones. The product that came out of Mr. Bauer’s work never sold well and eventually disappeared from the marketplace. Ben wasn’t the only person working on this technology in the ’50s and ’60s; many other researchers also tried to get headphones to sound like speakers. Unfortunately, results always fell short of expectations (it’s very hard to fool Mother Nature), and headphones, not being sold in anywhere near the quantity they are today, did not hold enough promise of profit for corporations to remain interested. Headphone crossfeed research faded out.

Flash forward 30 years and the scene had changed significantly. By the 1980s, headphones were being used in large numbers for portable equipment; integrated circuits made the construction of complex analog filters much simpler and less expensive; headphone quality had improved significantly; and the high-end audio community had begun to accept headphone listening as a legitimate “audiophile” activity.

At the same time, Tyll Hertsens (HeadRoom founder) had a job that required a lot of air travel, and he discovered that portable players were simply incapable of driving good headphones with any reasonable degree of quality. So he built a simple portable headphone amplifier and was quite pleased with the results. On one particular flight, Tyll sat next to a recording engineer who was interested in his amp. After a bit of talk, the recording engineer mentioned that he had heard of a scientist named Ben Bauer who had done a bunch of research on making headphones sound more “realistic.” Some time later, Tyll looked up the old papers and built a prototype psychoacoustic processor and headphone amplifier that accomplished the same time delays and EQ changes as the Bauer circuits, but used modern IC techniques.

The idea of HeadRoom was born. Today, all HeadRoom amplifiers include a much-improved version of this analog psychoacoustic processor. The current HeadRoom crossfeed circuit uses a two-stage active filter that provides about 400 µs of delay and a gentle frequency-response roll-off starting at about 2 kHz. The left crossfeed signal is mixed in with the right channel’s direct signal (and vice versa) at a level about 8 dB lower.

Note, however, that because audio signals in air mix somewhat differently than they do electronically, and because there are limits to what can be done with analog filters, the performance characteristics of the processor circuit are not exactly the same as those of the acoustic speaker-to-head environment it models. In air, the crossfeed channel is only about 3 dB lower in overall intensity than the direct channel, and the frequency response of the crossfeed signal is not a simple roll-off. To elaborate: in a speaker environment (or any other “air mix” environment), at low frequencies where the half-wavelengths are larger than the head (about 300 Hz and below) there is virtually no attenuation of the acoustic energy as it reaches the far ear. As frequency increases above 400 Hz, however, the head does a better and better job of shadowing the far ear, and the frequency response gently falls to, at most, -4 dB at 2.5 kHz.
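For illustration, here is a minimal digital sketch of this style of crossfeed, using the figures from the text (about 400 µs of delay, roll-off starting near 2 kHz, crossfeed about 8 dB down). The 48 kHz sample rate and the one-pole low-pass are assumptions; HeadRoom’s actual circuit is analog and considerably more refined than this.

```python
import numpy as np

# A minimal digital sketch of the crossfeed described above -- an
# illustration of the idea, not HeadRoom's actual analog circuit.
FS = 48_000                                  # sample rate (assumption)
DELAY = round(400e-6 * FS)                   # ~400 us -> 19 samples
GAIN = 10 ** (-8 / 20)                       # crossfeed level, -8 dB
ALPHA = 1 - np.exp(-2 * np.pi * 2000 / FS)   # one-pole low-pass near 2 kHz

def crossfeed(left: np.ndarray, right: np.ndarray):
    """Return (left_out, right_out) with each channel's filtered, delayed,
    attenuated copy mixed into the opposite channel."""
    def feed(src: np.ndarray) -> np.ndarray:
        lp = np.empty(len(src))
        state = 0.0
        for i, x in enumerate(src):          # gentle high-frequency roll-off
            state += ALPHA * (x - state)
            lp[i] = state
        delayed = np.concatenate([np.zeros(DELAY), lp[:-DELAY]])
        return GAIN * delayed                # delay, then attenuate
    return left + feed(right), right + feed(left)
```

Note that the low-pass stage contributes a little phase delay of its own, so the effective inter-channel delay is slightly longer than the 19 samples alone, much as the analog filter stages contribute delay in the real circuit.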

It’s also worth noting at this point that the time delay to the far ear is somewhat longer at 500 Hz than at 2 kHz. This happens because a standing wave at the head makes the head appear to have a larger diameter. The shorter the wavelength, the smaller the standing wave and, therefore, the smaller the apparent diameter of the head.
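For a sense of scale, one classic spherical-head model (Kuhn’s) puts the low-frequency delay at roughly 3·r·sinθ/c and the delay well above 2 kHz at roughly 2·r·sinθ/c. The sketch below, with an assumed head radius and source angle (not figures from this article), shows the difference:

```python
import math

# Illustration of the frequency-dependent delay, using Kuhn's classic
# spherical-head result (head radius and angle are assumptions):
C, R, THETA = 343.0, 0.09, math.pi / 2  # speed of sound, head radius, 90 deg

print(f"delay at low frequencies : {3 * R * math.sin(THETA) / C * 1e6:.0f} us")  # ~787 us
print(f"delay well above ~2 kHz  : {2 * R * math.sin(THETA) / C * 1e6:.0f} us")  # ~525 us
```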

At higher frequencies a strange thing begins to happen: a kind of acoustic skin effect begins to guide the higher-frequency energy around the face to the far ear. Basically, at high frequencies sound begins to ball up and act like a particle. Little hemispherical packets of oscillating energy like to attach themselves to, and propagate along, surfaces. This lets high frequencies turn the corner around the head and reach the far ear.

The result of all this is that the frequency response of the “natural” (in-air) crossfeed channel is flat for a while in the lows, then begins to drop off until 2.5 kHz, at which point it rises again for a while before finally falling off at the high-frequency limit of human hearing. This complex mechanism is essentially impossible to model accurately with simple analog circuits; however, a little good luck and careful thinking let us build a circuit that comes fairly close.

It turns out that if you take an audio signal and combine it with itself after a short delay, you get periodic cancellations in the frequency domain. Cancellation occurs at every frequency where the delay equals a whole number of periods plus a half period, so that the delayed copy arrives exactly out of phase. For example, if you delay a signal by 1 ms, the audio will cancel at 500 Hz, 1500 Hz, 2500 Hz, 3500 Hz, and so on. When we crossfeed a signal from one channel to the other, it is a slightly different signal, but part of it (the mono component) is the same. So, when we crossfeed the signal through a delay we get a comb filter effect, the strength of which depends on the amount of mono in the signal. The depth of the notches is also reduced by the fact that the crossfeed signal has a smaller amplitude than the direct signal, and by the crossfeed time delay growing shorter as frequency rises. In the end, by very subtly playing with the crossfeed level and delay time, we can use the comb filter effect to simulate the amplitude notch at 2.5 kHz.
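Here is a small numeric sketch of that comb-filter effect (illustrative only, not the actual circuit): the mono component of the direct signal meets a delayed, attenuated copy of itself, giving a magnitude response of |1 + g·e^(−j2πfτ)|. It uses the 1 ms delay from the example above and the circuit’s 8 dB crossfeed level.

```python
import numpy as np

# Numeric sketch of the comb-filter effect described above (illustrative):
# a signal summed with a delayed, attenuated copy of itself has magnitude
# response |1 + g * exp(-j*2*pi*f*tau)|.
tau = 1e-3              # 1 ms delay, as in the example above
g = 10 ** (-8 / 20)     # crossfeed about 8 dB below the direct signal
f = np.linspace(20, 5000, 100_000)
mag_db = 20 * np.log10(np.abs(1 + g * np.exp(-2j * np.pi * f * tau)))

# Notches fall where the delayed copy arrives exactly out of phase:
# f = (n + 1/2) / tau = 500, 1500, 2500, ... Hz for tau = 1 ms.
for n in range(3):
    notch = (n + 0.5) / tau
    idx = int(np.argmin(np.abs(f - notch)))
    print(f"{notch:6.0f} Hz: {mag_db[idx]:+.1f} dB")
```

Note the depth: with the crossfeed 8 dB below the direct signal, each notch bottoms out at about 20·log10(1 − 10^(−8/20)) ≈ −4.4 dB, in the same neighborhood as the −4 dB head-shadow dip described earlier.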
