Does anyone know what the correct level is?
Establishing and maintaining a standard audio level, especially in studios, has been a problem ever since the first radio station went on the air.
In the 1960s, audio console output levels were typically +8 dBm (eight decibels above one milliwatt into 600 ohms) for tubes and early solid-state consoles.
This +8 was a compromise between headroom and noise. A vintage 1969 Gates Statesman audio console had a dynamic range of 74 dB between maximum clipping at +18 dBm and its noise floor. That left 64 dB between the normal program level and the noise. This was fine for AM stations, where the FCC required a signal-to-noise ratio of just 45 dB between 100% modulation and no audio.
Then came FM, with a signal-to-noise ratio of 60 dB or better required by the FCC throughout the audio chain. The Statesman’s 64 dB signal-to-noise ratio left only 4 dB for additional noise in the station’s audio before it could no longer meet FCC specifications.
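The level arithmetic in the paragraphs above is simple subtraction in dB. As a sketch, using the figures from the article (the +18 dBm clip point, 74 dB dynamic range, and +8 dBm operating level):

```python
# Level budget for the 1969 Gates Statesman, per the figures above (all dB).
clip_point_dbm = 18            # maximum output before clipping
noise_floor_dbm = 18 - 74      # 74 dB dynamic range puts noise at -56 dBm
program_level_dbm = 8          # the nominal +8 dBm operating level

headroom = clip_point_dbm - program_level_dbm     # room above normal program
snr = program_level_dbm - noise_floor_dbm         # program level over noise
fm_requirement = 60                               # FCC FM spec for the chain
margin_for_rest_of_chain = snr - fm_requirement   # what's left for everything else

print(headroom, snr, margin_for_rest_of_chain)    # 10 64 4
```

Only 4 dB of noise budget remained for every other device in the air chain, which is the squeeze the article describes.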
Figure 1 shows an oscilloscope view of a normal voice, where there is just enough dynamic range to accommodate it. The vertical peaks are right at the limits of the equipment. I prefer an oscilloscope to see exactly when peak clipping occurs. There is a discussion of this in a November 9, 2016 article I wrote in Radio World, “Calibrate Analog Audio Consoles”.
If you follow the numbers, you will realize that there was only 10 dB between 100% (0 dB on the meter as shown in Fig. 2) and audio clipping. The same microphone and voice could sound different from one audio console to another, depending on the voice and how the operator rode the levels.
My audio tests with an oscilloscope 50 years ago showed that some voices can have a peak-to-average ratio of up to 16 dB. I concluded that ALL audio consoles need their analog VU meters calibrated to read 100% when peak clipping occurs 20 dB higher. The extra 4 dB covers moments when operators run levels hot.
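The peak-to-average ratio (crest factor) mentioned above is easy to measure on sampled audio. Here is a minimal sketch with two synthetic signals; the "spiky" one is only an illustration of how a voice transient can far exceed the 3 dB crest factor of a pure tone:

```python
import math

def crest_factor_db(samples):
    """Peak-to-average ratio in dB: peak amplitude over RMS level."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# A pure sine has a crest factor of about 3 dB...
sine = [math.sin(2 * math.pi * t / 100) for t in range(100)]

# ...while an impulsive, voice-like signal can be far higher (here, well over 16 dB).
spiky = [0.05] * 99 + [1.0]

print(round(crest_factor_db(sine), 1))   # ~3.0
print(round(crest_factor_db(spiky), 1))
```

This is why a VU meter, which reads close to the average, needs so much headroom above 100% before the peaks hit the clip point.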
Figure 3 shows the audio driven into maximum distortion. Not everyone hears maximum clipping distortion, that dirty, grating sound added to the original content. Female listeners are the first to tune out. Ouch!
Recalibrating a meter on an older console to fix this might have meant a -2 dBm output level when the VU meter read 100%. The signal-to-noise ratio would then be degraded by 10 dB. Those with good ears would say the sound wasn’t distorted, but there was a hiss in the background. Which is more acceptable?
Hi-fi home stereo systems were gaining popularity in the 1960s and had better audio fidelity than broadcasters could provide. No wonder there was a push to design and build better audio consoles, especially for classical music stations. For rock and roll listeners of the day, the distortion and hiss didn’t matter because the sound was just loud!
Audio quality improved when ICs such as the NE5532N came to market. These op-amp chips and their kin allow balanced audio to be sent and received without the need for transformers.
As you probably know, even the best transformers color the sound of the audio a bit. While this coloration may be desirable and sought after in some recording studios, the goal is generally to keep the broadcast chain as clean as possible. Doing without transformers solved this problem.
ICs can deliver about +24 dBm. The normal output level specified on most consoles using this technology is +4 dBm, which gives 20 dB of headroom. Typical examples are the Radio Systems RS-12A and Arrakis 150 through 12000 series audio consoles. That 20 dB is needed to help protect against peak clipping when operators who drive the audio hard like to hear the mechanical VU meter needles “click” as they slam against the peg. There is still 80 dB or more between normal program level and the background noise. We have come a long way!
Devil and details
Audio processing was helpful in fixing audio level issues, but it can do little for distortion. Some newer digital processors attempt to unclip audio by mathematically recreating the original waveforms, but this is a problem that should not exist in the first place if proper procedures are followed. Once audio is clipped, it is permanently damaged.
Going back to the early 1960s, there was the aptly named Gates Level Devil. It expected a +8 dBm input but could be adjusted to handle lower-level audio, providing up to 25 dB of gain boost or reduction. It solved many problems caused by inattentive operators. The goal was to maintain consistent audio levels for ease of listening.
A story from the time goes that a minister came to a radio station to do a live radio show and insisted that the Level Devil be pulled from the circuit while it was on the air!
As far as audio levels go, 1960s console inputs were designed to accommodate a fairly wide range of source material through the use of input attenuators. There was no standard, if I remember correctly. Cartridge and reel tape machines might be capable of 0 dBm, but could easily be turned down to match what a console channel was optimized for. A phono preamplifier/turntable might only deliver -15 dBm.
I constantly reminded operators to watch the audio levels to keep the sound consistent, because I could hear level issues when listening live. Operators, as you know, judge audio quality by ear, under tightly pressed headphones, rather than bothering to read VU meters. They do not realize that the listener does not have the same headphones.
Automotive environments are a particular listening challenge, with perhaps only 15 dB of listening range above the noise on a busy highway. It’s bad business when a listener has to turn the radio volume up and down when the sound should have come through at a constant level.
Try it yourself: put an oscilloscope on the voice audio in a studio to see what you are hearing, or observe the waveform in a digital editor. Intentionally record a voice at too high a level. You’ll hear the maximum clipping distortion caused by running audio beyond the limits of the equipment, and you’ll understand that more isn’t better.
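If you prefer to simulate the experiment rather than sacrifice a recording, the sketch below models a converter that flattens everything beyond full scale. The signal and gain values are made up for illustration; the point is that once the level exceeds the limit, a large share of the samples is simply flattened and the waveform information is gone:

```python
import math

def record(gain, n=1000):
    """Simulate recording a sine tone at some gain into a device
    that hard-clips at +/-1.0 (digital full scale)."""
    return [max(-1.0, min(1.0, gain * math.sin(2 * math.pi * 5 * t / n)))
            for t in range(n)]

def clipped_fraction(samples):
    """Fraction of samples sitting flat against the limit."""
    return sum(1 for s in samples if abs(s) >= 1.0) / len(samples)

print(clipped_fraction(record(0.7)))  # 0.0 -- clean, level within limits
print(clipped_fraction(record(2.0)))  # roughly two-thirds of samples flattened
```

Doubling the drive past full scale did not make the audio "twice as loud"; it destroyed most of the waveform, which is exactly what the scope shows in Figure 3.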
What is standard when recording digitally?
On some devices, the LED VU meter reads -14 dBFS (14 dB below full scale) when sound is normal. On an Axia Radius console, the meters change from green to yellow at -20 dBFS and from yellow to red at -10 dBFS. These are peak readings, which helps. Average-reading meters won’t tell the story. When the meters touch red, there is 10 dB of digital headroom. You might need that or more headroom for an occasional spike.
Remember that 0 dBFS is an absolute limit and digital clipping is even more destructive than analog. At this point, there are no more bits left to represent the signal.
Since noise is no longer an issue, I strongly recommend -20 dBFS as the digital reference level corresponding to +4 dBm or 0 VU in the analog world, where the noise floor might be -90 dB or better. It’s embarrassing and inexcusable to let audio go into audible peak clipping distortion that ruins an otherwise great sound.
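The recommended alignment is a fixed offset, so mapping between the analog and digital scales is one subtraction. A minimal sketch, assuming the +4 dBm = -20 dBFS reference suggested above:

```python
def dbfs_for_analog(dbm, ref_dbm=4, ref_dbfs=-20):
    """Map an analog level in dBm to digital dBFS, given an alignment
    point (default: +4 dBm corresponds to -20 dBFS)."""
    return dbm - ref_dbm + ref_dbfs

print(dbfs_for_analog(4))    # -20: normal program level (0 VU)
print(dbfs_for_analog(24))   # 0: an analog peak just reaches digital full scale
print(dbfs_for_analog(0))    # -24: a 0 dBm source sits comfortably below reference
```

With this alignment, the 20 dB of analog headroom above 0 VU maps exactly onto the 20 dB between reference and digital full scale, so neither domain clips before the other.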
I’ve visited radio studios and watched LED VU meters showing anything between a constant -20 and an exaggerated redline. I bet listeners hear that and turn away. They don’t know why; they just find another station.
Yes, it’s true that today’s audio processors do a pretty good job of fixing audio level issues. However, it is up to the production staff and engineers to keep the station’s audio plant in order so that the processing can do its job properly when fed consistent input levels.
The original +8 dBm standard evolved to +4 dBm and now 0 dBm on some studio devices. Many, but not all, analog audio routing switchers have audio level controls. Where the ones you encounter do not, install resistive audio pads to bring the higher-level sources down to match the lowest level in the installation.
Figure 5 shows a simple balanced audio pad. It assumes that the audio source has a low drive impedance and that all devices fed by it are bridging (input impedance of 10 kilohms or more). The values shown are standard; use resistors of 1/4 watt or more. Make the pad variable by replacing the 2200 ohm fixed resistor with a 5000 ohm variable resistor.
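Without reproducing Figure 5 exactly, a common topology for such a pad is one series resistor in each leg with a shunt resistor across the line. Under the article's assumptions (low source impedance, bridging loads of 10 kilohms or more), the loss is just a voltage divider. The 2.2 kilohm series values below are my own illustrative choice, not taken from the figure:

```python
import math

def pad_loss_db(r_series, r_shunt):
    """Approximate loss of a balanced bridging pad: one series resistor
    of r_series ohms in each leg, r_shunt ohms across the line.
    Assumes a low-impedance source and bridging (>= 10 kohm) loads,
    so the loads barely affect the divider."""
    return 20 * math.log10(r_shunt / (r_shunt + 2 * r_series))

# Hypothetical example: 2.2 kohm series legs with the article's 2200 ohm shunt.
print(round(pad_loss_db(2200, 2200), 1))  # about -9.5 dB of attenuation
```

Swapping the shunt for a 5000 ohm variable resistor, as the article suggests, lets you dial the loss from that value down toward a few dB.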
Many studios today have a mix of traditional analog on punchblocks and StudioHub on Cat5. Next up is AoIP, or Audio over Internet Protocol. Audio levels can get out of hand if all the sources don’t match.
What about podcasts?
Podcasts usually come to me without any audio processing and often have 10 dB or more disparities between voices during an interview. Some operators deliberately adjust the audio level to emphasize a point. It doesn’t go well when listening in a car with high ambient noise. No listener should have to turn the volume up and down to follow the content.
I have also noticed level differences of 10 dB or more from podcast to podcast, and even from episode to episode. Automakers should look into simple audio processing for unprocessed material like podcasts and CDs. It could improve the user experience in noisy road environments.
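The voice-to-voice disparities described above can be evened out before publishing with simple level matching. This is a bare-bones RMS sketch (real tools would use a loudness measure such as LUFS, and the signals here are synthetic stand-ins for two voices):

```python
import math

def rms_db(samples):
    """RMS level of a segment in dB relative to full scale = 1.0."""
    return 20 * math.log10(math.sqrt(sum(s * s for s in samples) / len(samples)))

def match_level(samples, target_db=-20.0):
    """Scale a segment so its RMS level hits a common target,
    evening out voice-to-voice level disparities before mixing."""
    gain = 10 ** ((target_db - rms_db(samples)) / 20)
    return [s * gain for s in samples]

# Two interview "voices" 34 dB apart, both brought to the same -20 dB target.
quiet_voice = [0.01 * math.sin(2 * math.pi * t / 50) for t in range(500)]
loud_voice = [0.5 * math.sin(2 * math.pi * t / 50) for t in range(500)]
print(round(rms_db(match_level(quiet_voice))))  # -20
print(round(rms_db(match_level(loud_voice))))   # -20
```

A pass like this before upload would spare the listener in a noisy car from riding the volume knob.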
Consistency is the key to good sound. Take pride in the sound of the installations you work on. Radio depends on sustaining listeners.
Comment on this article or any article. Write to [email protected].
Mark Persons, WØMH, CPBE, is a retired engineering consultant and recipient of the SBE John H. Battison Lifetime Achievement Award.