Search DC to White Light

Friday, July 29, 2011

Working out of your Element

Sometimes we get asked to perform tasks outside our job description – or outside our area of expertise. Most of us, smartly, investigate the job and, if we can do it, we just do it. But what happens when you’re totally outside your area?

OK, friends…this one’s kinda political. Specifically, it’s about politicians and appointees who think that “just because it’s a good thought, it should be implemented.”

Now, I’m not against implementing good ideas. But when someone comes up with one of those “good ideas” which is outside their area of expertise, it spells danger. Case in point:

The FCC continues its push to reassign broadcast television spectrum to personal services like smart phones and PDAs. “Everyone should have broadband,” is the thinking. And, yes, it’s admirable. In support of it, Richard H. Thaler, a professor at a major university, wrote a piece for the New York Times. Now, I do give him credit for his statement that it “…sounds too good to be true…” because it is.

Professor Thaler makes the point that there are a lot of folks who aren’t on cable or satellite and could be converted – by forcing those systems to provide low-cost service – thereby freeing the broadcast spectrum. He also tries to draw a picture of broadband being more efficient than broadcast.

This is a perfect example of someone working outside their area of expertise. And, yes, it’s dangerous.

First, think about the physics. It’s evident that Professor Thaler doesn’t understand the “economics” of spectrum allocation (“The Buried Treasure in Your TV Dial,” New York Times, Feb. 27, 2010).

Do the math on the spectrum requirements for the people in 1,200 autos, two C&NW trains and two "El" trains on the Kennedy Expressway in his city and you will see that there isn’t enough spectrum available – even counting what’s currently allocated to broadcast TV.
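
Don’t just take my word for it – sketch it yourself. Here’s a rough back-of-envelope version in Python. Fair warning: the occupancy, per-stream bitrate and spectral-efficiency figures are my own assumptions for illustration, not anything from the professor’s piece.

```python
# Back-of-envelope: spectrum to stream video to one stretch of the Kennedy.
# Every figure below is an illustrative assumption, not measured data.

cars = 1200
people_per_car = 1.3              # assumed average occupancy
train_riders = 4 * 200            # two C&NW + two "El" trains, ~200 riders each (assumed)
users = int(cars * people_per_car) + train_riders

stream_bps = 2_000_000            # assumed 2 Mb/s per video stream
bps_per_hz = 1.5                  # assumed net spectral efficiency within one cell

spectrum_needed_hz = users * stream_bps / bps_per_hz
tv_spectrum_hz = 50 * 6_000_000   # channels 2-51 at 6 MHz apiece

print(f"users on that stretch:  {users:,}")
print(f"spectrum they'd need:   {spectrum_needed_hz / 1e6:,.0f} MHz")
print(f"all broadcast TV space: {tv_spectrum_hz / 1e6:,.0f} MHz")
```

With those assumptions it comes out north of 3,000MHz for one stretch of expressway, against about 300MHz in the entire broadcast TV allocation. Cellular reuse helps, but not within the single cell that has to serve that traffic jam.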

Then do the math on efficiencies. In New York City, a program with a 5 rating is reaching about 550,000 households or (with an average VPVH of 1.5) 825,000 individuals. Divide that 6 megahertz TV channel by the 825,000 and you get a spectral "cost" of about 7.3 Hz*. Not kilohertz, just good old Hertz! Now stack that up against the multi-kilohertz to megahertz of bandwidth required PER PERSON under Professor Thaler's proposal and you'll quickly see his tremendous error.

*I’m being generous since the full six megahertz is only used in the 1080i HD mode.
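
Want to check my spectral-cost arithmetic? Here it is in a few lines of Python. The broadcast numbers come straight from the math above; the unicast side reuses the same assumed 2 Mb/s stream at 1.5 bits/s/Hz from my expressway sketch.

```python
# Broadcast: one 6 MHz channel shared by every viewer of the program.
households = 550_000            # a 5 rating in New York (from the text)
vpvh = 1.5                      # viewers per viewing household
viewers = households * vpvh     # 825,000 individuals

per_viewer_hz = 6_000_000 / viewers
print(f"broadcast: {per_viewer_hz:.1f} Hz per viewer")        # about 7.3 Hz

# Unicast: every viewer needs a private stream (assumed 2 Mb/s at 1.5 bits/s/Hz).
unicast_hz = 2_000_000 / 1.5
print(f"unicast:   {unicast_hz / 1e6:.2f} MHz per viewer")    # 1.33 MHz
print(f"ratio:     {unicast_hz / per_viewer_hz:,.0f} to 1")
```

That’s a factor of about 180,000 – call it five orders of magnitude in favor of broadcast for a popular program.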

Once again, good ideas need to be tempered with reality. Otherwise, you may be creating rules against the laws of physics. Hey, if that’s possible, let’s outlaw auto accidents and heart attacks, too.

We live through this regularly as legislators pass regulations without looking at the real world. Today, for example, automakers were informed that their new MPG target is 54.5. Think they’ll make it? It’s double what the rules require today. Might happen, but the laws of physics say it has to come from small, light – and potentially unsafe – cars with fewer weight-adding amenities and no pickup. Or with electric cars, right? Infinite MPG – while the pollution gets pushed upstream to the coal plants. Again, politicians think electricity is free…and they don’t realize that there’s actually a real live formula for converting horsepower to watts (one horsepower is about 746 of them). That law against hurricanes must be just around the corner.

It’s a shame Professor Thaler didn’t run his idea past a truly distinguished professor from his same school, Prof. Steven Levitt, author of Freakonomics. I doubt he would have let Thaler’s work out of the building.

Friday, July 22, 2011

The Amazing Transducer/Filter/Equalizer

So I sat listening to Siegfried Idyll and tracking the first horn part. Even amidst a swelling tutti section, I was able to hear every note. ‘Bout an hour later, Buddy Rich was cranking away on Big Swing Face. Love his bass kicks and, again, it’s pretty easy to pick them out, separating them from the bass part, the piano left hand and the bari sax.

Revisiting the “How do it know?” punch line of the old Thermos® bottle joke, you have to ask the same thing: how? After all, I’m talking about a pair of ears – that combination of membrane, bones, hair cells, nerves and brain interpretation that processes what we call sound.

Try this some time: Point a mic at a street corner and tell it to “pick out what the girl in the tank top is saying.” Good luck.

OK – so you cheat and use a hypercardioid mic to pick out what’s coming from her direction. But there’s plenty of noise. So you load up Adobe Audition and open the file. Now – tell Adobe, “Pick out what the girl in the tank top is saying.”

Guess what. We’re not there yet. Yes, you can open the equalizers – graphic and/or parametric – or sample the noise and try to cancel it…but those are all tools you have to operate. Think about the fact that ears do all of that automatically and in real time. You decide you want to hear the lead guitar and that’s what you hear.

The craziest part is that these wild transducers inside our heads have a lot of problems. First, they’re nonlinear. And it’s a good thing. If ears operated linearly, we’d be able to handle only about 50dB (softest to loudest) before losing the ability to hear the sound or covering our ears to relieve the pain. Yet, thanks to a combination of mechanical tricks – including the outer hair cells in the cochlea – we get around 120dB of volume range. Remember that decibels are logs, so 10dB higher = 10 times the power, 20dB higher is 100 times, 30dB is 1000 times, etc. Whoa, Bessie, that’s pretty good range. In fact, better than a standard CD.
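
If the log arithmetic makes your head spin, here’s a quick Python sanity check (the 96dB line is the textbook dynamic range of 16-bit audio):

```python
def db_to_power_ratio(db):
    """10 dB = 10x the power, 20 dB = 100x, 30 dB = 1000x, and so on."""
    return 10 ** (db / 10)

for db in (10, 20, 30, 50, 96, 120):
    print(f"{db:>3} dB = {db_to_power_ratio(db):,.0f}x in power")
# 96 dB is roughly a 16-bit CD; 120 dB is the range the ear manages.
```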

“But wait, there’s more!” as Billy Mays (a voice any ear could recognize) would say. In its nonlinearity, the ear is subject to intermodulation distortion. OK. I give. Either bring it down a notch or go away. No, it’s simple. When two tones hit an eardrum, additional frequencies can be created. Well – not created physically…if you were to use a mic to measure the tones, a spectrum analyzer would show only the two tones…but the human ear (tin, golden and in between) will hear additional ones. For example, listen to a 500Hz tone. Then add a second tone – 600Hz. Intermodulation products begin to form as the level of the 600Hz tone is increased. You will “hear” 1100Hz (600+500) and then 100Hz (600−500) and, possibly, 1600Hz (2×500 + 600) and 400Hz (2×500 − 600). There are actually nearly infinite “orders” of intermodulation products, of varying levels, depending upon the levels of the original two tones.
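
Want the whole menu? Here’s a little Python loop that enumerates the low-order products for those same two tones. Nothing ear-specific here – it’s just the m·f1 ± n·f2 bookkeeping:

```python
# Low-order intermodulation products of two tones: m*f1 +/- n*f2.
f1, f2 = 500, 600   # Hz, as in the example above

products = set()
for m in range(1, 4):
    for n in range(1, 4):
        for f in (m * f1 + n * f2, abs(m * f1 - n * f2)):
            if 0 < f <= 2000:        # stay in the audible neighborhood
                products.add(f)

print(sorted(products))
# [100, 200, 300, 400, 700, 800, 900, 1100, 1300, 1600, 1700]
# There they are: 1100 (500+600), 100 (600-500),
# 1600 (2x500+600) and 400 (2x500-600), among the rest.
```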

So, imagine a full orchestra playing. All those notes impinging on eardrums. A lot of intermod going on…but our brain understands it. Listen closely and it’s there, but it doesn’t get in your way.

Think this is all bunk? One speaker company is working on a new way of delivering audio: if you want to create a 50Hz tone in someone’s ear, just generate a 22,000Hz tone and a 22,050Hz tone. Both are above the audible range, but a number of people have eardrums which still move at those frequencies – and if they do, they’ll generate the intermod tone of 22,050 − 22,000, or 50Hz.

And if you want to experience intermodulation in its basest form, listen as two oboes tune. (Actually, the riddle asks, “How do you make two oboists play in tune?” “Shoot one.”) As they come close in frequency, you will hear a “beat” between the two. That beat wavers at a rate equal to the difference in frequency between them. So if the wavering (not one oboe’s vibrato, but the beat between the two oboes) is occurring twice a second, the two are two Hertz apart in frequency.
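
No oboists handy? This little Python script (standard library only – the 440/442Hz “oboes” are my stand-ins) writes a five-second WAV of two tones 2Hz apart. Play it and count the beats:

```python
import math, struct, wave

fs = 8000                 # sample rate, Hz
f1, f2 = 440.0, 442.0     # two "oboes," 2 Hz apart
seconds = 5

# sin(a) + sin(b) = 2*sin(pi*(f1+f2)*t)*cos(pi*(f1-f2)*t): the loudness
# envelope |cos(...)| peaks |f1 - f2| times per second -- twice a second here.
print(f"expected beat rate: {abs(f1 - f2):.1f} per second")

with wave.open("beats.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)     # 16-bit samples
    w.setframerate(fs)
    for i in range(fs * seconds):
        t = i / fs
        s = 0.4 * (math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t))
        w.writeframes(struct.pack("<h", int(s * 32767)))
```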

Then add harmonic distortion on top of that and you wonder how we hear anything beyond mumbling and the cacophony of a bad orchestra tuning up. Harmonic distortion occurs when the eardrum – or any of the other devices of the ear – begins to move at two or three times the frequency exciting it. If a 1000Hz tone is played into a human ear, to varying degrees the ear will generate harmonics (2000, 3000, 4000Hz) along with the fundamental. These harmonics are only a percentage of the loudness of the fundamental, but they’re there. It’s why an oboe sounds a little brighter than its spectral print says it should.
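
Here’s a toy demonstration in Python (NumPy assumed) of how a bent transfer curve manufactures harmonics. The little polynomial is a stand-in for the ear’s nonlinearity, not a physiological model:

```python
import numpy as np

fs, n = 8000, 8000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 1000 * t)    # a clean 1 kHz tone

# Toy nonlinear "eardrum": a slightly bent transfer curve.
y = x + 0.2 * x**2 + 0.1 * x**3

spectrum = np.abs(np.fft.rfft(y)) / n
freqs = np.fft.rfftfreq(n, 1 / fs)
for f in (1000, 2000, 3000):
    level = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"{f} Hz: {level:.4f}")
# The x**2 term manufactures 2000 Hz and the x**3 term 3000 Hz --
# harmonics that were never in the original tone.
```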

If you’re still hanging in here, let’s jump to the ultimate insult to the ear – audio compression schemes. Digitizing an audio signal for a CD means sampling each of the two channels 44.1 thousand times a second, giving each sample a 16-bit number, then recording each of those digital “numbers” representing the signal at that point in time. A lot of data…like 700MB for 74 minutes of music. Doesn’t take much to fill up an iPod.
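
The arithmetic, if you care to check it:

```python
# Raw CD audio, straight from the numbers above.
sample_rate = 44_100       # samples per second, per channel
bytes_per_sample = 2       # 16 bits
channels = 2

rate = sample_rate * bytes_per_sample * channels
minutes = 74
total = rate * minutes * 60

print(f"{rate:,} bytes every second")                  # 176,400
print(f"{total / 1e6:.0f} MB for {minutes} minutes")   # ~783 MB of raw samples
```

Measured in binary megabytes that’s about 747 – the 700MB ballpark either way.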

So we get crafty. We analyze the music/speech/whatever. We look at human ear response curves, impulse response, fatigue and more. And we find that – wow – when there’s a loud note at a particular frequency, it tends to mask other information in a small sliver of the audio band on either side of it. So we tell the recording program to toss out that other info. And it does.
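
For the curious, here’s a deliberately crude Python sketch (NumPy assumed) of the toss-it-out idea. Real codecs like MP3 and AAC use psychoacoustic models far subtler than this single-frame cartoon, so take it as the principle, not the algorithm:

```python
import numpy as np

def crude_mask(frame, width=12, margin_db=30):
    """Zero any FFT bin more than margin_db below a stronger bin within
    +/- width bins of it -- a cartoon of frequency masking."""
    spec = np.fft.rfft(frame)
    mag_db = 20 * np.log10(np.abs(spec) + 1e-12)
    keep = np.ones(len(spec), dtype=bool)
    for k in range(len(spec)):
        lo, hi = max(0, k - width), min(len(spec), k + width + 1)
        if mag_db[k] < mag_db[lo:hi].max() - margin_db:
            keep[k] = False              # a louder neighbor hides this one
    return np.fft.irfft(np.where(keep, spec, 0), n=len(frame)), keep

fs = 8000
t = np.arange(1024) / fs
loud = np.sin(2 * np.pi * 1000 * t)          # the loud note
soft = 0.01 * np.sin(2 * np.pi * 1070 * t)   # a -40 dB note right beside it

_, keep = crude_mask(loud + soft)
print(f"bins kept: {keep.sum()} of {keep.size}")  # the soft note's bins are gone
```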

And what do we get? Kind of a skeleton of the music. As you listen, you know it’s Stevie Ray Vaughan. But a rhythm guitar playing an E softly behind his much louder note will be dropped – discarded. And what does that mean? Well, tell your ear to listen for the rhythm guitar when listening to the CD and you may well hear it. Tell your ear to listen for it in an MP3 and your ear will reach around and smack you – it knows it can’t. It’s a trick question, because the note just ain’t there. The greater the compression, the less music is left.

Yet millions of folks, most without realizing it, record in a compression mode, tossing notes into the drink without a thought. MP3, WMA and similar formats are very good at sending content to the great beyond, but either a lack of knowledge or the desire to put every K-TEL recording ever made on their iPods drives people to give themselves less-than-quality listening experiences. And it seems that the younger the person, the more intent they are on cramming the most onto their devices. The saddest part is that they’ll never be able to experience critical listening. The “critical” has been removed by a process that leaves only the boldest notes.

Noise to the rescue. Another part of listening surrendered. There’s more radio listening in cars than at home. Noise surrounds iPod listeners on the street or in a plane. That noise masks some of the other problems created by compression. So now, listening to compressed audio, masked by noise, I have an interesting experience. A good one? Not really. But it is interesting. And seldom will you find a place where there are low enough noise levels to actually experience and analyze the music. Besides, who wants that? You miss the phone ringing, your special “other” talking to you, or the announcement of the next stop on the Red Line.

About here is where I’m supposed to offer the solution. Well, I don’t have one. With no one buying CDs today (OK, not many, or the industry’d be in a little better shape), there’s no one with the gear to listen critically. They’re stuck. Their kids are stuck. Instead, they get excited over separation – as if the fact that the bass is mixed far left and the vocal is mixed center is the major point of the music – or other effects. It’s just not about the music anymore…because there aren’t any music listeners anymore. Except me – and maybe you.