As you say, the goal is intelligibility in noisy conditions - not fidelity. Processing toward this end has been done *on purpose* since WW2 at least. Since sibilance and fricatives carry the most information, low-frequency content is filtered out. A common basic technique is to high-pass filter (at, say, 300 Hz), then hard clip (to boost and limit high-frequency content), and finally low-pass filter to remove extreme high frequencies. This is well documented and vastly improves intelligibility. It makes me laugh to hear folks say that audio quality was inherently bad in the sixties ... you're grossly overestimating the impact of "improvements" and "going digital" over the years! --- Bill Whitlock, Life Fellow of the Audio Engineering Society and IEEE Life Senior.
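A minimal sketch of that chain, with brick-wall FFT filters standing in for the analog filters; the 300 Hz / 3 kHz corners and the clip level are illustrative defaults, not anyone's documented values:

```python
import numpy as np

def intelligibility_process(x, fs, hp_hz=300.0, lp_hz=3000.0, clip_level=0.1):
    """High-pass -> hard clip -> low-pass: the classic intelligibility chain."""
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X = np.fft.rfft(x)
    X[f < hp_hz] = 0.0                      # strip low-frequency energy
    y = np.fft.irfft(X, n=len(x))
    # Hard limiting boosts the relative level of consonant energy,
    # then renormalize to full scale.
    y = np.clip(y, -clip_level, clip_level) / clip_level
    Y = np.fft.rfft(y)
    Y[f > lp_hz] = 0.0                      # remove out-of-band clipping harmonics
    return np.fft.irfft(Y, n=len(x))
```

Feeding in a mix of a 100 Hz tone and a 1 kHz tone, the low tone is gone from the output while the 1 kHz content survives (squared-off by the clipper), which is the whole point of the chain.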
Voices sound much the same in low pressure oxygen because the speed of sound remains almost the same. That characteristic "donald duck" talk in helium results only from its unusually high speed of sound compared with N2/O2.
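That speed-of-sound dependence is easy to check with the ideal-gas formula; the gas properties below are standard textbook values and the temperature is a round illustrative choice:

```python
import math

def speed_of_sound(gamma, molar_mass_kg, temp_k=293.15):
    """Ideal gas: c = sqrt(gamma * R * T / M)."""
    R = 8.314  # J/(mol*K)
    return math.sqrt(gamma * R * temp_k / molar_mass_kg)

c_air = speed_of_sound(1.40, 0.0290)  # diatomic N2/O2 mix, ~343 m/s
c_he  = speed_of_sound(1.67, 0.0040)  # monatomic helium, ~1000 m/s

# Vocal-tract resonances (formants) scale with c, so in helium they shift
# up by roughly c_he / c_air (about 2.9x) -- the "donald duck" effect.
# Note that pressure does not appear in the formula: lowering the pressure
# of ordinary air leaves gamma, T, and M unchanged, so the voice is unchanged.
```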
Sound does carry less efficiently in low pressure air. Skylab added a little nitrogen to its otherwise Apollo-style pure O2 atmosphere but many of its astronauts still had to use the intercoms because their voices just didn't carry well in the reduced pressure.
The Shuttle and the ISS both use ordinary air at sea-level pressure, so that's not it. When astronauts who are radio amateurs plug their headsets into amateur radio transceivers, their voices sound great -- so it's not their headsets either. I once asked this exact question of one of those "hams in space" and he said that the spot on the shuttle he used for his ham operating wasn't especially noisy at all.
So it must be the audio processors, A/D codecs or the communication channels themselves that are responsible for that famously poor NASA voice quality.
The ISS, like the Shuttle, makes heavy use of TDRSS to provide continuous communications throughout each orbit. Remember (or maybe you don't) that when Mercury, Gemini, Apollo and Skylab spacecraft were in low earth orbit, each pass over an earth station gave them only a few minutes to talk. They were completely dependent on those ground stations, and they were out of touch much of the time. This was even true for the first few Shuttle flights, until the first TDRSS spacecraft was launched.
The Apollo missions to the moon were in a separate category. Because of their high altitude, it took only three earth stations to provide continuous communications except when the spacecraft was behind the moon. But NASA is now stuck back in earth orbit (or even on earth itself, depending on how you look at it), so it again needs a relay for near-continuous communications.
Voice has been completely digital on the Shuttle from day one. Each voice channel uses 32 kb/s, I believe. I have no reason to think the ISS is any worse.
At the start I think the Shuttle used ADPCM encoding for voice, which shouldn't sound much worse than an ordinary phone call. But 32 kb/s is more than enough for a very good-sounding modern voice codec, so it can only be inertia that keeps NASA from using one.
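The 32 kb/s figure is at least consistent with plain 4-bit ADPCM at the standard telephony sample rate; this back-of-the-envelope check assumes those parameters, which aren't confirmed by anything in the thread:

```python
sample_rate_hz  = 8000  # standard telephony sampling rate
bits_per_sample = 4     # typical ADPCM word size (e.g., G.726-class coders)

bitrate_bps = sample_rate_hz * bits_per_sample  # 32000 b/s, matching the quoted rate
```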
NASA has traditionally used heavy analog speech processing to increase intelligibility on poor analog channels by an extra couple of dB. And since they still use VHF AM radios as a backup, it's possible they're still doing this at the expense of voice quality on the good digital channels. Or maybe it's still just institutional inertia.
Pioneer and Voyager are incredibly far from earth. (They're also robots so there's no need to send voice, but I figure you already know that.)
Human crews in the Shuttle or ISS are closer to the earth than San Francisco is to Los Angeles, so there's really no comparison. And no reason why they can't have perfectly good voice communications.
MikeLC, this is inherent to any efficient digital communications system. Shannon's channel capacity theorem says that any given combination of channel bandwidth and SNR has a certain capacity in bits/sec. You can theoretically send error-free right up to this capacity, but not one bit/sec beyond it.
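The Shannon-Hartley formula C = B·log2(1 + S/N) makes this concrete; the 3 kHz / 30 dB numbers below are just the textbook telephone-channel example, not figures from any NASA link budget:

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Channel capacity C = B * log2(1 + SNR), with SNR as a linear power ratio."""
    snr_linear = 10 ** (snr_db / 10.0)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz voice channel at 30 dB SNR caps out near 30 kb/s.
# Below that rate, arbitrarily low error rates are achievable in
# principle; above it, reliable communication is impossible.
c = shannon_capacity(3000, 30)
```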
Since broadcasting is one-way the user on a bad path can't tell the source to drop down below channel capacity, so the link stops working for him.
The ability of older analog systems to degrade more slowly as the signal weakens simply indicates that they were not very efficient to start with. Indeed, when a modern digital scheme fails it's usually already well beyond the point where the analog scheme would have failed.
I would like to sound off in agreement with NASA-HD and Randall: when it comes to space exploration, the system has to perform reliably and do its mission.
It is all about a balance among technical priorities (REAL mission needs, costs, deadlines, life support for astronauts, and other items on a long wish list).
In noisy environments, what good is hi-fi audio if it is overwhelmed by surrounding audio sources? Let me pose a rhetorical question: who would give up life support and mission-critical comm bandwidth for the luxury of high-fidelity audio?
Here is a great example where technology application needs to be SMART, not wasteful: do what is effective and reliable, and nothing more.