I wonder if a test of standard cell phones or satellite phones would prove better in near space than current in use technology? I do find it hard to believe that we can't get reasonable audio quality from space. If the limitations are mic and front-end electronics could we just upgrade that part and realize new and improved audio performance? Just a thought..
Rich, it strikes me as naive for the diyAudio guy to believe that the audio should be good in the first place. We've been softened up by the ready availability of fantastic quality audio in our daily lives. The radio links from spacecraft have to operate in an abysmally low SNR environment. Fidelity is an irrelevant luxury; extracting intelligibility from this channel requires much time- and frequency-domain processing and filtering. You could probably get better audio from near-Earth craft with systematic redesign, but what's the incentive? NASA doesn't want to rummage around for different receiver types for craft at different distances from the earth.
Recovering signals from distant spacecraft makes anything in even audiophile audio look simple (and I'm an audiophile, don't get me wrong). Last time a signal from Pioneer 11 was picked up, the ratio between the size of the antenna and the distance away was around 4 trillion. That's a path loss of 252dB. In a hifi system, that would be like trying to measure a signal difference caused by _one_ extra electron arriving _per day_...
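The quoted figure checks out; a quick sketch of the arithmetic (treating the quoted size-to-distance ratio as an amplitude ratio, hence 20·log10):

```python
import math

# Back-of-envelope check of the path-loss figure quoted above.
# An amplitude (distance) ratio converts to dB via 20 * log10.
distance_ratio = 4e12  # ~4 trillion: antenna size vs. spacecraft distance

path_loss_db = 20 * math.log10(distance_ratio)
print(f"Path loss: {path_loss_db:.0f} dB")  # ~252 dB
```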
Nice to see someone putting this in perspective!
Given the distance it's amazing we had any audio at all!
What might be even more amazing is how bad the audio is in current digital TV antenna systems, where an all-or-nothing approach is used. If the video signal is too weak, the frame freezes and no attempt is even made (by the system) to send the lower-bandwidth audio.
MikeLC, this is inherent to any efficient digital communications system. The Shannon theorem says that any given channel bandwidth and SNR has a certain capacity in bits/sec. You can theoretically send perfectly right up to this capacity but not one bit/sec beyond it.
Since broadcasting is one-way the user on a bad path can't tell the source to drop down below channel capacity, so the link stops working for him.
The ability of older analog systems to degrade more slowly as the signal weakens simply indicates that they were not very efficient to start with. Indeed, when a modern digital scheme fails it's usually already well beyond the point where the analog scheme would have failed.
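The Shannon limit mentioned above is easy to compute. A minimal sketch with hypothetical numbers (a 3 kHz voice-grade channel; the SNR values are illustrative, not NASA's):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Channel capacity in bits/s: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# At 30 dB SNR (linear ratio 1000) a 3 kHz channel supports ~30 kb/s;
# at 0 dB SNR (ratio 1) capacity collapses to 3 kb/s.
print(shannon_capacity(3000, 1000))  # ~29900 b/s
print(shannon_capacity(3000, 1))     # 3000 b/s
```

Past that capacity, error-free transmission is impossible no matter how clever the coding, which is why a weak digital link fails outright instead of degrading gracefully.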
Pioneer and Voyager are incredibly far from earth. (They're also robots so there's no need to send voice, but I figure you already know that.)
Human crews in the Shuttle or ISS are closer to the earth than San Francisco is to Los Angeles, so there's really no comparison. And no reason why they can't have perfectly good voice communications.
There really is no excuse for it. Clear communications is desirable and necessary. There is no technological reason, analog, digital or rf that the communications can't be crystal clear. NASA hasn't bothered probably more for marketing reasons than anything else. But whatever the reason they haven't bothered, it is technically feasible, not very expensive, and clear audio should be designed into the system. If all they have is Monster Cable audiophiles at NASA, let me know and I'll design every aspect of a high quality system for an appropriate price.
Even if you designed every aspect of a high quality system for _FREE_, they would not upgrade it. The cost of the documentation alone is prohibitive. It's not for technical reasons, but bureaucratic. Stop thinking like an engineer :-)
It's obvious as to "why": it makes it sound more realistic, not faked. If it sounded too good, people might think the astronauts were next door, in a studio, and the whole thing was a fake set-up--the equivalent of being Photoshopped. ;-)
I suspect it's a very narrowband, companded, low bitrate channel, and the reasons for it not being better are directly related to the prioritization of video quality over audio quality.
After all, the audio is just human voice and the only fidelity criteria is intelligibility of speech. Pictures and video, on the other hand, are what make space exploration interesting to the public and therefore have much greater PR value to NASA.
Voice has been completely digital on the Shuttle from day one. Each voice channel uses 32 kb/s, I believe. I have no reason to think the ISS is any worse.
At the start I think the Shuttle used ADPCM encoding for voice, which shouldn't sound much worse than an ordinary phone call. But 32 kb/s is more than enough for a very good-sounding voice codec, so it can't be anything but inertia that keeps NASA from using one.
NASA has traditionally used heavy analog speech processing to increase intelligibility on poor analog channels by an extra couple of dB. And since they still use VHF AM radios as a backup, it's possible they're still doing this at the expense of voice quality on the good digital channels. Or maybe it's still just institutional inertia.
I like the third alternative -- or one very similar to it: The audio SOUNDS different from other audio broadcasting; so, listeners can tell when the words are coming from space or not.
NASA might as well make it sound degraded, because sometimes distant communications do get degraded.
OK, let me get this straight. Satellite phones use an array of orbiting repeaters. The ISS is orbiting the same planet.
I would think the shortest distance would be a line between the astronaut and the satellite, just tap in.
By the way, digital answering machines compand the living daylights out of your voice and it still sounds good, even over bandwidth-limited POTS lines.
AND, if the Navy SEALs can use throat mikes for covert ops, the technology is already there.
The ISS, like the Shuttle, makes heavy use of TDRSS to provide continuous communications throughout each orbit. Remember (or maybe you don't) that when Mercury, Gemini, Apollo and Skylab spacecraft were in low earth orbit, each pass over an earth station gave them only a few minutes to talk. They were completely dependent on ground stations, and they were out of touch much of the time. This was even true for the first few Shuttle flights, until they launched the first TDRSS spacecraft.
The Apollo missions to the moon were in a separate category. Because of their high altitude, it took only three earth stations to provide continuous communications, except when they were behind the moon. But NASA is now stuck back in earth orbit (or even on earth itself, depending on how you look at it), so it again needs a relay for near-continuous communications.
TDRS...was a 70's design and we did not think about HD video at the time. We just launched a new-generation TDRS satellite - I think in the last 36 months or so. We should see a better mix between audio and video in the next few years. Given the current Administration's idea that it is more important to reach out to the Muslim world via NASA...we may never get a full upgrade to TDRS. I also look for manned exploration to get hit big... at least for the next 2 years.
Nasa spinoffs have created a great world to live in. We need this kind of exploration to continue the pace of innovation we have become used to.
The shuttle is based on 1960s technology with periodic upgrades to systems like avionics. As long as Houston has its telemetry down links, I doubt they worry much about audio quality. Remember, also, that many of the Apollo astronauts were amazed at how good comms was during lunar trips. About the only time there were problems with voice communications was during maneuvers like extracting the LM from the Saturn third stage en route to the moon. NASA has bigger fish to fry (like developing heavy-lift rockets) than upgrading audio.
The Shuttle and the ISS both use ordinary air at sea pressure levels, so that's not it. When astronauts who are radio amateurs plug their headsets into amateur radio transceivers, their voices sound great -- so it's not their headsets either. I once asked this exact question of one of those "hams in space" and he said that the spot on the shuttle he used for his ham operating wasn't especially noisy at all.
So it must be the audio processors, A/D codecs or the communication channels themselves that are responsible for that famously poor NASA voice quality.
There are several audio sources and paths on the space shuttle. Some audio sources are inherently better than others, but in this case it is the transmission path that determines the audio fidelity. The best path now is the high-bandwidth Ku-band satellite link. Recently, it was upgraded with a high-fidelity handheld microphone, so the audio sounds as good as any media outlet on the ground. You actually have to watch closely for the microgravity effects in the picture to tell it is not taking place on the ground. However, this link is not always available everywhere on orbit.

The TDRS link is much more available. Included in that link are the air-to-ground intercom communications. They are indeed low-quality and audio-bandwidth limited to 2 kHz. This is because the orbiter uses a 1970s-technology digital multiplexer and transceiver set that has limited data bandwidth. The intercom audio is actually digitized and multiplexed with other non-voice data and transmitted to TDRS or the ground via S-band radio. Due to the overall limited data capacity, the voice data had to be limited to the lowest rate possible. In the 1970s, psychoacoustic digital audio compression was neither common nor inexpensive to implement with limited flight hardware and processing resources. The easiest solution was to limit the audio sampling rate to about 4 kS/s. This results in poor audio fidelity while retaining intelligibility.

Nowadays, the same bit rate could yield higher-fidelity audio by using compression. This is widespread technology found in every cell phone and MP3 player. However, the costs of testing and qualifying a change to the critical voice channel in flight hardware and software at this point are prohibitive. Hopefully NASA will embrace current audio compression technology and retain full-bandwidth audio for all channels, including intercom, for the next generation of spacecraft.
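Those numbers hang together. A rough sketch of the arithmetic (the 8-bit sample depth is my assumption; the comment gives only the sampling rate and audio bandwidth):

```python
# Rough arithmetic behind the figures in the comment above.
# 8-bit samples are an assumption, chosen because they yield the
# 32 kb/s per voice channel quoted elsewhere in this thread.
sample_rate = 4000      # samples/s, as stated above (~4 kS/s)
bits_per_sample = 8     # assumed

bit_rate = sample_rate * bits_per_sample
nyquist_bandwidth = sample_rate / 2

print(bit_rate)           # 32000 b/s
print(nyquist_bandwidth)  # 2000.0 Hz -- matches the 2 kHz audio bandwidth
```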
The modern NASA network is highly digital. The only actual "radios" on the ISS or space shuttle are used for landing (in the case of the shuttle) or emergencies.
The audio quality (or lack thereof, if you wish) is all in the microphone. Any aerospace application will tend to use mil-spec, tried and true tech whenever possible, and the (mechanical) noise cancelling microphone dates back to the 50's or 60's. These mics are inherently narrow band and have low dynamic range. They must be placed on the lips to work properly (move them just a few mm away and you can't hear the speaker). The good news is they work and are relatively cheap (emphasis on "they work").
I have not heard it lately. It originated from the Motorola VHF equipment, which was naturally narrowband, exactly like all the other PM communications at the time. It stayed much the same, and might still be analog, but instead of VHF antennas on earth, we now use TDRS. Apollo definitely used PM or FM communications, which was separate from the Unified S-band signals.
My question, why do we have such poor quality cell phones ????
It might be interesting to know that phone manufacturers, my employer in fact, have to inject "comfort noise" into their audio path. If we didn't, then the idle pauses where no one is speaking would be very uncomfortable.
So there is something to having "noisy" audio - it does indeed affirm reality in the communication path.
And if you believe that NASA thought about this when they designed their interstellar radio, then there's this bridge I know...
No - I think it was just Serendipity.
Voices sound much the same in low-pressure oxygen because the speed of sound remains almost the same. That characteristic "Donald Duck" talk in helium results only from its unusually high speed of sound compared with N2/O2.
Sound does carry less efficiently in low pressure air. Skylab added a little nitrogen to its otherwise Apollo-style pure O2 atmosphere but many of its astronauts still had to use the intercoms because their voices just didn't carry well in the reduced pressure.
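The speed-of-sound point is easy to verify with the ideal-gas formula v = sqrt(γRT/M); a small sketch (room temperature and textbook gas constants assumed):

```python
import math

def speed_of_sound(gamma: float, molar_mass_kg: float, temp_k: float = 293.0) -> float:
    """Ideal-gas speed of sound: v = sqrt(gamma * R * T / M)."""
    R = 8.314  # J/(mol*K), universal gas constant
    return math.sqrt(gamma * R * temp_k / molar_mass_kg)

v_air = speed_of_sound(1.40, 0.029)  # diatomic mix, ~343 m/s
v_he = speed_of_sound(1.67, 0.004)   # monatomic helium, ~1000 m/s

# Vocal-tract resonances (formants) scale with the speed of sound,
# so in helium they shift up by roughly this factor of ~3.
print(v_air, v_he, v_he / v_air)
```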
If we can get HD TV back from a geosynchronous bird, we could get decent audio from the shuttle--if we cared. The bottom line is that it does not affect the--you guessed it--bottom line.
It's just like the audio at the fast food drive up that has been atrocious for 60-odd years. When will fast food chains spend the money to improve it? When people quit patronizing their restaurants because they can't stand the crappy audio at the drive up. When will people quit patronizing their restaurants because they can't stand the crappy audio at the drive up? Looks a lot like never, at this point.
How many congressmen, senators or NASA executives have been seriously inconvenienced by the quality of audio from manned missions? When they are, the sound will improve.
Consider that the main priority is for the com system to be reliable and understandable. A 2 kHz bandwidth is neither, although you do hear a lot of signals like that on the amateur radio bands, so it is probably a bit wider than 2 kHz. It is indeed probably compressed a bit to help it be clearer, and it may also be digitized and multiplexed. One other thing: far more important than fidelity is reliability.
In addition, by the time it gets to the broadcast people, it has probably been abused a whole lot on the trip down from space. Did you ever listen to digitized single-sideband? Not even close to natural. Then consider how rugged it has to be, and think: could you drive a nail with your cell phone as the hammer?
The shuttle uses a modified ABATE algorithm as part of its telemetry stream on the S-band. The Ku-band uses the same stream as the S-band, since the multiplexers/demultiplexers can be the same. See http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1094022

I was responsible for troubleshooting audio problems (and other telecommunications issues) on the orbiters for 7 years, and I had to help figure out when we had a problem with audio. So the hardware, specs, TDRSS satellites, etc., were designed for that standard. Yes, even MP3 is much superior, but the system was designed in the 60s/70s, and it worked. Yes, it's poor audio. It is prohibitively expensive to change, especially for NASA and all its bureaucracy. Read, for example, why railroad tracks are the width they are; it goes back to Roman times! I'm not condoning the bad audio, just trying to explain how we got where we are.
The distances, bandwidth, and just being in space, do not explain it. We satellite TV viewers receive hundreds of channels of HD TV with Dolby Digital audio from much higher orbits. Even the old analog satellite TV from the 1980's had decent mono or stereo audio. There certainly is no technical reason for poor audio from the space station.
My guess is that it may have been the best they could manage with the available power envelope for the Apollo program. When the Space Shuttle program was in the design phase they had to go with what was proven, something not far removed from the Apollo comm. Over the next 3 decades, nobody at NASA wanted to dip into their budget for the painstaking process of a comm redesign. The old comm was known to work, so a redesign was simply a low priority project that never got funded.
I propose that the poor audio quality is due simply to lack of funding.
Hi, there's another possible cause for the noisy audio. The electronics is exposed to an ionizing radiation field much stronger than at the earth's surface. Even in an aircraft it is much stronger. A reverse-biased junction is a good gamma and X-ray detector.
It must be that.
It's all in the microphone.
They have a broadcast quality mic aboard the Alpha/ISS, which they use occasionally. So we know at least some if not all of the audio channels are high quality.
So why does it sound "bad" the rest of the time?
Because the primary function is to GET THE MESSAGE ACROSS.
Flat broadcast-quality mics are not the best choice for that. Communications quality mics, with a tailored response (rolled-off bass, somewhat peaked upper midrange) are much better at getting the message across, even in adverse conditions like high noise and/or distortion.
Who is the intended recipient? Not you and me! Except for occasional broadcasts to the press or general public, it is other astronauts and NASA workers. They need to know WHAT was said, not how pleasant it sounds. They need to GET THE JOB DONE.
As for "'squelchy'-sounding audio", I think that's because NASA still uses squelch and PTT. They don't seem to like leaving mics live all the time. On board Alpha, their mics are PTT (push to talk) and we hear them only after they push the switch, and it may cut off the first syllable or two because some of them aren't good at it.
On spacewalks there is no PTT, they use only squelch (VOX). The environment in a spacesuit is very noisy with air conditioning running all the time, and the threshold between squelched and not squelched is a fine line; consequently a lot of noise gets through and it sounds choppy. They may also be using some sort of throat mics.
Again, it's all for the same reason: get the message across. Intelligibility wins over fidelity.
The electronics between Alpha and earth is not impaired by radiation or distance or low air pressure or age or something like that. The bandwidth needed to pass flat audio is trivial compared to passing video or high bandwidth data; and we know the audio is good quality when they use the good mic. The rest of the time, they use mics that enhance intelligibility, because that's what matters.
As you say, the goal is intelligibility in noisy conditions - not fidelity. Processing toward this end has been done *on purpose* since WW2 at least. Since sibilance and fricatives carry the most information, low-frequency content is filtered out. A common basic technique is to high-pass filter (at say 300 Hz), then hard clip (to boost and limit high-frequency content), and finally low-pass filter to remove extreme high frequencies. This is well documented and works to vastly improve intelligibility. It makes me laugh to hear folks say that audio quality was inherently bad in the sixties ... you're grossly overestimating the impact of "improvements" and "going digital" over the years! --- Bill Whitlock, Life Fellow of the Audio Engineering Society and IEEE Life Senior.
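That filter-clip-filter chain is simple to sketch. The cutoffs and clip level below are illustrative guesses in the spirit of the description above, not documented NASA values:

```python
import numpy as np
from scipy.signal import butter, lfilter

def speech_processor(x: np.ndarray, fs: int = 8000, clip: float = 0.1) -> np.ndarray:
    """Classic intelligibility chain: high-pass ~300 Hz, hard clip,
    then low-pass to strip the harmonics that clipping generates."""
    b_hp, a_hp = butter(2, 300 / (fs / 2), btype="high")
    b_lp, a_lp = butter(4, 3000 / (fs / 2), btype="low")
    y = lfilter(b_hp, a_hp, x)           # remove low-frequency energy
    y = np.clip(y, -clip, clip) / clip   # hard clip: boosts consonant energy
    return lfilter(b_lp, a_lp, y)        # remove extreme highs

# Quick demo on a synthetic two-tone "voice": a strong 120 Hz fundamental
# plus a weak 1 kHz component. The chain suppresses the former and
# levels the latter up toward full scale.
fs = 8000
t = np.arange(fs) / fs
x = 0.8 * np.sin(2 * np.pi * 120 * t) + 0.2 * np.sin(2 * np.pi * 1000 * t)
y = speech_processor(x, fs)
```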
You must remember how NASA works. They don't do anything just because it is faster, better, or higher quality. They do things as cheaply as possible once the manual is written. What I mean is that if someone finds a better way to put a panel on the shuttle, the better way will never meet the real world, because it would cost way too much to change all of the necessary and unnecessary documents NASA has.
NASA does everything through a manual. Nothing, and I mean absolutely nothing is done without planning for it ahead of time.
Hey gang, I wanted to help shed a little light on this. We do indeed have decent audio being downlinked from the International Space Station. See link. http://www.youtube.com/watch?v=ywFfI0-nu00
We have been doing this live since 2006. The other types of audio that you may hear come through our S-band system, which has to have multiple redundancy and backup systems for critical communications. Qualifying hardware for space flight is challenging. As electronic components become smaller and smaller, they tend to be more susceptible to radiation. Downlink from the ISS is 150 Mbps.
I think what's being overlooked here is that the channel is for voice communications in noisy backgrounds. Well before World War II, research had shown that, under extreme noise conditions (think airplane cockpits), voices are more intelligible if the audio is high-pass filtered, hard-clipped, and then low-pass filtered. It sure isn't "hi-fi" but it vastly boosts voice intelligibility. I'll assume they were smart enough to do this on purpose. Just because it's "old school" doesn't mean it's bad ... neither is digital always better!
Personally, I don't want to hear crystal clear full bandwidth dynamic audio from the space shuttle or the ISS. I think the graininess and distortion add to the atmosphere. The reports from space wouldn't be the same if they were crystal clear!
I would like to sound off in agreement with NASA-HD and Randall: when it comes to space exploration, the system has to perform reliably and do its mission.
It is all about a balance among technical priorities (REAL mission needs, costs, deadlines, life support for astronauts, and other items on a long wish list).
In noisy environments, what good is hi-fi audio if it is overwhelmed by surrounding audio sources? I would like to pose a rhetorical question: who would give up life support and mission-critical comm bandwidth for the luxury of high-fidelity audio?
Here is a great example of where technology application needs to be SMART, not wasteful: do what is effective and reliable, and nothing more.