Digital vs Analog - Noise Distortion

In summary, the conversation discusses the differences between digital and analog communication, focusing on their abilities to withstand noise and distortion. It is agreed that digital communication can handle noise and distortion better, as long as they stay within certain limits. However, it is also acknowledged that digital does not degrade gracefully and can suffer from pixelization and loss of signal when noise levels rise. On the other hand, analog communication can still be understood even with some noise, making it better for human interpretation. It is also noted that digital communication has better error correction capabilities, but it relies on analog integration to achieve optimal results. Ultimately, the statement in the textbook is deemed too sweeping and neglects practical considerations.
  • #1
frenzal_dude
In my textbook it says this:

"Digital communication, which can withstand channel noise and distortion much better than analog as long as the noise and the distortion are within limits, is more rugged than analog communication. With analog messages, on the other hand, any distortion or noise, no matter how small, will distort the recevied signal."

Since a digital signal already has quantization error, wouldn't the distortion caused by noise in an analog signal be no worse than the quantization error in a digital signal which can withstand the same noise?
 
  • #2
Depends on the level of noise and the number of quantization levels you are using.

As a very rough example: suppose you are amplitude modulating. The digital case uses some scheme where bands of amplitudes represent bits, such that every 200 mV is a new bit. Now say your quantization level is 0.1 µV and you have 1 mV of noise on the channel.

The digital system could transmit with a worst-case error of 0.1 µV, but an analog system would carry the full 1 mV of noise, roughly 10,000 times that error.
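A rough back-of-the-envelope version of that comparison, using the same made-up 0.1 µV and 1 mV figures:

```python
# Hypothetical numbers from the example above (0.1 uV quantization step,
# 1 mV of channel noise), just to show the size of the gap.

quant_step = 0.1e-6   # quantization level, 0.1 uV
noise      = 1e-3     # channel noise, 1 mV

digital_error = quant_step   # residual error after the receiver re-quantizes;
                             # the 200 mV decision bands swallow the 1 mV of noise
analog_error  = noise        # an analog link passes the noise straight through

print(f"digital worst-case error: {digital_error:.1e} V")
print(f"analog error:             {analog_error:.1e} V")
print(f"ratio: {analog_error / digital_error:.0f}x")   # about 10000x
```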
 
  • #3
I once heard an audio comparison between an FM audio channel and a digital channel. The digital channel was noise-free down to about -105 dBm, where it started getting choppy and finally dropped out altogether. The analog channel had noise but was intelligible down to -118 dBm. Which would you say was better?
 
  • #4
i don't know if i would agree with the textbook.

it is well known in the audio world that digital does not degrade gracefully. anyone with a real digital TV (not cable) can see that. what would be snow in an old analog TV becomes this terrible pixelization and eventual loss of picture in the new digital. but the digital TV looks great until the added noise is large enough that it starts to degrade.

a little bit of noise hurts nothing in a digital signal. the 1s remain 1 and the 0s remain 0. but a little bit of noise hurts an analog signal a little bit.
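As an illustration of that cliff (my own toy sketch, not anything from a TV standard; it assumes numpy is available), hard 1/0 decisions on a noisy ±1 signal stay essentially error-free until the noise gets comparable to the signal, and then the error rate climbs quickly:

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 100_000)
symbols = 2 * bits - 1            # map 0/1 -> -1/+1 levels

for sigma in (0.1, 0.3, 0.5, 0.8, 1.2):
    received = symbols + sigma * rng.normal(size=symbols.shape)
    decisions = (received > 0).astype(int)   # hard threshold at 0
    ber = np.mean(decisions != bits)
    print(f"noise sigma = {sigma:.1f}  ->  bit error rate = {ber:.5f}")
```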
 
  • #5
For practical reasons we have developed forward error correction (FEC) algorithms designed to correct all errors as long as the channel gives them better than about 1/50 BER. This is by design, since that approach has the most general use. But the statement seems to imply that this behavior is some fundamental law. Ideally, an FEC should be able to be tweaked to fail optimally for the given use of the data as SNR degrades.

The statement is too sweeping and neglects quite a few considerations, like:

1. If the source data is sound or video and will be interpreted by a human, then (assuming source and channel coding being equal) analog is better because humans are better at filtering analog noise in audio or video than anything else out there.

2. We currently can do digital error correction much better than analog error correction simply because we've figured it out better (in fact, analog REC and FEC are still laboratory novelties, as far as I know). So here digital is better for a purely practical reason. The reason we've pursued digital error correction so much more than analog error correction is that analog error correction must be designed per channel and type of data. Digital error correction works with any digital data (although in some cases it is somewhat tweaked for the type of data and typical channel behavior).

3. For purely practical reasons, digital is convenient for channels with very low SNR, like hard disks and computer data buses.

If you want to wax theoretical (i.e. ask what a super-advanced civilization would use), then it would be filling the channel to the brim with good FEC or REC (if possible) so that the Shannon-Hartley limit is approached, but degrading with reduction in SNR in a way that depends on the use of the data. For example, audio and video would degrade such that the brain receiving them could always get the most info. My guess is that some amount of analog channel coding is necessary to realize the perfect (Shannon-Hartley) data rate.

Currently we must use analog integration to get the BER better than about 1/50 so that our current FEC algorithms can still work. (Anything worse than 1/50 BER blows our best FEC out of the water.) And this is why typical digital communication works wonderfully or not at all. We don't degrade very gracefully because we can get the most bits across on average by bringing the BER to 1/50 through simple (and inefficient) analog redundancy; then digital FEC (plus maybe some REC) does the rest.
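As a toy stand-in for real FEC (actual codes are far more elaborate than this), a simple repeat-each-bit-and-majority-vote scheme already shows the all-or-nothing flavor: below some raw BER the coded error rate is tiny, and above it the code stops helping much.

```python
# Toy repetition code: repeat each bit n times, majority-vote at the receiver.
# Not the FEC schemes referred to above, just an illustration of the threshold
# behavior of coded links.

from math import comb

def coded_ber(raw_ber: float, n: int = 5) -> float:
    """Probability that a majority vote over n repeats decodes the bit wrongly."""
    k_min = n // 2 + 1    # this many repeats must flip to out-vote the rest
    return sum(comb(n, k) * raw_ber**k * (1 - raw_ber)**(n - k)
               for k in range(k_min, n + 1))

for p in (0.001, 0.01, 0.02, 0.05, 0.1, 0.2):
    print(f"raw BER {p:>5}  ->  coded BER {coded_ber(p):.2e}")
```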
 
  • #6
In the previous post, where I said "very low SNR" I meant to say "very high SNR".
 
  • #7
rbj said:
i don't know if i would agree with the textbook.

it is well known in the audio world that digital does not degrade gracefully. anyone with a real digital TV (not cable) can see that. what would be snow in an old analog TV becomes this terrible pixelization and eventual loss of picture in the new digital. but the digital TV looks great until the added noise is large enough that it starts to degrade.
Funny, but that is exactly what my cable does. My provider provides digital content, and when the signal goes bad it goes bad in a very big way. But when the signal is good (and not bandwidth limited), it is clean. Well, except maybe for some Gibbs phenomena, which you can see/hear even when the signal is perfect. Solution: Crank up the bass. Then no one will know that the high frequencies suffer.

a little bit of noise hurts nothing in a digital signal. the 1s remain 1 and the 0s remain 0. but a little bit of noise hurts an analog signal a little bit.
I think this is what the textbook was after. The text did qualify its statement with "as long as the noise and the distortion are within limits." A little bit of noise in an analog signal is going to come across as a little bit of white noise in the picture / sound. A little bit of noise in a digital signal can be completely removed thanks to redundancies (error correcting codes) embedded in the signal.
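As a concrete (if toy) example of that kind of embedded redundancy, a Hamming(7,4) code corrects any single flipped bit in a block of seven, so a "little bit of noise" vanishes completely at the decoder. This is only an illustration of the idea, not the code any particular broadcast system actually uses:

```python
def hamming74_encode(d):
    """d is a list of 4 data bits; returns 7 coded bits (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4        # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4        # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4        # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3       # 0 = no error, else 1-based error position
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
sent = hamming74_encode(data)
corrupted = sent.copy()
corrupted[4] ^= 1                    # a "little bit of noise": one flipped bit
print(hamming74_decode(corrupted) == data)   # True: the error is removed entirely
```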
 
  • #8
I am no expert in communication; I have just read some books about it. My thinking is that if the digital signal is truly "1" and "0", whether it uses edges or levels, it should be more reliable, since it can take a lot of distortion before you lose it. An analog signal changes as soon as you start adding distortion.

BUT nowadays digital communication formats are not simple 1s and 0s...far from it. Take QAM: 256-QAM, for example, packs 8 bits into each symbol by using 256 constellation points. It is a quadrature combination of two signals, and depending on the "analog levels" you create a 2D map. You get right back to the analog distortion problem again.

The good thing about digital communications is that they always have checks and feedback. For example, there is a checksum for each block of data; if the checksum fails, the receiver can tell the transmitter to re-transmit the information. That is why a bad connection slows the communication down rather than failing outright, until a certain point where it dies! You cannot check, verify, and request a resend in analog communication; once you lose it, it's lost. That is also part of the reason for the delay when a news anchor interviews a person in a remote location. Notice I highlight "part", because there are other reasons, like pipelining different channels, bundling, and all sorts of processes during transmission. But a bad link will cause seconds of delay due to the re-transmissions. Ten years ago, when I was designing hardware for SONET OC48 and OC192, I read into SONET and ATM; I believe they both do that. But as usual, most of the stuff I read has leaked out of the back of my head already!
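A minimal sketch of that check-and-retransmit idea (my own toy example, not how SONET or ATM actually frame things; in a real protocol the checksum would travel with the data rather than being shared out of band):

```python
import random
import zlib

def send_over_noisy_link(payload: bytes, flip_prob: float) -> bytes:
    """Deliver payload, flipping each bit independently with probability flip_prob."""
    noisy = bytearray(payload)
    for i in range(len(noisy)):
        for bit in range(8):
            if random.random() < flip_prob:
                noisy[i] ^= 1 << bit
    return bytes(noisy)

def transfer(payload: bytes, flip_prob: float, max_tries: int = 50) -> int:
    """Return how many transmissions it took before the checksum matched."""
    checksum = zlib.crc32(payload)               # computed at the sender
    for attempt in range(1, max_tries + 1):
        received = send_over_noisy_link(payload, flip_prob)
        if zlib.crc32(received) == checksum:     # checksum passes -> ACK
            return attempt
    return max_tries                              # link effectively dead

random.seed(1)
for p in (1e-4, 1e-3, 1e-2):
    tries = transfer(b"hello, world" * 10, p)
    print(f"bit flip probability {p}: delivered after {tries} transmission(s)")
```

The point of the sketch is the behavior, not the protocol details: as the link gets noisier, retransmissions pile up and the effective throughput drops, until the checksum essentially never passes and the connection dies.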

As I said, I am not an expert on this; this is just my understanding from limited knowledge.
 

Related to Digital vs Analog - Noise Distortion

1. What is the difference between digital and analog noise distortion?

Digital noise distortion is caused by errors in the digital representation of an analog signal, while analog noise distortion is caused by external interference in the analog signal.

2. Which type of noise distortion is more prevalent in modern technology?

Digital noise distortion is more prevalent in modern technology due to the widespread use of digital devices and the conversion of analog signals to digital for processing and storage.

3. How does noise distortion affect the quality of audio or visual media?

Noise distortion can result in a loss of clarity and fidelity in audio or visual media, making it harder to discern details and reducing overall quality.

4. Can noise distortion be eliminated entirely?

Noise distortion can never be completely eliminated, but it can be minimized through various techniques such as signal processing, shielding, and filtering.

5. Is one type of noise distortion better than the other?

Both digital and analog noise distortion have their own strengths and weaknesses, and the better type depends on the specific application and technology being used.
