My question concerns the mirror function of the Nyquist frequency. If, for example, I apply a non-linear distortion algorithm to an audio signal, I can generate frequencies that exceed the Nyquist limit. Why, in the digital domain, are these frequencies folded back into the spectrum as if the Nyquist frequency were a mirror? Is this because the Fourier transform of a sequence is a periodization of the Fourier transform of the analog signal?
This is actually covered in my FX plugin books (both editions), which show how an under-sampled signal (a violation of Nyquist) is encoded as an alias, or "in disguise" at an incorrect frequency. If you study that and do some sketches, you will see that as a signal goes just above Nyquist, it is encoded as a signal slightly below Nyquist - the "mirror" you are talking about. But that is just one way to look at it. The more classical approach uses the convolution of the analog signal's spectrum with that of the sampling switch.
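The "in disguise" encoding can be verified numerically: a sine just above Nyquist produces exactly the same sample values (up to a sign flip, which is the phase inversion of a negative frequency) as its mirror image just below Nyquist. A minimal sketch, with the 48 kHz sample rate and 25 kHz input being my own example numbers:

```python
import math

fs = 48000.0          # sample rate (example value)
f_in = 25000.0        # 1 kHz above Nyquist (24 kHz)
f_alias = fs - f_in   # folds down to 23 kHz, 1 kHz below Nyquist

for n in range(64):
    above = math.sin(2 * math.pi * f_in * n / fs)
    below = math.sin(2 * math.pi * f_alias * n / fs)
    # same sample values, inverted phase: the alias behaves like -f_in
    assert abs(above + below) < 1e-9
```

The sign flip is why, in the classical view described below, the alias is literally a negative frequency that has folded into the positive half of the spectrum.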
The periodicity of the sampled spectrum is a result of convolving the input spectrum with the spectrum of the sampling signal (the switch), which is a train of spikes at multiples of the sample frequency. If the input spectrum is not band-limited, the skirts of the spectral replicas will overlap. The negative frequencies on the left skirt of the replica just above Nyquist then "fold down" into the positive frequencies of the baseband spectrum. In this approach, when you alias the signal, you are really listening to negative frequencies. I do this as an experiment with my students on the first day of Plugin Class.
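The fold-down rule described above can be written as a tiny helper that maps any real frequency into the baseband [0, fs/2]: the spectrum repeats every fs, and anything landing above Nyquist reflects off it. This is a sketch of the arithmetic only, not of any particular library API:

```python
def fold(f, fs):
    """Map a frequency onto [0, fs/2] via spectral periodicity plus mirroring."""
    f = abs(f) % fs          # the sampled spectrum repeats every fs
    if f > fs / 2:
        return fs - f        # reflect off the Nyquist "mirror"
    return f
```

For example, at fs = 48 kHz, a 25 kHz component folds to 23 kHz, and a 47 kHz component (equivalently, -1 kHz from the replica at 48 kHz) folds to 1 kHz.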
Using RackAFX's sweep generator and a simple "horrible distortion" algorithm that clamps the signal to +1 or -1 based on polarity, you can see the mirroring, hear the aliased components (negative frequencies), and find a special frequency where the aliasing suddenly disappears.
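A sketch of that experiment, assuming the clamp is a plain sign function (which turns a sine into a square wave with only odd harmonics). One candidate for the "special" frequency is my own worked example: at f0 = fs/6, every odd harmonic folds back exactly onto the odd-harmonic series of f0, so the aliases hide on top of existing partials.

```python
def clamp_sign(x):
    # "horrible distortion": hard-limit each sample based on polarity
    return 1.0 if x >= 0.0 else -1.0

def fold(f, fs):
    # map any frequency into [0, fs/2] by periodicity plus mirroring
    f = abs(f) % fs
    return fs - f if f > fs / 2 else f

fs = 48000.0
f0 = fs / 6.0   # 8 kHz: a candidate "special" frequency (my assumption)
# a square wave contains odd harmonics f0, 3*f0, 5*f0, ...
folded = [fold((2 * k + 1) * f0, fs) for k in range(8)]
# every folded harmonic lands on 8 kHz or 24 kHz, both members of the
# odd-harmonic series of f0, so no inharmonic alias tones appear
```

Sweeping away from fs/6, those folded components detune from the harmonic series and become audible as the characteristic inharmonic aliasing "birdies".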
Hope that helps,
Thank you very much!
Therefore, to avoid aliasing, it is advisable to oversample the signal to raise the Nyquist frequency and leave more room for the harmonics generated by, for example, non-linear processing. If we don't do this, we get frequencies above Nyquist and therefore aliasing (so we have violated the sampling theorem). As long as, for example, a guitar signal is sampled by the sound card, everything is fine and Nyquist is respected. The problems arise from non-linear processing (amplifier simulation, distortion, etc.) and then in signal reconstruction. Is this correct?
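The headroom gained by oversampling can be sketched with simple arithmetic; the 4x factor and the 10 kHz fundamental below are my own example numbers, not a recommendation:

```python
fs = 48000.0
os_factor = 4                  # 4x oversampling (assumed example factor)
fs_os = fs * os_factor         # 192 kHz, so the new Nyquist is 96 kHz

f_fund = 10000.0               # example fundamental fed into the distortion
# highest harmonic order that stays below Nyquist, before and after oversampling
order_base = int((fs / 2) // f_fund)    # at 48 kHz: only up to the 2nd harmonic
order_os = int((fs_os / 2) // f_fund)   # at 192 kHz: up to the 9th harmonic
# after the non-linear stage, low-pass at the ORIGINAL Nyquist (24 kHz) and
# decimate back to fs: the extra harmonics are filtered out, not folded down
```

The key point is the last comment: the harmonics above the original Nyquist must be removed by the anti-aliasing filter before decimation, otherwise they would fold on the way back down.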