Signal Detection Variance of Noise when signal is not present - matlab

I am trying to use MATLAB to simulate detection of a signal whose amplitude is either 1 or 0. To model the AWGN channel I need to generate white noise, and I know that for a given SNR the noise variance is determined by the signal power. However, if my amplitude is 0, does that mean my noise variance is 0? If that were true, there would be no false-alarm probability. If it is not true, how do I calculate the noise variance?

The SNR is the ratio of the average power of the signal to the average power of the noise.
In your example the signal is 0 roughly half the time and 1 roughly half the time (if the data are independent and identically distributed, i.i.d.). The average signal power is then:
0.5*0 + 0.5*1 = 0.5
So no: the SNR is the ratio between this 0.5 and the noise power, and the noise variance is fixed by that ratio regardless of which amplitude was sent.
Normally during detection you would decide on a threshold, e.g. 0.5. If the received sample is below the threshold you decide on a 0, and if it is above you decide on a 1.
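A minimal sketch of this setup (assuming the SNR is defined against the average signal power of 0.5, with a threshold at 0.5; the SNR value and number of symbols are arbitrary choices for illustration):

```matlab
% On-off keyed signal: amplitude 0 or 1, equiprobable
N = 1e5;
bits = randi([0 1], 1, N);
SNR_dB = 10;
Psig = 0.5;                          % average signal power: 0.5*0 + 0.5*1
Pnoise = Psig / 10^(SNR_dB/10);      % noise power = variance (zero-mean noise)
rx = bits + sqrt(Pnoise)*randn(1, N);
% Threshold detection at 0.5
detected = rx > 0.5;
Pfa = mean(detected(bits == 0));     % false alarm: decide 1 when 0 was sent
Pmd = mean(~detected(bits == 1));    % missed detection: decide 0 when 1 was sent
```

Because the noise variance is set by the average signal power, it is the same whether a 0 or a 1 was transmitted, so the false-alarm probability is nonzero.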

Related

Does the duration of a signal affect its frequency component's amplitude? Also, does the sampling frequency affect the power of a signal?

I have two questions that are bugging me:
Does the duration of an audio signal affect the amplitude of the frequency components of that same signal? For example, I record the sound of a fan using a microphone. First I record for only 10 sec and convert the audio signal into a frequency spectrum. Then I record the same sound for 20 sec and do the same. In both cases the sound of the fan is the same, but does the duration of the signal affect the amplitude of the frequency components in the spectrum plot?
Does the sampling frequency affect the power of a signal? For example, I have two recordings of that same fan sound: the first is 10 sec long with a sampling frequency of 5 kHz, and the second is the same audio but with the sampling frequency changed to 15 kHz. I used MATLAB to check the power of both signals and it was the same, but I want to know why. The formula I used was Power = rms(signal)^2. I expected the second signal to have more power because it has more samples than the first recording, and since those extra samples also have random amplitudes, the average shouldn't come out the same as for the first one. Is my reasoning right?
Can anyone provide their thoughts? Thank You!
This answer is from: https://dsp.stackexchange.com/questions/75025/does-the-duration-of-a-signal-affect-its-frequency-components-amplitude-also
Power is energy per unit time. If you increase the duration, you increase the energy, but due to the normalization with time the power would be the same.
The DFT as given by
X[k] = sum_{n=0}^{N-1} x[n] e^{-j2πnk/N}
will scale the frequency component by N as given by the summation over N samples. This can be normalized by multiplying the result by 1/N.
The frequency components of the signal will be the same in a normalized DFT (normalized by dividing by the total number of samples) for components that occupy a single bin (pure tones), but the observed noise floor may change with the sampling rate: if the noise floor is limited by quantization noise, the total quantization noise (well approximated as white noise, meaning constant across all frequencies) is spread over a wider frequency range, so increasing the sampling rate lowers the contribution of quantization noise on a per-Hz basis (the noise density).
The duration also affects the noise measured in each frequency bin: for an unwindowed DFT, the equivalent noise bandwidth per bin is fs/N, where fs is the sampling rate and N is the number of bins. At a given sampling rate, increasing the number of bins increases the total duration and thus reduces the noise bandwidth per bin, causing the observed DFT noise floor to decrease (same noise power, just less measured in each bin).
Windowing, if done in the time domain, will reduce the signal level by the coherent gain of the window but increase the equivalent noise bandwidth, so pure tones are reduced more than noise that is spread over multiple bins.
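A quick MATLAB sketch of the duration effect described above (the tone amplitude in a normalized DFT is unchanged by duration, while the per-bin noise floor drops as N grows); the tone frequency and noise level here are arbitrary choices for illustration:

```matlab
% Two recordings of the same tone plus white noise, different durations.
fs = 5000; f0 = 440;                  % sampling rate and tone frequency (Hz)
for T = [10 20]                       % durations in seconds
    N = T*fs;                         % number of samples
    t = (0:N-1)/fs;
    x = sin(2*pi*f0*t) + 0.1*randn(1, N);
    X = abs(fft(x))/N;                % DFT normalized by N
    % The tone bin (~0.5 for a unit sine, since half the energy sits in
    % the negative-frequency bin) is the same for both durations, while
    % the per-bin noise floor is lower for the longer recording.
    fprintf('T = %2d s: tone bin amplitude = %.3f\n', T, max(X));
end
```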

How to generate band-limited random noise with flat spectrum?

How can I generate 500 ms worth of noise sampled at 1280 Hz, with a flat frequency distribution between 0.1 - 640 Hz and normally distributed amplitude values?
See the screenshot below for an illustration of the desired output.
[Figure: time plot of the waveform, its frequency spectrum, and a histogram of the amplitude values]
The parameters of your question make the answer trivial:
640 Hz is exactly half of 1280 Hz, so this is the highest frequency (Nyquist) in the Fourier decomposition;
0.1 Hz is way below 1 / 500ms = 2Hz, which is the frequency resolution of your Fourier decomposition, and therefore the lowest positive frequency you can control.
So in your case, the "band-limited" constraint is trivial, and you can simply generate the desired noise with:
duration = 500e-3;
rate = 1280;
amplitude = 500;
npoints = duration * rate;
noise = amplitude * randn( 1, npoints ); % normally distributed white noise
time = (0:npoints-1) / rate;
However, more generally, generating noise in a specific frequency band with constraints on both the spectrum shape (e.g. flat) and the value statistics (e.g. normally distributed) can be difficult. There are two simple approximations I can think of:
Working in the time-domain, first enforcing constraints on value statistics by drawing from the distribution of choice, and then filtering the resulting signal using a band-pass FIR filter for example. For this approximation, note that the filter will also affect the distribution of values, and so in general your constraints on value statistics will be poorly met unless you design the filter very carefully.
Working backwards from the Fourier domain, first enforcing constraints on the amplitude coefficients, taking random noise for the phase, and using an inverse transform to come back to the time-domain. For this approximation, note that the phase distribution will affect the temporal distribution of values in non-trivial ways, depending on the amplitude constraints, and that if your sampling rate is much larger than your frequency cutoff, you might need to enforce constraints on harmonic amplitudes as well in order to avoid artefacts.
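A minimal sketch of the second (Fourier-domain) approach for the numbers in the question; by the central limit theorem, summing many flat-amplitude components with random phases yields approximately normally distributed amplitude values:

```matlab
% Flat amplitude spectrum, random phase, inverse transform.
fs = 1280; N = 640;                   % 500 ms at 1280 Hz
A  = ones(1, N);                      % flat amplitude spectrum
A(1) = 0;                             % remove the DC component
phi = 2*pi*rand(1, N);                % uniformly random phases
X = A .* exp(1j*phi);
% Enforce conjugate symmetry so the time-domain result is real
X(N/2+2:end) = conj(X(N/2:-1:2));
X(N/2+1) = real(X(N/2+1));            % Nyquist bin must be real
x = real(ifft(X));                    % band-limited noise, flat spectrum
time = (0:N-1)/fs;
```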

Noise Comparison in MATLAB

Assuming I have two signals (raw data in an Excel file) measured from two different power supplies, I want to compare the noise levels of these signals to find out which one is noisier. Both power supplies produce signals with the same frequency, but the numbers of data points differ. Is there a way to do this in MATLAB?
You could calculate the signal-to-noise ratio for each signal. This is just the ratio of the average signal power to the average noise power, usually expressed in decibels. An ideal noiseless signal would have SNR = infinity.
Recall that instantaneous power is the square of the signal amplitude, so average power is the mean of the squared samples; to express a value x in decibels, take 10*log10(x).
SNR = 10*log10( mean(signal.^2)/mean(noise.^2) );
To separate the signal from the noise, you could run a low-pass filter over the noisy signal.
To get the noise you could just subtract the clean signal from the noisy signal.
noise = noisy_signal - signal;
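Putting those steps together, here is a sketch for one recording. `noisy_signal` stands for the data imported from the Excel file, and the sampling rate `fs` and cutoff `fc` are placeholder values you would set from your measurement; `butter` and `filtfilt` require the Signal Processing Toolbox:

```matlab
% Separate the "signal" from the noise with a low-pass filter, then
% compute the SNR. Assumes the signal content is low-frequency
% relative to the noise.
fs = 1000; fc = 50;                       % placeholder rate and cutoff (Hz)
[b, a] = butter(4, fc/(fs/2));            % 4th-order low-pass filter
signal = filtfilt(b, a, noisy_signal);    % zero-phase filtering (no delay)
noise  = noisy_signal - signal;           % residual = noise estimate
SNR_dB = 10*log10(mean(signal.^2)/mean(noise.^2));
```

Repeating this for both recordings and comparing the two SNR values tells you which supply is noisier, independent of the different record lengths.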

How to Make a Noise Signal After Estimating the Noise Variance

For blind source separation (adaptive filtering, LMS algorithm) I need two inputs: a) a noisy signal, b) a noise signal. But how can I make the noise signal? If I can estimate the noise variance of the noisy signal, how can I generate a noise signal from that variance in MATLAB? I am new to signal processing.
Try this (MATLAB):
noise = normrnd(mu, sqrt(variance), rows, columns);
It will generate random numbers from the normal distribution with mean mu and standard deviation sqrt(variance); rows and columns dictate the dimensions of the result matrix. (Avoid calling the mean variable "mean", which shadows MATLAB's built-in function.)
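If the Statistics Toolbox (which provides normrnd) is not available, plain randn gives the same result; mu, variance, rows and columns are placeholders for your estimated values:

```matlab
% Standard normal samples, scaled to the estimated variance and shifted
% to the estimated mean (equivalent to normrnd, no toolbox required).
mu = 0; variance = 0.25; rows = 1; columns = 1000;   % placeholder values
noise = mu + sqrt(variance) * randn(rows, columns);
```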

FFT plot in Matlab

I have a small input signal, a 60 Hz sine wave from a signal generator, which is corrupted by the 50 Hz mains supply frequency. I want to measure the amplitude of the 60 Hz signal using the FFT because it is too small to see on the oscilloscope.
The Matlab FFT code:
y = data;                          % measured samples
Fs = 2048;                         % sampling rate in Hz
L = numel(y);                      % number of samples (works for row or column vectors)
NFFT = 2^nextpow2(L);              % next power of 2 from length of y
Y = fft(y,NFFT)/L;                 % scale by the number of samples
f = Fs/2*linspace(0,1,NFFT/2+1);   % one-sided frequency axis
% Plot single-sided amplitude spectrum.
plot(f,2*abs(Y(1:NFFT/2+1)))
But the FFT plot doesn't give a sharp peak at 50 and 60Hz. The plot looks like this:
Consecutive points alternate between high and low amplitude, which gives a saw-tooth-like plot. Why is that? Is the amplitude of the 60 Hz component affected by this?
There are probably two effects at work:
Measuring a finite time window of a signal unavoidably leads to a phase discontinuity between the start and end points of the signal's components. The FFT of such a discontinuity (or of a rectangular window) produces high-frequency oscillations. These border effects can be damped with a window function, which smoothly tapers the signal toward the edges.
The DFT has a discrete frequency grid. If the measured signal does not fall exactly on one of these discrete frequencies, many more frequency components are needed to reconstruct the original signal.
Your 50 Hz signal may not be a pure, perfect sine wave. Any difference from a perfect sine wave (such as clipping or distortion) is equivalent to a modulation that will produce sidebands in the spectrum.
Windowing a signal whose period is not an exact sub-multiple of the FFT length will also convolve windowing artifacts with that signal.
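As a common remedy for the leakage described above, you can window the data before the FFT. This sketch reuses y, L, NFFT, Fs and f from the code in the question and builds a Hann window directly from its formula:

```matlab
w = 0.5 - 0.5*cos(2*pi*(0:L-1)'/(L-1));   % Hann window (no toolbox needed)
yw = y(:) .* w;                           % taper the data (assumes y is a vector)
Yw = fft(yw, NFFT) / sum(w);              % normalize by the window sum (coherent gain)
plot(f, 2*abs(Yw(1:NFFT/2+1)))            % single-sided, windowed spectrum
```

The window trades a slightly wider main lobe for much lower sidelobes, so the 50 Hz and 60 Hz peaks should stand out more cleanly from the alternating pattern.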