Generating random noise in MATLAB

When I add Gaussian noise to an array, shouldn't the histogram be Gaussian? Although the noise is random, the distribution should be Gaussian, right? That is not what I get.
A=zeros(10);
A=imnoise(A,'gaussian');
imhist(A)

Two things could be going on:
You don't have enough of a sample size, or
The default mean of imnoise's Gaussian noise is 0, and the output intensities are clipped to [0, 1], meaning you're only seeing the right half of the bell curve.
Try
imhist(imnoise(zeros(1000), 'gaussian', 0.5));

This is what your code is doing:
A = zeros(10);
mu = 0; sd = 0.1; %# mean, std dev
B = A + randn(size(A))*sd + mu; %# add gaussian noise
B = max(0,min(B,1)); %# make sure that 0 <= B <= 1
imhist(B) %# intensities histogram
Can you see where the problem is? (Hint: RANDN returns numbers ~N(0,1), thus the resulting added noise is ~N(mu,sd).)
Perhaps what you are trying to do is:
hist( randn(1000,1) )

imnoise() is a function that can be applied to images, not plain arrays.
Maybe you can look into the randn() function, instead.

You might not see a bell-curve with a sampling frame of only 10.
See the central limit theorem.
http://en.wikipedia.org/wiki/Central_limit_theorem
I would try increasing the sampling frame to something much larger.
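For example (a quick illustration, not part of the original answer), compare the histogram of a small sample with that of a much larger one:
hist(randn(10,1))           % 10 samples: hardly recognisable as a bell curve
figure
hist(randn(100000,1), 50)   % 100000 samples: a clear bell curve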
Reference:
Law of Large Numbers
http://en.wikipedia.org/wiki/Law_of_large_numbers

Related

Find zero crossing points in a signal and plot them (MATLAB)

I have a voice signal 's', of which you can see an extract here:
I would like to plot the zero crossing points in the same graph. I have tried with the following code:
zci = @(v) find(v(:).*circshift(v(:), [-1 0]) <= 0); % Returns Zero-Crossing Indices Of Argument Vector
zx = zci(s);
figure
set(gcf,'color','w')
plot(t,s)
hold on
plot(t(zx),s(zx),'o')
But it does not interpolate the points where the sign changes, so the result is:
However, I'd like the highlighted points to be as close to zero as possible.
I hope someone can help me. Thank you in advance for your responses.
Try this?
w = 1;
crossPts = [];
for k = 1:(length(s)-1)
    if (s(k)*s(k+1) < 0)               % sign change between consecutive samples
        crossPts(w) = (t(k)+t(k+1))/2; % place the crossing halfway between them
        w = w + 1;
    end
end
figure
set(gcf,'color','w')
plot(t,s)
hold on
plot(crossPts, zeros(size(crossPts)), 'o')
Important questions: what is the highest frequency component of the signal you are measuring? Can you re-measure this signal? What is your sampling rate? What is this analysis for (schoolwork or scholarly research)? You may have quite a bit of trouble measuring the zeros of this function with any significance or accuracy, because it looks like your waveform has a frequency greater than half of your sampling rate (greater than your Nyquist frequency). Upsampling/interpolating your entire waveform will let you find the zeros much more precisely (but with no greater degree of accuracy), but this is a huge no-no in the scientific community. While my method may not look super pretty, it's the most accurate method that doesn't make unsafe assumptions. If you just want it to look pretty, I would recommend interp1 with the 'spline' method. You can interpolate the whole waveform and then use the above answer to find more accurate zeros.
Also, you could calculate the zeros on the interpolated waveform and then display it on the raw data.
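A rough sketch of that idea (the oversampling factor and variable names here are illustrative, not taken from the question):
tf = linspace(t(1), t(end), 10*numel(t));  % finer time axis, 10x oversampling (arbitrary)
sf = interp1(t, s, tf, 'spline');          % spline-interpolated waveform
zxf = find(sf(1:end-1).*sf(2:end) <= 0);   % sign changes on the fine grid
figure; plot(t, s); hold on                % show the crossings on top of the raw data
plot(tf(zxf), sf(zxf), 'o')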
A remotely possible way to improve your data:
If you're measuring a human voice, why not try filtering at the range of human speech? This should be fine mathematically and could possibly improve your waveform.

MATLAB: create an array of random samples from a Gaussian distribution

I'd like to make an array of random samples from a Gaussian distribution.
Mean value is 0 and variance is 1.
If I take enough samples, I would think the maximum value of a sample would be 0 + 1 = 1.
However, I find that I get values like 4.2891 ...
My code:
x = 0+sqrt(1)*randn(100000,1);
mean(x)
var(x)
max(x)
This gives me a mean close to 0 and a variance of 0.9937, but my maximum value is 4.2891?
Can anyone help me out why it does this?
As others have mentioned, there is no bound on the possible values that x can take on in a Gaussian distribution. However, the farther x is from the mean, the less likely it is to be drawn.
To give you some intuition for what the variance actually means (for any distribution, not just the Gaussian case), you can look at the 68-95-99.7 rule. The rule says:
about 68% of the population will be within one sigma of the mean
about 95% of the population will be within two sigmas of the mean
about 99.7% of the population will be within three sigmas of the mean
Here sigma = sqrt(var) is the standard deviation of the distribution.
So while in theory it is possible to draw any x from a gaussian distribution, in practice it is unlikely to draw anything past 5 or 6 standard deviations away for a population of 100000.
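A quick empirical check of that rule (a sketch, not part of the original answer):
x = randn(100000,1);   % standard normal samples, mu = 0, sigma = 1
mean(abs(x) < 1)       % fraction within one sigma,    ~0.68
mean(abs(x) < 2)       % fraction within two sigmas,   ~0.95
mean(abs(x) < 3)       % fraction within three sigmas, ~0.997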
This will yield N random numbers drawn from the Gaussian (normal) distribution.
N = 100;
mu = 0;
sigma = 1;
Xs = normrnd(mu, sigma, N, 1); % column vector of N samples
EDIT:
I just realized that your code is in fact equivalent to what I've written.
As others already pointed out: variance is not the maximum distance a sample can deviate from the mean! (It is just the average of the squares of those distances)
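To see the difference numerically (an illustrative check, not from the original answer):
x = randn(100000,1);
var(x)        % close to 1: the average squared deviation from the mean
max(abs(x))   % typically around 4 to 5: far larger than sqrt(var) = 1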

Manipulate data to better fit a Gaussian distribution

I have a question concerning the normal distribution (with mu = 0 and sigma = 1).
Let's say that I first call randn or normrnd this way:
x = normrnd(0,1,[4096,1]); % x = randn(4096,1)
Now, to assess how well the x values fit the normal distribution, I call
[a,b] = normfit(x);
and to have a graphical support
histfit(x)
Now to the core of the question: if I am not satisfied with how x fits the given normal distribution, how can I adjust x so that it better fits the expected normal distribution with mean 0 and standard deviation 1? Sometimes, because of the small number of values (4096 in this case), x fits the expected Gaussian rather poorly, so I want to manipulate x (linearly or not, it does not really matter at this stage) to get a better fit.
I'd like to point out that I have access to the Statistics Toolbox.
EDIT
I made the example with normrnd and randn because my data are supposed and expected to have a normal distribution. But, within the question, those functions only serve to illustrate my concern.
Would it be possible to apply a least-squares fit?
Generally the distribution I get is similar to the following:
Maybe you can try to normalize your input data to have mean = 0 and sigma = 1, like this:
y=(x-mean(x))/std(x);
If you are searching for a nonlinear transformation that would make your distribution look normal, you can first estimate the cumulative distribution, then take the function composition with the inverse of standard normal CDF. This way you can transform almost any distribution to a normal through invertible transformation. Take a look at the example code below.
x = randn(1000, 1) + 4 * (rand(1000, 1) < 0.5); % some funky bimodal distribution
xr = linspace(-5, 9, 2000);
cdf = cumsum(ksdensity(x, xr, 'width', 0.5)); cdf = cdf / cdf(end); % you may want to use a better smoother
c = interp1(xr, cdf, x); % function composition step 1
y = norminv(c); % function composition step 2
% take a look at the result
figure;
subplot(2,1,1); hist(x, 100);
subplot(2,1,2); hist(y, 100);

Generate white noise with amplitude between [-1 1] with Matlab

I'm using the MATLAB function Y = WGN(M,N,P) to generate white noise with a Gaussian distribution. This function uses a power value (in dB Watts) to calculate the amplitude of the output signal. Since I want an output amplitude range of -1 V to 1 V, there is a 'linear' mode for the function.
I'm trying to use the 'linear' mode to produce the output, but the result is an amplitude range of about [-4 4]:
RandomSignal = wgn(10000,1,1,1,'linear');
Time = linspace(0,10,10000);
figure()
plot(Time,RandomSignal)
figure()
hist(RandomSignal,100)
Is there another function to produce this result, or am I just doing something wrong?
As others have said, you can't limit a Gaussian distribution.
What you can do is define your range to be 6 standard deviations, and then generate your signal as mu + sigma*randn(m,1).
For example, if you want a range of [-1 1] you would choose sigma = 2/6 ≈ 0.333 and mu = 0. This gives a 99.7% chance that a sample falls inside the range. You can then clip the numbers that fall outside the range to the boundaries.
This will not be a pure Gaussian distribution, but this is the closest you can get.
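A minimal sketch of that approach (the sample count is arbitrary):
mu = 0; sigma = 1/3;             % [-1 1] spans about 6 standard deviations
x = mu + sigma*randn(10000,1);   % ~99.7% of the samples already fall inside [-1 1]
x = max(-1, min(1, x));          % clip the rare outliers to the boundaries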
Why not just take the output of randn, whatever its range, and normalize it by its maximum absolute value, e.g.:
noise = randn(400); noise = noise./max(max(abs(noise)));
so whatever randn outputs, you end up with white noise inside [-1 1].
Gaussian noise has an unbounded range. (The support of the Gaussian pdf is infinite.)
You can use rand rather than a Gaussian generator. The output range of rand is [0, 1], so to map it to the range [-1, 1] you use rand(args)*2 - 1.
It should be noted that this generator is sampling a uniform density.
I don't want to say something very wrong, but when I copied your code and changed
RandomSignal = .25*wgn(10000,1,1,1,'linear');
it was then OK. Hope it works for you. (Assuming random data divided by 4 is still random data.)
Hello, I know it is quite late to answer this question, but I think it can be useful for other research.
The function shown below produces Gaussian random noise between [-1, 1] by rejection sampling: any draw that falls outside the interval is discarded and redrawn.
function nois = randn_fun_Q(S_Q)
% S_Q represents the number of rows of the noise vector. For example, in
% system identification, it is the number of rows of the covariance matrix.
i = 0;
nois = [];
for j = 1:S_Q
    while i ~= 1
        nois_n = randn;                 % draw a standard normal sample
        if nois_n < 1 && nois_n > -1    % accept only samples inside (-1, 1)
            nois_ = nois_n;
            i = 1;
        else
            continue                    % reject and draw again
        end
    end
    nois = [nois; nois_];               % append the accepted sample
    i = 0;
end
end
You can call this function from your main code to produce the desired number of noisy points in the range [-1, 1].
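For example (a hypothetical call, just to show the usage):
nois = randn_fun_Q(100);   % 100-by-1 vector of Gaussian samples restricted to (-1, 1)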

DSP - Filtering in the frequency domain via FFT

I've been playing around a little with the Exocortex implementation of the FFT, but I'm having some problems.
Whenever I modify the amplitudes of the frequency bins before calling the iFFT the resulting signal contains some clicks and pops, especially when low frequencies are present in the signal (like drums or basses). However, this does not happen if I attenuate all the bins by the same factor.
Let me put an example of the output buffer of a 4-sample FFT:
// Bin 0 (DC)
FFTOut[0] = 0.0000610351563
FFTOut[1] = 0.0
// Bin 1
FFTOut[2] = 0.000331878662
FFTOut[3] = 0.000629425049
// Bin 2
FFTOut[4] = -0.0000381469727
FFTOut[5] = 0.0
// Bin 3, this is the first and only negative frequency bin.
FFTOut[6] = 0.000331878662
FFTOut[7] = -0.000629425049
The output is composed of pairs of floats, each representing the real and imaginary parts of a single bin. So, bin 0 (array indexes 0, 1) would represent the real and imaginary parts of the DC frequency. As you can see, bins 1 and 3 both have the same values (except for the sign of the Im part), so I guess bin 3 is the first negative frequency, and finally indexes (4, 5) would be the last positive frequency bin.
Then to attenuate the frequency bin 1 this is what I do:
// Attenuate the 'positive' bin
FFTOut[2] *= 0.5;
FFTOut[3] *= 0.5;
// Attenuate its corresponding negative bin.
FFTOut[6] *= 0.5;
FFTOut[7] *= 0.5;
For the actual tests I'm using a 1024-length FFT and I always provide all the samples so no 0-padding is needed.
// Attenuate
var halfSize = fftWindowLength / 2;
float leftFreq = 0f;
float rightFreq = 22050f;
for( var c = 1; c < halfSize; c++ )
{
var freq = c * (44100d / halfSize);
// Calc. positive and negative frequency indexes.
var k = c * 2;
var nk = (fftWindowLength - c) * 2;
// This kind of attenuation corresponds to a high-pass filter.
// The attenuation at the transition band is linearly applied, could
// this be the cause of the distortion of low frequencies?
var attn = (freq < leftFreq) ?
0 :
(freq < rightFreq) ?
((freq - leftFreq) / (rightFreq - leftFreq)) :
1;
// Attenuate positive and negative bins.
mFFTOut[ k ] *= (float)attn;
mFFTOut[ k + 1 ] *= (float)attn;
mFFTOut[ nk ] *= (float)attn;
mFFTOut[ nk + 1 ] *= (float)attn;
}
Obviously I'm doing something wrong but can't figure out what.
I don't want to use the FFT output as a means to generate a set of FIR coefficients since I'm trying to implement a very basic dynamic equalizer.
What's the correct way to filter in the frequency domain? What am I missing?
Also, is it really necessary to attenuate the negative frequencies as well? I've seen an FFT implementation where the negative frequency values are zeroed before synthesis.
Thanks in advance.
There are two issues: the way you use the FFT, and the particular filter.
Filtering is traditionally implemented as convolution in the time domain. You're right that multiplying the spectra of the input and filter signals is equivalent. However, when you use the Discrete Fourier Transform (DFT) (implemented with a Fast Fourier Transform algorithm for speed), you actually calculate a sampled version of the true spectrum. This has lots of implications, but the one most relevant to filtering is the implication that the time domain signal is periodic.
Here's an example. Consider a sinusoidal input signal x with 1.5 cycles in the period, and a simple low pass filter h. In Matlab/Octave syntax:
N = 1024;
n = (1:N)'-1; %# define the time index
x = sin(2*pi*1.5*n/N); %# input with 1.5 cycles per 1024 points
h = hanning(129) .* sinc(0.25*(-64:1:64)'); %# windowed sinc LPF, Fc = pi/4
h = h./sum(h); %# normalize DC gain
y = ifft(fft(x) .* fft(h,N)); %# inverse FT of product of sampled spectra
y = real(y); %# due to numerical error, y has a tiny imaginary part
%# Depending on your FT/IFT implementation, might have to scale by N or 1/N here
plot(y);
And here's the graph:
The glitch at the beginning of the block is not what we expect at all. But if you consider fft(x), it makes sense. The Discrete Fourier Transform assumes the signal is periodic within the transform block. As far as the DFT knows, we asked for the transform of one period of this:
This leads to the first important consideration when filtering with DFTs: you are actually implementing circular convolution, not linear convolution. So the "glitch" in the first graph is not really a glitch when you consider the math. So then the question becomes: is there a way to work around the periodicity? The answer is yes: use overlap-save processing. Essentially, you calculate N-long products as above, but only keep N/2 points.
Nproc = 512;
xproc = zeros(2*Nproc,1); %# initialize temp buffer
idx = 1:Nproc; %# initialize half-buffer index
ycorrect = zeros(2*Nproc,1); %# initialize destination
for ctr = 1:(length(x)/Nproc) %# iterate over x 512 points at a time
xproc(1:Nproc) = xproc((Nproc+1):end); %# shift 2nd half of last iteration to 1st half of this iteration
xproc((Nproc+1):end) = x(idx); %# fill 2nd half of this iteration with new data
yproc = ifft(fft(xproc) .* fft(h,2*Nproc)); %# calculate new buffer
ycorrect(idx) = real(yproc((Nproc+1):end)); %# keep 2nd half of new buffer
idx = idx + Nproc; %# step half-buffer index
end
And here's the graph of ycorrect:
This picture makes sense - we expect a startup transient from the filter, then the result settles into the steady state sinusoidal response. Note that now x can be arbitrarily long. The limitation is Nproc > 2*min(length(x),length(h)).
Now onto the second issue: the particular filter. In your loop, you create a filter whose spectrum is essentially H = [0 (1:511)/512 1 (511:-1:1)/512]'; If you do hraw = real(ifft(H)); plot(hraw), you get:
It's hard to see, but there are a bunch of non-zero points at the far left edge of the graph, and then a bunch more at the far right edge. Using Octave's built-in freqz function to look at the frequency response we see (by doing freqz(hraw)):
The magnitude response has a lot of ripples from the high-pass envelope down to zero. Again, the periodicity inherent in the DFT is at work. As far as the DFT is concerned, hraw repeats over and over again. But if you take one period of hraw, as freqz does, its spectrum is quite different from the periodic version's.
So let's define a new signal: hrot = [hraw(513:end) ; hraw(1:512)]; We simply rotate the raw DFT output to make it continuous within the block. Now let's look at the frequency response using freqz(hrot):
Much better. The desired envelope is there, without all the ripples. Of course, the implementation isn't so simple now, you have to do a full complex multiply by fft(hrot) rather than just scaling each complex bin, but at least you'll get the right answer.
Note that for speed, you'd usually pre-calculate the DFT of the padded h; I left it alone in the loop to compare more easily with the original.
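For concreteness, here is a sketch of that correction in MATLAB/Octave syntax, using the H defined above (x is an N-sample input block; the variable names are mine, not from the original code):
N = 1024;
H = [0 (1:511)/512 1 (511:-1:1)/512]'; %# the linear high-pass envelope built in the loop
hraw = real(ifft(H));                  %# raw impulse response, split across the block edges
hrot = [hraw(513:end); hraw(1:512)];   %# rotate so the response is contiguous within the block
Hrot = fft(hrot, N);                   %# pre-compute the filter spectrum once
y = real(ifft(fft(x, N) .* Hrot));     %# full complex multiply per block, back to the time domain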
Your primary issue is that frequencies aren't well defined over short time intervals. This is particularly true for low frequencies, which is why you notice the problem most there.
Therefore, when you take really short segments out of the sound train and filter them, the filtered segments won't join into a continuous waveform; you hear the jumps between segments, and this is what generates the clicks you hear.
For example, taking some reasonable numbers: I start with a waveform at 27.5 Hz (A0 on a piano), digitized at 44100 Hz, it will look like this (where the red part is 1024 samples long):
So first we'll start with a low pass at 40 Hz. Since the original frequency is less than 40 Hz, a low-pass filter with a 40 Hz cut-off shouldn't really have any effect, and we should get an output that almost exactly matches the input. Right? Wrong, wrong, wrong; and this is basically the core of your problem. The problem is that for the short sections the idea of 27.5 Hz isn't clearly defined, and can't be represented well in the DFT.
That 27.5 Hz isn't particularly meaningful in the short segment can be seen by looking at the DFT in the figure below. Note that although the longer segment's DFT (black dots) shows a peak at 27.5 Hz, the short one (red dots) doesn't.
Clearly, then filtering below 40Hz, will just capture the DC offset, and the result of the 40Hz low-pass filter is shown in green below.
The blue curve (taken with a 200 Hz cut-off) is starting to match up much better. But note that it's not the low frequencies that are making it match up well, but the inclusion of high frequencies. It's not until we include every frequency possible in the short segment, up to 22 kHz, that we finally get a good representation of the original sine wave.
The reason for all of this is that a small segment of a 27.5 Hz sine wave is not a 27.5 Hz sine wave, and its DFT doesn't have much to do with 27.5 Hz.
Are you attenuating the value of the DC frequency sample to zero? It appears that you are not attenuating it at all in your example. Since you are implementing a high pass filter, you need to set the DC value to zero as well.
This would explain low frequency distortion. You would have a lot of ripple in the frequency response at low frequencies if that DC value is non-zero because of the large transition.
Here is an example in MATLAB/Octave to demonstrate what might be happening:
N = 32;
os = 4;
Fs = 1000;
X = [ones(1,4) linspace(1,0,8) zeros(1,3) 1 zeros(1,4) linspace(0,1,8) ones(1,4)];
x = ifftshift(ifft(X));
Xos = fft(x, N*os);
f1 = linspace(-Fs/2, Fs/2-Fs/N, N);
f2 = linspace(-Fs/2, Fs/2-Fs/(N*os), N*os);
hold off;
plot(f2, abs(Xos), '-o');
hold on;
grid on;
plot(f1, abs(X), '-ro');
hold off;
xlabel('Frequency (Hz)');
ylabel('Magnitude');
Notice that in my code, I am creating an example of the DC value being non-zero, followed by an abrupt change to zero, and then a ramp up. I then take the IFFT to transform into the time domain. Then I perform a zero-padded fft (which is done automatically by MATLAB when you pass in an fft size bigger than the input signal) on that time-domain signal. The zero-padding in the time-domain results in interpolation in the frequency domain. Using this, we can see how the filter will respond between filter samples.
One of the most important things to remember is that even though you are setting filter response values at given frequencies by attenuating the outputs of the DFT, this guarantees nothing for frequencies occurring between sample points. This means the more abrupt your changes, the more overshoot and oscillation between samples will occur.
Now to answer your question on how this filtering should be done. There are a number of ways, but one of the easiest to implement and understand is the window design method. The problem with your current design is that the transition width is huge. Most of the time, you will want transitions that are as quick as possible, with as little ripple as possible.
In the next code, I will create an ideal filter and display the response:
N = 32;
os = 4;
Fs = 1000;
X = [ones(1,8) zeros(1,16) ones(1,8)];
x = ifftshift(ifft(X));
Xos = fft(x, N*os);
f1 = linspace(-Fs/2, Fs/2-Fs/N, N);
f2 = linspace(-Fs/2, Fs/2-Fs/(N*os), N*os);
hold off;
plot(f2, abs(Xos), '-o');
hold on;
grid on;
plot(f1, abs(X), '-ro');
hold off;
xlabel('Frequency (Hz)');
ylabel('Magnitude');
Notice that there is a lot of oscillation caused by the abrupt changes.
The FFT, or Discrete Fourier Transform, is a sampled version of the Fourier Transform. The Fourier Transform is applied to a signal over the continuous range -infinity to infinity, while the DFT is applied over a finite number of samples. This in effect results in a square windowing (truncation) in the time domain when using the DFT, since we are only dealing with a finite number of samples. Unfortunately, the transform of a square window is a sinc-type function (sin(x)/x).
The problem with having sharp transitions in your filter (a quick jump from 0 to 1 in one sample) is that it has a very long response in the time domain, which is being truncated by a square window. So to help minimize this problem, we can multiply the time-domain signal by a more gradual window. If we apply a Hanning window by adding the line:
x = x .* hanning(N).';
after taking the IFFT, we get this response:
So I would recommend trying to implement the window design method since it is fairly simple (there are better ways, but they get more complicated). Since you are implementing an equalizer, I assume you want to be able to change the attenuations on the fly, so I would suggest calculating and storing the filter in the frequency domain whenever there is a change in parameters, and then you can just apply it to each input audio buffer by taking the fft of the input buffer, multiplying by your frequency domain filter samples, and then performing the ifft to get back to the time domain. This will be a lot more efficient than all of the branching you are doing for each sample.
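A minimal sketch of that workflow in MATLAB/Octave (the buffer size, window choice, and desired response below are assumptions for illustration; a real equalizer would also need the overlap handling shown in the first answer):
N = 1024;                      % FFT/buffer size (assumed)
D = ones(N,1); D(1) = 0;       % desired per-bin magnitude, e.g. zero the DC bin for a high-pass
h = real(ifftshift(ifft(D)));  % ideal impulse response, centered within the block
h = h .* hanning(N);           % taper with a window to reduce ripple
Hf = fft(h, 2*N);              % store the zero-padded frequency-domain filter
% per audio buffer: yBuf = real(ifft(fft(xBuf, 2*N) .* Hf));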
First, about the normalization: that is a known (non-)issue. The DFT/IDFT would require a factor of 1/sqrt(N) (apart from the standard cosine/sine factors) in each one (direct and inverse) to make them symmetric and truly invertible. Another possibility is to divide one of them (the direct or the inverse) by N; this is a matter of convenience and taste. Often the FFT routines do not perform this normalization; the user is expected to be aware of it and normalize as he prefers.
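(For reference, MATLAB/Octave put the whole 1/N factor into ifft, so an fft/ifft round trip needs no extra scaling; a quick check:)
x = randn(16,1);
max(abs(ifft(fft(x)) - x))   % on the order of 1e-16, i.e. no user normalization needed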
Second: in a (say) 16-point DFT, what you call bin 0 would correspond to the zero frequency (DC), bin 1 to a low frequency, ..., bin 4 to a medium frequency, bin 8 to the highest frequency, and bins 9...15 to the "negative frequencies". In your example, then, bin 1 is actually both the low frequency and the medium frequency. Apart from this consideration, there is nothing conceptually wrong in your "equalization". I don't understand what you mean by "the signal gets distorted at low frequencies". How do you observe that?