I need to create a diagonal matrix containing the Fourier coefficients of the Gaussian wavelet function, but I'm unsure of what to do.
Currently I'm using this function to generate the Haar Wavelet matrix
http://www.mathworks.co.uk/matlabcentral/fileexchange/33625-haar-wavelet-transformation-matrix-implementation/content/ConstructHaarWaveletTransformationMatrix.m
and taking the rows at dyadic scales (2,4,8,16) as the transform:
M = 256;
H = ConstructHaarWaveletTransformationMatrix(M);
fi = conj(dftmtx(M))/M;   % inverse DFT matrix
H = fi*H;
H = H(4,:);               % row at one of the dyadic scales
H = diag(H);              % diagonal matrix of its Fourier coefficients
% ... and similarly for the other scales
How do I repeat this for Gaussian wavelets? Is there a built in Matlab function which will do this for me?
For reference I'm implementing the algorithm in section 4 of this paper:
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04218361
I may not be answering the question directly, but I will try to help you move forward.
As far as I know, the MATLAB Wavelet Toolbox only deals with wavelet operations and coefficients (increasing or decreasing resolution levels, and similar operations), but it does not expose the internal matrices that carry out the transformations between signals and coefficients.
So I fear the answer to this question is no. Some time ago I did this for some of the Haar-class wavelets: I actually built the matrices from scratch and then compared the coefficients they produced with those from the built-in Wavelet Toolbox, to make sure the matrices were good enough for my algorithm (in my case, recursive parameter estimation for time-varying models).
For the function ConstructHaarWaveletTransformationMatrix it is really simple to create the matrix, because the Haar class can be expressed very simply with Kronecker products.
The Gaussian wavelet case, I am afraid, has to be built from scratch too.
The steps I would suggest are:
Although MATLAB does not expose the matrices explicitly, you can use the built-in functions to recover the Gaussian wavelet shapes and compose the matrix for your algorithm.
Build every column of the matrix from a Gaussian wavelet, for every resolution level you require (the dyadic scales). Use the Wavelet Toolbox to recover the shapes (see the sketch after this list).
After this, compare the coefficients you obtain with the coefficients from the toolbox; this way you can correct the ordering of the matrix rows.
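As a rough sketch of that shape-recovery step (not the paper's exact construction): wavefun can return samples of the Gaussian-derivative wavelets ('gaus1' ... 'gaus8'), and you can mimic your Haar code by taking the DFT of each scaled and shifted wavelet and placing it on a diagonal. The choice of 'gaus2', the centring and the scale values below are placeholders you would need to adapt to the paper's conventions:
M = 256;
[psi,xval] = wavefun('gaus2',10);     % sampled Gaussian (2nd-derivative) wavelet
fi = conj(dftmtx(M))/M;               % same inverse-DFT matrix as in your Haar code
scales = [2 4 8 16];                  % dyadic scales (placeholder values)
D = cell(numel(scales),1);
for k = 1:numel(scales)
    a = scales(k);
    t = ((0:M-1) - M/2)/a;                        % centred, scaled time axis
    w = interp1(xval,psi,t,'linear',0)/sqrt(a);   % psi((t-b)/a)/sqrt(a), with b = M/2
    Wf = fi*w(:);                                 % Fourier coefficients of this wavelet
    D{k} = diag(Wf);                              % diagonal matrix, as in your H = diag(H)
end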
Numerically, let fj be the projection of the signal onto Vj (the space of the PHI signals, i.e. scaling functions) at resolution level j, and gj the projection onto Wj (the space of the PSI signals, i.e. mother wavelets) at resolution level j. Then we can write:
f = fj0 + sum_{j=j0}^{j1-1} gj
Hence, both fj0 and the gj induce matrices; let us call them the PHIj and PSIj matrices:
f = PHIj0*cj0 + sum_{j=j0}^{j1-1} PSIj*dj
The columns of PHIj0 contain the scaled and shifted scaling function (one level, j0 only) for the approximation projection (onto Vj0), and the columns of PSIj contain the scaled and shifted mother wavelets (several levels, from j0 to j1-1) for the detail projections (onto Wj0 through Wj1-1).
Hence, the matrix you need is:
PHI = [PHIj0 PSIj0 ... PSIj1-1]
Thus you can express your original signal as:
f = PHI*C
where C is the vector of approximation and detail coefficients for those levels:
C = [cj0' dj0' ... dj1-1']'
The PHI build can be achieved with something like:
function PHI=MakePhi(l,str,Jmin,Jmax)
% [PHI]=MakePhi(l,str,Jmin,Jmax)
%
% Build full PHI Wavelet Matrix for obtaining wavelet coefficients
% (extract)
%FILTER
[LO_R,HI_R] = wfilters(str,'r');
lf=length(LO_R);
%PHI BUILD
PHI=[];
laux=l([end-Jmax end-Jmax:end]);
PHI=[PHI MakeWMatrix('a',str,laux)];
for j=Jmax:-1:Jmin
    laux=l([end-j end-j:end]);
    PHI=[PHI MakeWMatrix('d',str,laux)];
end
wfilters is a MATLAB built-in function; it returns the reconstruction filters needed to build the approximation and detail wavelet signals.
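For example (the 'db2' choice here is just for illustration):
[LO_R,HI_R] = wfilters('db2','r');   % reconstruction low-pass and high-pass filters
% length(LO_R) == length(HI_R) == 4 for db2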
The MakeWMatrix function is:
function M=MakeWMatrix(typestr,str,laux)
% M=MakeWMatrix(typestr,str,laux)
%
% Build Wavelet Matrix for obtaining wavelet coefficients
% for a single level vector.
% (extract)
[LO_R,HI_R] = wfilters(str,'r');
if typestr=='a'
    F_R=LO_R';
else
    F_R=HI_R';
end
la=length(laux);
lin=laux(2); lout=laux(3);
M=MakeCMatrix(F_R,lin,lout);
for i=3:la-1
    lin=laux(i); lout=laux(i+1);
    Mi=MakeCMatrix(LO_R',lin,lout);
    M=Mi*M;
end
and finally MakeCMatrix is:
function [M]=MakeCMatrix(F_R,lin,lout)
% Convolution Matrix
% (extract)
lf=length(F_R);
M=[];
for i=1:lin
    M(:,i)=[zeros(2*(i-1),1) ;F_R ;zeros(2*(lin-i),1)];
end
M=[zeros(1,lin); M ;zeros(1,lin)];
[ltot,lin]=size(M);
lmin=floor((ltot-lout)/2)+1;
lmax=floor((ltot-lout)/2)+lout;
M=M(lmin:lmax,:);
Ideally, this last matrix should also include some interpolation routine to get better results in the general case.
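As a quick, untested usage sketch (the 'db2' wavelet, the signal length and the levels are just placeholders, and it assumes the coefficient ordering matches wavedec's [cA_n; cD_n; ...; cD_1] convention):
f = randn(256,1);             % any test signal
[C,l] = wavedec(f,4,'db2');   % l is the bookkeeping vector that MakePhi expects
PHI = MakePhi(l,'db2',1,4);   % levels Jmin=1 up to Jmax=4
f_rec = PHI*C(:);             % should approximate f away from the borders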
I hope this solves part of your problem.
Hyp
I need help implementing MATLAB code to compute the frequency of the Fourier coefficients for 2D data. I first applied MATLAB's fft2 to the data, followed by fftshift; all I need to do now is compute f = sqrt(fx*fx + fy*fy), where fx is the frequency along the columns and fy is the frequency along the rows.
data = data - mean(data(:));
dft = fft2(data,1024,1024);
dftshift = fftshift(dft);
wn = 2*pi/1024;           % frequency spacing (radians per sample)
cx = floor(1024/2)+1;     % index of the zero-frequency bin after fftshift
cy = floor(1024/2)+1;     % (cy was missing in the original snippet)
freq = zeros(1024,1024);  % store the radial frequency of each coefficient
for I = 1:1024
    for J = 1:1024
        freqx = (I-cx)*wn;
        freqy = (J-cy)*wn;
        freq(I,J) = sqrt(freqx*freqx + freqy*freqy);
    end
end
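A vectorized sketch of the same frequency grid (it reproduces the loop above for the assumed 1024x1024 size):
wn = 2*pi/1024;
k  = ((1:1024) - (floor(1024/2)+1))*wn;   % centred frequency axis after fftshift
[FX,FY] = meshgrid(k,k);                  % one frequency per column / per row
freq = sqrt(FX.^2 + FY.^2);               % radial frequency of each coefficient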
I am not doing signal processing, but in my field I need to use the spectral density of a matrix of data, and I get quite confused at the detailed level.
%matrix H is given.
corr=xcorr2(H); %get the correlation
spec=fft2(corr); % Wiener-Khinchin Theorem
In MATLAB, xcorr2 will calculate the correlation function of this matrix. The lags range from -N+1 to N-1, so if the size of H is N by N, then the size of corr is 2N-1 by 2N-1. For discretized data, should I use corr or only half of corr?
Another problem: I think the Wiener-Khinchin theorem is basically stated for continuous functions. I have always thought of the discrete FT as an approximation to (or a tool for calculating) the continuous FT, and if you use the MATLAB built-in function fft you have to account for the sample spacing \delta x in the final result.
Is there any kind soul who knows this area well and can share some MATLAB code with me?
Basically, approximating a continuous FT by a Discretized FT is the same as approximating an integral by a finite sum.
We will first discuss the 1D case, then we'll discuss the 2D case.
Let's look at the Wiener-Khinchin theorem (for example here).
It states that:
"For the discrete-time case, the power spectral density of the function with discrete values x[n] is
S(f) = sum_{n=-inf}^{+inf} r_xx[n]*exp(-j*2*pi*f*n),
where
r_xx[n] = E[ x[m]*conj(x[m-n]) ]
is the autocorrelation function of x[n]."
1) You can see already that the sum is taken from -infty to +infty in the calculation of S(f).
2) Now consider the MATLAB fft. You can see (command 'edit fft' in MATLAB) that it is defined as:
X(k) = sum_{n=1}^N x(n)*exp(-j*2*pi*(k-1)*(n-1)/N), 1 <= k <= N.
which is exactly the kind of sum you need to evaluate in order to calculate the power spectral density at a frequency f.
Note that for continuous functions S(f) will be a continuous function, while for a discretized function S(f) will be discrete.
Now that we know all that, it can easily be extended to the 2D case. Indeed, the structure of fft2 matches the structure of the right-hand side of the Wiener-Khinchin theorem for the 2D case.
However, it will be necessary to divide your result by N*M, where N is the number of sample points in x and M is the number of sample points in y.
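As a rough MATLAB sketch of that statement for the 2D case (this is my own illustration, assuming H is the N-by-M data matrix):
[N,M] = size(H);
corrH = xcorr2(H);                     % sample autocorrelation, lags -(N-1)..N-1 and -(M-1)..M-1
spec1 = abs(fft2(corrH));              % Wiener-Khinchin route; abs removes the linear
                                       % phase caused by the lag offset of xcorr2
spec2 = abs(fft2(H,2*N-1,2*M-1)).^2;   % zero-padded periodogram, same size
% max(abs(spec1(:)-spec2(:))) should be at round-off level;
% divide by N*M to get the scaling discussed above.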
For my homework, I am required to show that a method can generate a Gaussian distribution. The MATLAB program is shown below:
n=100;
b=25;
len=200000;
X=rand(n,len);
x=sum(X-0.5)*b/n;
[ps2,t2]=hist(x,50);
ps2=ps2/len;
bar(t2,ps2,'y');
hold on;
sigma_2=b^2/(12*n);
R=normrnd(0,sqrt(sigma_2),1,len);
[ps2,t2]=hist(R,50);
ps2=ps2/len;
plot(t2,ps2,'bo-','linewidth',1.5);
x is the sum of n uniformly distributed variables multiplied by b/n, and x is approximately Gaussian with zero mean and sigma^2 = b^2/(12n) (each uniform term has variance 1/12, so Var(x) = (b/n)^2 * n/12 = b^2/(12n)).
Then I got an image where the two distributions matched.
However, when I substituted t2 into the normal density function f(x) = exp(-x.^2/(2*sigma_2))/sqrt(2*pi*sigma_2), the output was quite a bit larger than the first one, although the shape was similar.
I wonder why this occurs.
It's because you did not normalize the discrete histograms. For a continuous distribution the integral of the probability density function is one, so to fix this you should divide the histogram by its integral. An approximate integral of a discrete function is the rectangular rule:
integral (f) = sum(f)* LengthStep
so you should change your code this way :
n=100;
b=25;
len=200000;
X=rand(n,len);
x=sum(X-0.5)*b/n;
[ps2,t2]=hist(x,50);
ps2=ps2/(sum(ps2)*(t2(2)-t2(1))); % normalize discrete distribution
bar(t2,ps2,'y');
hold on;
sigma_2=b^2/(12*n);
R=normrnd(0,sqrt(sigma_2),1,len);
[ps2,t2]=hist(R,50);
ps2=ps2/(sum(ps2)*(t2(2)-t2(1))); % normalize discrete distribution
plot(t2,ps2,'bo-','linewidth',1.5);
hold on
plot(t2,exp(-t2.^2/(2*sigma_2))/sqrt(2*pi*sigma_2),'r'); %plot continuous distribution
and this is the result :
I've got an arbitrary probability density function discretized as a matrix in MATLAB, meaning that for every pair (x,y) the probability is stored in the matrix:
A(x,y) = probability
This is a 100x100 matrix, and I would like to be able to generate random samples of two dimensions (x,y) from this matrix and also, if possible, to calculate the mean and other moments of the PDF. I want to do this because, after resampling, I want to fit the samples to an approximated Gaussian mixture model.
I've been looking everywhere but I haven't found anything as specific as this. I hope you may be able to help me.
Thank you.
If you really have a discrete probability density function defined by A (as opposed to a continuous probability density function that is merely described by A), you can "cheat" by turning your 2D problem into a 1D problem.
%define the possible values for the (x,y) pair
row_vals = [1:size(A,1)]'*ones(1,size(A,2)); %all x values
col_vals = ones(size(A,1),1)*[1:size(A,2)]; %all y values
%convert your 2D problem into a 1D problem
A = A(:);
row_vals = row_vals(:);
col_vals = col_vals(:);
%calculate your fake 1D CDF, assumes sum(A(:))==1
CDF = cumsum(A); %remember, the first term out of cumsum is not zero
%because of the operation we're doing below (interp1 followed by ceil)
%we need the CDF to start at zero
CDF = [0; CDF(:)];
%generate random values
N_vals = 1000; %give me 1000 values
rand_vals = rand(N_vals,1); %spans zero to one
%look into CDF to see which index the rand val corresponds to
out_val = interp1(CDF,[0:1/(length(CDF)-1):1],rand_vals); %spans zero to one
ind = ceil(out_val*length(A));
%using the inds, you can lookup each pair of values
xy_values = [row_vals(ind) col_vals(ind)];
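A small, optional check (assuming sum(A(:)) == 1, as the code above does):
mean_emp = mean(xy_values);                        % empirical [mean_x mean_y] from the samples
mean_ana = [sum(row_vals.*A) sum(col_vals.*A)];    % analytic first moments of the discrete pdf
% the two should agree to within sampling error as N_vals grows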
I hope that this helps!
Chip
I don't believe MATLAB has built-in functionality for generating multivariate random variables with an arbitrary distribution. As a matter of fact, the same is true for univariate random numbers. But while the latter can easily be generated from the inverse of the cumulative distribution function, that approach does not carry over directly to multivariate distributions, so generating such numbers is much messier (the main problem being that two or more variables can be correlated). So this part of your question is beyond the scope of this site.
Since half an answer is better than no answer, here's how you can compute the mean and higher moments numerically using matlab:
%generate some dummy input
xv=linspace(-50,50,101);
yv=linspace(-30,30,100);
[x y]=meshgrid(xv,yv);
%define a discretized two-hump Gaussian distribution
A=floor(15*exp(-((x-10).^2+y.^2)/100)+15*exp(-((x+25).^2+y.^2)/100));
A=A/sum(A(:)); %normalized to sum to 1
%plot it if you like
%figure;
%surf(x,y,A)
%actual half-answer starts here
%get normalized pdf
weight=trapz(xv,trapz(yv,A));
A=A/weight; %A normalized to 1 according to trapz^2
%mean
mean_x=trapz(xv,trapz(yv,A.*x));
mean_y=trapz(xv,trapz(yv,A.*y));
So, the point is that you can perform a double integral on a rectangular mesh using two consecutive calls to trapz. This allows you to compute the integral of any quantity that has the same shape as your mesh, but a drawback is that vector components have to be computed independently. If you only wish to compute things which can be parametrized with x and y (which are naturally the same size as your mesh), then you can get along without having to do any additional thinking.
You could also define a function for the integration:
function res=trapz2(xv,yv,A,arg)
if ~isscalar(arg) && any(size(arg)~=size(A))
    error('Size of A and arg must be the same!')
end
res=trapz(xv,trapz(yv,A.*arg));
end
This way you can compute stuff like
weight=trapz2(xv,yv,A,1);
mean_x=trapz2(xv,yv,A,x);
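Higher moments follow the same pattern; for instance, a sketch of the second moments about the mean computed above:
var_x  = trapz2(xv,yv,A,(x-mean_x).^2);
var_y  = trapz2(xv,yv,A,(y-mean_y).^2);
cov_xy = trapz2(xv,yv,A,(x-mean_x).*(y-mean_y));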
NOTE: the reason I used a 101x100 mesh in the example is that the double call to trapz should be performed in the proper order. If you interchange xv and yv in the calls, you get the wrong answer due to inconsistency with the definition of A, but this will not be evident if A is square. I suggest avoiding symmetric quantities during the development stage.
In MATLAB I need to generate a second derivative of a Gaussian window to apply to a vector representing the height of a curve. I need the second derivative in order to determine the locations of the inflection points and maxima along the curve. The vector representing the curve may be quite noisy, hence the use of the Gaussian window.
What is the best way to generate this window?
Is it best to use the gausswin function to generate the gaussian window then take the second derivative of that?
Or to generate the window manually using the equation for the second derivative of the gaussian?
Or is it best to apply the Gaussian window to the data and then take the second derivative of it all? (I know these last two are mathematically the same; however, with discrete data points I do not know which will be more accurate.)
The maximum length of the height vector is going to be around 100-200 elements.
Thanks
Chris
I would create a linear filter composed of the weights generated by the second derivative of a Gaussian function and convolve this with your vector.
The weights of a second derivative of a Gaussian are given by:
G''(t) = C * ((t - tau)^2 - sigma^2)/sigma^4 * exp(-(t - tau)^2/(2*sigma^2))
where:
tau is the time shift of the filter. If you are generating weights for a discrete filter of length T with an odd number of samples, set tau to zero and let t vary over [-T/2, T/2].
sigma varies the scale of your operator. Set sigma to a value of around T/6; if you are concerned about long filter lengths, this can be relaxed to T/4.
C is the normalising factor. It can be derived algebraically, but in practice I always do this numerically after calculating the filter weights. For unity gain when smoothing periodic signals, I set C = 1/sum(G'').
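A minimal sketch of building and applying such a filter (the length and scale values are placeholders, y is assumed to be your height vector, and instead of the C normalization above the weights are simply de-meaned so the filter has zero response to a constant):
T     = 25;                            % odd filter length (placeholder)
t     = (-(T-1)/2 : (T-1)/2)';         % centred sample times, i.e. tau = 0
sigma = T/6;                           % scale, roughly T/6 as suggested
G2    = ((t.^2 - sigma^2)/sigma^4).*exp(-t.^2/(2*sigma^2));
G2    = G2 - mean(G2);                 % force zero DC response
y_dd  = conv(y, G2, 'same');           % smoothed second derivative of y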
Regarding your comment on the equivalence of smoothing first and taking the derivative later, I would say it is more involved than that: which derivative operator would you use in the second step? A simple central difference would not yield the same results.
You can get an equivalent (but approximate) response to a second derivative of a Gaussian by filtering the data with two Gaussians of different scales and then taking the point-wise differences between the two resulting vectors. See Difference of Gaussians for that approach.
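A quick sketch of that difference-of-Gaussians route (gausswin is in the Signal Processing Toolbox; the alpha values are only illustrative, and the result approximates a negated, unscaled second-derivative kernel):
T   = 25;
g1  = gausswin(T,2.5); g1 = g1/sum(g1);   % narrower Gaussian (larger alpha)
g2  = gausswin(T,1.5); g2 = g2/sum(g2);   % wider Gaussian
dog = g1 - g2;                            % difference-of-Gaussians kernel
y_dd_approx = conv(y, dog, 'same');       % y is the height vector again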