Kriging / Gaussian Process Conditional Simulations in Matlab

I would like to perform conditional simulations for Gaussian process (GP) models in Matlab. I have found a tutorial by Martin Kolář (http://mrmartin.net/?p=223).
sigma_f = 1.1251; %parameter of the squared exponential kernel
l = 0.90441; %parameter of the squared exponential kernel
kernel_function = @(x,x2) sigma_f^2*exp((x-x2)^2/(-2*l^2));
%This is one of many popular kernel functions, the squared exponential
%kernel. It favors smooth functions. (Here it is defined as an anonymous
%function handle.)
% we can also define an error function, which models the observation noise
sigma_n = 0.1; %known noise on observed data
error_function = @(x,x2) sigma_n^2*(x==x2);
%this is just iid Gaussian noise with mean 0 and variance sigma_n^2
%kernel functions can be added together. Here, we add the error kernel to
%the squared exponential kernel.
k = @(x,x2) kernel_function(x,x2)+error_function(x,x2);
X_o = [-1.5 -1 -0.75 -0.4 -0.3 0]';
Y_o = [-1.6 -1.3 -0.5 0 0.3 0.6]';
prediction_x=-2:0.01:1;
K = zeros(length(X_o));
for i=1:length(X_o)
    for j=1:length(X_o)
        K(i,j)=k(X_o(i),X_o(j));
    end
end
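% (Aside, not from the tutorial) With implicit expansion (R2016b+), the same
% covariance matrix can be built without loops; note the element-wise .^ in
% the vectorized kernel:
k_vec = @(x,x2) sigma_f^2*exp(-(x-x2).^2/(2*l^2)) + sigma_n^2*(x==x2);
K_vectorized = k_vec(X_o, X_o'); % column against row expands to the full matrix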
%% Demo #5.2 Sample from the Gaussian Process posterior
clearvars -except k prediction_x K X_o Y_o
%We can also sample from this posterior, the same way as we sampled before:
K_ss=zeros(length(prediction_x),length(prediction_x));
for i=1:length(prediction_x)
    for j=i:length(prediction_x) %We only calculate the top half of the matrix. This is an optional speedup trick
        K_ss(i,j)=k(prediction_x(i),prediction_x(j));
    end
end
K_ss=K_ss+triu(K_ss,1)'; %Copy the upper triangle to the lower triangle, making K_ss symmetric
K_s=zeros(length(prediction_x),length(X_o));
for i=1:length(prediction_x)
    for j=1:length(X_o)
        K_s(i,j)=k(prediction_x(i),X_o(j));
    end
end
[V,D]=eig(K_ss-K_s/K*K_s'); %eigendecomposition of the posterior covariance
A=real(V*(D.^(1/2))); %matrix square root; real() guards against tiny negative eigenvalues
for i=1:7
    standard_random_vector = randn(length(A),1);
    gaussian_process_sample(:,i) = A * standard_random_vector + K_s/K*Y_o;
end
hold on
plot(prediction_x,real(gaussian_process_sample))
set(plot(X_o,Y_o,'r.'),'MarkerSize',20)
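An alternative to the eigendecomposition used above, not from the tutorial, is a Cholesky factorization of the posterior covariance, with a small jitter added to the diagonal to keep it numerically positive definite:
posterior_cov = K_ss - K_s/K*K_s';
L_chol = chol(posterior_cov + 1e-8*eye(size(posterior_cov)), 'lower'); % jitter for numerical stability
sample = K_s/K*Y_o + L_chol*randn(length(prediction_x),1); % one conditional sample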
The tutorial generates the conditional simulations using a direct simulation method based on covariance matrix decomposition. It is my understanding that there are several methods of generating conditional simulations that may be better when the number of simulation points is large, such as conditioning by Kriging using a local neighborhood. I have found information regarding several methods in J.-P. Chilès and P. Delfiner, "Chapter 7 - Conditional Simulations," in Geostatistics: Modeling Spatial Uncertainty, Second Edition, John Wiley & Sons, Inc., 2012, pp. 478-628.
Is there an existing Matlab toolbox that can be used for conditional simulations? I am aware of DACE, GPML, and mGstat (http://mgstat.sourceforge.net/). I believe only mGstat offers the capability to perform conditional simulations. However, mGstat also seems to be limited to only 3D models and I am interested in higher dimensional models.
Can anybody offer any advice on getting started performing conditional simulations with an existing toolbox such as GPML?
===================================================================
EDIT
I have found a few more Matlab toolboxes: STK, ScalaGauss, ooDACE
It appears STK is capable of conditional simulations using covariance matrix decomposition. However, it is limited to a moderate number of simulation points (maybe a few thousand?) due to the Cholesky factorization.

I used the STK toolbox and I recommend it for others:
http://kriging.sourceforge.net/htmldoc/
I found that if you need conditional simulations at a large number of points, you might consider generating a conditional simulation at the points of a large design of experiments (DoE) and then simply relying on the mean prediction conditioned on that DoE.
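For reference, here is a minimal sketch of the kind of call sequence I used with STK (treat the covariance choice and exact signatures as assumptions to be checked against the STK documentation):
% xi: n-by-d observed inputs, zi: n-by-1 observations, xt: m-by-d simulation points
model = stk_model('stk_materncov52_aniso', size(xi,2)); % anisotropic Matern 5/2 covariance
zsim = stk_generate_samplepaths(model, xi, zi, xt, 10); % 10 conditional sample paths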

Related

MATLAB - Meaning of Gaussian distribution data (in Neural Network)

I'm a newbie to MATLAB and now I'm trying to create 2-D Gaussian-distributed data to train my neural network. I found this code in the official documentation.
mu = [0 0];
Sigma = [.25 .3; .3 1];
x1 = -3:.2:3; x2 = -3:.2:3;
[X1,X2] = meshgrid(x1,x2);
F = mvnpdf([X1(:) X2(:)],mu,Sigma);
I know "mu" is average of the data. Sigma is something related to
Standard deviation. But I just don't get what is the idea of mesgrid and the interval(x1,x2). And the Geometric meaning of these code.
Also, can someone explain me why is guassian distribution so important in machine learning and data science? Cause all the course keep saying and saying this term.
Meshgrid is a basic Matlab function that is in no way specifically related to neural networks or a Gaussian distribution. Check the Matlab documentation to find out more about it.
The Gaussian distribution (also known as the normal distribution) is important for data science because it comes with several nice statistical properties. Unfortunately, it is hard to describe them all in a compact way, and this would also not be a question about programming, but about statistics.
I think the code you provide seems confusing to you because you expect it to generate samples whereas it merely returns values of the Gaussian PDF (probability density function) for some given pairs of (x1,x2).
For example, F = mvnpdf([a b], mu, Sigma) returns the value of the PDF at the point x1=a, x2=b, given that (x1,x2) follows a multivariate Gaussian distribution with mean mu and covariance matrix Sigma (a density value, not a probability).
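To see the geometric meaning, here is a minimal sketch (essentially your own code plus a reshape and a surface plot):
mu = [0 0];
Sigma = [.25 .3; .3 1];
[X1,X2] = meshgrid(-3:.2:3, -3:.2:3); % all (x1,x2) grid points in the square
F = mvnpdf([X1(:) X2(:)], mu, Sigma); % PDF value at every grid point
surf(X1, X2, reshape(F, size(X1))); % the bell-shaped density surface
xlabel('x1'); ylabel('x2'); zlabel('pdf');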
Since we are on Stack Overflow, I will focus on the Matlab aspect of your question: to generate 100 samples of a 2-D Gaussian you can use something like the following (adapted from the Matlab help for the randn function):
mu = [1 2];
Sigma = [1 .5; .5 2];
R = chol(Sigma);
z = repmat(mu,100,1) + randn(100,2)*R;
The array z = [x1,x2] contains the x1 and x2 vectors that you are looking for.
A statistics textbook or Wikipedia can convince you that the above code indeed generates such samples. The last line of code exploits one of the nice properties of the Gaussian distribution (or of any other elliptical distribution): an affine transformation of a Gaussian random vector is again Gaussian.
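If you have the Statistics Toolbox, the same thing can be done in one line with mvnrnd, which wraps exactly this transformation:
z = mvnrnd(mu, Sigma, 100); % 100 samples from N(mu, Sigma)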

How to solve equations with complex coefficients using ode45 in MATLAB?

I am trying to solve two equations with complex coefficients using ode45, but I am getting the error message "Inputs must be floats, namely single or double."
X = sym(['[',sprintf('X(%d) ',1:2),']']);
Eqns=[-(X(1)*23788605396486326904946699391889*1i)/38685626227668133590597632 + (X(2)*23788605396486326904946699391889*1i)/38685626227668133590597632; (X(2)*23788605396486326904946699391889*1i)/38685626227668133590597632 + X(1)*(- 2500000 + (5223289665997855453060886952725538686654593059791*1i)/324518553658426726783156020576256)] ;
f=@(t,X)[Eqns];
[t,Xabc]=ode45(f,[0 300*10^-6],[0 1])
How can I fix this? Can somebody help me?
Per the MathWorks Support Team, "ODE solvers in MATLAB 5 (R12) and later releases properly handle complex valued systems." So the complex numbers are not the issue.
The error "Inputs must be floats, namely single or double." stems from your definition of f using Symbolic Variables that are, unlike complex numbers, not floats. The easiest way to get around this is to not use the Symbolic Toolbox at all; just makes Eqns an anonymous function:
Eqns= @(t,X) [-(X(1)*23788605396486326904946699391889*1i)/38685626227668133590597632 + (X(2)*23788605396486326904946699391889*1i)/38685626227668133590597632; (X(2)*23788605396486326904946699391889*1i)/38685626227668133590597632 + X(1)*(- 2500000 + (5223289665997855453060886952725538686654593059791*1i)/324518553658426726783156020576256)] ;
[t,Xabc]=ode45(Eqns,[0 300*10^-6],[0 1]);
That being said, I'd like to point out that numerically time-integrating this system over 300 microseconds (assuming seconds, since no units are given) will take a long time, since your coefficient matrix has imaginary eigenvalues on the order of 1E+10. The extremely short period of those oscillations will more than likely need to be resolved by Matlab's adaptive methods, and that will take a while for a time span a few orders of magnitude greater than that period.
I'd therefore suggest an analytical approach to this problem, unless it is a stepping stone to another problem that is not analytically solvable.
Systems of ordinary differential equations of the form
X'(t) = A*X(t), X(0) = X0,
which are linear, homogeneous systems with a constant coefficient matrix A, have the general solution
X(t) = exp_m(A*t)*X0,
where the m-subscripted exponential function is the matrix exponential.
Therefore, the analytical solution to the system can be calculated exactly assuming the matrix exponential can be calculated.
In Matlab, the matrix exponential is calculated via the expm function.
The following code computes the analytical solution and compares it to the numerical one for a short time span:
% Set-up
A = [-23788605396486326904946699391889i/38685626227668133590597632,23788605396486326904946699391889i/38685626227668133590597632;...
-2500000+5223289665997855453060886952725538686654593059791i/324518553658426726783156020576256,23788605396486326904946699391889i/38685626227668133590597632];
Eqns = @(t,X) A*X;
X0 = [0;1];
% Numerical
options = odeset('RelTol',1E-8,'AbsTol',1E-8);
[t,Xabc]=ode45(Eqns,[0 1E-9],X0,options);
% Analytical
Xana = cell2mat(arrayfun(@(tk) expm(A*tk)*X0,t,'UniformOutput',false)')';
k = 1; % state component to plot
% Plots
figure(1);
subplot(3,1,1)
plot(t,abs(Xana(:,k)),t,abs(Xabc(:,k)),'--');
title('Magnitude');
subplot(3,1,2)
plot(t,real(Xana(:,k)),t,real(Xabc(:,k)),'--');
title('Real');
ylabel('Values');
subplot(3,1,3)
plot(t,imag(Xana(:,k)),t,imag(Xabc(:,k)),'--');
title('Imaginary');
xlabel('Time');
Comparing the two (plot omitted): the output of ode45 matches the magnitude and real parts of the solution very well, but the imaginary portion is out of phase by exactly π.
However, since ode45's error estimator only looks at norms, the phase difference goes unnoticed, which may lead to problems depending on the application.
Note that while the matrix exponential solution is far more costly than ode45 for the same number of time vector elements, the analytical solution is exact for any time vector of any density. So for long time spans, the matrix exponential approach can be viewed as an improvement in some sense.
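As a side note on cost: if the time vector is uniform, a single expm call gives a one-step propagator that can be reused (a sketch, reusing A and X0 from the code above; the step size is arbitrary here):
dt = 1E-12; % uniform time step
E = expm(A*dt); % one-step propagator, computed once
X = zeros(2,1001); X(:,1) = X0;
for kk = 1:1000
    X(:,kk+1) = E*X(:,kk); % exact propagation on the uniform grid
end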

Fast scaling of Gaussian Kernel by the Covariance of the Inputs

I am currently fiddling with multivariate kernel density estimation for estimating the probability density functions (PDF) of hydrological data sets using Matlab. I am most familiar with kernel density estimation using Gaussian kernels as outlined in Sharma (2000 and 2014), where the kernel bandwidths are set using the Gaussian Reference Rule (GRR). The GRR is written as follows (Sharma, 2000):
lambda_ref = (4/(d+2))^(1/(d+4)) * n^(-1/(d+4))
where lambda_ref is the GRR bandwidth of the kernel, n is the sample size, and d is the dimension of the data set we are using for density estimation. To estimate the multivariate density of our data set X we use the following formula (Sharma, 2000):
f(x) = (1/n) * sum_{i=1}^{n} exp(-(x-x_i)' S^-1 (x-x_i)/(2*lambda^2)) / sqrt((2*pi*lambda^2)^d * det(S))
where lambda is the same as lambda_ref above, S is the sample covariance of X, and det() stands for determinant.
My question is: I understand that there are many "fast" methods for calculating the Gaussian kernel matrix represented by the exp() term, such as the method proposed here (using Matlab): http://mrmartin.net/?p=218. Since I will be working with data sets of fairly large sample size (1,000-10,000), I am looking for fast code. Is anyone aware of how I can write fast code for the second equation that takes into account the inverse of the sample covariance matrix (S^-1)?
I greatly appreciate any help that can be provided on this issue. Thank you!
Note(s):
I understand that there is Matlab code for calculating the second equation, found as a sub-function in: http://www.mathworks.com/matlabcentral/fileexchange/29039-mutual-information-2-variablle/content/MutualInfo.m. However, this code has a bottleneck in how it calculates the kernel matrix.
References:
[1] A. Sharma, "Seasonal to interannual rainfall probabilistic forecasts for improved water supply management: Part 3 - A nonparametric probabilistic forecast model," Journal of Hydrology, vol. 239, no. 1-4, pp. 249-258, Dec. 2000, doi:10.1016/S0022-1694(00)00348-6.
[2] A. Sharma and R. Mehrotra, "An information theoretic alternative to model a natural system using observational information alone," Water Resources Research, vol. 50, pp. 650-660, 2014, doi:10.1002/2013WR013845.
I have found a code that I am able to modify for my purposes. The original code is listed at the following link: http://www.kernel-methods.net/matlab/kernels/rbf.m.
Code
function K = rbf(coord,sig)
%function K = rbf(coord,sig)
%
% Computes an rbf kernel matrix from the input coordinates
%
%INPUTS
% coord = a matrix containing all samples as rows
% sig = sigma, the kernel width; squared distances are divided by
% squared sig in the exponent
%
%OUTPUTS
% K = the rbf kernel matrix, with K(i,j) = exp(-||x_i - x_j||^2/(2*sig^2))
%
%
% For more info, see www.kernel-methods.net
%
%Author: Tijl De Bie, February 2003. Adapted: October 2004 (for speedup).
n=size(coord,1);
K=coord*coord'/sig^2; % inner products x_i'*x_j/sig^2
d=diag(K); % squared norms ||x_i||^2/sig^2
K=K-ones(n,1)*d'/2; % subtract ||x_j||^2/(2*sig^2) from each column
K=K-d*ones(1,n)/2; % subtract ||x_i||^2/(2*sig^2) from each row
K=exp(K); % K(i,j) = exp(-||x_i - x_j||^2/(2*sig^2))
Modified Code incorporating sample covariance scaling:
xcov = cov(x.'); % sample covariance of the data (x is d-by-n, samples as columns)
invxc = pinv(xcov); % (pseudo-)inverse of the sample covariance
coord = x.'; % samples as rows
sig = sigma; % kernel bandwidth
n = size(coord,1);
K = coord*invxc*coord'/sig^2; % Mahalanobis inner products x_i'*S^-1*x_j/sig^2
d = diag(K);
K = K-ones(n,1)*d'/2;
K = K-d*ones(1,n)/2;
K = exp(K); % K(i,j) = exp(-(x_i-x_j)'*S^-1*(x_i-x_j)/(2*sig^2)), the kernel matrix
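If you have the Statistics Toolbox, I believe the same kernel matrix can also be obtained via pdist2's built-in Mahalanobis distance, which may be worth benchmarking against the expansion above (note that pdist2 requires xcov to be positive definite, whereas pinv also tolerates a singular covariance):
D = pdist2(coord, coord, 'mahalanobis', xcov); % pairwise Mahalanobis distances
K2 = exp(-D.^2/(2*sig^2)); % same kernel matrix as K above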
I hope this helps someone else looking into the same problem.

Creating a matrix of Gaussian Wavelets at dyadic scales

I need to create a diagonal matrix containing the Fourier coefficients of the Gaussian wavelet function, but I'm unsure of what to do.
Currently I'm using this function to generate the Haar Wavelet matrix
http://www.mathworks.co.uk/matlabcentral/fileexchange/33625-haar-wavelet-transformation-matrix-implementation/content/ConstructHaarWaveletTransformationMatrix.m
and taking the rows at dyadic scales (2,4,8,16) as the transform:
M = 256;
H = ConstructHaarWaveletTransformationMatrix(M);
fi = conj(dftmtx(M))/M;
H = fi*H;
H = H(4,:);
H = diag(H);
etc
How do I repeat this for Gaussian wavelets? Is there a built in Matlab function which will do this for me?
For reference I'm implementing the algorithm in section 4 of this paper:
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04218361
I may not be answering the question directly, but I will try to help you move forward.
As far as I know, the Matlab Wavelet Toolbox only deals with wavelet operations and coefficients, increasing or decreasing resolution levels, and similar operations, but does not expose the internal matrices that transform between signals and coefficients.
Hence I fear the answer to this question is no. Some time ago, I did this for some of the Haar-class wavelets: I built the matrix from scratch and then compared the coefficients obtained against those from the built-in Matlab Wavelet Toolbox, thereby ensuring the matrices were good enough for my algorithm (in my case, recursive parameter estimation for time-varying models).
For ConstructHaarWaveletTransformationMatrix it is really simple to create the matrix, because the Haar class can be expressed simply as Kronecker products.
The Gaussian wavelet case, I fear, has to be done from scratch too...
The steps I suggest would be:
Although MATLAB does not explicitly include the matrices, you can use the built-in functions to recover the Gaussian wavelet shapes, and thus compose the matrix for your algorithm.
Build every column of the matrix from a Gaussian wavelet, for every resolution level you require (the dyadic scales); use the Matlab Wavelet Toolbox to recover the shapes, as in the sketch after these steps.
After this, compare the coefficients you obtain with the coefficients from the toolbox. This way you can correct the ordering of the matrix rows.
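Here is a minimal sketch of what I mean for the column-building step (it assumes the Wavelet Toolbox's gauswavf function; the centering and scaling conventions are my own guesses, not taken from the paper):
M = 256;
t = (0:M-1)'; % sample grid
scales = [2 4 8 16]; % dyadic scales
PSI = zeros(M, numel(scales));
[psi, x] = gauswavf(-5, 5, M, 1); % prototype: 1st derivative of a Gaussian
for s = 1:numel(scales)
    a = scales(s);
    % dilate by a, center in the window, and resample on the grid t
    PSI(:,s) = interp1(x*a + M/2, psi, t, 'linear', 0)/sqrt(a);
end
G = conj(dftmtx(M))/M*PSI; % Fourier coefficients of each column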
Numerically, where fj is the projection of the signal onto Vj (the space of PHI signals, i.e. scaling functions) at resolution level j, and gj is the projection onto Wj (the space of PSI signals, i.e. mother wavelets) at resolution level j, we can write:
f = fj0 + sum_{j=j0}^{j1-1} gj
Hence, fj0 and the gj each induce a matrix; let's call them the PHIj and PSIj matrices:
f = PHIj0*cj0 + sum_{j=j0}^{j1-1} PSIj*dj
The PHIj0 columns contain the scaled and shifted scaling signals (for j0 only) for the approximation projection (onto the Vj0 space), and the PSIj columns contain the scaled and shifted mother wavelet signals (from j0 to j1-1) for the detail projections (onto the Wj0 to Wj1-1 spaces).
Hence, the matrix you need is:
PHI = [PHIj0 PSIj0 ... PSI_{j1-1}]
Thus you can express your original signal as:
f = PHI*C
where C is a vector of approximation and detail coefficients for all the levels:
C = [cj0' dj0' ... d_{j1-1}']'
The first part, building PHI, can be achieved by writing:
function PHI=MakePhi(l,str,Jmin,Jmax)
% [PHI]=MakePhi(l,str,Jmin,Jmax)
%
% Build full PHI Wavelet Matrix for obtaining wavelet coefficients
% (extract)
%FILTER
[LO_R,HI_R] = wfilters(str,'r');
lf=length(LO_R);
%PHI BUILD
PHI=[];
laux=l([end-Jmax end-Jmax:end]);
PHI=[PHI MakeWMatrix('a',str,laux)];
for j=Jmax:-1:Jmin
    laux=l([end-j end-j:end]);
    PHI=[PHI MakeWMatrix('d',str,laux)];
end
wfilters is a built-in MATLAB function that returns the reconstruction filters needed for the approximation and/or detail wavelet signals.
The MakeWMatrix function is:
function M=MakeWMatrix(typestr,str,laux)
% M=MakeWMatrix(typestr,str,laux)
%
% Build Wavelet Matrix for obtaining wavelet coefficients
% for a single level vector.
% (extract)
[LO_R,HI_R] = wfilters(str,'r');
if typestr=='a'
    F_R=LO_R';
else
    F_R=HI_R';
end
la=length(laux);
lin=laux(2); lout=laux(3);
M=MakeCMatrix(F_R,lin,lout);
for i=3:la-1
    lin=laux(i); lout=laux(i+1);
    Mi=MakeCMatrix(LO_R',lin,lout);
    M=Mi*M;
end
and finally the MakeCMatrix is:
function [M]=MakeCMatrix(F_R,lin,lout)
% Convolution matrix
% (extract)
lf=length(F_R);
M=[];
for i=1:lin
    M(:,i)=[zeros(2*(i-1),1) ;F_R ;zeros(2*(lin-i),1)];
end
M=[zeros(1,lin); M ;zeros(1,lin)];
[ltot,lin]=size(M);
lmin=floor((ltot-lout)/2)+1;
lmax=floor((ltot-lout)/2)+lout;
M=M(lmin:lmax,:);
This last matrix should include some interpolation routine to obtain better general results in each case.
I hope this solves part of your problem.

Estimating the variance of eigenvalues of sample covariance matrices in Matlab

I am trying to investigate the statistical variance of the eigenvalues of sample covariance matrices using Matlab. To clarify, each sample covariance matrix is constructed from a finite number of vector snapshots (afflicted with random white Gaussian noise). Then, over a large number of trials, a large number of such matrices are generated and eigendecomposed in order to estimate the theoretical statistics of the eigenvalues.
According to several sources (see, for example, [1, Eq.3] and [2, Eq.11]), the variance of each sample eigenvalue should be equal to that theoretical eigenvalue squared, divided by the number of vector snapshots used for each covariance matrix. However, the results I get from Matlab aren't even close.
Is this an issue with my code? With Matlab? (I've never had such trouble working on similar problems).
Here's a very simple example:
% Data vector length
Lvec = 5;
% Number of snapshots per sample covariance matrix
N = 200;
% Number of simulation trials
Ntrials = 10000;
% Noise variance
sigma2 = 10;
% Theoretical covariance matrix
Rnn_th = sigma2*eye(Lvec);
% Theoretical eigenvalues (should all be sigma2)
lambda_th = sort(eig(Rnn_th),'descend');
lambda = zeros(Lvec,Ntrials);
for trial = 1:Ntrials
    % Generate new (complex) white Gaussian noise data
    n = sqrt(sigma2/2)*(randn(Lvec,N) + 1j*randn(Lvec,N));
    % Sample covariance matrix
    Rnn = n*n'/N;
    % Save sample eigenvalues
    lambda(:,trial) = sort(eig(Rnn),'descend');
end
% Estimated eigenvalue covariance matrix
b = lambda - lambda_th(:,ones(1,Ntrials));
Rbb = b*b'/Ntrials
% Predicted (approximate) theoretical result
Rbb_th_approx = diag(lambda_th.^2/N)
References:
[1] B. Friedlander and A. J. Weiss, "On the second-order statistics of the eigenvectors of sample covariance matrices," IEEE Transactions on Signal Processing, vol. 46, no. 11, pp. 3136-3139, Nov. 1998.
[2] M. Kaveh and A. Barabell, "The statistical performance of the MUSIC and the minimum-norm algorithms in resolving plane waves in noise," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 34, no. 2, pp. 331-341, Apr. 1986.
According to the abstract of your first reference:
"Formulas for the second-order statistics of the eigenvectors have been derived in the statistical literature and are widely used. We point out a discrepancy between the statistics observed in numerical simulations and the theoretical formulas, due to the nonuniqueness of the definition of eigenvectors. We present two ways to resolve this discrepancy. The first involves modifying the theoretical formulas to match the computational results. The second involved a simple modification of the computations to make them match existing formulas."
Sounds like there is a discrepancy, and it also sounds like the two 'solutions' are hacks, but without access to the actual paper, it's kind of hard to help.
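For what it's worth, one standard way to remove the eigenvector nonuniqueness mentioned in that abstract is to fix each eigenvector's arbitrary phase before collecting statistics, e.g. by rotating its largest component to be real and positive. This is only my guess at the kind of 'modification of the computations' meant, not taken from the paper:
[V, D] = eig(Rnn);
for m = 1:size(V,2)
    [~, idx] = max(abs(V(:,m))); % largest-magnitude entry
    V(:,m) = V(:,m)*conj(V(idx,m))/abs(V(idx,m)); % make that entry real and positive
end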