How to reduce dimensions of Gaussian Mixture Model parameters - matlab

Assuming I have already built a Gaussian Mixture Model using the fitgmdist function and want to map the multivariate distributions into a subspace of smaller dimension without having to recreate the model, how do I go about it?
In MATLAB terms, I have a GMM, gmm_goal, with gmm_goal.NumComponents = K and gmm_goal.NumVariables = N and want to reduce N to a number n < N.
If code isn't available, an explanation or mathematical derivation will do.

The parameters of the Gaussian Mixture Model affected by the transformation into a subspace are the mean and covariance of the Gaussian distributions that form the GMM.
Assuming a linear transformation of your data points x:
y = A*x + b
Because of the linearity of expectation, you can compute the new mean and covariance in the subspace from the old ones:
mean_new = A*mean + b
covariance_new = A*covariance*A'
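A minimal sketch in MATLAB, assuming gmm_goal was fit with full covariance matrices and that A (an n-by-N matrix) and b (an n-by-1 vector) define the map y = A*x + b:
% Project an existing GMM onto a lower-dimensional subspace.
% The mixing weights are unchanged by the transformation.
K = gmm_goal.NumComponents;
n = size(A, 1);
mu_new    = zeros(K, n);
Sigma_new = zeros(n, n, K);
for k = 1:K
    mu_new(k, :)       = (A * gmm_goal.mu(k, :)' + b)';
    Sigma_new(:, :, k) = A * gmm_goal.Sigma(:, :, k) * A';
end
gmm_small = gmdistribution(mu_new, Sigma_new, gmm_goal.ComponentProportion);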

Related

How can I compute kernels in Matlab?

I want to calculate weighted kernels (for use in an SVM classifier) in Matlab but I'm currently completely confused.
I would like to implement a weighted RBF kernel and a weighted sigmoid kernel of the form K_rbf(x,y) = exp(-gamma * SUM_i w_i*(x_i - y_i)^2) and K_sig(x,y) = tanh(gamma * SUM_i w_i*x_i*y_i + b), where
x and y are vectors of size n, gamma and b are constants, and w is a vector of size n with weights.
The problem now is that the fitcsvm method from Matlab needs two matrices as input, i.e. K(X,Y). For example, the unweighted RBF and sigmoid kernels can be computed as follows:
K_rbf = exp(-gamma .* pdist2(X,Y,'euclidean').^2)
K_sigmoid = tanh(gamma*X*Y' + b);
X and Y are matrices where the rows are the data points (vectors).
How can I compute the above weighted kernels efficiently in Matlab?
Simply scale your input by the weights before passing it to the kernel equations. Let's assume you have a vector w of weights (of the size of the input problem), your data is in the rows of X, and the features are the columns. Multiply X with w, broadcasting over rows (for example using bsxfun). That's all. Do not do the same to Y though; just multiply one of the matrices. This is true for every such "weighted" kernel based on a scalar product (like the sigmoid); for distance-based kernels (like the RBF) you want to scale both X and Y by sqrt(w).
Short proofs:
scalar-product based
f(<w.*x, y>) = f(SUM_i w_i x_i y_i)   (scaling one argument elementwise by w gives the weighted scalar product)
distance based
f(||sqrt(w).*x - sqrt(w).*y||^2) = f(SUM_i (sqrt(w_i)*(x_i - y_i))^2)
= f(SUM_i w_i (x_i - y_i)^2)
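A minimal MATLAB sketch of this trick, assuming w is a 1-by-n row vector of weights and the rows of X and Y are the data points:
Xw = bsxfun(@times, X, w);              % scale one side by w (scalar-product kernels)
K_sigmoid_w = tanh(gamma * Xw * Y' + b);

Xs = bsxfun(@times, X, sqrt(w));        % scale both sides by sqrt(w) (distance-based kernels)
Ys = bsxfun(@times, Y, sqrt(w));
K_rbf_w = exp(-gamma * pdist2(Xs, Ys, 'euclidean').^2);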

Conditional probability density from GMM

I have fitted a Gaussian Mixture Model to a multivariate joint probability density function. How can I obtain the conditional probability density function (i.e. p(x|y)) from this mixture model (an N-by-N matrix) in Matlab?
Based on Bayes' rule, you can write down the formula p(x|y) = p(x,y)/p(y). If you are able to obtain the probability value p(y) for some given y, you can plug it directly into Bayes' formula. Otherwise you can go on and express each Gaussian of the mixture as a conditional Gaussian with the following parameters (P stands for covariance matrices, mu stands for means):
mu_x|y = mu_x + P_xy P_yy^-1 (y - mu_y)
P_x|y = P_xx - P_xy P_yy^-1 P_yx
Note that the mixture weights change as well: the weight of the k-th component in the conditional mixture is proportional to its original weight times the marginal density of the observed y under that component, and the weights are then renormalized to sum to one.
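As an illustration, here is a rough sketch of how the conditional mixture could be assembled from a fitted gmdistribution; the names gmm, dx (the number of x variables, assumed to come first) and y0 (the conditioning value, a column vector) are assumptions, and full covariance matrices are assumed:
ix = 1:dx;  iy = dx+1:gmm.NumVariables;   % x block first, y block second (assumption)
K = gmm.NumComponents;
mu_c = zeros(K, dx);  P_c = zeros(dx, dx, K);  w_c = zeros(1, K);
for k = 1:K
    mu_x = gmm.mu(k, ix)';   mu_y = gmm.mu(k, iy)';
    Pxx = gmm.Sigma(ix, ix, k);  Pxy = gmm.Sigma(ix, iy, k);
    Pyx = gmm.Sigma(iy, ix, k);  Pyy = gmm.Sigma(iy, iy, k);
    mu_c(k, :)   = (mu_x + Pxy * (Pyy \ (y0 - mu_y)))';
    P_c(:, :, k) = Pxx - Pxy * (Pyy \ Pyx);
    % re-weight each component by how likely y0 is under its y-marginal
    w_c(k) = gmm.ComponentProportion(k) * mvnpdf(y0', mu_y', Pyy);
end
w_c = w_c / sum(w_c);
gmm_cond = gmdistribution(mu_c, P_c, w_c);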

Principal Components calculated using different functions in Matlab

I am trying to understand principal component analysis in Matlab.
There seem to be at least 3 different functions that do it.
I have some questions regarding the code below:
Am I creating approximate x values using only one eigenvector (the one corresponding to the largest eigenvalue) correctly? I think so??
Why are PC and V, which are both meant to be the loadings for (x'x), presented differently? The column order is reversed because eig does not order the eigenvalues with the largest value first, but why are they the negative of each other?
Why are the eigenvalues from eig not ordered with the eigenvector corresponding to the largest eigenvalue in the first column?
Using the code below I get back to the input matrix x when using svd and eig, but the results from princomp seem to be totally different. What do I have to do to make princomp match the other two functions?
Code:
x=[1 2;3 4;5 6;7 8 ]
econFlag=0;
[U,sigma,V] = svd(x,econFlag);%[U,sigma,coeff] = svd(z,econFlag);
U1=U(:,1);
V1=V(:,1);
sigma_partial=sigma(1,1);
score1=U*sigma;
test1=score1*V';
score_partial=U1*sigma_partial;
test1_partial=score_partial*V1';
[PC, D] = eig(x'*x)
score2=x*PC;
test2=score2*PC';
PC1=PC(:,2);
score2_partial=x*PC1;
test2_partial=score2_partial*PC1';
[o1 o2 o3]=princomp(x);
Yes. According to the documentation of svd, the diagonal elements of the output S are in decreasing order. There is no such guarantee for the output D of eig, though.
Eigenvectors and singular vectors have no defined sign. If a is an eigenvector, so is -a.
I've often wondered the same. Laziness on the part of TMW? Optimization, because sorting would be an additional step and not everybody needs 'em sorted?
princomp centers the input data before computing the principal components. This makes sense as normally the PCA is computed with respect to the covariance matrix, and the eigenvectors of x' * x are only identical to those of the covariance matrix if x is mean-free.
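A quick way to check this (a sketch; princomp comes from older MATLAB releases, newer ones use pca instead):
coeff = princomp(x);   % first output of princomp: loadings of the centered data
[V, D] = eig(cov(x));  % cov(x) removes the column means internally
% coeff and V span the same directions; they agree up to column order and sign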
I would compute the PCA by transforming to the basis of the eigenvectors of the covariance matrix (computed from centered data), but apply this transform to the original (uncentered) data. This makes it possible to capture a maximum of variance with as few principal components as possible, but still to recover the original data from all of them:
[V, D] = eig(cov(x));
score = x * V;
test = score * V';
test is identical to x, up to numerical error.
In order to easily pick the components with the most variance, let's fix that lack of sorting ourselves:
[V, D] = eig(cov(x));
[D, ind] = sort(diag(D), 'descend');
V = V(:, ind);
score = x * V;
test = score * V';
Reconstruct the signal using the strongest principal component only:
test_partial = score(:, 1) * V(:, 1)';
In response to Amro's comments: It is of course also possible to first remove the means from the input data, and transform these "centered" data. In that case, for perfect reconstruction of the original data it would be necessary to add the means again. The way to compute the PCA given above is the one described by Neil H. Timm, Applied Multivariate Analysis, Springer 2002, page 446:
Given an observation vector Y with mean mu and covariance matrix Sigma of full rank p, the goal of PCA is to create a new set of variables called principal components (PCs) or principal variates. The principal components are linear combinations of the variables of the vector Y that are uncorrelated such that the variance of the jth component is maximal.
Timm later defines "standardized components" as those which have been computed from centered data and are then divided by the square root of the eigenvalues (i.e. variances), i.e. "standardized principal components" have mean 0 and variance 1.
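A minimal sketch of the centered variant mentioned above (remove the column means before the transform and add them back for perfect reconstruction):
mu = mean(x, 1);
xc = bsxfun(@minus, x, mu);             % centered data
[V, D] = eig(cov(xc));                  % cov(xc) equals cov(x)
[D, ind] = sort(diag(D), 'descend');
V = V(:, ind);
score = xc * V;                         % principal component scores of the centered data
test  = bsxfun(@plus, score * V', mu);  % reconstruction: identical to x up to numerical error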

How do I draw samples from multivariate gaussian distribution parameterized by precision in matlab

I am wondering how to draw samples in matlab, where I have the precision matrix and the mean as input arguments.
I know mvnrnd is a typical way to do so, but it requires the covariance matrix (i.e. the inverse of the precision matrix) as an argument.
I only have the precision matrix, and due to computational issues I can't invert it, since it would take too long (my dimension is about 2000x2000).
Good question. Note that you can generate samples from a multivariate normal distribution using samples from the standard normal distribution by way of the procedure described in the relevant Wikipedia article.
Basically, this boils down to evaluating A*z + mu where z is a vector of independent random variables sampled from the standard normal distribution, mu is a vector of means, and A*A' = Sigma is the covariance matrix. Since you have the inverse of the latter quantity, i.e. inv(Sigma), you can probably do a Cholesky decomposition (see chol) to determine the inverse of A. You then need to evaluate A * z. If you only know inv(A) this can still be done without performing a matrix inverse by instead solving a linear system (e.g. via the backslash operator).
The Cholesky decomposition might still be problematic for you, but I hope this helps.
If you want to sample from N(μ, Q^-1) and only Q is available, you can take the Cholesky factorization of Q, L, such that L*L^T = Q. Next take the inverse of L^T, L^-T, and sample Z from a standard normal distribution N(0, I).
Considering that L^-T is an upper triangular d×d matrix and Z is a d-dimensional column vector,
μ + L^-T Z will be distributed as N(μ, Q^-1).
If you wish to avoid taking the inverse of L, you can instead solve the triangular system of equations L^T v = Z by back substitution. μ + v will then be distributed as N(μ, Q^-1).
Some illustrative matlab code:
% make a 2x2 covariance matrix and a mean vector
covm = [3 0.4*(sqrt(3*7)); 0.4*(sqrt(3*7)) 7];
mu = [100; 2];
% Get the precision matrix
Q = inv(covm);
%take the Cholesky decomposition of Q (chol in matlab already returns the upper triangular factor)
L = chol(Q);
%draw 2000 samples from a standard bivariate normal distribution
Z = normrnd(0,1, [2, 2000]);
%solve the system and add the mean
X = repmat(mu, 1, 2000)+L\Z;
%check the result
mean(X')
var(X')
corrcoef(X')
% compare to the sampling from the covariance matrix
Y=mvnrnd(mu,covm, 2000)';
mean(Y')
var(Y')
corrcoef(Y')
scatter(X(1,:), X(2,:),'b')
hold on
scatter(Y(1,:), Y(2,:), 'r')
For more efficiency, I guess you can search for some package that efficiently solves triangular systems.

Fast and efficient upper triangular matrix inverse

I compute the multivariate Gaussian density a huge number of times in a project where I update the covariance matrix by rank-1 updates. Instead of computing the covariance from scratch, I use the cholupdate function to add a new sample to the covariance or remove an old one from it. This way the update is said to be $O(n^2)$, as opposed to the $O(n^3)$ Cholesky factorization of the covariance matrix.
persistent R
if (initialize) % or isempty(R)
    % compute the covariance V from scratch, then factorize it
    R = chol(V);                     % R is upper triangular with R'*R = V
else
    R = cholupdate(R, xAdded);       % rank-1 update of the existing factor
end
detVar = prod(diag(R))^2;            % det(V) from the Cholesky factor
Rt = R';
coeff = 1/sqrt((2*pi)^dimension*detVar);
y = Rt\x;                            % forward substitution (x is assumed to be already centered, i.e. sample minus mean)
logp = log(coeff) - 1/2 * norm(y)^2;
Actually the code is quite complicated, but I simplified it here. I wonder if there is a faster way to compute the triangular solve (the Rt\x part in the code) in MATLAB. Do you have any ideas on how to do it more efficiently in MATLAB?
Note that computing the determinant is also faster this way, so the new method is also not bad for the computation of the determinant.
The mldivide function is smart enough to check for triangular matrices, in which case it uses a forward/backward substitution method to efficiently solve the linear system:
AX=B <--> X=inv(A)*B <--> X=A\B
(compute x1 from the first equation, substitute it into the second to compute x2, substitute into the third, and so on)
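If the shape check itself is a concern in a tight loop, one possible micro-optimization (a sketch, not from the original post) is to tell MATLAB explicitly that the factor is triangular via linsolve, which skips the structure test that mldivide performs on every call:
opts.LT = true;                % Rt = R' is lower triangular
y = linsolve(Rt, x, opts);     % forward substitution, same result as Rt\x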