Difference between Matlab normrnd() and mvnrnd()

What is the difference in Matlab if I do:
R = mvnrnd(MU,SIGMA)
vs.
R = normrnd(mu,sigma)

R = normrnd(mu, sigma) outputs random numbers from one-dimensional normal distributions; i.e., for each element of the inputs a single output is generated: R(i) is a random scalar from the normal distribution with mean mu(i) and standard deviation sigma(i).
If sigma is a scalar rather than a vector, the same value is used for each element of R.
R = mvnrnd(MU,SIGMA) outputs random vectors from a multivariate normal distribution; i.e., for each row of MU and each 2D slice of SIGMA a single row of R is generated, where the dimension of R (d, the number of columns) is the same as the dimension of MU and SIGMA: R(i,:) is a random vector from the multivariate normal distribution with mean MU(i,:) and covariance SIGMA(:,:,i).
If SIGMA is a d-by-d 2D matrix rather than a d-by-d-by-n 3D array, the same matrix is used for each row of R.
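A quick illustration of the difference (a minimal sketch; the particular values of mu, sigma, MU and SIGMA are arbitrary):
% Univariate: one scalar draw per (mu(i), sigma(i)) pair
mu = [0 5 10];
sigma = [1 2 3];
r1 = normrnd(mu, sigma); % 1x3, r1(i) ~ N(mu(i), sigma(i)^2)
% Multivariate: one row vector with jointly normal, correlated components
MU = [0 5];
SIGMA = [1 0.8; 0.8 2]; % 2x2 covariance matrix
r2 = mvnrnd(MU, SIGMA); % 1x2 row vector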

Related

Random draws in Matlab of dependent/independent but not uniform random variables in [0,1]

Consider the random vector W=(X,Y), where X and Y are scalar random variables with support [0,1] and PDF f.
I would like to draw P realisations of W in Matlab, first assuming that X and Y are independent, and then assuming that they are dependent.
I know how to do this in Matlab if f is the uniform distribution. Is there any way to do this with other types of f?
For example, I am thinking about the beta distribution; I considered the function betarnd in Matlab, but I couldn't understand how to control the correlation between the random variables.
One possibility I thought about:
clear
P=500; %number draws
% parameters beta 1
mu1=0.896;
v1=0.001;
a1=((1-mu1)/v1 -1/mu1)*mu1^2;
b1=a1*(1/mu1-1);
% parameters beta 2
mu2=0.206;
v2=0.004;
a2=((1-mu2)/v2 -1/mu2)*mu2^2;
b2=a2*(1/mu2-1);
% correlation
rho=0.5;
%Draw bivariate standard normal with correlation rho
mu=[0 0];
sigma=[1 rho; rho 1];
R = mvnrnd(mu, sigma, P);
%Apply standard normal cdf to each column of R
R2=[normcdf(R(:,1)) normcdf(R(:,2))]; % Px2, each column uniform on [0,1]
%Apply inverse beta cdf to each column of R2
R3=[betainv(R2(:,1), a1, b1) betainv(R2(:,2), a2, b2)];
I am not sure whether it works, however.
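A quick way to sanity-check it (a sketch; corr requires the Statistics Toolbox, like betainv and mvnrnd) is to compare the sample marginals and correlation of R3 against the targets:
mean(R3) % should be close to the target means [0.896 0.206]
corr(R3) % off-diagonal entry near, though not exactly equal to, rho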

Dimension of Filter in 3-D Convolution in MATLAB

The function to perform an N-dimensional convolution of arrays A and B in MATLAB is shown below:
C = convn(A,B) % returns the N-dimensional convolution of arrays A and B.
I am interested in a 3-D convolution with a Gaussian filter.
If A is a 3 x 5 x 6 matrix, what do the dimensions of B have to be?
The dimensions of B can be anything you want. There is no set restriction in terms of size. For the Gaussian filter, it can be 1D, 2D or 3D. In 1D, what will happen is that each row gets filtered independently. In 2D, what will happen is that each slice gets filtered independently. Finally, in 3D you will be doing what is expected in 3D convolution. I am assuming you would like a full 3D convolution, not just 1D or 2D.
You may also be interested in the output size of convn. Per the documentation, given the two N-dimensional matrices, for each dimension k of the output, if nA(k) is the size of dimension k of matrix A and nB(k) is the size of dimension k of matrix B, then the size nC(k) of dimension k of the output matrix C is:
nC(k) = max([nA(k) + nB(k) - 1, nA(k), nB(k)])
nA(k) + nB(k) - 1 is straight from convolution theory: the output size in a dimension is simply the sum of the two sizes in that dimension minus 1. However, should this value be smaller than either nA(k) or nB(k), the output must still be large enough for either input matrix to fit inside it, which is why the final output size is bounded below by the sizes of both A and B.
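As a concrete example, for A of size 3 x 5 x 6 and a 5 x 5 x 5 filter B:
A = rand(3, 5, 6);
B = rand(5, 5, 5);
C = convn(A, B);
size(C) % returns [7 9 10], i.e. [3+5-1, 5+5-1, 6+5-1]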
To make this easier, you can set the size of the filter guided by the standard deviation of the distribution. I would like to refer you to my previous Stack Overflow post: By which measures should I set the size of my Gaussian filter in MATLAB?
This determines what the output size of a Gaussian filter should be given a standard deviation.
In 2D, the dimensions of the filter are N x N, such that N = ceil(6*sigma + 1) with sigma being the desired standard deviation. Therefore, you would allocate a 3D matrix of size N x N x N with N = ceil(6*sigma + 1);.
Therefore, the code you would want to use to create a 3D Gaussian filter would be something like this:
% Example input
A = rand(3, 5, 6);
sigma = 0.5; % Example
% Find size of Gaussian filter
N = ceil(6*sigma + 1);
N = N + mod(N+1, 2); % bump even N up to the next odd integer so the grid is centered
% Define grid of centered coordinates of size N x N x N
half = (N - 1)/2;
[X, Y, Z] = meshgrid(-half : half);
% Compute Gaussian filter - note normalization step
B = exp(-(X.^2 + Y.^2 + Z.^2) / (2.0*sigma^2));
B = B / sum(B(:));
% Convolve
C = convn(A, B);
One final note: if any dimension of the filter you provide exceeds the corresponding dimension of the input matrix A, you will still get a matrix sized according to the nC(k) rule above, but the border elements will be affected by the implicit zero-padding.
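As a usage note, if you want the output to be the same size as A, convn accepts a shape argument:
C = convn(A, B, 'same'); % central part of the full convolution, same size as A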

on symmetric positive semi-definiteness of covariance matrices in matlab

Hi everybody, I have this problem:
I have a dataset of n vectors, each of which has D dimensions.
I also have a covariance matrix of size D*D; let it be C.
I perform the following action:
I choose K vectors from the dataset and also choose E dimensions at random. Let M be the sample covariance of the selected data on the selected dimensions, so M is an E*E matrix.
Let P be the partial covariance matrix corresponding to the dimensions E of C, i.e. C(E,E) in Matlab.
Is the following matrix positive semi-definite?
X = (1-a)P + aM
where a is a constant like 0.2.
I sometimes get the following error when using mvnrnd(mean,X) :
SIGMA must be a symmetric positive semi-definite matrix
My code is:
%%% Dims are randomly chosen dimensions
%%% Inds are randomly chosen indexes from {1, 2, ..., n}
%%% PP is my data set of n D-dimensional vectors; PP is n*D
%%% sigmaa is a D*D covariance matrix
co = cov(PP(Inds,Dims));
me = mean(PP(Inds,Dims));
Bettaa = 0.2;
sigmaaDims = sigmaa(Dims,Dims);
sigmaaDims = (1-Bettaa)*sigmaaDims + (co)*Bettaa;
Tem = mvnrnd(me,sigmaaDims);
Simply looking at the matrix dimensions, it is not possible to tell whether a matrix is positive semi-definite.
To find out whether a given matrix is positive semi-definite, you must check that its eigenvalues are non-negative and that the matrix is symmetric:
symmetry = issymmetric(X);
[~,D]=eig(X);
eigenvalues = diag(D);
% use >= 0 for SEMI-definiteness (in floating point you may want a small
% tolerance, e.g. eigenvalues >= -1e-10)
if all(eigenvalues >= 0) && symmetry
disp('Positive semi-definite matrix.')
else
disp('Non positive semi-definite matrix.')
end
Where X is the matrix you are interested in.
Note that if you use the weaker definition of a positive definite matrix (see the Extension for non-symmetric matrices section), X does not need to be symmetric and you would end up with:
[~,D]=eig(X);
eigenvalues = diag(D);
if all(eigenvalues>=0)
disp('Positive semi-definite matrix.')
else
disp('Non positive semi-definite matrix.')
end
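As a side note beyond the check itself: even a mathematically valid X can trigger the mvnrnd error, because floating-point round-off often leaves X very slightly asymmetric. A common workaround (my suggestion, not part of the original answer) is to symmetrize before sampling:
X = (X + X') / 2; % force exact symmetry
Tem = mvnrnd(me, X); % me being the mean vector from the question's code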

Generating multivariate normally distributed random numbers in Matlab

This question is about the use of the covariance matrix in the multidimensional normal distribution:
I want to generate multi-dimensional random numbers x in Matlab with a given mean mu and covariance matrix Sigma. Assuming Z is a standard normally distributed random number (e.g. generated using randn), what is the correct code:
x = mu + chol(Sigma) * Z
or
x = mu + Sigma ^ 0.5 * Z
?
I am not sure about the use of the covariance matrix in the definition of the multivariate normal distribution – whether the determinant in the denominator is that of the matrix square root or of the Cholesky factor...
If by definition you refer to the density of the multivariate normal distribution,
f(x) = (2π)^(−p/2) · det(Σ)^(−1/2) · exp(−(x − μ)' Σ⁻¹ (x − μ) / 2),
then it contains neither the Cholesky decomposition nor the matrix square root of Σ, but its inverse and the scalar square root of its determinant.
But for numerically generating random numbers from this distribution, the density is not helpful. It is not even the most general description of the multivariate normal distribution, since the density formula only makes sense for positive definite matrices Σ, while the distribution is also defined if there are zero eigenvalues – that just means that the variance is 0 in the direction of the respective eigenvector.
Your question follows the approach of starting from standard multivariate normally distributed random numbers Z as produced by randn, and then applying a linear transformation. Assuming that mu is a p-dimensional row vector, we want an n x p random matrix (each row one observation, each column one variable):
Z = randn(n, p);
x = mu + Z * A;
We need a matrix A such that the covariance of x is Sigma. Since the covariance of Z is the identity matrix, the covariance of x is given by A' * A. A solution to this is given by the Cholesky decomposition, so the natural choice is
A = chol(Sigma);
where A is an upper triangular matrix.
However, we can also search for a Hermitian solution, A' = A, and then A' * A becomes A^2, the matrix square. A solution to this is given by a matrix square root, which is computed by replacing each eigenvalue of Sigma by its square root (or its negative); in general there are 2ⁿ possible solutions for n positive eigenvalues. The Matlab function sqrtm returns the principal matrix square root, which is the unique nonnegative-definite solution. Therefore,
A = sqrtm(Sigma)
works also. A ^ 0.5 should in principle do the same.
Simulations using this code
p = 10; % dimension
n = 1000; % number of observations per replication
nr = 1000; % number of replications
cp = nan(nr, 1); % reconstruction error of chol
sp = nan(nr, 1); % reconstruction error of sqrtm
pp = nan(nr, 1); % reconstruction error of mpower (^ 0.5)
for i = 1 : nr
x = randn(n, p);
Sigma = cov(x);
cS = chol(Sigma);
cp(i) = norm(cS' * cS - Sigma);
sS = sqrtm(Sigma);
sp(i) = norm(sS' * sS - Sigma);
pS = Sigma ^ 0.5;
pp(i) = norm(pS' * pS - Sigma);
end
mean([cp sp pp])
yield that chol is more precise than the other two methods, and profiling shows that it is also much faster, for both p = 10 and p = 100.
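To reproduce the speed comparison, a minimal timing sketch using timeit (absolute numbers will vary by machine):
Sigma = cov(randn(1000, 100)); % a 100 x 100 covariance matrix
timeit(@() chol(Sigma)) % typically much faster...
timeit(@() sqrtm(Sigma)) % ...than the matrix square root
timeit(@() Sigma ^ 0.5)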
The Cholesky decomposition does however have the disadvantage that it is only defined for positive-definite Σ, while the requirement of the matrix square root is merely that Σ is nonnegative-definite (sqrtm returns a warning for a singular input, but returns a valid result).
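To illustrate that last point with a small example of my own, take a rank-deficient covariance matrix:
Sigma = [1 1; 1 1]; % positive semi-definite, one zero eigenvalue
% chol(Sigma) errors out ("Matrix must be positive definite"),
% while sqrtm still returns a usable factor (with a singularity warning):
A = sqrtm(Sigma);
Z = randn(1000, 2);
x = Z * A; % zero-mean samples whose covariance is approximately Sigma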

Calculating the degree matrix having the sparse representation of the adjacency matrix

I am trying to calculate the Laplacian matrix of a graph. I have calculated the sparse representation of the adjacency matrix, which is stored in a text file with dimensions N x 3, where each row has the form (i-th node, j-th node, weight). I load this file in Matlab with adj = spconvert(adj);. The next step is to calculate the degree matrix of this sparse matrix in order to perform the operation L = D - adj. How is it possible to calculate the degree matrix given the sparse adjacency matrix of the graph as input? To calculate the degree matrix, I compute the degree for every node:
for i=1:n % n is the number of nodes
degree(i) = length(find(adj(:,1) == i & adj(:,3) == 1));
end
However, how can I perform the subtraction of D and A?
Use the spdiags function to convert the degree vector into a sparse diagonal matrix, then subtract the adjacency matrix from the diagonal matrix to get the Laplacian. Example using your code:
adj = spconvert(adj);
for i=1:size(adj, 1)
degree(i) = CalcDegree(adj, i); % placeholder for your degree computation
end
D = spdiags(degree(:), 0, size(adj, 1), size(adj, 2)); % degree(:) ensures a column vector
L = D - adj;
By the way, your code for calculating the node degree may be incorrect.
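For reference, a common vectorised way to compute the degrees directly from the sparse adjacency matrix (a sketch assuming an unweighted, symmetric adjacency matrix; for a weighted graph you may want sum(adj, 2) instead):
degree = full(sum(adj ~= 0, 2)); % number of neighbours of each node
D = spdiags(degree, 0, size(adj, 1), size(adj, 2));
L = D - adj; % graph Laplacian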