Eigenfaces using PCA - MATLAB

I am trying to implement Principal Component Analysis (PCA) in MATLAB to extract features from an image. I have implemented the following code.
[Rows, Columns] = size(x); % find size of input matrix
m=mean(x); % find mean of input matrix
y=x-ones(size(x,1),1)*m; % normalise by subtracting mean
c=cov(y); % find covariance matrix
[V,D]=eig(c); % find eigenvectors (V) and eigenvalues (D) of covariance matrix
[D,idx] = sort(diag(D)); % sort eigenvalues in ascending order (diag extracts them from the diagonal matrix); idx stores the order used to reorder the eigenvectors
D = D(end:-1:1)';
V = V(:,idx(end:-1:1)); % put eigenvectors in order to correspond with eigenvalues
V2d=V(:,1:200); % keep the significant principal components (200 = OutputSize, an input variable)
prefinal=V2d'*y';
final=prefinal'; % final is normalised data projected onto eigenspace
imshow(final);
I want to know how I can view the 1st eigenface, 2nd eigenface, etc.
EDIT:
Here is the input image and the resulting eigen image: [images attached]

The first eigenface is the first eigenvector!
My guess is that, with your code:
eigenface1=reshape(V(:,1),rows,cols);
since, if your code is right, each eigenvector should be the same size as your input images, just unrolled into a vector. I am assuming that rows and cols are the dimensions of the image.
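To actually look at the first few of them, a small sketch along those lines (assuming, as above, that rows and cols are the image dimensions and that V comes from your code):
% Display the first four eigenfaces
for k = 1:4
    eigenface = reshape(V(:,k), rows, cols); % k-th eigenvector as an image
    subplot(2, 2, k);
    imshow(mat2gray(eigenface)); % rescale to [0,1] so it is visible
    title(sprintf('Eigenface %d', k));
end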

Related

Dimension of Filter in 3-D Convolution in MATLAB

The function to perform an N-dimensional convolution of arrays A and B in MATLAB is shown below:
C = convn(A,B) % returns the N-dimensional convolution of arrays A and B.
I am interested in a 3-D convolution with a Gaussian filter.
If A is a 3 x 5 x 6 matrix, what do the dimensions of B have to be?
The dimensions of B can be anything you want. There is no set restriction in terms of size. For the Gaussian filter, it can be 1D, 2D or 3D. In 1D, what will happen is that each row gets filtered independently. In 2D, what will happen is that each slice gets filtered independently. Finally, in 3D you will be doing what is expected in 3D convolution. I am assuming you would like a full 3D convolution, not just 1D or 2D.
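As a small illustration (a sketch of my own) of the 2D case: a 2D kernel has size 1 along the third dimension, so every slice is filtered on its own:
A = rand(3, 5, 6);
B2 = ones(2, 2) / 4;                 % simple 2D averaging kernel
C2 = convn(A, B2);                   % size is [4 6 6]: dim 3 untouched
d = C2(:,:,1) - conv2(A(:,:,1), B2); % compare slice 1 against plain conv2
max(abs(d(:)))                       % ~0 up to floating point error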
You may be interested in the output size of convn. If you refer to the documentation, given two N-dimensional matrices, for each dimension k of the output, if na_k is the size of dimension k of matrix A and nb_k is the size of dimension k of matrix B, then the size nc_k of dimension k of the output matrix C is:
nc_k = max([na_k + nb_k - 1, na_k, nb_k])
na_k + nb_k - 1 comes straight from convolution theory: the full output size in dimension k is simply the sum of the two sizes in that dimension minus 1. However, should this value be smaller than either na_k or nb_k, the output size still has to be large enough for either input matrix to fit inside it, which is why the final output size is bounded below by the sizes of both A and B.
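As a quick numerical check of this rule (example sizes of my choosing):
A = rand(3, 5, 6);
B = rand(2, 2, 2);
size(convn(A, B))   % returns [4 6 7], i.e. [3+2-1, 5+2-1, 6+2-1]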
To make this easier, you can set the size of the filter guided by the standard deviation of the distribution. I would like to refer you to my previous Stack Overflow post: By which measures should I set the size of my Gaussian filter in MATLAB?
This determines what the output size of a Gaussian filter should be given a standard deviation.
In 2D, the dimensions of the filter are N x N, such that N = ceil(6*sigma + 1) with sigma being the desired standard deviation. Therefore, you would allocate a 3D matrix of size N x N x N with N = ceil(6*sigma + 1);.
Therefore, the code you would want to use to create a 3D Gaussian filter would be something like this:
% Example input
A = rand(3, 5, 6);
sigma = 0.5; % Example
% Find size of Gaussian filter
N = ceil(6*sigma + 1);
% Define a grid of centered coordinates of odd size (2*floor(N/2)+1 points
% per dimension) so the filter is centered exactly on zero
[X, Y, Z] = meshgrid(-floor(N/2) : floor(N/2));
% Compute Gaussian filter - note normalization step
B = exp(-(X.^2 + Y.^2 + Z.^2) / (2.0*sigma^2));
B = B / sum(B(:));
% Convolve
C = convn(A, B);
One final note: if any dimension of the filter exceeds the corresponding dimension of the input matrix A, you will still get an output matrix whose size follows the nc_k rule above, but the border elements will be attenuated due to the implicit zero-padding.
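One more practical note: if you would rather keep the output the same size as A instead of the full size above, convn also accepts a shape argument:
C = convn(A, B, 'same'); % keeps the central part, so size(C) == size(A)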

Understanding PCA in MATLAB

What is the difference between the following two functions?
prepTransform.m
function [mu trmx] = prepTransform(tvec, comp_count)
% Computes transformation matrix to PCA space
% tvec - training set (one row represents one sample)
% comp_count - count of principal components in the final space
% mu - mean value of the training set
% trmx - transformation matrix to comp_count-dimensional PCA space
% this is the memory-hungry version
% commented out below is the version proper for a Win32 environment
tic;
mu = mean(tvec);
cmx = cov(tvec);
%cmx = zeros(size(tvec,2));
%f1 = zeros(size(tvec,1), 1);
%f2 = zeros(size(tvec,1), 1);
%for i=1:size(tvec,2)
%    f1(:,1) = tvec(:,i) - repmat(mu(i), size(tvec,1), 1);
%    cmx(i, i) = f1' * f1;
%    for j=i+1:size(tvec,2)
%        f2(:,1) = tvec(:,j) - repmat(mu(j), size(tvec,1), 1);
%        cmx(i, j) = f1' * f2;
%        cmx(j, i) = cmx(i, j);
%    end
%end
%cmx = cmx / (size(tvec,1)-1);
toc
[evec eval] = eig(cmx);
eval = sum(eval);
[eval evid] = sort(eval, 'descend');
evec = evec(:, evid(1:size(eval,2)));
% save 'nist_mu.mat' mu
% save 'nist_cov.mat' evec
trmx = evec(:, 1:comp_count);
pcaTransform.m
function [pcaSet] = pcaTransform(tvec, mu, trmx)
% tvec - matrix containing vectors to be transformed
% mu - mean value of the training set
% trmx - pca transformation matrix
% pcaSet - output set transformed to PCA space
pcaSet = tvec - repmat(mu, size(tvec,1), 1);
%pcaSet = zeros(size(tvec));
%for i=1:size(tvec,1)
%    pcaSet(i,:) = tvec(i,:) - mu;
%end
pcaSet = pcaSet * trmx;
Which one is actually doing PCA?
If one is doing PCA, what is the other one doing?
The first function, prepTransform, is actually doing the PCA on your training data, determining the new axes with which to represent your data in a lower dimensional space. It finds the eigenvectors of the covariance matrix of your data and orders them so that the eigenvector with the largest eigenvalue appears in the first column of the eigenvector matrix evec and the eigenvector with the smallest eigenvalue appears in the last column. Importantly, you can define how many dimensions you want to reduce the data down to by keeping only the first N columns of evec; this truncation is what is stored as trmx in the code. The variable N is defined by the comp_count input of prepTransform.
The second function, pcaTransform, finally transforms data that is defined within the same domain as your training data (but not necessarily the training data itself, although it can be) onto the lower dimensional space defined by the eigenvectors of the covariance matrix. To perform the reduction of dimensions, or dimensionality reduction as it is popularly known, you take your data, subtract the mean of each feature, and multiply by the matrix trmx. Note that prepTransform returning the mean of each feature in the vector mu is important so that you can mean-subtract your data when you finally call pcaTransform.
How to use these functions
To use these functions effectively, first determine the trmx matrix, which contains the principal components of your data, by defining how many dimensions you want to reduce your data down to; the call also returns the mean of each feature in mu:
N = 2; % Reduce down to two dimensions for example
[mu, trmx] = prepTransform(tvec, N);
Next, you can perform dimensionality reduction on any data defined within the same domain as tvec (or even tvec itself, though it doesn't have to be) by:
pcaSet = pcaTransform(tvec, mu, trmx);
In terms of vocabulary, pcaSet contains what are known as the principal scores of your data, which is the term used for your data transformed to the lower dimensional space.
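As an aside of my own: if you later want to map the reduced data back to the original feature space (only approximately, since dimensions were discarded), you can reproject with the transpose of trmx:
% Approximate reconstruction of the original features from the scores
reconstructed = pcaSet * trmx' + repmat(mu, size(pcaSet, 1), 1);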
If I can recommend something...
Finding PCA through the eigenvector approach is known to be numerically unstable. I highly recommend you use the Singular Value Decomposition via svd on the covariance matrix, where the V matrix of the result already gives you the eigenvectors sorted in the order corresponding to your principal components:
mu = mean(tvec, 1);
[~,~,V] = svd(cov(tvec));
Then perform the transformation by taking the mean-subtracted data and multiplying it by the first N columns of V:
N = 2;
X = bsxfun(@minus, tvec, mu);
pcaSet = X*V(:, 1:N);
X is the mean-subtracted data, which achieves the same thing as pcaSet = tvec - repmat(mu, size(tvec,1), 1);, except that you are not explicitly replicating the mean vector over each training example; bsxfun does that for you internally. As of MATLAB R2016b, implicit expansion lets you do this repetition without the explicit call to bsxfun:
X = tvec - mu;
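As a further aside of my own, you can avoid forming the covariance matrix altogether by taking the SVD of the centred data directly; its right singular vectors are the same principal axes:
X = bsxfun(@minus, tvec, mean(tvec, 1)); % centre each feature
[~, ~, V] = svd(X, 'econ');              % columns of V are the principal axes
pcaSet = X * V(:, 1:N);                  % principal scores, as before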
Further Reading
If you fully want to understand the code that was written and the theory behind what it's doing, I recommend the following two Stack Overflow posts that I have written that talk about the topic:
What does selecting the largest eigenvalues and eigenvectors in the covariance matrix mean in data analysis?
How to use eigenvectors obtained through PCA to reproject my data?
The first post brings the code you presented to light, performing PCA using the eigenvector approach. The second post touches on how you'd do it using the SVD towards the end of the answer. The answer I've written here is a mix of the two posts above.

I'm trying to find eigenvalues and vectors of a grayscale image and getting error "Matrix dimensions must agree"

The code is giving the error "Matrix dimensions must agree". What changes should I make?
%reading a image
I =imread('C:\Program Files\MATLAB\R2013a\New folder\fac.jpg');
m = mean(I,2);
I = double(I)- double(repmat(m,10,1));
%calculating covariance matrix
c=cov(I);
%calculating eigenvalues and eigenvectors
[eigenvalue,eigenvector]=eig(c);
First, make sure that I is a 2D matrix; this is necessary for cov to work. Secondly, use repmat(m,n,p), where n and p are chosen such that size(repmat(m,n,p)) == size(I).
Example
I =imread('myImg.jpg'); % 63x83x3 matrix containing 3D RGB information.
I = rgb2gray(I); % 3D RGB to 2D gray scale. Now I is a 63x83 matrix.
m = mean(I,2);
I = double(I)- double(repmat(m,1,83)); % 83 = size(I,2) for this image
c=cov(I);
[eigenvector,eigenvalue]=eig(c); % note: eig returns the eigenvectors first, then the diagonal eigenvalue matrix
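A hedged variation so the image width isn't hard-coded: size(I,2) reads the width, and on R2016b or later implicit expansion removes repmat entirely:
I = double(I);
I = I - repmat(mean(I, 2), 1, size(I, 2)); % works for any width
% or, on R2016b and later, simply:
% I = I - mean(I, 2);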

Parity check matrix of LDPC encoder and decoder in MATLAB

MATLAB provides powerful LDPC encoder and decoder objects in the latest versions. However the parity check matrix H, with dimension (N-K) by N, needs to satisfy the following condition:
"The last N−K columns in the parity check matrix H must be an invertible matrix in GF(2)"
Indeed, this condition is not easy to satisfy for most LDPC codes, although we know that there is at least one (N-K) by (N-K) invertible sub-block in the parity check matrix H if H has full rank.
I want to know whether there exists a fast algorithm or a MATLAB function which can find an invertible sub-block of H, provided H has full rank, so that we can use the MATLAB objects and Simulink blocks conveniently.
I tried permuting the columns of the H matrix until it satisfies the MATLAB requirement:
% Programmer: Taha Valizadeh
% Date: September 2016

%% Column Permutation
% Permute columns of a binary matrix until the rightmost square matrix is
% invertible over GF(2)

% matrix dimensions:
[~, n] = size(H);

% Initialization
HInvertible = H;
PermutorIndex = 1:n;
flag = true;
counter = 0;

% Initial report
disp('Creating a ParityCheck matrix which is suitable for the MATLAB COMM Toolbox')

% Permute columns
while flag
    % Check if the rightmost square matrix is invertible over GF(2)
    try
        EncoderObject = comm.LDPCEncoder(sparse(HInvertible));
        % The constructor succeeded, so the new matrix works
        fprintf(['ParityCheck matrix became suitable for the MATLAB LDPC encoder ',...
            'after ',num2str(counter),' permutations!\n'])
        flag = false; % Break the loop
    catch
        % Choose different columns for the rightmost part of the matrix
        counter = counter+1; % Permutation counter
        PermutorIndex = randperm(n);
        HInvertible = H(:,PermutorIndex);
    end
end
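One hedged usage note: keep PermutorIndex once the loop succeeds, since a matching decoder must be built from the same permuted matrix (object names as in the Communications System Toolbox):
% Build a matching encoder/decoder pair from the permuted matrix
Hp = H(:, PermutorIndex);
enc = comm.LDPCEncoder(sparse(Hp));
dec = comm.LDPCDecoder(sparse(Hp));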

Generating random weighted adjacency matrix in MATLAB

I would like to create a random adjacency matrix in MATLAB such that the total sum of the weights equals the number of edges. Then I want to find the Laplacian matrix using
L = diag(sum(A)) - A
and then plot the graph. Is there any way to do so?
Thanks in advance.
An adjacency matrix for an undirected graph is simply a square symmetric matrix.
If you have no constraints on the degree of the nodes, only on the weights, then I would suggest something like:
n = 100; % number of nodes in the graph (example value)
density = 1e-3; % a rough estimate of the amount of edges
A = sprand( n, n, density ); % generate adjacency matrix at random
% normalize weights to sum to num of edges
A = tril( A, -1 );
A = spfun( @(x) x .* (nnz(A)./sum(x)), A ); % scale the nonzeros so the weights sum to the number of edges
% make it symmetric (for undirected graph)
A = A + A.';
In this code I have used:
sprand to generate a random sparse matrix.
spfun to normalize the edge weights.
tril to extract only the lower half of the matrix.
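To finish the last step of the question, a minimal sketch (assuming R2015b+ for the graph object) that builds the Laplacian and plots the graph:
L = diag(sum(A)) - A; % Laplacian, as in the question
G = graph(A);         % undirected weighted graph from the adjacency matrix
plot(G);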