Using the SVD rather than the covariance matrix to calculate eigenfaces - MATLAB

I'm using the set of n = 40 faces from AT&T (http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html) to try and generate eigenfaces via the SVD.
First I calculate the average vector:
Then I subtract it from every face in the training set, reshape each difference into a (p*q) by 1 column vector, and stack these columns into a (p*q) by n matrix x. Finally I compute X = (1/sqrt(n))*x. (Here's where the issue is: all my results in X are rounded to 0, resulting in the black eigenface image shown below.)
Then I calculate the SVD of this matrix X and try to get the first eigenface from the first column of the left unitary matrix U by reshaping it back into a p by q matrix.
However, this is my result:
Can anyone spot my error in the code below? Any answer is much appreciated.
n = 40;
% read images
A = double(imread('faces_training/1.pgm'));
f(:, :, 1) = A;
for j = 2:n
    f(:, :, j) = double(imread(['faces_training/', num2str(j), '.pgm']));
    A = A + f(:, :, j);
end
% calculate average face
a = (1/n)*A;
% imshow(uint8(a))
for i = 1:n
    % subtract average from each image and store as a column vector
    x_vector(:, i) = reshape(f(:, :, i) - a, [], 1);
end
X = (1/sqrt(n))*x_vector;
% svd
[U, S, V] = svd(X);
B = reshape(U(:, 1), [size(a, 1) size(a, 2)]);
imshow(uint8(B))

I was doing the same thing and had the same problem. The short answer is that you have to normalize your eigenvector to get a usable image. Before normalizing, you'll notice the vector values are very close to 0 (probably because of how the SVD was computed), which is why they display as nearly black.
Anyway, apply this rescaling to each eigenvector you want to display:
newpixel[i,j] = (oldpixel[i,j] - min(oldpixel[:,j])) / (max(oldpixel[:,j]) - min(oldpixel[:,j]))
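A minimal sketch of that rescaling applied to the variables from the question's code (U and a as defined there); mapping the result to [0, 255] before the uint8 conversion is my own choice for display:
B = reshape(U(:, 1), [size(a, 1) size(a, 2)]);
B = (B - min(B(:))) / (max(B(:)) - min(B(:)));  % min-max normalization to [0, 1]
imshow(uint8(255*B))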


Volumetric 3D data plotting from 2D map in MATLAB?

I have a heat map and want to convert this 2D matrix into 3D volume/surface data points for further processing, not simply display it in 3D using surf.
What would be a good way to do this?
With a lot of help from this community I was able to get closer:
I shrunk the image to 45x45 px for simplicity.
I = (imread("TESTGREYPLASTIC.bmp"))./2 + 125;
Iinv = 255 - (imread("TESTGREYPLASTIC.bmp"))./2 - 80;
for i = 1:45
    for j = 1:45
        A(i, j, I(i,j)) = 1;
        A(i, j, Iinv(i,j)) = 1;
    end
end
volshow(A)
It's not ideal, but the matrix is what I wanted. Maybe the loop can be improved to run faster when dealing with 1200x1200 points.
How do I create a real closed surface now?
Following your conversation with @BoilermakerRV, I guess you are looking for one of the following two results:
A list of 3D points, where x and y are the pixel indices in the image and z is the value of the corresponding pixel. The result is an m*n by 3 matrix.
An m by n by 256 volume of zeros and ones, such that for the (i, j)-th pixel in the image, all voxels in the (i, j)-th pile of the volume are 0, except the one at I(i, j).
Take a look at the following example that generates both results:
close all; clc; clear variables;
I = rgb2gray(imread('data2.png'));
imshow(I), title('Data as image')
% generating mesh grid
[m, n] = size(I);
[X, Y] = meshgrid(1:n, 1:m);
% converting image to list of 3-d points
P = [Y(:), X(:), I(:)];
figure
scatter3(P(:, 1), P(:, 2), P(:, 3), 3, P(:, 3), '.')
colormap jet
title('Same data as a list of points in R^3')
% converting image to 256 layers of voxels
ind = sub2ind([m n 256], Y(:), X(:), I(:));
V = zeros(m, n, 256);
V(ind) = 1.0;
figure
h = slice(V, [250], [250], [71]) ;
[h.EdgeColor] = deal('none');
colormap winter
camlight
title('And finally, as a matrix of 0/1 voxels')
The contour plot that is shown can't be generated with "2D" data. It requires three inputs as follows:
[XGrid,YGrid] = meshgrid(-4:.1:4,-4:.1:4);
C = peaks(XGrid,YGrid);
contourf(XGrid,YGrid,C,'LevelStep',0.1,'LineStyle','none')
colormap('gray')
axis equal
Where XGrid, YGrid and C are all NxN matrices defining the X values, Y values and Z values for every point, respectively.
If you want this to be "3D", simply use surf:
surf(XGrid,YGrid,C)

Multiplying a 4D matrix by a vector, and collapsing 1 dimension

I have a question regarding the multiplication of a 4-dimensional object by a 1 dimensional object.
Effectively, I have a 4D object of size (15, 15, 3, 5).
I want to multiply out the 4th dimension by using a 5x1 vector, collapsing the last dimension to 1. Then I want to use squeeze to get a (15,15,3) sized object, again multiplying it by a 3x1 vector, leaving me with a 15x15 matrix.
I can do this in a loop, but that is quite costly. Can anyone give me suggestions how to do this without a loop?
For now the loop:
expectationCalc = reshape(mValueFunction(age+1, :, :, :, :), nGridAssets, nGridHumanCapital, nNetInterestRate, nShockstoHumanCapital);
for i = 1:nGridAssets
    for j = 1:nGridHumanCapital
        expectation(i,j) = mTransitionNetInterestRate(nNetIntRate, :) * (squeeze(expectationCalc(i,j,:,:)) * mTransitionShockHumanCapital(ShockHcapital, :)');
    end
end
If you reshape your 4D matrix to a 2D matrix, where the 2nd dimension is the one you want to reduce by dot product, and the 1st dimension contains all other dimensions, then you can apply a regular matrix multiplication. The result can then be reshaped to the original size (minus one dimension):
% Input data
M = randn(15,15,3,5);
v1 = randn(5,1);
v2 = randn(3,1);
% 1st multiplication
sz = size(M);
M = reshape(M,[],sz(end));
M = M * v1;
sz(end) = []; % We no longer have that last dimension
M = reshape(M,sz);
% 2nd multiplication
M = reshape(M,[],sz(end));
M = M * v2;
sz(end) = []; % We no longer have that last dimension
M = reshape(M,sz);

Curve fitting of complex variable in Matlab

I want to solve the system of equations shown in the image below,
[image: the matrix system]
where the components of the matrix A are complex numbers, the angle theta runs from 0 to 2*pi in m divisions, and n = 9. The known value is z = x + iy. Suppose the x and y components of z are
z =
0 1.0148
0.1736 0.9848
0.3420 0.9397
0.5047 0.8742
0.6748 0.8042
0.8419 0.7065
0.9919 0.5727
1.1049 0.4022
1.1757 0.2073
1.1999 0
1.1757 -0.2073
1.1049 -0.4022
0.9919 -0.5727
0.8419 -0.7065
0.6748 -0.8042
0.5047 -0.8742
0.3420 -0.9397
0.1736 -0.9848
0 -1.0148
How do you solve them iteratively? Notice that the value of the first component of the desired constants must equal 1. I am working with Matlab.
You can apply simple multilinear regression for complex valued data.
Step 1. Get the matrix ready for linear regression
Your linear system
A*alpha = Z,
written without matrices, becomes
a_j1*alpha_1 + a_j2*alpha_2 + ... + a_jn*alpha_n = z_j,   j = 1, ..., m,
which, using the known value alpha_1 = 1, rearranges to
a_j2*alpha_2 + ... + a_jn*alpha_n = z_j - a_j1.
If you rewrite it with matrices you get
R*[alpha_2; ...; alpha_n] = Y,   with R = A(:, 2:n) and Y = Z - A(:, 1).
Step 2. Apply multiple linear regression
Let the system above be
R*alpha = Y,
where R holds all columns of A except the first, alpha holds the unknown constants alpha_2, ..., alpha_n, and Y = Z - A(:, 1). Linear regression returns the best fit for alpha as
alpha = (R^* * R)^(-1) * R^* * Y,
where R^* is the conjugate transpose of R.
In MATLAB
Y = Z - A(:,1); % Calculate Y subtracting the first col of A from Z
R = A(:,:); R(:,1) = []; % Calculate R as an exact copy of A, just without first column
Rs = ctranspose(R); % Calculate R-star (conjugate transpose of R)
alpha = (Rs*R)^(-1)*Rs*Y; % Finally apply multiple linear regression
alpha = cat(1, 1, alpha); % Add alpha1 back, whose value is 1
or, if you prefer built-ins, have a look at the regress function:
Y = Z - A(:,1); % Calculate Y subtracting the first col of A from Z
R = A(:,:); R(:,1) = []; % Calculate R as an exact copy of A, just without first column
alpha = regress(Y, R); % Finally apply multiple linear regression
alpha = cat(1, 1, alpha); % Add alpha1 back, whose value is 1
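As a quick sanity check (assuming A and Z are defined as in the question), you can inspect the residual of the fit:
res = norm(A*alpha - Z);  % should be small if the regression fits the data well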

Best way to perform a convolution with a new vector for each image?

I'm trying to figure out the best way to perform a kind of convolution.
I have a 3D matrix I = [N x M x P] and an array of filters S = [1 x 1 x K x P]. For each p-th frame (third dimension) of my 3D matrix, I want to return the valid convolution between I(:, :, p-K/2:p+K/2) and S(1, 1, :, p). Do you see a way to do this?
In fact, in terms of computation the number of operations is very close to a standard convolution; the difference is that I need to change the second matrix for each frame...
This is the method I currently use:
% I = 3D matrix [N x M x P]
% S = Filter [1 x 1 x K x P] (K is an odd number)
% OUT = Result
[N, M, P] = size(I);  % Data size
K = size(S, 3);       % Filter length
win = (K-1)/2;        % Window half-width
OUT = zeros(size(I)); % Pre-allocation
for p = win+1:P-win
    OUT(:, :, p) = convn(I(:, :, p-win:p+win), S(1, 1, :, p), 'valid'); % Perform convolution
end
At the end we have the same number of operations as a standard convolution; the only difference is that the filter changes for each frame...
Any ideas?
Thanks ;)
So you want to convolve an NxMxK sub-image with a 1x1xKx1 kernel, and then only take the valid part, which is an NxM image.
Let's look at this operation for a single (x,y) location. This 1D convolution, of which you only keep 1 value, is equivalent to the dot product of the sub-image and your kernel:
OUT(x,y,p) = squeeze(I(x,y,p-win:p+win))' * squeeze(S(1,1,:,p))
You can vectorize this across all (x,y) by reshaping the sub-image of I into an (N*M)xK matrix (K along the horizontal dimension) and keeping S(1,1,:,p) as a Kx1 column vector; a sketch is shown below.
Repeating this across all p is easiest to implement with a loop, as you do now. The alternative is to create a larger S where each column is shifted by one, so you can do a single matrix product between the two, but that S is also expensive to create and presumably requires a loop too. I don't think avoiding loops is as pressing in MATLAB as it used to be (it has gotten a lot faster over the years), and the product itself is probably the most expensive part of the algorithm anyway.
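A minimal sketch of that per-frame dot product, assuming I, S, K, win, N, M and P are defined as in the question's code:
OUT = zeros(N, M, P);                               % pre-allocation
for p = win+1:P-win
    block = reshape(I(:, :, p-win:p+win), N*M, K);  % (N*M) x K sub-image
    kernelP = squeeze(S(1, 1, :, p));               % K x 1 filter for this frame
    OUT(:, :, p) = reshape(block * kernelP, N, M);  % one dot product per pixel
end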

How to sum a sub-tensor of high dimention tensor in Matlab?

We are given a D-dimensional tensor, represented as a vector of size n^D.
The vector represents the distribution of a random variable X \in {1, ..., n}^D; that is, the (i_1, i_2, ..., i_D) entry of the tensor is the probability that X_1 = i_1, X_2 = i_2, ..., X_D = i_D.
I need to compute, for each dimension d and each value i \in [n], the marginal distribution P(X_d = i).
In other words, P(X_d = i) is the sum of n^(D-1) entries of the vector.
For example, if D=2 and n=4, we have a vector x of size (16,1) and the probability of the first dimension being equal to 1 is
P(X_1 = 1) = x(1) + x(2) + x(3) + x(4)
The probability of the second dimension being equal to 3 is
P(X_2 = 3) = x(3) + x(7) + x(11) + x(15)
I'm writing Matlab code that needs to compute these marginal distributions, but I'm not familiar enough with Matlab to do it in a simple way (it is doable using some ugly recursion, but there has to be a better option).
To calculate P(X_k=z) for a D-dimensional matrix you can use
xD = reshape(x, n*ones(1,D));
B = permute(xD, [k setdiff(1:D, k)]);
P = sum(B(z,:));
It first reshapes x into a D-dimensional array, brings the dimension of interest k to the front, then picks the z-th slice along that dimension and sums over its elements.
Mohsen Nosratinia's answer would be my first option. As an alternative, it can be done without reshaping or permuting dimensions, which can result in faster code:
k = 2; %// chosen dimension
z = 3; %// chosen value (along the k-th dimension)
result = sum(x(mod(floor((0:end-1)/n^(k-1)), n)==z-1));
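If you need the whole marginal distribution P(X_k = i) for every i at once rather than a single value, a minimal sketch (assuming x, n, D and k as above, and the same column-major reshape convention as the first answer) is:
xD = reshape(x, n*ones(1, D));   % back to an n x n x ... x n tensor
marginal = xD;
for d = setdiff(1:D, k)
    marginal = sum(marginal, d); % sum out every dimension except k
end
marginal = marginal(:);          % n x 1 vector of P(X_k = 1), ..., P(X_k = n)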