Computing object statistics from the second central moments - matlab

I'm currently working on writing a version of MATLAB's regionprops function for GNU Octave. I have most of it implemented, but I'm still struggling with a few parts. I had previously asked about the second central moments of a region.
That was helpful theoretically, but I'm having trouble actually implementing the suggestions. I get results wildly different from MATLAB's (or from common sense, for that matter) and really don't understand why.
Consider this test image:
We can see it slants at 45 degrees from the X axis, with minor and major axes of 30 and 100 respectively.
Running it through MATLAB's regionprops function confirms this:
MajorAxisLength: 101.3362
MinorAxisLength: 32.2961
Eccentricity: 0.9479
Orientation: -44.9480
Meanwhile, I don't even get the axes right. I'm trying to use these formulas from Wikipedia.
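For reference, the formulas in question (as given in the Wikipedia article on image moments) build a covariance matrix from the normalized second central moments and take its eigenvalues:

\mu'_{20} = \frac{\mu_{20}}{\mu_{00}}, \quad \mu'_{02} = \frac{\mu_{02}}{\mu_{00}}, \quad \mu'_{11} = \frac{\mu_{11}}{\mu_{00}}

\operatorname{cov} = \begin{bmatrix} \mu'_{20} & \mu'_{11} \\ \mu'_{11} & \mu'_{02} \end{bmatrix}, \qquad \lambda_{1,2} = \frac{\mu'_{20} + \mu'_{02}}{2} \pm \frac{\sqrt{4\,\mu'^{2}_{11} + (\mu'_{20} - \mu'_{02})^{2}}}{2}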
My code so far is:
raw_moments.m:
function outmom = raw_moments(im,i,j)
total = 0;
total = int32(total);
im = int32(im);
[height,width] = size(im);
for x = 1:width;
for y = 1:height;
amount = (x ** i) * (y ** j) * im(y,x);
total = total + amount;
end;
end;
outmom = total;
central_moments.m:
function cmom = central_moments(im,p,q);
total = 0;
total = double(total);
im = int32(im);
rawm00 = raw_moments(im,0,0);
xbar = double(raw_moments(im,1,0)) / double(rawm00);
ybar = double(raw_moments(im,0,1)) / double(rawm00);
[height,width] = size(im);
for x = 1:width;
for y = 1:height;
amount = ((x - xbar) ** p) * ((y - ybar) ** q) * double(im(y,x));
total = total + double(amount);
end;
end;
cmom = double(total);
And here's my code attempting to use these. I include comments for the values I get at each step:
inim = logical(imread('135deg100by30ell.png'));
cm00 = central_moments(inim,0,0); % 2567
up20 = central_moments(inim,2,0) / cm00; % 353.94
up02 = central_moments(inim,0,2) / cm00; % 352.89
up11 = central_moments(inim,1,1) / cm00; % 288.31
covmat = [up20, up11; up11, up02];
%[ 353.94 288.31
% 288.31 352.89 ]
eigvals = eig(covmat); % [65.106 641.730]
minoraxislength = eigvals(1); % 65.106
majoraxislength = eigvals(2); % 641.730
I'm not sure what I'm doing wrong. I seem to be following those formulas correctly, but my results are nonsense. I haven't found any obvious errors in my moment functions, although honestly my understanding of moments isn't the greatest to begin with.
Can anyone see where I'm going astray? Thank you very much.

EDIT:
According to Wikipedia:
the eigenvalues [...] are proportional
to the squared length of the eigenvector axes.
which is explained by:
axisLength = 4 * sqrt(eigenValue)
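A short way to see the factor of 4: for a solid ellipse with semi-major axis a, the normalized second central moment along that axis is a^2/4, so

\lambda = \frac{a^{2}}{4} \;\Rightarrow\; a = 2\sqrt{\lambda} \;\Rightarrow\; \text{MajorAxisLength} = 2a = 4\sqrt{\lambda}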
Shown below is my version of the code (I vectorized the moments functions):
my_regionprops.m
function props = my_regionprops(im)
cm00 = central_moments(im, 0, 0);
up20 = central_moments(im, 2, 0) / cm00;
up02 = central_moments(im, 0, 2) / cm00;
up11 = central_moments(im, 1, 1) / cm00;
covMat = [up20 up11 ; up11 up02];
[V,D] = eig( covMat );
[D,order] = sort(diag(D), 'descend'); %# sort cols high to low
V = V(:,order);
%# D(1) = (up20+up02)/2 + sqrt(4*up11^2 + (up20-up02)^2)/2;
%# D(2) = (up20+up02)/2 - sqrt(4*up11^2 + (up20-up02)^2)/2;
props = struct();
props.MajorAxisLength = 4*sqrt(D(1));
props.MinorAxisLength = 4*sqrt(D(2));
props.Eccentricity = sqrt(1 - D(2)/D(1));
%# props.Orientation = -atan(V(2,1)/V(1,1)) * (180/pi); %# sign?
props.Orientation = -atan(2*up11/(up20-up02))/2 * (180/pi);
end
function cmom = central_moments(im,i,j)
rawm00 = raw_moments(im,0,0);
centroids = [raw_moments(im,1,0)/rawm00 , raw_moments(im,0,1)/rawm00];
cmom = sum(sum( (([1:size(im,1)]-centroids(2))'.^j * ...
([1:size(im,2)]-centroids(1)).^i) .* im ));
end
function outmom = raw_moments(im,i,j)
outmom = sum(sum( ((1:size(im,1))'.^j * (1:size(im,2)).^i) .* im ));
end
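A possible refinement of the Orientation line: dividing by up20-up02 breaks down when the two moments are equal, whereas atan2 avoids that division and appears to give the same sign convention on this test image (worth verifying against regionprops on a few more shapes):
props.Orientation = -atan2(2*up11, up20 - up02)/2 * (180/pi);   % hypothetical alternative to the atan-based line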
... and the code to test it:
test.m
I = imread('135deg100by30ell.png');
I = logical(I);
>> p = regionprops(I, {'Eccentricity' 'MajorAxisLength' 'MinorAxisLength' 'Orientation'})
p =
MajorAxisLength: 101.34
MinorAxisLength: 32.296
Eccentricity: 0.94785
Orientation: -44.948
>> props = my_regionprops(I)
props =
MajorAxisLength: 101.33
MinorAxisLength: 32.275
Eccentricity: 0.94792
Orientation: -44.948
%# these values are by hand only ;)
subplot(121), imshow(I), imdistline(gca, [17 88],[9 82]);
subplot(122), imshow(I), imdistline(gca, [43 67],[59 37]);

Are you sure about the core of your raw_moments function? MATLAB and Octave indices start at 1, while the moment formulas assume coordinates starting at 0, so you might try
amount = ((x-1) ** i) * ((y-1) ** j) * im(y,x);
This doesn't seem like enough to cause the problems you're seeing, but it might be at least part of it.

Related

How to reduce the time consumed by the for loop?

I am trying to implement a simple pixel-level center-surround image enhancement. The center-surround technique uses statistics between the center pixel of a window and its surrounding neighborhood to decide what enhancement to apply. In the code given below I compare the center pixel with the average of its surround and, based on that, switch between two cases to enhance the contrast. The code I have written is as follows:
im = normalize8(im,1); %to set the range of pixel from 0-255
s1 = floor(K1/2); %K1 is the size of the window for surround
M = 1000; %is a constant value
out1 = padarray(im,[s1,s1],'symmetric');
out1 = CE(out1,s1,M);
out = (out1(s1+1:end-s1,s1+1:end-s1));
out = normalize8(out,0); %to set the range of pixel from 0-1
function [out] = CE(out,s,M)
B = 255;
out1 = out;
for i = s+1 : size(out,1) - s
for j = s+1 : size(out,2) - s
temp = out(i-s:i+s,j-s:j+s);
Yij = out1(i,j);
Sij = (1/(2*s+1)^2)*sum(sum(temp));
if (Yij>=Sij)
Aij = A(Yij-Sij,M);
out1(i,j) = ((B + Aij)*Yij)/(Aij+Yij);
else
Aij = A(Sij-Yij,M);
out1(i,j) = (Aij*Yij)/(Aij+B-Yij);
end
end
end
out = out1;
function [Ax] = A(x,M)
if x == 0
Ax = M;
else
Ax = M/x;
end
The code does the following things:
1) Normalize the image to the 0-255 range and pad it with additional elements to perform the windowing operation.
2) Call the function CE.
3) In the function CE, obtain the windowed image (temp).
4) Find the average of the window (Sij).
5) Compare the center of the window (Yij) with the average value (Sij).
6) Based on the result of the comparison, perform one of the two enhancement operations.
7) Finally set the range back to 0-1.
I have to run this for multiple window sizes (K1, K2, K3, etc.) and the images are of size 1728×2034. When the window size is 100, the time consumed is very high.
Can I use vectorization at some stage to reduce the time taken by the for loops?
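For reference, the key idea behind the convolution-based suggestions further down in this thread is that the window mean Sij is just a box filter, so it can be computed for every pixel in one pass; a minimal sketch, assuming a double image im and window half-width s:
kernel = ones(2*s+1) / (2*s+1)^2;   % same normalization as (1/(2*s+1)^2)*sum(sum(temp))
S = conv2(im, kernel, 'same');      % Sij for every pixel at once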
The profiler result (for window size 21) is as follows:
The profiler result (for window size 100) is as follows:
I have changed the code of my function and have written it without the sub-function. The code is as follows:
function [out] = CE(out,s,M)
B = 255;
Aij = zeros(1,2);
out1 = out;
n_factor = (1/(2*s+1)^2);
for i = s+1 : size(out,1) - s
for j = s+1 : size(out,2) - s
temp = out(i-s:i+s,j-s:j+s);
Yij = out1(i,j);
Sij = n_factor*sum(sum(temp));
if Yij-Sij == 0
Aij(1) = M;
Aij(2) = M;
else
Aij(1) = M/(Yij-Sij);
Aij(2) = M/(Sij-Yij);
end
if (Yij>=Sij)
out1(i,j) = ((B + Aij(1))*Yij)/(Aij(1)+Yij);
else
out1(i,j) = (Aij(2)*Yij)/(Aij(2)+B-Yij);
end
end
end
out = out1;
There is a slight improvement in speed, from 93 sec to 88 sec. Suggestions for any other improvements to my code are welcome.
I have tried to incorporate the suggestions given to replace the sliding window with a convolution and then vectorize the rest of it. The code below is my implementation, and I'm not getting the expected result.
function [out_im] = CE_conv(im,s,M)
B = 255;
temp = ones(2*s,2*s);
temp = temp ./ numel(temp);
out1 = conv2(im,temp,'same');
out_im = im;
Aij = im-out1; %same as Yij-Sij
Aij1 = out1-im; %same as Sij-Yij
Mij = Aij;
Mij(Aij>0) = M./Aij(Aij>0); % if Yij>Sij, Mij = M/(Yij-Sij)
Mij(Aij<0) = M./Aij1(Aij<0); % if Yij<Sij, Mij = M/(Sij-Yij)
Mij(Aij==0) = M; % if Yij-Sij == 0 Mij = M;
out_im(Aij>=0) = ((B + Mij(Aij>=0)).*im(Aij>=0))./(Mij(Aij>=0)+im(Aij>=0));
out_im(Aij<0) = (Mij(Aij<0).*im(Aij<0))./ (Mij(Aij<0)+B-im(Aij<0));
I am not able to figure out where I'm going wrong.
A detailed explanation of what I'm trying to implement is given in the following paper:
Vonikakis, Vassilios, and Ioannis Andreadis. "Multi-scale image contrast enhancement." In Control, Automation, Robotics and Vision, 2008. ICARCV 2008. 10th International Conference on, pp. 856-861. IEEE, 2008.
I've tried to see if I could get those times down by processing with colfilt and nlfilter, since both are usually much faster than for-loops for sliding-window image processing.
Both worked fine for relatively small windows. For an image of 2048x2048 pixels and a window of 10x10, the solution with colfilt takes about 5 seconds (on my personal computer). With a window of 21x21 the time jumped to 27 seconds, but that is still a relative improvement on the times shown in the question. Unfortunately I don't have enough memory to run colfilt with windows of 100x100, but the solution with nlfilter works, though it takes about 120 seconds.
Here is the code. (Note that colfilt with the 'sliding' option passes each window to the function as a column of a matrix, which is why enhancematrix operates column-wise.)
Solution with colfilt:
function outval = enhancematrix(inputmatrix,M,B)
%Inputmatrix is a 2D matrix or column vector, outval is a 1D row vector.
% If inputmatrix is made of integers...
inputmatrix = double(inputmatrix);
%1. Compute S and Y
normFactor = 1 / (size(inputmatrix,1) + 1).^2; %Size of column.
S = normFactor*sum(inputmatrix,1); % Sum over the columns.
Y = inputmatrix(ceil(size(inputmatrix,1)/2),:); % Center row.
% So far we have all S and Y, one value per column.
%2. Compute A(abs(Y-S))
A = Afunc(abs(S-Y),M);
% And all A: one value per column.
%3. The tricky part. If Y(i)-S(i) > 0 do something.
doPositive = (Y > S);
doNegative = ~doPositive;
outval = zeros(1,size(inputmatrix,2));
outval(doPositive) = (B + A(doPositive) .* Y(doPositive)) ./ (A(doPositive) + Y(doPositive));
outval(doNegative) = (A(doNegative) .* Y(doNegative)) ./ (A(doNegative) + B - Y(doNegative));
end
function out = Afunc(x,M)
% Input x is a row vector. Output is another row vector.
out = x;
out(x == 0) = M;
out(x ~= 0) = M./x(x ~= 0);
end
And to call it, simply do:
M = 1000; B = 255; enhancenow = @(x) enhancematrix(x,M,B);
w = 21 % windowsize
result = colfilt(inputImage,[w w],'sliding',enhancenow);
Solution with nlfilter:
function outval = enhanceimagecontrast(neighbourhood,M,B)
%1. Compute S and Y
normFactor = 1 / (length(neighbourhood) + 1).^2;
S = normFactor*sum(neighbourhood(:));
Y = neighbourhood(ceil(size(neighbourhood,1)/2),ceil(size(neighbourhood,2)/2));
%2. Compute A(abs(Y-S))
test = (Y>=S);
A = Afunc(abs(Y-S),M);
%3. Return outval
if test
outval = ((B + A) * Y) / (A + Y);
else
outval = (A * Y) / (A + B - Y);
end
function aval = Afunc(x,M)
if (x == 0)
aval = M;
else
aval = M/x;
end
And to call it, simply do:
M = 1000; B = 255; enhancenow = @(x) enhanceimagecontrast(x,M,B);
w = 21 % windowsize
result = nlfilter(inputImage,[w w], enhancenow);
I didn't spend much time checking that everything is 100% correct, but I did see some nice contrast enhancement (hair looks particularly nice).
This answer is the implementation that was suggested by Peter. I debugged it and am presenting the final working version of the fast implementation.
function [out_im] = CE_conv(im,s,M)
B = 255;
im = ( im - min(im(:)) ) ./ ( max(im(:)) - min(im(:)) )*255;
h = ones(s,s)./(s*s);
out1 = imfilter(im,h,'conv');
out_im = im;
Aij = im-out1; %same as Yij-Sij
Aij1 = out1-im; %same as Sij-Yij
Mij = Aij;
Mij(Aij>0) = M./Aij(Aij>0); % if Yij>Sij Mij = M/(Yij-Sij);
Mij(Aij<0) = M./Aij1(Aij<0); % if Yij<Sij Mij = M/(Sij-Yij);
Mij(Aij==0) = M; % if Yij-Sij == 0 Mij = M;
out_im(Aij>=0) = ((B + Mij(Aij>=0)).*im(Aij>=0))./(Mij(Aij>=0)+im(Aij>=0));
out_im(Aij<0) = (Mij(Aij<0).*im(Aij<0))./ (Mij(Aij<0)+B-im(Aij<0));
out_im = ( out_im - min(out_im(:)) ) ./ ( max(out_im(:)) - min(out_im(:)) );
To call this, use the following code:
I = imread('pout.tif');
w_size = 51;
M = 4000;
output = CE_conv(I(:,:,1),w_size,M);
The output for the 'pout.tif' image is given below
The execution time for the bigger image with a 100×100 block size is around 5 seconds with this implementation.

Matlab Genetic Algorithm for a non-genetic case

I've never used optimization tools, but I think I have to now, so I'm a bit lost. After using the answer given by @A. Donda, I noticed that maybe it is not the best solution, because every time I run the function it gives a different matrix 'pares', and most of the time it says that I need more evaluations. So I was thinking that maybe the answer to my problem is genetic algorithm optimization, but once again I do not know how to work with it.
My first problem is described below, and the answer by @A. Donda is in the only answer post. I really need this optimization done and I don't know how to proceed in this case with the GA tools.
Thank you so much in advance again, and thank you @A. Donda for your answer.
As asked, I have tried to put here the code that I was trying to explain; I hope it comes across:
function opt_pares
clear all; clc; close all;
h = randi(24,8760,1);
nd = randi(365,8760,1);
veic = randi(333,8760,1);
max_veic = max(veic);
veicN = veic./max_veic;
Gh = randi(500,8760,1);
Dh = randi(500,8760,1);
Ih = Gh-Dh;
A = randi([300 800], 27,1);
max_Gh = max(Gh);
max_Dh = max(Dh);
max_Ih = max(Ih);
lat = 70;
HRA =15.*(h-12);
decl = 23.27*sind(360*(284+nd)/365);
Ii = zeros(8760,27);
Di = zeros(8760,27);
Gi = zeros(8760,27);
pares = randi([0,90],27,2);
inclin = pares(:,1);
azim = pares(:,2);
% for MATRIZC
for n=1:27
Ii(:,n) = Ih.*(sind(decl).*sind(lat).*cosd(inclin(n))-sind(decl).*cosd(lat).*sind(inclin(n)).*cosd(azim(n))+cosd(decl).*cosd(lat).*cosd(inclin(n)).*cosd(HRA)+cosd(decl).*sind(lat).*sind(inclin(n)).*cosd(azim(n)).*cosd(HRA)+cosd(decl).*sind(inclin(n)).*sind(azim(n)).*sind(HRA));
Di(:,n) = 0.5*Dh.*(1+cosd(inclin(n)));
Gi(:,n) = (Ii(:,n)+Di(:,n))*A(n,1);
end
Gparque = sum(Gi,2);
max_Gparque = max(Gparque);
GparqueN = Gparque./max_Gparque;
RMSE = sqrt(mean((GparqueN-veicN).^2));
% end
end
I don't know if it is possible, but maybe this time I can be more precise.
My main goal is to achieve the best (lowest) RMSE possible. To do so I have to create a matrix ('pares') where each line contains a pair of values (one value from each column).
These values have to be within a certain range (0-90). With each of these 27 pairs I have to calculate 'Ii'/'Gi'/'Di', giving me a matrix of size 8760x27.
Then I sum 'Gi' to get 'Gparque' (a vector of 8760x1) and finally I calculate 'RMSE'. Once RMSE is calculated, I have to modify the matrix 'pares' to other values that can give a better RMSE. Since there are many combinations of 27 pairs within the 0-90 range, I need a method that optimizes this search for the minimum RMSE.
The part that is commented out in the code (the loop over 'pares') is the thing I have no idea how to do, because I have to change the values of 'pares' according to some optimization criterion that approaches the minimum of RMSE.
I hope this time I have explain this doubt better.
Thank you very much!
OK, so here is an attempt at an answer. I'm not sure how useful the results will be in the end, because I don't understand the underlying problem and I don't have real data to test it with.
You were right that you need an optimization algorithm; your problem appears to be more complex than simple linear algebra. For the optimization I use the function fminsearch (part of core MATLAB).
First the function whose value is to be optimized (the objective function) needs to be defined. Based on your code, this is
function RMSE = fun(pares)
inclin = pares(:,1);
azim = pares(:,2);
Ii = zeros(8760,27);
Di = zeros(8760,27);
Gi = zeros(8760,27);
for n=1:27
Ii(:,n) = Ih.*(sind(decl).*sind(lat).*cosd(inclin(n))-sind(decl).*cosd(lat).*sind(inclin(n)).*cosd(azim(n))+cosd(decl).*cosd(lat).*cosd(inclin(n)).*cosd(HRA)+cosd(decl).*sind(lat).*sind(inclin(n)).*cosd(azim(n)).*cosd(HRA)+cosd(decl).*sind(inclin(n)).*sind(azim(n)).*sind(HRA));
Di(:,n) = 0.5*Dh.*(1+cosd(inclin(n)));
Gi(:,n) = (Ii(:,n)+Di(:,n))*A(n,1);
end
Gparque = sum(Gi,2);
max_Gparque = max(Gparque);
GparqueN = Gparque./max_Gparque;
RMSE = sqrt(mean((GparqueN-veicN).^2));
end
Now we can call
pares_opt = fminsearch(@fun, randi([0,90],27,2))
using random initialization. The optimization takes quite a while because the objective function is not very efficiently implemented. Here's a vectorized version that does the same:
% precompute
cHRA = cosd(HRA);
sHRA = sind(HRA);
sdecl = sind(decl);
cdecl = cosd(decl);
slat = sind(lat);
clat = cosd(lat);
function RMSE = fun(pares)
% precompute
cinclin = cosd(pares(:,1))';
sinclin = sind(pares(:,1))';
cazim = cosd(pares(:,2))';
sazim = sind(pares(:,2))';
Ii = bsxfun(@times, Ih, ...
sdecl * (slat * cinclin - clat * sinclin .* cazim) ...
+ (cdecl .* cHRA) * (clat * cinclin + slat * sinclin .* cazim) ...
+ (cdecl .* sHRA) * (sinclin .* sazim));
Di = 0.5 * Dh * (1 + cinclin);
Gi = (Ii + Di) * diag(A);
Gparque = sum(Gi,2);
max_Gparque = max(Gparque);
GparqueN = Gparque./max_Gparque;
RMSE = sqrt(mean((GparqueN-veicN).^2));
end
We have not yet implemented the constraint for pares to lie within [0, 90]. A crude way to do this is to insert these lines:
if any(pares(:) < 0) || any(pares(:) > 90)
RMSE = inf;
return
end
at the beginning of the objective function.
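A smoother alternative, if the Optimization Toolbox is available, is to let fmincon enforce the box constraints directly instead of returning inf; a sketch, assuming the same nested objective fun:
lb = zeros(27,2);  ub = 90*ones(27,2);
pares_opt = fmincon(@fun, randi([0,90],27,2), [], [], [], [], lb, ub)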
Putting it all together:
function Raquel
h = randi(24,8760,1);
nd = randi(365,8760,1);
veic = randi(333,8760,1);
max_veic = max(veic);
veicN = veic./max_veic;
Gh = randi(500,8760,1);
Dh = randi(500,8760,1);
Ih = Gh-Dh;
A = randi([300 800], 27,1);
lat = 70;
HRA =15.*(h-12);
decl = 23.27*sind(360*(284+nd)/365);
% precompute
cHRA = cosd(HRA);
sHRA = sind(HRA);
sdecl = sind(decl);
cdecl = cosd(decl);
slat = sind(lat);
clat = cosd(lat);
pares_opt = fminsearch(@fun, randi([0,90],27,2))
function RMSE = fun(pares)
if any(pares(:) < 0) || any(pares(:) > 90)
RMSE = inf;
return
end
% precompute
cinclin = cosd(pares(:,1))';
sinclin = sind(pares(:,1))';
cazim = cosd(pares(:,2))';
sazim = sind(pares(:,2))';
Ii = bsxfun(@times, Ih, ...
sdecl * (slat * cinclin - clat * sinclin .* cazim) ...
+ (cdecl .* cHRA) * (clat * cinclin + slat * sinclin .* cazim) ...
+ (cdecl .* sHRA) * (sinclin .* sazim));
Di = 0.5 * Dh * (1 + cinclin);
Gi = (Ii + Di) * diag(A);
Gparque = sum(Gi,2);
max_Gparque = max(Gparque);
GparqueN = Gparque./max_Gparque;
RMSE = sqrt(mean((GparqueN-veicN).^2));
end
end
With simulated data, if I run the optimization twice on the same randomized data but different initial values I get different solutions. This is an indication that there is more than one local minimum of the objective function. Hopefully, this will not be the case with real data.
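Since the question title asks about genetic algorithms specifically: if the Global Optimization Toolbox is available, ga can be pointed at the same objective. It works on a row vector of decision variables and supports bound constraints directly; a hypothetical sketch, assuming the nested fun above:
nvars = 27*2;
lb = zeros(1,nvars);  ub = 90*ones(1,nvars);
fun_ga = @(x) fun(reshape(x, 27, 2));   % ga passes a 1-by-nvars row vector
pares_opt = reshape(ga(fun_ga, nvars, [], [], [], [], lb, ub), 27, 2)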

Has fminsearch too many parameters?

I run this code in MATLAB to minimize the parameters of my function real_egarchpartial with fminsearch:
data = xlsread('return_cc_in.xlsx');
SPY = data(:,25);
dailyrange = xlsread('DR_in.xlsx');
drSPY= dailyrange(:,28);
startingVals = [mean(SPY); 0.041246; 0.70121; 0.05; 0.04; 0.45068; -0.1799; 1.0375; 0.06781; 0.070518];
T = size(SPY,1);
options = optimset('fminsearch');
options.Display = 'iter';
estimates = fminsearch(@real_egarchpartial, startingVals, options, SPY, drSPY);
[ll, lls, u]=real_egarchpartial(estimates, SPY, drSPY);
And I get this message:
Exiting: Maximum number of function evaluations has been exceeded
- increase MaxFunEvals option.
I put in the original starting values, so I assumed they are correct. I used fminsearch and not fmincon because my function has no constraints, and with fminunc my function produces a lot of red error messages.
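For reference, the limits mentioned in the exit message can be raised through optimset before calling fminsearch; a minimal sketch (assuming more evaluations are what is actually needed, which is not guaranteed):
options = optimset('fminsearch');
options.Display = 'iter';
options.MaxFunEvals = 1e5;   % default is 200*numberOfVariables
options.MaxIter = 1e5;
estimates = fminsearch(@(p) real_egarchpartial(p, SPY, drSPY), startingVals, options);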
The real_egarchpartial function is the following:
function [ll,lls,lh] = real_egarchpartial(parameters, data, x_rk)
mu = parameters(1);
omega = parameters(2);
beta = parameters(3);
tau1 = parameters(4);
tau2 = parameters(5);
gamma = parameters(6);
csi = parameters(7);
phi = parameters(8);
delta1 = parameters(9);
delta2 = parameters(10);
%Data and h are T by 1 vectors
T = size(data,1);
eps = data-mu;
lh = zeros(T,1);
h = zeros(T,1);
u = zeros(T,1);
%Must use a backcast to start the algorithm
h(1)=var(data);
lh(1) = log(h(1));
z= eps/sqrt(h(1));
u(1) = rand(1);
lxRK = log(x_rk);
for t = 2:T;
lh(t) = omega + beta*lh(t-1) + tau1*z(t-1) + tau2*((z(t-1).^2)-1)+ gamma*u(t-1);
h(t)=exp(lh(t));
z = eps/sqrt(h(t));
end
for t = 2:T
u(t)= lxRK(t) - csi - phi*h(t) - delta1*z(t) - delta2*((z(t).^2)-1);
end
lls = 0.5*(log(2*pi) + lh + eps.^2./h);
ll = sum(lls);
Could someone explain what is wrong? Is there a more efficient function for my estimation? Any help will be appreciated! Thank you.

Implementation of shadow free 1d invariant image

I implemented a method for removing shadows based on invariant color features, as described in the paper Entropy Minimization for Shadow Removal. My implementation sometimes yields similar computational results, but they are always somewhat off, and my grayscale image is blocky, maybe as a result of incorrectly taking the geometric mean.
Here is an example plot of the information potential for the horse image in the paper, as well as my invariant image. Multiply the x-axis by 3 to get theta (which goes from 0 to 180):
And here is the grayscale image my code outputs for the correct maximum theta (mine is off by 10):
You can see the blockiness that their image doesn't have:
Here is their information potential:
When dividing by the geometric mean, I have tried using NaN and thresholding the image so the smallest possible value is .01, but it doesn't seem to change my output.
Here is my code:
I = im2double(imread(strname));
[m,n,d] = size(I);
I = max(I, .01);
chrom = zeros(m, n, 3, 'double');
for i = 1:m
for j = 1:n
% if ((I(i,j,1)*I(i,j,2)*I(i,j,3))~= 0)
chrom(i,j, 1) = I(i,j,1)/((I(i,j,1)*I(i,j,2)*I(i,j, 3))^(1/3));
chrom(i,j, 2) = I(i,j,2)/((I(i,j,1)*I(i,j,2)*I(i,j, 3))^(1/3));
chrom(i,j, 3) = I(i,j,3)/((I(i,j,1)*I(i,j,2)*I(i,j, 3))^(1/3));
% else
% chrom(i,j, 1) = 1;
% chrom(i,j, 2) = 1;
% chrom(i,j, 3) = 1;
% end
end
end
p1 = mat2gray(log(chrom(:,:,1)));
p2 = mat2gray(log(chrom(:,:,2)));
p3 = mat2gray(log(chrom(:,:,3)));
X1 = mat2gray(p1*1/(sqrt(2)) - p2*1/(sqrt(2)));
X2 = mat2gray(p1*1/(sqrt(6)) + p2*1/(sqrt(6)) - p3*2/(sqrt(6)));
maxinf = 0;
maxtheta = 0;
data2 = zeros(1, 61);
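% Sweep the projection angle theta in 3-degree steps: for each angle, project the
% two log-chromaticity coordinates X1, X2 onto a line, estimate the density of the
% projected values with a Gaussian kernel (via the Fast Gauss Transform), and keep
% the angle whose density gives the largest information potential.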
for theta = 0:3:180
M = X1*cos(theta*pi/180) - X2*sin(theta*pi/180);
s = sqrt(std2(X1)^(2)*cos(theta*pi/180) + std2(X2)^(2)*sin(theta*pi/180));
s = abs(1.06*s*((m*n)^(-1/5)));
[m, n] = size(M);
length = m*n;
sources = zeros(1, length, 'double');
count = 1;
for x=1:m
for y = 1:n
sources(1, count) = M(x , y);
count = count + 1;
end
end
weights = ones(1, length);
sigma = 2*s;
[xc , Ak] = fgt_model(sources , weights , sigma , 10, sqrt(length) , 6 );
sum1 = sum(fgt_predict(sources , xc , Ak , sigma , 10 ));
sum1 = sum1/sqrt(2*pi*2*s*s);
data2(theta/3 + 1) = sum1;
if (sum1 > maxinf)
maxinf = sum1;
maxtheta = theta;
end
end
InvariantImage2 = cos(maxtheta*pi/180)*X1 + sin(maxtheta*pi/180)*X2;
Assume the Fast Gauss Transform is correct.
I don't know whether this makes any difference as it is more than a month later now, but the blockiness and the different information potential plot are simply caused by the compression of the image you used. You can't expect to get the same results using this image as they did, because they used the raw, high-resolution, uncompressed version of it. I have to say I am fairly impressed with your results, especially with implementing the information potential. That part went over my head a little.
John.

Why do I get so many eigenvalues of zero in my Matlab eigenfaces implementation?

I'm trying to implement a very basic eigenface calculation in Matlab. It kind of works, but I get only two meaningful eigenvalues; the rest are zero. The corresponding eigenvectors seem to be right, since most of them will show an eigenface when converted to an image.
So why are most of my eigenvalues zero? I need them to be nonzero in order to sort the eigenfaces by their significance (greatest-magnitude eigenvalues).
I am reading 400 images, each size h/w = 112/92 px
They can be found here: http://www.cl.cam.ac.uk/Research/DTG/attarchive/pub/data/att_faces.zip
The code:
clear all;
files = dir('eigenfaces2/training/*.pgm');
[numFaces, discard] = size(files);
h = 112;
w = 92;
s = h * w;
%calculate average face
avgFace = zeros(s, 1);
faces = [];
for i=1:numFaces
file = strcat('eigenfaces2/training/', files(i).name);
im = double(imread(file));
im = reshape(im, s, 1);
avgFace = avgFace + im;
faces(:,i) = im;
end
avgFace = avgFace ./ numFaces;
A = [];
for i=1:numFaces
diff = avgFace - faces(i);
A(:,i) = diff;
end
numEigs = 20;
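% "snapshot" trick (Turk & Pentland): rather than the huge s-by-s covariance A*A',
% take the eigenvectors of the small numFaces-by-numFaces matrix A'*A and map them
% back to face space with A*v below.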
L = (A' * A) / numFaces;
[tmpEigs, discard] = eigs(L, numEigs);
eigenfaces = [];
for i=1:numEigs
v = tmpEigs(:,i);
eigenfaces(:,i) = A * v;
end
%visualize largest eigenfaces
figure;
for i=1:numEigs
eigface = eigenfaces(:,i);
mmax = max(eigface);
mmin = min(eigface);
eigface = 255 .* (eigface-mmin) ./ (mmax-mmin);
eigface = reshape(eigface, h, w);
subplot(4,5,i); imshow(uint8(eigface));
end
I don't have much experience with computer vision/image recognition, but I think you might want
diff = avgFace - faces(:,i);
in your second for loop. Otherwise it's just subtracting a constant from avgFace each time, and so A (and hence L) only gets a rank of 2.
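A quick way to confirm this diagnosis (a hypothetical check, assuming the variables above):
rank(A)   % at most 2 with the scalar subtraction: every column lies in span{avgFace, ones}
% after switching to faces(:,i), the rank is in general limited only by the number of
% images, and eigs(L, numEigs) returns numEigs nonzero eigenvalues to sort by.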