eigenvalue decomposition of structure tensor in matlab

I have a synthetic image. I want to do an eigenvalue decomposition of its local structure tensor (LST) for some edge detection purposes. I used the eigenvalues l1, l2 and eigenvectors e1, e2 of the LST to generate an adaptive ellipse for each pixel of the image. Unfortunately, I get unequal eigenvalues l1, l2, and therefore unequal semi-axis lengths of the ellipse, in homogeneous regions of my figure:
However, I get a good response for a simple test image:
I don't know what is wrong in my code:
function [H,e1,e2,l1,l2] = LST_eig(I,sigma1,rw)
% LST_eig - compute the structure tensor and its
% eigenvalue decomposition
%
% H = LST_eig(I,sigma1,rw);
%
% sigma1 is pre smoothing width (in pixels).
% rw is filter bandwidth radius for tensor smoothing (in pixels).
%
n = size(I,1);
m = size(I,2);
if nargin<2
    sigma1 = 0.5;
end
if nargin<3
    rw = 0.001;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% pre smoothing
J = imgaussfilt(I,sigma1);
% compute gradient using the Scharr operator
Sch = [-3 0 3;-10 0 10;-3 0 3];
%h = fspecial('sobel');
gx = imfilter(J,Sch,'replicate');
gy = imfilter(J,Sch','replicate');
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% compute tensors
gx2 = gx.^2;
gy2 = gy.^2;
gxy = gx.*gy;
% smooth
gx2_sm = imgaussfilt(gx2,rw); %rw/sqrt(2*log(2))
gy2_sm = imgaussfilt(gy2,rw);
gxy_sm = imgaussfilt(gxy,rw);
H = zeros(n,m,2,2);
H(:,:,1,1) = gx2_sm;
H(:,:,2,2) = gy2_sm;
H(:,:,1,2) = gxy_sm;
H(:,:,2,1) = gxy_sm;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% eigen decomposition
l1 = zeros(n,m);
l2 = zeros(n,m);
e1 = zeros(n,m,2);
e2 = zeros(n,m,2);
for i = 1:n
    for j = 1:m
        Hmat = zeros(2);
        Hmat(:,:) = H(i,j,:,:);
        [V,D] = eigs(Hmat);
        D = abs(D);
        l1(i,j) = D(1,1); % eigen values
        l2(i,j) = D(2,2);
        e1(i,j,:) = V(:,1); % eigen vectors
        e2(i,j,:) = V(:,2);
    end
end
Any help is appreciated.
This is my ellipse drawing code:
% determining ellipse parameteres from eigen value decomposition of LST
M = input('Enter the maximum allowed semi-major axes length: ');
I = input('Enter the input data: ');
row = size(I,1);
col = size(I,2);
a = zeros(row,col);
b = zeros(row,col);
cos_phi = zeros(row,col);
sin_phi = zeros(row,col);
for m = 1:row
    for n = 1:col
        a(m,n) = (l2(m,n)+eps)/(l1(m,n)+l2(m,n)+2*eps)*M;
        b(m,n) = (l1(m,n)+eps)/(l1(m,n)+l2(m,n)+2*eps)*M;
        cos_phi1 = e1(m,n,1);
        sin_phi1 = e1(m,n,2);
        len = hypot(cos_phi1,sin_phi1);
        cos_phi(m,n) = cos_phi1/len;
        sin_phi(m,n) = sin_phi1/len;
    end
end
%% plot elliptic structuring elements using parametric equation and superimpose on the image
figure; imagesc(I); colorbar; hold on
t = linspace(0,2*pi,50);
for i = 10:10:row-10
    for j = 10:10:col-10
        x0 = j;
        y0 = i;
        x = a(i,j)/2*cos(t)*cos_phi(i,j)-b(i,j)/2*sin(t)*sin_phi(i,j)+x0;
        y = a(i,j)/2*cos(t)*sin_phi(i,j)+b(i,j)/2*sin(t)*cos_phi(i,j)+y0;
        plot(x,y,'r','linewidth',1);
        hold on
    end
end
This is my new result with the Gaussian derivative kernel:
This is the new plot with axis equal:

I created a test image similar to yours (probably less complicated) as follows:
pos = yy([400,500]) + 100 * sin(xx(400)/400*2*pi);
img = gaussianlineclip(pos+50,7) + gaussianlineclip(pos-50,7);
I = double(stretch(img));
(This requires DIPimage to run)
Then ran your LST_eig on it (sigma1=1 and rw=3) and your code to draw ellipses (no change to either, except adding axis equal), and got this result:
I suspect some non-uniformity in some of the blue areas of your image, which causes very small gradients to appear. The problem with the definition of the ellipses as you use them is that, for sufficiently oriented patterns, you'll get a line even if that pattern is imperceptible. You can get around this by defining your ellipse axis lengths as follows:
a = repmat(M,size(l2)); % longest axis is always the same
b = M ./ (l2+1); % shortest axis is shorter the more important the largest eigenvalue is
The smallest eigenvalue l1 is high in regions with strong gradients but no clear direction. The above does not take this into account. One option could be to make a depend on both energy and anisotropy measures, and b depend only on energy:
T = 1000; % some threshold
r = M ./ max(l1+l2-T,1); % circle radius, smaller for higher energy
d = (l2-l1) ./ (l1+l2+eps); % anisotropy measure in range [0,1]
a = M*d + r.*(1-d); % use `M` length for high anisotropy, use `r` length for high isotropy (circle)
b = r; % use `r` width always
This way, the whole ellipse shrinks if there are strong gradients but no clear direction, whereas it stays large and circular when there are only weak or no gradients. The threshold T depends on image intensities, adjust as needed.
You should probably also consider taking the square root of the eigenvalues, as they correspond to the variance.
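For example, a minimal adjustment before computing the ellipse axes (a sketch, applied to the l1 and l2 returned by LST_eig):
% take the square root so the values scale like gradient magnitude rather than variance
l1 = sqrt(l1);
l2 = sqrt(l2);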
Some suggestions:
You can write
a = (l2+eps)./(l1+l2+2*eps) * M;
b = (l1+eps)./(l1+l2+2*eps) * M;
cos_phi = e1(:,:,1);
sin_phi = e1(:,:,2);
without a loop. Note that e1 is normalized by definition, there is no need to normalize it again.
Use Gaussian gradients instead of Gaussian smoothing followed by Sobel or Scharr filters. See here for some MATLAB implementation details.
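If you want to stay in plain MATLAB, here is a minimal sketch of separable Gaussian derivative filters with imfilter (an assumption of one possible implementation, not the code behind that link; it replaces both the pre-smoothing and the Scharr step):
sz = ceil(3*sigma1); xk = -sz:sz;
g  = exp(-xk.^2/(2*sigma1^2)); g = g/sum(g);                % 1-D Gaussian
dg = -xk/(sigma1^2) .* g;                                   % its derivative
gx = imfilter(imfilter(I,dg,'replicate'),g','replicate');   % derivative along x, smoothing along y
gy = imfilter(imfilter(I,dg','replicate'),g,'replicate');   % derivative along y, smoothing along x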
Use eig, not eigs, when you need all eigenvalues. Especially for such a small matrix, there is no advantage to using eigs. eig seems to produce more consistent results. There is no need to take the absolute value of the eigenvalues (D = abs(D)), as they are non-negative by definition.
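The change inside the per-pixel loop is minimal; a sketch (for a symmetric matrix eig returns the eigenvalues in ascending order, so l1 ends up being the smaller one, which matches how l1 and l2 are used above):
[V,D] = eig(Hmat);
l1(i,j) = D(1,1);   % smaller eigenvalue
l2(i,j) = D(2,2);   % larger eigenvalue
e1(i,j,:) = V(:,1); % eigenvector of the smaller eigenvalue
e2(i,j,:) = V(:,2);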
Your default value of rw = 0.001 is way too small, a sigma of that size has no effect on the image. The goal of this smoothing is to average gradients in a local neighborhood. I used rw=3 with good results.
Use DIPimage. There is a structuretensor function, Gaussian gradients, and a lot more useful stuff. The 3.0 version (still in development) is a major rewrite that improves significantly on dealing with vector- and matrix-valued images. I can write all of your LST_eig as follows:
I = dip_image(I);
g = gradient(I, sigma1);
H = gaussf(g*g.', rw);
[e,l] = eig(H);
% Equivalences with your outputs:
l1 = l{2};
l2 = l{1};
e1 = e{2,:};
e2 = e{1,:};

Related

Plot the phase structure function of a phase screen by definition

I already have a phase screen (a 2-D N-by-N matrix, L-by-L in size, e.g. N = 256, L = 2 meters).
I would like to find the phase structure function D(r), defined by D(delta(r)) = <[x(r) - x(r+delta(r))]^2> (<.> is ensemble averaging, r is the position in the phase screen in meters, x is the phase value at a point in the phase screen, and delta(r) is variable, not fixed), in a Matlab program. Do you have any suggestion for my purpose?
P.S.: I tried to calculate D(r) via the autocorrelation (defined as B(r)), but this calculation still involves some approximations. Therefore, I want to calculate the result of D(r) precisely. Please see this image to better understand the definition of D(r) and B(r). Below is my function code to calculate B(r).
% Code copied from "Numerical Simulation of Optical Wave Propagation with Examples in Matlab",
% by Jason D. Schmidt, SPIE Press, SPIE Vol. No.: PM199
% listing 3.7, page 48.
% (Schmidt defines the ft2 and ift2 functions used in this code elsewhere.)
function D = str_fcn2_ft(ph, mask, delta)
% function D = str_fcn2_ft(ph, mask, delta)
N = size(ph, 1);
ph = ph .* mask;
P = ft2(ph, delta);
S = ft2(ph.^2, delta);
W = ft2(mask, delta);
delta_f = 1/(N*delta);
w2 = ift2(W.*conj(W), delta_f);
D = 2 * ft2(real(S.*conj(W)) - abs(P).^2, delta) ./ w2 .* mask;
%fire run
N = 256; %number of samples
L = 16; %grid size [m]
delta = L/N; %sample spacing [m]
F = 1/L; %frequency-domain grid spacing[1/m]
x = [-N/2 : N/2-1]*delta;
[x y] = meshgrid(x);
w = 2; %width of rectangle
%A = rect(x/2).*rect(y/w);
A = lambdaWrapped;
%A = phz;
mask = ones(N);
%perform digital structure function
C = str_fcn2_ft(A, mask, delta);
C = real(C);
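For reference, the ft2 and ift2 helpers from Schmidt's book are scaled, shifted 2-D FFTs; a sketch of how they are commonly written (check against the book's listings before relying on the exact scaling):
function G = ft2(g, delta)
% forward 2-D DFT with spatial sample spacing delta
G = fftshift(fft2(fftshift(g))) * delta^2;
end
function g = ift2(G, delta_f)
% inverse 2-D DFT with frequency-domain sample spacing delta_f
N = size(G, 1);
g = ifftshift(ifft2(ifftshift(G))) * (N * delta_f)^2;
end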
One way of directly computing this function D(r) is through random sampling: you pick two random points on your screen, determine their distance and phase difference squared, and update an accumulator:
phi = rand(256,256)*(2*pi); % the data, phase
N = size(phi,1); % number of samples
L = 16; % grid size [m]
delta = L/N; % sample spacing [m]
D = zeros(1,ceil(sqrt(2)*N)); % output function (ceil so the array length is an integer)
count = D; % for computing mean
for n = 1:1e6 % find a good amount of points here, the more points the better the estimate
    coords = randi(N,2,2);
    r = round(norm(coords(1,:) - coords(2,:)));
    if r<1
        continue % skip if the two coordinates are the same
    end
    d = phi(coords(1,1),coords(1,2)) - phi(coords(2,1),coords(2,2));
    d = mod(abs(d),pi); % you might not need this, depending on how A is constructed
    D(r) = D(r) + d.^2;
    count(r) = count(r) + 1;
end
I = count > 0;
D(I) = D(I) ./ count(I); % do not divide by 0, some bins might not have any samples
I = count < 100;
D(I) = 0; % ignore poor estimates
r = (1:length(D)) * delta;
plot(r,D)
If you need even more precision, consider interpolating. Compute random coordinates as floating-point values, and interpolate the phase to get the values in between samples. D then needs to be longer, indexed as round(r*10) or something like that. You will need many more random samples to fill up that much larger accumulator.
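A sketch of that interpolated variant, assuming bilinear interp2 and distance bins of delta/10 (the factor 10 and the sample count are arbitrary choices; phi, N and delta are as above):
scale = 10;                                % 10 bins per sample spacing
D = zeros(1, ceil(sqrt(2)*N*scale));
count = zeros(size(D));
for n = 1:1e7                              % many more samples for the finer bins
    c = 1 + (N-1)*rand(2,2);               % two random floating-point coordinates [row, col]
    r = round(norm(c(1,:) - c(2,:)) * scale);
    if r < 1
        continue
    end
    p = interp2(phi, c(:,2), c(:,1));      % interpolate the phase at both points (x = column, y = row)
    d = mod(abs(p(1) - p(2)), pi);
    D(r) = D(r) + d^2;
    count(r) = count(r) + 1;
end
good = count >= 100;                       % keep only well-sampled bins
D(good) = D(good) ./ count(good);
D(~good) = 0;
r = (1:length(D)) * delta / scale;
plot(r, D)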

Fit plane to N dimensional points in MATLAB

I have a set of N points in k dimensions as a matrix of size N-by-k.
How can I find the best fitting line through these points? The line will be a plane (hyperplane) in k dimensions. It will have k coefficients and one bias term.
Existing functions like fit seem to be usable only for points in 2 or 3 dimensions.
You can fit a hyperplane (or any lower-dimensional affine space) to a set of D-dimensional data using Principal Component Analysis. Here's an example of fitting a plane to a set of 3D data. This is explained in more detail in the MATLAB documentation, but I tried to construct the simplest example I could.
% generate some random correlated data
D = 3;
mu = zeros(1,D);
sqrt_sig = randn(D);
sigma = sqrt_sig'*sqrt_sig;
% generate 50 points in a D x 50 matrix
X = mvnrnd(mu, sigma, 50)';
% perform PCA
coeff = pca(X');
% The last principal component is normal to the best-fit plane, and the plane goes through the mean of X
a = coeff(:,D);
b = -mean(X,2)'*a;
% plane defined by a'*x + b = 0
dist = abs(a'*X+b) / norm(a);
mse = mean(dist.^2)
Edit: Added an example plot of the results for D = 3. I take advantage of the orthogonality of the other principal components here. Ignore the code if you want; it's just to demonstrate that the plane does in fact fit the data pretty well.
% plot in 3D
X0 = bsxfun(@minus,X,mean(X,2));
b1 = coeff(:,1); b2 = coeff(:,2);
y1 = b1'*X0; y2 = b2'*X0;
y1_min = min(y1); y1_max = max(y1);
y1_span = y1_max - y1_min;
y2_min = min(y2); y2_max = max(y2);
y2_span = y2_max - y2_min;
pad = 0.2;
y1_min = y1_min - pad*y1_span;
y1_max = y1_max + pad*y1_span;
y2_min = y2_min - pad*y2_span;
y2_max = y2_max + pad*y2_span;
[y1_m,y2_m] = meshgrid(linspace(y1_min,y1_max,5), linspace(y2_min,y2_max,5));
grid = bsxfun(@plus, bsxfun(@times,y1_m(:)',b1) + bsxfun(@times,y2_m(:)',b2), mean(X,2));
x = reshape(grid(1,:),size(y1_m));
y = reshape(grid(2,:),size(y1_m));
z = reshape(grid(3,:),size(y1_m));
figure(1); clf(1);
surf(x,y,z,'FaceColor','black','FaceAlpha',0.3,'EdgeAlpha',0.6);
hold on;
plot3(X(1,:),X(2,:),X(3,:),'.');
axis equal;
axis vis3d;
Edit2: When I say "principal component" I'm being a bit sloppy (or just plain wrong) with the wording. I'm actually referring to the orthogonal basis vectors that the principal components are expressed in.
Here's a simpler solution, that just uses MATLAB's \ operator. We start with defining a plane in k dimensions:
% 0 = a + x(1) * b(1) + x(2) * b(2) + ... + x(k-1) * b(k-1) + x(k) * 1
k = 8;
a = randn(1);
b = randn(k-1,1);
(note that we assume b(k)=1, you can always multiply the plane parameters by any value without changing the plane).
Next we generate N random points within this plane:
N = 1000;
x = rand(N,k-1);
x(:,k) = -(a + x * b);
...sorry, it's not the best way to generate random points on the plane, but it's good enough for the demonstration here. Add noise to the points:
x = x + 0.05*randn(size(x));
To find the parameters of the plane, we solve the system of equations
% a + x(1:k-1) * b == -x(k)
in the least-squares sense. a and b are the unknowns there. We can rewrite the left-hand side as [1,x(1:k-1)] * [a;b]. If we have a matrix equation M*p=v we can solve for p by writing p=M\v:
p = [ones(N,1),x(:,1:k-1)]\(-x(:,k));
disp(['ground truth: [a,b,1] = ',mat2str([a,b',1],3)]);
disp(['estimated : [a,b,1] = ',mat2str([p',1],3)]);
This gives as output:
ground truth: [a,b,1] = [-1.35 -1.44 -1.48 1.17 0.226 -0.214 0.234 -1.59 1]
estimated : [a,b,1] = [-1.41 -1.38 -1.43 1.14 0.219 -0.195 0.221 -1.54 1]
The less noise or the more points in the dataset, the smaller the error will be of course!
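As a quick cross-check (a sketch, reusing the noisy x, a and b generated above), the PCA approach from the first answer should give a normal vector close to [b;1] after rescaling, and a similar bias term:
coeff = pca(x);                   % x is N-by-k, each observation is a row
nrm = coeff(:,end);               % direction of least variance = normal to the fitted hyperplane
nrm = nrm / nrm(end);             % rescale so the last entry is 1, matching [b;1]
bias = -mean(x,1) * nrm;          % the fitted plane goes through the mean of the data
disp(['PCA estimate: [a,b,1] = ', mat2str([bias, nrm'], 3)]);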

Matlab SVM custom kernel function

In the Matlab SVM tutorial, it says
You can set your own kernel function, for example, kernel, by setting 'KernelFunction','kernel'. kernel must have the following form:
function G = kernel(U,V)
where:
U is an m-by-p matrix.
V is an n-by-p matrix.
G is an m-by-n Gram matrix of the rows of U and V.
When I followed the custom SVM kernel example, I set a breakpoint in the mysigmoid.m function. However, I found U and V were in fact 1-by-p vectors and G was a scalar.
Why does MATLAB not process the kernel with matrices?
My custom kernel function is
function G = mysigmoid(U,V)
% Sigmoid kernel function with slope gamma and intercept c
gamma = 0.5;
c = -1;
G = tanh(gamma*U*V' + c);
end
My Matlab script is
%% Train SVM Classifiers Using a Custom Kernel
rng(1); % For reproducibility
n = 100; % Number of points per quadrant
r1 = sqrt(rand(2*n,1)); % Random radius
t1 = [pi/2*rand(n,1); (pi/2*rand(n,1)+pi)]; % Random angles for Q1 and Q3
X1 = [r1.*cos(t1), r1.*sin(t1)]; % Polar-to-Cartesian conversion
r2 = sqrt(rand(2*n,1));
t2 = [pi/2*rand(n,1)+pi/2; (pi/2*rand(n,1)-pi/2)]; % Random angles for Q2 and Q4
X2 = [r2.*cos(t2), r2.*sin(t2)];
X = [X1; X2]; % Predictors
Y = ones(4*n,1);
Y(2*n + 1:end) = -1; % Labels
% Plot the data
figure(1);
gscatter(X(:,1),X(:,2),Y);
title('Scatter Diagram of Simulated Data');
SVMModel1 = fitcsvm(X,Y,'KernelFunction','mysigmoid','Standardize',true);
% Compute the scores over a grid
d = 0.02; % Step size of the grid
[x1Grid,x2Grid] = meshgrid(min(X(:,1)):d:max(X(:,1)),...
min(X(:,2)):d:max(X(:,2)));
xGrid = [x1Grid(:),x2Grid(:)]; % The grid
[~,scores1] = predict(SVMModel1,xGrid); % The scores
figure(2);
h(1:2) = gscatter(X(:,1),X(:,2),Y);
hold on;
h(3) = plot(X(SVMModel1.IsSupportVector,1),X(SVMModel1.IsSupportVector,2),...
'ko','MarkerSize',10);
% Support vectors
contour(x1Grid,x2Grid,reshape(scores1(:,2),size(x1Grid)),[0,0],'k');
% Decision boundary
title('Scatter Diagram with the Decision Boundary');
legend({'-1','1','Support Vectors'},'Location','Best');
hold off;
CVSVMModel1 = crossval(SVMModel1);
misclass1 = kfoldLoss(CVSVMModel1);
disp(misclass1);
Kernels add dimensions to a feature. If you have, for example, one feature for a sample x={a}, it will expand it into something like x={a_1 ... a_q}. As you are doing this for all of your data at once, you are going to have an M-by-P matrix (M is the number of examples in your training set and P is the number of features). The second matrix it asks for is P-by-N, where N is the number of examples in the training/test set.
That said, your output should be M-by-N. Since it is instead a scalar, it means that you have U = 1-by-M and V = N-by-1, where N = M. To get an output of M-by-N, it follows that you should simply transpose your inputs.
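For reference, the form the documentation asks for can be checked in isolation (a sketch, independent of what fitcsvm actually passes to the kernel internally during training):
U = randn(5,2);          % m-by-p
V = randn(3,2);          % n-by-p
G = mysigmoid(U,V);      % tanh(gamma*U*V' + c) gives an m-by-n Gram matrix
size(G)                  % displays 5 3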

Testing for Unimodal (Unimodality) or Bimodal (Bimodality) Distribution in MATLAB

Is there a way in MATLAB to check whether the histogram distribution is unimodal or bimodal?
EDIT
Do you think Hartigan's Dip Statistic would work? I tried passing an image to it, and got the value 0. What does that mean?
And, when passing an image, does it test the distribution of the histogram of the image on the gray levels?
Thanks.
Here is a script using Nic Price's implementation of Hartigan's Dip Test to identify unimodal distributions. The tricky point was to calculate xpdf, which is not a probability density function, but rather a sorted sample.
p_value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true. In this case the null hypothesis is that the distribution is unimodal.
close all; clear all;
% compute_xpdf should be saved in its own file (compute_xpdf.m), or placed at the
% end of the script in MATLAB versions that allow local functions in scripts.
function [x2, n, b] = compute_xpdf(x)
    x2 = reshape(x, 1, numel(x));
    [n, b] = hist(x2, 40);
    % This is definitely not a probability density function
    x2 = sort(x2);
    % downsampling to speed up computations
    x2 = interp1(1:length(x2), x2, 1:1000:length(x2));
end
nboot = 500;
sample_size = [256 256];
% Unimodal
sample2d = normrnd(0.0, 10.0, sample_size);
[xpdf, n, b] = compute_xpdf(sample2d);
[dip, p_value, xlow, xup] = HartigansDipSignifTest(xpdf, nboot);
figure;
subplot(1,2,1);
bar(b, n) % hist returns counts n and bin centers b, so the centers go on the x-axis
title(sprintf('Probability of unimodal %.2f', p_value))
% Bimodal
sample2d = sign(sample2d) .* (abs(sample2d) .^ 0.5);
[xpdf, n, b] = compute_xpdf(sample2d);
[dip, p_value, xlow, xup] = HartigansDipSignifTest(xpdf, nboot);
subplot(1,2,2);
bar(b, n)
title(sprintf('Probability of unimodal %.2f', p_value))
print -dpng modality.png
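A note on reading the output (a sketch; the 0.05 cutoff is just the usual convention, not part of the test itself):
alpha = 0.05;
if p_value < alpha
    disp('Dip test rejects unimodality: the distribution is likely multimodal');
else
    disp('No evidence against unimodality at this significance level');
end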
There are many different ways to do what you are asking. In the most literal sense, "bimodal" means there are two peaks. Usually though, you want the "two peaks" to be separated by some reasonable distance, and you want them to each contain a reasonable proportion of the total counts. Only you know what is "reasonable" for your situation, but the following approach might help.
Create a histogram of the intensities
Form the cumulative distribution with cumsum
For different values of the "cut" between distributions (25%, 30%, 50%, …), compute the mean and standard deviation of the two distributions (above and below the cut).
Compute the distance between the means divided by the sum of the standard deviations of the two distributions
That quantity will be a maximum at the "best cut"
You have to decide what size of that quantity represents "bimodal" for you. Here is some code that demonstrates what I am talking about. It generates bimodal distributions of different degrees of severity: two Gaussians, with increasing delta between them (in steps the size of one standard deviation). I compute the quantity described above, and plot it for a range of different values of delta. I then fit a parabola through this curve over a range corresponding to +/- 1 sigma of the entire distribution. As you can see, when the distribution becomes more bimodal, two things happen:
The curvature of this curve flips (it goes from a valley to a peak)
The maximum increases (it is about 1.33 for a Gaussian).
You can look at these quantities for some of your own distributions, and decide where you want to put the cutoff.
% test for bimodal distribution
close all
for delta = 0:10:50
a1 = randn(100,100) * 10 + 25;
a2 = randn(100,100) * 10 + 25 + delta;
a3 = [a1(:); a2(:)];
[h hb] = hist(a3, 0:100);
cs = cumsum(h);
llimi = find(cs < 0.2 * max(cs(:)));
ulimi = find(cs > 0.8 * max(cs(:)));
llim = hb(llimi(end));
ulim = hb(ulimi(1));
cuts = linspace(llim, ulim, 20);
dmean = mean(a3);
dstd = std(a3);
for ci = 1:numel(cuts)
d1 = a3(a3<cuts(ci));
d2 = a3(a3>=cuts(ci));
m(ci,1) = mean(d1);
m(ci, 2) = mean(d2);
s(ci, 1) = std(d1);
s(ci, 2) = std(d2);
end
q = (m(:, 2) - m(:, 1)) ./ sum(s, 2);
figure;
plot(cuts, q);
title(sprintf('delta = %d', delta))
% compute curvature of plot around mean:
xlims = dmean + [-1 1] * dstd;
indx = find(cuts < xlims(2) & cuts > xlims(1)); % elementwise &, not &&, since cuts is a vector
pf = polyfit(cuts(indx), q(indx), 2);
m = polyval(pf, dmean);
fprintf(1, 'coefficients: a = %.2e, peak = %.2f\n', pf(1), m);
end
Output values:
coefficients: a = 1.37e-03, peak = 1.32
coefficients: a = 1.01e-03, peak = 1.34
coefficients: a = 2.85e-04, peak = 1.45
coefficients: a = -5.78e-04, peak = 1.70
coefficients: a = -1.29e-03, peak = 2.08
coefficients: a = -1.58e-03, peak = 2.48
Sample plots:
And the histogram for delta = 40:

Creating a matrix containing a filled ellipse based on a non-contiguous outline

I'm trying to create a matrix of 0 values, with 1 values filling an ellipse shape. My ellipse was generated using minVolEllipse.m (Link 1), which returns a matrix of the ellipse equation in the 'center form' and the center of the ellipse. I then use a snippet of code from Ellipse_plot.m (from the aforementioned link) to parameterize the vector into major/minor axes, generate a parametric equation, and generate a matrix of transformed coordinates. You can see their code to see how this is done. The result is a matrix that has index locations for points along the ellipse. It does not encompass every value along the outline of the ellipse unless I set the number of grid points, N, to a ridiculously high value.
When I use the MATLAB plot or patch commands I see exactly the result I'm looking for. However, I want this represented as a matrix of 0 values with 1s where patch 'fills in' the blanks. It is apparent that MATLAB has this functionality, but I have yet to find the code to execute it. What I am looking for is similar to how bwfill of the image processing toolbox works (Link 2). bwfill does not work for me because my ellipse is not contiguous, so the function returns a matrix filled completely with 1 values.
Hopefully I have outlined the problem well enough, if not please comment and I can edit the post to clarify.
EDIT:
I have devised a strategy using the 2-D X vector from Ellipse_plot.m as an input to EllipseDirectFit.m (Link 3). This function returns the coefficients for the ellipse function ax^2+bxy+cy^2+dx+ey+f=0. Using these coefficients I calculate the angle between the x-axis and the major axis of the ellipse. This angle, along with the center and major/minor axes, is passed into ellipseMatrix.m (Link 4), which returns a filled matrix. Unfortunately, the matrix appears to be rotated relative to what I want. Here is the portion of my code:
N = 20; %Number of grid points in ellipse
ellipsepoints = clusterpoints(clusterpoints(:,1)==i,2:3)';
[A,C] = minVolEllipse(ellipsepoints,0.001,N);
%%%%%%%%%%%%%%
%
%Adapted from:
% Ellipse_plot.m
% Nima Moshtagh
% nima@seas.upenn.edu
% University of Pennsylvania
% Feb 1, 2007
% Updated: Feb 3, 2007
%%%%%%%%%%%%%%
%
%
% "singular value decomposition" to extract the orientation and the
% axes of the ellipsoid
%------------------------------------
[U D V] = svd(A);
%
% get the major and minor axes
%------------------------------------
a = 1/sqrt(D(1,1))
b = 1/sqrt(D(2,2))
%theta values
theta = [0:1/N:2*pi+1/N];
%
% Parametric equation of the ellipse
%----------------------------------------
state(1,:) = a*cos(theta);
state(2,:) = b*sin(theta);
%
% Coordinate transform
%----------------------------------------
X = V * state;
X(1,:) = X(1,:) + C(1);
X(2,:) = X(2,:) + C(2);
% Output: Elip_Eq = [a b c d e f]' is the vector of algebraic
% parameters of the fitting ellipse:
Elip_Eq = EllipseDirectFit(X')
% http://mathworld.wolfram.com/Ellipse.html gives the equation for finding the angle theta (teta).
% The coefficients from EllipseDirectFit are rescaled to match what is expected in the wolfram link.
Elip_Eq(2) = Elip_Eq(2)/2;
Elip_Eq(4) = Elip_Eq(4)/2;
Elip_Eq(5) = Elip_Eq(5)/2;
if Elip_Eq(2)==0
    if Elip_Eq(1) < Elip_Eq(3)
        teta = 0;
    else
        teta = (1/2)*pi;
    endif
else
    tetap = (1/2)*acot((Elip_Eq(1)-Elip_Eq(3))/(Elip_Eq(2)));
    if Elip_Eq(1) < Elip_Eq(3)
        teta = tetap;
    else
        teta = (pi/2)+tetap;
    endif
endif
blank_mask = zeros([height width]);
if teta < 0
    teta = pi+teta;
endif
%I may need to switch a and b, depending on which is larger (so that the first is the major axis)
filled_mask1 = ellipseMatrix(C(2),C(1),b,a,teta,blank_mask,1);
EDIT 2:
As a response to the suggestion from @BenVoigt, I have written a for-loop solution to the problem:
N = 20; %Number of grid points in ellipse
ellipsepoints = clusterpoints(clusterpoints(:,1)==i,2:3)';
[A,C] = minVolEllipse(ellipsepoints,0.001,N);
filled_mask = zeros([height width]);
for y=0:1:height
    for x=0:1:width
        point = ([x;y]-C)'*A*([x;y]-C);
        if point < 1
            filled_mask(y,x) = 1;
        endif
    endfor
endfor
Although this is technically a solution to the problem, I am interested in a non-iterative solution. I'm running this script over many large images, and need it to be very fast and parallel.
EDIT 3:
Thanks @mathematical.coffee for this solution:
[X,Y] = meshgrid(0:width,0:height);
fill_mask=arrayfun(@(x,y) ([x;y]-C)'*A*([x;y]-C),X,Y) < 1;
However, I believe there is yet a better way to do this. Here is a for-loop implementation that I did that runs faster than both above attempts:
ellip_mask = zeros([height width]);
[U D V] = svd(A);
a = 1/sqrt(D(1,1));
b = 1/sqrt(D(2,2));
maxab = ceil(max(a,b));
xstart = round(max(C(1)-maxab,1));
xend = round(min(C(1)+maxab,width));
ystart = round(max(C(2)-maxab,1));
yend = round(min(C(2)+maxab,height));
for y = ystart:1:yend
    for x = xstart:1:xend
        point = ([x;y]-C)'*A*([x;y]-C);
        if point < 1
            ellip_mask(y,x) = 1;
        endif
    endfor
endfor
Is there a way to accomplish this goal (the total image size is still [width height]) without this for-loop? The reason this is faster is that I don't have to iterate over the entire image to determine if my point is within the ellipse. Instead, I can simply iterate over a square region spanning the center +/- the largest principal axis.
Expanding the matrix multiply, which is an elliptic norm, gives a fairly simple vectorized expression:
[X,Y] = meshgrid(0:width,0:height);
X = X - C(1);
Y = Y - C(2);
fill_mask = X.^2 * A(1,1) + X.*Y * (A(1,2) + A(2,1)) + Y.^2 * A(2,2) < 1;
This is what I intended by my original comment.
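If the bounding-box speed-up from EDIT 2's follow-up is still wanted, the same vectorized test can be restricted to that square region; a sketch (assuming A, C, width and height as above):
s = svd(A);                                        % singular values of the ellipse matrix
maxab = ceil(1/sqrt(min(s)));                      % longest semi-axis length
xr = max(round(C(1))-maxab,1) : min(round(C(1))+maxab,width);
yr = max(round(C(2))-maxab,1) : min(round(C(2))+maxab,height);
[X,Y] = meshgrid(xr - C(1), yr - C(2));            % coordinates relative to the center
fill_mask = zeros(height, width);
fill_mask(yr,xr) = X.^2 * A(1,1) + X.*Y * (A(1,2) + A(2,1)) + Y.^2 * A(2,2) < 1;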