Special point distance transform in MATLAB

I use MATLAB to calculate the distance transform of a binary image. I found that bwdist() calculates the distances for all points of the image, but I only want the distance at one particular point.
For example, I have a binary image like this:
image =
1 0 0
0 0 1
0 0 0
bwdist() computes the distance transform at all points:
>> bwdist(image)
ans =
0 1.0000 1.0000
1.0000 1.0000 0
2.0000 1.4142 1.0000
But I only want the distance at the point image(3,2), so the function should give me 1.4142.
Is there a function that can do this?

You can use find to get the row and column indices of all 1s, then use pdist2 from the Statistics and Machine Learning Toolbox to calculate the distances of all 1s from the search point (3,2), and finally take the minimum of those distances to get the final output. Here's the implementation shown as a sample run:
>> image
image =
1 0 0
0 0 1
0 0 0
>> point
point =
3 2
>> [R,C] = find(image);
>> min(pdist2([R C],point))
ans =
1.4142
If you don't have access to pdist2, you can replace it with bsxfun like so:
min(sqrt(sum(bsxfun(@minus,[R C],point).^2,2)))
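On newer MATLAB releases (R2016b and later), implicit expansion makes bsxfun unnecessary; a minimal sketch of the same computation:
% Implicit expansion subtracts point from every [row col] pair directly
min(sqrt(sum(([R C] - point).^2, 2)))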

Related

Multiplication of matrices involving inverse operation: getting infinity

In my earlier question asked here: Matlab: How to compute the inverse of a matrix,
I wanted to know how to perform the inverse operation:
A = [1/2, (1j/2), 0;
1/2, (-1j/2), 0;
0,0,1]
T = A.*1
Tinv = inv(T)
The output is Tinv =
1.0000 1.0000 0
0 - 1.0000i 0 + 1.0000i 0
0 0 1.0000
which is the same as in the second picture. The first picture is the matrix A.
However, for a larger matrix, say 5 by 5, if I don't use the identity and instead perform element-wise multiplication by 1, I get infinity values. Here is an example:
A = [1/2, (1j/2), 1/2, (1j/2), 0;
1/2, (-1j/2), 1/2, (-1j/2), 0;
1/2, (1j/2), 1/2, (1j/2), 0;
1/2, (-1j/2), 1/2, (-1j/2), 0;
0, 0 , 0 , 0, 1.00
];
T = A.*1
Tinv = inv(T)
Tinv =
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
So I tried to multiply T = A.*I where I = eye(5), then took the inverse. Even though I don't get infinity values, I get elements equal to 2, which are not there in the picture for the 3 by 3 case. Here is the result:
Tinv =
2.0000 0 0 0 0
0 0 + 2.0000i 0 0 0
0 0 2.0000 0 0
0 0 0 0 + 2.0000i 0
0 0 0 0 1.0000
If, for the 3 by 3 case, I use I = eye(3), then I again get elements equal to 2:
Tinv =
2.0000 0 0
0 0 + 2.0000i 0
0 0 1.0000
What is the proper method?
Question: In the general case, for any m by m matrix, should I multiply using I = eye(m)? Using I prevents infinity values but introduces new values of 2. I am really confused. Please help.
UPDATE: Here is the full image, where Theta is a vector of 3 unknowns: Theta1, Theta1*, and Theta2, which are scalar-valued parameters. Theta1 is a complex-valued number, so we represent it in two parts, Theta1 and Theta1*, while Theta2 is a real-valued number. g is a complex-valued function. The derivative of a complex-valued function with respect to Theta evaluates to T^H. Since there are 3 unknowns, the matrix T should be of size 3 by 3.
Your problem is slightly different than you think. The symbols (I, 0) in the matrices in the images are not necessarily scalars (only for n = 1); they are actually square matrices.
I is an identity matrix and 0 is a matrix of zeros. If you treat them as such, you will get the expected answers:
n = 2; % size of the sub-matrices
I = eye(n); % identity matrix
Z = zeros(n); % matrix of zeros
% your T matrix
T = [1/2*I, (1j/2)*I, Z;
1/2*I, (-1j/2)*I, Z;
Z,Z,I];
% inverse of T
Tinv1 = inv(T);
% expected result
Tinv2 = [I,I,Z;
-1j*I,1j*I,Z;
Z,Z,I];
% max difference between computed and expected
maxDist = max(abs(Tinv1(:) - Tinv2(:)))
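As a side note (a sketch of mine, not part of the original answer), kron builds the same block structure more compactly, since kron(B, eye(n)) replaces every scalar entry of B with that entry times an n x n identity block:
% Equivalent construction of T using kron
B = [1/2, 1j/2, 0; 1/2, -1j/2, 0; 0, 0, 1];
T = kron(B, eye(n));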
First you should know whether you should do
T = A.*eye(...)
or
T = A.*1 %// which actually does nothing
These are completely different things. Be sure what you need, then think about the code.
The reason you get all inf is that the determinant of your matrix is zero:
det(T) == 0
So from a mathematical point of view your result is correct, as building the inverse requires dividing by det(T). Your matrix cannot be inverted. If it should be invertible, the error is in your input matrix, or again in your understanding of the actual underlying problem to solve.
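A quick check (a sketch) makes the singularity visible; the 5 by 5 matrix has identical rows, so its rank is deficient:
% Rows 1/3 and 2/4 of A are identical, so T = A.*1 is singular
det(T)   % returns 0
rank(T)  % returns 3, which is less than 5, so inv(T) cannot exist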
Edit
After your question update, it feels like you're actually looking for ctranspose instead of inv.
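For reference, a minimal sketch of the conjugate (Hermitian) transpose in MATLAB:
% The ' operator is the complex conjugate transpose; ctranspose is the
% equivalent functional form
TH = T';
TH = ctranspose(T);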

Power Method in MATLAB

I would like to implement the Power Method for determining the dominant eigenvalue and eigenvector of a matrix in MATLAB.
Here's what I wrote so far:
%function to implement power method to compute dominant
%eigenvalue/eigenvector
function [m,y_final]=power_method(A,x)
m=0;
n=length(x);
y_final=zeros(n,1);
y_final=x;
tol=1e-3;
while(1)
mold=m;
y_final=A*y_final;
m=max(y_final);
y_final=y_final/m;
if (m-mold)<tol
break;
end
end
end
With the above code, here is a numerical example:
A=[1 1 -2;-1 2 1; 0 1 -1]
A =
1 1 -2
-1 2 1
0 1 -1
>> x=[1 1 1];
>> x=x';
>> [m,y_final]=power_method(A,x);
>> A*x
ans =
0
2
0
When comparing with the eigenvalues and eigenvectors of the above matrix in MATLAB, I did:
[V,D]=eig(A)
V =
0.3015 -0.8018 0.7071
0.9045 -0.5345 0.0000
0.3015 -0.2673 0.7071
D =
2.0000 0 0
0 1.0000 0
0 0 -1.0000
The eigenvalue coincides, but the eigenvector should be approaching [1/3 1 1/3]. Here, I get:
y_final
y_final =
0.5000
1.0000
0.5000
Is it acceptable to see this inaccuracy, or am I making some mistake?
You have the correct implementation, but you're not checking both the eigenvector and eigenvalue for convergence; you're only checking the eigenvalue. The power method estimates both the dominant eigenvector and eigenvalue, so it's a good idea to check that both have converged. When I did that, I got [1/3 1 1/3]. Here is how I modified your code to facilitate this:
function [m,y_final]=power_method(A,x)
m=0;
n=length(x);
y_final=x;
tol=1e-10; %// Change - make the tolerance smaller to ensure convergence
while(1)
mold = m;
y_old=y_final; %// Change - Save old eigenvector
y_final=A*y_final;
m=max(y_final);
y_final=y_final/m;
if abs(m-mold) < tol && norm(y_final-y_old,2) < tol %// Change - Check for both
break;
end
end
end
When I run the above code with your example input, I get:
>> [m,y_final]=power_method(A,x)
m =
2
y_final =
0.3333
1.0000
0.3333
On a side note with regards to eig, MATLAB most likely scaled that eigenvector using another norm. Remember that eigenvectors are not unique; they are defined only up to scale. If you want to be sure, simply take the first column of V, which corresponds to the dominant eigenvector, and divide by its largest absolute value so that one component is normalized to 1, just like the Power Method:
>> [V,D] = eig(A);
>> V(:,1) / max(abs(V(:,1)))
ans =
0.3333
1.0000
0.3333
This agrees with what you have observed.
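One practical safeguard worth adding (my suggestion, not part of the original answer) is an iteration cap, so the loop cannot spin forever if the iteration fails to converge. Here is a sketch (power_method_capped and maxIter are hypothetical names):
function [m,y_final]=power_method_capped(A,x)
m=0;
y_final=x;
tol=1e-10;
maxIter=1000; %// assumed limit; stop even if the tolerance is never met
for k=1:maxIter
    mold=m;
    y_old=y_final; %// save old eigenvector estimate
    y_final=A*y_final;
    m=max(y_final);
    y_final=y_final/m;
    if abs(m-mold) < tol && norm(y_final-y_old,2) < tol
        break;
    end
end
end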

Jaccard index in MATLAB produces wrong results

I have the following matrix
a =
0 10 10 0 0
0 5 5 0 0
1 0 0 50 51
0 0 10 100 100
I compute the Jaccard distances
D = pdist(a,'jaccard');
D =
1.0000 1.0000 0.7500 1.0000 1.0000 1.0000
and finally I put the distances in a matrix
sim = squareform(D)
sim =
0 1.0000 1.0000 0.7500
1.0000 0 1.0000 1.0000
1.0000 1.0000 0 1.0000
0.7500 1.0000 1.0000 0
The Jaccard distance is computed as "One minus the Jaccard coefficient, which is the percentage of nonzero coordinates that differ." (http://www.mathworks.it/help/stats/pdist.html)
The distance between rows 1 and 4 is correct (0.75), while the distance between rows 1 and 2 should be 0 and is, instead, 1. It seems that when the Jaccard similarity is 1, MATLAB doesn't execute the 1 - similarity computation. What am I doing wrong?
MATLAB seems right to me.
All of the non-zero numbers in rows 1 and 2 differ (in row 1 they're all 10, in row 2 they're all 5), so rows 1 and 2 should have a distance of 1.
Three out of four of the non-zero numbers in rows 1 and 4 differ (10:0, 10:10, 0:100, 0:100), so rows 1 and 4 should have a distance of 0.75.
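You can verify this by hand; here is a sketch reproducing what pdist computes for 'jaccard' on rows 1 and 4:
x = a(1,:); y = a(4,:);
nz = (x ~= 0) | (y ~= 0);         % coordinates where at least one is nonzero
d = nnz(x(nz) ~= y(nz)) / nnz(nz) % fraction of those that differ: 0.75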
There seems to be a lot of disagreement about which quantity is the Jaccard "coefficient", the Jaccard "index", the Jaccard "similarity", or the Jaccard "distance", and which is one minus the other. MATLAB's documentation doesn't help, as it's not obvious, in the sentence you quote, whether "which" refers to (what MATLAB describes as) the Jaccard coefficient, or to one minus the Jaccard coefficient.
In any case, whether or not the terminology in the MATLAB documentation is correct, pdist seems to be giving consistent results, and you can always take one minus whatever it outputs if you want something different.
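For example, a minimal sketch of converting the distances back to similarities:
% One minus each Jaccard distance gives the corresponding similarity
simMatrix = 1 - squareform(pdist(a, 'jaccard'));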

How to implement correlation kernel for image

I read a paper that defines a correlation kernel as:
W(x−y) = α / (1 + d(|y−x|))
where α = ( ∫ 1/(1 + d(|y−x|)) dy )^(−1) and d(|y−x|) is the spatial Euclidean distance from the central pixel.
Given an image I, could you help me implement convolution of that kernel with the image in MATLAB code? Thank you so much.
OK! Sorry for the delay. Referencing the paper, the convolution kernel is W(x−y) = α / (1 + d(|y−x|)), where d(|y−x|) is the Euclidean distance between the centre pixel y and a location in the kernel x, and \alpha is used to ensure that the entire area under the kernel is 1. However, you did not specify how big this kernel is. As such, we will specify the rows and columns of this kernel to be M and N respectively. Let's also assume that the size of the kernel in each dimension is odd, as this gives the kernel a well-defined centre pixel and makes implementation easier. Here are the steps that I would perform to do this:
Define a grid of X and Y co-ordinates, and ensure that the centre pixel is at 0.
Compute each term in the convolution kernel without the \alpha term.
Sum up all of the terms in the kernel, then divide every value in this kernel by this term so that the entire area of the kernel is 1.
Let's do this step by step:
Step #1
We can do this by using meshgrid. meshgrid (in this case) creates a 2D grid of (X,Y) co-ordinates. X defines the horizontal co-ordinate for each location, while Y defines it vertically. By calling meshgrid(1:m, 1:n), I am creating an n x m grid for both X and Y, where each row of X progresses from 1 to m, while each column of Y progresses from 1 to n. Therefore, these will both be n x m matrices. Calling the above with m = 4 and n = 4 computes:
m = 4;
n = 4;
[X,Y] = meshgrid(1:m, 1:n)
X =
1 2 3 4
1 2 3 4
1 2 3 4
1 2 3 4
Y =
1 1 1 1
2 2 2 2
3 3 3 3
4 4 4 4
As such, we simply have to modify this, ensuring that the centre is at (0,0) and that the sizes of X and Y are odd. Let's say that M = 5 and N = 5. We can then define our X and Y co-ordinates like so:
M = 5;
N = 5;
[X,Y] = meshgrid(-floor(N/2):floor(N/2), -floor(M/2):floor(M/2))
X =
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
Y =
-2 -2 -2 -2 -2
-1 -1 -1 -1 -1
0 0 0 0 0
1 1 1 1 1
2 2 2 2 2
As you can see here, the centre pixel for both X and Y are defined as (0,0). Everywhere else has its (X,Y) co-ordinates defined with respect to the centre.
Step #2
We simply have to compute the Euclidean distance between the centre pixel to every point in the kernel. This can be done by:
dis = sqrt(X.^2 + Y.^2)
dis =
2.8284 2.2361 2.0000 2.2361 2.8284
2.2361 1.4142 1.0000 1.4142 2.2361
2.0000 1.0000 0 1.0000 2.0000
2.2361 1.4142 1.0000 1.4142 2.2361
2.8284 2.2361 2.0000 2.2361 2.8284
Doing some quick calculation checks, you can see that this agrees with our understanding of Euclidean distance. Moving to the left by 1 from the centre is a distance of 1. Moving to the left by 1 then up by 1 gives us a Euclidean distance of \sqrt(1^2 + 1^2) = \sqrt(2) = 1.4142. Doing similar checks with each element will demonstrate that this is indeed a Euclidean distance field from the centre pixel. After we do this, let's compute the kernel terms without the \alpha term.
kern = 1 ./ (1 + dis)
kern =
0.2612 0.3090 0.3333 0.3090 0.2612
0.3090 0.4142 0.5000 0.4142 0.3090
0.3333 0.5000 1.0000 0.5000 0.3333
0.3090 0.4142 0.5000 0.4142 0.3090
0.2612 0.3090 0.3333 0.3090 0.2612
Step #3
The last step we need is to normalize the mask so that the total sum of the kernel is 1. This can simply be done by:
kernFinal = kern / sum(kern(:))
kernFinal =
0.0275 0.0325 0.0351 0.0325 0.0275
0.0325 0.0436 0.0526 0.0436 0.0325
0.0351 0.0526 0.1052 0.0526 0.0351
0.0325 0.0436 0.0526 0.0436 0.0325
0.0275 0.0325 0.0351 0.0325 0.0275
This should finally give you the correlation kernel that you are seeking. You can now use this in convolution (i.e. using imfilter or conv2).
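For instance, a minimal usage sketch (assuming a grayscale image; the file name is just an example):
% Filter the image with the kernel; 'conv' requests true convolution and
% 'replicate' pads borders by repeating edge pixels
I = im2double(imread('cameraman.tif'));
out = imfilter(I, kernFinal, 'conv', 'replicate');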
Hopefully I have answered your question adequately. Good luck!

Problem in finding eigenvectors of a matrix in MATLAB

I have a symmetric matrix with the elements A=[8.8191,0,1.0261; 0,3,0; 1.0261,0,3.1809];
I used the eig(A) function in MATLAB, and the eigenvalues and eigenvectors it returns are:
eigvect =
0.1736 0 0.9848
0 -1.0000 0
-0.9848 0 0.1736
eigval =
3.0000 0 0
0 3.0000 0
0 0 9.0000
The eigenvalues are correct, but the eigenvectors are not what I expect, because I think two of them should be equal. Does MATLAB calculate the eigenvectors correctly?
The definition of an eigenvalue can be found anywhere on the web:
A*v = lam*v
with v being the eigenvector and lam its corresponding eigenvalue.
So test your results:
i = 1;
A*eigvect(:,i) - eigval(i,i)*eigvect(:,i) %which should be approx [0;0;0]
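You can also check all three eigenpairs at once; a minimal sketch:
% A*V should equal V*D for every column; the residual should be near zero
residual = norm(A*eigvect - eigvect*eigval)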
A repeated eigenvalue need not have a full set of independent associated eigenvectors. This means an n x n matrix with an eigenvalue repeated more than once can have fewer than n linearly independent eigenvectors.
Example 1: Matrix
2 0;
0 2
has eigenvalue 2 (repeated twice), but it has two linearly independent eigenvectors associated with eigenvalue 2.
Example 2: Matrix
A= 1 1 1 -2;
0 1 0 -1;
0 0 1 1;
0 0 0 1
has eigenvalue 1 (repeated four times), but it has only two linearly independent eigenvectors associated with eigenvalue 1.
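You can confirm this numerically (a sketch): the number of independent eigenvectors for an eigenvalue lam is its geometric multiplicity, n - rank(A - lam*eye(n)):
% Example 2: eigenvalue 1 repeats four times but has only two eigenvectors
A = [1 1 1 -2; 0 1 0 -1; 0 0 1 1; 0 0 0 1];
geoMult = size(A,1) - rank(A - 1*eye(4)) % returns 2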