I read a paper that defines a correlation kernel as:
W(x−y) = α / (1 + d(|y−x|))
where α = (∫ 1/(1 + d(|y−x|)) dy)^(−1) and d(|y−x|) is the spatial Euclidean distance of pixel y from the central pixel x.
Given an image I, could you help me implement the convolution of that kernel with the image in MATLAB code? Thank you so much.
OK! Sorry for the delay. Referencing the paper, the convolution kernel is W(x−y) = α/(1 + d(|y−x|)), where d(|y−x|) is the Euclidean distance between the centre pixel y and a location x in the kernel, and α ensures that the entire area under the kernel is 1. However, you did not specify how big this kernel is, so let's say it has M rows and N columns. Let's also assume that the size of the kernel in each dimension is odd, so that the kernel has a well-defined centre pixel, which makes the implementation easier. With that, here are the steps that I would perform to do this:
Define a grid of X and Y co-ordinates, and ensure that the centre pixel is at 0.
Compute each term in the convolution kernel without the \alpha term.
Sum up all of the terms in the kernel, then divide every value in this kernel by this term so that the entire area of the kernel is 1.
Let's do this step by step:
Step #1
We can do this by using meshgrid. meshgrid (in this case) creates a 2D grid of (X,Y) co-ordinates: X contains the horizontal co-ordinate of each location, while Y contains the vertical one. By calling meshgrid(1:m, 1:n), I am creating an n x m grid for both X and Y, where each row of X progresses from 1 to m, while each column of Y progresses from 1 to n. Therefore, these will both be n x m matrices. Calling the above with m = 4 and n = 4 computes:
m = 4;
n = 4;
[X,Y] = meshgrid(1:m, 1:n)
X =
1 2 3 4
1 2 3 4
1 2 3 4
1 2 3 4
Y =
1 1 1 1
2 2 2 2
3 3 3 3
4 4 4 4
As such, we simply have to modify this, but we ensure that the centre is at (0,0), and also ensure that the size of X and Y are odd. Let's say that M = 5 and N = 5. We can then define our X and Y co-ordinates like so:
M = 5;
N = 5;
[X,Y] = meshgrid(-floor(N/2):floor(N/2), -floor(M/2):floor(M/2))
X =
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
Y =
-2 -2 -2 -2 -2
-1 -1 -1 -1 -1
0 0 0 0 0
1 1 1 1 1
2 2 2 2 2
As you can see here, the centre pixel for both X and Y are defined as (0,0). Everywhere else has its (X,Y) co-ordinates defined with respect to the centre.
Step #2
We simply have to compute the Euclidean distance between the centre pixel to every point in the kernel. This can be done by:
dis = sqrt(X.^2 + Y.^2)
dis =
2.8284 2.2361 2.0000 2.2361 2.8284
2.2361 1.4142 1.0000 1.4142 2.2361
2.0000 1.0000 0 1.0000 2.0000
2.2361 1.4142 1.0000 1.4142 2.2361
2.8284 2.2361 2.0000 2.2361 2.8284
Doing some quick calculation checks, you can see that this agrees with our understanding of Euclidean distance. Moving to the left by 1 from the centre is a distance of 1. Moving to the left by 1 then up by 1 gives us a Euclidean distance of sqrt(1^2 + 1^2) = sqrt(2) ≈ 1.4142. Doing similar checks with each element will demonstrate that this is indeed a Euclidean distance field from the centre pixel. After we do this, let's compute the kernel terms without the \alpha term.
kern = 1 ./ (1 + dis)
kern =
0.2612 0.3090 0.3333 0.3090 0.2612
0.3090 0.4142 0.5000 0.4142 0.3090
0.3333 0.5000 1.0000 0.5000 0.3333
0.3090 0.4142 0.5000 0.4142 0.3090
0.2612 0.3090 0.3333 0.3090 0.2612
Step #3
The last step we need is to normalize the mask so that the total sum of the kernel is 1. This can simply be done by:
kernFinal = kern / sum(kern(:))
kernFinal =
0.0275 0.0325 0.0351 0.0325 0.0275
0.0325 0.0436 0.0526 0.0436 0.0325
0.0351 0.0526 0.1052 0.0526 0.0351
0.0325 0.0436 0.0526 0.0436 0.0325
0.0275 0.0325 0.0351 0.0325 0.0275
This should finally give you the correlation kernel that you are seeking. You can now convolve it with your image (e.g. using imfilter or conv2).
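If you want to double-check the arithmetic outside MATLAB, here is a small sketch in plain Python (standard library only, written purely as a cross-check of the steps above, not the answer's own code) that builds the same 5 x 5 kernel and reproduces the 0.1052 centre weight:

```python
import math

M = N = 5  # odd kernel size, as in the MATLAB example
half = N // 2

# Steps 1-2: unnormalized kernel values 1 / (1 + Euclidean distance from centre)
kern = [[1.0 / (1.0 + math.hypot(x, y))
         for x in range(-half, half + 1)]
        for y in range(-half, half + 1)]

# Step 3: divide by the total so the kernel sums to 1 (this plays the role of alpha)
total = sum(sum(row) for row in kern)
kern_final = [[v / total for v in row] for row in kern]

print(round(kern_final[half][half], 4))  # 0.1052, matching the centre entry above
```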
Hopefully I have answered your question adequately. Good luck!
I use MATLAB to calculate the distance transform of a binary image, and I found that bwdist() calculates the distances for all points of the image, but I just want to know the distance at one particular point.
For example, I have a binary image like this:
image =
1 0 0
0 0 1
0 0 0
bwdist() computes the distance transform at all points:
>> bwdist(image)
ans =
0 1.0000 1.0000
1.0000 1.0000 0
2.0000 1.4142 1.0000
But I just want the distance at the point image(3,2), so the function should give me 1.4142.
Is there a function that can do this?
You can use find to get the row and column indices of all 1's, then use pdist2 from the Statistics and Machine Learning Toolbox to calculate the distances of all 1's from the search point (3,2), and finally choose the minimum of those distances to get the final output. Here's the implementation shown as a sample run -
>> image
image =
1 0 0
0 0 1
0 0 0
>> point
point =
3 2
>> [R,C] = find(image);
>> min(pdist2([R C],point))
ans =
1.4142
If you don't have access to pdist2, you can use bsxfun to replace it like so -
min(sqrt(sum(bsxfun(@minus,[R C],point).^2,2)))
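For readers without either toolbox, the find + pdist2 + min logic is easy to sanity-check in plain Python (a hedged translation, not MATLAB code): collect the 1-based coordinates of the 1's, compute the Euclidean distances to the query point, and take the minimum:

```python
import math

image = [[1, 0, 0],
         [0, 0, 1],
         [0, 0, 0]]
point = (3, 2)  # 1-based (row, col), as in MATLAB

# Equivalent of [R,C] = find(image): 1-based coordinates of all 1's
coords = [(r + 1, c + 1)
          for r, row in enumerate(image)
          for c, v in enumerate(row) if v == 1]

# Equivalent of min(pdist2([R C], point))
d = min(math.hypot(r - point[0], c - point[1]) for r, c in coords)
print(round(d, 4))  # 1.4142
```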
I would like to implement the Power Method for determining the dominant eigenvalue and eigenvector of a matrix in MATLAB.
Here's what I wrote so far:
%function to implement power method to compute dominant
%eigenvalue/eigenvector
function [m,y_final]=power_method(A,x);
m=0;
n=length(x);
y_final=zeros(n,1);
y_final=x;
tol=1e-3;
while(1)
mold=m;
y_final=A*y_final;
m=max(y_final);
y_final=y_final/m;
if (m-mold)<tol
break;
end
end
end
With the above code, here is a numerical example:
A=[1 1 -2;-1 2 1; 0 1 -1]
A =
1 1 -2
-1 2 1
0 1 -1
>> x=[1 1 1];
>> x=x';
>> [m,y_final]=power_method(A,x);
>> A*x
ans =
0
2
0
When comparing with the eigenvalues and eigenvectors of the above matrix in MATLAB, I did:
[V,D]=eig(A)
V =
0.3015 -0.8018 0.7071
0.9045 -0.5345 0.0000
0.3015 -0.2673 0.7071
D =
2.0000 0 0
0 1.0000 0
0 0 -1.0000
The eigenvalue coincides, but the eigenvector should be approaching [1/3 1 1/3]. Here, I get:
y_final
y_final =
0.5000
1.0000
0.5000
Is this acceptable to see this inaccuracy, or am I making some mistake?
You have the correct implementation, but you're only checking the eigenvalue for convergence, not the eigenvector. The power method estimates both the dominant eigenvector and eigenvalue, so it's probably a good idea to check that both have converged. When I did that, I managed to get [1/3 1 1/3]. Here is how I modified your code to facilitate this:
function [m,y_final]=power_method(A,x)
m=0;
n=length(x);
y_final=x;
tol=1e-10; %// Change - make tolerance smaller to ensure convergence
while(1)
mold = m;
y_old=y_final; %// Change - Save old eigenvector
y_final=A*y_final;
m=max(y_final);
y_final=y_final/m;
if abs(m-mold) < tol && norm(y_final-y_old,2) < tol %// Change - Check for both
break;
end
end
end
When I run the above code with your example input, I get:
>> [m,y_final]=power_method(A,x)
m =
2
y_final =
0.3333
1.0000
0.3333
On a side note with regards to eig, MATLAB most likely scaled that eigenvector using another norm. Remember that eigenvectors are not unique and are only defined up to scale. If you want to be sure, simply take the first column of V, which corresponds to the dominant eigenvector, and divide by its largest value so that one component is normalized to 1, just like the Power Method:
>> [V,D] = eig(A);
>> V(:,1) / max(abs(V(:,1)))
ans =
0.3333
1.0000
0.3333
This agrees with what you have observed.
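As a cross-check of the modified algorithm, here is the same iteration in plain Python (a hedged translation of the MATLAB above, not the original code): multiply by A, normalize by the largest element, and require both the eigenvalue and the eigenvector to converge:

```python
# Power method on the question's matrix, with max-element normalization
# and the double convergence test from the modified MATLAB code.
tol = 1e-10
A = [[1, 1, -2], [-1, 2, 1], [0, 1, -1]]
y = [1.0, 1.0, 1.0]  # starting vector x = [1 1 1]'
m = 0.0
while True:
    m_old, y_old = m, y
    y = [sum(A[i][j] * y_old[j] for j in range(3)) for i in range(3)]  # y = A*y
    m = max(y)                      # eigenvalue estimate
    y = [v / m for v in y]          # normalize by the largest element
    if abs(m - m_old) < tol and max(abs(a - b) for a, b in zip(y, y_old)) < tol:
        break
print(m, [round(v, 4) for v in y])  # 2.0 [0.3333, 1.0, 0.3333]
```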
I have a matrix A composed by 4 vectors (columns) of 12 elements each
A = [ 0 0 0 0;
0.0100 0.0100 0.0100 0;
0.3000 0.2700 0.2400 0.2400;
0.0400 0 0.0200 0.0200;
0.1900 0.0400 0.0800 0.0800;
0.1600 0.6500 0.2100 0.3800;
0.0600 0.0100 0.0300 0.0200;
0.1500 0.0100 0.0600 0.1700;
0 0 0 0.0800;
0.0300 0 0.0200 0.0100;
0.0700 0 0.1200 0.0100;
0 0 0.2300 0]
I also have a similarity matrix that states how much a vector is similar to the others
SIM =[1.00 0.6400 0.7700 0.8300;
0.6400 1.0000 0.6900 0.9100;
0.7700 0.6900 1.0000 0.7500;
0.8300 0.9100 0.7500 1.0000]
reading the rows of this matrix
vector 1 is 64% similar to vector 2
vector 1 is 77% similar to vector 3
...
I would like to create a dendrogram graph that shows me how many different groups there are in A considering a threshold of 0.95 for similarity (i.e. if 2 groups have a similarity >0.7 connect them)
I didn't really understand how to use this function with my data...
I'm not sure I understood your question correctly, but from what I understood I would do this:
DSIM = squareform(1-SIM); % convert to a dissimilarity vector
it gives the result:
% DSIM = 0.3600 0.2300 0.1700 0.3100 0.0900 0.2500
% DSIM = 1 vs 2 , 1 vs 3 , 1 vs 4, 2 vs 3, 2 vs 4, 3 vs 4 ;
After, compute the linkage:
Z = linkage(DSIM,'average'); % there are other grouping options besides 'average'
You can plot the dendrogram with:
dendrogram(Z)
However, you want to split the groups according to a threshold so:
c = 0.1;
This is the dissimilarity at which to cut, here it means that two groups will be connected if they have a similarity higher than 0.9
T = cluster(Z,'cutoff',c,'criterion','distance')
The result of T in that case is:
T =
1
2
3
2
This means that at this level your vectors 1, 2, 3, 4 (call them A B C D) are organized in 3 groups:
A
B,D
C
Also, with c = 0.3, or 0.7 similarity:
T = 1 1 1 1
So there is just one group here.
To have that on the dendrogram you can calculate the number of groups:
num_grp = numel(unique(T));
After:
dendrogram(Z,num_grp,'labels',{'A','B','C','D'})
In that case the dendrogram won't display all the leaves, because you set the maximum number of nodes equal to the number of groups.
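For this small example, the cutoff behaviour can be sanity-checked without the toolbox. The sketch below, in plain Python, rebuilds the squareform dissimilarity vector and then counts connected components of the graph that links every pair whose dissimilarity is below the cutoff. This is a hedged simplification: it happens to match cluster's output here, but in general the linkage tree, not the raw pairwise graph, decides where the cut falls.

```python
SIM = [[1.00, 0.64, 0.77, 0.83],
       [0.64, 1.00, 0.69, 0.91],
       [0.77, 0.69, 1.00, 0.75],
       [0.83, 0.91, 0.75, 1.00]]
n = len(SIM)

# squareform(1 - SIM): pairs in the order (1,2),(1,3),(1,4),(2,3),(2,4),(3,4)
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
DSIM = [round(1 - SIM[i][j], 2) for i, j in pairs]
print(DSIM)  # [0.36, 0.23, 0.17, 0.31, 0.09, 0.25]

def groups(cutoff):
    """Connected components of the graph linking pairs with dissimilarity < cutoff."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a
    for (i, j), d in zip(pairs, DSIM):
        if d < cutoff:
            parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

print(groups(0.1))  # 3 groups: {A}, {B,D}, {C}
print(groups(0.3))  # 1 group
```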
I have a definition of a truncated Gaussian kernel, and I am confused about which is the correct implementation of it. Consider these two cases and let me know which is right, thank you so much.
Case 1:
G_truncated=fspecial('gaussian',round(2*sigma)*2 + 1,sigma); % kernel
Case 2:
G=fspecial('gaussian',round(2*sigma)*2 + 1,sigma); % normal distribution kernel
B = ones(round(2*sigma)*2 + 1,round(2*sigma)*2 + 1);
G_truncated=G.*B;
G_truncated = G_truncated/sum(G_truncated(:)); %normalized for sum=1
To add on to the previous post, there is a question of how to implement the kernel. You could use fspecial, truncate the kernel so that anything outside of the radius is zero, then renormalize it, but I'm assuming you'll want to do this from first principles.... so let's figure that out then. First, you need to generate a spatial map of distances from the centre of the mask. In conjunction, you use this to figure out what the Gaussian values (un-normalized) would be. You filter out those values in the un-normalized mask based on the spatial map of distances, then normalize that. As such, given your standard deviation tau, and your radius rho, you can do this:
%// Find grid of points
[X,Y] = meshgrid(-rho : rho, -rho : rho)
dists = (X.^2 + Y.^2); %// Find distances from the centre (Euclidean distance squared)
gaussVal = exp(-dists / (2*tau*tau)); %// Find unnormalized Gaussian values
%// Filter out those locations that are outside radius and set to 0
gaussVal(dists > rho^2) = 0;
%// Now normalize
gaussMask = gaussVal / (sum(gaussVal(:)));
Here is an example with using rho = 2 and tau = 2 with the outputs at each stage:
Step #1 - Find grid co-ordinates
>> X
X =
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
>> Y
Y =
-2 -2 -2 -2 -2
-1 -1 -1 -1 -1
0 0 0 0 0
1 1 1 1 1
2 2 2 2 2
Step #2 - Find distances from centre and unnormalized Gaussian values
>> dists
dists =
8 5 4 5 8
5 2 1 2 5
4 1 0 1 4
5 2 1 2 5
8 5 4 5 8
>> gaussVal
gaussVal =
0.3679 0.5353 0.6065 0.5353 0.3679
0.5353 0.7788 0.8825 0.7788 0.5353
0.6065 0.8825 1.0000 0.8825 0.6065
0.5353 0.7788 0.8825 0.7788 0.5353
0.3679 0.5353 0.6065 0.5353 0.3679
Step #3 - Filter out locations that don't belong within the radius and set to 0
gaussVal =
0 0 0.6065 0 0
0 0.7788 0.8825 0.7788 0
0.6065 0.8825 1.0000 0.8825 0.6065
0 0.7788 0.8825 0.7788 0
0 0 0.6065 0 0
Step #4 - Normalize so sum is equal to 1
gaussMask =
0 0 0.0602 0 0
0 0.0773 0.0876 0.0773 0
0.0602 0.0876 0.0993 0.0876 0.0602
0 0.0773 0.0876 0.0773 0
0 0 0.0602 0 0
To verify that the mask sums to 1, just do sum(gaussMask(:)) and you'll see it's equal to 1... more or less :)
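The four steps are easy to reproduce outside MATLAB too; here is a plain-Python sketch (standard library only, offered as a cross-check of the recipe above rather than as the answer's code) for the same rho = 2, tau = 2 example:

```python
import math

rho, tau = 2, 2  # radius and standard deviation from the example

# Squared distances from the centre and unnormalized Gaussian values
coords = range(-rho, rho + 1)
dists = [[x * x + y * y for x in coords] for y in coords]
gauss = [[math.exp(-d / (2 * tau * tau)) for d in row] for row in dists]

# Zero out everything outside the radius, then normalize to unit sum
gauss = [[g if d <= rho * rho else 0.0 for g, d in zip(grow, drow)]
         for grow, drow in zip(gauss, dists)]
total = sum(sum(row) for row in gauss)
mask = [[g / total for g in row] for row in gauss]

print(round(mask[rho][rho], 4))  # 0.0993, the centre weight shown above
```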
Your definition of the truncated Gaussian kernel is different from how MATLAB truncates filter kernels, though in practice it generally won't matter for sizable kernels.
fspecial already returns a truncated AND normalized filter, so the second case is redundant: it generates exactly the same result as case 1.
From MATLAB help:
H = fspecial('gaussian',HSIZE,SIGMA) returns a rotationally
symmetric Gaussian lowpass filter of size HSIZE with standard
deviation SIGMA (positive). HSIZE can be a vector specifying the
number of rows and columns in H or a scalar, in which case H is a
square matrix.
The default HSIZE is [3 3], the default SIGMA is 0.5.
You can use fspecial('gaussian',1,sigma) to generate a 1x1 filter and see that it is indeed normalized.
To generate a filter kernel that fits your definition, you need to make B in your second case a matrix that has ones in a circular area. A less strict (but nonetheless redundant in practice) solution is to use fspecial('disk',size) to truncate your gaussian kernel. Don't forget to normalize it in either case.
The answer of rayryeng was very useful for me. I only extended the Gaussian kernel to a ball kernel, which is 1 inside a ball of radius sigma and 0 outside, then normalized.
Based on rayryeng's answer, we can do it by:
sigma=2;
rho=sigma;
tau=sigma;
%// Find grid of points
[X,Y] = meshgrid(-rho : rho, -rho : rho)
dists = (X.^2 + Y.^2); %// Find distances from the centre (Euclidean distance squared)
ballVal=dists;
ballVal(dists>sigma^2)=0; %// dists holds squared distances, so compare with sigma^2
ballVal(dists<=sigma^2)=1;
%// Now normalize
ballMask = ballVal / (sum(ballVal(:)));
Let me know if it has any errors or problems. Thank you.
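One detail worth checking: dists holds squared distances, so the ball of radius sigma needs the comparison dists <= sigma^2. With that in place, a quick plain-Python check (a cross-check sketch, not the MATLAB code itself) of the sigma = 2 mask:

```python
sigma = 2
rho = sigma  # same grid extent as rayryeng's answer

coords = range(-rho, rho + 1)
dists = [[x * x + y * y for x in coords] for y in coords]

# Ball kernel: 1 inside radius sigma, 0 outside (squared distance vs sigma^2)
ball = [[1.0 if d <= sigma ** 2 else 0.0 for d in row] for row in dists]
total = sum(sum(row) for row in ball)
mask = [[v / total for v in row] for row in ball]

print(int(total))                # 13 cells lie inside the ball
print(round(mask[rho][rho], 4))  # each nonzero weight is 1/13, i.e. 0.0769
```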
I am trying to use MATLAB to calculate the approximation of a function using the composite trapezoidal rule, and then to display the function and the approximation using a surf function and a bar3 function. The thing is that when I try to plot the function with surf(x,y,Z), I receive an error saying the dimensions mismatch.
My question is: how do I get the surf function to plot the 3D graph when my x, y and Z arrays differ in size?
I've tried to create zeros arrays of the same size as my x and y arrays and then add my values to each, then NaN the extra 0's, but as you can see each of my arrays starts with 0's, so NaN-ing wherever I find a zero would affect my plot. And I still get the same "dimensions mismatch" error, so I suppose that's because my Z array is bigger than my x and y.
Any help would be appreciated.
The code for my x and y is:
x = linspace(a,b,h); %h being 11; samples the interval between data points a and b at h points
y = linspace(c,d,k); %k being 6; samples the interval between data points c and d at k points
Z = zeros(h,k);
for i = 1:1:h
for j = 1:1:k
Z(i,j) = f(x(i),y(j));
end
end
surf(x,y,Z);
x
0 0.3000 0.6000 0.9000 1.2000 1.5000 1.8000 2.1000 2.4000 2.7000 3.0000
y
0 0.6286 1.2571 1.8857 2.5143 3.1429
Z
0 0 0 0 0 0
0 0.1764 0.2854 0.2852 0.1761 -0.0004
0 0.3528 0.5707 0.5705 0.3522 -0.0008
0 0.5292 0.8561 0.8557 0.5283 -0.0011
0 0.7056 1.1415 1.1410 0.7044 -0.0015
0 0.8820 1.4268 1.4262 0.8804 -0.0019
0 1.0584 1.7122 1.7115 1.0565 -0.0023
0 1.2348 1.9975 1.9967 1.2326 -0.0027
0 1.4112 2.2829 2.2820 1.4087 -0.0030
0 1.5876 2.5683 2.5672 1.5848 -0.0034
0 1.7640 2.8536 2.8525 1.7609 -0.0038
Error using surf (line 75)
Data dimensions must agree.
Error in CompositeTrapazoidal>btnSolve_Callback (line 167)
surf(x,y,Z);
Try surf(x,y,Z');. surf(x,y,Z) expects length(x) to match the number of columns of Z and length(y) to match the number of rows. Your Z is h-by-k (11-by-6), so transposing it gives a 6-by-11 matrix that agrees with your 11-element x and 6-element y.
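The shape rule is easy to verify even outside MATLAB; this plain-Python sketch (a cross-check, using h = 11 and k = 6 from the question) just compares the shapes involved:

```python
h, k = 11, 6  # number of x and y samples from the question

# Z(i,j) = f(x(i), y(j)) fills an h-by-k matrix (rows follow x, columns follow y)
Z = [[0.0] * k for _ in range(h)]

# surf(x, y, Z) wants size(Z) == [length(y), length(x)], i.e. 6-by-11 here,
# which is exactly what MATLAB's Z' provides
Zt = [list(col) for col in zip(*Z)]

print((len(Z), len(Z[0])))    # (11, 6): mismatched with y and x
print((len(Zt), len(Zt[0])))  # (6, 11): matches surf's expectation
```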