Truncated Gaussian kernel implementation in MATLAB, right? - matlab

I have the definition of the truncated Gaussian kernel as:
So I am confused about which is the correct implementation of a truncated Gaussian kernel. Please look at the two cases below and let me know which is right. Thank you so much.
Case 1:
G_truncated=fspecial('gaussian',round(2*sigma)*2 + 1,sigma); % kernel
Case 2:
G=fspecial('gaussian',round(2*sigma)*2 + 1,sigma); % normal distribution kernel
B = ones(round(2*sigma)*2 + 1,round(2*sigma)*2 + 1);
G_truncated=G.*B;
G_truncated = G_truncated/sum(G_truncated(:)); % normalized so the sum is 1

To add on to the previous answer, there is the question of how to implement the kernel. You could use fspecial, truncate the kernel so that anything outside of the radius is zero, then renormalize it, but I'm assuming you'll want to do this from first principles... so let's figure that out. First, you need to generate a spatial map of distances from the centre of the mask. In conjunction, you use this map to figure out what the (un-normalized) Gaussian values would be. You then zero out the values in the un-normalized mask that fall outside the radius according to the distance map, and normalize the result. As such, given your standard deviation tau and your radius rho, you can do this:
%// Find grid of points
[X,Y] = meshgrid(-rho : rho, -rho : rho)
dists = (X.^2 + Y.^2); %// Find distances from the centre (Euclidean distance squared)
gaussVal = exp(-dists / (2*tau*tau)); %// Find unnormalized Gaussian values
%// Filter out those locations that are outside radius and set to 0
gaussVal(dists > rho^2) = 0;
%// Now normalize
gaussMask = gaussVal / (sum(gaussVal(:)));
Here is an example using rho = 2 and tau = 2, with the outputs at each stage:
Step #1 - Find grid co-ordinates
>> X
X =
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
>> Y
Y =
-2 -2 -2 -2 -2
-1 -1 -1 -1 -1
0 0 0 0 0
1 1 1 1 1
2 2 2 2 2
Step #2 - Find distances from centre and unnormalized Gaussian values
>> dists
dists =
8 5 4 5 8
5 2 1 2 5
4 1 0 1 4
5 2 1 2 5
8 5 4 5 8
>> gaussVal
gaussVal =
0.3679 0.5353 0.6065 0.5353 0.3679
0.5353 0.7788 0.8825 0.7788 0.5353
0.6065 0.8825 1.0000 0.8825 0.6065
0.5353 0.7788 0.8825 0.7788 0.5353
0.3679 0.5353 0.6065 0.5353 0.3679
Step #3 - Filter out locations that don't belong within the radius and set to 0
gaussVal =
0 0 0.6065 0 0
0 0.7788 0.8825 0.7788 0
0.6065 0.8825 1.0000 0.8825 0.6065
0 0.7788 0.8825 0.7788 0
0 0 0.6065 0 0
Step #4 - Normalize so sum is equal to 1
gaussMask =
0 0 0.0602 0 0
0 0.0773 0.0876 0.0773 0
0.0602 0.0876 0.0993 0.0876 0.0602
0 0.0773 0.0876 0.0773 0
0 0 0.0602 0 0
To verify that the mask sums to 1, just do sum(gaussMask(:)) and you'll see it's equal to 1... more or less :)
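If you then want to apply this mask to an image, here is a minimal usage sketch (the test image is an arbitrary choice; im2double assumes the Image Processing Toolbox):
img = im2double(imread('cameraman.tif')); %// any grayscale image will do
filtered = conv2(img, gaussMask, 'same'); %// 2D convolution, output the same size as the input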

Your definition of a truncated Gaussian kernel is different from how MATLAB truncates filter kernels, though it generally won't matter in practice for a sizable kernel size d.
fspecial already returns a truncated AND normalized filter, so the second case is redundant: it generates exactly the same result as case 1.
From MATLAB help:
H = fspecial('gaussian',HSIZE,SIGMA) returns a rotationally
symmetric Gaussian lowpass filter of size HSIZE with standard
deviation SIGMA (positive). HSIZE can be a vector specifying the
number of rows and columns in H or a scalar, in which case H is a
square matrix.
The default HSIZE is [3 3], the default SIGMA is 0.5.
You can use fspecial('gaussian',1,sigma) to generate a 1x1 filter and see that it is indeed normalized.
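As a quick numerical check of this redundancy, a sketch with an arbitrarily chosen sigma:
sigma = 2; % arbitrary choice
sz = round(2*sigma)*2 + 1;
G = fspecial('gaussian', sz, sigma); % case 1
B = ones(sz);
G2 = G.*B; G2 = G2/sum(G2(:)); % case 2
max(abs(G(:) - G2(:))) % 0 up to floating point: both cases coincide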
To generate a filter kernel that fits your definition, you need to make B in your second case a matrix that has ones in a circular area; a sketch follows below. A less strict (but nonetheless redundant in practice) alternative is to use fspecial('disk',size) to truncate your Gaussian kernel. Don't forget to normalize it in either case.
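Here is a minimal sketch of the circular-B variant, assuming a truncation radius r = round(2*sigma) to match the kernel size above:
r = round(2*sigma);
[X,Y] = meshgrid(-r:r, -r:r);
B = double(X.^2 + Y.^2 <= r^2); % ones inside the circle, zeros outside
G = fspecial('gaussian', 2*r + 1, sigma);
G_truncated = G.*B;
G_truncated = G_truncated/sum(G_truncated(:)); % renormalize so the sum is 1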

The answer of rayryeng was very useful for me. I have only extended the Gaussian kernel to a ball kernel. The ball kernel is defined:
So, based on rayryeng's answer, we can do it with:
sigma=2;
rho=sigma;
tau=sigma;
%// Find grid of points
[X,Y] = meshgrid(-rho : rho, -rho : rho)
dists = (X.^2 + Y.^2); %// Find distances from the centre (Euclidean distance squared)
%// 1 inside the ball, 0 outside; note dists holds squared distances, so compare against rho^2
ballVal = double(dists <= rho^2);
%// Now normalize
ballMask = ballVal / (sum(ballVal(:)));
Let me know if it has any errors or problems. Thank you.

Related

Storing the output of a for loop in a (block) diagonal matrix when the output of the for loop is a 2*2 matrix?

I have a for loop with 1000 iterations whose output in each iteration is a 2*2 matrix. My question: how can I store the outputs of all iterations as a diagonal (or block-diagonal) matrix?
X_t=[0 0.0016 0 0 -0.0015 0;0 0.0005 0 0 -0.0010 0];
X=[0 2 0 0 -1 0;0 0.5 0 0 -0.01 0];
ar = linspace(1e-3,1e2,1000);
w = 1i*ar;
M = zeros(2*numel(w));
for k=1:numel(w)
P=[(10*exp(-w(k)))/(12*w(k)+1) (-8*exp(-3*w(k)))/(7*w(k)+1);
(5*exp(-8*w(k)))/(9*w(k)+1) (-17*exp(-4*w(k)))/(12*w(k)+1)];
W=[1 0;1/w(k) 0;w(k)/(1+0.3*w(k)) 0;0 1;0 1/w(k);0 w(k)/(1+0.3*w(k))];
Z=eye(2)+(X*W)*P;
Y=((1.4)^2)*eye(2);
Z_t=eye(2)+(X_t*W)*P;
index = 2*k + [-1 0];
M(index, index)=(Z'*Z_t)-(Y'*Y);
end
You should determine which indexes you want to change in each iteration:
index = 2*k + [-1 0];
M(index, index)=(Z'*Z_t)-(Y'*Y);
And to improve the performance, you should preallocate the output matrix:
M = zeros(2*numel(w));
Note that it may be more (memory) efficient to use a sparse matrix.
You can check whether your matrix is correctly built using the spy command, which graphically displays the non-zero matrix entries.
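For instance, here is a sketch of the sparse variant, reusing the loop body from the question:
M = spalloc(2*numel(w), 2*numel(w), 4*numel(w)); % preallocate room for numel(w) 2x2 blocks
for k = 1:numel(w)
% ... compute P, W, Z, Y and Z_t exactly as in the loop above ...
index = 2*k + [-1 0];
M(index, index) = (Z'*Z_t) - (Y'*Y);
end
spy(M) % displays the block-diagonal sparsity pattern
If the loop count grows much larger, collecting row/column/value triplets and calling sparse(rows, cols, vals) once is usually faster than assigning into a sparse matrix inside the loop.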

Multiplication of matrices involving inverse operation: getting infinity

In my earlier question asked here : Matlab: How to compute the inverse of a matrix
I wanted to know how to perform inverse operation
A = [1/2, (1j/2), 0;
1/2, (-1j/2), 0;
0,0,1]
T = A.*1
Tinv = inv(T)
The output is Tinv =
1.0000 1.0000 0
0 - 1.0000i 0 + 1.0000i 0
0 0 1.0000
which is the same as in the second picture. The first picture shows the matrix A.
However, for a larger matrix, say 5 by 5, if I don't use the identity and just perform the same element-wise multiplication, I get infinity values. Here is an example:
A = [1/2, (1j/2), 1/2, (1j/2), 0;
1/2, (-1j/2), 1/2, (-1j/2), 0;
1/2, (1j/2), 1/2, (1j/2), 0;
1/2, (-1j/2), 1/2, (-1j/2), 0;
0, 0 , 0 , 0, 1.00
];
T = A.*1
Tinv = inv(T)
Tinv =
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
So, I tried T = A.*I where I = eye(5), then took the inverse. Even though I no longer get infinity values, I get elements equal to 2, which are not there in the picture for the 3 by 3 case. Here is the result:
Tinv =
2.0000 0 0 0 0
0 0 + 2.0000i 0 0 0
0 0 2.0000 0 0
0 0 0 0 + 2.0000i 0
0 0 0 0 1.0000
If, for the 3 by 3 case, I use I = eye(3), then again I get elements equal to 2.
Tinv =
2.0000 0 0
0 0 + 2.0000i 0
0 0 1.0000
What is the proper method?
Question: for the general case of any m by m matrix, should I multiply using I = eye(m)? Using I prevents infinity values, but introduces new entries equal to 2. I am really confused. Please help.
UPDATE: Here is the full image. Theta is a vector of 3 unknowns, Theta1, Theta1* and Theta2, which are scalar-valued parameters. Theta1 is a complex-valued number, so we represent it in two parts, Theta1 and Theta1*, while Theta2 is a real-valued number. g is a complex-valued function. The expression for the derivative of a complex-valued function with respect to Theta evaluates to T^H. Since there are 3 unknowns, the matrix T should be of size 3 by 3.
Your problem is slightly different than you think. The symbols (I, 0) in the matrices in the images are not necessarily scalars (only for n = 1); they are actually square matrices.
I is an identity matrix and 0 is a matrix of zeros. If you treat these matrices like that, you will get the expected answers:
n = 2; % size of the sub-matrices
I = eye(n); % identity matrix
Z = zeros(n); % matrix of zeros
% your T matrix
T = [1/2*I, (1j/2)*I, Z;
1/2*I, (-1j/2)*I, Z;
Z,Z,I];
% inverse of T
Tinv1 = inv(T);
% expected result
Tinv2 = [I,I,Z;
-1j*I,1j*I,Z;
Z,Z,I];
% max difference between computed and expected
maxDist = max(abs(Tinv1(:) - Tinv2(:)))
First you should know whether you want
T = A.*eye(...)
or
T = A.*1 %// which actually does nothing
These are completely different things. Be sure what you need, then think about the code.
The reason why you get all inf is that the determinant det of your matrix is zero:
det(T) == 0
So from the mathematical point of view your result is correct: building the inverse requires dividing by det(T), and your matrix cannot be inverted. If it should be invertible, the error is in your input matrix, or again in your understanding of the actual underlying problem to solve.
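You can verify this directly on the 5 by 5 matrix from your question; a quick check:
A = [1/2, (1j/2), 1/2, (1j/2), 0;
1/2, (-1j/2), 1/2, (-1j/2), 0;
1/2, (1j/2), 1/2, (1j/2), 0;
1/2, (-1j/2), 1/2, (-1j/2), 0;
0, 0, 0, 0, 1];
det(A) %// 0: rows 1 and 3 (and rows 2 and 4) are identical
rank(A) %// 3 < 5, so A is singular and inv(A) returns Inf entries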
Edit
After your question update, it feels like you're actually looking for ctranspose instead of inv.
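A quick sketch of that comparison for the 3 by 3 case:
A = [1/2, (1j/2), 0; 1/2, (-1j/2), 0; 0, 0, 1];
A' %// conjugate transpose, the T^H from your derivation
inv(A) %// note the upper-left 2x2 block of inv(A) equals 2*A' here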

Matlab: calculate 3D similarity transformation (fitgeotrans for 3D)

How can I calculate, in MATLAB, a similarity transformation between 4 points in 3D?
I can calculate a transform matrix from
T*X = Xp,
but it will give me an affine matrix due to small errors in the point coordinates. How can I fit that matrix to a similarity one? I need something like fitgeotrans, but in 3D.
Thanks
If I am interpreting your question correctly, you seek to find all coefficients in a 3D transformation matrix that will best warp one point to another. All you really have to do is put this problem into a linear system and solve. Recall that warping one point to another in 3D is simply:
A*s = t
s = (x,y,z) is the source point, t = (x',y',z') is the target point and A would be the 3 x 3 transformation matrix that is formatted such that:
A = [a00 a01 a02]
[a10 a11 a12]
[a20 a21 a22]
Writing out the actual system of equations of A*s = t, we get:
a00*x + a01*y + a02*z = x'
a10*x + a11*y + a12*z = y'
a20*x + a21*y + a22*z = z'
The coefficients in A are what we need to solve for. Re-writing this in matrix form, we get:
[x y z 0 0 0 0 0 0] [a00] [x']
[0 0 0 x y z 0 0 0] * [a01] = [y']
[0 0 0 0 0 0 x y z] [a02] [z']
[a10]
[a11]
[a12]
[a20]
[a21]
[a22]
Given that you have four points, you would simply concatenate rows of the matrix on the left side and entries of the vector on the right:
[x1 y1 z1 0 0 0 0 0 0] [a00] [x1']
[0 0 0 x1 y1 z1 0 0 0] [a01] [y1']
[0 0 0 0 0 0 x1 y1 z1] [a02] [z1']
[x2 y2 z2 0 0 0 0 0 0] [a10] [x2']
[0 0 0 x2 y2 z2 0 0 0] [a11] [y2']
[0 0 0 0 0 0 x2 y2 z2] [a12] [z2']
[x3 y3 z3 0 0 0 0 0 0] * [a20] = [x3']
[0 0 0 x3 y3 z3 0 0 0] [a21] [y3']
[0 0 0 0 0 0 x3 y3 z3] [a22] [z3']
[x4 y4 z4 0 0 0 0 0 0] [x4']
[0 0 0 x4 y4 z4 0 0 0] [y4']
[0 0 0 0 0 0 x4 y4 z4] [z4']
S * a = T
S would now be a matrix that contains your four source points in the format shown above, a is now a vector of the transformation coefficients in the matrix you want to solve (ordered in row-major format), and T would be a vector of target points in the format shown above.
To solve for the parameters, you simply have to use the mldivide operator or \ in MATLAB, which will compute the least squares estimate for you. Therefore:
a = S \ T
As such, simply build your matrix like above, then use the \ operator to solve for your transformation parameters. When you're done, reshape a into a 3 x 3 matrix. Therefore:
S = ... ; %// Enter in your source points here like above
T = ... ; %// Enter in your target points in a right hand side vector like above
a = S \ T;
similarity_matrix = reshape(a, 3, 3).';
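In case it helps, here is a small hypothetical helper (the function name is my own) that assembles S and T from 3 x n arrays of source and target points, following the layout above:
function [S, T] = buildSystem(src, tgt)
%// src, tgt: 3 x n matrices, one point per column
n = size(src, 2);
S = zeros(3*n, 9);
T = zeros(3*n, 1);
for k = 1 : n
p = src(:,k).'; %// [x y z] as a row
S(3*k-2, 1:3) = p; %// equation producing x'
S(3*k-1, 4:6) = p; %// equation producing y'
S(3*k, 7:9) = p; %// equation producing z'
T(3*k-2 : 3*k) = tgt(:,k);
end
end
With that, a = S \ T works for any number of points n >= 3, in the least-squares sense.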
With regards to your error in small perturbations of each of the co-ordinates, the more points you have the better. Using 4 will certainly give you a solution, but it isn't enough to mitigate any errors in my opinion.
Minor Note: This (more or less) is what fitgeotrans does under the hood. It computes the best homography given a bunch of source and target points, and determines this using least squares.
Hope this answered your question!
The answer by @rayryeng is correct, given that you have a set of up to 3 points in a 3-dimensional space. If you need to transform m points in n-dimensional space (m>n), then you first need to add m-n coordinates to these m points such that they exist in m-dimensional space (i.e. the a matrix in @rayryeng's answer becomes a square matrix)... Then the procedure described by @rayryeng will give you the exact transformation of points; you then just need to select only the coordinates of the transformed points in the original n-dimensional space.
As an example, say you want to transform the points:
(2 -2 2) -> (-3 5 -4)
(2 3 0) -> (3 4 4)
(-4 -2 5) -> (-4 -1 -2)
(-3 4 1) -> (4 0 5)
(5 -4 0) -> (-3 -2 -3)
Notice that you have m=5 points which are n=3-dimensional. So you need to add coordinates to these points such that they are n=m=5-dimensional, and then apply the procedure described by @rayryeng.
I have implemented a function that does that (find it below). You just need to organize the points such that each of the source-points is a column in a matrix u, and each of the target points is a column in a matrix v. The matrices u and v are going to be, thus, 3 by 5 each.
WARNING:
the matrix A in the function may require A LOT of memory for moderately many points nP, because it has nP^4 elements.
To overcome this, for square matrices u and v, you can simply use T=v*inv(u) or T=v/u in MATLAB notation.
The code may run very slowly...
In MATLAB:
u = [2 2 -4 -3 5;-2 3 -2 4 -4;2 0 5 1 0]; % setting the set of source points
v = [-3 3 -4 4 -3;5 4 -1 0 -2;-4 4 -2 5 -3]; % setting the set of target points
T = findLinearTransformation(u,v); % calculating the transformation
You can verify that T is correct by:
I = eye(5);
uu = [u;I((3+1):5,1:5)]; % filling-up the matrix of source points so that you have 5-d points
w = T*uu; % calculating target points
w = w(1:3,1:5); % recovering the 3-d points
w - v % w should match v ... notice that the error between w and v is really small
The function that calculates the transformation matrix:
function [T,A] = findLinearTransformation(u,v)
% finds a matrix T (nP X nP) such that T * u(:,i) = v(:,i)
% u(:,i) and v(:,i) are n-dim col vectors; the amount of col vectors in u and v must match (and are equal to nP)
%
if any(size(u) ~= size(v))
error('findLinearTransform:u','u and v must be the same shape and size n-dim vectors');
end
[n,nP] = size(u); % n -> dimensionality; nP -> number of points to be transformed
if nP > n % if the number of points to be transform exceeds the dimensionality of points
I = eye(nP);
u = [u;I((n+1):nP,1:nP)]; % then fill up the points to be transformed with the identity matrix
v = [v;I((n+1):nP,1:nP)]; % as well as the transformed points
[n,nP] = size(u);
end
A = zeros(nP*n,n*n);
for k = 1:nP
for i = ((k-1)*n+1):(k*n)
A(i,mod((((i-1)*n+1):(i*n))-1,n*n) + 1) = u(:,k)';
end
end
v = v(:);
T = reshape(A\v, n, n).';
end

How to implement correlation kernel for image

I read a paper that defines a correlation kernel as:
W(x−y) = α / (1 + d(|y−x|))
where α = (∫ 1/(1 + d(|y−x|)) dy)^(−1) and d(|y−x|) is the spatial Euclidean distance from the central pixel.
Given an image I, could you help me implement convolution of that kernel with the image in MATLAB code? Thank you so much.
OK! Sorry for the delay. Referencing the paper, the convolution kernel can be written as W(x−y) = α / (1 + d(|y−x|)). Here d(|y-x|) is the Euclidean distance between the centre pixel y and a location x in the kernel, and α is used to ensure that the entire area under the kernel is 1. However, you did not specify how big this kernel is. As such, we will specify the rows and columns of this kernel to be M and N respectively. Let's also assume that the size of the kernel in each dimension is odd. The reason why is that the kernel then has a well-defined centre pixel, which makes implementation easier. As such, here are the steps that I would perform to do this:
Define a grid of X and Y co-ordinates, and ensure that the centre pixel is at 0.
Compute each term in the convolution kernel without the α term.
Sum up all of the terms in the kernel, then divide every value in this kernel by this term so that the entire area of the kernel is 1.
Let's do this step by step:
Step #1
We can do this by using meshgrid. meshgrid (in this case) creates a 2D grid of (X,Y) co-ordinates: X defines the horizontal co-ordinate for each location in the grid, while Y defines it vertically. By calling meshgrid(1:m, 1:n), I am creating an n x m grid for both X and Y, where each row of X progresses from 1 to m, while each column of Y progresses from 1 to n. Therefore, these will both be n x m matrices. Calling the above with m = 4 and n = 4 computes:
m = 4;
n = 4;
[X,Y] = meshgrid(1:m, 1:n)
X =
1 2 3 4
1 2 3 4
1 2 3 4
1 2 3 4
Y =
1 1 1 1
2 2 2 2
3 3 3 3
4 4 4 4
As such, we simply have to modify this, but we ensure that the centre is at (0,0), and also ensure that the size of X and Y are odd. Let's say that M = 5 and N = 5. We can then define our X and Y co-ordinates like so:
M = 5;
N = 5;
[X,Y] = meshgrid(-floor(N/2):floor(N/2), -floor(M/2):floor(M/2))
X =
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
-2 -1 0 1 2
Y =
-2 -2 -2 -2 -2
-1 -1 -1 -1 -1
0 0 0 0 0
1 1 1 1 1
2 2 2 2 2
As you can see here, the centre pixel for both X and Y are defined as (0,0). Everywhere else has its (X,Y) co-ordinates defined with respect to the centre.
Step #2
We simply have to compute the Euclidean distance between the centre pixel to every point in the kernel. This can be done by:
dis = sqrt(X.^2 + Y.^2)
dis =
2.8284 2.2361 2.0000 2.2361 2.8284
2.2361 1.4142 1.0000 1.4142 2.2361
2.0000 1.0000 0 1.0000 2.0000
2.2361 1.4142 1.0000 1.4142 2.2361
2.8284 2.2361 2.0000 2.2361 2.8284
Doing some quick calculation checks, you can see that this agrees with our understanding of Euclidean distance. Moving to the left by 1 from the centre is a distance of 1. Moving to the left by 1 then up by 1 gives us a Euclidean distance of sqrt(1^2 + 1^2) = sqrt(2) = 1.4142. Doing similar checks with each element will demonstrate that this is indeed a Euclidean distance field from the centre pixel. After we do this, let's compute the kernel terms without the α term.
kern = 1 ./ (1 + dis)
kern =
0.2612 0.3090 0.3333 0.3090 0.2612
0.3090 0.4142 0.5000 0.4142 0.3090
0.3333 0.5000 1.0000 0.5000 0.3333
0.3090 0.4142 0.5000 0.4142 0.3090
0.2612 0.3090 0.3333 0.3090 0.2612
Step #3
The last step we need is to normalize the mask so that the total sum of the kernel is 1. This can simply be done by:
kernFinal = kern / sum(kern(:))
kernFinal =
0.0275 0.0325 0.0351 0.0325 0.0275
0.0325 0.0436 0.0526 0.0436 0.0325
0.0351 0.0526 0.1052 0.0526 0.0351
0.0325 0.0436 0.0526 0.0436 0.0325
0.0275 0.0325 0.0351 0.0325 0.0275
This should finally give you the correlation kernel that you are seeking. You can now use this in convolution (i.e. using imfilter or conv2).
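For instance, a minimal usage sketch (the test image is an arbitrary choice):
I = im2double(imread('cameraman.tif')); % any grayscale image
out = imfilter(I, kernFinal, 'replicate', 'conv'); % convolve, replicating the image borders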
Hopefully I have answered your question adequately. Good luck!

Algorithm for best-effort classification of vector

Given four binary vectors which represent "classes":
[1,0,0,0,0,0,0,0,0,0]
[0,0,0,0,0,0,0,0,0,1]
[0,1,1,1,1,1,1,1,1,0]
[0,1,0,0,0,0,0,0,0,0]
What methods are available for classifying a vector of floating point values into one of these "classes"?
Basic rounding works in most cases:
round([0.8,0,0,0,0.3,0,0.1,0,0,0]) = [1 0 0 0 0 0 0 0 0 0]
But how can I handle some interference?
round([0.8,0,0,0,0.6,0,0.1,0,0,0]) != [1 0 0 0 0 1 0 0 0 0]
This second case should be a better match for 1000000000, but instead I have lost the solution entirely, as there is no clear match.
I want to use MATLAB for this task.
Find the SSD (sum of squared differences) of your test vector with each "class" and use the one with the least SSD.
Here's some code: I added a 0 to the end of the test vector you provided since it was only 9 digits whereas the classes had 10.
CLASSES = [1,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,1
0,1,1,1,1,1,1,1,1,0
0,1,0,0,0,0,0,0,0,0];
TEST = [0.8,0,0,0,0.6,0,0.1,0,0,0];
% Find the difference between the TEST vector and each row in CLASSES
difference = bsxfun(@minus,CLASSES,TEST);
% Class differences
class_diff = sum(difference.^2,2);
% Store the row index of the vector with the minimum difference from TEST
[val CLASS_ID] = min(class_diff);
% Display
disp(CLASSES(CLASS_ID,:))
For illustrative purposes, difference looks like this:
0.2 0 0 0 -0.6 0 -0.1 0 0 0
-0.8 0 0 0 -0.6 0 -0.1 0 0 1
-0.8 1 1 1 0.4 1 0.9 1 1 0
-0.8 1 0 0 -0.6 0 -0.1 0 0 0
And the distance of each class from TEST looks like this, class_diff:
0.41
2.01
7.61
2.01
And obviously, the first one is the best match since it has the least difference.
This is the same thing as Jacob did, only with four different distance measures:
Euclidean distance
City-block distance
Cosine distance
Chebychev distance
%%
CLASSES = [1,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,1
0,1,1,1,1,1,1,1,1,0
0,1,0,0,0,0,0,0,0,0];
TEST = [0.8,0,0,0,0.6,0,0.1,0,0,0];
%%
% sqrt( sum((x-y).^2) )
euclidean = sqrt( sum(bsxfun(@minus,CLASSES,TEST).^2, 2) );
% sum( |x-y| )
cityblock = sum(abs(bsxfun(@minus,CLASSES,TEST)), 2);
% 1 - dot(x,y)/(sqrt(dot(x,x))*sqrt(dot(y,y)))
cosine = 1 - ( CLASSES*TEST' ./ (norm(TEST)*sqrt(sum(CLASSES.^2,2))) );
% max( |x-y| )
chebychev = max( abs(bsxfun(@minus,CLASSES,TEST)), [], 2 );
dist = [euclidean cityblock cosine chebychev];
%%
[minDist classIdx] = min(dist);
Pick the one you like :)
A simple Euclidean distance algorithm should suffice. The class with the minimum distance to the point would be your candidate.
http://en.wikipedia.org/wiki/Euclidean_distance
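A minimal sketch of that, reusing CLASSES and TEST from above (bsxfun keeps it compatible with older MATLAB versions):
dists = sqrt( sum(bsxfun(@minus,CLASSES,TEST).^2, 2) ); % Euclidean distance to each class
[~, classIdx] = min(dists); % row index of the nearest class
disp(CLASSES(classIdx,:))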