Meaning of result in SVM using libsvm - MATLAB

I have recently started using the libsvm package in MATLAB. I am getting the accuracy as a vector and I don't understand it. Can someone explain this output?
Thanks in advance.
predict_label =
1
-1
1
-1
-1
-1
-1
-1
1
-1
-1
1
-1
-1
-1
accuracy =
86.6667
0.5333
0.5455
prob_values =
0.6648 0.3352
0.0275 0.9725
0.5591 0.4409
0.3320 0.6680
0.2842 0.7158
0.1899 0.8101
0.4817 0.5183
0.1820 0.8180
0.7234 0.2766
0.2326 0.7674
0.0189 0.9811
0.7356 0.2644
0.2289 0.7711
0.0743 0.9257
0.0285 0.9715
This is my output from this command:
[predict_label, accuracy, prob_values] = svmpredict(testLabel, [(1:N2)' testData*trainData'], model, '-b 1')
where N2 is a fixed value. What confuses me is the accuracy term.

From this reference:
The function 'svmpredict' has three outputs. The first one, predicted_label, is a vector of predicted labels. The second output, accuracy, is a vector including accuracy (for classification), mean squared error, and squared correlation coefficient (for regression). The third is a matrix containing decision values or probability estimates (if '-b 1' is specified).
So, in the output above, 86.6667 is the classification accuracy in percent, while 0.5333 is the mean squared error and 0.5455 is the squared correlation coefficient; for a classification problem only the first entry is meaningful.
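A minimal sketch of how the vector is typically unpacked (assuming a classification model; the call mirrors the one in the question):
[predict_label, accuracy, prob_values] = svmpredict(testLabel, [(1:N2)' testData*trainData'], model, '-b 1');
acc_percent = accuracy(1);   % classification accuracy in percent (86.6667 here)
mse = accuracy(2);           % mean squared error (only meaningful for regression)
scc = accuracy(3);           % squared correlation coefficient (only meaningful for regression)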

Related

Proper way to normalize my distance matrices (MATLAB)

I have a question about a comparison I want to make between two distance matrices. Let's say that I have my ground truth matrix:
gt = [1 0 0 0 1;
0 1 0 0 1;
0 0 1 0 0;
0 0 0 1 0];
and then I have two other extracted matrices:
v1 = [0.6136 0.1012 0.1146 0.1647 0.7445;
0.2264 0.7457 -0.0015 -0.0093 1.0026;
-0.0107 0.1975 1.1219 0.1699 0.1926;
-0.0019 0.0564 0.1560 0.7723 0.0565];
v2 = [0.8209 0.1390 0.1538 0.0203 0.9997;
0.2295 0.7720 -0.0028 -0.0112 1.0329;
-0.0167 0.2593 0.8172 0.2227 0.2501;
-0.0000 0.0549 0.1561 1.2728 0.0569];
Then I want to extract the distance matrix from each column of the above matrices to the columns of the ground truth matrix gt. The way I am getting this distance is dist1 = pdist2(gt', v1', 'euclidean'); and dist2 = pdist2(gt', v2', 'euclidean');. However, the two resulting distance matrices are not comparable, right? Since the value ranges of the v1 and v2 matrices are different, I need to apply some kind of normalization in order to draw conclusions from the result (please correct me if I am wrong).
However, I am not sure whether this should happen before or after I compute the distance matrices, and what type of normalization to use. The negative values play a penalizing role, and their effect should be preserved after the normalization; for that reason I suspect I might need to apply the normalization after computing the distance matrices, although my first pick would otherwise be to normalize v1 and v2 before taking their distances to gt.
Can you please give some feedback on how and what type of normalization to apply?
Thanks
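For what it's worth, here is a minimal sketch of the "normalize v1 and v2 first" option mentioned in the question, with zscore as one assumed (not prescribed) choice of normalization; whether it treats the negative values appropriately depends on the penalizing effect you want to keep:
% v1 and v2 as defined above
v1n = reshape(zscore(v1(:)), size(v1));   % standardize each matrix as a whole
v2n = reshape(zscore(v2(:)), size(v2));
dist1 = pdist2(gt', v1n', 'euclidean');   % 5x5 distances between columns
dist2 = pdist2(gt', v2n', 'euclidean');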

Matrix to generate finite difference

I'm implementing a finite difference scheme for a 2D PDE problem and wish to avoid using a loop to generate the finite differences. For instance, to generate a second-order central difference approximation of u_xx, I can left-multiply u(x,y) by the standard tridiagonal matrix with 1, -2, 1 on its sub-, main and super-diagonals, divided by dx^2.
Is there a nice matrix representation for u_xy = (u_{i+1,j+1} + u_{i-1,j-1} - u_{i-1,j+1} - u_{i+1,j-1})/(4 dx dy)? It's a harder problem to code since it's in 2D; I'd like to multiply some matrix by u(x,y) to avoid looping. Many thanks!
If your points are stored in an N-by-N matrix then, as you said, left-multiplying by your finite difference matrix gives an approximation to u_xx. Right-multiplying by the transpose of the finite difference matrix gives an approximation to u_yy. You can get an approximation to the mixed derivative u_xy by both left- and right-multiplying by, e.g., the central difference matrix
delta_2x =
0 1 0 0 0
-1 0 1 0 0
0 -1 0 1 0
0 0 -1 0 1
0 0 0 -1 0
(then divide by the factor 4*Dx*Dy), so something like
U_xy = 1/(4*Dx*Dy) * delta_2x * U_matrix * delta_2x';
If you cast the matrix as an N^2 vector
U_vec = U_matrix(:);
then these operators can be expressed using a Kronecker product, implemented in MATLAB as kron. Stacking columns on both sides gives the identity
reshape(A*X*B,[],1) = kron(B',A)*X(:);
so for your finite difference matrices
U_xy_vec = 1/(4*Dx*Dy)*(kron(delta_2x,delta_2x)*U_vec);
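As a quick standalone sanity check of the vectorization identity with random matrices (the names here are arbitrary):
A = rand(3,4); X = rand(4,5); B = rand(5,2);
norm(kron(B',A)*X(:) - reshape(A*X*B,[],1))   % zero up to round-off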
If instead you have an N-by-M matrix U_mat, then left multiplication is equivalent to kron(eye(M),delta_2x_N) and right multiplication (by the transpose) to kron(delta_2y_M,eye(N)), where delta_2y_M (delta_2x_N) is the M-by-M (N-by-N) central difference matrix, so the combined operation is
U_xy_vec = 1/(4*Dx*Dy) * kron(delta_2y_M,delta_2x_N)*U_vec;
Here is a MATLAB code example:
N = 20;
M = 30;
Dx = 1/(N+1);   % grid spacing matching the interior points defined below
Dy = 1/(M+1);
[Y,X] = meshgrid((1:M)/(M+1),(1:N)/(N+1));
% Example solution and mixed derivative (chosen for 0 BCs)
U_mat = sin(2*pi*X).*(sin(2*pi*Y.^2));
U_xy = 8*pi^2*Y.*cos(2*pi*X).*cos(2*pi*Y.^2);
% Centred finite difference matrices
delta_x_N = 1/(2*Dx)*(diag(ones(N-1,1),1) - diag(ones(N-1,1),-1));
delta_y_M = 1/(2*Dy)*(diag(ones(M-1,1),1) - diag(ones(M-1,1),-1));
% Cast U as a vector
U_vec = U_mat(:);
% Mixed derivative operator
A = kron(delta_y_M,delta_x_N);
U_xy_num = A*U_vec;
U_xy_matrix = reshape(U_xy_num,N,M);
subplot(1,2,1)
contourf(X,Y,U_xy_matrix)
colorbar
title 'Numeric U_{xy}'
subplot(1,2,2)
contourf(X,Y,U_xy)
colorbar
title 'Analytic U_{xy}'
You can obviously create the matrix yourself, but MATLAB's gallery function provides a tridiag option for this purpose.
For example
>> full(gallery('tridiag',5,-1,2,-1))
ans =
2 -1 0 0 0
-1 2 -1 0 0
0 -1 2 -1 0
0 0 -1 2 -1
0 0 0 -1 2
Using the sparse functionality available in MATLAB to generate the finite difference approximation matrix is also a good option: it saves a great deal of memory.
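For instance, a sparse version of the N-by-N central difference matrix from the example above can be built with spdiags (a sketch reusing N and Dx from the example):
e = ones(N,1);
delta_x_N_sp = spdiags([-e e], [-1 1], N, N) / (2*Dx);   % -1 on the sub-, +1 on the super-diagonal
Since kron preserves sparsity, the mixed-derivative operator kron(delta_y_M_sp, delta_x_N_sp) then stays sparse as well.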

Compute weights by Generalized Hebbian Algorithm in MATLAB

I have a task to do some calculations in MATLAB. I use the Generalized Hebbian Algorithm to compute some weights; the algorithm's update equations are on slide 15 of this document:
http://www.eit.lth.se/fileadmin/eit/courses/eitn55/Downloads/ICA_Ch6.pdf
Here is my code:
alfa=0.5;
e=randn(3,5000);
A=[1 0 0;-0.5 0.5 0;0.3 0.1 0.1];
x=A*e;
W=rand(3);
nn=size(x);
for n=1:nn
y=W*x(:,n);
k=tril(y*y')*W;
W(:,n+1)= alfa*(y*x(:,n)'-k);
end
In my task I know that x=A*e, but I do not know whether I am iterating in the correct way. Is my for loop correct, and are the equations below correct?
y=W*x(:,n);
k=tril(y*y')*W;
W(:,n+1)= alfa*(y*x(:,n)'-k);
W(:,n+1) should print out a 3x3 matrix (that is what I understood).
When I run this code, MATLAB says:
Error using *
Inner matrix dimensions must agree.
Thanks
If you check the size of each matrix, you will see that the dimensions do not line up:
size(x)
ans =
3 5000
size(W)
ans =
3 3
so you should multiply them as
for n=1:nn
y=W*x;
end
However, this part does not make sense either:
k=tril(y'*y)*W;
because tril(y'*y) is a 5000x5000 matrix and W is 3x3. So I guess you should change it to:
k=tril(y*y')*W;
Then alfa*(y*x'-k) would be a 3x3 matrix.
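Putting these fixes together, here is a minimal corrected sketch of a sample-by-sample loop, assuming the update rule W <- W + alfa*(y*x' - tril(y*y')*W) from slide 15 of the linked notes. Note also that nn = size(x) returns the vector [3 5000], so the loop bound should be size(x,2):
alfa = 0.001;   % a small step size; 0.5 is likely too large to converge
e = randn(3,5000);
A = [1 0 0; -0.5 0.5 0; 0.3 0.1 0.1];
x = A*e;
W = rand(3);    % 3x3 weight matrix
for n = 1:size(x,2)   % iterate over samples, not over size(x)
y = W*x(:,n);         % 3x1 output for sample n
W = W + alfa*(y*x(:,n)' - tril(y*y')*W);   % GHA (Sanger's rule) update, stays 3x3
end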

Determining the separability of 3D kernels (can 1D kernels produce the result of a 3D convolution?)

I have three kernels of size 2x2x2 (as defined below by ker1, ker2, ker3). I would like to know how to determine whether these kernels are separable (for 3D convolution purposes). I have read how this can be done in MATLAB for 2D kernels, but the rank of a 3D array? I don't think there is such a thing. Maybe other methods?
The main question is: can a sequence of 1D kernels produce the result of a 3D convolution (not using the FFT)?
The MATLAB command convn is very fast for computing the convolution of a 3D array. However, I am writing a standalone C++ application, separate from MATLAB, and cannot use convn in my code. If the separability of the above kernels can be determined, it will greatly help me use 1D convolutions in my code, which are also easy to implement.
I would be thankful for the thoughts of the community on this matter.
>> % I am investigating this in MATLAB
ker1(:,:,1) =
-1 1
-1 1
ker1(:,:,2) =
-1 1
-1 1
>>
ker2(:,:,1) =
-1 -1
-1 -1
ker2(:,:,2) =
1 1
1 1
>>
ker3(:,:,1) =
-1 -1
1 1
ker3(:,:,2) =
-1 -1
1 1
>> my3Darray = ones( 200,200,200 );
>> res1 = convn( my3Darray, ker1 );
>> res2 = convn( my3Darray, ker2 );
>> res3 = convn( my3Darray, ker3 );
Yes, all three of these tensors can be written as the outer product (denoted * below) of three vectors a*b*c.
ker1 = [ 1,1] * [-1,1] * [ 1,1]
ker2 = [ 1,1] * [ 1,1] * [-1,1]
ker3 = [-1,1] * [ 1,1] * [ 1,1]
(This of course is not MATLAB syntax. You can think of the first vector as a column vector, the second as a row vector, and the third as a 'depth' vector along the third dimension.)
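One way to verify this numerically in MATLAB (a sketch using implicit expansion, available from R2016b on) is to rebuild ker1 from its three factors and compare a chain of three 1D convolutions against convn with the full kernel:
a = [1; 1];                    % column factor
b = [-1 1];                    % row factor
c = reshape([1 1], 1, 1, 2);   % factor along the third dimension
isequal(a.*b.*c, ker1)         % true: ker1 is the outer product of a, b and c
res1_sep = convn(convn(convn(my3Darray, a), b), c);
max(abs(res1_sep(:) - res1(:)))   % 0: three 1D convolutions reproduce the 3D result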

Why do sprank(A) and A\b report different ranks in MATLAB?

I have a point set P and I construct its adjacency matrix A by k-nearest neighbors. Each row of A is [...+1...-1...], indicating a pair of neighboring points. The size of A is 48348 x 8058 and sprank(A) is 8058. But when I do the following, it gives me a warning: "Warning: Rank deficient, rank = 8055, tol = 8.307912e-10."
a=A*b;
c=A\a;
and norm(c-b) is quite large. It seems something is wrong with the adjacency matrix A, but I can't figure it out. Thanks in advance!
sprank only looks at the sparsity pattern of your matrix: it returns the structural rank, which is the maximum rank over all matrices with the same nonzero pattern. A\b, on the other hand, reports the actual numerical rank, which indicates how many rows/columns of your matrix are linearly independent. For example, for the following matrix:
A = [-1 1 0 0;
0 1 -1 0;
1 0 -1 0;
0 0 1 -1]
sprank(A) is 4 but rank(A) is only 3, because you can write the third row as a linear combination of the other rows, specifically A(3,:) = A(2,:) - A(1,:).
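You can check this directly (note that sprank requires a sparse input):
sprank(sparse(A))   % 4: structural rank, based only on the nonzero pattern
rank(A)             % 3: numerical rank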
The issue you need to address is either in how you compute A (if you expect it to generate a system of linearly independent equations), or in finding a way to use A that doesn't require factorizing a rank-deficient matrix.