MATLAB Stats SVM error in testing

I am using the MATLAB 2012 SVM included in the Statistics Toolbox. I have a binary classification problem where I train on a set of vectors and test another set of vectors, as the following MATLAB code shows:
%set the maximum number of iterations
optSVM = statset('MaxIter', 1000000);
%train the classifier with a set of feature vectors
SVMtrainModel = svmtrain(training_vectors_matrix(:,2:end), training_vectors_matrix(:,1), 'kernel_function' , 'linear', 'options', optSVM, 'tolkkt', 0.01);
%read the test vectors
TestV = csvread(test_file);
%Test the feature vectors in the built classifier
TestAttribBin = svmclassify(SVMtrainModel, TestV(:,2:end))
It's quite simple code and should run normally. The training runs OK, but when I test, the following error happens:
Subscript indices must either be real positive integers or logicals.
Error in svmclassify (line 140)
outclass= glevels(outclass(~unClassified),:);
So, is there a problem with my feature vectors? If I run this same code on different feature vectors (training and testing vectors) the code runs OK. I have already checked the feature vectors and there are no NaNs. What could be the cause of this problem?

This should be solvable by keeping my generic approach to this kind of problem in mind.
1) Run the code with dbstop if error
It will now stop at the line that you provided:
outclass= glevels(outclass(~unClassified),:);
2) Check the possible solutions.
In this case I assume that glevels and outclass are both
variables. The next thing to do would be to carefully examine
everything that could be an index.
Starting inside out:
The first index is ~unClassified; as the ~ operation did not fail, it is safe to say that this is a logical vector.
The second and last index is outclass(~unClassified); this one most likely does not consist only of numbers like 1,2,3,... or true/false values.
The test for whether the values are all valid is quite simple; one of these two should hold (see the sketch below):
To confirm that the values in x are logical: class(x) should return 'logical'.
To confirm that the values in x are real positive integers: isequal(x, max(1,round(abs(x)))) should return true.
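At the breakpoint, those checks might look like this (a sketch; x stands for the suspect index vector):
x = outclass(~unClassified);                        % the suspect index vector
isValidLogical = islogical(x);                      % true if x is a valid logical index
isValidPosInt = isequal(x, max(1, round(abs(x))));  % true if x contains only real positive integers
% note: isequal returns false if x contains NaN, so NaNs are caught as well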

This problem can be solved if you remove the NaN rows from your data:
Features(~any(~isnan(Features), 2),:)=[];
Maybe you have complex numbers too; in that case use this code:
Features3(any(isnan(Features3),2),:)=0;
Features3 =real(Features3);
The first line zeroes out every row that contains a NaN value, and the second line keeps only the real part of any complex numbers.

Related

Simple binary logistic regression using MATLAB

I'm working on doing a logistic regression using MATLAB for a simple classification problem. My covariate is one continuous variable ranging between 0 and 1, while my categorical response is a binary variable of 0 (incorrect) or 1 (correct).
I'm looking to run a logistic regression to establish a predictor that would output the probability of some input observation (e.g. the continuous variable as described above) being correct or incorrect. Although this is a fairly simple scenario, I'm having some trouble running this in MATLAB.
My approach is as follows: I have one column vector X that contains the values of the continuous variable, and another equally-sized column vector Y that contains the known classification of each value of X (e.g. 0 or 1). I'm using the following code:
[b,dev,stats] = glmfit(X,Y,'binomial','link','logit');
However, this gives me nonsensical results with a p = 1.000, coefficients (b) that are extremely high (-650.5, 1320.1), and associated standard error values on the order of 1e6.
I then tried using an additional parameter to specify the size of my binomial sample:
glm = GeneralizedLinearModel.fit(X,Y,'distr','binomial','BinomialSize',size(Y,1));
This gave me results that were more in line with what I expected. I extracted the coefficients, used glmval to create estimates (Y_fit = glmval(b,[0:0.01:1],'logit');), and created an array for the fitting (X_fit = linspace(0,1)). When I overlaid the plots of the original data and the model using figure, plot(X,Y,'o',X_fit,Y_fit,'-'), the resulting plot of the model essentially looked like the lower quarter of the 'S'-shaped curve that is typical of logistic regression plots.
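For reference, a consolidated sketch of those steps, assuming glm is the model fitted above:
b = glm.Coefficients.Estimate;            % extract the fitted coefficients
X_fit = linspace(0, 1)';                  % evaluation grid
Y_fit = glmval(b, X_fit, 'logit');        % predicted probabilities
figure; plot(X, Y, 'o', X_fit, Y_fit, '-');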
My questions are as follows:
1) Why did my use of glmfit give strange results?
2) How should I go about addressing my initial question: given some input value, what's the probability that its classification is correct?
3) How do I get confidence intervals for my model parameters? glmval should be able to input the stats output from glmfit, but my use of glmfit is not giving correct results.
Any comments and input would be very useful, thanks!
UPDATE (3/18/14)
I found that mnrval seems to give reasonable results. I can use [b_fit,dev,stats] = mnrfit(X,Y+1); where Y+1 simply makes my binary classifier into a nominal one.
I can loop through [pihat,lower,upper] = mnrval(b_fit,loopVal(ii),stats); to get various pihat probability values, where loopVal = linspace(0,1) or some appropriate input range and ii = 1:length(loopVal).
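As an aside, mnrval accepts a vector of inputs directly, so the loop can probably be replaced by a single call, sketched here:
xGrid = linspace(0, 1)';                              % same range as loopVal
[pihat, lower, upper] = mnrval(b_fit, xGrid, stats);  % probabilities with confidence bounds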
The stats parameter has a great correlation coefficient (0.9973), but the p-values for b_fit are 0.0847 and 0.0845, which I'm not quite sure how to interpret. Any thoughts? Also, why would mnrfit work where glmfit did not in my example? I should note that the p-values for the coefficients when using GeneralizedLinearModel.fit were both p << 0.001, and the coefficient estimates were quite different as well.
Finally, how does one interpret the dev output from the mnrfit function? The MATLAB document states that it is "the deviance of the fit at the solution vector. The deviance is a generalization of the residual sum of squares." Is this useful as a stand-alone value, or is this only compared to dev values from other models?
It sounds like your data may be linearly separable. In short, since your input data is one-dimensional, that means there is some value of x, say xDiv, such that all values of x < xDiv belong to one class (say y = 0) and all values of x > xDiv belong to the other class (y = 1).
If your data were two-dimensional this means you could draw a line through your two-dimensional space X such that all instances of a particular class are on one side of the line.
This is bad news for logistic regression (LR) as LR isn't really meant to deal with problems where the data are linearly separable.
Logistic regression is trying to fit a function of the following form:
y = 1 / (1 + exp(-(b0 + b1*x)))
This will only return values of y = 0 or y = 1 when the expression inside the exponential in the denominator goes to negative infinity or infinity.
Now, because your data is linearly separable, and Matlab's LR function attempts to find a maximum likelihood fit for the data, you will get extreme weight values.
This isn't necessarily a solution, but try flipping the labels on just one of your data points (so for some index t where y(t) == 0 set y(t) = 1). This will cause your data to no longer be linearly separable and the learned weight values will be dragged dramatically closer to zero.
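A minimal sketch with made-up data that illustrates both effects:
% Hypothetical one-dimensional, perfectly separable data
X = [0.1 0.2 0.3 0.6 0.7 0.8]';
Y = [0 0 0 1 1 1]';
b_sep = glmfit(X, Y, 'binomial', 'link', 'logit');  % extreme coefficients, convergence warnings
Y(1) = 1;                                           % flip one label: data no longer separable
b_ok = glmfit(X, Y, 'binomial', 'link', 'logit');   % much more moderate coefficients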

Fundamental matrix keeps changing in MATLAB but stays the same using OpenCV. What might be the reason?

I have calculated the fundamental matrix using the function shown below:
[fMatrix, epipolarInliers, status] = estimateFundamentalMatrix(...
matchedPoints1, matchedPoints2, 'Method', 'RANSAC', ...
'NumTrials', 10000, 'DistanceThreshold', 0.1, 'Confidence', 99.99);
But in this case the fundamental matrix keeps varying each time I run the program.
However, when I use the OpenCV code shown below, I get the same fundamental matrix every time I run the program:
cv::Mat fundamental=cv::findFundamentalMat(cv::Mat(selPointsLeft),cv::Mat(selPointsRight),CV_FM_RANSAC);
In both cases I used SURF features to extract the matched features; matchedPoints1 = selPointsLeft and matchedPoints2 = selPointsRight.
What might the reason for this?
RANSAC is an abbreviation for "RANdom SAmple Consensus". That said, we must expect the output matrix to change as the samples are randomly picked.
OpenCV appears to seed its random number generator with a fixed default value, so the same samples are drawn and we get the same result every time we run the code.
MATLAB seems to start from a different random state on each run, hence the problem. You will have to check whether there is a way to seed the random number generation, which I am quite unsure of.
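If reproducibility is what you need, one option (a sketch) is to fix MATLAB's random seed before the call:
rng(0);  % any fixed seed makes the random sampling repeatable
[fMatrix, epipolarInliers, status] = estimateFundamentalMatrix(...
    matchedPoints1, matchedPoints2, 'Method', 'RANSAC', ...
    'NumTrials', 10000, 'DistanceThreshold', 0.1, 'Confidence', 99.99);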

Weird behavior of a sparse Matrix under MATLAB

I have been given this 63521x63521 real sparse symmetric matrix in MATLAB, and for some reason it seems to behave weirdly with some commands. I am not sure if there is a 'defect' in the matrix file or in the way I am using MATLAB's commands.
Consider the following script. I have indicated the output of each of the steps.
% Gives sparsity shown as expected, so this works fine
spy(rYbus)
% I want the top 3 singular values of rYbus, but this line returns an empty matrix. Why?
S = svds(rYbus,3);
% Set exact answer and rhs and solve the linear system with iterative and direct method
b_exact = ones(size(Ybus,1),1);
rhs = rYbus*b_exact ;
% Following line gives Warning: Matrix is singular, close to singular or badly scaled.
% Results may be inaccurate. RCOND = NaN.
% > In Ybustest at 14.
b_numerical_1 = rYbus\rhs;
% Even for a single GMRES iteration b_numerical_2 is a vector of NaNs. Why?
b_numerical_2 = gmres(rYbus,rhs,[],[],1);
Can anyone point out what may have gone wrong?
I have already used the isnan function to verify that the matrix rYbus does not contain any NaNs. The size of the matrix is 63521 x 63521.
Have you checked whether your input sparse matrix rYbus has any NaNs? If I remember correctly, svds can give you an empty matrix instead of an error in that case.
Another possible issue is the size of rYbus. What is it?
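A quick sanity check, as a sketch that keeps everything sparse:
nnzNaN = nnz(isnan(rYbus));  % number of NaN entries
nnzInf = nnz(isinf(rYbus));  % number of Inf entries
fprintf('NaNs: %d, Infs: %d\n', nnzNaN, nnzInf);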

How to find if a matrix is singular in MATLAB

I use the function below to generate the betas for a given set of guess lambdas from my optimiser.
When running I often get the following warning message:
Warning: Matrix is singular to working precision.
In NSS_betas at 9
In DElambda at 19
In Individual_Lambdas at 36
I'd like to be able to exclude any betas that come from a singular matrix from the solution set; however, I don't know how to test for that.
I've been trying to use rcond(), but I don't know where to make the cut-off between singular and non-singular.
Surely if MATLAB is generating the warning message it already knows whether the matrix is singular, so if I could just find where that variable is stored I could use it?
function betas = NSS_betas(lambda, data)
mats = data.mats2';
yM = data.y2';
nObs = size(yM, 1);
% design matrix of basis functions (Nelson-Siegel-Svensson form)
G = [ones(nObs,1) ...
     (1-exp(-mats./lambda(1)))./(mats./lambda(1)) ...
     ((1-exp(-mats./lambda(1)))./(mats./lambda(1)) - exp(-mats./lambda(1))) ...
     ((1-exp(-mats./lambda(2)))./(mats./lambda(2)) - exp(-mats./lambda(2)))];
betas = G\yM;
r = rcond(G);  % note: r is computed here but never returned
end
Thanks for the advice.
I tested all three examples below after setting the lambda values to be equal, giving a singular matrix:
if (~isinf(G))
r=rank(G);
r2=rcond(G);
r3=min(svd(G));
end
This gave r = 3, r2 = 2.602085213965190e-16, r3 = 1.075949299504113e-15.
So in this test rank() and rcond() worked, assuming I take the benchmark values given below.
However, what happens when I have two values that are close but not exactly equal?
How can I decide what is too close?
rcond is the right way to go here. If it nears the machine precision of zero, your matrix is singular. I usually go with:
if( rcond(A) < 1e-12 )
% This matrix doesn't look good
end
You can experiment with a value that suits your needs, but taking the inverse of a matrix that is even close to singular with MATLAB can produce garbage results.
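Applied to the function above, the guard might look like this (a sketch using the G and yM from NSS_betas):
if rcond(G) < 1e-12
    betas = [];   % flag this lambda pair as producing a (near-)singular matrix
else
    betas = G \ yM;
end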
You could compare the result of rank(G) with the number of columns of G. If the rank is less than the column dimension, you will have a singular matrix.
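A sketch of that comparison:
if rank(G) < size(G, 2)
    % G is rank-deficient, i.e. singular for a square matrix
end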
You can also check this by:
min(svd(A)) > eps
and verifying that the smallest singular value is larger than eps, or any other numerical tolerance that is relevant to your needs. (The expression returns 1 or 0.)
Condition number (Maximal singular value/Minimal singular value) is another good method:
cond(A)
It uses svd. It should be as close to 1 as possible. Very large values mean that the matrix is almost singular. Inf means that it is precisely singular.
Note that almost all of the methods mentioned in the other answers use the SVD in one way or another.
There are special tools designed for this problem, appropriately called "rank revealing matrix factorizations". To my best (albeit a little old) knowledge, a good enough way to decide whether a n x n matrix A is nonsingular is to go with
det(A) ~= 0 <=> rank(A) = n
and use a rank-revealing QR factorization of A:
AP = QR
where Q is orthogonal, P is a permutation matrix, and R is an upper triangular matrix in which the magnitude of the diagonal elements decreases along the diagonal.
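In MATLAB, the pivoted QR factorization [Q,R,P] = qr(A) is rank-revealing in this sense. A sketch of using it to estimate the numerical rank (the tolerance choice here is an assumption, mirroring what rank() does):
[Q, R, P] = qr(A);                       % A*P = Q*R with column pivoting
tol = max(size(A)) * eps(abs(R(1,1)));   % tolerance relative to the largest diagonal entry
numRank = nnz(abs(diag(R)) > tol);       % estimated numerical rank
isSingular = numRank < size(A, 2);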

scipy generalized eigenproblem with positive semidefinite

I want to compute generalized eigendecomposition of the form:
L f = lambda A f
by using scipy.sparse.linalg.eigs function, but get this error:
/usr/local/lib/python2.7/dist-packages/scipy/linalg/decomp_lu.py:61: RuntimeWarning: Diagonal number 65 is exactly zero. Singular matrix.
RuntimeWarning)
** On entry to DLASCL parameter number 4 had an illegal value
I am passing three arguments: a diagonal matrix, a positive semi-definite (PSD) matrix, and a numeric value K (the first K eigenvalues). MATLAB's eigs function performs well with the same input parameters, but in SciPy, as I understand it, in order to compute with a PSD matrix I need to specify the sigma parameter as well.
So, my question is: is there a way to avoid setting the sigma parameter, as in MATLAB, or if not, how should I pick a sigma value?
The error appears to mean that in your generalized eigenproblem
L x = lambda A x
the matrix A is not positive definite (check the eigs docstring -- in your case the matrix is probably singular). This is a requirement for ARPACK mode 2. However, you can try specifying sigma=0 to switch to ARPACK mode 3 (but note that the meaning of the which parameter is inverted in this case!).
Now, I'm not sure what MATLAB does, but a possibility is that it's calculating the pseudoinverse rather than the inverse of A. To emulate this, do
from scipy.sparse.linalg import eigs, LinearOperator
from scipy.linalg import lstsq
Ainv = LinearOperator(matvec=lambda x: lstsq(A, x)[0], shape=A.shape)
w, v = eigs(L, M=A, Minv=Ainv)
Check the results --- I don't know what will happen in this case.
Alternatively, you may try to specify a nonzero sigma. What you should select depends on the matrices involved. It affects which eigenvalues are picked: for instance, with which='LM' they are those for which lambda' = 1/(lambda - sigma) is large. Otherwise sigma can probably be chosen fairly freely; of course, it's better for the Krylov iteration if the transformed eigenvalues lambda' that you are interested in are well separated from the other eigenvalues.