I am a novice when it comes to Structure from Motion (SFM). I have been trying to follow the SFM tutorial on the MathWorks website: LINK.
However, after running the code, I get this error message:
Warning: Maximum number of trials reached. Consider increasing the maximum
distance or decreasing the desired confidence.
> In vision.internal.ransac.msac (line 136)
In estimateEssentialMatrix (line 161)
In helperEstimateRelativePose (line 43)
In PERFORM_SFM (line 70)
Error using helperEstimateRelativePose (line 70)
Unable to compute the Essential matrix
Error in PERFORM_SFM (line 70)
[relativeOrient, relativeLoc, inlierIdx] = helperEstimateRelativePose(...
Could someone help me understand why this is happening, or suggest a different approach?
I just managed to solve this same error. In my case it seems I was using too many images, so the resulting equation system was overdetermined and the matrix could not be computed. I tested with a number of images similar to the example (6, in my case) and enough camera movement from frame to frame, and it works as it should.
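If your images come from an imageDatastore as in the MathWorks example, a minimal sketch of thinning the sequence (imds and imageDir are the names used in that example; the stride of 2 is purely illustrative):
imds = imageDatastore(imageDir);              % all views, as in the example
imds = imageDatastore(imds.Files(1:2:end));   % keep every other view: fewer images, more baseline between consecutive frames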
Hope this helps.
As @Ander Biguri said, consider increasing the maximum distance or decreasing the desired confidence. You can do this by modifying line 43 of the example's helper function helperEstimateRelativePose.m. Then you can add as many images as you want. After the modification, it should look like this:
[E, inlierIdx] = estimateEssentialMatrix(matchedPoints1, matchedPoints2,...
cameraParams, 'Confidence', 50, 'MaxDistance', 5);
But be careful when editing the shipped functions. In my case, I modified the function, saved it in another folder under another name, and added that folder to the path.
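For reference, a minimal sketch of that last step (the folder and file names below are placeholders, not part of the example):
mkdir('myHelpers');                              % folder for the modified copy
copyfile('helperEstimateRelativePose.m', fullfile('myHelpers','myEstimateRelativePose.m'));
addpath('myHelpers');                            % make the copy visible on the MATLAB path
% edit line 43 of the copy as shown above, then call myEstimateRelativePose(...)
% from your script instead of helperEstimateRelativePose(...)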
I hope this will help someone.
I am trying to run a repeated measures ANOVA in Matlab with 4 factors including one factor representing my subjects which I want as a random factor.
The code I have is as follows:
[p,table,stats] = anovan(COORDS_SUBJ_II,{group_hand,group_stim,group_time,group_subs},'random',4,'varnames',{'HAND','STIM','TIME','SUBS'});
Here, all variables have the same dimension, 1350x1 (all of type 'double'). I checked my code against example code found online and it matches, yet I keep getting the following error:
Error using chi2inv (line 3)
P and V must be of common size or scalars
Error in anovan>varcompest (line 838)
L = msTerm .* dfTerm ./ chi2inv(1-alpha/2,dfTerm);
Error in anovan>getRandomInfo (line 811)
[varest,varci] = varcompest(ems,randomterms,msTerm,dfTerm,alpha);
Error in anovan (line 296)
getRandomInfo(msterm,dfterm,mse,dfe,emsMat,randomterm,...
My dependent variable (COORDS_SUBJ_II) has a couple of NaNs in it, although I ran the code once with those NaNs replaced by random numbers and it still gives me the same error. I am kind of lost right now and would appreciate any help.
Best
ty
Found it out: a toolbox I downloaded a while ago also contained a chi2inv function, which shadowed the Statistics Toolbox version and prompted the error =)
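For anyone hitting the same thing, you can list every chi2inv on the path to spot the shadowing copy:
which -all chi2inv   % the first entry listed is the one MATLAB actually calls
% if a third-party toolbox shows up above the Statistics Toolbox version,
% remove its folder from the path, e.g. rmpath('C:\path\to\that\toolbox')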
I am quite new to MATLAB/programming in general and am trying to learn how to use the package "OMP Analysis" from GEOMAR to figure out source water masses (https://www.mathworks.com/matlabcentral/fileexchange/1334-omp-analysis). So far I have only tried running the test data provided with the package. I select the "interactive(listing)" option, select extended water mass analysis, and then don't change anything else. Everything works fine until the final prompt, "Do you want to see more graphic output?". I enter "y" and get the following error message:
Index exceeds matrix dimensions.
Error in contour2 (line 27)
para=A(st,:);
Error in omp2 (line 272)
contour2
Error in omp2int (line 454)
omp2
Error while evaluating DestroyedObject Callback.
From what I can tell, the error originates in that one line of contour2 and then breaks the subsequent calls. A is a 2x204 double and st is a 204x1 double.
Any help in figuring out this problem would be greatly appreciated!
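For reference, matrices of the same shapes reproduce the failing line in isolation (purely illustrative values, assuming st contains row indices larger than 2):
A  = rand(2, 204);    % same shape as A inside contour2
st = (1:204)';        % 204x1 vector of indices
para = A(st, :);      % fails: A has only 2 rows, so any index > 2 is out of range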
I have several sets of data that I want to fit, but not all of them look the same (some look like a single-peak Gaussian, some like the sum of two Gaussians with two peaks, some like Lorentzians). I wanted to try this method:
http://www.mathworks.com/matlabcentral/fileexchange/31562-data-driven-fitting-with-matlab/content/fitit.m
but the program given there is incomplete, so I cannot use it as-is (there is no line that defines 'train' and 'test'). I am rewriting it so that it works for my data (based on the given code and the demo). I was able to find the best fit, but I am also trying to use the bootstrap technique to find confidence intervals. My data are xdata and ydata; they are sorted and duplicates have been removed before I use them in my program.
cpart=cvpartition(size(xdata,1),'k',10);
tr_x=xdata(training(cpart,1));
tr_y=ydata(training(cpart,1));
tst_x=xdata(test(cpart,1));
tst_y=ydata(test(cpart,1));
all_span=linspace(0.01,0.99,99);
s = zeros(1, length(all_span));   % cross-validation error for each candidate span
for k=1:length(all_span)
f = @(tr_x,tr_y,tst_x,tst_y) norm(tst_y - mylowess(tr_x, tr_y, tst_x, all_span(k)))^2;
s(k) = sum(crossval(f,xdata,ydata,'partition',cpart));
end
[~,mj]=min(s);
n_span=all_span(mj);%n_span is the optimal span
function ys=mylowess(x1,y1,xs,span)
% smooth (x1,y1) with loess at the given span, then evaluate at xs
ys1 = smooth(x1,y1,span,'loess');
ys = interp1(x1,ys1,xs,'linear',NaN);
if any(isnan(ys))
% hold the end values flat outside the range of x1
ys(xs<x1(1)) = ys1(1);
ys(xs>x1(end)) = ys1(end);
end
So up to this point I understand the program and I have managed to find the optimal span. Now I want to find the confidence intervals, but so far I have not been able to make that work.
When I type:
NB=length(xdata);
f = @(xdata,ydata) mylowess(xdata,ydata,xdata,n_span);
yboot2 = bootstrp(NB,f,xdata,ydata)';
I get the following error
Error using griddedInterpolant
The grid vectors are not strictly monotonic increasing.
Error in interp1 (line 186)
F = griddedInterpolant(X,V,method);
Error in mylowess (line 26)
ysmooth=interp1(xdata,ysmooth1,xinput,'linear',NaN);
As I said before, there are no duplicates in xdata and I sorted it before using it in the program. Can anyone see the mistake I am making? Or is there an easier way to get the confidence intervals?
Thank you for your help.
I am working on a project that needs to use hidden Markov models, and I downloaded Kevin Murphy's toolbox. I have some problems with its usage. On the toolbox webpage, he says that the first input of dhmm_em and dhmm_logprob is the symbol sequence data, and the examples pass row vectors as data. However, when I give my symbol sequence as a row vector, I get this error:
??? Error using ==> assert at 9
assertion violated:
Error in ==> fwdback at 105
assert(approxeq(sum(alpha(:,t)),1))
Error in ==> dhmm_logprob at 17
[alpha, beta, gamma, ll] = fwdback(prior, transmat, obslik, 'fwd_only', 1);
Error in ==> mainCourseProject at 110
loglik(train_act) = dhmm_logprob(orderedSymbols, hmm{train_act}.prior, hmm{train_act}.trans, hmm{act}.emiss);
However, before giving this error, the code works for some symbol vectors. When I give my data as a column vector, the functions work fine with no errors. So why exactly am I getting this error?
You might say that I should be giving not single vectors but sets of vectors; I also tried collecting my feature vectors in a struct and passing row vectors that way, but nothing changed, I still get the assertion error.
By the way, my symbol sequence does not contain any zeros. I am doing everything almost exactly as shown in their examples, so I would be grateful if anyone could help me.
I'm not sure, but from the function call stack shown above, shouldn't the last argument be hmm{train_act}.emiss instead of hmm{act}.emiss?
In other words, when you compute the log-probability of a sequence, you should pass components that belong to the same HMM model (transition matrix, emission matrix, and prior probabilities).
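Concretely, the call from your stack trace would become (only the last argument changes):
loglik(train_act) = dhmm_logprob(orderedSymbols, ...
    hmm{train_act}.prior, hmm{train_act}.trans, hmm{train_act}.emiss);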
By the way, the ASSERT in the code is a sanity check that a vector of probabilities should sum to 1. Oftentimes, when working with very small values (log-probabilities), numerical stability issues can creep in. You could edit the APPROXEQ function to relax the comparison a bit by giving it a bigger margin of error.
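A sketch of that relaxation, assuming the KPMtools version of approxeq that accepts a tolerance as its third argument (otherwise adjust the default tolerance inside approxeq.m itself):
% in fwdback.m, around line 105, loosen the check a little:
assert(approxeq(sum(alpha(:,t)), 1, 1e-2))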
This error message and the code it refers to are human-readable. An assertion is a guard put in by the programmer to ensure that certain conditions are met. In this case, what is the condition? approxeq(sum(alpha(:,t)),1). I'd venture to say that approxeq wants the values to be approximately equal, so this boils down to: sum(alpha(:,t)) ~= 1.
Without knowing anything about the code, I'd also guess that these refer to probabilities. The probabilities of a node's edges must sum to one. Hopefully this starts you down a productive debugging path. If you can't figure out what's wrong with your input that produces this condition, start wading into the code a bit to see where this alpha vector comes from, and how it ended up invalid.
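If you want to poke at it directly, one way in (dbstop is standard MATLAB; obslik is the argument already visible in your stack trace):
dbstop in fwdback at 105    % pause right where the assertion fails
% then, at the breakpoint, inspect:
sum(alpha(:,t))             % how far from 1 is it?
obslik(:,t)                 % an all-zero column here often means the model assigns zero probability to that symbol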
I'm doing some cross-validation using a Matlab Weka Interface that I got from file exchange. My loop structure seems to work fine for Weka's Logistic classifier. However, when I try to do the exact same thing for AdaBoostM1, it throws the following error:
??? Java exception occurred: java.lang.ArrayIndexOutOfBoundsException
Error in ==> wekaClassify at 24
classProbs(t+1,:) = (classifier.distributionForInstance(testData.instance(t)))';
Error in ==> classifier_search at 225
[pred ~] = wekaClassify(matlab2weka('instance', featurelabels, tester), classifier);
I have determined through some testing that this only occurs when the number of instances in the training set is greater than the number of instances in the test set. I am sure you can see why that is a problem for me, since in most situations the training set is larger than the test set.
Is there something different about how I should format my inputs when using AdaBoost rather than Logistic? Any information you can give regarding this problem would be very helpful.
I downloaded this code from this page: http://www.mathworks.com/matlabcentral/fileexchange/21204-matlab-weka-interface
Emails bounce from the account of the guy who made it, and he doesn't seem to respond to comments on the page - I'm hoping that maybe someone here has used this.
EDIT: Here is the code that I use to train and test the classifier:
classifier = trainWekaClassifier(matlab2weka('training', featurelabels, train), 'meta.AdaBoostM1', { strcat('-P 100 -S 1 -I ', num2str(r), '-W weka.classifiers.trees.DecisionStump')});
[pred ~] = wekaClassify(matlab2weka('instance', featurelabels, tester), classifier);
I haven't used this combination of software, so I can only take a guess at what could cause this.
Are your training/testing data matrices the right way round? They should be N-by-D (N instances, D features).
If you were passing in a D-by-N training matrix and a D-by-M testing matrix, then I would expect it to fail exactly when M < N - which is what you describe - and even when it did run, it wouldn't give a meaningful result.
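If that turns out to be the case, transposing before handing the data to matlab2weka should be enough; a sketch using the variable names from your snippet:
% rows = instances, columns = features
train  = train';     % only if your matrices are currently stored features-by-instances
tester = tester';
size(train)          % should now be [numTrainingInstances numFeatures]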