Output after Triangulation (MATLAB) isn't accurate

I am using the MATLAB CVST (Computer Vision System Toolbox) to perform stereo camera calibration. From 28 images (6 x 7 corners), stereoParams are obtained (stereoParams.MeanReprojectionError = 0.3168).
Next, I took a checkerboard stereo pair (CB_I1 & CB_I2).
To CB_I1 I applied the following functions:
undistortImage
detectCheckerboardPoints
generateCheckerboardPoints
extrinsics: This gives me Translation vector (T) and Rotation Matrix (R)
Next, to both CB_I1 & CB_I2, I apply the following functions:
undistortImage
detectCheckerboardPoints
triangulate: this gives me worldPoints
Inverse Translation and Rotation Transform
This is my code:
CB_I1_undist = undistortImage(CB_I1, stereoParams.CameraParameters1);
CB_I2_undist = undistortImage(CB_I2, stereoParams.CameraParameters2);
[imagePoints1, ~] = detectCheckerboardPoints(CB_I1_undist);
[imagePoints, ~] = detectCheckerboardPoints(CB_I2_undist);
worldPoints = triangulate(imagePoints1,imagePoints,stereoParams);
Translated_pnts = zeros(size(worldPoints));
Translated_pnts(:,1) = worldPoints(:,1) - T(1);
Translated_pnts(:,2) = worldPoints(:,2) - T(2);
Translated_pnts(:,3) = worldPoints(:,3) - T(3);
Rotated_pnts = Translated_pnts * (R');
Transformed_points = Rotated_pnts;
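For what it's worth, the translate-then-rotate step can be collapsed into a single line; this is just a compact equivalent of the element-wise code above, not a different method:
% Same transform as above: subtract T from every row, then rotate by R'.
% bsxfun keeps this compatible with releases before implicit expansion.
Transformed_points = bsxfun(@minus, worldPoints, T) * R';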
Ultimately, Transformed_points looks like this:
and so on....for 42 points.
How do I interpret this? I expect Transformed_points to be the ideal checkerboard grid, i.e. X and Y values that are exact multiples of 40 mm with Z equal to zero, since each square size is 40 mm.
Primarily, is this an error?
What is the cause? Is it because of the high reprojection error? (If so, how can it be minimised?)
How can I reduce this error as much as possible? I want the result to be as close as possible to the ideal values. What are the different ways I can improve my algorithm's accuracy?
Let me know if you need any other information.

You say you have a mean re-projection error of ~0.3.
If I calculate the distance between your first and second point:
sqrt((-0.2006+0.1993)^2+(-1.2843-39.1922)^2+(3.0466-2.0656)^2)
ans =
40.4884
Well, that's what you expect, right? It's not 100% accurate, of course.
Also, look again at your points. They are exactly where you expect them to be. Instead of 120 you have 120.06. Instead of 160, you have 159.94.
You are missing the points by around 0.3 millimeters. 0.3 MILLIMETERS.
Take a ruler and try to measure 0.3 millimeters!
That's about 4 times the width of a human hair!
It's more or less the minimum distance the human eye can distinguish!
3 times the thickness of a sheet of paper!
0.6 times the size of an amoeba (Amoeba proteus)!
Wow, I think that's quite a good error to have, don't you?
Anyway, you can decrease that error by using more calibration images, but yeah, I'd say you are doing a good job already.
A more meaningful way of measuring the error is to compute the pixel error rather than the real physical error. If you divide the error by the physical length covered by one pixel, you know how many pixels of error you have. You will most likely see that in your case you have sub-pixel accuracy (pixel error < 1). This is very good, because it means that your error is smaller than what you can measure, so, in some sense (not really, but yeah) you are beating the Shannon principle! Good job.
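As a rough sketch of that conversion (the pixel size of a checkerboard square below is an assumed value; measure it in your own images):
squareSizeMM  = 40;      % physical square size from the question
squareSizePx  = 95;      % ASSUMPTION: square size as it appears in the image, in pixels
mmPerPixel    = squareSizeMM / squareSizePx;
physicalErrMM = 0.3;     % the ~0.3 mm reconstruction error discussed above
pixelErr      = physicalErrMM / mmPerPixel   % < 1 would mean sub-pixel accuracy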
Source of random data: http://www.wolframalpha.com/input/?i=300+micrometres

To Ander's answer I would just like to add that you should be sure to measure the square size very precisely, using a caliper. If you can get sub-millimeter accuracy on that, it will improve your reconstruction accuracy. Also, please be sure not to use lossy compression on your images, i.e. no JPEG. JPEG artifacts will also decrease your accuracy.
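A minimal sketch of where that measurement plugs in (the measured value is made up here, and boardSize is assumed to correspond to the 6 x 7 inner corners from the question):
measuredSquareSize = 39.95;   % ASSUMPTION: caliper-measured square size, in mm
boardSize = [7 8];            % ASSUMPTION: board size giving 6 x 7 inner corners
worldCorners = generateCheckerboardPoints(boardSize, measuredSquareSize);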

Related

How do I get lines to stop extending beyond plot border? (Matlab)

I am writing a report for a class and am having some issues with the lines of an unstable plot going beyond the boundary of the graph and overlapping the title and xlabel. This is despite specifying a ylim from -2 to 2. Is there a good way to solve this issue?
Thanks!
plot(X,u(:,v0),X,u(:,v1),X,u(:,v2),X,u(:,v3),X,u(:,v4))
titlestr = sprintf('Velocity vs. Distance of %s function using %s: C=%g, imax=%g, dx=%gm, dt=%gsec',ICFType,SDType,C,imax,dx,dt);
ttl=title(titlestr);
ylabl=ylabel("u (m/s)");
xlabl=xlabel("x (m)");
ylim([-2 2])
lgnd=legend('t=0','t=1','t=2','t=3','t=4');
ttl.FontSize=18;
ylabl.FontSize=18;
xlabl.FontSize=18;
lgnd.FontSize=18;
EDIT: Minimum reproducible example
mgc=randi([-900*10^10,900*10^10], [1000,2]);
mgc=mgc*1000000;
plot(mgc(:,1),mgc(:,2))
ylim([-1,1])
This is odd. It really looks like a bug... partly.
The reason is probably that the lines are so close to vertical that MATLAB runs into rounding errors when calculating the points to draw: very small axis limits combined with very large numbers. (You can see that you don't run into this problem if you don't scale the matrix mgc.)
mgc = randi([-900*10^10,900*10^10], [1000,2]);
plot(mgc(:,1),mgc(:,2))
ylim([-1,1])
but if you scale it further, you run into this problem...
mgc = randi([-900*10^10,900*10^10], [1000,2]);
plot(mgc(:,1)*1e6,mgc(:,2)*1e6)
ylim([-1,1])
While those numbers are nowhere near the maximum a double can represent (type realmax in the command window to see that this is a number with over 300 digits!), limiting the plot to [-1, 1] on one of the axes -- note that you get the same phenomenon on the x-axis -- makes MATLAB run into precision problems.
First of all, you see that it plots far fewer lines than before (in my case), although I only asked it to zoom in on the y-axis. The thing is that MATLAB does not recalculate the lines for the visible section; it really zooms into the existing drawing (I guess this may cause resolution errors with regard to pixels?).
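To put a rough number on that (a quick check, not part of the original answer): the spacing between adjacent representable doubles near the top of the scaled data range is already enormous compared with the 2-unit-tall window the axes are being asked to resolve.
eps(9e18)   % spacing of doubles near 9*10^18; this evaluates to 1024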
Well, let's have a look at the data. (Pro tip: you can get the data of a line from a MATLAB figure with this snippet:
datObj = findobj(gcf,'-property','YData','-property','XData');
X = datObj.XData;
Y = datObj.YData;
xlm = get(gca,'XLim'); % get the current x-limits
) We see that it still holds the original data set, which is not surprising, as you can also zoom out again.
Note that this only occurs if you have such a chaotic, jagged line. If you sort it, it does not happen.
Quick fix:
Now, what happens if we calculate the exact points for this section?
m = diff(Y)./diff(X);                          % slope of each line segment
n = Y(1:end-1) - m.*X(1:end-1);                % offset of each line segment
xMinus1 = (-1 - n)./m;                         % x where each segment crosses y = -1
xPlus1  = ( 1 - n)./m;                         % x where each segment crosses y = +1
% plot
plot([xMinus1; xPlus1], (ones(length(xMinus1),2).*[-1 1]).')
xlim(xlm);                                     % limit to exact same scale as before
The different colors indicate that these are now individual lines and not a single wild chaos ;)
It seems Max pretty much hit the nail on the head as to why this error is occurring. Per Enrico's advice I went ahead and submitted a bug report. MathWorks responded saying they were unsure it was "unexpected behavior" and would look into it more shortly. They also suggested a temporary workaround (which, in my case, may become permanent).
This workaround is to put
set(gca,'ClippingStyle','rectangle');
directly after the plotting line.
Below is a modified version of the minimum reproducible example with this modification.
mgc=randi([-900*10^10,900*10^10], [1000,2]);
mgc=mgc*1000000;
plot(mgc(:,1),mgc(:,2))
set(gca,'ClippingStyle','rectangle');
ylim([-1,1])

Matlab Curve fitting returns different parameter values every time

I'm trying to fit some data with the following equation, which is a modified error function and contains 4 unknown parameters (I'm mainly interested in par4):
Y = par1+(par2*(erf((X-par3)/ par4)))
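For context, a minimal sketch of how such a custom model is typically set up with the Curve Fitting Toolbox (the start points below are placeholders roughly matching the fitted values quoted further down, not known-good values):
% Custom model with four coefficients; x is the independent variable.
ft = fittype('par1 + par2*erf((x - par3)/par4)', ...
             'coefficients', {'par1','par2','par3','par4'});
% ASSUMED start points in the order par1..par4; adjust for your own data.
[fitresult, gof] = fit(X(:), Y(:), ft, 'StartPoint', [0.86 0.03 38 0.5]);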
If I set some sensible start points, the data points are fitted reasonably well (R-square ~ 0.95), but the error on some of the found parameters has a very wide range. Below is an image of the data to fit, the fitted curve and the parameters found.
Data and fit:
par4 = 0.9109 (-52.89, 54.71)
par2 = 0.02647 (0.02421, 0.02872)
par3 = 38.15 (8.128, 68.18)
par1 = 0.8647 (0.8624, 0.867)
If I change the start points even slightly (e.g. for par4, from 0.5 to 0.1), the final fit values change. Similarly, if I make the TolFun value larger, the final parameters and their errors change, as does the fit. In some cases I'm not even presented with error intervals for the found parameters. For instance (if I set the par4 start point to 3 and leave the others unchanged) I get:
par4 = 0.1712
par2 = 0.02674
par3 = 38.63
par1 = 0.8649 (0.8625, 0.8672)
As you can imagine, I'm no expert in the maths of the fitting process, but I thought that the different results mean the data is poor (no data points in the centre of the curve) and many curves can reproduce it with a relatively good R-square. Also, the large error intervals may mean that the spread in the data is large. Are my thoughts correct?
Also, what does the option TolFun control? And I know the error intervals are calculated on a 95% (2-sigma) criterion; can I change that to 68% (1-sigma) to get a narrower error interval?
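On the last point, confint accepts a confidence level as its second argument, so a narrower interval can be requested directly; a sketch, assuming fitresult is the cfit object returned by fit:
ci95 = confint(fitresult);         % default 95% confidence intervals
ci68 = confint(fitresult, 0.68);   % roughly 1-sigma intervals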

Matlab Kolmogorov-Smirnov Test

I'm using MATLAB to analyze some neuroscience data, and I made an interspike interval distribution and fit an exponential to it. Then, I wanted to check this fit using a Kolmogorov-Smirnov test with MATLAB.
The data for the neuron spikes is stored in a spikes variable: a 111-by-1 cell array, where each entry is another vector. Each entry in this spikes array represents a trial. The number of spikes in each trial varies. For example, spikes{1} is a [1x116 double], meaning there are 116 spikes. The next has 115 spikes, then 108, etc.
Now, I understand that kstest in MATLAB takes a couple of parameters. You enter the data as the first one, so I took all the interspike intervals and created a row vector alldiffs that stores them. I want to set my CDF to that of the fitted exponential:
test_cdf = [transpose(alldiffs), transpose(1-exp(-alldiffs*firingrate))];
Note that the theoretical exponential (with which I fit the data) is r*exp(-r*t), where r is the firing rate. I get a firing rate of about 0.2. Now, when I put this all together, I run the kstest:
[h,p] = kstest(alldiffs, 'CDF', test_cdf)
However, the result is a p-value on the order of 1.4455e-126. I've tried redoing test_cdf with another of the methods from the MathWorks documentation:
test_cdf = [transpose(alldiffs), cdf('exp', transpose(alldiffs), 1/firingrate)];
This gives the exact same result! Is the fit just horrible? I don't know why I get such low p-values. Please help!
I would post an image of the fit, but I don't have enough reputation.
P.S. If there is a better place to post this, let me know and I'll repost.
Here is an example with fake data and yet another way to create the CDF:
>> data = exprnd(.2, 100, 1);   % column vector of 100 samples; kstest needs a vector
>> test_cdf = makedist('exp', 'mu', .2);
>> [h, p] = kstest(data, 'CDF', test_cdf)
h =
0
p =
0.3418
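The same construction applied to the question's own data would look roughly like this (assuming alldiffs is the vector of interspike intervals and firingrate is the fitted rate, so the mean ISI is 1/firingrate):
pd = makedist('Exponential', 'mu', 1/firingrate);   % exponential with mean ISI = 1/rate
[h, p] = kstest(alldiffs(:), 'CDF', pd)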
However, why are you doing a KS Test?
All models are wrong, some are useful.
No neuron is perfectly a Poisson process, and with enough data you'll always have a significantly non-exponential ISI, as measured by a KS test. That doesn't mean you can't make the simplifying assumption of an exponential ISI, depending on what phenomena you're trying to model.

fit function of Matlab is really slow

Why is the fit function from MATLAB so slow? I'm trying to fit a gauss4 so I can get the means of the Gaussians.
Here's my plot.
I want to get the means from the blue data and the red data.
I'm fitting a Gaussian there, but this function is really slow.
Is there an alternative?
fa = fit(fn', facm', 'gauss4');
acm = [fa.b1 fa.b2 fa.b3 fa.b4];
a_cm = sort(acm, 'ascend');
I would apply some of the options available with fit. These include smoothing, by setting SmoothingParam (your data is quite noisy; applying a time-domain filter instead may also help*), and setting your initial parameter estimates with StartPoint. Your fits may also not be converging because your tolerances (TolFun, TolX) are set too low, although from inspection of your fits that does not appear to be the case; in fact the opposite is likely, and you probably want to increase MaxIter and/or MaxFunEvals.
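A sketch of how those options can be passed to fit (the start points are placeholders, not values derived from the plotted data; gauss4 needs 12 of them, [a1 b1 c1 ... a4 b4 c4]):
opts = fitoptions('gauss4');                            % default options for the gauss4 library model
opts.StartPoint   = [1 10 2, 1 30 2, 1 50 2, 1 70 2];   % ASSUMED heights, centres and widths
opts.MaxIter      = 2000;
opts.MaxFunEvals  = 4000;
fa = fit(fn', facm', 'gauss4', opts);
a_cm = sort([fa.b1 fa.b2 fa.b3 fa.b4], 'ascend');       % Gaussian means, as in the question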
To figure out how to get going, you can also try the Spectr-O-Matic toolbox. It requires MATLAB 7.12. It includes a script called GaussFit.m that fits gauss4 to data; it also uses the fit routine and provides examples of how to set and get parameters.
Note that smoothing will of course broaden your peaks, but you can subtract that contribution after the fact. The effect on the mean should not be deleterious; on the contrary, since you are presumably removing noise, the result should be more accurate.
In general, functions will be faster if you apply them to a shorter series. Hence, if speedup is really important, you could downsample.
For example, if you have a vector that you want to downsample by a factor of 2 (you may need to make sure its length is even first):
n = 2;
x = sin(0.01:0.01:pi);
x_downsampled = (x(1:n:end) + x(2:n:end))/n;   % average adjacent samples
You will now see that x_downsampled is much smaller (and should thus be easier to process), but will still have the same shape. In your case I think this is sufficient.
To see what you got, try:
plot(x_downsampled)
Now you can simply process x_downsampled and map your solution, for example
f = find(x_downsampled == max(x_downsampled));
location_of_maximum = f * n;
Needless to say this should be done in combination with the most efficient options that the fit function has to offer.

Finding normal distributions overlap using double Integral (dblquad) in MATLAB. Strange behaviour

I am calculating the overlap of two bivariate normal distributions using the following function:
function [ oa ] = bivariate_overlap_integral(mu_x1,mu_y1,mu_x2,mu_y2)
%calculating pdf. Using x as vector because of MATLAB requirements for integration
bpdf_vec1=@(x,y,mu_x,mu_y)(exp(-((x-mu_x).^2)./2-((y-mu_y).^2)./2)./(2*pi));
%calculating overlap of the two distributions at the point x,y
overlap_point = @(x,y) min(bpdf_vec1(x,y,mu_x1,mu_y1),bpdf_vec1(x,y,mu_x2,mu_y2));
%calculating overall overlap area
oa=dblquad(overlap_point,-100,100,-100,100);
You can see that this involves taking a double integral (x: -100 to 100, y: -100 to 100; ideally -inf to inf, but this suffices for the moment) of the function overlap_point, which is the minimum of the two PDFs given by bpdf_vec1 for the two distributions at the point (x,y).
Now, the PDF is never 0, so I would expect that the larger the area of integration, the larger the end result becomes, obviously with a negligible difference after a certain point. However, it appears that sometimes, when I decrease the size of the interval, the result grows. For instance:
>> mu_x1=0;mu_y1=0;mu_x2=5;mu_y2=0;
>> bpdf_vec1=@(x,y,mu_x,mu_y)(exp(-((x-mu_x).^2)./2-((y-mu_y).^2)./2)./(2*pi));
>> overlap_point = @(x,y) min(bpdf_vec1(x,y,mu_x1,mu_y1),bpdf_vec1(x,y,mu_x2,mu_y2));
>> dblquad(overlap_point,-10,10,-10,10)
ans =
0.0124
>> dblquad(overlap_point,-100,100,-100,100)
ans =
1.4976e-005 -----> strange, as it theoretically cannot be smaller than the first answer
>> dblquad(overlap_point,-3,3,-3,3)
ans =
0.0110 -----> makes sense that the result is less than the first answer, as the interval is decreased
Here we can check that the overlap is (close to) 0 at the border points of the interval.
>> overlap_point (100,100)
ans =
0
>> overlap_point (-100,100)
ans =
0
>> overlap_point (-100,-100)
ans =
0
>> overlap_point (100,-100)
ans =
0
Does this perhaps have to do with the implementation of dblquad, or am I making a mistake somewhere? I use MATLAB R2011a.
Thanks
CONGRATULATIONS! You win the award for being the 12 millionth person to ask essentially this question. :) What I'm trying to say is that this is an issue everyone stumbles over at first. Honestly, this question gets asked over and over again, so really it should be marked as a dup.
What happens with these things is that a bivariate normal is essentially a delta function when viewed from far enough away. And you don't really need to spread that region out too far, since the normal density drops off fast. It is essentially zero over most of the domain you are trying to integrate over, at least to within the tolerances employed.
So if the quadrature happens to hit some sample points near the areas where there is mass, you may get a realistic estimate of your integral. But if all the tool sees are numbers that are essentially zero over the entire domain, it concludes the integral is zero. Remember, adaptive integration tools are NOT omniscient. They do not know anything about your function. It is a black box to them. These are NOT symbolic tools.
BTW, this is NOT something I'd expect to see consistently different for Octave versus MATLAB. It is only an issue of the adaptive integrator, and where it chooses to set its sample points down.
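On the MATLAB side, the corresponding fix is simply a tighter tolerance, which is the sixth argument of dblquad (integral2 is assumed to be available only in R2012a and later, so it may not apply to the asker's R2011a):
oa = dblquad(overlap_point, -100, 100, -100, 100, 1e-8);   % tighter tolerance
% On newer releases: oa = integral2(overlap_point, -100,100, -100,100, 'AbsTol',1e-10);
% (integral2 needs the integrand fully element-wise, i.e. .^ for the y term too)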
OK, here are my octave results.
>format long
>z10 = dblquad(overlap_point,-10,10,-10,10)
z10 = 0.0124193303284589
>z100 = dblquad(overlap_point,-100,100,-100,100)
z100 = 0.0124193303245106
>z100 - z10
ans = -3.94835379669001e-012
>z10a = dblquad(overlap_point,-10,10,-10,10,1e-8)
z10a = 0.0124193306514996
>z100a = dblquad(overlap_point,-100,100,-100,100,1e-8)
z100a = 0.0124193306527155
>z100a-z10a
ans = 1.21588676627038e-012
BTW, I've noticed this type of problem before with numerical solutions. Sometimes you make a change that you expect will improve the accuracy of the result (in this case, by making your limits closer to the ideal case of the full plane), but instead you get the opposite effect and the result becomes less accurate. What's happening here is that by going out "wider", to -100..100, you're shifting the focus away from where the really important action is happening in your function, which is close to the origin. At some point the implementation of dblquad that you're using must start increasing the inter-sample distance as you increase the limits, and then it starts missing important detail close to the origin.
Maybe someone running a later version of MATLAB can check this out and see whether it has been improved.