Calibrated camera: getting matched points for 3D reconstruction, ideal test failed - MATLAB

I previously asked the question "Use calibrated camera get matched points for 3D reconstruction", but the problem was not described clearly, so here I walk through a detailed case step by step. I hope someone can help me figure out where my mistake is.
First, I made 10 3D points with these coordinates:
>> X = [0,0,0;
-10,0,0;
-15,0,0;
-13,3,0;
0,6,0;
-2,10,0;
-13,10,0;
0,13,0;
-4,13,0;
-8,17,0]
These points all lie on the same plane, as shown in this picture:
My next step was to use 3D-to-2D projection code to get the 2D coordinates. For this step I used "project_points.m" from the Caltech calibration toolbox. I also verified the result with the OpenCV C++ code (cvProjectPoints2()), which produced the same values.
For the 1st projection, the parameters are:
>> R = [0, 0.261799387, 0.261799387]
>> T = [0, 20, 100]
>> K = [12800, 0, 1850; 0, 12770, 1700; 0 0 1]
And no distortion
>> DisCoe = [0,0,0,0]
The rotation is just two rotations of pi/12 each. I then got the 1st view's 2D coordinates:
>> Points1 = [1850, 4254;
686.5, 3871.7;
126.3, 3687.6;
255.2, 4116.5;
1653.9, 4987.6;
1288.6, 5391.0;
37.7, 4944.1;
1426.1, 5839.6;
960.0, 5669.1;
377.3, 5977.8]
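For reference, here is a minimal sketch of the pinhole projection behind these numbers (my own illustration, assuming the rotation vector is converted with rodrigues() from the same Caltech toolbox and that there is no distortion); it is not the exact project_points.m call, just the underlying model:
% Minimal pinhole-projection sketch (rodrigues() is from the Caltech toolbox)
Rm = rodrigues(R(:));                       % 3x3 rotation matrix from the rotation vector
Xc = Rm*X' + repmat(T(:), 1, size(X,1));    % points in camera coordinates (3xN)
xn = Xc(1:2,:) ./ repmat(Xc(3,:), 2, 1);    % normalized image coordinates
uv = K(1:2,:) * [xn; ones(1, size(xn,2))];  % pixel coordinates (2xN)
Points = uv';                               % one [u v] row per 3D point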
For the 2nd view, I changed:
>> R = [0, -0.261799387, -0.261799387]
>> T = [0, -20, 100]
And then I got the 2nd view's 2D coordinates:
>> Points2 = [1850, -854;
625.4, -585.8;
-11.3, -446.3;
348.6, -117.7;
2046.1, -110.1;
1939.0, 442.9;
588.6, 776.9;
2273.9, 754.0;
1798.1, 875.7;
1446.2, 1501.8]
THEN come the reconstruction steps. I have already built ideal matched points (I believe so); the next step is to calculate the fundamental matrix, using estimateFundamentalMatrix():
>> F = [-0.000000124206906, 0.000000155821234, -0.001183448392236;
-0.000000145592802, -0.000000088749112, 0.000918286352329;
0.000872420357685, -0.000233667041696, 0.999998470240927]
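A minimal sketch of how that call might look (the exact options used are not shown in the question, so the Norm8Point method below is only an assumption):
% Assumed call - the question does not show the exact options used
F = estimateFundamentalMatrix(Points1, Points2, 'Method', 'Norm8Point');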
With K known, I used the MATLAB code below to calculate the essential matrix, compute R and t, and finally the 3D coordinates:
E = K'*F*K;                       % essential matrix from F and the intrinsics
[u1,w1,v1] = svd(E);
t = (w1(1,1)+w1(2,2))/2;          % average the two nonzero singular values
w1_new = [t,0,0;0,t,0;0,0,0];
E_new = u1*w1_new*v1';            % closest valid essential matrix
[u2,w2,v2] = svd(E_new);
W = [0,-1,0;1,0,0;0,0,1];
S = [0;0;-1];                     % column vector (a row vector would not multiply below)
P1 = K*eye(3,4);                  % first camera at the origin
R = u2*W'*v2';                    % one of the four possible (R,t) decompositions
t = u2*S;
P2 = K*[R t];
for i=1:size(Points1,1)           % linear (DLT) triangulation of each matched pair
    A = [P1(3,:)*Points1(i,1)-P1(1,:);
         P1(3,:)*Points1(i,2)-P1(2,:);
         P2(3,:)*Points2(i,1)-P2(1,:);
         P2(3,:)*Points2(i,2)-P2(2,:)];
    [u3,w3,v3] = svd(A);
    dpt(i,:) = [v3(1,4) v3(2,4) v3(3,4)];  % last right singular vector, not divided by v3(4,4)
end
From this code I got the result as below:
>>X_result = [-0.00624167168027166 -0.0964921215725801 -0.475261364542900;
0.0348079221692933 -0.0811757557821619 -0.478479857606225;
0.0555763217997650 -0.0735028994611970 -0.480026199527202;
0.0508767193762549 -0.0886557226954657 -0.473911682320574;
0.00192300693541664 -0.121188713743347 -0.466462048338988;
0.0150597271598557 -0.133665834494933 -0.460372995991565;
0.0590515135110533 -0.115505488681438 -0.460357357303399;
0.0110271144368152 -0.148447743355975 -0.455752218710129;
0.0266380667320528 -0.141395768700202 -0.454774266762764;
0.0470113238869852 -0.148215424398514 -0.445341461836899]
When showing these points in Geomagic, the result is "a little bit bent", but their positions seem right. I don't know why this happened. Does anybody have an idea? Please see the picture:

It looks like numerical inaccuracies, maybe inside your function estimateFundamentalMatrix().
My second guess is that your estimateFundamentalMatrix() is not handling the planar case, which is degenerate for some algorithms (the linear 8-point algorithm, for example, does not work well with planar scenes).
Uncalibrated fundamental matrix estimation is ambiguous for planar scenes (at least 2 solutions). See for example "Multiple View Geometry" by Hartley & Zisserman.
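One sanity check that is possible in this synthetic setup (a sketch, assuming the rotation vectors above are converted with rodrigues() from the Caltech toolbox): build the ground-truth essential matrix from the known relative pose and compare it, up to scale, with K'*F*K from the estimated F.
% Ground-truth essential matrix from the known poses (added sketch, not from the original post)
R1m = rodrigues([0; 0.261799387; 0.261799387]);    % pose of view 1
R2m = rodrigues([0; -0.261799387; -0.261799387]);  % pose of view 2
T1v = [0; 20; 100];   T2v = [0; -20; 100];
Rrel = R2m*R1m';                  % relative rotation from view 1 to view 2
trel = T2v - Rrel*T1v;            % relative translation
tx = [0 -trel(3) trel(2); trel(3) 0 -trel(1); -trel(2) trel(1) 0];
E_true = tx*Rrel;                 % compare with K'*F*K (both are only defined up to scale)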

Related

Reverse-calculating original data from a known moving average

I'm trying to estimate the (unknown) original datapoints that went into calculating a (known) moving average. However, I do know some of the original datapoints, and I'm not sure how to use that information.
I am using the method given in the answers here: https://stats.stackexchange.com/questions/67907/extract-data-points-from-moving-average, but in MATLAB (my code below). This method works quite well for large numbers of data points (>1000), but less well with fewer data points, as you'd expect.
window = 3;
datapoints = 150;
data = 3*rand(1,datapoints)+50;
moving_averages = [];
for i = window:size(data,2)
    moving_averages(i) = mean(data(i+1-window:i));
end
length = size(moving_averages,2)+(window-1);
a = (tril(ones(length,length),window-1) - tril(ones(length,length),-1))/window;
a = a(1:length-(window-1),:);
ai = pinv(a);
daily = mtimes(ai,moving_averages');
x = 1:size(data,2);
figure(1)
hold on
plot(x,data,'Color','b');
plot(x(window:end),moving_averages(window:end),'Linewidth',2,'Color','r');
plot(x,daily(window:end),'Color','g');
hold off
axis([0 size(x,2) min(daily(window:end))-1 max(daily(window:end))+1])
legend('original data','moving average','back-calculated')
Now, say I know a smattering of the original data points. I'm having trouble figuring out how I might use that information to calculate the rest more accurately. Thank you for any assistance.
You should be able to calculate the original data exactly if at any point you can exactly determine almost one window's worth of data, i.e. in this case n-1 consecutive samples for a window of length n. (In your case) if you know A, B and (A+B+C)/3, you can solve for C. Now when you have (B+C+D)/3 (your next moving average) you can exactly solve for D. Rinse and repeat. This logic works going backwards too.
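For illustration, here is a minimal sketch of that forward propagation (an added example, assuming the first window-1 original samples are known exactly):
% Sketch: exact reconstruction when the first window-1 samples are known
window = 3;
data   = 3*rand(1,150) + 50;
ma     = filter(ones(1,window)/window, 1, data);   % ma(i) = mean(data(i-window+1:i))
ma     = ma(window:end);                           % keep only the valid averages
rec            = zeros(size(data));
rec(1:window-1) = data(1:window-1);                % the known leading samples
for i = window:numel(data)
    % window*average minus the window-1 samples already recovered
    rec(i) = window*ma(i-window+1) - sum(rec(i-window+1:i-1));
end
max(abs(rec - data))                               % essentially zero (machine precision)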
Here is an example with the same idea:
% the actual vector of values
a = cumsum(rand(150,1) - 0.5);
% compute moving average
win = 3; % sliding window length
idx = hankel(1:win, win:numel(a));
m = mean(a(idx));
% coefficient matrix: m(i) = sum(a(i:i+win-1))/win
A = repmat([ones(1,win) zeros(1,numel(a)-win)], numel(a)-win+1, 1);
for i=2:size(A,1)
    A(i,:) = circshift(A(i-1,:), [0 1]);
end
A = A / win;
% solve linear system
%x = A \ m(:);
x = pinv(A) * m(:);
% plot and compare
subplot(211), plot(1:numel(a),a, 1:numel(m),m)
legend({'original','moving average'})
title(sprintf('length = %d, window = %d',numel(a),win))
subplot(212), plot(1:numel(a),a, 1:numel(a),x)
legend({'original','reconstructed'})
title(sprintf('error = %f',norm(x(:)-a(:))))
You can see the reconstruction error is very small, even using the data sizes in your example (150 samples with a 3-samples moving average).

Rotation matrix in Matlab

I am going to rotate from one frame to another with a rotation matrix. The goal of the program is to make my gyro parallel to the earth, which means the output vector should have its first two components zero and the third one -9.81.
Codes:
vs1 = 1;
vs2 = -0.003;
vs3 = -9.808;
vst = [vs1 vs2 vs3]';
alpha = (acosd(vs1/sqrt(vs1^2+vs2^2)));
gama = (acosd(vs2/sqrt(vs1^2+vs2^2)));
beta = (acosd(vs3/sqrt(vs1^2+vs2^2+vs3^2)));
R1 = [(cosd(gama)*cosd(beta)*cosd(alpha))-(sind(gama)*sind(alpha)), (cosd(gama)*cosd(beta)*sind(al)+sind(gama)*cosd(al)), (-cosd(gama)*sind(beta));
      ((-sind(gama)*cosd(beta)*cosd(alpha))-cosd(gama)*sind(alpha)), ((-sind(gama)*cosd(beta)*sind(alpha))+(cosd(gama)*cosd(alpha))), sind(gama)*sind(beta);
      sind(beta)*cosd(alpha), sind(beta)*sind(alpha), cosd(beta)];
disp (R1*vst)
The result for vs1, vs2 and vs3 is: -0.00599, 0.0000359 and 9.858845622079866. First, I cannot understand why the program gives me a positive Z, and why it does not make the first two components zero.
thanks in advance
You have a bug in your code. There are two places where I think the variable "al" should actually be "alpha" if I'm following your code correctly.
But your code also generates alpha = 90 and gama = 180 for those inputs. All you're going to do is flip the axes to within machine precision with those inputs, so it's not going to achieve the results you're looking for.
Are you sure the input vector is correct? Why would gravity have a value near X = 1 if you're nearly vertical (Z = -9.808)?
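If the goal is just to rotate the measured vector onto the vertical axis, one alternative sketch (not your angle-based construction) builds the rotation from the cross product of the two directions via the Rodrigues formula:
% Sketch: rotate the measured vector onto the -Z axis (axis-angle construction)
vst    = [1; -0.003; -9.808];
target = [0; 0; -1];                 % desired direction (straight down)
u  = vst/norm(vst);
ax = cross(u, target);               % rotation axis (unnormalized), |ax| = sin(theta)
s  = norm(ax);  c = dot(u, target);  % sine and cosine of the rotation angle
Kx = [0 -ax(3) ax(2); ax(3) 0 -ax(1); -ax(2) ax(1) 0];   % skew-symmetric [ax]_x
R  = eye(3) + Kx + Kx*Kx*((1-c)/s^2);  % Rodrigues formula (valid while s ~= 0)
disp(R*vst)                          % first two components ~0, third ~ -norm(vst)
Note that the third component comes out near -9.86 (the norm of the input), which ties in with the question above about the X = 1 component.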

why is the vector coming out of 'trapz' function as NAN?

I am trying to calculate the inverse Fourier transform of the vector xrecw. For some reason I get a vector of NaNs.
Please help!!
t = -2:1/100:2;
x = ((2/5)*sin(5*pi*t))./((1/25)-t.^2);
w = -20*pi:0.01*pi:20*pi;
Hw = (exp(j*pi.*(w./(10*pi)))./(sinc(w./(10*pi)))).*(heaviside(w+5*pi)-heaviside(w-5*pi));%low pass filter
xzohw = 0;
for q=1:20:400
xzohw = xzohw + x(q).*(2./w).*sin(0.1.*w).*exp(-j.*w*0.2*((q-1)/20)+0.5);%calculating fourier transform of xzoh
end
xzohw = abs(xzohw);
xrecw = abs(xzohw.*Hw);%filtering the fourier transform high frequencies
xrect=0;
for q=1:401
xrect(q) = (1/(2*pi)).*trapz(xrecw.*exp(j*w*t(q))); %inverse fourier transform
end
xrect = abs(xrect);
plot(t,xrect)
Here's a direct answer to your question of "why" there is a NaN. If you run your code, the NaN comes from dividing by zero in line 7 when computing xzohw. Notice that w contains zero:
>> find(w==0)
ans =
2001
and you can see in line 7 that you divide by the elements of w with the (2./w) factor.
A quick fix (although it is no guarantee that your code will do what you want) is to avoid including 0 in w by using a step that never lands on zero. Since 20*pi is not an exact multiple of 0.01, you can try taking steps of 0.01 instead of 0.01*pi:
w = -20*pi:0.01:20*pi;
Using this, your code produces a plot which might resemble what you're looking for. In order to do better, we might need more details on exactly what you're trying to do, or what these variables represent.
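An alternative sketch (an addition here, not part of the quick fix above): keep the original grid but patch the w = 0 sample with the analytic limit, since 2*sin(0.1*w)/w tends to 0.2 as w tends to 0:
% Sketch: patch the zero-frequency sample instead of changing the grid
w = -20*pi:0.01*pi:20*pi;
zoh = (2./w).*sin(0.1.*w);   % the factor from line 7 that produces NaN at w = 0
zoh(w == 0) = 0.2;           % analytic limit of 2*sin(0.1*w)/w at w = 0
any(isnan(zoh))              % returns 0: no NaNs left in this factor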
Hope this helps!

Solving a system of equations using Python/Scipy for a set of measurements

I have a physical instrument of measurement (a force platform with load cells) which gives me three values: A, B and C. It happens, though, that these values - which should be orthogonal - are actually somewhat coupled, due to physical characteristics of the measuring device, which causes cross-talk between applied and returned values of force and torque.
Then, it is recommended that a calibration matrix C be used to transform the measured values into a better estimate of the actual values, along the lines of actual(Fz, Mx, My) = C * measured(Fz, Mx, My).
The problem is that it is necessary to perform a SET of measurements, so that different measured(Fz, Mx, My) and actual(Fz, Mx, My) are least-squared to get some C matrix that works best for the system as a whole.
I can solve Ax = B problems with scipy.linalg.lstsq, or even scipy.linalg.solve (giving an exact solution) for ONE measurement, but how should I proceed to consider a set of different measurements, each one with its own equation giving a potentially different 3x3 matrix?
Any help is much appreciated, thanks for reading.
I posted a similar question containing just the mathematical part of this at math.stackexchange.com, and this answer solved the problem:
math.stackexchange.com/a/232124/27435
In case anyone has a similar problem in the future, here is the almost literal Scipy implementation of that answer (the first lines are initialization boilerplate code):
import numpy
import scipy.linalg
### Origin of the coordinate system: upper left corner!
"""
1----------2
|          |
|          |
4----------3
"""
platform_width = 600
platform_height = 400
# positions of each load cell (one per corner)
loadcell_positions = numpy.array([[0, 0],
[platform_width, 0],
[platform_width, platform_height],
[0, platform_height]])
platform_origin = numpy.array([platform_width, platform_height]) * 0.5
# applying a known force at known positions and taking the measurements
measurements_per_axis = 5
total_load = 50
results = []
for x in numpy.linspace(0, platform_width, measurements_per_axis):
    for y in numpy.linspace(0, platform_height, measurements_per_axis):
        position = numpy.array([x,y])
        for loadpos in loadcell_positions:
            moments = platform_origin-loadpos * total_load
            load = numpy.array([total_load])
            result = numpy.hstack([load, moments])
            results.append(result)
results = numpy.array(results)
noise = numpy.random.rand(*results.shape) - 0.5
measurements = results + noise
# now expand ("stuff") each measured triplet into a 3x9 block, so the 9 entries of C can be solved for linearly
expands = []
for n in xrange(measurements.shape[0]):
    k = results[n,:]
    m = measurements[n,:]
    expand = numpy.zeros((3,9))
    expand[0,0:3] = m
    expand[1,3:6] = m
    expand[2,6:9] = m
    expands.append(expand)
expands = numpy.vstack(expands)
# perform the actual regression
C = scipy.linalg.lstsq(expands, results.reshape((-1,1)))  # regress against the actual (known) values
C = numpy.array(C[0]).reshape((3,3))
# the result with pure noise (not actual coupling) should be
# very close to a 3x3 identity matrix (and is!)
print C
Hope this helps someone!

matlab variance of angles

I have a matrix containing angles and I need to compute the mean and the variance.
For the mean I proceed in this way:
for each angle, compute sin and cos, and sum all the sines and all the cosines;
the mean is given by atan2(sum of sines, sum of cosines),
and it works.
My question is how to compute the variance of the angles, knowing the mean?
Thank you for the answers.
I attach my MATLAB code:
x = 0; y = 0;             % initialize the accumulators
for i=1:size(im2,1)
    for j=1:size(im2,2)
        y = y + sin(hue(i, j));
        x = x + cos(hue(i, j));
    end
end
mean = atan2(y, x);       % note: this shadows MATLAB's built-in mean()
if mean < 0
    mean = mean + (2*pi);
end
In order to compute the variance of angles you cannot use the standard variance. This is the formulation to compute the variance of angles:
R = 1 - sqrt((sum(sin(angle)))^2 + (sum(cos(angle)))^2)/n;
There is another similar formulation as well:
var(angle) = var(sin(angle)) + var(cos(angle));
Ref:
http://www.ebi.ac.uk/thornton-srv/software/PROCHECK/nmr_manual/man_cv.html
I am not 100% sure what you are doing, but maybe this would achieve the same thing with the built-in MATLAB functions mean and var.
>> [file path] = uigetfile;
>> someImage = imread([path file]);
>> hsv = rgb2hsv(someImage);
>> hue = hsv(:,:,1);
>> m = mean(hue(:))
m =
0.5249
>> v = var(hue(:))
v =
0.2074
EDIT: I am assuming you have an image because of your variable name hue. But it would be the same for any matrix.
EDIT 2: Maybe that is what you are looking for:
>> sumsin = sum(sin(hue(:)));
>> sumcos = sum(cos(hue(:)));
>> meanvalue = atan2(sumsin,sumcos)
meanvalue =
0.5276
>> sumsin = sum(sin((hue(:)-meanvalue).^2));
>> sumcos = sum(cos((hue(:)-meanvalue).^2));
>> variance = atan2(sumsin,sumcos)
variance =
0.2074
The variance of circular data can not be treated like the variance of unbounded data on the real line. (For very small variances, they are effectively equivalent, but for large variances the equivalence breaks down. It should be clear to you why this is.) I recommend Statistical Analysis of Circular Data by N.I. Fisher. This book contains a widely-used definition of circular variance that is computed from the mean resultant length of the unit vectors that correspond to the angles.
>> sumsin = sum(sin((hue(:)-meanvalue).^2));
>> sumcos = sum(cos((hue(:)-meanvalue).^2));
is wrong. You can't subtract angles like that.
By the way, this question really has nothing to do with MATLAB. You can probably get more/better answers posting on the statistics stack exchange
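To make the mean-resultant-length definition mentioned above concrete, here is a minimal MATLAB sketch of the circular variance (an added illustration, using the asker's hue matrix):
% Sketch: circular variance from the mean resultant length (Fisher's definition)
ang  = hue(:);                                               % angles in radians
Rbar = sqrt(sum(sin(ang))^2 + sum(cos(ang))^2) / numel(ang); % mean resultant length
circ_var = 1 - Rbar;                                         % 0 for identical angles, at most 1
% e.g. the angles [0, 2*pi] give circ_var ~ 0, unlike var([0, 2*pi])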
We had the same problem, and in Python we could fix it using scipy.stats.circvar, which computes the circular variance for samples assumed to be in a range. For example:
import numpy as np
from scipy.stats import circvar
circvar([0, 2*np.pi/3, 5*np.pi/3])
# 2.19722457734
circvar([0, 2*np.pi])
# -0.0
The problem with the proposed MATLAB code is that the variance of [0, 6.28] should be zero. Looking at the implementation of scipy.stats.circvar, it is like this:
# Recast samples as radians that range between 0 and 2 pi and calculate the sine and cosine
samples, sin_samp, cos_samp, nmask = _circfuncs_common(samples, high, low)
sin_mean = sin_samp.mean()
cos_mean = cos_samp.mean()
R = np.minimum(1, hypot(sin_mean, cos_mean))
circular_variance = ((high - low)/2.0/pi)**2 * -2 * np.log(R)