Use MATLAB cameraParams in OpenCV program

I have a MATLAB program that loads two images and returns two camera matrices and a cameraParams object with distortion coefficients, etc. I would now like to use this exact configuration to undistort points and so on, in an OpenCV program that triangulates points given their 2D locations in two different videos. The MATLAB function looks like this:
function [cameraMatrix1, cameraMatrix2, cameraParams] = setupCameraCalibration(leftImageFile, rightImageFile, squareSize)
% Auto-generated by cameraCalibrator app on 20-Feb-2015
The thing is, the output of undistortPoints is different in MATLAB and OpenCV even though both use the same arguments.
As an example:
>> undistortPoints([485, 502], defaultCameraParams)
ans = 485 502
In Java, the following test mimics the above (it passes).
public void testUnDistortPoints() {
    Mat srcMat = new Mat(2, 1, CvType.CV_32FC2);
    Mat dstMat = new Mat(2, 1, CvType.CV_32FC2);
    srcMat.put(0, 0, new float[] { 485, 502 } );
    MatOfPoint2f src = new MatOfPoint2f(srcMat);
    MatOfPoint2f dst = new MatOfPoint2f(dstMat);
    Mat defaultCameraMatrix = Mat.eye(3, 3, CvType.CV_32F);
    Mat defaultDistCoefficientMatrix = new Mat(1, 4, CvType.CV_32F);
    Imgproc.undistortPoints(
        src,
        dst,
        defaultCameraMatrix,
        defaultDistCoefficientMatrix
    );
    System.out.println(dst.dump());
    assertEquals(dst.get(0, 0)[0], 485d);
    assertEquals(dst.get(0, 0)[1], 502d);
}
However, say I change the first distortion coefficient (k1). In MATLAB:
changedDist = cameraParameters('RadialDistortion', [2 0 0])
>> undistortPoints([485, 502], changedDist)
ans = 4.8756 5.0465
In Java:
public void testUnDistortPointsChangedDistortion() {
    Mat srcMat = new Mat(2, 1, CvType.CV_32FC2);
    Mat dstMat = new Mat(2, 1, CvType.CV_32FC2);
    srcMat.put(0, 0, new float[] { 485, 502 } );
    MatOfPoint2f src = new MatOfPoint2f(srcMat);
    MatOfPoint2f dst = new MatOfPoint2f(dstMat);
    Mat defaultCameraMatrix = Mat.eye(3, 3, CvType.CV_32F);
    Mat distCoefficientMatrix = new Mat(1, 4, CvType.CV_32F);
    distCoefficientMatrix.put(0, 0, 2f); // updated
    Imgproc.undistortPoints(
        src,
        dst,
        defaultCameraMatrix,
        distCoefficientMatrix
    );
    System.out.println(dst.dump());
    assertEquals(4.8756, dst.get(0, 0)[0]);
    assertEquals(5.0465, dst.get(0, 0)[1]);
}
It fails with the following output:
[0.0004977131, 0.0005151587]
junit.framework.AssertionFailedError:
Expected :4.8756
Actual :4.977131029590964E-4
Why are the results different? I thought Java's distortion coefficient matrix includes both the radial and tangential distortion coefficients.
Also, is CV_64FC1 a good choice of type for the camera / distortion coefficient matrices?
I was trying to test the effect of changing the camera matrix itself (i.e. the value of f_x), but it's not possible to set the 'IntrinsicMatrix' parameter when using cameraParameters, so I want to solve the distortion matrix problem first.
Any help would be greatly appreciated.

There are a couple of things you have to take into account when working with calibration models.
First, note that there exist several camera calibration and distortion models: Tsai, ATAN, Pinhole, Ocam. I assume you want to use the Pinhole model, which is the one used by OpenCV and the most common. It models from 2 to 6 parameters for radial distortion (denoted k1...k6) and 2 for tangential distortion (denoted p1, p2), as you can read in the OpenCV documentation. Bouguet's calibration toolbox for Matlab uses this model too.
Second, there is no standardized way to arrange the distortion parameters in a vector. OpenCV expects them in this order: [k1 k2 p1 p2 k3...k6], with k3...k6 being optional.
So, check the documentation of your Matlab calibration software to see which model it uses and in which order the parameters are arranged, then make sure it matches the order OpenCV expects.
The calibration parameters for OpenCV are fine as either CV_32F or CV_64F, as I recall.
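For illustration, here is a small Java sketch (the numbers are made up; substitute the values from your cameraParameters object) that packs MATLAB-style radial and tangential values in the order OpenCV expects:
// Hypothetical values copied from MATLAB:
// RadialDistortion = [k1 k2 k3], TangentialDistortion = [p1 p2]
double k1 = -0.20, k2 = 0.07, k3 = 0.0;
double p1 = 0.001, p2 = -0.0005;
Mat distCoeffs = new Mat(1, 5, CvType.CV_64F);
distCoeffs.put(0, 0, k1, k2, p1, p2, k3); // OpenCV order: [k1 k2 p1 p2 k3]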
Update
I don't know about Java, but in C++, when you create a Mat its initial values are unspecified, so this code may be creating a matrix with 2f in the first item and garbage in the remaining ones:
Mat distCoefficientMatrix = new Mat(1, 4, CvType.CV_32F);
distCoefficientMatrix.put(0, 0, 2f);
Could you check if this is the problem?
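If that is the cause, a minimal fix on the Java side would be to start from an explicitly zeroed matrix before setting k1 (just a sketch of the idea):
Mat distCoefficientMatrix = Mat.zeros(1, 4, CvType.CV_32F); // every coefficient starts at 0
distCoefficientMatrix.put(0, 0, 2f); // k1 = 2, the remaining coefficients stay 0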
A note for the future, to make things trickier: take into account that the intrinsic calibration matrix in OpenCV is the transpose of the one in Matlab.
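For example (with made-up numbers), transposing MATLAB's IntrinsicMatrix gives the camera matrix in the layout OpenCV expects:
// matlabIntrinsics mirrors MATLAB's convention: [fx 0 0; s fy 0; cx cy 1]
Mat matlabIntrinsics = new Mat(3, 3, CvType.CV_64F);
matlabIntrinsics.put(0, 0,
    1000.0,    0.0, 0.0,
       0.0, 1000.0, 0.0,
     640.0,  360.0, 1.0);
Mat cameraMatrix = new Mat();
Core.transpose(matlabIntrinsics, cameraMatrix); // OpenCV layout: [fx s cx; 0 fy cy; 0 0 1]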

Related

Does OpenCV calcOpticalFlowPyrLK have a bidirectional error option like the Matlab Computer Vision Toolbox?

I'm using opencv 3.0.
I used the klt to track features:
Size winSize(11, 11);
int maxLevel = 4;
TermCriteria termcrit(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 30, 0.01);
int flags = 0;
double minEigThreshold = 0.0001;
calcOpticalFlowPyrLK(previous_gray, current_gray, previous_corners, current_corners, status, err, winSize, maxLevel, termcrit, flags, minEigThreshold);
I eliminated vectors with status == 0, but I still have some outliers.
Using the same parameters in Matlab, I obtain similar vectors, but without outliers.
I also tried to use flags = OPTFLOW_LK_GET_MIN_EIGENVALS and to vary minEigThreshold, but without success.
In Matlab I used:
tracker = vision.PointTracker('MaxBidirectionalError',0.5,'BlockSize',[11 11],'NumPyramidLevels',4);
tracker.release();
initialize(tracker,Points,IM);
[points,validity] = tracker(currentframe);
By varying MaxBidirectionalError I eliminated the outliers.
My question: is there something in OpenCV similar to MaxBidirectionalError?
Thank you very much!
Joe
P.S. These are the outputs I obtained (images omitted): the OpenCV result (there are some outliers), the Matlab result (almost no outliers), and the left and right input images.
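As far as I know, calcOpticalFlowPyrLK has no direct equivalent of MaxBidirectionalError, but the forward-backward check the MATLAB tracker performs can be reproduced manually: track forward, track the result back to the first frame, and reject points whose round trip ends too far from where it started. A rough sketch of the idea using the OpenCV Java bindings (variable names are hypothetical; the C++ calls are analogous):
// prevGray, currGray: consecutive grayscale frames; prevPts: the points to track
MatOfPoint2f nextPts = new MatOfPoint2f();
MatOfPoint2f backPts = new MatOfPoint2f();
MatOfByte statusFwd = new MatOfByte(), statusBwd = new MatOfByte();
MatOfFloat errFwd = new MatOfFloat(), errBwd = new MatOfFloat();
Video.calcOpticalFlowPyrLK(prevGray, currGray, prevPts, nextPts, statusFwd, errFwd);
Video.calcOpticalFlowPyrLK(currGray, prevGray, nextPts, backPts, statusBwd, errBwd);
double maxBidirectionalError = 0.5; // plays the same role as the MATLAB parameter
Point[] p0 = prevPts.toArray(), pBack = backPts.toArray();
byte[] sFwd = statusFwd.toArray(), sBwd = statusBwd.toArray();
for (int i = 0; i < p0.length; i++) {
    double dx = p0[i].x - pBack[i].x, dy = p0[i].y - pBack[i].y;
    boolean keep = sFwd[i] != 0 && sBwd[i] != 0
            && Math.sqrt(dx * dx + dy * dy) < maxBidirectionalError;
    // points with keep == false are the outliers to discard
}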

Selecting the right essential matrix

I want to code the SfM pipeline myself in Matlab because I need some outputs that the OpenCV functions don't provide. However, I'm using OpenCV for comparison.
The OpenCV function [E,mask] = cv.findEssentialMat(points1, points2, 'CameraMatrix',K, 'Method','Ransac'); provides the essential matrix solution using Nister's five-point algorithm and RANSAC.
The inlier indices are found using: InliersIndices = find(mask>0);
I used this Matlab implementation of Nister's algorithm:
Fivepoint_algoithm_code
The call to the function is as follows:
[E_all, R_all, t_all, Eo_all] = five_point_algorithm( pts1, pts2, K, K);
The algorithm outputs up to 10 solutions of essential matrices. However, I encountered the following issues:
The implementation stated above is only for perfect correspondences (without RANSAC), and I'm providing the algorithm 5 correspondences chosen using InliersIndices; the output essential matrices (up to 10) are all different from the one returned by OpenCV.
All the returned essential matrices should be solutions, so why, when I triangulate with each one using the function below, do I not obtain the same 3D points?
How do I choose the right essential matrix solution?
I triangulate using the function from the Matlab toolbox.
Projection matrices:
P1=K*[eye(3) [0;0;0]];
P2=K*[R_all{i} t_all{i}];
[pts3D,rep_error] = triangulate(pts1', pts2', P1',P2');
Edit
The returned E from [E,mask] = cv.findEssentialMat(points1, points2, 'CameraMatrix',K, 'Method','Ransac');
E =
0.0052 -0.7068 0.0104
0.7063 0.0050 -0.0305
-0.0113 0.0168 0.0002
For the 5-point Matlab implementation, 5 random indices from the inliers are taken, so:
pts1 =
736.7744 740.2372 179.2428 610.5297 706.8776
112.2673 109.9687 45.7010 91.4371 87.8194
pts2 =
722.3037 725.3770 150.3997 595.3550 692.5383
111.7898 108.6624 43.6847 90.6638 86.8139
K =
723.3631 7.9120 601.7643
-3.8553 719.6517 182.0588
0.0075 0.0044 1.0000
and 4 solutions are returned:
E1 =
-0.2205 0.9436 -0.1835
0.8612 0.2447 -0.1531
0.4442 -0.0600 -0.0378
E2 =
-0.2153 0.9573 0.1626
0.8948 0.2456 -0.3474
0.1003 0.1348 -0.0306
E3 =
0.0010 -0.9802 -0.0957
0.9768 0.0026 -0.1912
0.0960 0.1736 -0.0019
E4 =
-0.0005 -0.9788 -0.1427
0.9756 0.0021 -0.1658
0.1436 0.1470 -0.0030
Edit2:
pts1 and pts2, when triangulated using the essential matrix E and the R and t returned by [R, t] = cv.recoverPose(E, p1, p2,'CameraMatrix',K);, give:
X1 =
-0.0940 0.0478 -0.4984
-0.0963 0.0497 -0.4987
0.3033 0.1009 -0.5202
-0.0065 0.0636 -0.5053
-0.0737 0.0653 -0.5011
with
R =
-0.9977 -0.0063 0.0670
0.0084 -0.9995 0.0305
0.0667 0.0310 0.9973
and
t =
0.0239
0.0158
0.9996
When triangulated with the Matlab code, the chosen solution is E_all{2}
R_all{2}=
-0.8559 -0.2677 0.4425
-0.1505 0.9475 0.2821
-0.4948 0.1748 -0.8512
and
t_all{2}=
-0.1040
-0.1355
0.9853
X2 =
0.1087 -0.0552 0.5762
0.1129 -0.0578 0.5836
0.4782 0.1582 -0.8198
0.0028 -0.0264 0.2099
0.0716 -0.0633 0.4862
When doing
X1./X2
ans =
-0.8644 -0.8667 -0.8650
-0.8524 -0.8603 -0.8546
0.6343 0.6376 0.6346
-2.3703 -2.4065 -2.4073
-1.0288 -1.0320 -1.0305
There is an almost constant scale factor between triangulated 3D points.
However, rotation matrices are different and there is no scale factor between translations.
t./t_all{2}=
-0.2295
-0.1167
1.0145
which makes the plotted trajectory wrong
Answering your numbered questions:
Beware that Nister's 5 point algorithm has many implementations, but most of them don't work well. Personal experience and unpublished work by colleagues show that OpenCV does not have a good implementation. The open implementation in Bundler and other working SfM pipelines work better in practice (but there is a lot of room for improvement).
The 10 solutions are simply zeros of a certain polynomial equation. As far as the polynomial equation can describe the problem, these 10 solutions all zero the equation. The equation does not describe that these 10 points are real, or that the 3D points corresponding to the 5 point correspondences have to be the same for each solution, but only that there are some 3D points (for each solution) that project to the 5 points, without even considering if the 3D points are in front of the respective cameras. Moreover, there may well be two sets of 3D points and cameras that happen to generate the same images of 5 points, so you would have to weed them out with some other procedure (below).
The choice of the right solution among the 10 complex solutions is usually done by many techniques:
Discard solutions that would lead to purely complex points or 3D points with negative depth (currently Bundler does not do this last check)
Discard solutions that are not physical for some other reason (you may have to do some of that yourself for your application)
The more usual procedure: For each remaining solution, check which one is more consistent with additional correspondences. In a real system you don't know which additional correspondences are right and which are pure trash. So run RANSAC for each of the solutions and keep the one with the most inliers. This is computationally heavy so should be used as a last resort.
You can see how Bundler does this at file 5point.c line 668:
generate_Ematrix_hypotheses(5, r_pts_inner, l_pts_inner, &num_hyp, E);

for (i = 0; i < num_hyp; i++) {
    int best_inlier;
    double score = 0.0;

    double E2[9], tmp[9], F[9];
    memcpy(E2, E + 9 * i, 9 * sizeof(double));
    E2[0] = -E2[0];
    E2[1] = -E2[1];
    E2[3] = -E2[3];
    E2[4] = -E2[4];
    E2[8] = -E2[8];

    matrix_transpose_product(3, 3, 3, 3, K2_inv, E2, tmp);
    matrix_product(3, 3, 3, 3, tmp, K1_inv, F);

    inliers = evaluate_Ematrix(n, r_pts, l_pts, // r_pts_norm, l_pts_norm,
                               thresh_norm, F, // E + 9 * i,
                               &best_inlier, &score);

    if (inliers > max_inliers ||
        (inliers == max_inliers && score < min_score)) {
        best = 1;
        max_inliers = inliers;
        min_score = score;
        memcpy(E_best, E + 9 * i, sizeof(double) * 9);
        r_best = r_pts_norm[best_inlier];
        l_best = l_pts_norm[best_inlier];
    }

    inliers_hyp[i] = inliers;
}
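For reference, the first check in the list above (discarding solutions that put 3D points behind the cameras) is essentially what recoverPose does internally: it decomposes E into the four candidate (R, t) pairs and keeps the one that places the most triangulated correspondences in front of both cameras. A minimal sketch using the OpenCV Java bindings (the cv.recoverPose call in the question wraps the same function; variable names are placeholders):
// E: essential matrix, points1/points2: matched points as MatOfPoint2f, K: 3x3 camera matrix
Mat R = new Mat(), t = new Mat(), mask = new Mat();
// recoverPose tests the four decompositions of E and returns the number of points
// that end up in front of both cameras for the winning (R, t)
int goodPoints = Calib3d.recoverPose(E, points1, points2, K, R, t, mask);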

Calibrated camera get matched points for 3D reconstruction, ideal test failed

I have previously asked the question "Use calibrated camera get matched points for 3D reconstruction", but the problem was not described clearly. So here I present a detailed case showing every step. I hope someone can help me figure out where my mistake is.
At first I made 10 3D points with coordinates:
>> X = [0,0,0;
-10,0,0;
-15,0,0;
-13,3,0;
0,6,0;
-2,10,0;
-13,10,0;
0,13,0;
-4,13,0;
-8,17,0]
These points are all on the same plane, as shown in this picture:
My next step is to use the 3D-to-2D projection code to get the 2D coordinates. In this step, I used the MATLAB code from the Caltech calibration toolbox called "project_points.m". I also used OpenCV C++ code (cvProjectPoints2()) to verify the result, and it turned out the same.
For the 1st projection, parameters are:
>> R = [0, 0.261799387, 0.261799387]
>> T = [0, 20, 100]
>> K = [12800, 0, 1850; 0, 12770, 1700; 0 0 1]
And no distortion
>> DisCoe = [0,0,0,0]
The rotation is just two rotations of pi/12. I then got the 1st-view 2D coordinates:
>> Points1 = [1850, 4254;
686.5, 3871.7;
126.3, 3687.6;
255.2, 4116.5;
1653.9, 4987.6;
1288.6, 5391.0;
37.7, 4944.1;
1426.1, 5839.6;
960.0, 5669.1;
377.3, 5977.8]
For the 2nd view, I changed:
>> R = [0, -0.261799387, -0.261799387]
>> T = [0, -20, 100]
And then got the 2nd View 2D coordinates:
>> Points2 = [1850, -854;
625.4, -585.8;
-11.3, -446.3;
348.6, -117.7;
2046.1, -110.1;
1939.0, 442.9;
588.6, 776.9;
2273.9, 754.0;
1798.1, 875.7;
1446.2, 1501.8]
Then come the reconstruction steps. I have already built ideally matched points (I believe), so the next step is to calculate the fundamental matrix using estimateFundamentalMatrix():
>> F = [-0.000000124206906, 0.000000155821234, -0.001183448392236;
-0.000000145592802, -0.000000088749112, 0.000918286352329;
0.000872420357685, -0.000233667041696, 0.999998470240927]
With known K, I used the Matlab code below to calculate the essential matrix, compute R and t, and finally the 3D coordinates:
E = K'*F*K;
[u1,w1,v1] = svd(E);
t = (w1(1,1)+w1(2,2))/2;
w1_new = [t,0,0;0,t,0;0,0,0];
E_new = u1*w1_new*v1';
[u2,w2,v2] = svd(E_new);
W = [0,-1,0;1,0,0;0,0,1];
S = [0;0;-1]; % column vector, so that t = u2*S below is well-defined
P1 = K*eye(3,4);
R = u2*W'*v2';
t = u2*S;
P2 = K*[R t];
for i=1:size(Points1,1)
    A = [P1(3,:)*Points1(i,1)-P1(1,:); P1(3,:)*Points1(i,2)-P1(2,:); P2(3,:)*Points2(i,1)-P2(1,:); P2(3,:)*Points2(i,2)-P2(2,:)];
    [u3,w3,v3] = svd(A);
    dpt(i,:) = [v3(1,4) v3(2,4) v3(3,4)];
end
From this code I got the result as below:
>>X_result = [-0.00624167168027166 -0.0964921215725801 -0.475261364542900;
0.0348079221692933 -0.0811757557821619 -0.478479857606225;
0.0555763217997650 -0.0735028994611970 -0.480026199527202;
0.0508767193762549 -0.0886557226954657 -0.473911682320574;
0.00192300693541664 -0.121188713743347 -0.466462048338988;
0.0150597271598557 -0.133665834494933 -0.460372995991565;
0.0590515135110533 -0.115505488681438 -0.460357357303399;
0.0110271144368152 -0.148447743355975 -0.455752218710129;
0.0266380667320528 -0.141395768700202 -0.454774266762764;
0.0470113238869852 -0.148215424398514 -0.445341461836899]
When showing these points in Geomagic, the result is "a little bit bent", but their positions seemed right. I don't know why this happened. Does anybody have an idea? Please see the picture.
It looks like numerical inaccuracies, maybe inside your function estimateFundamentalMatrix().
My second guess is that your estimateFundamentalMatrix() is not handling the planar case, which is a degenerate case for some algorithms (the linear 8-point algorithm, for example, does not work well with a planar scene).
The uncalibrated fundamental matrix estimation is ambiguous for planar scenes (2 solutions at least). See for example "Multiple View Geometry" by Hartley & Zisserman.
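Since K is known here, one way to sidestep the planar ambiguity of uncalibrated fundamental-matrix estimation is to estimate the essential matrix directly: the calibrated five-point solver does not suffer from the planar degeneracy of the linear 8-point algorithm. A sketch of that route using the OpenCV Java bindings (Points1/Points2 stand for the matched 2D points as MatOfPoint2f and K for the intrinsic matrix; this is an alternative to, not part of, the MATLAB code above):
// Estimate E directly from calibrated correspondences instead of going through F
Mat mask = new Mat();
Mat E = Calib3d.findEssentialMat(Points1, Points2, K, Calib3d.RANSAC, 0.999, 1.0, mask);
Mat R = new Mat(), t = new Mat();
Calib3d.recoverPose(E, Points1, Points2, K, R, t, mask);
// R and t can then replace the manually decomposed E = K'*F*K in the triangulation step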

Is there any function in OpenCV which is equivalent to Matlab conv2?

Is there any direct opencv function for matlab function conv2? I tried using cvFilter2D(), but it seems to be giving me different results than conv2().
For example:
CvMat * Aa = cvCreateMat(2, 2, CV_32FC1);
CvMat * Bb = cvCreateMat(2, 2, CV_32FC1);
CvMat * Cc = cvCreateMat(2, 2, CV_32FC1);
cvSetReal2D(Aa, 0, 0, 1);
cvSetReal2D(Aa, 0, 1, 2);
cvSetReal2D(Aa, 1, 0, 3);
cvSetReal2D(Aa, 1, 1, 4);
cvSetReal2D(Bb, 0, 0, 5);
cvSetReal2D(Bb, 0, 1, 5);
cvSetReal2D(Bb, 1, 0, 5);
cvSetReal2D(Bb, 1, 1, 5);
cvFilter2D(Aa, Cc, Bb);
This produces the matrix [20 30; 40 50]
In MATLAB:
>> A=[1 2; 3 4]
A =
1 2
3 4
>> B=[5 5; 5 5]
B =
5 5
5 5
>> conv2(A,B,'same')
ans =
50 30
35 20
Please help me, this would be very useful for me. Thank you.
Regards,
Arangarajan.
The numerical computing environment Matlab (or e.g. its free alternative GNU Octave) provides a function called conv2 for the two-dimensional convolution of a given matrix with a convolution kernel. While writing some C++ code based upon the free image processing library OpenCV, I found that OpenCV currently offers no equivalent method.
Although there is a filter2D() method that implements two-dimensional correlation and that can be used to convolute an image with a given kernel (by flipping that kernel and moving the anchor point to the correct position, as explained on the corresponding OpenCV documentation page), it would be nice to have a method offering the same border handling options as Matlab (“full”, “valid” or “same” convolution), e.g. for comparing results of the same algorithm implemented in both Matlab and C++ using OpenCV.
Here is what I came up with:
enum ConvolutionType {
    /* Return the full convolution, including border */
    CONVOLUTION_FULL,
    /* Return only the part that corresponds to the original image */
    CONVOLUTION_SAME,
    /* Return only the submatrix containing elements that were not influenced by the border */
    CONVOLUTION_VALID
};

void conv2(const Mat &img, const Mat& kernel, ConvolutionType type, Mat& dest) {
    Mat source = img;
    if(CONVOLUTION_FULL == type) {
        source = Mat();
        const int additionalRows = kernel.rows-1, additionalCols = kernel.cols-1;
        copyMakeBorder(img, source, (additionalRows+1)/2, additionalRows/2,
                       (additionalCols+1)/2, additionalCols/2, BORDER_CONSTANT, Scalar(0));
    }
    Point anchor(kernel.cols - kernel.cols/2 - 1, kernel.rows - kernel.rows/2 - 1);
    int borderMode = BORDER_CONSTANT;
    // filter2D computes correlation, so flip the kernel in both dimensions to get a true convolution
    Mat flippedKernel;
    flip(kernel, flippedKernel, -1);
    filter2D(source, dest, img.depth(), flippedKernel, anchor, 0, borderMode);
    if(CONVOLUTION_VALID == type) {
        dest = dest.colRange((kernel.cols-1)/2, dest.cols - kernel.cols/2)
                   .rowRange((kernel.rows-1)/2, dest.rows - kernel.rows/2);
    }
}
In my unit tests, this implementation yielded results that were almost identical with the Matlab implementation. Note that both OpenCV and Matlab do the convolution in Fourier space if the kernel is large enough. The definition of ‘large’ varies in both implementations, but results should still be very similar, even for large kernels.
Also, the performance of this method might be an issue for the ‘full’ convolution case, since the entire source matrix needs to be copied to add a border around it. Finally, if you receive an exception in the filter2D() call and you are using a kernel with only one column, this might be caused by this bug. In that case, set the borderMode variable to e.g. BORDER_REPLICATE instead, or use the latest version of the library from the OpenCV trunk.
If you are using convolution, there is a problem at the edge of the matrix. The convolution mask needs values which are outside of the matrix. The algorithms from OpenCV and Matlab use different strategies to cope with this problem: OpenCV just replicates the pixels of the border, whereas Matlab just assumes that all these pixels are zero.
So if you want to emulate the behaviour of matlab in OpenCV you can add this zero padding manually. There even is a dedicated function for this. Let me give you an example of how your code could be modified:
CvMat * Ccb = cvCreateMat(3, 3, CV_32FC1);
CvMat * Aab = cvCreateMat(3, 3, CV_32FC1);
cvCopyMakeBorder(Aa,Aab, cvPoint(0,0),IPL_BORDER_CONSTANT, cvScalarAll(0));
cvFilter2D(Aab, Ccb, Bb);
The result this gives is:
20.000 30.000 20.000
40.000 50.000 30.000
30.000 35.000 20.000
To get your intended result you just need to delete the first column and row to get rid of the additional data introduced by the border we added.

block processing with multiple input matrices

I'm working in Matlab, processing images for steganography. In my work so far I have been using the block processing command blockproc to break the image up into blocks to work on. I'm now looking to start working with two images, the secret and the cover, but I can't find any way to use blockproc with two input matrices instead of one.
Would anyone know of a way to do this?
blockproc allows you to iterate over a single image only, but doesn't stop you from operating on whatever data you would like. The signature of the user function takes as input a "block struct", which contains not only the data field (which is used in all the blockproc examples) but also several other fields, one of which is "location". You can use this to determine "where you are" in your input image and to determine what other data you need to operate on that block.
For example, here's how you could do element-wise multiplication on two same-size images. This is a pretty clunky example, but it demonstrates how this could look:
im1 = rand(100);
im2 = rand(100);
fun = @(bs) bs.data .* ...
im2(bs.location(1):bs.location(1)+9,bs.location(2):bs.location(2)+9);
im3 = blockproc(im1,[10 10],fun);
im4 = im1 .* im2;
isequal(im3,im4)
Using the "location" field of the block struct you can figure out the appropriate parts of a 2nd, 3rd, 4th, etc. data set you need for that particular block.
hope this helps!
-brendan
I was struggling with the same thing recently and solved it by combining both my input matrices into a single 3D matrix as follows. The commented out lines were my original code, prior to introducing block processing to it. The other problem I had was using variables other than the image matrix in the function: I had to do that part of the calculation first. If someone can simplify it please let me know!
%%LAB1 - L*a*b nearest neighbour classification
%distance_FG = ((A-FG_A).^2 + (B-FG_B).^2).^0.5;
%distance_BG = ((A-BG_A).^2 + (B-BG_B).^2).^0.5;
distAB = @(bs) ((bs.data(:,:,1)).^2 + (bs.data(:,:,2)).^2).^0.5;
AB = A - FG_A; AB(:,:,2) = B - FG_B;
distance_FG = blockproc(AB, [1000, 1000], distAB);
clear AB
AB = A - BG_A; AB(:,:,2) = B - BG_B;
distance_BG = blockproc(AB, [1000, 1000], distAB);
clear AB
I assume the solution to your problem lies in creating a new matrix that contains both input matrices.
e.g. A(:,:,1) = I1; A(:,:,2) = I2;
Now you can use blockproc on A.