I am using Matlab's Camera Calibrator to calibrate a bunch of images from a fisheye camera.
How can I translate the results into OpenCV format?
Results in Matlab:
Format I would like to convert them to:
I looked into it and found that Matlab's fisheye calibration uses Scaramuzza's model, but I can't find a clear statement anywhere of what exactly each value stands for, or of their order.
Edit: The cameraIntrinsicsToOpenCV function provided by Matlab does not work with fisheye models; it only handles the pinhole/default camera model, hence the question!
As of release R2021b, Matlab provides the cameraIntrinsicsToOpenCV function, which does exactly that. Read more here
They also made it bijective with its twin function cameraIntrinsicsFromOpenCV.
Edit:
For fisheye cameras, use the undistortFisheyeImage function to get a cameraIntrinsics object, which will contain all the data for OpenCV: FocalLength [fx fy], PrincipalPoint [cx, cy], RadialDistortion [k1 k2 k3] and TangentialDistortion [p1 p2].
[UndistortedImage,camIntrinsics] = undistortFisheyeImage(Image,intrinsics);
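The returned camIntrinsics is a cameraIntrinsics object, so you can read the OpenCV-style values straight from its properties (or pass it to cameraIntrinsicsToOpenCV if your release has it). A rough sketch of assembling the OpenCV camera matrix by hand, with illustrative variable names:
fx = camIntrinsics.FocalLength(1);    fy = camIntrinsics.FocalLength(2);
cx = camIntrinsics.PrincipalPoint(1); cy = camIntrinsics.PrincipalPoint(2);
% OpenCV camera matrix; the -1 accounts for Matlab's 1-based vs OpenCV's 0-based pixel origin
K = [fx 0 cx-1; 0 fy cy-1; 0 0 1];
% If the intrinsics carry distortion, OpenCV expects the coefficients ordered [k1 k2 p1 p2 k3]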
Related
I am using the SURF feature detector and descriptor to find feature points in images using MATLAB. I want to use these feature points and descriptors in another program that only accepts them in Lowe's ASCII format. I found that SIFT feature descriptors are normalized to 512, and I need to do the same with the SURF descriptors in MATLAB, but I haven't managed it. I tried the norm function with no success. Here is how I implemented it, but I could not get what I want.
I = imread('cameraman.tif');
[r, c, p] = size(I);
if p > 1
    I = rgb2gray(I);
end
points = detectSURFFeatures(I);
[features, vldPoints] = extractFeatures(I, points, 'FeatureSize', 128, ...
    'Method', 'SURF');
% imshow(I); hold on;
% plot(points);
% Scale each descriptor to length 512, as in Lowe's ASCII key files
normFeatures = zeros(size(features));
for ii = 1:size(features, 1)
    v = features(ii, :);
    normFeatures(ii, :) = round(v / norm(v) * 512);
end
More about the question can be found here.
EDIT: I tried the same process to normalize SIFT feature descriptors obtained with Lowe's original sift binary in MATLAB, and it worked (I compared the temp.key file provided by Lowe in the sift folder with my features file and they are the same). That means the SURF 'features' are not the right data to normalize.
Please guide me about the SURF features computed in MATLAB. How are they different from SIFT feature descriptors?
This is Herbert, co-author of the SURF paper. Unfortunately, it is not possible to convert SURF features into SIFT features as the underlying math is different. Therefore, it is not possible to match SURF features against SIFT features. If you only want to normalize the features, please refer to the original source code for maybe better understanding https://github.com/herbertbay/SURF. Hope this helps...
Has anyone else noticed that the outputs of MATLAB's rgb2hsv() and OpenCV's cvtColor() (with the argument thereof being CV_BGR2HSV) appear to be calculated slightly differently?
For one, MATLAB's function maps any image input to the [0,1] interval, while OpenCV maintains the same interval of the input (i.e. an image with pixels in [0,255] in RGB keeps the same [0,255] interval in HSV).
But more importantly, when normalizing the cvtColor() output (e.g. mat = mat / 255), the values are not quite the same.
I couldn't find anything in the documentation about the specific formulas they use. Can anyone please shed some light on these differences?
For OpenCV the formula is right there in the document you point to. For Matlab, have a look here http://www.mathworks.com/matlabcentral/newsreader/view_thread/269237:
Just dive into the code - they gave it to you. Just put the cursor on
the function rgb2hsv() in your code and type control-d.
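As a concrete illustration of the scaling difference (a sketch based on the documented conventions, not on either implementation's source): OpenCV's 8-bit HSV output stores H as degrees/2 in [0,180) and S, V in [0,255], while Matlab's rgb2hsv returns all three channels in [0,1], so a direct comparison needs a rescale:
I = imread('peppers.png');              % any RGB test image
hsvMatlab = rgb2hsv(im2double(I));      % H, S, V all in [0, 1]
% Rescale to OpenCV's 8-bit convention for comparison
hCv = round(hsvMatlab(:,:,1) * 180);    % OpenCV: H = degrees / 2
sCv = round(hsvMatlab(:,:,2) * 255);
vCv = round(hsvMatlab(:,:,3) * 255);
Small residual differences after this rescaling come mostly from rounding to 8-bit integers.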
How is the reprojection error calculated in Matlab's triangulate function?
Sadly, the documentation gives no mathematical formula.
It only says: The vector contains the average reprojection error for each M world point.
What is the procedure Matlab uses when calculating this error?
I searched SO but found nothing on this (IMHO important) question.
UPDATE:
How can they use this error to filter out bad matches here: http://se.mathworks.com/help/vision/examples/sparse-3-d-reconstruction-from-two-views.html
AFAIK the reprojection error is always calculated in the same way (in the field of computer vision in general).
The reprojection error is (as the name says) the error between the reprojected point in the camera and the original point.
So from 2 (or more) points in the cameras you triangulate and get a 3D point in the world system. Due to errors in the calibration of the cameras, the point will not be 100% accurate. What you do is take the resulting 3D point (P) and, using the camera calibration parameters, project it into the cameras again, obtaining new points (\hat{p}) near the original ones (p).
Then you calculate the Euclidean distance between the original point and the "reprojected" one.
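A sketch of that computation in MATLAB (illustrative variable names; this is the general idea, not necessarily the exact code inside triangulate):
% P: 1-by-3 triangulated world point, p: 1-by-2 original image point,
% camMatrix: 4-by-3 camera matrix in Matlab's [X Y Z 1] * camMatrix convention
projected = [P, 1] * camMatrix;         % project the 3-D point into the image
pHat = projected(1:2) / projected(3);   % back from homogeneous coordinates
reprojError = norm(p - pHat);           % Euclidean distance in pixels
With two views you get one such distance per camera, and (per the documentation) triangulate reports the average per world point.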
In case you want to know a bit more about the method Matlab uses, I'll expand the reference they cite by also giving you the page number:
Multiple View Geometry in Computer Vision by Richard Hartley and
Andrew Zisserman (p312). Cambridge University Press, 2003.
But basically it is a least squares minimization, that has no geometrical interpretation.
You can find an explanation of reprojection errors in the context of camera calibration in the Camera Calibrator tutorial:
The reprojection errors returned by the triangulate function are essentially the same concept.
The way to use the reprojection errors to discard bad matches is shown in this example:
[points3D, reprojErrors] = triangulate(matchedPoints1, matchedPoints2, ...
cameraMatrix1, cameraMatrix2);
% Eliminate noisy points
validIdx = reprojErrors < 1;
points3D = points3D(validIdx, :);
This code excludes all 3D points for which the reprojection error was more than a pixel. You can also use validIdx to eliminate the corresponding 2D matches.
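The same logical index works on the 2D matches as well; for example (assuming the matched points are M-by-2 coordinate matrices, as accepted by triangulate):
% Keep only the 2-D matches whose triangulated points passed the threshold
matchedPoints1 = matchedPoints1(validIdx, :);
matchedPoints2 = matchedPoints2(validIdx, :);
% If they are feature point objects (e.g. SURFPoints) rather than matrices,
% index them directly: matchedPoints1 = matchedPoints1(validIdx);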
The answers above interpret the re-projection error simply as an actual reprojection into the camera. In a more general sense, this error reflects the distance between a noisy image point and the point estimated from the model. One can imagine a tangent plane to some surface (the model) in n-dimensional space onto which the noisy point is projected (hence it lands on the plane, not on the model!). n is not necessarily 2, since the notion of a "point" can be generalized to, for example, the concatenation of the coordinates of two corresponding points for a homography.
It is important to understand that the reprojection error is not the whole story: overall_error^2 = reprojection_error^2 + estimation_error^2. The latter is the distance between the reprojected estimate and the true point on the model. More on this can be found in chapter 5 of Hartley and Zisserman's book Multiple View Geometry. They prove that the reprojection error has a theoretical limit of 0.6*sigma (for homography estimation), where sigma is the noise standard deviation.
They filter out bad matches by removing the indices with large re-projection errors.
In other words, points with large re-projection errors are treated as outliers.
I was given this task; I am a noob and need some pointers to get started with centroid calculation in Matlab:
Instead of an image, I was first asked to simulate a 2-dimensional Gaussian distribution, add random noise, and plot the intensities. The position of the centroid then changes due to the noise, and I need to bring it back to its original position by:
- a clipping level to get rid of the noise (noise reduction by clipping or smoothing),
- a sliding average, i.e. a low-pass / averaging filter over 3-5 samples,
- calculating the means, or
- using a convolution filter kernel, which performs the matrix operations representing the 2-D image.
Since you are a noob, even if we wrote down the answer verbatim you probably wouldn't understand how it works. So instead I'll do what you asked and give you pointers; you'll have to read the related documentation (a short end-to-end sketch follows the list):
a) to produce a 2-d Gaussian use meshgrid or ndgrid
b) to add noise to the image look into rand, randn or randi, depending on what exactly you need.
c) to plot the intensities use imagesc
d) to find the centroid there are several ways; search SO further and you'll find many discussions. You can also check the MathWorks File Exchange for different implementations.
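A minimal end-to-end sketch of those pointers (the grid size, noise level and clipping threshold are made-up illustrative values):
[X, Y] = meshgrid(-10:0.5:10);                    % a) sampling grid for the 2-D Gaussian
G = exp(-((X - 1).^2 + (Y + 2).^2) / (2 * 3^2));  %    Gaussian centred at (1, -2), sigma = 3
N = G + 0.1 * randn(size(G));                     % b) additive random noise
imagesc(N); axis image; colorbar;                 % c) plot the intensities
M = N .* (N > 0.2);                               % d) clipping level to suppress the noise...
cx = sum(M(:) .* X(:)) / sum(M(:));               %    ...then an intensity-weighted centroid
cy = sum(M(:) .* Y(:)) / sum(M(:));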
I have an algorithm in C++ that uses a Kalman filter. Somewhere in the code I predict a quaternion q' and then update it with the Kalman filter to get q.
I want to plot two graphs in Matlab showing the evolution of the predicted quaternion and the corrected (updated) quaternion, so I am using the "engine.h" library to send quaternion info to Matlab during processing (actually what I send is a 4x1 matrix).
So my question is: What is the best way to plot a quaternion in Matlab so I can visually extract information? Is it maybe better to plot the angles separately?
I think a good option is sending the quaternion as a vector to MATLAB, using the C++ MATLAB engine:
[qx qy qz qw]
Then, in the MATLAB environment you can use a toolbox to convert it to Euler angles, which are a common way to visualize orientation.
To add the path of a toolbox through the MATLAB engine:
addpath(genpath('C:\Program Files (x86)\MATLAB\R2010a\toolbox\SpinCalc'));
With the SpinCalc toolbox, the conversion would be something like this:
Angles=SpinCalc('QtoEA321',Quaternion,0,0);
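If the predicted and corrected quaternions are logged over time (say as N-by-4 arrays, here called qPred and qCorr for illustration), one way to compare them visually is to plot each Euler angle separately:
anglesPred = SpinCalc('QtoEA321', qPred, 0, 0);   % N-by-3 Euler angles in degrees
anglesCorr = SpinCalc('QtoEA321', qCorr, 0, 0);
for k = 1:3
    subplot(3, 1, k);
    plot(anglesPred(:, k)); hold on;
    plot(anglesCorr(:, k), '--'); hold off;
    legend('predicted', 'corrected');
end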
Well, assuming that the question is "how to visualize a 4D space in a nice way",
I can think of a few options:
Show multiple slices of the space, that is, for (x,y,z,t) -> (x,y), (y,z), etc.
Show (x,y) as a scatter plot and encode z in the color and t in the size of the dot (a small example follows the quoted help below). For that you can use the scatter command:
SCATTER(X,Y,S,C) displays colored circles at the locations specified
by the vectors X and Y (which must be the same size).
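A minimal sketch of that idea, treating the four quaternion components as (x, y, z, t) (q is a hypothetical N-by-4 array of samples):
% Encode the 3rd component in color and the 4th in marker size
s = 20 + 80 * (q(:,4) - min(q(:,4))) / (max(q(:,4)) - min(q(:,4)) + eps);
scatter(q(:,1), q(:,2), s, q(:,3), 'filled');
colorbar;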
If your question was "how to visualize quaternions in a nice way", check this out