I am using the Camera Calibration Toolbox for Matlab. After calibration I have the intrinsic and extrinsic parameters of the stereo camera system. Next, I would like to determine the distance between the camera system and an object. To get this information, I used the function stereo_triangulation, which is included in the Toolbox. Its inputs are two matrices containing the pixel coordinates of corresponding points in the left and right images.
I tried to get the coordinates of the correspondences using the Basic Block Matching method described in Matlab's help for Stereo Vision.
The resolution of my pictures is 1280x960 pixels. I know that the largest disparity is around 520 pixels, so I set the maximum of the disparity range to 520. But then determining the coordinates takes ages, and it is not usable in practice. Calculating the disparity map is much faster with Matlab's function disparity(). But I want the step before that - the coordinates of the correspondences.
Can you please suggest how I can get the coordinates efficiently with Matlab?
Disparity and 3D are related by simple formulas (see below), so the time for calculating the 3D data and the disparity map should be essentially the same. The notation is:
f - focal length in pixels,
B - separation between cameras,
u, v - column and row in a coordinate system centered on the middle of the image,
d - disparity,
x, y, z - 3D coordinates.
z = f*B/d;
x = z*u/f;
y = z*v/f;
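Note that once you have a disparity map, you also have the correspondences: a pixel (row, col) in the left image matches (row, col - d) in the right image. A rough MATLAB sketch, assuming a disparity map D computed with disparity() for the left image (variable names are hypothetical, and I believe stereo_triangulation expects 2xN matrices):

[h, w] = size(D);
[col, row] = meshgrid(1:w, 1:h);
valid = D > 0;                                 % keep only pixels with a valid disparity
xL = [col(valid)'; row(valid)'];               % 2xN pixel coordinates in the left image
xR = [col(valid)' - D(valid)'; row(valid)'];   % matching coordinates in the right image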
1280x960 is too large a resolution for any correlation-based stereo to work in real time. Think about it: you have to loop over a 2D image, over a 2D correlation window and over the range of disparities. That means 5 nested loops! I don't work with Matlab anymore, but I know that it is quite slow.
How do you determine that the intrinsic and extrinsic parameters you have calculated for a camera at time X are still valid at time Y?
My idea would be:
1. Use a known calibration object (a chessboard) and place it in the camera's field of view at time Y.
2. Calculate the chessboard corner points in the camera's image (at time Y).
3. Define one of the chessboard corner points as the world origin and calculate the world coordinates of all remaining chessboard corners based on that origin.
4. Relate the coordinates of 3. to the camera coordinate system.
5. Use the parameters calculated at time X to calculate the image points of the points from 4.
6. Calculate the distances between the points from 2. and the points from 5.
Is that a sensible way to go about it? I'd eventually like to implement it in MATLAB and later possibly OpenCV. I think I'd know how to do steps 1)-2) and step 6). Maybe someone can give a rough implementation for steps 2)-5). In particular, I'm unsure how to relate the "chessboard world coordinate system" to the "camera world coordinate system", which I believe I would have to do.
Thanks!
If you have a single camera you can easily follow the steps from this article:
Evaluating the Accuracy of Single Camera Calibration
To achieve step 2, you can use the detectCheckerboardPoints function from MATLAB:
[imagePoints, boardSize, imagesUsed] = detectCheckerboardPoints(imageFileNames);
Assuming that you are talking about stereo cameras: for stereo pairs, imagePoints(:,:,:,1) contains the points from the first set of images, and imagePoints(:,:,:,2) the points from the second set. The output contains M [x y] coordinates. Each coordinate represents a point where square corners are detected on the checkerboard. The number of points the function returns depends on the value of boardSize, which indicates the number of squares detected. The function detects the points with sub-pixel accuracy.
As you can see in the following image, the points are given relative to the first point, which covers your third step.
[The image is from this page at MATHWORKS.]
You can consider point 1 as the origin of your coordinate system (0,0). The directions of the axes are shown in the image, and you know the distance between the points (in world coordinates), so it is just a matter of depth estimation.
To find a transformation matrix between the points in the world CS and the points in the camera CS, you should collect a set of points and perform an SVD to estimate the transformation matrix.
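A minimal sketch of that SVD step (the Kabsch/Procrustes method) in MATLAB, assuming Pw and Pc are hypothetical 3xN matrices of matched points in the world and camera coordinate systems (requires implicit expansion, R2016b or newer):

cw = mean(Pw, 2);  cc = mean(Pc, 2);          % centroids of both point sets
H  = (Pw - cw) * (Pc - cc)';                  % 3x3 cross-covariance matrix
[U, ~, V] = svd(H);
R = V * diag([1 1 sign(det(V*U'))]) * U';     % rotation (the diag term avoids reflections)
t = cc - R * cw;                              % translation
% A world point X then maps to the camera coordinate system as Xc = R*X + t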
But,
I would estimate the parameters of the camera again and compare them with the initial parameters from time X. This is easier if you have saved the images that were used when calibrating the camera at time X. By repeating the calibration process using those images you should get very similar results if the camera calibration is still valid.
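A rough MATLAB sketch of that re-check, assuming you still have the file names of the calibration images and know the checkerboard square size (imageFileNames and squareSize are hypothetical variables):

[imagePoints, boardSize] = detectCheckerboardPoints(imageFileNames);
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
params = estimateCameraParameters(imagePoints, worldPoints);
% Compare, e.g., the intrinsics with the ones estimated at time X
disp(params.IntrinsicMatrix);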
Edit: Why do you need the set of images used in the calibration process at time X?
You have a set of images from doing the calibration the first time, right? To recalibrate the camera you need to use a new set of images. But for checking the previous calibration, you can use the previous images. If the parameters of the camera have changed, there will be an error between the re-estimation and the first estimation. This can be used for evaluating the validity of the calibration, not for recalibrating the camera.
I'm trying to make a 3D reconstruction from a set of uncalibrated photographs in MATLAB. I use SIFT to detect feature points and matches between images. I want to make a projective reconstruction first and then update this to a metric one using auto-calibration.
I know how to estimate the 3D points from 2 images by computing the fundamental matrix, the camera matrices and triangulation. Now say I have 3 images, a, b and c. I compute the camera matrices and 3D points for images a and b. Now I want to update the structure by adding image c. I estimate the camera matrix by using known 3D points (calculated from a and b) that match with 2D points in image c, since s*x = P*X (where x is the homogeneous image point, X the homogeneous 3D point, P the 3x4 camera matrix and s an unknown scale factor).
However, when I reconstruct the 3D points between b and c, they don't line up with the existing 3D points from a and b. I'm assuming this is because I don't know the correct depth estimates of the points (the factor s in the formula above).
With the factorization method of Sturm and Triggs I can estimate the depths and recover structure and motion. However, in order to do this, all points have to be visible in all views, which is not the case for my images. How can I estimate the depths for points not visible in all views?
This is not a question about Matlab. It is about an algorithm.
It is not mathematically possible to estimate the position of a 3D point in an image when you don't see an observation of the point in said image.
There are extensions of factorization that handle missing data. However, the field seems to have converged on Bundle Adjustment as the gold standard.
An excellent tutorial on how to achieve what you want can be found here; it is the culmination of several years of research turned into a working application, starting from projective reconstruction up to the metric upgrade.
I have a 512x512x313 volume of DICOM images, and I have a point given in world coordinates, say (57.7475, 63.4184, 83.1515). How can I get the corresponding pixel coordinate of that world coordinate in Matlab?
I hate to burst your bubble, but what you are asking for is impossible. The only way that I can think of to get a correspondence between real-world co-ordinates and pixel co-ordinates is to calibrate the camera that was used to capture the images. Once you know the intrinsic and extrinsic parameters, you have a transformation matrix that can map real-world co-ordinates to pixel co-ordinates.
I'm assuming you don't have calibration information for your camera, so an alternative approach would be to find out which pixels in your image map to which real-world co-ordinates. You would need a set of point correspondences between the real world and your image. Once you have those, you can compute the camera transformation matrix through least-squares and then use this matrix to determine which real-world points map to which points in your image.
Unless you have pixel correspondences to each of your real-world co-ordinates, it is impossible to do what you're asking.
FWIW, if you want to see the procedure on how to obtain the transformation matrix, check out these notes: http://www.peterhillman.org.uk/downloads/whitepapers/calibration.pdf. This was a great starting point for me when I started learning about camera calibration. Take a look at Section #5 (Page #8), as this is what I believe you are looking for.... but you will need to have correspondences between the real-world co-ordinates and your image.
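As a rough sketch of that least-squares step (a direct linear transform solved via SVD), assuming hypothetical arrays XYZ (Nx3 real-world points) and uv (Nx2 pixel points) with at least 6 correspondences:

% Build the 2N x 12 homogeneous system A*p = 0 from the correspondences
N = size(XYZ, 1);
A = zeros(2*N, 12);
for i = 1:N
    Xh = [XYZ(i,:) 1];                         % homogeneous world point
    A(2*i-1, :) = [Xh, zeros(1,4), -uv(i,1)*Xh];
    A(2*i,   :) = [zeros(1,4), Xh, -uv(i,2)*Xh];
end
[~, ~, V] = svd(A);
P = reshape(V(:, end), 4, 3)';                 % 3x4 camera projection matrix
% Map a world point: x = P * [X; Y; Z; 1]; pixel coords are x(1:2) / x(3)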
Good luck!
I have two images (both are exactly the same image) and I am trying to calculate the disparity between them using the sum of squared differences (SSD) and reconstruct the disparity in 3D space.
Do I need to rectify the images before calculating the disparity?
The following are the steps that I have done so far for the disparity map computation (I have tried with rectification and without rectification, but both return an all-zeros disparity matrix).
For each pixel X in the left image,
Take the pixels in the same row in the right image.
Split the row in the right image into windows.
For each window,
Calculate the disparity of each pixel in that window with respect to X.
Select the pixel in the window which gives the minimum SSD with X.
Find the pixel with the minimum disparity among all windows as the best match to X.
Am I doing it correctly?
How can I visualise the 3D reconstruction of the disparity as a scatter plot in Matlab?
Rectification guarantees that matches are found in the same row (for horizontally separated cameras). If you have doubts about the rectification of your images, you can compare rows by drawing horizontal lines across the horizontally separated images. If the lines hit the same features, you are fine; see the picture below, where the images are NOT rectified. The fact that they look warped suggests that a lens distortion correction was applied, along with an attempted (but not correctly performed) rectification.
Now, let's see what you meant by the same images. Did you mean images of the same object taken from different viewpoints? Note that if the images are literally the same (the same viewpoint) the disparity will be zero, as was noted in another answer. The definition of disparity (for horizontally separated cameras) is the shift (within the same row) between matching features. Disparity is related to depth (if the optical axes of the cameras are parallel) as d = f*B/z, where z is depth, B is the baseline or separation between the cameras, and f is the focal length. You can rearrange this as d/B = f/z, which basically says that disparity relates to camera separation as focal length relates to distance. In other words, the ratios of the horizontal and distance measures are equal.
If your images are taken with the cameras shifted horizontally, the disparity (in a simple correlation algorithm) is typically calculated with 5 nested loops:
loop over image1 y
loop over image1 x
loop over disparity d
loop over correlation window y
loop over correlation window x
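A brute-force MATLAB sketch of those loops, assuming rectified grayscale images I1 (left) and I2 (right), a half-window size hw and a maximum disparity maxD (all hypothetical names; the two innermost window loops are vectorized here):

[h, w] = size(I1);
D = zeros(h, w);
for y = 1+hw : h-hw                                  % loop over image rows
    for x = 1+hw+maxD : w-hw                         % loop over image columns
        best = inf;  dBest = 0;
        winL = I1(y-hw:y+hw, x-hw:x+hw);
        for d = 0:maxD                               % loop over disparities
            winR = I2(y-hw:y+hw, x-d-hw:x-d+hw);
            ssd = sum((winL(:) - winR(:)).^2);       % correlation window, vectorized
            if ssd < best, best = ssd; dBest = d; end
        end
        D(y, x) = dBest;
    end
end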
The disparity, or D_best, gives you the shift of the best matching window between image1 and image2 across all possible values of d. Finally, scatter plots are for 3D point clouds, while a disparity map is better visualized as a heat-colored image. If you need to visualize the 3D reconstruction, or simply a 3D point cloud, calculate X, Y, Z as:
Z = f*B/D, X = u*Z/f, Y = v*Z/f, where u and v are related to the column and row of a w x h image as
u = col - w/2 and v = h/2 - row, i.e. u, v form an image-centered coordinate system.
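For the scatter-plot question, a minimal MATLAB sketch, assuming a disparity map D (e.g. from the sketch above) and hypothetical values for f (in pixels) and B:

[h, w] = size(D);
[col, row] = meshgrid(1:w, 1:h);
u = col - w/2;  v = h/2 - row;                 % image-centered coordinates
valid = D > 0;                                 % skip pixels with no disparity
Z = f*B ./ D(valid);
X = u(valid) .* Z / f;
Y = v(valid) .* Z / f;
scatter3(X, Y, Z, 1, '.');                     % 3D point cloud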
If your two images are exactly the same, then the disparity would be 0 for every pixel. You either have to use two separate cameras to take the images, or take them with a single camera from two different locations. The best way to do 3D reconstruction is to use a calibrated stereo pair of cameras. Here is an example of how to do that using the Computer Vision System Toolbox for MATLAB.
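With a calibrated pair, that pipeline roughly looks like this in the Computer Vision System Toolbox (a sketch, assuming a stereoParams object from stereo calibration and an image pair I1, I2):

[J1, J2] = rectifyStereoImages(I1, I2, stereoParams);     % rectify the pair
disparityMap = disparity(rgb2gray(J1), rgb2gray(J2));     % block-matching disparity
xyzPoints = reconstructScene(disparityMap, stereoParams); % 3D points in world units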
I'm receiving depth images from a ToF camera in MATLAB. The drivers delivered with the ToF camera compute the x, y, z coordinates from the depth image using OpenCV functions, which are called from MATLAB via mex-files.
But later on I won't be able to use those drivers or OpenCV functions anymore, so I need to implement the 2D-to-3D mapping on my own, including compensation for radial distortion. I already have the camera parameters, and the computation of the x, y, z coordinates for each pixel of the depth image is working. So far I have been solving the implicit equations of the undistortion via Newton's method (which isn't really fast...). But I want to implement the undistortion the way the OpenCV function does it.
... and there is my problem: I don't really understand it, and I hope you can help me out. How does it actually work? I searched through the forum but haven't found any useful threads on this.
greetings!
The equations of the projection of a 3D point [X; Y; Z] to a 2D image point [u; v] are provided on the documentation page related to camera calibration:
(source: opencv.org)
In the case of lens distortion, the equations are non-linear and depend on 3 to 8 parameters (k1 to k6, p1 and p2). Hence, it would normally require a non-linear solving algorithm (e.g. Newton's method, the Levenberg-Marquardt algorithm, etc.) to invert such a model and estimate the undistorted coordinates from the distorted ones. This is what is used behind the function undistortPoints, with tuned parameters making the optimization fast but a little inaccurate.
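For point (rather than image) correction, one such approach is a simple fixed-point iteration; a sketch in MATLAB for the radial-only case, assuming a distorted normalized point (xd, yd) and coefficients k1, k2, k3 (hypothetical names, tangential terms omitted):

x = xd;  y = yd;                        % initial guess: the distorted point itself
for it = 1:5                            % a handful of iterations is usually enough
    r2 = x^2 + y^2;
    s  = 1 + k1*r2 + k2*r2^2 + k3*r2^3;
    x  = xd / s;                        % invert x_d = s * x
    y  = yd / s;
end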
However, in the particular case of image lens correction (as opposed to point correction), there is a much more efficient approach based on a well-known image re-sampling trick. The trick is that, in order to obtain a valid intensity for each pixel of your destination image, you have to transform coordinates in the destination image into coordinates in the source image, and not the opposite as one would intuitively expect. In the case of lens distortion correction, this means that you actually do not have to invert the non-linear model, but just apply it.
Basically, the algorithm behind the undistort function is the following. For each pixel of the destination lens-corrected image, do:
Convert the pixel coordinates (u_dst, v_dst) to normalized coordinates (x', y') using the inverse of the calibration matrix K,
Apply the lens-distortion model, as displayed above, to obtain the distorted normalized coordinates (x'', y''),
Convert (x'', y'') to distorted pixel coordinates (u_src, v_src) using the calibration matrix K,
Use the interpolation method of your choice to find the intensity/depth associated with the pixel coordinates (u_src, v_src) in the source image, and assign this intensity/depth to the current destination pixel.
Note that if you are interested in undistorting the depthmap image, you should use a nearest-neighbor interpolation, otherwise you will almost certainly interpolate depth values at object boundaries, resulting in unwanted artifacts.
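A MATLAB sketch of those four steps for a depth image, assuming a source depth map depthSrc, a calibration matrix K = [fx 0 cx; 0 fy cy; 0 0 1] and radial coefficients k1, k2 (hypothetical names, tangential terms omitted):

[h, w] = size(depthSrc);
[uDst, vDst] = meshgrid(0:w-1, 0:h-1);
% 1. Destination pixel -> normalized coordinates using inv(K)
x = (uDst - K(1,3)) / K(1,1);
y = (vDst - K(2,3)) / K(2,2);
% 2. Apply the (radial) distortion model
r2 = x.^2 + y.^2;
s  = 1 + k1*r2 + k2*r2.^2;
xd = x .* s;  yd = y .* s;
% 3. Normalized -> source pixel coordinates using K
uSrc = K(1,1)*xd + K(1,3);
vSrc = K(2,2)*yd + K(2,3);
% 4. Nearest-neighbor lookup into the source depth image
uSrc = min(max(round(uSrc) + 1, 1), w);   % +1 for MATLAB's 1-based indexing
vSrc = min(max(round(vSrc) + 1, 1), h);
depthDst = depthSrc(sub2ind([h w], vSrc, uSrc));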
The above answer is correct, but note that the UV coordinates here are in screen space and centered around (0,0), as opposed to "real" UV coordinates.
Source: own re-implementation using Python/OpenGL. Code:
import numpy as np

def correct_pt(uv, K, Kinv, ds):
    # uv: Nx2 pixel coordinates, K/Kinv: camera matrix and its inverse,
    # ds: distortion coefficients [k1, k2, p1, p2, k3] (only the radial terms are used)
    uv_3 = np.stack((uv[:, 0], uv[:, 1], np.ones(uv.shape[0])), axis=-1)
    xy_ = uv_3 @ Kinv.T                       # normalized image coordinates
    r = np.linalg.norm(xy_[:, :2], axis=-1)   # radial distance from the optical axis
    coeff = 1 + ds[0]*r**2 + ds[1]*r**4 + ds[4]*r**6
    xy_[:, :2] *= coeff[:, np.newaxis]        # apply radial distortion to x, y only
    return (xy_ @ K.T)[:, 0:2]                # back to (distorted) pixel coordinates