Change checkerboard size in Matlab Stereo Calibration App

The Matlab Stereo Calibration App only asks for the square size once, when adding the first image.
Is there a way I can:
Change the checkerboard square size?
Set different values for the X and Y sizes (rectangles instead of squares)?
I hope that the Matlab Computer Vision System Toolbox is not that limited, since Bouguetj's Matlab Camera Calibration Toolbox lets you set X and Y values, and even different rectangle sizes for the checkerboard cells.

The app assumes that the checkerboards in all calibration images are the same size (same square size and same number of squares). You set the square size once, at the beginning of the session. If you want to change it, you have to start a new calibration session and add the images again.
Under the hood, the app calls the detectCheckerboardPoints function to detect the checkerboard in an image. It may work with rectangular cells, but I am not sure. You can certainly try it; if it works, you would need to generate the world coordinates of your points yourself, because generateCheckerboardPoints assumes squares, not rectangles. Then you can do the calibration programmatically using the estimateCameraParameters function.
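A minimal sketch of that programmatic route, assuming the detector does handle rectangular cells; imageFileNames, cellSizeX and cellSizeY are illustrative placeholders, not values from the question:
% Sketch: calibrate with rectangular (non-square) checkerboard cells.
[imagePoints, boardSize] = detectCheckerboardPoints(imageFileNames);
% generateCheckerboardPoints assumes square cells, so build the world
% coordinates yourself: start from unit spacing (to keep the same point
% ordering as the detector) and scale the two axes independently.
cellSizeX = 25;  % mm, horizontal cell size (assumed)
cellSizeY = 32;  % mm, vertical cell size (assumed)
unitPoints  = generateCheckerboardPoints(boardSize, 1);
worldPoints = [unitPoints(:,1) * cellSizeX, unitPoints(:,2) * cellSizeY];
cameraParams = estimateCameraParameters(imagePoints, worldPoints);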

Camera Calibration Toolbox for Matlab Undistort Image loses image borders

I am using the Camera Calibration Toolbox from Caltech to undistort images. However, when I do so, the images lose their borders in the final undistorted image. I am curious whether there is a way to avoid this, as the entire image is important. Thanks in advance.
Do you have access to Matlab's Computer Vision System Toolbox? It includes a function undistortImage that allows you to set the output view to include the entire undistorted image, like so:
outImg = undistortImage(inImg, cameraParams, 'OutputView', 'full');
As far as I know, the Camera Calibration Toolbox's undistort function doesn't include this functionality. If you don't have the above toolbox, you could try zero-padding your image with a border large enough that the actual image remains inside the undistorted frame, and then crop the result to the actual image's bounding box. This should probably be a last resort, though: undistorting the padded image won't yield exactly the same results as undistorting the original, so pad as little as possible!
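A rough sketch of that padding workaround, with assumed variable names and an unspecified undistortion call (yourUndistortFcn stands in for whatever your toolbox provides):
% Pad, undistort, then crop to the bounding box of the original image content.
pad = 100;                                   % border width in pixels (tune for your lens)
paddedImg = padarray(inImg, [pad pad], 0);
% Track where the original pixels end up by undistorting a mask alongside them.
mask = padarray(true(size(inImg,1), size(inImg,2)), [pad pad], false);
undistImg  = yourUndistortFcn(paddedImg);    % hypothetical undistortion call
undistMask = yourUndistortFcn(uint8(mask)) > 0;
stats = regionprops(undistMask, 'BoundingBox');
croppedImg = imcrop(undistImg, stats(1).BoundingBox);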

Verify that camera calibration is still valid

How do you determine that the intrinsic and extrinsic parameters you have calculated for a camera at time X are still valid at time Y?
My idea would be:
1. Use a known calibration object (a chessboard) and place it in the camera's field of view at time Y.
2. Calculate the chessboard corner points in the camera's image (at time Y).
3. Define one of the chessboard corner points as the world origin and calculate the world coordinates of all remaining chessboard corners based on that origin.
4. Relate the coordinates from step 3 to the camera coordinate system.
5. Use the parameters calculated at time X to compute the image points of the points from step 4.
6. Calculate the distances between the points from step 2 and the points from step 5.
Is that a clever way to go about it? I'd eventually like to implement it in MATLAB and later possibly OpenCV. I think I'd know how to do steps 1-2 and step 6. Maybe someone can give a rough implementation for steps 2-5. In particular, I'm unsure how to relate the chessboard world coordinate system with the camera world coordinate system, which I believe I would have to do.
Thanks!
If you have a single camera you can easily follow the steps from this article:
Evaluating the Accuracy of Single Camera Calibration
For step 2, you can simply use the detectCheckerboardPoints function from MATLAB.
[imagePoints, boardSize, imagesUsed] = detectCheckerboardPoints(imageFileNames);
Assuming that you are talking about stereo cameras: for stereo pairs, imagePoints(:,:,:,1) contains the points from the first set of images and imagePoints(:,:,:,2) contains the points from the second set. The output contains M [x, y] coordinates; each coordinate is a point where a square corner was detected on the checkerboard. The number of points the function returns depends on boardSize, which indicates the number of squares detected. The function detects the points with sub-pixel accuracy.
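For reference, a small sketch of the stereo form of that call (the file-name variables are placeholders):
% Detect the board in both image sets at once and split the result per camera.
[imagePoints, boardSize, pairsUsed] = detectCheckerboardPoints(leftFileNames, rightFileNames);
leftPoints  = imagePoints(:, :, :, 1);   % points from the left images
rightPoints = imagePoints(:, :, :, 2);   % corresponding points from the right images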
As you can see in the following image, the points are estimated relative to the first point, which covers your third step.
[The image is from this page at MathWorks.]
You can consider point 1 as the origin of your coordinate system (0, 0). The directions of the axes are shown in the image, and you know the distance between the points in world coordinates, so it is just a matter of depth estimation.
To find a transformation matrix between the points in the world CS and the points in the camera CS, you should collect a set of points and perform an SVD to estimate the transformation matrix.
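A minimal sketch of that SVD-based estimation (a Kabsch-style rigid-transform fit; worldPts and cameraPts are assumed N-by-3 matrices of corresponding points):
% Estimate rotation R and translation t so that cameraPts' is approximately R*worldPts' + t.
centroidW = mean(worldPts, 1);
centroidC = mean(cameraPts, 1);
Wc = worldPts  - centroidW;                    % centered world points
Cc = cameraPts - centroidC;                    % centered camera points
[U, ~, V] = svd(Wc' * Cc);                     % 3-by-3 cross-covariance matrix
R = V * diag([1 1 sign(det(V*U'))]) * U';      % rotation, guarding against reflection
t = centroidC' - R * centroidW';               % translation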
But I would rather re-estimate the parameters of the camera and compare them with the initial parameters from time X. This is easier if you have saved the images that were used to calibrate the camera at time X. By repeating the calibration process with those images, you should get very similar results if the camera calibration is still valid.
Edit: Why do you need the set of images used in the calibration process at time X?
You had a set of images to do the calibration the first time, right? To recalibrate the camera you need to use a new set of images, but for checking the previous calibration you can use the previous images. If the parameters of the camera have changed, there will be an error between the re-estimation and the first estimation. This can be used for evaluating the validity of the calibration, not for recalibrating the camera.
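A minimal sketch of that check, assuming you saved the original image file names and the original calibration result (savedParams) somewhere:
% Re-run the calibration on the original images and compare against savedParams.
[imagePoints, boardSize] = detectCheckerboardPoints(originalImageFiles);
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
newParams = estimateCameraParameters(imagePoints, worldPoints);
% Large drifts here suggest the old calibration is no longer valid.
intrinsicsDrift = norm(newParams.IntrinsicMatrix - savedParams.IntrinsicMatrix, 'fro');
errorDrift = newParams.MeanReprojectionError - savedParams.MeanReprojectionError;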

3D reconstruction based on stereo rectified edge images

I have two closed-curve, stereo-rectified edge images. Is it possible to find the disparity (along the x-axis in image coordinates) between the edge images and do a 3D reconstruction, since I know the camera matrix? I am using MATLAB for the process. I will not be able to use a window-based technique, since these are binary images and a window-based technique requires texture. The question is: how do I compute the disparity between the edge images? The images are available at the following links. Left edge image: https://www.dropbox.com/s/g5g22f6b0vge9ct/edge_left.jpg?dl=0 Right edge image: https://www.dropbox.com/s/wjmu3pugldzo2gw/edge_right.jpg?dl=0
For this type of image, you can map each edge pixel from the left image to its counterpart in the right image, and then calculate the disparity for those pixels as usual.
The mapping can be done in various ways, depending on how typical these images are, for example with a DTW-like approach that matches curvatures.
For all other pixels in the image, you simply have no information.
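A crude sketch of the per-row version of that mapping (edgeLeft and edgeRight are assumed to be the rectified binary edge images; a DTW-style matcher would replace the naive in-order pairing):
% For rectified images, corresponding edge pixels share a row; pair them
% left-to-right within each row and take the x-difference as disparity.
matches = [];                              % rows of [row, xLeft, disparity]
for r = 1:size(edgeLeft, 1)
    xL = find(edgeLeft(r, :));
    xR = find(edgeRight(r, :));
    n = min(numel(xL), numel(xR));         % naive in-order pairing
    matches = [matches; repmat(r, n, 1), xL(1:n)', xL(1:n)' - xR(1:n)']; %#ok<AGROW>
end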
#Photon: Thanks for the suggestion. I did what you suggested: I matched each edge pixel in the left and right images in a DTW-like fashion. But there are some pixels whose y-coordinates differ by 1 or 2 pixels, even though the images are properly rectified. So I calculated the depth by averaging those differing (up to 2 pixels in y) edge pixels using a least-squares method. But I ended up getting this space curve (https://www.dropbox.com/s/xbg2q009fjji0qd/false_edge.jpg?dl=0) when it actually should have looked like this (https://www.dropbox.com/s/0ib06yvzf3k9dny/true_edge.jpg?dl=0), which was obtained using the RGB images. I couldn't think of any other reason why this would be the case, since I compared by traversing along the 408 edge pixels.

Does the chessboard pattern need to remain the same after a camera has been calibrated?

Does the chessboard pattern need to remain constant after the camera has been calibrated?
In other words, if I calibrate with a 9x6 board with 25 mm square blocks, can I use a 9x6 board with 32 mm squares with the same intrinsic matrix? Does it affect the focal length, and if so, why/how?
Once you calibrate, you can use any checkerboard to localize your camera. In fact, you can use any set of reference points with known 3D world coordinates, as long as you can accurately detect them in the image. The extrinsics function in the Computer Vision System Toolbox takes a set of image points and a set of corresponding world points, regardless of where they come from.
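A short sketch of that, assuming cameraParams came from the 25 mm calibration and newImage shows the 32 mm board:
% Localize the already-calibrated camera against the new 32 mm board.
[imagePoints, boardSize] = detectCheckerboardPoints(newImage);
worldPoints = generateCheckerboardPoints(boardSize, 32);   % new square size in mm
[rotationMatrix, translationVector] = extrinsics(imagePoints, worldPoints, cameraParams);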

Stereo matching

I am using the Camera Calibration Toolbox for Matlab. After calibration I have the intrinsic and extrinsic parameters of the stereo camera system. Next, I would like to determine the distance between the camera system and an object. To get this information, I used the stereo_triangulation function included in the Toolbox. Its inputs are two matrices containing the pixel coordinates of correspondences in the left and right images.
I tried to get the coordinates of the correspondences using the basic block matching method described in Matlab's help for Stereo Vision.
The resolution of my pictures is 1280x960 pixels. I know that the biggest disparity is around 520 pixels, so I set the maximum of the disparity range to 520. But then determining the coordinates takes ages and is not usable in practice. Calculating a disparity map is much faster with Matlab's disparity() function, but I want the step before that: the coordinates of the correspondences.
Can you please suggest how I can get the coordinates efficiently with Matlab?
Disparity and 3D coordinates are related by simple formulas (see below), so calculating 3D data should take about the same time as calculating the disparity map. The notation is:
f - focal length in pixels,
B - separation between the cameras,
u, v - row and column in a system centered on the middle of the image,
d - disparity,
x, y, z - 3D coordinates.
z=f*B/d;
x=z*u/f;
y=z*v/f;
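Applied to a full disparity map D (for example from Matlab's disparity() function; f and B are assumed known from your calibration), this is just element-wise arithmetic:
% Convert a disparity map D into 3-D coordinates using the formulas above.
[colIdx, rowIdx] = meshgrid(1:size(D,2), 1:size(D,1));
u = rowIdx - size(D,1)/2;        % row offset from the image center
v = colIdx - size(D,2)/2;        % column offset from the image center
z = f * B ./ D;                  % depth
x = z .* u / f;
y = z .* v / f;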
1280x960 is too large a resolution for any correlation-based stereo to work in real time. Think about it: you have to loop over a 2D image, over a 2D correlation window, and over the range of disparities. That means five nested loops! I don't work with Matlab anymore, but I know that it is quite slow at this kind of looping.