I have been trying to do a stereo calibration with the MATLAB camera calibration toolbox. I have two cameras being triggered at the same time, and I'm grabbing corners from 25 pairs of images. The individual calibrations are working, though one camera's calibration uses only 24 of the 25 images (when I reproject onto the images, only 24 images pop up). When I try to use the L and R calibration .mat files for a stereo calibration, it throws "Disabling view XX - L and R views are found inconsistent" for every single pair (and it says there are only 24 pairs of images, not 25). I've read the help file, but I don't think it addresses my problem. Please advise!
Please check out the CameraCalibrator App that is part of the Computer Vision System Toolbox for MATLAB. It gives you a very easy way to calibrate a single camera, including automatic checkerboard detection. It does not let you calibrate a stereo pair, but it gets you halfway there.
Related
I am trying to obtain a disparity map from a homemade stereo camera setup. The baseline is 125 mm and both cameras are fixed to a 3D-printed support. I've previously calibrated the cameras with 15 images of a checkerboard pattern with an 80 mm square size using MATLAB's calibration tool.
Using the intrinsics and extrinsics given by MATLAB's calibration tool, I rectified the images and built the disparity map in a MATLAB script. However, the disparity is not good enough for my application. Do you think the calibration is not good, or could it be due to other problems?
Here are the results:
As you can see from the lines I drew, the rectification of the images is not done well, since the epipolar constraint doesn't hold.
As you can see, I used one of the calibration images to check; however, the same happens with other images. I'm particularly concerned about the ground, as it contains a lot of noise and invalid points, which is not good enough for my algorithms, so I need to improve it.
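One quick way to visualize whether rectification succeeded is to overlay horizontal lines on a composite of the rectified pair; corresponding features should lie on the same scan line. A minimal sketch, assuming `J1` and `J2` are the already-rectified left and right images:

```matlab
% Assumes J1, J2 are the outputs of rectifyStereoImages.
A = stereoAnaglyph(J1, J2);            % red-cyan composite of the pair
figure; imshow(A); hold on;
for y = 50:50:size(A, 1)               % one horizontal line every 50 px;
    plot([1 size(A, 2)], [y y], 'g');  % matching features should sit on
end                                    % the same line in both images
```

If the lines cut through a feature at different heights in the red and cyan channels, the calibration (or the left/right image assignment) is off.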
MATLAB's Stereo Camera Calibrator app only asks for the square size once, when adding the first image.
Is there a way I can:
Change the checkerboard square size?
Set different values for the X and Y sizes (rectangles instead of squares)?
I hope the MATLAB Computer Vision System Toolbox is not that limited, since Bouguet's Camera Calibration Toolbox for MATLAB lets you set the X and Y sizes separately, and even use rectangles rather than squares for the checkerboard cells.
The app assumes that the checkerboards in all calibration images have the same size (the same square size and the same number of squares). You have to set the square size once, at the beginning of the session. If you want to change it, you would have to start a new calibration session and add the images again.
Under the hood, the app calls the detectCheckerboardPoints function to detect the checkerboard in an image. It may work with "rectangular squares", but I am not sure. You can certainly try it, and if it works you would need to generate the world coordinates of your points yourself, because generateCheckerboardPoints assumes squares, and not rectangles. Then you can do the calibration programmatically using the estimateCameraParameters function.
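As a rough sketch of that programmatic route, assuming a board whose cells have different horizontal and vertical sizes (`dx` and `dy` below are placeholders for your measured dimensions, and `imageFileNames` is a placeholder list of your calibration images), you could build the world coordinates by hand and pass them to estimateCameraParameters:

```matlab
% imageFileNames: cell array of calibration image paths (placeholder).
[imagePoints, boardSize] = detectCheckerboardPoints(imageFileNames);

% World coordinates for rectangular cells; dx, dy in mm (assumptions).
% The ordering below goes down each column first, mirroring what
% generateCheckerboardPoints produces for square cells -- verify it
% matches the ordering of your detected points before trusting it.
dx = 80; dy = 60;
[X, Y] = meshgrid(0:dx:dx*(boardSize(2)-2), 0:dy:dy*(boardSize(1)-2));
worldPoints = [X(:), Y(:)];

params = estimateCameraParameters(imagePoints, worldPoints);
```

The key point is that estimateCameraParameters does not care how the world points were generated, only that they correspond point-for-point to the detected image points.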
After stereo calibration, when I run the MATLAB example for stereo depth estimation (SDE), the distances are wrong: at about 2 meters, it always reports the distance as less than 1 m.
And my 3D scene reconstruction looks cone-shaped instead of like the real scene.
Disparity map is very noisy (non-smooth), but resembles the scene.
If I 'feed' the SDE script the example file instead of webcam input, it runs okay and everything looks great; when I feed it from two webcams (Logitech HD Pro Webcam C920), that's when I get the bad results above, starting with the rough disparity map.
I've tried many different calibration attempts, from just a few images up to about 60, with MATLAB's checkerboard pattern at different angles (never more than 45°) and at distances of about 8 to 20' from the cameras. The camera lenses are always 3.8175" apart and are mounted to the top edge of the laptop. I followed MATLAB's recommended workflow.
What am I doing wrong in calibration?
Matlab R2015a.
Laptop Windows 7 64-bit
Checkerboard pattern is 37" x 27"
JUST DISCOVERED THE PROBLEM:
I was creating the disparity map with this:
disparityMap = disparity(frameLeftGray, frameRightGray);
However, my camera #1 is on the right. MATLAB says the default disparity range is [0 64], and for camera #1 on the right it should be [-128 0], but that change makes the disparity map uniformly blue.
I got it working: (1) the left/right assignment of the calibration images and the detection data structures must match; (2) use millimeters for the checkerboard square size. Inches cause a malfunction, because everything else is in mm.
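A minimal sketch of the consistent-ordering fix: whichever camera is physically on the left supplies the first argument everywhere, so the default non-negative disparity range applies (variable names here are placeholders, and `stereoParams` is assumed to come from a calibration session with the same left/right assignment):

```matlab
% frameLeft comes from the physically-left camera, frameRight from the
% physically-right one, regardless of which one MATLAB calls camera #1.
[frameLeftRect, frameRightRect] = ...
    rectifyStereoImages(frameLeft, frameRight, stereoParams);
frameLeftGray  = rgb2gray(frameLeftRect);
frameRightGray = rgb2gray(frameRightRect);

% With consistent left/right ordering the default range works:
disparityMap = disparity(frameLeftGray, frameRightGray, ...
    'DisparityRange', [0 64]);
```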
First I will try to explain what I have to do, and then I will ask my question about the problem.
My task is to detect small balls (2mm) in gelatine using two webcams.
The steps for detection are these:
Image capture using two webcams (positioned at 90 degrees to each other)
Stereo calibration of each pair of images
Masking of the areas in the images which are not necessary to analyse
Rectification of each pair of images
Circle detection resulting in structure with the positions (x, y) of the center of each circle (in reality of each ball)
Association of the resulting positions to get a 3D coordinate for each ball's position (this is my problem)
Now the problem (step 6.):
What options are there to compute the 3D coordinates of each ball's center from the 2D coordinates in the two images?
I'm searching http://de.mathworks.com/help/vision/stereo-vision.html for ideas, but I hope you know some easy way and have some suggestions.
I cannot upload any images (because I'm new on Stack Overflow).
Take a look at this example: Depth Estimation from Stereo Video. The example takes a pair of images from a calibrated stereo camera, rectifies the images, detects a person, and gets the 3D coordinates of the person's centroid. You can do the same thing to find the balls.
Calibrate the cameras using the Stereo Camera Calibrator app.
Take two images
Rectify the images using the rectifyStereoImages function
Compute the stereo disparity using the disparity function
Get the 3D coordinates for every pixel using the reconstructScene function
Detect the balls in image 1
Look up the 3D coordinates of their centroids
Once you have the disparity map and the dense reconstruction from reconstructScene, there is no need to find correspondences between the images; disparity has already done that for you.
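The steps above can be sketched roughly as follows (assumes `stereoParams` from the Stereo Camera Calibrator app; the file names and the circle radius range are placeholders you would tune for your 2 mm balls):

```matlab
I1 = imread('left.png');    % image from camera 1 (placeholder path)
I2 = imread('right.png');   % image from camera 2 (placeholder path)

[J1, J2] = rectifyStereoImages(I1, I2, stereoParams);
disparityMap = disparity(rgb2gray(J1), rgb2gray(J2));
xyzPoints = reconstructScene(disparityMap, stereoParams);  % calibration units

% Detect circles (balls) in the rectified image from camera 1;
% [3 10] is an assumed radius range in pixels.
[centers, radii] = imfindcircles(rgb2gray(J1), [3 10]);

% Look up the 3D coordinates of each circle center.
for k = 1:size(centers, 1)
    c = round(centers(k, :));               % [x y] pixel coordinates
    P = squeeze(xyzPoints(c(2), c(1), :));  % [X; Y; Z]
    fprintf('Ball %d: X=%.1f Y=%.1f Z=%.1f\n', k, P(1), P(2), P(3));
end
```

Note that `xyzPoints` is indexed (row, column), i.e. (y, x), which is why the center coordinates are swapped in the lookup.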
I have a video of moving parts taken using a static camera. I wish to track and analyze the coordinates of various parts in the video, but the coordinate values are affected by camera movement. How do I compensate for camera shake? I don't have any static point in the video (except for the top and bottom edges of the frame).
All I wish to get are the coordinates (of centroids, maybe) of the moving parts, adjusted for camera shake. I use MATLAB's Computer Vision Toolbox to process the video.
I've worked on super-resolution algorithms in the past, and as a side effect, I got image stabilization using phase correlation. It's very resilient to noise, and it's quite fast. You should be able to achieve sub-pixel accuracy using a weighted centroid around the peak location, or some kind of peak-fitting routine. Running phase correlation on successive frames will tell you the translational shift that occurs frame-to-frame. You can use an affine warp to remove the shift.
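A minimal phase-correlation sketch, assuming `f1` and `f2` are grayscale frames of the same size (the sign of the recovered shift depends on which frame you treat as the reference, so check it on a known translation first):

```matlab
F1 = fft2(double(f1));
F2 = fft2(double(f2));
R  = F1 .* conj(F2);
R  = R ./ max(abs(R), eps);      % normalize: keep only the phase
c  = abs(ifft2(R));              % correlation surface with a sharp peak
[~, idx] = max(c(:));
[dy, dx] = ind2sub(size(c), idx);
shift = [dy dx] - 1;             % 0-based peak location
sz = size(f1);
wrap = shift > sz/2;             % unwrap shifts past the midpoint
shift(wrap) = shift(wrap) - sz(wrap);

% Translate frame 2 back to align it with frame 1
% (imtranslate expects [dx dy] order):
stabilized = imtranslate(f2, -[shift(2) shift(1)]);
```

In newer releases, imregcorr in the Image Processing Toolbox implements a similar FFT-based registration, if you'd rather not roll your own.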
A similar, but slower, approach is shown in this example using normalized cross-correlation.
If you're using MATLAB R2013a or later, then video stabilization can be done using point matching or template matching. I believe these are also available in R2012b, but I haven't tested that.