Using the Vision Caltech Camera Calibration Toolbox for MATLAB

I am currently using the camera calibration toolbox from Vision Caltech: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html
It has worked well so far: I was able to obtain my camera's parameters using the checkerboard.
So now I have been taking pictures with my camera and I want to undistort them using the calibration results I obtained from the Caltech calibration toolbox.
The parameters are saved in a .mat file but I cannot find a way to use them on other images I took.
Does anyone know how to do that?
Thanks!

There is an easier way to calibrate cameras in MATLAB: the Camera Calibrator app in the Computer Vision System Toolbox. The same toolbox also includes the undistortImage function, which is what you are looking for.
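For illustration, here is a minimal sketch of the undistortion step, assuming you recalibrate with MATLAB so that you have a cameraParameters object (undistortImage does not read the Caltech toolbox's .mat file directly); the file and variable names below are assumptions, not values from your session:

    % Minimal sketch: undistort one of your own photos.
    % 'myCalibration.mat', the variable 'cameraParams', and 'myPhoto.jpg'
    % are assumed names; substitute the names from your own session.
    S = load('myCalibration.mat');        % must contain a cameraParameters object
    cameraParams = S.cameraParams;

    I = imread('myPhoto.jpg');
    J = undistortImage(I, cameraParams);  % remove lens distortion

    imshowpair(I, J, 'montage');          % compare original and corrected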

Related

Stereo Vision for Obstacle Detection

I'm working on a stereo vision project with Halcon/.NET. My project is to scan the surface of a metal plate. Is it possible to detect small holes (1-3 mm) on it with stereo vision?
If you are somewhat familiar with epipolar geometry and MRF optimization, you can have a look at this classic paper on 'Depth Estimation from Video'.
http://www.cad.zju.edu.cn/home/bao/pub/Consistent_Depth_Maps_Recovery_from_a_Video_Sequence.pdf
For camera calibration, you can use their ACTS software, available here:
http://www.zjucvg.net/acts/acts.html
It accepts a video sequence and generates camera parameters and depth maps.
I hope it helps!
Yes, it is definitely possible to detect such holes, but I doubt you need stereo vision for it. Stereo vision is only useful when you want to recover 3D information (depth) from a scene.
Detection and classification can also be achieved with deep learning methods, which will probably be more intuitive, but it depends on how distinct your 'hole' is from the background of your scene. A problem of similar novelty has been discussed in this paper.
The same issue affects stereo vision: if the background of your scene has features similar to what you are trying to detect, it will create problems during stereo matching.
Even a simple edge detector on a monocular vision system will run into the same problem; a small sketch below illustrates the idea.
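To make that concrete, here is a minimal monocular sketch (not a method from the answer above) that flags hole-sized blobs after edge detection; the image file name and the pixel-per-millimetre scale are assumptions:

    % Minimal monocular sketch: find hole-sized blobs after edge detection.
    % 'plate.png' (assumed to be grayscale) and the 10 px/mm scale are
    % assumptions; calibrate the scale for your own setup.
    I  = imread('plate.png');
    BW = edge(I, 'canny');                        % boundary candidates
    BW = imfill(BW, 'holes');                     % close hole outlines
    stats = regionprops(BW, 'Area', 'Centroid');

    pxPerMM = 10;
    minArea = pi * (0.5 * pxPerMM)^2;             % ~1 mm diameter hole
    maxArea = pi * (1.5 * pxPerMM)^2;             % ~3 mm diameter hole
    holes = stats([stats.Area] >= minArea & [stats.Area] <= maxArea);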

Matlab Camera Calibration: saving chessboard calibration images

I'm currently trying to calibrate a camera and use those data to calibrate a projector. The first steps of this process are printing a chessboard, generating chessboard calibration images, and using them to calibrate the camera. The Matlab documentation is quite thorough, but it doesn't mention how one can generate their own chessboard calibration images. I can only assume this is a fairly simple thing to do, but I'm new to Matlab and haven't figured it out yet, so any help would be greatly appreciated.
You generate the calibration images by taking pictures of the chessboard with your camera.
Also, if you have a recent version of MATLAB with the Computer Vision System Toolbox, try the Camera Calibrator app; a scripted alternative is sketched below.
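If you prefer to script the calibration instead of using the app, here is a minimal sketch of the programmatic route; the calib*.jpg file pattern and the 25 mm square size are assumptions:

    % Minimal sketch: programmatic calibration with the Computer Vision
    % System Toolbox. The file pattern and 25 mm square size are assumptions.
    files = dir('calib*.jpg');
    imageFileNames = {files.name};

    % Detect checkerboard corners in all calibration images
    [imagePoints, boardSize] = detectCheckerboardPoints(imageFileNames);

    % World coordinates of the corners, from the printed square size
    squareSizeMM = 25;
    worldPoints = generateCheckerboardPoints(boardSize, squareSizeMM);

    % Estimate intrinsics and lens distortion
    cameraParams = estimateCameraParameters(imagePoints, worldPoints);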
https://sites.google.com/site/visheshpanjabihomepage/codes
The second set of codes will help you collect the images for the camera calibration toolbox by Dr. Bouguet (a link to which is given on that page).

Camera Calibration on MATLAB

I am very new to camera calibration and I have been trying to work with the camera calibration app from MATLAB's Computer Vision Toolbox.
So I followed the steps suggested on the website, and so far so good: I was able to obtain the intrinsic parameters of the camera.
So now I am a bit confused about what I should do with the cameraParameters object that was created when the calibration was done.
So my questions are:
(1) What should I do with the cameraParameters object that was created?
(2) How do I use this object when I am using the camera to capture images of something?
(3) Do I need the checkerboard around each time I capture images for my experiment?
(4) Should the camera be placed at the same spot each time?
I am sorry if these questions are really beginner-level; camera calibration is new to me and I was not able to find answers.
Thank you so much for your help.
I assume you are working with just one camera, so only the intrinsic parameters of the camera are in play.
(1), (2): Once your camera is calibrated, you need to use these parameters to undistort the image. Cameras don't capture images exactly as they are in reality, because the lenses distort them a bit, and the calibration parameters are used to correct the images. More on Wikipedia.
About when you need to recalibrate the camera (3): if you set up the camera and don't change its focus, then you can keep using the same calibration parameters, but once you change the focal distance a recalibration is necessary.
(4) As long as you don't change the focal distance and you are not using a stereo camera system, you can move your camera freely.
What you are looking for are two separate calibration steps: alignment of the depth image to the color image, and conversion from depth to a point cloud. Both functions are provided by the Windows Kinect SDK, and there are MATLAB wrappers that call these SDK functions. You may want to do your own calibration only if you are not satisfied with the manufacturer calibration information stored on the Kinect. Usually the error is within 1-2 pixels in the 2D alignment and 4 mm in 3D.
When you calibrate a single camera, you can use the resulting cameraParameters object for several things. First, you can remove the effects of lens distortion using the undistortImage function, which requires cameraParameters. There is also a function called extrinsics, which you can use to locate your calibrated camera in the world relative to some reference object (e.g., a checkerboard). Here's an example of how you can use a single camera to measure planar objects.
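To make the extrinsics step concrete, here is a minimal sketch, assuming you photograph the checkerboard from the camera's working position; the image file name and the 25 mm square size are assumptions:

    % Minimal sketch: locate the camera relative to a checkerboard.
    % 'scene.jpg' and the 25 mm square size are assumptions.
    I = undistortImage(imread('scene.jpg'), cameraParams);

    % Detect the checkerboard in the undistorted image
    [imagePoints, boardSize] = detectCheckerboardPoints(I);
    worldPoints = generateCheckerboardPoints(boardSize, 25);

    % Camera rotation and translation relative to the board's plane
    [rotationMatrix, translationVector] = extrinsics( ...
        imagePoints, worldPoints, cameraParams);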
A depth sensor, like the Kinect, is a bit of a different animal. It already gives you the depth in real units at each pixel. Additional calibration of the Kinect is useful if you want a more precise 3D reconstruction than what it gives you out of the box.
It would generally be helpful if you could tell us more about what it is you are trying to accomplish with your experiments.

Can I use Matlab simulink 3D Animation as a 3D CG viewer?

Simulink 3D Animation is a toolbox for Simulink. I read its documentation and understood that you can load popular 3D CG data into it and view it, at least statically, with some programming in MATLAB.
Assume I have loaded some 3D object into Simulink 3D Animation successfully. Can I then rotate the 3D object or perform other standard operations on it without programming in Simulink 3D Animation or MATLAB? For example, I expect it to have a rotate button that lets me rotate the 3D object.
As a second, minor question: can you use Simulink 3D Animation when you have only MATLAB but not Simulink?
Thank you in advance.
Yes, despite the name, Simulink 3D Animation works with MATLAB only, without Simulink (see the system requirements).
For the rest, I would go with @thewaywewalk's suggestion and try it out and/or watch some videos or webinars.
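As a quick, hedged starting point: the toolbox's viewer can be opened from MATLAB and has interactive navigation (rotate, pan, zoom) built into its toolbar, so no extra programming is needed for that. The .wrl file name below is an assumption:

    % Minimal sketch: view a VRML world with Simulink 3D Animation.
    % 'myScene.wrl' is an assumed file name.
    w = vrworld('myScene.wrl');  % wrap the file in a world object
    open(w);                     % load the world into memory
    fig = vrfigure(w);           % open the built-in viewer; its toolbar
                                 % offers rotate/pan/zoom navigation modes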
Here is an example of using Simulink 3D Animation for object detection and tracking with an unmanned aerial vehicle.
Paper: https://ieeexplore.ieee.org/document/9373417
Code: https://github.com/gsilano/MAT-Fly

matlab control the digital camera

I have a digital camera that doesn't support taking a photo from the computer.
Can I control this camera with MATLAB? Or do I need a digital camera that supports remote control in order to control it from MATLAB?
I just want to take a photo from MATLAB (there is a USB cable connecting the camera and the computer).
If you want to control cameras with MATLAB you need the Image Acquisition Toolbox. Additionally, the camera you want to connect must be supported by the toolbox.
You may want to check out
http://www.mathworks.com/products/imaq/
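A minimal sketch of grabbing a single frame with the Image Acquisition Toolbox follows; the 'winvideo' adaptor name and device ID 1 are assumptions (run imaqhwinfo to see what your system actually exposes):

    % Minimal sketch: grab one frame with the Image Acquisition Toolbox.
    % The 'winvideo' adaptor and device ID 1 are assumptions; check
    % imaqhwinfo for the adaptors and devices on your system.
    imaqhwinfo                          % list available adaptors
    vid = videoinput('winvideo', 1);    % connect to the first device
    img = getsnapshot(vid);             % capture a single frame
    imshow(img);
    delete(vid);                        % release the device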
If your camera also has an A/V out (like on some Sony digital cameras), you can use a USB TV tuner to capture the analog signal and then use MATLAB to capture the video/images from the camera in real time. Works for me!