How to remove the effect of camera shake from a video using MATLAB?

I have a video of moving parts taken with a static camera. I want to track and analyse the coordinates of various parts in the video, but the coordinate values are affected by camera movement. How do I compensate for the camera shake? I don't have any static point in the video (except for the top and bottom edges of the frame).
All I want are the coordinates (centroids, maybe) of the moving parts, adjusted for camera shake. I use MATLAB's Computer Vision Toolbox to process the video.

I've worked on super-resolution algorithms in the past, and as a side effect I got image stabilization using phase correlation. It's very resilient to noise, and it's quite fast. You should be able to achieve sub-pixel accuracy using a weighted centroid around the peak location, or some kind of peak-fitting routine. Running phase correlation on successive frames will tell you the translational shift that occurs from frame to frame, and you can then use an affine warp to remove that shift.
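A minimal sketch of that idea, assuming grayscale frames of equal size (variable names such as prevFrame and currFrame are placeholders, and the peak is only taken to whole-pixel accuracy here):

    % FFT-based phase correlation between two consecutive frames
    Fp = fft2(double(prevFrame));
    Fc = fft2(double(currFrame));
    R  = conj(Fp) .* Fc;
    R  = R ./ max(abs(R), eps);               % cross-power spectrum
    c  = abs(ifft2(R));                       % correlation surface; peak location = shift
    [~, idx] = max(c(:));
    [pr, pc] = ind2sub(size(c), idx);
    dy = pr - 1;  dx = pc - 1;                % 0-based peak location
    if dy > size(c,1)/2, dy = dy - size(c,1); end   % unwrap negative shifts
    if dx > size(c,2)/2, dx = dx - size(c,2); end
    % Undo the estimated shift with a translation-only affine warp
    tform = affine2d([1 0 0; 0 1 0; -dx -dy 1]);
    stab  = imwarp(currFrame, tform, 'OutputView', imref2d(size(currFrame)));

For sub-pixel accuracy you would replace the plain argmax with the weighted centroid or peak fit mentioned above.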
A similar, but slower, approach is shown here; that example uses normalized cross-correlation.

If you're using MATLAB R2013a or later, video stabilization can be done using Point Matching or Template Matching. I believe they're also available in R2012b, but I haven't tested that.
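A rough sketch of the point-matching route between two grayscale frames, using Computer Vision Toolbox functions (frame names are illustrative):

    % Detect and match features, then fit a transform robustly with RANSAC
    ptsA = detectSURFFeatures(frameA);
    ptsB = detectSURFFeatures(frameB);
    [fA, vA] = extractFeatures(frameA, ptsA);
    [fB, vB] = extractFeatures(frameB, ptsB);
    pairs    = matchFeatures(fA, fB);
    matchedA = vA(pairs(:,1));
    matchedB = vB(pairs(:,2));
    tform = estimateGeometricTransform(matchedB, matchedA, 'affine');
    stabB = imwarp(frameB, tform, 'OutputView', imref2d(size(frameA)));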

Related

How to superimpose two stereo images so that they become aligned? (Camera intrinsics and extrinsics known)

I'd like to find a transformation that projects the image from the left camera onto the image from the right camera so that the two become aligned. I already managed to do that with two similar cameras (IGB and RGB) by using the disparity map and shifting each pixel by the corresponding disparity value. My problem is that this doesn't work for other cameras that I'm using (for example multispectral and infrared sensors), because the calculated disparity maps have very little detail. I am currently using the MATLAB Computer Vision Toolbox, and I suspect that the problem is the poor correlation of information between the images (the disparity algorithm finds few correspondences).
I would like to know if there is another way of doing this transformation, for example just by using the extrinsic and intrinsic parameters of the cameras (they are already calibrated).
The disparity IS the transformation you are looking for. Any sensible left-right mapping depends on 3D information, which the disparity provides. Anything else is just hallucinating some values based on assumptions that may or may not make sense.
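For reference, the per-pixel shift the question describes amounts to something like the following for a grayscale left image and a disparity map defined on the left view (all names are illustrative; pixels with invalid disparity stay black):

    % Re-project the left image onto the right view using the disparity map
    [h, w] = size(dmap);
    [X, Y] = meshgrid(1:w, 1:h);
    Xr     = round(X - dmap);                    % column in the right view
    valid  = dmap > 0 & Xr >= 1 & Xr <= w;       % keep only valid, positive disparities
    warped = zeros(h, w, 'like', leftImg);
    warped(sub2ind([h w], Y(valid), Xr(valid))) = leftImg(sub2ind([h w], Y(valid), X(valid)));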

Rectified images don't comply with epipolar geometry

I am trying to obtain a disparity map from a homemade stereo camera setup. The baseline is 125 mm and both cameras are fixed to a 3D-printed support. I previously calibrated the cameras with 15 images of a checkerboard pattern with 80 mm squares, using MATLAB's calibration tool.
Using the intrinsics and extrinsics given by MATLAB's calibration tool, I rectified the images and built the disparity map in a MATLAB script. However, the disparity is not good enough for my application. Do you think the calibration is not good, or could this be due to other problems?
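For reference, the rectify-then-disparity step is roughly the following (stereoParams comes from MATLAB's calibration tool; the disparity range and uniqueness threshold below are illustrative placeholders, not my exact settings):

    % Rectify with the calibrated stereo parameters, then compute disparity
    [J1, J2] = rectifyStereoImages(I1, I2, stereoParams);
    dmap = disparity(rgb2gray(J1), rgb2gray(J2), ...     % assuming RGB inputs
                     'DisparityRange', [0 128], ...      % placeholder range
                     'UniquenessThreshold', 15);         % placeholder value
    imshow(dmap, [0 128]); colormap jet; colorbar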
Here are the results:
As you can see from the lines I drew, the rectification of the images is not done well, since the epipolar constraint doesn't hold.
I used one of the calibration images to check, but the same happens with other images. I'm particularly concerned about the ground, as it contains a lot of noise and invalid points, which is not good enough for my algorithms, so I need to improve it.

Stereo Camera Geometry

This paper describes the geometry of a stereo image system nicely. I am trying to figure out how the calculation would change if the cameras were tilted towards each other at a certain angle. I looked around but couldn't find any reference to tilted camera systems.
Unfortunately, the calculation changes significantly. The rectified case (where both cameras are well aligned with each other) has the advantage that you can calculate the disparity directly, and the depth is inversely proportional to it (Z = f·B/d, with focal length f and baseline B). This does not hold in the general case.
When you introduce tilts, you end up in the general case of epipolar geometry. Here is a paper about this that I just googled. In order to calculate the depth from a pixel pair you need the fundamental matrix or the essential matrix, and neither is easy to obtain from the image pair alone. If, however, you know the geometric relation between the two cameras (translation and rotation), calculating these matrices is a lot easier.
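A minimal sketch of that last step, assuming a known rotation R and translation t from camera 1 to camera 2 and intrinsic matrices K1 and K2 (all names illustrative):

    % Essential matrix E = [t]x * R, fundamental matrix F = K2^-T * E * K1^-1
    tx = [   0   -t(3)  t(2);
           t(3)    0   -t(1);
          -t(2)  t(1)    0  ];        % skew-symmetric cross-product matrix
    E = tx * R;
    F = inv(K2)' * E * inv(K1);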
There are several ways to calculate the depth of a pixel pair. One way is to use the fundamental matrix to rectify both images (although rectification is not easy either, nor is it unique) and then run a simple disparity check.

Headlights detection using Difference of Gaussian (DoG)

I am developing a project to detect vehicle headlights in night scenes. First I am working on a demo in MATLAB. My detection method is edge detection using Difference of Gaussians (DoG): I convolve the image with Gaussian blurs at two different sigmas and then subtract the two filtered images to find the edges. My result is shown below:
Now my problem is to find a method in MATLAB that keeps the round edges, such as car headlights and even street lights, and ignores the other edges. If you have any suggestions, please tell me.
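For reference, the DoG step described above is roughly the following in MATLAB (the two sigma values and kernel size are illustrative):

    % Difference of Gaussians: blur at two sigmas and subtract
    gray = im2double(rgb2gray(img));
    g1   = imfilter(gray, fspecial('gaussian', [15 15], 1.5), 'replicate');
    g2   = imfilter(gray, fspecial('gaussian', [15 15], 3.0), 'replicate');
    dog  = g1 - g2;                      % band-pass response, strong at edges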
I think you may be able to get a better segmentation using a slightly different approach.
There is already strong contrast between the lights and the background, so you can take advantage of this to segment out the bright spots with a simple threshold. You can then apply some blob detection to filter out any small blobs (e.g. streetlights), and proceed from there with contour detection, Hough circles, etc. until you find the objects of interest.
As an example, I took your source image and did the following:
Convert to 8-bit greyscale
Apply Gaussian blur
Threshold
This is a section of the source image:
And this is the thresholded overlay:
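A rough MATLAB equivalent of those three steps plus the blob filtering mentioned above (the threshold and minimum area values are illustrative and will need tuning):

    % Segment bright spots by thresholding, then drop small blobs
    gray = rgb2gray(img);                                 % 8-bit greyscale
    blur = imfilter(gray, fspecial('gaussian', [9 9], 2), 'replicate');
    bw   = blur > 200;                                    % bright-spot threshold
    bw   = bwareaopen(bw, 50);                            % remove small blobs
    stats = regionprops(bw, 'Centroid', 'Area', 'Eccentricity');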
Perhaps this type of approach is worth exploring further. Please comment to let me know what you think.

How to calculate perspective transformation using ellipse

I'm very new to 3D image processing. I'm working on a project to find the perspective angle of a circle.
I have a plate with a set of white circles; using those circles I want to find the 3D rotation angles of the plate.
I have finished the camera calibration part and obtained the camera parameters and calibration errors. As the next step, I captured an image and applied Sobel edge detection.
After that I'm a bit confused about the ellipse-fitting algorithm. I've seen a lot of ellipse-fitting algorithms; which one is the best and fastest method?
Once the ellipse fit is done, I don't know how to proceed further. How do I calculate the rotation and translation matrices from the fitted ellipses?
Can you tell me which algorithm is most suitable and easy? I'd also appreciate some MATLAB code to understand the concept.
Thanks in advance, and sorry for my English.
First, find the ellipse/circle centres (e.g. as Eddy_Em described in other comments).
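One simple way to get those centres (and rough ellipse parameters) in MATLAB, assuming the circles are bright blobs on a darker plate, is something like:

    % Threshold the white circles and read off blob centres and axes
    gray    = rgb2gray(img);
    bw      = im2bw(gray, graythresh(gray));      % Otsu threshold
    bw      = imfill(bw, 'holes');
    stats   = regionprops(bw, 'Centroid', 'MajorAxisLength', ...
                              'MinorAxisLength', 'Orientation');
    centres = cat(1, stats.Centroid);             % N-by-2 [x y] circle centres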
You can then refer to Zhang's classic paper
https://research.microsoft.com/en-us/um/people/zhang/calib/
which allows you to estimate camera pose from a single image if some camera parameters are known, e.g. the centre of projection. Note that the method fails for frontal views: the stronger the perspective effect, the more accurate your estimate will be. The algorithm is fairly simple; you'll need an SVD and some cross products.