Deblurring of motion-blurred images - MATLAB

I'm looking at an interesting problem of deblurring motion-blurred images. Rather than guessing the PSF, I'm interested in finding the actual blur parameters (angle and length). I was fairly successful in finding the blur angle, and now need a good technique for finding the blur length. If anyone has a good idea, code, or a reference to suggest, it would be helpful. I'm working in MATLAB.

The "actual blur parameters" are just a much higher order approximation to the point spread function than the angle/length, which would just be a simple 2d model of said function. If you are interested in angle/length, you could estimate the PSF with blind deconvolution and try to recover angle/length by modeling the PSF as a 2d Multivariate Gaussian.

Related

Smoothing a noisy image then sharpening

I have been trying to restore a noisy image in MATLAB. I started with an original grayscale image of mine and applied Gaussian noise to it. I then took the noisy image and applied a Gaussian smoothing filter. After the smoothing filter, I applied a Laplacian filter over the Gaussian-blurred image and got a black image with some "edges" showing. What I am confused about is what to do next. I tried using the imadd function in MATLAB to add the Gaussian-blurred image to the output of the Laplacian filter, but the "restored" image is nowhere near as good as I expected!
Am I doing this correctly?
@eigenchris basically nailed it right on the head, but I'd like to elaborate on why this is a bad idea. Blurring the image removes high-frequency content (i.e. edges), so if you then apply a high-pass filter like the Laplacian to the low-pass result, there is almost nothing left for it to respond to and you will get a nearly zero output.
The moral of this story is that you can't sharpen an already-blurred image: sharpening works by amplifying the high-frequency content so that edges stand out more, and that content is exactly what the blur removed.
One thing I could suggest is to look into deconvolution techniques, like the Wiener filter. The Wiener filter essentially tries to undo the effect of a known filter applied to an image, taking the noise level into account.
One great example can be found on this MathWorks link: http://www.mathworks.com/help/images/examples/deblurring-images-using-a-wiener-filter.html
As such: blur the image to suppress the noise, reverse the blur with Wiener filtering to get a reasonable version of the original, then sharpen that reconstructed image.
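A minimal sketch along the lines of that MathWorks example (the PSF, the noise-to-signal ratio and the cameraman.tif test image are illustrative assumptions, not values tuned for your data):

img = im2double(imread('cameraman.tif'));   % standard test image
psf = fspecial('gaussian', 11, 2);          % stand-in for the smoothing filter
blurred = imfilter(img, psf, 'conv', 'circular');

nsr = 0.01;                                 % estimated noise-to-signal power ratio
restored = deconvwnr(blurred, psf, nsr);    % Wiener deconvolution
sharpened = imsharpen(restored);            % sharpen the reconstruction
imshowpair(blurred, sharpened, 'montage')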
Good luck!

Stereo Camera Geometry

This paper describes the geometry of a stereo image system nicely. I am trying to figure out how the calculation would change if the cameras were tilted towards each other at a certain angle. I looked around but couldn't find any reference to tilted camera systems.
Unfortunately, the calculation changes significantly. The rectified case (where both cameras are well-aligned with each other) has the advantage that you can calculate the disparity directly, and the depth is inversely proportional to the disparity. This no longer holds in the general case.
When you introduce tilts, you end up with general epipolar geometry. To calculate the depth of a pixel pair you need the fundamental matrix or the essential matrix, and neither is easy to obtain from the image pair alone. If, however, you know the geometric relation between the two cameras (translation and rotation), calculating these matrices is a lot easier.
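For illustration, a minimal sketch of how the essential and fundamental matrices fall out when the relative rotation R, translation t and the intrinsic matrices are known (all numbers below are placeholders):

theta = deg2rad(10);                        % e.g. a 10-degree tilt about y
R = [cos(theta) 0 sin(theta); 0 1 0; -sin(theta) 0 cos(theta)];
t = [0.1; 0; 0];                            % baseline of 0.1 m along x
K1 = [800 0 320; 0 800 240; 0 0 1];         % placeholder intrinsics
K2 = K1;

tx = [  0    -t(3)   t(2);                  % skew-symmetric matrix of t
       t(3)    0    -t(1);
      -t(2)   t(1)    0  ];
E = tx * R;                                 % essential matrix
F = inv(K2)' * E * inv(K1);                 % fundamental matrix

% A correct pixel pair (p1, p2), in homogeneous coordinates, satisfies
% the epipolar constraint p2' * F * p1 = 0 (up to noise).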
There are several ways to calculate the depth of a pixel pair. One way is to use the fundamental matrix to rectify both images (although rectification is not easy either, nor is it unique) and then run a simple disparity check.

Headlights detection using Difference of Gaussian (DoG)

I am developing a project for detecting vehicle headlights in night scenes. First I am working on a demo in MATLAB. My detection method is edge detection using Difference of Gaussians (DoG): I convolve the image with Gaussian blurs of two different sigmas, then subtract the two filtered images to find edges. My result is shown below:
Now my problem is to find a method in MATLAB to circle the round edges, such as the car headlights and even street lights, and ignore other edges. If you have any suggestions, please tell me.
I think you may be able to get a better segmentation using a slightly different approach.
There is already strong contrast between the lights and the background, so you can take advantage of this to segment out the bright spots with a simple threshold, then apply blob detection to filter out any small blobs (e.g. streetlights). From there you can proceed with contour detection, Hough circles, etc. until you find the objects of interest.
As an example, I took your source image and did the following (a rough MATLAB sketch of these steps follows the list):
Convert to 8-bit greyscale
Apply Gaussian blur
Threshold
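A rough MATLAB sketch of the steps above, plus the blob filtering (the threshold value, minimum blob size and file name are illustrative and would need tuning on your footage):

img  = imread('night_scene.png');           % hypothetical input frame
gray = im2uint8(rgb2gray(img));             % 1. convert to 8-bit greyscale
blur = imgaussfilt(gray, 2);                % 2. Gaussian blur
bw   = blur > 200;                          % 3. threshold the bright spots

bw    = bwareaopen(bw, 50);                 % drop small blobs
stats = regionprops(bw, 'Centroid', 'Area', 'Perimeter');
for k = 1:numel(stats)
    % circularity is ~1 for round blobs such as headlights
    circularity = 4*pi*stats(k).Area / stats(k).Perimeter^2;
    if circularity > 0.8
        fprintf('Round blob at (%.0f, %.0f)\n', stats(k).Centroid);
    end
end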
This is a section of the source image:
And this is the thresholded overlay:
Perhaps this type of approach is worth exploring further. Please comment to let me know what you think.

How to calculate perspective transformation using ellipse

I'm very new to 3D image processing. In my project I am trying to find the perspective angle of a circle.
A plate has a set of white circles; using those circles I want to find the 3D rotation angles of that plate.
I have finished the camera calibration part and obtained the camera parameters and their error estimates. Next, I captured an image and applied Sobel edge detection.
After that I am a little confused about the ellipse fitting algorithm. I have seen a lot of ellipse fitting algorithms; which one is the best and fastest method?
And after the ellipse fit, how should I proceed? How do I calculate the rotation and translation matrices from those ellipses?
Can you tell me which algorithm is most suitable and easiest? I need some MATLAB code to understand the concept.
Thanks in advance, and sorry for my English.
First, find the ellipse/circle centres (e.g. as Eddy_Em described in other comments).
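A minimal sketch of that step, assuming the white circles are bright against the plate (the file name and blob-size threshold are illustrative):

img = rgb2gray(imread('plate.png'));        % hypothetical input image
bw  = imbinarize(img);                      % white circles against the plate
bw  = bwareaopen(bw, 30);                   % drop small noise blobs
c   = regionprops(bw, 'Centroid');
centres = vertcat(c.Centroid);              % N-by-2 matrix of (x, y) centres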
You can then refer to Zhang's classic paper
https://research.microsoft.com/en-us/um/people/zhang/calib/
which allows you to estimate the camera pose from a single image if some camera parameters (e.g. the centre of projection) are known. Note that the method fails for frontal views: the stronger the perspective effect, the more accurate your estimate will be. The algorithm is fairly simple; you'll need an SVD and some cross products.
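The full pose estimation is too long to sketch here, but the SVD core of it is the homography estimate from point correspondences (the direct linear transform). A minimal sketch, with hypothetical plate coordinates and the centres detected in the sketch above:

% Known circle positions on the plate (placeholder coordinates) and the
% detected image centres; N >= 4 correspondences are needed.
worldPts = [0 0; 1 0; 1 1; 0 1; 0.5 0.5];
imagePts = centres(1:5, :);

N = size(worldPts, 1);
A = zeros(2*N, 9);
for k = 1:N
    X = worldPts(k,1);  Y = worldPts(k,2);
    x = imagePts(k,1);  y = imagePts(k,2);
    A(2*k-1,:) = [-X -Y -1  0  0  0  x*X  x*Y  x];
    A(2*k,  :) = [ 0  0  0 -X -Y -1  y*X  y*Y  y];
end
[~, ~, V] = svd(A);
H = reshape(V(:,end), 3, 3)';               % homography, up to scale

% With intrinsics K, Zhang's decomposition reads H = K*[r1 r2 t];
% the third rotation column is the cross product r3 = cross(r1, r2).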

Remove paper texture pattern from a photograph

I've scanned an old photo that has a paper texture pattern and I would like to remove the texture as much as possible without lowering the image quality. Is there a way, perhaps using the Image Processing Toolbox in MATLAB?
I've tried applying an FFT (using a Photoshop plugin), but I couldn't find any clear white spots to paint over. Probably the pattern is not regular enough for this method?
You can see the sample below. If you need the full image I can upload it somewhere.
Unfortunately, you're pretty much stuck in the spatial domain, as the pattern isn't really repetitive enough for Fourier analysis to be of use.
As @Jonas and @michid have pointed out, filtering will help with a problem like this. With filtering, you face a trade-off between the amount of detail you want to keep and the amount of noise (or unwanted image components) you want to remove. For example, the median filter used by @Jonas removes the paper texture completely (even the round scratch near the bottom edge of the image), but it also removes all texture within the eyes, hair, face and background (although we don't really care about the background so much; it's the foreground that matters). You'll also see a slight decrease in image contrast, which is usually undesirable. This gives the image an artificial look.
Here's how I would handle this problem (a MATLAB sketch of the whole pipeline follows the steps):
Detect the paper texture pattern:
Apply Gaussian blur to the image (use a large kernel to make sure that all the paper texture information is destroyed)
Calculate the image difference between the blurred and original images
Apply Gaussian blur to the difference image (use a small 3x3 kernel)
Threshold the above pattern using an empirically-determined threshold. This yields a binary image that can be used as a mask.
Use median filtering (as mentioned by #Jonas) to replace only the parts of the image that correspond to the paper pattern.
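A minimal sketch of the pipeline above (the kernel sizes and the threshold are the empirical parts you would tune per image; the file name is a placeholder):

img   = im2double(rgb2gray(imread('photo.jpg')));   % hypothetical input
base  = imgaussfilt(img, 8);          % 1. large-kernel blur destroys the texture
diffI = img - base;                   % 2. difference isolates the pattern
diffI = imgaussfilt(diffI, 0.5);      % 3. small (~3x3) blur evens it out
mask  = abs(diffI) > 0.05;            % 4. empirical threshold -> binary mask

% 5. median-filter the whole image, but copy the result back only where
%    the mask marks the paper pattern.
med       = medfilt2(img, [15 15]);
out       = img;
out(mask) = med(mask);
imshow(out)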
Paper texture pattern (before thresholding):
You want as little actual image information as possible to be present in the above image. You'll see that you can very faintly make out the edge of the face (this isn't good, but it's the best I had time for). You also want this paper texture image to be as even as possible, so that thresholding gives equal results across the image. Again, the right-hand side of the image above is slightly darker, meaning that thresholding it well will be difficult.
Final image:
The result isn't perfect, but it has completely removed the highly-visible paper texture pattern while preserving more high-frequency content than the simpler filtering approaches.
EDIT
The filled-in areas are typically plain-coloured and thus stand out a bit if you look at the image very closely. You could also try adding some low-strength zero-mean Gaussian noise to the filled-in areas to make them look more realistic. You'd have to pick the noise variance to match the background; determining it empirically may be good enough.
Here's the processed image with the noise added:
Note that the parts where the paper pattern was removed are more difficult to see because the added Gaussian noise is masking them. I used the same Gaussian distribution for the entire image but if you want to be more sophisticated you can use different distributions for the face, background, etc.
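Continuing from the sketch above (reusing out and mask), a minimal way to add the noise only where the pattern was filled in; the standard deviation is an empirical guess:

sigma       = 0.02;                   % empirical noise standard deviation
noise       = sigma * randn(size(out));
noisy       = out;
noisy(mask) = out(mask) + noise(mask);
noisy       = min(max(noisy, 0), 1);  % clip back to [0, 1]
imshow(noisy)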
A median filter can help you a bit:
img = imread('http://i.stack.imgur.com/JzJMS.jpg');
% convert RGB to grayscale
img = rgb2gray(img);
% apply a 15x15 median filter
fimg = medfilt2(img, [15 15]);
% show the result, scaled to the display range
imshow(fimg, [])
Note that you may want to pad the image first to avoid edge effects.
EDIT: A smaller filter kernel than [15 15] will preserve image texture better, but will leave more visible traces of the filtering.
Well, I have tried out a different approach: anisotropic diffusion using the second Perona-Malik conduction coefficient, which favours wider regions.
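A minimal sketch of that idea, assuming R2018a+ for imdiffusefilt (its 'quadratic' conduction method is the second Perona-Malik coefficient; the iteration count and file name are illustrative):

img = im2double(rgb2gray(imread('photo.jpg')));   % hypothetical input
out = imdiffusefilt(img, 'ConductionMethod', 'quadratic', ...
                    'NumberOfIterations', 15);
imshow(out)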
Here is the output I got:
From what I can see in the picture, the noise has a relatively high frequency compared to the image itself, so applying a low-pass filter should work. Have a look at the power spectrum, abs(fft2(...)), to determine the cutoff frequency.
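A minimal sketch of that, with a Gaussian low-pass applied in the frequency domain (the cutoff sigma is an empirical choice; the file name is a placeholder):

img = im2double(rgb2gray(imread('photo.jpg')));   % hypothetical input
F   = fftshift(fft2(img));
imshow(log(1 + abs(F)), [])           % inspect the spectrum for the noise band

[h, w]  = size(img);
[U, V]  = meshgrid(1:w, 1:h);
D2      = (U - w/2 - 1).^2 + (V - h/2 - 1).^2;   % squared distance from DC
sigma   = 60;                         % cutoff, in frequency-domain pixels
lowpass = exp(-D2 / (2*sigma^2));     % Gaussian low-pass transfer function
out     = real(ifft2(ifftshift(F .* lowpass)));
imshow(out, [])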