How to improve image quality in MATLAB

I'm building an Optical Character Recognition (OCR) system.
So far the system is able to identify licence plates of good quality, without any noise.
At the next level, I want it to be able to identify licence plates of poor quality, degraded for various reasons.
For example, let's look at the following plate:
As you can see, the numbers do not appear clearly, because of light reflections or something else.
My question: how can I improve the image quality, so that when I convert to a binary image the numbers do not fade away?
Thanks in advance.

We can try to correct for the lighting effect by fitting a linear plane over the image intensities, which approximates the average brightness level across the image. By subtracting this shading plane from the original image, we can attempt to normalize the lighting conditions across the image.
For color RGB images, simply repeat the process on each channel separately, or even apply it in a different colorspace (HSV, L*a*b*, etc.).
Here is a sample implementation:
function img = correctLighting(img, method)
    if nargin < 2, method = 'rgb'; end
    switch lower(method)
        case 'rgb'
            %# process R,G,B channels separately
            for i = 1:size(img,3)
                img(:,:,i) = LinearShading( img(:,:,i) );
            end
        case 'hsv'
            %# process intensity component of HSV, then convert back to RGB
            HSV = rgb2hsv(img);
            HSV(:,:,3) = LinearShading( HSV(:,:,3) );
            img = hsv2rgb(HSV);
        case 'lab'
            %# process luminosity layer of L*a*b*, then convert back to RGB
            LAB = applycform(img, makecform('srgb2lab'));
            LAB(:,:,1) = LinearShading( LAB(:,:,1) ./ 100 ) * 100;
            img = applycform(LAB, makecform('lab2srgb'));
    end
end

function I = LinearShading(I)
    %# create X-/Y-coordinate values for each pixel
    [h,w] = size(I);
    [X,Y] = meshgrid(1:w, 1:h);

    %# fit a linear plane over the 3D points [X Y Z], where Z is the pixel intensity
    coeff = [X(:) Y(:) ones(w*h,1)] \ I(:);

    %# compute the shading plane
    shading = coeff(1).*X + coeff(2).*Y + coeff(3);

    %# subtract the shading from the image
    I = I - shading;

    %# normalize to the entire [0,1] range
    I = ( I - min(I(:)) ) ./ range(I(:)); %# note: RANGE requires the Statistics Toolbox; (max(I(:))-min(I(:))) works too
end
Now let's test it on the given image:
img = im2double( imread('http://i.stack.imgur.com/JmHKJ.jpg') );
subplot(411), imshow(img)
subplot(412), imshow( correctLighting(img,'rgb') )
subplot(413), imshow( correctLighting(img,'hsv') )
subplot(414), imshow( correctLighting(img,'lab') )
The difference is subtle, but it might improve the results of further image processing and the OCR task.
EDIT: Here are some results I obtained by applying other contrast-enhancement techniques (IMADJUST, HISTEQ, ADAPTHISTEQ) on the different colorspaces in the same manner as above:
Remember that you have to fine-tune the parameters to fit your image...
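For reference, here is a minimal sketch of how those three functions might be applied to the V channel of HSV in the same spirit (my own illustration; all parameters are left at their defaults):
img = im2double( imread('http://i.stack.imgur.com/JmHKJ.jpg') );
HSV = rgb2hsv(img);
V = HSV(:,:,3);
subplot(311), imshow( hsv2rgb(cat(3, HSV(:,:,1), HSV(:,:,2), imadjust(V))) )
subplot(312), imshow( hsv2rgb(cat(3, HSV(:,:,1), HSV(:,:,2), histeq(V))) )
subplot(313), imshow( hsv2rgb(cat(3, HSV(:,:,1), HSV(:,:,2), adapthisteq(V))) )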

It looks like your question has been more or less answered already (see d00b's comment); however, here are a few basic image-processing tips that might help.
First, you could try a simple imadjust. This simply maps the pixel intensities to a "better" range of values, which often increases the contrast (making the image easier to view/read). I have had a lot of success with it in my work. It is easy to use, too! I think it's worth a shot.
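For example, a minimal sketch (the file name is a placeholder for your plate image):
img = im2double( rgb2gray(imread('plate.jpg')) ); % hypothetical input image
adjusted = imadjust(img);                         % saturate the bottom/top 1% and stretch to [0,1]
imshowpair(img, adjusted, 'montage')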
Also, this looks promising if you simply want a higher resolution image.
Enjoy the "pleasure" of image-processing in MATLAB!
Good luck,
tylerthemiler
P.S. If you are flattening the image to binary, though, you are most likely ruining the image to start with, so avoid that if you can!

As you only want to find digits (of which there are only 10), you can use cross-correlation.
For this you would Fourier transform the picture of the plate. You would also Fourier transform a pattern you want to match, e.g. a good representation of a picture of the digit 1. Then you multiply the image transform by the complex conjugate of the pattern transform in Fourier space and inverse Fourier transform the result.
In the final cross-correlation you will see pronounced peaks where the pattern overlaps nicely with your image.
You do this 10 times, once per digit, and you know where each digit is. Note that you must correct the tilt before you do the cross-correlation.
This method has the advantage that you don't have to threshold your image.
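A minimal sketch of this FFT-based matching (the file names are hypothetical, and the template is zero-padded to the plate size):
plate = im2double( rgb2gray(imread('plate.png')) );  % hypothetical plate image
tmpl  = im2double( rgb2gray(imread('digit1.png')) ); % hypothetical template of the digit 1
padded = zeros(size(plate));
padded(1:size(tmpl,1), 1:size(tmpl,2)) = tmpl - mean(tmpl(:)); % zero-mean template
xc = real( ifft2( fft2(plate) .* conj(fft2(padded)) ) );       % circular cross-correlation
[~, idx] = max(xc(:));
[row, col] = ind2sub(size(xc), idx)  % top-left corner of the best match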
There are certainly much more sophisticated algorithms in the literature for reading number plates. One could, for example, use Bayes' theorem to estimate which digit is most likely to occur (this helps a lot if you already have a database of possible numbers).

Related

Upscale watershed output to match original image size

Introduction
Background: I am segmenting images using the watershed algorithm in MATLAB. Due to memory and time constraints, I prefer to perform this segmentation on subsampled images, let's say with a resize factor of 0.45.
The problem: I can't properly re-scale the output of the segmentation to the original image scale, both for visualization purposes and for other post-processing steps.
Minimal Working Example
For example, I have this image:
I run this minimal script and I get a watershed segmentation output L, which is a label image where each connected component is assigned a natural number and the borders between the connected components are zero-valued:
im_orig = imread('kitty.jpg'); % Load image [530x530]
im_res = imresize(im_orig, 0.45); % Resize image [239x239]
im_res = rgb2gray(im_res); % Convert to grayscale
im_blur = imgaussfilt(im_res, 5); % Gaussian filtering
L = watershed(im_blur); % Watershed algorithm
Now I have L, which has the same dimensions as im_res. How can I use the result stored in L to actually segment the original image im_orig?
Wrong solution
The first approach I tried was to resize L to the original scale using imresize:
L_big = imresize(L, [size(im_orig,1), size(im_orig,2)]); % Upsample L
Unfortunately, the upsampling of L produces a series of unwanted artifacts. In particular, it loses some of the fundamental zeros that represent the boundaries between the image segments. Here is what I mean:
figure; imagesc(imfuse(im_res, L == 0)); axis auto equal;
figure; imagesc(imfuse(im_orig, L_big == 0)); axis auto equal;
I know that this is due to the blurring caused by the upscaling process, but for now I can't think of anything else that could succeed.
The only other approach I thought of involves using mathematical morphology to "enlarge" the boundaries of the resized image and then upsample, but this would still lead to some unwanted artifacts.
TL;DR (or recap)
Is there a way to perform watershed on a downscaled image in MATLAB and then upscale the result to the original image, keeping the crisp region boundaries output by the algorithm? Or is what I am looking for a completely absurd thing to ask?
If you only need the watershed segment borders after upsizing the image, then just make this little change:
L_big = ~imresize(L==0, [size(im_orig,1), size(im_orig,2)]); % Upsample L
and here are the results:
You can use nearest-neighbor interpolation when resizing:
L_big = imresize(L, [size(im_orig,1), size(im_orig,2)],'nearest'); % Upsample L
Normally when we resize images we start from the destination, iterate over x and y, and find the best matching pixel in the source. Here you want to do the reverse: iterate over the source in x and y and write to the destination buffer, with 0 taking priority (so initialise to 0xFF, then don't overwrite any zeroes with other values).
There's unlikely to be a function that does this in the toolbox; you'll have to roll your own.
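A rough sketch of that idea, assuming L is the label image from watershed and im_orig the full-size image (the block mapping and the -1 sentinel are my own choices):
[hS, wS] = size(L);
hD = size(im_orig, 1);  wD = size(im_orig, 2);
Ld = double(L);
L_big = -ones(hD, wD);            % -1 marks "not yet written"
for y = 1:hS
    for x = 1:wS
        % destination block covered by source pixel (x,y)
        ys = floor((y-1)*hD/hS)+1 : floor(y*hD/hS);
        xs = floor((x-1)*wD/wS)+1 : floor(x*wD/wS);
        blk = L_big(ys, xs);
        blk(blk ~= 0) = Ld(y,x);  % a zero, once written, is never overwritten
        L_big(ys, xs) = blk;
    end
end
figure; imagesc(imfuse(im_orig, L_big == 0)); axis auto equal;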

plot polar grey values in matrix without interpolating every for loop

I have a matrix with grey values between 0 and 1. For every entry in the matrix, there are certain polar coordinates that indicate the position of the grey values. I already have the Theta and Rho values (polar), each in a separate 512×960 matrix, and grayscale values (in a matrix called C) for every Theta and Rho combination. I have the same for X and Y, as I just use pol2cart for the transformation. The problem is that I cannot directly plot these values, as they do not yet fit in the 'bins' of the new matrix.
What I want: to put the grey values in a square matrix of size 1024×1024. I cannot do this directly, because the polar coordinates fall in between the grid points of this matrix. Therefore, we now use interpolation, but this is extremely time consuming and has to be done separately for every dataset, even though the transformation from the original matrices to the final one will always be the same. Therefore, I'd like to solve this transformation once (either analytically or numerically) and use a matrix multiplication or something similar to apply it efficiently in every cycle of the code.
One example of what one of these transformations could look like:
The zeros in the first matrix are the grid, and the value 1 (in between the grid points) is the grey value that falls between four grid points; I'd like to transform this to the second matrix (don't mind the visual spacing between the points).
For every dataset, I have hundreds of these matrices, so I would like to make the code more efficient.
Background: I'm using TriScatteredInterp for the interpolation now. We tried scatteredInterpolant as well, but that is slower. I also posted a related question, but decided to split off the two possible solutions, because the solution I ask for here is also applicable to non-MATLAB code, will probably be faster, and makes for a smoother execution of the code (no continual popping up of figures).
Using the image processing toolbox
Images work a bit differently than the data you have. However, it's fairly straightforward to map one representation into the other.
There is only one problem I see: wrapping. Obviously, θ = 2π is the same as θ = 0, but MATLAB does not know that. AFAIK, there is no easy way to tell MATLAB that.
Why does this matter? Well, simply put, inter-pixel interpolation uses information from the nearest N neighbors to find intermediate colors, with N depending on the interpolation kernel. Doing this somewhere in the middle of the image poses no problem, but at the edges MATLAB has to know that the left edge equals the right edge. This is not standard image processing, and I'm not aware of any function that is capable of it.
Implementation
Now, disregarding the wrapping problem, this is one way to do it:
function resize_polar()
    %% ORIGINAL IMAGE
    % =====================================================================

    % Some random greyscale data
    C = double(rgb2gray(imread('stars.png')))/255;

    % Your current size, and desired size
    sz_x = size(C,2);  new_sz_x = 1024;
    sz_y = size(C,1);  new_sz_y = 1024;

    % Ranges for theta and rho;
    % replace with your actual values
    rho_start = 0;   theta_start = 0;
    rho_end   = 10;  theta_end   = 2*pi;

    % Generate a regularly spaced grid
    theta = linspace(theta_start, theta_end, sz_x);
    rho   = linspace(rho_start, rho_end, sz_y);
    [theta, rho] = meshgrid(theta, rho);

    % Make plot of the original data
    plot_polar(theta, rho, C, 'Original image');

    % Resize the data
    [theta, rho, C] = resize_polar_data(theta, rho, C, [new_sz_y new_sz_x]);

    % Make plot of the rescaled data
    plot_polar(theta, rho, C, 'Rescaled image');
end

function [theta, rho, data] = resize_polar_data(theta, rho, data, new_dims)
    % Create a fake RGB image cube
    IMG = cat(3, theta, rho, data);

    % Rescale as if theta and rho were RG color data in the RGB image cube
    IMG = imresize(IMG, new_dims, 'nearest');

    % Split up the data again
    theta = IMG(:,:,1);
    rho   = IMG(:,:,2);
    data  = IMG(:,:,3);
end

function plot_polar(theta, rho, data, label)
    [X, Y] = pol2cart(theta, rho);
    figure('renderer', 'opengl')
    clf, hold on
    surf(X, Y, zeros(size(X)), data, 'edgecolor', 'none');
    colormap gray
    title(label);
end
The images used and plotted:
Le awesomely-drawn 512×960 PNG image
Now, the two look the same (I couldn't really come up with a better-suited image), so you'll have to believe me that the 512×960 image has indeed been rescaled to 1024×1024, with nearest-neighbor interpolation.
Here are some timings for the actual imresize() operation for some simple kernels:
nearest : 0.008511 seconds.
bilinear: 0.019651 seconds.
bicubic : 0.025390 seconds. <-- default kernel
But this depends strongly on your hardware; note that imresize only offloads work to the GPU if you pass it a gpuArray, so timings will vary from machine to machine.
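If you want to reproduce such timings yourself, here is a small sketch (timeit gives more stable numbers than tic/toc; IMG is assumed to be the 512×960×3 image cube from resize_polar_data above):
timeit( @() imresize(IMG, [1024 1024], 'nearest') )
timeit( @() imresize(IMG, [1024 1024], 'bilinear') )
timeit( @() imresize(IMG, [1024 1024], 'bicubic') )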
Wrapping
If the wrapping problem is really important to you, you can modify the function above to do the following:
first, rescale the image with imresize() like before
horizontally concatenate the second half of the grayscale data and the first half; that is, swap the two halves so that the left and right edges (0 and 2π) touch in the middle,
rescale this intermediate image with imresize(),
extract the central vertical strip of the rescaled intermediate image,
split that up into two equal-width strips,
and replace the edge strips of the output image with the two strips you just created.
Now, this is kind of a brute-force approach: you are rescaling the image twice, and most of the pixels from the second rescale will be discarded. If performance is a problem, you can of course apply the second rescale only to the central strip of that intermediate image. But, well, that will be a bit more complicated.
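A rough sketch of those steps (my own illustration; it assumes theta varies along the columns of C, and the strip half-width w is a tunable choice):
C_big = imresize(C, [new_sz_y new_sz_x]);           % step 1: plain rescale
half = floor(size(C,2)/2);
C_swap = [C(:, half+1:end), C(:, 1:half)];          % step 2: the seam moves to the middle
C_swap_big = imresize(C_swap, [new_sz_y new_sz_x]); % step 3: rescale the intermediate image
w = 8;                                              % strip half-width (tune as needed)
mid = floor(new_sz_x/2);
strip = C_swap_big(:, mid-w+1 : mid+w);             % steps 4-5: central strip, two halves
C_big(:, end-w+1:end) = strip(:, 1:w);              % step 6: fix the right edge
C_big(:, 1:w)         = strip(:, w+1:end);          %         fix the left edge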

How do I denoise a simple grayscale image

Here is the original image with better visibility: we can see a lot of noise around the main skeleton, the circle thing, which I want to remove without affecting the main skeleton. I'm not sure if it is called noise.
I'm doing this to deblur an image; this image is the motion blur kernel, which describes the camera motion while the camera captured the image.
P.S.: this image is the kernel for one case; what I need is a general method. Thank you for your help.
There is a paper in CVPR 2014 called "Separable Kernel for Image Deblurring" which talks about this. I want to extract the main skeleton of the image to make the kernel more robust. Sorry for the explanation, as my English is not good.
And here is the true grayscale image:
I want it to be like this:
How can I do it using Matlab?
here are some other kernel images:
As @rayryeng explained well, median filtering is the best option to clean up noise in an image, which I learned when I studied image restoration. However, in your case, it seems to me that what you need is not to clean up noise in the image; more likely, you want to eliminate the sparks in it.
Simply put, I applied single thresholding to your noisy image to eliminate the sparks.
Try this:
desIm = imread('http://i.stack.imgur.com/jyYUx.png'); % // Your expected (desired) image
nIm = imread('http://i.stack.imgur.com/pXO0p.png'); % // Your original image
nImgray = rgb2gray(nIm);
T = graythresh(nImgray)*255; % // Threshold value
S = size(nImgray);
R = zeros(S) + 5;  % // Your expected image looks bluish, so I try to match it
G = zeros(S) + 3;  % // Your expected image looks bluish, so I try to match it
B = zeros(S) + 20; % // Your expected image looks bluish, so I try to match it
logInd = nImgray > T;        % // Logical index of pixels excluding the spark component
R(logInd) = nImgray(logInd); % // Get original pixels without sparks
G(logInd) = nImgray(logInd); % // Get original pixels without sparks
B(logInd) = nImgray(logInd); % // Get original pixels without sparks
rgbImage = cat(3, R, G, B);  % // Concatenate red, green and blue channels
figure,
subplot(1, 3, 1)
imshow(nIm); title('Original Image');
subplot(1, 3, 2)
imshow(desIm); title('Desired Image');
subplot(1, 3, 3)
imshow(uint8(rgbImage)); title('Restoration Result');
What I got is:
The only thing I can see that is different between the two images is that there is some quantization noise / error around the perimeter of the object. This resembles salt and pepper noise and the best way to remove that noise is to use median filtering. The median filter basically analyzes local overlapping pixel neighbourhoods in your image, sorts the intensities and chooses the median value as the output for each pixel neighbourhood. Salt and pepper noise corrupts image pixels by randomly selecting pixels and setting their intensities to either black (pepper) or white (salt). By employing the median filter, sorting the intensities puts these noisy pixels at the lower and higher ends and by choosing the median, you would get the best intensity that could have possibly been there.
To do median filtering in MATLAB, use the medfilt2 function. This is assuming you have the Image Processing Toolbox installed. If you don't, then what I am proposing won't work. Assuming that you do have it, you would call it in the following way:
out = medfilt2(im, [M N]);
im would be the image loaded in imread and M and N are the rows and columns of the size of the pixel neighbourhood you want to analyze. By choosing a 7 x 7 pixel neighbourhood (i.e. M = N = 7), and reading your image directly from StackOverflow, this is the result I get:
Compare this image with your original one:
If you also look at your desired output, this more or less mimics what you want.
Also, the code I used was the following... only three lines!
im = rgb2gray(imread('http://i.stack.imgur.com/pXO0p.png'));
out = medfilt2(im, [7 7]);
imshow(out);
In the first line, I had to convert your image to grayscale using rgb2gray, because the original image was in fact RGB. The second line performs median filtering on your image with a 7 x 7 neighbourhood, and the final line shows the image in a separate window with imshow.
Want to implement median filtering yourself?
If you want to get an idea of how to actually write a median filtering algorithm yourself, check out my recent post here. A question poser asked how to implement the filtering mechanism without using medfilt2, and I provided an answer:
Matlab Median Filter Code
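For a flavour of what such an implementation looks like, here is a naive sketch (my own, not the linked answer's code; it assumes odd M and N):
function out = naiveMedianFilter(im, M, N)
    % Pad with replicated borders so edge pixels have full neighbourhoods
    im = double(im);
    padded = padarray(im, [(M-1)/2, (N-1)/2], 'replicate');
    out = zeros(size(im));
    for r = 1:size(im,1)
        for c = 1:size(im,2)
            block = padded(r:r+M-1, c:c+N-1); % M x N neighbourhood around (r,c)
            out(r,c) = median(block(:));      % the median becomes the output pixel
        end
    end
end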
Hope this helps.
Good luck!

abs function for fft2 is not working in MATLAB

I am trying to plot the FFT magnitude of an image using the following code in the command window:
a= imread('lena','png')
figure,imshow(a)
ffta=fft2(a)
fftshift1=fftshift(ffta)
magnitude=abs(fftshift1)
figure,imshow(magnitude),title('magnitude')
However, the figure with the title 'magnitude' shows nothing, even though MATLAB shows that it has computed abs() on fftshift1. The figure is still empty, and there is no error. Also, why do we need to compute fftshift before the magnitude?
This is probably happening because of the following:
When you take the 2D FFT of your image, it produces a complex, double-precision result, even though your image is an unsigned 8-bit integer. MATLAB assumes that double-formatted images have their intensities / colours between [0,1]. By doing imshow on just the magnitude itself, you will most likely get an entirely white image, because I suspect a good majority of the FFT coefficients are bigger than 1. This is probably the blank figure that you're referring to.
Even if you rescale the magnitude so that it is between [0,1], the DC coefficient will be so large that if you try to display the image, you'll only see a white dot in the middle while every other component will be black.
As a side note, the reason why you are doing fftshift is because by default, MATLAB assumes that the origin of the FFT for 2D is located at the top left corner. Doing fftshift will allow the origin to be in the middle, which is what we would intuitively expect of the 2D FFT.
In order to remedy this situation, I would suggest doing a log transformation on the FFT coefficients so you can visually see the results. I would also normalize the coefficients once you log transform them so that they go between [0,1]. Do not actually modify the FFT coefficients themselves, as this would be improper. You need to leave them the way they are, because if you intend to do any processing on the spectrum, you would start by working on the raw spectrum. Doing filter design or anything of that sort will require the raw coefficients, as the final filter will depend on them untouched. Unless you actually want a log operation as part of your pipeline, leave these coefficients as is. This visualization can be done with the following MATLAB code:
imshow(log(1 + magnitude), []);
I'm going to show an example using the code you provided, but with another image, as you haven't provided one here. I'm going to use the cameraman.tif image that's part of the MATLAB system path. As such:
a= imread('cameraman.tif');
figure,imshow(a);
ffta=fft2(a);
fftshift1=fftshift(ffta);
magnitude=abs(fftshift1);
figure;
imshow(log(1 + magnitude), []); %// NEW
title('magnitude')
This is what I get:
As you can see, the magnitude is displayed more nicely. Also, the DC coefficient is in the middle of the spectrum thanks to fftshift.
If you want to apply this for colour images, fft2 should still work. It will apply the 2D fft to each colour plane by itself. However, if you want this to work, you'll not only need to take the log transform, but you'll also need to normalize each plane separately. You have to do this because if we tried doing the imshow command we did earlier, it would normalize it so that the greatest value in the spectrum of the colour image gets normalized to 1. This will inevitably produce that same small dot effect that we talked about earlier.
Let's try a colour image that's built-in to MATLAB: onion.png. We will use the same code that you used above, but we need an additional step of normalizing each colour plane by itself. As such:
a = imread('onion.png');
figure,imshow(a);
ffta=fft2(a);
fftshift1=fftshift(ffta);
magnitude=abs(fftshift1);
logMag = log(1 + magnitude); %// New
for c = 1 : size(a,3) %// New - normalize each plane
logMag(:,:,c) = mat2gray(logMag(:,:,c));
end
figure; imshow(logMag); title('magnitude');
Note that I had to loop through each colour plane and use mat2gray to normalize each plane to [0,1]. Also, I had to create a new variable called logMag because I have to modify each colour plane individually, and you can't do this with a single imshow call.
With this, these are the results I get:
What's different with this spectrum is that we are applying the FFT to each colour plane separately, and so you'll see a whole bunch of colour spatters because for each location in this image, we are visualizing a linear combination of components from the red, green and blue channels. For each location, we have a value in between [0,1] for each colour plane, and the combination of these give you a colour at this location. You could say that darker colours are for locations that have a relatively low magnitude for at least one of the colour channels, while locations that are brighter have a relatively high magnitude for at least one of the colour channels.
Hope this helps!
Can't be sure about your version of "lena.png", but if it's a color RGB picture, you need to convert it first to grayscale, or at least select which RGB plane you want to examine.
I.e., the following works for http://optipng.sourceforge.net/pngtech/img/lena.png (color png):
clear; close all;
a = imread('lena','png');
ag = rgb2gray(a);
ag = im2double(ag);
figure(1);
imshow(ag);
F = fftshift( fft2(ag) ); % also try fft2(ag, N, N) where N < image size. Say N=128.
magnitude=abs(F);
figure(2);
imshow(magnitude);

median filter vs. pseudomedian filter in matlab

Does anyone know why the pseudomedian filter is faster than the median filter?
I used medfilt2 for the median filtering, and I implemented my own pseudomedian filter, which is:
b = strel('square',3);
psmedIm = (0.5*imclose(noisedIm,b)) + (0.5*imopen(noisedIm,b));
where b is a flat square structuring element and noisedIm is an image corrupted by salt and pepper noise.
Also, I don't understand why the image generated using the pseudomedian filter isn't denoised.
Thank you!
In terms of your speed query, I'd propose that your pseudomedian filter is faster because it doesn't involve sorting. The true median filter requires that you sort elements and find the central value, which takes a fair bit of time.
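A quick way to check this on your own machine (a sketch; the noise density is arbitrary):
img = imnoise(imread('cameraman.tif'), 'salt & pepper', 0.01);
b = strel('square', 3);
tic; mTrue   = medfilt2(img, [3 3]);                     toc % true median filter
tic; mPseudo = 0.5*imclose(img, b) + 0.5*imopen(img, b); toc % pseudomedian filter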
The reason why your salt and pepper noise isn't removed is that you're always maintaining its effects, because you always use both the min and max values inside the structuring element when you apply imclose and imopen. Because you're just weighting each by half, if there's a white pixel, the 0.5 factor contribution from the max operation will bump the pixel value up, and vice versa for black pixels.
EDIT: Here's a quick demo I did that helps your pseudomedian behave a little more nicely with salt and pepper noise. The big difference is that it tries to use the 'best parts' of the opened and closed images rather than making them fight it out. I think it works quite well for eliminating the salt and pepper noise you used as an example.
img = imread('cameraman.tif');
img = imnoise(img, 'salt & pepper', 0.01); % add salt and pepper noise
subplot(2,2,1); imshow(img);
b = strel('square', 3);
closed = double(imclose(img, b)); % closing removes the black (pepper) noise
opened = double(imopen(img, b));  % opening removes the white (salt) noise
subplot(2,2,2); imshow(closed,[]);
subplot(2,2,3); imshow(opened,[]);
img = double(img);
img = img + (closed - img) + (opened - img); % apply both corrections at once
subplot(2,2,4); imshow(img,[]);
EDIT: Here's the result of running the code:
EDIT 2: Here's the underlying theory (it's not overly mathematical and is based entirely on intuition!)
Salt and pepper noise exists as pure white and pure black pixels scattered randomly. The idea is that the 'closed' and 'opened' images will each eliminate one of the two halves, either the white salt noise or the black pepper noise, and the pixel value in that location should be corrected by one of the two operations. We just don't know which one. But we do know that one of 'closed' and 'opened' is 'correct' for that pixel, because the corresponding operation should have effectively 'median-ed' that pixel correctly. Since the 'incorrect' one leaves exactly the same value at that pixel (white or black) as the original image, subtracting its value doesn't affect the original image. Only the 'correct' one (which differs by exactly the amount required to return the pixel to its supposedly correct value) changes anything, so we adjust the image at that pixel by the corresponding amount. Thus, taking the noisy original image and adding both differences to it gives us something with much of the noise reduced.
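A tiny worked example of that intuition (the numbers are illustrative only):
img    = 255;  % a salt (white) pixel in the noisy image
closed = 255;  % closing keeps the salt pixel: the 'incorrect' one, difference 0
opened = 120;  % opening removes it: the 'correct' local value
img + (closed - img) + (opened - img)  % = 120, so the salt pixel is fixed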
Salt and pepper noise exists as pure white and pure black pixels scattered randomly. The idea is that the 'closed' and 'opened' images will each eliminate one of the halves -- either the white salt noise or the black pepper noise -- and the pixel value in that location should be corrected by one of the operations. We just don't know which one. So we know that one of the images out of both 'closed' and 'open' is 'correct' for that pixel because the operation should have effectively 'median-ed' that pixel correctly. Since the one that is 'incorrect' should have exactly the same value at that pixel (white or black) as the original image, subtracting its value doesn't affect the original image. Only the 'correct' one (which differs by the exact amount required to return the image to its supposedly correct value) is right, so we adjust the image at that pixel by the corresponding amount. Thus, taking the noisy original image and adding to it both the differences gives us something with much of the noise reduced.