I want to develop a MATLAB program that can extract and recognize a vehicle's plate number using the template matching method.
Here is my code:
function letters = PengenalanPlatMobil(citra)
%load NewTemplates
%global NewTemplates
citra=imresize(citra,[400 NaN]); % Resizing the image keeping aspect ratio same.
citra_bw=rgb2gray(citra); % Converting the RGB (color) image to gray (intensity).
citra_filt=medfilt2(citra_bw,[3 3]); % Median filtering to remove noise.
se=strel('disk',1);
citra_dilasi=imdilate(citra_filt,se); % Dilating the gray image with the structural element.
citra_eroding=imerode(citra_filt,se); % Eroding the gray image with structural element.
citra_edge_enhacement=imsubtract(citra_dilasi,citra_eroding); % Morphological Gradient for edges enhancement.
imshow(citra_edge_enhacement);
citra_edge_enhacement_double=mat2gray(double(citra_edge_enhacement)); % Converting the class to double.
citra_double_konv=conv2(citra_edge_enhacement_double,[1 1;1 1]); % Convolution of the double image with a 2x2 summing kernel.
citra_intens=imadjust(citra_double_konv,[0.5 0.7],[0 1],0.1); % Remap intensities in [0.5 0.7] to [0 1] (gamma 0.1).
citra_logic=logical(citra_intens); % Conversion of the class from double to binary.
% Eliminating the possible horizontal lines from the output image of regiongrow
% that could be edges of license plate.
citra_line_delete=imsubtract(citra_logic, (imerode(citra_logic,strel('line',50,0))));
% Filling all the regions of the image.
citra_fill=imfill(citra_line_delete,'holes');
% Thinning the image to ensure character isolation.
citra_thinning_eroding=imerode((bwmorph(citra_fill,'thin',1)),(strel('line',3,90)));
%Selecting all the regions that are of pixel area more than 100.
citra_final=bwareaopen(citra_thinning_eroding,125);
[labelled jml] = bwlabel(citra_final);
% Uncomment to make compatible with the previous versions of MATLAB®
% Two properties 'BoundingBox' and binary 'Image' corresponding to these
% Bounding boxes are acquired.
Iprops=regionprops(labelled,'BoundingBox','Image');
%%% OCR STEP
[letter{1:jml}]=deal([]);
[gambar{1:jml}]=deal([]);
for ii=1:jml
gambar= Iprops(ii).Image;
letter{ii}=readLetter(gambar);
% imshow(gambar);
%
end
end
but the recognized characters are always wrong, and too many regions are detected, or sometimes too few.
How can I fix it?
Here are the images I am testing with:
For number plate extraction you have to follow this algorithm (I used this in my project):
1. Find the horizontal histogram variation (using imhist).
2. Find the part of the histogram with the maximum variation and get the x1 and x2 values.
3. Crop the image horizontally using the values of x1 and x2.
4. Repeat the same process for vertical cropping.
Explanation:
In order to remove unnecessary information from the image, the algorithm only needs the edges of the image to work with. For edge detection we use a built-in MATLAB function, but first we convert the original image to grayscale.
This grayscale image is converted to a binary image by determining a threshold for the different intensities in the image. Only after binarization can the edge detection algorithm be used; here we used 'Roberts', which after extensive testing seemed the best for our application. Then, to determine the region of the license plate, we performed horizontal and vertical edge processing. First the horizontal histogram is calculated by traversing each column of the image. The algorithm starts with the second pixel from the top of each column of the image matrix; the difference between the second and first pixel is calculated, and if it exceeds a certain threshold, it is added to a running sum of differences. Traversal continues to the end of the column, giving the total sum of differences between neighbouring pixels, and at the end a matrix of the column-wise sums is created. The same process is carried out for the vertical histogram, except that rows are processed instead of columns.
After calculating the horizontal and vertical histograms, we compute a threshold value equal to 0.434 times the maximum horizontal histogram value. The next step is cropping the area of interest, i.e. the number plate area. We first crop the original image horizontally and then vertically. In horizontal cropping we process the image matrix column-wise and compare each horizontal histogram value with the predefined threshold. When a value rises above the threshold we mark it as a starting point for cropping, and we continue until the value drops below the threshold again, which marks the end point. This yields many candidate regions, so we store all start and end points in a matrix and compare the width of each region, where width is the difference between its start and end point. We then pick the start/end pair that spans the largest width and crop the image horizontally with it. The horizontally cropped image is then processed for vertical cropping, which uses the same threshold comparison, the only difference being that the image matrix is processed row-wise against the vertical histogram values. Again we obtain several sets of vertical start and end points, pick the pair that spans the largest height, and crop with it. After the vertical and horizontal cropping we have the exact number plate area of the original image in RGB format.
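The cropping steps above can be sketched roughly as follows. This is a minimal illustration on a synthetic edge image standing in for a real photo (all variable names are mine, and for simplicity the vertical histogram is computed on the full image rather than on the horizontally cropped one):

```matlab
% Synthetic binary "edge image": a dense checkerboard block stands in
% for the edge-rich plate region (rows 40:60, cols 80:150).
bw = false(100, 200);
bw(40:60, 80:150) = mod(bsxfun(@plus, (40:60)', 80:150), 2) == 0;

% Column-wise sum of differences between vertically adjacent pixels
% (the "horizontal histogram"), and the row-wise analogue.
colHist = sum(abs(diff(double(bw), 1, 1)), 1);
rowHist = sum(abs(diff(double(bw), 1, 2)), 2);

% Threshold at 0.434 times the maximum, as described above.
colMask = colHist > 0.434 * max(colHist);
rowMask = rowHist > 0.434 * max(rowHist);

% Pick the widest run of above-threshold columns...
d = diff([0, colMask, 0]);
starts = find(d == 1); ends = find(d == -1) - 1;
[~, k] = max(ends - starts);
x1 = starts(k); x2 = ends(k);

% ...and the tallest run of above-threshold rows, then crop both ways.
d2 = diff([0; rowMask(:); 0]);
rstarts = find(d2 == 1); rends = find(d2 == -1) - 1;
[~, k2] = max(rends - rstarts);
y1 = rstarts(k2); y2 = rends(k2);
plate = bw(y1:y2, x1:x2);
```

On this synthetic input the widest/tallest runs recover exactly the dense block, i.e. the stand-in for the plate area.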
For recognition, use template matching with correlation (corr2() in MATLAB).
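A minimal corr2-based matcher might look like this. The two "templates" here are toy synthetic shapes made up for illustration, not a real template set; in practice you would load your character templates from files and loop over all of them:

```matlab
% Two toy 42x24 binary "templates" standing in for real character templates
tpl{1} = false(42, 24); tpl{1}(:, 10:14) = true;   % vertical bar, call it 'I'
tpl{2} = false(42, 24); tpl{2}(20:24, :) = true;   % horizontal bar, call it '-'
labels = 'I-';

% A segmented character image to classify (a clean vertical bar here)
chr = false(50, 30); chr(5:46, 12:18) = true;

% Resize the character to the template size, then pick the template with
% the highest 2-D correlation coefficient.
chrR = imresize(double(chr), size(tpl{1}));
best = -Inf; match = '';
for k = 1:numel(tpl)
    r = corr2(chrR, double(tpl{k}));
    if r > best
        best = r; match = labels(k);
    end
end
```

The key point is that every character crop is resized to the common template size before corr2 is applied, since corr2 requires both inputs to have identical dimensions.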
I would change the loop following character detection to
[gambar{1:jml}]=deal([]);
for ii=1:jml
gambar{ii}= Iprops(ii).Image;
%letter{ii}=readLetter(gambar);
imshow(gambar{ii});
end
I think what you want to do at this point is either
(1) pick the ROI in advance, before applying character extraction and OCR,
or
(2) apply OCR to all of the characters from the entire image and then use proximity rules (or other rules) to identify the license plate number.
Edit:
If you run the following loop after character extraction you can get an idea what I mean by "proximity":
[xn, yn, ~] = size(citra); % citra is the original (RGB) image matrix
figure, hold on
[gambar{1:jml}]=deal([]);
for ii=1:jml
gambar{ii}= double(Iprops(ii).Image)*255;
bb=Iprops(ii).BoundingBox;
image([bb(1) bb(1)+bb(3)],[yn-bb(2) yn-bb(2)-bb(4)],gambar{ii});
end
Here is the image after edge detection:
and after character extraction (after running the loop above):
I have this image with white points on a dark background:
I want to group pixels that are close together into a single blob. In this image, that would mean there will be two blobs: one for the pixels at the top and one for the pixels at the bottom. Any pixels that are not close enough to these two blobs must be changed to the background color (a threshold must be specified to decide which pixels belong to a blob and which are too far away). How do I go about this? Is there any MATLAB function that can be used?
To group the dots, one can simply smooth the image enough to blur them together. Dots that are close together (relative to the blur kernel size) will be merged; dots that are further apart will not.
The best way to smooth the image is with a Gaussian filter. MATLAB implements this with the imgaussfilt function since R2015a. For older versions of MATLAB (or Octave, as I'm using here) you can use fspecial and imfilter instead, but be careful: fspecial makes it easy to create a kernel that is not actually a Gaussian kernel. This is why that method is now deprecated and the imgaussfilt function was created.
Here is some code that does this:
% Load image
img = imread('https://i.stack.imgur.com/NIcb9.png');
img = rgb2gray(img);
% Threshold to get dots
dots = img > 127; % exact threshold doesn't matter much, this image is nearly binary
% Group dots
% smooth = imgaussfilt(img,10); % This works for newer MATLABs
g = fspecial("gaussian",6*10+1,10);
smooth = imfilter(img,g,'replicate'); % I'm using Octave here, it doesn't yet implement imgaussfilt
% Find an appropriate threshold for dot density
regions = smooth > 80; % A smaller value makes for fewer isolated points
% Dots within regions
newDots = dots & regions;
To identify blobs that are within the same region, simply label the regions image, and multiply with the dots image:
% Label regions
regions = bwlabel(regions);
% Label dots within regions
newDots = regions .* dots;
% Display
imshow(label2rgb(newDots,'jet','k'))
I want to obtain an n*m matrix with an approximated "height" at each discrete point. The input is a picture (see link below) of the contour lines from a map; each contour line represents a 5 m increase or decrease in height.
My thoughts:
I imported the picture as a logical PNG into a matrix called A, which means that every contour line in the matrix is a connected strip of 1s and everything else is 0.
My initial thought was to start in the upper left corner of the matrix, set that height to zero, declare a new matrix height, and fill in height(:,1) by adding 5 m each time we meet a 1 in the A matrix. Knowing the whole first column, I would then, for each row, start from the left and add 5 m each time we meet a 1.
I quickly realized, however, that this won't work, since the algorithm has no way to know whether it should add or subtract height, i.e. whether we are running uphill or downhill.
If I could somehow approximate the gradient from the density of the contour lines, that would be great. It would still be ambiguous whether a slope is uphill or downhill, but I could then decide manually between those two cases.
Picture:
WORK IN PROGRESS
%% Read and binarize the image
I=imread('https://i.stack.imgur.com/pRkiY.jpg');
I=rgb2gray(I);
I=I>graythresh(I)*255;
%% Get skeleton, i.e. the lines!
sk=bwmorph(~I,'skel',Inf);
%% lines are too thin, dilate them
dilated=~imdilate(sk, strel('disk', 2, 4));
%% label the image!
test=bwlabel(dilated,8);
imshow(test,[]); colormap(plasma); % use colormap parula if you prefer.
Missing: label each adjacent area with a number +1 (or -1) relative to its neighbours (no idea how to do this yet).
Missing: Interpolate flat areas. This should be doable once the altitudes are known. One can set the pixels in the skeleton image to the altitudes and interpolate the rest using griddata, which will be slow, but still doable.
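The griddata step for the second missing item could look roughly like this. This is a hedged sketch on synthetic data: two flat "contour lines" with known altitudes stand in for the labelled skeleton pixels, which in the real problem would come from the labelling step above:

```matlab
% Synthetic stand-in: altitudes known only on two "contour lines"
alt = nan(40, 40);
alt(10, :) = 5;    % the 5 m contour
alt(30, :) = 10;   % the 10 m contour

% Interpolate every other pixel from the known contour pixels
[C, R] = meshgrid(1:size(alt, 2), 1:size(alt, 1));
known = ~isnan(alt);
heights = griddata(C(known), R(known), alt(known), C, R, 'linear');
% Pixels outside the convex hull of the known points remain NaN.
```

Between the two contours the linear method interpolates the altitude smoothly (e.g. the row halfway between them comes out at 7.5 m).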
Disclaimer: not full answer yet, feel free to edit or reuse the code in this answer to further it!
I want to find, per column, the first and last blue pixels, and then find the other boundary (the first and last blue pixel rows) inside the rows obtained before.
I made the alterations based on the previous answers, but now I get this error when it tries to find bif_first and bif_last: Subscripted assignment dimension mismatch. bif_first2(2,z)=find(dx(:,z)==1,2,'first');
What is wrong? Please see my code below:
rgbImage_blue=zeros(size(movingRegistered));
lumen_first=zeros(1,size(movingRegistered,2));
lumen_last=zeros(1,size(movingRegistered,2));
bif_first=zeros(1,size(movingRegistered,2));
bif_last=zeros(1,size(movingRegistered,2));
bif_last2=zeros(2,size(movingRegistered,2));
bif_first2=zeros(2,size(movingRegistered,2));
blue=cat(3,0,0,255);
ix=all(bsxfun(@eq,movingRegistered,blue),3); % logical array of where the blue pixels are
dx=[zeros(1,size(movingRegistered,2));diff(ix)]; % zero-padded so row/column counts match; row-wise difference for all columns
nTops=sum(dx==1); % the transitions to blue
nBots=sum(dx==-1); % ...and from blue; check they are consistent, otherwise something is wrong in the image
if(nTops~=nBots), error('Mismatch in Top/Bottom'),end
for z=1:1:size(movingRegistered,2);
if nTops(1,z)==2; bifurcation=false; lumen=true; % only two boundaries exist, no bifurcation
lumen_first(1,z)=find(ix(:,z)==1,1,'first');
lumen_last(1,z)=find(ix(:,z)==1,1,'last');
end
if nTops(1,z)>2;
bifurcation=true;
lumen_first(1,z)=find(ix(:,z)==1,1,'first');
lumen_last(1,z)=find(ix(:,z)==1,1,'last');
bif_first2(2,z)=find(dx(:,z)==1,2,'first');
bif_first(1,z)=bif_first2(2,z);
bif_last2(2,z)=find(dx(:,z)==1,2,'last');
bif_last(1,z)=bif_last2(2,z);
end
end
Your problem is that you are comparing an n*m*3 image with a 3*1 vector. This operation is not defined.
Use this code:
blue=cat(3,0,0,255);
ix=all(bsxfun(@eq,movingRegistered,blue),3);
Images in MATLAB use the third dimension for color; that's why I created blue as a 1*1*3 vector. This vector is compared to the image using bsxfun, which expands the vector to match the image size. The comparison checks each color channel individually, so all is used to combine the results over all three channels.
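A tiny self-contained demo of what that comparison does (the 2x2 image here is made up purely for illustration):

```matlab
% One pure-blue pixel in an otherwise black 2x2 RGB image
img = zeros(2, 2, 3, 'uint8');
img(1, 2, :) = [0, 0, 255];            % blue pixel at row 1, column 2

blue = cat(3, 0, 0, 255);              % a 1x1x3 color "pixel"
ix = all(bsxfun(@eq, img, blue), 3);   % per-channel compare, then AND over channels
% ix is a 2x2 logical matrix, true only at (1,2)
```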
I have 10 gray-scale images (2559*3105), taken from an X-ray reflectivity measurement. Each image except the first has two spots showing the intensity of the X-ray; the first image has one highest-intensity spot. From the second to the tenth image, the first spot is the same as in the first image, but the second one differs in location and intensity. I want to search for and crop these spots. The problem is that when I apply a condition that find()s the maximum-intensity point in the image, it always points to the spot that is common to all images.
Here's some basic image processing code that lets you select the indices of the spots:
%# read the image
im=rgb2gray(imread('a.jpg'));
%# select only relevant area
d=im(5:545,5:660);
%# set a threshold and filter
thres = (max([min(max(d,[],1)) min(max(d,[],2))])) ;
filt=fspecial('gaussian', 7,1);
% reduce noise threshold and smooth the image
d=medfilt2(d);
d=d.*uint8(d>thres);
d=conv2(double(d),filt,'same') ;
d=d.*(d>thres);
% find connected objects
L = bwlabel(d);
%# or also
CC = bwconncomp(d);
Both L and CC contain information about the indices of the two blobs, so you can now select only that part of the image using them.
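For example (on a synthetic binary image with made-up spot positions), you can use the 'BoundingBox' of each connected component to crop the spots out:

```matlab
% Synthetic binary image with two "spots"
bwimg = false(100, 100);
bwimg(20:30, 40:50) = true;    % an 11x11 spot
bwimg(60:75, 10:25) = true;    % a 16x16 spot

% Label the connected components and get their bounding boxes
CC = bwconncomp(bwimg);
props = regionprops(CC, 'BoundingBox');

% Crop each blob out of the image using its bounding box
crops = cell(1, CC.NumObjects);
for k = 1:CC.NumObjects
    bb = round(props(k).BoundingBox);  % [x y width height] of blob k
    crops{k} = bwimg(bb(2):bb(2)+bb(4)-1, bb(1):bb(1)+bb(3)-1);
end
```

For the X-ray images you would additionally compare the two crops (e.g. by position) to tell the common spot from the moving one.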
I have a logical matrix (103x3488), the output of a photo of a measuring staff run through edge detection (1 = edge, 0 = no edge). Aim: to calculate the distance in pixels between the graduations on the staff. Problem: the staff sags in the middle.
Idea: the user inputs the coordinates (using ginput or something) of each end of the staff and of the midpoint of the sag; then, if the edges between these points can be extracted into arrays, I can easily find the locations of the edges.
Any way of extracting an array from a matrix in this manner?
Also open to other ideas; I've only been using MATLAB for a month, so most functions are unknown to me.
edit:
Link to image
It shows a small area of the matrix, so in this example 1 and 2 are the points I want to sample between, and I'd want to return the points that occur along the red line.
Cheers
Try this
dat=imread('83zlP.png');
figure(1)
pcolor(double(dat))
shading flat
axis equal
% get the line ends
gi=floor(ginput(2))
x=gi(:,1);
y=gi(:,2);
xl=min(x):max(x); % line pixel x coords
yl=floor(interp1(x,y,xl)); % line pixel y coords
pdat=nan(length(xl),1);
for i=1:length(xl)
pdat(i)=dat(yl(i),xl(i));
end
figure(2)
plot(1:length(xl),pdat)
peaks=find(pdat>40); % threshold for peak detection
bigpeak=peaks(diff(peaks)>10); % threshold for selecting only edge of peak
hold all
plot(xl(bigpeak),pdat(bigpeak),'x')
meanspacex=mean(diff(xl(bigpeak)));
meanspacey=mean(diff(yl(bigpeak)));
meanspace=sqrt(meanspacex^2+meanspacey^2);
The vector pdat gives the pixels along the line you selected, and meanspace is the edge spacing in pixel units. The thresholds might need fiddling with, depending on the image.
After seeing the image, I'm not sure where the "sagging" you're referring to is taking place. The image is rotated, but you can fix that using imrotate. The angle by which it needs to be rotated should be easy enough to get: just take the coordinates of A and B and use the inverse tangent to find the angle offset from 0 degrees.
Regarding the points: once it's aligned straight, all you need to do is specify a row of the image matrix (a 1 x 3488 vector) and use find to get the non-zero indexes. As imrotate may have interpolated the pixels somewhat, you may get more than one index per "line", but they'll be identifiable as consecutive numbers, and you can just average them to get an approximate value.
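That row-reading step can be sketched on synthetic data like this. The rotation part is commented out and the angle variables are hypothetical, since they would come from your ginput points:

```matlab
% Synthetic stand-in for the straightened edge matrix: three vertical
% graduation edges at columns 500, 1000 and 1500.
edges = false(103, 3488);
edges(:, [500 1000 1500]) = true;

% If the image were tilted, undo it first; A and B would be the two
% user-picked endpoints (hypothetical, from ginput):
% theta = atan2d(B(2) - A(2), B(1) - A(1));
% edges = imrotate(edges, -theta) > 0;

row = edges(50, :);      % any single row across the straightened staff
idx = find(row);         % column indexes where edges cross this row
spacing = diff(idx);     % pixel distance between successive graduations
```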