I would like to detect one white pixel of the car plate in an image. I do not know how to do this reliably, because in some places the color of the car body is "whiter" than the white of the plate. I wrote runnable code, but I have to select one of the plate's points with the mouse; after that the plate is found and I can draw a rectangle around it.
Do you have any idea how to automate finding one white pixel of the car plate?
Here is the code:
clear all;
close all;
I= imread('volvo_frame_0001.bmp');
figure, imshow(I)
level=0.5;
BW=im2bw(I,level);
figure, imshow(BW);
BW2 = bwselect(BW,4); % interactive: click a point on the plate to keep its 4-connected region
figure, imshow(BW2);
hold on;
C = corner(BW2); % corner points of the selected region
min_x = min(C(:,1));
max_x = max(C(:,1));
min_y = min(C(:,2));
max_y = max(C(:,2));
figure, imshow(I);
hold on;
BoxPolygon = [min_x, max_y; max_x, max_y; max_x, min_y; min_x, min_y; min_x, max_y];
line(BoxPolygon(:, 1), BoxPolygon(:, 2), 'Color', 'g');
Actually, the answer to your question belongs to an application field called License Plate Recognition (LPR), for which you can find hundreds, if not thousands, of programs in Matlab and other languages, such as this free Matlab code.
Anyway, if you insist on writing the code from scratch, I suggest you do not look for white pixels, because you cannot reliably tell whether a pixel is white: pixel values span a wide range (0 to 256^3 in RGB), and a "color label" is not something you can simply assign to a single pixel (see this as an illustration of the fact). Instead, you are better off using other "features" of plates, such as the fact that a plate is a rectangle with a known ratio of side lengths. You can then use Canny edge detection to find edges (abrupt changes of intensity or color), from which shape characteristics are much easier to judge, meaning you can more easily find rectangles in this kind of image.
Once you have found rectangles, you can check other features inside them, for example the intensity histogram, to decide whether a rectangle is actually a plate or just another object that is close to a plate in shape but not in content (see the sketch below).
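Something along these lines could look like the following; the file name is taken from your question, while the closing size, aspect-ratio range and area limit are placeholder values you would have to tune:
% minimal sketch: edge detection plus a shape/ratio check on connected components
% (closing size, ratio range and area limit are placeholder values, not tuned)
I = imread('volvo_frame_0001.bmp');
if size(I,3) == 3, G = rgb2gray(I); else, G = I; end
E = edge(G, 'canny');                        % abrupt intensity changes
E = imclose(E, strel('rectangle', [3 15]));  % join broken edges horizontally
stats = regionprops(E, 'BoundingBox', 'Area');
for k = 1:numel(stats)
    bb = stats(k).BoundingBox;               % [x y width height]
    ratio = bb(3) / bb(4);
    % a typical plate is much wider than tall; accept a generous range
    if ratio > 3 && ratio < 6 && stats(k).Area > 200
        figure, imshow(imcrop(I, bb)), title('plate candidate');
    end
end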
Hope this helps
Cheers
Hi everyone. I am trying to get the boundary dimensions of a bubble inside water using MATLAB. The code and the result are shown below.
clear;
clc;
i1=imread('1.jpg');
i2=imread('14.jpg');
% i1=rgb2gray(i1);
% i2=rgb2gray(i2);
[m,n]=size(i1);
im1=double(i1);
im2=double(i2);
i3=zeros(size(i1));
threshold=29;
for i = 1:m
    for j = 1:n
        % mark pixels that differ from the background by more than the threshold
        if abs(im2(i,j) - im1(i,j)) > threshold
            i3(i,j) = 1;
        else
            i3(i,j) = 0;
        end
    end
end
se = strel('square', 5);
filteredForeground = imopen(i3, se);
figure; imshow(filteredForeground); title('Clean Foreground');
BW1 = edge(filteredForeground,'sobel');
subplot(2,2,1);imshow(i1);title('BackGround');
subplot(2,2,2);imshow(i2);title('Current Frame');
subplot(2,2,3);imshow(filteredForeground);title('Clean Foreground');
subplot(2,2,4);imshow(BW1);title('Edge');
As the figure shows, the result is not very satisfactory, so can anyone give me some advice to improve it? Also, how can I write the boundary coordinates to a file and get the real dimensions of the bubble? Thank you very much!
First note that your background removal is almost useless.
If we plot diffI=i2-i1; imshow(diffI,[]); colorbar, we can see that the difference is almost as large as the image itself. You need to understand that what is visually similar to you is not necessarily similar numerically, and this is a great example of that.
Therefore you don't have what you think you have: the background is still there after your thresholding. Also note that the object you want to segment is not simply whiter; in some areas it is definitely as dark as the background. This means that simple segmentation by value thresholding will not work, and you need better segmentation techniques.
I happen to have a copy of this level set algorithm in my MATLAB, the "Distance Regularized Level Set Evolution".
When I run the code demo_1 with your image, I get the following (nice gif!):
Full code of the demo:
% This Matlab code demonstrates an edge-based active contour model as an application of
% the Distance Regularized Level Set Evolution (DRLSE) formulation in the following paper:
%
% C. Li, C. Xu, C. Gui, M. D. Fox, "Distance Regularized Level Set Evolution and Its Application to Image Segmentation",
% IEEE Trans. Image Processing, vol. 19 (12), pp. 3243-3254, 2010.
%
% Author: Chunming Li, all rights reserved
% E-mail: lchunming@gmail.com
% li_chunming@hotmail.com
% URL: http://www.imagecomputing.org/~cmli//
clear all;
close all;
Img=imread('https://i.stack.imgur.com/Wt9be.jpg');
Img=double(Img(:,:,1));
%% parameter setting
timestep=1; % time step
mu=0.2/timestep; % coefficient of the distance regularization term R(phi)
iter_inner=5;
iter_outer=300;
lambda=5; % coefficient of the weighted length term L(phi)
alfa=-3; % coefficient of the weighted area term A(phi)
epsilon=1.5; % parameter that specifies the width of the Dirac delta function
sigma=.8; % scale parameter in Gaussian kernel
G=fspecial('gaussian',15,sigma); % Gaussian kernel
Img_smooth=conv2(Img,G,'same'); % smooth image by Gaussian convolution
[Ix,Iy]=gradient(Img_smooth);
f=Ix.^2+Iy.^2;
g=1./(1+f); % edge indicator function.
% initialize LSF as binary step function
c0=2;
initialLSF = c0*ones(size(Img));
% generate the initial region R0 as two rectangles
initialLSF(round(size(Img,1)/2)-5:round(size(Img,1)/2)+5, round(size(Img,2)/2)-5:round(size(Img,2)/2)+5)=-c0; % round() keeps the indices integer for odd-sized images
% initialLSF(25:35,40:50)=-c0;
phi=initialLSF;
potential=2;
if potential ==1
potentialFunction = 'single-well'; % use single well potential p1(s)=0.5*(s-1)^2, which is good for region-based model
elseif potential == 2
potentialFunction = 'double-well'; % use double-well potential in Eq. (16), which is good for both edge and region based models
else
potentialFunction = 'double-well'; % default choice of potential function
end
% start level set evolution
for n=1:iter_outer
phi = drlse_edge(phi, g, lambda, mu, alfa, epsilon, timestep, iter_inner, potentialFunction);
if mod(n,2)==0
figure(2);
imagesc(Img,[0, 255]); axis off; axis equal; colormap(gray); hold on; contour(phi, [0,0], 'r');
drawnow
end
end
% refine the zero level contour by further level set evolution with alfa=0
alfa=0;
iter_refine = 10;
phi = drlse_edge(phi, g, lambda, mu, alfa, epsilon, timestep, iter_inner, potentialFunction);
finalLSF=phi;
figure(2);
imagesc(Img,[0, 255]); axis off; axis equal; colormap(gray); hold on; contour(phi, [0,0], 'r');
hold on; contour(phi, [0,0], 'r');
str=['Final zero level contour, ', num2str(iter_outer*iter_inner+iter_refine), ' iterations'];
title(str);
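If you also need the boundary coordinates from this result (as asked in the question), here is a hedged sketch, assuming the final phi is negative inside the object as in the initialization above:
% hedged sketch: turn the final level set into a binary mask and trace it
% (assumes phi < 0 inside the object, as in the initialization above)
mask = phi < 0;
B = bwboundaries(mask);                    % cell array of [row, col] boundary points
boundary = B{1};                           % pick the component that is your bubble
dlmwrite('bubble_boundary.txt', boundary); % save the pixel coordinates to a file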
Ander pointed out in his answer that the background image doesn't match the background of the bubble image. My very best advice to you is not to try to fix this in code, but to fix your experimental setup. If you fix this in software, you'll get a complicated program with lots of "magic numbers" that nobody will be able to maintain after you graduate and leave. Anybody wanting to continue your work will have a hard time adjusting the program to match some new experimental conditions. Fixing the setup will lead to an experiment that is much easier to reproduce and to build on.
So what is wrong with the background picture? First of all, make sure the illumination hasn't changed since you took it. Let's assume you took the pictures in succession, and the change in background illumination is due to shadows of the bubble on the background.
In your previous question about this topic you got some so-so advice about your experimental setup. This picture is from that question:
This looks really great, you have a transparent tank, and a big white surface behind it. I recommend that you take out the reticulated sheet from behind it, and put all your lights on the white background. The goal is to get back-illuminated bubbles. The bubbles will cast a shadow, but it will be towards the camera, not the background -- they will darken the image, making detection really simple. But you need to make sure there is no direct light falling on the bubbles, since the reflection of that light towards the camera will cause highlights (as you see in your picture) that could be brighter than the background, or at least will reduce contrast.
If you keep some distance between the tank and the white background, then when focusing the camera on the bubbles that background will be out of focus and blurred, meaning that it will be fairly uniform. The less detail in the background, the easier the detection of bubbles is.
If you need the markings from the reticulated sheet, then I recommend you use a transparent sheet for that, on which you can draw lines with a permanent marker.
Sorry, this was not at all a programming answer... :)
So here is what this could look like. An example image with bubbles that we've used in Delft for many decades as an exercise:
I actually don't know what it is from, but they seem to be small bubbles in a liquid. Some are out of focus; you won't have that problem. Segmentation is quite simple (this uses MATLAB with DIPimage):
img = readim('bubbles.tif');
background = closing(img,25); % estimate of background
out = threshold(background - img);
out = fillholes(out);
traces = traceobjects(out);
If you have a background image (which of course you'll have), then you don't need to estimate it. What the code then does is simply threshold the difference between the background and the image (since the bubbles are darker, I subtract the image from the background instead of the other way around), and a very simple post-processing to fill up the holes in the objects. Depending on what your images look like, you might need a bit more preprocessing or postprocessing... Think about noise removal in the input image!
The last line traces the object boundaries, yielding a polygon for each bubble (this last command is only in DIPimage 3.0, which isn't officially released yet, but you can compile it yourself if you're adventurous). Alternatively, use the bwboundaries function from the Image Processing Toolbox:
traces = bwboundaries(dip_array(out));
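To answer the other part of your question, real dimensions and file output, here is a hedged sketch; pixel_size_mm is a placeholder that you must calibrate yourself, for example against a known length in the scene:
% hedged sketch: scale the traced boundaries to millimetres and save them
% (pixel_size_mm is a placeholder; calibrate it from a known length in the image)
pixel_size_mm = 0.05;
for k = 1:numel(traces)
    boundary_mm = traces{k} * pixel_size_mm;   % pixel coordinates -> millimetres
    dlmwrite(sprintf('bubble_%02d.txt', k), boundary_mm);
end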
In the above image, the blue circle at the center and the rectangle at the bottom represent defects, while the light blue area represents the normal area.
When I use the slice function for a 3D representation, how do I get rid of the light blue area so that only the circle and the rectangle are visible in the 3D plot?
You mean like this?
Here is the code I came up with (note that it depends on a threshold value that you have to compute yourself from your whole dataset, which is why the result is a little noisy):
clear all;
close all;
pkg load image
im=double(rgb2gray(imread("5JpXg.jpg")));
im=im(10:end-10,10:end-10);
%you can try to find a better threshold based on your data
threshold=100;
im(im<threshold)=0;%or im(im>threshold)=0 if you want everything to be blank except the circle and the rectangle
[m n]=size(im);
num_non_zero_pixels=size(im(im~=0),1);
x=zeros(3,num_non_zero_pixels);
counter=1;
% collect the row, column and intensity of every remaining non-zero pixel
for i=1:m
for j=1:n
if(0~=im(i,j))
x(1,counter)=i;
x(2,counter)=j;
x(3,counter)=im(i,j);
counter=counter+1;
end
end
end
plot3(x(1,:)',x(2,:)',x(3,:)',"*");
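As a side note, the double loop can be replaced by a single call to find, which returns the row, column and value of every nonzero element:
% equivalent, vectorized version of the loop above
[r, c, v] = find(im);
plot3(r, c, v, "*");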
I'm working with images of leaves in Matlab. I will compare portions of those leaves through some similarity functions (Euclidean distance, for example), but first I need to extract portions of each leaf and save them. So this is my problem now: how do I select those portions and draw a rectangle that shows me what will be cut? I already got the centroid and a bounding box using the regionprops function (you can see those in red in the image firstResultsMatlab.png). However, I'm struggling to draw and extract portions like those shown in blue in the same image. I don't want to include parts of the black background, only portions of the leaf.
I've also added an image of a leaf as an example of what I've been working on, together with the code I used to get a bounding box and the centroid. Any ideas are welcome!
Thank you very much in advance.
I = imread('C:\Users\IBM_ADMIN\Desktop\Mestrado\Imagens_Final\IMG1_N1_1.png');
L = bwlabel(I);
s = regionprops(L,'BoundingBox');
stat = regionprops(L,'centroid');
him = imshow(I); % display the image once, before drawing the overlays
hold on;
colors = hsv(numel(s));
for k = 1:numel(s)
rectangle('Position', s(k).BoundingBox, 'EdgeColor', colors(k,:));
plot(stat(k).Centroid(1),stat(k).Centroid(2),'rx');
end
(Images: matlab-results, leaf-1)
I am writing a road sign recognition program in Matlab and I want to recognize circular road signs, so I use the Matlab function imfindcircles. I would like to crop only the circular road signs and put each of them in a separate figure. However, there are other road signs (triangles or squares) in each image, and I don't want them. I have no idea how to do this. Here is my code:
[im_bw,map] = imread('roadsign.JPG'); %image black and white
S = regionprops(im_bw,'Extrema','Centroid','BoundingBox');
[centers, radii] = imfindcircles(im_bw,[12 40]);
for k = 1:length(S)
im_cercle = imcrop(im_bw, S(k).BoundingBox);
im_cercle = padarray(im_cercle, [20 20]); % put each roadsigns in a small figure
if radii(k) ~= 0 % Error
figure,imshow(im_cercle); title 'Circle spotted'; % Show every circular roadsigns in a figure
else
figure('visible','off'),imshow(im_cercle); title 'wrong raodsign';
end
end
I tried some other conditions with centers and radii, but when I execute the code I get dimension errors, or sometimes it shows me a shape which is not a circle. I also tried using a variable that is only set when a circle is found, but without results. Can you help me please?
Thanks in advance.
What do you expect the output of regionprops to be? It isn't picking out the circles - it's picking out all the "areas" (anything that forms a connected region in im_bw). In addition, while imfindcircles can find circles that overlap with other objects, regionprops will detect overlapping areas as a single object.
On the other hand, you call imfindcircles and then do nothing with the output.
Rather than doing anything with regionprops, just use the values of centers and radii to define a bounding box around each detected circle (optionally with some additional padding), crop that area out of the image, and save or display it.
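A rough sketch of that idea (the padding of 5 pixels is an arbitrary choice; the file name and radius range are taken from your code):
[im_bw, map] = imread('roadsign.JPG');
[centers, radii] = imfindcircles(im_bw, [12 40]);
pad = 5;  % extra margin around each detected circle, arbitrary choice
for k = 1:length(radii)
    % bounding box [x y width height] centred on the k-th circle
    box = [centers(k,1) - radii(k) - pad, centers(k,2) - radii(k) - pad, ...
           2*(radii(k) + pad), 2*(radii(k) + pad)];
    im_circle = imcrop(im_bw, box);
    figure, imshow(im_circle), title('Circle spotted');
end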
As a preface: this is my first question - I've tried my best to make it as clear as possible, but I apologise if it doesn't meet the required standards.
As part of a summer project, I am taking time-lapse images of an internal melt figure growing inside a crystal of ice. For each of these images I would like to measure the perimeter of, and area enclosed by the figure formed. Linked below is an example of one of my images:
The method that I'm trying to use is the following:
Load image, crop, and convert to grayscale
Process to reduce noise
Find edge/perimeter
Attempt to join edges
Fill perimeter with white
Measure Area and Perimeter using regionprops
This is the code that I am using:
clear; close all;
% load image and convert to grayscale
tyrgb = imread('TyndallTest.jpg');
ty = rgb2gray(tyrgb);
figure; imshow(ty)
% apply a Wiener filter to remove noise.
% N is a measure of the window size for detecting coherent features
N=20;
tywf = wiener2(ty,[N,N]);
tywf = tywf(N:end-N,N:end-N);
% rescale the image adaptively to enhance contrast without enhancing noise
tywfb = adapthisteq(tywf);
% apply a canny edge detection
tyedb = edge(tywfb,'canny');
%join edges
diskEnt1 = strel('disk',8); % radius of 8
tyjoin1 = imclose(tyedb,diskEnt1);
figure; imshow(tyjoin1)
It is at this stage that I am struggling. The edges do not quite join, no matter how much I play around with the morphological structuring element. Perhaps there is a better way to complete the edges? Linked is an example of the figure this code outputs:
The reason that I am trying to join the edges is so that I can fill the perimeter with white pixels and then use regionprops to output the area. I have tried using the imfill command, but cannot seem to fill the outline as there are a large number of dark regions to be filled within the perimeter.
Is there a better way to get the area of one of these melt figures that is more appropriate in this case?
As background research: I can make this method work for a simple image consisting of a black circle on a white background using the code below. However, I don't know how to edit it to handle more complex images with edges that are less well defined.
clear all
close all
clc
%% Read in RGB image from directory
RGB1 = imread('1.jpg');
%% Convert RGB image to grayscale image
I1 = rgb2gray(RGB1) ;
%% Transform Image
%CROP
IC1 = imcrop(I1,[74 43 278 285]);
%BINARY IMAGE
BW1 = im2bw(IC1); %Convert to binary image so the boundary can be traced
%FIND PERIMETER
BWP1 = bwperim(BW1);
%Traces perimeters of objects & colours them white (1).
%Sets all other pixels to black (0)
%Doing the same job as an edge detection algorithm?
%FILL PERIMETER WITH WHITE IN ORDER TO MEASURE AREA AND PERIMETER
BWF1 = imfill(BWP1); %This opens figure and allows you to select the areas to fill with white.
%MEASURE PERIMETER
D1 = regionprops(BWF1, 'area', 'perimeter');
%Returns an array containing the properties area and perimeter.
%D1(1) returns the perimeter of the box and an area value identical to that
%perimeter? The box must be bounded by a perimeter.
%D1(2) returns the perimeter and area of the section filled in BWF1
%% Display Area and Perimeter data
D1(2)
I think you might have room to improve the effect of the edge detection in addition to the morphological transformations; for instance, the following resulted in what appeared to me to be a relatively satisfactory perimeter.
tyedb = edge(tywfb,'sobel',0.012);
%join edges
diskEnt1 = strel('disk',7); % radius of 7
tyjoin1 = imclose(tyedb,diskEnt1);
In addition, I used bwfill interactively to fill in most of the interior. It should be possible to fill the interior programmatically, but I did not pursue this (see the sketch after the result below).
% interactively fill internal regions
[ny nx] = size(tyjoin1);
figure; imshow(tyjoin1)
tyjoin2=tyjoin1;
titl = sprintf('click on a region to fill\nclick outside window to stop...')
while 1
pts=ginput(1)
tyjoin2 = bwfill(tyjoin2,pts(1,1),pts(1,2),8);
imshow(tyjoin2)
title(titl)
if (pts(1,1)<1 | pts(1,1)>nx | pts(1,2)<1 | pts(1,2)>ny), break, end
end
This was the result I obtained
The "fractal" properties of the perimeter may be of importance to you however. Perhaps you want to retain the folds in your shape.
You might want to consider Active Contours. This will give you a continuous boundary of the object rather than patchy edges.
Below are links to
A book:
http://www.amazon.co.uk/Active-Contours-Application-Techniques-Statistics/dp/1447115570/ref=sr_1_fkmr2_1?ie=UTF8&qid=1377248739&sr=8-1-fkmr2&keywords=Active+shape+models+Andrew+Blake%2C+Michael+Isard
A demo:
http://users.ecs.soton.ac.uk/msn/book/new_demo/Snakes/
and some Matlab code on the File Exchange:
http://www.mathworks.co.uk/matlabcentral/fileexchange/28149-snake-active-contour
and a link to a description on how to implement it: http://www.cb.uu.se/~cris/blog/index.php/archives/217
Using the implementation on the File Exchange, you can get something like this:
%% Load the image
% You could use the segmented image obtained previously
% and then apply the snake on that (although I use the original image).
% This would probably make the snake work better, since the edges
% in your image are not that well defined.
% Make sure the original and the segmented image
% have the same size. They don't at the moment
I = imread('33kew0g.jpg');
% Convert the image to double data type
I = im2double(I);
% Show the image and select some points with the mouse (at least 4)
% figure, imshow(I); [y,x] = getpts;
% I have pre-selected the coordinates already
x = [ 525.8445 473.3837 413.4284 318.9989 212.5783 140.6320 62.6902 32.7125 55.1957 98.6633 164.6141 217.0749 317.5000 428.4172 494.3680 527.3434 561.8177 545.3300];
y = [ 435.9251 510.8691 570.8244 561.8311 570.8244 554.3367 476.3949 390.9586 311.5179 190.1085 113.6655 91.1823 98.6767 106.1711 142.1443 218.5872 296.5291 375.9698];
% Make an array with the selected coordinates
P=[x(:) y(:)];
%% Start Snake Process
% You probably have to fiddle with the parameters
% a bit more than I have
Options=struct;
Options.Verbose=true;
Options.Iterations=1000;
Options.Delta = 0.02;
Options.Alpha = 0.5;
Options.Beta = 0.2;
figure(1);
[O,J]=Snake2D(I,P,Options);
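If I remember the File Exchange implementation correctly, the output O holds the final contour as an N-by-2 array of points (check its documentation); assuming that, you can estimate the quantities you are after directly:
% hedged sketch: area and perimeter from the closed snake contour O,
% assuming O is an N-by-2 list of points along the boundary
area_px  = polyarea(O(:,2), O(:,1));        % enclosed area in pixels
Oc       = O([1:end 1], :);                 % close the polygon
perim_px = sum(sqrt(sum(diff(Oc).^2, 2)));  % perimeter in pixels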
If the end result is an area/diameter estimate, then why not try to find maximal and minimal shapes that fit inside and around the outline, and use those shapes' areas to estimate the total area? For instance, compute a minimal circle around the edge set and a maximal circle inside the edges; then you could use these to estimate the diameter and area of the actual shape.
The advantage is that your bounding shapes can be fit in a way that minimizes error (unbounded edges) while optimizing size either up or down for the inner and outer shape, respectively.
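A hedged sketch of this, assuming you already have a filled binary mask of the melt figure: the inner circle comes from the distance transform, while the outer circle is approximated here by the largest centroid-to-boundary distance, which is a simplification of a true minimal enclosing circle.
% hedged sketch, assuming "mask" is a filled binary image of the melt figure
D = bwdist(~mask);                    % distance of each inner pixel to the boundary
r_in = max(D(:));                     % radius of the largest inscribed circle
area_inner = pi * r_in^2;             % lower bound on the area
s = regionprops(mask, 'Centroid');
B = bwboundaries(mask);
boundary = B{1};
d = hypot(boundary(:,2) - s(1).Centroid(1), boundary(:,1) - s(1).Centroid(2));
r_out = max(d);                       % rough radius of an enclosing circle
area_outer = pi * r_out^2;            % upper bound; the true area lies in between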