Matlab: extract portions of an image (without its background)

I'm working with images of leaves in MATLAB. I will compare portions of those leaves through some similarity functions (Euclidean distance, for example), but first I need to extract portions of each leaf and save them. So this is my problem: how do I select those portions and draw a rectangle that shows me what will be cut? I already got the centroid and a bounding box using the regionprops function (you can see those in red in the image firstResultsMatlab.png). However, I'm struggling to draw and extract portions like the ones shown in blue in the same image. I don't want to get parts of the black background, only portions of the leaf.
I've also added an image of a leaf as an example of what I've been working on, along with the code I used to get a bounding box and the centroid. Any ideas are welcome!
Thank you very much in advance.
I = imread('C:\Users\IBM_ADMIN\Desktop\Mestrado\Imagens_Final\IMG1_N1_1.png');
L = bwlabel(I);                                   % assumes I is a binary (logical) image
s = regionprops(L, 'BoundingBox', 'Centroid');    % get both properties in one call
him = imshow(I);
hold on;
colors = hsv(numel(s));
for k = 1:numel(s)
    rectangle('Position', s(k).BoundingBox, 'EdgeColor', colors(k,:));
    plot(s(k).Centroid(1), s(k).Centroid(2), 'rx');
end
hold off;
[Attached images: matlab-results, leaf-1]
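For illustration, a minimal sketch of one way such within-leaf patches could be cut so that no black background is included (it assumes the image I read above is, or can be thresholded into, a binary leaf mask; the patch size is an arbitrary choice):
bw = logical(I);                       % binary leaf mask; threshold first if I is not already binary
patchSize = 65;                        % arbitrary (odd) patch side length
half = (patchSize-1)/2;
% Any pixel surviving this erosion is far enough from the leaf edge that a
% patchSize x patchSize square centred on it stays entirely on the leaf.
validCentres = imerode(bw, strel('square', patchSize));
[rows, cols] = find(validCentres);
r = rows(1); c = cols(1);              % take the first valid centre as a demo
patch = I(r-half:r+half, c-half:c+half, :);
figure; imshow(I); hold on;
rectangle('Position', [c-half, r-half, patchSize, patchSize], 'EdgeColor', 'b');
figure; imshow(patch);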

Related

Template matching

I am trying to detect elevators on floor plans in MATLAB. The code I have now is not detecting elevators; instead it just points at the edges of the image. I expect to detect all the elevators on a floor plan. Elevators are represented by a square or rectangle with an X inside, similar to the template image. I have attached the template, the image and a result screenshot.
Template image:
Image:
Results:
Code:
template = rgb2gray(imread('ele7.png'));
img = rgb2gray(imread('floorplan.jpg'));   % avoid shadowing the built-in 'image' function
%imshowpair(img,template,'montage')
c = normxcorr2(template,img);              % perform normalized cross-correlation
figure, surf(c), shading flat
[ypeak, xpeak] = find(c==max(c(:)));       % peak of correlation
%Compute translation from max location in correlation matrix; offset accounts for the padding
yoffSet = ypeak-size(template,1);
xoffSet = xpeak-size(template,2);
%Display matched area
figure
hAx = axes;
imshow(img,'Parent', hAx);
imrect(hAx, [xoffSet+1, yoffSet+1, size(template,2), size(template,1)]);
To check if everything runs smoothly, you should plot the correlation:
figure, surf(c)
As mentioned by @cris-luengo, it's easy to go wrong with the sizes of the images and so on. However, I've seen that you followed the tutorial at https://es.mathworks.com/help/images/ref/normxcorr2.html . Since both of your images are already black-and-white (two-colour) images, whereas normxcorr2 works well with RGB images (with textures, objects, etc.), I don't think normxcorr2 is a correct approach here.
An approach I would consider is to look for branch points. Following MATLAB's help and bwmorph:
BW = imread('circles.png');
imshow(BW);
BW1 = bwmorph(BW,'skel',Inf);
You first skeletonize the image; then you can use any of the operations listed in bwmorph's help (https://es.mathworks.com/help/images/ref/bwmorph.html). In this case, I'd search for branch points, i.e. crossings. It is as simple as:
BW2 = bwmorph(BW1,'branchpoints');
branchPointsPixels = find(BW2 == 1);
The indices of the branch-point pixels will be where it finds an X. However, it can be any rotated X (or a +, ...), so you'll find more points than you want and will need to filter them to get what you are after.
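A small sketch of how those indices could be turned into coordinates and inspected before filtering (overlaying them on the skeleton is just one convenient way to look at the candidates):
% Convert the linear indices to (row, col) coordinates and overlay them on the
% skeleton so the spurious crossings (rotated X's, +'s, ...) can be inspected.
[r, c] = ind2sub(size(BW2), branchPointsPixels);
figure; imshow(BW1); hold on;
plot(c, r, 'ro');   % candidate X-like crossings; further filtering is still needed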

How to make a car park image show different colors for each dot using MATLAB?

I converted my car park image to a binary image and cleared the unwanted white dots/regions to get this image:
This is my code:
sceneImage = imread('nocars10green.jpg');
figure;
imshow(sceneImage);
hsvscene = rgb2hsv (sceneImage);
figure;
imshow (hsvscene);
grayscene = rgb2gray (hsvscene);
figure;
imshow (grayscene);
bwScene = im2bw (grayscene);
figure;
imshow (bwScene);
str = strel('disk',4);
bw = imerode(bwScene,str);
figure;
imshow (bw);
How do I convert the binary image after erosion so that I can show different colors for the different dots?
I read this in a journal paper:
Al-Kharusi, Hilal, and Ibrahim Al-Bahadly. "Intelligent parking management system based on image processing." World Journal of Engineering and Technology 2014 (2014).
where it is mentioned:
if (newmatrix(y,x) > 0) % an object is there, if (e(newmatrix(y,x)) = 0) this object has not been seen
(newmatrix(y,x)) = x; make the value and index 3 equal to the current X coordinate.
and this is their output image:
But I don't understand how it works. Can anyone explain how it works and how to write the commands to convert my binary image so that I get the same as their output image, with a different color for each dot?
Or is there any other way to do this?
The term you have to search the web for is connected component labeling or just labeling.
Given the provided image with 10 white dots on black background you can do the following:
Find the blobs:
https://en.wikipedia.org/wiki/Connected-component_labeling
https://de.mathworks.com/help/images/ref/bwlabel.html
Then color them, for example by using label2rgb:
You can display the output matrix as a pseudocolor indexed image. Each object appears in a different color, so the objects are easier to distinguish than in the original image. For more information, see label2rgb.
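A minimal sketch of that pipeline, assuming bw is the eroded binary image from the question above:
[L, num] = bwlabel(bw);                        % label each connected white dot
rgb = label2rgb(L, 'jet', 'k', 'shuffle');     % a distinct color per label, black background
figure; imshow(rgb);
title(sprintf('%d dots found', num));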

remove background or mark filled areas from imfill

I'm doing image segmentation in MATLAB on grayscale images taken from a drone with a thermosensitive camera. The idea is that you should be able to feed in a video, after which every frame is analyzed and a new video is produced in which each person is marked, clustered, and a total count per frame is given. So far, to remove the background, I first apply imtophat and then a threshold; on top of this I do some analysis to tell people apart from e.g. fences, houses, etc. However, this threshold is way too static, so once there is a shift in outdoor temperature, or the surface changes e.g. from grass to tarmac, I either get too many things in the picture or I remove some of the people. What I am ultimately looking for is a way to get rid of the background, so that what I have left is buildings, cars, people, etc.
This is the ultimate goal and a solution to this would be highly appreciated.
What I tried to do was to first use the following code on the first picture (where pic1 is the original picture):
%Make it double
pic2 = double(pic1);
%Remove some noise
pic2 = wiener2(pic2);
%Make the pedestrians larger
pic2 = imdilate(pic2,strel('disk',5));
%In case of shadows take these to some minimum
pic3 = pic2.*(pic2>mean(mean(pic2))) + mean(mean(pic2))*(pic2<mean(mean(pic2)));
%Remove some of the background
pic4 = imtophat(pic3,strel('disk',10));
%Smooth before taking gradients ('gaussian' was undefined in the original snippet;
%this definition is an assumption)
gaussian = imfilter(pic4, fspecial('gaussian',[5 5],1), 'replicate');
%Make the edges stand out.
hy = fspecial('sobel');
hx = hy';
Iy = imfilter(gaussian, hy, 'replicate');
Ix = imfilter(gaussian, hx, 'replicate');
gradmag = sqrt(Ix.^2 + Iy.^2);
%Threshold the edges
BW = gradmag>100;
%Close the circles ('BW1' in the original was presumably 'BW')
BW2 = imclose(BW,strel('disk',5));
Now I have a binary image of the edges of the objects in the picture, and I want to fill in the pedestrians so that I have an initial guess of where they are and how they look. So I apply imfill.
[BW3] = imfill(BW2);
Then what I want is the coordinates of all the pixels that MATLAB has turned white for me. How do I get that? I have tried [BW3,locations] = imfill(BW2), but this does not work (as I want it to).
For testing you can use the attached picture. Also, if you are trying to solve the ultimate problem at the top: I have no problem getting the house, the cars and the pedestrians out; the house and the cars I can sort out perfectly fine as long as they appear whole.
To get the pixels that imfill changed for you, compare the before and after images and use find to get the coordinates of the points whose values have changed.
diffimg = (BW2 ~= BW3);   % pixels that were filled
[y, x] = find(diffimg);   % their row (y) and column (x) coordinates

How to select the largest contour in MATLAB

In my current work, I have to detect a parasite. I have found the parasite using HSV and later converted it into a grey image. I have also done edge detection. Now I need some code which tells MATLAB to find the largest contour (the parasite) and set the rest of the area to black pixels.
You can select the "largest" contour by filling in the holes that each contour surrounds, figuring out which shape gives you the largest area, then using the locations of that largest area to copy it over to a final image. As Benoit_11 suggested, use regionprops - specifically the Area and PixelList flags. Something like this:
im = imclearborder(im2bw(imread('http://i.stack.imgur.com/a5Yi7.jpg')));
im_fill = imfill(im, 'holes');
s = regionprops(im_fill, 'Area', 'PixelList');
[~,ind] = max([s.Area]);
pix = sub2ind(size(im), s(ind).PixelList(:,2), s(ind).PixelList(:,1));
out = zeros(size(im));
out(pix) = im(pix);
imshow(out);
The first line of code reads in your image from StackOverflow directly. The image is also an RGB image for some reason, so I convert it to binary through im2bw. There is also a white border that surrounds the image; you most likely had this image open in a figure and saved it from the figure. I got rid of this by using imclearborder to remove the white border.
Next, we need to fill in the areas that the contours surround, so use imfill with the holes flag. Next, use regionprops to analyze the different filled objects in the image - specifically the Area and which pixels belong to each object in the filled image. Once we obtain these attributes, find the filled contour that gives you the biggest area, then access the correct regionprops element, extract the pixel locations that belong to the object, then use these to copy the pixels over to an output image and display the results.
We get:
Alternatively, you can use the Perimeter flag (as Benoit_11 suggested) and simply find the maximum perimeter, which will correspond to the largest contour. This should still give you what you want. As such, simply replace the Area flag with Perimeter in the third and fourth lines of code and you should still get the same results.
Since my answer was pretty much all written out I'll give it to you anyway, but the idea is similar to @rayryeng's answer.
Basically I use the Perimeter and PixelIdxList flags during the call to regionprops and therefore get the linear indices of the pixels forming the largest contour, once the image border has been removed using imclearborder.
Here is the code:
clc
clear
BW = imclearborder(im2bw(imread('http://i.stack.imgur.com/a5Yi7.jpg')));
S= regionprops(BW, 'Perimeter','PixelIdxList');
[~,idx] = max([S.Perimeter]);
Indices = S(idx).PixelIdxList;
NewIm = false(size(BW));
NewIm(Indices) = 1;
imshow(NewIm)
And the output:
As you see there are many ways to achieve the same result haha.
This could be one approach -
%// Read in image as binary
im = im2bw(imread('http://i.stack.imgur.com/a5Yi7.jpg'));
im = im(40:320,90:375); %// clear out the whitish border you have
figure, imshow(im), title('Original image')
%// Fill all contours to get us filled blobs and then select the biggest one
outer_blob = imfill(im,'holes');
figure, imshow(outer_blob), title('Filled Blobs')
%// Select the biggest blob that will correspond to the biggest contour
outer_blob = biggest_blob(outer_blob);
%// Get the biggest contour from the biggest filled blob
out = outer_blob & im;
figure, imshow(out), title('Final output: Biggest Contour')
The function biggest_blob, which is based on bsxfun, is an alternative to what the other answers posted here do with regionprops. From my experience, I have found this bsxfun-based technique to be faster than regionprops; there are a few benchmarks comparing the runtime performance of the two techniques in one of my previous answers.
Associated function -
function out = biggest_blob(BW)
%// Find and label blobs in the binary image BW
[L, num] = bwlabel(BW, 8);
%// Count of pixels in each blob, which gives the area of each blob
counts = sum(bsxfun(@eq,L(:),1:num));
%// Get the label (ind) corresponding to the blob with the maximum area,
%// which would be the biggest blob
[~,ind] = max(counts);
%// Get the logical mask of the biggest blob by comparing all labels
%// to the label (ind) of the biggest blob
out = (L==ind);
return;
Debug images -
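For reference, a rough sketch of how such a timing comparison could be set up with timeit (text.png is a binary demo image shipped with the Image Processing Toolbox; largest_via_regionprops is a hypothetical helper implementing the regionprops approach from the answers above):
bw = imread('text.png');                                   % binary demo image
t_bsxfun      = timeit(@() biggest_blob(bw));              % function defined above
t_regionprops = timeit(@() largest_via_regionprops(bw));   % hypothetical regionprops-based helper
fprintf('bsxfun: %.4f s, regionprops: %.4f s\n', t_bsxfun, t_regionprops);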

MATLAB Image Processing - Find Edge and Area of Image

As a preface: this is my first question - I've tried my best to make it as clear as possible, but I apologise if it doesn't meet the required standards.
As part of a summer project, I am taking time-lapse images of an internal melt figure growing inside a crystal of ice. For each of these images I would like to measure the perimeter of, and area enclosed by the figure formed. Linked below is an example of one of my images:
The method that I'm trying to use is the following:
Load image, crop, and convert to grayscale
Process to reduce noise
Find edge/perimeter
Attempt to join edges
Fill perimeter with white
Measure Area and Perimeter using regionprops
This is the code that I am using:
clear; close all;
% load image and convert to grayscale
tyrgb = imread('TyndallTest.jpg');
ty = rgb2gray(tyrgb);
figure; imshow(ty)
% apply a Wiener filter to remove noise.
% N is a measure of the window size for detecting coherent features
N=20;
tywf = wiener2(ty,[N,N]);
tywf = tywf(N:end-N,N:end-N);
% rescale the image adaptively to enhance contrast without enhancing noise
tywfb = adapthisteq(tywf);
% apply a canny edge detection
tyedb = edge(tywfb,'canny');
%join edges
diskEnt1 = strel('disk',8); % disk of radius 8
tyjoin1 = imclose(tyedb,diskEnt1);
figure; imshow(tyjoin1)
It is at this stage that I am struggling. The edges do not quite join, no matter how much I play around with the morphological structuring element. Perhaps there is a better way to complete the edges? Linked is an example of the figure this code outputs:
The reason that I am trying to join the edges is so that I can fill the perimeter with white pixels and then use regionprops to output the area. I have tried using the imfill command, but cannot seem to fill the outline as there are a large number of dark regions to be filled within the perimeter.
Is there a better way to get the area of one of these melt figures that is more appropriate in this case?
As background research: I can make this method work for a simple image consisting of a black circle on a white background using the code below. However, I don't know how to edit it to handle more complex images with edges that are less well defined.
clear all
close all
clc
%% Read in RGB image from directory
RGB1 = imread('1.jpg');
%% Convert RGB image to grayscale image
I1 = rgb2gray(RGB1);
%% Transform Image
%CROP
IC1 = imcrop(I1,[74 43 278 285]);
%BINARY IMAGE
BW1 = im2bw(IC1); %Convert to binary image so the boundary can be traced
%FIND PERIMETER
BWP1 = bwperim(BW1);
%Traces perimeters of objects & colours them white (1).
%Sets all other pixels to black (0)
%Doing the same job as an edge detection algorithm?
%FILL PERIMETER WITH WHITE IN ORDER TO MEASURE AREA AND PERIMETER
BWF1 = imfill(BWP1); %This opens figure and allows you to select the areas to fill with white.
%MEASURE PERIMETER
D1 = regionprops(BWF1, 'area', 'perimeter');
%Returns an array containing the properties area and perimeter.
%D1(1) returns the perimeter of the box and an area value identical to that
%perimeter? The box must be bounded by a perimeter.
%D1(2) returns the perimeter and area of the section filled in BWF1
%% Display Area and Perimeter data
D1(2)
I think you have room to improve the edge detection itself, in addition to the morphological transformations; for instance, the following gave me a relatively satisfactory perimeter.
tyedb = edge(tywfb,'sobel',0.012);
%join edges
diskEnt1 = strel('disk',7); % disk of radius 7
tyjoin1 = imclose(tyedb,diskEnt1);
In addition I used bwfill interactively to fill in most of the interior. It should be possible to fill the interior programmatically, but I did not pursue this; a rough sketch of one possibility follows the result below.
% interactively fill internal regions
[ny, nx] = size(tyjoin1);
figure; imshow(tyjoin1)
tyjoin2 = tyjoin1;
titl = sprintf('click on a region to fill\nclick outside window to stop...');
while 1
    pts = ginput(1);
    tyjoin2 = bwfill(tyjoin2, pts(1,1), pts(1,2), 8);
    imshow(tyjoin2)
    title(titl)
    if (pts(1,1)<1 || pts(1,1)>nx || pts(1,2)<1 || pts(1,2)>ny), break, end
end
This was the result I obtained
The "fractal" properties of the perimeter may be of importance to you however. Perhaps you want to retain the folds in your shape.
You might want to consider Active Contours. This will give you a continuous boundary of the object rather than patchy edges.
Below are links to
A book:
http://www.amazon.co.uk/Active-Contours-Application-Techniques-Statistics/dp/1447115570/ref=sr_1_fkmr2_1?ie=UTF8&qid=1377248739&sr=8-1-fkmr2&keywords=Active+shape+models+Andrew+Blake%2C+Michael+Isard
A demo:
http://users.ecs.soton.ac.uk/msn/book/new_demo/Snakes/
and some Matlab code on the File Exchange:
http://www.mathworks.co.uk/matlabcentral/fileexchange/28149-snake-active-contour
and a link to a description on how to implement it: http://www.cb.uu.se/~cris/blog/index.php/archives/217
Using the implementation on the File Exchange, you can get something like this:
%% Load the image
% You could use the segmented image obtained previously
% and then apply the snake on that (although I use the original image here).
% That would probably make the snake work better, since the edges
% in your image are not that well defined.
% Make sure the original and the segmented image
% have the same size (they don't at the moment).
I = imread('33kew0g.jpg');
% Convert the image to double data type
I = im2double(I);
% Show the image and select some points with the mouse (at least 4)
% figure, imshow(I); [y,x] = getpts;
% I have pre-selected the coordinates already
x = [ 525.8445 473.3837 413.4284 318.9989 212.5783 140.6320 62.6902 32.7125 55.1957 98.6633 164.6141 217.0749 317.5000 428.4172 494.3680 527.3434 561.8177 545.3300];
y = [ 435.9251 510.8691 570.8244 561.8311 570.8244 554.3367 476.3949 390.9586 311.5179 190.1085 113.6655 91.1823 98.6767 106.1711 142.1443 218.5872 296.5291 375.9698];
% Make an array with the selected coordinates
P=[x(:) y(:)];
%% Start Snake Process
% You probably have to fiddle with the parameters
% a bit more that I have
Options=struct;
Options.Verbose=true;
Options.Iterations=1000;
Options.Delta = 0.02;
Options.Alpha = 0.5;
Options.Beta = 0.2;
figure(1);
[O,J]=Snake2D(I,P,Options);
If the end result is an area/diameter estimate, then why not try to find maximal and minimal shapes that fit the outline and then use the shapes' areas to estimate the total area. For instance, compute a minimal circle around the edge set and a maximal circle inside the edges. Then you could use these to estimate the diameter and area of the actual shape.
The advantage is that your bounding shapes can be fit in a way that minimizes error (unbounded edges) while optimizing size either up or down for the inner and outer shape, respectively.
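A hedged sketch of how this idea could be approximated with regionprops and a distance transform (assuming filled is a filled binary mask of the melt figure, e.g. from one of the filling steps above; the variable name and the circle choices are assumptions):
stats = regionprops(filled, 'Centroid');
[py, px] = find(bwperim(filled));                 % perimeter pixel coordinates
cx = stats(1).Centroid(1);  cy = stats(1).Centroid(2);
rOuter = max(hypot(px - cx, py - cy));            % enclosing circle centred on the centroid (approximate)
D = bwdist(~filled);
rInner = max(D(:));                               % largest inscribed circle radius
areaLow  = pi * rInner^2;                         % lower estimate of the area
areaHigh = pi * rOuter^2;                         % upper estimate of the area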