I create an image containing random groups of randomly placed pixels:
img = ones(100, 100);
numRandom = 505;
linearIndices = ceil(numel(img) * rand(1, numRandom));
img(linearIndices) = 0;
imshow(img)
Then I turn this image into binary and find the area of each group of pixels with:
regionprops(L, 'Area');
I also need the perimeter of each group. Unfortunately, regionprops doesn't give me the results I expect (for example, for a single pixel it returns 0 instead of 4), so I think it is better to count the neighbouring pixels of each group (so that for a single pixel the answer is 4). Groups lying on the border of the image should also be taken into account.
Can anybody give me a tip about how to do it?
Then Perimeter from regionprops is not what you need. Alternatively, find all these single pixels using regionprops(L, 'Area') == 1 and set their perimeter to 4.
From Matlab documentation:
Perimeter — the distance around the boundary of the region. regionprops computes the perimeter by calculating the distance between each adjoining pair of pixels around the border of the region. If the image contains discontiguous regions, regionprops returns unexpected results. The following figure shows the pixels included in the perimeter calculation for this object.
From this image you can see that the edge pixels are counted only once, not twice.
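A minimal sketch of that workaround, assuming the label matrix L is built from the img created above and that a perimeter of 4 is what you want for isolated pixels:
L = bwlabel(~img);                           % label the groups of zero-valued pixels
stats = regionprops(L, 'Area', 'Perimeter');
perims = [stats.Perimeter];
perims([stats.Area] == 1) = 4;               % single-pixel groups get perimeter 4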
I have some binary images that I want to classify based on their shape in MATLAB. If they have a circular or elliptical shape they belong to class one; if they have an elliptical shape with a dent in the boundary they belong to class two. I don't know how I can use this feature. Can anybody help me with this?
You can use the eccentricity property in regionprops. From MATLAB documentation of eccentricity:
The eccentricity is the ratio of the distance between the foci of the ellipse and its major axis length. The value is between 0 and 1. (0 and 1 are degenerate cases. An ellipse whose eccentricity is 0 is actually a circle, while an ellipse whose eccentricity is 1 is a line segment.)
So as the value of eccentricity increases, the ellipse becomes flatter. Hence, at its maximum value of 1, it is a line segment.
To check if there is a dent in the ellipse, you can check for convexity. Whenever there is a dent in an ellipse, it will be non-convex. In other words, if you try to fit a convex polygon to it, the polygon won't approximate the shape well. You can use the ConvexArea property to check this. From the MATLAB documentation of ConvexArea:
The number of pixels in 'ConvexImage', returned as a scalar. Only supported for 2-D label matrices.
So you use bwlabel to create a 2-D label matrix from your binary image and then compare the area of each region with the area of its convex hull. Measuring area here is as simple as counting pixels: take the absolute difference between ConvexArea and Area (or, equivalently, look at the Solidity property, which is the ratio Area/ConvexArea). You should be able to easily set a threshold to classify into one of the two classes.
I think you can write the code for this. Hope this helps.
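A minimal sketch of the idea, assuming BW is your binary image and that the 0.95 threshold is only a placeholder to be tuned:
L = bwlabel(BW);                                   % 2-D label matrix
stats = regionprops(L, 'Eccentricity', 'Area', 'ConvexArea');
solidity = [stats.Area] ./ [stats.ConvexArea];     % close to 1 for convex shapes
classTwo = solidity < 0.95;                        % dented ellipses
classOne = ~classTwo;                              % circles and smooth ellipses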
Working on 2D rectangular nesting. I need to find the utilization percentage of the material. Assume I have the length, breadth, and bottom-left position of each rectangle. What is the best way to determine the boundary-cut utilization?
Objective: to find the area under the red line.
Sample images are attached to depict what I have done and what I need.
What I have done
What I need
Another example image of rectangles packed with allowance
If you're interested in determining the total "area" underneath the red line, one suggestion I have, provided you have access to the Image Processing Toolbox, is to create a binary image where we draw all of the rectangles at once, fill all of the holes, and then take the total sum of the binary "pixels" in the image as the area. You said you have the (x,y) position of the bottom-left corner of each rectangle, as well as its width and height. To make this compatible with an image context, the y axis is usually flipped so that the top-left corner of the space is the origin instead of the bottom-left. However, this shouldn't affect our analysis, as we are simply reflecting the whole 2D space downwards.
Therefore, I would start with a blank image that is the same size as the grid you are dealing with, then write a loop that sets a rectangular block of coordinates to true for each rectangle you have. After that, use imfill to fill in any holes in the image, then calculate the total sum of the pixels to get the area. In an image processing context, a hole is any region of black pixels that is completely surrounded by white pixels; therefore, any gaps surrounded by white pixels will get filled in with white.
Therefore, assuming that we have four separate variables of x, y, width and height that are N elements long, where N is the number of rectangles you have, do something like this:
N = numel(x); %// Determine total number of rectangles
rows = 100; cols = 200; %// Define dimensions of grid here
im = false(rows, cols); %// Declare blank image
%// For each rectangle we have...
for idx = 1 : N
%// Set interior of rectangle at location all to true
im(y(idx)+1:y(idx)+height(idx), x(idx)+1:x(idx)+width(idx)) = true;
end
%// Fill in the holes
im_filled = imfill(im, 'holes');
%// Determine total area
ar = sum(im_filled(:));
The indexing in the for loop:
im(y(idx)+1:y(idx)+height(idx), x(idx)+1:x(idx)+width(idx)) = true;
is a bit tricky to deal with. Bear in mind that I'm assuming that y accesses the rows of the image and x accesses the columns. I'm also assuming that x and y are 0-based, so the origin is at (0,0). Because arrays and matrices in MATLAB are indexed starting at 1, we need to offset the coordinates by 1, so the row index begins at y(idx)+1. We end at y(idx) + height(idx) because the rectangle spans height(idx) rows starting from y(idx): in 0-based coordinates that is y(idx) up to y(idx) + height(idx) - 1. Take, for example, a line with a width of 20, from x = 0 to x = 19: the width is 20, but we draw from 0 up to 20 - 1 = 19. The +1 from MATLAB's 1-based indexing and the -1 from the 0-based coordinates cancel, which is why we are left with y(idx) + height(idx). The same can be said for the x coordinate and the width.
Once we draw all of the rectangles in the image, we use imfill to fill up the holes, then we can sum up the total area by just unrolling the whole image into a single vector and invoking sum. This should (hopefully) get what you need.
Now, if you want to find the area without the filled in holes (I suspect this is what you actually need), then you can skip the imfill step. Simply apply the sum on the im, instead of im_filled, and so:
ar = sum(im(:));
This will sum up all of the "white" pixels in the image, which is effectively the area. I'm not sure what you're actually after, so use one or the other depending on your needs.
Boundary-cut area without using the Image Processing Toolbox.
The detailed question description and answer can be found here.
This solution is applicable only to rectangular parts.
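For completeness, one possible sketch without the toolbox (my own assumption, not necessarily the linked solution) is to union the rectangles as polygons with polyshape (R2017b or newer) and take the resulting area:
P = polyshape();                                  % start with an empty polygon
for idx = 1 : numel(x)
    xv = [x(idx), x(idx)+width(idx), x(idx)+width(idx), x(idx)];
    yv = [y(idx), y(idx), y(idx)+height(idx), y(idx)+height(idx)];
    P = union(P, polyshape(xv, yv));              % accumulate the union of rectangles
end
ar = area(P);                                     % total covered area (gaps between rectangles are not filled)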
I am working on an image processing problem: detecting coins.
I have some images like this one here:
and wanted to separate the falsely connected coins.
We already tried the watershed method as stated on the MATLAB-Homepage:
the-watershed-transform-strategies-for-image-segmentation.html
especially since the first example is exactly our problem.
But instead we get a somehow very messed up separation as you can see here:
We already extracted the area of the coins using the regionprops Extrema property and applied the watershed only to the needed area.
I'd appreciate any help with the problem or even another method of getting it separated.
If you have the Image Processing Toolbox, I can also suggest the Circular Hough Transform through imfindcircles. However, this requires at least version R2012a, so if you don't have it, this won't work.
For the sake of completeness, I'll assume you have it. This is a good method if you want to leave the image untouched. If you don't know what the Hough Transform is, it is a method for finding straight lines in an image. The circular Hough Transform is a special case that aims to find circles in the image.
The added advantage of the circular Hough Transform is that it is able to detect partial circles in an image, which means that regions in your image that are connected can still be detected as separate circles. You'd call imfindcircles in the following fashion:
[centers,radii] = imfindcircles(A, radiusRange);
A would be your binary image of objects, and radiusRange is a two-element array that specifies the minimum and maximum radii of the circles you want to detect in your image. The outputs are:
centers: An N x 2 array that gives you the (x,y) coordinates of the centre of each circle detected in the image - x being the column and y being the row.
radii: An N x 1 array that gives the radius of each corresponding circle detected.
There are additional parameters to imfindcircles that you may find useful, such as the Sensitivity. A higher sensitivity means that it is able to detect circular shapes that are more non-uniform, such as what you are showing in your image. They aren't perfect circles, but they are round shapes. The default sensitivity is 0.85. I set it to 0.9 to get good results. Also, playing around with your image, I found that the radii ranged from 50 pixels to 150 pixels. Therefore, I did this:
im = im2bw(imread('http://dennlinger.bplaced.net/t06-4.jpg'));
[centers,radii] = imfindcircles(im, [50 150], 'Sensitivity', 0.9);
The first line of code reads in your image directly from StackOverflow. I also convert it to logical (true black and white), as the image you uploaded is of type uint8. This image is stored in im. Next, we call imfindcircles as described above.
Now, if we want to visualize the detected circles, simply use imshow to show your image, then use the viscircles to draw the circles in the image.
imshow(im);
viscircles(centers, radii, 'DrawBackgroundCircle', false);
viscircles by default draws the circles with a white background over the contour. I want to disable this because your image has white circles and I don't want to show false contouring. This is what I get with the above code:
Therefore, what you can take away from this is the centers and radii variables. centers will give you the centre of each detected circle while radii will tell you what the radii is for each circle.
Now, if you want to simulate what regionprops is doing, we can iterate through all of the detected circles and physically draw them onto a 2D map where each circle would be labeled by an ID number. As such, we can do something like this:
[X,Y] = meshgrid(1:size(im,2), 1:size(im,1));
IDs = zeros(size(im));
for idx = 1 : numel(radii)
r = radii(idx);
cen = centers(idx,:);
loc = (X - cen(1)).^2 + (Y - cen(2)).^2 <= r^2;
IDs(loc) = idx;
end
We first define a rectangular grid of points using meshgrid and initialize an IDs array of all zeroes that is the same size as the image. Next, for each pair of radii and centres for each circle, we define a circle that is centered at this point that extends out for the given radius. We then use these as locations into the IDs array and set it to a unique ID for that particular circle. The result of IDs will be that which resembles the output of bwlabel. As such, if you want to extract the locations of where the idx circle is, you would do:
cir = IDs == idx;
For demonstration purposes, this is what the IDs array looks like once we scale the IDs such that it fits within a [0-255] range for visibility:
imshow(IDs, []);
Therefore, each shaded circle of a different shade of gray denotes a unique circle that was detected with imfindcircles.
However, the shades of gray are probably a bit ambiguous for certain coins as this blends into the background. Another way that we could visualize this is to apply a different colour map to the IDs array. We can try using the cool colour map, with the total number of colours to be the number of unique circles + 1 for the background. Therefore, we can do something like this:
cmap = cool(numel(radii) + 1);
RGB = ind2rgb(IDs + 1, cmap);   % +1 so the zero-valued background maps to the first colour
imshow(RGB);
The above code will create a colour map such that each circle gets mapped to a unique colour in the cool colour map. The next line applies a mapping where each ID gets associated with a colour with ind2rgb and we finally show the image.
This is what we get:
Edit: the following solution is better suited to scenarios where one does not require fitting the exact circumferences, although simple heuristics could be used to approximate the radii of the coins in the original image based on the centres found in the eroded one.
Assuming you have access to the Image Processing Toolbox, try imerode on your original black and white image. It will apply an erosion morphological operator to your image. In fact, the MATLAB documentation page for that function has an example strikingly similar to your problem/image, and they use a disk-shaped structuring element.
Run the following code (based on the example linked above) assuming the image you submitted is called ima.jpg and is local to the code:
ima=imread('ima.jpg');
se = strel('disk',50);
eroded = imerode(ima,se);
imshow(eroded)
and you will see the image that follows as output. After you do this, you can use bwlabel to label the connected components and compute whatever properties you may want, for example, count the number of coins or detect their centers.
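A minimal sketch of that last step, continuing from the eroded image above (thresholding with > 0 is an assumption in case the eroded image is not already logical):
[L, numCoins] = bwlabel(eroded > 0);       % label the separated coins
stats = regionprops(L, 'Centroid');        % centre of each eroded blob
centers = cat(1, stats.Centroid);          % numCoins x 2 array of (x,y) centres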
I've a binary image containing an object as illustrated in the figure below. The centerline of the object is depicted in red. For each pixel belonging to the object, I would like to relabel it with a color. For instance, pixels whose orthogonal distance to the centerline is less than half the distance from the centerline to the object boundary (i.e., pixels closer to the centerline than to the boundary) should be labeled blue, otherwise green. An illustration is given below. Any ideas?
Also, how could I fit a 1D gaussian centered in the object centerline and orthogonal to it?
The image in full resolution can be found under: http://imgur.com/AUK9Hs9
Here is what comes to mind (providing you have the Image Processing Toolbox):
Create two binary images: one, BWin, with 1 (true) pixels at the location of your red line, and one, BWout, that is the opposite of your white region (1 outside the region and 0 (false) inside).
Like this:
BWin:
BWout:
Then apply the Euclidean distance transform to both using bwdist:
Din = bwdist(BWin);
Dout = bwdist(BWout);
You now have two images whose pixel intensities represent the Euclidean distance to the closest non-zero pixel.
Now subtract one from the other; the values of the difference will be positive on one side of the equidistant line and negative on the other:
blueMask=Din-Dout>0;
greenMask=~BWout & blueMask;
You can then populate the RGB layer using the masks:
Result=zeros([size(BWin) 3]); % 3-channel RGB image the same size as the masks
Result(:,:,1)=BWin;
Result(:,:,2)=greenMask;
Result(:,:,3)=~blueMask & ~BWin;
imshow(Result);
If I explain why, this might make more sense
I have a logical matrix (103x3488), the output of a photo of a measuring staff that has been run through edge detection (1 = edge, 0 = no edge). Aim: to calculate the distance in pixels between the graduations on the staff. Problem: the staff sags in the middle.
Idea: the user inputs the coordinates (using ginput or something) of each end of the staff and the midpoint of the sag; then, if the edges between these points can be extracted into arrays, I can easily find the locations of the edges.
Any way of extracting an array from a matrix in this manner?
Also open to other ideas; I've only been using MATLAB for a month, so most functions are unknown to me.
edit:
Link to image
It shows a small area of the matrix, so in this example 1 and 2 are the points I want to sample between, and I'd want to return the points that occur along the red line.
Cheers
Try this
dat=imread('83zlP.png');
figure(1)
pcolor(double(dat))
shading flat
axis equal
% get the line ends
gi=floor(ginput(2))
x=gi(:,1);
y=gi(:,2);
xl=min(x):max(x); % line pixel x coords
yl=floor(interp1(x,y,xl)); % line pixel y coords
pdat=nan(length(xl),1);
for i=1:length(xl)
pdat(i)=dat(yl(i),xl(i));
end
figure(2)
plot(1:length(xl),pdat)
peaks=find(pdat>40); % threshold for peak detection
bigpeak=peaks(diff(peaks)>10); % threshold for selecting only edge of peak
hold all
plot(xl(bigpeak),pdat(bigpeak),'x')
meanspacex=mean(diff(xl(bigpeak)));
meanspacey=mean(diff(yl(bigpeak)));
meanspace=sqrt(meanspacex^2+meanspacey^2);
The vector pdat contains the pixel values along the line you selected. meanspace is the edge spacing in pixel units. The thresholds might need fiddling with, depending on the image.
After seeing the image, I'm not sure where the "sagging" you're referring to is taking place. The image is rotated, but you can fix that using imrotate. The angle it needs to be rotated by should be easy enough to find; just take the coordinates A and B and use the inverse tangent to get the offset from 0 degrees.
Regarding the points, once it's aligned straight, all you need to do is specify a row in the image matrix (it would be a 1 x 3488 vector) and use find to get the non-zero indices. As the rotation may have interpolated the pixels somewhat, you may get more than one index per "line", but they'll be identifiable as consecutive numbers, and you can just average them to get an approximate value.
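A minimal sketch of that approach, assuming edges is the 103x3488 logical matrix, A and B are the user-picked (x,y) end points, and the sampled row index is an arbitrary assumption:
theta = atan2d(B(2) - A(2), B(1) - A(1));   % tilt of the staff in degrees
straight = imrotate(edges, -theta);         % rotate so the staff runs horizontally
row = 52;                                   % pick a row that crosses the graduations
idx = find(straight(row, :));               % column indices of edge pixels in that row
spacing = mean(diff(idx));                  % approximate graduation spacing in pixels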