MATLAB area calculation

If I have an arbitrary shape (attached is a really simple mock-up), how would I estimate the area of the enclosed surface in MATLAB? To get some points along the curve, I used the ginput command to trace a rough estimate of the curve, with unequal spacing between the points. I want to estimate the enclosed area, but I believe the trapz command would overestimate it due to the overlap (please correct me if I am wrong here). Is there a more accurate way to obtain the area?
Thanks!

Well, you didn't really give enough info to solve the problem entirely, but here's one approach you can take to find the boundary automatically in order to calculate the area:
% Get image and convert to logical mask
img = ~im2bw(imread('polyarea.jpg'));
% Fill the hole
img = imfill(img,'holes');
% Get boundary
b = bwboundaries(img);
% Approximate area of boundary
area = polyarea(b{1}(:,1), b{1}(:,2));
% Print area
disp(['Area: ' num2str(area)]);
imshow(img);
hold on;
plot(b{1}(:,2),b{1}(:,1),'go');
The idea is: given an input image, form a logical mask, get the boundary of the mask, and then approximate the area enclosed by the boundary using polyarea.
The output is:
Area: 228003
Additionally, you could use regionprops(img,'Area'), which outputs:
ans =
Area: 229154
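Also, since you already have points from ginput, polyarea can be applied to them directly - a minimal sketch, assuming the clicked points trace the curve once, in order, without self-intersection:
figure; imshow(img); % display the shape to trace
[x, y] = ginput; % click along the curve, press Enter to finish
approxArea = polyarea(x, y); % shoelace formula on the clicked polygon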

Cropping circles in an image with imfindcircles

I am writing a road sign recognition program in MATLAB and I want to recognize circular road signs, so I use the function imfindcircles. I would like to crop only the circular road signs and put each one in a separate figure. However, each image also contains other road signs (triangles or squares) that I don't want. I have no idea how to do this. Here is my code:
[im_bw,map] = imread('roadsign.JPG'); % black and white image
S = regionprops(im_bw,'Extrema','Centroid','BoundingBox');
[centers, radii] = imfindcircles(im_bw,[12 40]);
for k = 1:length(S)
    im_cercle = imcrop(im_bw, S(k).BoundingBox);
    im_cercle = padarray(im_cercle, [20 20]); % put each road sign in a small figure
    if radii(k) ~= 0 % Error
        figure, imshow(im_cercle); title 'Circle spotted'; % show every circular road sign in a figure
    else
        figure('visible','off'), imshow(im_cercle); title 'wrong roadsign';
    end
end
I tried some other conditions with centers and radii, but when I execute the code I get dimension errors, or sometimes it shows me a shape that is not a circle. I also tried to set a flag only when it finds a circle, but without results. Can you help me please?
Thanks in advance.
What do you expect the output of regionprops to be? It isn't picking out the circles - it's picking out all the "areas" (anything that is a connected region in im_bw). In addition, while imfindcircles can find circles that overlap with other things, regionprops will detect overlapping areas as a single object.
On the other hand, you call imfindcircles and then do nothing with the output.
Rather than doing anything with regionprops, just use the values of centers, radii to define a bounding box around each detected circle (optionally with some additional padding), crop that area out of the image and save/display it.
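For example, a minimal sketch using the centers and radii returned by imfindcircles in the question (the padding value is an arbitrary choice):
pad = 20; % extra padding around each circle
for k = 1:numel(radii)
    c = centers(k,:); % [x y] centre of circle k
    r = radii(k) + pad;
    rect = [c(1)-r, c(2)-r, 2*r, 2*r]; % [xmin ymin width height] for imcrop
    im_circle = imcrop(im_bw, rect);
    figure, imshow(im_circle); title('Circle spotted');
end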

Calculate the average of part of the image

How can I calculate the average of a certain area in an image using MATLAB?
For example, if I have an intensity image with an area that is brighter than the rest, and I want to know the average intensity there, how do I calculate it?
I think I can find the coordinates of the bright area by using the impixelinfo command.
If there is another, more efficient way to find the coordinates, I would also be glad to know it.
Once I know the coordinates, how do I calculate the average of that part of the image?
You could use one of the imroi-type functions in MATLAB, such as imfreehand:
I = imread('cameraman.tif');
h = imshow(I);
e = imfreehand;
% now select area on image - do not close image
% this makes a mask from the area you just drew
BW = createMask(e);
% this takes the mean of pixel values in that area
I_mean = mean(I(BW));
Alternatively, look into using regionprops, especially if there is likely to be more than one such feature in the image. Here, I find points in the image above some threshold intensity and then use imdilate to pick out a small area around each of those points. This presumes the points above the threshold are well separated, which may not be the case; if they are too close, imdilate will merge them into one area.
thresh = 200; % example threshold; pick a value suited to your image
se = strel('disk',5);
BW = imdilate(I > thresh, se);
s = regionprops(BW, I, 'MeanIntensity');
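The per-region means can then be collected into an ordinary vector, for instance:
meanVals = [s.MeanIntensity]; % one entry per detected region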

MATLAB Image Processing - Find Edge and Area of Image

As a preface: this is my first question - I've tried my best to make it as clear as possible, but I apologise if it doesn't meet the required standards.
As part of a summer project, I am taking time-lapse images of an internal melt figure growing inside a crystal of ice. For each of these images I would like to measure the perimeter of, and area enclosed by the figure formed. Linked below is an example of one of my images:
The method that I'm trying to use is the following:
1. Load image, crop, and convert to grayscale
2. Process to reduce noise
3. Find edge/perimeter
4. Attempt to join edges
5. Fill perimeter with white
6. Measure area and perimeter using regionprops
This is the code that I am using:
clear; close all;
% load image and convert to grayscale
tyrgb = imread('TyndallTest.jpg');
ty = rgb2gray(tyrgb);
figure; imshow(ty)
% apply a Wiener filter to remove noise.
% N is a measure of the window size for detecting coherent features
N=20;
tywf = wiener2(ty,[N,N]);
tywf = tywf(N:end-N,N:end-N);
% rescale the image adaptively to enhance contrast without enhancing noise
tywfb = adapthisteq(tywf);
% apply a canny edge detection
tyedb = edge(tywfb,'canny');
%join edges
diskEnt1 = strel('disk',8); % radius of 8
tyjoin1 = imclose(tyedb,diskEnt1);
figure; imshow(tyjoin1)
It is at this stage that I am struggling. The edges do not quite join, no matter how much I play around with the morphological structuring element. Perhaps there is a better way to complete the edges? Linked is an example of the figure this code outputs:
The reason that I am trying to join the edges is so that I can fill the perimeter with white pixels and then use regionprops to output the area. I have tried using the imfill command, but cannot seem to fill the outline as there are a large number of dark regions to be filled within the perimeter.
Is there a better way to get the area of one of these melt figures that is more appropriate in this case?
As background research: I can make this method work for a simple image consisting of a black circle on a white background using the code below. However, I don't know how to edit it to handle more complex images with less well-defined edges.
clear all
close all
clc
%% Read in RGB image from directory
RGB1 = imread('1.jpg') ;
%% Convert RGB image to grayscale image
I1 = rgb2gray(RGB1) ;
%% Transform Image
%CROP
IC1 = imcrop(I1,[74 43 278 285]);
%BINARY IMAGE
BW1 = im2bw(IC1); %Convert to binary image so the boundary can be traced
%FIND PERIMETER
BWP1 = bwperim(BW1);
%Traces perimeters of objects & colours them white (1).
%Sets all other pixels to black (0)
%Doing the same job as an edge detection algorithm?
%FILL PERIMETER WITH WHITE IN ORDER TO MEASURE AREA AND PERIMETER
BWF1 = imfill(BWP1); %This opens figure and allows you to select the areas to fill with white.
%MEASURE PERIMETER
D1 = regionprops(BWF1, 'area', 'perimeter');
%Returns an array containing the properties area and perimeter.
%D1(1) returns the perimeter of the box and an area value identical to that
%perimeter? The box must be bounded by a perimeter.
%D1(2) returns the perimeter and area of the section filled in BWF1
%% Display Area and Perimeter data
D1(2)
I think there is room to improve the edge detection in addition to the morphological transformations; for instance, the following produced what looked to me like a relatively satisfactory perimeter:
tyedb = edge(tywfb,'sobel',0.012);
%join edges
diskEnt1 = strel('disk',7); % radius of 7
tyjoin1 = imclose(tyedb,diskEnt1);
In addition, I used bwfill interactively to fill in most of the interior. It should be possible to fill the interior programmatically, but I did not pursue this (a sketch of one possibility is given below).
% interactively fill internal regions
[ny, nx] = size(tyjoin1);
figure; imshow(tyjoin1)
tyjoin2 = tyjoin1;
titl = sprintf('click on a region to fill\nclick outside window to stop...');
while 1
    pts = ginput(1);
    tyjoin2 = bwfill(tyjoin2, pts(1,1), pts(1,2), 8);
    imshow(tyjoin2)
    title(titl)
    if (pts(1,1)<1 || pts(1,1)>nx || pts(1,2)<1 || pts(1,2)>ny), break, end
end
This was the result I obtained
The "fractal" properties of the perimeter may be of importance to you however. Perhaps you want to retain the folds in your shape.
You might want to consider Active Contours. This will give you a continuous boundary of the object rather than patchy edges.
Below are links to:
A book: http://www.amazon.co.uk/Active-Contours-Application-Techniques-Statistics/dp/1447115570/ref=sr_1_fkmr2_1?ie=UTF8&qid=1377248739&sr=8-1-fkmr2&keywords=Active+shape+models+Andrew+Blake%2C+Michael+Isard
A demo: http://users.ecs.soton.ac.uk/msn/book/new_demo/Snakes/
Some MATLAB code on the File Exchange: http://www.mathworks.co.uk/matlabcentral/fileexchange/28149-snake-active-contour
A description of how to implement it: http://www.cb.uu.se/~cris/blog/index.php/archives/217
Using the implementation on the File Exchange, you can get something like this:
%% Load the image
% You could use the segmented image obtained previously
% and then apply the snake to that (although I use the original image).
% This will probably make the snake work better, since the edges
% in your image are not that well defined.
% Make sure the original and the segmented image
% have the same size. They don't at the moment.
I = imread('33kew0g.jpg');
% Convert the image to double data type
I = im2double(I);
% Show the image and select some points with the mouse (at least 4)
% figure, imshow(I); [y,x] = getpts;
% I have pre-selected the coordinates already
x = [ 525.8445 473.3837 413.4284 318.9989 212.5783 140.6320 62.6902 32.7125 55.1957 98.6633 164.6141 217.0749 317.5000 428.4172 494.3680 527.3434 561.8177 545.3300];
y = [ 435.9251 510.8691 570.8244 561.8311 570.8244 554.3367 476.3949 390.9586 311.5179 190.1085 113.6655 91.1823 98.6767 106.1711 142.1443 218.5872 296.5291 375.9698];
% Make an array with the selected coordinates
P=[x(:) y(:)];
%% Start Snake Process
% You probably have to fiddle with the parameters
% a bit more than I have
Options=struct;
Options.Verbose=true;
Options.Iterations=1000;
Options.Delta = 0.02;
Options.Alpha = 0.5;
Options.Beta = 0.2;
figure(1);
[O,J]=Snake2D(I,P,Options);
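The returned O holds the snake's final contour, so the area and perimeter could be estimated from it directly - a sketch, assuming O is an n-by-2 list of [row, column] points (check Snake2D's documentation for its exact output convention):
snakeArea = polyarea(O(:,2), O(:,1)); % shoelace area of the final contour
segs = diff([O; O(1,:)]); % edge vectors of the closed contour
snakePerim = sum(hypot(segs(:,1), segs(:,2))); % total contour length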
If the end result is an area/diameter estimate, then why not try to find maximal and minimal shapes that fit within and around the outline, and then use those shapes' areas to bracket the total area? For instance, compute a minimal circle around the edge set, then a maximal circle inside the edges, and use these to estimate the diameter and area of the actual shape.
The advantage is that your bounding shapes can be fit in a way that minimizes error (unbounded edges) while optimizing size either up or down for the inner and outer shape, respectively.
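A minimal sketch of the idea, assuming the edge image tyedb from above and using the centroid of the edge pixels as the circle centre (which is only valid if the shape is roughly star-shaped about its centroid):
[yb, xb] = find(tyedb); % edge pixel coordinates
cx = mean(xb); cy = mean(yb); % centroid as circle centre
d = hypot(xb - cx, yb - cy); % distance of each edge pixel from the centre
outerArea = pi * max(d)^2; % circle enclosing all edge pixels
innerArea = pi * min(d)^2; % circle reaching only the nearest edge pixel
areaEstimate = (outerArea + innerArea) / 2; % crude bracket of the true area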

Hough transform in MATLAB without using hough function

I found an implementation of the Hough transform in MATLAB at Rosetta Code, but I'm having trouble understanding it. Also I would like to modify it to show the original image and the reconstructed lines (de-Houghing).
Any help in understanding it and de-Houghing is appreciated. Thanks
Why is the image flipped?
theImage = flipud(theImage);
I can't wrap my head around the norm function. What is its purpose, and can it be avoided?
EDIT: norm just computes the Euclidean length: sqrt(width^2 + height^2)
rhoLimit = norm([width height]);
Can someone provide an explanation of how/why rho, theta, and houghSpace are calculated?
rho = (-rhoLimit:1:rhoLimit);
theta = (0:thetaSampleFrequency:pi);
numThetas = numel(theta);
houghSpace = zeros(numel(rho),numThetas);
How would I de-Hough the Hough space to recreate the lines?
Calling the function using a 10x10 image of a diagonal line created using the identity (eye) function
theImage = eye(10)
thetaSampleFrequency = 0.1
[rho,theta,houghSpace] = houghTransform(theImage,thetaSampleFrequency)
The actual function
function [rho,theta,houghSpace] = houghTransform(theImage,thetaSampleFrequency)
    %Define the Hough space
    theImage = flipud(theImage);
    [width,height] = size(theImage);
    rhoLimit = norm([width height]);
    rho = (-rhoLimit:1:rhoLimit);
    theta = (0:thetaSampleFrequency:pi);
    numThetas = numel(theta);
    houghSpace = zeros(numel(rho),numThetas);
    %Find the "edge" pixels
    [xIndicies,yIndicies] = find(theImage);
    %Preallocate space for the accumulator array
    numEdgePixels = numel(xIndicies);
    accumulator = zeros(numEdgePixels,numThetas);
    %Precalculate sine and cosine to increase speed. In addition to
    %precalculating sine and cosine, we also multiply them by the proper
    %pixel weights, such that the rows are indexed by the pixel position
    %and the columns by the thetas.
    %Example: cosine(3,:) is 2*cos(0 to pi)
    %         cosine(:,1) is (0 to width-1)*cos(0)
    cosine = (0:width-1)'*cos(theta); %matrix outer product
    sine = (0:height-1)'*sin(theta); %matrix outer product
    accumulator((1:numEdgePixels),:) = cosine(xIndicies,:) + sine(yIndicies,:);
    %Scan over the thetas and bin the rhos
    for i = (1:numThetas)
        houghSpace(:,i) = hist(accumulator(:,i),rho);
    end
    pcolor(theta,rho,houghSpace);
    shading flat;
    title('Hough Transform');
    xlabel('Theta (radians)');
    ylabel('Rho (pixels)');
    colormap('gray');
end
The Hough transform is a "voting" approach in which each image point casts a vote for the existence of certain lines (not line segments) in the image. The voting is carried out in the parameter space for a line: the polar-coordinate representation of the line's normal vector, in which a line is written as rho = x*cos(theta) + y*sin(theta).
We discretize the parameter space and allow each image point to suggest parameters compatible with a line through that point. Each of your questions can be addressed in terms of how the parameter space is treated in the code. Wikipedia has a good article with worked examples that might clarify things if you are having any conceptual trouble.
For your specific questions:
The image is flipped so that the origin is the bottom-right corner. As far as I can tell, this step is not technically necessary; it changes the outcome somewhat due to discretization issues. The other implementations on Rosetta Code do not flip the image.
rhoLimit holds the maximum radius of an image point in polar coordinates (recall the norm of a vector is its magnitude).
rho and theta are discretizations of the polar coordinate plane according to a sampling rate. houghSpace creates a matrix with an element for each possible combination of the discrete rho/theta values.
The Hough Transform does not specify the lengths of putative lines; the peaks in the voting space just specify the polar coordinates of the normal vector of the line. You can "de-Hough" by selecting the peaks and drawing the corresponding lines, or perhaps by drawing every possible line and using the number of votes as a grayscale weight. It is not possible to re-create the original image from the Hough Transform, just the lines identified by the transform (and your thresholding scheme on the votes).
Following the example from the question produces the graph below. The placement of grid lines and the datatips cursor can be a bit misleading (though the variable values in the datatip are correct). Since this is an image of the parameter space and not the image space, the sampling rate we chose determines the number of bins in each variable. At this sampling rate, the image points are compatible with more than one possible line; in other words, our lines have subpixel resolution, in the sense that they cannot be drawn without overlap in a 10x10 image.
Once we have chosen a peak, such as the one corresponding to the line with normal (rho,theta) = (6.858,0.9), we can draw that line in an image however we choose (a sketch follows below). Automated peak picking, that is, thresholding to find the highly up-voted lines, is its own problem; you could ask another question about the topic on DSP or about a particular algorithm here.
For example methods, see the code and documentation of MATLAB's houghpeaks and houghlines functions.
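To make the de-Houghing concrete, here is a minimal sketch that draws the line for one picked peak, using the (rho,theta) = (6.858,0.9) values read off the example above; the coordinate conventions of houghTransform (flipud, zero-based weights) may require adjusting the overlay to your image:
rhoPeak = 6.858; thetaPeak = 0.9; % peak read off the Hough space
xs = linspace(0, 9, 100); % x range of the 10x10 example image
ys = (rhoPeak - xs*cos(thetaPeak)) / sin(thetaPeak); % rho = x*cos(theta) + y*sin(theta), solved for y
figure; plot(xs, ys, 'r', 'LineWidth', 1.5);
axis([0 9 0 9]); axis square;
title('Line recovered from the picked Hough peak');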

Matlab - Propagate points orthogonally on to the edge of shape boundaries

I have a set of points that I want to propagate onto the edge of a shape boundary defined by a binary image. The shape boundary is defined by a 1 px wide white edge.
I have the coordinates of these points stored in a 2-row by n-column matrix. The shape forms a concave boundary, with no holes, made up of around 2500 points. I have approximately 80 to 150 points that I wish to propagate onto the shape boundary.
I want to cast a ray from each point in the set in an orthogonal direction and detect the point at which it intersects the shape boundary. The orthogonal direction has already been determined; for the required purposes it is calculated as the normal of the contour at each point, using point-1 and point+1.
What would be the best method to do this?
Are there ray-tracing algorithms that could be used here?
Thank you very much in advance for any help!
EDIT: I have tried to make the question much clearer and added an image describing the problem. In the image, the grey line represents the shape contour, the red dots the points I want to propagate, and the green line an imaginary orthogonally cast ray.
(image: http://img504.imageshack.us/img504/3107/orth.png)
ANOTHER EDIT: For clarification, I have posted the code used to calculate the normals for each point, where xt and yt are vectors storing the coordinates of each point. After calculating the normal, it can be propagated using the linspace function and the requested length of the orthogonal line.
% derivatives of contour
dx = [xt(2)-xt(1) (xt(3:end)-xt(1:end-2))/2 xt(end)-xt(end-1)];
dy = [yt(2)-yt(1) (yt(3:end)-yt(1:end-2))/2 yt(end)-yt(end-1)];
% normals of contour points
l = sqrt(dx.^2 + dy.^2);
nx = -dy./l;
ny = dx./l;
normals = [nx,ny];
It depends on how many rays you want to test against one shape. If you have one shape and many tests, the easiest approach is probably to convert your shape coordinates to polar coordinates, which implicitly represent your solution already. This may not be a very efficient solution, however, if you have different shapes and only a few tests for every shape.
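A minimal sketch of that idea for rays cast from the origin, assuming the boundary coordinates are stored in vectors xs and ys:
[thetaB, rB] = cart2pol(xs, ys); % boundary points in polar form about the origin
rayAngle = pi/4; % direction of the ray to test
dAng = mod(thetaB - rayAngle + pi, 2*pi) - pi; % wrapped angular difference
[dummy, idx] = min(abs(dAng)); % boundary point closest in angle to the ray
hitPoint = [xs(idx), ys(idx)]; % approximate ray/boundary intersection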
Update based on the edited question:
If the rays can start from arbitrary points, not only from the origin, you have to test against all the points. This can be done easily by transforming your shape boundary such that the ray to test starts at the origin and points along a coordinate direction (the positive x-axis in my example code):
% vector of shape boundary points (assumed to be image coordinates, i.e. integers)
shapeBoundary = [xs, ys];
% define the start point and direction you want to test
startPoint = [xsp, ysp];
testVector = [xv, yv] / norm([xv, yv]); % unit direction vector
% now transform the shape boundary
shapeBoundaryTrans(:,1) = shapeBoundary(:,1) - startPoint(1);
shapeBoundaryTrans(:,2) = shapeBoundary(:,2) - startPoint(2);
% rotation that maps testVector onto the positive x-axis
rotMatrix = [testVector(1), testVector(2); ...
            -testVector(2), testVector(1)];
% somewhat strange transformation to keep it vectorized
shapeBoundaryTrans = shapeBoundaryTrans * rotMatrix';
% now the test is easy: find the points close to the positive x-axis
selector = (abs(shapeBoundaryTrans(:,2)) < 0.5) & (shapeBoundaryTrans(:,1) > 0);
% stash the original point indices in the no-longer-needed y column
shapeBoundaryTrans(:,2) = 1:size(shapeBoundaryTrans, 1);
shapeBoundaryReduced = shapeBoundaryTrans(selector, :);
if (~isempty(shapeBoundaryReduced))
    [dummy, idx] = min(shapeBoundaryReduced(:, 1));
    collIdx = shapeBoundaryReduced(idx, 2);
    % you have a collision with point collIdx of your shapeBoundary
else
    % no collision
end
This could be done in a nicer way probably, but you get the idea...
If I understand your problem correctly (project each point onto the closest point of the shape boundary), you can
use sub2ind to convert the "2 row by n column matrix" description to a BW image with white pixels, something like
myimage = zeros(imagesize);
myimage(sub2ind(imagesize, x_coords, y_coords)) = 1; % sub2ind expects (size, row, col)
use imfill to fill the outside of the boundary
run [D,L] = bwdist(BW) on the resulting image, and just read the answers from L.
Should be fairly straightforward.
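Putting those steps together, a rough sketch, assuming points is the 2-row by n-column matrix from the question (row 1 = x, row 2 = y) and BW is the 1 px wide boundary image; note this projects each point to the nearest boundary pixel rather than along its normal:
[D, L] = bwdist(BW); % L(i,j) = linear index of the nearest boundary pixel
idx = sub2ind(size(BW), round(points(2,:)), round(points(1,:))); % linear indices of the query points
[yProj, xProj] = ind2sub(size(BW), L(idx)); % coordinates of each projected point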