Area integral invariant computation - matlab

Area integral invariant is a type of signature used in image processing. Does anyone know the algorithm for the computation of AII?
i.e. I want to calculate the area enclosed by a boundary and the intersected circle...
The boundary is not a curve with an equation but comes from an arbitrary profile. The image below is just a schematic drawing; the real boundary can be much more complex, with the enclosed area in various positions along the boundary, i.e. top, bottom, left side...
I want the red area. I am using MATLAB, and the images are mostly binary.

If you know the equation of the circle and the line, then it's quite easy to do in an image:
1. Select the pixels that are inside the circle (easily done with the equation of the circle). If you need to compute the AII as a ratio, count the pixels you have.
2. Separate the pixels above and below the line. You can do this easily if you know the equation of the line, or the value of the line in each column: go column by column and discard the pixels that are above the value of the line. Count the result.
That's it! If you want the AII without a ratio, then the number of pixels from step 2 is the result. If you want it as a ratio, divide the number of pixels from step 2 by the number of pixels from step 1.
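A minimal MATLAB sketch of these steps, assuming a binary image bw, a circle with centre (cx, cy) and radius r, and a vector lineY giving the row of the line in each column (all hypothetical names):
[rows, cols] = size(bw);
[X, Y] = meshgrid(1:cols, 1:rows);
inCircle = (X - cx).^2 + (Y - cy).^2 <= r^2;   % step 1: pixels inside the circle
nCircle  = nnz(inCircle);
keep  = Y > lineY(X);                          % step 2: discard pixels above the line
nArea = nnz(inCircle & keep);                  % for an arbitrary profile, intersect with
                                               % the object mask instead: nnz(inCircle & bw)
aii = nArea / nCircle;                         % AII as a ratio; use nArea alone otherwise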

If you only have the image and no equations, you can still select all the pixels you want by giving your algorithm one pixel inside the area you want to measure and then recursively checking all of its neighbours, adding them to your area if they are white. When you are done, just count the pixels you have. The result is, in some sense, the area of the region you wanted.
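A minimal sketch of this region-growing idea in MATLAB, assuming bw is the binary image and (seedCol, seedRow) is a pixel you know lies inside the area of interest (hypothetical names); bwselect grows the region over 8-connected white pixels for you:
region = bwselect(bw, seedCol, seedRow, 8);   % all white pixels connected to the seed
areaInPixels = nnz(region);                   % the area, in pixels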

Related

Problems in implementing the integral image in MATLAB

I tried to implement the integral image in MATLAB by the following:
im = imread('image.jpg');
ii_im = cumsum(cumsum(double(im)')');
im is the original image and ii_im is the integral image.
The problem here is that the values in ii_im go beyond the 0 to 255 range.
When using imshow(ii_im), I always get a very bright image which I am not sure is the correct result. Am I correct here?
You're implementing the integral image calculations right, but I don't understand why you would want to visualize it - especially since the sums will go beyond any normal integer range. This is expected as you are performing a summation of intensities bounded by larger and larger rectangular neighbourhoods as you move to the bottom right of the image. It is inevitable that you will get large numbers towards the bottom right. Also, you will obviously get a white image when trying to show this image because most of the values will go beyond 255, which is visualized as white.
If I can add something, one small optimization I have is to get rid of the transposing and use cumsum to specify the dimension you want to work on. Specifically, you can do this:
ii_im = cumsum(cumsum(double(im), 1), 2);
It doesn't matter which dimension you specify first (2 then 1, or 1 then 2); as long as you sum over both dimensions, the summation of all pixels within each bounded area will be the same.
Back to your question for display, if you really, really, really really... I mean really want to, you can normalize the contrast by doing:
imshow(ii_im, []);
However, what you should expect is a gradient image which starts to be dark from the top, then becomes brighter when you get to the bottom right of the image. Remember, each point in the integral image calculates the total summation of pixel intensities bounded by the top left corner of the image to this point, thus forming a rectangle of intensities you need to sum over. Therefore, as we move further down and to the right of the integral image, the total summation should increase.
With the cameraman.tif image, this is the original image, as well as its integral image visualized using the above command:
Either way, there is absolutely no reason why you would want to visualize it. You would use this directly with whatever application requires it (adaptive thresholding, Viola-Jones detector, etc.)
Another option could be applying a log operation for each value in the integral image. Something like:
imshow(log(1 + ii_im), []);
However, this will make most of the pixels have the same contrast and this is probably not useful. This is what I get with cameraman.tif:
The moral of this story is that you need some sort of contrast normalization so that you can fit all of the values in your integral image within the confines of the data type that is used to display the image on the screen using imshow.
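As a side note on why the integral image is useful without ever displaying it: once you have ii_im, the sum of intensities inside any axis-aligned rectangle takes only four lookups. A minimal sketch, where r1, r2, c1, c2 are hypothetical row/column bounds of the rectangle:
ii_pad = padarray(ii_im, [1 1], 0, 'pre');    % zero row/column so border rectangles need no special case
boxSum = ii_pad(r2+1, c2+1) - ii_pad(r1, c2+1) - ii_pad(r2+1, c1) + ii_pad(r1, c1);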

Matlab image processing - problems with recognizing circles [duplicate]

I have an image that includes circular, ellipsoidal, square objects and similar shapes. I want to keep only the circular objects. I applied a filter using the Solidity and Eccentricity levels of the objects, but I could not remove the square objects. Square objects without sharp corners have nearly the same Solidity and Eccentricity levels as circular objects.
My question is: is there any other parameter or way to detect square objects?
You can compare the area of the mask to its perimeter using the following formula
ratio = 4 * pi * Area / ( Perimeter^2 )
For circles this ratio should be very close to one; for other shapes it should be significantly lower.
See this tutorial for an example.
The rationale behind this formula: circles are optimal in their perimeter-area ratio - they have the maximum area for a given perimeter. Given the perimeter, you can estimate the radius of the equivalent circle from Perimeter = 2*pi*R; using this estimated R you can compute the "equivalent circle area" as eqArea = pi*R^2. Now you only need to check the ratio between the actual area of the shape and this "equivalent area".
Note: since Area and Perimeter of objects in mask are estimated based on the pixel-level discretization these estimates may be quite crude especially for small shapes. Consider working with higher resolution masks if you notice quantization/discretization errors.
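A minimal sketch of this filtering in MATLAB, assuming bw is the binary mask and 0.9 is a threshold you would tune for your data:
stats = regionprops(bw, 'Area', 'Perimeter', 'PixelIdxList');
keepCircles = false(size(bw));
for k = 1:numel(stats)
    ratio = 4 * pi * stats(k).Area / stats(k).Perimeter^2;
    if ratio > 0.9                            % close to 1 => roughly circular
        keepCircles(stats(k).PixelIdxList) = true;
    end
end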
There is also a Hough-transform-based function, imfindcircles, for finding circles within an image, which is what you needed in the first place.
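A minimal usage sketch, assuming bw is your binary image and the circles have radii somewhere between 20 and 50 pixels (a range you would adjust for your data):
[centers, radii] = imfindcircles(bw, [20 50]);
imshow(bw); hold on;
viscircles(centers, radii, 'Color', 'r');     % overlay the detected circles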

Matlab - Concatenation of overlapping blocks with weighted average

I'm looking for a quick way to combine overlapping blocks into one image. Assume the size of the full image and the coordinates of each block within the full image are known. Also assume the blocks are regularly spaced both horizontally and vertically.
The catch - in the overlapping region, a pixel in the output image should get a value according to a weighted average of the corresponding pixels in the overlapping blocks. The weights should be proportional to the distance from the block center.
So, for example, take a pixel location p (relative to the full image coordinates) in the overlapping region between block B1 and B2. Assume the overlap region is due to a horizontal shift only of size h. If B1(p) and B2(p) are the values at that location as they appear in blocks B1,B2, and d1,d2 are the respective distances of p from the center of blocks B1 and B2 then in the output image O the location p will get O(p) = (h-d1)/h*B1(p) + (h-d2)/h*B2(p).
Note that generally, there can be up to 4 overlapping blocks in any region.
I'm looking for the best way to do this in Matlab. Hopefully, for any choice of distance function.
blockproc and the like can help with splitting an image into blocks, but only allow for very basic combination of the results. imfuse comes close to what I need, but offers simple non-weighted alpha blending only. bwdist seems to be useful, but I haven't figured out the most efficient way to put it to use.
You should use the command im2col.
Once you have all your patches as column vectors aligned in one matrix, you'll be able to work on the columns (filtering per patch) and the rows (filtering between patches).
It will be trickier than the classic usage of im2col but it should work.
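If it helps to see the blending itself written out, here is a hedged sketch that skips im2col and simply accumulates a weighted sum and a weight map, then divides at the end. Hypothetical names: blocks is a cell array of equally sized blocks, topLeft(k,:) = [row col] of block k in the full image, imSize = [H W], and the weight matrix w (which you could also build from bwdist or any other distance function) falls off with distance from the block centre:
[blkH, blkW] = size(blocks{1});
[X, Y] = meshgrid(1:blkW, 1:blkH);
w = 1 ./ (1 + hypot(X - (blkW+1)/2, Y - (blkH+1)/2));   % any distance-based weight works here
num = zeros(imSize);                          % weighted sum of block values
den = zeros(imSize);                          % sum of weights
for k = 1:numel(blocks)
    r = topLeft(k,1) + (0:blkH-1);
    c = topLeft(k,2) + (0:blkW-1);
    num(r, c) = num(r, c) + w .* blocks{k};
    den(r, c) = den(r, c) + w;
end
O = num ./ den;                               % weighted average wherever blocks cover the image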

what does the MajorAxisLength property in the regionprops matlab function mean?

I am using the regionprops function in MATLAB to get the MajorAxisLength of an image. Logically, I think this number should not be greater than sqrt(a^2+b^2), in which a and b are the width and height of the image, but for my image it is. My black and white image contains a black circle in the center of the image. I think this is strange. Can anybody help me?
Thanks.
If you look at the code of regionprops (subfunction ComputeEllipseParams), you see that they use the second moment to estimate the ellipsoid radius. This works very well for ellipsoid-shaped features, but not very well for features with holes. The second moment increases if you remove pixels from around the centroid (which is, btw, why they make I-beams). Thus, the bigger the 'hole' in the middle of your image, the bigger the apparent ellipsoid radius.
In your case, you may be better off using the Extrema property of regionprops and calculating the largest radius from there.
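To see the effect concretely, here is a hedged sketch (not the image from the question) of a white square with a black disk cut out of the centre; the remaining white region has a large hole, and its MajorAxisLength comes out larger than the image diagonal:
L = 200;
[X, Y] = meshgrid(1:L, 1:L);
bw = true(L, L);
bw(hypot(X - (L+1)/2, Y - (L+1)/2) <= 0.45*L) = false;   % black disk in the centre
s = regionprops(bw, 'MajorAxisLength');
fprintf('MajorAxisLength = %.1f, image diagonal = %.1f\n', s.MajorAxisLength, hypot(L, L));
% the axis length comes out around 300 versus a diagonal of about 283:
% the hole inflates the second moment and hence the apparent ellipse size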

Techniques for differentiating between circle, rectangle and triangle?

What coding techniques would allow me to differentiate between circles, rectangles, and triangles in black and white image bitmaps?
You could train an Artificial Neural Network to classify the shapes :P
If the noise is low enough to extract curves, approximations can be used: for each shape, select the parameters giving the least error (the method of least squares may help here) and then compare these errors...
If the image is noisy, I would consider the Hough transform - it may be used to detect shapes with a small number of parameters, like circles (it is harder for rectangles and triangles).
Just an idea off the top of my head: scan the (pixel) image line by line, pixel by pixel. When you encounter the first white pixel (assuming the image has a black background), keep its position as a starting point and look at the eight pixels surrounding it in every direction for the next white pixel. If you find an adjacent second pixel, you can establish a directional vector between those two pixels.
Now repeat this until the direction of your vector changes (or the change is above a certain threshold). Keep the last point before the change as the endpoint of your first line and repeat the process for the next line.
Then calculate the angle between the two lines and store it. Now trace the third line. Calculate the angle between the 2nd and 3rd line as well.
If both angles are right angles, you probably found a rectangle; otherwise you probably found a triangle. If you can't find any straight lines, you could conclude that you found a circle.
I know the algorithm is a bit sketchy but I think (with some refinement) it could work if your image's quality is not too bad (too much noise, gaps in the lines etc.).
You are looking for the Hough transform. For an implementation, try the AForge.NET framework; it includes circle and line Hough transforms.