Ignoring MSER components with bounding ellipses overlapping a binary mask - MATLAB

This question may seem a little basic, but I would like some input on an efficient way of doing this.
Suppose I have the following image:
I also have a binary mask image as follows:
I detect MSER features on this image and plot the corresponding bounding ellipses.
I want all those MSER regions removed whose bounding ellipses overlap with the mask image. My issue is that I have a number of such operations to perform and a large number of images to process. So what is the fastest, most efficient way of doing this that requires minimal memory?

It depends on how your ellipses are stored, and perhaps on the size of your image. If they are represented as masks, then I would be tempted to superimpose all the ellipses first and then do an intersection operation with the rectangle. Then you have a mask which you can apply to the original image.
If your ellipses are stored in a symbolic form - like the output of regionprops - it might be more efficient to test them against the rectangle first, and only if they intersect would you convert each into a mask and add it to the overall mask.
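For concreteness, here is a minimal sketch of the per-ellipse test in MATLAB, assuming I is the grayscale input, mask is a logical image of the same size, and the regions come from detectMSERFeatures in the Computer Vision Toolbox. Rasterizing each ellipse over the full image grid is the simple version; restricting the test to each ellipse's bounding box would reduce memory usage further.

regions = detectMSERFeatures(I);
[rows, cols] = size(mask);
[X, Y] = meshgrid(1:cols, 1:rows);

keep = true(regions.Count, 1);
for k = 1:regions.Count
    c  = regions.Location(k, :);   % ellipse centre [x y]
    ax = regions.Axes(k, :);       % [major minor] axis lengths
    th = regions.Orientation(k);   % orientation in radians
    % Rasterize the bounding ellipse over the image grid
    xr =  (X - c(1)) * cos(th) + (Y - c(2)) * sin(th);
    yr = -(X - c(1)) * sin(th) + (Y - c(2)) * cos(th);
    inEllipse = (xr / (ax(1)/2)).^2 + (yr / (ax(2)/2)).^2 <= 1;
    % Drop the region if its ellipse touches the mask
    keep(k) = ~any(mask(inEllipse));
end
filtered = regions(keep);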

Related

How to check if two shapes in a binary image are similar in MATLAB?

I have two binary images, each of which have a single white filled parallelogram and a black background. The only difference between the two images is that the parallelograms are in different locations and are slightly different from one another in shape. All the parameters between the two images are the same except for that one change.
I want to check how similar the shape of the two parallelograms are, by using some sort of comparing measure.
I looked into the ssimval function in MATLAB, but it seems to take the whole image into consideration rather than just the white blobs. Is there any other function I can use for this purpose?
For a visual check of similarity you can plot their probability density functions, and for a numeric measure compute some similarity score, such as the KL divergence.
A simple approach: segment your binary image with the bwlabel function, then use regionprops to find the perimeter and area of the segment you care about. The centroid of the region is another useful point of comparison.
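A minimal sketch of that comparison, assuming bw1 and bw2 each contain a single filled parallelogram (the variable names are illustrative):

s1 = regionprops(bwlabel(bw1), 'Area', 'Perimeter', 'Centroid');
s2 = regionprops(bwlabel(bw2), 'Area', 'Perimeter', 'Centroid');
% Compare the shapes with simple relative differences
areaDiff  = abs(s1.Area - s2.Area) / max(s1.Area, s2.Area);
perimDiff = abs(s1.Perimeter - s2.Perimeter) / max(s1.Perimeter, s2.Perimeter);
centroidShift = norm(s1.Centroid - s2.Centroid);   % location difference, if relevant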
You could do it with polygons, by using the polyshape class.
First convert the binary mask to a set of corner points. You can do it with a convex hull, by calling regionprops(bwI, 'ConvexHull').
Then convert the corner points into polygons, by calling polyshape.
Finally, measure the dissimilarity of the polygons by measuring their turning distance. Turning distance is rotation- and scaling-invariant, so you may want to add extra terms to your distance metric for those if your problem demands it.
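A rough sketch of the conversion steps, assuming bw1 and bw2 are the two binary images; note that turning distance is not a built-in MATLAB function, so only the polyshape construction is shown:

h1 = regionprops(bw1, 'ConvexHull');
h2 = regionprops(bw2, 'ConvexHull');
p1 = polyshape(h1(1).ConvexHull(:,1), h1(1).ConvexHull(:,2));
p2 = polyshape(h2(1).ConvexHull(:,1), h2(1).ConvexHull(:,2));
% p1.Vertices and p2.Vertices can then feed a turning-function comparison,
% with extra terms for rotation/scale if your problem needs them.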
A very simple way of comparing two binary images is to use Boolean operations. Your images contain only zero and one values, so suppose your two images are B1 and B2:
C = xor(B1, B2);   % pixels where the two images differ
if sum(C(:)) == 0
% the two images are the same
else
% the two images are different
end

MATLAB: Using hough transform to detect circle

I am writing MATLAB code that takes in a photo and detects the circular object. For example, the function takes a picture of a peach (circular object) as an input and will return the same image with the peach circled.
Currently, I am using hough transform, utilizing imfindcircles function. However, this function requires me to specify radius range and some sort of sensitivity/threshold value. These values differ for different sizes of image and round objects. So, to get the desired output, I will have to manually change these values for each input image, which is not what I want. I'm going to use this function on 100+ images, so it's impossible for me to do this manually.
My question is: is there any way I can make my circular object detection function less manual, and possibly completely automatic (requiring no input values other than the image)?
Complexity of circle detection
The Hough transform is a voting procedure that requires assumptions to be made about the minimum and maximum radii of your circles. Generally speaking, with the Randomized Hough Transform for circles you pick three points, try to form a circle through them, and check whether its radius lies within the desired range. Running this for a good number of iterations, you should find peaks (multiple hits) in your accumulator matrix that represent circles. If you made no assumptions about object size, I think it is obvious this method wouldn't work.
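A hedged sketch of that three-point voting scheme (the function and variable names are illustrative, not a standard routine):

function best = rhtCircle(bw, rMin, rMax, nIter)
    % Randomized Hough Transform for circles on a binary edge image bw
    [yy, xx] = find(bw);                 % foreground pixel coordinates
    votes = zeros(nIter, 3);             % candidate [cx cy r] per accepted triple
    n = 0;
    for it = 1:nIter
        idx = randperm(numel(xx), 3);    % pick three points at random
        p = [xx(idx), yy(idx)];
        % Solve for the circumcircle centre through the three points
        A = 2 * [p(2,:) - p(1,:); p(3,:) - p(1,:)];
        b = [sum(p(2,:).^2) - sum(p(1,:).^2); ...
             sum(p(3,:).^2) - sum(p(1,:).^2)];
        if abs(det(A)) < 1e-9, continue; end   % skip (nearly) collinear triples
        c = (A \ b)';
        r = norm(p(1,:) - c);
        if r < rMin || r > rMax, continue; end % enforce the radius assumption
        n = n + 1;
        votes(n, :) = round([c, r]);     % quantize so repeated hits accumulate
    end
    % The most frequently occurring quantized candidate is the detected circle
    [cand, ~, grp] = unique(votes(1:n, :), 'rows');
    [~, winner] = max(accumarray(grp, 1));
    best = cand(winner, :);
end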
Do some routine pre-processing to adjust for contrast and brightness, e.g. contrast stretching or histogram equalization. If the images might contain some noise, apply a bit of Gaussian smoothing as well.
Normalizing images this way will reduce inter-image variance and help you with setting thresholds.
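A minimal sketch of that pipeline; the file name, radius range, and sensitivity are illustrative placeholders, not recommended values:

I = imread('peach.jpg');        % hypothetical input photo
g = rgb2gray(I);                % convert if the photo is RGB
g = imadjust(g);                % contrast stretching
g = histeq(g);                  % histogram equalization
g = imgaussfilt(g, 2);          % mild Gaussian smoothing if the image is noisy
[centers, radii] = imfindcircles(g, [20 100], 'Sensitivity', 0.9);
imshow(I); viscircles(centers, radii);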
The Hough transform can be used to detect circles, lines, etc. You can refer to the demos in MATLAB; there are several examples of applying the Hough transform.

How Do I Find The Bounding Box For All Regions?

I'm using the MNIST digit images for a machine learning experiment, and I'm trying to center each image based on position, rather than the center of mass that they are centered on by default.
I'm using the BoundingBox property of regionprops to extract the images. I create a B&W copy of the greyscale image, use this to determine the BoundingBox properties (regionprops only works on B&W images), and then apply that to the greyscale original to extract the precise image rectangle. This works fine on ~98% of the images.
The problem I have is that the other ~2% of images has some kind of noise or errant pixel in the upper left corner, and I end up extracting only that pixel, with the rest of the image discarded.
How can I incorporate all elements of the image into a single rectangle?
EDIT: Further research has made me realise that I can summarise and rephrase this question as "How do I find the bounding box for all regions?". I've tried adjusting a label matrix so that all regions are the same label, to no avail.
You can use an erosion mask of the same size as that noise to make it disappear completely (imerode followed by imdilate to reverse the erosion), or you can use a median filter.
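Two hedged variants of this in code, assuming bw is the binary copy used for regionprops (the 10-pixel threshold is illustrative):

% Option 1: drop small noise blobs before measuring the bounding box
bwClean = bwareaopen(bw, 10);
stats = regionprops(bwClean, 'BoundingBox');

% Option 2: one bounding box spanning all foreground pixels, however many regions there are
[r, c] = find(bw);
bbox = [min(c), min(r), max(c) - min(c) + 1, max(r) - min(r) + 1];   % [x y width height]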

Segmenting 3D shapes out of thick "lines"

I am looking for a method that looks for shapes in 3D image in matlab. I don't have a real 3D sample image right now; in fact, my 3D image is actually a set of quantized 2D images.
The figure below is what I am trying to accomplish:
Although the example figure above is a 2D image, please understand that I am trying to do this in 3D. The input shape has these "tentacles", and I have to look for irregular shapes among them. The thickness of a tentacle can vary from one point to another, but at a "consistent and smooth" pace - that is, it can be big at first and then gradually become smaller. But if the shape suddenly gets bigger, not so gradually, like the red bottom-right area in the figure above, then that is one of the volumes of interest. Note that these shapes tend to be rounded and spherical, but some of them are completely arbitrary and random.
I've tried the following methods so far:
Erode n times and dilate n times: given that the "tentacles" are always smaller than the volume of interest, this method will work as long as the volume is not too small. We also need a mechanism to deal with thicker portions of a tentacle that would otherwise become false positives.
Hough Transform: although this method was suggested to me earlier (in Segmenting circle-like shapes out of Binary Image), I see that it works for some of the more rounded cases, but more difficult cases, such as less-rounded, distorted, and/or arbitrary shapes, can slip through it.
Isosurface: because my input is a set of quantized 2D images, using an isosurface lets me reconstruct the image in 3D and see things more clearly. However, I'm not sure what could be done further in this case.
So can anyone suggest some other techniques for segmenting such shapes out of these "tentacles"?
Every point in your image is either part of a tentacle or part of a volume of interest. If the expected girth of the tentacles is unknown a priori, then method 1 won't work, because we won't be able to set n. However, we know that the n that erases a tentacle is smaller than the n that erases a node. You can replace each point with an integer representing its distance to the edge. Effectively, this can be done via successive single-pixel erosion, replacing each pixel with the count of the iteration at which it was erased. Let's call this the thickness at the pixel, though my rusty old mind tells me there is a term of art for this.
Now we want to search for regions that have a higher-than-typical morphological distance from the boundary. I would do this by first skeletonizing the image (http://www.mathworks.com/help/toolbox/images/ref/bwmorph.html) and then searching for local maxima of the thickness along the skeleton. These are points on the skeleton where the thickness is larger than at neighboring points.
Finally, I would sort the local maxima by thickness; a threshold on that should help separate the volumes of interest from the false positives.
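A hedged 2D sketch of this procedure, assuming bw is one binary slice of the data (a 3D version would use the volumetric equivalents; the final threshold is illustrative):

thickness = bwdist(~bw);                 % distance to the nearest background pixel
skel = bwmorph(bw, 'skel', Inf);         % skeletonize the shape

skelThickness = thickness;               % thickness sampled along the skeleton only
skelThickness(~skel) = 0;

peaks = imregionalmax(skelThickness) & skel;   % local maxima of thickness on the skeleton
candidates = thickness(peaks);
interesting = candidates > 2 * median(thickness(skel));   % crude separating threshold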

Texture analysis on irregular region of interest

I have an image from which I would like to extract GLCM texture features in an area of interest (AOI). But the AOI has a non-rectangular shape.
Since an image is always stored as a matrix in MATLAB, even if the AOI is an irregular polygonal area, the neighboring pixels also have to be included to make it a rectangular region. If all the pixels outside the area of interest are set to zero, does this affect the features extracted by texture analysis?
Is it possible to do any kind of image analysis on non-rectangular regions?
Yes: if the pixels outside the area of interest were used when computing the gray-level co-occurrence matrix, the result would be incorrect -- that is, it would not suit your requirements, since how the border is handled is a matter of choice.
Existing software systems offer this feature:
If you use MATLAB then, according to http://www.mathworks.com/help/toolbox/images/ref/graycomatrix.html, you would need to assign the value NaN to the pixels of the input image that lie outside the AOI.
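A minimal sketch of that NaN-masking approach, assuming I is an 8-bit grayscale image and aoiMask is a logical mask of the irregular AOI:

Id = double(I);                % graycomatrix skips pixel pairs containing NaN
Id(~aoiMask) = NaN;            % exclude everything outside the AOI
glcm = graycomatrix(Id, 'GrayLimits', [0 255], 'NumLevels', 8, 'Symmetric', true);
stats = graycoprops(glcm);     % contrast, correlation, energy, homogeneity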
In Mathematica, very conveniently, the function ImageCooccurrence has an option named Masking which allows you to pass any AOI as a binary mask (see http://reference.wolfram.com/mathematica/ref/ImageCooccurrence.html).