Why do MSER results have overlapping pixels - matlab

First, some context: I'm using OpenCV's MSER from MATLAB via two different methods:
1. Through MATLAB's detectMSERFeatures function. This seems to call the OpenCV MSER (via the call to ocvExtractMSER inside detectMSERFeatures).
2. Through a more direct approach: the OpenCV 3.0 wrapper/MATLAB bindings found at https://github.com/kyamagu/mexopencv
Either way, I get back lists of lists of pixels (a.k.a. regions) that I imagine are a translation of the second argument of OpenCV's MSER::detectRegions, "std::vector< std::vector< Point > > &msers".
The result can be a list of multiple regions, each region with its own set of points. However, the points of the regions are not mutually exclusive. In fact, for my data, where the foreground is typically roundish blobs, they tend to all be part of the same single connected component. This is true even when the blob has no holes at all (I might understand it if the regions corresponded to contours and the blob had holes).
I'm assuming that this many-to-one mapping of regions onto even a solid blob is due to OpenCV's MSER, in its native C++(?) implementation, doing the same, but I confess I haven't verified that (and I certainly don't understand it).
So, does anybody know why MSER would yield multiple overlapping regions for a single solid connected component? Is there any sense in choosing one, and if so, how? (Right now I just combine them all.)
EDIT - I tried an image with one blob, which I then replicated to make a single image whose left half was the same as its right half (each half containing the same blob). MSER returned 9 lists/regions, all corresponding to the two blobs. So I would have to do connected-component analysis just to figure out which subsets of the regions belong to which blob, and so apparently there can't be any straightforward way to choose a particular subset of the returned regions that would give the best representation of the two blobs (if such a thing is even sensible when you know there is just one blob, as per my last pre-edit question).
The picture below was made by plotting all 4 regions (lists of points) returned for my single blob image. The overlay was created by:
obj = cv.MSER('MinArea',20, 'MaxArea',3000, 'Delta',2.5);
[chains, bboxes] = obj.detectRegions(Region8b);
% get rid of the extra layer of cells that detectRegions seems to add
a = cellfun(@(x) cat(1,x{:}), chains, 'UniformOutput',false);
% b = cat(1,a{:}); % all the regions' points in a single list; not used here
ptsstrs = {'rx','wo','cd','k.'};
for k = 1:4
    plot(a{k}(:,1), a{k}(:,2), ptsstrs{k}, 'MarkerSize',15);
end
So you can see they overlap, but there also seems to be an order to it: each subsequent region/list appears to be a superset of the one before it.

"The MSER detector incrementally steps through the intensity range of the input image to detect stable regions. The ThresholdDelta parameter determines the number of increments the detector tests for stability. " This from Matlab help. It's reasonable that you find overlap and subsets. Apparently, the region changes as the algorithm moves up or down in intensity.

Related

Compare two nonlinear transformed (monochromatic) images

Given are two monochromatic images of the same size. Both are pre-aligned/anchored to one common point. Some points of the original image moved to new positions in the new image, but not in a linear fashion.
Below you see a picture of an overlay of the original (red) and transformed (green) image. What I am looking for now is a measure of how much the individual points shifted.
At first I thought of a simple average correlation of the whole matrix, or some kind of phase correlation, but I was wondering whether there is a better way of doing this.
I already found that link, but it didn't help that much. Currently I'm implementing this in MATLAB, but that shouldn't matter, I guess.
Update For clarity: I have hundreds of these image pairs and I want to measure how similar each pair is. It doesn't have to be the fanciest algorithm; rather, it should be easy to implement and yield a good estimate of similarity.
An unorthodox approach uses RASL to align an image pair. A Python implementation is here: https://github.com/welch/rasl and it also provides a link to the RASL authors' original MATLAB implementation.
You can give RASL a pair of related images, and it will solve for the transformation (scaling, rotation, translation; you choose) that best overlays the pixels in the images. A transformation parameter vector is found for each image, and the difference in parameters tells how "far apart" they are (in terms of transform parameters).
This is not the intended use of RASL, which is designed to align large collections of related images while being indifferent to changes in alignment and illumination. But I just tried it out on a pair of jittered images and it worked quickly and well.
I may add a shell command that explicitly does this (I'm the author of the Python implementation) if I receive encouragement :) (today, you'd need to write a few lines of Python to load your images and return the resulting alignment difference).
You can try using optical flow: http://www.mathworks.com/discovery/optical-flow.html
It is usually used to measure the movement of objects from frame T to frame T+1, but you can also use it in your case. You would get a map that tells you the "offset" by which each point in Image1 moved to reach its position in Image2.
Then, if you want a metric that gives you a "distance" between the images, you can perhaps average the flow magnitudes or something similar.
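A minimal sketch of that idea using the Computer Vision Toolbox's Farneback implementation (im1 and im2 are placeholder names for two grayscale images of equal size):

% Minimal sketch: dense optical flow between two images, reduced to a
% single "shift" score. Requires the Computer Vision Toolbox.
opticFlow = opticalFlowFarneback;
estimateFlow(opticFlow, im1);          % prime with the reference frame
flow = estimateFlow(opticFlow, im2);   % flow from im1 to im2
shiftScore = mean(flow.Magnitude(:));  % average per-pixel displacement
fprintf('mean displacement: %.3f pixels\n', shiftScore);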

Fit 3D matrices to same gray values

I'm trying to fit two data sets. They contain the results of measuring the same object with two different measurement devices (X-ray vs. µCT).
I did manage to reconstruct the image data and fit the orientation and offset of the stacks. It looks like this (one image from a stack of about 500 images):
The whole point of this is to compare several denoising algorithms on the X-ray data (left). It is assumed that the data from µCT (right) is close to the real signal without any noise. So, I want to compare the denoised X-ray data from each of the algorithms to the "pure" signal from µCT to see which algorithm produces the lowest RMS error. Therefore, I need to somehow fit the gray values from the left part to those of the right part without manipulating the noise too much.
The gray values on the right are in the range of 0 to 100, whereas the X-ray data ranges from about 4000 to 30000. The "bubbles" are in a range of about 8000 to 11000. (Those are not real bubbles but an artificial phantom with holes, made on a 3D printer.)
What I tried to do is (kind of) band-pass those bubbles and map them to ~100 while shifting everything else towards 4 (which is the value of the background in the µCT data).
That's the code for this:
zwst = zwsr;  % untouched copy used for masking
% background below the bubble band: map [0, 8000] -> [0, 4]
zwsr(zwst<=8000) = round(zwst(zwst<=8000)*4/8000);
% bubble band: map [8000, 11000] -> roughly [84, 116], centered near 100
zwsr(zwst<=11000 & zwst>8000) = round(zwst(zwst<=11000 & zwst>8000)/9500*100);
% everything above the band: map (11000, 30000] -> roughly [1.5, 4]
zwsr(zwst>11000) = round(zwst(zwst>11000)*4/30000);
The results look like this:
Some of those bubbles look distorted and the noise part in the background is gone completely. Is there any better way to fit those gray values while maintaining the noisy part?
EDIT: To clarify things: the µCT data is assumed to be noise-free, while the X-ray data is assumed to be noisy. In other words, µCT = signal, while X-ray = signal + noise. To quantify the quality of my denoising methods, I want to calculate X-ray - µCT = noise.
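One simpler alternative to the piecewise mapping, sketched here only for comparison, is a least-squares affine fit of the X-ray gray values to the µCT gray values; xray and uct are placeholder names for the two already co-registered stacks:

% Minimal sketch (not from the original post): fit uct ~ a*xray + b by
% least squares over co-registered voxels, then apply it to the stack.
p = polyfit(double(xray(:)), double(uct(:)), 1);  % p(1) = a, p(2) = b
xrayMapped = polyval(p, double(xray));            % same shape as xray
noiseEst = xrayMapped - double(uct);              % residual = noise estimate
rmsErr = sqrt(mean(noiseEst(:).^2));

Since the mapping is affine, it rescales the noise uniformly instead of distorting it band by band.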
Too long for a comment, and I believe a reasonable answer:
There is a huge subfield of image processing/signal processing called image fusion. There is even a specific MATLAB tool for it, based on wavelets (http://uk.mathworks.com/help/wavelet/gs/image-fusion.html).
The idea behind image fusion is: given 2 images of the same thing but with very different resolution/data, how can we create a single image containing the information of both?
Stitching both images together "by hand" generally does not give very good results, so there is a large number of techniques for doing it mathematically. Wavelets are very common here.
These techniques are widely used in medical imaging, as (like in your case) different imaging techniques give different information, and doctors want all of it together:
Example (top row: images pasted together, bottom row: image fusion techniques)
Have a look at some papers and some MATLAB tutorials, and you'll probably get there with easy-to-use MATLAB code, without any fancy state-of-the-art programming.
Good luck!
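A minimal sketch of the wavelet route using the Wavelet Toolbox's wfusimg (xrayImg and uctImg are placeholder names for two co-registered grayscale images of identical size):

% Minimal sketch: wavelet-based fusion of two co-registered images.
X1 = mat2gray(double(xrayImg));   % bring both to a common [0,1] range
X2 = mat2gray(double(uctImg));
% 'db2' wavelet, 5 decomposition levels; average the approximation
% coefficients, take the maximum of the detail coefficients
XFUS = wfusimg(X1, X2, 'db2', 5, 'mean', 'max');
imshow(XFUS, []);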

Is there any regularity-detection tool for regions inside an image?

I'm working in MATLAB on some regions inside an image. I'm at a point where I would like to be able to separate regions which exhibit some kind of regularity (e.g., being circle-ish or square-ish) from regions which do not resemble any known figure and which, for my application, are mere noise. I'll illustrate this using a descriptive MS Paint image:
Is there any tool that, most of the time (or even less; I know this can't be 100/100), will recognize the red thing as being different?
I'll deal with many shapes in a single image, so I don't mind if I carry some red monsters along the way, as long as the majority of them are kicked out. Of course I know the indices of these regions, so I can manipulate them in MATLAB.
Many algorithms come to mind, e.g., getting the boundary and checking for its regularity/the number of times it changes curvature/..., checking for variations in vertical length through different columns (nearly 0 for the linear feature, really high for the red stuff), ...
However, I was hoping for some help from a tool out there. It doesn't matter if this tool won't cover all cases (for example, if it kicks out circles); I've been deliberately broad to get the maximum number of inputs from you guys, and any tool will be inspiring and helpful. (In any case, we can't expect a perfect answer to the deeper question of recognizing regular shapes, which seems more like an AI research field.) I also think that, while being broad, this is totally non-subjective, so it should fit on SO. Thank you.
Side note 1: I'll deal mostly with elongated, extended features like the top-right one, so circles are not that relevant.
Side note 2: To be 100% clear, I would need something (be it an already existant tool, or some ideas pointed out by you) that acts on the indices of the shapes, in terms of rows-columns into the original image, or on the boundary of the shape itself.
Side note 3: Apart from tools/suggestions/ideas, you are welcome to write down some lines of code ;) I'm getting the regions as connected components from bwconncomp.
I had to solve a similar problem recently that involved counting the number of indentations on blobs within an image (basically, the connected components returned by bwconncomp). The method I used was to look at curvature changes along the boundary, calculated via the FFT. In your case, the red blobs would have a large number of curvature variations, whereas the black regions would not. It's a pretty easy calculation and relatively fast. The code is on GitHub here:
https://github.com/mjsottile/blobdents
The file of interest is src/countindents.m. A short description of the approach is here:
http://arxiv.org/abs/1501.07692
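To give the flavor of the approach (this is only a minimal sketch, not the linked blobdents code): trace the blob's boundary, treat it as a complex signal, and measure how much energy sits in the high-frequency Fourier coefficients; wiggly boundaries score high.

% Minimal sketch: score boundary "wiggliness" via the FFT of the
% boundary treated as a complex signal. BW is a placeholder binary
% image containing one blob; nLow is a placeholder cutoff to tune.
B = bwboundaries(BW);                 % cell array of boundary traces
b = B{1};                             % N-by-2 [row col] points
z = complex(b(:,2), b(:,1));          % boundary as a complex signal
Z = fft(z - mean(z));                 % remove the centroid, transform
E = abs(Z).^2;
nLow = 10;                            % first harmonics = coarse shape
score = sum(E(nLow+1:end-nLow)) / sum(E);  % high-frequency energy fraction
% smooth shapes (circles, bars) give a small score; spiky "monsters" a large one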
I went for the easier road, as suggested by @Mikhail in the comments.
I found out regionprops has a really helpful tool called Solidity. Quoting docs,
Returns a scalar specifying the proportion of the pixels in the convex hull that are also in the region. Computed as Area/ConvexArea.
Convex hull is defined as the smallest convex polygon that can contain the region. So Solidity goes up to 1 if the shape is kind of regular and has no convexity changes; down to 0 for my red shape, which leaves space between itself and the convex polygon.
Of course it never reaches 0; the lowest value should belong to something like a +-shaped sign.
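For completeness, a minimal sketch of that filter on the output of bwconncomp (the 0.9 threshold is just a placeholder to tune):

% Minimal sketch: keep only "regular" regions using Solidity.
cc = bwconncomp(BW);                    % BW is the binary shape image
stats = regionprops(cc, 'Solidity');
keep = [stats.Solidity] > 0.9;          % near 1 = convex-ish = regular
regular = ismember(labelmatrix(cc), find(keep));  % mask of regular shapes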

Edge Detection Along Blood Vessel in Matlab

I am trying to specify a line across a blood vessel in an image and then have MATLAB find the edges of the vessel (which lie along that line). The next part will be comparing changes in the distance between these edges over time (so across 1000+ more images).
I have tried the following code to get started:
I = imread('Obj1.tif');
imshow(I,[]);
improfile  % interactively draw a line and plot the intensity along it
And I was looking at available methods to detect the edges from the intensity along that plotted line (tangents, maxima/minima, etc.), but I am not convinced this is the best method. I looked into other MATLAB tools such as the Canny and Sobel methods, but the examples for all of these only show how to detect edges throughout the entire image. My coding skills are not sufficient to apply those algorithms along a single line of the user's choosing. The methods I have looked at on PubMed also seem more complicated than I probably need.
Does anybody have any ideas or suggestions from the point that I am currently at?
Thank you
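One possible starting point, sketched under the assumption that the image is grayscale and the vessel contrasts with its surroundings: sample the intensity along a user-drawn line with improfile, then take the strongest rising and falling gradient extrema as the two vessel edges.

% Minimal sketch: find the two vessel edges along a user-drawn line.
I = imread('Obj1.tif');
imshow(I, []);
[cx, cy, c] = improfile;      % draw the line; c = sampled intensities
g = gradient(double(c));      % intensity gradient along the profile
[~, e1] = max(g);             % strongest rising edge
[~, e2] = min(g);             % strongest falling edge
step = hypot(cx(2)-cx(1), cy(2)-cy(1));   % spacing between samples
width = abs(e2 - e1) * step;  % edge-to-edge distance in pixels
hold on; plot(cx([e1 e2]), cy([e1 e2]), 'r*', 'MarkerSize', 12);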

Matlab: Transparent Object Detection

I'm trying to detect a transparent object (glass bottle) in an image.
The image is taken from the Kinect so there's rgb and depth images available.
I read in the literature that the boundary of a transparent object has "unknown depth values", and that I can use that as a boundary condition for detecting the object.
The problem is I cannot find that information in my depth file, i.e., the depth image only returns either zero or other values, but never "unknown".
I assume the Kinect represents "unknown depth values" as zeros, but this raises another problem:
there are a lot of zeros in the image (i.e., at boundaries, etc.), so how do I know which zeros belong to the object?
Thanks a lot!
You could try to detect the body of the transparent object rather than its border. The body should return the values of whatever is behind it, but those values will be noisier. Take a running sample over time and calculate a running standard deviation, then look for the region of the image that has larger errors than elsewhere. This is simpler if you have access to the raw data (libfreenect). If the data has been converted to distance, then the error is a function of distance, so you need to detect regions that are noisier than other regions at the same distance, not just regions that are noisier than elsewhere.
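A minimal sketch of the running-standard-deviation idea (depthStack is a placeholder H-by-W-by-T array of T consecutive depth frames; the factor of 3 is a placeholder to tune):

% Minimal sketch: flag pixels whose depth readings are unusually noisy
% over time, as candidates for the transparent object's body.
valid = all(depthStack > 0, 3);        % ignore pixels that ever read 0
sd = std(double(depthStack), 0, 3);    % temporal std-dev per pixel
noisy = valid & (sd > 3 * median(sd(valid)));  % unusually noisy regions
imshow(noisy);                         % candidate transparent-object mask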
I'd recommend you take a look at the following publication. They were able to detect objects such as water bottles and glasses, all done in MATLAB:
Object localisation via action recognition.
J. Darby, B. Li, R. Cunningham and N. Costen.
ICPR, 2012.