Area of intersection of connected components - matlab

I am doing a segmentation task using MATLAB. To analyze the performance of my algorithm, I need the area of intersection of each connected component in both images.
In what way are the connected components labelled in an image? Also, does PixelIdxList list all the linear indices of points that are a part of the connected component?

In what way are the connected components labelled in an image?
bwconncomp discovers the connected components using either a 4- or 8-connected neighborhood for 2D images, or a 6-, 18-, or 26-connected neighborhood for 3D images. Labels are enumerated starting from the top-left corner in 2D, and from the first slice in 3D.
Also, does PixelIdxList list all the linear indices of points that are a part of the connected component?
Yes. So, once both images are labelled, you can use intersect on the pixel index lists to find the intersection between different labels. Also, you might want to read about the Jaccard index.
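For example, a minimal sketch (variable names are mine) that computes the pairwise intersection areas and Jaccard indices of the connected components of two equally sized binary images BW1 and BW2:
cc1 = bwconncomp(BW1);
cc2 = bwconncomp(BW2);
interArea = zeros(cc1.NumObjects, cc2.NumObjects);
jaccard   = zeros(cc1.NumObjects, cc2.NumObjects);
for i = 1:cc1.NumObjects
    for j = 1:cc2.NumObjects
        common = intersect(cc1.PixelIdxList{i}, cc2.PixelIdxList{j});
        total  = union(cc1.PixelIdxList{i}, cc2.PixelIdxList{j});
        interArea(i, j) = numel(common);                % intersection area in pixels
        jaccard(i, j)   = numel(common) / numel(total); % intersection over union
    end
end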

Related

Smooth circular data - Matlab

I am currently doing some image segmentation on a bone qCT picture, see for instance images below.
I am trying to find the different borders in the picture, for instance the outer border separating the bone from the noisy background. In this analysis I get a list of points (vec(1,:) containing the x values and vec(2,:) containing the y values) in random order.
To get them into order I am using a block of code which effectively takes the first point (vec(1,1), vec(2,1)), finds the closest point among the rest of the points in the vector, and then repeats.
Now my problem is that I want to smooth the data but how do I do that as the points lie in a circular formation? (I do have the Curve Fitting Toolbox)
Not exactly a smoothing procedure, but a way to simplify your data would be to compute the boundary of the convex hull of the data.
K = convhull(O(1,:), O(2,:));
plot(O(1,K), O(2,K));
You could also consider using alpha shapes if you want more control.
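If you do want an actual smoothing step rather than a simplification, one common trick (a sketch of my own, not part of the answer above, assuming the points in vec are already ordered around the contour) is to pad the points periodically so a simple moving-average filter wraps around the seam of the closed curve:
win = 9;                                  % moving-average window (odd); a guess to tune
x = vec(1, :);  y = vec(2, :);            % boundary points, already ordered
pad = (win - 1) / 2;
xp = [x(end-pad+1:end), x, x(1:pad)];     % periodic (circular) padding
yp = [y(end-pad+1:end), y, y(1:pad)];
k = ones(1, win) / win;
xs = conv(xp, k, 'valid');                % smoothed x, same length as x
ys = conv(yp, k, 'valid');
plot([xs xs(1)], [ys ys(1)]);             % close the contour when plotting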

Uncalibrated multi-view reconstruction depth estimation

I'm trying to make a 3D reconstruction from a set of uncalibrated photographs in MATLAB. I use SIFT to detect feature points and matches between images. I want to make a projective reconstruction first and then update this to a metric one using auto-calibration.
I know how to estimate the 3D points from 2 images by computing the fundamental matrix, camera matrices and triangulation. Now say I have 3 images, a, b and c. I compute the camera matrices and 3D points for images a and b. Now I want to update the structure by adding image c. I estimate the camera matrix by using known 3D points (calculated from a and b) that match with 2D points in image c, since s * x = P * X, where x is the observed 2D image point, X the known 3D point, P the camera matrix and s the projective depth.
However when I reconstruct the 3D points between b and c they don't add up with the existing 3D points from a and b. I'm assuming this is because I don't know the correct depth estimates of the points (depicted by s in above formula).
With the factorization method of Sturm and Triggs I can estimate the depths and find the structure and motion. However in order to do this, all points have to be visible in all views, which is not the case for my images. How can I estimate the depths for points not visible in all views?
This is not a question about Matlab. It is about an algorithm.
It is not mathematically possible to estimate the position of a 3D point in an image when you don't see an observation of the point in said image.
There are extensions for factorization to work with missing data. However, the field seems to have converged to Bundle Adjustment as the Gold Standard.
An excellent tutorial on how to achieve what you want can be found here; it is the culmination of several years of research into a working application, going from projective reconstruction all the way up to the metric upgrade.
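For the resectioning step mentioned in the question (estimating a camera matrix from known 3D-2D correspondences), a minimal DLT sketch could look like this; the function name is mine and the coordinates are assumed to be homogeneous and pre-normalised:
function P = estimateCameraDLT(X, x)
% X: 4xN homogeneous 3D points, x: 3xN homogeneous 2D points, N >= 6
n = size(X, 2);
A = zeros(2*n, 12);
for i = 1:n
    Xi = X(:, i)';                        % 1x4 row
    u = x(1, i);  v = x(2, i);  w = x(3, i);
    A(2*i-1, :) = [zeros(1, 4), -w*Xi,  v*Xi];
    A(2*i,   :) = [ w*Xi, zeros(1, 4), -u*Xi];
end
[~, ~, V] = svd(A);
P = reshape(V(:, end), 4, 3)';            % 3x4 camera matrix, defined up to scale
end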

How to detect curves in a binary image?

I have a binary image and I want to detect/trace the curves in it. I don't know anything about them beforehand (coordinates, angles, etc.). Can anyone guide me on how I should start? Suppose I have this image.
I want to separate the curves from the other lines. I am only interested in the curved lines and their parameters, and I want to store the information about the curves (in an array) to use afterwards.
It really depends on what you mean by "curve".
If you want to simply identify each discrete collection of pixels as a "curve", you could use a connected-components algorithm. Each component would correspond to a collection of pixels. You could then apply some test to determine linearity or some other feature of the component.
If you're looking for straight lines, circular curves, or any other parametric curve you could use the Hough transform to detect the elements from the image.
The best approach is really going to depend on which curves you're looking for, and what information you need about the curves.
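As a rough sketch (assuming the Image Processing Toolbox and a binary image BW), the line and circle variants of the Hough transform look like this; the radius range is a placeholder you would have to tune:
[H, theta, rho] = hough(BW);                     % standard Hough transform for lines
peaks = houghpeaks(H, 10);
lines = houghlines(BW, theta, rho, peaks);
[centers, radii] = imfindcircles(BW, [10 60]);   % circular Hough transform
imshow(BW); hold on
for k = 1:numel(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:, 1), xy(:, 2), 'g-', 'LineWidth', 2);
end
viscircles(centers, radii, 'Color', 'r');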
reference links:
Circular Hough Transform Demo
A Brief Description of the Application of the Hough Transform for Detecting Circles in Computer Images
A method for detection of circular arcs based on the Hough transform
Google goodness
Since you already seem to have a good binary image, it might be easiest to just separate the different connected components of the image and then calculate their parameters.
First, you can do the separation by scanning through the image, and when you encounter a black pixel you can apply a standard flood-fill algorithm to find all the pixels in your shape. If you have the MATLAB Image Processing Toolbox, you can use the bwconncomp and bwselect functions for this. If your shapes are not fully connected, you might apply a morphological closing operation to the image to connect them.
After you have segmented out the different shapes, you can filter out the curves by testing how much they deviate from a line. You can do this simply by picking up the endpoints of the curve, and calculating how far the other points are from the line defined by the endpoints. If this value exceeds some maximum, you have a curve instead of a line.
Another approach would be to measure the ratio of the distance of the endpoints and length of the object. This ratio would be near 1 for lines and larger for curves and wiggly shapes.
If your images have angles which you wish to separate from curves, you might inspect the directional gradient of your curves. Segment the shape, pick a set of equidistant points from it, and for each point calculate the angle to the previous point and to the next point. If the difference between the angles is too large, you do not have a smooth curve, but an angled shape.
Possible difficulties in implementation include thick lines, which you can solve with a skeleton transformation. For a MATLAB implementation of skeletonization and finding curve endpoints, see the Image Processing Toolbox documentation.
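A minimal sketch of the line-versus-curve test described above (my own naming; assumes the Image Processing Toolbox and a binary image BW with the shapes set to true):
cc = bwconncomp(BW);
maxDeviation = 3;                         % pixels; a tuning parameter
isCurve = false(1, cc.NumObjects);
for k = 1:cc.NumObjects
    [r, c] = ind2sub(size(BW), cc.PixelIdxList{k});
    pts = [c, r];                         % pixel coordinates as x-y points
    % crude endpoint choice: the two pixels farthest apart (O(n^2), fine for a sketch)
    d2 = (pts(:, 1) - pts(:, 1)').^2 + (pts(:, 2) - pts(:, 2)').^2;
    [~, idx] = max(d2(:));
    [i1, i2] = ind2sub(size(d2), idx);
    p1 = pts(i1, :);  p2 = pts(i2, :);
    v = p2 - p1;                          % line through the two endpoints
    % perpendicular distance of every pixel from that line
    dist = abs(v(1)*(p1(2) - pts(:, 2)) - (p1(1) - pts(:, 1))*v(2)) / norm(v);
    isCurve(k) = max(dist) > maxDeviation;
end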
1) Read a book on Image Analysis
2) Scan for a black pixel; when one is found, look for neighbouring pixels that are also black, store their locations, and then make them white. This collects the points of one object and removes it from the image. Keep repeating this until there are no black pixels remaining.
If you want to separate the curves from the straight lines, try line fitting and then look at the coefficient of correlation. Similar fitting procedures are available for curves, and the correlation tells you how close the points are to the idealised shape.
There is also another solution possible with the use of chain codes.
Understanding Freeman chain codes for OCR
The chain code basically assigns a value between 0 and 7 (or 1 and 8) to each pixel, saying at which location in its 8-connected neighbourhood its connected predecessor lies. Thus, as mentioned in Hackworth's suggestion, one performs connected component labelling and then calculates the chain code of each component curve. By looking at the distribution and the gradient of the chain codes, one can distinguish easily between lines and curves. The problem with the method, though, is oscillating curves, in which case the gradient is less useful and one depends on the clustering of the chain codes!
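A small sketch of the chain-code computation itself (assumes the Image Processing Toolbox; the direction table follows the usual Freeman convention with 0 = east, counted counter-clockwise):
[r, c] = find(BW, 1);                     % first foreground pixel (lies on the boundary)
B = bwtraceboundary(BW, [r c], 'N');      % traced boundary as [row col] pairs
d = diff(B);                              % unit steps between consecutive boundary pixels
dirs = [0 1; -1 1; -1 0; -1 -1; 0 -1; 1 -1; 1 0; 1 1];   % [drow dcol] for codes 0..7
[~, code] = ismember(d, dirs, 'rows');
code = code - 1;                          % Freeman chain code of the boundary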
I'm no computer vision expert, but I think you could detect lines/curves in binary images relatively easily using some basic edge-detection algorithms (e.g. a Sobel filter).

Matlab: find major axis of binary area

The output of some processing consists of a binary map with several connected areas.
The objective is, for each area, to compute and draw on the image a line crossing the area along its longest axis, but not extending further. It is very important that the line lies just inside the area, so ellipse fitting is not a good option.
Any hint on how to achieve this result in an efficient way?
If you have the image processing toolbox you can use regionprops, which will give you several standard measures of any binary connected region, including the major and minor axis lengths of the equivalent ellipse. You can also get the tightest rectangular bounding box, centroid, perimeter and orientation. These will all help you with ellipse fitting.
Depending on how you would like to draw your lines, regionprops also returns the major and minor axis lengths of 2-D connected regions on a per-region basis, giving you a vector of axis lengths. If you specify 4-connected regions you can be fairly sure that the length lies exclusively within the connected region, but this is not guaranteed, since regionprops calculates the major axis length of an ellipse that has the same normalized second central moment as the connected region.
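For example, a minimal sketch of the call (assuming the Image Processing Toolbox and a binary image BW):
stats = regionprops(BW, 'Centroid', 'Orientation', ...
                    'MajorAxisLength', 'MinorAxisLength', 'BoundingBox');
majorLengths = [stats.MajorAxisLength];   % one length per connected region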
My first inclination would be to treat the pixels as 2D points and use principal component analysis. PCA will give you the major axis of each region (princomp if you have the Statistics Toolbox).
Regarding making line segments and not lines, not knowing anything about the shape of these regions, an efficient method doesn't occur to me. Assuming the region could have any arbitrary shape, you could just trace along each line until you reach the edge of the region. Then repeat in the other direction.
I assumed you already have the binary image divided into regions. If this isn't true you could use bwlabel (if the regions aren't touching) or k-means (if they are) first.
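Putting the two ideas together, here is a sketch (my own code, assuming the Image Processing and Statistics Toolboxes; pca is used in place of the older princomp) that finds the major-axis direction with PCA and then walks from the centroid in both directions until it leaves the region, so the drawn segment stays inside it:
L = bwlabel(BW);
imshow(BW); hold on
for k = 1:max(L(:))
    [r, c] = find(L == k);
    pts = [c, r];                         % region pixels as x-y points
    coeff = pca(pts);                     % columns are the principal axes
    dirv = coeff(:, 1)';                  % major-axis direction
    cen = mean(pts, 1);                   % region centroid (assumed to lie inside the region)
    seg = zeros(2, 2);
    for s = [1, -1]                       % walk both ways from the centroid
        p = cen;
        while true
            q = p + s*dirv;
            ri = round(q(2));  ci = round(q(1));
            if ri < 1 || ci < 1 || ri > size(L, 1) || ci > size(L, 2) || L(ri, ci) ~= k
                break                     % next step would leave the region
            end
            p = q;
        end
        seg((s + 3)/2, :) = p;
    end
    plot(seg(:, 1), seg(:, 2), 'r-', 'LineWidth', 2);
end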

Creating 3D volume from 2D slice set of grayscale images

I am to create a 3D volume out of a set of grayscale images using MATLAB. A set contains continuous, quantized slices of a 2D grayscale image. I still consider myself a rookie in MATLAB, but this is what I currently have in mind:
Create an empty space for the 3D volume.
On each image, perform all the preprocessing operations so that we only keep the part that is of interest to us. (In this question, assume that this preprocessing part always works flawlessly.)
Go through the image; each pixel's x and y coordinates in 2D will be transferred into the empty space. For the z coordinate, we can use the slice number together with the distance between slices. If a pixel is adjacent to another pixel, the 3D points will be connected together.
Repeat the previous 2 steps until all slices are done. We will now have all the points connected just like in the 2D slices.
But here comes the trouble, how can we connect the points between the slices, so that these points can become a volume? Or is there a more robust way to do in Matlab? Any suggestion is highly appreciated.
Part 0 - Assumptions
All 2D images are of the same dimensions, hence your 3D volume can hold all of them in a rectangular cube.
The majority of the pixels in each of the 2D images have 3D spatial relationships (you can't visualize much if the pixels in each of the 2D images follow some random distribution).
Part 1 - Visualizing 3D Volume from A Stack of 2D Images
To visualize or reconstruct a 3D volume from a stack of 2D images, you can try the following toolkits in matlab.
[1] 3D CT/MRI images interactive sliding viewer
http://www.mathworks.com/matlabcentral/fileexchange/29134-3d-ctmri-images-interactive-sliding-viewer
[2] Viewer3D
http://www.mathworks.com/matlabcentral/fileexchange/21993-viewer3d
[3] Image3
http://www.mathworks.com/matlabcentral/fileexchange/21881-image3
[4] Surface2Volume
http://www.mathworks.com/matlabcentral/fileexchange/8772-surface2volume
[5] SliceOMatic
http://www.mathworks.com/matlabcentral/fileexchange/764
Note that if you are familiar with VTK, you can try this:
[6] matVTK
http://www.cir.meduniwien.ac.at/matvtk/
I am currently sticking with [5] SliceOMatic for its simplicity and ease of use. However, by default, 3D rendering is quite slow in MATLAB. Turning on OpenGL gives faster rendering (http://www.mathworks.com/help/techdoc/ref/opengl.html); or simply put, set(gcf, 'Renderer', 'OpenGL').
Part 2 - Interpolating pixels in between the slices
To interpolate pixels in between the slices, you need to specify an interpolation method (some of the above toolkits have this capability/flexibility). Otherwise, to give you a head start, some examples of interpolation methods are bicubic, spline and polynomial interpolation (you can work this out by searching Google or Google Scholar for interpolation methods more specific to your problem domain).
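For instance, a minimal sketch with interp3 (variable names are mine), assuming the slices are already stacked in a 3D array V:
[nr, nc, ns] = size(V);
[X, Y, Z] = meshgrid(1:nc, 1:nr, 1:ns);           % grid of the original slices
zq = 1:0.25:ns;                                   % 4x finer sampling between slices
[Xq, Yq, Zq] = meshgrid(1:nc, 1:nr, zq);
Vq = interp3(X, Y, Z, double(V), Xq, Yq, Zq, 'cubic');   % interpolated volume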
Part 3 - 3D Pre-processing
Looking at your procedure, you process the volumetric data by processing each of the 2D images first. In many advanced algorithms, or in true 3D processing, what you can do is process the volumetric data in the 3D domain first (simply put, you take the 26 or more neighbours into account first). Once this step is done, you can simply output the volumetric data as a stack of 2D images for cross-sectional viewing, supply it to one of the aforementioned toolkits for 3D viewing, or export it to third-party 3D viewing applications.
I have followed the above concepts in my own medical imaging research projects, and the above findings are based on my research experience documented here (with the latest revisions).
MATLAB generally plots volumetric data using a 3D array. The data points are spatially evenly separated along each axis. If there are sites in the 3D array for which you do not have data, they are usually assigned the value NaN, and the various plotting functions can generally handle this in a reasonable way (i.e. they will generally behave as you intended).
If you load the slices into the 3D array such that adjacent points in the z-direction of the data are also adjacent in the third dimension of the array, then you should be fine.
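For example (a sketch; the folder and file names are hypothetical):
files = dir('slices/*.png');                      % one 2D grayscale image per slice
first = imread(fullfile('slices', files(1).name));
V = zeros([size(first), numel(files)], 'like', first);
for k = 1:numel(files)
    V(:, :, k) = imread(fullfile('slices', files(k).name));   % adjacent slices -> adjacent along dim 3
end
V = double(V);                                    % so missing sites could be marked as NaN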