I have one 2D CT image and I want to convert it to a 3D image using a Markov Random Field. There are several papers in the literature in which this technique was used based on three orthogonal 2D images. However, I can't find a simple and clear resource that explains the conversion process using MRF in clear steps. Here are some papers I found:
http://www.immijournal.com/content/pdf/s40192-014-0019-3.pdf
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1971113/
https://www.eecs.berkeley.edu/Research/Projects/CS/vision/papers/efros-iccv99.pdf
What I have understood is that the image is converted into a graph of connected pixels, and the properties of a pixel depend on the properties of its adjacent ones. But it was not really clear to me how the process takes place. The cost minimization step was also confusing: what parameters are we trying to minimize? And how does this lead to constructing a 3D image from the three orthogonal 2D ones?
Can anyone please explain to me how the conversion algorithm works using MRF in steps?
Thank You
I am trying to compute 3D coordinates from several pairs of two-view images.
First, I used the MATLAB function estimateFundamentalMatrix() to get F from the matched points (more than 8 of them), which is:
F1 =[-0.000000221102386 0.000000127212463 -0.003908602702784
-0.000000703461004 -0.000000008125894 -0.010618266198273
0.003811584026121 0.012887141181108 0.999845683961494]
And my camera, which took these two pictures, was pre-calibrated with the intrinsic matrix:
K = [12636.6659110566, 0, 2541.60550098958
0, 12643.3249022486, 1952.06628069233
0, 0, 1]
From this information I then computed the essential matrix using:
E = K'*F*K
Using the SVD method, I finally got the projection matrices:
P1 = K*[ I | 0 ]
and
P2 = K*[ R | t ]
Where R and t are:
R = [ 0.657061402787646 -0.419110137500056 -0.626591577992727
-0.352566614260743 -0.905543541110692 0.235982367268031
-0.666308558758964 0.0658603659069099 -0.742761951588233]
t = [-0.940150699101422
0.320030970080146
0.117033504470591]
I know there should be 4 possible solutions; however, my computed 3D coordinates did not seem to be correct.
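For reference, the standard SVD decomposition of E into the four candidate (R, t) pairs looks roughly like this in MATLAB (an illustration of the textbook recipe, not necessarily my exact code):

[U, ~, V] = svd(E);
W = [0 -1 0; 1 0 0; 0 0 1];
R1 = U*W*V';   R2 = U*W'*V';
if det(R1) < 0, R1 = -R1; end    % enforce proper rotations (det = +1)
if det(R2) < 0, R2 = -R2; end
t = U(:,3);                      % translation, known only up to scale
candidates = {R1, t; R1, -t; R2, t; R2, -t};
% Build P2 = K*[R t] for each candidate, triangulate the matched points,
% and keep the one solution that puts the points in front of both cameras.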
I used the camera to take pictures of a FLAT object with marked points. I matched the points by hand (so there should be no obvious mistakes in the raw matches). But the result turned out to be a surface with some bending in it.
I guess this might be because the pictures were not corrected for lens distortion (though I remember that I actually did correct them).
I just want to know whether this method of solving the 3D reconstruction problem is right, especially when we already know the camera intrinsic matrix.
Edit by JCraft, Aug. 4: I have redone the process and got some pictures showing the problem. I will write another question with details and then post the link.
Edit by JCraft, Aug. 4: I have posted a new question: Calibrated camera get matched points for 3D reconstruction, ideal test failed. And @Schorsch, I really appreciate your help formatting my question. I will try to learn how to format posts on SO and also try to improve my grammar. Thanks!
If you only have the fundamental matrix and the intrinsics, you can only get a reconstruction up to scale. That is, your translation vector t is in some unknown units. You can get the 3D points in real units in several ways:
You need to have some reference points in the world with known distances between them. This way you can compute their coordinates in your unknown units and calculate the scale factor that converts your unknown units into real units (see the sketch after these options).
You need to know the extrinsics of each camera relative to a common coordinate system. For example, you can have a checkerboard calibration pattern somewhere in your scene that you can detect and compute extrinsics from. See this example. By the way, if you know the extrinsics, you can compute the Fundamental matrix and the camera projection matrices directly, without having to match points.
You can do stereo calibration to estimate the R and the t between the cameras, which would also give you the Fundamental and the Essential matrices. See this example.
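As a hedged sketch of the first option: suppose pts3D is your N-by-3 reconstruction in unknown units, and you have measured the true distance knownDistMM between the points in rows i and j (all of these names are hypothetical):

measured = norm(pts3D(i,:) - pts3D(j,:));   % distance in the unknown units
scale    = knownDistMM / measured;          % conversion factor to millimetres
pts3D_mm = pts3D * scale;                   % rescaled reconstruction
t_mm     = t * scale;                       % rescale the baseline as well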
Flat objects are critical surfaces; it is not possible to achieve your goal from them. Try adding two (or more) points off the plane (see Hartley and Zisserman or another text on the matter if you are still interested).
I am developing a project for detecting vehicles' headlights in night scenes. First I am working on a demo in MATLAB. My detection method is edge detection using a Difference of Gaussians (DoG): I convolve the image with Gaussian blurs at two different sigmas and then subtract the two filtered images to find edges. My result is shown below:
Now my problem is to find a method in MATLAB to circle the round edges, such as car headlights and even street lights, and ignore other edges. If you have any suggestions, please tell me.
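For reference, the DoG step roughly looks like this in MATLAB (the file name, kernel sizes and sigma values here are only illustrative, not my actual settings):

img   = im2double(rgb2gray(imread('night_scene.jpg')));          % hypothetical input image
g1    = imfilter(img, fspecial('gaussian',  9, 1.0), 'replicate'); % blur with the smaller sigma
g2    = imfilter(img, fspecial('gaussian', 21, 2.5), 'replicate'); % blur with the larger sigma
dog   = mat2gray(g1 - g2);                                        % difference of Gaussians, rescaled to [0,1]
edges = dog > graythresh(dog);                                    % simple threshold on the response
imshow(edges);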
I think you may be able to get a better segmentation using a slightly different approach.
There is already strong contrast between the lights and the background, so you can take advantage of this to segment out the bright spots using a simple threshold, then you can apply some blob detection to filter out any small blobs (e.g. streetlights). Then you can proceed from there with contour detection, Hough circles, etc. until you find the objects of interest.
As an example, I took your source image and did the following (a rough MATLAB sketch follows the list):
Convert to 8-bit greyscale
Apply Gaussian blur
Threshold
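Something along these lines (the parameter values are guesses to tune, not the exact ones I used):

gray  = im2uint8(rgb2gray(imread('headlights.jpg')));            % hypothetical file name
blur  = imfilter(gray, fspecial('gaussian', 7, 2), 'replicate'); % Gaussian blur
bw    = blur > 200;                                              % bright-spot threshold (guess)
bw    = bwareaopen(bw, 50);                                      % drop very small blobs
stats = regionprops(bw, 'Centroid', 'EquivDiameter');            % candidate headlight blobs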
This is a section of the source image:
And this is the thresholded overlay:
Perhaps this type of approach is worth exploring further. Please comment to let me know what you think.
I am trying to detect corners (x/y coordinates) in 2D scatter vectors of data.
The data is from a laser rangefinder and our current platform uses Matlab (though standalone programs/libs are an option, but the Nav/Control code is on Matlab so it must have an interface).
Corner detection is part of a SLAM algorithm and the corners will serve as the landmarks.
I am also looking to achieve something close to 100 Hz in terms of speed if possible (I know it's MATLAB, but my data set is pretty small).
Sample Data:
[Blue is the raw data, red is what I need to detect. (This view is effectively top down.)]
[Actual vector data from above shots]
Thus far I've tried many different approaches, some more successful than others.
I've never formally studied machine vision of any kind.
My first approach was a homebrew least-squares line fitter that would split lines in half recursively until they met some r^2 value and then try to merge ones with similar slopes/intercepts. It would then calculate the intersections of these lines. It wasn't very good, but it did work around 70% of the time with decent accuracy, though it had some bad issues with missing certain features completely.
My current approach uses the clusterdata function to segment my data based on mahalanobis distance, and then does basically the same thing (least squares line fitting / merging). It works ok, but I'm assuming there are better methods.
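Roughly, the clustering step looks like this (the cutoff value is a placeholder rather than my tuned setting):

T = clusterdata(data, 'Distance', 'mahalanobis', ...
                'Criterion', 'distance', 'Cutoff', 4);  % data is the N-by-2 scan
for k = 1:max(T)
    seg = data(T == k, :);
    if size(seg, 1) >= 2
        p = polyfit(seg(:,1), seg(:,2), 1);  % slope/intercept of this segment
        % ...then split/merge segments and intersect neighbouring lines for corners
    end
end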
[Source code to current method] Calling [cnrs, dat, ~, ~] = CornerDetect(data, 4, 1) on the above data will produce the locations I am getting.
I do not need to write this from scratch; it just seemed like most of the higher-class methods are meant for 2D images or 3D point clouds, not 2D scatter data. I've read a lot about Hough transforms and all sorts of data clustering methods (k-means etc.). I also tried a few canned line detectors without much success. I tried to play around with the Line Segment Detector, but it needs a greyscale image as input and I figured it would be prohibitively slow to convert my vector data into a full 2D image to feed into something like LSD.
Any help is greatly appreciated!
I'd approach it as a problem of finding extrema of curvature that are stable at multiple scales - and the split-and-merge method you have tried with lines hints at that.
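A hedged sketch of that idea, assuming pts is an N-by-2 matrix of scan points ordered along the scan (the scales and the angle threshold are illustrative):

scales   = [2 4 8];                        % neighbourhood half-widths, in samples
isCorner = true(size(pts, 1), 1);
for s = scales
    prev = circshift(pts,  s);             % point s samples behind (wraps at the ends)
    next = circshift(pts, -s);             % point s samples ahead
    v1 = pts - prev;
    v2 = next - pts;
    ang = atan2(v1(:,1).*v2(:,2) - v1(:,2).*v2(:,1), sum(v1.*v2, 2));  % signed turning angle
    isCorner = isCorner & abs(ang) > 30*pi/180;   % keep angles that stay large at every scale
end
corners = pts(isCorner, :);                % for open scans, trim the first/last few samples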
You could use the Harris corner detector for detecting corners.
I have a binary image and I want to detect/trace curves in that image. I don't know anything about them in advance (coordinates, angle, etc.). Can anyone guide me on how I should start? Suppose I have this image:
I want to separate the curves from the other lines. I am only interested in curved lines and their parameters. I want to store the information about the curves (in an array) to use afterwards.
It really depends on what you mean by "curve".
If you want to simply identify each discrete collection of pixels as a "curve", you could use a connected-components algorithm. Each component would correspond to a collection of pixels. You could then apply some test to determine linearity or some other feature of the component.
If you're looking for straight lines, circular curves, or any other parametric curve you could use the Hough transform to detect the elements from the image.
The best approach is really going to depend on which curves you're looking for, and what information you need about the curves.
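A minimal sketch of both routes in MATLAB, assuming bw is your binary image with the curves as foreground pixels (the radius range passed to imfindcircles is a guess):

cc    = bwconncomp(bw);                              % one entry per connected component
stats = regionprops(cc, 'BoundingBox', 'Eccentricity', 'PixelIdxList');
[centers, radii] = imfindcircles(bw, [10 100]);      % circular Hough transform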
reference links:
Circular Hough Transform Demo
A Brief Description of the Application of the Hough Transform for Detecting Circles in Computer Images
A method for detection of circular arcs based on the Hough transform
Google goodness
Since you already seem to have a good binary image, it might be easiest to just separate the different connected components of the image and then calculate their parameters.
First, you can do the separation by scanning through the image, and when you encounter a black pixel you can apply a standard flood-fill algorithm to find all the pixels in your shape. If you have the MATLAB Image Processing Toolbox, you can use the bwconncomp and bwselect functions for this. If your shapes are not fully connected, you might apply a morphological closing operation to your image to connect the shapes.
After you have segmented out the different shapes, you can filter out the curves by testing how much they deviate from a line. You can do this simply by picking up the endpoints of the curve, and calculating how far the other points are from the line defined by the endpoints. If this value exceeds some maximum, you have a curve instead of a line.
Another approach would be to measure the ratio of the distance of the endpoints and length of the object. This ratio would be near 1 for lines and larger for curves and wiggly shapes.
If your images have angles, which you wish to separate from curves, you might inspect the directional gradient of your curves. Segment the shape, pick a set of equidistant points from it and, for each point, calculate the angle to the previous point and to the next point. If the difference between the angles is too large, you do not have a smooth curve but an angled shape.
Possible difficulties in implementation include thick lines, which you can solve by a skeleton transformation. For the MATLAB implementation of skeletonization and finding curve endpoints, see the MATLAB Image Processing Toolbox documentation.
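A hedged sketch of the segmentation and the deviation-from-line test, assuming img is the binary image with black (0) shapes on a white (1) background and a 3-pixel tolerance (both are assumptions, adjust as needed):

cc = bwconncomp(~img);                            % components of the black pixels
isCurve = false(cc.NumObjects, 1);
for k = 1:cc.NumObjects
    [r, c] = ind2sub(size(img), cc.PixelIdxList{k});
    % Rough endpoints: first and last pixel in the (column-major) index list;
    % for real use, order the pixels along the skeleton first.
    p1 = [c(1) r(1)];
    p2 = [c(end) r(end)];
    % Perpendicular distance of every pixel from the chord p1-p2
    d = abs((p2(1)-p1(1))*(p1(2)-r) - (p1(1)-c)*(p2(2)-p1(2))) / norm(p2 - p1);
    isCurve(k) = max(d) > 3;                      % illustrative pixel tolerance
end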
1) Read a book on Image Analysis
2) Scan for a black pixel; when one is found, look for neighbouring pixels that are also black, store their locations, then make them white. This gets the points in one object and removes it from the image. Keep repeating this until there are no remaining black pixels.
If you want to separate the curves from the straight lines, try line fitting and then getting the coefficient of correlation. Similar algorithms are available for curves, and the correlation tells you the closeness of the points to the idealised shape.
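For example, something like this (x and y would be the pixel coordinates of one component; note the plain correlation test breaks down for vertical lines):

r = corrcoef(x, y);                % correlation between the coordinates
isLine = abs(r(1, 2)) > 0.98;      % illustrative cutoff; tune for your data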
There is also another solution possible with the use of chain codes.
Understanding Freeman chain codes for OCR
The chain code basically assigns a value between 1 and 8 (or 0 to 7) to each pixel, saying at which location in its 8-connected neighbourhood its connected predecessor lies. Thus, as mentioned in Hackworth's suggestion, one performs connected-component labelling and then calculates the chain codes for each component curve. Looking at the distribution and the gradient of the chain codes, one can distinguish easily between lines and curves. The problem with the method, though, arises with oscillating curves, in which case the gradient is less useful and one depends on the clustering of the chain codes!
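A hedged MATLAB sketch of computing the chain code for one component boundary (bw is the binary image; the mapping from steps to codes is one possible convention):

B = bwboundaries(bw);                          % boundary pixel lists, one per component
b = B{1};                                      % [row col] pixels of the first boundary
d = diff(b);                                   % steps between successive boundary pixels
dirs = [0 1; -1 1; -1 0; -1 -1; 0 -1; 1 -1; 1 0; 1 1];   % 8-neighbour step directions
[~, code] = ismember(d, dirs, 'rows');
code = code - 1;                               % Freeman codes 0..7
% A straight line gives a near-constant code, a smooth curve a slow drift,
% and a sharp angle shows up as an abrupt jump in the code sequence.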
I'm no computer vision expert, but I think you could detect lines/curves in binary images relatively easily using some basic edge-detection algorithms (e.g. a Sobel filter).
I'm currently working with DICOM-RT files (which contain DICOM along with dose delivery data and structure set files). I'm mainly interested in the "structure set" file (i.e. RTSS.dcm), which contains the set of contour points for an ROI of interest. In particular, the contour points surround a tumor volume. For instance, a tumor would have a set of 5 contours, each contour being a set of points that encircle that slice of the tumor.
I'm trying to use MATLAB to take these contour points and construct a tumor volume in a binary 3D matrix (0 = non-tumor, 1 = tumor), and I need help.
One possible approach is to fill each contour set as a binary slice, then interpolate the volume between slices. So far I've used the fill or patch function to create binary cross-sections of each contour slice, but I'm having difficulty figuring out how to interpolate these binary slices into a 3D volume. None of the built-in functions appear to apply to this particular problem (although maybe I'm just using them wrong?). A simple linear interpolation doesn't seem appropriate either, since the edges of one contour should blend into the adjacent contour in all directions.
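One textbook idea for the in-between slices would be shape-based interpolation on signed distance maps, roughly like this (sliceA and sliceB being two adjacent binary slices; just a sketch of the idea):

dA = bwdist(~sliceA) - bwdist(sliceA);   % signed distance map: positive inside the contour
dB = bwdist(~sliceB) - bwdist(sliceB);
alpha = 0.5;                             % fraction of the way from slice A to slice B
mid = (1 - alpha) * dA + alpha * dB > 0; % interpolated binary slice between the two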
Another option would be to take the points and tessellate them (without making slices first). However, I don't know how to make MATLAB tessellate only the surface of the tumor and not the interior of the tumor volume; currently it seems to find triangles within the tumor. Even if I could get just a surface, I'm not sure how to take that and convert it into a binary 3D matrix volume either.
Does anyone have experience with either 3D slice interpolation OR tessellation techniques that might apply here? Or perhaps any relevant toolkits that exist? I'm stuck... :(
I'm open to approaches in other languages as well: I'm somewhat familiar with C# and Python, although I assumed MatLab would handle the matrix operations a little easier.
Thanks in advance!
I'm not sure which program you're exporting your DICOM-RT structure files from, but I believe I found a more elegant solution for you, already described in open-source software (GDCM, CMake, ITK) in an Insight Journal article.
I was discussing a similar problem with one of our physicists, and we saw your solution. It's fine if whatever structure you're attempting to binarize has no concavities, but if it does, they'll be rendered inaccurately.
This method is verified for dicom-rt structure sets from Eclipse and Masterplan. Hope it helps.
http://www.midasjournal.org/download/viewpdf/701/4
I think I found an answer in another post (here). Rather than trying to interpolate the "missing slices" between the defined contours, treating the contour points as a point cloud and finding the convex hull might be a more efficient way of doing it. This method created the binary 3D volume that I was after.
Here is the code I used, hope it might be helpful to those who need to work with DICOM-RT files:
function mask = DicomRT2BinaryVol(file)
points = abs(getContourPoints(file));
%%NOTE: The getContourPoints function simply reads the file using
%%'dicominfo' method and organizes the contour points into an n-by-3
%%matrix, each column being the X,Y,Z coordinates.
DT = DelaunayTri(points);
[X,Y,Z] = meshgrid(1:50,1:50,1:50);
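% NOTE: the 1:50 grid is hard-coded here; in general it should cover the
% actual coordinate range (and desired resolution) of the contour points.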
simplexIndex = pointLocation(DT, X(:), Y(:), Z(:));
mask = ~isnan(simplexIndex);
mask = reshape(mask,size(X));
end
This method is a slightly modified version of the method posted by @gnovice in the link above.
ITK is an excellent library for this sort of thing: http://www.itk.org/
HTH