I am looking for a fast way to compute the volume of the intersection of two tetrahedra using CGAL.
If I read and understood the CGAL manual correctly, I could create the two tetrahedra as 3D Nef polyhedra, compute their intersection as a 3D Nef polyhedron, and then calculate its volume.
Is that correct, or is there a better/easier way to accomplish this?
The final goal is to create an algorithm that builds a mapping between two tetrahedral meshes (how much of element i in mesh A is contained in element j in mesh B).
I have an .obj file representing a 3D object.
I need to extract from this 3D object the contour obtained by intersecting it with a plane. For example, if I have an object representing a cylinder with a vertical axis, I want to extract a circular contour when the intersecting plane is horizontal, and a rectangular contour when the intersecting plane is vertical. Any suggestions on how to do this?
Since I didn't know how to visualise this obj file, I converted it to a patch with the following code (some functions are taken from loadawobj on the MATLAB File Exchange).
modelname = 'file.obj';
S = loadawobj(modelname);                % read vertices, faces and material assignments
mtl = loadawmtl(['obj/' S.mtllib]);      % read the material library referenced by the obj
p3 = patch('Vertices', S.v', 'Faces', S.f3');
fvcd3 = zeros(length(S.umat3), 3);       % preallocate one RGB colour per face
for ii = 1:length(S.umat3)
    mtlnum = S.umat3(ii);                % material index of face ii
    fvcd3(ii,:) = mtl(mtlnum).Kd';       % use that material's diffuse colour
end
p3.FaceVertexCData = fvcd3;
p3.FaceColor = 'flat';
But I don't necessarily need to extract the contour from the resulting patch if this is too complex to accomplish. If there is an easier procedure starting from the obj file, it's also fine and acceptable. Thank you!
That's the way I solved the problem, after collecting information from around the web. I couldn't find anything ready-made online, so I had to implement an algorithm on my own. The basic idea is very simple, but several steps are required. I start from two pieces of information: one array containing the coordinates of the point cloud, and another array of tuples describing which three vertices are connected to form each triangle.
First of all, you need a representation of the plane you want to use for the cut: one point on the plane plus the plane normal is enough. This plane is required in order to identify the cutting points on the structure.
The second step is to identify the triangles crossed by the plane. In short, you loop over all the triangles of the structure and keep those that have at least one corner above the cutting plane and one corner below it. Don't forget to account for the cases where one corner, or two corners, lie exactly on the plane. All the other triangles are not needed, since they lie entirely above or entirely below the cutting plane.
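A rough MATLAB sketch of this selection step, assuming V is an N-by-3 vertex array, F an M-by-3 face array, and p0 (1-by-3) and n the point and normal defining the plane:
d = (V - p0) * n(:);        % signed distance of every vertex to the plane (implicit expansion)
s = sign(d(F));             % M-by-3 matrix of signs, one per triangle corner
% a triangle crosses the plane unless all of its corners lie on the same side
crossing = ~(all(s >= 0, 2) | all(s <= 0, 2));
Fcut = F(crossing, :);      % only these triangles contribute to the contour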
Now you have a subset of all your triangles, and you need to extract the contour points from it. Each triangle has 3 vertices: in the general case one vertex is above the plane and the other two are below (or the other way round), so two of the triangle's edges cross the plane. You get two contour points by simply intersecting those two edges with the cutting plane.
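Continuing the sketch above, the cut points can be obtained by linear interpolation along the crossing edges (d, V and Fcut as before):
edges = [1 2; 2 3; 3 1];
contour = zeros(0, 3);
for k = 1:size(Fcut, 1)
    tri = Fcut(k, :);
    for e = 1:3
        a = tri(edges(e,1));  b = tri(edges(e,2));
        if d(a)*d(b) < 0                                     % this edge really crosses the plane
            t = d(a) / (d(a) - d(b));                        % interpolation parameter along the edge
            contour(end+1,:) = V(a,:) + t*(V(b,:) - V(a,:)); %#ok<AGROW>
        end
    end
end
% corners lying exactly on the plane (d == 0) need a little extra handling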
By repeating this operation you get a series of points in 2D space, but they have no order: if you plot them as a continuous line you get segments jumping back and forth, because the extracted points are stored in the array in an arbitrary order. So it is necessary to order them properly. The method I used is very simple: start from one point and connect it to the closest remaining one. There are some bad configurations where this fails, but you can probably avoid them by adding some more rules to the algorithm.
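A naive nearest-neighbour ordering of the extracted points (P being the K-by-3 array of contour points from the sketch above) could look like this:
order = zeros(size(P,1), 1);
used  = false(size(P,1), 1);
order(1) = 1;  used(1) = true;                 % start from an arbitrary point
for k = 2:size(P,1)
    dist = sum((P - P(order(k-1),:)).^2, 2);   % squared distances to the previous point
    dist(used) = inf;                          % skip points already in the chain
    [~, nxt] = min(dist);
    order(k) = nxt;  used(nxt) = true;
end
Psorted = P(order, :);                         % contour points in walking order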
What is the best way to:
1. Detect a plane in a set of 3D points with high noise around part of it (not all the points)?
2. Extract the plane equations of two intersecting planes in 3D data?
Thanks in advance
Use the pcfitplane function in the Computer Vision System Toolbox.
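For example, assuming xyz is an N-by-3 array of points and maxDistance a noise tolerance chosen for your data, something along these lines should work:
ptCloud = pointCloud(xyz);                         % wrap the raw points
maxDistance = 0.02;                                % inlier threshold, in the data's units
[model1, inlierIdx, outlierIdx] = pcfitplane(ptCloud, maxDistance);
disp(model1.Parameters)                            % [a b c d] of a*x + b*y + c*z + d = 0
% for two intersecting planes, fit a second plane to the leftover points
model2 = pcfitplane(select(ptCloud, outlierIdx), maxDistance);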
I want to generate a mesh for solving the Laplace equation using FEM. I want to solve it on a cube, and I would like to mesh the cube using tetrahedra. Is there a way to do this in MATLAB using unstructured meshes, i.e. so that I can choose whether the density of elements should be higher in a specific region?
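One possible starting point, assuming the Partial Differential Equation Toolbox is available, is generateMesh, which builds an unstructured tetrahedral mesh; Hmax sets a global target element size (region-dependent refinement needs additional work, e.g. splitting the geometry into cells):
model = createpde();
model.Geometry = multicuboid(1, 1, 1);                          % unit cube
generateMesh(model, 'Hmax', 0.1, 'GeometricOrder', 'linear');   % plain 4-node tetrahedra
pdeplot3D(model)                                                % inspect the mesh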
I am working with structured 2.5D and unstructured 3D data, which is generally available as (X,Y,Z) coordinates, i.e. point clouds. Now I want to impose a regular voxel grid onto the data. This is not meant for visualization purposes, but rather for "cleaning" or fusing the data. I imagine cases where e.g. 3 points fall within the volume of one voxel. Then I would aim at either just setting this voxel to "activated" and discarding the 3 original points, or alternatively I would calculate the Euclidean mean of the points and return the thus "cleaned" point cloud as an irregularly structured one again.
I hope I could make my intentions clear enough: it's not about visualization and not necessarily about using volumetric cubes instead of points. It's only about manipulating possibly irregular point clouds in a structured way.
I was thinking about kd-tree or octree based solutions in this context, but can anybody point me in the proper direction? Hinting at existing MATLAB implementations would be most appreciated.
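A minimal sketch of the voxel-averaging idea described above, assuming pts is an N-by-3 array and voxelSize an edge length chosen for your data:
voxelSize = 0.05;                                        % edge length of one voxel
ijk = floor((pts - min(pts, [], 1)) / voxelSize) + 1;    % integer voxel index of each point
[vox, ~, id] = unique(ijk, 'rows');                      % one row per occupied voxel
% either keep 'vox' as the set of activated voxels, or average the points per voxel:
meanPts = [accumarray(id, pts(:,1), [], @mean), ...
           accumarray(id, pts(:,2), [], @mean), ...
           accumarray(id, pts(:,3), [], @mean)];         % "cleaned" irregular cloud
If the Computer Vision Toolbox is available, pcdownsample(pointCloud(pts), 'gridAverage', voxelSize) performs essentially the same grid averaging in one call.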
If the data is irregularly spaced, what you want to use is something which both smooths and interpolates your data points. A very good method for doing so is Gaussian process regression. Here's an example for the same problem but in 2D.
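As an illustration only (not the example referred to above), a minimal sketch for noisy (x, y) data using fitrgp from the Statistics and Machine Learning Toolbox might look like:
x = linspace(0, 10, 200)';                        % sample locations
y = sin(x) + 0.2*randn(size(x));                  % noisy observations
gprMdl  = fitrgp(x, y, 'KernelFunction', 'squaredexponential');
ysmooth = predict(gprMdl, x);                     % smoothed / interpolated values
plot(x, y, '.', x, ysmooth, '-')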
I am to create a 3D volume out of a grayscale image set using MATLAB. A set contains a continuous stack of quantized 2D grayscale image slices. I still consider myself a rookie in MATLAB, but this is what I currently have in mind:
Create an empty space for the 3D volume.
On each image, perform all the preprocessing operations so that we only keep the part that is of interest. (In this question, assume that this preprocessing part always works flawlessly.)
Go through the image; each pixel's x and y coordinates in 2D are transferred to the empty space. For the z coordinate, we can use the slice number together with the distance between slices. If a pixel is adjacent to another pixel, the 3D points are connected together.
Repeat the previous 2 steps until all slices are done. We will now have all the points connected just like in the 2D slices.
But here comes the trouble: how can we connect the points between the slices, so that these points become a volume? Or is there a more robust way to do this in MATLAB? Any suggestion is highly appreciated.
Part 0 - Assumptions
all 2D images are of the same dimension, hence your 3D volume can hold all of them in a rectangular cube
the majority of the pixels in each of the 2D images have 3D spatial relationships (you can't visualize much if the pixels in each of the 2D images follow some random distribution)
Part 1 - Visualizing 3D Volume from A Stack of 2D Images
To visualize or reconstruct a 3D volume from a stack of 2D images, you can try the following toolkits in MATLAB.
[1] 3D CT/MRI images interactive sliding viewer
http://www.mathworks.com/matlabcentral/fileexchange/29134-3d-ctmri-images-interactive-sliding-viewer
[2] Viewer3D
http://www.mathworks.com/matlabcentral/fileexchange/21993-viewer3d
[3] Image3
http://www.mathworks.com/matlabcentral/fileexchange/21881-image3
[4] Surface2Volume
http://www.mathworks.com/matlabcentral/fileexchange/8772-surface2volume
[5] SliceOMatic
http://www.mathworks.com/matlabcentral/fileexchange/764
Note that if you are familiar with VTK, you can try this:
[6] matVTK
http://www.cir.meduniwien.ac.at/matvtk/
I am currently sticking with [5] SliceOMatic for its simplicity and ease of use. However, by default, 3D rendering is quite slow in MATLAB. Turning on OpenGL gives faster rendering (http://www.mathworks.com/help/techdoc/ref/opengl.html), or simply put, set(gcf, 'Renderer', 'OpenGL').
Part 2 - Interpolating pixels in between the slices
To interpolate pixels in between the slices, you need to specify an interpolation method (some of the above toolkits have this capability/flexibility). To give you a head start, some examples of interpolation methods are bicubic, spline, polynomial, etc.; you can look on Google or Google Scholar for interpolation methods more specific to your problem domain.
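As a small example, assuming the slices have already been stacked into a 3-D array V, the slice spacing can be refined with interp3 (here each gap is split into four):
[ny, nx, nz] = size(V);
[X, Y, Z]    = meshgrid(1:nx, 1:ny, 1:nz);           % original voxel grid
[Xq, Yq, Zq] = meshgrid(1:nx, 1:ny, 1:0.25:nz);      % 4x finer along the slice axis
Vq = interp3(X, Y, Z, V, Xq, Yq, Zq, 'cubic');       % 'linear' or 'spline' also work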
Part 3 - 3D Pre-processing
Looking at your procedure, you process the volumetric data by processing each of the 2D images first. In many advanced algorithms, or in true 3D processing, what you can do instead is process the volumetric data in the 3D domain first (simply put, you take the 26 neighbors or more into account first). Once this step is done, you can simply output the volumetric data as a stack of 2D images for cross-sectional viewing, supply it to one of the aforementioned toolkits for 3D viewing, or export it to third-party 3D viewing applications.
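For instance, a simple true-3D operation on a stacked volume V (rather than slice-by-slice filtering) is a box or Gaussian smoothing over each voxel's 26-neighbourhood:
Vs = smooth3(V, 'box', 3);         % 3x3x3 box filter over each voxel and its 26 neighbours
% Vs = smooth3(V, 'gaussian', 3);  % Gaussian-weighted alternative
% Vs can then be written back out slice by slice or handed to the viewers above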
I have followed the above concepts for my own medical imaging research projects and the above finding is based on my research experience documented here (with latest revisions).
MATLAB generally plots volumetric data using a 3D array. The data points are spatially evenly separated along each axis. If there are sites in the 3D array for which you do not have data, they are usually assigned the NaN value, and the various plotting functions can generally handle this in a reasonable way (i.e. they will generally behave as you intended).
If you load the slices into the 3D array such that adjacent points in the z-direction of the data are also adjacent in the 3rd dimension of the array, then you should be fine.
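A minimal sketch of that loading step, assuming the slices are same-sized grayscale image files in a hypothetical slices/ folder, named so that they sort in order:
files = dir('slices/*.png');                               % one file per slice, in order
first = imread(fullfile(files(1).folder, files(1).name));
V = zeros([size(first), numel(files)]);                    % rows x cols x number of slices
for k = 1:numel(files)
    V(:,:,k) = imread(fullfile(files(k).folder, files(k).name));
end
% V(i,j,k) is now the intensity at pixel (i,j) of slice k; missing slices can be set to NaN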