Given data points in a 3D neighbourhood (see the attached picture), I would like to define a measure of spatial spread in order to accept or reject the data set as a candidate for further analysis. One measure could be how many of the cells in the 3D subvolume (the bounding box) are occupied by the neighbourhood. Another would be how far the data stretches along each of the axes. Are there other, better criteria for measuring spatial spread?
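For concreteness, here is a minimal sketch of the two measures I have in mind, assuming P is an N-by-3 matrix of the points and nBins a grid resolution (both names are placeholders of mine):

% Sketch: occupancy fraction of an nBins^3 grid over the bounding box,
% plus the extent of the data along each axis.
nBins = 10;                                      % grid resolution (assumption)
lo = min(P, [], 1);  hi = max(P, [], 1);         % bounding box corners
ijk = ceil( nBins * (P - lo) ./ max(hi - lo, eps) );
ijk = min(max(ijk, 1), nBins);                   % clamp points on the box faces
occupancy = size(unique(ijk, 'rows'), 1) / nBins^3;  % fraction of occupied cells
extent = hi - lo;                                % stretch along each axis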
Thanks
I'm trying to construct a kind of graph in MATLAB:
For each vertex I know its neighbours (thus, I have the edge list) and the distance between the vertex and each neighbour. This distance is saved as the weight of the edge.
So, the weights are actually the physical distances between the nodes.
So far I have only managed to draw larger weights as wider lines, but that is not enough.
What I would actually like is for larger weights to be drawn as longer lines, so that I can visually reconstruct a suitable geometry from my data.
Any tips?
EDIT: The distances are such that the drawing is possible.
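One way to get such a drawing, sketched under the assumption that the weights are (close to) Euclidean distances: embed the nodes with classical multidimensional scaling (cmdscale, Statistics Toolbox), so that drawn edge lengths approximate the weights. Here E is the M-by-2 edge list and w the weight vector (my names):

% Sketch: lay out the nodes so edge lengths approximate the weights.
n  = max(E(:));                               % number of nodes
G  = graph(E(:,1), E(:,2), w);                % weighted undirected graph (R2015b+)
D  = distances(G);                            % all-pairs shortest-path distances
xy = cmdscale(D, 2);                          % 2-D embedding approximating D
gplot(sparse(E(:,1), E(:,2), 1, n, n), xy);   % draw edges at the embedded lengths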
I have a list of triangles in 3D that form a surface (i.e. a triangulation). The structure is a deformed triangular lattice. I want to know the change in area of the deformed hexagons of the Voronoi tessellation of the lattice with respect to the rest area of the undeformed lattice cells (i.e. with respect to a regular hexagon). In fact, what I really want is the sum of the squared change in area of the hexagonal unit cells associated with those triangles.
Background/Math details:
I'm approximating a curved elastic sheet by a triangular lattice. One way to tune the Poisson ratio (an elastic constant) of the sheet is to add a 'volumetric' strain energy term to the energy. I'm trying to compute the 'volumetric' strain energy of a deformed elastic triangular lattice, defined as U_volumetric = (1/2) T (e_v)^2, where e_v = deltaV/V is determined by the change in area of a Voronoi cell with respect to its reference area, which is a known constant.
Reference: https://www.researchgate.net/publication/265853755_Finite_element_implementation_of_a_non-local_particle_method_for_elasticity_and_fracture_analysis
Want:
Sum[ (DeltaA/A).^2 ] over all hexagonal cells.
My data is stored in the variables:
xyz = [ x1,y1,z1; x2,y2,z2; etc ] % the vertices/particles in 3D
TRI = [ vertex0, vertex1, vertex2; etc ] % where vertex0 is the row of xyz for the particle sitting at vertex 0 of the first triangle
NeighborList = [ p1n1, p1n2, p1n3, p1n4, p1n5, p1n6; p2n1, ... ] % where p1n1 is particle 1's first nearest neighbor as a row index into xyz. For example, xyz(NeighborList(1,1),:) returns the xyz location of particle 1's first neighbor.
AreaTRI = [ areaTRI1; areaTRI2; etc ]
I am writing this in MATLAB.
As of now, I am approximating the area attributed to each vertex as 1/3 of each incident triangle's area, summed over the 6 nearest-neighbor triangles. But a Voronoi cell area will NOT be exactly equal to Sum_{i=1..6} (1/3)*areaTRI_i, so this is a bad approximation. See the image in the link above, which I think makes this clearer.
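For reference, the approximation I currently use, as a sketch over the whole mesh (accumarray spreads each triangle's area over its three vertices):

% Rough per-vertex area: each triangle contributes one third of its area
% to each of its three vertices (the approximation described above).
approxArea = accumarray(TRI(:), repmat(AreaTRI/3, 3, 1), [size(xyz,1), 1]);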
You can do this using the DUALMESH submission on the File Exchange:
DUALMESH is a toolbox of mesh processing routines that allow the construction of "dual" meshes based on underlying simplicial triangulations. Support is provided for various planar and surface triangulation types, including non-Delaunay and non-manifold types.
Simply use the following commands to generate a vector areas containing the areas of all the dual elements. The ordering corresponds to the nodes in xyz.
[cp,ce,pv,ev] = makedual2(xyz, TRI);          % construct the dual mesh
[~,areas(cp(:,1))] = geomdual2(cp,ce,pv,ev);  % areas of the dual (Voronoi-like) cells
You might want to have a look at the boundary areas using:
trisurf(TRI, xyz(:,1), xyz(:,2), areas);  % plot cell area as height over the (x,y) plane
The dual cells of boundary nodes are theoretically unbounded and should thus have infinite area. This submission handles them differently, however: instead of an unbounded cell, it returns the intersection of the unbounded cell with the original mesh.
Also note that your question is not well defined if the mesh you are working with is not planar, as the dual mesh cells will be planar and won't scale the same way as the triangles. So this solution will probably only work correctly if your mesh is really 2D. (From what I can tell, the paper you mention also only covers the 2D case.)
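From there, the sum you asked for follows directly. A sketch, assuming the rest lattice constant a is known (for a triangular lattice with spacing a, the undeformed Voronoi cell has area sqrt(3)/2*a^2):

a  = 1.0;                     % rest lattice spacing (assumption)
A0 = sqrt(3)/2 * a^2;         % rest area of an undeformed Voronoi cell
e  = (areas(:) - A0) ./ A0;   % relative area change per cell
U  = sum(e.^2);               % Sum[ (DeltaA/A).^2 ] over all cells
% (boundary cells are clipped, as noted above, so you may want to exclude them)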
I am currently doing some image segmentation on a bone qCT picture, see for instance images below.
I am trying to find the different borders in the picture, for instance the outer border separating the bone from the noisy background. From this analysis I get a list of points (vec(1,:) containing the x values and vec(2,:) containing the y values) in random order.
To get them into order I am using a block of code which effectively takes the first point (vec(1,1), vec(2,1)), finds the closest point among the rest of the points in the vector, and then repeats.
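In other words, roughly this sketch (assuming vec is 2-by-N, as described above):

% Greedy nearest-neighbour ordering of the boundary points.
pts = vec.';                              % N-by-2: one point per row
ordered = zeros(size(pts));
ordered(1,:) = pts(1,:);                  % start from the first point
pts(1,:) = [];
for k = 2:size(ordered,1)
    d = sum(bsxfun(@minus, pts, ordered(k-1,:)).^2, 2);  % squared distances
    [~,j] = min(d);
    ordered(k,:) = pts(j,:);              % append the nearest remaining point
    pts(j,:) = [];
end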
Now my problem is that I want to smooth the data, but how do I do that when the points lie in a circular formation? (I do have the Curve Fitting Toolbox.)
Not exactly a smoothing procedure, but one way to simplify your data would be to compute the convex hull and keep only its boundary points.
K = convhull(vec(1,:), vec(2,:));  % indices of the hull vertices, in order
plot(vec(1,K), vec(2,K));          % closed outline of the hull
You could also consider using alpha shapes if you want more control.
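For example, a minimal alphaShape sketch (R2014b+); the alpha radius here is an arbitrary guess you would tune for your data:

shp = alphaShape(vec(1,:).', vec(2,:).', 5);  % alpha radius 5 is an assumption
plot(shp);                                    % tighter, possibly concave outline
hold on
plot(vec(1,:), vec(2,:), '.');                % original points for comparison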
I am working with structured 2.5D and unstructured 3D data, which is generally available in (X,Y,Z) coordinates, i.e. point clouds. Now I want to impose a regular voxel grid onto the data. This is not meant for visualization purposes, but rather for "cleaning" or fusing the data. Imagine a case where, e.g., 3 points fall within the volume of one voxel. Then I would aim at either just setting this voxel to "activated" and discarding the 3 original points, or alternatively calculating the Euclidean mean of the points and returning the thus "cleaned" point cloud as an irregularly structured one again.
I hope I have made my intentions clear enough: it's not about visualization, and not necessarily about using volumetric cubes instead of points. It's only about manipulating possibly irregular point clouds in a structured way.
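To make the second variant concrete, a rough sketch of what I mean, with P as an N-by-3 point cloud and h a voxel edge length (placeholder names):

% Sketch: snap points to a voxel grid and average the points per voxel.
h = 0.05;                                   % voxel edge length (assumption)
idx = floor(P ./ h);                        % integer voxel coordinates per point
[~, ~, ic] = unique(idx, 'rows');           % group points by occupied voxel
cleaned = [accumarray(ic, P(:,1), [], @mean), ...
           accumarray(ic, P(:,2), [], @mean), ...
           accumarray(ic, P(:,3), [], @mean)];   % one mean point per voxel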
I was thinking about kd-tree or octree based solutions in this context, but can anybody point me in the proper direction? Hinting at existing MATLAB implementations would be most appreciated.
If the data is irregularly spaced, what you want to use is something which both smooths and interpolates your data points. A very good method for doing so is Gaussian process regression. Here's an example for the same problem but in 2D.
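A minimal sketch of that kind of example, using fitrgp from the Statistics and Machine Learning Toolbox (R2015b+); the names xy (N-by-2 sample locations) and z (noisy values) are assumptions:

gpr = fitrgp(xy, z, 'KernelFunction', 'squaredexponential');  % fit the GP
[xq, yq] = meshgrid(linspace(min(xy(:,1)), max(xy(:,1)), 50), ...
                    linspace(min(xy(:,2)), max(xy(:,2)), 50));
zq = predict(gpr, [xq(:), yq(:)]);           % posterior mean = smoothed values
surf(xq, yq, reshape(zq, size(xq)));         % smoothed, interpolated surface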
The output of some processing consists of a binary map with several connected areas.
The objective is, for each area, to compute and draw on the image a line crossing the area along its longest axis, but not extending further. It is very important that the line lie entirely inside the area, so ellipse fitting is not a good option.
Any hint on how to achieve this result in an efficient way?
If you have the Image Processing Toolbox you can use regionprops, which will give you several standard measures of any binary connected region. This includes the major and minor axis lengths of the ellipse with the same normalized second central moments as the region.
You can also get the tightest rectangular bounding box, centroid, perimeter, and orientation. These will all help you with ellipse fitting.
Depending on how you would like to draw your lines, regionprops also returns the major and minor axis lengths of 2-D connected regions on a per-region basis, giving you a vector of axis lengths. If you specify 4-connectivity you can be fairly sure that the length lies within the connected region, but this is not guaranteed, since regionprops calculates the major axis length of an ellipse that has the same normalized second central moments as the connected region.
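A sketch of that approach, assuming BW is your binary map; the endpoints follow from the centroid, orientation, and major axis length (the minus sign accounts for the image y-axis pointing down):

cc = bwconncomp(BW, 4);                               % 4-connected regions
s  = regionprops(cc, 'Centroid', 'MajorAxisLength', 'Orientation');
imshow(BW); hold on
for k = 1:numel(s)
    c = s(k).Centroid;
    r = s(k).MajorAxisLength / 2;
    o = s(k).Orientation;                             % degrees, CCW from x-axis
    plot(c(1) + [-r r]*cosd(o), c(2) - [-r r]*sind(o), 'r');
end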
My first inclination would be to treat the pixels as 2D points and use principal component analysis. PCA will give you the major axis of each region (princomp, or pca in newer releases, if you have the Statistics Toolbox).
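A sketch for a single labelled region, assuming L is a label image (e.g. from bwlabel, mentioned below) and k a region index:

[y, x] = find(L == k);        % pixel coordinates of region k
coeff  = pca([x, y]);         % principal axes (Statistics Toolbox)
dir1   = coeff(:,1);          % unit vector along the region's major axis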
Regarding making line segments rather than full lines: not knowing anything about the shape of these regions, an efficient method doesn't occur to me. Assuming the regions can have arbitrary shape, you could simply trace along each line until you reach the edge of the region, then repeat in the other direction.
I assumed you already have the binary image divided into regions. If this isn't the case, you could first use bwlabel (if the regions aren't touching) or k-means (if they are).