MATLAB: Failed to create geometry. The stl file is invalid, more than two facets share an edge

I'm trying to create a crater alpha shape.
This is what I've written so far:
ddr = 0.12;
a = 1/ddr/2; % semi-major axis; the horizontal axes are equal
n = 91;      % number of points for the x and y vectors
x = linspace(-a,a,n);
y = linspace(-a,a,n);
[X,Y] = meshgrid(x,y);
Z = real(sqrt(1 - (X.^2)/a^2 - (Y.^2)/a^2))*-1 + 1; % the +1 keeps Z >= 0
shp = alphaShape(X(:),Y(:),Z(:));
plot(shp);
[elements,nodes] = boundaryFacets(shp);
nodes = nodes';
elements = elements';
model = createpde();
geometryFromMesh(model,nodes,elements);
But I'm getting the following error:
Failed to create geometry. The stl file is invalid, more than two facets share an edge.
Thank you for your help.

I suspect the problem comes from the interface of one object with another (possibly there is more than one object in your geometry), even though there are no overlaps. Try importing a single object instead of importing the whole geometry.
Another solution is to convert your STL file to faces and vertices using stlread.m and reconstruct the geometry, as sketched below.
Good luck!
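For reference, a minimal sketch of that second route, assuming you have an STL file on disk (the filename crater.stl is hypothetical). It uses the built-in stlread (R2018b and later), which returns a triangulation object; the older File Exchange stlread.m returns faces and vertices directly instead.
% Rebuild the PDE geometry from an STL file's faces and vertices.
% 'crater.stl' is a hypothetical filename.
TR = stlread('crater.stl'); % triangulation object (R2018b+)
model = createpde();
% geometryFromMesh expects nodes as 3-by-N and faces as 3-by-M
geometryFromMesh(model, TR.Points', TR.ConnectivityList');
pdegplot(model);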

Related

Confused about MATLAB visualization

I have a 24-dimensional dataset of 100,000 data points which are mapped onto a 2D plane using Sammon mapping (a method to reduce dimensionality and visualize). The scatter plot is shown below! It doesn't look too interesting...
But when I tilted my screen and looked at the figure, a square array of points appeared. I am not a MATLAB user, and I am confused about whether this has some meaning. For example, are the points forming clusters, or is this just some sorcery?
Use the scatter command and change the scatter properties:
fig = figure('Color','white');
sc = scatter(y_stacked(:,1), y_stacked(:,2));
sc.Marker = '.';
sc.SizeData = 1;
Then, make the figure window as big as possible and/or zoom in to examine your data points more closely.

Plotting arbitrary 3d finite element mesh with matlab

Hello guys, I am trying to export a mesh from MSC Patran and then plot it in MATLAB. The mesh can be of arbitrary shape. I have the x, y and z coordinates of all the nodes. So far I have tried many different options, and here is why they failed:
Surfc() with meshgrid and griddata:
I generated a grid on the x-y plane with meshgrid and then used griddata to obtain the z matrix. But this only works when there is a single z value for each x-y pair; in other words, z must be of the form z = f(x,y).
pdegplot(): I found out that MATLAB can import and plot .stl files. I tried converting my coordinate matrix to that format and plotting it with this function, but it doesn't work either, because apparently in .stl files an edge cannot be shared by more than two elements. However, my FEM files are always (I hope) shell elements, which means three or more elements can share the same edge.
Surfc() with 3D meshgrid: I found out that meshgrid() can take three inputs (x,y,z) and create a 3D mesh. This didn't work either: with a small mesh of about 1000 nodes, the code was trying to generate three matrices of 1000x1000x1000 elements, i.e. about 3 GB of memory for a 1000-node mesh. What's more, surfc couldn't plot even that.
Somehow importing other file formats automatically: so far I have been using Patran neutral files (.out). I manually read the file and extract the x, y, z data from it. Patran can also export to Parasolid, IGES and STEP file formats. I looked for direct ways of importing and plotting these in MATLAB, but as far as I can tell such functions don't exist.
Manually generating a grid: MATLAB can create 3D objects (like [x,y,z] = sphere()), and surfc() can plot these despite what I said in (1.), and the x,y,z matrices generated by sphere() are not three-dimensional as in (3.). So I tried manually generating such a grid from my FEM file just for testing. I found that z has repeating columns, and in each column (which acts as a layer) there are n values of x and y. When I tried doing the same thing for my mesh manually, surfc() didn't work again; it plotted a really weird shape that I can't even describe.
Finding a 3rd-party plotting software: I tried using (lightweight) software like gnuplot and VisIt, but so far I have had no luck. I am open to suggestions if you know any (preferably open-source) software that can directly plot Patran neutral files, but it has to be capable of contour plotting as well, since I am calculating a quantity for each node in MATLAB and then plotting its contours on the mesh.
So could you use tetramesh?
You seem to be working with FEM-style meshes, so the standard surface-plotting functions won't work. For FEM meshes with other element shapes (not tetrahedra) you may need to write your own plotting function; see the sketch below.
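A minimal sketch of such a function using patch, assuming the node coordinates are in an N-by-3 matrix nodes, the element connectivities in an M-by-3 matrix elems, and the per-node quantity to contour in an N-by-1 vector vals (all three names are placeholders for whatever you read from the Patran file):
% Plot a triangular shell mesh with interpolated nodal contours.
% nodes: N-by-3 [x y z], elems: M-by-3 connectivity, vals: N-by-1 data.
figure;
patch('Faces', elems, 'Vertices', nodes, ...
      'FaceVertexCData', vals, ... % one value per node
      'FaceColor', 'interp', ...   % interpolate colors across faces
      'EdgeColor', 'k');
view(3); axis equal; colorbar;
Unlike the STL route, patch does not care how many elements share an edge.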
If you have the grid points and grid cell connectivities in, say, variables p and c, you can use the external MATLAB FEA Toolbox to plot both structured and unstructured grids, for example with the plotgrid command:
% Create grid struct.
grid.p = p;
grid.c = c;
grid.a = gridadj(p,c,size(p,1)); % Reconstruct grid cell adjacencies.
grid.b = gridbdr(p,c,grid.a); % Reconstruct boundary information.
grid.s = ones(1,size(c,2)); % Set subdomain numbers to 1 for all grid cells.
% Plot grid.
plotgrid( grid )

Getting a list of pixels coordinates from a circular or oddly shape blob - Matlab

I am new to image processing, and I am trying to obtain a list of the pixel coordinates that lie within a circular/oval/oddly shaped blob.
The only way I can think of doing it is with a bounding box, but unfortunately the bounding box extends beyond the blob's area.
Anyone has a better idea?
Thanks
Just use find to obtain the pixel coordinates. Assuming your image is binary and stored in im, do:
[r,c] = find(im);
r and c will be the rows and columns of every pixel that is white. One caveat: this assumes the object is fully closed. If there are holes in the interior of the object, consider using imfill to fill them in, then combine it with find:
bw = imfill(im, 'holes');
[r,c] = find(bw);
If you have more than one object, use regionprops and specify the PixelList attribute:
s = regionprops(im, 'PixelList');
This returns an N-element structure array where each element contains a PixelList field holding the (x,y) coordinates of one unique object. Each list is an M x 2 matrix whose first column holds the x (column) coordinates and whose second column holds the y (row) coordinates.
To access an object's pixel coordinate list, simply do:
coords = s(idx).PixelList;
where idx is the index of the object you want to access.
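For instance, a small hypothetical usage sketch looping over every detected object:
s = regionprops(im, 'PixelList');
for idx = 1:numel(s)
    coords = s(idx).PixelList; % M x 2, columns are (x, y)
    % ... process the coordinates of object idx here ...
end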

Output of delaunay triangulation from lidar data

I have to generate a mesh from a 3D point cloud, so I used the delaunay function to triangulate the points. The lidar data is the result of a scan of a human head.
dt = delaunay(X,Y);
trisurf(dt,X,Y,Z);
When I use delaunay with two inputs it gives me output, but it is not perfect, so I tried three inputs (X,Y,Z):
dt = delaunay(X,Y,Z);
trisurf(dt,X,Y,Z);
But now the result comes out even worse. I don't know what the problem is.
This is the full code that I have written:
load Head_cloud_point.txt;
data = Head_cloud_point;
X = data(:, 1);
Y = data(:, 2);
Z = data(:, 3);
[m n] = size(X);
[o p] = size(Y);
[r s] = size(Z);
[XI,YI]= meshgrid(X(m,n),Y(o,p));
ZI = interp2(X,Y,Z,XI,YI);
% dt = delaunay(X,Y);
% trisurf(dt,X,Y,ZI);
Head_cloud_point is the file with X,Y,Z coordinates. I have to generate the mesh using these coordinates.
Well, Delaunay is not going to do the trick directly here, neither the 2D nor the 3D version. The main reason is the way Delaunay triangulation works. You can get part of the way there, but in general the result is not going to be perfect.
You have not specified whether the point cloud is the surface of the head or the entire volume of the head (though another answer indicates the former).
First, remember that Delaunay triangulates the convex hull of the data, filling in any concavities; e.g. a C-like shape will have the inner part of the C triangulated (ending up like a mirrored-D triangulation).
Assuming the point cloud is the surface of the head.
When using 2D Delaunay on all (X,Y), it cannot distinguish between coordinates at the top of the head and at the bottom/neck, so it will mix those when generating the triangulation. Basically, you cannot have two layers of skin at the same (X,Y) coordinate.
One way to circumvent this is to split the data into a top and a bottom part, probably around the height of the tip of the nose, triangulate them individually, and merge the result. That could give something fairly nice to look at, though there are other places with similar issues, for example around the lips and ears. You may also have to connect the two triangulations, which is somewhat difficult to do.
Another alternative is to transform the (X,Y,Z) to spherical coordinates (radius, theta, gamma) with the origin in the center of the head, and then use 2D Delaunay on (theta,gamma), as sketched below. That may not work well around the ears, where there can be several layers of skin in the same (theta,gamma) direction, which again Delaunay will mix. Also, at the back of the head (at the coordinate discontinuity) some connections will be missing. For the rest of the head, the results should be nice. Note that the Delaunay triangulation in (theta,gamma) is not a Delaunay triangulation in (X,Y,Z) (the circumcircle associated with each triangle may contain other points in its interior), but for visualization purposes it is fine.
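A minimal sketch of that spherical approach, assuming X, Y, Z are column vectors that have already been translated so the origin sits at the center of the head:
% Triangulate in angle space, then plot back in Cartesian space.
[theta, gamma, r] = cart2sph(X, Y, Z); % azimuth, elevation, radius
tri = delaunay(theta, gamma);          % 2D Delaunay on the angles
trisurf(tri, X, Y, Z);                 % reuse the connectivity in 3D
axis equal;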
When using 3D Delaunay on (X,Y,Z), all concavities are filled in, especially around the tip of the nose and the eyes. In this case you would need to remove all elements/rows of the triangulation matrix that represent something "outside" the head, which seems difficult to do with the data at hand.
For a perfect result, you need another tool. Try searching for something like:
meshing of surface point cloud
Since you have a cloud of raw data representing a 3D surface, you need to do a 3D surface interpolation to remove the noise. This determines a function z = f(x,y) that best fits your data. To do that, you can use griddata, TriScatteredInterp (deprecated) or interp2.
Note: From the context of your question, I assumed you use MATLAB.
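A minimal sketch of that interpolation, assuming X, Y, Z are column vectors and the surface really is single-valued in z (which, as the edit below notes, is not the case for a full head):
% Interpolate scattered (X,Y,Z) onto a regular grid and plot it.
[XI, YI] = meshgrid(linspace(min(X), max(X), 100), ...
                    linspace(min(Y), max(Y), 100));
ZI = griddata(X, Y, Z, XI, YI); % only valid when z = f(x,y)
surf(XI, YI, ZI);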
[EDIT]
As you indicated, your data represents a head. The surface of a head is roughly a spheroid, so it is not a function of the form z = f(x,y). See this post concerning possible solutions for visualizing spherical surfaces: http://www.mathworks.com/matlabcentral/newsreader/view_thread/2287.

DICOM affine matrix transformation from image space to patient space in Matlab

From the NIfTI header it's easy to get the affine matrix. However, the DICOM header contains lots of entries, and it's unclear to me which entries describe the transformation of which parameter to which new space.
I have found a tutorial which is quite detailed, but I can't find the entries it refers to. Also, that tutorial is written for Python, not MATLAB. It lists these header entries:
Entries needed:
Image Position (0020,0032)
Image Orientation (0020,0037)
Pixel Spacing (0028,0030)
I can't find these when I load the header with dicominfo(). Maybe they are vendor-specific, or maybe they are nested away somewhere in the struct. Also, the Pixel Spacing they refer to consists of two values, so I think their tutorial only works for single-slice transformations; more header entries about slice thickness and slice gap would be needed. It's also not easy to calculate the correct transformation for the z coordinates.
Does anybody know how to find these entries, or how to transform image coordinates to patient coordinates with other information from a DICOM header? I use MATLAB.
OK, so they were nested away in what might be a vendor-specific entry of the struct. When loaded in MATLAB, the nest is inf.PerFrameFunctionalGroupsSequence.Item_X, with X the frame number, followed by some more nesting that is straightforward/self-explanatory, so I won't spell it out here; search for the entries you need there. The slice spacing is called SpacingBetweenSlices (or SliceThickness in the single-slice case), the pixel spacing is called PixelSpacing, and then there are ImagePositionPatient for the translation and ImageOrientationPatient for the rotation. Below is the code I wrote when following the steps from the nipy tutorial linked below.
What happens is this: you load the direction cosines into a rotation matrix to align the basis vectors, you load the pixel spacing and slice spacing into a matrix to scale the basis vectors, and you load the image position to translate the new coordinate system. Finding the direction cosines for the z direction takes some calculation, because DICOM apparently was designed for 2D images. In the single-slice case, the z direction cosine is the unit vector orthogonal to the x and y direction cosines (their cross product); in the multi-slice case, you can calculate it from the differences in translation between the slices. After this you still have to apply the transformation, which is also not immediately straightforward.
%load the header
inf = dicominfo(filename, 'dictionary', yourvendorspecificdictfilehere);
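% note: the variable name 'inf' shadows MATLAB's built-in Inf; a different name would be safer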
nSl = double(inf.MRSeriesNrOfSlices);
nY = double(inf.Height);
nX = double(inf.Width);
T1 = double(inf.PerFrameFunctionalGroupsSequence.Item_1.PlanePositionSequence.Item_1.ImagePositionPatient);
%load pixel spacing / scaling / resolution
RowColSpacing = double(inf.PerFrameFunctionalGroupsSequence.Item_1.PixelMeasuresSequence.Item_1.PixelSpacing);
% or inf.PerFrameFunctionalGroupsSequence.Item_1.PrivatePerFrameSq.Item_1.Pixel_Spacing
dx = double(RowColSpacing(1));
dX = [1; 1; 1].*dx;%cols
dy = double(RowColSpacing(2));
dY = [1; 1; 1].*dy;%rows
dz = double(inf.SpacingBetweenSlices);%inf.PerFrameFunctionalGroupsSequence.Item_1.PrivatePerFrameSq.Item_1.SliceThickness; %thickness or spacing?
dZ = [1; 1; 1].*dz;
%directional cosines per basis vector
dircosXY = double(inf.PerFrameFunctionalGroupsSequence.Item_1.PlaneOrientationSequence.Item_1.ImageOrientationPatient);
dircosX = dircosXY(1:3);
dircosY = dircosXY(4:6);
if nSl == 1
    dircosZ = cross(dircosX,dircosY);%orthogonal to the other two direction cosines!
else
    N = nSl;%double(inf.NumberOfFrames);
    TN = double(-inf.PerFrameFunctionalGroupsSequence.(sprintf('Item_%d', N)).PlanePositionSequence.Item_1.ImagePositionPatient);
    dircosZ = ((T1-TN)./nSl)./dZ;
end
%all dircos together
dimensionmixing = [dircosX dircosY dircosZ];
%all spacing together
dimensionscaling = [dX dY dZ];
%mixing and spacing of dimensions together
R = dimensionmixing.*dimensionscaling;%maps from image basis to patientbasis
%offset and R together
A = [[R T1];[0 0 0 1]];
%you probably want to switch X and Y
%(depending on how you load your dicom into a matlab array)
Aold = A;
A(:,1) = Aold(:,2);
A(:,2) = Aold(:,1);
This results in the affine formula [x; y; z; 1] = A * [i; j; k; 1], mapping (zero-based) voxel indices i, j, k to patient coordinates x, y, z in millimeters.
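A hypothetical usage sketch (remembering that MATLAB indices are one-based while the DICOM mapping is zero-based, and assuming the column swap above matches how your array was loaded):
% Map the one-based MATLAB voxel index (r, c, sl) to patient coordinates.
r = 10; c = 20; sl = 5;              % example indices
xyz = A * [r - 1; c - 1; sl - 1; 1]; % homogeneous patient coordinates
xyz = xyz(1:3);                      % x, y, z in mm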
So basically I followed this tutorial. The biggest struggle was getting the Z direction and the translation correct. Finding, identifying, and converting the correct entries was also not straightforward for me. I do think my answer adds something to that tutorial, though, because it was pretty hard to find the entries they refer to, and now there is MATLAB code for getting the affine matrix from a DICOM header. Before using the resulting affine matrix you might also need to find the Z coordinates for all of your frames, which may not be trivial if your dataset has more than four dimensions (dicomread puts all higher dimensions into one big fourth dimension).
-Edit-
Corrected Z direction and translation of the transformation