How to join vertices to form a triangular mesh? - MATLAB

I have a set of triangulated mesh data. For every triangular face, I want to form a smaller, similar triangle inside it. I have generated three new vertices for the new triangle in each original face. How can I join the three generated vertices in each face using MATLAB?
Thanks for all help!

The vertices in your triangulated mesh data are stored in a list:
v[0] = (x0,y0,z0)
v[1] = (x1,y1,z1)
.
.
v[n] = (xn,yn,zn)
Alongside this list, there is a list of triangle (or polygon) data:
triangle[0] = (0,1,2) // these are the indices of the vertex in the vertex list
triangle[1] = (1,2,3)
.
.
triangle[k] = ( , , )
If your new vertices are generated in face order (three per face), the connectivity of the new mesh is simply consecutive index triples: the i-th new face joins new vertices 3i-2, 3i-1 and 3i (1-based, as in MATLAB). Copy the new vertex list and this triangle list to a new mesh data object and your mesh is created, as in the sketch below.
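A minimal MATLAB sketch of this indexing, assuming V (n-by-3) and F (k-by-3) hold the original vertices and faces, and that each inner triangle shrinks the face towards its centroid (the shrink factor s is an illustrative assumption, not from the question):
% V: n-by-3 vertex list, F: k-by-3 face list of the original mesh
s = 0.5;                           % assumed shrink factor for the inner triangle
k = size(F, 1);
newV = zeros(3*k, 3);              % three new vertices per original face
newF = zeros(k, 3);
for i = 1:k
    corners = V(F(i,:), :);        % 3-by-3 matrix of this face's corner coordinates
    c = mean(corners, 1);          % face centroid
    inner = c + s * (corners - c); % move each corner towards the centroid
    newV(3*i-2:3*i, :) = inner;    % store the three new vertices in face order
    newF(i, :) = [3*i-2, 3*i-1, 3*i];   % join them as one triangle
end
trisurf(newF, newV(:,1), newV(:,2), newV(:,3))   % display the inner triangles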

Related

How to attribute intersection points between poly and lineseg to poly

I'm trying to plot an animal's trajectory from a set of coordinates as a line segment. I want to see how many coordinates are plotted inside a circular zone versus outside. I think coordinates that intersect the circle are being counted as both inside and outside, but I would like them to be counted strictly as inside. This is what I have so far:
[in,out] = intersect(circle_poly,trajectory);
plot(circle_poly)
hold on
plot(in(:,1),in(:,2),'b',out(:,1),out(:,2),'r')
legend('Polygon','Inside','Outside','Location','NorthWest')
num_frames_in = numel([in]) %count num elements/frames in polygon
num_frames_out = numel([out]) %count num element/frames outside polygon
total_frames = num_frames_in + num_frames_out
Any help would be really appreciated since I'm new to Matlab!
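A hedged sketch of one way to get a strict per-coordinate count, assuming circle_poly is a polyshape and trajectory is an N-by-2 coordinate array as above: classify the original coordinates with isinterior (which treats boundary points as inside) rather than counting elements of the clipped segments. Note that numel on an N-by-2 array counts x and y separately, and intersect inserts NaN separator rows between disjoint segments, so the counts above are inflated.
% classify each original coordinate exactly once; boundary points count as inside
inside = isinterior(circle_poly, trajectory(:,1), trajectory(:,2));
num_frames_in  = sum(inside)       % coordinates inside (or on) the circle
num_frames_out = sum(~inside)      % coordinates strictly outside
total_frames   = num_frames_in + num_frames_out   % equals size(trajectory,1)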

How to close a mesh in Unity?

My project is: the user draws with a finger and I generate a field based on that.
This is what I already get from the user's drawing:
So this is a succession of meshes, but it's not closed. I just generate the mesh in one direction, with some height.
I need to close it. I don't want to be able to see through it.
My problem is: the drawing is random, so there are convex and non-convex parts. Let me illustrate:
1- First I put a yellow circle on each point of my mesh (I have the list of points with their (x,y,z) coordinates).
2- Then, with each run of 3 consecutive points, I try to make a triangle:
This is OK when the shape we want to fill is convex, but it will (I think) break if the shape is concave:
And there is also this kind of bug when the mesh is too big:
In the end, I just want to be able to close any shape I draw. I hope that's clear.
The answer was to use a triangulation algorithm. I used this repo: https://github.com/mattatz/unity-triangulation2D
Just add this to your code:
using mattatz.Triangulation2DSystem;
and you can run the example from the GitHub repo:
// input points for a Polygon2D contour
List<Vector2> points = new List<Vector2>();
// Add Vector2 to points
points.Add(new Vector2(-2.5f, -2.5f));
points.Add(new Vector2(2.5f, -2.5f));
points.Add(new Vector2(4.5f, 2.5f));
points.Add(new Vector2(0.5f, 4.5f));
points.Add(new Vector2(-3.5f, 2.5f));
// construct Polygon2D
Polygon2D polygon = Polygon2D.Contour(points.ToArray());
// construct Triangulation2D with Polygon2D and threshold angle (18f ~ 27f recommended)
Triangulation2D triangulation = new Triangulation2D(polygon, 22.5f);
// build a mesh from triangles in a Triangulation2D instance
Mesh mesh = triangulation.Build();
// GetComponent<MeshFilter>().sharedMesh = mesh;
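A minimal way to run this in a scene, as a sketch: put the snippet in a MonoBehaviour on a GameObject that has a MeshFilter and MeshRenderer (the class name ContourMesh is made up for illustration; the Triangulation2D calls are the ones from the repo's example above):
using System.Collections.Generic;
using UnityEngine;
using mattatz.Triangulation2DSystem;

[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class ContourMesh : MonoBehaviour {
    void Start() {
        // the contour points would come from the user's finger drawing
        List<Vector2> points = new List<Vector2>() {
            new Vector2(-2.5f, -2.5f), new Vector2(2.5f, -2.5f),
            new Vector2(4.5f, 2.5f),  new Vector2(0.5f, 4.5f),
            new Vector2(-3.5f, 2.5f)
        };
        Polygon2D polygon = Polygon2D.Contour(points.ToArray());
        Triangulation2D triangulation = new Triangulation2D(polygon, 22.5f);
        // assign the generated mesh so the closed shape becomes visible
        GetComponent<MeshFilter>().sharedMesh = triangulation.Build();
    }
}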

How to get intersection area coordinates of two polygons in the General Polygon Clipper (GPC)?

I'm using the nutiteq library to draw polygons and getting the coordinates of the polygons with the .getVertexList() command. Then I copy these coordinates into an array list, and from there into another polygon list. GPC calculates the intersection, union, XOR and difference areas as integer values. I then need to highlight the processed area, so I need the processed area's coordinates, but I can't get these coordinates directly from GPC.
The code I'm using for the area calculation is below. What should I do to get the coordinates of the result polygon? (I can't cast the coordinates directly, by the way, as you can see here...)
Thanks in advance.
public void IntersectionButton(View view) {
    VectorElement selectedElement = mapView.getSelectedElement();
    List<?> visibleElements = selectedElement.getLayer().getVisibleElements();
    ArrayList<Poly> polyList = new ArrayList<Poly>();
    for (Object obj : visibleElements) {
        if (obj instanceof Polygon) {
            Polygon poly = (Polygon) obj;
            List<MapPos> geoList = poly.getVertexList();
            Poly p = new PolyDefault();
            for (MapPos pos : geoList) {
                p.add(pos.x, pos.y);
            }
            polyList.add(p);
        }
    }
    PolyDefault result = (PolyDefault) Clip.intersection(polyList.get(0), polyList.get(1));
    int area = (int) (((int) result.getArea()) * (0.57417));
The result polygon seems to have all the methods you need:
getNumPoints() to get the number of outer polygon points.
getX(i) to get the X of a specific outer polygon point, and getY(i) for the Y.
getNumInnerPoly() to get the number of holes in the polygon.
getInnerPoly(i) to get a specific hole. You iterate through a hole the same way as through the outer polygon.
You can construct a new Nutiteq Polygon from this data: create a list of MapPos for the outer ring and a list of lists of MapPos for the inner polygons (holes), as sketched below. What the X and Y values are, and whether they need further processing, is another question you can investigate.
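A sketch of that construction, assuming the Poly accessors listed above and Nutiteq's MapPos type (result is the PolyDefault from the intersection code above):
// outer ring of the result polygon
List<MapPos> outer = new ArrayList<MapPos>();
for (int i = 0; i < result.getNumPoints(); i++) {
    outer.add(new MapPos(result.getX(i), result.getY(i)));
}
// one list of MapPos per hole
List<List<MapPos>> holes = new ArrayList<List<MapPos>>();
for (int h = 0; h < result.getNumInnerPoly(); h++) {
    Poly hole = result.getInnerPoly(h);
    List<MapPos> holeRing = new ArrayList<MapPos>();
    for (int i = 0; i < hole.getNumPoints(); i++) {
        holeRing.add(new MapPos(hole.getX(i), hole.getY(i)));
    }
    holes.add(holeRing);
}
// outer and holes can now be passed to a Nutiteq Polygon constructor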

How can I draw a polygon on intersection coordinates on Nutiteq?

Hey people, this is going to be my first question, so don't hit me too hard!
I have already added polygons before, but the intersection is a bit more complicated.
By pre-defined I mean, for example, the intersection coordinates of two other polygons. I'm calculating the area of the polygon intersection, but I also want to highlight that area. Thanks.
You need two steps:
calculate the intersection polygon from the 2 polygons. I would use JTS for it; you would need to provide the data as JTS objects.
highlight the intersection on the map view (Nutiteq, for example). You can add the resulting polygon as one geometry element to a geometry layer, just like any other polygon. Use special styling to make it look different. You would need to convert the JTS polygon to a Nutiteq Polygon object to show it on the map.
ArrayList<MapPos> keslist = new ArrayList<MapPos>();
for (int i = 0; i < sonuc.getNumPoints(); i++) {
    double lon = sonuc.getX(i);
    double lat = sonuc.getY(i);
    MapPos mPos = new MapPos(lon, lat);
    keslist.add(mPos);
}
PolygonStyle polygonStyle = PolygonStyle.builder().setColor(Color.GREEN).build();
StyleSet<PolygonStyle> polygonStyleSet = new StyleSet<PolygonStyle>(null);
polygonStyleSet.setZoomStyle(10, polygonStyle);
Polygon KesisimPol = new Polygon(keslist, new DefaultLabel("Kesişim"), polygonStyleSet, null);
GeometryLayer geomLayer = new GeometryLayer(mapView.getLayers().getBaseLayer().getProjection());
mapView.getLayers().addLayer(geomLayer);
geomLayer.add(KesisimPol);
Here is my solution. I've tried it, and it works. Right now I'm trying to add this new polygon to the editable objects layer, because I can't use the result polygon in another intersection process.
I hope this will help others.

How to find the 3D coordinates of a surface from the click location of the mouse on the ILNumerics surface plots?

Currently our system uses the ILNumerics 3D plot cube class with an ILNumerics surface component to display a 3D meshed surface. An aim for our system is to be able to interrogate individual points on the surface from a mouse click on the plot. We have the MouseClick event set up on our plot; the problem is that I am unsure how to get the values for the particular point on the surface that has been clicked. Could anyone help with this issue?
The conversion from 2D mouse coordinates to 3D 'model' coordinates is possible - under some limitations:
The conversion is ambiguous: a mouse event only provides two dimensions, the X and Y screen coordinates, and in the 3D model there may be more than one point 'behind' that 2D screen point. Therefore, the best you can get is to compute a line in 3D, starting at the camera and extending to infinite depth.
While in theory it would be possible at least to try to find the intersections of that line with the 3D objects, ILNumerics currently does not do this. Even in the simple case of a surface it is easy to construct a 3D model which crosses the line at more than one point.
For a simplified situation a solution exists: if the Z coordinate in 3D does not matter, one can use common matrix conversions to acquire the X and Y coordinates in 3D and use those only. Let's say your plot is a 2D line plot or a surface plot watched only from 'above' (i.e., the unrotated X-Y plane), so the Z coordinate of the clicked point is not of interest. Let's further assume you have set up an ILScene scene in a common Windows Forms application with an ILPanel:
private void ilPanel1_Load(object sender, EventArgs e) {
    var scene = new ILScene() {
        new ILPlotCube(twoDMode: true) {
            new ILSurface(ILSpecialData.sincf(20,30))
        }
    };
    scene.First<ILSurface>().MouseClick += (s, arg) => {
        // we start at the mouse event target -> this will be the
        // surface group node (the parent of "Fill" and "Wireframe")
        var group = arg.Target.Parent;
        if (group != null) {
            // walk up to the next camera node,
            // collecting all transforms on the path
            Matrix4 trans = group.Transform;
            while (!(group is ILCamera) && group.Parent != null) {
                group = group.Parent;
                trans = group.Transform * trans;
            }
            if (group is ILCamera) {
                // convert arg.LocationF to world coords
                // The Z coord is not provided by the mouse! -> choose arbitrary value
                var pos = new Vector3(arg.LocationF.X * 2 - 1, arg.LocationF.Y * -2 + 1, 0);
                // invert the matrix.
                trans = Matrix4.Invert(trans);
                // trans now converts from the world coord system (at the camera) to
                // the local coord system in the 'target' group node (surface).
                // In order to transform the mouse (viewport) position, we
                // left multiply the transformation matrix.
                pos = trans * pos;
                // view result in the window title
                Text = "Model Position: " + pos.ToString();
            }
        }
    };
    ilPanel1.Scene = scene;
}
What it does: it registers a MouseClick event handler on the surface group node. In the handler it accumulates the transformation matrices on the path from the clicked target (the surface group node) up to the next camera node the surface is a child of. During rendering, the (model) coordinates of the vertices are transformed by the local coordinate transformation matrix hosted in every group node. All transformations are accumulated, so the vertex coordinates end up in the 'world coordinate' system established by every camera. This is how rendering finds the 2D screen position from the 3D model vertex positions.
In order to find the 3D position from the 2D screen coordinates, one must go the other way around. In the example, we acquire the transformation matrices for every group node, multiply them all up and invert the resulting transformation matrix. The inversion is needed because such transforms naturally describe the conversion from a child node to its parent: world = T_camera * ... * T_surface * local, hence local = (T_camera * ... * T_surface)^-1 * world, which is exactly what the accumulated and inverted trans computes.
This method gives the correct 3D coordinates at the mouse position. However, keep the limitations in mind! The example does not take into account any rotation of the plot cube (the plot cube must be left unrotated) or any projection transforms (plot cubes use an orthographic transform by default, which is basically a no-op). In order to handle those variables as well, you may extend the example accordingly.