How to check for "island" vertices in Unity

So, I'm using a boolean operation to get the intersection of a bunch of pieces and a wall. Most pieces work fine, but occasionally the intersection isn't perfect and you get vertices that aren't connected to the rest of the mesh, and this results in the mesh collider being incorrect, as seen in this picture.
My question is whether there is a way to detect these 'island' or 'lone' vertices.
I can provide additional images, code, or such if needed.
Thanks for any help! P.S. First question here, so please be patient with me :)

In the end, I kind of solved it by finding all connected vertices starting from a single vertex.
I started by picking the first triangle and adding its vertices to a list of connected vertices. Then I go through the list of triangles, comparing the positions of their vertices to the list of connected vertices. If a triangle has a vertex whose position matches a position in the list of connected vertices, I add the entire triangle to the list. That's one iteration, and I repeat it until all connected triangles are in the list. If the connected-triangle list is more than half of all triangles, I remove all other triangles; otherwise, I remove the current list of connected triangles. After that, I clear the vertices that aren't in any triangle, like Leo Bartkus suggested.
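For reference, here is a rough C# sketch of that approach, assuming a single-submesh Unity Mesh and exact position matches (boolean output may need a small epsilon instead); the class and method names are made up for illustration.

```csharp
// Rough sketch: grow an "island" of triangles from the first triangle by matching
// vertex positions, keep whichever side is larger, and rebuild the mesh so that
// unreferenced vertices are dropped. Not optimized; assumes a single submesh.
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

public static class MeshIslands
{
    public static void RemoveSmallIsland(Mesh mesh)
    {
        Vector3[] verts = mesh.vertices;
        int[] tris = mesh.triangles;
        if (tris.Length == 0) return;
        int triCount = tris.Length / 3;

        // Positions reachable from the first triangle.
        var connected = new HashSet<Vector3> { verts[tris[0]], verts[tris[1]], verts[tris[2]] };
        var inIsland = new bool[triCount];
        inIsland[0] = true;

        bool grew = true;
        while (grew)
        {
            grew = false;
            for (int t = 0; t < triCount; t++)
            {
                if (inIsland[t]) continue;
                Vector3 a = verts[tris[t * 3]], b = verts[tris[t * 3 + 1]], c = verts[tris[t * 3 + 2]];
                // A triangle joins the island if any of its vertex positions is already connected.
                if (connected.Contains(a) || connected.Contains(b) || connected.Contains(c))
                {
                    inIsland[t] = true;
                    connected.Add(a); connected.Add(b); connected.Add(c);
                    grew = true;
                }
            }
        }

        // Keep whichever set of triangles is larger.
        int islandSize = inIsland.Count(x => x);
        bool keepIsland = islandSize >= triCount - islandSize;

        var keptTris = new List<int>();
        for (int t = 0; t < triCount; t++)
            if (inIsland[t] == keepIsland)
                keptTris.AddRange(new[] { tris[t * 3], tris[t * 3 + 1], tris[t * 3 + 2] });

        // Re-index so vertices no longer referenced by any triangle are removed.
        var remap = new Dictionary<int, int>();
        var newVerts = new List<Vector3>();
        var newTris = new List<int>();
        foreach (int oldIndex in keptTris)
        {
            if (!remap.TryGetValue(oldIndex, out int newIndex))
            {
                newIndex = newVerts.Count;
                newVerts.Add(verts[oldIndex]);
                remap[oldIndex] = newIndex;
            }
            newTris.Add(newIndex);
        }

        mesh.Clear();
        mesh.SetVertices(newVerts);
        mesh.SetTriangles(newTris, 0);
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
    }
}
```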
It's extremely slow and it assumes there are only 2 separate islands or that you started on the biggest island, but it worked most of the time and was more for learning purposes anyways.
Thanks for the help!

Related

three.js calculate surfaces of stl files

I think I have a difficult problem right here...
I want to be able to get the surfaces of, for example, the orange object in this three.js example https://threejs.org/examples/?q=stl#webgl_loader_stl
I want to click with the mouse and find the correct surface, which should then be highlighted, so I can make sure this is the surface I want.
(I already implemented the raycaster successfully, so that's not an issue.)
The intersectObject method returns an array of intersections, each of which has a face property. The face contains vertex indices.
For STL files containing multiple solids, each solid is assigned to a different group, and the groups are available in the geometry object that is returned from STLLoader. Each group is defined by a range of vertex indices.
So, I think you can correlate the vertex indices returned from the raycaster with the vertex indices in the geometry groups.
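Just to illustrate that correlation in a framework-neutral way (sketched here in C#): given the vertex index of the face the raycaster hit, find which group's index range contains it. The Group type below only mirrors the start/count information that STLLoader puts on geometry.groups; the names are mine.

```csharp
// Minimal sketch of the index-range lookup: each group covers the half-open
// range [Start, Start + Count) of vertex indices.
public readonly struct Group
{
    public Group(int start, int count) { Start = start; Count = count; }
    public readonly int Start;
    public readonly int Count;
}

public static class GroupLookup
{
    // Returns the index of the group containing the hit vertex index, or -1 if none does.
    public static int FindGroup(Group[] groups, int hitVertexIndex)
    {
        for (int i = 0; i < groups.Length; i++)
            if (hitVertexIndex >= groups[i].Start && hitVertexIndex < groups[i].Start + groups[i].Count)
                return i;
        return -1;
    }
}
```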

Unity3D dynamic mesh with hole

Dynamically creating a mesh with a hole in it from two lists of vertices
I am currently attempting to dynamically create a mesh (2D) with a hole in it. I have a list of Vector3 vertices for both the outline and the hole's outline.
My question:
How would I go about merging these two lists of vertices into a single mesh?
More detail: I have two meshes that overlap, and I'm trying to do a boolean difference between the two, to create a new mesh that will eventually replace the bigger one, to get rid of clipping. Example
Using the Clipper-Library (see http://www.angusj.com/delphi/clipper.php) is of no use, because it returns the same two sets of vertices that I set as input.
I'm guessing I need to somehow fix the triangles for the mesh, creating triangles between the outer and inner vertices? (The meshes can be any shape/size, so finding out which vertices to combine into triangles is no easy task.)
Can anybody tell me how I would create a single mesh out of the two vertex-loops?
If you need a generic boolean algorithm, this is a very hard problem; for example, 3D Studio Max has two separate boolean mesh creators, each failing on different sets of objects.
If you only need to subtract rectangular, axis-aligned shapes which do not touch, it's simpler. For your specific case you can just join the two lists of vertices and fill a new list of triangles - you'll need two tris per quad, so that's eight triangles stretched across eight vertices.
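A minimal sketch of that simple case, assuming an axis-aligned rectangular outline with an axis-aligned rectangular hole strictly inside it, both given as four corners wound in the same order starting from matching corners (those assumptions, and the names, are mine):

```csharp
// Builds the "ring" between an outer quad and an inner quad as eight triangles
// stretched across the eight vertices. Winding may need flipping depending on
// which way your mesh should face.
using System.Collections.Generic;
using UnityEngine;

public static class RingMeshBuilder
{
    public static Mesh Build(Vector3[] outer, Vector3[] inner)
    {
        var vertices = new List<Vector3>(outer);   // indices 0..3
        vertices.AddRange(inner);                  // indices 4..7

        var triangles = new List<int>();
        for (int i = 0; i < 4; i++)
        {
            int o0 = i, o1 = (i + 1) % 4;          // outer edge
            int i0 = 4 + i, i1 = 4 + (i + 1) % 4;  // matching inner edge
            // Two triangles per quad strip between the outer and inner edge.
            triangles.AddRange(new[] { o0, o1, i1 });
            triangles.AddRange(new[] { o0, i1, i0 });
        }

        var mesh = new Mesh();
        mesh.SetVertices(vertices);
        mesh.SetTriangles(triangles, 0);
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
        return mesh;
    }
}
```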
It gets a bit harder if they start to touch, as you need to find intersection points and basically re-triangulate the outline.

How Does Unity Assign Pivot Point Location on Script Generated Meshes

I have tried to find information on how Unity assigns pivot points to objects, but all I keep finding are threads on how to move pivot points and that it can't be done. I am creating a 2D game with a background that is randomly created from meshes wrapped in empty GameObjects. These objects are organically shaped, but they have a property that returns a rectangle bounding the object so that they can be placed without overlapping. The trouble is that the algorithm assumes the pivot point is going to be the center of the object. What I would like to know is how Unity decides where the pivot point will be, so that I can predict how much I need to move my mesh inside the parent object to put the pivot point at the center of the bounding rectangle.
Possible fix:
Try creating the meshes at runtime and see if Unity always places the pivot point at a certain corner, or at least at the same relative location.
If it does, you would know where the pivot point is and could take it into account in your code, provided you also know the size of the mesh you spawn.
So I think the most general and correct answer I can come up with is that Unity assigns the pivot point to the center of the GameObject that you apply the mesh to. Depending on how you create them, the local coordinates of the mesh's vertices might place the mesh so that its logical center is not the same as that of the empty GameObject it is attached to. What I did to fix the issue was to make a vector from the local point (0,0,0) to the center of the bounding rectangle and translate the vertices I use to build my mesh by that vector inverted. It wasn't perfect, but it was by far close enough to ensure that I won't have any overlapping meshes.
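Here is a small sketch of that re-centering, assuming the mesh's bounds center is a good enough stand-in for the center of the bounding rectangle (the class and method names are made up):

```csharp
// Shift every vertex so the mesh's bounds center ends up at the local origin,
// which is where the empty parent GameObject's pivot sits.
using UnityEngine;

public static class MeshRecenter
{
    public static void CenterOnPivot(Mesh mesh)
    {
        mesh.RecalculateBounds();
        Vector3 offset = mesh.bounds.center;   // vector from local (0,0,0) to the bounds center

        Vector3[] verts = mesh.vertices;
        for (int i = 0; i < verts.Length; i++)
            verts[i] -= offset;                // translate by the inverted vector

        mesh.vertices = verts;
        mesh.RecalculateBounds();
    }
}
```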

THREE.JS connecting two geometries (sweep rail, caps)

I have two geometries; one is made from the other by offsetting its vertices,
so both have the same structure and hierarchy.
Need to connect these two geometries with caps (yellow geometry).
I'm pretty sure the problem could be solved by finding edge points (yellow lines) on both sides for each element. Since these geometries have the same number of vertices and the same hierarchy, caps could be easily calculated.
But, for now, I don't have any idea how to determine these edge points.
My solution to this problem. Not ideal.
http://vkuchinov.github.io/BuildingCaps/

Open GL - ES 2.0 : Touch detection

Hi guys, I am doing some work on iOS and the work requires the use of OpenGL ES. So now I have a bunch of squares, cubes and triangles on the screen. Some of these geometries might overlap. Any ideas/approaches for touch detection?
Regards
To follow up on the answer already given, squares, cubes and triangles are convex shapes so you can perform ray-object intersection quite easily, even directly from the geometry rather than from the mathematical description of the perfect object.
You're going to need to be able to calculate the distance of a point from a plane and the intersection of a ray with a plane. As a simple test you can implement yourself very quickly: for each polygon on the convex shape, work out the intersection between the ray and its plane. Then check whether that point is behind all the planes defined by polygons that share an edge with the one you just tested. If so, the hit is on the surface of the object, though you should be careful about coplanar adjoining polygons and rounding errors.
Once you've found a collision you can easily get the length of the ray to the point of collision. The object with the shortest distance is the one that's in front.
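A hedged sketch of that test, written with Unity vector types purely for brevity; for simplicity it checks the candidate point against every face plane of the convex shape rather than only the edge-adjacent ones, which gives the same result for a convex object (the types and names are mine):

```csharp
// Ray vs. convex polyhedron: intersect the ray with each front-facing face plane,
// keep only intersection points that lie behind (or on) all face planes, and
// return the nearest such hit.
using UnityEngine;

public struct Face
{
    public Vector3 Normal;      // outward-facing unit normal
    public Vector3 PointOnFace; // any vertex of the polygon
}

public static class ConvexPicking
{
    // Returns the distance along the ray to the nearest hit, or null if the ray misses.
    public static float? Raycast(Vector3 origin, Vector3 direction, Face[] faces)
    {
        float? best = null;
        foreach (Face face in faces)
        {
            float denom = Vector3.Dot(direction, face.Normal);
            if (denom >= 0f) continue;                       // back-facing or parallel plane

            float t = Vector3.Dot(face.PointOnFace - origin, face.Normal) / denom;
            if (t < 0f) continue;                            // plane is behind the ray origin

            Vector3 hit = origin + direction * t;

            // The hit is on the object if it lies behind (or on) every face plane.
            bool inside = true;
            foreach (Face other in faces)
            {
                if (Vector3.Dot(hit - other.PointOnFace, other.Normal) > 1e-4f) // epsilon for rounding
                {
                    inside = false;
                    break;
                }
            }

            if (inside && (best == null || t < best.Value))
                best = t;
        }
        return best;
    }
}
```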
If that's fast enough then great, otherwise you'll probably want to look into partitioning the world or breaking objects down to their silhouettes. Convex objects are really simple: consider all the edges that run between one polygon and the next. If exactly one of those polygons is front facing, then the edge is part of the silhouette. All the silhouette edges together can be projected to a convex 2D shape on the view plane. You can then test touches by performing a 2D point-in-polygon test against that.
A further common alternative that eliminates most of the maths is picking. You'd render the scene to an invisible buffer with each object appearing as a solid blob in a suitably unique colour. To test for touch, you'd just do a glReadPixels and inspect the colour.
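Just to illustrate the unique-colour part, here is a tiny helper (sketched in C#, names mine) that packs a small integer id into an 8-bit-per-channel RGB colour and unpacks it again after the read-back:

```csharp
// One colour per pickable object: encode the object's id into RGB before the
// off-screen render, then decode the pixel returned by the read-back.
public static class PickingColors
{
    public static (byte r, byte g, byte b) Encode(int id) =>
        ((byte)((id >> 16) & 0xFF), (byte)((id >> 8) & 0xFF), (byte)(id & 0xFF));

    public static int Decode(byte r, byte g, byte b) => (r << 16) | (g << 8) | b;
}
```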
For the purposes of glu on the iPhone, you can grab SGI's implementation (as used by MESA). I've used its tessellator in a shipping, production project before.
I had that problem in the past. What I have used is an implementation of gluUnProject that you can find on Google (it uses the inverse of the model-view-projection matrix and the viewport size). This allows you to map 2D screen coordinates to a 3D vector into the world. Then, you can use this vector to intersect with your objects and see which one intersects (or comes really close to doing so).
I do hope there are better ways of doing this, so I look forward to other answers as well!
Once you get the inverse modelview and cast your ray (vector), you still need to know whether the ray intersects your geometry. One approach would be to grab the depth (z in the view coordinate system) of the object's center and extend (stretch) your vector just that far. Then see whether the vector's "head" ends within the volume of your object or not (you need the object's center and, e.g., its radius, if it's a sphere).
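A sketch of that depth-stretch check, assuming the ray origin, ray direction and object center are all expressed in the same view space and the object is approximated by a bounding sphere (Unity vector types are used only for brevity; the names are mine):

```csharp
// Stretch the ray until its head reaches the object's depth, then test whether
// the head lands inside the object's bounding sphere.
using UnityEngine;

public static class TouchHitTest
{
    public static bool HitsSphere(Vector3 rayOrigin, Vector3 rayDirection, Vector3 objectCenter, float radius)
    {
        if (Mathf.Approximately(rayDirection.z, 0f)) return false;   // ray never reaches that depth

        // Scale the direction so its head sits at the object's view-space depth.
        float t = (objectCenter.z - rayOrigin.z) / rayDirection.z;
        if (t < 0f) return false;                                    // object is behind the ray

        Vector3 head = rayOrigin + rayDirection * t;

        // The touch counts as a hit if the head ends up inside the bounding sphere.
        return (head - objectCenter).sqrMagnitude <= radius * radius;
    }
}
```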