How to close a mesh in Unity?

My project is: the user can draw with a finger and I generate a field based on that.
I already get this from the user's drawing:
So this is a succession of meshes, but it's not closed. I just generate the mesh in one direction with some height.
I need to close it. I don't want to be able to see through it.
My problem is: the drawing is random, so there are convex and non-convex parts. Let's illustrate that:
1- First I put a yellow circle on each point of my mesh (I have this list of points with their (x, y, z) coordinates)
2- Then, with every 3 consecutive points I try to make a triangle:
It's OK when the shape we want to fill is concave, but it will (I think) bug if the shape is convex:
And there is also this kind of bug when the mesh is too big:
At the end, I just want to be able to close any shape I have. I hope I'm clear.

So the answer was to use a triangulation algorithm. I used this repo: https://github.com/mattatz/unity-triangulation2D
Just add to your code:
using mattatz.Triangulation2DSystem;
and you can run the example from the GitHub repo:
// input points for a Polygon2D contour
List<Vector2> points = new List<Vector2>();
// Add Vector2 to points
points.Add(new Vector2(-2.5f, -2.5f));
points.Add(new Vector2(2.5f, -2.5f));
points.Add(new Vector2(4.5f, 2.5f));
points.Add(new Vector2(0.5f, 4.5f));
points.Add(new Vector2(-3.5f, 2.5f));
// construct Polygon2D
Polygon2D polygon = Polygon2D.Contour(points.ToArray());
// construct Triangulation2D with Polygon2D and threshold angle (18f ~ 27f recommended)
Triangulation2D triangulation = new Triangulation2D(polygon, 22.5f);
// build a mesh from triangles in a Triangulation2D instance
Mesh mesh = triangulation.Build();
// GetComponent<MeshFilter>().sharedMesh = mesh;
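Applied to the original problem, a minimal sketch could look like the component below (the ShapeCloser name, the outline field and how you fill it are assumptions, not part of the repo): collect the outline of the drawing as ordered Vector2 points in the drawing plane, triangulate the interior, and use the result as a cap mesh so you can no longer see through the shape.
using System.Collections.Generic;
using UnityEngine;
using mattatz.Triangulation2DSystem;

[RequireComponent(typeof(MeshFilter))]
public class ShapeCloser : MonoBehaviour
{
    // The outline of the user's drawing, in order, projected onto the drawing plane.
    public List<Vector2> outline = new List<Vector2>();

    public void Close()
    {
        // Triangulate the interior of the outline and use it as a cap mesh.
        Polygon2D polygon = Polygon2D.Contour(outline.ToArray());
        Triangulation2D triangulation = new Triangulation2D(polygon, 22.5f);
        GetComponent<MeshFilter>().sharedMesh = triangulation.Build();
    }
}
The cap mesh lies in the XY plane (z = 0), so you will likely need to rotate or offset it to line up with the top and bottom of your extruded wall, and possibly add a second cap for the other side.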

Related

Adding a collider on a list of vectors

I'm trying to detect contours in a scene and add a collider to every detected object. I used the Canny edge detector to get the coordinates of the detected objects.
Here is my output image
I need to add a collider to each black line to prevent my game object from going in/out of that area, but I don't know how to do so exactly.
The findContours function returns a list of detected contours, each stored as a vector of points, but how do I use that to generate a collider?
Thank you for your help.
Update
Here is my source code (for the update method)
void Update ()
{
    if (initDone && webCamTexture.isPlaying && webCamTexture.didUpdateThisFrame) {
        // convert the WebCamTexture to a Mat
        Utils.webCamTextureToMat (webCamTexture, rgbaMat, colors);
        // convert to grayscale
        Imgproc.cvtColor (rgbaMat, grayMat, Imgproc.COLOR_RGBA2GRAY);
        // blur the grayscale image before edge detection (Canny expects a single-channel image)
        Imgproc.GaussianBlur (grayMat, blurMat, new Size (7, 7), 0);
        Imgproc.Canny (blurMat, cannyMat, 50, 100);
        Mat inverted = ~cannyMat;
        // convert back to a Texture2D for display
        Utils.matToTexture2D (inverted, texture, colors);
        Mat hierarchy = new Mat ();
        List<MatOfPoint> contours = new List<MatOfPoint> ();
        Imgproc.findContours (inverted, contours, hierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    }
}
Use a PolygonCollider2D.
You can edit the collider at runtime using the SetPath function, to which you pass a list of 2D points (which you already computed using the findContours function).
You can have several paths in the polygon if you want your collider to have holes.
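As a rough sketch (assuming the older OpenCV for Unity API the question appears to use, where MatOfPoint.toArray() returns the contour as a Point[] in pixel coordinates; the pixelsToWorld factor and the Y flip are assumptions you must adapt to your own camera and texture setup), the contours could be turned into collider paths like this:
using System.Collections.Generic;
using UnityEngine;
using OpenCVForUnity;

public static class ContourColliders
{
    // Adds one PolygonCollider2D path per detected contour to the target object.
    public static void AddColliders (List<MatOfPoint> contours, GameObject target, float pixelsToWorld)
    {
        var collider = target.AddComponent<PolygonCollider2D> ();
        collider.pathCount = contours.Count;
        for (int i = 0; i < contours.Count; i++) {
            Point[] points = contours [i].toArray ();
            var path = new Vector2[points.Length];
            for (int j = 0; j < points.Length; j++) {
                // Image coordinates grow downwards, Unity's Y grows upwards, hence the flip.
                path [j] = new Vector2 ((float)points [j].x, -(float)points [j].y) * pixelsToWorld;
            }
            collider.SetPath (i, path);
        }
    }
}
Whether the collider ends up where the lines appear on screen depends entirely on how the camera texture is mapped into the scene, so the scaling step is the part you will most likely need to tweak.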

Why do vertices of a quad and the localScale of the quad not match in Unity?

I have a Quad whose vertices I'm printing like this:
public MeshFilter quadMeshFilter;
foreach (var vertex in quadMeshFilter.mesh.vertices)
{
    print(vertex);
}
And, the localScale like this:
public GameObject quad;
print(quad.transform.localScale);
Vertices are like this:
(-0.5, -0.5), (0.5, 0.5), (0.5, -0.5), (-0.5, 0.5)
while the localScale is:
(6.4, 4.8, 0)
How is this possible, given that the vertices make a square but the localScale does not?
How do I use vertices and draw another square in front of the quad?
I am not well versed in the matters of meshes, but I believe I know the answer to this question.
Answer
How is this possible
Scale is a factor your mesh's size is multiplied by along each direction (x, y, z). A scale of 1 is the default size, a scale of 2 is double the size, and so on. Your local-space coordinates are then multiplied by this scale.
Say a local-space coordinate is (1, 0, 2) and the scale is (3, 1, 3). The result is (1*3, 0*1, 2*3) = (3, 0, 6).
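The same component-wise multiplication as a small sketch in code (Vector3.Scale is Unity's built-in component-wise multiply):
Vector3 localVertex = new Vector3(1f, 0f, 2f);
Vector3 scale = new Vector3(3f, 1f, 3f);
Vector3 scaled = Vector3.Scale(localVertex, scale); // (3, 0, 6)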
How do I use vertices and draw another square in front of the quad?
I'd personally just create the object and then move it via Unity's Transform system, since that lets you change the world-space coordinates using transform.position = new Vector3(1f, 5.4f, 3f);
You might be able to move each individual vertex in world space too, but I haven't tried that before.
I imagine it is related to this bit of code, though: vertices[i] = transform.TransformPoint(vertices[i]); since TransformPoint converts from local space to world space based on the Transform using it.
Elaboration
Why do I get lots of 0's and 5's in my space coordinates despite them having other positions in the world?
If I print the vertices of a quad using the script below, I get these results, which have 3 coordinates and can be multiplied as such by the localScale.
Print result:
Script:
Mesh mesh = GetComponent<MeshFilter>().mesh;
var vertices = mesh.vertices;
Debug.Log("Local Space.");
foreach (var v in vertices)
{
    Debug.Log(v);
}
This first result is what we call local space.
There is also something called world space, and you can convert between local and world space.
Local space is the object's mesh vertices in relation to the object itself, while world space is the object's location in the Unity scene.
Then you get the results seen below: first the local-space coordinates as in the first image, then the world-space coordinates converted from these local coordinates.
Here is the script I used to print the above result.
Mesh mesh = GetComponent<MeshFilter>().mesh;
var vertices = mesh.vertices;
Debug.Log("Local Space.");
foreach (var v in vertices)
{
    Debug.Log(v);
}
Debug.Log("World Space");
for (int i = 0; i < vertices.Length; ++i)
{
    vertices[i] = transform.TransformPoint(vertices[i]);
    Debug.Log(vertices[i]);
}
Good luck with your future learning process.
This becomes clear once you understand how Transform hierarchies work. It's a tree, in which each parent node applies its transform matrix (built from position, rotation and scale; rotation is actually a quaternion, but let's assume Euler angles for simplicity so the math works out). By extension of this philosophy, the mesh itself can be seen as a child of the GameObject that holds it.
If you imagine a 1x1 quad (which is what your vertices describe) parented to a GameObject whose Transform has a non-one localScale, all the vertices in the mesh get multiplied by that value, and all the positions are added.
Now if you parent that object to another GameObject and give it another localScale, this will again multiply all the vertex positions by that scale, translate them by its position, and so on.
To answer your question: the global positions of your vertices are different from those contained in the source mesh because they are fed through a chain of Transforms all the way up to the scene root.
This is both the reason that we only have localScale and not scale, and also the reason why non-uniform scaling of objects which contain rotated children can sometimes give very strange results. Transforms stack.
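A small sketch of that stacking (hypothetical object names, not from the original answer): the world position of a vertex is the local vertex fed through the child's and then the parent's transform, which is exactly what TransformPoint and localToWorldMatrix compute.
using UnityEngine;

public class TransformStackDemo : MonoBehaviour
{
    void Start ()
    {
        var parent = new GameObject ("Parent").transform;
        var child = new GameObject ("Child").transform;
        child.SetParent (parent, false);

        parent.localScale = new Vector3 (2f, 2f, 2f);    // the parent doubles everything
        child.localPosition = new Vector3 (1f, 0f, 0f);  // the child is offset inside the parent

        Vector3 localVertex = new Vector3 (0.5f, 0.5f, 0f);
        // Two equivalent ways to go from the child's local space to world space:
        Vector3 viaTransformPoint = child.TransformPoint (localVertex);
        Vector3 viaMatrix = child.localToWorldMatrix.MultiplyPoint3x4 (localVertex);
        Debug.Log (viaTransformPoint + " " + viaMatrix); // both print the same world position, (3, 1, 0)
    }
}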

What is the best way to evenly distribute objects to fill a curved space in Unity 3D?

I would like to fill this auditorium seating area with chairs (in the editor) and have them all face the same focal point (the stage). I will then be randomly filling the chairs with different people (during runtime). After each run the chairs should stay the same, but the people should be cleared so that during the next run the crowd looks different.
The seating area does not currently have a collider attached to it, and neither do the chairs or people.
I found this code, which takes care of rotating the chairs so they all target the same focal point, but I'm still curious whether there are better methods to do this.
//C# Example (LookAtPoint.cs)
using UnityEngine;

[ExecuteInEditMode]
public class LookAtPoint : MonoBehaviour
{
    public Vector3 lookAtPoint = Vector3.zero;

    void Update()
    {
        transform.LookAt(lookAtPoint);
    }
}
Additional Screenshots
You can write an editor script to place them evenly and automatically. In this script:
I don't handle world vs. local/model space in the following code. Remember to do that where you need it.
Generate parallel rays that go from +y to -y in a grid. The patch size of this grid depends on how big your chair and the mesh (the curved seating area) are. To get a proper patch size, take the bounding box of a chair (A) and of the curved mesh (B): the chair's bounds give the patch (cell) size, and dividing them (B/A) gives how many patches, i.e. rays, you need along each axis.
Mesh chairMR; // mesh of the chair
Mesh audiMR;  // mesh of the auditorium seating area

// The grid cell (patch) size is the chair's footprint;
// the ray count is the auditorium size divided by that footprint.
var patchSizeX = chairMR.bounds.size.x;
var patchSizeZ = chairMR.bounds.size.z;
var countX = (int)(audiMR.bounds.size.x / chairMR.bounds.size.x);
var countZ = (int)(audiMR.bounds.size.z / chairMR.bounds.size.z);
So the number of rays you need to generate is about countX*countZ. Patch size is (patchSizeX, patchSizeZ).
Then, origin points of the rays can be determined:
// Generate parallel rays that go from +y to -y.
List<Ray> rays = new List<Ray>(countX * countZ);
for (var i = 0; i < countX; ++i)
{
    // add some tolerance so the placed chairs don't intersect each other when rotated towards the stage
    var x = audiMR.bounds.min.x + i * patchSizeX + tolerance;
    for (var j = 0; j < countZ; ++j)
    {
        var z = audiMR.bounds.min.z + j * patchSizeZ + tolerance;
        var ray = new Ray(new Vector3(x, 10000, z), Vector3.down);
        rays.Add(ray);
        // You could also call Physics.Raycast right here instead of collecting the rays first.
    }
}
Get positions to place the chairs:
1- attach a MeshCollider to your mesh temporarily
2- for each ray, Physics.Raycast it (you can place obstacles on spots that should not get a chair; put those obstacles on a special layer)
3- take the hit point, create a chair at the hit point and rotate it towards the stage (see the sketch below)
Reuse these hit points to place your people at runtime.
Convert each of them into a model/local-space point and save them into JSON or an asset via serialization, so you can reuse them at runtime to place people randomly.
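A minimal sketch of steps 2 and 3 (the stage, chairPrefab and layerMask fields are assumptions you would declare in your editor script, and the auditorium is assumed to carry the temporary MeshCollider):
foreach (var ray in rays)
{
    RaycastHit hit;
    // Only accept hits on the seating-area layer(s) you configured.
    if (Physics.Raycast(ray, out hit, 20000f, layerMask))
    {
        var chair = Object.Instantiate(chairPrefab, hit.point, Quaternion.identity);
        // Look at the stage, but stay upright by ignoring the height difference.
        chair.transform.LookAt(new Vector3(stage.position.x, hit.point.y, stage.position.z));
    }
}
Since this runs in the editor, you may also want to register each created chair with Undo.RegisterCreatedObjectUndo so the placement can be undone.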

How to calculate UVs of a flat polygon mesh

My game generates a flat surface (the floor of a building). It's a flat polygon mesh, as shown in the picture:
The polygon is generated procedurally and will be different each time.
I need to map UV coordinates so that a standard square texture of, say, a floor made of bricks is displayed properly.
What is the best way to assign the correct UV coordinates to each vertex?
With an irregular shape, you might want to "paste" a texture across the mesh (imagine pasting a rectangular sticker over your mesh and cutting away the parts that fall outside its outline).
For that type of mapping you can use Mesh.bounds, which gives you the bounding box of the mesh in local coordinates; that box is the area you are going to "paste" your texture over.
Mesh mesh = GetComponent<MeshFilter>().mesh;
Bounds bounds = mesh.bounds;
Get the vertices of your mesh:
Vector3[] vertices = mesh.vertices;
Now do the mapping:
Vector2[] uvs = new Vector2[vertices.Length];
for (int i = 0; i < vertices.Length; i++)
{
    // Normalize each vertex's XZ position into the bounding box's [0, 1] range.
    uvs[i] = new Vector2((vertices[i].x - bounds.min.x) / bounds.size.x,
                         (vertices[i].z - bounds.min.z) / bounds.size.z);
}
mesh.uv = uvs;
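If you want the bricks to keep a constant size no matter how large the floor is, a variation (a sketch, not part of the original answer; tileSize is an assumed parameter) is to skip the normalization and use world-sized planar UVs on the XZ plane, letting the texture's Repeat wrap mode tile it:
float tileSize = 1f; // one texture repeat per world unit; adjust to your brick texture
Vector2[] tiledUvs = new Vector2[vertices.Length];
for (int i = 0; i < vertices.Length; i++)
{
    // Planar mapping on the XZ plane, in world-sized units instead of a 0..1 box.
    tiledUvs[i] = new Vector2(vertices[i].x, vertices[i].z) / tileSize;
}
mesh.uv = tiledUvs;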

How to find the 3D coordinates of a surface from the click location of the mouse on the ILNumerics surface plots?

Currently our system uses the ILNumerics 3D plot cube class with an ILNumerics surface component to display a 3D meshed surface. An aim for our system is to be able to interrogate individual points on the surface from a mouse click on the plot. We have the MouseClick event set up on our plot; the problem is I am unsure how to get the values for the particular point on the surface that has been clicked. Could anyone help with this issue?
The conversion from 2D mouse coordinates to 3D 'model' coordinates is possible - under some limitations:
The conversion is not unambiguous. The mouse event only provides 2 dimensions: X and Y screen coordinates. In the 3D model there might be more than one point 'behind' this 2D screen point. Therefore, the best you can get is to compute a line in 3D, starting at the camera and ending in infinite depth.
While in theory it would be possible at least to try to find the crossing of the line with the 3D objects, ILNumerics currently does not do that. Even in the simple case of a surface it is easy to construct a 3D model which crosses the line at more than one point.
For a simplified situation a solution exists: if the Z coordinate in 3D does not matter, one can use common matrix conversions in order to acquire the X and Y coordinates in 3D and use those only. Let's say your plot is a 2D line plot or a surface plot, but only watched from 'above' (i.e. the unrotated X-Y plane), and the Z coordinate of the clicked point is not of interest. Let's further assume you have set up an ILScene in a common Windows Forms application with an ILPanel:
private void ilPanel1_Load(object sender, EventArgs e) {
    var scene = new ILScene() {
        new ILPlotCube(twoDMode: true) {
            new ILSurface(ILSpecialData.sincf(20,30))
        }
    };
    scene.First<ILSurface>().MouseClick += (s, arg) => {
        // we start at the mouse event target -> this will be the
        // surface group node (the parent of "Fill" and "Wireframe")
        var group = arg.Target.Parent;
        if (group != null) {
            // walk up to the next camera node
            Matrix4 trans = group.Transform;
            while (!(group is ILCamera) && group != null) {
                group = group.Parent;
                // collect all transforms on the path up
                trans = group.Transform * trans;
            }
            if (group != null && (group is ILCamera)) {
                // convert arg.LocationF to world coords
                // The Z coord is not provided by the mouse! -> choose an arbitrary value
                var pos = new Vector3(arg.LocationF.X * 2 - 1, arg.LocationF.Y * -2 + 1, 0);
                // invert the matrix.
                trans = Matrix4.Invert(trans);
                // trans now converts from the world coord system (at the camera) to
                // the local coord system in the 'target' group node (surface).
                // In order to transform the mouse (viewport) position, we
                // left multiply the transformation matrix.
                pos = trans * pos;
                // view result in the window title
                Text = "Model Position: " + pos.ToString();
            }
        }
    };
    ilPanel1.Scene = scene;
}
What it does: it registers a MouseClick event handler on the surface group node. In the handler it accumulates the transformation matrices on the path from the clicked target (the surface group node) up to the next camera node the surface is a child of. While rendering, the (model) coordinates of the vertices are transformed by the local coordinate transformation matrix, hosted in every group node. All transformations are accumulated and so the vertex coordinates end up in the 'world coordinate' system, established by every camera. So rendering finds the 2D screen position from the 3D model vertex positions.
In order to find the 3D position from the 2D screen coordinates - one must go the other way around. In the example, we acquire the transformation matrices for every group node, multiply them all up and invert the resulting transformation matrix. This is needed, because such transforms naturally describe the conversion from the child node to the parent. Here, we need the other way around - hence the inversion is necessary.
This method gives the correct 3D coordinates at the mouse position. However, keep the limitations in mind! Here, we do not take into account any rotation of the plot cube (the plot cube must be left unrotated) or any projection transforms (plot cubes use an orthographic transform by default, which basically is a no-op). In order to take those into account as well, you may extend the example accordingly.