I'm trying to detect contours in a scene and add a collider to every detected object. I used the Canny edge detector to get the coordinates of the detected objects.
Here is my output image
I need to add a collider to each black line to prevent my game object from going in/out of that area, but I don't know exactly how to do so.
The findContours function returns a list of detected contours, each stored as a vector of points, but how do I use that to generate a collider?
Thank you for your help.
Update
Here is my source code (for the Update method):
void Update ()
{
    if (initDone && webCamTexture.isPlaying && webCamTexture.didUpdateThisFrame) {
        // convert the WebCamTexture to a Mat
        Utils.webCamTextureToMat (webCamTexture, rgbaMat, colors);
        // convert to grayscale
        Imgproc.cvtColor (rgbaMat, grayMat, Imgproc.COLOR_RGBA2GRAY);
        // blur the grayscale image (not rgbaMat; Canny expects a single-channel input)
        Imgproc.GaussianBlur (grayMat, blurMat, new Size (7, 7), 0);
        Imgproc.Canny (blurMat, cannyMat, 50, 100);
        // invert for display only: edges become black on white
        Mat inverted = ~cannyMat;
        // convert back to a Texture2D for display
        Utils.matToTexture2D (inverted, texture, colors);
        Mat hierarchy = new Mat ();
        List<MatOfPoint> contours = new List<MatOfPoint> ();
        // findContours treats non-zero pixels as foreground, so pass the raw
        // Canny output (white edges on black), not the inverted image
        Imgproc.findContours (cannyMat, contours, hierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    }
}
Use a PolygonCollider2D.
You can edit the collider at runtime using the SetPath function, to which you pass a list of 2D points (which you already computed using the findContours function).
You can have several paths in the polygon if you want your collider to have holes.
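A minimal sketch of that idea, assuming the OpenCV for Unity plugin from the question; the pixelsPerUnit scale factor and the BuildCollider method are illustrative names, and you will need to adapt the image-to-world mapping to your own setup:
using System.Collections.Generic;
using UnityEngine;
using OpenCVForUnity; // older plugin releases; newer ones use OpenCVForUnity.CoreModule

public class ContourColliderBuilder : MonoBehaviour
{
    // hypothetical scale from image pixels to world units
    public float pixelsPerUnit = 100f;

    public void BuildCollider (List<MatOfPoint> contours)
    {
        var poly = gameObject.AddComponent<PolygonCollider2D> ();
        poly.pathCount = contours.Count; // one path per detected contour

        for (int i = 0; i < contours.Count; i++) {
            Point[] pts = contours [i].toArray ();
            var path = new Vector2[pts.Length];
            for (int j = 0; j < pts.Length; j++) {
                // flip Y because image coordinates grow downward
                path [j] = new Vector2 ((float)pts [j].x, -(float)pts [j].y) / pixelsPerUnit;
            }
            poly.SetPath (i, path);
        }
    }
}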
I have written some code that snaps a cylinder to an existing cylinder, using a for loop over a GameObject list I call cylinders. Below is the code I use for "snapping" the cylinder to another cylinder using the mouse position and a "translucentPrefab". I would like to know whether another object is obstructing the placement. For performance reasons I would like to avoid another for loop through my list to check each position. Is there a good solution for this? Could I use a "fake" 2D array, since I mostly use full integer boxes, and mark squares as occupied in that array? Or is there a smarter approach?
if (worldMousePosition.x > centerPoint.x && Vector3.Distance(worldMousePosition, centerPoint) < snappingRange)
{
    translucentPrefab.transform.position = rightPosition;
    snapped = true;
    left = false;
    if (renderer != null)
    {
        // set the prefab material to translucent cyan
        material.color = new Color(0, 1, 1, 0.5f);
    }
}
I have tried using box colliders in many ways to check the space the new cylinder would occupy, but all attempts have been a failure.
I suggest using Physics.SphereCastAll at the point where your cursor is and simply iterating over all the objects that SphereCastAll returns.
And if you don't want these objects to count toward physics, you can add a special physics layer just for this and then adjust the collision matrix.
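A minimal sketch of that idea, assuming a downward cast onto the candidate snap position; snapRadius, placementMask, and IsPositionFree are illustrative names:
using UnityEngine;

public class PlacementChecker : MonoBehaviour
{
    public float snapRadius = 0.5f;   // roughly the cylinder's radius
    public LayerMask placementMask;   // the special physics layer for placed objects

    public bool IsPositionFree (Vector3 candidatePosition)
    {
        // cast a sphere from above the candidate position straight down;
        // any hit on the mask means the spot is already occupied
        var ray = new Ray (candidatePosition + Vector3.up * 10f, Vector3.down);
        RaycastHit[] hits = Physics.SphereCastAll (ray, snapRadius, 10f, placementMask);
        return hits.Length == 0;
    }
}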
My project is: the user draws with a finger and I generate a field based on that.
Here is what I already get from the user's drawing:
This is a succession of meshes, but it is not closed; I just generate the mesh in one direction with some height.
I need to close it. I don't want to be able to see through it.
My problem is that the drawing is random, so there are convex and non-convex parts. Let's illustrate that:
1- First I put a yellow circle on each point of my mesh (I have this list of points, each with (x, y, z) coordinates).
2- Then, with each 3 consecutive points, I try to make a triangle:
This is OK when the shape we want to fill is convex, but it will (I think) break if the shape is concave:
And there is also this kind of bug, when the triangle is too big:
In the end, I just want to be able to close any shape I have. I hope I'm clear.
So the answer was to use a triangulation algorithm. I used this repo: https://github.com/mattatz/unity-triangulation2D
Just add to your code:
using mattatz.Triangulation2DSystem;
and you can launch the example from the GitHub repo:
// input points for a Polygon2D contour
List<Vector2> points = new List<Vector2>();
// Add Vector2 to points
points.Add(new Vector2(-2.5f, -2.5f));
points.Add(new Vector2(2.5f, -2.5f));
points.Add(new Vector2(4.5f, 2.5f));
points.Add(new Vector2(0.5f, 4.5f));
points.Add(new Vector2(-3.5f, 2.5f));
// construct Polygon2D
Polygon2D polygon = Polygon2D.Contour(points.ToArray());
// construct Triangulation2D with Polygon2D and threshold angle (18f ~ 27f recommended)
Triangulation2D triangulation = new Triangulation2D(polygon, 22.5f);
// build a mesh from triangles in a Triangulation2D instance
Mesh mesh = triangulation.Build();
// GetComponent<MeshFilter>().sharedMesh = mesh;
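To close your own drawn shape, the idea would be to feed your outline points (projected onto a 2D plane) into Polygon2D.Contour instead of the hard-coded list above, assign the resulting mesh to a MeshFilter as the commented last line hints, and add a MeshCollider if the cap needs collision as well.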
Box2D/Farseer 2D physics has a useful component which draws a simple representation of the physics world using primitives (lines, polygons, fills, colors). Here's an example:
What's the best way to accomplish this in Unity3D? Is there a simple way to render polygons with fill, lines, points, etc.? If so, I could implement the DebugDraw interface with Unity's API, but I'm having trouble finding out how to do this kind of primitive rendering in Unity.
I understand it'll be in 3D space, but I'll just zero out one axis and use it essentially as 2D.
In case you actually mean a debug box displayed only in the SceneView, not in the GameView, you can use Gizmos.DrawWireCube:
void OnDrawGizmos()
{
    // store the original gizmo color
    var color = Gizmos.color;
    // store the original matrix
    var matrix = Gizmos.matrix;
    // set the gizmo to local space
    Gizmos.matrix = transform.localToWorldMatrix;
    // draw a yellow cube at the transform position
    Gizmos.color = Color.yellow;
    // with the local-space matrix set, draw at Vector3.zero so the position
    // isn't applied twice; for an "almost" 2D box simply use a very small z value
    Gizmos.DrawWireCube(Vector3.zero, new Vector3(0.5f, 0.2f, 0.001f));
    // restore the matrix
    Gizmos.matrix = matrix;
    // restore the color
    Gizmos.color = color;
}
You can use OnDrawGizmosSelected to show the gizmo only when the GameObject is selected.
You could also extend this by exposing the box size in the Inspector:
[SerializeField] private Vector3 _boxScale;
and using
Gizmos.DrawWireCube(Vector3.zero, _boxScale);
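If you instead need lines in the Game view at runtime (closer to what Box2D's DebugDraw does), one option is Unity's immediate-mode GL API (built-in render pipeline only). A minimal sketch, assuming you assign a simple colored material (e.g. one using the "Hidden/Internal-Colored" shader) yourself:
using UnityEngine;

// attach to the camera; OnPostRender is only called on camera GameObjects
public class DebugLineRenderer : MonoBehaviour
{
    public Material lineMaterial; // assign a simple colored material

    void OnPostRender ()
    {
        lineMaterial.SetPass (0);
        GL.PushMatrix ();
        GL.Begin (GL.LINES);
        GL.Color (Color.green);
        // one example segment; feed your physics shapes' edges here instead
        GL.Vertex3 (0f, 0f, 0f);
        GL.Vertex3 (1f, 1f, 0f);
        GL.End ();
        GL.PopMatrix ();
    }
}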
My game generates a flat surface (the floor of a building). It's a flat polygon mesh, as shown in the picture:
The polygon is generated procedurally and will be different each time.
I need to map UV coordinates so that a standard square texture of, say, a floor made of bricks is displayed properly.
What is the best way to assign the correct UV coordinates to each vertex?
With an irregular shape, you might want to "paste" a texture across the mesh (imagine pasting a rectangular sticker across your mesh and cutting away the parts that fall outside its shape).
For that type of mapping you can use Mesh.bounds, which gives you the bounding box of the mesh in local coordinates; that is the area you are going to "paste" your texture over.
Mesh mesh = GetComponent<MeshFilter>().mesh;
Bounds bounds = mesh.bounds;
Get the vertices of your mesh:
Vector3[] vertices = mesh.vertices;
Now do the mapping:
Vector2[] uvs = new Vector2[vertices.Length];
for (int i = 0; i < vertices.Length; i++)
{
    // normalize each vertex into the 0..1 range of the bounding box,
    // offsetting by bounds.min in case the mesh doesn't start at the origin
    uvs[i] = new Vector2((vertices[i].x - bounds.min.x) / bounds.size.x,
                         (vertices[i].z - bounds.min.z) / bounds.size.z);
}
mesh.uv = uvs;
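Note that this stretches the texture exactly once across the bounding box. For a repeating brick pattern, multiply the normalized UVs by a tiling factor (or set the material's tiling), and make sure the texture's wrap mode is set to Repeat.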
Currently our system uses the ILNumerics 3D plot cube class with an ILNumerics surface component to display a 3D meshed surface. One aim for our system is to be able to interrogate individual points on the surface from a mouse click on the plot. We have the MouseClick event set up on our plot; the problem is that I am unsure how to get the values of the particular point on the surface that was clicked. Could anyone help with this issue?
The conversion from 2D mouse coordinates to 3D 'model' coordinates is possible, under some limitations:
The conversion is ambiguous. The mouse event only provides two dimensions: the X and Y screen coordinates. In the 3D model there might be more than one point 'behind' this 2D screen point. Therefore, the best you can get is to compute a line in 3D, starting at the camera and extending to infinite depth.
While in theory it would be possible at least to try to find the intersection of that line with the 3D objects, ILNumerics currently does not. Even in the simple case of a surface, it is easy to construct a 3D model which crosses the line at more than one point.
For a simplified situation a solution exists: if the Z coordinate in 3D does not matter, one can use common matrix conversions to acquire the X and Y coordinates in 3D and use those only. Let's say your plot is a 2D line plot or a surface plot watched only from 'above' (i.e., the unrotated X-Y plane), so the Z coordinate of the clicked point is not of interest. Let's further assume you have set up an ILScene scene in a common Windows Forms application with ILPanel:
private void ilPanel1_Load(object sender, EventArgs e) {
    var scene = new ILScene() {
        new ILPlotCube(twoDMode: true) {
            new ILSurface(ILSpecialData.sincf(20,30))
        }
    };
    scene.First<ILSurface>().MouseClick += (s, arg) => {
        // we start at the mouse event target -> this will be the
        // surface group node (the parent of "Fill" and "Wireframe")
        var group = arg.Target.Parent;
        if (group != null) {
            // walk up to the next camera node
            Matrix4 trans = group.Transform;
            while (group != null && !(group is ILCamera)) {
                group = group.Parent;
                // collect all nodes on the path up
                // (guard against walking past the scene root)
                if (group != null) {
                    trans = group.Transform * trans;
                }
            }
            if (group != null && (group is ILCamera)) {
                // convert arg.LocationF to world coords
                // the Z coord is not provided by the mouse! -> choose an arbitrary value
                var pos = new Vector3(arg.LocationF.X * 2 - 1, arg.LocationF.Y * -2 + 1, 0);
                // invert the matrix
                trans = Matrix4.Invert(trans);
                // trans now converts from the world coord system (at the camera) to
                // the local coord system in the 'target' group node (surface).
                // In order to transform the mouse (viewport) position, we
                // left multiply the transformation matrix.
                pos = trans * pos;
                // view the result in the window title
                Text = "Model Position: " + pos.ToString();
            }
        }
    };
    ilPanel1.Scene = scene;
}
What it does: it registers a MouseClick event handler on the surface group node. In the handler it accumulates the transformation matrices on the path from the clicked target (the surface group node) up to the next camera node the surface is a child of. While rendering, the (model) coordinates of the vertices are transformed by the local coordinate transformation matrix hosted in every group node. All transformations are accumulated, so the vertex coordinates end up in the 'world coordinate' system established by every camera. This is how rendering finds the 2D screen position from the 3D model vertex positions.
In order to find the 3D position from the 2D screen coordinates, one must go the other way around. In the example, we acquire the transformation matrices of every group node, multiply them all up, and invert the resulting transformation matrix. The inversion is needed because such transforms naturally describe the conversion from a child node to its parent, and here we need the opposite direction.
This method gives the correct 3D coordinates at the mouse position. However, keep the limitations in mind! Here we do not take into account any rotation of the plot cube (the plot cube must be left unrotated) nor projection transforms (plot cubes use an orthographic transform by default, which basically is a no-op). In order to handle those cases as well, you may extend the example accordingly.