I have lots of textures for a 2D project that I place on quads (as the quads' materials). They're like "maps": filled on the inside but with transparent edges.
I make a polygon collider for each map and place it on top, so that I can use Physics2D.Raycast() to detect whether the user has placed an object on or off the map. The maps are essentially arbitrary polygon shapes.
Making each polygon collider by hand is time-consuming and the quality isn't great. Is there some mesh collider that detects transparency and shapes itself to the map? Or is there a way to write a script that fits the collider to the shape of the map?
It turns out that the Polygon Collider 2D can generate a polygon from such a transparent sprite automatically: just drag and drop the sprite onto the Polygon Collider 2D component.
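If you need the same thing at runtime rather than via drag-and-drop in the editor, a minimal sketch (assuming the map is imported as a Sprite, so Unity has generated a physics shape for it; the component name here is illustrative) is to copy the sprite's physics shape into the collider's paths:
using System.Collections.Generic;
using UnityEngine;

public class FitColliderToSprite : MonoBehaviour
{
    void Start()
    {
        Sprite sprite = GetComponent<SpriteRenderer>().sprite;
        PolygonCollider2D collider = gameObject.AddComponent<PolygonCollider2D>();

        // Copy the sprite's auto-generated physics outline, one path per shape.
        int shapeCount = sprite.GetPhysicsShapeCount();
        collider.pathCount = shapeCount;
        List<Vector2> points = new List<Vector2>();
        for (int i = 0; i < shapeCount; i++)
        {
            sprite.GetPhysicsShape(i, points); // fills `points` with this shape's outline
            collider.SetPath(i, points);
        }
    }
}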
Here is another solution: use Physics.RaycastAll to retrieve all the objects hit, then return the closest hit whose texture pixel at the hit point has a non-zero alpha value.
// Requires: using System.Linq; using UnityEngine;
// Note: RaycastHit.textureCoord only returns valid UVs for MeshColliders,
// and Texture2D.GetPixel requires the texture to have Read/Write enabled.
private static RaycastHit? RaycastWithTransparency(Ray ray)
{
    // Collect every hit along the ray, nearest first.
    var res = Physics.RaycastAll(ray, float.MaxValue).OrderBy(h => h.distance);
    foreach (var h in res)
    {
        Renderer rend = h.transform.GetComponent<Renderer>();
        Texture2D tex = rend.material.mainTexture as Texture2D;

        // Convert the hit's UV coordinate to pixel coordinates.
        var xInTex = (int)(h.textureCoord.x * tex.width);
        var yInTex = (int)(h.textureCoord.y * tex.height);

        // Skip fully transparent pixels and keep searching.
        var pix = tex.GetPixel(xInTex, yInTex);
        if (pix.a > 0)
        {
            //Debug.Log("You hit: " + h.collider.name + " position " + h.textureCoord.x + " , " + h.textureCoord.y);
            return h;
        }
    }
    return null;
}
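A minimal usage sketch (assuming the maps use MeshColliders, readable textures, and the main camera):
void Update()
{
    if (Input.GetMouseButtonDown(0))
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        RaycastHit? hit = RaycastWithTransparency(ray);
        Debug.Log(hit.HasValue ? "Placed on map: " + hit.Value.collider.name : "Off the map");
    }
}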
I am making a 2D game in Unity.
I need a wheel divided into segments; each segment is an individual object, and the wheel spins on itself at a certain speed. Here is an image for better understanding:
I have a "Selector", i.e. something to select a slice. For now it is a temporary sprite (the red triangle) plus a script that casts a ray to find the selected slice. Some images to better understand:
So far so good. My problem is that the mesh of each slice is also the mesh of its collider, and it is concave rather than convex. From what I've read on the Internet, Unity raycasts cannot hit concave mesh colliders because of calculation problems, so I cannot hit the slices; the only way to hit them is to tick the "Convex" parameter of the collider component, but that creates a collider with a square shape, so the selection precision is poor. Here are some pictures to better understand:
So I looked on the Internet for a solution and found that the suggested fix is to split the collider into several smaller but convex colliders. I tried this: for each pair of triangles I created a collider whose mesh is made from those two triangles, and I got this:
But it is still not hit by the raycast unless I tick the "Convex" parameter of the collider component, and even then a collider with a square shape is created.
Finally, here are the relevant parts of the code:
Code that casts the ray:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Selector : MonoBehaviour
{
    public GameObject objPosRef;

    void Update()
    {
        if (Input.GetMouseButtonUp(1))
        {
            RaycastHit hit;
            Debug.DrawRay(objPosRef.transform.position, new Vector3(0, 0.5f, 0), Color.green, 0.5f);
            if (Physics.Raycast(objPosRef.transform.position, new Vector3(0, 0.5f, 0), out hit))
            {
                Debug.Log("Hit: " + hit.collider.name);
            }
        }
    }
}
Code to update the segment mesh:
...
private void UpdateMesh()
{
    mesh.Clear();
    mesh.vertices = vertex;
    mesh.triangles = triangles;
    CreateColliders();
    mesh.RecalculateNormals();
    mesh.RecalculateBounds();
    GetComponent<MeshRenderer>().material = new Material(material);
    GetComponent<MeshRenderer>().material.color = color;
}
Code for creating segment colliders:
private void CreateColliders()
{
    // One collider per pair of triangles (6 indices each).
    int numColliders = (numVertex * 2 - 2) / 2;
    for (int i = 0; i < numColliders; i++)
    {
        MeshCollider collider = gameObject.AddComponent<MeshCollider>();
        Mesh colliderMesh = new Mesh();

        // Copy the two triangles (6 indices) belonging to this collider.
        int[] tr = new int[6];
        int k = 6 * i;
        for (int j = 0; j < 6; j++)
        {
            tr[j] = triangles[k];
            k++;
        }

        colliderMesh.vertices = vertex;
        colliderMesh.triangles = tr;
        collider.sharedMesh = colliderMesh;
    }
}
In summary: for each collider I create a new mesh whose vertices are identical to those of the segment mesh (even though the new mesh does not need all of them), and whose triangles are just the two taken, in pairs, from the segment mesh's triangle array.
Sorry if my post is full of pictures; I hope I have managed to explain my problem as clearly as possible. Most likely the problem is something trivial. Thank you in advance for your help.
P.S.: a few people say that raycasts can safely hit objects with a concave collider, which goes against all the other claims to the contrary. Who is right?
Box2D/Farseer 2D physics has a useful component which draws a simple representation of the physics world using primitives (lines, polygons, fills, colors). Here's an example:
What's the best way to accomplish this in Unity3D? Is there a simple way to render polygons with fill, lines, points, etc.? If so, I could implement the DebugDraw interface with Unity's API, but I'm having trouble figuring out how to do this kind of primitive rendering in Unity.
I understand it'll be in 3D space, but I'll just zero-out one axis and use it basically as 2D.
In case you actually mean a debug box displayed only in the Scene view (not the Game view), you can use Gizmos.DrawWireCube:
void OnDrawGizmos()
{
    // store the original gizmo color and matrix
    var color = Gizmos.color;
    var matrix = Gizmos.matrix;

    // draw in this transform's local space
    Gizmos.matrix = transform.localToWorldMatrix;

    // draw a yellow cube at the transform position
    Gizmos.color = Color.yellow;

    // since the matrix already maps local to world space, the center is
    // Vector3.zero; for an "almost" 2D box simply use a very small z value
    Gizmos.DrawWireCube(Vector3.zero, new Vector3(0.5f, 0.2f, 0.001f));

    // restore the matrix and color
    Gizmos.matrix = matrix;
    Gizmos.color = color;
}
You can use OnDrawGizmosSelected to show the gizmo only while the GameObject is selected.
You could also extend this by exposing the box size in the Inspector:
[SerializeField] private Vector3 _boxScale;
and using
Gizmos.DrawWireCube(transform.position, _boxScale);
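Putting both suggestions together, a minimal sketch might look like this:
using UnityEngine;

public class DebugBox : MonoBehaviour
{
    [SerializeField] private Vector3 _boxScale = new Vector3(0.5f, 0.2f, 0.001f);

    // Only drawn in the Scene view, and only while this GameObject is selected.
    void OnDrawGizmosSelected()
    {
        var color = Gizmos.color;
        var matrix = Gizmos.matrix;

        // Draw in local space so the box follows position, rotation and scale.
        Gizmos.matrix = transform.localToWorldMatrix;
        Gizmos.color = Color.yellow;
        Gizmos.DrawWireCube(Vector3.zero, _boxScale);

        Gizmos.matrix = matrix;
        Gizmos.color = color;
    }
}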
In Unity, say you have a 3D object,
Of course, it's trivial to get the AABB, Unity has direct functions for that,
(You might have to "add up all the bounding boxes of the renderers" in the usual way, no issue.)
So Unity does indeed have a direct function to give you the 3D AABB box instantly, out of the internal mesh/render pipeline every frame.
Now, for the Camera in question, as positioned, that AABB indeed covers a certain 2D bounding box ...
In fact ... is there some sort of built-in direct way to find that orange 2D box in Unity??
Question - does Unity have a function which immediately gives that 2D frustum box from the pipeline?
(Note that to do it manually you just make rays (or use world to screen space as Draco mentions, same) for the 8 points of the AABB; encapsulate those in 2D to make the orange box.)
I don't need a manual solution, I'm asking if the engine gives this somehow from the pipeline every frame?
Is there a call?
(Indeed, it would be even better to have this ...)
My feeling is that one or all of the following:
the occlusion system in particular
the shaders
the renderer
would surely know the orange box, and perhaps even the blue box, inside the pipeline, right off the graphics card, just as they know the AABB for a given mesh.
We know that Unity lets you tap the AABB 3D box instantly every frame for a given mesh: in fact, does Unity give the "2D frustum bound" as shown here?
As far as I am aware, there is no built in for this.
However, finding the extremes yourself is really pretty easy. Getting the mesh's bounding box (the cuboid shown in the screenshot) is done exactly the same way; you're just doing it in a transformed space.
Loop through all the vertices of the mesh, doing the following:
Transform the point from local to world space (this handles dealing with scale and rotation)
Transform the point from world space to screen space
Determine if the new point's X and Y are above/below the stored min/max values, if so, update the stored min/max with the new value
After looping over all vertices, you'll have 4 values: min-X, min-Y, max-X, and max-Y. Now you can construct your bounding rectangle; a minimal sketch follows.
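Here is a sketch of those steps, assuming a Camera cam and the object's MeshFilter mf (the names are illustrative):
using UnityEngine;

public static class ScreenRectUtil
{
    // Projects every mesh vertex to screen space and returns the 2D bounds.
    public static Rect GetScreenRect(Camera cam, MeshFilter mf)
    {
        Vector3[] verts = mf.sharedMesh.vertices;
        Vector2 min = new Vector2(float.MaxValue, float.MaxValue);
        Vector2 max = new Vector2(float.MinValue, float.MinValue);

        foreach (Vector3 v in verts)
        {
            // Local -> world (handles scale/rotation), then world -> screen.
            Vector3 screen = cam.WorldToScreenPoint(mf.transform.TransformPoint(v));
            min = Vector2.Min(min, screen);
            max = Vector2.Max(max, screen);
        }
        return Rect.MinMaxRect(min.x, min.y, max.x, max.y);
    }
}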
You may also wish to first compute the convex hull of the model (e.g. with the gift wrapping algorithm) and only loop over the hull's vertices, since only points on the convex hull can ever be the screen-space extremes. If you intend to draw this screen-space rectangle while the model moves, scales, or rotates on screen, and therefore have to recompute the bounding box, you'll want to do this and cache the result.
Note that this does not work if the model animates (e.g. if your humanoid stands up and does jumping jacks). Solving the animated case is much more difficult, as you would have to treat every frame of every animation as part of the original mesh for the purposes of the convex hull solving (to ensure that none of your animations ever moves part of the mesh outside the convex hull), increasing the complexity by a power.
3D bounding box
Get the given GameObject's 3D bounding box center and size
Compute its 8 corners
Transform the positions to GUI space (screen space)
The function GUI3dRectWithObject below returns the screen rect of the given GameObject's 3D bounding box.
2D bounding box
Iterate through every vertex of the given GameObject's mesh
Transform every vertex's position to world space, then to GUI space (screen space)
Find the 4 extreme values: x1, x2, y1, y2
The function GUI2dRectWithObject below returns the 2D bounding box of the given GameObject on screen.
Code
public static Rect GUI3dRectWithObject(GameObject go)
{
    Vector3 cen = go.GetComponent<Renderer>().bounds.center;
    Vector3 ext = go.GetComponent<Renderer>().bounds.extents;

    // Project the 8 corners of the world-space AABB to GUI space.
    Vector2[] extentPoints = new Vector2[8]
    {
        WorldToGUIPoint(new Vector3(cen.x - ext.x, cen.y - ext.y, cen.z - ext.z)),
        WorldToGUIPoint(new Vector3(cen.x + ext.x, cen.y - ext.y, cen.z - ext.z)),
        WorldToGUIPoint(new Vector3(cen.x - ext.x, cen.y - ext.y, cen.z + ext.z)),
        WorldToGUIPoint(new Vector3(cen.x + ext.x, cen.y - ext.y, cen.z + ext.z)),
        WorldToGUIPoint(new Vector3(cen.x - ext.x, cen.y + ext.y, cen.z - ext.z)),
        WorldToGUIPoint(new Vector3(cen.x + ext.x, cen.y + ext.y, cen.z - ext.z)),
        WorldToGUIPoint(new Vector3(cen.x - ext.x, cen.y + ext.y, cen.z + ext.z)),
        WorldToGUIPoint(new Vector3(cen.x + ext.x, cen.y + ext.y, cen.z + ext.z))
    };

    Vector2 min = extentPoints[0];
    Vector2 max = extentPoints[0];
    foreach (Vector2 v in extentPoints)
    {
        min = Vector2.Min(min, v);
        max = Vector2.Max(max, v);
    }
    return new Rect(min.x, min.y, max.x - min.x, max.y - min.y);
}

public static Rect GUI2dRectWithObject(GameObject go)
{
    Vector3[] vertices = go.GetComponent<MeshFilter>().mesh.vertices;

    // Initialize the maxima to float.MinValue so objects left of / above the
    // screen origin are handled correctly.
    float x1 = float.MaxValue, y1 = float.MaxValue, x2 = float.MinValue, y2 = float.MinValue;
    foreach (Vector3 vert in vertices)
    {
        Vector2 tmp = WorldToGUIPoint(go.transform.TransformPoint(vert));
        if (tmp.x < x1) x1 = tmp.x;
        if (tmp.x > x2) x2 = tmp.x;
        if (tmp.y < y1) y1 = tmp.y;
        if (tmp.y > y2) y2 = tmp.y;
    }

    Rect bbox = new Rect(x1, y1, x2 - x1, y2 - y1);
    Debug.Log(bbox);
    return bbox;
}

public static Vector2 WorldToGUIPoint(Vector3 world)
{
    Vector2 screenPoint = Camera.main.WorldToScreenPoint(world);

    // GUI space has its origin at the top-left, screen space at the bottom-left.
    screenPoint.y = (float)Screen.height - screenPoint.y;
    return screenPoint;
}
Reference: Is there an easy way to get on-screen render size (bounds)?
For animated models, refer to this; it requires the GameObject to have a SkinnedMeshRenderer.
Camera camera = GetComponent<Camera>();
SkinnedMeshRenderer skinnedMeshRenderer = target.GetComponent<SkinnedMeshRenderer>();

// Get the real-time (posed) vertices by baking a snapshot of the skinned mesh
Mesh mesh = new Mesh();
skinnedMeshRenderer.BakeMesh(mesh);
Vector3[] vertices = mesh.vertices;
for (int i = 0; i < vertices.Length; i++)
{
    // World space
    vertices[i] = target.transform.TransformPoint(vertices[i]);
    // GUI space
    vertices[i] = camera.WorldToScreenPoint(vertices[i]);
    vertices[i].y = Screen.height - vertices[i].y;
}

Vector3 min = vertices[0];
Vector3 max = vertices[0];
for (int i = 1; i < vertices.Length; i++)
{
    min = Vector3.Min(min, vertices[i]);
    max = Vector3.Max(max, vertices[i]);
}
Destroy(mesh);

// Construct a rect from the min and max positions
Rect r = Rect.MinMaxRect(min.x, min.y, max.x, max.y);
GUI.Box(r, "");
My game generates a flat surface (the floor of a building). It's a flat polygon mesh, as shown in the picture:
The polygon is generated procedurally and will be different each time.
I need to map UV coordinates so that a standard square texture of, say, a floor made of bricks is displayed properly.
What is the best way to assign the correct UV coordinates to each vertex?
With an irregular shape, you might want to "paste" a texture across the mesh (imagine pasting a rectangular sticker across your mesh and cutting away whatever falls outside the mesh's shape).
For that type of mapping, you can use Mesh.bounds, which gives you the bounding box of the mesh in local coordinates; that box is the area you are going to "paste" the texture over.
Mesh mesh = GetComponent<MeshFilter>().mesh;
Bounds bounds = mesh.bounds;
Get the vertices of your mesh:
Vector3[] vertices = mesh.vertices;
Now do the mapping:
Vector2[] uvs = new Vector2[vertices.Length];
for (int i = 0; i < vertices.Length; i++)
{
    // Normalize each vertex's XZ position into the 0..1 range of the bounds
    // (subtracting bounds.min so meshes not anchored at the origin map correctly).
    uvs[i] = new Vector2((vertices[i].x - bounds.min.x) / bounds.size.x,
                         (vertices[i].z - bounds.min.z) / bounds.size.z);
}
mesh.uv = uvs;
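If you want the bricks to repeat at a fixed world size instead of being stretched once across the whole floor, one variant (reusing the variables above; tileSize is an illustrative parameter, and the texture's wrap mode must be set to Repeat) is to divide by a tile size rather than the bounds:
// Variant: tile the texture every tileSize world units.
float tileSize = 2f;
for (int i = 0; i < vertices.Length; i++)
{
    uvs[i] = new Vector2(vertices[i].x / tileSize, vertices[i].z / tileSize);
}
mesh.uv = uvs;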
I am trying to create my whole mesh from 5 submeshes via script in Unity. For each submesh I've got a separate index array and an assigned material. Curiously, Unity only renders the first submesh, but if I inspect the mesh assigned to the MeshFilter it says there are more vertices and triangles than are actually rendered.
GameObject go = new GameObject("Island Prototype");

Mesh mesh = new Mesh();
mesh.vertices = this.vertices.ToArray();

// One submesh per index list.
mesh.subMeshCount = this.indices.Count;
int c = 0;
foreach (List<int> l in this.indices)
{
    Debug.Log(l.Count);
    mesh.SetTriangles(l.ToArray(), c);
    c++;
}

mesh.RecalculateNormals();

List<Material> materials = new List<Material>();
materials.Add(fieldMaterial);
foreach (TileSettings ts in tiles)
{
    materials.Add(fieldMaterial);
}
Debug.Log("Number of materials: " + materials.Count);

//mesh.RecalculateBounds();
//mesh.RecalculateNormals();

MeshRenderer mr = go.AddComponent<MeshRenderer>();
mr.sharedMaterials = materials.ToArray();

MeshFilter mf = go.AddComponent<MeshFilter>();
mf.mesh = mesh;
In the screenshot you can see that the mesh inspector reports the correct number of submeshes, and there are also 5 materials attached to the renderer.
In the console I've printed the index count of each submesh; submeshes 3-5 don't have triangles at the moment, but that shouldn't be a problem, should it? At least submesh 2 should be rendered...