I am using a runtime gizmo to translate gameObjects in my game. I need to know how to clamp an object's movement inside an irregular shape.
For example, if the shape is a box, I can calculate the Bounds of the shape and, using bounds.min and bounds.max on each axis, clamp the position like this:
Bounds bounds = shape.GetComponent<Collider>().bounds; // "shape" is the bounding object; Renderer bounds work too
Vector3 pos = gameObject.transform.position;
pos.x = Mathf.Clamp(pos.x, bounds.min.x, bounds.max.x);
pos.y = Mathf.Clamp(pos.y, bounds.min.y, bounds.max.y);
pos.z = Mathf.Clamp(pos.z, bounds.min.z, bounds.max.z);
gameObject.transform.position = pos;
This is all good as long as the shape is a box or rectangular, because I can use either Renderer.bounds or Collider.bounds. However, if it is an irregular shape like this one, the calculated Bounds will be wrong: a bounding box is always axis-aligned and box-shaped, so its min and max values overshoot the actual shape and let the gameObject move outside it.
I am using the RuntimeGizmo below to translate my gameObject: https://github.com/HiddenMonk/Unity3DRuntimeTransformGizmo
P.S. The shape in my case is either a square room or an L-shaped room, built from primitive cubes acting as walls. The gameObject in question is a piece of furniture.
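Since the room is already built from boxes, one way to handle the L-shape (a sketch, not something the gizmo provides): decompose the walkable region into overlapping box Bounds and clamp against all of them, keeping the candidate closest to the desired position. The regions array below is a hypothetical field you would fill with one Bounds per box:
// A minimal sketch: "regions" is a hypothetical field holding one Bounds
// per box that together make up the L-shaped (or square) room.
public Bounds[] regions;

Vector3 ClampToRegions(Vector3 desired)
{
    // ClosestPoint returns "desired" unchanged when it is already inside a box,
    // so the nearest candidate is the clamped position.
    Vector3 best = regions[0].ClosestPoint(desired);
    foreach (Bounds b in regions)
    {
        Vector3 candidate = b.ClosestPoint(desired);
        if ((candidate - desired).sqrMagnitude < (best - desired).sqrMagnitude)
            best = candidate;
    }
    return best;
}
You would then assign gameObject.transform.position = ClampToRegions(pos); instead of the per-axis clamps above.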
Related
I am creating a 2D platformer-type game with non-flat terrain. How do I make my character's legs transition smoothly from a flat surface to a slope?
For reference you can check out [Alto's Adventure](http://altosadventure.com/) (mobile). In that game, the snowboard snaps to the curvy terrain.
Feel free to ask for more details.
To achieve this, cast a ray toward the ground and set the transform's up direction from the normal vector at the point of impact. The following code solves the basic problem.
public LayerMask groundLayer;
public float maxRayLength = 3;
public float offset = .5f; // set it to half of your character's height

public void Update()
{
    // Cast a ray downward (relative to the character) to find the ground
    var ground = Physics2D.Raycast(transform.position, -transform.up, maxRayLength, groundLayer.value);
    if (ground)
    {
        // Align with the surface normal, then place the character on the hit
        // point, offset by half its height along the new up direction
        transform.up = ground.normal;
        transform.position = new Vector3(ground.point.x, ground.point.y) + transform.up * offset;
    }
}
In addition to the script, the ground needs a suitable 2D collider and its own layer. Make sure that both the groundLayer field in the script's inspector and the layer assigned to the ground objects refer to the same ground layer.
Example Result:
My game requires applying a manual force when two cars collide, so that they move back, and I want the correct code for that. Here is an example of the collision response I want to get:
As per my understanding, I have written this code:
// Push this car away from the other car, on the horizontal plane only
Vector3 reboundDirection = Vector3.Normalize(transform.position - other.transform.position);
reboundDirection.y = 0f;

// Apply the rebound force over three consecutive physics steps
int i = 0;
while (i < 3)
{
    myRigidbody.AddForce(reboundDirection * 100f, ForceMode.Force);
    appliedSpeed = speed * 0.5f;
    yield return new WaitForFixedUpdate();
    i++;
}
I move my cars using this code:
//Move the player forward
appliedSpeed += Time.deltaTime * 7f;
appliedSpeed = Mathf.Min(appliedSpeed, speed);
myRigidbody.velocity = transform.forward * appliedSpeed;
Still, I am not getting the collision response in the proper direction. What is the correct way to get the collision response shown in the reference image above?
Until you clarify why you have to use manual forces, or how you handle the forces generated by the Unity engine, I would like to stress one problem in your approach: you calculate the direction from positions, but those positions are the centers of your cars. Therefore you do not get a correct direction, as you can see in the image below:
You calculate the direction between two pivot (center) points, which is why the force is a bit tilted in the left image. Instead, you can use the collision's ContactPoints and calculate the direction from them.
In more detail, so that the OP can follow: in the image above you can see the region marked with the blue rectangle. You can get all the contact points for that region using Collision.contacts,
then calculate their center point, or centroid, like this:
// Average all contact points to get the centroid of the contact region
Vector3 centroid = new Vector3(0, 0, 0);
foreach (ContactPoint contact in col.contacts)
{
    centroid += contact.point;
}
centroid = centroid / col.contacts.Length;
This is the center of the rectangle. To find the direction, you then need its projection onto your car, like this:
// Take the car's position but replace its x with the centroid's x, so the
// resulting direction has no sideways (x) component
Vector3 projection = gameObject.transform.position;
projection.x = centroid.x;
gameObject.GetComponent<Rigidbody>().AddForce((projection - centroid) * 100, ForceMode.Impulse);
Since I do not know your setup, I simply took the y and z values from the car's position and the x value from the centroid; this gives you a straight blue line rather than an arrow tilted to the left as in the first image, even in case two of the second image. I hope that is clear.
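For completeness, here is how the contact-centroid idea could look without assuming a particular axis: push the car away from the centroid of the contact points, through its own center. This is a sketch built around the names from the question (myRigidbody), not the OP's exact setup:
using UnityEngine;

public class CarRebound : MonoBehaviour
{
    [SerializeField] Rigidbody myRigidbody;   // the car's own rigidbody
    [SerializeField] float reboundForce = 100f;

    void OnCollisionEnter(Collision col)
    {
        // Average all contact points to get the centroid of the contact region
        Vector3 centroid = Vector3.zero;
        foreach (ContactPoint contact in col.contacts)
            centroid += contact.point;
        centroid /= col.contacts.Length;

        // Push the car away from the contact centroid, horizontally only
        Vector3 reboundDirection = transform.position - centroid;
        reboundDirection.y = 0f;
        reboundDirection.Normalize();

        myRigidbody.AddForce(reboundDirection * reboundForce, ForceMode.Impulse);
    }
}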
In Unity, say you have a 3D object,
Of course, it's trivial to get the AABB; Unity has direct functions for that,
(You might have to "add up all the bounding boxes of the renderers" in the usual way, no issue.)
So Unity does indeed have a direct function to give you the 3D AABB box instantly, out of the internal mesh/render pipeline every frame.
Now, for the Camera in question, as positioned, that AABB indeed covers a certain 2D bounding box ...
In fact ... is there some sort of built-in, direct way to find that orange 2D box in Unity?
Question - does Unity have a function which immediately gives that 2D frustum box from the pipeline?
(Note that to do it manually you just make rays (or use world to screen space as Draco mentions, same) for the 8 points of the AABB; encapsulate those in 2D to make the orange box.)
I don't need a manual solution, I'm asking if the engine gives this somehow from the pipeline every frame?
Is there a call?
(Indeed, it would be even better to have this ...)
My feeling is that one or all of
- the occlusion system in particular
- the shaders
- the renderer
would surely know the orange box, and perhaps even the blue box, inside the pipeline, right off the graphics card, just as it knows the AABB for a given mesh.
We know that Unity lets you tap the 3D AABB instantly every frame for a given mesh. In fact, does Unity give the "2D frustum bound" as shown here?
As far as I am aware, there is no built-in for this.
However, finding the extremes yourself is really pretty easy. Getting the mesh's bounding box (the cuboid shown in the screenshot) is just how this is done; you're just doing it in a transformed space.
Loop through all the vertices of the mesh, doing the following:
Transform the point from local to world space (this handles dealing with scale and rotation)
Transform the point from world space to screen space
Determine if the new point's X and Y are above/below the stored min/max values, if so, update the stored min/max with the new value
After looping over all vertices, you'll have four values: min-X, min-Y, max-X, and max-Y. Now you can construct your bounding rectangle.
You may also wish to perform a Gift Wrapping of the model first and only deal with the resulting convex hull, since points that are not part of the convex hull can never extend the bounds beyond it. If you intend to draw this screen-space rectangle while the model moves, scales, or rotates on screen, and so have to recompute the bounding box, you'll want to do this and cache the result.
Note that this does not work if the model animates (e.g. if your humanoid stands up and does jumping jacks). Solving the animated case is much more difficult, as you would have to treat every frame of every animation as part of the original mesh for the purposes of the convex hull solving (to ensure that no animation ever moves part of the mesh outside the convex hull), increasing the complexity by a power.
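For illustration, here is a minimal 2D gift-wrapping (Jarvis march) sketch that could run over the projected screen-space points. The pre-computed 3D hull described above is the better fit for caching, but the principle is the same. This assumes at least three distinct, non-collinear points:
using System.Collections.Generic;
using UnityEngine;

public static class HullUtil
{
    // Hypothetical helper: returns the convex hull of a 2D point set in
    // counter-clockwise order. Assumes >= 3 distinct, non-collinear points.
    public static List<Vector2> ConvexHull(IReadOnlyList<Vector2> points)
    {
        // The left-most point is guaranteed to be on the hull
        Vector2 pointOnHull = points[0];
        for (int i = 1; i < points.Count; i++)
            if (points[i].x < pointOnHull.x)
                pointOnHull = points[i];

        var hull = new List<Vector2>();
        Vector2 endpoint;
        do
        {
            hull.Add(pointOnHull);
            endpoint = points[0];
            for (int j = 0; j < points.Count; j++)
            {
                // cross > 0 means points[j] lies left of (pointOnHull -> endpoint),
                // i.e. it is a "more counter-clockwise" candidate
                Vector2 a = endpoint - pointOnHull;
                Vector2 b = points[j] - pointOnHull;
                float cross = a.x * b.y - a.y * b.x;
                if (endpoint == pointOnHull || cross > 0f)
                    endpoint = points[j];
            }
            pointOnHull = endpoint;
        } while (endpoint != hull[0]);

        return hull;
    }
}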
3D bounding box
Get the given GameObject's 3D bounding box center and size
Compute 8 corners
Transform positions to GUI space (screen space)
The function GUI3dRectWithObject below returns the 3D bounding box of the given GameObject on screen.
2D bounding box
Iterate through every vertex in a given GameObject
Transform every vertex's position to world space, and transform to GUI space (screen space)
Find the 4 corner values: x1, x2, y1, y2
The function GUI2dRectWithObject below returns the 2D bounding box of the given GameObject on screen.
Code
public static Rect GUI3dRectWithObject(GameObject go)
{
Vector3 cen = go.GetComponent<Renderer>().bounds.center;
Vector3 ext = go.GetComponent<Renderer>().bounds.extents;
Vector2[] extentPoints = new Vector2[8]
{
WorldToGUIPoint(new Vector3(cen.x-ext.x, cen.y-ext.y, cen.z-ext.z)),
WorldToGUIPoint(new Vector3(cen.x+ext.x, cen.y-ext.y, cen.z-ext.z)),
WorldToGUIPoint(new Vector3(cen.x-ext.x, cen.y-ext.y, cen.z+ext.z)),
WorldToGUIPoint(new Vector3(cen.x+ext.x, cen.y-ext.y, cen.z+ext.z)),
WorldToGUIPoint(new Vector3(cen.x-ext.x, cen.y+ext.y, cen.z-ext.z)),
WorldToGUIPoint(new Vector3(cen.x+ext.x, cen.y+ext.y, cen.z-ext.z)),
WorldToGUIPoint(new Vector3(cen.x-ext.x, cen.y+ext.y, cen.z+ext.z)),
WorldToGUIPoint(new Vector3(cen.x+ext.x, cen.y+ext.y, cen.z+ext.z))
};
Vector2 min = extentPoints[0];
Vector2 max = extentPoints[0];
foreach (Vector2 v in extentPoints)
{
min = Vector2.Min(min, v);
max = Vector2.Max(max, v);
}
return new Rect(min.x, min.y, max.x - min.x, max.y - min.y);
}
public static Rect GUI2dRectWithObject(GameObject go)
{
Vector3[] vertices = go.GetComponent<MeshFilter>().mesh.vertices;
float x1 = float.MaxValue, y1 = float.MaxValue, x2 = float.MinValue, y2 = float.MinValue; // start the maxima at MinValue so negative screen coordinates still register
foreach (Vector3 vert in vertices)
{
Vector2 tmp = WorldToGUIPoint(go.transform.TransformPoint(vert));
if (tmp.x < x1) x1 = tmp.x;
if (tmp.x > x2) x2 = tmp.x;
if (tmp.y < y1) y1 = tmp.y;
if (tmp.y > y2) y2 = tmp.y;
}
Rect bbox = new Rect(x1, y1, x2 - x1, y2 - y1);
Debug.Log(bbox);
return bbox;
}
public static Vector2 WorldToGUIPoint(Vector3 world)
{
Vector2 screenPoint = Camera.main.WorldToScreenPoint(world);
screenPoint.y = (float)Screen.height - screenPoint.y;
return screenPoint;
}
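A possible usage sketch, assuming these static methods live in a class reachable from your MonoBehaviour and targetObject is a hypothetical inspector field pointing at the GameObject to outline:
public GameObject targetObject; // hypothetical inspector field

void OnGUI()
{
    // Draw the screen-space bounding box every GUI pass
    Rect bbox = GUI2dRectWithObject(targetObject);
    GUI.Box(bbox, "2D bounds");
}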
Reference: Is there an easy way to get on-screen render size (bounds)?
Refer to this for animated models. It needs a GameObject with a SkinnedMeshRenderer, and the snippet below is meant to run inside OnGUI so that the final GUI.Box call can draw.
Camera camera = GetComponent<Camera>();
SkinnedMeshRenderer skinnedMeshRenderer = target.GetComponent<SkinnedMeshRenderer>();
// Get the real time vertices
Mesh mesh = new Mesh();
skinnedMeshRenderer.BakeMesh(mesh);
Vector3[] vertices = mesh.vertices;
for (int i = 0; i < vertices.Length; i++)
{
// World space
vertices[i] = target.transform.TransformPoint(vertices[i]);
// GUI space
vertices[i] = camera.WorldToScreenPoint(vertices[i]);
vertices[i].y = Screen.height - vertices[i].y;
}
Vector3 min = vertices[0];
Vector3 max = vertices[0];
for (int i = 1; i < vertices.Length; i++)
{
min = Vector3.Min(min, vertices[i]);
max = Vector3.Max(max, vertices[i]);
}
Destroy(mesh);
// Construct a rect of the min and max positions
Rect r = Rect.MinMaxRect(min.x, min.y, max.x, max.y);
GUI.Box(r, "");
I have a Quad whose vertices I'm printing like this:
public MeshFilter quadMeshFilter;
foreach (var vertex in quadMeshFilter.mesh.vertices)
{
print(vertex);
}
And, the localScale like this:
public GameObject quad;
print(quad.transform.localScale);
Vertices are like this:
(-0.5, -0.5), (0.5, 0.5), (0.5, -0.5), (-0.5, 0.5)
while the localScale is:
(6.4, 4.8, 0)
How is this possible? The vertices make a square, but the localScale does not.
How do I use vertices and draw another square in front of the quad?
I am not well versed in the matters of meshes, but I believe I know the answer to this question.
Answer
How is this possible
Scale is a multiplier applied to your mesh's size along each axis (x, y, z). A scale of 1 is the default size, a scale of 2 is double size, and so on. Your localSpace coordinates are then multiplied by this scale.
Say a localSpace coordinate is (1, 0, 2) and the scale is (3, 1, 3). The result is (1*3, 0*1, 2*3) = (3, 0, 6).
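A quick way to verify this in code, a sketch assuming a transform at the origin with no rotation:
// Let Unity do the same arithmetic via TransformPoint
transform.position = Vector3.zero;
transform.localScale = new Vector3(3, 1, 3);
Vector3 world = transform.TransformPoint(new Vector3(1, 0, 2));
Debug.Log(world); // prints (3.0, 0.0, 6.0)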
How do I use vertices and draw another square in front of the quad?
I'd personally just create the object and then move it via Unity's Transform system, since that lets you change the worldSpace coordinates using transform.position = new Vector3(1f, 5.4f, 3f);
You might be able to move each individual vertex in WorldSpace too, but I haven't tried that before.
I imagine it is related to this bit of code though: vertices[i] = transform.TransformPoint(vertices[i]); since TransformPoint converts from localSpace to worldSpace based on the Transform using it.
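A minimal sketch of that approach, assuming quad references the original and that "in front" means the visible side of the quad (a Unity Quad's face points opposite its transform.forward):
public GameObject quad; // the original quad

void CreateQuadInFront()
{
    // Copy rotation and scale, then offset along the visible side of the face
    GameObject second = GameObject.CreatePrimitive(PrimitiveType.Quad);
    second.transform.rotation = quad.transform.rotation;
    second.transform.localScale = quad.transform.localScale;
    second.transform.position = quad.transform.position - quad.transform.forward * 0.1f;
}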
Elaboration
Why do I get lots of 0's and 5's in my vertex coordinates despite the objects having other positions in the world?
If I print the vertices of a quad using the script below, I get these results, which have three coordinates and can be multiplied as such by localScale.
Print result:
Script:
Mesh mesh = GetComponent<MeshFilter>().mesh;
var vertices = mesh.vertices;
Debug.Log("Local Space.");
foreach (var v in vertices)
{
Debug.Log(v);
}
This first result is what we call local space.
There also exists something called worldSpace. You can convert between localSpace and worldSpace.
localSpace is the object's mesh vertices relative to the object itself, while worldSpace is the object's location in the Unity scene.
Then you get the results seen below: first the localSpace coordinates as in the first image, then the worldSpace coordinates converted from those local coordinates.
Here is the script I used to print the above result.
Mesh mesh = GetComponent<MeshFilter>().mesh;
var vertices = mesh.vertices;
Debug.Log("Local Space.");
foreach (var v in vertices)
{
Debug.Log(v);
}
Debug.Log("World Space");
for (int i = 0; i < vertices.Length; ++i)
{
vertices[i] = transform.TransformPoint(vertices[i]);
Debug.Log(vertices[i]);
}
Good luck with your future learning process.
This becomes clear once you understand how Transform hierarchies work. It's a tree in which each parent Transform contributes its position, rotation, and scale, combined internally into a 4x4 TRS matrix (rotation is actually a quaternion, but let's assume Euler angles for simplicity so the math works out). By extension of this philosophy, the mesh itself can be seen as a child of the GameObject that holds it.
If you imagine a 1x1 quad (which is what your vertices describe) parented to a GameObject whose Transform has a non-one localScale, all the vertices in the mesh get multiplied by that scale, and the position is added on top.
Now if you parent that object to another GameObject and give it another localScale, this will again multiply all the vertex positions by that scale, translate them by its position, and so on.
To answer your question: the global positions of your vertices differ from those stored in the source mesh because they are fed through a chain of Transforms all the way up to the scene root.
This is both the reason that we only have localScale and not a global scale, and also the reason why non-uniform scaling of objects that contain rotated children can sometimes give very strange results. Transforms stack.
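A tiny sketch of that stacking:
// localScale values multiply down the Transform hierarchy
GameObject parent = new GameObject("Parent");
parent.transform.localScale = Vector3.one * 2f;

GameObject child = GameObject.CreatePrimitive(PrimitiveType.Quad);
child.transform.SetParent(parent.transform, false);
child.transform.localScale = Vector3.one * 3f;

// lossyScale is the accumulated world scale: prints (6.0, 6.0, 6.0)
Debug.Log(child.transform.lossyScale);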
var startPoint =
shaft.transform.position
+ shaft.transform.forward;
var ray = new Ray(
startPoint,
-shaft.transform.forward
);
RaycastHit rayCastHit;
Physics.Raycast(ray, out rayCastHit);
var textured2D = (Texture2D)discoBall.GetComponent<Renderer>().material.mainTexture;
Vector2 textureCoord = rayCastHit.textureCoord;
Debug.Log(string.Format(
"{0},{1} at distance {2}",
textureCoord.x * textured2D.width,
textureCoord.y * textured2D.height,
rayCastHit.distance
));
I have a sphere with a "Shaft" object inside it. I work out a startPoint a distance away from the shaft, in the direction the shaft points, to get outside the sphere. I then create a ray pointing back at the sphere over the same distance, so that it collides with the outside of my sphere.
The Debug.Log outputs x,y = 0,0 for the textureCoord, and the correct value of 0.35 for the distance. Why is the textureCoord always 0,0 when I do in fact have a material with a texture on my sphere?
Actually, let me move my comment here: does the sphere use a mesh collider or a sphere collider? If it doesn't use a mesh collider, textureCoord returns Vector2.zero.
Update:
To change the collider type of a GameObject, highlight it in the Unity editor and go to Component->Physics->MeshCollider. If prompted to 'Replace Existing Component', select 'Replace'.
GameObjects added from Unity's GameObject->Create Other menu tend to default to colliders based on the shape of the object (spheres, boxes, etc.), since mesh colliders are computationally more expensive.
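If you want to guard against this in code, a small sketch (assuming a ray built as in the question):
// RaycastHit.textureCoord is only meaningful when the collider is a MeshCollider
RaycastHit hit;
if (Physics.Raycast(ray, out hit))
{
    if (hit.collider is MeshCollider)
        Debug.Log(hit.textureCoord);
    else
        Debug.LogWarning("Not a MeshCollider; textureCoord will be (0, 0)");
}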
Is the texture2D's width and height actually what you expect? Possibly mainTexture is not the correct texture on the shader you are using.