I am not understanding Unity's explanation of Collider2D.bounds.
From Unity docs: "The world space bounding area of the collider."
Could someone give a better explanation? Furthermore, kindly explain Collider2D.bounds.max and min.
"the world space bounding area of the collider"
In Unity you can express an object's coordinates in different systems: a local system, which is independent of the object's parents and in which the object itself is the origin, and a world system.
While the local system is centered on the object, the world system is a fixed frame of reference that describes exactly where the object sits in your scene; without the world system you would not be able to say where an object actually is.
Object space:
World space:
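A quick way to see the difference in code (a minimal sketch; attach it to any object that has a parent):

using UnityEngine;

public class SpaceDemo : MonoBehaviour
{
    void Start()
    {
        // World space: the object's absolute position in the scene.
        Debug.Log(transform.position);
        // Local space: the object's position relative to its parent.
        Debug.Log(transform.localPosition);
    }
}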
In your case you get the bounds (for a box, the minimal and maximal positions) that the object occupies in the world system.
"bounds min/max"
bounds.min.x will be the lowest x position of the bounds (of the object), and bounds.max.x will be the highest.
Edit:
here you can see how bounding volumes work. A bounding volume always encloses every single vertex of your object, but there are different kinds of bounding boxes; Unity uses the axis-aligned bounding box (AABB).
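Putting it together, a minimal sketch that reads a Collider2D's world-space AABB:

using UnityEngine;

public class BoundsProbe : MonoBehaviour
{
    void Start()
    {
        Collider2D col = GetComponent<Collider2D>();
        Bounds b = col.bounds;  // axis-aligned box in world space
        Debug.Log(b.min);       // bottom-left corner (lowest x and y)
        Debug.Log(b.max);       // top-right corner (highest x and y)
        Debug.Log(b.center);    // midpoint: (min + max) / 2
    }
}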
Does anybody know how to retrieve an actor's world-space oriented bounding box (8 points) in C++? I'm reading the official documentation, but it's a bit vague, as it never specifies whether the bounds objects (FBox, FBoxSphereBounds) are local space, world space, axis-aligned, etc.
I'm thinking something like below, but I'm not sure if that's right:
UStaticMeshComponent* pMesh = Cast<UStaticMeshComponent>(actor->GetComponentByClass(UStaticMeshComponent::StaticClass()));
if (pMesh)
{
    UStaticMesh* pStaticMesh = pMesh->GetStaticMesh();
    if (pStaticMesh && pStaticMesh->GetRenderData())
    {
        FStaticMeshRenderData* pRenderData = pStaticMesh->GetRenderData();
        // TransformBy returns the transformed bounds rather than modifying in place
        FBoxSphereBounds bounds = pRenderData->Bounds.TransformBy(actor->GetActorTransform());
    }
}
Unreal maintains its bounds as axis-aligned boxes (AABBs). This is often done in game engines for efficiency in the physics/collision subsystem. To get an AABB for an actor, you can use the following function; it is essentially equivalent to what you did above with pRenderData->Bounds, but is independent of the actor implementation.
FBox GetActorAABB(const AActor& Actor)
{
    FVector ActorOrigin;
    FVector BoxExtent;
    // First argument is bOnlyCollidingComponents - if you want to get the bounds for components that don't have collision enabled then set to false
    // Last argument is bIncludeFromChildActors. Usually this won't do anything but if we've child-ed an actor - like a gun child-ed to a character - then we wouldn't want the gun to be part of the bounds so set to false
    Actor.GetActorBounds(true, ActorOrigin, BoxExtent, false);
    return FBox::BuildAABB(ActorOrigin, BoxExtent);
}
From the code above it looks like you want an oriented bounding box (OBB), since you are applying the actor's transform to the bounds. The trouble is that the AABB Unreal maintains is "fit" to the world-space axes, and what you are essentially doing just rotates the center point of the AABB, which will not give a "tight fit" for rotation angles far from the world axes. The following two UE forum posts provide some insight into how you might do this:
https://forums.unrealengine.com/t/oriented-bounding-box-from-getlocalbounds/241396
https://forums.unrealengine.com/t/object-oriented-bounding-box-from-either-aactor-or-mesh/326571/4
If you want a true OBB, FOrientedBox is what you need, but depending on what you are trying to do, the engine lacks documented utilities for intersection or overlap tests with this structure. Those routines do exist in the engine, but you have to hunt through the source code to find them. In general, the separating axis theorem (SAT) can be used to find collisions between two convex shapes, which an OBB is by definition.
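For reference, SAT says two convex shapes A and B are disjoint if and only if some axis n separates their projections:

\[
\max_{a \in A} a \cdot \hat{n} \;<\; \min_{b \in B} b \cdot \hat{n}
\quad\text{or}\quad
\max_{b \in B} b \cdot \hat{n} \;<\; \min_{a \in A} a \cdot \hat{n}
\]

For a pair of OBBs it suffices to test 15 candidate axes: the three face normals of each box plus the nine pairwise cross products of their edge directions.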
I saw some documents saying that there is no concept of length in Unity. All you can do to determine the dimensions of the GameObjects is to use Scale.
Then how could I set the overall relative dimensions between the GameObjects?
For example, the dimensions of a 1:1:1 plane are obviously different from those of a 1:1:1 sphere! So how could I know the relative ratio between the plane and the sphere? 1 unit of the plane's length equals how many units of the sphere's diameter? Otherwise, how could I know whether I had set everything in the right proportion?
Well, what you say is right, but consider that objects can have a collider. In the case of a sphere, you can obtain the radius with SphereCollider.radius.
Also consider Bounds.extents, which is relative to the object's bounding box.
Again, taking the sphere, you can obtain the diameter with:
Mesh mesh = GetComponent<MeshFilter>().mesh;
Bounds bounds = mesh.bounds;            // local-space bounds of the raw mesh data
float diameter = bounds.extents.x * 2;  // 1 for Unity's built-in sphere at scale 1,1,1
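Note that mesh.bounds describes the raw vertex data, so scaling the object is not reflected in it. A small sketch of two ways to account for scale (assuming the object also has a Renderer):

float worldDiameter = GetComponent<Renderer>().bounds.size.x;        // world-space AABB of the rendered object
float scaledDiameter = mesh.bounds.size.x * transform.lossyScale.x;  // local bounds times the accumulated scale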
All GameObjects in Unity have a Transform component, which determines their position, rotation and scale. Most 3D objects also have a MeshFilter component, which contains a reference to the Mesh object.
The Mesh contains the actual shape of the object, for example the six faces of a cube or the faces of a sphere. Unity provides a handful of built-in objects (cube, sphere, cylinder, plane, quad), but this is just a 'starter kit'. Most of those built-in objects are 1 unit in size, but this is purely because the vertices have been placed in those positions (so you need to scale by 2 to get a 2-unit size).
But there is no limit on positions within a mesh: you can have a tiny, tiny object or a whole terrain object, and have them massively different in size despite both keeping their scale at 1.
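A minimal sketch of that idea: a Mesh is just vertex positions, so "size" is simply where you put the vertices. This builds a 2-unit quad while the scale stays at 1,1,1:

using UnityEngine;

public class TwoUnitQuad : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = new Mesh();
        // The size comes from the vertex positions alone; the Transform's scale stays 1,1,1.
        mesh.vertices = new Vector3[] {
            new Vector3(-1f, -1f, 0f), new Vector3(1f, -1f, 0f),
            new Vector3(-1f,  1f, 0f), new Vector3(1f,  1f, 0f)
        };
        mesh.triangles = new int[] { 0, 2, 1, 2, 3, 1 };
        mesh.RecalculateNormals();

        gameObject.AddComponent<MeshFilter>().mesh = mesh;
        gameObject.AddComponent<MeshRenderer>(); // no material assigned, so it renders with Unity's fallback
    }
}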
You should try to learn a 3D modelling application to create arbitrary objects.
Alternatively, try installing a plugin called ProBuilder, which used to be quite expensive and is now free (since being acquired by Unity); it enables in-editor modelling.
Scales are best kept at one, but it's good to have the option to scale - this way you can re-use the sphere mesh or the cube mesh at different scales (less waste of memory).
In most Unity applications you settle on some arbitrary convention for what one unit means.
Typically 1 m = 1 unit.
So all things that are 1 unit tall are 1 m tall.
If you import a mesh from a modelling program at the wrong size, scale it to exactly one meter (use a standard 1,1,1 cube as reference). Then stick it inside an empty game object to “convert” it into your game’s proper scale. Now if you scale the empty object’s y axis to 2, the object is 2 meters tall.
A better solution is to keep every object’s highest parent in the hierarchy at 1,1,1 scale. Using the 1,1,1 reference cube, scale your object to a size that looks proper. For example, if I had a model of a person, I’d want it scaled to roughly twice the height of the cube. Then drag it into an empty object of 1,1,1 scale. This way, everything at your scene’s “normal” size is 1,1,1, and if you want to double the size of something you make it 2,2,2. In practice this is much more useful than the first option.
Now, if you change its position by 1 unit, it effectively moves by what looks like a proper 1 m as well.
This process also lets you change where the “bottom” of an object is. You can change the position of the object inside the empty, creating an “offset”. This is useful for making models stand right on the ground with position y=0.
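A minimal sketch of that wrapping step done in code (the names here are placeholders; in practice you'd usually do this in the editor):

using UnityEngine;

public class WrapModel : MonoBehaviour
{
    public Transform model; // hypothetical reference to the imported model

    void Start()
    {
        // The empty parent stays at 1,1,1 so the hierarchy keeps a "normal" scale.
        GameObject root = new GameObject("ModelRoot");
        model.SetParent(root.transform, false);

        // Offset the child inside the empty so its feet sit at y = 0.
        model.localPosition = new Vector3(0f, 1f, 0f);
    }
}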
I have tried to find any information on how Unity assigns pivot points to objects, but all I keep finding is threads on how to move pivot points, and that it can't be done.
I am creating a 2D game with a background that is randomly created from meshes wrapped in empty GameObjects. These objects are organically shaped, but they have a property that returns a rectangle bounding the object, so that they can be placed without overlapping. The trouble is that the algorithm assumes that the pivot point is going to be the center of the object.
What I would like to know is: how does Unity decide where the pivot point will be set, so that I can predict how much I will need to move my mesh inside the parent object so that the pivot point ends up in the center of the bounding rectangle?
Possible fix:
Try creating the meshes during runtime and see if Unity always places the pivot point at a certain corner, or at least in relatively the same location.
If it does, you would know where the pivot point is and could take it into account in your code, provided you also know the size of the mesh you spawn.
So the most general and correct answer I can come up with is that Unity assigns the pivot point to the center of the GameObject that you apply the Mesh to. Depending on how you create them, the local coordinates of the mesh's vertices might place the mesh so that its logical center is not the same as that of the empty GameObject it is attached to. What I did to fix the issue was to make a vector from the local point (0,0,0) to the center of the bounding rectangle and translate the vertices of my mesh by that vector, inverted. It wasn't perfect, but by far close enough to ensure that I won't have any overlapping meshes.
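A minimal sketch of that recentering step (assuming the mesh is already assigned to a MeshFilter):

using UnityEngine;

public class RecenterMesh : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;

        // Vector from the local origin to the mesh's logical center.
        Vector3 offset = mesh.bounds.center;

        // Shift every vertex by the inverted offset so the pivot lands in the middle.
        Vector3[] vertices = mesh.vertices;
        for (int i = 0; i < vertices.Length; i++)
            vertices[i] -= offset;

        mesh.vertices = vertices;
        mesh.RecalculateBounds();
    }
}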
I'm learning Unity from the book "Unity Game Development in 24 Hours". The book says:
Translation: Translation is an inert transformation. That means any changes applied after it won't be affected by it.
Scaling: Scaling effectively changes the size of the local coordinate grid. Basically, when you scale an object to be larger, you are really scaling the local coordinate system to be larger. This causes the object to seem to grow. This change is multiplicative. For example, if an object is scaled to 1 (its natural, default size) and then translated 5 units along the x axis, the object appears to move 5 units to the right. If the same object were to be scaled to 2, however, then translating 5 units on the x axis would result in the object appearing to move 10 units to the right. This is because the local coordinate system is now double the size and 5 times 2 equals 10. Inversely, if the object were scaled to .5 and then moved, it would appear to only move 2.5 units (.5 x 5 = 2.5)
I tried to experiment with these two effects, but it didn't work that way. With translation, I can apply changes after it just fine. And with scaling, it scaled the local coordinate system multiplicatively, but it didn't multiply the effect of translation. Am I understanding this wrong, or is it the book?
Translating (using the Transform.Translate method) means moving the object's transform by some vector. Simple as that.
Local scale is a little more complicated. It scales not only the object itself, but all objects that are children of it. And the distance moved is relative: if you have a cube that's 1x1x1 in size and you move it by 1 unit, it moves its full length. If, however, you scale it by 2 and then move it by 1 unit, it moves only half its size.
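A minimal sketch of where the book's multiplicative effect does show up in Unity: a child's local translation is multiplied by its parent's scale (the object names here are just for illustration):

using UnityEngine;

public class ScaleTranslateDemo : MonoBehaviour
{
    void Start()
    {
        GameObject parent = new GameObject("ScaledParent");
        parent.transform.localScale = new Vector3(2f, 2f, 2f);

        GameObject child = GameObject.CreatePrimitive(PrimitiveType.Cube);
        child.transform.SetParent(parent.transform, false);

        // 5 units in the parent's local grid = 10 units in world space (5 * 2).
        child.transform.localPosition += Vector3.right * 5f;
        Debug.Log(child.transform.position.x); // prints 10
    }
}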
From what you wrote, the book is probably a really bad source for learning Unity3D. Try doing some official tutorials instead; they are really good and explain the basics well. This one is pretty good, this one as well. And remember, anytime you are in doubt with Unity, try searching their really good documentation first.
The Screen-to-world problem on the iPhone
I have a 3D model (a cube) rendered in an EAGLView, and I want to be able to detect when I am touching the center of a given face (from any orientation angle) of the cube. Sounds pretty easy, but it is not...
The problem:
How do I accurately relate screen coordinates (the touch point) to world coordinates (a location in OpenGL 3D space)? Sure, converting a given point into a 'percentage' of the screen/world axis might seem the logical fix, but problems arise when I need to zoom or rotate the 3D space. Note: rotating and zooming in and out of the 3D space will change the relationship between the 2D screen coords and the 3D world coords... You'd also have to allow for the distance between the viewpoint and objects in 3D space. At first this might seem like an 'easy task', but that changes when you actually examine the requirements. And I've found no examples of people doing this on the iPhone. How is this normally done?
An 'easy' task?:
Sure, one might undertake the task of writing an API to act as a go-between between screen and world, but creating such a framework would require some serious design and would likely take time to do - NOT something that can be one-manned in 4 hours... and 4 hours happens to be my deadline.
The question:
What are some of the simplest ways to know if I touched specific locations in 3D space in the iPhone OpenGL ES world?
You can now find gluUnProject in http://code.google.com/p/iphone-glu/. I've no association with the iphone-glu project and haven't tried it yet myself, just wanted to share the link.
How would you use such a function? This PDF mentions that:
The Utility Library routine gluUnProject() performs this reversal of the transformations. Given the three-dimensional window coordinates for a location and all the transformations that affected them, gluUnProject() returns the world coordinates from where it originated.
int gluUnProject(GLdouble winx, GLdouble winy, GLdouble winz,
                 const GLdouble modelMatrix[16], const GLdouble projMatrix[16],
                 const GLint viewport[4],
                 GLdouble *objx, GLdouble *objy, GLdouble *objz);
Map the specified window coordinates (winx, winy, winz) into object coordinates, using transformations defined by a modelview matrix (modelMatrix), projection matrix (projMatrix), and viewport (viewport). The resulting object coordinates are returned in objx, objy, and objz. The function returns GL_TRUE, indicating success, or GL_FALSE, indicating failure (such as a noninvertible matrix). This operation does not attempt to clip the coordinates to the viewport or eliminate depth values that fall outside of glDepthRange().
There are inherent difficulties in trying to reverse the transformation process. A two-dimensional screen location could have originated from anywhere on an entire line in three-dimensional space. To disambiguate the result, gluUnProject() requires that a window depth coordinate (winz) be provided and that winz be specified in terms of glDepthRange(). For the default values of glDepthRange(), winz at 0.0 will request the world coordinates of the transformed point at the near clipping plane, while winz at 1.0 will request the point at the far clipping plane.
Example 3-8 (again, see the PDF) demonstrates gluUnProject() by reading the mouse position and determining the three-dimensional points at the near and far clipping planes from which it was transformed. The computed world coordinates are printed to standard output, but the rendered window itself is just black.
In terms of performance, I found this quickly via Google as an example of what you might not want to do using gluUnProject, with a link to what might lead to a better alternative. I have absolutely no idea how applicable it is to the iPhone, as I'm still a newb with OpenGL ES. Ask me again in a month. ;-)
You need the OpenGL projection and modelview matrices. Multiply them to get the modelview-projection matrix, then invert it to obtain a matrix that transforms clip-space coordinates into world coordinates. Transform your touch point so it corresponds to clip coordinates: the center of the screen should be zero, while the edges should be +1/-1 for X and Y respectively.
Construct two points, one at (0,0,0) and one at (touch_x, touch_y, -1), and transform both by the inverse modelview-projection matrix.
Divide each transformed point by its w component (undoing the perspective divide).
You should get two points describing a line from the center of the camera into "the far distance" (the far plane).
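In symbols, a sketch of the steps above (with (sx, sy) the touch point in pixels, w×h the screen size, P the projection matrix and M the modelview matrix):

\[
x_{\mathrm{ndc}} = \frac{2 s_x}{w} - 1,
\qquad
y_{\mathrm{ndc}} = 1 - \frac{2 s_y}{h}
\]
\[
p = (P M)^{-1} \, (x_{\mathrm{ndc}},\; y_{\mathrm{ndc}},\; z,\; 1)^{\mathsf{T}},
\qquad
p \;\leftarrow\; p / p_w
\]

Transforming both clip-space points from the previous step this way gives the two ends of the pick ray.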
Do picking based on simplified bounding boxes of your models. You should be able to find ray/box intersection algorithms aplenty on the web.
Another solution is to paint each of the models in a slightly different color into an offscreen buffer and read the color back at the touch point, telling you which object was touched.
Here's the source for a cursor I wrote for a little project using Bullet physics:
// Convert the mouse position to normalized device coordinates (-1..1, with y flipped).
float x=((float)mpos.x/screensize.x)*2.0f -1.0f;
float y=((float)mpos.y/screensize.y)*-2.0f +1.0f;
// Unproject a point on the far plane back into world space, then divide by w.
p2=renderer->camera.unProject(vec4(x,y,1.0f,1));
p2/=p2.w;
// Ray start: the camera position, nudged a tiny step toward the far point.
vec4 pos=activecam.GetView().col_t;
p1=pos+(((vec3)p2 - (vec3)pos) / 2048.0f * 0.1f);
p1.w=1.0f;
// Cast the ray through the Bullet world, keeping the closest hit.
btCollisionWorld::ClosestRayResultCallback rayCallback(btVector3(p1.x,p1.y,p1.z),btVector3(p2.x,p2.y,p2.z));
game.dynamicsWorld->rayTest(btVector3(p1.x,p1.y,p1.z),btVector3(p2.x,p2.y,p2.z), rayCallback);
if (rayCallback.hasHit())
{
    btRigidBody* body = btRigidBody::upcast(rayCallback.m_collisionObject);
    if (body == game.worldBody)
    {
        renderer->setHighlight(0);
    }
    else if (body)
    {
        Entity* ent = (Entity*)body->getUserPointer();
        if (ent)
        {
            renderer->setHighlight(dynamic_cast<ModelEntity*>(ent));
            //cerr<<"hit ";
            //cerr<<ent->getName()<<endl;
        }
    }
}
Imagine a line that extends from the viewer's eye through the screen touch point into your 3D model space. If that line intersects any of the cube's faces, then the user has touched the cube.
Two solutions present themselves. Both of them should achieve the end goal, albeit by a different means: rather than answering "what world coordinate is under the mouse?", they answer the question "what object is rendered under the mouse?".
One is to draw a simplified version of your model to an off-screen buffer, rendering the center of each face using a distinct color (and adjusting the lighting so color is preserved identically). You can then detect those colors in the buffer (e.g. pixmap), and map mouse locations to them.
The other is to use OpenGL picking. There's a decent-looking tutorial here. The basic idea is to put OpenGL in select mode, restrict the viewport to a small (perhaps 3x3 or 5x5) window around the point of interest, and then render the scene (or a simplified version of it) using OpenGL "names" (integer identifiers) to identify the components making up each face. At the end of this process, OpenGL can give you a list of the names that were rendered in the selection viewport. Mapping these identifiers back to original objects will let you determine what object is under the mouse cursor.
Google for "opengl screen to world" (for example, there's a thread on GameDev.net where somebody wants to do exactly what you are looking for). There is a gluUnProject function that does precisely this, but it's not available on iPhone, so you have to port it (see this source from the Mesa project). Or maybe there's already some publicly available source somewhere?