Get world space Oriented Bounding Box 8 points in Unreal (C++) - unreal-engine4

Does anybody know how to retrieve an actor's world space oriented bounding box 8 points in C++? I'm reading the official documentation, but it's a bit vague: it never specifies whether the bounds objects (FBox, FBoxSphereBounds) are local space, world space, axis aligned, etc.
I'm thinking something like below, but I'm not sure if that's right:
UStaticMeshComponent* pMesh = Cast<UStaticMeshComponent>(actor->GetComponentByClass(UStaticMeshComponent::StaticClass()));
if (pMesh)
{
    UStaticMesh* pStaticMesh = pMesh->GetStaticMesh();
    if (pStaticMesh && pStaticMesh->GetRenderData())
    {
        FStaticMeshRenderData* pRenderData = pStaticMesh->GetRenderData();
        FBoxSphereBounds bounds = pRenderData->Bounds;
        bounds = bounds.TransformBy(actor->GetActorTransform());
    }
}

Unreal maintains its bounds as axis-aligned bounding boxes (AABBs). This is often done in game engines for efficiency in the physics/collision subsystem. To get an AABB for an actor, you can use the following function; it is essentially equivalent to what you did above with pRenderData->Bounds but is independent of the actor implementation.
FBox GetActorAABB(const AActor& Actor)
{
    FVector ActorOrigin;
    FVector BoxExtent;
    // First argument is bOnlyCollidingComponents - if you want the bounds of components that don't have collision enabled, set it to false
    // Last argument is bIncludeFromChildActors. Usually this won't do anything, but if we've child-ed an actor - like a gun child-ed to a character - then we wouldn't want the gun to be part of the bounds, so set it to false
    Actor.GetActorBounds(true, ActorOrigin, BoxExtent, false);
    return FBox::BuildAABB(ActorOrigin, BoxExtent);
}
From the code above it looks like you want an oriented bounding box (OBB), since you are applying the actor's transform to the bounds. The trouble is that the AABB Unreal maintains is "fit" to the world space axes, and what you are doing essentially just rotates the center point of the AABB, which will not give a tight fit for rotation angles far from the world axes. The following two UE forum posts provide some insight into how you might do this:
https://forums.unrealengine.com/t/oriented-bounding-box-from-getlocalbounds/241396
https://forums.unrealengine.com/t/object-oriented-bounding-box-from-either-aactor-or-mesh/326571/4
If you want a true OBB, FOrientedBox is the structure you need, but the engine lacks documented utilities for intersection or overlap tests with it. Depending on what you are trying to do, such utilities do exist in the engine, though you have to hunt through the source code to find them. In general, the separating axis theorem (SAT) can be used to find collisions between two convex shapes, which an OBB is by definition.
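Putting those forum threads together, here is a minimal sketch of the local-bounds approach (assuming UStaticMesh::GetBounds(), FBoxSphereBounds::GetBox() and GetComponentTransform() behave as described here - verify the exact calls against your engine version): take the 8 corners of the mesh's local-space box and transform each one by the component's world transform to get the world-space OBB points.
#include "Components/StaticMeshComponent.h"
#include "Engine/StaticMesh.h"

// Sketch: corners of the mesh's local-space bounding box, moved into world space.
// Assumes UStaticMesh::GetBounds().GetBox() yields the local-space box of the asset.
TArray<FVector> GetWorldSpaceOBBCorners(const UStaticMeshComponent& MeshComp)
{
    TArray<FVector> Corners;
    const UStaticMesh* StaticMesh = MeshComp.GetStaticMesh();
    if (!StaticMesh)
    {
        return Corners;
    }

    const FBox LocalBox = StaticMesh->GetBounds().GetBox();
    const FVector Min = LocalBox.Min;
    const FVector Max = LocalBox.Max;
    const FTransform& ComponentToWorld = MeshComp.GetComponentTransform();

    // Enumerate the 8 min/max combinations per axis and transform each into world space.
    Corners.Reserve(8);
    for (int32 i = 0; i < 8; ++i)
    {
        const FVector LocalCorner(
            (i & 1) ? Max.X : Min.X,
            (i & 2) ? Max.Y : Min.Y,
            (i & 4) ? Max.Z : Min.Z);
        Corners.Add(ComponentToWorld.TransformPosition(LocalCorner));
    }
    return Corners;
}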

Related

Why does a big moving attractor object give a slingshot to other objects when using the gravity formula, rather than holding them?

I have a big spherical GameObject which moves forward in 3D with constant velocity. I have other spherical objects that the big object needs to attract to itself. I am using Newton's law of universal gravitation to attract the other objects, but as expected, they perform a slingshot movement, much like space shuttles use other planets' orbits to accelerate when needed.
I actually want a magnetic effect in which, without taking the masses into account, all other objects are caught by the big object. How can I do that? Do I need a different formula? Or do I need to change the movement behavior of the objects altogether?
If I got it right you expect to have something like this: https://www.youtube.com/watch?v=33EpYi3uTnQ
You can do a spherical raycast or use a sphere collider as a trigger to detect the objects that are inside your magnetic field.
Once you know those objects you can calculate the distance from each of them to the magnetic ball.
You can make an inverse interpolation to know how much strength/"magnetism" is getting into that object.
Then you can apply some force on the attracted object towards the magnetic ball's center.
Something like this algorithm:
var objectsInsideField = ListOfObjects;
foreach (var o in objectsInsideField) {
    var toCenter = center.position - o.position;      // points towards the magnetic ball
    var distance = toCenter.magnitude;
    var strength = 1 - (distance / fieldRadius);      // inverse lerp: 1 at the center, 0 at the edge (fieldRadius == spherical radius)
    o.AddForce(dir: toCenter.normalized, strength: strength);
}
Of course, you need to do some adjustments and probably add some multipliers to make the force meaningful.
The final result should be: each frame, if the object is inside the magnetic field, it moves towards the center a bit. The next frame it is even closer to the center, so the strength is even bigger, and so on.
First of all, since the big object is moving with constant velocity, a coordinate frame attached to it with axes parallel to those of the original coordinate system moves uniformly with constant velocity. This new moving coordinate system is therefore also inertial, so you can write all your equations of motion in it, calculate everything with respect to it, and at the end add the uniform movement back to the results. The benefit is that the big object is stationary in this coordinate system, so simpler physics applies.
The slingshot effect most likely occurs because your objects are treated as point masses rather than larger 3D entities (like spheres), whose centers of mass never get too close to each other. Hence some sort of collision detection may eliminate this problem, especially if you significantly reduce or completely remove the elastic collision response.
All of what I am saying is a bit speculative as I have no access to details.

Collider2D.bounds Explanation

I am not understanding Unity's explanation of Collider2D.bounds.
From Unity docs: "The world space bounding area of the collider."
Could someone give a better explanation? Furthermore, kindly explain Collider2D.bounds.max and min.
"the world space bounding area of the collider"
In Unity you can get your objects in different coordinate representations: a local system, which is independent of the object's parents and in which the object itself is the center, and a world system.
While the local system is centered around the object, the world system is fixed and describes exactly where the object is in your scene; without the world system you would not be able to know where an object is, since its values describe exactly that.
(Images illustrating object space and world space.)
In your case you get the bounds (in the case of a box, the minimal and maximal positions) that the object has in the world system.
"bounds min/max"
bounds.min.x will be the lowest x position of the bounds (of the object), and bounds.max.x will be the highest.
Edit:
Here you can see how bounding volumes work. A bounding volume always encloses every single vertex of your object, but there are different kinds of bounding boxes; Unity uses the axis-aligned bounding box (AABB).
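To make that concrete, an AABB is just the per-axis minimum and maximum over all vertices. A tiny engine-agnostic sketch in C++ (Vec3 is a hypothetical stand-in, not a Unity type):
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };   // hypothetical vertex type
struct AABB { Vec3 min, max; };   // corresponds to bounds.min / bounds.max above

// Grows the box axis by axis until it encloses every vertex (assumes at least one vertex).
AABB ComputeAABB(const std::vector<Vec3>& vertices)
{
    AABB box{vertices.front(), vertices.front()};
    for (const Vec3& v : vertices)
    {
        box.min = {std::min(box.min.x, v.x), std::min(box.min.y, v.y), std::min(box.min.z, v.z)};
        box.max = {std::max(box.max.x, v.x), std::max(box.max.y, v.y), std::max(box.max.z, v.z)};
    }
    return box;
}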

In Unity3D, what would "setting" the bounds of a mesh do or achieve?

In a Unity code base, I saw this:
// the game object currently has no mesh attached
MeshFilter mFilter = gameObject.AddComponent<MeshFilter>();
gameObject.AddComponent<MeshRenderer>();
// no problem so far
mFilter.mesh = MakeASmallQuadMesh(0.1f);
// great stuff
mFilter.mesh.bounds = SomeSpecificBounds();
// what ?
The function "MakeASmallQuadMesh" has the usual completely normal code for making a mesh, so
Mesh mesh = new Mesh();
mesh.SetVertices(verts);
mesh.SetIndices(indices);
mesh.SetUVs(0, uvs);
mesh.RecalculateNormals();
mesh.RecalculateBounds();
return mesh;
No worries there. It makes a quad 10cm across.
But what about the line of code
mFilter.mesh.bounds = SomeSpecificBounds();
I was amazed to learn you can set mesh.bounds, I assumed it would be read-only.
What possible "meaning" is there to "setting" the bounds? It would be like: there's a written measurement in a doctor's office stating that Jane is 6'. You change the record to 5'10". Of course, Jane's height does not change at all. You've just, bizarrely, made the record wrong.
Could it be that, since the bounds of a mesh are used by various, indeed many, Unity systems (culling, etc.), the pattern is that by "forcing" the bounds like this (the bounds are now "totally wrong" for the object - they're just some value you forced in there) the programmer wanted, for some reason, Unity's systems (say, culling) to use those forced, nonsensical (to the actual object) values?
Wild guess, maybe there's a pattern (I have never heard of) where you "force" the bounds of object A to be the same as object B - for some reason I cannot guess at?
What could possibly be the pattern / reason here?
I would just assume it's a basic mistake, but assumptions kill.
I was actually curious about this myself, and I happened to have a program that explicitly generated large numbers of custom meshes, so I decided to test a few things.
The first thing I wanted to confirm was that the bounds were in fact set automatically. A rudimentary inspection revealed that this was indeed the case: specifically, a mesh's bounds are automatically recalculated any time you set the mesh.vertices property. This probably explains why the property is a fixed-length array rather than a list. (Fun side note: if you try to assign secondary properties like UV coords or normals to the mesh before assigning vertex positions, Unity gently nags you about mismatched array lengths before promptly exploding. So Don't Do That.)
As for what this actually impacts, I had a hunch: I set the extents of my mesh bounds to 0. Immediately, meshes at the corner of my viewport started getting visually culled. This tells us a few things:
Setting bounds explicitly does have an effect.
Unity does actually make use of custom bounds data.
Unity uses mesh bounds to perform frustum culling.
According to Unity's manual, there are three cases where the Bounds class is used: Mesh.bounds, Renderer.bounds, and Collider.bounds. Of those three, Mesh.bounds is the only property that isn't read-only.
As for the question of why anybody would want to set mesh bounds explicitly, it's not impossible that you could perform some clever culling optimization like looking at a complex mesh through a window or some such, but if I had to guess, whoever wrote that code didn't trust Unity to set mesh bounds accurately or explicitly.

OpenGL ES 2.0: Touch detection

Hi guys, I am doing some work on iOS and the work requires the use of OpenGL ES. So now I have a bunch of squares, cubes and triangles on the screen. Some of these geometries might overlap. Any ideas/approaches for touch detection?
Regards
To follow up on the answer already given, squares, cubes and triangles are convex shapes so you can perform ray-object intersection quite easily, even directly from the geometry rather than from the mathematical description of the perfect object.
You're going to need to be able to calculate the distance of a point from a plane and the intersection of a ray with a plane. As a simple test you can implement yourself very quickly: for each polygon on the convex shape, work out the intersection between the ray and the polygon's plane. Then check whether that point is behind all the planes defined by polygons that share an edge with the one you just tested. If so, the hit is on the surface of the object, though you should be careful about coplanar adjoining polygons and rounding errors.
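A rough sketch of that test in plain C++ (Vec3 and Plane are hypothetical helper types; the outward-facing planes are assumed to have been built from the shape's polygons beforehand):
#include <cmath>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 normal; float d; };   // plane: dot(normal, p) + d == 0, normal pointing outward

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  Scale(const Vec3& a, float s)     { return {a.x * s, a.y * s, a.z * s}; }

// Signed distance of a point from a plane: positive means "in front of" (outside) the plane.
static float DistanceToPlane(const Plane& pl, const Vec3& p) { return Dot(pl.normal, p) + pl.d; }

// Intersect a ray (origin + t*dir) with a plane; returns false if the ray is parallel or points away.
static bool RayPlane(const Vec3& origin, const Vec3& dir, const Plane& pl, Vec3& hit)
{
    const float denom = Dot(pl.normal, dir);
    if (std::fabs(denom) < 1e-6f) return false;               // parallel: no single hit point
    const float t = -(Dot(pl.normal, origin) + pl.d) / denom;
    if (t < 0.0f) return false;                               // intersection is behind the ray origin
    hit = Add(origin, Scale(dir, t));
    return true;
}

// For one face of a convex shape: hit the face's plane, then require the hit point to be
// behind (inside) every neighbouring face's plane, as described above.
bool RayHitsConvexFace(const Vec3& origin, const Vec3& dir,
                       const Plane& face, const Plane* neighbours, int neighbourCount, Vec3& hit)
{
    if (!RayPlane(origin, dir, face, hit)) return false;
    const float epsilon = 1e-4f;                              // guards against coplanar faces / rounding
    for (int i = 0; i < neighbourCount; ++i)
        if (DistanceToPlane(neighbours[i], hit) > epsilon) return false;
    return true;
}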
Once you've found a collision you can easily get the length of the ray to the point of collision. The object with the shortest distance is the one that's in front.
If that's fast enough then great, otherwise you'll probably want to look into partitioning the world or breaking objects down to their silhouettes. Convex objects are really simple: consider all the edges that run between one polygon and the next. If exactly one of those two polygons is front facing, then the edge is part of the silhouette. All the silhouette edges together can be projected to a convex 2D shape on the view plane. You can then test touches by performing a 2D point-in-polygon test against that.
A further common alternative that eliminates most of the maths is picking. You'd render the scene to an invisible buffer with each object appearing as a solid blob in a suitably unique colour. To test for touch, you'd just do a glReadPixels and inspect the colour.
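The read-back step of that picking approach might look roughly like this in GL ES 2.0 (a sketch only: it assumes the picking pass has already drawn each object as a flat, unique colour into the currently bound framebuffer, and that the id-in-RGB encoding shown is the one used when drawing):
#include <OpenGLES/ES2/gl.h>   // iOS header; adjust the include for other platforms

// Returns the object id encoded in the RGB written during the picking pass,
// or -1 if the background (black) was touched. Touch coordinates have their
// origin at the top-left, so the y axis is flipped for glReadPixels.
int PickObjectAt(int touchX, int touchY, int viewportHeight)
{
    GLubyte pixel[4] = {0, 0, 0, 0};
    glReadPixels(touchX, viewportHeight - touchY - 1, 1, 1,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    if (pixel[0] == 0 && pixel[1] == 0 && pixel[2] == 0)
        return -1;                                    // background
    // One simple encoding: id packed into R/G/B when the blob was drawn.
    return (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
}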
For the purposes of glu on the iPhone, you can grab SGI's implementation (as used by MESA). I've used its tessellator in a shipping, production project before.
I had that problem in the past. What I have used is an implementation of gluUnProject that you can find on Google (it uses the inverse of the modelview projection matrix and the viewport size). This allows you to map the 2D screen coordinates to a 3D vector into the world. Then you can use this vector to intersect with your objects and see which one intersects (or comes really close to doing so).
I do hope there are better ways of doing this, so I look forward to other answers as well!
Once you get the inverse modelview and cast your ray (vector), you still need to know if the ray intersects your geometry. One approach would be to grab the depth (z in the view coordinate system) of the object's center and extend (stretch) your vector just that far. Then see whether the vector's "head" ends within the volume of your object or not (you need the object's center and, e.g., its radius if it's a sphere).
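That last containment check is just a squared-distance comparison, sketched here with a hypothetical Vec3 type:
struct Vec3 { float x, y, z; };

// True if the stretched ray's head ends inside a sphere of the given radius centred
// on the object (compare squared distances to avoid a square root).
bool HeadInsideSphere(const Vec3& head, const Vec3& center, float radius)
{
    const float dx = head.x - center.x;
    const float dy = head.y - center.y;
    const float dz = head.z - center.z;
    return (dx*dx + dy*dy + dz*dz) <= radius * radius;
}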

Screen-to-world coordinate conversion in OpenGL ES: an easy task?

The Screen-to-world problem on the iPhone
I have a 3D model (CUBE) rendered in an EAGLView and I want to be able to detect when I am touching the center of a given face (from any orientation angle) of the cube. Sounds pretty easy, but it is not...
The problem:
How do I accurately relate screen-coordinates (touch point) to world-coordinates (a location in OpenGL 3D space)? Sure, converting a given point into a 'percentage' of the screen/world-axis might seem the logical fix, but problems would arise when I need to zoom or rotate the 3D space. Note: rotating & zooming in and out of the 3D space will change the relationship of the 2D screen coords with the 3D world coords...Also, you'd have to allow for 'distance' in between the viewpoint and objects in 3D space. At first, this might seem like an 'easy task', but that changes when you actually examine the requirements. And I've found no examples of people doing this on the iPhone. How is this normally done?
An 'easy' task?:
Sure, one might undertake the task of writing an API to act as a go-between between screen and world, but the task of creating such a framework would require some serious design and would likely take 'time' to do -- NOT something that can be one-manned in 4 hours...And 4 hours happens to be my deadline.
The question:
What are some of the simplest ways to know if I touched specific locations in 3D space in the iPhone OpenGL ES world?
You can now find gluUnProject in http://code.google.com/p/iphone-glu/. I've no association with the iphone-glu project and haven't tried it yet myself, just wanted to share the link.
How would you use such a function? This PDF mentions that:
The Utility Library routine gluUnProject() performs this reversal of the transformations. Given the three-dimensional window coordinates for a location and all the transformations that affected them, gluUnProject() returns the world coordinates from where it originated.
int gluUnProject(GLdouble winx, GLdouble winy, GLdouble winz,
const GLdouble modelMatrix[16], const GLdouble projMatrix[16],
const GLint viewport[4], GLdouble *objx, GLdouble *objy, GLdouble *objz);
Map the specified window coordinates (winx, winy, winz) into object coordinates, using transformations defined by a modelview matrix (modelMatrix), projection matrix (projMatrix), and viewport (viewport). The resulting object coordinates are returned in objx, objy, and objz. The function returns GL_TRUE, indicating success, or GL_FALSE, indicating failure (such as a noninvertible matrix). This operation does not attempt to clip the coordinates to the viewport or eliminate depth values that fall outside of glDepthRange().
There are inherent difficulties in trying to reverse the transformation process. A two-dimensional screen location could have originated from anywhere on an entire line in three-dimensional space. To disambiguate the result, gluUnProject() requires that a window depth coordinate (winz) be provided and that winz be specified in terms of glDepthRange(). For the default values of glDepthRange(), winz at 0.0 will request the world coordinates of the transformed point at the near clipping plane, while winz at 1.0 will request the point at the far clipping plane.
Example 3-8 (again, see the PDF) demonstrates gluUnProject() by reading the mouse position and determining the three-dimensional points at the near and far clipping planes from which it was transformed. The computed world coordinates are printed to standard output, but the rendered window itself is just black.
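gluUnProject itself isn't part of GL ES, but any port of it is called the same way; the sketch below uses GLM's glm::unProject purely as a stand-in. Unproject the touch point once with winz = 0 and once with winz = 1 to get the near- and far-plane points, and the segment between them is your pick ray.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::unProject

// Builds a world-space pick ray from a touch point. Assumes 'touch' has already been
// flipped so that y grows upwards, matching the GL viewport convention.
void TouchToRay(const glm::vec2& touch,
                const glm::mat4& modelView, const glm::mat4& projection,
                const glm::vec4& viewport,
                glm::vec3& rayStart, glm::vec3& rayEnd)
{
    // winz = 0 -> point on the near clipping plane, winz = 1 -> point on the far clipping plane.
    rayStart = glm::unProject(glm::vec3(touch, 0.0f), modelView, projection, viewport);
    rayEnd   = glm::unProject(glm::vec3(touch, 1.0f), modelView, projection, viewport);
}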
In terms of performance, I found this quickly via Google as an example of what you might not want to do using gluUnProject, with a link to what might lead to a better alternative. I have absolutely no idea how applicable it is to the iPhone, as I'm still a newb with OpenGL ES. Ask me again in a month. ;-)
You need to have the OpenGL projection and modelview matrices. Multiply them to obtain the modelview-projection matrix. Invert this matrix to get a matrix that transforms clip space coordinates into world coordinates. Transform your touch point so it corresponds to clip coordinates: the center of the screen should be zero, while the edges should be +1/-1 for X and Y respectively.
Construct two points, one at (0,0,0) and one at (touch_x, touch_y, -1), and transform both by the inverse modelview-projection matrix.
Do the inverse of a perspective divide.
You should get two points describing a line from the center of the camera into "the far distance" (the farplane).
Do picking based on simplified bounding boxes of your models. You should be able to find ray/box intersection algorithms aplenty on the web.
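For reference, the usual "slab" ray/box test is short enough to sketch here (plain C++ with GLM vectors; division by zero when the ray is parallel to a slab produces infinities that the min/max handling tolerates, as long as the origin doesn't sit exactly on that slab's boundary):
#include <algorithm>
#include <limits>
#include <glm/glm.hpp>

// Slab method: intersect the ray with each pair of axis-aligned planes and keep the
// overlapping parameter interval. Returns true if the ray (origin + t*dir, t >= 0) hits the box.
bool RayIntersectsAABB(const glm::vec3& origin, const glm::vec3& dir,
                       const glm::vec3& boxMin, const glm::vec3& boxMax)
{
    float tMin = 0.0f;
    float tMax = std::numeric_limits<float>::max();
    for (int axis = 0; axis < 3; ++axis)
    {
        const float invD = 1.0f / dir[axis];   // +/-inf when the ray is parallel to this slab
        float t0 = (boxMin[axis] - origin[axis]) * invD;
        float t1 = (boxMax[axis] - origin[axis]) * invD;
        if (invD < 0.0f) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMax < tMin) return false;          // the slab intervals no longer overlap
    }
    return true;
}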
Another solution is to paint each of the models in a slightly different color into an offscreen buffer and read the color at the touch point from there, telling you which object was touched.
Here's source for a cursor I wrote for a little project using bullet physics:
float x = ((float)mpos.x / screensize.x) * 2.0f - 1.0f;
float y = ((float)mpos.y / screensize.y) * -2.0f + 1.0f;
p2 = renderer->camera.unProject(vec4(x, y, 1.0f, 1));
p2 /= p2.w;
vec4 pos = activecam.GetView().col_t;
p1 = pos + (((vec3)p2 - (vec3)pos) / 2048.0f * 0.1f);
p1.w = 1.0f;
btCollisionWorld::ClosestRayResultCallback rayCallback(btVector3(p1.x, p1.y, p1.z), btVector3(p2.x, p2.y, p2.z));
game.dynamicsWorld->rayTest(btVector3(p1.x, p1.y, p1.z), btVector3(p2.x, p2.y, p2.z), rayCallback);
if (rayCallback.hasHit())
{
    btRigidBody* body = btRigidBody::upcast(rayCallback.m_collisionObject);
    if (body == game.worldBody)
    {
        renderer->setHighlight(0);
    }
    else if (body)
    {
        Entity* ent = (Entity*)body->getUserPointer();
        if (ent)
        {
            renderer->setHighlight(dynamic_cast<ModelEntity*>(ent));
            //cerr<<"hit ";
            //cerr<<ent->getName()<<endl;
        }
    }
}
Imagine a line that extends from the viewer's eye through the screen touch point into your 3D model space. If that line intersects any of the cube's faces, then the user has touched the cube.
Two solutions present themselves. Both of them should achieve the end goal, albeit by a different means: rather than answering "what world coordinate is under the mouse?", they answer the question "what object is rendered under the mouse?".
One is to draw a simplified version of your model to an off-screen buffer, rendering the center of each face using a distinct color (and adjusting the lighting so color is preserved identically). You can then detect those colors in the buffer (e.g. pixmap), and map mouse locations to them.
The other is to use OpenGL picking. There's a decent-looking tutorial here. The basic idea is to put OpenGL in select mode, restrict the viewport to a small (perhaps 3x3 or 5x5) window around the point of interest, and then render the scene (or a simplified version of it) using OpenGL "names" (integer identifiers) to identify the components making up each face. At the end of this process, OpenGL can give you a list of the names that were rendered in the selection viewport. Mapping these identifiers back to original objects will let you determine what object is under the mouse cursor.
Google for "opengl screen to world" (for example, there's a thread on GameDev.net where somebody wants to do exactly what you are looking for). There is a gluUnProject function that does precisely this, but it's not available on iPhone, so you have to port it (see this source from the Mesa project). Or maybe there's already some publicly available source somewhere?