How to get the absolute position of a child entity in Bevy?

I am using the Bevy game engine.
The ability to have transforms be propagated to children in Bevy is handy, but when I am performing collision checks in my game, I have been using the object's Translation to compute its location. Now that I have some parent-child hierarchies in my scene, the Translation of each child entity is relative to its parent.
Is there a way to get the position of an entity relative to the world origin as opposed to the entity's parent?

The world-space position is stored in the GlobalTransform component, which Bevy keeps up to date by propagating transforms down the parent-child hierarchy. Internally it holds a 4x4 matrix, and its translation() function returns the translation part of that matrix, i.e. the position relative to the world origin. You can access it like this:
fn system(global_transform: &GlobalTransform) {
    // world-space position, regardless of any parent hierarchy
    let position = global_transform.translation();
}
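The propagation that produces GlobalTransform can be illustrated outside of Bevy. Below is a minimal sketch in plain Rust, ignoring rotation and scale for brevity; the `Translation` struct and `global_translation` function are illustrative names, not Bevy API:

```rust
// Minimal sketch of what transform propagation computes for a hierarchy,
// ignoring rotation and scale for brevity. `Translation` and
// `global_translation` are illustrative names, not Bevy API.

#[derive(Clone, Copy)]
struct Translation {
    x: f32,
    y: f32,
    z: f32,
}

// A node's world-space position is its local translation offset by every
// ancestor's translation, accumulated from the root down.
fn global_translation(chain: &[Translation]) -> Translation {
    chain.iter().fold(
        Translation { x: 0.0, y: 0.0, z: 0.0 },
        |acc, t| Translation {
            x: acc.x + t.x,
            y: acc.y + t.y,
            z: acc.z + t.z,
        },
    )
}

fn main() {
    // parent at (10, 0, 0); child at (1, 2, 3) relative to the parent
    let chain = [
        Translation { x: 10.0, y: 0.0, z: 0.0 },
        Translation { x: 1.0, y: 2.0, z: 3.0 },
    ];
    let world = global_translation(&chain);
    // the child's world-space position is (11, 2, 3)
    assert_eq!((world.x, world.y, world.z), (11.0, 2.0, 3.0));
}
```

This is the same accumulation the engine performs each frame before collision checks read GlobalTransform.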

Related

Positions of objects in the scene

My hierarchy of game objects is as follows.
Display(Scene)
Model(-4.708, 1.55, 14.4277)
Pass(4.7080, -1.5, -14.42)
handle(-0.0236,0.65690,0.149)
shaft(5.34,-1.0225,-0.1489)
head(-7.0912,-9.62,-0.5231)
ball(0,0,0)
We can see Model and its coordinates in the image. Ball has position (0,0,0), but why is it located at the base of the Model?
How can I place the ball just beside the head?
It sounds like the origin point of one or more of the models is off.
You can adjust this in 3D modelling software, or by creating an empty GameObject inside Unity and making your object a child of it. Using the empty GameObject as the new origin point, you can then adjust the positions of the child objects you assign to it.

mathematical movable mesh in swift with SceneKit

I am a mathematician who wants to program a geometric game.
I have the exact coordinates, and math formulae, of a few meshes I need to display and of their unit normals.
I need only one texture (colored reflective metal) per mesh.
I need to have the user move pieces, i.e. change the coordinates of a mesh, again by a simple math formula.
So I don't need to import 3D files, but rather I can compute everything.
Imagine a kind of Rubik cube. Cube coordinates are computed, and cubelets are rotated by the user. I have the program functioning in Mathematica.
I am having a very hard time, for sleepless days now, trying to find exactly how to display a computed mesh in SceneKit - with each vertex and normal animated separately.
ANY working example of, say, a single triangle with computed coordinates (rather than a stock provided shape), displayed with animatable coordinates by SceneKit would be EXTREMELY appreciated.
I looked more, and it seems that individual points of a mesh may not be movable in SceneKit. I like from SceneKit (unlike OpenGL) the feature that one can get the objects under the user's finger. Can one mix together OpenGL and SceneKit in a project?
I could take over from there....
Animating vertex positions individually is, in general, a tricky problem. But there are good ways to approach it in SceneKit.
A GPU really wants to have vertex data all uploaded in one chunk before it starts rendering a frame. That means that if you're continually calculating new vertex positions/normals/etc on the CPU, you have the problem of schlepping all that data over to the GPU every time even just part of it changes.
Because you're already describing your surface mathematically, you're in a good position to do that work on the GPU itself. If each vertex position is a function of some variable, you can write that function in a shader, and find a way to pass the input variable per vertex.
There are a couple of options you could look at for this:
Shader modifiers. Start with a dummy geometry that has the topology you need (number of vertices & how they're connected as polygons). Pass your input variable as an extra texture, and in your shader modifier code (for the geometry entry point), look up the texture, evaluate your function, and set the vertex position to the result.
Metal compute shaders. Create a geometry source backed by a Metal buffer, then at render time, enqueue a compute shader that writes vertex data to that buffer according to your function. (There's skeletal code for part of that at the link.)
Update: From your comments it sounds like you might be in an easier position.
If what you have is geometry composed of pieces that are static with respect to themselves and move with respect to each other — like the cubelets of a Rubik's cube — computing vertices at render time is overkill. Instead, you can upload the static parts of your geometry to the GPU once, and use transforms to position them relative to each other.
The way to do this in SceneKit is to create separate nodes, each with its own (static) geometry for each piece, then set node transforms (or positions/orientations/scales) to move the nodes relative to one another. To move several nodes at once, use node hierarchy — make several of them children of another node. If some need to move together at one moment, and a different subset need to move together later, you can change the hierarchy.
Here's a concrete example of the Rubik's cube idea. First, creating some cubelets:
// convenience for creating solid color materials
func materialWithColor(color: NSColor) -> SCNMaterial {
    let mat = SCNMaterial()
    mat.diffuse.contents = color
    mat.specular.contents = NSColor.whiteColor()
    return mat
}
// create and arrange a 3x3x3 array of cubelets
var cubelets: [SCNNode] = []
for x in -1...1 {
    for y in -1...1 {
        for z in -1...1 {
            let box = SCNBox()
            box.chamferRadius = 0.1
            box.materials = [
                materialWithColor(NSColor.greenColor()),
                materialWithColor(NSColor.redColor()),
                materialWithColor(NSColor.blueColor()),
                materialWithColor(NSColor.orangeColor()),
                materialWithColor(NSColor.whiteColor()),
                materialWithColor(NSColor.yellowColor()),
            ]
            let node = SCNNode(geometry: box)
            node.position = SCNVector3(x: CGFloat(x), y: CGFloat(y), z: CGFloat(z))
            scene.rootNode.addChildNode(node)
            cubelets += [node]
        }
    }
}
Next, the process of doing a rotation. This is one specific rotation, but you could generalize this to a function that does any transform of any subset of the cubelets:
// create a temporary node for the rotation
let rotateNode = SCNNode()
scene.rootNode.addChildNode(rotateNode)

// grab the set of cubelets whose position is along the right face
// of the puzzle, and add them to the rotation node
let rightCubelets = cubelets.filter { node in
    return abs(node.position.x - 1) < 0.001
}
for cubelet in rightCubelets {
    rotateNode.addChildNode(cubelet)
}

// animate a rotation
SCNTransaction.begin()
SCNTransaction.setAnimationDuration(2)
rotateNode.eulerAngles.x += CGFloat(M_PI_2)
SCNTransaction.setCompletionBlock {
    // after animating, remove the cubelets from the rotation node,
    // and re-add them to the root node with their transforms altered
    rotateNode.enumerateChildNodesUsingBlock { cubelet, _ in
        cubelet.transform = cubelet.worldTransform
        cubelet.removeFromParentNode()
        scene.rootNode.addChildNode(cubelet)
    }
    rotateNode.removeFromParentNode()
}
SCNTransaction.commit()
The magic part is in the cleanup after the animation. The cubelets start out as children of the scene's root node, and we temporarily re-parent them to another node so we can transform them together. Upon returning them to be the root node's children again, we set each one's local transform to its worldTransform, so that it keeps the effect of the temporary node's transform changes.
You can then repeat this process to grab whatever set of nodes are in a (new) set of world space positions and use another temporary node to transform those.
I'm not sure quite how Rubik's-cube-like your problem is, but it sounds like you can probably generalize a solution from something like this.
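The worldTransform trick above boils down to a change of basis: a node's new local transform is its world transform expressed in the new parent's frame. Here is a minimal sketch of that math in Rust, using translations only for brevity (a full version would use 4x4 matrices so rotation and scale carry over too; all names are illustrative):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct Vec3 {
    x: f32,
    y: f32,
    z: f32,
}

// world = parent_world + local, therefore local = world - parent_world
fn local_for_new_parent(world: Vec3, new_parent_world: Vec3) -> Vec3 {
    Vec3 {
        x: world.x - new_parent_world.x,
        y: world.y - new_parent_world.y,
        z: world.z - new_parent_world.z,
    }
}

fn main() {
    let cubelet_world = Vec3 { x: 1.0, y: 1.0, z: 0.0 };

    // Re-parenting to the root (at the world origin) leaves the local
    // transform equal to the world transform, which is exactly the
    // `cubelet.transform = cubelet.worldTransform` step above.
    let root = Vec3 { x: 0.0, y: 0.0, z: 0.0 };
    assert_eq!(local_for_new_parent(cubelet_world, root), cubelet_world);

    // Re-parenting to a pivot node at (1, 0, 0) instead shifts the
    // local position while the cubelet stays put in world space.
    let pivot = Vec3 { x: 1.0, y: 0.0, z: 0.0 };
    let local = local_for_new_parent(cubelet_world, pivot);
    assert_eq!(local, Vec3 { x: 0.0, y: 1.0, z: 0.0 });
}
```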

Unity3D - What does child object inherit from its parent?

What does child object inherit from its parent in Unity except transform (when parent object is moved, then children are being moved too)?
Unity's "inheritance" here isn't like OOP inheritance. It's not a base class providing virtual members that a child overrides.
In Unity a child object inherits only the Transform, and even then it doesn't really inherit it: the parent's Transform simply becomes the frame of reference for the child's own Transform component, so modifications to the child's transform are relative to the parent. Since the Transform is the only component every GameObject must have, that's essentially all a child object can "inherit".
The child does not inherit the parent's transform directly; rather, the child's transform becomes relative to the parent's. Taking position as an example: a GameObject without a parent is positioned in world coordinates, whereas a GameObject with a parent is positioned relative to its parent. You can get the relative position using localPosition, which equals Transform.position when the GameObject has no parent.
As a more concrete example (I'll use 2D coordinates for simplicity):
Say we have a GameObject (A) at world position (0,0) without any parents. Its Transform.position will be (0,0) and Transform.localPosition will also be (0,0).
If we add another GameObject (B), make it a child of A and set its world position to be (1,0) then its Transform.position and Transform.localPosition will both be (1,0).
Now if we were to move GameObject A to (2,0), we would see that B would move to (3,0) in world space, but its Transform.localPosition would still be (1,0) as this is relative to the position of A.
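The arithmetic in that example is just vector addition. A quick sketch (illustrative code, not Unity API):

```rust
// A child's world position is the parent's world position plus the
// child's localPosition (ignoring rotation and scale for simplicity).
fn world_position(parent_world: (f32, f32), local: (f32, f32)) -> (f32, f32) {
    (parent_world.0 + local.0, parent_world.1 + local.1)
}

fn main() {
    // A at (0,0); B is a child of A with localPosition (1,0)
    assert_eq!(world_position((0.0, 0.0), (1.0, 0.0)), (1.0, 0.0));

    // move A to (2,0): B's world position becomes (3,0),
    // while its localPosition is still (1,0)
    assert_eq!(world_position((2.0, 0.0), (1.0, 0.0)), (3.0, 0.0));
}
```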
The child won't inherit anything else from the parent, though the relationship can be used in code to obtain references to each other via Transform.parent and Transform.GetChild.

Why in Unity3d Transform is a separate entity from GameObject?

Every GameObject has exactly one Transform. Every Transform can only be created attached to a GameObject. The hierarchy is realized through Transform, yet GameObject relies heavily on it.
Is there a reason about why they are different entities and not the same thing?
Components and GameObjects are both derived from a base class called Object. Components can only exist attached to GameObjects.
A Transform is a type of Component. All GameObjects must have a Transform component attached to them. The reason being:
Every object in a scene has a Transform. It's used to store and manipulate the position, rotation and scale of the object. Every Transform can have a parent, which allows you to apply position, rotation and scale hierarchically. This is the hierarchy seen in the Hierarchy pane.
If you're asking why such an architecture is in place, you'll find this sort of design question debated in more than a few articles and books. I once asked it myself, and I found the following Stack Overflow answer to cover it best HERE.

how to get an object's position in another object's coordinate system THREE.js

I have created a THREE.Scene, and within the scene there is a THREE.Object3D that acts as a new 'coordinate system'.
Inside this object there is a particle with a certain position.
I understand that transforming this particle's position from the object's 'coordinate system' into the scene's 'coordinate system' requires the following:
//Gives particle position in scene coordinates
particle.position.applyMatrix4(Object.matrixWorld)
What would be the inverse transformation though?
(i.e., the particle is in the scene's 'coordinate system' and I want to find its position in the object's 'coordinate system')
The inverse transformation of the transform you referred to can be calculated like so:
var mInverse = new THREE.Matrix4().getInverse( object.matrixWorld );
particle.position.applyMatrix4( mInverse );
three.js r.55
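For a rigid transform (rotation plus translation), the inverse that getInverse() computes has a closed form: if world = R * local + t, then local = R^T * (world - t). A self-contained sketch of that round trip in Rust (illustrative names, not three.js API):

```rust
// Round trip between an object's local frame and world space for a
// rigid transform: world = R * local + t, local = R^T * (world - t).

type Mat3 = [[f32; 3]; 3];
type Vec3 = [f32; 3];

// multiply a 3x3 matrix by a column vector
fn mul(m: &Mat3, v: Vec3) -> Vec3 {
    [
        m[0][0] * v[0] + m[0][1] * v[1] + m[0][2] * v[2],
        m[1][0] * v[0] + m[1][1] * v[1] + m[1][2] * v[2],
        m[2][0] * v[0] + m[2][1] * v[1] + m[2][2] * v[2],
    ]
}

// for a rotation matrix, the transpose is the inverse
fn transpose(m: &Mat3) -> Mat3 {
    [
        [m[0][0], m[1][0], m[2][0]],
        [m[0][1], m[1][1], m[2][1]],
        [m[0][2], m[1][2], m[2][2]],
    ]
}

// local -> world: what applyMatrix4(object.matrixWorld) does
fn to_world(r: &Mat3, t: Vec3, local: Vec3) -> Vec3 {
    let rv = mul(r, local);
    [rv[0] + t[0], rv[1] + t[1], rv[2] + t[2]]
}

// world -> local: what applying the inverse matrix does
fn to_local(r: &Mat3, t: Vec3, world: Vec3) -> Vec3 {
    mul(&transpose(r), [world[0] - t[0], world[1] - t[1], world[2] - t[2]])
}

fn main() {
    // 90-degree rotation about Z and translation (1, 2, 3),
    // an arbitrary example transform
    let r: Mat3 = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]];
    let t: Vec3 = [1.0, 2.0, 3.0];

    let p_local: Vec3 = [1.0, 0.0, 0.0];
    let p_world = to_world(&r, t, p_local);
    assert_eq!(p_world, [1.0, 3.0, 3.0]);

    // the inverse transform recovers the original local position
    assert_eq!(to_local(&r, t, p_world), p_local);
}
```

Note that this closed form only holds for rotation + translation; for transforms that also scale or shear, you need a general 4x4 matrix inverse, which is what getInverse() provides.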