Weird output debugging the forward of an object with InverseTransformDirection - unity3d

Sorry for my bad English! I will try to explain the situation. I'm just playing with this function because I want to understand how it works. The concept is really clear, but when I debug the forward of the object I'm inspecting I get really weird output. For example, I'm using InverseTransformDirection(Vector3.forward) to see what vector I get from an object that is the child of another object. They are both perfectly aligned on their own axes. If I rotate the parent so that its forward points at (0, 0, 1), the child object I'm debugging has the same Z axis value, as it should, because they are aligned. But if I rotate the parent so that its forward points along (1, 0, 0) in world coordinates, I get the coordinates inverted: (-1, 0, 0). Why? Parent and child are pointing their own forward in exactly the same direction. Could you help me understand? Thanks!

InverseTransformDirection(Vector3.forward)
wouldn't care about the parent at all. It simply converts the world space Z axis direction (= (0, 0, 1)) into the local space of the object it is called on.
I'd say it does exactly what you would expect.
Rotating your object so that its own forward vector points towards (1, 0, 0) (= Vector3.right = world X axis) basically means the world space forward vector (0, 0, 1) (= Vector3.forward) now points towards your object's left.
Which is what (-1, 0, 0) would mean in its local space.
               Vector3.forward  (= -transform.right)
               (world Z axis)      (local negative X axis)
                    ^
                    |
                    |
rotated object -----> Vector3.right  (= transform.forward)
                      (world X axis)    (local Z axis)
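For example, a minimal test component (the class name InverseDirectionDebug is just for illustration) reproduces exactly that:

using UnityEngine;

public class InverseDirectionDebug : MonoBehaviour
{
    void Start()
    {
        // Rotate the object so its forward points along the world X axis.
        transform.rotation = Quaternion.LookRotation(Vector3.right);

        // Express the world forward direction (0, 0, 1) in this object's local space.
        Vector3 local = transform.InverseTransformDirection(Vector3.forward);

        // Prints roughly (-1.0, 0.0, 0.0): world forward now points to the object's local left.
        Debug.Log(local);
    }
}

Note that the parent never appears here; the result only depends on the rotation of the transform you call InverseTransformDirection on.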

Related

How to drag SCNNode along specific axis after rotation?

I am currently working on Swift ARKit project.
I am trying to figure out how I can drag a node object along a specific axis even after some rotations. For example, I want to move the node along its Y axis after rotating it, but its axis directions stay the same, so even if I change the Y position it still moves along the world Y. SCNNode.localUp is static and returns SCNVector3(0, 1, 0), and as far as I can see there is no function for a node's local up. If I remember correctly, in Unity it was enough to increase the local axis value to drag after rotating.
Node object before rotation
Before applying any rotations, all you need to do to drag the object is increase or decrease the value on the specific axis.
Node object after rotation
After rotating, the green Y axis rotates too, but when I increase or decrease the local Y value the object still moves along the world Y axis.
Sorry for my bad English. Thanks for your help.
Out of curiosity, how are you currently applying the rotation?
A straightforward way to achieve this without needing to dig into quaternion math would be to wrap your node in question inside a parent node, and apply those transformations separately. You can apply the rotation to the parent node, and then the drag motion along the axis to the child node.
If introducing this layer would be problematic outside of this operation, you can add/rotate/translate/remove as a single atomic operation, using node.convertPosition(_:to:) to interchange between local and world coordinates after you've applied all the transformations.
let parent = SCNNode()
rootNode.addChildNode(parent)
// Move the helper parent to the node's position, then reparent the
// node under it so the node's local position becomes zero.
parent.simdPosition = node.simdPosition
parent.addChildNode(node)
node.simdPosition = .zero
// Rotate the parent, then drag the child along its (now rotated) local Y axis.
parent.simdRotation = /* ... */
node.simdPosition = simd_float3(0, localYAxisShift, 0)
// Bake the result back into rootNode's space and remove the helper parent.
node.simdPosition = rootNode.simdConvertPosition(node.simdPosition, from: parent)
rootNode.addChildNode(node)
rootNode.removeChildNode(parent)
I didn't test the above code, but the general approach should work. In general, compound motion as you describe is a bit more complex to do directly on the node itself, and under the hood SceneKit is doing all of that for you when using the above approach.
Edit
Here's a version that just does the matrix transform directly rather than relying on the built-in accommodations.
let currentTransform = node.transform
let yShift = SCNMatrix4MakeTranslation(0, localYAxisShift, 0)
node.transform = SCNMatrix4Mult(yShift, currentTransform)
This should shift your object along the 'local' y axis. Note that matrix multiplication is non-commutative, i.e. the order of parameters in the SCNMatrix4Mult call is important (try reversing them to illustrate).
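For comparison (an untested sketch, following the same logic), reversing the arguments applies the shift in the parent's coordinate space instead of the node's local space:

// shifts along the parent's Y axis instead of the node's local Y axis
node.transform = SCNMatrix4Mult(currentTransform, yShift)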

Get x and y 'coordinates' from object speed and direction

I have a player object that controls like the ship in Asteroids, using speed and direction. This object is fixed in the middle of the screen, but can rotate. Movement of this object is a visual illusion as other objects move past it.
I need to get x and y coordinates of this player object, from an origin of (0, 0) at room start. x and y do not provide this info as the object does not move. Does anyone know how I can get 'fake coordinates', based on the speed and direction?
One thing to make sure of is that you're not just reading x and y on their own, as that will get the current object's x and y position. Instead, make sure to reference the object you're trying to get them from. For example:
var objectX = myShip.x;
var objectY = myShip.y;
show_debug_message("x: " + string(objectX));
show_debug_message("y: " + string(objectY));
I think you are thinking about it wrong. You do not need "fake coordinates". Real coordinates are fine. Give the ship and asteroids/enemies whatever coordinates and velocity vectors you want; randomly generate them if the game is like Asteroids.
The coordinates do not have to be fake; it is just that when you render in your game loop, you render a particular frame of reference. If the origin is the center of the screen, when you paint an object at (x,y) paint it as though it were at (x - ship_x, y - ship_y) -- including the ship, which will be at (0,0). If you wanted to make rotation relative to the ship too, you could do the same thing with rotation.
Now, you have your question tagged as game-maker. I have no idea if GameMaker lets you control how sprites are painted like this. If not, then you need to maintain the real coordinates as separate properties of objects and let the official (x, y) coordinates be relative to the ship. The trouble with this is that you will have to update all of the objects every time the ship moves. But like I said, I don't know how GameMaker works -- if it is a problem, maybe ask a question more specific to GameMaker.
You'll need to think about what you would normally use to move the ship around, and then apply that code to different variables.
Normally you would update x or y to move the ship, but since you're not going to do that, simply use custom variables that replace the x and y values (like posx and posy), and use them in the code that would otherwise move the ship around.
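As a rough sketch (the variable names posx, posy, spd and dir are just examples; spd and dir hold the ship's current speed and direction so GameMaker doesn't move the instance on its own):

// Create event: the 'virtual' position, starting at the room-start origin
posx = 0;
posy = 0;

// Step event: apply the ship's speed and direction to the virtual
// position instead of to x and y
posx += lengthdir_x(spd, dir);
posy += lengthdir_y(spd, dir);

lengthdir_x and lengthdir_y just project the distance travelled this step onto the x and y axes, so posx and posy behave exactly like the coordinates the ship would have if it actually moved.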

How to check angle between raycast and surface?

I know how to check the angle between a raycast and a surface (Vector3.Angle(hit.normal, -transform.forward)), but my problem is that it always returns the smaller angle (I know it's mathematically correct, it's just not what I want). I'm using it to determine which side my car should turn to. It's hard to explain, so I will use a picture:
These two raycasts would both return the same angle, and I want one to be 45° and the other -45°. Actually I don't need the exact value; 1 and -1 would be good as well (I only need to know the side, not the exact angle).
I am not entirely sure if this will help you with your problem, but looking at your example it might work for you:
When you look at the modified picture, I have added the vector U and have named the red vectors, where L (and R) are pointing from the wall (the hit point, indicated by the vertical lines) towards the rectangle. The vector U is a fixed direction vector associated with your wall (so it does not change at runtime).
Now taking L and U (or R and U) you can determine whether the ray origin lies behind or in front of the hit point by using the dot product.
In this example, that yields:
Vector3.Dot(L, U) < 0
Vector3.Dot(R, U) > 0
You could then use this information to determine a sign for your angles.
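For illustration, here's a rough Unity sketch of that idea (the class name SteerSide is made up, and using hit.collider.transform.right for U is an assumption; any fixed direction along the wall works):

using UnityEngine;

public class SteerSide : MonoBehaviour
{
    void FixedUpdate()
    {
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit))
        {
            // L (or R): pointing from the hit point back towards the ray origin.
            Vector3 toOrigin = transform.position - hit.point;
            // U: a fixed direction along the wall (here assumed to be the
            // wall collider's local right axis).
            Vector3 wallRight = hit.collider.transform.right;

            float angle = Vector3.Angle(hit.normal, -transform.forward);
            float side = Mathf.Sign(Vector3.Dot(toOrigin, wallRight)); // +1 or -1
            Debug.Log(side * angle); // e.g. 45 on one side, -45 on the other
        }
    }
}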

How do convertToWorldSpace / convertToNodeSpace work?

I have been working on a game for a little while now and I have used convertToWorldSpace by trial & error. It always worked out, but to be honest I have no idea what I am doing. I really hate that I do not understand what I am doing, but unfortunately the internet does not give much information on this problem. So I hope somebody can explain why, for example, I have to call convertToWorldSpace on the node's parent, and when I should use convertToNodeSpace. I have absolutely no clue. Can somebody explain in general what they do and maybe give an example? I would really appreciate it! :-)
simply put...
the world space is the coordinate grid of the screen.
nodespace is the coordinate grid of the layer.
cocos2d and most other game frameworks consist of multiple layers within one scene, and you would most likely have a bunch of other little nodes/sprites. when you are calculating the coordinates for each sprite in your layer you would be using node space, but when you get touch locations they are in world space, so you would need to convert them using convertToNodeSpace etc.
hope this helps!
---- edit ----
maybe an example will help...
[somenode convertToWorldSpace:ccp(x, y)];
the above code will convert the coordinates (x, y) from the node's space to coordinates on the screen. so for example if (x, y) is (0, 0), that position is the bottom left corner of the node, but not necessarily of the screen. this will convert the node's (0, 0) to its position on the screen.
[somenode convertToNodeSpace:ccp(x, y)];
the above code will do the opposite. it will convert the coordinates on the screen (x, y) to what it would be on somenode.
so it comes in handy when you have something you want to move (or get the position of, or whatever) that's a child of some other node or layer, since most of the time you want to move it relative to the screen rather than within that layer.
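for example, here's a rough sketch (cocos2d-iphone, assuming a CCLayer with touches enabled, with somenode and somesprite as placeholders) of converting a touch into a node's local space:

- (void)ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    // touch location in world space (GL screen coordinates)
    CGPoint worldPoint = [[CCDirector sharedDirector] convertToGL:[touch locationInView:[touch view]]];
    // the same point expressed in somenode's coordinate grid
    CGPoint nodePoint = [somenode convertToNodeSpace:worldPoint];
    somesprite.position = nodePoint;
}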

iPhone OpenGL: Finding out an object's position in 3D space

Right, imagine you're at (0, 1, 0), or a cube is. You then rotate 45° (glRotatef), then you move that object forward (glTranslatef(0, 0, 10)), so you move that object 10 units forward (I have a camera using glLookAt).
How do you then get that objects position in the 3D space (not screen position)?
Is it something to do with:
float modelViewMatrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelViewMatrix);
If I could find that object's position I could use it as, say, a bullet and then know very easily if it hits another object (I don't use the Y axis).
Just multiply your vertices ([x, y, z, 1.0]) by your modelview matrix (modelViewMatrix).
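As a rough sketch of what that looks like in code (the function name objectPosition is just for illustration; note glGetFloatv returns the matrix in column-major order):

#include <OpenGLES/ES1/gl.h>   /* or the equivalent GL header on your platform */

/* Transforms the object's local origin (0, 0, 0, 1) by the current
 * modelview matrix; for the origin the result is simply the matrix's
 * translation column. */
void objectPosition(float *outX, float *outY, float *outZ)
{
    float m[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, m);

    float x = 0.0f, y = 0.0f, z = 0.0f, w = 1.0f;
    *outX = m[0] * x + m[4] * y + m[8]  * z + m[12] * w;
    *outY = m[1] * x + m[5] * y + m[9]  * z + m[13] * w;
    *outZ = m[2] * x + m[6] * y + m[10] * z + m[14] * w;
}

Keep in mind that if the camera (your glLookAt) has already been applied to the modelview matrix, the result is in eye space rather than world space, so if you want world coordinates you need to factor the camera back out (or track the object's model transform separately from the camera).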