Right, imagine you're at (0, 1, 0), or rather a cube is. You then rotate it 45 degrees (glRotatef), then you translate it forward (glTranslatef(0, 0, 10)), so the object moves 10 units forward (I have a camera set up with gluLookAt).
How do you then get that object's position in 3D space (not its screen position)?
Is it something to do with:
float modelViewMatrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelViewMatrix);
If I could find that object's position I could use it as, say, a bullet, and then know very easily whether it hits another object (I don't use the Y axis).
Just multiply your vertices ([x, y, z, 1.0]) by your modelview matrix (modelViewMatrix).
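A minimal sketch of that multiply in C, assuming the matrix was fetched with glGetFloatv exactly as above (the helper names are just placeholders). Note that everything currently on the modelview stack gets applied, including the gluLookAt view transform, so read the matrix at the point in your code where only the transforms you care about have been issued:

#include <GL/gl.h>

/* Multiply a point (x, y, z, 1) by a 4x4 matrix stored the way OpenGL
   returns it: column-major, so element (row r, column c) lives at m[c*4 + r]. */
static void transformPoint(const float m[16], const float in[4], float out[4])
{
    for (int r = 0; r < 4; ++r) {
        out[r] = m[0*4 + r] * in[0] + m[1*4 + r] * in[1]
               + m[2*4 + r] * in[2] + m[3*4 + r] * in[3];
    }
}

/* Call this right after issuing the object's glRotatef/glTranslatef calls. */
static void getTransformedPosition(float x, float y, float z, float out[4])
{
    float modelViewMatrix[16];
    float point[4] = { x, y, z, 1.0f };   /* w = 1 for a position */

    glGetFloatv(GL_MODELVIEW_MATRIX, modelViewMatrix);
    transformPoint(modelViewMatrix, point, out);
}

With the Y axis ignored, out[0] and out[2] give you the X and Z values you can test your bullet against.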
I have a cube in Unity. I'm moving and rotating the cube and I want to track the rotation around one specific axis.
The application is a HoloLens 2 application, and I'm grabbing an object with my hand. I then want to rotate the object in my hand around one of its local axes. I need the rotation around that axis, as a float, so I can rotate another object around its own local axis by the same amount.
Any idea how to achieve this?
I'm new to 3D design, so this might be a silly question.
I'm having trouble simulating wheel camber using quaternion rotation.
What I understand is that the WheelCollider's rotation occurs around the X axis, whereas in my case the "visual wheel" is rotated 80° around the Z axis, so its rotation occurs around the Y axis.
The code below works well for the FL wheel, because its axis is aligned with the collider's axis, but it doesn't work for the FR wheel.
collider.GetWorldPose(out Vector3 position, out Quaternion rotation);
visual.transform.position = position;
// initialVisualRotation is the quaternion rotation of the visual at startup
// in this case: X=0 Y=0 Z=80
visual.transform.rotation = initialVisualRotation * rotation;
These are my local and global rotation axes for the right front wheel (FR).
How can I "adjust" the wheel rotation without change initial axes (as I did for the FL wheel)?
Are there some quaternion operations to adopt in these cases?
A bit of a workaround for not having to deal with messed-up transforms is to make the object a child of an empty parent object. That way you can rotate the parent to tilt the wheel, while the child (the actual wheel) keeps its original axes untouched, so a simple script can keep spinning it around the same local axis even while the whole hierarchy is tilted.
The parent is responsible for rotation around one axis and the child for rotation around another, as in the sketch below.
I think you need to take a step back and arrange your prefabs this way, as it seems you did not.
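A minimal sketch of that setup in C#, assuming a hierarchy like WheelFR (empty parent that carries the 80° camber tilt) -> WheelFR_Visual (the mesh). The class and field names are placeholders, and here the child is spun from the collider's rpm instead of from GetWorldPose's rotation:

using UnityEngine;

// The empty parent holds the tilt, and the child mesh only ever spins around
// its own local X axis, so the two rotations never get mixed up.
public class WheelVisualSync : MonoBehaviour
{
    public WheelCollider wheelCollider;   // the FR wheel collider
    public Transform visualParent;        // empty parent, pre-rotated for camber
    public Transform visualWheel;         // child mesh, spins around its local X

    private float spinAngle;              // accumulated spin in degrees

    void Update()
    {
        // Follow the collider's position with the parent only.
        wheelCollider.GetWorldPose(out Vector3 position, out Quaternion _);
        visualParent.position = position;

        // Spin the child around its own local axis; rpm * 6 = degrees per second.
        spinAngle += wheelCollider.rpm * 6f * Time.deltaTime;
        visualWheel.localRotation = Quaternion.Euler(spinAngle, 0f, 0f);
    }
}

Steering could be applied to the parent as well (for example from wheelCollider.steerAngle) without disturbing the child's spin axis.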
I'm a Swift beginner struggling with moving a scene node in ARKit in response to device motion.
What I want to achieve is: first detect the floor plane, then place a sphere on the floor. From that point onwards, depending on the movement of the device, I want to move the sphere along its x and z axes to move it around the floor of the room. (The sphere, once created, needs to be in the center of the device screen and locked to that view.)
So far I can detect the floor and place a node, no problem. I can use device motion to obtain the device attitude (pitch, roll and yaw), but how do I translate these values into meaningful x, y, z positions that I can update my node with?
Are there any formulas or methods used to calculate such information, or is this the wrong approach? I would appreciate a link to some info or an explanation of how to go about this. I am also unsure how to ensure the node stays at the center of the device screen.
So, as far as I understand, you want the following workflow:
Step 1. You create a sphere on a plane (which is already done)
Step 2. Move the sphere with respect to the camera's horizontal plane (i.e. along its x and z axes, to move it around the floor of the room depending on the movement of the device)
Assuming that Step 1 is done, here is what you can do:
Get the position of the camera and the sphere
This should first be done within the function that is invoked after the sphere is created (be it a tap gesture handler, touchesBegan(), etc.).
You can read the sphere's position from its SCNNode's position property, and get the camera's position and/or orientation from sceneView.session.currentFrame's camera.transform, which contains all the necessary parameters about the camera's current pose.
Move the sphere as camera moves
With the sphere's position in the scene and the transformation matrix of the camera, you can work out the offset between them. Here you can find a good explanation of how exactly to do it.
Once you have those pieces, implement the appropriate logic within renderer(_:didUpdate:for:) to keep the ball continuously locked with respect to the camera position, as in the sketch below.
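A rough sketch of what that could look like, assuming the sphere has already been placed on the floor. The sphereNode, floorY and distanceInFrontOfCamera names are placeholders, and the per-frame work is done here in the SCNSceneRendererDelegate callback renderer(_:updateAtTime:) as one possible hook:

import UIKit
import ARKit
import SceneKit
import simd

class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!     // remember to set sceneView.delegate = self

    var sphereNode: SCNNode?
    var floorY: Float = 0                   // height of the detected floor
    let distanceInFrontOfCamera: Float = 0.5

    // Call this once, right after the sphere has been placed on the floor plane.
    func lockSphereToFloor() {
        floorY = sphereNode?.position.y ?? 0
    }

    // Per frame: keep the sphere centred in front of the camera, clamped to the floor height.
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        guard let sphere = sphereNode,
              let camera = sceneView.session.currentFrame?.camera else { return }

        let t = camera.transform                                    // 4x4 camera transform
        let camPos = simd_float3(t.columns.3.x, t.columns.3.y, t.columns.3.z)

        // The camera looks down its local -Z axis; flatten that direction onto the floor.
        var forward = -simd_float3(t.columns.2.x, t.columns.2.y, t.columns.2.z)
        forward.y = 0
        if simd_length(forward) > 0.001 { forward = simd_normalize(forward) }

        let target = camPos + forward * distanceInFrontOfCamera
        sphere.position = SCNVector3(target.x, floorY, target.z)   // x/z follow, y stays on the floor
    }
}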
If you are interested in the math behind it, you can start by reading more about transformation matrices, which are a big part of image processing and many other areas.
Hope that this will help!
I am using SpriteKit in Xcode and I was wondering how to change the gravity direction. By default, imagine gravity acting along the "X" axis in the axes graphic below. What if I would like to change it to "Y"?
My goal is to give an object a falling effect: like falling from a high point, touching the ground, and then responding with physics!
(They could be dice in a board game.)
//Default gravity direction is X
SKSpriteNode *myNode = [SKSpriteNode spriteNodeWithImageNamed:@"ball"];
myNode.physicsBody=[SKPhysicsBody bodyWithCircleOfRadius:self.frame.size.width/2];
[self addChild: myNode];
Thanks in advance!
You can set the gravity vector on the physics world of your scene using this code:
self.physicsWorld.gravity=CGVectorMake(0,-10);
In SpriteKit, X and Y are the default coordinates that you see on the screen, and the Z coordinate is the order in which objects are stacked (the zPosition). Since SpriteKit is a 2D game engine, you do not have a third dimension, Z, to work with. You can change the gravity between Y and X (top/bottom and left/right of the screen, respectively), but not along the Z coordinate. If you want to recreate a "dice falling" effect, I would recommend creating a sprite called Dice scaled up by a large amount, and once you add it to the scene, scaling it down over x seconds.
// assuming self here is the Dice sprite; a scale factor between 0 and 1 shrinks it
[self runAction:[SKAction scaleBy:scaleFactorBelowOne duration:3]];
This will make the dice appear to be falling, and you might want to add some spinning animation to it if you wish; see the sketch below. If you want a 3D engine, try out Metal or SceneKit.
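For example, a small sketch of combining the scale-down with a spin, assuming diceNode is an SKSpriteNode already added to the scene (the durations, angle and scale values are arbitrary):

// Shrink the dice while spinning it, to fake a fall toward the board.
SKAction *shrink = [SKAction scaleTo:0.3 duration:1.5];
SKAction *spin   = [SKAction rotateByAngle:4 * M_PI duration:1.5];   // two full turns
SKAction *fall   = [SKAction group:@[shrink, spin]];                 // run both at once

[diceNode runAction:fall completion:^{
    // "Landed": give the node a physics body so it responds on the ground.
    diceNode.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:diceNode.size];
}];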
So I've developed games on other platforms where the x/y coordinate system made sense to me: the top left of the game screen was (0,0) and the bottom right was (width, height). Now I'm trying to make the jump to Unity 2D and I can't understand how the game screen works. If I have a background object and a character object on the screen, when I move the character around, its x and y values vary between -3 and 3: very small coordinates that don't match the game resolution I have set up (1024x768). Are there good tutorials for understanding the game grid in Unity? Or can anyone explain how I can accomplish what I'm trying to do?
There are three coordinate systems in Unity: screen coordinates, view coordinates and world coordinates.
World coordinates: Think of the absolute positioning of the objects in your scene, using "points". You can choose to have the units represent any length you want, for example 1 unit = 10 meters. What is actually shown on the screen is determined by where the camera is placed and how it is oriented.
View Coordinates: The coordinates in the viewport of a given camera. The viewport is the imaginary rectangle through which the world is viewed. These coordinates are proportional, and range from (0,0) to (1,1).
Screen Coordinates: The actual pixel coordinates denoting the position on the device's screen.
Note that the world coordinates of any given object will always be the same regardless of which camera is used to view it, whereas the view coordinates depend on the camera being used. The screen coordinates additionally depend on the resolution of the device and the placement of the camera's view on the screen.
The "Camera" object provides several methods to convert between these different coordinate systems like "ScreenToViewportPoint" "ScreenToWorldPoint" etc.
Example: Place object on top left of screen
float distanceFromCamera = 10.0f;
Vector3 pos = Camera.main.ScreenToWorldPoint (new Vector3 (0, Camera.main.pixelHeight, distanceFromCamera));
transform.position = pos;
The ScreenToWorldPoint function takes a Vector3 as an argument, where the x and y denote the pixel position on the screen ( 0,0 is the bottom left) and the z component denotes the desired distance from the camera. An infinite number of 3D locations can map to the same screen position, so you need to provide this value.
Just make sure that the desired position falls within the clipping region of the camera. Also, you might need to pick a proper pivot for your object depending on which part of your object you want centered on the top left.
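Since view (viewport) coordinates are proportional, the same placement can also be done resolution-independently with Camera.ViewportToWorldPoint, where (0,1) is the top-left corner. A small variation on the example above, under the same assumptions about distance and pivot:

float distanceFromCamera = 10.0f;   // must lie inside the camera's clipping range
// Viewport coordinates run from (0,0) at the bottom left to (1,1) at the top right,
// so (0,1) is the top-left corner regardless of screen resolution.
Vector3 pos = Camera.main.ViewportToWorldPoint (new Vector3 (0f, 1f, distanceFromCamera));
transform.position = pos;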
Using:
Camera.main.WorldToScreenPoint (transform.position);
lets me convert my GameObject's transform position to the screen's x and y coordinate system.