How to smoothly move a node in an ARKit Scene View based on device motion? - swift

Swift beginner struggling with moving a scene node in ARKit in response to device motion.
What I want to achieve is: first detect the floor plane, then place a sphere on the floor. From that point onwards, depending on the movement of the device, I want to move the sphere along its x and z axes to move it around the floor of the room. (Once created, the sphere needs to stay in the center of the device screen, locked to that view.)
So far I can detect the floor and place a node, no problem. I can use device motion to obtain the device attitude (pitch, roll and yaw), but how do I translate these values into meaningful x, y, z positions that I can update my node with?
Are there any formulas or methods used to calculate such information, or is this the wrong approach? I would appreciate a link to some info or an explanation of how to go about this. I am also unsure how to ensure the node is always at the center of the device screen.

So, as far as I understand, you want the following workflow:
Step 1. You create a sphere on a plane (which is already done)
Step 2. Move the sphere with respect to the camera's horizontal plane (i.e. along its x and z axis to move it around the floor of the room depending on the movement of the device)
Assuming that Step 1 is done, what you can do is:
Get the position of the camera and the sphere
Do this first inside the function that runs once the sphere has been created (a tapGestureRecognizer() action, touchesBegan(), etc.).
You can do it by reading the position property of the sphere's SCNNode, and by getting the camera's position and/or orientation from sceneView.session.currentFrame's .camera.transform, which contains all the necessary parameters about the current position of the camera.
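A minimal sketch of reading both (sceneView and sphereNode are assumed properties, and this fragment would run inside your tap handler):

    // Assumed properties: sceneView (ARSCNView) and sphereNode (SCNNode).
    guard let frame = sceneView.session.currentFrame else { return }

    let cameraTransform = frame.camera.transform   // simd_float4x4
    let cameraPosition = SIMD3<Float>(cameraTransform.columns.3.x,
                                      cameraTransform.columns.3.y,
                                      cameraTransform.columns.3.z)
    let spherePosition = sphereNode.simdPosition   // SIMD3<Float>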
Move the sphere as camera moves
Having the sphere's position in the scene and the transformation matrix of the camera, you can find the distance relation between them. Here you can find a good explanation of how exactly you can do it.
After you have those, implement the proper logic within renderer(_:didUpdate:for:) (or the per-frame renderer(_:updateAtTime:)) to keep the ball continuously locked with respect to the camera position.
If you are interested in the math behind it, you can start by reading more about transformation matrices, which are a big part of image processing and many other areas.
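A minimal sketch of that per-frame lock, assuming a sphereNode property and a floorY height captured when the sphere was first placed (both names are assumptions); it intersects the camera's view ray with the floor plane every frame and moves the sphere there, so the sphere stays centered under the device's view:

    // In your SCNSceneRendererDelegate / ARSCNViewDelegate:
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        guard let frame = sceneView.session.currentFrame,
              let sphere = sphereNode else { return }

        let m = frame.camera.transform
        let cameraPos = SIMD3<Float>(m.columns.3.x, m.columns.3.y, m.columns.3.z)
        // The camera looks along its negative Z axis in world space.
        let forward = -SIMD3<Float>(m.columns.2.x, m.columns.2.y, m.columns.2.z)

        // Distance along the view ray to the plane y == floorY.
        let distance = (floorY - cameraPos.y) / forward.y
        guard distance > 0 else { return }   // camera is not looking at the floor

        let hit = cameraPos + forward * distance
        sphere.simdPosition = SIMD3<Float>(hit.x, floorY, hit.z)
    }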
Hope this helps!

Related

Precision in onCollisionStart in Flame with Flutter

I'm attempting to write a bouncing ball game using Flame in Flutter. To detect collisions, the onCollision and onCollisionStart methods are provided. What I had hoped is that onCollisionStart would give a precise location when two objects first hit each other. Instead, it gives a list of positions indicating where the two objects overlap after the first game tick in which this happens (i.e. onCollisionStart is called at the same time as onCollision, but is not called a second time if the same two objects are still colliding on the next tick).
This is illustrated in the attached picture. The collision points are marked with red dots. If the ball were moving downwards, then the ball would have hit the top of the rectangle and so should bounce upwards. However, if the ball were moving horizontally, then its first point of contact would have been the top left corner of the box, and the ball would bounce upwards and to the left.
If I want to work out the correct angle at which the ball should fly off, I would need to do some clever calculations to work out the point where the ball first started hitting the other object (those calculations would depend on the precise shape of the other object). Is there some way to work out the point at which the two objects first started colliding? Thanks
What you usually need for this is the normal of the collision, but unfortunately we don't have that in the collision detection system yet.
We do have it in the raytracing system though, so what you could do is send out a ray and see how it will bounce and then just bounce the ball in the same way.
If you don't want to use raytracing, I suggest that you calculate the direction of the ball, which you might already have; if you don't, you can just store the last position and subtract it from the current position.
After that you need to find the normals of the edges where the intersection points are.
Let's say the ball direction vector is v, and the two normal vectors are n1 and n2.
Calculate the dot product (built into the vector_math library) of the ball's direction vector and each of the normal vectors:
dot1 = v.dot(n1)
dot2 = v.dot(n2)
Compare the results of the dot products:
If dot1 < 0, n1 is facing the ball (the ball is moving into that edge).
If dot2 < 0, n2 is facing the ball.
After that you can use v.reflect(nx) to get the direction your ball should take (where nx is the normal facing the ball).
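As a rough Dart sketch of that test (the normals here are assumptions for the corner case pictured above, and Flame's screen-space y axis grows downward):

    import 'package:vector_math/vector_math_64.dart';

    void main() {
      // Ball moving right and slightly downward.
      final v = Vector2(1, 0.5)..normalize();

      // Outward normals of the box's top and left edges.
      final nTop = Vector2(0, -1);
      final nLeft = Vector2(-1, 0);

      for (final n in [nTop, nLeft]) {
        // A negative dot product means the ball is moving into that edge.
        if (v.dot(n) < 0) {
          print('bounce off $n -> ${v.reflected(n)}');
        }
      }
    }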
Hopefully we'll have this built into Flame soon!

(UNITY) Plane not rotating to normal vector of three points?

I am trying to get a stretched-out cube (which we can call a plane for the sake of discussion) to orient itself to the normal vector of a plane described by three points. I wrote a script to find the normal of the three points, and then used transform.LookAt to align the planes. However, this script is not working at all as intended, and despite my best efforts I cannot figure out why.
Drastic movements of the individual points hardly affect the plane's rotation.
The rotation of the object, when using the existing points in the script, should be 0,0,0 in the inspector. However, it is always off by a few degrees and, as I said, the object does not align itself when I move the points around.
This is the script. I can also post photos showing the behavior, or share a small Unity package.
First of all, Transform.LookAt takes a position as its parameter, not a direction!
And then it
Rotates the transform so the forward vector points at worldPosition.
That doesn't sound like what you are trying to achieve.
If you want your object to look with its forward vector in the given normal direction (assuming you are calculating the normal correctly), then you could rather use Quaternion.LookRotation:
transform.rotation = Quaternion.LookRotation(doNormal(cpit, cmit, ctht));
Alternatively, you can simply assign the corresponding vector directly, e.g.
transform.forward = doNormal(cpit, cmit, ctht);
or
transform.up = doNormal(cpit, cmit, ctht);
depending on your needs.
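For reference, a minimal sketch of the whole thing, with the normal of the three points computed via a cross product (the point fields here stand in for the asker's cpit/cmit/ctht and their doNormal helper, which we haven't seen):

    using UnityEngine;

    public class AlignToNormal : MonoBehaviour
    {
        // The three points that define the target plane.
        public Transform p1, p2, p3;

        void Update()
        {
            // Normal of the triangle (p1, p2, p3): cross product of two edges.
            Vector3 normal = Vector3.Cross(p2.position - p1.position,
                                           p3.position - p1.position).normalized;

            // Point the object's up axis along that normal.
            transform.up = normal;
        }
    }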

What's the difference between ScreenToWorldPoint and ScreenPointToWorldPointInRectangle?

What's the difference between ScreenToWorldPoint and ScreenPointToWorldPointInRectangle? And when should we use which one?
Scenario:
I'm using the UI system to create a card game similar to Hearthstone. I want to transform my mouse drag positions to world positions. RectTransformUtility.ScreenPointToWorldPointInRectangle(UIObjectBeingDragged.transform.parent as RectTransform, Input.mousePosition, Camera.main, out resultV3) works fine. But I also tried Camera.main.ScreenToWorldPoint(Input.mousePosition), and it gives a different and "wrong" result.
ScreenToWorldPoint
Gives you a world position (the return value) that is along a ray shot through the near plane of the camera (the Camera whose method is being called) at some given point (the x and y components of the position parameter) and a given distance from that near plane (the z component of the position parameter).
You should use this when you:
have a specific distance from the near plane of the camera you are interested in and
don't need to know if it hit inside some rectangle or not
You could think of this as a shortcut for Ray.GetPoint, where the x and y of position plus the camera's parameters make the Ray, and the z component of position is the distance parameter.
ScreenPointToWorldPointInRectangle
Also gives you a world position (worldPoint) along a ray shot through the near plane of a camera (cam) at a given point (screenPoint). Only this time, instead of giving you the point a given distance along the ray, it gives you the intersection point between that ray and a given rectangle (rect), if one exists, and tells you whether such an intersection exists (the return value).
You should use this when you:
have a specific rectangle whose intersection with a camera ray you are interested in,
don't know the distance between the camera (or its near plane) and the intersection point, and
want to know whether the ray hits that rectangle at all.
You could think of this as a shortcut for Plane.Raycast, where cam and screenPoint make the Ray and rect makes the Plane, and which also tells you whether the intersection would fall outside the boundaries of the rect.
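A minimal sketch contrasting the two calls for the card-drag scenario above (cardParent is an assumed RectTransform, as in the question):

    using UnityEngine;

    public class ScreenPointComparison : MonoBehaviour
    {
        public RectTransform cardParent; // parent rect of the dragged card

        void Update()
        {
            // 1) Fixed distance: the world point 10 units from the camera's
            //    near plane, under the mouse cursor.
            Vector3 atDistance = Camera.main.ScreenToWorldPoint(
                new Vector3(Input.mousePosition.x, Input.mousePosition.y, 10f));

            // 2) Rectangle intersection: where the camera ray through the
            //    mouse hits the card's parent rect, if it hits it at all.
            Vector3 onRect;
            bool hit = RectTransformUtility.ScreenPointToWorldPointInRectangle(
                cardParent, Input.mousePosition, Camera.main, out onRect);

            if (hit)
                Debug.Log("rect: " + onRect + "  fixed distance: " + atDistance);
        }
    }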

Automatically calculating new position of camera after we increase our chessboard size but want it still to stay in shot

Say my camera is rotated 60 degrees around the X axis and is looking down on a 9x9-block chessboard. As the board size is adjusted, I want to zoom the camera out. Say, for argument's sake, the camera's position is (4,20,-7), and from there the whole board is visible and takes up the full screen.
If I adjust my board size to, say, 11x11 blocks, I will now need to zoom out the camera. Say I want to maintain the same 60-degree angle and want the board to fill as much of the screen as it did before. What should the camera's new position be, and how do you calculate it?
The X part is easy, since you simply give the camera the same X position as the middle of the board. I'm not sure how to calculate the new Y and Z positions, though.
Any advice appreciated. Thanks.
Edit: and if I wanted to change the angle of the camera as well as zoom out, is that possible to calculate? This is less important, since I'll probably stick with the same angle, but I'm interested to know the maths behind it anyway.
The Transform.Translate() method moves the transform relative to its own rotation by default, so you don't have to worry about the direction your camera is looking; just
yourCamera.transform.Translate(Vector3.forward * moveAmount);
will move your camera forward, which means zooming in. If you want to zoom out, just make the value negative.
Before I knew this, I used Mathf.Sin() and Mathf.Cos() to calculate the y and z world coordinates by hand, which sucked.
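If you do want the explicit math for your resize case, here is a sketch under two stated assumptions (the board grows around its center, and the camera keeps its 60-degree pitch): with a perspective camera at a fixed angle and field of view, the distance needed to frame the board, and therefore the camera's offset from the board center, scales linearly with the board size.

    using UnityEngine;

    public class BoardCamera : MonoBehaviour
    {
        public Transform boardCenter;      // middle of the board
        public float baseBoardSize = 9f;   // board size the base offset frames

        private Vector3 baseOffset;        // camera offset that frames a 9x9 board

        void Start()
        {
            baseOffset = transform.position - boardCenter.position;
        }

        // Framing an N x N board just means scaling the base offset
        // by N / baseBoardSize; the viewing angle stays the same.
        public void FitBoard(float newBoardSize)
        {
            float scale = newBoardSize / baseBoardSize;
            transform.position = boardCenter.position + baseOffset * scale;
        }
    }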

Cylinder simulation on top of each other in Unreal Engine 4

I created 2 cylinders and put one on top of the other. Then I enabled physics simulation, and on each cylinder I added the blueprint shown in the image below, which adds a rotation of 50 on the z axis of each cylinder, in opposite directions.
It turns out that when I run the simulation, the cylinders rotate in one direction but move along the ground in the other: if a cylinder turns clockwise it moves left, and vice versa, when it should be the other way around.
Can anyone help me solve this? I want the two cylinders to work together, and to see their simulation accelerate with a constant rotation, but that's not what happens.
If you want to simulate physics, you should be applying forces to the cylinders using Add Torque in Radians or Add Torque in Degrees, rather than modifying the rotation directly.
Alternatively, if you want to control the cylinders precisely, do not simulate physics: disable Simulate Physics and animate the rotation and position of the cylinders directly, as you are doing now.
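A minimal C++ sketch of the first option (CylinderMesh is an assumed UStaticMeshComponent* with Simulate Physics enabled; the torque axis and strength are placeholders standing in for the question's z-axis rotation of 50):

    // In the actor's Tick: apply torque and let the physics engine produce
    // the matching ground movement, instead of setting the rotation directly.
    void ACylinderActor::Tick(float DeltaTime)
    {
        Super::Tick(DeltaTime);

        const FVector Torque = GetActorUpVector() * 50.f; // spin about local up
        CylinderMesh->AddTorqueInRadians(Torque, NAME_None, /*bAccelChange=*/true);
    }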