Cylinder simulation on top of each other in Unreal Engine 4

I created two cylinders and placed one on top of the other. Then I enabled Simulate Physics and, in the Blueprint shown in the image below, added a rotation of 50 on the Z axis of each cylinder, in opposite directions.
When I run the simulation, the cylinders rotate in one direction but move along the ground in the other: if a cylinder turns clockwise it moves left, and vice versa, when it should be the opposite.
Can anyone help me solve this? I want the two cylinders to work together, and I expect to see the simulation accelerate them under the constant rotation, but that's not what happens.

If you want to simulate physics, you should apply torques to the cylinders using Add Torque in Radians or Add Torque in Degrees rather than modifying the rotation directly.
Alternatively, if you want to control the cylinders precisely, don't simulate physics: disable Simulate Physics and animate the rotation and position of the cylinders directly, as you are doing now.

Related

Walk around square platform. Unity2D

I'm trying to make a character that walks around a platform: when the character reaches a corner, it rotates and continues walking along the side of the platform, and the same for the bottom part.
This is a visual representation of what I'm trying to achieve.
Movement
The specific problem is that when the character reaches a corner, the rotation just goes crazy. I'm trying to achieve this using a raycast from the character to the platform, and if the raycast doesn't find floor, I start the rotation like this:
times++;
transform.rotation = Quaternion.Slerp(transform.rotation,
    Quaternion.AngleAxis(times * -89.9f, Vector3.forward),
    Time.deltaTime * RotationSpeed);
_characterGravity.SetGravityAngle(transform.localEulerAngles.z);
I'm using a characterGravity script that lets me change the gravity direction for the character so it doesn't fall when walking upside down or on the sides. But this is not working properly. Is there a better way to do this?
Assuming your 2D view is aligned on the X-Y plane, with the Z-axis aimed into the screen (the same direction the camera is facing), I suggest using Transform.Rotate() instead of trying to linearly interpolate the rotation between two values:
if (ShouldRotateAroundCorner()) {
    transform.Rotate(Vector3.forward, RotationSpeed * Time.deltaTime);
}
You'll just need to make sure your ShouldRotateAroundCorner() knows when to start and stop the 90-degree turn, which takes a little additional code to track when the character is in this turning state.
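For illustration, here is a minimal sketch of that state tracking, under the same assumptions as the snippet above (RotationSpeed in degrees per second); the CornerWalker class and GroundRayHits() are hypothetical stand-ins for the script and raycast already described in the question:

using UnityEngine;

public class CornerWalker : MonoBehaviour {
    public float RotationSpeed = 180f;  // degrees per second
    private bool turning;
    private float degreesTurned;

    bool ShouldRotateAroundCorner() {
        // Start a turn when the ground ray stops finding floor.
        if (!turning && !GroundRayHits()) {
            turning = true;
            degreesTurned = 0f;
        }
        return turning;
    }

    void Update() {
        if (ShouldRotateAroundCorner()) {
            // Clamp the step so the turn stops at exactly 90 degrees.
            float step = Mathf.Min(RotationSpeed * Time.deltaTime, 90f - degreesTurned);
            transform.Rotate(Vector3.forward, step);
            degreesTurned += step;
            if (degreesTurned >= 90f) turning = false;
        }
    }

    bool GroundRayHits() {
        // Placeholder for the raycast from the character to the platform.
        return Physics2D.Raycast(transform.position, -transform.up, 1f);
    }
}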

[Solved] 3D movement on X and Y with mesh colliders

I am working on an Asteroids clone in 3D with a top-down camera. This setup is static and will not change (so no, I do not want to turn the project into a 2D game instead).
This means that I need to limit all movement to the X and Y axes. I created movement for both the asteroids and the player, and everything worked fine. All movement is done using AddForce on the respective Rigidbody components.
The problem comes when I start dealing with collisions. I use Mesh Collider components to get a nice, precise "touch reaction". The trouble is that when such a collision occurs, the new movement vector has a Z value different from 0, so the object starts moving on the Z axis.
What have I tried:
Freezing constraints on the Rigidbody
Manually resetting Z in the Update function
The first solution (freezing constraints) did not work, and neither did the second (which, moreover, seems quite messy).
So the question is
What would be the optimal way to restrict physics-based movement to the X and Y axes while still using precise collisions with Mesh Colliders?
Are you sure you used the position constraints correctly? Check the documentation to see how the constraints are set: https://docs.unity3d.com/ScriptReference/RigidbodyConstraints.FreezePosition.html
If that doesn't help, please share the code or a screenshot of the Rigidbody constraints you tried in the editor.
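For reference, a minimal sketch of setting those constraints from code (the flag names are real RigidbodyConstraints values; combining them with | is the usual pattern for locking movement to the X-Y plane):

using UnityEngine;

[RequireComponent(typeof(Rigidbody))]
public class PlaneLock : MonoBehaviour {
    void Start() {
        Rigidbody rb = GetComponent<Rigidbody>();
        // Freeze Z position, plus the X/Y rotations that would let a
        // collision tip the object out of the X-Y plane and reintroduce
        // movement along Z.
        rb.constraints = RigidbodyConstraints.FreezePositionZ
                       | RigidbodyConstraints.FreezeRotationX
                       | RigidbodyConstraints.FreezeRotationY;
    }
}

Note that constraints only stop the solver from accumulating Z velocity; if other code writes transform.position directly, it can still push the object off the plane.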

How to smoothly move a node in an ARKit scene view based on device motion?

I'm a Swift beginner struggling with moving a scene node in ARKit in response to device motion.
What I want to achieve is: first detect the floor plane, then place a sphere on the floor. From that point onwards, depending on the movement of the device, I want to move the sphere along its x and z axes around the floor of the room. (Once created, the sphere needs to stay in the center of the device screen, locked to that view.)
So far I can detect the floor and place a node, no problem. I can use device motion to obtain the device attitude (pitch, roll and yaw), but how do I translate these values into meaningful x, y, z positions to update my node with?
Are there any formulas or methods used to calculate such information, or is this the wrong approach? I would appreciate a link to some info or an explanation of how to go about this. I am also unsure how to ensure the node stays at the center of the device screen.
So, as far as I understand, you want the following workflow:
Step 1. You create a sphere on a plane (which is already done)
Step 2. Move the sphere with respect to the camera's horizontal plane (i.e. along its x and z axes, moving it around the floor of the room as the device moves)
Assuming that Step 1 is done, here is what you can do:
Get the position of the camera and the sphere
This should first be called within the function that is invoked after the sphere is created (be it a tapGestureRecognizer(), touchesBegan(), etc.).
You can read the sphere's position from its SCNNode's position property, and get the camera's position and/or orientation from sceneView.session.currentFrame's .camera.transform, which contains all the necessary parameters about the current pose of the camera.
Move the sphere as the camera moves
Given the sphere's position in the scene and the transformation matrix of the camera, you can work out the offset between them. Here you can find a good explanation of how exactly to do it.
Once you have those, implement the appropriate logic within renderer(_:didUpdate:for:) to keep the ball continuously locked to the camera position.
If you are interested in the math behind it, you can start by reading more about transformation matrices, which are a big part of image processing and many other areas.
Hope this helps!

How do I apply rotations to a parent-child chain using Rigidbody?

I have a finger object. It is just three cubes representing the finger parts.
The 2nd cube is the child of the 1st one. And the 3rd cube is the child of the 2nd one.
This is the hierarchy: Cube1 -> Cube2 -> Cube3
My goal is to apply a rotation angle to the first cube and let the other cubes do the same locally.
Example: Apply 30 degrees Z rotation to the first cube, 30 degrees Z rotation to the 2nd cube, and also the 3rd one.
This will make a finger that looks like this:
(Forgive me if it doesn't look like a finger)
In every Update() frame, I will change the angle (it's just one number) and it will rotate every cube for me.
My question is:
How do I make all these cubes collide properly with other objects?
I tried putting a Rigidbody on each of them with isKinematic = true, because I want to transform them myself. But I still cannot use transform.rotation to update the rotation, because it misses collisions with a ball very easily (especially at the tip of the finger, which moves faster than the other parts). Continuous collision detection doesn't help.
So I tried using rigidbody.MoveRotation() and rigidbody.MovePosition() instead, which is a pain because they need absolute (world-space) values. They worked, but the animation is very jumpy when I change the angle quickly.
I'm guessing the animation is jumpy because there are many Rigidbodies, or because the physics engine cannot interpolate the position of each box properly when I use MoveRotation() and MovePosition().
I need to use MovePosition() also because when I use only child.MoveRotation(transform.parent.rotation * originalChildLocalRotation * Quaternion.Euler(0, 0, angle)), the position doesn't move relative to the parent. So I have to compute child.MovePosition(parent.TransformPoint(originalLocalPositionOfTheChild)) every frame too.
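For what it's worth, here is a minimal sketch of the per-frame update described above, reusing the names from the question (the FingerSegment class itself is hypothetical). It assumes kinematic Rigidbodies, runs the moves in FixedUpdate(), and enables interpolation, which is the usual remedy for jumpy-looking MovePosition()/MoveRotation() animation:

using UnityEngine;

public class FingerSegment : MonoBehaviour {
    public Rigidbody child;   // the next cube down the chain
    public float angle;       // updated every frame elsewhere

    private Quaternion originalChildLocalRotation;
    private Vector3 originalLocalPositionOfTheChild;

    void Start() {
        originalChildLocalRotation = child.transform.localRotation;
        originalLocalPositionOfTheChild = child.transform.localPosition;
        // Interpolation lets the renderer smooth between physics steps.
        child.interpolation = RigidbodyInterpolation.Interpolate;
    }

    void FixedUpdate() {
        // MoveRotation()/MovePosition() take absolute (world) values, so the
        // child's original local pose is converted through this parent.
        child.MoveRotation(transform.rotation * originalChildLocalRotation
                           * Quaternion.Euler(0, 0, angle));
        child.MovePosition(transform.TransformPoint(originalLocalPositionOfTheChild));
    }
}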

Visible surface in some angle in Unity

I have a floor surface like in this screenshot: http://prntscr.com/amqstw. If I move the camera to a certain angle, part of the floor disappears: http://prntscr.com/amqt19. How can I resolve this problem?
That effect is due to backface culling.
At that angle the camera is (probably) inside the mesh of the floor, so the normal vectors of the cube (I presume) are facing the other way, and those faces get "culled" (become invisible).
You can turn it off in two ways:
By editing the mesh in your modeling software so that it becomes a "double-sided mesh", or
By finding a shader which, once applied to the floor object, deactivates its backface culling (harder to do without breaking something else).
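A third, hedged option (not from the answer above): make the mesh double-sided at runtime with a small script that appends a reversed-winding copy of every triangle. This is a quick workaround only; the duplicated faces reuse the original normals, so lighting on the back side will not be correct:

using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class DoubleSided : MonoBehaviour {
    void Start() {
        Mesh mesh = GetComponent<MeshFilter>().mesh;  // instance copy, not the shared asset
        int[] tris = mesh.triangles;
        int n = tris.Length;
        int[] doubled = new int[n * 2];
        tris.CopyTo(doubled, 0);
        for (int i = 0; i < n; i += 3) {
            // Swapping two indices reverses the winding order, so the
            // duplicated triangle faces the opposite direction.
            doubled[n + i]     = tris[i];
            doubled[n + i + 1] = tris[i + 2];
            doubled[n + i + 2] = tris[i + 1];
        }
        mesh.triangles = doubled;
    }
}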