My player character moves around the world using the Animator with root motion enabled. Basically, the AI system sets velocity parameters on the Animator, which in turn selects the animation clips that drive the character's motion. As this is a standard feature that produces very realistic animation without noticeable sliding, I thought this was a good idea ...until I added network synchronization.
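For context, the driving pattern looks roughly like this (a minimal sketch, not my exact code; the "Speed" parameter name and the method are assumptions):

using UnityEngine;

[RequireComponent(typeof(Animator))]
public class AIDrivenLocomotion : MonoBehaviour
{
    Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
        animator.applyRootMotion = true; // the clips move the character, no Translate() calls
    }

    // Called by the AI system with the velocity it wants the character to have.
    public void SetDesiredVelocity(Vector3 velocity)
    {
        // The locomotion blend tree picks clips whose root motion matches this speed.
        animator.SetFloat("Speed", velocity.magnitude);
    }
}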
Syncing the characters over the network using NetworkTransform and NetworkAnimator causes those two components to conflict:
NetworkTransform moves the character to whichever position the host commands.
NetworkAnimator syncs the animation vars and plays the animation clips as the host instructs it to, while those animation clips also apply root motion.
The result is precise (meaning the character reaches the exact target destination) but very stuttery movement (noticeable jumps).
If I remove NetworkTransform, the host and client instances of the characters desynchronize very quickly, meaning they end up at different positions in the world when controlled solely by the timing-dependent Animator.
If I remove NetworkAnimator, client instances won't play the same animations as the host, if any animations at all.
I tried keeping both components while disabling root motion on the Animator (on the client only). In that case, however, NetworkTransform does not seem to interpolate at all: the character just jumps from synced position to synced position in steps of about 0.02 units. Same with rotation.
NetworkTransform is configured to "Sync Transform", since the character has neither a Rigidbody nor a CharacterController. All other values are at their defaults: a sync rate of 9 (I also tried higher values there), movement threshold of 0.001, snap threshold of 5, interpolate movement = 1.
How do I get fluid root-motion-based movement over the network? I expected this to be a standard scenario...
What you need is to disable the Root Motion flag on non-local instances, but also to interpolate the rotation, not just the movement.
Moreover, an interpolation value of 1 seems high, as does the snap threshold of 5: those values seem unrealistic unless you are using a non-standard scale (the Unity standard is 1 unit = 1 meter). I would use 25 cm (0.25) of interpolation for the movement and 3 degrees for the rotation. The sync rate of 9 could be enough, but in my experience it has to be tuned based on the packet loss.
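In UNet terms (the API NetworkTransform and NetworkAnimator belong to), a minimal sketch of the root motion part; the component name is hypothetical:

using UnityEngine;
using UnityEngine.Networking;

public class DisableRemoteRootMotion : NetworkBehaviour
{
    public override void OnStartClient()
    {
        base.OnStartClient();
        // Only the owning instance lets the Animator drive the transform;
        // remote copies are positioned by NetworkTransform instead.
        if (!hasAuthority)
        {
            GetComponent<Animator>().applyRootMotion = false;
        }
    }
}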
I use Unity's Navmesh agent to govern the movement of my objects. I am observing a weird behaviour: increasing the acceleration of my object increases the accuracy of the path.
For example, consider the following two objects:
Object 1 moves with speed of 15 and acceleration of 30.
Object 2 moves with speed of 15 and acceleration of 8.
I instruct both objects to move to the same spot one at a time. Reaching the spot requires the object to make a left turn around a building.
Object 1 reaches the destination using a nice tight path, while Object 2 makes a weird wide turn when trying to take the corner. I've captured these two in the attached GIF.
Why is this happening?
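For anyone reproducing this, the setup is roughly the following (a sketch; the destination value is hypothetical):

using UnityEngine;
using UnityEngine.AI;

public class AgentComparison : MonoBehaviour
{
    void Start()
    {
        NavMeshAgent agent = GetComponent<NavMeshAgent>();
        agent.speed = 15f;
        agent.acceleration = 30f; // Object 1; Object 2 uses 8f and takes the corner wide
        agent.SetDestination(new Vector3(10f, 0f, 20f)); // hypothetical spot behind the building
    }
}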
Anim_BP
In the animation blueprint, I use the "Layered blend per bone" node to layer two animations together. My character has a running animation and also holds an AK-47, but due to the running animation, the weapon jitters a lot when moving left or right. The "Mesh Space Rotation Blend" option on the "Layered blend per bone" node fixes this issue. I am a beginner in Unreal Engine 4 and I am following a series of FPS tutorials from "DevSquad".
Basic Problem: Unity's physics engine produces weird collisions when a player moves over a flat surface made of more than one Collider. These ghost collisions occur at the joints between Colliders and manifest as two behaviors:
This seems to be a problem with physics engines in general, based on this talk by Bennett Foddy:
https://www.youtube.com/watch?v=NwPIoVW65pE&ab_channel=GDC
Game Specifics:
In my case, the player is moving through a procedurally generated wormhole, composed of Segment objects using a MeshCollider. The wormhole twists randomly through 3D space, while the width and height change dynamically. The player can strafe 360 degrees around the inside of the tunnel (direction of gravity is relative to position).
This makes the simpler solutions I've found impractical. Those include:
Using a single object instead of many
Placing redundant Colliders behind the joints
I've managed to flag these erroneous collisions in OnCollisionEnter(). This method on the PlayerController reliably identifies them and raises a flag:
private void OnCollisionEnter(Collision other)
{
    // Only consider collisions with tunnel segments.
    if (!other.gameObject.CompareTag("Tunnel")) { return; }

    // Bit mask for the tunnel layer.
    int tunnelLayerMask = 1 << 10;

    // Get the direction from the nearest Segment's origin to the collision point.
    Vector3 toCollision = other.contacts[0].point - nearestSegment.transform.position;

    if (Physics.Raycast(nearestSegment.transform.position, toCollision, out RaycastHit hit, 100f, tunnelLayerMask))
    {
        // Flag the collision if the detected surface normal
        // isn't equal to the collision normal.
        if (other.contacts[0].normal != hit.normal) { colFidelityFlag = true; }
    }
}
But I'm at a complete loss when it comes to gracefully resolving the issue.
Currently, I'm just caching the player's velocity each frame. If a collision is flagged, I overwrite the resulting velocity with the cached velocity from the previous frame. This works for most conditions: the player ghosts imperceptibly into the floor for a frame, and gets past the offending joint.
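For reference, the caching workaround is roughly this (a sketch; rb is an assumed Rigidbody field, and colFidelityFlag is the flag raised above):

private Rigidbody rb;
private Vector3 cachedVelocity;

private void FixedUpdate()
{
    if (colFidelityFlag)
    {
        // A ghost collision was flagged: discard the collision response
        // and restore the velocity cached on the previous step.
        rb.velocity = cachedVelocity;
        colFidelityFlag = false;
    }
    else
    {
        // Remember this step's velocity for possible restoration next step.
        cachedVelocity = rb.velocity;
    }
}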
But under high enough velocities or interactions with obstacles inside the tunnel, it is possible for the player to be ejected through the floor, and into space. I don't want to limit the velocity of the player too much, since the feel of the game partially relies on high velocity and hard collisions.
Does anyone know of a better way to resolve these erroneous collisions?
This is caused by "ghost vertices", which occur when you have, say, two box colliders (or something equivalent) connected to each other. You could try joining the colliders so that instead of two separate connected colliders you have a single one.
Here ghost vertices are explained in more detail: https://www.iforce2d.net/b2dtut/ghost-vertices
In my board game, points are scored by throwing 7 sea shells (cowry shells). These shells are dropped onto a sphere in Unity so they roll randomly to different places. Once rigidbody.IsSleeping() returns true, I do a raycast (from the belly side, downwards) to figure out the orientation of each shell. If it is NOT a hit, we know the shell's belly is turned upward, which means a point.
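The orientation check is roughly this (a sketch, assuming the belly faces along the shell's local down axis; the names are hypothetical):

using UnityEngine;

public static class ShellScoring
{
    public static bool IsBellyUp(Transform shell, float rayLength = 1f)
    {
        // Cast outward from the belly side. If the belly faces the sphere,
        // the ray hits it; if it hits nothing, the belly is turned upward: a point.
        Vector3 bellyDirection = -shell.up; // assumption: belly on the model's -Y side
        return !Physics.Raycast(shell.position, bellyDirection, rayLength);
    }
}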
All is good and very realistic in single-player mode, because I just enable gravity on the shells; they drop onto the sphere, roll randomly, and when they stop I read the marks as stated above.
Now the problem is that I am making the game multiplayer. In this case, I send the randomly generated marks from the server, and the client has to animate the shells to represent those marks. For example, if the server sends 3, then 3 out of the 7 shells should end with their bellies turned upward.
Trying to do this has been a major problem for me. I tried calling transform.Rotate() when the velocity is reduced, but it was not very reliable and sometimes acts crazy. Rotating after rigidbody.IsSleeping() works, but looks very unrealistic.
I know I am trying to defy physics here, but there may be some way to achieve what I want with minimal artificial effect.
I actually need some ideas.
Update - 1
After the info I received below, I did find some information here, and some advanced stuff here. Since the latter link had some advanced stuff, I wanted to start small, so I followed the first link and did the test below.
I recorded the position, rotation, and velocity of the sea shell with auto-simulation enabled and logged them to a file. Then I used Physics.Simulate() for the same scenario and logged the same data.
Comparing the two tells me that the data in both cases is quite similar. So it seems that, for my requirements, I need to simulate the sea-shell drop and then apply that sequence to the actual object.
Now my problem is: how can I apply the results of Physics.Simulate() (position, rotation, velocity, etc.) to the actual sea shell so the animation can be seen? If I set the positions on my GameObject within the simulation loop, nothing happens.
public void Simulate()
{
    rbdy = GetComponent<Rigidbody>();
    rbdy.AddForce(new Vector3(0f, 0f, 10f));
    rbdy.useGravity = true;
    rbdy.mass = 1f;

    // Manual stepping only works while auto-simulation is off.
    Physics.autoSimulation = false;

    // Simulate where it will be in 5 seconds.
    while (simulateTime >= Time.fixedDeltaTime)
    {
        simulateTime -= Time.fixedDeltaTime;
        Physics.Simulate(Time.fixedDeltaTime);
        Debug.Log($"position: {rbdy.position} rotation: {rbdy.rotation} Velocity {rbdy.velocity.magnitude}");
        gameObject.transform.position = rbdy.position;
    }

    Physics.autoSimulation = true;
}
So, how can I get this simulated data applied to the actual GameObject in the scene?
A few options:
1. Assuming physics is deterministic, just set the velocity and position and let the simulation run on each client. The output should be the same; if it differs slightly, you could adjust it, and that may be only barely noticeable.
2. Physics.Simulate may be interesting to read about, even if it's kind of the opposite of what you want.
3. You can do the throw on one client, record the steps in real time or using Physics.Simulate (see point 2), and transmit the animation data as binary; the other clients then use it to play back the animation.
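A sketch of point 3's record-and-replay idea (class and field names are hypothetical; it assumes nothing else needs physics while recording):

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class ShellReplay : MonoBehaviour
{
    struct Pose { public Vector3 position; public Quaternion rotation; }

    public void ThrowAndReplay(Rigidbody rbdy, float seconds)
    {
        // Record: step the physics manually and store one pose per step.
        List<Pose> poses = new List<Pose>();
        Physics.autoSimulation = false;
        for (float t = 0f; t < seconds; t += Time.fixedDeltaTime)
        {
            Physics.Simulate(Time.fixedDeltaTime);
            poses.Add(new Pose { position = rbdy.position, rotation = rbdy.rotation });
        }
        Physics.autoSimulation = true;

        // Replay: apply one recorded pose per physics step so the motion is visible.
        rbdy.isKinematic = true; // keep live physics from fighting the playback
        StartCoroutine(Replay(rbdy, poses));
    }

    IEnumerator Replay(Rigidbody rbdy, List<Pose> poses)
    {
        foreach (Pose pose in poses)
        {
            rbdy.position = pose.position;
            rbdy.rotation = pose.rotation;
            yield return new WaitForFixedUpdate();
        }
    }
}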
I'm reading the Unity Animation Cookbook and I'm stuck on the "Root Motion" topic. All I understand so far is that root motion allows the GameObject to move with the motion clip without coding, and that it depends on the root node. But I can't imagine/understand how, or what the related properties like "Bake Into Pose" mean. What is the pose? I searched the web for anyone talking about it, but there are no helpful tutorials. I tried to read the Unity docs on the topic, but that made it worse:
https://docs.unity3d.com/Manual/RootMotion.html
Please help me with an example/link/reply.
After spending more time searching, watching videos, and reading other books to understand everything, I'll put my answer here for anyone facing the same difficulties with this topic.
Treadmill vs root motion: There are two types of animation, treadmill and root motion. Treadmill means that the animation stays at the origin and we use code to move the asset around. Root motion means the motion is built right into the animation, and it's the animation that determines how far something moves rather than code.
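A sketch of the contrast (the speed value and class are illustrative, not from the book):

using UnityEngine;

[RequireComponent(typeof(Animator))]
public class TreadmillVsRootMotion : MonoBehaviour
{
    public float speed = 2f;
    Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        if (!animator.applyRootMotion)
        {
            // Treadmill: the clip plays in place, so code must move the GameObject.
            transform.Translate(Vector3.forward * speed * Time.deltaTime);
        }
        // Root motion: with applyRootMotion enabled, the clip itself moves
        // the GameObject and no movement code is needed here.
    }
}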
Then watch this video to get an idea of how it looks in Blender, and later in Unity when you import the character and animation:
https://www.youtube.com/watch?v=d5z9dEnE4DE
Root Transform Rotation: This option captures the rotation of the root node and applies it to the whole game object. You can set it to Bake Into Pose to disable the root motion rotation. With this option selected, the rotation will be treated as a visual effect of the animation and will not be applied to the game object. You should set it to true for every animation that shouldn't rotate the character. You can set the Based Upon option to either Body Orientation or Original.
Root Transform Position Y: This option captures the vertical movement of the root node and applies it to the whole game object. You can set it to Bake Into Pose to disable the root motion in the Y axis. With this option selected, the Y-axis motion will be treated as a visual effect of the animation and will not be applied to the game object. You should set it to true for every "on ground" animation (unless it's a jump).
Root Transform Position XZ: This option captures the horizontal (XZ) movement of the root node and applies it to the whole game object. You can set it to Bake Into Pose to disable the root motion in the X and Z axes. With this option selected, horizontal motion will be treated as a visual effect of the animation and will not be applied to the game object. You should set it to true for all stationary animations (such as Idle).
Good animations may combine both the traditional (treadmill) and root motion approaches.
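One common way to combine them is to intercept root motion in a script (a sketch; defining OnAnimatorMove gives the code the final say over the clip's motion, and speedMultiplier is a hypothetical field):

using UnityEngine;

[RequireComponent(typeof(Animator))]
public class ScriptedRootMotion : MonoBehaviour
{
    public float speedMultiplier = 1f;
    Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    // When this callback is defined, Unity hands the clip's root motion
    // to the script instead of applying it automatically.
    void OnAnimatorMove()
    {
        transform.position += animator.deltaPosition * speedMultiplier;
        transform.rotation = animator.deltaRotation * transform.rotation;
    }
}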
The Body Transform (Pose) is the mass center of the character.
The Root Transform is a projection on the Y plane of the Body Transform (Pose) and is computed at runtime.
If you want to transfer the body transform to the GameObject, you first have to transfer it to the root transform, because that is the one applied to the GameObject's transform. To transfer, for example, the XZ motion of the body transform to the root transform, you need to uncheck the Bake Into Pose option.
All of this is right except "You should set it to true for every "on ground" animation (unless it's a jump)."
More correct would be: "You should set it to True for every animation where you move the character's GameObject along the Y axis by code (a Unity script), including jump animations. If you will not move the character's GameObject along the Y axis by code, but instead want to implement the jump as movement of the bones (parts of the character's skeleton) from the jump animation itself (if that movement is present in the imported animation), you must set it to False. Note that many popular "in-place" jump animations do not include the bone movement that would reproduce a full character jump (simply put, in these animations the root node does not rise to the required height)."