What is a "Mesh Space Rotation Blend" in "Layered blend per bones" node? - unreal-engine4

[Screenshot: Anim_BP]
In my animation blueprint, I use the "Layered blend per bones" node to layer two animations together. My character has a running animation and also holds an AK-47, but because of the running animation the weapon jitters a lot when moving left or right. Enabling "Mesh Space Rotation Blend" on the "Layered blend per bones" node fixes the issue, but what does it actually do? I am a beginner in Unreal Engine 4, following the FPS tutorial series from "DevSquad".

Related

Most efficient way of calculating big terrain collision mesh

I'm using the Unity engine to make a space exploration game. I have to implement the collision system for a planet I built with a quadsphere and quadtree LOD, and I thought of two ways:
Generate one BIG mesh collider for the whole planet and keep collision enabled only for the face the player is on (the planet has 6 faces). This works because the mesh is created only once, but the game keeps a mesh collider far bigger than needed in the scene.
Use the quadtree LOD and create a new mesh collider every second, but only for the high-resolution terrain near the player. The mesh collider in this option is much smaller than in the first, but it is rebuilt every second.
Which approach is preferable? And are there more efficient methods than these two?
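A minimal sketch of the second option described above, assuming a hypothetical BuildPatchMesh helper that queries the quadtree for the high-resolution leaves around a point and merges them into one mesh. Assigning MeshCollider.sharedMesh triggers the (expensive) physics bake, so keeping the patch small is what makes this cheaper than a whole-planet collider:

    using UnityEngine;

    public class LocalTerrainCollider : MonoBehaviour
    {
        public Transform player;
        public float rebuildInterval = 1f;   // seconds between collider rebuilds

        MeshCollider meshCollider;
        float nextRebuild;

        void Start()
        {
            meshCollider = gameObject.AddComponent<MeshCollider>();
        }

        void Update()
        {
            if (Time.time < nextRebuild) return;
            nextRebuild = Time.time + rebuildInterval;

            // Rebuild only the small patch of terrain around the player;
            // assigning sharedMesh re-bakes the collision data.
            meshCollider.sharedMesh = BuildPatchMesh(player.position);
        }

        Mesh BuildPatchMesh(Vector3 center)
        {
            // Hypothetical: gather the quadtree leaf meshes near `center`
            // and merge them. Placeholder so the sketch compiles.
            return new Mesh();
        }
    }

If you are on Unity 2019.3 or later, Physics.BakeMesh can additionally pre-bake the collision data on a worker thread before the assignment, which moves most of the hitch off the main thread.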

Unity-3D - Can I have several objects show the same animation state?

An animated figure requires an Animator to play its animation.
I am trying to create a battalion of soldiers walking in lockstep. Just creating 200 soldiers and moving them is easy, but each one has its own Animator, which recalculates that soldier's animation pose every frame, for every soldier.
Since they are all identical, it would seem better to have one Animator calculate the pose and use the resulting mesh for all of them.
Is there a way to have several GameObjects share one Animator, one Mesh, and one single copy of the resulting pose?
The attached screenshot shows the 500 Animators consuming several milliseconds across 5 CPU cores to evaluate the animations...
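One way to get a single shared pose is to keep one hidden "master" soldier with a normal Animator, snapshot its skinned pose into a static mesh once per frame with SkinnedMeshRenderer.BakeMesh, and redraw that mesh for every soldier with GPU instancing. A minimal sketch, assuming an instancing-enabled material and a list of per-soldier transforms filled elsewhere:

    using System.Collections.Generic;
    using UnityEngine;

    public class LockstepBattalion : MonoBehaviour
    {
        public SkinnedMeshRenderer master;     // the one animated soldier
        public Material instancedMaterial;     // must have GPU instancing enabled
        public List<Matrix4x4> soldierTransforms = new List<Matrix4x4>();

        Mesh bakedPose;

        void Awake()
        {
            bakedPose = new Mesh();
        }

        void LateUpdate()
        {
            // Snapshot the master's current animated pose into a plain mesh.
            master.BakeMesh(bakedPose);

            // Draw every soldier with that same pose in one instanced call
            // (DrawMeshInstanced is limited to 1023 matrices per call).
            Graphics.DrawMeshInstanced(bakedPose, 0, instancedMaterial, soldierTransforms);
        }
    }

BakeMesh still costs something each frame, but it runs once instead of 500 times; hide or disable the master's own renderer if it should not also be drawn directly.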

Independent Eyes and Mouth animations in unity3d

I am trying to independently animate the mouth, eyes, and facial expressions on a 3D humanoid character in Unity. The problem I am having is that the animation system always blends the eyes and mouth, making the character look like a slack-jawed yokel.
I have bones for neck, head, jaw, and 1 for each eye.
What I have tried:
Attempt 1
Create 3 layers: 1 for the body, 1 for the mouth, 1 for the eyes. Add a head mask to the mouth and eye layers. Set the weight to 1 and blending to Override for all layers.
What happens is the blend weight just gets set to 0.5 for both head layers.
Attempt 2
Use 1 body layer and 1 head layer with a head mask. In the head layer, use a blend tree with a Direct blend type, with nested blend trees for eye movement and jaw movement.
What happens is the blend weight just gets divided up between them, and the mouth hangs open.
Attempt 3
Use a Transform mask on the model's animations: restrict the eye animations to just the eye transforms, and the mouth animations to the jaw. Under the mask, restrict using the Humanoid head and then the Transform list for the body or eyes, depending on the animation.
The problem is that the Transform I need to mask to is greyed out (because it's a humanoid model), and restricting to the mesh instead makes the whole mesh move with the jaw movement, or other weird things.
The question is: how do you make parts of the face move independently of other parts? I want my character to be able to talk and look around separately, like in the real world.
I got this working by using only the jaw bone in the Animator and using scripts to control the eyes and the blend shapes (blinking). For anyone trying to do the same thing, a sketch of that setup follows below.
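A minimal sketch of that split, assuming hypothetical eye-bone and renderer references and a blink blend shape on the face mesh. Writing the eye rotations in LateUpdate means they land after the Animator has evaluated, so the script wins for the eyes while the Animator keeps the jaw:

    using UnityEngine;

    public class FaceController : MonoBehaviour
    {
        public Transform leftEye, rightEye;       // eye bones (assumed names)
        public Transform lookTarget;              // what the eyes track
        public SkinnedMeshRenderer faceRenderer;  // mesh with a blink blend shape
        public int blinkShapeIndex = 0;           // index of that blend shape

        void LateUpdate()
        {
            // Runs after the Animator, so these rotations override it.
            // (Real eye bones often need an axis correction after LookAt.)
            leftEye.LookAt(lookTarget);
            rightEye.LookAt(lookTarget);

            // Crude periodic blink: blend shape weights run 0..100 in Unity.
            float t = Mathf.Repeat(Time.time, 4f);   // one blink every 4 s
            float weight = t < 0.15f ? 100f : 0f;    // eyes closed for 150 ms
            faceRenderer.SetBlendShapeWeight(blinkShapeIndex, weight);
        }
    }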
I have seen a YouTube video where they control multiple blend shapes using a blend tree with the Direct blend type, but I could not get that working; I suspect they did not have any bones in the face.
Another YouTube video shows a red-breasted robin animation where they mixed shape keys and bone animations using Blender's NLA Editor.

Fluid movement with NetworkTransform & NetworkAnimator

My player character moves around the world using the Animator with root motion activated. Basically, the AI system sets velocities on the Animator, which in turn drives the animation clips that move the character. As this is a standard feature that gives very realistic animation without noticeable sliding, I thought it was a good idea ...until I added network synchronization.
Syncing the characters over the network using NetworkTransform and NetworkAnimator causes those two components to conflict:
NetworkTransform moves the character to whichever position the host commands.
NetworkAnimator syncs the animation vars and plays the Animation clips as host instructs it to, while those Animation clips also apply root motion.
The result is precise (meaning the character reaches the exact target destination), but the movement stutters badly (noticeable jumps).
If I remove NetworkTransform, the host and client instances of the characters desynchronize very quickly, meaning they end up at different positions in the world when driven solely by the timing-dependent Animator.
If I remove NetworkAnimator, client instances won't play the same animations as the host, if any animations at all.
I tried keeping both components while disabling root motion on the Animator (on the client only). In that case, however, NetworkTransform does not seem to interpolate at all: the character just jumps from synced position to synced position in steps of about 0.02 units. The same goes for rotation.
NetworkTransform is configured to "Sync Transform", as the character has neither a Rigidbody nor a CharacterController. All other values are the defaults: a sync rate of 9 (I also tried higher values), movement threshold of 0.001, snap threshold of 5, interpolate movement = 1.
How do I get fluid root-motion-based movement over the network? I expected this to be a standard scenario...
What you need is to disable the Root Motion flag on non-local instances, and also to interpolate the rotation, not just the movement.
Moreover, an interpolation value of 1 seems high, as does the snap threshold of 5: those values look unrealistic unless you are not using Unity's standard scale of 1 unit = 1 meter. I would use a 25 cm (0.25) interpolation for movement and 3 degrees for rotation. The sync rate of 9 could be enough, but in my experience it has to be retuned based on the packet loss.
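A minimal sketch of the first point, assuming the legacy UNet stack that NetworkTransform and NetworkAnimator belong to. On instances this peer does not control, root motion is switched off so NetworkTransform's interpolated position stays the single source of truth:

    using UnityEngine;
    using UnityEngine.Networking;   // legacy UNet, same stack as NetworkTransform

    public class RemoteRootMotionDisabler : NetworkBehaviour
    {
        void Start()
        {
            // hasAuthority is false on instances not controlled locally;
            // there, the Animator must not move the transform itself.
            if (!hasAuthority)
            {
                GetComponent<Animator>().applyRootMotion = false;
            }
        }
    }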

SceneKit memory management issues

So I am playing around with SceneKit (in Swift) and, purely for fun, creating an endless-runner-style game. In the game scene, I have a character moving forward on the z-axis. I have set up a few objects that are repeatedly created when the last one expires. Most of these objects sit under a node hierarchy. I have attempted these two methods for memory management:
Method 1: I have an invisible plane. Only 'special' parent-node bodies can collide with this plane. When a parent node's physics body collides with it, I remove that node from its parent, which as a result also removes all of its children.
Method 2: Again I have an invisible plane, except in this method everything I would potentially like to remove can and does collide with the invisible plane of its own accord. So, in short, everything collides with the invisible plane and removes itself from its parent.
Now, everything runs and works well, except that when nodes are removed from their parents I see a noticeable jolt in the framerate and speed of the game, for a second or so. With method 1, the jolt happens once every time a parent node collides with my invisible plane. With method 2, the jolts are continuous and the framerate is constantly low and jumpy.
I would be interested in hearing about ways of achieving something like this smoothly!
A big thanks in advance.