I am trying to independently animate the mouth, eyes, and facial expressions of a 3D humanoid character in Unity. The problem I am having is that the animation system always blends the eye and mouth animations together, making the character look like a slack-jawed yokel.
I have bones for the neck, head, and jaw, and one bone for each eye.
What I have tried:
Attempt 1
Create three layers: one for the body, one for the mouth, and one for the eyes. Add a head avatar mask to the mouth and eye layers. Set the weight to 1 and the blending to Override on all layers.
What happens is that the blend weight just gets set to 0.5 for both head layers.
Attempt 2
Use one body layer and one head layer with a head mask. In the head layer, use a blend tree with the Direct blend type, with nested blend trees for eye movement and jaw movement.
What happens is that the blend weight just gets divided up between them, and the mouth hangs open.
Attempt 3
Use a Transform mask on the model's animations: restrict the eye animations to just the eye transforms, and the mouth animations to the jaw. Under Mask, restrict using the Humanoid head, and then the Transform for the jaw or the eyes, depending on the animation.
The Transform I need to mask to is greyed out (because it's a humanoid model). Restricting the mask to the mesh instead makes the whole mesh move with the jaw, or causes other weird behaviour.
The question: how do you make parts of the face move independently of other parts? I want my character to be able to talk and look around independently, like in the real world.
I got this working by using only the jaw bone in the Animator and using a script to control the eye bones and blend shapes (blinking).
For anyone trying to do the same thing, a sketch of that script follows.
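Here is a minimal sketch; the field names, blend-shape index, and blink timing are my own assumptions, not taken from any particular rig:

using UnityEngine;

// Sketch: let the Animator drive only the jaw; drive the eye bones and a
// blink blend shape from script. All names/indices here are assumptions.
public class FaceController : MonoBehaviour {
    public Transform leftEye;            // eye bones, assigned in the Inspector
    public Transform rightEye;
    public Transform lookTarget;         // what the eyes should track
    public SkinnedMeshRenderer face;     // mesh that owns the blink blend shape
    public int blinkShapeIndex = 0;      // index of the "blink" blend shape
    public float blinkInterval = 4f;     // seconds between blinks
    public float blinkDuration = 0.2f;   // length of one full blink

    float timer;

    // LateUpdate runs after the Animator writes its pose, so rotations set
    // here override the animation without fighting the layer system.
    void LateUpdate() {
        if (lookTarget != null) {
            // Assumes the eye bones' +Z axis points out of the face; rigs
            // with odd bone axes need an extra rotation offset here.
            leftEye.LookAt(lookTarget);
            rightEye.LookAt(lookTarget);
        }

        // Periodic blink: the weight rises to 100 and falls back within
        // blinkDuration at the start of each interval, then stays at 0.
        timer += Time.deltaTime;
        if (timer >= blinkInterval) timer = 0f;
        float phase = Mathf.Clamp01(timer / blinkDuration);
        face.SetBlendShapeWeight(blinkShapeIndex, Mathf.Sin(phase * Mathf.PI) * 100f);
    }
}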
I have seen a YouTube video where they control multiple blend shapes using a blend tree with the Direct blend type, but I could not get that working. I suspect they did not have any bones in the face.
Another YouTube video shows a red-breasted robin animation that mixes shape keys and bone animations using Blender's NLA Editor.
I am writing a topdown shooter, where the player constantly moves to the right to progress. I have cubes (plates) that are used as the floor as the player advances through the game.
Each plate has a different texture - snow, grass, etc - which I choose at random.
The question: how do I blend the textures of one plate into those of the following plate? I assume I will have to spawn them overlapping, but I don't know what technique to use to gradually transition from one to the other.
I'm not looking for a full written solution, just a nudge in the right direction so I can look up the right terms to start learning how to do this.
Vertex Colors: You could color each cube with vertex colors, but that would typically limit you to 3 or 4 different textures to blend between (one per color channel)... Also, the transition is hard when the cubes are just overlapping. If you have a single object (consisting of connected cubes) you could make use of the blended vertex colors.
The texture blending can be achieved like this: paint the vertices in two colors (e.g. blue and black), and the interpolation between them produces a blurry gradient across the surface; a shader then uses that interpolated value as the blend factor between the two textures. With a height map + PBR maps on top, the blend can look quite good.
[Original images: the blue/black vertex-color gradient, the resulting texture blend, and the height-map + PBR version.]
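As a hedged illustration of the vertex-color half (the blend shader that reads the colors is not shown), you can paint such a gradient from script; the axis and value range here are assumptions:

using UnityEngine;

// Sketch: paint a black -> blue gradient along the mesh's local X axis.
// A vertex-color-aware shader can then lerp texture A -> texture B by the
// blue channel. The -0.5..0.5 range assumes a unit-sized mesh like a Quad.
[RequireComponent(typeof(MeshFilter))]
public class VertexColorPainter : MonoBehaviour {
    void Start() {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] verts = mesh.vertices;
        Color[] colors = new Color[verts.Length];
        for (int i = 0; i < verts.Length; i++) {
            float t = Mathf.InverseLerp(-0.5f, 0.5f, verts[i].x);
            colors[i] = Color.Lerp(Color.black, Color.blue, t);
        }
        mesh.colors = colors; // interpolated across triangles by the GPU
    }
}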
I guess this approach is very limiting in the number of possible block types.
Tiles:
Like in 2D Tile Editors, you need to model/texture different objects to use between your cubes. Example:
Grass Cube - Transition Cube - Sand Cube
And then you need a transition cube for every possible transition.
The number of required transitions grows fast as you add base blocks: with n base block types you already need n(n-1)/2 straight transition pieces, before you even count corners!
(Tiles source: Kenney.nl)
That could be combined with Wave Function Collapse, which was used in Townscaper.
I am using Unity 5 to develop a game. I'm still learning, so this may be a dumb question. I have read about the depth buffer and depth textures, but I cannot seem to understand whether they apply here or not.
My setup is simple: I create a grid using several quads (40x40), which I use to snap buildings. Those buildings also have a base, made with quads. Every time I put one on the map, the quads overlap and look like the picture.
As you can see, the red quad is "merging" with the floor (white quads).
How can I make sure Unity renders the red one on top, with the white ones as background? Of course, I could change the red quad's Y position, but that seems like the wrong way of solving this.
This is a common issue, called Z-Fighting.
Usually you can reduce it by shrinking the camera's "Clipping Planes" range, which increases depth-buffer precision. But in your case the quads are at exactly the same Y position, so their depths are identical and you can't avoid it without changing the Y position.
I don't know if it is an option for you, but if you use SpriteRenderer (Unity 2D) you don't have that problem: you can just set "Sorting Layer" or "Order in Layer" if you want to modify the rendering order.
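For example, a minimal sketch of the SpriteRenderer route (the field names are mine); since the draw order is explicit, co-planar sprites never z-fight:

using UnityEngine;

// Sketch: make the building base always draw on top of the floor tiles.
public class SortingSetup : MonoBehaviour {
    public SpriteRenderer floorTile;     // white background quad
    public SpriteRenderer buildingBase;  // red base quad

    void Start() {
        floorTile.sortingOrder = 0;      // drawn first (background)
        buildingBase.sortingOrder = 1;   // drawn on top
    }
}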
I have a Projector component and I need to find the angle at which the projected texture hits a surface, so I can exclude the projection on vertical faces.
My projector is under the mouse pointer and works OK when it is over a horizontal face:
I would like the projector to switch off on vertical faces to avoid this bad effect:
If possible, I would like to do it in the shader code, so the projected image is suppressed even when the cursor sits at the corner of a horizontal face and part of it "spills" onto a vertical face.
I found this solution in C#:
Ray mouseRay = Camera.main.ScreenPointToRay(Input.mousePosition);
RaycastHit hitInfo;
if (Physics.Raycast(mouseRay, out hitInfo)) {
    // Only draw when the surface faces mostly upward.
    if (hitInfo.normal.y > 0f) {
        // draw
    } else {
        // don't draw
    }
}
But it only works on curved surfaces and not, for example, on the faces of cubes.
How can I do this properly?
Normally you would use an image on a quad with TGA transparency, and rotate the quad to match whichever face the cursor is over, using a ray to find the hit point and aligning the quad to that face's normal.
Other ways of doing it would be quite tricky, perhaps using decals. Doing it in a shader would take much longer; it's a case of ordering problems by importance for fast development. Technically you can project a volumetric texture onto whatever object you are using: that way you can add your barred circle, projected from a point in space towards the object, as a mathematical formula. It takes a while to do; check out volumetric textures. I have written some, and in your case it would need the mouse position sent to the texture, plus the maths to add the transparent zone and red zone to the texture. That takes all day.
It's fine to have a flat circle that flips around when the pointer moves onto a different face; it will just look like a physical card, and it's much easier to code: ten minutes instead of many hours. A sketch of that approach follows.
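A hedged sketch of that card-on-a-quad idea (it assumes a quad whose visible face points down its local -Z, like Unity's built-in Quad):

using UnityEngine;

// Sketch: align a textured cursor quad to whichever face the mouse ray hits.
public class CursorCard : MonoBehaviour {
    public Transform cursorQuad; // quad carrying the cursor texture (alpha/TGA)

    void Update() {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit)) {
            // Lift slightly off the surface to avoid z-fighting, and turn
            // the quad's +Z into the surface so its visible -Z side shows.
            cursorQuad.position = hit.point + hit.normal * 0.01f;
            cursorQuad.rotation = Quaternion.LookRotation(-hit.normal);
        }
    }
}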
Pre-info:
I'm making a 2D game in Unity which behaves like Castle Crashers, where the player can move around forwards and backwards like in a sidescroller, but also up and down, kind of like a topdown game - but it's still a 'sidescroller'.
In Unity I'm using Rigidbody2Ds and BoxCollider2Ds for physics.
However, when wanting to simulate things like dropping items, creating gibs or any other object that needs to fall to the 'floor', this gets tricky.
The objects that need to fall to the floor don't know where the floor is, so they'll fall forever.
Question
Can BoxCollider2Ds be set to collide with an individual, infinite X axis?
Object A should collide with the red axis and Object B should collide with the blue axis.
Is this possible?
You could use layers, and in Project Settings -> Physics2DSettings set them not to collide with each other (from script, Physics2D.IgnoreLayerCollision does the same). There is a hard limit of 32 layers, and the first 8 are used by the system (you can still use them for this), which leaves you with 24 discrete layers. Change the layer of your objects as they change their position on the Y axis. The gameplay might feel awful, though.
Use 3D physics: tilt your camera 45 degrees on the X axis, set the projection to orthographic, and draw 2D sprites on top of invisible 3D physics objects. Then you have a real 2D plane to walk and jump on.
Don't use Box2D at all: write your own simple physics; you only need it for jumping and falling, right? A sketch of that idea follows.
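A minimal sketch of such a hand-rolled fall, assuming each dropped object records the Y of the floor line it was spawned over (floorY and the gravity value here are made up):

using UnityEngine;

// Sketch: skip the physics engine for dropped items/gibs and fake the fall.
public class FakeDrop : MonoBehaviour {
    public float floorY;          // the object's own personal floor line
    public float gravity = -20f;  // units/s^2, tune to taste

    float verticalVelocity;

    void Update() {
        verticalVelocity += gravity * Time.deltaTime;
        Vector3 p = transform.position;
        p.y += verticalVelocity * Time.deltaTime;
        if (p.y <= floorY) {      // landed on its own floor
            p.y = floorY;
            verticalVelocity = 0f;
            enabled = false;      // stop simulating once at rest
        }
        transform.position = p;
    }
}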
I'm looking for an alternative technique for rendering reflections in OpenGL ES on the iPhone. Usually I would do this by using the stencil buffer to mark where the reflection can be seen (the reflective surface) and then render the reversed image only in those pixels. Thus when the reflected object moves off the surface its reflection is no longer seen. However, since the iPhone's implementation doesn't support the stencil buffer I can't determine how to hide the portions of the reflection that fall outside of the surface.
To clarify, the issue isn't rendering the reflections themselves, but hiding them when they wouldn't be visible.
Any ideas?
Render the reflected scene first; copy out to a texture using glCopyTexImage2D; clear the framebuffer; draw the scene proper, applying the copied texture to the reflective surface.
I don't have an answer for reflections, but here's how I'm doing shadows without the stencil buffer, perhaps it will give you an idea:
I perform basic front-face/back-face determination of the mesh from the point of view of the light source. I then get a list of all edges that connect a front triangle to a back triangle. I treat this edge list as a line "loop". I project the vertices of this loop along the object-light ray until they intersect the ground. These intersection points are then used to calculate a 2D polygon on the same plane as the ground. I then use a tessellation algorithm to turn that polygon into triangles. (This works fine as long as your light sources or objects don't move too often.)
Once I have the triangles, I render them with a slight offset such that the depth buffer will allow the shadow to pass. Alternatively you can use a decaling algorithm such as the one in the Red Book.
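The projection step above is plain ray-plane math. As a hedged sketch, in C# for concreteness (the original context is OpenGL ES, but the math is identical), with the ground taken as the plane y = 0:

using System.Numerics;

static class ShadowMath {
    // Cast a silhouette vertex along the light->vertex ray until it hits
    // the ground plane y = 0, as described in the answer above. Assumes
    // the light sits above the vertex and the vertex above the ground.
    public static Vector3 ProjectToGround(Vector3 vertex, Vector3 lightPos) {
        Vector3 dir = Vector3.Normalize(vertex - lightPos);
        float t = -vertex.Y / dir.Y; // ray parameter where y reaches 0
        return vertex + dir * t;
    }
}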