How to blend textures on 2 objects - unity3d

I am writing a top-down shooter where the player constantly moves to the right to progress. I have cubes (plates) that are used as the floor as the player advances through the game.
Each plate has a different texture - snow, grass, etc - which I choose at random.
The question: how do I blend the texture on one plate into the texture of the following plate? I assume I will have to spawn them overlapping, but I don't know what technique to use to gradually transition from one to the other.
I'm not looking for a full written solution, just a nudge in the right direction so I can look up the right terms to start learning how to do this.

Vertex Colors: You could color each cube with vertex colors, but that would typically limit you to blending between 3 or 4 different textures. The transition is also hard when the cubes are just overlapping; if you have a single object (consisting of connected cubes), you can make use of the blended vertex colors.
The texture blending works because vertex colors are interpolated across each face: paint one area blue and the next black and you get a blurry gradient between them. Use that gradient as the blend weight between two textures and you get a soft transition; add a height-map plus PBR maps and the blend can follow the surface detail as well.
I guess this approach is quite limiting in the number of possible blocks, though.
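On the script side, painting such a gradient could look roughly like this (a minimal sketch; the blend-zone bounds and the one-texture-per-color-channel convention are assumptions, and a matching shader has to do the actual texture weighting):

using UnityEngine;

// Minimal sketch: paint vertex colors across the seam of a connected plate strip.
// A matching shader would weight one texture per color channel (red = first
// texture, green = second). The blend-zone bounds are assumptions for illustration.
public class SeamPainter : MonoBehaviour
{
    public float blendStartX = 4f;   // local x where the transition begins
    public float blendEndX   = 6f;   // local x where the transition ends

    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] verts = mesh.vertices;
        Color[] colors = new Color[verts.Length];
        for (int i = 0; i < verts.Length; i++)
        {
            // t runs 0 -> 1 across the blend zone, giving the blurry gradient
            float t = Mathf.InverseLerp(blendStartX, blendEndX, verts[i].x);
            colors[i] = Color.Lerp(Color.red, Color.green, t);
        }
        mesh.colors = colors;   // the GPU interpolates these per pixel
    }
}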
Tiles:
Like in 2D Tile Editors, you need to model/texture different objects to use between your cubes. Example:
Grass Cube - Transition Cube - Sand Cube
And then you need a transition Cube for every possible transition.
The number of required transition cubes grows fast when you add base blocks: n base blocks already need n(n-1)/2 pairwise transitions, so 6 base blocks means 15 transition cubes before you even count corner pieces!
(Tile images: Kenney.nl)
That could be combined with Wave Function Collapse, which was used in Townscaper.

Related

Independent Eyes and Mouth animations in unity3d

I am trying to independently animate the mouth, eyes, and facial expressions of a 3D humanoid character in Unity. The problem I am having is that the animation system always blends the eyes and mouth, making the character look like a slack-jawed yokel.
I have bones for the neck, head, and jaw, and one for each eye.
What I have tried:
Attempt 1
Create 3 layers: 1 for the body, 1 for the mouth, 1 for the eyes. Add a head mask to the mouth and eye layers. Set the weight to 1 and blending to Override for all layers.
What happens is the blend weight just gets set to 0.5 for both head layers.
Attempt 2
Use 1 body layer and 1 head layer with a head mask. In the head layer, use a blend tree with the Direct blend type, with nested blend trees for eye movement and jaw movement.
What happens is the blend weight just gets divided up between them, and the mouth hangs open.
Attempt 3
Use a Transform mask on the model's animations: restrict the eye movement to just the eye transforms, and the mouth animations to the jaw. Under the mask, restrict using the Humanoid head and then the Transform for the body or eyes, depending on the animation.
The Transform I need to mask is greyed out (because it's a humanoid model), and restricting it to a mesh makes the whole mesh move based on jaw movement, or other weird things happen.
The question is: how do you make parts of the face move independently of the other parts? I want my character to be able to talk and look around separately, like in the real world.
I got this working by using only the jaw bone in the Animator and using scripts to control the eyes and the blend shapes (blinking). Some notes for anyone trying to do the same thing:
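A minimal sketch of that setup (the field names, blink-shape index, and blink timing are my own assumptions, not taken from the original project):

using UnityEngine;

// Sketch: jaw stays on the Animator; eyes and blinking are driven from script.
public class FaceController : MonoBehaviour
{
    public Transform leftEye, rightEye;     // the two eye bones
    public Transform lookTarget;            // whatever the character should look at
    public SkinnedMeshRenderer faceMesh;    // mesh that owns the blink blend shape
    public int blinkShapeIndex = 0;         // index of the "blink" blend shape

    void LateUpdate()   // runs after the Animator, so these writes win
    {
        // (a fixed rotation offset may be needed if a bone's forward axis isn't +z)
        leftEye.rotation  = Quaternion.LookRotation(lookTarget.position - leftEye.position);
        rightEye.rotation = Quaternion.LookRotation(lookTarget.position - rightEye.position);

        // crude blink: close the eyelids for 0.15 s once every 4 s (weights run 0-100)
        float t = Time.time % 4f;
        faceMesh.SetBlendShapeWeight(blinkShapeIndex, t < 0.15f ? 100f : 0f);
    }
}

The key detail is LateUpdate: it runs after the Animator has written its pose, so the script overrides whatever the animation layers would otherwise blend.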
I have seen a YouTube video where they control multiple blend shapes using a blend tree with the Direct blend type, but I could not get that working. I suspect they did not have any bones in the face.
Another YouTube video shows a red-breasted robin animation that mixes shape keys and bone animations using Blender's NLA Editor.

How to texture a mesh? Shader vs. generated texture

I managed to create a map divided into chunks, each one holding a mesh generated using Perlin noise and so on - the basic procedural map method shown in multiple tutorials.
At this point I took a look at surface shaders and managed to write one which fades between multiple textures depending on the vertex heights.
This gives me a map which is colored smoothly.
The tutorials I watched seem to use different methods to texture a mesh. In this one, for example, a texture is generated for each mesh; the texture holds a different color depending on the noise value. This texture is applied to the mesh, and after that the mesh vertices are displaced depending on the z-value.
This results in a map with sharper borders between the colors, giving the whole thing a different look. I believe there is a way to create smoother transitions between the tile colors by fading them like I do in my shader.
My question is simply: what are the pros and cons of those two methods? Let's call them "shader" and "texture map". I am lost right now, not knowing which direction to go in.
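For reference, the "texture map" approach described above looks roughly like this (a sketch; the noiseMap input and the color thresholds are assumptions for illustration):

using UnityEngine;

// Sketch of the "texture map" approach: bake one color per noise sample into a
// texture. The band thresholds are arbitrary placeholder values.
public static class ColorMapBaker
{
    public static Texture2D Build(float[,] noiseMap)
    {
        int w = noiseMap.GetLength(0), h = noiseMap.GetLength(1);
        var tex = new Texture2D(w, h);
        var pixels = new Color[w * h];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                float n = noiseMap[x, y];
                // hard thresholds produce the sharp borders mentioned above;
                // Color.Lerp between neighbouring bands would soften them
                pixels[y * w + x] = n < 0.4f ? Color.blue
                                  : n < 0.7f ? Color.green
                                  : Color.white;
            }
        tex.SetPixels(pixels);
        tex.filterMode = FilterMode.Point;  // Bilinear already blurs the borders slightly
        tex.Apply();
        return tex;
    }
}

Note that this bakes one texel per noise sample, while the shader approach computes the blend per pixel, which is one reason the shader version looks smoother.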

Shader-coding: nonlinear projection models

As I understand it, the standard projection model places an imaginary grid in front of the camera, and for each triangle in the scene, determines which 3 pixels its 3 corners project onto. The color is determined for each of these points, and the fragment shader fills in the rest using interpolation.
My question is this: is it possible to gain control over this projection model? For example, create my own custom distorted uv-grid? Or even just supply my own algorithm:
xyPixelPos_for_Vector3( Vector3 v ) {...}
I'm working in Unity3D, so I think that limits me to Cg or OpenGL.
I did once write a GLES2 shader, but I don't remember ever performing any kind of "ray hits quad" type test to resolve the pixel position of a particular 3D point in space.
I'm going to assume that you want to render 3D images based upon 3D primitives that are defined by vertices. This is not the only way to render images with OpenGL, but it is the most common. The technique that you describe sounds much more like ray tracing.
How OpenGL Typically Works:
I wouldn't say that OpenGL creates an imaginary grid. Instead, what it does is take the position of each of your vertices and convert it into a different space using linear algebra (matrices).
If you want to start playing around with this, it would be best to do some reading on Matrices, to understand what the graphics card is doing.
You can easily start warping the positions of vertices by making a vertex shader. However, there is some setup involved. See the Lighthouse3D tutorials (http://www.lighthouse3d.com/tutorials/glsl-tutorial/hello-world-in-glsl/) to get started with that! You will also want to read their tutorials on lighting (http://www.lighthouse3d.com/tutorials/glsl-tutorial/lighting/) to create a fully functioning vertex shader which includes a lighting model.
Thankfully, once the shader is set up, you can distort your entire scene to your heart's content. Just remember to do your distortions in the right 'space': world coordinates are much different from eye coordinates!
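To make the "different space" step concrete, here is the standard pipeline reproduced in script form for a single point - essentially the xyPixelPos_for_Vector3 the question asks about (a sketch in Unity C#; Camera.WorldToScreenPoint is the built-in equivalent):

using UnityEngine;

// Sketch: the standard projection model applied to one point in script form.
// This mirrors what the GPU does per vertex in the vertex shader.
public static class Projection
{
    public static Vector2 PixelPosForPoint(Camera cam, Vector3 worldPos)
    {
        // world space -> eye space -> clip space (the linear-algebra step)
        Vector4 clip = cam.projectionMatrix * cam.worldToCameraMatrix
                     * new Vector4(worldPos.x, worldPos.y, worldPos.z, 1f);

        // perspective divide -> normalized device coordinates in [-1, 1]
        Vector3 ndc = new Vector3(clip.x, clip.y, clip.z) / clip.w;

        // viewport transform -> pixel coordinates
        return new Vector2((ndc.x * 0.5f + 0.5f) * cam.pixelWidth,
                           (ndc.y * 0.5f + 0.5f) * cam.pixelHeight);
    }
}

A nonlinear projection cannot be expressed as a single matrix, which is exactly why the answer points at the vertex shader: there you can replace the matrix step with any function you like.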

How to use a different texture on intersecting part of 2 quads

I'm looking for a way to dynamically change part of a Quad that has a SpriteRenderer attached to it. Let's say I have a red Quad and a blue Quad, and I drag one onto the other (fast or slow); the intersecting part should be colored using a green sprite. This illustration shows the scenario I'm trying to solve.
Can someone please help me with this?
You have two options:
First, if the intersection color should be the natural mixture of the other two colors (for red and blue, additive blending gives magenta, not green), you can use the Mobile/Particles/Additive or Mobile/Particles/Multiply shaders.
Second, you can write your own shader that takes the intersection area as a parameter and paints your textures according to it.
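The script side of the second option could look like this (a sketch; the _IntersectRect property name is hypothetical, it only handles axis-aligned quads, and the matching shader still has to recolor pixels inside the rect):

using UnityEngine;

// Sketch: compute the overlap of two quads' bounds each frame and pass it to a
// custom shader. "_IntersectRect" is a made-up shader property name.
public class IntersectionDriver : MonoBehaviour
{
    public Renderer quadA, quadB;
    public Material overlayMaterial;   // material using the custom shader

    void Update()
    {
        Bounds a = quadA.bounds, b = quadB.bounds;
        if (!a.Intersects(b))
        {
            overlayMaterial.SetVector("_IntersectRect", Vector4.zero);
            return;
        }
        float xMin = Mathf.Max(a.min.x, b.min.x), xMax = Mathf.Min(a.max.x, b.max.x);
        float yMin = Mathf.Max(a.min.y, b.min.y), yMax = Mathf.Min(a.max.y, b.max.y);
        // pack the world-space overlap as (xMin, yMin, xMax, yMax)
        overlayMaterial.SetVector("_IntersectRect", new Vector4(xMin, yMin, xMax, yMax));
    }
}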

How to make a realistic 3D Earth for iOS with atmosphere shaders

How do I port a realistic Earth model from 3ds Max (or another 3D application) to an iOS device (OpenGL ES)?
And how do I port atmosphere effects (not the clouds - those are a texture), i.e. the glow of the sky?
If speed is not the main concern, you can use ray tracing. You can model the Earth as an opaque sphere and the atmosphere as a few non-opaque larger spheres. It gives you a model that handles clouds, shadows, scattering, and light filtering for a reasonable amount of work and not too many tweaks. Ray-tracing a dozen spheres with the same center is very cheap. Each 'atmosphere' layer will deviate light rays, with a decreasing refraction index for each layer, and will absorb some light, more so for the lower layers. Spending some time on paper, you can simplify the math a bit and make it really cheap :)
Also, just for the atmospheric effect, I guess doing it at half resolution should be enough, as atmospheric effects are rather low-frequency.
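The ray-sphere test that makes this cheap is only a few lines; a minimal sketch (assuming the ray direction is normalized):

using UnityEngine;

// Sketch: nearest intersection of a ray with a sphere, the core primitive of
// the layered-sphere approach. rayDir must be normalized.
public static class RaySphere
{
    public static bool Hit(Vector3 rayOrigin, Vector3 rayDir,
                           Vector3 center, float radius, out float t)
    {
        Vector3 oc = rayOrigin - center;
        float b = Vector3.Dot(oc, rayDir);
        float c = Vector3.Dot(oc, oc) - radius * radius;
        float disc = b * b - c;                   // discriminant of the quadratic
        if (disc < 0f) { t = 0f; return false; }  // ray misses the sphere
        t = -b - Mathf.Sqrt(disc);                // nearest of the two roots
        return t > 0f;
    }
}

Running this against each concentric layer, from the outermost inwards, gives the entry points where each layer's refraction and absorption are applied.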
I do it like this:
first render pass
surface model is an ellipsoid
plus color texture
plus bump mapping
plus alpha blending with a cloud texture
second render pass
just draw a single quad over the whole screen
and blend in the sky color via a simplified atmospheric-scattering GLSL shader
[Notes]
you can also add atmospheric refraction to be more precise
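Translated into Unity terms (this collection is Unity-centric), the second pass could be a full-screen blit after the planet renders; a sketch for the built-in render pipeline, with skyMaterial as a hypothetical material holding the scattering shader:

using UnityEngine;

// Sketch: full-screen "second render pass" as a post effect on the camera.
// skyMaterial is assumed to hold the simplified atmospheric-scattering shader.
public class AtmospherePass : MonoBehaviour
{
    public Material skyMaterial;

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // draws a full-screen quad with skyMaterial over the rendered frame
        Graphics.Blit(src, dst, skyMaterial);
    }
}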