Rotate vertices selected using a weight map on UVs in Unity3D's Shader Graph around a pivot point

TLDR: Can't figure out the correct Shader Graph setup for using UV and vertex displacement to cheaply animate a (unrigged) mesh.
I am trying to rotate a part of the mesh based on its UV coordinates, e.g. fromX 0 toX 0.4, fromY 0 toY 0.6. The mesh is created UV-mapped with this in mind.
I have no problem getting the affected vertices in this area. The problem is that I want to rotate these verts around a customizable axis, e.g. axis(X:1, Y:0, Z:1), using a weight so that the rotation takes place around a pivot point. I want the bottom of the selection to stay connected to the rest of the mesh while the other affected vertices neatly rotate around this point.
The weight can be painted by using split UV channels as seen in the picture:
I multiply the weighted area with a Rotate node to rotate it.
And I add that to the position multiplied by the inverted weight (the rest of the verts, excluding the rotated area) to get the final output displacement.
But the rotated part of the mesh is bent. I need it to be stiff, as in the whole part rotating with weight = 1 except for the pivot vertex itself.
I can get it as described using a weight=1 based rotation, but the pivot point becomes the center of the mesh, not the desired point.
How can I do this correctly?
Been at it for days, please help :')

I started using Unity about a month ago, and this is one of the first issues I faced.
The node you are using will always transform the vertices around the origin.
I think you have two options available:
Translate the vertices by the offset of where you want to rotate the wings. This would require storing the pivot point of the wings in the mesh somehow - this could be done by using a spare UV channel or the vertex color channel (see the sketch after this answer).
Use bones and paint the weights in your chosen 3D package. This way you can record the animation and use Unity's skinned mesh rendering to play it back.
Hope that helps.
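As a small illustration of the first option, here is a minimal C# sketch, assuming the mesh is built or post-processed in code; the choice of UV2 and the helper name are my own, not something from the question:
using System.Collections.Generic;
using UnityEngine;

public static class WingPivotBaker
{
    // Writes the same object-space pivot position into a spare UV channel
    // (UV2 here) for every vertex. The graph can then read that channel,
    // subtract it from the vertex position, rotate, and add it back.
    public static void BakePivotIntoUV2(Mesh mesh, Vector3 wingPivot)
    {
        var pivots = new List<Vector3>(mesh.vertexCount);
        for (int i = 0; i < mesh.vertexCount; i++)
            pivots.Add(wingPivot);

        mesh.SetUVs(2, pivots); // pick any channel your graph does not already use
    }
}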

Try this:
I've used the UV ranges from your example applied to a sphere of unit size. The sphere's original pivot is in the centre, and its adjusted pivot is shifted 0.5 on the Y axis.
The only variable the shader doesn't know is the adjusted pivot position, so I pass this in through the material (see the snippet after this answer).
I've not implemented your weight in the graph, as I just wanted to show you the process. You can easily plug that in.
The color output is just being used for debug purposes.
The first image is with the default object pivot.
The second image is with the adjusted pivot.
The final image is the graph. (Note the logic group is driving the vertex rotation based on the UV mask).
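Setting that adjusted pivot from C# could look roughly like this (a sketch; "_PivotPosition" is only a placeholder for whatever reference name the exposed Vector3 has on your Blackboard):
using UnityEngine;

public class PivotDriver : MonoBehaviour
{
    // Placeholder reference name for the exposed Vector3 property in the graph.
    private static readonly int PivotId = Shader.PropertyToID("_PivotPosition");

    public Vector3 adjustedPivot = new Vector3(0f, 0.5f, 0f);
    private Material material;

    void Start()
    {
        material = GetComponent<Renderer>().material;
    }

    void Update()
    {
        // Push the pivot to the shader every frame so it can also be animated from C#.
        material.SetVector(PivotId, adjustedPivot);
    }
}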

Related

Unity mirroring a mesh but the colors are facing down

I am trying to mirror a mesh to another mesh by using procedural mesh generation as seen below. The original mesh is at the positive z axis while the mirrored is at the negative z axis.
The vertices are all mirrored as I wanted but the color is facing down instead of up.
I tried changing the mesh UVs and normals, but neither had any effect on the mirrored mesh. I heard the triangles have to be reversed in their array, or something like that which I do not understand. How do I make the mirrored mesh's color face up?
The black plane is supposed to be a half-transparent blue water plane, but it is not part of my question, so please ignore it.
Not 100% sure but I suspect it is the triangles as you say.
You would probably just need to invert the triangle winding => simply reverse the array, e.g. using Array.Reverse:
// Mesh.triangles returns a copy, so modify the copy and assign it back
var triangles = mirroredMesh.triangles;
// Reversing the index array flips the winding order of every triangle,
// which flips which side the faces point towards
Array.Reverse(triangles);
mirroredMesh.triangles = triangles;
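If the faces then point the right way but the shading still looks inverted, recalculating the normals afterwards may also help (just a guess, since the question mentions normals as well):
mirroredMesh.RecalculateNormals();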

How to set direction of arrows in shadergraph

I'm pretty new to shader graph and shaders in general. I'm working on a 2D project and I'm trying to make a shader that rotates an arrow to make a flow-like material and use it on a sprite shape.
Basically what I want to do is make a proper version of this:
What I'm currently doing is multiplying the Y position of the Position node by an exposed Vector1 and using it in a Rotate node (which I know is pretty hacky and won't work if the shape is not an arc).
Aligning UVs with an arbitrary mesh seems a bit hard. Why not bend a pre-made mesh instead? The graph below bends vertex positions around the Z axis at a given point and strength (a strength of 0 makes the mesh invisible, though), but you can easily replace that Position node with UV and plug the result into a Sample Texture 2D node. I just suspect bending a mesh will give you better/easier results.
Create a subdivided and well UV-mapped rectangle plane.
Bend that plane with a vertex shader (the attached graph bends around the Z axis).
The graph is based on code from the Blender source.
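For reference, here is a rough CPU-side C# sketch of the same bend-around-Z idea (my own approximation of a simple-deform style bend, not the exact Blender code); in Shader Graph the same math would be built from nodes or a Custom Function:
using UnityEngine;

public static class BendDeform
{
    // Wraps points along +X onto an arc around the Z axis. The arc radius is
    // 1/strength, so the formula degenerates at strength == 0, which matches
    // the "mesh becomes invisible" note above.
    public static Vector3 BendAroundZ(Vector3 p, float bendOrigin, float strength)
    {
        float x = p.x - bendOrigin;      // distance along the bend direction
        float theta = x * strength;      // angle travelled along the arc
        float radius = 1f / strength;    // undefined at strength == 0

        float bentX = Mathf.Sin(theta) * (radius - p.y);
        float bentY = radius - Mathf.Cos(theta) * (radius - p.y);

        return new Vector3(bendOrigin + bentX, bentY, p.z);
    }
}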

Do I need triangle information in my surface shader?

Update
The main question is: how can I pass the world space vertex positions of the triangle to the surface shader in a Unity shader?
As mentioned in a comment, it might be possible to pass them from a geometry shader. But I read somewhere that implementing a custom geometry shader overrides Unity's logic for calculating shadows etc.
I would add the triangle information to the Input structure, but before I change my mesh generation logic for it I would like to know if this is feasible. For this solution the vertex positions of the triangle must be constant for every pixel in a triangle and not be interpolated.
This is the original question:
I am writing a surface shader for a triangle mesh. I set a custom vertex attribute with a texture id on every vertex. Now I want the surface shader to apply the textures as seen in the following image. (Note that each color represents a texture.)
In the surface shader I need the 3 vertices that define the triangle and their texture ids. Furthermore I need the position of the pixel I am drawing.
If all texture ids are the same I pick this texture for all pixels.
If one or two texture ids differ I calculate the pixels distance to the triangle vertices and pick the texture like seen in the next image:
The surface shader needs to be aware of the pixel's triangle. With this logic I should get the shading I am looking for. I am creating my mesh programmatically, so I can add the triangle vertices and their texture ids as vertex attributes and pass them to the surface shader.
But I am not sure if this is feasible with how surface/vertex shaders work. Is there a relationship between the vertex and the pixel to get my custom triangle information from? Is there a better way of doing this?
I am using Unity's ShaderLab for my shaders.
No, you should not be using (nor do you have access to) vertex data in a fragment shader. In a fragment shader you only have access to data about that given pixel; you cannot go back and look at the mesh that formed it (this is the way the pipeline is constructed).
What you can do (and is a common practice) is to bake the data into one of the available channels (i.e. other UV mapping channels) of the verts within the vertex shader. This way the fragment shader will have access to the value via interpolators.
Ok, I think I found a solution. Thank you for the comments, they were useful.
First I change my grid topology to not use shared vertices. With this I can use a vertex color channel to set the texture ids.
vertexColor.r = vertexTextureId0; // Texture to use for vertex 0
vertexColor.g = vertexTextureId1; // Texture to use for vertex 1
vertexColor.b = vertexTextureId2; // Texture to use for vertex 2
I do not have to worry about interpolation because all vertices of the triangle have the same color information.
Now I create a texture to look up which vertex my pixel belongs to. This texture looks similar to the images I posted in the question. I have to transpose the UV coordinates according to the TWO or THREE case. This solution gives me the freedom to easily change the edge and make it more ragged.
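For anyone attempting the same thing, a rough C# sketch of that vertex color setup might look like this (a hypothetical helper, assuming fewer than 256 textures and a mesh whose triangles do not share vertices):
using UnityEngine;

public static class TriangleTextureIds
{
    // Gives all three vertices of each triangle the same (id0, id1, id2) triple
    // in their vertex color, so interpolation in the fragment stage changes nothing.
    public static void AssignIds(Mesh mesh, int[] textureIdPerVertex)
    {
        var triangles = mesh.triangles;
        var colors = new Color[mesh.vertexCount];

        for (int t = 0; t < triangles.Length; t += 3)
        {
            // Texture ids of the three corners, packed into the 0..1 range.
            var c = new Color(
                textureIdPerVertex[triangles[t]] / 255f,
                textureIdPerVertex[triangles[t + 1]] / 255f,
                textureIdPerVertex[triangles[t + 2]] / 255f);

            colors[triangles[t]] = c;
            colors[triangles[t + 1]] = c;
            colors[triangles[t + 2]] = c;
        }

        mesh.colors = colors;
    }
}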

Changing the position of the vertices in a plane

Is it possible to change the positions of the corners of an SCNPlane? Or do I have to make a custom plane to change the positions of its vertices?
EDIT:
So I have an SCNPlane or a custom-created plane, and I want to at least print the coordinates of the vertices that the plane has.
For simple effects, you can effectively change the plane's corners by using the plane's transform matrix. You can do more complex effects with a shader modifier (see SCNShadable) or a morpher (see SCNMorpher). What effect are you trying to achieve?

What is DOT3 lighting?

An answer to my question suggests that DOT3 lighting can help with OpenGL ES rendering, but I'm having trouble finding a decent definition of what DOT3 lighting is.
Edit 1
iPhone related information is greatly appreciated.
DOT3 lighting is often referred to as per-pixel lighting. With vertex lighting the lighting is calculated at every vertex and the result is interpolated over the triangle. In per-pixel lighting, as the name implies, the objective is to calculate the lighting at every pixel.
The way this is done on fixed-function hardware like the iPhone is with so-called register combiners. The name DOT3 comes from this render state:
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
Look at this entry on Wolfgang Engel's blog for more info on exactly how to set this up.
When doing per-pixel lighting it's popular to also use a so-called normal map. This means that the normal of every point on an object is stored in a special texture map, the normal map. This was popularized in the game DOOM 3 by id Software, where fairly low-polygon models were used together with high-resolution normal maps. The reason for using this technique is that the eye is more sensitive to variation in lighting than to variation in shape.
I saw in your other question that the reason this came up was that you wanted to reduce the memory footprint of the vertex data. This is true: instead of storing three components for a normal in every vertex, you only need to store two components for the texture coordinates into the normal map. Enabling per-pixel lighting comes with a performance cost, though, so I'm not sure it will be a net win; as usual, the advice is to try it and see.
Finally, the diffuse lighting intensity at a point is proportional to the cosine of the angle between the surface normal and the direction of the light. For two vectors the dot product is defined as:
a dot b = |a||b| cos(theta)
where |a| and |b| are the lengths of the vectors a and b respectively and theta is the angle between them. If the lengths are equal to one, a and b are referred to as unit vectors and the formula simplifies to:
a dot b = cos(theta)
This means that the diffuse lighting intensity is given by the dot product between the surface normal and the direction of the light. It also means that all diffuse lighting is a form of DOT3 lighting, even if the name has come to refer to the per-pixel kind.
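As a tiny concrete example (my own sketch, not part of the original answer), the diffuse term is just that dot product clamped at zero:
using UnityEngine;

public static class DiffuseLighting
{
    // Lambert diffuse term: cosine of the angle between the surface normal and
    // the direction towards the light, clamped so back-facing points get no light.
    public static float Lambert(Vector3 normal, Vector3 directionToLight)
    {
        return Mathf.Max(0f, Vector3.Dot(normal.normalized, directionToLight.normalized));
    }
}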
From here:
Bumpmapping is putting a texture on a model where each texel's brightness defines the height of that texel.
The height of each texel is then used to perturb the lighting across the surface.
Normal mapping is putting a texture on a model where each texel's color is really three values that define the direction that location on the surface points.
A color of (255, 0, 0) for example, might mean that the surface at that location points down the positive X axis.
In other words, each texel is a normal.
The Dot3 name comes from what you actually do with these normals.
Let's say you have a vector which points in the direction your light source points. And let's say you have the vector which is the normal at a specific texel on your model that tells you which direction that texel points.
If you do a simple math operation called a "dot product" on these two (normalized) vectors, like so:
Dot = N1x*N2x + N1y*N2y + N1z*N2z
Then the resulting value is a number which tells you how much those two vectors point in the same direction.
If the value is -1, then they point in opposite directions, which actually means that the texel is pointing at the light source, and the light source is pointing at the texel, so the texel should be lit.
If the value is 1, then they point in the same direction, which means the texel is pointing away from the light source.
And if the value is 0, then one of the vectors points at 90 degrees relative to the other. I.e., if you are standing on the ground looking forward, then your view vector is at 90 degrees relative to the normal of the ground, which points up.