What is DOT3 lighting? - iPhone

An answer to my question suggests that DOT3 lighting can help with OpenGL ES rendering, but I'm having trouble finding a decent definition of what DOT3 lighting is.
Edit 1
iPhone-related information is greatly appreciated.

DOT3 lighting is often referred to as per-pixel lighting. With vertex lighting, the lighting is calculated at every vertex and the result is interpolated across the triangle. In per-pixel lighting, as the name implies, the objective is to calculate the lighting at every pixel.
The way this is done on fixed-function hardware such as the iPhone is with so-called register combiners. The name DOT3 comes from this render state:
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
Look at this blog entry on Wolfgang Engel's blog for more information on exactly how to set this up.
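For reference, here is a minimal sketch (an assumption, not the exact setup from that blog entry) of how a DOT3 combiner might be configured in fixed-function OpenGL ES 1.1. The light direction is packed into the primary color and dotted with the normal map per pixel; normalMapTexture and lightDir are placeholder names.

#include <OpenGLES/ES1/gl.h>

/* Sketch only: DOT3 per-pixel lighting with the fixed-function combiner.
   Assumes normalMapTexture holds a normal map and lightDir is a unit-length
   light direction expressed in the same space as the stored normals. */
static void setUpDot3Lighting(GLuint normalMapTexture, const GLfloat lightDir[3])
{
    /* Pack the light direction into colour range: 0.5 + 0.5 * d maps [-1,1] to [0,1]. */
    glColor4f(0.5f + 0.5f * lightDir[0],
              0.5f + 0.5f * lightDir[1],
              0.5f + 0.5f * lightDir[2],
              1.0f);

    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, normalMapTexture);

    /* Combine the normal map (texture) with the light direction (primary colour)
       using a per-pixel dot product. */
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PRIMARY_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
}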
When doing per-pixel lighting it's popular to also use a so-called normal map. This means that the normal of every point on an object is stored in a special texture map, a normal map. This technique was popularized by DOOM 3 from id Software, which used fairly low-polygon models together with high-resolution normal maps. The reason for using this technique is that the eye is more sensitive to variation in lighting than to variation in shape.
I saw in your other question that the reason this came up was that you wanted to reduce the memory footprint of the vertex data. This is true: instead of storing three components for a normal in every vertex, you only need to store two components for the texture coordinates into the normal map. Enabling per-pixel lighting comes with a performance cost, though, so I'm not sure it will be a net win; as usual, the advice is to try it and see.
Finally, the diffuse lighting intensity at a point is proportional to the cosine of the angle between the surface normal and the direction to the light. For two vectors the dot product is defined as:
a dot b = |a||b| cos(theta)
where |a| and |b| are the lengths of the vectors a and b respectively and theta is the angle between them. If the lengths are equal to one, a and b are referred to as unit vectors and the formula simplifies to:
a dot b = cos(theta)
This means that the diffuse lighting intensity is given by the dot product between the surface normal and the direction to the light. In that sense, all diffuse lighting is a form of DOT3 lighting, even though the name has come to refer to the per-pixel kind.
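As a small illustration of that formula (a sketch; the function name and clamping convention are my own), the per-sample diffuse term could be computed like this:

/* Diffuse intensity from the DOT3 formula above. Assumes both vectors are
   unit length and that lightDir points from the surface towards the light. */
static float diffuseIntensity(const float normal[3], const float lightDir[3])
{
    float d = normal[0] * lightDir[0]
            + normal[1] * lightDir[1]
            + normal[2] * lightDir[2];
    /* Surfaces facing away from the light receive no diffuse light. */
    return d > 0.0f ? d : 0.0f;
}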

From here:
Bumpmapping is putting a texture on a model where each texel's brightness defines the height of that texel.
The height of each texel is then used to perturb the lighting across the surface.
Normal mapping is putting a texture on a model where each texel's color is really three values that define the direction that location on the surface points.
A color of (255, 0, 0), for example, might mean that the surface at that location points down the positive X axis.
In other words, each texel is a normal.
The Dot3 name comes from what you actually do with these normals.
Let's say you have a vector which points in the direction your light source points. And let's say you have the vector which is the normal at a specific texel on your model that tells you which direction that texel points.
If you compute a simple operation called a "dot product" on these two (normalized) vectors, like so:
Dot = N1x*N2x + N1y*N2y + N1z*N2z
Then the resulting value is a number which tells you how much those two vectors point in the same direction.
If the value is -1, then they point in opposite directions, which actually means that the texel is pointing at the light source, and the light source is pointing at the texel, so the texel should be lit.
If the value is 1, then they point in the same direction, which means the texel is pointing away from the light source.
And if the value is 0, then one of the vectors points at 90 degrees relative to the other. I.e., if you are standing on the ground looking forward, then your view vector is at 90 degrees relative to the normal of the ground, which points up.
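To make that concrete, here is a small sketch (names are illustrative, not from the answer above) of unpacking a normal-map texel back into a [-1, 1] vector and dotting it with a light vector:

/* Unpack one normal-map texel (0..255 per channel) and compute the dot product. */
typedef struct { unsigned char r, g, b; } Texel;

static float dot3FromTexel(Texel t, const float lightDir[3])
{
    /* Map 0..255 back to -1..1 (the inverse of the usual 0.5 + 0.5 * n packing). */
    float nx = t.r / 255.0f * 2.0f - 1.0f;
    float ny = t.g / 255.0f * 2.0f - 1.0f;
    float nz = t.b / 255.0f * 2.0f - 1.0f;

    /* Plain dot product: N1x*N2x + N1y*N2y + N1z*N2z */
    return nx * lightDir[0] + ny * lightDir[1] + nz * lightDir[2];
}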

Related

3D vectors from a Unity Plane to a flat 2D point as a pixel on a texture

If I had a sharp sword and I were to perfectly slice an object in half, I would like to sample the colours at various points along this flat, freshly cut face, and place these colours on a texture.
Imagine the face is a Unity Plane defined by its Vector3 normal that goes through a location Vector3 p.
Let the texture be a 100 x 100 sized image.
Let's say the samples I want to take are three 3D points all on this plane, defined as Vector3 A, B and C.
How do I go about converting the 3D points (x,y,z) from the defined plane into a 2D pixel (x,y) of this texture?
I have read many similar questions but honestly could not understand the answers. I don't know, in my scenario, whether I'm dealing with an orthographic or a perspective projection, whether I need to create a "conversion matrix", whether I need to be concerned about rotations, or if there is just a simpler solution.
I appreciate any tips or suggestions. Thanks
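(Editorial sketch, not from the original thread.) One common way to do this is to build two perpendicular unit axes lying in the plane, express each 3D sample point in those axes, and then scale the resulting 2D coordinates into the 100 x 100 texture. The basis choice, the worldSize scaling and all names below are assumptions; in Unity the same steps map onto Vector3.Cross and Vector3.Dot.

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3  sub(Vec3 a, Vec3 b)   { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) { Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; return r; }
static Vec3  normalized(Vec3 a)    { float l = sqrtf(dot(a, a)); Vec3 r = { a.x/l, a.y/l, a.z/l }; return r; }

/* Map a point lying on the plane (normal n, passing through p) to a pixel.
   Assumes the region of interest fits in a square of side worldSize centred on p. */
static void planePointToPixel(Vec3 point, Vec3 p, Vec3 n,
                              float worldSize, int texSize,  /* texSize = 100 here */
                              int *px, int *py)
{
    /* Pick any vector not parallel to n, then build two in-plane axes. */
    Vec3 helper = fabsf(n.y) < 0.99f ? (Vec3){ 0.0f, 1.0f, 0.0f } : (Vec3){ 1.0f, 0.0f, 0.0f };
    Vec3 u = normalized(cross(helper, n));
    Vec3 v = cross(n, u);

    Vec3 d = sub(point, p);      /* position relative to the plane origin p */
    float uCoord = dot(d, u);    /* 2D coordinates within the plane */
    float vCoord = dot(d, v);

    /* Map [-worldSize/2, worldSize/2] onto [0, texSize). */
    *px = (int)((uCoord / worldSize + 0.5f) * texSize);
    *py = (int)((vCoord / worldSize + 0.5f) * texSize);
}

Note that the orientation of the texture (which way u and v point within the plane) is arbitrary with this construction; if the result appears rotated or mirrored, the axes need to be derived from known reference directions instead.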

How to step through 3D noise for volume textures?

I'm creating volume textures for volumetric ray marching (Creating this with Unity and a fragment shader)
I have a depth value that increases the starting position on the x, y or z axis.
Doing this additively results in an ugly side view where you can see the stacked planes.
When I multiply the depth value with the starting position, the result is a bit more convincing, but the frequency increases with the depth. I didn't find any 3D noise algorithms that take an extra parameter for the frequency; they all do it through the UVs (the position in my case).
Can't really figure out how to do it correctly.
Found the solution.
As I was stepping along the z-value of the noise, I was multiplying it by my depth value. This of course increases/decreases the frequency.
All I had to do was to add it to the z-value.
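For clarity, a small sketch of the difference (noise3 is a placeholder for whatever 3D noise function is being used):

extern float noise3(float x, float y, float z);  /* assumed 3D noise function */

/* Sampling the volume at a given depth along the z axis. */
float sampleVolume(float startX, float startY, float startZ, float depth)
{
    /* Correct: offset the coordinate, which keeps the noise frequency constant. */
    return noise3(startX, startY, startZ + depth);

    /* Incorrect (the original problem): noise3(startX, startY, startZ * depth)
       rescales the coordinate, so the frequency grows with depth. */
}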

Rotate vertices selected using weight map on UVs in Unity3D's Shader Graph around pivot point

TLDR: Can't figure out the correct Shader Graph setup for using UV and vertex displacement to cheaply animate a (unrigged) mesh.
I am trying to rotate a part of the mesh based on the UV coordinates, e.g. fromX 0 toX 0.4, fromY 0 toY 0.6. The mesh is created uv-mapped with this in mind.
I have no problem getting the affected vertices in this area. The problem is that I want to rotate these verts around a customizable axis, e.g. axis(X:1, Y:0, Z:1), using a weight so that the rotation takes place around a pivot point. I want the bottom selection to stay connected to the rest of the mesh while the other affected vertices neatly rotate around this point.
The weight can be painted by using split UV channels as seen in the picture:
I multiply the weighted area with a rotation node to rotate it.
And I add that to the negative multiplied position (the rest of the verts, excluding the rotated area) to get the final output displacement.
But the rotated mesh is bent. I need it to be rigid, as in the whole part rotating with weight=1 except for the pivot vertex itself.
I can get it as described using a weight=1 based rotation, but the pivot point becomes the center of the mesh, not the desired point.
How can I do this correctly?
Been at it for days, please help :')
I started using Unity about a month ago, and this is one of the first issues I faced.
The node you are using will always transform the vertices around the origin.
I think you have two options available:
Translate the vertices by the offset of where you want to rotate the wings (see the sketch just after these two options). This would require storing the pivot point of the wings in the mesh somehow - this could be done by utilizing a spare UV channel, or by using the vertex color channel.
Use bones and paint the weights in your chosen 3D package. This way, you can record the animation, and use Unity's skinned mesh shader to render it.
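The sketch below (plain math, not Shader Graph nodes; the function name is made up) shows the idea behind the first option: subtract the pivot, rotate about the origin, then add the pivot back. In the graph you would wire the same three steps around the Rotate node, with the pivot supplied via a spare UV channel, the vertex color, or a material property.

#include <math.h>

/* Rotate a vertex around an explicit pivot (rotation about the Z axis only,
   to keep the sketch short). */
static void rotateAroundPivot(const float vertex[3], const float pivot[3],
                              float angleRadians, float out[3])
{
    /* 1. Move the pivot to the origin. */
    float x = vertex[0] - pivot[0];
    float y = vertex[1] - pivot[1];
    float z = vertex[2] - pivot[2];

    /* 2. Rotate about the origin. */
    float c = cosf(angleRadians);
    float s = sinf(angleRadians);
    float rx = x * c - y * s;
    float ry = x * s + y * c;

    /* 3. Move back. */
    out[0] = rx + pivot[0];
    out[1] = ry + pivot[1];
    out[2] = z  + pivot[2];
}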
Hope that helps.
Try this:
I've used the UV ranges from your example applied to a sphere of unit size. The sphere's original pivot is in the centre, and its adjusted pivot is shifted 0.5 on the Y axis.
The only variable the shader doesn't know is the adjusted pivot position, so I pass this in through the material.
I've not implemented your weight in the graph, as I just wanted to show you the process. You can easily plug that in.
The color output is just being used for debug purposes.
The first image is with the default object pivot.
The second image is with the adjusted pivot.
The final image is the graph. (Note the logic group is driving the vertex rotation based on the UV mask).

Smoothing algorithm, 2.5D

The picture below shows a triangular surface mesh. Its vertices are exactly on the surface of the original 3D object, but the straight edges and faces of course have some geometric error where the original surface bends, and I need some algorithm to estimate the smooth original surface.
Details: I have a height field of (a projectable part of) this surface (a 2.5D triangulation where each x,y pair has a unique height z) and I need to compute the height z of arbitrary x,y pairs. For example the z-value of the point in the image where the cursor points to.
If it was a 2D problem, I would use cubic splines but for surfaces I'm not sure what is the best solution.
As commented by @Darren, what you need are patches.
These can be bi-linear patches, bi-quadratic patches, Coons patches, or others.
I did not find many references with a quick search, but these links help:
This one provides an overview: http://www.cs.cornell.edu/Courses/cs4620/2013fa/lectures/17surfaces.pdf
while this one is more technical: https://www.doc.ic.ac.uk/~dfg/graphics/graphics2010/GraphicsHandout05.pdf
The concept is that you calculate splines along the edges (a height function with respect to the straight edge segment itself) and then blend them inside the surface delimited by the edges.
The patch is responsible for the blending, meaning that inside any face the height is a function of the point's position coordinates within the face and of the values of the spline segments defined on the edges of that same face.
To my knowledge it is quite easy to use this approach on a quadrilateral mesh (because it becomes easy to define along which edge sequences to build the splines), while I am not sure how to apply it if you are forced to work with an actual triangulation.
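As a minimal illustration of the patch idea (a sketch for the simplest, bi-linear case over a quadrilateral face; higher-order or Coons patches replace the linear blends with the edge splines):

/* Height inside a quadrilateral face, blended from the four corner heights.
   (u, v) are the point's coordinates within the face, both in [0, 1]. */
static float bilinearPatchHeight(float z00, float z10, float z01, float z11,
                                 float u, float v)
{
    float bottom = z00 * (1.0f - u) + z10 * u;   /* blend along the lower edge */
    float top    = z01 * (1.0f - u) + z11 * u;   /* blend along the upper edge */
    return bottom * (1.0f - v) + top * v;        /* blend between the two edges */
}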

iPhone OpenGL ES: Conceptual knowledge

Hi guys, I have a conceptual question about the render function of OpenGL ES on the iPhone. Say I have a cube. The cube has 6 faces and consequently 12 triangles. If I calculate a normal for each triangle I have 12 normals, and each normal has x, y, z coordinates. Consequently I have an array of 36 floats. In my render function I want to pass the x, y, z coordinates to the vertex for some lighting-related calculations. This is my call to achieve that:
glVertexAttribPointer(_simpleLightNormalAttribute, 3, GL_FLOAT, GL_FALSE, sizeof(float), duplicateNormals);
The indexes 0, 1, 2 contain the x, y, z coordinates of the first triangle, and so on. How does OpenGL know that indexes 2, 3, 4 contain the x, y, z coordinates of the second triangle?
Any help would be much appreciated.
OpenGL doesn't define a vertex the way you do. In classical fixed-pipeline OpenGL a single vertex is a location and/or a normal and/or a colour and/or a texture coordinate.
So normals associate with vertices, not with polygons. Looking at it another way, if you're supplying normals you generally need to provide each vertex separately for each face that it rests on.
So in the case of a cube, you should supply 24 vertices, in six groups of four. All the vertices in each group will have the same normal but different positions.
Addition:
In ES 2 basically the same rule applies: all properties are per vertex and are interpolated across such faces as you specify. So you're still not really in a position to specify properties per face. The exception to this is that you can change uniforms at will, so in your case you could use a uniform for the normal, though you'd end up drawing two triangles, changing the uniform, drawing two more triangles, changing the uniform again, and so on, which would be an incredibly inefficient way to proceed.
So you'll probably still need the 24 vertices — one per location per face.
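As an illustration of that layout (a sketch only: one face of the 24-vertex cube, with positions and normals interleaved; positionAttribute is a made-up handle assumed to come from glGetAttribLocation):

#include <OpenGLES/ES2/gl.h>

typedef struct {
    GLfloat position[3];
    GLfloat normal[3];
} Vertex;

/* Four vertices of the +Z face; every vertex carries the same face normal.
   The other five faces follow the same pattern. */
static const Vertex frontFace[4] = {
    { { -1.0f, -1.0f, 1.0f }, { 0.0f, 0.0f, 1.0f } },
    { {  1.0f, -1.0f, 1.0f }, { 0.0f, 0.0f, 1.0f } },
    { {  1.0f,  1.0f, 1.0f }, { 0.0f, 0.0f, 1.0f } },
    { { -1.0f,  1.0f, 1.0f }, { 0.0f, 0.0f, 1.0f } },
};

static void setUpFrontFaceAttributes(GLuint positionAttribute, GLuint _simpleLightNormalAttribute)
{
    /* The stride is the size of one whole vertex; each pointer starts at that
       attribute's first occurrence. (With a separate, tightly packed array of
       3-float normals the stride would be 0 or 3 * sizeof(GLfloat), not sizeof(float).) */
    glEnableVertexAttribArray(positionAttribute);
    glVertexAttribPointer(positionAttribute, 3, GL_FLOAT, GL_FALSE,
                          sizeof(Vertex), frontFace[0].position);

    glEnableVertexAttribArray(_simpleLightNormalAttribute);
    glVertexAttribPointer(_simpleLightNormalAttribute, 3, GL_FLOAT, GL_FALSE,
                          sizeof(Vertex), frontFace[0].normal);
}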
If you are passing each triangle's vertices separately (as the OpenGL Game project does and as you seem to be doing) rather than using indices then you need to specify a normal for each vertex. For a cube with sharply defined edges you would probably want to send the same perpendicular normal for each vertex on a face.
If you really want/need to specify fewer normals, you should look into using indices. Then you would define the 8 distinct vertices in the cube, the 8 normals that go with them, and define your triangles using an array of indices for your vertices. Bear in mind that with this approach you will have multiple faces sharing normals. This can be a good thing if you are looking for a smooth lighting effect (like with curves) or a bad thing if you want sharp, distinct edges (like with cubes).
Hope this helps
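For completeness, a sketch of the indexed approach (names and the corner numbering are illustrative; vertex positions and attribute setup are omitted). Each of the 8 shared corners gets a single normal, here the normalized corner direction, which gives the smooth, rounded lighting mentioned above:

#include <OpenGLES/ES2/gl.h>

#define CORNER_N 0.577350f   /* 1/sqrt(3): component of a unit-length corner normal */

/* Corner i has the same sign pattern as cornerNormals[i], e.g. corner 0 sits at (-1,-1,-1). */
static const GLfloat cornerNormals[8][3] = {
    { -CORNER_N, -CORNER_N, -CORNER_N }, {  CORNER_N, -CORNER_N, -CORNER_N },
    {  CORNER_N,  CORNER_N, -CORNER_N }, { -CORNER_N,  CORNER_N, -CORNER_N },
    { -CORNER_N, -CORNER_N,  CORNER_N }, {  CORNER_N, -CORNER_N,  CORNER_N },
    {  CORNER_N,  CORNER_N,  CORNER_N }, { -CORNER_N,  CORNER_N,  CORNER_N },
};

/* 12 triangles built from the 8 shared corners. */
static const GLushort cubeIndices[36] = {
    4, 5, 6,  6, 7, 4,   /* +Z face */
    1, 0, 3,  3, 2, 1,   /* -Z face */
    5, 1, 2,  2, 6, 5,   /* +X face */
    0, 4, 7,  7, 3, 0,   /* -X face */
    7, 6, 2,  2, 3, 7,   /* +Y face */
    0, 1, 5,  5, 4, 0,   /* -Y face */
};

static void drawCubeIndexed(void)
{
    /* Positions and normals are assumed to be bound with glVertexAttribPointer,
       one entry per shared corner (8 entries each). */
    glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, cubeIndices);
}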