iPhone OpenGL ES: Conceptual knowledge

Hi guys, I have a conceptual question about the render function in OpenGL ES on the iPhone. Say I have a cube. The cube has 6 faces and consequently 12 triangles. If I calculate a normal for each triangle I have 12 normals, each with x, y, z coordinates, so I end up with an array of 36 floats. In my render function I want to pass these x, y, z coordinates to the vertex shader for some lighting-related calculations. This is my call to achieve that:
glVertexAttribPointer(_simpleLightNormalAttribute, 3, GL_FLOAT, GL_FALSE, sizeof(float), duplicateNormals);
Indexes 0, 1, 2 contain the x, y, z coordinates of the first triangle's normal, and so on. How does OpenGL know that indexes 3, 4, 5 contain the x, y, z coordinates of the second triangle's normal?
Any help would be much appreciated.

OpenGL doesn't define a vertex the way you do. In classical fixed-pipeline OpenGL a single vertex is a location and/or a normal and/or a colour and/or a texture coordinate.
So normals associate with vertices, not with polygons. Looking at it another way, if you're supplying normals you generally need to provide each vertex separately for each face that it rests on.
So in the case of a cube, you should supply 24 vertices, in six groups of four. All the vertices in each group will have the same normal but different positions.
Addition:
In ES 2 basically the same rule applies: all attributes are per vertex and are interpolated across the faces you specify, so you're still not really in a position to specify properties per face. The exception is that you can change uniforms at will, so in your case you could put the normal in a uniform, but then you'd end up drawing two triangles, changing the uniform, drawing two more triangles, changing the uniform again, and so on, which would be an incredibly inefficient way to proceed.
So you'll probably still need the 24 vertices — one per location per face.
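For illustration, here is a rough sketch of how one face of such a 24-vertex cube could be laid out. The attribute handle is the one from your question; the positions, the 0 stride and the non-interleaved layout are assumptions for the sake of the example:
// One face (+Z) of the 24-vertex cube: four positions and four identical normals.
// The other five faces follow the same pattern, each with its own normal.
static const GLfloat facePositions[] = {
    -1.0f, -1.0f, 1.0f,
     1.0f, -1.0f, 1.0f,
    -1.0f,  1.0f, 1.0f,
     1.0f,  1.0f, 1.0f,
};
static const GLfloat faceNormals[] = {
    0.0f, 0.0f, 1.0f,
    0.0f, 0.0f, 1.0f,
    0.0f, 0.0f, 1.0f,
    0.0f, 0.0f, 1.0f,
};

// With a tightly packed array the stride can be 0. OpenGL then groups the
// floats three at a time (size = 3): floats 0..2 belong to vertex 0,
// floats 3..5 to vertex 1, and so on. That grouping is what answers the
// "how does OpenGL know" part of the question.
glVertexAttribPointer(_simpleLightNormalAttribute, 3, GL_FLOAT, GL_FALSE, 0, faceNormals);
glEnableVertexAttribArray(_simpleLightNormalAttribute);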

If you are passing each triangle's vertices separately (as the OpenGL Game project does and as you seem to be doing) rather than using indices then you need to specify a normal for each vertex. For a cube with sharply defined edges you would probably want to send the same perpendicular normal for each vertex on a face.
If you really want/need to specify fewer normals, you should look into using indices. Then you would define the 8 distinct vertices in the cube, the 8 normals that go with them, and define your triangles using an array of indices for your vertices. Bear in mind that with this approach you will have multiple faces sharing normals. This can be a good thing if you are looking for a smooth lighting effect (like with curves) or a bad thing if you want sharp, distinct edges (like with cubes).
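As a rough sketch of the indexed approach (client-side arrays; the attribute handle name and the winding order are illustrative only, not checked against any particular culling setup):
// 8 shared corner positions of a unit cube; per-vertex normals are omitted
// here for brevity (with sharing they would end up smoothed/averaged).
static const GLfloat cubeVertices[8 * 3] = {
    -1,-1,-1,   1,-1,-1,   1, 1,-1,  -1, 1,-1,   // back four corners
    -1,-1, 1,   1,-1, 1,   1, 1, 1,  -1, 1, 1,   // front four corners
};
// 12 triangles, each referencing the 8 shared vertices by index.
static const GLubyte cubeIndices[12 * 3] = {
    4,5,6,  4,6,7,   // front
    1,0,3,  1,3,2,   // back
    5,1,2,  5,2,6,   // right
    0,4,7,  0,7,3,   // left
    7,6,2,  7,2,3,   // top
    0,1,5,  0,5,4,   // bottom
};
glVertexAttribPointer(positionAttribute, 3, GL_FLOAT, GL_FALSE, 0, cubeVertices);
glEnableVertexAttribArray(positionAttribute);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, cubeIndices);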
Hope this helps

Related

3d vectors from a Unity Plane to a flat 2D point as a pixel on a texture

If I had a sharp sword and I were to perfectly slice an object in half, I would like to sample the colours at various points along this flat, freshly cut face, and place these colours on a texture.
Imagine the face is a Unity Plane defined by its Vector3 normal that goes through a location Vector3 p.
Let the texture be a 100 x 100 sized image.
Let's say the samples I want to take are three 3D points, all on this plane, defined as Vector3 A, B and C.
How do I go about converting the 3D points (x,y,z) from the defined plane into a 2D pixel (x,y) of this texture?
I have read many similar questions but honestly could not understand the answers. I don't know in my scenario if I'm dealing with an orthographic or a perspective projection, whether I need to create a "conversion matrix", whether I need to be concerned about rotations, or if there is just a simpler solution.
I appreciate any tips or suggestions. Thanks

Do I need triangle information in my surface shader?

Update
The main question is: how can I pass the world-space vertex positions of the triangle to the surface shader in a Unity shader?
As mentioned in a comment it might be possible to pass them from a geometry shader. But I read somewhere that implementing a custom geometry shader overrides Unity's logic for calculating shadows etc.
I would add the triangle information to the Input structure. But before I change my mesh generation logic for it I would like to know if this is feasible. For this solution the vertex positions of the triangle must be constant for every pixel in a triangle and must not be interpolated.
This is the original question:
I am writing a surface shader for a triangle mesh. I assign a custom vertex attribute with a texture id to every vertex. Now I want the surface shader to apply the texture as seen in the following image. (Note that each color represents a texture.)
In the surface shader I need the 3 vertices that define the triangle and their texture ids. Furthermore I need the position of the pixel I am drawing.
If all texture ids are the same I pick this texture for all pixels.
If one or two texture ids differ I calculate the pixel's distance to the triangle vertices and pick the texture as seen in the next image:
The surface shader needs to be aware of the pixel's triangle. With this logic I should get the shading I am looking for. I am creating my mesh programmatically, so I can add the triangle vertices and their texture ids as vertex attributes and pass them to the surface shader.
But I am not sure if this is feasible with how surface/vertex shaders work. Is there a relationship between the vertex and the pixel to get my custom triangle information from? Is there a better way of doing this?
I am using Unity's ShaderLab for my shaders.
No, you should not be using (nor do you have access to) per-vertex data in a fragment shader. In a fragment shader you only have access to the data for that given pixel; you cannot go back and look at the mesh that formed it (this is the way the pipeline is constructed).
What you can do (and is a common practice) is to bake the data into one of the available channels (e.g. the other UV mapping channels) of the vertices within the vertex shader. This way the fragment shader will have access to the value via the interpolators.
OK, I think I found a solution. Thank you for the comments, they were useful.
First I change my grid topology to not use shared vertices. With this I can use a vertex color channel to set the texture ids.
vertexColor.r = vertexTextureId0; // Texture to use for vertex 0
vertexColor.g = vertexTextureId1; // Texture to use for vertex 1
vertexColor.b = vertexTextureId2; // Texture to use for vertex 2
I do not have to worry about interpolation because all vertices of the triangle have the same color information.
Now I create a texture to look up which vertex my pixel belongs to. This texture looks similar to the images I posted in the question. I have to transpose the UV coordinates according to the TWO or THREE case. This solution gives me the freedom to easily change the edge and make it more ragged.

Constructing voxels of a 3D cube in MATLAB

I want to construct a 3D cube in MATLAB. I know that the units of any 3D shape are voxels not pixels. Here is what I want to do,
First, I want to construct a cube with some given dimensions x, y, and z.
Second, according to what I understand from different image processing tutorials, this cube must consist of voxels (3D pixels). I want to give every voxel an initial color value, say gray.
Third, I want to access every voxel and change its color, but I want to distinguish the voxels that represent the faces of the cube from those that represent the internal region. I want to access every voxel by its position x, y, z. At the end, we will end up with a cube that has differently colored regions.
I've searched a lot but couldn't find a good way to implement that, but the code given here seems very close in regard to constructing the internal region of the cube,
http://www.mathworks.com/matlabcentral/fileexchange/3280-voxel
But it's not clear to me how it performs the process.
Can anyone tell me how to build such a cube in MATLAB?
Thanks.
You want to plot voxels! Good! Let's see how we can do this.
First of all: yeah, the unit of 3D shapes may be voxels, but it doesn't need to be. You can plot a sphere in 3D without it being "blocky", thus you don't need to describe it in terms of voxels, the same way you don't need to describe a sinusoidal wave in terms of pixels to be able to plot it on screen. Look at the figure below. (The same happens for cubes.)
If you are interested in drawing voxels, I generally would recommend you to use vol3D v2 from Matlab's FEX. Why that instead of your own?
Because the best (only?) way of plotting voxels is actually plotting flat square surfaces, 6 for each cube (see the answer here for a function that does that). These flat surfaces will also create some artifacts due to something called z-fighting in computer graphics. vol3d actually only plots 3 surfaces, the ones facing you, saving half of the computational time and avoiding ugly plotting artifacts. It is easy to use, you can define colors per voxel and also the alpha (transparency) of each of them, allowing you to see inside.
Example of use:
% numbers are arbitrary
cube=zeros(11,11,11);
cube(3:9,3:9,3:9)=5; % Create a cube inside the region
% Boring: faces of the cube are a different color.
cube(3:9,3:9,3)=2;
cube(3:9,3:9,9)=2;
cube(3:9,3,3:9)=2;
cube(3:9,9,3:9)=2;
cube(3,3:9,3:9)=2;
cube(9,3:9,3:9)=2;
vol3d('Cdata',cube,'alpha',cube/5)
But yeah, that still looks bad. Because if you want to see the inside, voxel plotting is not the best option. Alphas of different faces stack one on top of the other and the only way of solving this is writing advanced computer graphics ray tracing algorithms, and trust me, that's a long and tough road to take.
Very often one has 4D data, i.e. data that contains a 3D location plus a single value for each location. One may think that in this case you really want voxels, as each of them has a 3D position plus a color, i.e. 4D data. Indeed, you can do it with voxels, but sometimes it's better to describe it in other ways. As an example, see this person who wanted to highlight a region in his/her 4D space (link). For a bigger list I suggest you look at my answer here about 4D visualization techniques.
Let's try a different approach from the voxel one. Let's use the previous cube and create isosurfaces wherever the 4D data changes value.
iso1=isosurface(cube,1);
iso2=isosurface(cube,4);
p1=patch(iso1,'facecolor','r','facealpha',0.3,'linestyle','none');
p2=patch(iso2,'facecolor','g','facealpha',1,'linestyle','none');
% below here is code for it to look "fancy"
isonormals(cube,p1)
view(3);
axis tight
axis equal
axis off
camlight
lighting gouraud
And this one looks way better, in my opinion.
Choose freely and good plotting!

Fast way to convert array of points into triangle strip?

I have an array of CGPoints (basic struct with two floats: x and y). I want to use OpenGL ES to draw a textured curve using these points. I can do this fine with just two points, but it gets harder when I need to make a line from several points.
Currently I draw a line horizontally, calculate its angle from the points given, and then rotate it. I don't think doing this for all lines in a curve is a good idea. There's probably a faster way.
I'm thinking that I can "enlarge" or "constrict" all the points at once to make a curve with some sort of width.
I'm not positive what you want to accomplish, but consider this:
Based on an ordered list of points, you can draw a polyline using those points. If you want to have a polyline with a 2D texture on it, you can draw a series of quadrilaterals (using two triangles each, of course). You can generate these quadrilaterals using an idea similar to Catmull-Rom spline generation.
Consider a series of points p[i-1], p[i], p[i+1]. Now, for each i, you can find two points each an epsilon distance away from p[i] along the line perpendicular to the line connecting p[i-1] and p[i+1]. You can determine the two points generated for the endpoints in various ways, like using the perpendicular to the line from p[0] to p[1].
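Here is a small sketch of that idea in C, with a hypothetical Point2D struct standing in for CGPoint; it produces two offset points per input point, which can then be drawn as a GL_TRIANGLE_STRIP:
#include <math.h>

typedef struct { float x, y; } Point2D;

// For each point p[i], offset it by +/- halfWidth along the direction
// perpendicular to the segment p[i-1] -> p[i+1]; the endpoints just use
// their single neighbouring segment. strip must hold 2 * count points.
void buildStrip(const Point2D *p, int count, float halfWidth, Point2D *strip)
{
    for (int i = 0; i < count; i++) {
        const Point2D *a = (i > 0) ? &p[i - 1] : &p[i];
        const Point2D *b = (i < count - 1) ? &p[i + 1] : &p[i];

        float dx = b->x - a->x;
        float dy = b->y - a->y;
        float len = sqrtf(dx * dx + dy * dy);
        if (len == 0.0f) len = 1.0f;   // guard against a degenerate segment

        // Unit vector perpendicular to the a -> b direction.
        float nx = -dy / len;
        float ny =  dx / len;

        strip[2 * i].x     = p[i].x + nx * halfWidth;
        strip[2 * i].y     = p[i].y + ny * halfWidth;
        strip[2 * i + 1].x = p[i].x - nx * halfWidth;
        strip[2 * i + 1].y = p[i].y - ny * halfWidth;
    }
}
The output holds 2 * count points, so a single glDrawArrays(GL_TRIANGLE_STRIP, 0, 2 * count) draws the whole ribbon.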
I'm not sure if this will be faster than your method, but you should be caching the results. If you are planning on doing this every frame, another type of solution to your problem may be needed.

What is DOT3 lighting?

An answer to my question suggests that DOT3 lighting can help with OpenGL ES rendering, but I'm having trouble finding a decent definition of what DOT3 lighting is.
Edit 1
iPhone related information is greatly appreciated.
DOT3 lighting is often referred to as per-pixel lighting. With vertex lighting, the lighting is calculated at every vertex and the resulting value is interpolated over the triangle. In per-pixel lighting, as the name implies, the goal is to calculate the lighting at every pixel.
The way this is done on fixed-function hardware such as the iPhone is with so-called register combiners. The name DOT3 comes from this render state:
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
Have a look at this entry on Wolfgang Engel's blog for more info on exactly how to set this up.
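For reference, the full fixed-function setup looks roughly like the following sketch (OpenGL ES 1.1 style; the normal map sitting on texture unit 0, the texture handle name and the light direction arriving via the primary color are all assumptions for the example):
// Texture unit 0 holds the normal map; the tangent-space light direction is
// packed into the primary color. Both vectors are expected to be
// range-compressed into 0..1 (n * 0.5 + 0.5); the DOT3 combiner expands them.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, normalMapTexture);   // normalMapTexture is a placeholder

glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB,      GL_DOT3_RGB);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB,         GL_TEXTURE);       // normal-map texel
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB,     GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB,         GL_PRIMARY_COLOR); // light direction
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB,     GL_SRC_COLOR);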
When doing per-pixel lighting it's popular to also use a so-called normal map. This means that the normal of every point on an object is stored in a special texture map, a normal map. This was popularized in the game DOOM 3 by id Software, where fairly low-polygon models were used together with high-resolution normal maps. The reason for using this technique is that the eye is more sensitive to variation in lighting than to variation in shape.
I saw in your other question that the reason this came up was that you wanted to reduce the memory footprint of the vertex data. This is true: instead of storing three components for a normal in every vertex, you only need to store two components for the texture coordinates into the normal map. Enabling per-pixel lighting comes with a performance cost though, so I'm not sure it will be a net win; as usual, the advice is to try it and see.
Finally, the diffuse lighting intensity at a point is proportional to the cosine of the angle between the surface normal and the direction of the light. For two vectors the dot product is defined as:
a dot b = |a||b| cos(theta)
where |a| and |b| are the lengths of the vectors a and b respectively, and theta is the angle between them. If both lengths are equal to one, a and b are referred to as unit vectors and the formula simplifies to:
a dot b = cos(theta)
This means that the diffuse lighting intensity is given by the dot product between the surface normal and the direction of the light, so all diffuse lighting is in a sense a form of DOT3 lighting, even if the name has come to refer to the per-pixel kind.
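In code the evaluation, whether per vertex or per pixel, is just a clamped dot product. A tiny illustrative helper, assuming both vectors are already unit length:
// Diffuse (Lambertian) intensity from two unit vectors, clamped so that
// surfaces facing away from the light receive no negative light.
float diffuseIntensity(const float n[3], const float l[3])
{
    float d = n[0] * l[0] + n[1] * l[1] + n[2] * l[2];  // cos(theta)
    return d > 0.0f ? d : 0.0f;
}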
From here:
Bumpmapping is putting a texture on a model where each texel's brightness defines the height of that texel.
The height of each texel is then used to perturb the lighting across the surface.
Normal mapping is putting a texture on a model where each texel's color is really three values that define the direction that location on the surface points.
A color of (255, 0, 0) for example, might mean that the surface at that location points down the positive X axis.
In other words, each texel is a normal.
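To make that encoding concrete, here is roughly how an 8-bit texel color maps back to a direction vector (a generic sketch, not tied to any particular engine's convention):
// Expand an 8-bit normal-map texel (0..255 per channel) back into a vector
// in the -1..1 range; for example (255, 128, 128) decodes to roughly
// (+1, 0, 0), a surface pointing down the positive X axis.
void decodeNormal(unsigned char r, unsigned char g, unsigned char b, float n[3])
{
    n[0] = (r / 255.0f) * 2.0f - 1.0f;
    n[1] = (g / 255.0f) * 2.0f - 1.0f;
    n[2] = (b / 255.0f) * 2.0f - 1.0f;
}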
The Dot3 name comes from what you actually do with these normals.
Let's say you have a vector which points in the direction your light source points. And let's say you have the vector which is the normal at a specific texel on your model that tells you which direction that texel points.
If you do a simple calculation called a "dot product" on these two vectors, like so:
Dot = N1.x*N2.x + N1.y*N2.y + N1.z*N2.z
Then the resulting value is a number which tells you how much those two vectors point in the same direction.
If the value is -1, then they point in opposite directions, which actually means that the texel is pointing at the light source, and the light source is pointing at the texel, so the texel should be lit.
If the value is 1, then they point in the same direction, which means the texel is pointing away from the light source.
And if the value is 0, then the two vectors are at 90 degrees to each other. For example, if you are standing on the ground looking forward, your view vector is at 90 degrees to the normal of the ground, which points up.