Are shader passes drawn by object or by triangle in Unity? - unity3d

In Unity, when we write a custom shader with multiple passes, are they executed:
For each triangle do:
For each pass do:
Draw the pass
Or:
For each Pass do:
For each triangle do:
Draw the pass
And if I have multiple materials in the mesh, are the faces drawn grouped by material?

Question 1
Second variant:
In Unity, when we write a custom shader with multiple passes, are they executed:
For each Pass do:
For each triangle do:
Draw the pass
Question 2
And if I have multiple materials in the mesh, are the faces drawn grouped by material?
Yes.
Explanation
Basically, all rendering (in both OpenGL and Direct3D) is done the following way:
Set up rendering parameters (shaders, matrices, buffers, textures, etc.);
Draw a group of primitives (this is called a draw call);
Repeat these steps for all the graphics that need to be rendered.
And the following heuristic rule applies: the fewer draw calls issued in a scene, the better (for performance). Thus, in order to follow this rule (without reducing the amount of geometry you draw), you want a smaller number of draw calls with a bigger number of primitives in each. That in turn makes it obvious why Unity goes with the second variant of your first question, and it also explains why Unity groups faces by material (because it minimizes the number of draw calls).
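To make that ordering concrete, here is a minimal C# sketch of the same loop structure, assuming a hypothetical mesh and a multi-pass material; it only illustrates the order, it is not Unity's actual internal renderer. Material.SetPass binds the state for one pass, and Graphics.DrawMeshNow then submits a whole submesh (all of its triangles) with that pass.
using UnityEngine;
// Sketch only: pass on the outer loop, geometry on the inner call.
public class MultiPassDrawSketch : MonoBehaviour
{
    public Mesh mesh;          // hypothetical mesh; one submesh per material
    public Material material;  // a material whose shader has several passes
    void OnRenderObject()
    {
        Matrix4x4 matrix = transform.localToWorldMatrix;
        for (int pass = 0; pass < material.passCount; pass++)
        {
            material.SetPass(pass);                 // bind the state for this pass once
            Graphics.DrawMeshNow(mesh, matrix, 0);  // draw every triangle of submesh 0 with it
        }
    }
}
In a mesh with multiple materials, each material corresponds to one submesh, so the faces sharing a material go through that inner call together, which is the grouping referred to in the answer to Question 2.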

Related

Different ways to detect size of image on mesh versus size of mesh

I'm creating a puzzle game that generates randomly sized pieces with 2D meshes. The images contain transparent portions, and sometimes a piece is completely transparent. I need to detect what percentage of a piece is transparent. One way I found to do this is to go pixel by pixel. I posted my solution to this HERE. However, this process adds a few seconds during loading, which I'd like to avoid, so I'm looking for other ideas.
I've considered using the selection outline of a MeshCollider to somehow get a surface area I could compare to the surface area of the mesh, but everything I find is about rendering outlines with specialized shaders. Does anyone have any ideas on how to solve this?
1) I guess you could add a PolygonCollider2D to your sprite and use its path for the outline and the calculation of the surface area (a sketch for this follows after point 2 below). I'm not sure, however, whether this will be faster.
PolygonCollider2D.GetPath:
A path is a cyclic sequence of line segments between points that define the outline of the Collider
Checking PolygonCollider2D.GetTotalPointCount or path length may be good enough to determine if the sprite is 'empty'.
Sprite.vertices, Sprite.triangles may also be helpful.
2) You could also improve performance of your first approach:
Instead of calling GetPixel as you do now, use GetPixels or GetPixels32 and loop through the returned array in a single for loop.
Using GetPixels can be faster than calling GetPixel repeatedly, especially for large textures. In addition, GetPixels can access individual mipmap levels. For most textures, even faster is to use GetPixels32 which returns low precision color data without costly integer-to-float conversions.
Check only every 2nd or nth pixel, since that should be good enough for an approximation.
Limit the number of type casts.
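Here is a rough C# sketch of both suggestions, under the assumption that the piece's texture is readable and an approximate answer is acceptable; the method and parameter names are made up for the illustration. The outline area uses the shoelace formula over the collider's paths, and the transparency estimate walks a GetPixels32 array with a stride.
using UnityEngine;
public static class PieceAnalysisSketch
{
    // Suggestion 1: approximate the surface area from the PolygonCollider2D outline
    // by summing the signed (shoelace) area of every path.
    public static float OutlineArea(PolygonCollider2D collider)
    {
        float area = 0f;
        for (int p = 0; p < collider.pathCount; p++)
        {
            Vector2[] path = collider.GetPath(p);
            for (int i = 0; i < path.Length; i++)
            {
                Vector2 a = path[i];
                Vector2 b = path[(i + 1) % path.Length];
                area += (a.x * b.y - b.x * a.y) * 0.5f;
            }
        }
        return Mathf.Abs(area);   // holes have opposite winding, so they subtract correctly
    }
    // Suggestion 2: estimate the transparent fraction by sampling every nth pixel
    // of a single GetPixels32 call instead of calling GetPixel per pixel.
    public static float TransparentFraction(Texture2D texture, int step = 4, byte alphaThreshold = 8)
    {
        Color32[] pixels = texture.GetPixels32();   // one call, low-precision colors, no float conversions
        int sampled = 0;
        int transparent = 0;
        for (int i = 0; i < pixels.Length; i += step)
        {
            sampled++;
            if (pixels[i].a <= alphaThreshold)
                transparent++;
        }
        return sampled == 0 ? 1f : (float)transparent / sampled;
    }
}
Note that GetPixels32 still requires the texture to have Read/Write enabled in its import settings, and the collider-based area is only as accurate as the outline Unity generates for the sprite.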

How can I make dynamically generated terrain segments fit together Unity

I'm creating my game with dynamically generated terrain. It is a very simple idea: there are always three parts of terrain, the segment the player stands on and the two next to it. When the player moves (always forward) to the next segment, a new one is generated and the last one is cut off. It works with flat planes, but I don't know how to do it with more complex terrain. Should I just make every segment have the same edge on both sides (I'm using Blender for creating assets)? Or is there any other option? Please note that I'm just starting to make games with Unity.
It depends on what you would like your terrain to look like. If you want to create the terrain pieces in something external, like Blender, then yes, all those pieces will have to fit together seamlessly. But that is a lot of work, as you will have to create a lot of pieces that fit together for the landscape to remain interesting.
I would suggest that you rather generate the terrain dynamically in Unity. You can create your own mesh using code. You start by creating an object (in code), and then generating vertex and triangle arrays to assign to the object, for it to have a visible and sensible mesh. You first create vertices at specific positions and then add triangles that consist of 3 vertices at a time. If you want a smooth look instead of a low poly look, you will reuse some vertices for the next triangle, which is a little trickier.
Once you have created your block's mesh, you can begin to change your code to specify how the heights of the vertices are chosen, to give you interesting terrain. As long as the first vertices of your new block are at the same height (say, y position) as the last vertices of your current block (assuming they have the same x and z positions), they will line up. That said, you could make it even simpler by not using separate blocks, but rather by updating your object's mesh to add new vertices and triangles, so that you are creating a terrain that is just one part that changes, rather than having separate blocks.
There are many ways to create interesting terrain. One of the most often used functions to generate semi-random yet interesting terrain is Perlin noise. Another is Ken Perlin's more recent simplex noise. Like most random generator functions, it has a seed value, which you can keep track of so that you can create interesting terrain AND get your block edges to line up, should you still want to use separate blocks rather than a single mesh that dynamically expands.
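As a minimal sketch of that idea (assuming square segments, a made-up segment size and resolution, and Unity's built-in Mathf.PerlinNoise), the code below samples the noise with world-space coordinates offset by where the segment starts, so the last row of one segment and the first row of the next get identical heights and the seams line up automatically.
using UnityEngine;
// Sketch only: builds one square terrain segment as a Mesh whose heights come
// from Perlin noise sampled in world space, so neighbouring segments match.
public static class TerrainSegmentSketch
{
    public static Mesh BuildSegment(int segmentIndex, float size = 20f, int resolution = 21,
                                    float height = 3f, float noiseScale = 0.1f)
    {
        var vertices = new Vector3[resolution * resolution];
        var triangles = new int[(resolution - 1) * (resolution - 1) * 6];
        float step = size / (resolution - 1);
        float worldZStart = segmentIndex * size;   // where this segment begins along the track
        for (int z = 0, v = 0; z < resolution; z++)
        {
            for (int x = 0; x < resolution; x++, v++)
            {
                float wx = x * step;
                float wz = worldZStart + z * step;
                // Sampling in world space is what makes adjacent segments share edge heights.
                float y = Mathf.PerlinNoise(wx * noiseScale, wz * noiseScale) * height;
                vertices[v] = new Vector3(wx, y, wz);
            }
        }
        for (int z = 0, t = 0; z < resolution - 1; z++)
        {
            for (int x = 0; x < resolution - 1; x++, t += 6)
            {
                int i = z * resolution + x;
                triangles[t]     = i;
                triangles[t + 1] = i + resolution;
                triangles[t + 2] = i + 1;
                triangles[t + 3] = i + 1;
                triangles[t + 4] = i + resolution;
                triangles[t + 5] = i + resolution + 1;
            }
        }
        var mesh = new Mesh { vertices = vertices, triangles = triangles };
        mesh.RecalculateNormals();
        return mesh;
    }
}
Assigning the returned Mesh to a MeshFilter on an empty GameObject is enough to see it; cutting off the oldest segment is then just destroying that segment's object, while the world-space noise keeps every new segment consistent with its neighbour.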
I am sure there are many tutorials online about noise functions for procedural landscape generation. Amit Patel's tutorials are good visual and interactive explanations; here is one of his tutorials about noise-based landscapes. Take a look at his other great tutorials as well. There will be many tutorials on dynamic mesh generation too; just do a Google search -- a quick look tells me that CatLikeCoding's Procedural Grid tutorial will probably be all you need.

Why does merging geometries improve rendering speed?

In my web application I only need to add static objects to my scene. It was slow, so I started searching and found that merging geometries and merging vertices were the solution. When I implemented it, it indeed worked a lot better. All the articles said that the reason for this improvement is the decrease in the number of WebGL calls. As I am not very familiar with things like OpenGL and WebGL (I use Three.js to avoid their complexity), I would like to know why exactly it reduces the WebGL calls.
Because you send one large object instead of many little ones, the overhead is reduced. So I understand that loading one big mesh into the scene is faster than loading many small meshes.
BUT I do not understand why merging geometries also has a positive influence on the rendering calculation. I would also like to know the difference between merging geometries and merging vertices.
Thanks in advance!
three.js is a framework that helps you work with the WebGL API.
What a "mesh" is to three.js, to webgl, it's a series of low level calls that set up state and issue calls to the GPU.
Let's take a sphere for example. With three.js you would create it with a few lines:
var sphereGeometry = new THREE.SphereGeometry(10);
var sphereMaterial = new THREE.MeshBasicMaterial({color:'red'});
var sphereMesh = new THREE.Mesh( sphereGeometry, sphereMaterial);
myScene.add( sphereMesh );
You have your renderer.render() call, and poof, a sphere appears on screen.
A lot of stuff happens under the hood though.
The first line creates the sphere "geometry": the CPU will do a bunch of math and logic to describe a sphere with points and triangles. Points are vectors, three floats grouped together; triangles are a structure that groups these points by indices (groups of integers).
Somewhere there is a loop that calculates the vectors based on trigonometry (sin, cos), and another that weaves the resulting array of vectors into triangles (take every N, N + M, N + 2M, create a triangle, and so on).
Now these numbers exist in javascript land, it's just a bunch of floats and ints, grouped together in a specific way to describe shapes such as cubes, spheres and aliens.
You need a way to draw this construct on a screen - a two dimensional array of pixels.
WebGL does not actually know much about 3D. It knows how to manage memory on the GPU and how to compute things in parallel (or gives you the tools to), and it knows how to do the mathematical operations that are crucial for 3D graphics, but the same math could be used to mine bitcoins without ever drawing anything.
In order for WebGL to draw something on screen, it first needs the data put into appropriate buffers; it needs the shader programs; it needs to be set up for that specific call (is there going to be blending - transparency in three.js land - depth testing, stencil testing, etc.); then it needs to know what it's actually drawing (so you need to provide strides, sizes of attributes, etc. to let it know where a 'mesh' actually is in memory), how it's drawing it (triangle strips, fans, points...) and what to draw it with - which shaders it will apply to the data you provided.
So, you need a way to 'teach' WebGL to do 3d.
I think the best way to get familiar with this concept is to look at this tutorial, re-reading it if necessary, because it explains what happens to pretty much every single 3d object drawn in perspective.
To sum up the tutorial:
a perspective camera is basically two 4x4 matrices - a perspective matrix, which puts things into perspective, and a view matrix, which moves the entire world into camera space. Every camera you make consists of these two matrices.
Every object exists in its own object space. A TRS (translate/rotate/scale) matrix - the world matrix in three.js terms - is used to transform this object into world space.
So this stuff - a concept such as the projection matrix - is what teaches WebGL how to draw perspective.
Three.js abstracts this further and gives you things like "field of view" and "aspect ratio" instead of left/right and top/bottom planes.
Three.js also abstracts the transformation matrices (view matrix on the camera, and world matrices on every object) because it allows you to set "position" and "rotation" and computes the matrix based on this under the hood.
Since every mesh has to be processed by the vertex shader and the pixel shader in order to appear on the screen, every mesh needs to have all this information available.
When a draw call is being issued for a specific mesh, that mesh will have the same perspective matrix and view matrix as any other object being rendered with the same camera. They will each have their own world matrices - numbers that move them around your scene.
This is transformation alone, happening in the vertex shader. These results are then rasterized, and go to the pixel shader for processing.
Let's consider two materials - black plastic and red plastic. They will have the same shader, perhaps one you wrote using THREE.ShaderMaterial, or maybe one from three's library. It's the same shader, but it has one uniform value exposed - color. This allows you to have many instances of a plastic material (green, blue, pink), but it means that each of these requires a separate draw call.
Webgl will have to issue specific calls to change that uniform from red to black, and then it's ready to draw stuff using that 'material'.
So now imagine a particle system displaying a thousand cubes, each with a unique color. You have to issue a thousand draw calls to draw them all if you treat them as separate meshes and change colors via a uniform.
If, on the other hand, you assign vertex colors to each cube, you don't rely on the uniform any more, but on an attribute. Now if you merge all the cubes together, you can issue a single draw call, processing all the cubes with the same shader.
You can see why this is more efficient simply by taking a glance at webglrenderer from three.js, and all the stuff it has to do in order to translate your 3d calls to webgl. Better done once than a thousand times.
Back to those 3 lines: the sphereMaterial can take a color argument, and if you look at the source, this will translate to a uniform vec3 in the shader. However, you can also achieve the same thing with vertex colors, by assigning the color you want beforehand.
sphereMesh will wrap that computed geometry into an object that three's webglrenderer understands, which in turn sets up webgl accordingly.
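Since the rest of this page is about Unity, here is the same uniform-versus-attribute idea sketched as Unity C# rather than three.js code: many small meshes that each want their own color are merged into one mesh, and the per-object color uniform is replaced by per-vertex colors. This is only an analogue, under the assumption that the material's shader reads vertex colors; the field names are illustrative.
using UnityEngine;
// Sketch of "merge geometry and use vertex colors instead of a per-object
// color uniform", expressed with Unity's Mesh.CombineMeshes.
public class MergeWithVertexColorsSketch : MonoBehaviour
{
    public MeshFilter[] pieces;   // many small meshes, e.g. cubes
    public Color[] pieceColors;   // one color per piece
    void Start()
    {
        var combine = new CombineInstance[pieces.Length];
        for (int i = 0; i < pieces.Length; i++)
        {
            Mesh source = Instantiate(pieces[i].sharedMesh);   // copy so we can write colors
            // Bake the per-piece color into the vertices instead of setting a uniform per draw.
            var colors = new Color[source.vertexCount];
            for (int v = 0; v < colors.Length; v++)
                colors[v] = pieceColors[i];
            source.colors = colors;
            combine[i].mesh = source;
            combine[i].transform = pieces[i].transform.localToWorldMatrix;
        }
        var merged = new Mesh();
        merged.CombineMeshes(combine);   // one big mesh, so one draw call for all pieces
        GetComponent<MeshFilter>().sharedMesh = merged;
    }
}
The trade-off is the same as in the three.js case: you give up per-object uniforms (and per-object culling) in exchange for far fewer draw calls.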

Shader-coding: nonlinear projection models

As I understand it, the standard projection model places an imaginary grid in front of the camera, and for each triangle in the scene, determines which 3 pixels its 3 corners project onto. The color is determined for each of these points, and the fragment shader fills in the rest using interpolation.
My question is this: is it possible to gain control over this projection model? For example, create my own custom distorted uv-grid? Or even just supply my own algorithm:
xyPixelPos_for_Vector3( Vector3 v ) {...}
I'm working in Unity3D, so I think that limits me to Cg or OpenGL.
I did once write a GLES2 shader, but I don't remember ever performing any kind of "ray hits quad" type test to resolve the pixel position of a particular 3D point in space.
I'm going to assume that you want to render 3D images based upon 3D primitives that are defined by vertices. This is not the only way to render images with OpenGL, but it is the most common. The technique that you describe sounds much more like ray tracing.
How OpenGL Typically Works:
I wouldn't say that OpenGL creates an imaginary grid. Instead, what it does is take the position of each of your vertices and convert it into a different space using linear algebra (matrices).
If you want to start playing around with this, it would be best to do some reading on Matrices, to understand what the graphics card is doing.
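To see concretely that the projection really is just a matrix, here is a small Unity C# sketch (an illustration only; the parameters simply mirror the camera's existing settings) that reads the camera's projection matrix and replaces it with one built by hand. Because a 4x4 matrix can only express linear/projective mappings, a genuinely distorted grid still needs the vertex-shader route described next.
using UnityEngine;
// Sketch: in Unity the projection model is a 4x4 matrix on the Camera,
// which you can inspect and override from script.
[RequireComponent(typeof(Camera))]
public class ProjectionMatrixPeek : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        Debug.Log(cam.projectionMatrix);   // the matrix built from fov, aspect, near and far
        // Build our own perspective matrix (same parameters here, but you could tweak them)
        // and hand it back to the camera.
        Matrix4x4 custom = Matrix4x4.Perspective(cam.fieldOfView, cam.aspect,
                                                 cam.nearClipPlane, cam.farClipPlane);
        cam.projectionMatrix = custom;
    }
}
Once you set projectionMatrix yourself, the camera stops recomputing it from fieldOfView until you call ResetProjectionMatrix.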
You can easily start warping the positions of Vertices by making a vertex shader. However, there is some setup involved. See the Lighthouse tutorials (http://www.lighthouse3d.com/tutorials/glsl-tutorial/hello-world-in-glsl/) to get started with that! You will also want to read their tutorials on lighting (http://www.lighthouse3d.com/tutorials/glsl-tutorial/lighting/), to create a fully functioning vertex shader which includes a lighting model.
Thankfully, once the shader is set up, you can distort your entire scene to your heart's content. Just remember to do your distortions in the right 'space'. World coordinates are much different from eye coordinates!

OpenGL: optimizing render of quad particles

I'm rendering particles in a 2D game. Each particle is a quad (2 triangles). How can I make the drawing as fast as possible? All the particles have the same texture; I'm only changing their positions.
Now I'm using a call to glVertexPointer and glDrawArrays for each particle. So I'm sending 4 vertices each time to the GPU.
Is there any other approach that could be faster?
I'm using OpenGL ES 1.1 (iPhone)
Thanks!
Every draw call you make (glDrawArrays) is expensive. Doing this once per particle is DEFINITELY way too often. All your particles can be drawn with a single draw call; just set up a big array of all the triangle verts and another big array with the texture coords, and call glVertexPointer/glDrawArrays once-- that's the power of glVertexPointer: arbitrary geometry of the same type in one call. :)
For what you're doing, you should also look into point sprites (GL_POINTS), which also function as tiny textured quads. They're 2D only, so you can't map your texture into the Z axis, but if your particles are just 2D quads of the same texture over and over, point sprites will likely do exactly what you want.
There's a way to do all of that in one draw routine. I THINK it's by adding an extra vertex after each quad, which is the same as the previous vertex, but I could be wrong.
EDIT: After looking into it a bit, it looks like you need two in between; essentially one after and one before. It does add up to quite a few extra vertices, but I know from experience that it makes a HUGE positive difference on the iPhone to do it all in one draw operation (we were drawing text from a texture, so essentially the same thing).
EDIT2: Also note, I'm referring to using GL_TRIANGLE_STRIP - if you were using GL_TRIANGLES instead, you wouldn't need the extra vertices... except then you'd be sending about the same amount of extra data anyway, since you repeat 2 vertices for each quad's second triangle.