In my web application I only need to add static objects to my scene. It ran slowly, so I started searching and found that merging geometries and merging vertices were the solution. When I implemented it, it indeed worked a lot better. All the articles said that the reason for this improvement is the decrease in the number of WebGL calls. As I am not very familiar with things like OpenGL and WebGL (I use Three.js to avoid their complexity), I would like to know why exactly merging reduces the number of WebGL calls.
Because you send one large object instead of many small ones, the overhead is reduced. So I understand that loading one big mesh into the scene is faster than loading many small meshes.
But I do not understand why merging geometries also has a positive influence on the rendering itself. I would also like to know the difference between merging geometries and merging vertices.
Thanks in advance!
three.js is a framework that helps you work with the WebGL API.
What a "mesh" is to three.js, to webgl, it's a series of low level calls that set up state and issue calls to the GPU.
Let's take a sphere for example. With three.js you would create it with a few lines:
var sphereGeometry = new THREE.SphereGeometry(10);
var sphereMaterial = new THREE.MeshBasicMaterial({color:'red'});
var sphereMesh = new THREE.Mesh( sphereGeometry, sphereMaterial);
myScene.add( sphereMesh );
You have your renderer.render() call, and poof, a sphere appears on screen.
A lot of stuff happens under the hood though.
The first line creates the sphere "geometry": the CPU runs a bunch of math and logic to describe a sphere with points and triangles. Points are vectors - three floats grouped together; triangles are structures that group these points by indices (groups of integers).
Somewhere there is a loop that calculates the vectors using trigonometry (sin, cos), and another that weaves the resulting array of vectors into triangles (take every N, N + M, N + 2M, create a triangle, and so on).
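Roughly, those two loops look something like this (just an illustrative sketch of the idea, not the actual SphereGeometry source):

// Sketch: walk latitude/longitude angles and emit one point per step.
var radius = 10, rings = 16, segments = 16;
var points = [];
for (var r = 0; r <= rings; r++) {
  var phi = (r / rings) * Math.PI;              // 0..PI, pole to pole
  for (var s = 0; s <= segments; s++) {
    var theta = (s / segments) * Math.PI * 2;   // 0..2PI, around the equator
    points.push(new THREE.Vector3(
      radius * Math.sin(phi) * Math.cos(theta),
      radius * Math.cos(phi),
      radius * Math.sin(phi) * Math.sin(theta)
    ));
  }
}
// A second loop would then index these points into triangles
// (point p, p + 1, p + segments + 1, and so on) to build the faces.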
Now these numbers exist in JavaScript land; it's just a bunch of floats and ints, grouped together in a specific way to describe shapes such as cubes, spheres and aliens.
You need a way to draw this construct on a screen - a two dimensional array of pixels.
WebGL does not actually know much about 3D. It knows how to manage memory on the GPU and how to compute things in parallel (or rather, it gives you the tools). It knows how to do the mathematical operations that are crucial for 3D graphics, but the same math could be used to mine bitcoins without ever drawing anything.
In order for WebGL to draw something on screen, it first needs the data put into the appropriate buffers, and it needs the shader programs. It needs to be set up for that specific call (will there be blending - transparency in three.js land - depth testing, stencil testing, etc.). Then it needs to know what it is actually drawing (you provide strides, attribute sizes, etc. so it knows where a 'mesh' actually sits in memory), how it is drawing it (triangle strips, fans, points...), and what to draw it with - which shaders it will apply to the data you provided.
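To give a feel for that setup, here is a rough sketch of the raw WebGL calls behind drawing a single mesh (assuming gl, a compiled program and a filled positionBuffer already exist; the location and matrix variable names here are made up for illustration):

gl.useProgram(program);                        // which shaders to run
gl.enable(gl.DEPTH_TEST);                      // per-call state: depth, blending, ...
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(positionLocation);  // where the mesh lives in memory
gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, 0, 0); // and its layout
gl.uniformMatrix4fv(projectionLocation, false, projectionMatrix);
gl.uniformMatrix4fv(viewLocation, false, viewMatrix);
gl.uniformMatrix4fv(worldLocation, false, worldMatrix);
gl.uniform3f(colorLocation, 1, 0, 0);          // the material's "color" uniform
gl.drawArrays(gl.TRIANGLES, 0, vertexCount);   // finally: draw it

Every separate mesh repeats some or all of these calls; a merged mesh pays this cost once.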
So, you need a way to 'teach' WebGL to do 3d.
I think the best way to get familiar with this concept is to look at this tutorial, re-reading it if necessary, because it explains what happens to pretty much every single 3D object drawn in perspective.
To sum up the tutorial:
A perspective camera is basically two 4x4 matrices: a perspective (projection) matrix, which puts things into perspective, and a view matrix, which moves the entire world into camera space. Every camera you make consists of these two matrices.
Every object exists in its own object space. A TRS (translate-rotate-scale) matrix - the world matrix in three.js terms - is used to transform this object into world space.
So this stuff - a concept such as the "projection matrix" - is what teaches WebGL how to draw perspective.
Three.js abstracts this further and gives you things like "field of view" and "aspect ratio" instead of left, right, top and bottom planes.
Three.js also abstracts the transformation matrices (the view matrix on the camera, and the world matrices on every object): it lets you set "position" and "rotation", and computes the matrices from these under the hood.
Since every mesh has to be processed by the vertex shader and the pixel shader in order to appear on the screen, every mesh needs to have all this information available.
When a draw call is issued for a specific mesh, that mesh will have the same perspective matrix and view matrix as any other object being rendered with the same camera. Each will have its own world matrix - the numbers that move it around your scene.
This is the transformation alone, happening in the vertex shader. The results are then rasterized and passed to the pixel shader for processing.
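Putting those pieces together, the position a vertex ends up at on screen comes from one chained multiplication, where only the last matrix differs per mesh:

clipPosition = projectionMatrix * viewMatrix * worldMatrix * localPosition

The first two matrices come from the camera and are shared by every object drawn with it; the world matrix is the per-object part.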
Let's consider two materials: black plastic and red plastic. They will have the same shader, perhaps one you wrote using THREE.ShaderMaterial, or maybe one from three.js's library. It's the same shader, but it has one uniform value exposed - color. This allows you to have many instances of a plastic material - green, blue, pink - but it means that each of them requires a separate draw call.
WebGL has to issue specific calls to change that uniform from red to black, and only then is it ready to draw with that 'material'.
So now imagine a particle system displaying a thousand cubes, each with a unique color. You have to issue a thousand draw calls to draw them all if you treat them as separate meshes and change the color via a uniform.
If, on the other hand, you assign vertex colors to each cube, you no longer rely on the uniform but on an attribute. Now, if you merge all the cubes together, you can issue a single draw call that processes all the cubes with the same shader.
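A rough sketch of that idea (assuming a reasonably recent three.js build; the merge helper's name and import path vary between versions - older builds expose it as THREE.BufferGeometryUtils.mergeBufferGeometries instead):

import * as THREE from 'three';
import { mergeGeometries } from 'three/addons/utils/BufferGeometryUtils.js';

var pieces = [];
for (var i = 0; i < 1000; i++) {
  var box = new THREE.BoxGeometry(1, 1, 1);

  // Bake each cube's position into its vertices, since the merged mesh
  // will only have a single world matrix.
  box.translate(Math.random() * 100, Math.random() * 100, Math.random() * 100);

  // A per-vertex color attribute replaces the per-mesh color uniform.
  var color = new THREE.Color(Math.random(), Math.random(), Math.random());
  var colors = [];
  for (var v = 0; v < box.attributes.position.count; v++) {
    colors.push(color.r, color.g, color.b);
  }
  box.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));

  pieces.push(box);
}

var merged = mergeGeometries(pieces);
var material = new THREE.MeshBasicMaterial({ vertexColors: true });
myScene.add(new THREE.Mesh(merged, material)); // one draw call for all 1000 cubes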
You can see why this is more efficient simply by glancing at three.js's WebGLRenderer and all the work it has to do in order to translate your 3D calls into WebGL. Better done once than a thousand times.
Back to those three lines: sphereMaterial can take a color argument; if you look at the source, this translates to a uniform vec3 in the shader. However, you can achieve the same thing with vertex colors, by assigning the color you want beforehand.
sphereMesh wraps that computed geometry into an object that three.js's WebGLRenderer understands, which in turn sets up WebGL accordingly.
I'm creating a puzzle game that generates randomly sized pieces with 2D meshes. The images contain transparent portions, and sometimes a piece is completely transparent. I need to detect what percentage of a piece is transparent. One way I found to do this is to go pixel by pixel. I posted my solution to this HERE. However, this process adds a few seconds during loading, which I'd like to avoid, so I'm looking for other ideas.
I've considered using the selection outline of a MeshCollider to somehow get a surface area I can compare to the surface area of the mesh, but everything I find is about rendering an outline with specialized shaders. Does anyone have any ideas on how to solve this?
1) I guess you could add a PolygonCollider2D to your sprite and use its Path for the outline and the calculation of the surface area. Not sure, however, whether this will be faster.
PolygonCollider2D.GetPath:
A path is a cyclic sequence of line segments between points that define the outline of the Collider
Checking PolygonCollider2D.GetTotalPointCount or path length may be good enough to determine if the sprite is 'empty'.
Sprite.vertices and Sprite.triangles may also be helpful.
2) You could also improve the performance of your first approach:
Instead of calling GetPixel as you do now, use GetPixels or GetPixels32 and loop through the returned array in a single for loop (the counting itself could look like the sketch after this list).
Using GetPixels can be faster than calling GetPixel repeatedly, especially for large textures. In addition, GetPixels can access individual mipmap levels. For most textures, even faster is to use GetPixels32 which returns low precision color data without costly integer-to-float conversions.
Check only every 2nd or nth pixel, as that should be good enough for an approximation.
Limit the number of type casts.
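The Unity-specific calls aside, the counting pass itself is just one loop over a flat color array. A rough sketch of the idea (written here in plain JavaScript against a raw RGBA byte array, such as one from a canvas getImageData call, purely to illustrate the sampling loop - in Unity you would do the same thing over the Color32[] returned by GetPixels32):

function transparentFraction(rgba, step) {
  // rgba: flat [r, g, b, a, r, g, b, a, ...] byte array; step: check every nth pixel.
  var transparent = 0, sampled = 0;
  for (var i = 3; i < rgba.length; i += 4 * step) { // i indexes the alpha channel
    sampled++;
    if (rgba[i] === 0) transparent++;               // or: below some alpha threshold
  }
  return sampled > 0 ? transparent / sampled : 0;
}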
I am tinkering around with cubes trying to build variations of 'block types' (in an effort to get more familiar with Unity's abilities, shaders, editor tools etc).
I have a generic cube:
I want to add a material/shader to it, which I have done (no problem there):
It looks good enough (for my purposes) when it's just one block, but when I stick several of them together I don't like the effect: you can see the individual boxes, and the shader (which you can't tell from the still image) is actually animated water, so when it's animating it looks... pretty ugly.
(Bad/undesired)
I am trying to STRETCH or share the shader/material across all the selected blocks. See the below example (in this case, I have taken a SINGLE block and stretched it, but that's not keeping with the spirit of having individual blocks, so also not what I want).
(better/more desired)
I have thought the following might help, but they all seem overly complicated (i.e. I think I'm going about it incorrectly):
Have the individual blocks, but stretch a single plane across them and then apply the material.
I have found examples of programmatically joining meshes and then applying the material/shader to the single combined object.
Take a single block and stretch it to the dimensions needed.
Maybe (not sure if I can) have a plane with the water material applied to it and use the blocks as masks to only display the water for those blocks? Not sure how that would work...
In the end I am hoping to have the following:
Individual blocks (so I can interact with them).
Shader animations/colors are shared across the shared/connected blocks.
It won't always be a 2x3 grid... it could be diagonal, or contain odd shapes of connected blocks...
(this is all in EDITOR mode).
Any thoughts on how I might approach this?
Phrases you could try searching for are "converting from world space to uv space", "transforming uv coordinates", and "uv math". UV is the name for the coordinates in a texture that a shader samples from, and if you take existing shader code, you can do interesting things by changing the UV(s) it uses. One of those things is "stretching" it.
In your 2x3 cube example, instead of each cube's UVs going from 0 to 1, you could tell each cube to treat its U value as going from 0 to 0.5 or from 0.5 to 1, and its V value as going from 0 to 0.33, 0.33 to 0.67, or 0.67 to 1, depending on where it sits in the grid. You could do this by having properties on the shader that tell it where the UV should start (a) and where it should end (b), and then lerping from (0,0)-(1,1) to a-b.
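In other words, each cube remaps its own 0-1 UVs into its slice of the shared range:

uv' = a + uv * (b - a)

For the 2x3 example, the cube in column i (0 or 1) and row j (0, 1 or 2) would use a = (i/2, j/3) and b = ((i+1)/2, (j+1)/3), so the bottom-left cube covers U 0-0.5 and V 0-0.33, the cube above it V 0.33-0.67, and so on.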
My answer to a different question uses similar logic, comparing the world position of the pixel against a range of world positions to get a UV. The relevant shader code is:
fixed4 colorizedMapUV = (IN.worldPos.xz-_WorldSpaceRange.xy)
/ (_WorldSpaceRange.zw-_WorldSpaceRange.xy);
Another option is to only look at the world position and completely disregard the notion of where the "corners" of the UV should be. A technique called "triplanar mapping" might guide you to a solution that does this.
As I understand it, the standard projection model places an imaginary grid in front of the camera, and for each triangle in the scene, determines which 3 pixels its 3 corners project onto. The color is determined for each of these points, and the fragment shader fills in the rest using interpolation.
My question is this: is it possible to gain control over this projection model? For example, create my own custom distorted uv-grid? Or even just supply my own algorithm:
xyPixelPos_for_Vector3( Vector3 v ) {...}
I'm working in Unity3D, so I think that limits me to Cg or OpenGL.
I did once write a GLES2 shader, but I don't remember ever performing any kind of "ray hits quad" type test to resolve the pixel position of a particular 3D point in space.
I'm going to assume that you want to render 3D images based on 3D primitives defined by vertices. This is not the only way to render images with OpenGL, but it is the most common. The technique you describe sounds much more like ray tracing.
How OpenGL Typically Works:
I wouldn't say that OpenGL creates an imaginary grid. Instead, it takes the positions of each of your vertices and converts them into a different space using linear algebra (matrices).
If you want to start playing around with this, it would be best to do some reading on matrices to understand what the graphics card is doing.
You can easily start warping vertex positions by writing a vertex shader. However, there is some setup involved. See the Lighthouse tutorials (http://www.lighthouse3d.com/tutorials/glsl-tutorial/hello-world-in-glsl/) to get started with that! You will also want to read their tutorials on lighting (http://www.lighthouse3d.com/tutorials/glsl-tutorial/lighting/) to create a fully functioning vertex shader that includes a lighting model.
Thankfully, once the shader is set up, you can distort your entire scene to your heart's content. Just remember to do your distortions in the right 'space': world coordinates are quite different from eye coordinates!
I'm developing an image warping iOS app with OpenGL ES 2.0.
I have a good grasp on the setup, the pipeline, etc., and am now moving along to the math.
Since my experience with image warping is nil, I'm reaching out for some algorithm suggestions.
Currently, I'm setting the initial vertices at points in a grid type fashion, which equally divide the image into squares. Then, I place an additional vertex in the middle of each of those squares. When I draw the indices, each square contains four triangles in the shape of an X. See the image below:
After playing with Photoshop a little, I noticed Adobe uses a slightly more complicated algorithm for their Puppet Warp, but a much simpler algorithm for their standard warp. Which do you think is best for me to apply here, or is it just personal preference?
Secondly, when I move a vertex, I'd like to apply a weighted transformation to all the other vertices to smooth out the edges (instead of what I have below, where only the selected vertex is transformed). What sort of algorithm should I apply here?
As each vertex is processed independently by the vertex shader, it is not easy to have vertexes influence each other's positions. However, because there are not that many vertexes it should be fine to do the work on the CPU and dynamically update your vertex attributes per frame.
Since what you are looking for is for your surface to act like a rubber sheet as parts of it are pulled, how about going ahead and implementing a dynamic simulation of a rubber sheet? There are plenty of good articles on cloth simulation in full 3D such as Jeff Lander's. Your application could be a simplification of these techniques. I have previously implemented a simulation like this in 3D. I required a force attracting my generated vertexes to their original grid locations. You could have a similar force attracting vertexes to the pixels at which they are generated before the simulation is begun. This would make them spring back to their default state when left alone and would progressively reduce the influence of your dragging at more distant vertexes.
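In its simplest form, the per-vertex update described above boils down to a damped spring toward the rest position (a sketch only; k, c, m and dt are tuning values you would choose yourself):

force    = -k * (position - restPosition) - c * velocity
velocity = velocity + (force / m) * dt
position = position + velocity * dt

with the vertex currently being dragged pinned to the touch position each frame. k controls how strongly vertexes snap back to their grid locations, and c damps the oscillation so the sheet settles when left alone.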
I'm currently trying to implement a silhouette algorithm in my project (using Open GLES, it's for mobile devices, primarily iPhone at the moment). One of the requirements is that a set of 3D lines be drawn. The issue with the default OpenGL lines is that they don't connect at an angle nicely when they are thick (gaps appear). Other subtle artifacts are also evident, which detract from the visual appeal of the lines.
Now, I have looked into using some sort of quad strip as an alternative to this. However, drawing a quad strip in screen space requires some sort of visibility detection - lines obscured in the actual 3D world should not be visible.
There are numerous approaches to this problem - e.g. quantitative invisibility. But such an approach is difficult to implement efficiently, particularly on a mobile device with limited processing power, considering that raycasting needs to be employed. Looking around some more, I found this paper, which describes a couple of methods for using z-buffer sampling to achieve such an effect. However, I'm not an expert in this area, and while I understand the theory behind the techniques to an extent, I'm not sure how to go about the practical implementation. I was wondering if someone could guide me here at a more technical level - on the OpenGL ES side of things. I'm also open to any suggestions regarding 3D line visibility in general.
The z-buffer technique will be too complex for iOS devices - it needs a heavy pixel shader and (IMHO) it will introduce some visual artifacts.
If your models are not complex, you can find the geometric silhouette at runtime - for example by comparing the normals of polygons that share an edge: if their directions in view space have different signs along z (one normal points toward the camera and the other away from it), then that edge should be used for the silhouette.
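In equation form: for an edge shared by two faces with normals n1 and n2, and v the view-space direction from the camera to the edge, the edge belongs to the silhouette when (n1 · v) * (n2 · v) < 0, i.e. the two dot products have opposite signs.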
Another approach is more "FPS friendly": keep an extruded version of your model, render the extruded model first in the silhouette color (without textures and lighting), and then render the normal model over it. You will need more memory for vertices, but no real-time computations.
PS: In all the games I have looked at, the silhouettes were geometric.
I have worked out a solution that works nicely on an iPhone 4S (not tested on any other devices). It builds on the idea of rendering world-space quads, and does the silhouette detection all on the GPU. It works along these lines (pun not intended):
We generate edge information. This consists of a list of edges/"lines" in the mesh, and for each we associate two normals which represent the tris on either side of the edge.
This is processed into a set of quads that are uploaded to the GPU - each quad represents an edge. Each vertex of each quad is accompanied by three attributes (vec3s): the edge direction vector and the two neighboring tri normals. All quads are passed without "thickness" - i.e. the vertices on either end are in the same position. However, the edge direction vector is opposite for the two vertices sharing a position, which means they will extrude in opposite directions to form a quad when required.
We determine whether a vertex is part of a visible edge in the vertex shader by performing two dot products between each tri norm and the view vector and checking if they have opposite signs. (see standard silhouette algorithms around the net for details)
For vertices that are part of visible edges, we take the cross product of the edge direction vector with the view vector to get a screen-oriented "extrusion" vector. We add this vector to the vertex, but divided by the w value of the projected vertex in order to create a constant thickness quad.
This does not directly resolve the gaps that can appear between neighboring edges, but it is far more flexible when it comes to combating them. One solution may involve bridging the vertices of lines that meet at a large angle with another quad, which I am exploring at the moment.