Understanding voxel terrain - Unity3D

I can't understand how to create a procedural world. I found an article about voxel terrain generation, but it's only videos or pictures, and I found some terrain engines, but I can't understand them. I want a step-by-step tutorial on how to create a simple voxel object. I want to create terrain, but not cube terrain.

There are two very different approaches to this.
Voxel terrain stores solid information - not just the surface of the terrain but the whole volume. The perfect example of this is Minecraft, where the 'terrain' includes not just the surface but caves and tunnels as well. 3DCoat is a sculpting program that uses voxels; it's a good way to see what can and can't be done with them. The ability to represent any 3D volume is the big advantage of voxels.
Traditional surface terrain stores only the surface: there's nothing underneath. This surface could be a polygon mesh, but most often it's a regular grid of quads (4-sided polygons) that is procedurally generated from a heightmap (a bitmap that stores heights instead of colors). Polygon terrains are basically just regular 3D models that happen to look like terrain; heightmap terrains can be easier to work with because they are easier to 'sculpt' quickly, and they can be procedurally modified for things like explosion craters or erosion. Good examples of heightmap terrain tools are the Unity terrain editor or a standalone program like Vue.
In general, voxels are much more expensive than heightmap or polygon terrain - a 1 km by 1 km heightmap at 1 meter resolution is 1 million samples, while a 1 km by 1 km voxel terrain that runs 1 km deep would be 1 billion samples (!). That can be cut down by smart encoding (the hot trend here is sparse voxel octrees), but it's still a lot of data to manage. That's one of the reasons Minecraft has to be so blocky.
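To put rough numbers on that (a back-of-the-envelope sketch; the one-byte-per-sample figure is just an assumption for illustration):

// Sample counts for the example above (illustrative sketch)
long heightmapSamples = 1000L * 1000L;           // 1 km x 1 km at 1 m resolution = 1,000,000 samples
long voxelSamples     = 1000L * 1000L * 1000L;   // the same area, 1 km deep = 1,000,000,000 samples
// At one byte per sample that is roughly 1 MB for the heightmap versus 1 GB for the raw voxel volume.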
You could generate either voxels or heightmaps procedurally or by hand. Vterrain.org is a great resource for different terrain techniques.

Related

Implementing weights in a shader for procedural terrain generation

So I'm working on procedural generation in Unity and I've got to the point where I'm working on blending between two chunks of terrain (each chunk is a square, for reference).
To do this with the height, I generated a three-dimensional array of size:
[numOfXVerts, numOfYVerts, numOfTerrains]
This is done so that [x, y, 0] stores the weighting of the forest terrain, [x, y, 1] stores the weighting of the mountainous terrain, and so on. This works fine for the height, as I don't need a shader for that; however, when using shaders I realised that I can't use a three-dimensional array inside a shader.
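Concretely, the layout described above is something like this (a sketch; apart from the array dimensions taken from the question, every name here is assumed):

// Sketch of the weight layout described in the question (illustrative only)
float[,,] weights = new float[numOfXVerts, numOfYVerts, numOfTerrains];
weights[x, y, 0] = forestWeight;      // weighting of the forest terrain at this vertex
weights[x, y, 1] = mountainWeight;    // weighting of the mountainous terrain at this vertex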
So my question is two-part: is there any conventional way for me to use weights the way that I have set them up in a shader? And how is blending from one type of terrain to another usually done in procedural generation?

How to texture mesh? Shader vs. generated texture

I managed to create a map divided into chunks, each one holding a mesh generated using Perlin noise and so on - the basic procedural map method shown in multiple tutorials.
At this point I took a look at surface shaders and managed to write one which fades between multiple textures depending on the vertex heights.
This gives me a map which is colored smoothly.
In the tutorials I watched they seem to use different methods to texture a mesh. In this one, for example, a texture is generated for each mesh. This texture holds a different color depending on the noise value. The texture is applied to the mesh, and after that the mesh vertices are displaced depending on the z-value.
This results in a map with sharper borders between the colors, giving the whole thing a different look. I believe there is a way to create smoother transitions between the tile colors by fading them like I do in my shader.
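For reference, the per-mesh texture generation described above might look roughly like this (a sketch; the sizes, thresholds, colors and variable names are assumptions, not taken from the tutorial):

// Sketch of the "texture map" approach: bake the noise into a per-chunk texture (illustrative only)
int width = 128, height = 128;
float noiseScale = 0.05f;
Color waterColor = new Color(0.2f, 0.4f, 0.8f);
Color grassColor = new Color(0.3f, 0.6f, 0.3f);
Color rockColor  = new Color(0.5f, 0.45f, 0.4f);

Texture2D tex = new Texture2D(width, height);
for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        float n = Mathf.PerlinNoise(x * noiseScale, y * noiseScale);   // the same noise that drives the heights
        tex.SetPixel(x, y, n < 0.4f ? waterColor : (n < 0.7f ? grassColor : rockColor));
    }
}
tex.Apply();
GetComponent<MeshRenderer>().material.mainTexture = tex;   // the vertices are then displaced by the same noise value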
My question is simply: what are the pros and cons of these methods? Let's call them "shader" and "texture map". I am lost right now, not knowing which direction to go in.

Unity how to paint a mesh in different colors with gradient

I want to generate low-poly terrain like in the picture below.
I've done the mesh generator, but I cannot imagine how to apply colors to that mesh - for example, brown on high-angle (steep) points and light colors on flat points - and how to make a gradient between the colors. Can someone advise me on what I should learn?
The terrain data holds... well, the data for the terrain.
Exactly what the terrain data holds can be found here:
http://docs.unity3d.com/ScriptReference/TerrainData.html
You will need to do a bit of coding.
Terrain data can be accessed with something like:
TerrainData terrainData = Terrain.activeTerrain.terrainData;
And here is a post that may be useful:
http://answers.unity3d.com/questions/12835/how-to-automatically-apply-different-textures-on-t.html
The code given in that post creates a 3D array in which to store your splatmap data, and then uses the terrain data and some logic (based on the elevation of that particular patch of terrain) to 'splatter' textures on, giving a more realistic look.
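As a rough illustration of that idea (a sketch only - the thresholds and layer order are assumptions, and this is not the exact code from the linked post):

// Sketch: fill a splatmap so that low areas use texture layer 0 and high areas use layer 1
TerrainData terrainData = Terrain.activeTerrain.terrainData;
int w = terrainData.alphamapWidth;
int h = terrainData.alphamapHeight;
float[,,] splat = new float[h, w, terrainData.alphamapLayers];

for (int y = 0; y < h; y++)
{
    for (int x = 0; x < w; x++)
    {
        // sample the terrain height at the matching heightmap position
        float height = terrainData.GetHeight(
            Mathf.RoundToInt((float)x / w * terrainData.heightmapResolution),
            Mathf.RoundToInt((float)y / h * terrainData.heightmapResolution));

        int layer = height < 50f ? 0 : 1;   // assumed cutoff: e.g. grass below 50 units, rock above
        splat[y, x, layer] = 1f;
    }
}
terrainData.SetAlphamaps(0, 0, splat);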
Here is a random example of a terrain using a splatmap (found via Google):
http://www.cygengames.com/images/terrainBeautyShot09_720.png
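For a generated low-poly mesh like the one in the question (rather than a Unity Terrain), a minimal vertex-color sketch might look like this - every name, color and threshold here is an assumption, and a material that reads vertex colors is still needed to display the result:

// Sketch: color each vertex from a gradient based on how steep the surface is there
Mesh mesh = GetComponent<MeshFilter>().mesh;
Vector3[] normals = mesh.normals;
Color[] colors = new Color[mesh.vertexCount];

Gradient slopeGradient = new Gradient();
slopeGradient.SetKeys(
    new GradientColorKey[] {
        new GradientColorKey(new Color(0.65f, 0.8f, 0.45f), 0f),   // light green on flat areas
        new GradientColorKey(new Color(0.4f, 0.3f, 0.2f), 1f) },   // brown on steep areas
    new GradientAlphaKey[] {
        new GradientAlphaKey(1f, 0f),
        new GradientAlphaKey(1f, 1f) });

for (int i = 0; i < colors.Length; i++)
{
    float steepness = 1f - Mathf.Clamp01(Vector3.Dot(normals[i], Vector3.up));  // 0 = flat, 1 = vertical
    colors[i] = slopeGradient.Evaluate(steepness);
}
mesh.colors = colors;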

Why does merging geometries improve rendering speed?

In my web application I only need to add static objects to my scene. It worked slowly, so I started searching and found that merging geometries and merging vertices were the solution. When I implemented it, it indeed worked a lot better. All the articles said that the reason for this improvement is the decrease in the number of WebGL calls. As I am not very familiar with things like OpenGL and WebGL (I use Three.js to avoid their complexity), I would like to know why exactly it reduces the WebGL calls.
Because you send one large object instead of many little ones, the overhead is reduced. So I understand that loading one big mesh into the scene is faster than loading many small meshes.
BUT I do not understand why merging geometries also has a positive influence on the rendering calculations. I would also like to know the difference between merging geometries and merging vertices.
Thanks in advance!
three.js is a framework that helps you work with the WebGL API.
What a "mesh" is to three.js, to webgl, it's a series of low level calls that set up state and issue calls to the GPU.
Let's take a sphere for example. With three.js you would create it with a few lines:
var sphereGeometry = new THREE.SphereGeometry(10);
var sphereMaterial = new THREE.MeshBasicMaterial({color:'red'});
var sphereMesh = new THREE.Mesh( sphereGeometry, sphereMaterial);
myScene.add( sphereMesh );
You have your renderer.render() call, and poof, a sphere appears on screen.
A lot of stuff happens under the hood though.
The first line creates the sphere "geometry" - the CPU will do a bunch of math and logic describing a sphere with points and triangles. Points are vectors - three floats grouped together; triangles are a structure that groups these points by indices (groups of integers).
Somewhere there is a loop that calculates the vectors based on trigonometry (sin, cos), and another that weaves the resulting array of vectors into triangles (take every N, N + M, N + 2M and create a triangle, and so on).
Now these numbers exist in JavaScript land; it's just a bunch of floats and ints, grouped together in a specific way to describe shapes such as cubes, spheres and aliens.
You need a way to draw this construct on a screen - a two dimensional array of pixels.
WebGL does not actually know much about 3D. It knows how to manage memory on the GPU and how to compute things in parallel (or gives you the tools to), and it knows how to do the mathematical operations that are crucial for 3D graphics - but the same math could be used to mine bitcoins without ever drawing anything.
In order for WebGL to draw something on screen, it first needs the data put into the appropriate buffers; it needs the shader programs; it needs to be set up for that specific call (is there going to be blending - transparency in three.js land - depth testing, stencil testing, etc.); it needs to know what it's actually drawing (so you need to provide strides, attribute sizes and so on, to let it know where a 'mesh' actually is in memory) and how it's drawing it (triangle strips, fans, points...); and finally it needs to know what to draw it with - which shaders it will apply to the data you provided.
So, you need a way to 'teach' WebGL to do 3d.
I think the best way to get familiar with this concept is to look at this tutorial, re-reading it if necessary, because it explains what happens for pretty much every 3D object drawn in perspective, ever.
To sum up the tutorial:
a perspective camera is basically two 4x4 matrices - a projection matrix, which puts things into perspective, and a view matrix, which moves the entire world into camera space. Every camera you make consists of these two matrices.
Every object exists in its own object space. A TRS (translate-rotate-scale) matrix - the world matrix in three.js terms - is used to transform this object into world space.
So this stuff - a concept such as the "projection matrix" - is what teaches WebGL how to draw perspective.
Three.js abstracts this further and gives you things like "field of view" and "aspect ratio" instead of left, right, top and bottom.
Three.js also abstracts the transformation matrices (the view matrix on the camera, and a world matrix on every object) because it allows you to set "position" and "rotation" and computes the matrices from those under the hood.
Since every mesh has to be processed by the vertex shader and the pixel shader in order to appear on the screen, every mesh needs to have all this information available.
When a draw call is issued for a specific mesh, that mesh will have the same projection matrix and view matrix as any other object being rendered with the same camera. They will each have their own world matrix - the numbers that move them around your scene.
This is the transformation alone, happening in the vertex shader. The results are then rasterized and go to the pixel shader for processing.
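Putting those matrices together, the on-screen position of every vertex boils down to one multiplication chain (roughly, using the terms above):

clipSpacePosition = projectionMatrix * viewMatrix * worldMatrix * localPosition

The vertex shader computes exactly this product; everything afterwards (rasterization, the pixel shader) works on the result.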
Let's consider two materials - black plastic and red plastic. They will have the same shader, perhaps one you wrote using THREE.ShaderMaterial, or maybe one from three's library. It's the same shader, but it has one uniform value exposed - color. This allows you to have many instances of a plastic material - green, blue, pink - but it means that each of these requires a separate draw call.
WebGL has to issue specific calls to change that uniform from red to black, and only then is it ready to draw stuff using that 'material'.
So now imagine a particle system displaying a thousand cubes, each with a unique color. If you treat them as separate meshes and change colors via a uniform, you have to issue a thousand draw calls to draw them all.
If, on the other hand, you assign vertex colors to each cube, you no longer rely on the uniform but on an attribute. Now if you merge all the cubes together, you can issue a single draw call that processes all the cubes with the same shader.
You can see why this is more efficient simply by glancing at WebGLRenderer in three.js and all the stuff it has to do in order to translate your 3D calls into WebGL. Better done once than a thousand times.
Back to those few lines above: sphereMaterial can take a color argument; if you look at the source, this translates to a uniform vec3 in the shader. However, you can also achieve the same thing with vertex colors, by assigning the color you want beforehand.
sphereMesh wraps the computed geometry in an object that three's WebGLRenderer understands, which in turn sets up WebGL accordingly.

Shader-coding: nonlinear projection models

As I understand it, the standard projection model places an imaginary grid in front of the camera, and for each triangle in the scene, determines which 3 pixels its 3 corners project onto. The color is determined for each of these points, and the fragment shader fills in the rest using interpolation.
My question is this: is it possible to gain control over this projection model? For example, create my own custom distorted uv-grid? Or even just supply my own algorithm:
xyPixelPos_for_Vector3( Vector3 v ) {...}
I'm working in Unity3D, so I think that limits me to Cg or OpenGL.
I did once write a GLES2 shader, but I don't remember ever performing any kind of "ray hits quad" type test to resolve the pixel position of a particular 3D point in space.
I'm going to assume that you want to render 3D images based upon 3D primitives that are defined by vertices. This is not the only way to render images with OpenGL, but it is the most common. The technique that you describe sounds much more like ray tracing.
How OpenGL Typically Works:
I wouldn't say that OpenGL creates an imaginary grid. Instead, what it does is take the position of each of your vertices and convert it into a different space using linear algebra (matrices).
If you want to start playing around with this, it would be best to do some reading on Matrices, to understand what the graphics card is doing.
You can easily start warping the positions of Vertices by making a vertex shader. However, there is some setup involved. See the Lighthouse tutorials (http://www.lighthouse3d.com/tutorials/glsl-tutorial/hello-world-in-glsl/) to get started with that! You will also want to read their tutorials on lighting (http://www.lighthouse3d.com/tutorials/glsl-tutorial/lighting/), to create a fully functioning vertex shader which includes a lighting model.
Thankfully, once the shader is set up, you can distort your entire scene to your heart's content. Just remember to do your distortions in the right 'space' - world coordinates are very different from eye coordinates!