Procedurally generated cave in Unreal Engine 4

I am making a procedurally generated cave in UE4. I have a question regarding the most optimised way to do it. I already have a function that creates the vertices and triangles for a cave section from a point A to a point B (the final cave consists of multiple cave sections).
My question is, should the entire cave consist of:
A single procedural mesh component that contains all the cave sections in a single Mesh Section
A single procedural mesh component that has a different Mesh Section for every cave section
Multiple procedural mesh components, one for each cave section
Multiple child actors, each representing a cave section
Thank you in advance.

I would advise against using a single large mesh for the following reasons:
The entire mesh needs to reside in GPU memory whenever any part of it is visible.
UV, texture, normal and shadow mapping resolution is reduced.
LOD is impractical.
I would split each cave section into its own actor depending on poly count and visibility.
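To make that concrete, here is an engine-agnostic sketch (plain JavaScript, not UE4 API) of treating each cave section as its own chunk with its own rough bounds, so each one can be culled, streamed and LOD-swapped independently. buildSectionGeometry stands in for your existing A-to-B function, and the padding added to the radius is just a placeholder for your cave's width:
// Engine-agnostic sketch: one chunk per cave section. In UE4 each chunk would
// become its own actor or its own ProceduralMeshComponent / mesh section.
function buildCaveChunks(sectionEndpoints, buildSectionGeometry) {
  return sectionEndpoints.map(function (pair) {
    var a = pair[0], b = pair[1];
    return {
      // crude bounding sphere: section midpoint plus half its length, padded
      // by an assumed maximum cave radius (tune to your generator)
      center: { x: (a.x + b.x) / 2, y: (a.y + b.y) / 2, z: (a.z + b.z) / 2 },
      radius: 0.5 * Math.hypot(b.x - a.x, b.y - a.y, b.z - a.z) + 500,
      geometry: buildSectionGeometry(a, b), // your existing vertex/triangle builder
      visible: false
    };
  });
}
// Only chunks near the player need to stay resident on the GPU; distant ones
// can be hidden, released, or swapped to a lower LOD.
function updateChunkVisibility(chunks, playerPos, maxDistance) {
  chunks.forEach(function (c) {
    var d = Math.hypot(playerPos.x - c.center.x,
                       playerPos.y - c.center.y,
                       playerPos.z - c.center.z);
    c.visible = (d - c.radius) < maxDistance;
  });
}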

Related


Dynamically creating a mesh with a hole in it from two lists of vertices
I am currently attempting to dynamically create a mesh (2D) with a hole in it. I have a list of Vector3 vertices for both the outline and the hole's outline.
My question:
How would I go about merging these two lists of vertices into a single mesh?
More detail: I have two meshes that overlap, and I'm trying to do a boolean difference between the two, to create a new mesh that will eventually replace the bigger one, to get rid of clipping. Example
Using the Clipper library (see http://www.angusj.com/delphi/clipper.php) is of no use, because it returns the same two sets of vertices that I passed in as input.
I'm guessing I need to somehow fix the triangles for the mesh to create triangles between the outer and inner vertices? (The meshes can be any shape/size, so finding out which vertices to combine into triangles is no easy task.)
Can anybody tell me how I would create a single mesh out of the two vertex-loops?
If you need a generic boolean algorithm, this is a very hard problem; 3D Studio Max, for example, has two separate boolean mesh creators, each failing on different sets of objects.
If you only need to subtract rectangular, axis-aligned shapes that do not touch, it's simpler. For your specific case you can just join the two lists of vertices and fill a new list of triangles; you'll need two triangles per side of the ring, so that's eight triangles stretched across the eight vertices.
It gets a bit harder if they start to touch, as you need to find intersection points and basically re-triangulate the outline.
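For the simple non-touching case, here is a minimal engine-agnostic sketch (plain JavaScript; a hypothetical helper, not part of any engine API) of joining the two loops and stitching the ring with eight triangles. It assumes both quads share the same winding order and that inner[i] is the hole corner nearest outer[i]; flip the triangle order if the result faces away from the camera in your engine:
// outer and inner are arrays of 4 corner positions each.
function stitchQuadWithHole(outer, inner) {
  var vertices = outer.concat(inner); // indices 0-3 = outer loop, 4-7 = hole loop
  var triangles = [];
  for (var i = 0; i < 4; i++) {
    var j = (i + 1) % 4;
    // two triangles per side of the ring -> eight triangles total
    triangles.push(i, j, 4 + i);
    triangles.push(j, 4 + j, 4 + i);
  }
  return { vertices: vertices, triangles: triangles };
}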

How can I make dynamically generated terrain segments fit together in Unity

I'm creating my game with dynamically generated terrain. It is a very simple idea: there are always three parts of terrain, the segment the player stands on and the two next to it. When the player moves (always forward) onto the next segment, a new one is generated and the last one is cut off. It works with flat planes, but I don't know how to do it with more complex terrain. Should I just make every piece have the same edge on both sides (I'm using Blender to create the assets)? Or is there another option? Please note that I'm just starting to make games with Unity.
It depends on what you would like your terrain to look like. If you want to create the terrain pieces in something external, like Blender, then yes, all those pieces will have to fit together seamlessly. But that is a lot of work, as you will have to create a lot of pieces that fit together for the landscape to remain interesting.
I would suggest that you rather generate the terrain dynamically in Unity. You can create your own mesh in code: you start by creating an object, then generate vertex and triangle arrays and assign them to the object so that it has a visible and sensible mesh. You first create vertices at specific positions and then add triangles that consist of 3 vertices at a time. If you want a smooth look instead of a low-poly look, you will reuse some vertices for the next triangle, which is a little trickier.
Once you have created your block's mesh, you can begin to change your code to specify how the height of the vertices should be chosen, to give you interesting terrain. As long as the first vertices on your new block are at the same height (say, the same y position) as the last vertices on your current block (assuming they have the same x and z positions), they will line up. That said, you could make it even simpler by not using separate blocks, but rather updating your object's mesh to add new vertices and triangles, so that you are creating a terrain that is just one part that changes, rather than having separate blocks.
There are many ways to create interesting terrain. One of the functions most often used to generate semi-random and interesting terrain is Perlin noise. Another is Ken Perlin's more recent Simplex noise. Like most random-generator functions, it has a seed value, which you can keep track of so that you can create interesting terrain AND get your block edges to line up, should you still want to use separate blocks rather than a single mesh which dynamically expands.
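Here is a sketch of that idea in plain JavaScript rather than Unity C# (in Unity you would assign the resulting arrays to a Mesh's vertices and triangles). The key point is that the height is a pure function of world position, so two segments that share an edge automatically get identical heights there; the sine/cosine function is only a stand-in for a seeded Perlin or Simplex noise call:
// Sine waves stand in for a seeded noise function; any deterministic function
// of world position works, which is exactly why neighbouring segments line up.
function heightAt(worldX, worldZ) {
  return 2.0 * Math.sin(worldX * 0.15) + 1.5 * Math.cos(worldZ * 0.1);
}
// Build one terrain segment as flat vertex/triangle arrays.
// size = vertices per side, step = spacing, originX/originZ = world offset.
function buildSegment(originX, originZ, size, step) {
  var vertices = [];
  var triangles = [];
  for (var z = 0; z < size; z++) {
    for (var x = 0; x < size; x++) {
      var wx = originX + x * step;
      var wz = originZ + z * step;
      vertices.push({ x: wx, y: heightAt(wx, wz), z: wz });
    }
  }
  for (var zi = 0; zi < size - 1; zi++) {
    for (var xi = 0; xi < size - 1; xi++) {
      var i = zi * size + xi;
      triangles.push(i, i + size, i + 1);            // first triangle of the quad
      triangles.push(i + 1, i + size, i + size + 1); // second triangle
    }
  }
  return { vertices: vertices, triangles: triangles };
}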
I am sure there are many tutorials online about noise functions for procedural landscape generation. Amit Patel's tutorials are good visual and interactive explanations; here is one of his tutorials about noise-based landscapes. Take a look at his other great tutorials as well. There will be many tutorials on dynamic mesh generation too; just do a Google search -- a quick look tells me that CatLikeCoding's Procedural Grid tutorial will probably be all you need.

Tile Grid Data storage for 3D Space in Unity

This question is (mostly) game engine independent but I have been unable to find a good answer.
I'm creating a turn-based tile game in 3D space using Unity. The levels will have slopes, occasional non-planar geometry, depressions, tunnels, stairs, etc. Each level is static/handcrafted, so tiles should never move. I need a good way to keep track of tile-specific variables for static levels, and I'd like to verify that my approaches make sense.
My ideas are:
Create two meshes: one is the complex game world, the second is a reference mesh overlay with minimal geometry; it will not be rendered and will only be used for the tiles. I would then overlay the two and use the second mesh as a grid reference.
Hard-code the tiles for each level. While tedious, it will work as a brute-force approach. I would, however, like to avoid this since it's not very easy to deal with visually.
Workaround approach: convert the 3D to 2D textures and only use one mesh.
"Project" a plane down onto the level and record height/slope to minimize complexity. Also not ideal.
Create individual tile objects for each tile manually (non-rendered). The easiest solution I could think of.
Now for the Unity3D specific question:
Does Unity allow selecting individual verts/triangles/squares of a mesh and attaching components, scripts, or variables to those selections? For example, selecting one square of the 10x10 Unity plane and telling Unity that this square of the plane now has a new boolean attached to it? This question mostly refers to idea #1 above, where I would use a reference mesh for positional and variable information assigned directly to the mesh. I have a feeling that if I do choose to have a reference mesh, I'd need to make the tiles individual objects, snap them into place using the reference, and then attach the relevant scripts to those tiles.
I have found a ton of excellent resources (like http://www-cs-students.stanford.edu/~amitp/gameprog.html) on tile generation (mostly procedural), but I'm a bit stuck on the basics because I'm new to Unity, and I'm not looking for procedural design.
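One way to picture the non-rendered reference layer from the ideas above is as a plain lookup keyed by grid coordinates, kept entirely separate from the level mesh. A minimal sketch (plain JavaScript rather than Unity C#; the tile fields are just illustrative):
// Tile data lives in a lookup keyed by grid coordinates, entirely separate
// from the rendered level mesh. Entries are hand-authored (or snapped from
// helper objects placed in the editor), which is fine since the levels are static.
var tiles = {};
function tileKey(x, y, z) {      // the y layer lets tunnels/bridges stack tiles
  return x + ',' + y + ',' + z;
}
function setTile(x, y, z, data) {
  tiles[tileKey(x, y, z)] = data;
}
function getTile(x, y, z) {
  return tiles[tileKey(x, y, z)] || null;
}
setTile(3, 0, 7, { walkable: true, slope: 15, cover: false });
getTile(3, 0, 7); // -> { walkable: true, slope: 15, cover: false }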

Why does merging geometries improve rendering speed?

In my web application I only need to add static objects to my scene. It ran slowly, so I started searching, and I found that merging geometries and merging vertices were the solution. When I implemented it, it indeed worked a lot better. All the articles said that the reason for this improvement is the decrease in the number of WebGL calls. As I am not very familiar with things like OpenGL and WebGL (I use Three.js to avoid their complexity), I would like to know why exactly it reduces the WebGL calls.
Because you send one large object instead of many little ones, the overhead is reduced. So I understand that loading one big mesh into the scene is faster than loading many small meshes.
BUT I do not understand why merging geometries also has a positive influence on the rendering itself. I would also like to know the difference between merging geometries and merging vertices.
Thanks in advance!
three.js is a framework that helps you work with the WebGL API.
What three.js calls a "mesh" is, to WebGL, a series of low-level calls that set up state and issue draw calls to the GPU.
Let's take a sphere for example. With three.js you would create it with a few lines:
var sphereGeometry = new THREE.SphereGeometry(10);
var sphereMaterial = new THREE.MeshBasicMaterial({color:'red'});
var sphereMesh = new THREE.Mesh( sphereGeometry, sphereMaterial);
myScene.add( sphereMesh );
You have your renderer.render() call, and poof, a sphere appears on screen.
A lot of stuff happens under the hood though.
The first line creates the sphere "geometry": the CPU runs a bunch of math and logic to describe a sphere with points and triangles. Points are vectors, three floats grouped together; triangles are a structure that groups these points by indices (groups of integers).
Somewhere there is a loop that calculates the vectors using trigonometry (sin, cos), and another that weaves the resulting array of vectors into triangles (take every N, N + M, N + 2M, create a triangle, etc.).
Now these numbers exist in javascript land, it's just a bunch of floats and ints, grouped together in a specific way to describe shapes such as cubes, spheres and aliens.
You need a way to draw this construct on a screen - a two dimensional array of pixels.
WebGL does not actually know much about 3D. It knows how to manage memory on the GPU and how to compute things in parallel (or rather, it gives you the tools). It knows how to do the mathematical operations that are crucial for 3D graphics, but the same math could be used to mine bitcoins without drawing anything at all.
In order for WebGL to draw something on screen, it first needs the data put into appropriate buffers, and it needs the shader programs. It needs to be set up for that specific call (will there be blending - transparency in three.js land - depth testing, stencil testing, etc.). Then it needs to know what it's actually drawing (you need to provide strides, attribute sizes, etc. to let it know where a 'mesh' actually is in memory), how it's drawing it (triangle strips, fans, points...) and what to draw it with - which shaders it will apply to the data you provided.
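Concretely, the per-mesh bookkeeping three.js hides boils down to a handful of raw WebGL calls like these (a bare-bones sketch: gl is a WebGLRenderingContext, program an already compiled and linked shader pair, positionAttrib the location of its position attribute):
// Roughly the per-mesh work three.js does for you: upload the data, select
// the shader program, describe the attribute layout, then draw.
function drawTriangles(gl, program, positionAttrib, positions) {
  var buffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);
  gl.useProgram(program);
  gl.enableVertexAttribArray(positionAttrib);
  gl.vertexAttribPointer(positionAttrib, 3, gl.FLOAT, false, 0, 0); // 3 floats per vertex
  gl.drawArrays(gl.TRIANGLES, 0, positions.length / 3);
}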
So, you need a way to 'teach' WebGL to do 3d.
I think the best way to get familiar with this concept is to look at this tutorial, re-reading it if necessary, because it explains what happens to pretty much every single 3D object drawn in perspective, ever.
To sum up the tutorial:
A perspective camera is basically two 4x4 matrices: a perspective matrix that puts things into perspective, and a view matrix that moves the entire world into camera space. Every camera you make consists of these two matrices.
Every object exists in its own object space. A TRS matrix (translate-rotate-scale, the world matrix in three.js terms) is used to transform this object into world space.
So this stuff - a concept such as the "projection matrix" - is what teaches WebGL how to draw perspective.
Three.js abstracts this further and gives you things like "field of view" and "aspect ratio" instead of left/right, top/bottom planes.
Three.js also abstracts the transformation matrices (the view matrix on the camera, and a world matrix on every object): it lets you set "position" and "rotation", and computes the matrices from those under the hood.
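In code, that abstraction looks something like this; the raw matrices are still there on the objects, three.js just fills them in from the friendlier numbers:
// three.js turns a few human-friendly numbers into the matrices above.
var camera = new THREE.PerspectiveCamera(75, 16 / 9, 0.1, 1000); // fov, aspect, near, far
camera.position.set(0, 5, 20);
camera.updateMatrixWorld();
// camera.projectionMatrix   -> the perspective matrix
// camera.matrixWorldInverse -> the view matrix (kept up to date by the renderer)
// sphereMesh.matrixWorld    -> the object's TRS / world matrix,
//                              rebuilt from .position, .rotation and .scale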
Since every mesh has to be processed by the vertex shader and the pixel shader in order to appear on the screen, every mesh needs to have all this information available.
When a draw call is issued for a specific mesh, that mesh will have the same perspective matrix and view matrix as any other object being rendered with the same camera. They will each have their own world matrices - the numbers that move them around your scene.
This is transformation alone, happening in the vertex shader. These results are then rasterized, and go to the pixel shader for processing.
Let's consider two materials - black plastic and red plastic. They will have the same shader, perhaps one you wrote using THREE.ShaderMaterial, or maybe one from three's library. It's the same shader, but it has one uniform value exposed - color. This allows you to have many instances of a plastic material (green, blue, pink), but it means that each of these requires a separate draw call.
WebGL has to issue specific calls to change that uniform from red to black, and only then is it ready to draw stuff using that 'material'.
So now imagine a particle system, displaying a thousand cubes each with a unique color. You have to issue a thousand draw calls to draw them all, if you treat them as separate meshes and change colors via a uniform.
If, on the other hand, you assign vertex colors to each cube, you no longer rely on the uniform but on an attribute. Now if you merge all the cubes together, you can issue a single draw call, processing all the cubes with the same shader.
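Here is a sketch of that, assuming an older three.js that still ships THREE.Geometry (newer releases do the same thing with BufferGeometry and the BufferGeometryUtils helpers). The colors live in the geometry as face colors, so one material and one draw call cover all thousand cubes:
var merged = new THREE.Geometry();
var box = new THREE.BoxGeometry(1, 1, 1);
for (var i = 0; i < 1000; i++) {
  var piece = box.clone();
  piece.faces.forEach(function (face) {
    face.color.setHSL(Math.random(), 1, 0.5); // the color lives in the geometry, not in a uniform
  });
  var matrix = new THREE.Matrix4().makeTranslation(
    Math.random() * 100, Math.random() * 100, Math.random() * 100
  );
  merged.merge(piece, matrix);
}
merged.mergeVertices(); // "merging vertices": collapse duplicates at identical positions
var material = new THREE.MeshBasicMaterial({ vertexColors: THREE.FaceColors });
myScene.add(new THREE.Mesh(merged, material)); // a thousand cubes, one draw call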
You can see why this is more efficient simply by taking a glance at WebGLRenderer from three.js and all the stuff it has to do in order to translate your 3D calls to WebGL. Better done once than a thousand times.
Back to those three lines: sphereMaterial can take a color argument, and if you look at the source, this translates to a uniform vec3 in the shader. However, you can achieve the same thing with vertex colors, by assigning the color you want beforehand.
sphereMesh wraps the computed geometry in an object that three's WebGLRenderer understands, which in turn sets up WebGL accordingly.

Voxelization of a Scene with Instanced Objects

I have created a simple scene in DirectX 11 that has a plane as a floor, with several spheres, cubes and rectangular walls. Only three objects are loaded: a plane, a cube and a sphere, but the cube and sphere are instanced several times with different scaling, positions and rotations. Two of these objects are dynamic.
I would like to voxelize this entire scene (100x100x20 units) into 0.2 unit voxels, taking into account the object instances which have different scales and rotations.
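For scale, those numbers imply a 500 x 500 x 100 grid, about 25 million voxels; mapping a world-space point into it is just a divide and a floor (a sketch in plain JavaScript rather than HLSL, assuming the volume's minimum corner is known):
// 100 x 100 x 20 units at 0.2-unit voxels -> 500 x 500 x 100 cells.
var VOXEL_SIZE = 0.2;
var GRID = { x: 100 / VOXEL_SIZE, y: 100 / VOXEL_SIZE, z: 20 / VOXEL_SIZE }; // 500, 500, 100
function voxelIndex(p, sceneMin) {
  return {
    x: Math.floor((p.x - sceneMin.x) / VOXEL_SIZE),
    y: Math.floor((p.y - sceneMin.y) / VOXEL_SIZE),
    z: Math.floor((p.z - sceneMin.z) / VOXEL_SIZE)
  };
}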
I have read several articles on voxelization and have the source code from GPU Pro 3 of "Practical Binary Surface and Solid Voxelization with Direct3D 11"; but all of these articles show the voxelization of single objects - taking their triangles and splitting them into a grid.
How would I extend these methods to account for an entire scene with multiple object instances?
The only thing that I can think of is I would have to do a top-down octree division of the entire scene. But for a dynamic scene, would this be too expensive?
For my scene, I use a buffer for each model loaded, so if I were to voxelize in the Compute Shader, would I need to copy all three buffers to a single buffer? How do I account for model instances?
Thank you.