How can I create larger worlds/levels in Unity without adding lag?

How can I scale up the size of my world/level to include more gameobjects without causing lag for the player?
I am creating an asset for the asset store. It is a random procedural world generator. There is only one major problem: world size.
I can't figure out how to scale up the worlds to have more objects/tiles.
I have generated worlds up to 2000x500 tiles, but they lag very badly.
The largest world that does not affect the speed of the game is around 500x200 tiles.
I have also generated worlds of the same size with blocks 1/4 the size (block size doesn't affect how many tiles can be spawned).
I would like to create a world of at least 4200x1200 blocks without lag spikes.
I have looked at object pooling (it doesn't seem like it can help me that much).
I have looked at LoadLevelAsync (I don't really know how to use it, and rumor has it that it requires Unity Pro, which I do not have).
I have tried setting chunks active or inactive based on player position (this caused more lag than just leaving the blocks alone).
Additional Information:
The terrain is split up into chunks. It is 2D, and I have box colliders on all solid tiles/blocks. Players can dig/place blocks. I am not worried about the amount of time it takes for the level to load initially, but rather about the smoothness of the game while playing it: no lag spikes while playing.
question on Unity Forums

If you're storing each tile as an individual GameObject, don't. Use a texture atlas and 'tile data' to generate the look of each chunk whenever it is dug into or a tile is placed on it.
Also make sure to disable, or potentially even delete, any chunks that are not within the visible range of the player. Object pooling will help significantly here if you can work out the maximum number of chunks that will ever be needed at once, and simply recycle chunks as they go off screen.
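A minimal sketch of that recycling idea, assuming a chunk prefab and a worst-case chunk count (all names and numbers here are illustrative, not from the original post):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class ChunkPool : MonoBehaviour
{
    public GameObject chunkPrefab;      // prefab with MeshFilter/MeshRenderer (assumption)
    public int maxVisibleChunks = 32;   // worst-case number of on-screen chunks (assumption)

    private readonly Queue<GameObject> pool = new Queue<GameObject>();

    void Awake()
    {
        // Pre-instantiate the worst case once, so nothing is created during gameplay.
        for (int i = 0; i < maxVisibleChunks; i++)
        {
            GameObject chunk = Instantiate(chunkPrefab);
            chunk.SetActive(false);
            pool.Enqueue(chunk);
        }
    }

    // Called when a chunk scrolls into view.
    public GameObject Acquire(Vector3 position)
    {
        GameObject chunk = pool.Count > 0 ? pool.Dequeue() : Instantiate(chunkPrefab);
        chunk.transform.position = position;
        chunk.SetActive(true);
        return chunk;
    }

    // Called when a chunk scrolls out of view: recycle instead of Destroy.
    public void Release(GameObject chunk)
    {
        chunk.SetActive(false);
        pool.Enqueue(chunk);
    }
}
```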
DETAILS:
There is a lot to talk about for optimal generation, so I'm going to post this link (http://studentgamedev.blogspot.co.uk/2013/08/unity-voxel-tutorial-part-1-generating.html). It shows you how to do it in a 3D space, but the principles are essentially the same, if not a little easier, for a 2D space. The following is just a rough outline of what might be involved; going down this path will result in huge benefits, but it will require a lot of work to get there. I've included all the benefits at the bottom of the answer.
Each tile can be a simple struct with fields like int id, Vector2 texturePos, and bool visible in its simplest form. You can then store these tiles in a two-dimensional array within each chunk, though to make them even more memory efficient you could store each texturePos once elsewhere in the program and write a method to look up a texturePos by id.
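A rough sketch of that data layout (type names and chunk dimensions are illustrative):

```csharp
using UnityEngine;

public struct Tile
{
    public int id;              // what kind of block this is
    public Vector2 texturePos;  // bottom-left corner of this tile in the atlas (UV space)
    public bool visible;        // false = air, nothing to render
}

public class ChunkData
{
    public const int Width = 32;   // chunk size is an assumption
    public const int Height = 32;

    // Plain 2D array; no GameObject or MonoBehaviour per tile.
    public Tile[,] tiles = new Tile[Width, Height];
}
```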
When you make changes to this two-dimensional array, which represent either the addition or removal of a tile, you update the chunk, which is the actual GameObject used to represent the tiles. By iterating over the tile data stored in the chunk, you can generate a mesh of vertices based on the position of each tile in the two-dimensional array. If visible is false, simply don't generate any vertices for it.
This mesh alone could be used as a collider, but it won't look like anything. You will also need to generate UV coordinates, which are taken from each tile's texturePos. When Unity then displays the mesh, it will display the specific regions of the texture atlas defined by the mesh's UV coordinates.
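Putting the two previous paragraphs together, a hedged sketch of the chunk rebuild might look like this. It assumes the ChunkData type from the sketch above and a square atlas; the exact winding and UV layout will depend on your setup:

```csharp
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class ChunkMesh : MonoBehaviour
{
    public float tileUVSize = 0.0625f;   // size of one tile in UV space, e.g. a 16x16 atlas (assumption)

    public void Rebuild(ChunkData data)
    {
        var vertices = new List<Vector3>();
        var uvs = new List<Vector2>();
        var triangles = new List<int>();

        for (int x = 0; x < ChunkData.Width; x++)
        {
            for (int y = 0; y < ChunkData.Height; y++)
            {
                Tile tile = data.tiles[x, y];
                if (!tile.visible) continue;   // air: generate no vertices at all

                int start = vertices.Count;

                // One quad per visible tile, in chunk-local coordinates.
                vertices.Add(new Vector3(x, y));
                vertices.Add(new Vector3(x + 1, y));
                vertices.Add(new Vector3(x + 1, y + 1));
                vertices.Add(new Vector3(x, y + 1));

                // UVs pick out this tile's region of the texture atlas.
                uvs.Add(tile.texturePos);
                uvs.Add(tile.texturePos + new Vector2(tileUVSize, 0f));
                uvs.Add(tile.texturePos + new Vector2(tileUVSize, tileUVSize));
                uvs.Add(tile.texturePos + new Vector2(0f, tileUVSize));

                // Two triangles per quad, wound to face the 2D camera.
                triangles.AddRange(new[] { start, start + 2, start + 1,
                                           start, start + 3, start + 2 });
            }
        }

        var mesh = new Mesh();
        mesh.SetVertices(vertices);
        mesh.SetUVs(0, uvs);
        mesh.SetTriangles(triangles, 0);
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```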
This has the benefit of resulting in significantly fewer GameObjects, better texture batching for Unity, lower memory usage, and faster random access to any tile since there is no MonoBehaviour overhead, along with many other benefits.

Related

The right way to handle procedural LOD, distant terrain chunks

I would really appreciate your thoughts on compute-generated terrain, LOD, etc. and what the 'right way' to do it is.
Here's my current plan:
I'm procedurally generating a large finite world, where at any point most or all of the map is visible.
Texturing/colouring is done in the vertex/fragment shader.
I'm about 50% through implementing:
Generate the closest chunks (e.g. a 5x5 set of heightmap chunks around the player) on the CPU using a noise function.
In the middle distance, use instances of a compute shader to generate the vertices of heightmap chunks and pass the buffer to the vertex/fragment shader.
In the far distance, generate one huge combined chunk with much less dense vertices and pass it to the vertex/fragment shader.
My questions are:
Is this the (or A) right way to handle LOD/Chunks/Distant terrain?
Should I instead generate everything on the GPU and pass a mesh back to CPU for collision, instead of using CPU for near chunks?
What function should I be using to generate and draw the map? I'm currently using DrawProceduralNow in OnRenderObject().
I'm just starting experimenting with using MaterialPropertyBlocks and DrawProcedural in Update().
One idea is to have semi-autonomous chunks that change their settings depending on player location.
Or relative chunks that are based on the space around the player, so a middle-distance chunk will always be at a middle LOD (distance between vertices), but its heightmap is updated as the player walks towards it.
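For what it's worth, a hypothetical sketch of that "relative chunks" idea, with made-up distance bands and vertex spacings (nothing here comes from the original question):

```csharp
using UnityEngine;

public static class ChunkLod
{
    // Distance between vertices for each band: near, middle, far (all assumptions).
    static readonly float[] vertexSpacing = { 1f, 4f, 16f };

    // Pick a vertex spacing for a chunk slot based on how far it is from the player.
    public static float SpacingFor(Vector3 chunkCenter, Vector3 playerPos)
    {
        float d = Vector3.Distance(chunkCenter, playerPos);
        if (d < 100f) return vertexSpacing[0];   // near: CPU-generated, full detail
        if (d < 500f) return vertexSpacing[1];   // middle: compute-shader chunks
        return vertexSpacing[2];                 // far: one big sparse chunk
    }
}
```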
I'm trying to avoid spending too much time going down the wrong rabbit hole.
If I can establish the right concepts first, that will save me a lot of time.
Edit: I'm also considering pre-generating heightmap textures to save on procedural calculation time. But that's beside the point; my questions are about what to do after I already have the vertices.

Different ways to detect size of image on mesh versus size of mesh

I'm creating a puzzle game that generates randomly sized pieces with 2D meshes. The images contain transparent portions, and sometimes a piece is completely transparent. I need to detect what percentage of a piece is transparent. One way I found to do this is to go pixel by pixel. I posted my solution to this HERE. However, this process adds a few seconds during loading, which I'd like to avoid, so I'm looking for other ideas.
I've considered using the selection outline of a MeshCollider to somehow get a surface area I can compare to the surface area of the mesh, but everything I find is about rendering outlines with specialized shaders. Does anyone have any ideas on how to solve this?
1) I guess you could add a PolygonCollider2D to your sprite and use its path for the outline and for calculating the surface area (a rough sketch of that calculation follows below). Not sure, however, if this will be faster.
PolygonCollider2D.GetPath:
A path is a cyclic sequence of line segments between points that define the outline of the Collider
Checking PolygonCollider2D.GetTotalPointCount or path length may be good enough to determine if the sprite is 'empty'.
Sprite.vertices, Sprite.triangles may also be helpful.
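A rough sketch of idea 1), using the shoelace formula over the collider paths (the method name and the hole-handling assumption are mine, not from the answer):

```csharp
using UnityEngine;

public static class ColliderArea
{
    // Approximate surface area enclosed by a PolygonCollider2D's paths.
    public static float Area(PolygonCollider2D collider)
    {
        float total = 0f;
        for (int p = 0; p < collider.pathCount; p++)
        {
            Vector2[] path = collider.GetPath(p);
            float signed = 0f;
            for (int i = 0; i < path.Length; i++)
            {
                Vector2 a = path[i];
                Vector2 b = path[(i + 1) % path.Length];
                signed += a.x * b.y - b.x * a.y;   // shoelace formula
            }
            // Assumption: hole paths wind the opposite way, so summing signed
            // areas subtracts them; verify this against your collider data.
            total += signed * 0.5f;
        }
        return Mathf.Abs(total);
    }
}
```

You could then compare this value to the sprite's bounding-rectangle or mesh area to get a rough "how much of this piece is solid" percentage.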
2) You could also improve the performance of your first approach (a sketch follows below the list):
Instead of calling GetPixel as you do now, use GetPixels or GetPixels32 and loop through the returned array in a single for loop.
Using GetPixels can be faster than calling GetPixel repeatedly, especially for large textures. In addition, GetPixels can access individual mipmap levels. For most textures, even faster is to use GetPixels32 which returns low precision color data without costly integer-to-float conversions.
Check only every 2nd or nth pixel, as that should be good enough for an approximation.
Limit the number of type casts.
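A minimal sketch of point 2), assuming the texture is readable (Read/Write enabled); names are illustrative:

```csharp
using UnityEngine;

public static class TransparencyCheck
{
    // Returns the approximate fraction of sampled pixels that are fully transparent.
    public static float TransparentFraction(Texture2D texture, int step = 2)
    {
        Color32[] pixels = texture.GetPixels32();   // one call instead of many GetPixel calls
        int sampled = 0, transparent = 0;

        for (int i = 0; i < pixels.Length; i += step)   // every step-th pixel is enough for an estimate
        {
            sampled++;
            if (pixels[i].a == 0)   // byte alpha: no float conversions or casts needed
                transparent++;
        }
        return sampled == 0 ? 0f : (float)transparent / sampled;
    }
}
```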

How to bake a static Unity scene into one big mesh and texture

I have a big, complex Unity scene including terrain, trees, grass, flowers and many other objects.
I'm having performance problems and I was wondering if it's possible to bake all static objects that never move or change, like terrain, trees, houses and other props, into one big static object to increase performance?
Thanks
Check out the Unity documentation on Static Objects here.
Many optimisations need to know if an object can move during gameplay. Information about a Static (ie, non-moving) object can often be precomputed in the editor in the knowledge that it will not be invalidated by a change in the object’s position. For example, rendering can be optimised by combining several static objects into a single, large object known as a batch.
To mark a GameObject as static, you simply need to check the Static box in the inspector window with the desired GameObject selected.
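If you also place static props from script at runtime, Unity can batch those as well; a minimal sketch, assuming the props are parented under a common root object:

```csharp
using UnityEngine;

public class CombineStatics : MonoBehaviour
{
    void Start()
    {
        // Combines the meshes of all eligible child renderers of this root into
        // static batches, similar to marking them Static in the editor.
        // Call this only after the children are in their final positions.
        StaticBatchingUtility.Combine(gameObject);
    }
}
```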
That wouldn't be a good idea. Having separate meshes is much more beneficial and efficient than combining them all into one huge mesh. It allows you to set up different LOD systems for the objects and billboarding for trees and detail objects, and it gives you finer control over your scene without having to rebuild that huge object again and again.
For large scenes, it is important that you set up a Level Of Detail (LOD) system. What it essentially means is that based on the distance of the object from the camera, a higher or lower quality model of that object will be rendered. At close distances, the highest polygon model will be rendered. At huge distances where detail is not required, lower polygon count models can be used. Consult the Unity Manual for more details. You can also look for scripts and tutorials on the internet.
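A hedged sketch of setting up an LODGroup from script, assuming you already have high-, medium- and low-poly renderers for the object (the transition percentages are made up):

```csharp
using UnityEngine;

public class SetupLod : MonoBehaviour
{
    public Renderer highDetail;
    public Renderer mediumDetail;
    public Renderer lowDetail;

    void Start()
    {
        LODGroup group = gameObject.AddComponent<LODGroup>();

        // Each entry's float is the screen-relative height at which the group
        // switches to the next LOD; below the last value the object is culled.
        LOD[] lods =
        {
            new LOD(0.6f,  new[] { highDetail }),     // full detail while large on screen
            new LOD(0.25f, new[] { mediumDetail }),
            new LOD(0.05f, new[] { lowDetail }),      // below 5% of screen height: culled
        };
        group.SetLODs(lods);
        group.RecalculateBounds();
    }
}
```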
Also, make sure that your terrain settings are reasonable. Setting the Pixel Error of the terrain to 1 is overkill; something like 4-5 is more than enough. Detail Density, Billboard Start, Detail Distance and Tree Distance can all be toned down.
Or just check for an unoptimized script or a shader gone awry; that's usually the problem. Unoptimized 3D models are hell too. You'll have to do more than just pick up a model from SketchUp's 3D Warehouse (speaking from personal experience). You will have to pay an artist to get high-quality, optimized models with their LOD meshes as well, unless you have the skills yourself.

Interaction with objects in XNA

I'm new to using XNA and I want to make my player collide with multiple walls from the same class. So I looked around, and I understood that the best way to do that is to create a list of variables containing the walls' IDs, then make a loop that goes through them all and returns the objects that collide.
My question is whether there is a faster, more efficient way of doing that. I mean, if I have something like 10000 objects, that loop can cause a lot of memory use.
Thx in advance
Option 1) If these 10000 objects are the walls of a level, then you should probably use some sort of grid (like this very old example: https://en.wikipedia.org/wiki/The_Legend_of_Zelda#mediaviewer/File:Legend_of_Zelda_NES.PNG)
With a grid you only have to check collision with adjacent objects, or only with objects that are nearby.
Option 2) If these 10000 objects are enemies or bullets that move more freely, then you could also calculate the distance first and only check for collision if the objects are nearby.
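A rough sketch of the grid idea in C# (the cell size, the walls-as-Rectangles representation, and the assumption of non-negative coordinates and walls no larger than one cell are all mine):

```csharp
using System.Collections.Generic;
using Microsoft.Xna.Framework;

public class WallGrid
{
    const int CellSize = 64;   // assumption: tune to your wall size
    readonly Dictionary<Point, List<Rectangle>> cells = new Dictionary<Point, List<Rectangle>>();

    // Register a wall in the cell containing its top-left corner (done once at load time).
    public void Add(Rectangle wall)
    {
        Point cell = new Point(wall.X / CellSize, wall.Y / CellSize);
        List<Rectangle> bucket;
        if (!cells.TryGetValue(cell, out bucket))
        {
            bucket = new List<Rectangle>();
            cells[cell] = bucket;
        }
        bucket.Add(wall);
    }

    // Only the 3x3 block of cells around the player is ever examined,
    // regardless of how many walls exist in total.
    public bool Collides(Rectangle player)
    {
        Point center = new Point(player.Center.X / CellSize, player.Center.Y / CellSize);
        for (int dx = -1; dx <= 1; dx++)
        {
            for (int dy = -1; dy <= 1; dy++)
            {
                List<Rectangle> bucket;
                if (!cells.TryGetValue(new Point(center.X + dx, center.Y + dy), out bucket))
                    continue;
                foreach (Rectangle wall in bucket)
                    if (wall.Intersects(player))
                        return true;
            }
        }
        return false;
    }
}
```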
But may I ask why you are using XNA? I used to work with XNA 4.x, but in my understanding it is pretty much dead (http://www.computerandvideogames.com/389018/microsoft-email-confirms-plan-to-cease-xna-support). If you're new to XNA, I would advise using other software to make games (like Unity3D). In Unity3D the hard part of collision detection is done for you (it has standard functions for collision detection), and Unity3D also works with C# (like XNA).
You always want to do the least amount of processing to get the job done. For a tiled 2D game you usually have a two-dimensional grid. When the player wants to walk onto a certain tile, you check that tile to see whether walking there is allowed; in that case you only have to check a single tile. If you have a lot of NPCs, you could divide your map into sections and keep track of which sections the NPCs are in. Then you only have to do collision detection against the enemies within your section.
When you need expensive collision detection, such as pixel-perfect or polygon collision, you should first check whether an object is even close, using a simple radius float or BoundingSphere; only then do you go on with the more expensive collision detection.
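A minimal sketch of that two-phase approach; the pixel-perfect test is left as a placeholder delegate, and the radius parameters are assumptions:

```csharp
using Microsoft.Xna.Framework;

public static class CollisionPhases
{
    public static bool Check(Vector2 posA, float radiusA, Vector2 posB, float radiusB,
                             System.Func<bool> collidesPixelPerfect)
    {
        float maxDistance = radiusA + radiusB;

        // Broad phase: compare squared distances to avoid a square root.
        if (Vector2.DistanceSquared(posA, posB) > maxDistance * maxDistance)
            return false;

        // Narrow phase: only reached for objects that are actually close.
        return collidesPixelPerfect();
    }
}
```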
The same goes for pretty much anything: if you have a 100x100 tilemap but only need to draw 20x10 for the screen, then you should calculate and render just that portion. In Unreal, mappers create invisible boxes; when the player is inside one of these boxes, the engine only draws a certain part of the map and only checks collision within that box. GameDev is all about tricks to make things work smoothly.

Alternatives to diamond-square for incremental procedural terrain generation?

I'm currently in the process of coding a procedural terrain generator for a game. For that purpose, I divide my world into chunks of equal size and generate them one by one as the player strolls along. So far, nothing special.
Now, I specifically don't want the world to be persistent, i.e. if a chunk gets unloaded (maybe because the player moved too far away) and later loaded again, it should not be the same as before.
From my understanding, implicit approaches like treating 3D Simplex Noise as a density function input for Marching Cubes don't suit my problem. That is because I would need to reseed the generator to obtain different return values for the same point in space, leading to discontinuities along chunk borders.
I also looked into Midpoint Displacement / Diamond-Square. By seeding each chunk's heightmap with values from the borders of adjacent chunks and randomizing the chunk corners that don't have any other chunks nearby, I was able to generate a tileable terrain that exhibits the desired behavior. Still, the results look rather dull. Specifically, since this method relies on heightmaps, it lacks overhangs and the like. Moreover, even with the corner randomization, terrain features tend to be confined to small areas, i.e. there are no multiple-chunk hills or similar landmarks.
Now I was wondering if there are other approaches to this that I haven't heard of/thought about yet. Any help is highly appreciated! :)
Cheers!
Post process!
After you do the heightmaps, run back through them and add features.
This is how Minecraft does it to get the various caverns and cliff overhangs.
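A purely illustrative sketch of such a pass, with made-up names and numbers: fill a chunk from its heightmap first, then run back over it carving pockets so caves and overhangs appear. The RNG is reseeded per load, which matches the non-persistent-world requirement.

```csharp
using System;

public static class ChunkPostProcess
{
    // heightmap[x] = terrain height in cells for column x of this chunk.
    public static bool[,] Build(int[] heightmap, int chunkHeight, Random rng)
    {
        int width = heightmap.Length;
        bool[,] solid = new bool[width, chunkHeight];

        // Pass 1: plain heightmap fill.
        for (int x = 0; x < width; x++)
            for (int y = 0; y < heightmap[x] && y < chunkHeight; y++)
                solid[x, y] = true;

        // Pass 2: post-process, carving a few round pockets below the surface.
        int pockets = rng.Next(2, 6);
        for (int i = 0; i < pockets; i++)
        {
            int cx = rng.Next(width);
            int cy = rng.Next(Math.Max(1, heightmap[cx] - 4));   // stay under the surface
            int r = rng.Next(2, 5);

            for (int x = Math.Max(0, cx - r); x <= Math.Min(width - 1, cx + r); x++)
                for (int y = Math.Max(0, cy - r); y <= Math.Min(chunkHeight - 1, cy + r); y++)
                    if ((x - cx) * (x - cx) + (y - cy) * (y - cy) <= r * r)
                        solid[x, y] = false;   // carved out: caverns and overhangs emerge
        }
        return solid;
    }
}
```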