I have a question about the interaction of objects with each other, especially with regard to running the application on HoloLens 2 and the computing power that comes with it.
What I would like to do:
I have two objects. One is a simple staff that is moved by the player/user of the HoloLens 2. The other is a complex construct into which the staff is to be inserted.
Current behaviour:
After adding a BoxCollider and Rigidbody component to the staff and a MeshCollider to the complex object, I was able to create some interaction. Unfortunately the whole process is so expensive that it causes the HoloLens to crash.
My question:
Since I don't need any physics, only purely non-penetrable objects, does anybody have a hint on how I can achieve better performance?
Or what I have to change completely to make it as fast and lightweight as possible?
Notes:
Most entries I have found online relate to firing events or triggers, which I do not need at all.
Even in PlayMode on the local computer it stutters.
Due to the complexity of the mesh I cannot simplify the target object.
Any help is welcome.
You could always decimate the complex object in something like Blender: https://docs.blender.org/manual/en/latest/modeling/modifiers/generate/decimate.html
Import the decimated object into Unity, then use that mesh for the mesh collider instead of the rendered mesh.
That should allow you to simplify the mesh so it's more performant while still being plenty complex enough to closely follow the geometry of the original model.
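A minimal sketch of that split, assuming the decimated copy is imported as its own mesh asset (the `collisionMesh` field name is just a placeholder):

```csharp
using UnityEngine;

// Sketch: keep rendering the detailed mesh, but feed a decimated copy
// to the MeshCollider so the physics system has far less work to do.
public class SimplifiedCollision : MonoBehaviour
{
    // Assign the decimated mesh (e.g. exported from Blender) in the Inspector.
    [SerializeField] private Mesh collisionMesh;

    private void Awake()
    {
        // The MeshRenderer keeps showing the original high-poly mesh;
        // only the collider uses the simplified geometry.
        var meshCollider = GetComponent<MeshCollider>();
        meshCollider.sharedMesh = collisionMesh;
    }
}
```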
I've used a similar method to improve shadow rendering performance quite effectively.
Thank you so much for all the fast responses, I truly appreciate it. I have another question I've been wondering about: I am working on a mobile game and I want to keep the size as small as possible, so I wanted to know whether using Unity particle effects impacts performance and game size. If I wanted to make a visual effect of a GameObject breaking apart when destroyed, should I use the Unity particle system to do so, or should I make an animation of that game object breaking and use that instead?
To summarize: should I be making animations for things such as rain, confetti and fire effects instead of making them with the particle system? Is there any drawback to using one over the other?
As TEEBQNE said, it is your choice between animations and the particle system, depending on what you need. If your team is small or you are a single developer, it is best to opt for the particle system for the flexibility of changes and quick execution.
Optimising the particle system is not hard; just make sure you don't overdo the numbers and avoid some common mistakes. There is a list of best practices for optimization in mobile games found here.
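As a rough illustration of "not overdoing the numbers", you can also clamp a system's budget from script; the values below are arbitrary examples, not recommendations:

```csharp
using UnityEngine;

// Example: capping a particle system's budget at runtime.
// The actual numbers depend entirely on your effect and target device.
public class ParticleBudget : MonoBehaviour
{
    [SerializeField] private ParticleSystem effect;

    private void Start()
    {
        var main = effect.main;
        main.maxParticles = 100;          // hard cap on live particles

        var emission = effect.emission;
        emission.rateOverTime = 20f;      // keep the spawn rate modest
    }
}
```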
You can also find some pretty good-looking premade particles in the Unity Asset Store, which are already well optimized.
So here is the deal: I have an enemy that is rigged and consists of quite a few voxels.
When I run the game I get very bad performance overall when it's rendering the model, because of the number of objects it needs to render. How can I improve that?
Here are the things I have thought about:
Only rendering the faces we actually can see,
Or maybe using some sort of GPU Instancing?
Anyway, I would like to know how I could resolve such a problem. Any help is much appreciated!
Without knowing how you've actually implemented your voxels in game (i.e. what components each voxel consists of and how you're currently managing them), it's hard to offer much specific advice.
In general though there are a few things to consider:
Faces that aren't oriented towards the camera won't be rendered by default. What you might want to investigate is Occlusion Culling, but since your character is not static within the scene you might not get much performance improvement from that. The code to constantly check whether objects are occluded might end up costing more CPU than not implementing it.
Rendering might not be your actual bottleneck for performance. If each of your voxels is doing some work every update (such as collision detection), then that's going to hit your performance much harder than rendering the objects, particularly for cubic voxels. You'll need to use the Profiler to figure that out.
You need to consider whether you actually REALLY need all those voxels to be in the scene all the time, or whether you can replace them with a single GameObject until you need to interact with them, then bring them into the scene as needed. For example, you might replace the upper arm with a mesh that mirrors the shape of the total voxels for that body part. Then when that part is shot, for example, you can detect the point of collision and instantiate the necessary voxels around that point to react as desired, then rebuild the arm mesh to reflect the changed shape (a rough sketch of the mesh-merging step follows after these points).
It might also be worth looking into Unity's Data-oriented Technology Stack (DOTS) features, although that could be overkill for this situation.
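If you go the route of replacing a body part's voxels with a single mesh, Unity's Mesh.CombineMeshes can do the merging at runtime. This is only a rough sketch, assuming each voxel currently has its own MeshFilter under a common parent object (class and field names are illustrative):

```csharp
using UnityEngine;

// Sketch: merge all child voxel meshes into one mesh on the parent,
// so the body part renders as a single object instead of hundreds.
public class VoxelMerger : MonoBehaviour
{
    [SerializeField] private Material voxelMaterial;   // shared voxel material (assumption)

    private void Start()
    {
        var filters = GetComponentsInChildren<MeshFilter>();
        var combine = new CombineInstance[filters.Length];

        for (int i = 0; i < filters.Length; i++)
        {
            combine[i].mesh = filters[i].sharedMesh;
            // Bake each voxel's transform relative to this parent.
            combine[i].transform = transform.worldToLocalMatrix * filters[i].transform.localToWorldMatrix;
            filters[i].gameObject.SetActive(false);    // hide the individual voxels
        }

        var merged = new Mesh();
        // Many voxels can exceed the default 16-bit vertex limit.
        merged.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32;
        merged.CombineMeshes(combine);

        gameObject.AddComponent<MeshFilter>().sharedMesh = merged;
        var meshRenderer = gameObject.AddComponent<MeshRenderer>();
        meshRenderer.sharedMaterial = voxelMaterial;
    }
}
```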
Feel free to close this question if it is deemed too subjective. I realize that it might very well be, as it may focus heavily on best practices, which, if not an industry standard, may be subjective.
The Problem & Idea:
I'm using the tilemap system in Unity (2D), and it's been working great, but I have a few concerns regarding my particular usage. What I'd like to have is multiple "battle maps" that get instantiated when the battle scene is switched to. My current idea is to just make each type of battle map a prefab (a prefab of the Grid, since the tilemaps are just layers), and then instantiate the grid when the scene loads in.
Is this best practice or is there any better way? Does having 10 maps vs 200 maps make a difference?
Other Things I've Considered:
Would it be a better idea to make one huge tilemap with all of the battle maps drawn some distance away from each other, and just restrict the camera to moving only within the "current" battle map? The absolute max size I could see a map being is, let's say, 30x20 tops, though probably closer to half that.
An earlier idea was to use small pixel map images to render the maps in. This was before I found out about Unity's tilemap system, which, admittedly, I'd rather use because it is so much simpler to visualize, and less work to develop.
The Scenario:
For simplicity's sake, let us assume the game has two scenes: the main menu and a battle scene. Basically, the idea is to enter the battle scene and have a random battle map spawned out of what's available. The character(s) get placed, and any additional objects also get placed, say items or what have you.
Is what I proposed above best practice? Should I consider either of the other two systems I've described? Or is there something I'm not thinking of that would be even better?
I believe any of the above ways would work; I'm just curious whether any of them is the best way to go about doing this.
Your idea of drawing all of your battle maps on a single tilemap and restricting the camera to only the region of the current map is an instant red flag.
In my experience with Unity, if I want to have a large level drawn on a grid I have to break it up into chunks (each chunk being a separate tilemap) for performance reasons, both in the editor and in the game, so I would not recommend doing that.
If your goal is fast load times, then instantiating a prefab as you suggested is not a terrible idea as long as the respective tilemap is fairly small (fewer than 10,000 tiles); otherwise you would almost definitely notice the map getting loaded in. If you are trying to load large tilemaps seamlessly, it may help to load the tilemap asynchronously before the actual switch occurs.
I would keep only the current map and next map loaded in the scene to ensure a seamless transition and optimal resource usage. Unity's tilemaps are not light.
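For the prefab approach, the spawning side can stay very small. A rough sketch, assuming the battle-map Grids are kept as prefabs in an array (the class and field names are made up for illustration):

```csharp
using UnityEngine;

// Sketch: pick one of the available battle-map prefabs at random
// and instantiate it when the battle scene loads.
public class BattleMapSpawner : MonoBehaviour
{
    // Each entry is a Grid prefab with its Tilemap layers as children.
    [SerializeField] private GameObject[] battleMapPrefabs;

    private GameObject currentMap;

    private void Start()
    {
        int index = Random.Range(0, battleMapPrefabs.Length);
        currentMap = Instantiate(battleMapPrefabs[index], Vector3.zero, Quaternion.identity);
    }
}
```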
Imagine a large free-roam game in Unity,
The yellow area indicates roughly the largest you can make a typical Terrain in Unity.
Art dept. will completely build, meter by meter, the entire scene.
Please note, this has absolutely no connection to repeating scenery (as in runners) or procedural scenery (as in say some race games).
Really, what is the correct and good way to do this in Unity?
use say 50 or so terrains, each perhaps 100m x 100m ?
can you even have or use that many terrains?
or what?
For anyone googling here: the correct solution is indeed terrain stitching, that's it. In practice you must use one of the tools available to do this (e.g. TerrainFormer), or your team will have to write a terrain stitcher from scratch.
Yes, you just use "many terrains". The best approach to the exact problem posed is in fact to just "use lots of Terrains". It seems to be perfectly viable in Unity to have many (dozens of) Terrain units, basically "sitting next to each other".
In practice, you'll need TerrainFormer (https://assetstore.unity.com/packages/tools/terrain/terrain-former-20052) or one of the similar tools. (Or, I suppose, write your own tool from scratch to stitch terrains, manipulate them all at once, join the edge heights perfectly, etc.) It's not realistically possible to sit many Terrains together perfectly by hand, matching all the edges and so on, so you're going to need a "stitcher" package for putting many Terrain squares together.
So, this huge area has about 12 Terrains. That's the answer: you can indeed have "many, many" Terrains in a Unity project; you essentially just sit them next to each other. In practice it's not achievable unless you use one of the editor tools such as TerrainFormer.
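For what it's worth, the runtime side of sitting Terrains next to each other is small; the hard part the stitching tools solve is authoring heightmaps whose edges match. A rough sketch, assuming two already-aligned Terrain objects (field names are placeholders), is simply telling each Terrain who its neighbours are so the LOD seams line up:

```csharp
using UnityEngine;

// Sketch: connect adjacent Terrain tiles so their LOD levels match at the seams.
// Authoring heightmaps whose edges actually line up is the job of a
// stitching tool (TerrainFormer etc.) or your own editor script.
public class TerrainGridSetup : MonoBehaviour
{
    [SerializeField] private Terrain left;
    [SerializeField] private Terrain right;

    private void Start()
    {
        // SetNeighbors(left, top, right, bottom) -- pass null where there is no neighbour.
        left.SetNeighbors(null, null, right, null);
        right.SetNeighbors(left, null, null, null);
    }
}
```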
The proper way to do this would have been procedural mesh generation with MeshFilter and the Mesh API, but you mentioned that this is not at all random or generative, so that option is eliminated.
It's just simply a very long, thin, hand-made environment
The best way in this case would be modular level design. You need to create modular assets. With these you can build a long road from pieces that tile together. A good modeling artist should be able to create and texture modular assets with 3D modeling packages like Blender, Maya or 3ds Max.
All the programmer has to do is make each asset a prefab, then use the Instantiate function to create any length of environment. You can also assemble a static environment in the scene from the Editor. Almost anything can be made into a modular asset, especially buildings and roads.
After assembling them in Unity, you can statically batch all the instantiated modular parts with Unity's StaticBatchingUtility.Combine to improve the performance of the game, since they are not being moved.
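A minimal sketch of those two steps, assuming the road piece is a prefab of a fixed length (the prefab reference and segment length are placeholders):

```csharp
using UnityEngine;

// Sketch: lay out N copies of a modular road prefab end to end,
// then combine them into static batches since they never move.
public class RoadBuilder : MonoBehaviour
{
    [SerializeField] private GameObject roadSegmentPrefab;
    [SerializeField] private int segmentCount = 50;
    [SerializeField] private float segmentLength = 10f;   // must match the asset's real size

    private void Start()
    {
        for (int i = 0; i < segmentCount; i++)
        {
            Vector3 position = new Vector3(0f, 0f, i * segmentLength);
            Instantiate(roadSegmentPrefab, position, Quaternion.identity, transform);
        }

        // Batch everything under this root; the segments must not move afterwards.
        StaticBatchingUtility.Combine(gameObject);
    }
}
```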
Below is an example of a modular road asset that can be used to create almost any amount of road:
You already answered your question:
in this case would it be better to not bother with Unity's otherwise excellent Terrain, and the modellers would just outright build the long course/scenery? (Obviously you'd have to chunk it so it all occludes fine)
I think it's the way to go. If performance is an issue, try putting each chunk in a different scene and then have a master scene load them asynchronously and additively. And of course you want to unload each scene as it becomes invisible to the camera.
I personally would go with your own solution, which is letting Unity's Occlusion Culling system take care of hiding and showing chunks. I would only go with the separate-scenes approach if I weren't getting enough performance this way.
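A minimal sketch of the additive-scene idea, assuming each chunk lives in its own scene that has been added to Build Settings (the scene names are placeholders):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch: load a chunk scene additively in the background and
// unload another one that is no longer visible.
public class ChunkStreamer : MonoBehaviour
{
    public void LoadChunk(string sceneName)
    {
        // Runs in the background; the chunk appears once loading finishes.
        SceneManager.LoadSceneAsync(sceneName, LoadSceneMode.Additive);
    }

    public void UnloadChunk(string sceneName)
    {
        SceneManager.UnloadSceneAsync(sceneName);
    }
}
```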
I recently had the same problem. We built a tile-based infinite runner with road crossings. The camera was positioned behind and above the car, looking down on the street and the player car, so the setup is quite comparable.
We used Curvy from the Asset Store to create paths for moving the player and also for creating the geometry of the streets and the surroundings along the streets.
https://assetstore.unity.com/packages/tools/level-design/curvy-splines-7038
It is also easy to spawn tiles with curvy paths and combine them at runtime. So you can separate long distances into smaller segments and spawn them randomly.
We also used QuickBrush from ProCore to quickly paint environment detail onto the geometry, like trees, bushes or stones. I think the ProCore tools are now integrated into Unity 2018.
Worked quite well.
I'm using Unity3D for a networked multiplayer online game where I have a very large, complex 3D terrain scene like a forest, with trees, cliffs, hills, mountains, boulders, etc.
Players can also build structures sort of like minecraft, and put them anywhere in the scene, or even move them around anytime.
Aside from the human controlled players, there are automated AI players and objects like animals roaming around the scene following a path.
The problem is how to make these automated AI players and animals able to navigate around the static and dynamic player-created structures, because the path they follow can easily get blocked by player-created structures, or even by other players, other AI objects, cliffs, etc. So they have to find a way around them, or get themselves back on track if they tumble down off a high cliff, for example.
So I made a NavMesh and used NavMeshAgents, but that just takes care of the static, non-moving objects. How do I make the AI players navigate around each other and also the dynamic structures created by the players, which can number in the hundreds?
I thought of adding a NavMeshObstacle to everything, but this would result in it being attached to hundreds of objects, since the user-created structures are built using little pieces like blocks or little tiles to create a larger object.
Here are my questions:
Would attaching a NavMeshObstacle to hundreds of little objects slow down the game?
Is there another way to navigate around dynamic objects, other than using NavMeshObstacle, without slowing down the networked game?
Thanks
Based on the documentation for NavMeshObstacle, one could reasonably assume that if carving (an obstacle "carving" a piece out of the nav mesh) is disabled, obstacles will only affect agent performance when they are in the agent's way. They won't affect pathfinding. The agent will just go around them when it's close. Note that this will not work for you if there are so many dynamic obstacles that the agents need a very different path. You can also set them to re-carve a piece out of the nav mesh only when they've moved a certain amount. You should test the performance, but that sounds like it might work well for you.
http://docs.unity3d.com/ScriptReference/NavMeshObstacle.html
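A small sketch of what that could look like on a player-built piece, with carving enabled but only re-carved once the piece has stopped moving (the threshold value is an arbitrary example):

```csharp
using UnityEngine;
using UnityEngine.AI;

// Sketch: add a carving NavMeshObstacle to a placed structure piece.
// With carveOnlyStationary enabled, the hole in the NavMesh is only
// re-carved after the piece stops moving, which keeps the cost down.
public class StructureObstacle : MonoBehaviour
{
    private void Awake()
    {
        var obstacle = gameObject.AddComponent<NavMeshObstacle>();
        obstacle.shape = NavMeshObstacleShape.Box;       // set obstacle.size to match the piece's bounds
        obstacle.carving = true;
        obstacle.carveOnlyStationary = true;
        obstacle.carvingMoveThreshold = 0.1f;            // how far it must move before re-carving
    }
}
```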
If you don't want to spend much time making your own "sensor and navigation" system that extends Unity's navigation, then I think your way is through NavMeshObstacles. If you build your obstacles with blocks like Minecraft, you have a huge range of choices on how to limit or structure your building system so you don't need a NavMeshObstacle on every single block.
There are also good free AI systems, such as RAIN, that implement an extensible and consistent way to do what you want; take a look at the Unity Asset Store to see if anything there fits your needs.