Using the Unity3D engine's high-level networking API (HLAPI) seems to be an all-or-nothing approach. For any game objects expected to be used in multiplayer, (most) behaviors need to be implemented as NetworkBehaviours, and Network components are required. Additionally, a Network Manager is required in all scenes. I'd rather not have duplicate network-enabled versions of all single-player assets, and it seems that the expected practice is for a single-player game to be realized as a LAN Host-based network game with only the single (localhost) client.
My concern is that the HLAPI stuff riding on top of absolutely everything will result in substantial overhead in single-player mode, which is the main focus of the game. Is this a valid concern?
I have considered a couple of mitigation techniques, but they pose their own problems for maintainability and code complexity:
Duplicate the prefabs needed in both modes (violates DRY: changes to one prefab need to be manually mirrored in the other)
Dynamically modify or replace single-player assets with multiplayer ones, or vice versa (complex and potentially error-prone)
What are some better mitigation techniques? Or are they even needed?
This is kind of an old question, but I figure I'd take a run at it anyway. Could you detect when you're in single-player mode, and in that case use Destroy to remove the components you don't need? For example:
http://answers.unity3d.com/questions/378930/how-delete-or-remove-a-component-of-an-gameobject.html
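To sketch the idea (the SinglePlayer flag and the specific components stripped here are just placeholders, not a tested recipe):

```csharp
using UnityEngine;
using UnityEngine.Networking;

// Sketch only: strip networking-only components at startup when running
// single-player, so they add no per-frame overhead.
public class NetworkStripper : MonoBehaviour
{
    // Hypothetical flag; in a real project this would come from your game-mode selection.
    public static bool SinglePlayer = true;

    void Awake()
    {
        if (!SinglePlayer)
            return;

        // Pick the components your game doesn't need outside multiplayer,
        // e.g. NetworkTransform; Destroy() removes them from this GameObject.
        var netTransform = GetComponent<NetworkTransform>();
        if (netTransform != null)
            Destroy(netTransform);

        var identity = GetComponent<NetworkIdentity>();
        if (identity != null)
            Destroy(identity);
    }
}
```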
I'm playing with ECS. I'd like to know whether it's possible to instantiate lots of cubes with rigid bodies as entities, so that I can have far more of them in a scene.
I want tens of thousands (if not more) of simple (Mesh + Collider + Rigidbody) objects in a scene just to passively interact with the scene.
ECS is an architecture pattern; it stands for entity-component-system. In answer to your question: yes, it is possible to have many instances of simple mesh-rigidbody-collider entities, if you engineer your code in such a way as to accommodate this technical requirement. The rest of this answer assumes you are designing your own game engine (as the question lacks detail).
From my experience (I have designed 2 physics engines and 3 custom game engines) the major bottlenecks are as follows:
Graphics implementation - It doesn't matter if you're using OpenGL or DirectX; inexperienced or outdated graphics implementations are a huge source of custom engine bottlenecks. To remedy this, I suggest working through modern OpenGL tutorials, specifically on things like deferred rendering. In DirectX, implementations can get quite complicated, as there are fewer free learning resources available online (partly because implementation details differ significantly from version to version in DirectX). The big new thing I've heard of from the latest DirectX version is mesh shaders, which appear to "simplify" the process. Research will be your friend for this step.
Physics - This can reduce frame rate, especially with lots of collisions. Unless you want to be a physics programmer, I suggest using available open-source implementations such as Bullet, an excellent C++ physics engine. PhysX is an alternative, though the implementation can be daunting, and both libraries suffer from subpar or terse documentation. These libraries can be easily integrated (from a design standpoint) into a standard ECS framework. If you insist on designing your own, I suggest reading through GDC presentations from people like Erin Catto, Erwin Coumans, Gino van den Bergen, Dirk Gregorius, etc.
As for "tens of thousands": I can almost guarantee that if they aren't spheres, tens of thousands of passive colliders will absolutely slow down your engine with a custom physics implementation. You can trivially multithread a custom physics engine with an iterative solver in two areas: internal collider geometry updating (both their bounding volumes and world geometry) and the narrowphase of collision detection. If your broadphase outputs unique pairs of potential collisions, you can easily parallelize the actual collision testing, provided your geometry is updated prior to this stage, since collision detection can then be treated as a read-only task.
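To illustrate that last point, here is a minimal sketch of such a parallel narrowphase pass (all the types and the TestPair function are placeholders, not taken from any particular engine):

```csharp
using System.Threading.Tasks;

// Placeholder types standing in for your engine's broadphase output and contact data.
public struct CollisionPair { public int BodyA, BodyB; }
public struct Contact { public bool Hit; /* normal, penetration depth, ... */ }

public static class Narrowphase
{
    // Placeholder: a real engine would run GJK/SAT on the two collider shapes here.
    static Contact TestPair(CollisionPair pair) => new Contact { Hit = false };

    public static Contact[] Run(CollisionPair[] pairs)
    {
        var contacts = new Contact[pairs.Length];

        // Collider geometry was updated before this stage, so each iteration only
        // reads shared data and writes its own slot: safe to run in parallel.
        Parallel.For(0, pairs.Length, i => { contacts[i] = TestPair(pairs[i]); });

        return contacts;
    }
}
```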
The last simple optimization for physics would be to use collision islands (see the bottom of the page, under optimizations), which essentially separate collisions into independent groups; e.g. two Jenga towers would be represented by two collision islands, and each can be solved on a separate thread because there are no data dependencies between islands, which suits iterative solvers.
For tens of thousands, you may even consider experimenting with compute shaders, as they are great for passive simulation of large quantities of objects. The link provided actually incorporates simple collisions in learning how to use these shaders.
We are currently making our dream game, which features thousands of fast-dying zombies.
The problem is that we are making it for mobile devices.
Hybrid ECS is not enough, because even 100-200 low-poly zombies are too heavy to render, even after extensive optimization.
The solution is to use only pure ECS. Following this tutorial, I can now spawn 2-3k zombies at 40-50 fps on low-end devices.
But I'm stuck on adding behaviour. I just can't add it to each entity. Taking that tutorial as an example, how do I add custom behaviour, like AI scripts/systems, for each cube?
I tried to add a "system" to it, but it applies only to the GameObject that you use for making copies.
P.S. I don't want to use external ECS frameworks, because I'm sure that in the future Unity's built-in ECS will be the ultimate out-of-the-box solution.
You don't. With Unity ECS you register systems. Systems work on entities that have certain components attached. E.g. you can create a system that processes all zombies (i.e. all entities with a "Zombie" component) and executes some logic for them in each tick. The trick with ECS is that you do not handle each entity separately; you run logic for all entities that share certain criteria. This is why it is so fast, but it requires you to mentally let go of the MonoBehaviour approach. I found this tutorial helpful in getting started with actually implementing logic: http://infalliblecode.com/unity-ecs-survival-shooter-part-1/
It's not 100% up-to-date, but it should be enough to give you an idea of how things roll with ECS.
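To make that concrete, here is a rough sketch of a system that moves every zombie entity. The type and method names (ComponentSystem, Translation, Entities.ForEach) follow an older preview of the Entities package and will differ in newer releases, so treat this as an illustration of the shape, not exact API:

```csharp
using Unity.Entities;
using Unity.Mathematics;
using Unity.Transforms;

// Tag component: its presence marks an entity as a zombie.
public struct Zombie : IComponentData { }

// Per-entity data the system reads.
public struct MoveSpeed : IComponentData { public float Value; }

// One system runs the same logic over every matching entity each frame;
// no per-entity scripts are attached anywhere.
public class ZombieMoveSystem : ComponentSystem
{
    protected override void OnUpdate()
    {
        Entities.WithAll<Zombie>().ForEach((ref Translation translation, ref MoveSpeed speed) =>
        {
            // Example behaviour: shamble forward along +z.
            translation.Value += new float3(0f, 0f, speed.Value * UnityEngine.Time.deltaTime);
        });
    }
}
```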
I'm currently doing a project with Microsoft's HoloLens. The problem is that the HoloLens has limited memory, so I can only make a spatial mapping of a room and not of a building, because it can't remember the whole building. I had an idea: maybe I can create several objects and assemble them? But nobody talks about this... Do you think it's possible?
Thanks for reading.
Y.P
Since you don't have a compass, you could establish some convention to help. For example, you could start the scanning with a voice command (and stop it with another one), and decide to only start scanning when you're facing north. Then it would be easy to know the orientation of each room. What may be harder is getting the angle exactly right. Your head might be off by a few degrees and you may have to work some "magic" (post-processing) to correct it.
Or placing QR codes on a wall (printer paper + scotch tape) and using something like Vuforia can help you avoid this orientation problem altogether (you would get the QR code’s orientation which would match that of the wall).
You can also simplify the scanned mesh and convert it to planes. That way you can remember simpler objects instead of the raw spatial mapping mesh. (Search for the SurfaceToPlanes script in the Holographic Academy tutorials).
Scanning, the first layer - the HoloLens trying to reason about the environment - is an unstoppable process. There is no API for starting or stopping it. And as far as I know, that process also slowly consumes more and more memory. The only things you can do are deleting space (i.e. deleting holograms) or covering the sensors. But that's OS/hardware level, not app level, which is presumably what you want.
Layer two, which is probably what you are talking about, is starting and stopping the spatial reconstruction process, where that raw spatial data is processed into a low-poly mesh (aka spatial mapping). This process can be started or stopped, for example through Unity's SpatialMappingCollider and SpatialMappingRenderer components, if you use Unity.
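For example, a rough sketch of toggling those components from script (this assumes both components are on the same GameObject; property names can differ between Unity versions, so treat it as a starting point only):

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;

// Rough sketch: pause/resume spatial mapping (layer two) from script.
public class MappingToggle : MonoBehaviour
{
    SpatialMappingRenderer mappingRenderer;
    SpatialMappingCollider mappingCollider;

    void Awake()
    {
        mappingRenderer = GetComponent<SpatialMappingRenderer>();
        mappingCollider = GetComponent<SpatialMappingCollider>();
    }

    public void SetMappingActive(bool active)
    {
        // freezeUpdates stops new surface data from being processed while
        // keeping the already-built mesh in the scene.
        mappingRenderer.freezeUpdates = !active;
        mappingCollider.freezeUpdates = !active;
    }
}
```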
Finally, the third layer is extracting objects/segments from that spatial mapping mesh into primitives, like that SurfaceToPlanes script. You can also fully control when that happens.
There has been a lot of confusion, especially due to the renaming sprees in the MixedRealityToolkit (overuse of the word "Scanning") and Unity (SpatialAnchor to WorldAnchor, etc.), and misleading tutorials that use a lot of colloquialisms instead of crisp terminology.
Theory aside: if you want the HoloLens to think of your entire building as one continuous space in terms of the first layer, you're out of luck. It was designed for a living room, and there is a lot of voodoo involved in making it work stably in facilities of 30x30 meters. You probably want to rely on disjoint "islands" with specific detection anchors to identify where you are, or rely on markers and coordinates relative to them.
Cheers
I am messing around in Unity3D, making a 2D project. I want to create my own code architecture for Unity's component-based system.
As I don't want to create god-controller scripts, and I'm more interested in separating code responsibilities (having MVC and MVVM in mind), I am trying to find a good solution.
My first take looks like this:
A GameObject is built from:
Unity components - e.g. SpriteRenderer, Animator, Rigidbody2D
Controller - the only responsibility of this component is to handle Unity callbacks (like Update, FixedUpdate, OnCollision) and to execute functions from the model.
Models/Proxies - these components contain data, provide functions to manipulate the GameObject's Unity components, and dispatch events to the outside world.
I am wondering what you think about this approach, what your coding habits are in Unity3D projects, and what solutions have worked for you.
While I have learned and taught MVC and similar approaches, I find that when designing game architectures one usually has to be a bit more flexible. In Unity I generally take the following approach.
I will create a few GameObjects to hold the necessary global logic. This would be things like the overarching state machine, networking, and sometimes control input. If anything needs to persist between scenes it will go here. Each object typically has one component script for game logic and one for temp/debugging functions that gets turned off or removed when not needed.
If the project has fixed levels I will make each level a scene and I will store level layout and other level specific information in the scene. If I am doing a more procedural project I will create a "LevelGenerator" object with component scripts that build and populate the level at runtime.
If I am building a system that has lots of mostly independent agents (e.g. enemy creatures), I try to keep the game logic and necessary state information for each agent as close to it in the hierarchy as possible. For example, the agent's position and rotation would be stored in its transform. I might store the agent's health, ammunition, speed, and current status effects, along with the functions for moving, shooting, healing, and death, in a component script on the agent's GameObject.
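A minimal sketch of that "the agent owns its own state" idea (field and method names are just examples, not a prescribed interface):

```csharp
using UnityEngine;

// Each agent carries its own data and behaviour as a component on its GameObject.
public class Agent : MonoBehaviour
{
    public float health = 100f;
    public int ammunition = 30;
    public float speed = 3.5f;

    public void Move(Vector3 direction)
    {
        // Position/rotation already live on the transform, so no central manager
        // needs to track them.
        transform.position += direction.normalized * speed * Time.deltaTime;
    }

    public void TakeDamage(float amount)
    {
        health -= amount;
        if (health <= 0f)
            Die();
    }

    void Die()
    {
        // Destroying the GameObject takes all of the agent's local state with it.
        Destroy(gameObject);
    }
}
```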
While there are countless other approaches that could work, I like this approach for a few reasons:
It saves me from having to manually manage tons of data access in a central script. If I need to know where all the monsters are, I can just keep a list of game objects rather than using custom data types.
When the agent gets destroyed all the local data goes with it. (No complex functions to clean up dead agents.)
From a game logic perspective (on the projects I typically work on) it usually makes sense that each agent would "know" about itself and not about everyone else.
I can still use all the OO goodies like polymorphism etc. when necessary.
This response likely says more about how I approach game design and software architecture than general best practices but it might be useful.
One note on encapsulation in Unity. Every component script you add to a game object has a bit of overhead. If your scene has a couple of dozen agents in it, then this is not a big deal and I would recommend trying to keep things as OO and modular as possible. If you are building a system with hundreds or thousands of active agents, cutting the components per agent from two to one can mean quite a bit of saved frame time.
I use another approach.
I don't use many controllers attached to game objects. I just have some kind of GameController which creates other structures.
I have a separate project shared between games. This project contains design patterns and is built before the main project. I make heavy use of patterns such as State, Observer, Builder, and ObjectPool to keep my code clear and simple.
Another reason I use this approach is performance optimization. I create objects once and then reuse them, and I only do things like gameObject.GetComponent once. When I need to create many objects from the same prefab, I use an ObjectPool to avoid repeated Instantiate/Destroy calls.
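Roughly, such a pool looks like this (a simplified sketch, not tied to any particular framework): instantiate up front, then activate and deactivate instances instead of creating and destroying them during gameplay.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal prefab pool: reuse inactive instances instead of Instantiate/Destroy.
public class ObjectPool
{
    readonly GameObject prefab;
    readonly Stack<GameObject> free = new Stack<GameObject>();

    public ObjectPool(GameObject prefab, int initialSize)
    {
        this.prefab = prefab;
        for (int i = 0; i < initialSize; i++)
        {
            var go = Object.Instantiate(prefab);
            go.SetActive(false);
            free.Push(go);
        }
    }

    public GameObject Spawn(Vector3 position)
    {
        // Reuse an inactive instance if one exists; otherwise grow the pool.
        var go = free.Count > 0 ? free.Pop() : Object.Instantiate(prefab);
        go.transform.position = position;
        go.SetActive(true);
        return go;
    }

    public void Despawn(GameObject go)
    {
        go.SetActive(false);
        free.Push(go);
    }
}
```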
My logical game objects (actors) communicate with each other using the Observer pattern. My single GameController just sends events like Awake and Update to the actors. Some objects have a StateController which forwards events like Awake and Update to the object's current state. This is useful for separating the behaviour of each object state.
I have a component-system architecture similar to Unity's, and I also have services, like an InputService, that can be accessed from any object via a ServiceLocator.
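Roughly, the shape of such a ServiceLocator is something like this (a simplified sketch; the service names in the usage comment are hypothetical): register a service once, resolve it from anywhere by type.

```csharp
using System;
using System.Collections.Generic;

// Minimal service locator: a static registry keyed by service type.
public static class ServiceLocator
{
    static readonly Dictionary<Type, object> services = new Dictionary<Type, object>();

    public static void Register<T>(T service) where T : class
    {
        services[typeof(T)] = service;
    }

    public static T Get<T>() where T : class
    {
        return (T)services[typeof(T)];
    }
}

// Usage (hypothetical names):
//   ServiceLocator.Register<IInputService>(new KeyboardInputService());
//   var input = ServiceLocator.Get<IInputService>();
```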
There are more points to it, but the main idea is clear and easily maintainable code. This is difficult to achieve with the standard Unity controllers and the SendMessage approach.
I'm currently working on a small iPhone game, and am porting the 3d engine I've started to develop for the Mac to the iPhone. This is all going very well, and all functionality of the Mac engine is now present on the iPhone. The engine was by no means finished, but now at least I have basic resource management, a scene graph and a construction to easily animate and move objects around.
A screenshot of what I have now: http://emle.nl/forumpics/site/planes_grid.png. The little plane is a test object I've made several years ago for a game I was making then. It's not related to the game I'm developing now, but the 3d engine and its facilities are, of course.
Now I've come to the topic of materials: the description of which textures, lights, etc. belong to a renderable object. This means a lot of OpenGL client state and glEnable/glDisable calls for every object. What would you suggest to minimise these state changes?
Currently I'm sorting by material, since objects with the same material don't need any changes at all. I've created a class called RenderState that caches the current OpenGL state and only applies the members that differ when a different material is selected. Is this a workable solution, or will it grow out of control as the engine matures and more and more state needs to be cached?
A bit of advice: just write the code you need for your game. Don't spend time writing a generalised rendering engine, because it's more than likely you won't need it. If you end up writing another game, extract the useful bits into an engine at that point. This will be way quicker.
If the number of states in OpenGL ES is as high as in the standard version, it will become difficult to manage at some point.
Also, if you really want to minimize state changes, you may need some kind of state-sorting concept, so that drawables with similar states are rendered together without a lot of glEnable/glDisable calls between them. However, this can be difficult to manage even on PC hardware (imagine state-sorting thousands of drawables), and blindly changing the state might actually be cheaper, depending on the OpenGL implementation.
For comparison, here's the approach taken by OpenSceneGraph:
Basically, every node in the scene graph has its own stateset which stores the material properties, states, etc. The nice thing is that statesets can be shared by multiple nodes. This way, the rendering backend can just sort the drawables with respect to their stateset pointers (not the contents of the stateset!) and render nodes with the same stateset together. This offers a nice trade-off, since the backend is not bothered with managing individual OpenGL states, yet can achieve nearly minimal state changing if the scene graph is generated accordingly.
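A minimal sketch of that grouping idea (placeholder types, written in C# for brevity; the structure translates directly to C++): bucket drawables by the state object they share, by reference rather than by contents, and apply each state only once per group.

```csharp
using System.Collections.Generic;

// Placeholders for your engine's own types.
public class StateSet { /* textures, blend mode, enabled states, ... */ }

public class Drawable
{
    public StateSet State;
    public virtual void Draw() { /* issue the actual draw call */ }
}

public static class StateSortedRenderer
{
    public static void Render(IEnumerable<Drawable> drawables)
    {
        // Group by StateSet instance; reference equality is the default key comparison.
        var buckets = new Dictionary<StateSet, List<Drawable>>();
        foreach (var d in drawables)
        {
            if (!buckets.TryGetValue(d.State, out var list))
                buckets[d.State] = list = new List<Drawable>();
            list.Add(d);
        }

        foreach (var bucket in buckets)
        {
            // Apply bucket.Key here (diffing against the cached GL state),
            // then draw every object that shares it.
            foreach (var d in bucket.Value)
                d.Draw();
        }
    }
}
```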
What I suggest in your case is that you do a lot of testing before sticking with a solution. Whatever you do, I'm sure you will need some kind of abstraction over OpenGL state.