Thank you so much for all the fast responses, I truly appreciate it. I have another question I've been wondering about: I am working on a mobile game and I want to keep its size as small as possible, so I wanted to know whether using Unity particle effects impacts performance and game size. If I wanted to make a visual effect of a GameObject breaking apart when destroyed, should I use the Unity particle system to do so, or should I make an animation of that game object breaking and use that instead?
To summarize: should I be making animations for things such as rain, confetti, and fire effects instead of making them with the particle system? Is there any drawback to using one over the other?
As TEEBQNE said, it is your choice between animations and the particle system, depending on what you need. If your team is small or you are a single developer, it is best to opt for the particle system for its flexibility and quick iteration.
Optimising the particle system is not hard; just make sure you don't overdo the numbers and avoid some common mistakes. There is a list of best practices for optimizing mobile games found here.
You can also find some pretty good-looking premade particles in the Unity Asset Store, which are already well optimized.
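For the break-apart case specifically, a common pattern is to swap the object for a one-shot particle effect at the moment it is destroyed. Here is a minimal sketch, assuming a hypothetical BreakEffect component with a particle prefab assigned in the Inspector:

using UnityEngine;

// Hypothetical example: play a one-shot particle effect in place of the
// object when it "breaks", instead of a pre-authored break-apart animation.
public class BreakEffect : MonoBehaviour
{
    public ParticleSystem breakEffectPrefab;  // assign a prefab in the Inspector

    public void Break()
    {
        // Spawn the effect where the object was, then remove the object itself.
        ParticleSystem fx = Instantiate(breakEffectPrefab, transform.position, Quaternion.identity);
        // Clean up the effect instance once it has finished playing.
        Destroy(fx.gameObject, fx.main.duration + fx.main.startLifetime.constantMax);
        Destroy(gameObject);
    }
}

An animation can look more deliberately authored, but the particle approach is usually quicker to tweak, and the same effect prefab can be reused for anything else that breaks.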
Related
I'm wondering about the best way of reducing my CPU and memory overhead for areas of the game world that do not need to be updated every frame.
I have just started to consider this issue as I'm currently implementing a shadow detection system using Raycasts.
My problem is this:
I can have about 100 lights in my level that on every frame send a Raycast to nearby characters to determine if these characters are in shadows.
My game is a low-poly PC game, and I understand the overhead from raycasting isn't that drastic, so it's not a major concern. But I'm still not sure of the best approach to optimising this.
I have been thinking of a few solutions, but am unsure if there is a "standard", per se.
1. Exit the update loop if the player is too far away
void Update() {
    // Skip the shadow raycasts when the player is outside this light's radius.
    if (Vector3.Distance(playerPos, transform.position) > someRadius) {
        return;
    }
    // ...per-frame raycast work would go here...
}
This is the most glaringly obvious solution, with even more obvious concerns.
This update loop will still be hogging CPU cycles, performing two calculations every frame for every light point.
2. Disable Light gameObjects when the player is too far away
This method is more efficient in terms of CPU overhead, as those per-frame calculations go away. However, I'm still hogging unnecessary memory.
In order to make this solution more scalable I would have to design some kind of "enabler" that keeps track of game objects that should be enabled/disabled based on the player position.
But at this point I know I'm re-inventing the wheel, and feel very sure that there is an industry standard for this.
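Something like this is what I have in mind - just a rough sketch, with a made-up LightActivator name, radius, and check interval:

using UnityEngine;

// Rough sketch: enable/disable a Light based on distance to the player,
// checked a few times per second rather than every frame.
public class LightActivator : MonoBehaviour
{
    public Transform player;           // assigned in the Inspector
    public float activeRadius = 30f;   // made-up value, tuned per scene
    public float checkInterval = 0.25f;

    private Light cachedLight;

    void Start()
    {
        cachedLight = GetComponent<Light>();
        // A random start offset spreads the checks across frames so all
        // lights don't run their distance test on the same frame.
        InvokeRepeating(nameof(DistanceCheck), Random.value, checkInterval);
    }

    void DistanceCheck()
    {
        float sqrDistance = (player.position - transform.position).sqrMagnitude;
        cachedLight.enabled = sqrDistance < activeRadius * activeRadius;
    }
}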
Is there an alternative to enabling/disabling?
I see a lot of game developers talking about physically unloading areas of their game from memory and writing those areas to disk when the player is not nearby.
I wonder if this is achieved by simply destroying and re-instantiating the objects.
Question
Is Unity opinionated about this?
The page here lists an example, similar to my first solution. But they are talking about 100s of thousands of updates per frame here.
Maybe I don't need to worry as much as I think
Thanks!
If you don't want to render objects that are outside your camera's view, occlusion culling is exactly what you need. See this link:
https://docs.unity3d.com/Manual/OcclusionCulling.html
If you want to change how your object looks based on its distance from the camera, you should use LOD (Level of Detail). Here is the documentation:
https://docs.unity3d.com/Manual/class-LODGroup.html
I have been building a game for VR using Unity3D. It has only low-poly models and the file size is less than 40 MB, yet the game still lags when played on mobile. Please suggest how to improve the performance.
Thank you in advance.
In order to improve performance in VR on mobile you have to optimize everything as well as you can. You should keep some of these variables in mind:
Graphics Side
- Number of polygons in the scene
- How many light sources you have
Programming Side
- How much work your code is doing, and whether it is doing it efficiently
The programming part can include problems within the physics system, as well as logic problems that drag down overall performance through extra computation.
My advice is to learn about the Profiler that Unity offers; with it you can observe how much work your code is doing and where exactly your bottleneck is. This video can also be useful.
Of course, a solution could be to implement your code following design standards, like design patterns and software architecture (depending on the size of the project).
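As a small illustration of the Profiler point: you can wrap code you suspect is slow in custom profiler samples, so its cost shows up under its own label in the Profiler window. This is only a sketch; the class and method names are placeholders:

using UnityEngine;
using UnityEngine.Profiling;

// Hypothetical example: custom Profiler samples around suspect code, so the
// cost appears under "EnemyAI.Pathfinding" in the Profiler window.
public class EnemyAI : MonoBehaviour
{
    void Update()
    {
        Profiler.BeginSample("EnemyAI.Pathfinding");
        RecalculatePath();  // placeholder for your own expensive logic
        Profiler.EndSample();
    }

    void RecalculatePath()
    {
        // ...pathfinding work would go here...
    }
}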
I hope it can be useful for you!
Here are some of the issues I found from developing and launching a VR game.
Number of polygons is usually the first thing to check, even if your models are low poly. For example, I looked at Synty models in the Unity Asset Store and some of them were over 1k polygons for a bag and 7k for a character model. This seriously reduces the number of objects you can have if you want to target a maximum of 50,000 polygons per eye.
With some models, you can use Blender and the Decimate modifier to reduce the polygon count pretty easily. From there I would use LODs to reduce their count further based on distance.
Use occlusion culling (pro version only)
Set your camera's draw distance to maybe 100 instead of the default
Use mobile shaders and be careful with some of the standard shaders, as they are expensive. Also, transparent shaders become expensive because they cause overdraw.
Batch your textures and make objects static where possible
Don't use dynamic shadows on your lights; bake your lighting instead
Try to avoid using physics, as it gets expensive; instead, raycast to trigger events or shoot weapons (see the sketch after this list)
Run the Profiler often and check for any bottlenecks (pro version only)
Reduce the number of particle effects and their particle counts
Character bones can also cause issues, so remove as many as possible
There is also your code to look at as mentioned by Manujamming
Set the quality setting to Low to gain the best performance.
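Here is a minimal sketch of what I mean by raycasting instead of physical projectiles; the Damageable component is just a placeholder for whatever your own hit reaction is:

using UnityEngine;

// Sketch: hitscan shooting with Physics.Raycast instead of spawning physical
// projectile objects for the physics engine to simulate.
public class RaycastGun : MonoBehaviour
{
    public float range = 100f;
    public int damage = 10;

    public void Fire()
    {
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit, range))
        {
            // Hypothetical Damageable component; replace with your own reaction.
            var target = hit.collider.GetComponent<Damageable>();
            if (target != null)
            {
                target.TakeDamage(damage);
            }
        }
    }
}

// Minimal placeholder so the sketch is self-contained.
public class Damageable : MonoBehaviour
{
    public void TakeDamage(int amount)
    {
        Debug.Log(name + " took " + amount + " damage");
    }
}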
Could you provide a screenshot of your game scene?
I hope this makes sense.
Best of luck!
I want to achieve an effect like a charged blue ball (a ball of energy) before shooting it into the sky. I thought about using some particle systems, but a friend of mine said that particle systems are very expensive memory-wise. Is there a different way to achieve this? Where can I find some examples?
Particle Systems are probably the best way and can achieve some really nice looking effects.
It depends how optimized your particle system is, but my game used a ton and it ran fast and smooth (this was with OpenGL ES).
Generally they're fine if you keep the max particles as low as possible (whilst still looking good).
Thanks to an open source engine (cocos2d iphone) I also got all my particle systems to render in the same OpenGL call.
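If you happen to be doing this in Unity, the particle cap is easy to set from script as well as in the Inspector. A rough sketch, with made-up numbers:

using UnityEngine;

// Rough sketch: cap the particle count of an energy-ball effect from script
// so the effect stays cheap; the numbers here are placeholders to tune.
public class EnergyBallEffect : MonoBehaviour
{
    void Start()
    {
        ParticleSystem ps = GetComponent<ParticleSystem>();
        var main = ps.main;
        main.maxParticles = 50;      // as low as it can go while still looking good
        main.startLifetime = 0.5f;   // shorter lifetimes also keep the live count down
    }
}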
I'm working on a 2D game (kind of like a top down space shooter) for the iPhone using an engine very similar to cocos2d (not exactly though) on OpenGL ES. I'm trying to figure out how I'm going to do collision detection.
All the ships for my game are images, and the game will load the image as a texture onto the screen. I've got very very simple detection going already that basically just takes the rectangles of the images and checks to see if those collide and can do that just fine.
But, of course the ship isn't perfectly taking up the entire rectangle so there is whitespace in there. So my question is how am I supposed to account for that whitespace? Do I have to have the matrices of the ships stored? Or is there another way? I've also heard of possibly using the Chipmunk physics engine for collision detection? How would that work?
(1) Regarding Chipmunk, the short answer is yes: you should immediately download Chipmunk, donate something to the bloke, and start learning about it.
Working with that for a day or so will basically answer all the questions you have. If you want to work with physics games you're going to need to get in to it.
(2) You ask about using an approximation ("just" a rectangle) instead of something more accurately shaped like your spaceships. In fact, you may be amazed to learn that this is precisely how it is usually done in all the famous big-name games you've played since we were all kids! Indeed, sometimes you might use little more than A DOT (!) to detect collisions.
What you'd probably do in production is try a more complicated model, play with it for a few hours, and see whether it is actually any better to play with than your simple dot or rectangle model.
If you do want to make a more complicated model -- just make one! Build it up from three or four rectangles using your current system. Try them "all against each other", and have "one big one to check first" to see if they are even anywhere near each other (a sort of simple spatial hashing).
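To make that concrete, here is a rough sketch of the "few rectangles plus one big one to check first" idea - shown with Unity's Rect type for brevity, though the logic is the same in any engine:

using UnityEngine;

// Sketch: a compound hit shape made of a few rectangles, with one large
// bounding rectangle checked first as a cheap broad phase.
public class ShipHitShape
{
    public Rect bounds;   // one big rectangle enclosing the whole ship
    public Rect[] parts;  // three or four smaller rectangles tracing the hull

    public bool Collides(ShipHitShape other)
    {
        // Broad phase: if the big boxes don't overlap, nothing else can.
        if (!bounds.Overlaps(other.bounds))
            return false;

        // Narrow phase: test the small rectangles "all against each other".
        foreach (Rect a in parts)
            foreach (Rect b in other.parts)
                if (a.Overlaps(b))
                    return true;

        return false;
    }
}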
You will find that when you do it with Chipmunk - which, as you now know, you must begin immediately after reading this message - you just build it up in the same tedious way. It's not a magic bullet. But if you were going to use a "more complicated model", yes, it is better to go with something standard like Chipmunk to do the work in - it will get done quicker and better. There is heaps to learn, so hop to it!
(3) Unity is not just for 3D. Finally, if you want to do it the smart-ass grown-up way, you'd use Unity3D, which will let you access the very metal - the Nvidia physics on the chipset. Note that Unity works perfectly for 2D games too - you just click one button in Unity to use a 2D projection (many brand-name iPhone 2D games are done exactly like that).
If you use that approach, you can (if you want) have "absolutely exact" physics, with every nook and cranny of your model modelled.
What is the downside to doing this? Ah hah... well, the thing is, you need superb actual 3D models of all the stuff in your game! (Like you see them building in the "how we made the movie" special features that come with your favourite Pixar blu-ray.) To do that you need tools like Autodesk Maya. You would quite likely buy some models ready-made from a digital prop shop (no need to build "a chair" as it has been done 1000 times already and you can buy one for ten dollars).
(Unity3D is completely free to use for a few months while you see if it can make you money.)
Incidentally, on the Chipmunk front - you can just use Corona, which is ridiculously easy to use and has Chipmunk-like physics completely built in with zero effort on your part! You could have the whole game done in less time than it took to write this answer. You could be selling your game already and thinking up the next one. Or, you could use Cocos, which indeed has a Chipmunk-like physics library built in... personally (just me) I do not like and won't touch Cocos - but of course many games use it.
(It seems pointless, to me, to use Cocos, which is a "for idiots" product, when you can just go ahead and use Corona, which is also a "for idiots" product but stupendously easier to use, 1000x more solid, and probably literally 10x faster to finish your product and start making money.)
Summary:
So in some sense, using Unity3D (and hence the actual Nvidia physics on your computer's chips) is the ultimate solution if you want detailed nook-and-cranny collisions. Going down one step, Chipmunk is exactly, precisely what you should be using on the iPhone/iPad for 2D physics - it is precisely what is used in all the famous games we know so well. You have a bit of learning to do, so hop to it - it's super fun. Finally, go right ahead and just make your current model more complicated if you wish - roll your own by adding more rectangles!
And the fourth point is: be sure to remember that in games, astonishingly, you can often get away with remarkably simple physics (often SIMPLER than one rectangle - just a point, i.e., simply measuring the distance between centers!). Fifthly, after going to all the effort of testing more detailed physics, you would play-test them against each other and find out what is the simplest physics you can get away with.
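That "just a point" check really is nothing more than comparing a distance - a tiny sketch, again with Unity types and made-up radii:

using UnityEngine;

// Sketch: treat each ship as a point with a radius and compare squared
// distances, which is often all a fast arcade game needs.
public static class DotCollision
{
    public static bool Collides(Vector2 centerA, float radiusA, Vector2 centerB, float radiusB)
    {
        float combined = radiusA + radiusB;
        // Compare squared distances to avoid the square root.
        return (centerA - centerB).sqrMagnitude < combined * combined;
    }
}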
OK, I'm still brand new to iPhone development. I have a free game on the App Store, Winner Pong, but it's just a Pong clone (who would've guessed) that uses standard UIImageViews for the sprites. Now I want to do something a little more complicated and port my Xbox 360 game, Trippin Alien, to the iPhone. I obviously can't keep using UIImageViews, so I was wondering which would be better to learn: the simpler but performance-hindering Quartz 2D, or the smooth-running but dauntingly complex OpenGL ES.
My game is basically a copter game, with about 8-10 sprites on screen plus a simple particle system (video here). Not too complicated, but performance does matter. My only prior game programming experience is in Microsoft's XNA and C#, which has a built-in SpriteBatch framework that makes it incredibly easy to draw, scale, and rotate pre-rendered sprites on screen. Is it worth it to learn OpenGL ES? How big is the performance gap? Is Quartz really that simple?
Also, if anyone knows of any tutorials for either one, please, post them here. I need as much help as I can get.
Look through code samples of each to actually see the complexity. You might find that OpenGL isn't so daunting.
Regarding the performance. Core Animation, which Quartz2d is part of, uses OpenGL behind the covers, so for simple sprite animations, I would expect your game to perform fairly well.
I would also glance over the programming guide for each before making your final decision.
Another alternative is to use something like Unity. I recently started playing around with the trial version of this development environment, and if you're mostly doing game development with graphical objects and sprites, this may be one option to consider. You can script in C#, JavaScript, or Boo. The development environment allows you to graphically set up your scenes and levels. You can then attach scripts to graphical objects to drive animation, handle user events, etc.
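As a trivially small, hypothetical example of what "attaching a script" looks like in Unity - a component that spins an object and reacts to a click:

using UnityEngine;

// Hypothetical example of a script attached to a graphical object in Unity:
// rotates the object every frame and logs when it is clicked.
public class Spinner : MonoBehaviour
{
    public float degreesPerSecond = 90f;

    void Update()
    {
        transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);
    }

    void OnMouseDown()
    {
        Debug.Log("Object clicked: " + name);
    }
}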
One downside to Unity, which I've heard from others, is that if you want to use the familiar UI controls from UIKit, it's not so easy to instantiate them... I haven't verified this myself.