My Daydream app works fine when I move slowly, but when I rotate my head too quickly it starts glitching and then the app crashes. I'm guessing it has something to do with the frame rate dropping while high-quality objects are being rendered, or something similar. If someone has a solution, please help me out.
Assuming you have no errors in logcat, visual glitches are generally indicative of extremely high per-frame GPU load.
It would be good to profile your app and share the crash report, but the only way I've ever been able to actually crash an app this way is with very large textures.
Check the size and number of textures in your scene: it's possible that rapid head rotation causes a large number of textures to be loaded as objects become visible. You can also see how large your assets are at build time by inspecting the Unity Editor log after a build. This can help make sure you aren't running out of RAM on the device.
Make sure you have texture compression and mipmaps enabled on all textures. Disabling mipmaps on minified textures can easily overload the GPU.
Make sure you don't have too much transparency. Adding a lot of overdraw to the scene can overload the GPU.
Follow the performance optimization guidelines: https://docs.unity3d.com/Manual/OptimizingGraphicsPerformance.html
https://unity3d.com/learn/tutorials/topics/virtual-reality/optimisation-vr-unity
Make sure renderViewportScale is around 0.7, MSAA is at 2x or less, and you aren't using post-process effects, shadows, or any kind of deferred rendering.
Stay below 100 draw calls and 200k vertices on screen.
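To make the texture checks above concrete, here is a back-of-envelope memory estimate (plain Python, not a Unity API; 4 bytes per pixel for uncompressed RGBA and a ~1/3 overhead for a full mipmap chain are the assumptions):

```python
def texture_memory_bytes(width, height, bytes_per_pixel=4,
                         mipmaps=True, compression_ratio=1):
    """Rough estimate of GPU memory used by one texture.

    compression_ratio: e.g. 4 for a 4:1 block-compressed format.
    A full mipmap chain adds roughly one third on top of the base level.
    """
    base = width * height * bytes_per_pixel / compression_ratio
    return int(base * 4 / 3) if mipmaps else int(base)

# A single uncompressed 4096x4096 RGBA texture with mipmaps is ~85 MB;
# just a handful of these will exhaust RAM on a mobile device.
print(texture_memory_bytes(4096, 4096) / 1024 / 1024)
```

Running this kind of estimate over the asset list from the Editor log quickly shows whether textures plausibly explain an out-of-memory crash.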
Currently, I'm trying to create a game in Swift's SpriteKit, and I'm trying to give it a smoother framerate than it currently has. Right now, whenever I, say, press a button, the framerate suddenly drops to a much lower value, causing the player sprite (whose movement is based on a value multiplied by a deltaTime value) to suddenly jump forward. Is there any way to smooth the framerate so that the changes in framerate aren't so sudden and drastic?
Code running in the simulator isn't a good test of the graphics performance of a SpriteKit (or SceneKit) application.
This is because, to perform certain graphics effects, SpriteKit makes calls to the specific graphics hardware available in an iOS device. In the simulator, those calls have to be emulated in software by the host machine, reducing graphics performance (some operations are affected a lot, others less so), even if the graphics card in the host machine is nominally more powerful than the one in the iOS device.
Simulator testing is good for checking functionality, and it can give an indication of performance. For example, if an application that was running fine in the simulator suddenly starts performing poorly after a change, checking the displayed draw count may show that the number of draws required to render the scene has increased, possibly causing the slowdown. Conversely, a change may lead to an increase in performance and a reduction in draw count.
However, the only way to be sure is to test the application on a real device.
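As for the hitch described in the question (a slow frame making speed-times-deltaTime movement jump), a common mitigation is to clamp and smooth the delta-time before using it for movement. A minimal, language-agnostic sketch in Python follows; the 1/30 clamp and 0.2 smoothing factor are arbitrary choices, not SpriteKit values:

```python
class DeltaSmoother:
    """Clamp spikes and exponentially smooth per-frame delta-times."""

    def __init__(self, max_dt=1 / 30, alpha=0.2):
        self.max_dt = max_dt    # never advance more than one 30 fps frame
        self.alpha = alpha      # smoothing factor in (0, 1]; higher = snappier
        self.smoothed = 1 / 60  # assume 60 fps to start

    def step(self, raw_dt):
        clamped = min(raw_dt, self.max_dt)
        self.smoothed += self.alpha * (clamped - self.smoothed)
        return self.smoothed

s = DeltaSmoother()
# A 200 ms hitch is clamped to 33 ms and blended in gradually, so
# player_x += speed * dt can no longer teleport the sprite.
print(s.step(0.2))
```

The same few lines translate directly into the `update(_:)` callback of an SKScene, using the difference between successive `currentTime` values as `raw_dt`.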
I'm making maps for my game. I designed a forest map using many tree and stone images (inserting the images into the Unity scene and arranging them). My game runs well on Android but cannot run on iOS (iPhone 4S): it runs into memory problems. I want to ask everyone:
If I design the forest in Photoshop instead of in Unity, is that better than my current approach?
Thanks all
It depends on what you are trying to do.
But anyway, there is a common way to solve memory usage related problems. Fire up the Profiler!
http://docs.unity3d.com/Manual/Profiler.html
There you can see what is eating your precious memory, so you can decide whether it's best to combine the images into one or keep them separate. You may also find that the memory problem is related to other assets you use.
You need to look at Instruments in Xcode, and you can also use the Profiler that comes with Unity. Another way to save memory is to reduce the graphics strain on the screen: check your draw calls, verts, and tris under Stats. Keep draw calls under 50-60, and keep verts and tris down. Look at OpenGL graphics benchmarks; the iPhone 4S is an older device and, depending on your Android device, may be substantially slower. The iPhone 4S has 512 MB of RAM, I think, which should be enough to handle a pretty big memory load. Check your OnGUI() objects and calls; you want to mitigate those as much as possible. Also try to use culling to your advantage. Fog and camera filters take a substantial load as well. Stay away from literal types too, if you can.
Also use the Vertex Lit rendering path instead of the Forward rendering path. Use the auto "best performance" resolution setting, too; this will render everything at 0.75x resolution instead of the full Retina 960x640 (or whatever it is for the 4S). You can also tweak the resolution in Xcode; depending on the size of your controls you could make it 0.6.
In DeviceSettings.mm in your Xcode project:
case deviceiPhone4S: resMult = 0.6f; break;
or in your MonoDevelop UnityScript (depending on orientation):
Screen.SetResolution (Screen.width * 0.6f, Screen.height * 0.6f, true);
I've made a small PNG (100x30) and added it as a texture in Unity. I've also added it to a script so I can poll it, but if I print out its width and height, it now reports 128x32. What happened?
I've tried adjusting the camera, but that doesn't seem to have anything to do with it. Any clue?
Generally textures are scaled to a power of 2 on both sides for performance reasons.
For example, multiplication and division are faster if the GPU can assume you are working with POT textures. Even the mipmap generation process relies on that, because dividing a power of two by 2 never produces a remainder.
Older GPUs strictly require POT textures (and, as far as I know, so do several mobile GPUs even today).
Unity by default will try to scale the texture up to a power of 2. You can disable or tweak that behaviour by switching the texture import settings to "Advanced".
For more details, check out the docs.
Note that generally you want NPOT textures only for GUI, where you need exact control over the on-screen resolution and mipmaps aren't used. For everything in the 3D scene, power-of-2 textures perform better.
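The rounding the question observed (100x30 becoming 128x32) is simply each dimension being raised to the next power of two, which a few lines of Python can demonstrate:

```python
def next_pow2(n):
    """Smallest power of two greater than or equal to n."""
    p = 1
    while p < n:
        p *= 2
    return p

# The 100x30 source texture is scaled up to 128x32 on import.
print(next_pow2(100), next_pow2(30))
```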
Ok, so I'm building a game and just using the Web Player.
It plays just fine for me and others, with no performance issues.
I loaded it onto an iPhone 5 to see how it would handle, and the performance is far from acceptable, likely due to the nature of it and all of the objects and effects being drawn.
Is the entire scene loaded at once, or are the items that are out of range only drawn when the camera is in that area?
Here is the game
http://burningfistentertainment.com/3D/DevilsNote/index.html
Any pointers would be great.
"Is the entire scene loaded at once"
Yes, it is. When you load a scene, everything it contains is loaded into memory. You could try splitting the scene into several smaller ones.
"are the items that are out of range only drawn when the camera is in that area"
Every active GameObject with a Renderer component inside the camera frustum will be rendered. Animator components that are always animating can affect performance even when the object is not rendered.
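The behaviour above (only renderers inside the frustum are drawn) can be illustrated with a crude sketch; this plain-Python example uses a 2D axis-aligned view rectangle as a stand-in for the real 3D frustum test:

```python
def visible(objects, cam_x, cam_y, half_w, half_h):
    """Return only the objects whose (x, y) position lies inside the
    camera's axis-aligned view rectangle -- a simplified stand-in for
    the per-renderer frustum test a real engine performs."""
    return [o for o in objects
            if abs(o[0] - cam_x) <= half_w and abs(o[1] - cam_y) <= half_h]

scene = [(0, 0), (5, 5), (100, 100)]
# Only the first two objects fall inside the 20x20 view rectangle.
print(visible(scene, 0, 0, 10, 10))
```

Note that "not drawn" is not the same as "not loaded": culled objects still occupy memory, which is why splitting the scene helps RAM while culling helps GPU time.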
"I loaded it to an iPhone 5 to see how it would handle performance and it's far from acceptable. Likely due to the nature of it and all of the objects and effects being drawn."
It's hard to say what the bottleneck could be without knowing more about your code. If you think the problem is GPU related, here are some tips for mobile:
Reduce draw calls (it depends on the device, but I'd say no more than 50-70): reduce the number of materials (use atlases), mark non-moving objects as static (so they can be statically batched), ...
Limit overdraw: reduce the size and number of transparent objects (what about the rain? How did you implement it?)
Consider using occlusion culling (probably not a problem in your game, but if the depth complexity increases it could save you a lot of GPU workload).
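The overdraw point can be quantified with a back-of-envelope sketch (plain Python; the particle count and sizes below are made-up numbers for illustration, not measurements from this game):

```python
def overdraw_factor(screen_w, screen_h, transparent_quads):
    """Average number of extra times each screen pixel is shaded by
    transparent geometry. transparent_quads is a list of (w, h) sizes
    in screen pixels."""
    screen = screen_w * screen_h
    filled = sum(w * h for w, h in transparent_quads)
    return filled / screen

# 500 rain particles drawn as 64x64 transparent quads on a 960x640
# screen shade every pixel roughly 3.3 extra times on average.
print(overdraw_factor(960, 640, [(64, 64)] * 500))
```

This is why effects like rain are so expensive on mobile: the fill cost grows with particle count and particle size, regardless of how simple each particle's shader is.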
I'm trying to draw rain and snow as a particle system using Core Graphics.
In the simulator, rendering works fine, but when I run my app on a real device, rendering slows down.
So, please advise me on approaches to improve particle-system drawing performance on the iPhone.
Maybe I should use OpenGL or Core Animation for this?
OpenGL would be the lowest level to drop to for rendering, so it should offer the best performance (if done right). Core Animation would be close enough if there are not too many particles (the exact figure depends on other factors, but up to about 50 should be OK).
When you say you're using Core Graphics at the moment, do you mean you're redrawing based on a timer? If so, Core Animation will definitely help you out, as long as you can separate each particle into its own view. You could still use Core Graphics to render the individual particles.
Are you using a physics engine to calculate the positions?
If you are simply drawing a view with Core Graphics, then redrawing it every frame to reflect the movement of the particles, you will see terrible performance (for more, see this answer). You will want to use OpenGL ES or Core Animation for the particle-system animation.
My recommendation would be to look at CAReplicatorLayer, a new Core Animation layer type added in iPhone OS 3.0 and Snow Leopard. I've seen some impressive particle systems created just using this one layer type, without using much code. See the ReplicatorDemo sample application (for the Mac, but the core concepts are the same), or Joe Ricioppo's "To 1e100f And Beyond with CAReplicatorLayer" article.
It's hard to give advice with little to no information about your implementation. One thing that is a major bottleneck on the iPhone, in my experience, is memory allocation. So if you're allocating new objects for each particle spawned, that's the first thing you might want to fix. (Allocate a pool of objects and reuse them.)
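The pooling advice above can be sketched as follows (in Python rather than Objective-C, purely to show the pattern: preallocate every particle up front and recycle dead ones instead of allocating per spawn):

```python
class Particle:
    __slots__ = ("x", "y", "alive")

    def __init__(self):
        self.x = self.y = 0.0
        self.alive = False

class ParticlePool:
    """Fixed-size pool: spawn() reuses dead particles, never allocates."""

    def __init__(self, size):
        self.particles = [Particle() for _ in range(size)]

    def spawn(self, x, y):
        for p in self.particles:
            if not p.alive:
                p.x, p.y, p.alive = x, y, True
                return p
        return None  # pool exhausted: drop the particle rather than allocate

pool = ParticlePool(2)
a = pool.spawn(0, 0)
b = pool.spawn(1, 1)
assert pool.spawn(2, 2) is None  # pool is full
a.alive = False                  # first particle dies
assert pool.spawn(3, 3) is a     # its slot is reused, no new allocation
```

The same structure works in Objective-C with an NSMutableArray (or plain C array) of preallocated particle structs; the key property is that the steady-state render loop performs zero allocations.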