Pausing FPS / CPU usage for static scenes in Unity3D for cross-platform development

I'd like to use Unity for cross-platform non-game development, but the battery consumption is painful. Most of the usage is read-only/static, i.e., the canvas doesn't change, just panning at most (sometimes zooming, but not frequently). Is it possible to drop the FPS to 0 and reduce CPU usage?

Hesitant "yes" because there might be side effects you'll have to handle on your own that aside from some general predictions I cannot know, or may not actually save any CPU usage.
But you can try fiddling with Time.timeScale so that the game updates less frequently, but it has some rather wide-ranging implications (if TimeScale is 0, then your panning and zooming aren't going to work either if they're reliant on either deltaTime (which would be zero) or use FixedUpdate (which won't be called)).
Generally speaking, though, if something is using a lot of CPU you need to go in, figure out why, and optimize it. Use the Profiler. If you're absolutely sure that your application is doing nothing and the CPU is still cranked, then there might not be anything you can do (it's an engine problem).
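One illustration, as a hedged sketch (the class name and panSpeed value here are made up): if you do pause with Time.timeScale = 0, input-driven panning can keep working by reading Time.unscaledDeltaTime, which is unaffected by the time scale:

using UnityEngine;

// Hypothetical component: panning that keeps working while Time.timeScale == 0.
public class PausablePanner : MonoBehaviour
{
    public float panSpeed = 1f; // assumed tuning value

    void Update()
    {
        // Time.deltaTime is 0 while Time.timeScale == 0,
        // but Time.unscaledDeltaTime still advances in real time.
        float dx = Input.GetAxis("Mouse X") * panSpeed * Time.unscaledDeltaTime;
        float dy = Input.GetAxis("Mouse Y") * panSpeed * Time.unscaledDeltaTime;
        transform.Translate(dx, dy, 0f);
    }
}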

Application.targetFrameRate sets how many frames per second will be calculated and rendered, so setting it to a low value will actually save power. I'm not sure how well supported this is on mobile devices, though. On iOS in particular, the frame rate may depend on a static setting inside your Xcode project.
The Unity documentation for Application.targetFrameRate states:
- On mobile platforms the default frame rate is less than the maximum achievable frame rate due to the need to conserve battery power. Typically on mobile platforms the default frame rate is 30 frames per second.
This should mean that setting the target frame rate to a lower value will consume less battery power.
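As a concrete sketch of that idea (the component and constant names here are made up, not an official recipe): drop the target frame rate while the scene is static and raise it again whenever the user interacts.

using UnityEngine;

// Hypothetical helper: attach to any GameObject in the scene.
public class FrameRateThrottle : MonoBehaviour
{
    const int idleFps = 5;    // assumed rate for a static scene
    const int activeFps = 30; // assumed rate during interaction

    void Start()
    {
        // vSync overrides Application.targetFrameRate, so disable it first.
        QualitySettings.vSyncCount = 0;
        Application.targetFrameRate = idleFps;
    }

    void Update()
    {
        // Treat any touch or mouse movement as interaction.
        bool interacting = Input.touchCount > 0
            || Input.GetAxis("Mouse X") != 0f
            || Input.GetAxis("Mouse Y") != 0f;
        Application.targetFrameRate = interacting ? activeFps : idleFps;
    }
}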

Related

How to improve camera quality in ARKit

I am building an ARKit app where we want to be able to take a photo of the scene. I am finding the image quality of the ARCamera view is not good enough to take photos with on an iPad Pro.
(Comparison screenshots omitted: a standard camera image next to the noticeably softer ARCamera image.)
I have seen an Apple forum post that mentions this could be iPad Pro 10.5 specific and is related to fixed lens position (https://forums.developer.apple.com/message/262950#262950).
Is there a public way to change this setting?
Alternatively, I have tried to use AVCaptureSession to take a normal photo and apply it to sceneView.scene.background.contents, switching out the blurred image for a higher-res one at the moment the photo is taken, but I can't get AVCapturePhotoOutput to work with ARKit.
Update: Congrats to whoever filed feature requests! In iOS 11.3 (aka "ARKit 1.5"), you can control at least some of the capture settings. And you now get 1080p with autofocus enabled by default.
Check ARWorldTrackingConfiguration.supportedVideoFormats for a list of ARConfiguration.VideoFormat objects, each of which defines a resolution and frame rate. The first in the list is the default (and best) option supported on your current device, so if you just want the best resolution/framerate available you don't have to do anything. (And if you want to step down for performance reasons by setting videoFormat, it's probably better to do that based on array order rather than hardcoding sizes.)
Autofocus is on by default in iOS 11.3, so your example picture (with a subject relatively close to the camera) should come out much better. If for some reason you need to turn it off, there's a switch for that.
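For instance, a short sketch (sceneView is an assumed ARSCNView property; run the configuration however your app already configures its session):

import ARKit

let configuration = ARWorldTrackingConfiguration()

// supportedVideoFormats[0] is the default (and best) format on the current device.
if let best = ARWorldTrackingConfiguration.supportedVideoFormats.first {
    configuration.videoFormat = best
}

// Autofocus is on by default in iOS 11.3+; this is the switch to turn it off.
configuration.isAutoFocusEnabled = false

// sceneView.session.run(configuration)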
There's still no API for changing the camera settings for the underlying capture session used by ARKit.
According to engineers back at WWDC, ARKit uses a limited subset of camera capture capabilities to ensure a high frame rate with minimal impact on CPU and GPU usage. There's some processing overhead to producing higher quality live video, but there's also some processing overhead to the computer vision and motion sensor integration systems that make ARKit work — increase the overhead too much, and you start adding latency. And for a technology that's supposed to show users a "live" augmented view of their world, you don't want the "augmented" part to lag camera motion by multiple frames. (Plus, on top of all that, you probably want some CPU/GPU time left over for your app to render spiffy 3D content on top of the camera view.)
The situation is the same between iPhone and iPad devices, but you notice it more on the iPad just because the screen is so much larger — 720p video doesn't look so bad on a 4-5" screen, but it looks awful stretched to fill a 10-13" screen. (Luckily you get 1080p by default in iOS 11.3, which should look better.)
The AVCapture system does provide for taking higher resolution / higher quality still photos during video capture, but ARKit doesn't expose its internal capture session in any way, so you can't use AVCapturePhotoOutput with it. (Capturing high resolution stills during a session probably remains a good feature request.)
config.videoFormat = ARWorldTrackingConfiguration.supportedVideoFormats[1]
I had to look for a while to figure out how to set the config, so maybe this will help somebody.
The snippet below picks the format with the highest resolution; you can change the sort so it picks by highest frame rate instead, etc.:
if let videoFormat = ARWorldTrackingConfiguration.supportedVideoFormats
    .sorted(by: { ($0.imageResolution.width * $0.imageResolution.height) <
                  ($1.imageResolution.width * $1.imageResolution.height) })
    .last {
    configuration.videoFormat = videoFormat
}
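As a side note, Swift's max(by:) expresses the same selection without sorting the whole array:

if let videoFormat = ARWorldTrackingConfiguration.supportedVideoFormats
    .max(by: { ($0.imageResolution.width * $0.imageResolution.height) <
               ($1.imageResolution.width * $1.imageResolution.height) }) {
    configuration.videoFormat = videoFormat
}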

Choppy touch response on lower FPS

I'm making an iPhone game that makes quite intense use of pixel shaders. Some effects occasionally drop my frame rate to ~22 FPS on the 3GS, though it stays around ~27 most of the time.
When the frame rate is down there, the touch gesture response becomes extremely choppy; the gesture update rate drops to nearly 5 Hz, which is much slower than the game itself.
Has anyone experienced similar problems? Is there any way around it?
Note: I'm already using CADisplayLink.
EDIT: I got a significant improvement by manually skipping even frames. I'm not sure that's a good thing to do, but the game remained quite playable, and I'm sure it is using much less CPU now.
I have a similar situation in one of my applications, where I have very heavy shaders that can lead to slower rendering on older devices, but I still want to have the framerate be as fast as it can on more powerful hardware.
What I do is use a single GCD serial queue for all OpenGL ES rendering-related actions, combined with a dispatch semaphore. I use CADisplayLink to fire at 60 FPS, then within the callback I dispatch a block for the actual rendering action. I use a dispatch semaphore so that if the CADisplayLink tries to add another block to the rendering queue while one is running, that new block is dropped and never added.
I describe this approach in detail in this answer, and you can download the source code for my application which uses this here.
The GCD queue lets you move this rendering to a background thread, which leaves your interface responsive, while scaling the FPS so that your rendering runs as fast as your hardware supports. This has particular advantages on the new dual core iOS devices, because I noticed significant rendering speed increases just by performing my OpenGL ES updates on this background queue.
However, as I describe in that answer, you'll need to funnel all of your OpenGL ES updates through this queue to avoid the potential for more than one thread simultaneously accessing an OpenGL ES context (which causes a crash).
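A minimal sketch of that semaphore-gated loop, written in Swift here (the class and method names are assumptions, and drawFrame() stands in for the real OpenGL ES work):

import QuartzCore

final class Renderer: NSObject {
    private let renderQueue = DispatchQueue(label: "renderQueue") // serial by default
    private let frameSemaphore = DispatchSemaphore(value: 1)

    func startDisplayLink() {
        let link = CADisplayLink(target: self, selector: #selector(frameFired(_:)))
        link.add(to: .main, forMode: .default)
    }

    @objc private func frameFired(_ link: CADisplayLink) {
        // If the previous frame hasn't finished, drop this tick instead of queuing another.
        guard frameSemaphore.wait(timeout: .now()) == .success else { return }
        renderQueue.async {
            self.drawFrame()             // every OpenGL ES call is funneled through this one queue
            self.frameSemaphore.signal() // let the next display-link tick through
        }
    }

    private func drawFrame() {
        // Placeholder for the actual OpenGL ES rendering work.
    }
}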
If your app's game loop runs at 22 fps but is requesting 30 fps, the app is oversubscribing the total number of CPU cycles available per second in the UI run loop. Either try moving more work onto background threads, or turn your requested frame rate down below what you can actually achieve (e.g. set it to 20 fps), so that there is more time left for UI work, such as touch event delivery.

Possibilities to reduce power consumption with cocos2d apps

I made a board game which includes just some small animations. I reduced the FPS from 60 to 30 to reduce the processor load, but the device still gets very warm.
Another application made without cocos2d does not heat it up nearly as much.
Are there any methods to calm the iPhone down?
The device state is as follows:
- Wifi is always enabled
- The app uses Game Center
- GPS is inactive
- FPS is capped at 30
- The engine is cocos2d-iphone
It might be worth experimenting with different director types, e.g. kCCDirectorTypeNSTimer, and seeing if that helps at all. Those will have the biggest effect on the main loop of the game.
You should also spend some time with Instruments if you've not already, as that will show you where the CPU is spending its time and give you some hints on where you could ease things up.
I've noticed that a sequence of short animations in cocos2d takes a lot of processor time. I tried making tips on the level that pulse in size: 0.1 seconds scaling up, 0.15 scaling down, and 0.2 holding, all wrapped in a repeat-forever sequence. Everything was terribly slow. Then I just did the animation manually, and the device calmed down and the FPS went back up to 60.
When showing menus or dialogs that do not require animation, you can actually lower your framerate even further.

Getting Framerate Performance on iPhone

I've just come off the PSP, where performance testing was easy. You just turned off vsync and printed out the frame rate, then changed something and saw whether the frame rate went up or down...
Is there any way to do the same thing on the iPhone? How do you turn vsync off? The Instruments tool is next to useless; its chief problem is that running it adversely affects the performance of the app! Also, the frame rate it reports is extremely sporadic.
I don't want any fancy tool that reports call trees and time spent in each function. I just want an unrestricted frame rate and some way to see what it is. Is there a high-precision counter you can use on the iPhone? Something like QueryPerformanceCounter in Windows?
Also, is there any way to somehow KILL background processes so you know they can't affect the performance, perhaps solving the sporadic frame rate problem?
Profile your app with Instruments and use the Core Animation instrument. It gives a frame rate.
You're taking the try-something-and-measure approach, which is very indirect. It's easy to tell exactly what is taking the time: it doesn't depend on what else is going on, and it doesn't require learning a new tool. All you need is a debugger that you can interrupt.
You can't kill background processes on the iPhone. That would make it possible for a buggy or malicious app to interfere with the phone function and the needs of all other functions on the iPhone are subordinated to the phone.
Try QuartzDebug or OpenGL Profiler.
Use Instruments to get the frame rate.
To do this, profile your app (click and hold the Run button in Xcode and choose Profile). Make sure you are running the app on a device. Choose the OpenGL ES Analysis template and look at the data displayed under Core Animation Frames Per Second.
You want to aim for 60fps.
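On the high-precision counter part of the question: CACurrentMediaTime() from QuartzCore wraps mach_absolute_time() and returns seconds as a Double, which is the closest iPhone analogue to QueryPerformanceCounter. A minimal sketch:

import QuartzCore

let start = CACurrentMediaTime() // high-precision host time, in seconds
// ... the work you want to time ...
let elapsed = CACurrentMediaTime() - start
print("Took \(elapsed * 1000) ms")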

Accuracy of OpenGL ES Instrument

I'm developing a game for the iPhone. I've decided that 30 FPS is plenty, so I've written some code that only allows the app to present the render buffer every 1/30 of a second. When I tried to verify this with Instruments, I got varying information.
On an iPod Touch (2009 edition, 32G) it reports 30 FPS for Core Animation Frames Per Second.
On an iPhone 3G I get wildly varying results, and not just below 30 FPS: I see >30 FPS on a regular basis. It actually seems to hover closer to 36-39.
To investigate this anomaly I added my own FPS counter to the app, updated once per second. It stays right at 29 FPS on both devices.
So, does anyone have any suggestions as to what might be going on? I expect Instruments to be accurate so it really concerns me that it appears inaccurate. It makes me think I have a bug somewhere, but I sure can't find it.
Are you using CADisplayLink? This might give you a little bit more precision on your main loop.
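If you go the CADisplayLink route, here is a minimal sketch of an in-app FPS counter (the class and method names are assumptions); link.timestamp provides a high-precision host timestamp for each frame:

import QuartzCore

final class FPSCounter: NSObject {
    private var frames = 0
    private var windowStart: CFTimeInterval = 0

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        // link.preferredFramesPerSecond = 30 // optional cap on the callback rate (iOS 10+)
        link.add(to: .main, forMode: .default)
    }

    @objc private func tick(_ link: CADisplayLink) {
        if windowStart == 0 { windowStart = link.timestamp }
        frames += 1
        if link.timestamp - windowStart >= 1 {
            print("FPS: \(frames)") // frames counted over the last second
            frames = 0
            windowStart = link.timestamp
        }
    }
}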