iPhone: how to prioritize items in a CFRunLoop (OpenGL related)

I have an OpenGL application which is rendering intensive and also fetches stuff over HTTP.
Following Apple's samples for OpenGL, I originally used NSTimer for my main painting loop, before finding out (like everyone else) that it really isn't a good idea (because you sometimes have huge delays on touch events being processed when slow paints cause paint timers to pile up).
So currently I am using the strategy given by user godexsoft at http://www.idevgames.com/forum/showthread.php?t=17058 (search for the post by godexsoft). Specifically, my render run loop is on the main thread and contains this:
while (CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.01f, FALSE) == kCFRunLoopRunHandledSource);
This line allows events like touch events and comms-related work to be handled in between rendering of frames. It works, but I'd like to refine it further. Is there any way I can give priority to the touch events over other events (like the comms-related stuff)? If I could do that, I could reduce the 0.01f number and get a better frame rate. (I'm aware this might mean comms responses take longer to come back, but it's pretty important that the touch events don't lag.)

This doesn't directly answer your question, but have you considered using CADisplayLink for scheduling redraws? It was introduced in iPhone OS 3.1.
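For reference, a minimal setup sketch (the target object and selector name are assumptions, not code from the question):
CADisplayLink *displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(drawFrame:)];
displayLink.frameInterval = 1; // fire once per display refresh (at most 60 Hz)
[displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];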

Related

Unreal Engine: accessing/saving in-game images to disk without blocking game thread

I am working on an Unreal-based open-source UAV simulation (Microsoft AirSim), where I am trying to capture and save images from a camera that's attached to the drone. The image underneath gives an idea of what the game looks like. The rightmost view on the bottom is the actual view from the camera; the other two are just processed versions of the same image.
Right now it's set up this way: there's a camera asset, which is read through the code as a capture component. The three views in the screenshot are linked to this capture component. The views are streamed without any problem as the drone is flown around in-game. But when it comes to recording screenshots, the current code sets up a TextureRenderTargetResource from this capture component, and subsequently calls ReadPixels and saves that data as an image (please see below for the code flow). Using ReadPixels() as it is blocks the game thread directly and slows the whole game down by a lot: it drops from ~120 FPS to less than 10 FPS when I start recording.
bool saveImage() {
    USceneCaptureComponent2D* capture = getCaptureComponent(camera_type, true);
    FTextureRenderTargetResource* RenderResource = capture->TextureTarget->GameThread_GetRenderTargetResource();
    const int32 width = capture->TextureTarget->GetSurfaceWidth();
    const int32 height = capture->TextureTarget->GetSurfaceHeight();

    TArray<FColor> imageColor;
    imageColor.AddUninitialized(width * height);

    // Blocks the game thread until the render thread has caught up.
    RenderResource->ReadPixels(imageColor);

    // ... compress imageColor and write it to disk ...
    return true;
}
Looking at this article, it seems evident that ReadPixels() "will block the game thread until the rendering thread has caught up". The article contains sample code for a 'non-blocking' method of reading pixels (through removal of FlushRenderingCommands() and using a RenderCommandFence flag to determine when the task is done), but it doesn't significantly improve the performance: the rate at which images are saved is slightly higher, but the game thread still runs at about only 20 FPS, thus making it really hard to control the UAV. Are there any more efficient asynchronous methods that can achieve what I am trying to do, perhaps, say, in a separate thread? I am also a little confused as to why the code has no trouble streaming those images on-screen as fast as possible, but saving the images seems way more complicated. It's fine even if the images are saved to disk only at 15 Hz or so, as long as it doesn't interfere with the game's native FPS too much.
You can move the save-to-disk operation from the game thread to another thread.
Please check Async.h for details.
The limitation is that you cannot modify/add/delete a UObject/AActor from another thread.
Multi-thread for UE4
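A minimal sketch of that idea, assuming the ReadPixels call in the question has already filled imageColor on the game thread (the function name, file path handling, and the use of FImageUtils/FFileHelper here are illustrative, not AirSim's actual code):
#include "Async/Async.h"
#include "ImageUtils.h"
#include "Misc/FileHelper.h"

// Hands the already-copied pixel data to the thread pool; only plain data crosses
// the thread boundary, and no UObject/AActor is touched off the game thread.
void SaveImageAsync(TArray<FColor> ImageColor, int32 Width, int32 Height, FString FilePath)
{
    Async(EAsyncExecution::ThreadPool,
          [ImageColor = MoveTemp(ImageColor), Width, Height, FilePath = MoveTemp(FilePath)]()
    {
        TArray<uint8> Compressed;
        FImageUtils::CompressImageArray(Width, Height, ImageColor, Compressed); // PNG-encode
        FFileHelper::SaveArrayToFile(Compressed, *FilePath);                    // disk I/O stays off the game thread
    });
}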
I'm very late to the party, but in case anybody is still having issues with this: I built a non-blocking version inspired by the code of AirSim's and UnrealCV's plugins, because I had some serious compilation and versioning issues with those but needed a smoothly running camera while capturing images myself. It is also based on the Async.h module the Unreal API provides, as the accepted answer suggested.
I set up a tutorial repo for this:
https://github.com/TimmHess/UnrealImageCapture.
Hoping it may help.
(...) it seems evident that ReadPixels() "will block the game thread until the rendering thread has caught up"
Using functions that try to access and copy GPU texture data is always a pain. I found that the completion time of these functions depends on the hardware or driver version. You could try running your code on a different (preferably faster or newer) GPU.
Back to the code: to bypass the single-thread issue, I would try copying the UTextureRenderTarget2D to a UTexture2D using the UTextureRenderTarget2D::ConstructTexture2D method. Once the image is copied (I hope that is much faster), you could push it to a consumer thread at a 15 FPS rate. The thread can be created using FRunnable or the Async engine module (the so-called Task Graph). The consumer thread can access the cloned texture through the texture's PlatformData->Mips[0].BulkData. Make sure you read the texture data between BulkData.Lock(...) and BulkData.Unlock() calls.
I'm not sure, but it could help, because your copied texture will not be used by the render thread, so you should be able to lock its data without blocking the render thread. I'm only worried about the thread safety of the Lock and Unlock operations (I didn't find any clue either way); it should be fine unless they make engine RHI calls.
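A rough sketch of that copy-then-lock idea, assuming a UE4-era API where PlatformData is public (the capture component variable, texture name, and pixel format are assumptions):
// Game thread: clone the render target into a plain UTexture2D the render thread won't touch.
UTexture2D* Snapshot = Capture->TextureTarget->ConstructTexture2D(
    GetTransientPackage(), TEXT("CameraSnapshot"), RF_Transient);

// Consumer thread: read the cloned texture's top mip between Lock/Unlock.
FTexture2DMipMap& Mip = Snapshot->PlatformData->Mips[0];
const FColor* Pixels = static_cast<const FColor*>(Mip.BulkData.Lock(LOCK_READ_ONLY));
// ... copy Mip.SizeX * Mip.SizeY pixels into a local buffer and encode/save them ...
Mip.BulkData.Unlock();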
Here is some related code that could be helpful.
Perhaps you should store screenshots in a TArray until the "game" is done, then process all stored data?
Or look at taking a screenshot from the drone camera instead of manipulating the pixel data? I assume the left and middle drone camera images are material-modified versions of the base camera image.
When you save an image, is it writing an image-format file or a .uasset?
Lastly - and this is a complete shot-in-the-dark - create a custom UActorComponent, add it to the drone, and make it watch for some instruction from the game. When you want to write an image, push the RenderTarget data to the custom UActorComponent. Since actor components can tick on any thread (using BCanTickOnAnyThread=true in the constructor, I think that's the variable name), the actor component's tick can watch for the incoming image data and process it there. I've never done it, but I have made an actor component tick on any thread (mine was checking physics).
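A very rough sketch of that shot-in-the-dark (the class name and payload type are made up; in the engine source the tick flag appears as bRunOnAnyThread on the component's tick function, though the exact spelling may vary by version):
UCLASS()
class UImageSinkComponent : public UActorComponent
{
    GENERATED_BODY()

public:
    UImageSinkComponent()
    {
        PrimaryComponentTick.bCanEverTick = true;
        PrimaryComponentTick.bRunOnAnyThread = true; // allow this tick to run off the game thread
    }

    virtual void TickComponent(float DeltaTime, ELevelTick TickType,
                               FActorComponentTickFunction* ThisTickFunction) override
    {
        // Drain frames pushed from the game thread; don't touch UObjects/AActors here.
        TArray<FColor> Frame;
        while (PendingFrames.Dequeue(Frame))
        {
            // ... encode Frame and write it to disk ...
        }
    }

    // The game thread enqueues captured pixel buffers; this queue is thread-safe.
    TQueue<TArray<FColor>, EQueueMode::Mpsc> PendingFrames;
};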

Choppy touch response on lower FPS

I'm making an iPhone game which makes quite intense use of pixel shaders. Some effects make my FPS drop to ~22 on the 3GS, but it is around ~27 most of the time.
When the FPS is down there, the touch gesture response becomes extremely choppy. In other words, the gesture update rate drops to nearly 5 Hz, which is much slower than the game itself.
Has anyone experienced similar problems? Is there any way around it?
Note1: I'm already using CADisplayLink
EDIT: I had a significant improvement by manually skipping even frames. I'm not sure if that is a good thing to do but the game remained quite playable and I'm sure it is using much less CPU now.
I have a similar situation in one of my applications, where I have very heavy shaders that can lead to slower rendering on older devices, but I still want to have the framerate be as fast as it can on more powerful hardware.
What I do is use a single GCD serial queue for all OpenGL ES rendering-related actions, combined with a dispatch semaphore. I use CADisplayLink to fire at 60 FPS, then within the callback I dispatch a block for the actual rendering action. I use a dispatch semaphore so that if the CADisplayLink tries to add another block to the rendering queue while one is running, that new block is dropped and never added.
I describe this approach in detail in this answer, and you can download the source code for my application which uses this here.
The GCD queue lets you move this rendering to a background thread, which leaves your interface responsive, while scaling the FPS so that your rendering runs as fast as your hardware supports. This has particular advantages on the new dual core iOS devices, because I noticed significant rendering speed increases just by performing my OpenGL ES updates on this background queue.
However, as I describe in that answer, you'll need to funnel all of your OpenGL ES updates through this queue to avoid more than one thread simultaneously accessing an OpenGL ES context (which causes a crash).
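A minimal sketch of that display link / serial queue / semaphore pattern (class, property, and method names here are assumptions, not the linked project's actual code):
@interface RenderView () {
    dispatch_queue_t renderQueue;        // serial queue that owns all OpenGL ES work
    dispatch_semaphore_t frameSemaphore; // allows at most one in-flight frame
}
@end

@implementation RenderView

- (void)startRendering
{
    renderQueue = dispatch_queue_create("com.example.render", DISPATCH_QUEUE_SERIAL);
    frameSemaphore = dispatch_semaphore_create(1);

    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self selector:@selector(drawFrame:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
}

- (void)drawFrame:(CADisplayLink *)link
{
    // If the previous frame hasn't finished rendering, drop this one instead of queueing it.
    if (dispatch_semaphore_wait(frameSemaphore, DISPATCH_TIME_NOW) != 0) {
        return;
    }
    dispatch_async(renderQueue, ^{
        [EAGLContext setCurrentContext:self.context];
        [self renderScene];  // every OpenGL ES call is funneled through renderQueue
        dispatch_semaphore_signal(frameSemaphore);
    });
}

@end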
If your app's game loop runs at 22 fps but is requesting 30 fps, that means the app is oversubscribing the total number of CPU cycles available per second in the UI run loop. Either try putting more work on background threads, or turn your requested frame rate down to below what you can actually achieve (e.g. set it to 20 fps), so that there is more time left for UI work, such as touch event delivery.
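With a CADisplayLink, for instance, you could request a lower rate than the display's maximum (an illustrative snippet, not from the question's code):
// On a 60 Hz display, a frameInterval of 3 requests ~20 FPS, leaving the main
// run loop more idle time for touch event delivery between frames.
displayLink.frameInterval = 3;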

iPhone OpenGL ES glFlush() inconsistently slow

Seemingly at random (but typically consistent during any given program run), my presentRenderBuffer call is very slow. I tracked it down to a call to glFlush() which presentRenderBuffer makes, so now I call glFlush() right before presentRenderBuffer. I put a timer on glFlush(), and it does one of two things, seemingly at random.
glFlush() either
1) consistently takes 0.0003 seconds
OR
2) alternates between about 0.019 and 0.030 seconds
The weirdest thing is, this is independent of drawing code. Even when I comment out ALL drawing code so that all it does is call glClear(), I still just randomly get one of the two results.
The drawing method is called by a CADisplayLink with the following setup:
dLink = [[UIScreen mainScreen] displayLinkWithTarget:viewController selector:@selector(drawFrame)];
dLink.frameInterval = 1;
[dLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
I'm finding it impossible to pin down what causes one of the results to occur. Can anyone offer ideas?
Performing exact timings on iOS OpenGL ES calls in general is a little tricky, due to the tile-based deferred renderers used for the devices. State changes, drawing, and other actions can be deferred until right before the scene is presented.
This can often make something like glFlush() or a context's -presentRenderbuffer: appear very slow, when really it's just causing all of the deferred rendering to be performed at that point.
Your case where you comment out all drawing code but a glClear() wouldn't be affected by this. The varying timings you present in your alternating example correspond roughly to 1/53 or 1/33 of a second, which seems to indicate to me that it might simply be blocking for long enough to match up to the screen refresh rate. CADisplayLink should keep you in sync with the screen refresh, but I could see your drawing sometimes being slightly off that.
Are you running this test on the main thread? There may be something causing a slight blocking of the main thread, throwing you slightly off the screen refresh timing. I've seen a reduction in this kind of oscillation when I moved my rendering to a background thread, but still had it be triggered by a CADisplayLink. Rendering speed also increased as I did this, particularly on the multicore iPad 2.
Finally, I don't believe you need to explicitly use glFlush() when using OpenGL ES on iOS. Your EAGLContext's presentRenderbuffer: method should be all that is required to render your frame to the screen. I don't see a single instance of glFlush() in my OpenGL ES application here. It may be redundant in your case.
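For reference, the usual per-frame present looks something like this (a sketch assuming an OpenGL ES 2.0-style setup with a colorRenderbuffer ivar):
// Bind the color renderbuffer and hand the finished frame to Core Animation;
// no explicit glFlush() is needed before this call.
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];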
I found what I think was the problem. The view controller that was attached to the EAGLView was NOT set as the root view controller of the window as it should have been. Instead, the view was manually added as a subview to the window. When this was remedied (along with a couple other related fixes), the drawFrame method now seems to sync up perfectly with the screen refresh. Success!

OpenGL ES on iPhone: timer-based painting problems, jitteryness

I have an OpenGL ES 1.1 project on iPhone, based heavily on the empty OpenGL ES application given here:
http://iphonedevelopment.blogspot.com/2009/06/empty-opengl-es-application-project.html
I'm testing on a 3G (not 3GS) device, 8GB.
The paint loop in my app which does the OpenGL operations is doing quite a lot each time it renders the screen. However, in situations where it is doing the same thing each paint cycle, I'm seeing variable frame rates. I have set the NSTimer which fires the paint code to fire 30 times a second -- i.e. every 0.0333 seconds. The main problem is that whereas my actual paint code often takes approximately that amount of time to execute one paint (in terms of wall time), it varies, and sometimes it takes far longer, for no apparent reason. Using careful logging to report maximum time intervals when they occur, I can see that sometimes my paint has taken as long as 0.23 sec to complete - that's about 4 FPS, which compared to 30 FPS is like skipping about six frames of animation/user interaction, which isn't very acceptable.
At first I suspected my paint loop was snagging on some lock (there aren't many in there, because the GL render stuff is on the main thread (which is necessary AFAIK), as is incoming event handling), but some logging with finer granularity revealed that, within one execution of the paint code, a large amount of time elapsed over a bit of code that was doing basically nothing, and certainly not a GL operation.
So it seems a bit like my GL drawing thread (i.e. the main thread) just takes longer sometimes for no good reason. I have comms in my application and I disabled comms to see if that was the problem -- but I still see some "spikes" in execution time of my painting, when it's doing the same painting each time.
It seems like another thread is occasionally being switched to, mid-paint, for ages before control returns to my paint code.
Any ideas on how to analyse further what is going on? I know NSTimers aren't perfect and aren't at a guaranteed frequency, but the main issue here is that my actual paint cycle sometimes just takes forever, presumably because some other thread gets switched to...
Keep in mind that your application can seem to "hang" for reasons that have nothing to do with your "main loop". That's because you are multitasking... and in particular, something as simple as your phone checking email can cause this sort of problem. One big cause, on the iPhone anyway, is moving between different cell sites (for example, if you are on a subway or in a car); you can sometimes get spikes as the phone does... whatever it does.
If you are on an iPhone, try airplane mode and see if the problems go away.
-- David

Delay with touch events

We have an app in the App Store, Bust~A~Spook, that we had an issue with. When you tap the screen we use CALayer to find the position of all the views during their animation, and if you hit one we start a die sequence. However, there is a noticeable delay: it appears as if the touches are buffered and we receive the event too late. Is there a way to poll, or any better way to respond to touches, to avoid this lag time?
This is in a UIView not a UIScrollView
Are you using a UIScrollView to host all this? There's a property of that called delaysContentTouches. This defaults to YES, which means the view tries to ascertain whether a touch is a scroll gesture or not, before passing it on. You might try setting this to NO and seeing if that helps.
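If a scroll view is involved, that's a one-line change (delaysContentTouches is the real UIScrollView property; scrollView is a placeholder):
// Deliver touches to subviews immediately instead of waiting to rule out a scroll gesture.
scrollView.delaysContentTouches = NO;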
This is a pretty old post about a seasonal app, so the OP probably isn't still working on this problem, but others who come across the same problem may find this useful.
I agree with Kriem that CPU overload is a common cause of significant delay in touch processing, though there is a lot of optimization one can do before having to pull out OpenGL. CALayer is quite well optimized for the kinds of problems you're describing here.
We should first check the basics:
CALayers added to the main view's layer
touchesBegan:withEvent: implemented in the main view
When the phase is UITouchPhaseBegan, you call hitTest: on the main view's layer to find the appropriate sub-layer
Die sequence starts on the relevant model object, updating the layer.
Then, we can check performance using Instruments. Make sure your CPU isn't overloaded. Does everything run fine in simulator but have trouble on the device?
The problem you're trying to solve is very common, so you should not expect a complex or tricky solution to be required. It is most likely that the design or implementation has a basic flaw and just needs troubleshooting.
Delayed touches usually indicate a CPU overload. Using an NSTimer for frame-to-frame based action is prone to interfering with the touch handling.
If that's the case for your app, then my advice is very simple: OpenGL.
If you're doing any sort of core-animation animation of the CALayers at the same time as you're hit-testing, you must get the presentationLayer before calling hitTest:, as the positions of the model layers do not reflect what might be on screen, but the positions to which the layers are animating.
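A small sketch of that hit-testing flow inside the main view (method and helper names are assumptions):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint pointInView = [[touches anyObject] locationInView:self];

    // hitTest: expects a point in the receiver's superlayer's coordinate space,
    // so convert before testing against the presentation layer.
    CGPoint pointInSuperlayer = [self.layer convertPoint:pointInView
                                                 toLayer:self.layer.superlayer];
    CALayer *hit = [self.layer.presentationLayer hitTest:pointInSuperlayer];

    // Map the hit presentation layer back to its model layer, then start the
    // die sequence on whatever model object that layer represents.
    CALayer *model = [hit modelLayer];
    if (model != nil) {
        // [self startDieSequenceForLayer:model];  // hypothetical helper
    }
}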
Hope that helps.