I have written a map application with osmdroid that uses several overlays: standard ones (CopyrightOverlay, ScaleBarOverlay, etc.) plus my own implementations (e.g. for a north arrow). All of these classes consist of a constructor and a draw() function. I found that each draw() function is invoked several times per second, even under static conditions (no zoom or scroll is applied, no invalidate() is called, no Android lifecycle events occur). I don't understand why this happens or what it is good for.
I mean, the app works fine, but it has a constant CPU load in the background that slows it down slightly. What's the point of redrawing the copyright notice several times per second?
This issue was fixed by the osmdroid team as of release 6.1.6.
I have a very intriguing obstacle to overcome. I am trying to display the live contents of one UIView in another, separate UIView.
What I am trying to accomplish is very similar to Mission Control in Mac OS X. In Mission Control, there are large views in the center displaying the desktop or an application. Above those are small views that can be reorganized, each showing a live preview of its corresponding app. The preview is instant, and the frame rate matches the source. Ultimately, I am trying to recreate this effect as cheaply as possible.
I have tried many possible solutions, and the one shown here is the closest I have gotten. It works, but the - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx method isn't called on every change. My workaround was to call [cloneView setNeedsDisplay] from a CADisplayLink, so it runs on every screen refresh. It gets very close to my goal, but the frame rate is extremely low. I suspect [CALayer renderInContext:] is simply too slow.
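For context, a minimal sketch of that approach, assuming a cloneView whose layer delegate redraws a sourceView (both names are mine, not from the original code):
// In the clone view's layer delegate: redraw the source view's layer.
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    [self.sourceView.layer renderInContext:ctx];
}

// Elsewhere: mark the clone dirty on every screen refresh.
CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                  selector:@selector(refresh)];
[link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];

- (void)refresh {
    [self.cloneView setNeedsDisplay];
}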
If it is possible to have two CALayers render the same source, that would be golden. However, I am not sure how to approach this. Luckily, this is simply a concept app and isn't destined for the App Store, so I can make use of the private APIs. I have looked into IOSurface and Quartz contexts, but I haven't been able to solve this puzzle so far. Any input would be greatly appreciated!
iOS and OS X are actually mostly the same underneath, at the lowest level. (Higher up the stack, iOS is arguably the more advanced of the two, since it is newer and had a fresh start.)
In this case, however, I believe they both use the same mechanism. You'll notice something about Mission Control: it isolates windows rather than views. On iOS, each UIWindow has a ".contentID" property, which CALayerHost can use to make the render server share the render context between the two of them (the two layers, that is).
So my advice is to make your views separate UIWindows and get native mirroring for free(ish). (In my experience, CALayerHost takes over the target layer's place with the render server, so if both the CALayerHost and the window are visible, only the layer host will actually show; given the way they are used on OS X and iOS, this isn't a problem.)
So if you are after true mirroring, with two live copies, you'll need to resort to the sort of thing you were already thinking about.
One option is to create a UIView subclass that uses this private UIView method:
https://github.com/yyfrankyy/iOS5.1-Framework-Headers/blob/master/UIKit.framework/UIView-Rendering.h#L12
to get an IOSurface for the target view, and then use a CADisplayLink to get and draw the surface once per second.
Another option, which may work (I'm not sure, as I don't know your setup or desired effect), is simply to use a CAReplicatorLayer, which displays a mirror of a CALayer using the same backing store (very fast and efficient, plus a public, stable API). For example, a minimal setup might look like the sketch below.
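A minimal CAReplicatorLayer sketch (contentLayer and the geometry are placeholders):
// CAReplicatorLayer draws its sublayers, then draws instanceCount - 1
// extra copies, each offset by instanceTransform.
CAReplicatorLayer *replicator = [CAReplicatorLayer layer];
replicator.frame = CGRectMake(0, 0, 320, 480);
replicator.instanceCount = 2;
// Draw the second copy directly below the original.
replicator.instanceTransform = CATransform3DMakeTranslation(0, 480, 0);
[replicator addSublayer:contentLayer]; // contentLayer: the layer to mirror
[self.view.layer addSublayer:replicator];
Note that the copies live inside the replicator layer itself, so this works best when the mirror can sit at a fixed offset from the original rather than in an arbitrary separate view.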
Sorry I couldn't give you a definitive "this is the answer" reply, but hopefully I've given you enough ideas and possibilities to get started.
I've also included some links to things you might find useful to read.
What's the magic behind CAReplicatorLayer?
http://aptogo.co.uk/2011/08/no-fuss-reflections/
http://iphonedevwiki.net/index.php/SBAppContextHostManager
http://iphonedevwiki.net/index.php/SBAppContextHostView
http://iphonedevwiki.net/index.php/CALayerHost
http://iky1e.tumblr.com/post/33109276151/recreating-remote-views-ios5
http://iky1e.tumblr.com/post/14886675036/current-projects-understanding-ios-app-rendering
Seemingly at random (but typically consistently within any given run of the program), my presentRenderbuffer: call is very slow. I tracked it down to a glFlush() call that presentRenderbuffer: makes internally, so now I call glFlush() right before presentRenderbuffer:. I put a timer on glFlush(), and it does one of two things, seemingly at random.
glFlush() either
1) consistently takes 0.0003 seconds
OR
2) alternates between about 0.019 and 0.030 seconds
The weirdest thing is that this is independent of the drawing code. Even when I comment out ALL the drawing code, so that each frame does nothing but glClear(), I still get one of the two results at random.
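For reference, the kind of timing harness described above might look like this (a sketch; the question doesn't show the actual measurement code, and the _OES names assume OpenGL ES 1.x):
// Time glFlush() with a high-resolution clock.
CFTimeInterval start = CACurrentMediaTime();
glFlush();
NSLog(@"glFlush() took %.4f s", CACurrentMediaTime() - start);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];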
The drawing method is called by a CADisplayLink with the following setup:
dLink = [[UIScreen mainScreen] displayLinkWithTarget:viewController selector:@selector(drawFrame)];
dLink.frameInterval = 1; // fire on every screen refresh
[dLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
I'm finding it impossible to pin down what causes one of the results to occur. Can anyone offer ideas?
Performing exact timings of iOS OpenGL ES calls is tricky in general, due to the tile-based deferred renderers used in these devices. State changes, drawing, and other actions can be deferred until right before the scene is presented.
This can often make something like glFlush() or a context's -presentRenderbuffer: look very slow, when really it's just the point at which all of the deferred rendering gets performed.
Your case, where you comment out all drawing code except a glClear(), wouldn't be affected by this, though. The varying timings in your alternating example correspond roughly to 1/53 and 1/33 of a second, which suggests to me that the call is simply blocking long enough to line up with the screen refresh. CADisplayLink should keep you in sync with the refresh, but I could see your drawing sometimes being slightly off from it.
Are you running this test on the main thread? Something may be slightly blocking the main thread and throwing you off the screen refresh timing. I've seen this kind of oscillation diminish when I moved my rendering to a background thread, still triggered by a CADisplayLink. Rendering speed also increased when I did this, particularly on the multicore iPad 2.
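A sketch of that background-thread arrangement (the thread and selector names are mine, not from the answer):
// Spin up a dedicated render thread whose run loop drives the display link.
NSThread *renderThread = [[NSThread alloc] initWithTarget:self
                                                 selector:@selector(renderLoop)
                                                   object:nil];
[renderThread start];

- (void)renderLoop {
    @autoreleasepool {
        CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                          selector:@selector(drawFrame)];
        [link addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
        [[NSRunLoop currentRunLoop] run]; // the display link keeps the loop alive
    }
}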
Finally, I don't believe you need to call glFlush() explicitly when using OpenGL ES on iOS. Your EAGLContext's presentRenderbuffer: method should be all that is required to render your frame to the screen. I don't have a single glFlush() in my own OpenGL ES application, for example. It may be redundant in your case.
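In other words, a typical end-of-frame sequence needs nothing more than this (shown with the ES 1.x _OES names; ES 2.0 drops the suffixes):
// Bind the color renderbuffer and hand it to Core Animation.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
// No explicit glFlush() needed; presentRenderbuffer: flushes for you.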
I found what I think was the problem. The view controller attached to the EAGLView was NOT set as the window's root view controller, as it should have been; instead, the view was manually added as a subview of the window. When this was remedied (along with a couple of other related fixes), the drawFrame method now seems to sync up perfectly with the screen refresh. Success!
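In application code, the fix amounts to something like this (a sketch of the change described; window and glViewController are illustrative names, not the poster's code):
// Before (problematic): view added directly to the window.
// [window addSubview:glViewController.view];

// After: make the controller the window's root view controller.
window.rootViewController = glViewController;
[window makeKeyAndVisible];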
I have an OpenGL application which is rendering intensive and also fetches stuff over HTTP.
Following Apple's OpenGL samples, I originally used an NSTimer for my main painting loop, before finding out (like everyone else) that it really isn't a good idea: touch events can be processed with huge delays when slow paints cause timer firings to pile up.
So currently I am using the strategy given by user godexsoft at http://www.idevgames.com/forum/showthread.php?t=17058 (search for the post by godexsoft). Specifically, my render run loop is on the main thread and contains this:
while (CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.01f, FALSE) ==
       kCFRunLoopRunHandledSource);
This line allows events, such as touches and comms-related callbacks, to be processed in between rendered frames (see the sketch below). It works, but I'd like to refine it further. Is there any way to give touch events priority over other events (like the comms-related ones)? If I could do that, I could reduce the 0.01f timeout and get a better frame rate. (I'm aware this might make comms take longer to complete, but keeping touch events from lagging is more important.)
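Put together, the render loop looks roughly like this (a sketch around the line above; running and drawFrame are illustrative names):
- (void)renderLoopMain {
    while (running) {
        [self drawFrame]; // render one frame
        // Drain any pending run loop sources (touches, network callbacks)
        // before starting the next frame.
        while (CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.01f, FALSE) ==
               kCFRunLoopRunHandledSource);
    }
}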
This doesn't directly answer your question, but have you considered using CADisplayLink for scheduling redraws? It was introduced in iPhone OS 3.1.
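A minimal setup looks like this (drawFrame stands in for whatever your render method is):
// Fire drawFrame once per screen refresh, in step with the display.
CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                  selector:@selector(drawFrame)];
link.frameInterval = 1; // every refresh; use 2 for half rate
[link addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];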
I have an OpenGL ES 1.1 project on iPhone, based heavily on the empty OpenGL ES application given here:
http://iphonedevelopment.blogspot.com/2009/06/empty-opengl-es-application-project.html
I'm testing on a 3G (not 3GS) device, 8GB.
The paint loop in my app, which performs the OpenGL operations, does quite a lot each time it renders the screen. However, even in situations where it does the same work every paint cycle, I see variable frame rates. I have set the NSTimer that fires the paint code to fire 30 times a second, i.e. every 0.0333 s. The main problem is that while my paint code often takes roughly that long to execute (in wall time), it varies, and sometimes it takes far longer for no apparent reason. Careful logging of the maximum intervals shows that a single paint has sometimes taken as long as 0.23 s to complete. That's about 4 FPS which, compared to 30 FPS, is like skipping six frames of animation and user interaction, and that isn't really acceptable.
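For reference, the timer setup presumably resembles this (reconstructed from the description, not the poster's actual code):
// 30 Hz paint timer; NSTimer gives no delivery guarantees, so the actual
// intervals can drift or skip entirely under load.
paintTimer = [NSTimer scheduledTimerWithTimeInterval:(1.0 / 30.0)
                                              target:self
                                            selector:@selector(drawView)
                                            userInfo:nil
                                             repeats:YES];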
At first I suspected my paint loop was snagging on some lock (there aren't many, since the GL rendering is on the main thread, which is necessary AFAIK, as is the incoming event handling), but finer-grained logging revealed that, within a single paint cycle, a large amount of time elapsed over a bit of code that was doing basically nothing, and certainly no GL operation.
So it seems my GL drawing thread (i.e. the main thread) just takes longer sometimes, for no good reason. My application also does comms, and I disabled them to see whether that was the problem, but I still see occasional spikes in my painting's execution time even when it's doing identical work each frame.
It seems as if another thread is occasionally switched to mid-paint for a long stretch before control returns to my paint code.
Any ideas on how to analyse further what is going on? I know NSTimers aren't perfect and don't guarantee their frequency, but the main issue here is that the actual paint cycle itself sometimes just takes forever, presumably because some other thread gets switched in.
Keep in mind that your application can seem to "hang" for reasons that have nothing to do with your "main loop". That's because you are multitasking... and in particular, something as simple as the phone checking email can cause this sort of problem. One big cause on the iPhone is moving between cell sites (for example, on a subway or in a car); you can get spikes while it does... whatever it does.
If you are on an iPhone, try airplane mode and see if the problems go away.
-- David
I have multiple UIView animations running in my app. They are very short, and each makes a callback to a method that then usually fires off another animation. This leads to a lot of little animations running at the same time, each firing its own callback.
This actually performs pretty well, and for the first few levels (the app is a game) no problems are observed. However, as you play deeper into the game, I start getting memory warnings and, ultimately, crashes. I've put NSLog calls in all of my dealloc methods, so I can see that everything is being properly released and dealloc'd. I've also run static analysis on the app and fixed everything it found.
The weird part to me is this: shouldn't any performance problem caused by running multiple animations be processor-bound (i.e. shouldn't I see a bunch of slowdown)? Everything seems to perform just fine; it just runs up memory too fast, and there's nothing more I can free. Is there something on the UIView side of the framework that needs lots of memory for these operations? Is there perhaps a leak in the framework I need to avoid?
Additional note: I'm animating a custom UIView subclass that has a label and a UIImageView inside it.
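The pattern in question looks roughly like this (a block-based sketch; the original code may well use the older beginAnimations API, and tileView and nextPosition are hypothetical names):
// Each short animation chains into the next from its completion block.
- (void)runNextAnimation {
    [UIView animateWithDuration:0.1
                     animations:^{
                         self.tileView.center = [self nextPosition]; // hypothetical helper
                     }
                     completion:^(BOOL finished) {
                         if (finished) [self runNextAnimation];
                     }];
}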
Multiple animations shouldn't cause memory warnings.
I suggest running your app under Instruments with the Leaks, ObjectAlloc, and CPU Sampler instruments.
That would give you a much better view than NSLogs in dealloc.
You don't show any code, so it's hard to give specific advice; however, have you considered using Core Animation layers instead of UIViews? If you need to animate text you'll have to use a view, since there is no CATextLayer on the phone, but Core Animation provides the facilities to draw complex sprites in 2D space, making it a great candidate for many games.
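A small sketch of the layer-based approach (the image name and geometry are placeholders):
// A sprite as a bare CALayer: cheaper than a full UIView.
CALayer *sprite = [CALayer layer];
sprite.bounds = CGRectMake(0, 0, 64, 64);
sprite.position = CGPointMake(60, 60);
sprite.contents = (id)[UIImage imageNamed:@"sprite.png"].CGImage; // placeholder image
[self.view.layer addSublayer:sprite];

// Moving a standalone layer animates implicitly (~0.25 s by default).
sprite.position = CGPointMake(200, 300);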
Best Regards.