I'm displaying a series of images on an iPad that are sent over a network connection. It seems to work fine, but the images show a lot of ghosting for some reason (see image below). Is there some drawing technique that would eliminate this? I'd say it's an issue with the screen's refresh rate, but that wouldn't explain why the iPad's screenshot functionality captures the phenomenon too.
You are probably switching between images in a way that triggers an implicit animation that cross-fades between the old image and the new one.
The documentation for layer actions explains how Core Animation decides to run that implicit animation, and how to override it.
The two easiest ways IMHO are:
When you change the image, use a CATransaction to disable actions
In your layer's delegate, implement -actionForLayer:forKey: and return [NSNull null] (both approaches are sketched below).
(If you are using a UIView, it's already the layer's delegate.)
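Both approaches look roughly like this (a minimal sketch; imageLayer and newImage are illustrative names, not from your code):

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    // Option 1: wrap the property change in a transaction with actions disabled.
    - (void)showImage:(UIImage *)newImage {
        [CATransaction begin];
        [CATransaction setDisableActions:YES];
        self.imageLayer.contents = (__bridge id)newImage.CGImage;
        [CATransaction commit];
    }

    // Option 2: as the layer's delegate (a UIView already is, for its backing
    // layer), veto the implicit action. Returning NSNull means "run no action";
    // returning nil would let Core Animation continue its normal action search.
    - (id<CAAction>)actionForLayer:(CALayer *)layer forKey:(NSString *)key {
        return (id<CAAction>)[NSNull null];
    }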
This question gives a few more options -- it might even be a duplicate of your situation.
I have a very intriguing obstacle to overcome. I am trying to display the live contents of a UIView in another, separate UIView.
What I am trying to accomplish is very similar to Mission Control in Mac OS X. In Mission Control, there are large views in the center, displaying the desktop or an application. Above that, there are small views that can be reorganized. These small views display a live preview of their corresponding app. The preview is instant, and the framerate is exact. Ultimately, I am trying to recreate this effect, as cheaply as possible.
I have tried many possible solutions, and the one shown here is as close as I have gotten. It works; however, the - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx method isn't called on every change. My solution was to call [cloneView setNeedsDisplay] from a CADisplayLink, so it is called on every screen refresh. It is very near my goal, but the framerate is extremely low. I think that -[CALayer renderInContext:] is much too slow.
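For reference, my current approach looks roughly like this (cloneView and sourceView are illustrative names):

    // e.g. in the controller that owns both views
    - (void)startMirroring {
        CADisplayLink *link =
            [CADisplayLink displayLinkWithTarget:self
                                        selector:@selector(refreshClone:)];
        [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
    }

    - (void)refreshClone:(CADisplayLink *)link {
        [self.cloneView setNeedsDisplay]; // ends up in drawLayer:inContext: below
    }

    // The clone view's layer delegate callback: re-render the source layer.
    // This is the slow part; -renderInContext: redraws the whole layer tree
    // on the CPU.
    - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
        [self.sourceView.layer renderInContext:ctx];
    }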
If it is possible to have two CALayers render the same source, that would be golden. However, I am not sure how to approach this. Luckily, this is simply a concept app and isn't destined for the App Store, so I can make use of the private APIs. I have looked into IOSurface and Quartz contexts, but I haven't been able to solve this puzzle so far. Any input would be greatly appreciated!
iOS and OS X are actually mostly the same underneath, at the lowest level. (Higher up the stack, though, iOS is in many ways more advanced than OS X, since it is newer and had a fresh start.)
In this case, however, I believe they both use the same mechanism. You'll notice something about Mission Control: it isolates "windows" rather than views. On iOS, each UIWindow has a .contentID property, which CALayerHost can use to make the render server share the render context between the two of them (two layers, that is).
So my advice is to make your views separate UIWindows and get native mirroring for free(-ish). (In my experience, CALayerHost takes over the target layer's place with the render server, so if both the CALayerHost and the window are visible, the window no longer is; only the layer host is. The way they are used on OS X and iOS, this isn't a problem.)
So if you are after true mirroring (two live copies), you'll need to resort to the sort of thing you were thinking about.
One option is to create a UIView subclass that uses this private UIView method:
https://github.com/yyfrankyy/iOS5.1-Framework-Headers/blob/master/UIKit.framework/UIView-Rendering.h#L12
to get an IOSurface for a target view, and then use a CADisplayLink to fetch and draw the surface once per second.
Another option that may work (I'm not sure, as I don't know your setup or desired effect) is to use a CAReplicatorLayer, which displays a mirror of a CALayer using the same backing store (very fast and efficient, plus a public, stable API).
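A minimal sketch of the CAReplicatorLayer idea (containerView and sourceView are illustrative names; note that adding the source layer to the replicator re-parents it):

    #import <QuartzCore/QuartzCore.h>

    // e.g. during view setup
    CAReplicatorLayer *replicator = [CAReplicatorLayer layer];
    replicator.frame = containerView.bounds;
    replicator.instanceCount = 2;

    // Draw the second instance directly below the original.
    replicator.instanceTransform =
        CATransform3DMakeTranslation(0, sourceView.bounds.size.height, 0);

    // The replicated layer must be a sublayer of the replicator, which is why
    // this mirrors layers you own rather than arbitrary views elsewhere.
    [replicator addSublayer:sourceView.layer];
    [containerView.layer addSublayer:replicator];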
Sorry I couldn't give you a definitive "this is the answer" reply, but hopefully I've given you enough ideas and possibilities to get started.
I've also included some links to things you might find useful to read.
What's the magic behind CAReplicatorLayer?
http://aptogo.co.uk/2011/08/no-fuss-reflections/
http://iphonedevwiki.net/index.php/SBAppContextHostManager
http://iphonedevwiki.net/index.php/SBAppContextHostView
http://iphonedevwiki.net/index.php/CALayerHost
http://iky1e.tumblr.com/post/33109276151/recreating-remote-views-ios5
http://iky1e.tumblr.com/post/14886675036/current-projects-understanding-ios-app-rendering
I'm looking for help with a performance issue in an Objective-C based iOS app.
I have an iOS application that captures the screen's contents using CALayer's renderInContext: method. It attempts to capture enough screen frames to create a video using AVFoundation. The screen recording is then combined with other elements for usability research. While the screen is being captured, the app may also be displaying the contents of a UIWebView, going out over the network to fetch data, etc. The content of the web view is not under my control; it is arbitrary content from the Web.
This setup is working but as you might imagine, it's not buttery smooth. Since the layer must be rendered on the main thread, there's more UI contention than I'd like. What I'd like to do is to have a setup where the responsiveness of the UI is prioritized over the screen capture. For instance, if the user is scrolling the Web view, I'd rather drop frames on the recording than have a terrible scrolling experience.
I've experimented with several techniques, from dispatch_source coalescing to submitting the frame capture requests as blocks to the main queue to CADisplayLink. So far they all seem to perform about the same. The frame capture is currently being triggered in the drawRect of the screen's main view.
What I'm asking here is: given the above, what techniques would you suggest I try to achieve my goals? I realize the answer may be that there is no great answer... but I'd like to try anything, however wacky it might sound.
NOTE: Whatever the technique, it needs to be App Store friendly. I can't use something like the CoreSurface hack that Display Recorder used/uses.
Thanks for your help!
"Since the layer must be rendered on the main thread" this is not true, as long as you don't touch UIKit.
Please see https://stackoverflow.com/a/12844171/136305
Maybe you can record at half resolution to speed things up, if that fits the requirements?
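A sketch combining both ideas, assuming the captured layer isn't being mutated while it renders (captureQueue and appendFrame: are hypothetical names for your own serial queue and your AVFoundation hand-off):

    // Render a layer off the main thread at half resolution.
    - (void)captureFrameOfLayer:(CALayer *)layer {
        CGSize size = layer.bounds.size;
        dispatch_async(self.captureQueue, ^{
            // A scale of 0.5 halves both pixel dimensions.
            UIGraphicsBeginImageContextWithOptions(size, YES, 0.5);
            [layer renderInContext:UIGraphicsGetCurrentContext()];
            UIImage *frame = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            [self appendFrame:frame]; // feed AVFoundation from here
        });
    }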
I released the first beta version of my iPhone app on TestFlightApp today. Everything was going really well until I noticed that the responsiveness of the application is pretty cruddy. It certainly doesn't have the "nice" native feel that I'm going for.
I've been particularly fastidious about my memory allocation/deallocation, so I don't think that's the issue. Basically, I don't know where to turn next to improve the performance of my app.
Here's where I think some of my slowdown can be attributed to:
Using UIAppearance to customize the look of most (if not all) UI elements. I use a brand-new font, lots of CAGradientLayers, and lots of CALayer edits to draw nice shadows.
Grouped UITableViewCells that display pictures of a map and itemized lists.
UITableViewCells whose layouts are updated every time I call layoutSubviews.
UITableViewCells with customized heights. For each call to heightForRowAtIndexPath:, I need to reconstruct and re-lay out the view to return the exact height each time.
Because I create my views programmatically, controllers with longer viewDidLoad methods load more slowly. What work can I offload to the init call?
Does anyone have any hints or tips for dealing with these problems? Or perhaps people have stories about how they dealt with a slowdown in performance when they released their first app?
My answer won't address all of your points, but here are a couple:
1) Make sure you are using shadow paths (CALayer's shadowPath). Explicit paths are much, much more performant; see the sketch after this list.
2) Are you using transparency or corner rounding? If so, try to reduce transparency as much as possible, and do not round corners using CALayer's cornerRadius. Instead, use a clipping mask in the drawRect: of the view that needs to be rounded.
3) Perhaps you can cache the heights in an array so you don't repeat the calculation every time (also sketched below). This may or may not scale well depending on the potential number of items, but it may be acceptable for your use case.
4) Are there views you can reuse? For example, when I have a custom selection view on a UITableViewCell, I create only a single instance, held by the controller, and reference it in all the cells.
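A minimal sketch of points 1 and 3 (cardView, heightCache, and layoutAndMeasureRowAtIndexPath: are illustrative names, not from your code):

    #import <QuartzCore/QuartzCore.h>

    // 1) In the view subclass: set an explicit shadow path so Core Animation
    // doesn't have to derive one from the layer's contents. Do this wherever
    // the bounds are final.
    - (void)layoutSubviews {
        [super layoutSubviews];
        self.cardView.layer.shadowOpacity = 0.5f;
        self.cardView.layer.shadowPath =
            [UIBezierPath bezierPathWithRect:self.cardView.bounds].CGPath;
    }

    // 3) In the table view controller: cache computed row heights so they
    // aren't recalculated on every call. heightCache is a hypothetical
    // NSMutableDictionary property.
    - (CGFloat)tableView:(UITableView *)tableView
            heightForRowAtIndexPath:(NSIndexPath *)indexPath {
        NSNumber *cached = self.heightCache[indexPath];
        if (cached != nil) {
            return cached.floatValue;
        }
        CGFloat height = [self layoutAndMeasureRowAtIndexPath:indexPath]; // the expensive path
        self.heightCache[indexPath] = @(height);
        return height;
    }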
Did you run Instruments (or use another profiling methodology) to determine where your app is spending most of its time? It would be a good idea to do this before optimizing the wrong thing.
First: I've implemented a fairly complex scrolling mechanism for images that allows scrolling over (theoretically) a few hundred thousand of them in a single scroll view. This is done by preloading small portions as the user scrolls, while re-using all the UIImageViews. Currently, all I do is assign newly created UIImage objects to those re-used UIImageViews.
It might be better if it's possible to also re-use those UIImage objects by passing new image data to them.
Now the problem is that I am currently using the -imageNamed: method. The documentation says that it caches the image.
Problems I see in this case with -imageNamed:
As the image gets scrolled out of the preloading range, it's not needed anymore. It would be bad if it tries to cache thousands of images while the user scrolls and scrolls and scrolls.
And if I find a way to stuff new image data into a UIImage object to re-use it, what happens to the old image that was cached?
So there is one method left, that seems interesting:
-initWithContentsOfFile:
This does not cache the image. And it doesn't use -autorelease, which is good in this case.
Do you think that in this case it would be better to use -initWithContentsOfFile:?
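For illustration, the two loading paths look like this (the file name and imageView are illustrative):

    // Cached: UIImage keeps this in the system-wide cache, and I don't control
    // when (or whether) it gets evicted.
    UIImage *cached = [UIImage imageNamed:@"photo_0001.jpg"];

    // Uncached: the image lives only as long as my own references to it, so it
    // can go away as soon as it leaves the preloading range.
    NSString *path = [[NSBundle mainBundle] pathForResource:@"photo_0001"
                                                     ofType:@"jpg"];
    UIImage *uncached = [[UIImage alloc] initWithContentsOfFile:path];
    imageView.image = uncached;
    [uncached release]; // pre-ARC, as in the question; the image view retains it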
Only a benchmark can tell you for sure. I'm inclined to think that UIImage's caching is probably extremely efficient, given that it's used virtually everywhere in the OS. That said, with the number of images you're displaying, your approach might help.
I'd say YES. You have too many images to keep all of them in a cache, so you can't use -imageNamed:. If your images are not displayed many times, you will not lose any performance.
I found one link regarding this, which comments on the imageNamed method: http://www.alexcurylo.com/blog/2009/01/13/imagenamed-is-evil/
We have an app in the App Store, Bust~A~Spook, that we had an issue with. When you tap the screen, we use CALayer to find the positions of all the views during their animation, and if you hit one we start a die sequence. However, there is a noticeable delay; it appears as if the touches are buffered and we receive the event too late. Is there a way to poll, or any better way to respond to touches, to avoid this lag time?
This is in a UIView, not a UIScrollView.
Are you using a UIScrollView to host all this? There's a property of that called delaysContentTouches. This defaults to YES, which means the view tries to ascertain whether a touch is a scroll gesture or not, before passing it on. You might try setting this to NO and seeing if that helps.
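For example (scrollView here stands for whatever scroll view hosts your content):

    // Deliver touches to subviews immediately instead of waiting while the
    // scroll view decides whether the touch is a scroll gesture.
    scrollView.delaysContentTouches = NO;
    scrollView.canCancelContentTouches = NO; // optionally also keep touches from being cancelled mid-gesture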
This is a pretty old post about a seasonal app, so the OP probably isn't still working on this problem, but in case others come across the same problem, this may be useful.
I agree with Kriem that CPU overload is a common cause of significant delay in touch processing, though there is a lot of optimization one can do before having to pull out OpenGL. CALayer is quite well optimized for the kinds of problems you're describing here.
We should first check the basics:
CALayers added to the main view's layer
touchesBegan:withEvent: implemented in the main view
When the phase is UITouchPhaseBegan, you call hitTest: on the main view's layer to find the appropriate sub-layer
Die sequence starts on the relevant model object, updating the layer.
Then, we can check performance using Instruments. Make sure your CPU isn't overloaded. Does everything run fine in simulator but have trouble on the device?
The problem you're trying to solve is very common, so you should not expect a complex or tricky solution to be required. It is most likely that the design or implementation has a basic flaw and just needs troubleshooting.
Delayed touches usually indicate CPU overload. Using an NSTimer for frame-to-frame action is prone to interfering with touch handling.
If that's the case for your app, then my advice is very simple: OpenGL.
If you're doing any sort of Core Animation animation of the CALayers at the same time as you're hit-testing, you must get the presentationLayer before calling hitTest:, as the positions of the model layers do not reflect what is on screen, but rather the positions to which the layers are animating. A sketch follows below.
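Roughly like this, in the main view (startDieSequenceForLayer: is a hypothetical game callback):

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        CGPoint point = [[touches anyObject] locationInView:self];

        // hitTest: expects the point in the receiver's superlayer's coordinates.
        CGPoint superPoint = [self.layer convertPoint:point
                                              toLayer:self.layer.superlayer];

        // Hit-test the presentation tree: mid-animation it holds the on-screen
        // positions, while the model tree already holds the destination values.
        CALayer *hit = [[self.layer presentationLayer] hitTest:superPoint];
        if (hit == nil) {
            return;
        }

        // hitTest: returns a presentation copy; modelLayer gives back the
        // layer that was actually added to the tree.
        [self startDieSequenceForLayer:[hit modelLayer]];
    }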
Hope that helps.