I'm having some problems getting Core Animation to perform multiple animations in succession on the iPhone simulator. I have many layers in my application that I animate; these layers are all sublayers of the layer associated with a UIView. After I animate the first sublayer's position (using explicit animation, a CAKeyframeAnimation), I do the following in the animationDidStop:finished: delegate method:
1. I remove the first layer from its superlayer.
2. I start a CATransaction to animate two other sublayers' positions simultaneously; these layers are also animated explicitly, with individual CAKeyframeAnimations added to each.
3. I reuse the first layer with different contents and add it back to the superlayer at a different position (intentionally not animated).
When I run my application, I see the first animation occur, the layer gets removed, and the layer gets added back with new contents in the new position, but I never see the animation of the two layers in step 2. Interestingly, I do get animationDidStop calls for each of the two layers animating in the transaction. Since I get these calls, it would appear the animations are occurring, but they never appear on screen. I also tried removing the transaction, in case I didn't have that set up correctly, and saw the same results.
Is it possible to link together multiple animations in this manner?
Any insights or suggestions are greatly appreciated. Thanks in advance for your help.
My first guess would be that in animationDidStop you are adding your animation to a layer that is no longer valid. Of course, I can't know that unless you post some code.
Second, you should take a look at the timing documentation for Core Animation, as your current approach, while functional, may not be the best one. Specifically, take a look at this section:
The timing protocol provides the means of starting an animation a certain number of seconds into its duration using two properties: beginTime and timeOffset. The beginTime specifies the number of seconds into the duration the animation should start and is scaled to the timespace of the animation's layer. The timeOffset specifies an additional offset, but is stated in the local active time. Both values are combined to determine the final starting offset.
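For the chaining itself, beginTime is often simpler than kicking off new animations from animationDidStop:. A minimal sketch, where layerA, layerB, pathA, and pathB are placeholders for your real layers and CGPaths:

    // Convert "now" into the layer's timespace so beginTime lines up.
    CFTimeInterval now = [layerA convertTime:CACurrentMediaTime() fromLayer:nil];

    CAKeyframeAnimation *first = [CAKeyframeAnimation animationWithKeyPath:@"position"];
    first.path = pathA;
    first.duration = 1.0;
    [layerA addAnimation:first forKey:@"move1"];

    CAKeyframeAnimation *second = [CAKeyframeAnimation animationWithKeyPath:@"position"];
    second.path = pathB;
    second.duration = 0.5;
    second.beginTime = now + first.duration;  // start as the first one ends
    second.fillMode = kCAFillModeBackwards;   // hold the start value until then
    [layerB addAnimation:second forKey:@"move2"];

This schedules both animations up front, so there is no window in which a removed or reused layer can swallow the second animation.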
I'm using the GLCameraRipple example to have ripples over a static image, which I load with glTexImage2D.
I use my own shaders, since those in the example are for getting images from the video feed.
It responds to touches and generates ripples at the touch location.
It works OK, but I need it to pause drawing (for battery preservation) when the ripple animation has finished. In this example from Apple, I don't see anywhere I can check this state; neither GLKView nor GLKViewController has a delegate method for it.
It uses a -runSimulation method, called in every -update of my GLKViewController, which does all the magic, but I still don't see where I can check whether the ripples have finished animating, or even compare the initial state against the state while ripples are still running.
Right now I'm timing by hand the longest the ripple animation can take before what we see is a static image again, pausing after that many seconds, and unpausing when a touch event occurs, but it doesn't feel right at all.
(The animation duration is different on larger screens (e.g. on iPad) and may vary depending on the pool size, mesh factor, touch radius etc.)
I was hoping there would be a way to check whether the view's contents differ from the initial state (when I have just loaded the image), so I would know the ripple animation is done playing.
As requested, I'm converting my comment to an answer so that the question can be closed out.
The GLCameraRipple example calculates the displacement for the texture coordinates in the image using an internal array called rippleTexCoords. This array is updated in the -runSimulation method on each frame, which is what causes the ripples to propagate.
If you observe the values of this array as they change, you can determine the point at which the ripples die down below a certain threshold. You can then use this as the time to pause the ongoing simulation.
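As a rough sketch: assuming you expose the texture coordinate buffer from the sample's RippleModel class and keep a copy of its initial values (neither of which the sample does out of the box), the check could look like this:

    // Hypothetical method added to RippleModel. rippleTexCoords holds the
    // current interleaved (s, t) pairs; restingTexCoords is an assumed copy
    // taken before any touches arrive.
    - (BOOL)hasSettledBelowThreshold:(GLfloat)threshold
    {
        for (unsigned int i = 0; i < poolWidth * poolHeight * 2; i++)
        {
            if (fabsf(rippleTexCoords[i] - restingTexCoords[i]) > threshold)
                return NO;  // at least one coordinate is still displaced
        }
        return YES;  // everything is (near) its rest value
    }

In your GLKViewController's -update you could then set self.paused = YES when this returns YES, and clear it again on the next touch.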
I've been working on custom drawing using drawRect: in UIView subclasses. That's cool, but you have to wait until the end of the run loop for drawRect: to be called, and I'm wondering how you can control frame-by-frame animations where the drawing changes over time, or whether this is possible at all. Perhaps Quartz isn't really designed for this type of animated graphics? I gather it may be designed for static drawings that don't change so frequently.
Quartz by itself is not able to sustain a high frame rate, due to its need to redraw everything each time. But you can have Quartz work together with Core Animation to build Quartz-based animations. The idea is that you cache previously drawn content inside CALayer objects and then use Core Animation to create the continuous drawing effect.
A good example of this technique can be seen in the AccelerometerGraph sample code provided by Apple. Inside this sample, the UIView subclass that uses the technique is the "GraphView" object. Basically, this object draws from scratch only a small portion of the graph (the newly generated segments), caches it in a dedicated layer, and then animates the layers to produce the "scrolling graph" effect.
Clearly this technique works only when you have full control of the drawing elements and can manage this incremental way of adding objects to the screen. Things become much more complicated when you must redraw many different parts of the screen and need to modify previously generated layers.
Anyway, have a look at the mentioned code; it is quite interesting.
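A condensed sketch of that idea follows; kSegmentWidth and segmentDrawer are placeholders (the real sample wraps each segment in its own delegate object):

    // Render a newly generated segment into its own layer exactly once.
    CALayer *segment = [CALayer layer];
    segment.frame = CGRectMake(self.bounds.size.width, 0,
                               kSegmentWidth, self.bounds.size.height);
    segment.delegate = self.segmentDrawer; // implements -drawLayer:inContext:
    [self.layer addSublayer:segment];
    [segment setNeedsDisplay];             // Quartz draws this segment once

    // Scrolling is then just moving cached layers; nothing is redrawn.
    for (CALayer *s in self.layer.sublayers)
    {
        CGPoint p = s.position;
        p.x -= kSegmentWidth;
        s.position = p;   // implicit Core Animation animates the shift
    }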
Your app should exit to the run loop before each frame, doing all your per-frame animation setup between frames. Frame-by-frame drawing in drawRect: can therefore work just fine: it can run in iOS apps at a 60 Hz update rate, not just for static views, as long as all your methods between frame times, as well as your drawRect: implementations, are fast enough. Chop them up if needed.
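One concrete way to drive this is a CADisplayLink that marks the view dirty once per refresh. A minimal sketch, where animatedView and startTime are assumed properties:

    // In the view controller: start a display link that fires every frame.
    - (void)startAnimating
    {
        CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                          selector:@selector(tick:)];
        [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
    }

    - (void)tick:(CADisplayLink *)link
    {
        [self.animatedView setNeedsDisplay]; // drawRect: runs on the next pass
    }

    // In the view: draw whatever the current time dictates.
    - (void)drawRect:(CGRect)rect
    {
        CGFloat t = CACurrentMediaTime() - self.startTime;
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
        // A dot sweeping left to right, redrawn from scratch each frame.
        CGContextFillEllipseInRect(ctx, CGRectMake(fmodf(60.0f * t, 260.0f), 100, 40, 40));
    }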
I'm animating an image in a UIImageView. This works just great with Core Animation. Now I want to attach one end of a line permanently to the center of this UIImageView, but with Core Animation I only get the starting point and the end point of the UIImageView. Is there a way to access intermediate points during the animation? I want to extend this to many animated UIImageViews, each with a line attached. I don't want to create timers to monitor all the animations and calculate my own intermediate points. Does somebody have any suggestions? Thanks!
Bye, Björn
If you're using Core Animation (I'm not sure if this works with strictly UIView-based animation), you should be able to get a particular instantaneous value of your property by looking at the presentationLayer property of your view's backing CALayer. That property exposes a read-only copy of the current set of values as a layer is animating. You can look at this on a display link "timer" and update something else, etc.
I'm having a tough time picturing exactly what you're trying to accomplish overall, but this might get you started.
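A minimal sketch of that approach, assuming the line lives in a CAShapeLayer named lineLayer and lineAnchor is its fixed end (both placeholders):

    // Fired by a CADisplayLink while the view animates.
    - (void)tick:(CADisplayLink *)link
    {
        // presentationLayer reflects the in-flight, on-screen values.
        CALayer *presentation = [self.imageView.layer presentationLayer];
        CGPoint current = presentation ? presentation.position
                                       : self.imageView.layer.position;

        UIBezierPath *path = [UIBezierPath bezierPath];
        [path moveToPoint:self.lineAnchor];  // fixed end of the line
        [path addLineToPoint:current];       // end attached to the moving view
        self.lineLayer.path = path.CGPath;
    }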
I found this post, which points to the right direction: http://dbachrach.com/blog/2008/04/instantaneous-frame-of-moving-core-animation-views/
But there's still one problem: this seems to work for CABasicAnimations only, not for the standard animations. Is there a way to access intermediate points when using standard animations ([UIView beginAnimations:context:] ...)? Thanks.
Björn
I have a custom UIView composed of many images whose positions change in response to the user's touch.
The view must track the user's touch, and I'm experiencing a performance bottleneck in the drawing of the view that prevents me from following the input in real time.
At the beginning I was drawing everything in the [UIView drawRect:] method, and of course it was way too slow because everything was redrawn even when not necessary.
Then I used multiple CALayers so that only the layer that was changing got updated, and this gave me much better responsiveness.
But still, when I have to draw the same image many times on a layer, it takes up to 500 ms.
Since the images are placed at fixed positions, is there a way to pre-draw them? Should I consider putting them in many CALayers and just hiding/showing them?
Also, I don't understand why [CALayer setNeedsDisplayInRect:] exists when the delegate has (apparently) no way to know what the invalid rect is in order to optimize the drawing.
Solution
Following the advice in the answer, I finally created many CALayers for the images and set the contents property the first time each layer was shown. This is a lazy-loading compromise: in a first attempt I set the contents of every layer at creation time, but that pre-drew every possible image at program launch, freezing the application for seconds.
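The lazy-loading part might look roughly like this (showLayer:withImage: is a hypothetical helper):

    // Set the bitmap only the first time the layer becomes visible; after
    // that the GPU just composites the cached contents.
    - (void)showLayer:(CALayer *)layer withImage:(UIImage *)image
    {
        if (layer.contents == nil)
            layer.contents = (id)image.CGImage;  // rendered once, reused after
        layer.hidden = NO;
    }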
From the documentation for -[CALayer drawInContext:]:
Default implementation does nothing. The context may be clipped to protect valid layer content. Subclasses that wish to find the actual region to draw can call CGContextGetClipBoundingBox. Called by the display method when the contents property is being updated.
The default implementation of display calls drawInContext: on an automatically created context, presumably setting the clip bounding box as well (which is presumably also what gets passed to drawRect:).
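So the invalid rect is recoverable after all. A sketch of a drawInContext: that uses it, where imageRecords (with frame and cgImage fields) is an assumed bookkeeping array:

    // CALayer subclass: redraw only what intersects the invalidated region.
    - (void)drawInContext:(CGContextRef)ctx
    {
        CGRect dirty = CGContextGetClipBoundingBox(ctx);
        for (ImageRecord *record in self.imageRecords)
        {
            if (CGRectIntersectsRect(record.frame, dirty))
                CGContextDrawImage(ctx, record.frame, record.cgImage);
        }
    }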
If you're drawing several static images, I'd just stick each one in its own UIView; I don't think the overhead is that big (if it is, the CALayer overhead should be smaller). If they all animate, I'd definitely use UIView/CALayer. If some of them don't animate (much) and you notice significant slowness, you can pre-render those. It's a trade-off between rendering in drawRect: (or similar) and layer compositing on the GPU, but in general I'd assume that the latter is much faster.
My app is building a background layer, then building some menu layers that should always be at the top.
After this, I add many moving layers and remove many moving layers very quickly.
My problem is that I want my menu layers (there are 4) to always be in front of the other layers. I have thought of building a layer hierarchy and using the insertSublayer:atIndex: method, giving my menus a notional index of 1000 and inserting the multitude of moving layers at a lower index, like 200. If I insert one layer at 200 and then the next at 200, does the first layer assigned to 200 get shifted (to 201), or does it get blown away?
I have tried inserting the layers with insertSublayer:below:, but that does not give me the desired results.
Thanks for the help.
You can't really do that; the layer index is not a z-order, it is an array index. From the documentation:
This value must not be greater than the count of elements in the sublayer array.
I think you would be best served by actually making a true hierarchy of layers, as opposed to trying to shove all of your active layers into one superlayer. In other words, put all of your menu layers into a container layer, then insert that into the root layer where you want it. Likewise, insert all your sprites into one container layer, and put that into the root layer where you want it, as sketched below.
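In code, that hierarchy might look like this (rootLayer and the other names are placeholders):

    CALayer *spriteContainer = [CALayer layer];
    CALayer *menuContainer = [CALayer layer];
    spriteContainer.frame = rootLayer.bounds;
    menuContainer.frame = rootLayer.bounds;

    [rootLayer addSublayer:spriteContainer];  // added first: composited below
    [rootLayer addSublayer:menuContainer];    // added second: always on top

    // Churn all you like inside spriteContainer; the menus never move.
    [spriteContainer addSublayer:movingLayer];
    [menuContainer addSublayer:menuLayer];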
Additional stuff based on your edits:
Layers are essentially a graphical construct; they do not directly respond to events. You either need to handle that yourself by writing code to hit-test them in the view they are attached to, or you need to use UIViews instead of layers. You were probably getting away with it before because you were manipulating the layers in such a way that the view hierarchy and layer hierarchy stayed consistent, but it was not clear from your original question that you were using views rather than a purely layer-based setup.
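A sketch of the hit-testing route, in the view that hosts the layers (menuContainer is the placeholder from above):

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
    {
        CGPoint pointInView = [[touches anyObject] locationInView:self];
        // -hitTest: wants the point in the receiver's superlayer's coordinates.
        CGPoint pointInSuper = [self.layer convertPoint:pointInView
                                                toLayer:self.layer.superlayer];
        CALayer *hit = [self.layer hitTest:pointInSuper];

        if (hit == self.menuContainer || hit.superlayer == self.menuContainer)
        {
            // Route the touch to your menu-handling code here.
        }
    }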
Louis, thanks for your input. Here is what I did, and my current issues, which I don't understand.
The menus (2) are UIImageViews that respond to touch. Each is its own class.
The 'sprites', as you called them, also respond to touch.
In the .m file where I add the menus and sprites to the layer, I created two container layers via this code: menuContainer = [CALayer layer].
I ordered them with the menuContainer above the spriteContainer. Now the sprites do move below the menus - great! But in the process, the sprites stopped responding to touch (I can understand this, as they are below the menuContainer layer), and the menus stopped responding to touch as well (I don't get that). Continuing to confuse the situation, a layer added to the menuContainer that responds to a multitouch gesture by popping up a slider still registers the multitouch and pops up the slider, but I can't slide the slider. This has me thoroughly confounded.
Thanks.