How would you implement the following?
1. Image is loaded from a file
2. On touch, the picture burns up in flames.
3. Next picture loads from another file.
How would you make the flame transition?
Hands down, I'd use some OpenGL ES code I wrote for doing non-standard transitions (about 300 lines) as a base, and build a flame transition on top of it -- because I already have the code, of course.
Basically how it works is like this:
1. Subclass UIView and set up a few properties, including an EAGLContext and some GLuints representing textures of the views.
2. Tell the view that its backing layer is a CAEAGLLayer by overriding +layerClass.
3. During initialization, pass in another view (your start view); in this phase, set up the GL context, take a texture of that view by capturing how it looks on screen, and save it for later.
4. Define a transition method that takes another view (the one to transition to), does the same capture as #3 above, and then calls your custom transition code -- i.e., your flame effect. (A rough sketch follows below.)
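A bare-bones sketch of that setup follows. FlameTransitionView, -initWithFrame:startView:, -textureFromView: and -transitionToView: are illustrative names, not the actual 300-line implementation, and details such as power-of-two texture sizes and the flame drawing itself are left out:

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>
    #import <OpenGLES/EAGL.h>
    #import <OpenGLES/ES1/gl.h>

    @interface FlameTransitionView : UIView {
        EAGLContext *_context;
        GLuint       _startTexture;
        GLuint       _endTexture;
    }
    @end

    @implementation FlameTransitionView

    // Step 2: back the view with an OpenGL ES layer instead of a plain CALayer.
    + (Class)layerClass {
        return [CAEAGLLayer class];
    }

    // Step 3: set up the GL context and capture the start view as a texture.
    - (id)initWithFrame:(CGRect)frame startView:(UIView *)startView {
        if ((self = [super initWithFrame:frame])) {
            _context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
            [EAGLContext setCurrentContext:_context];
            _startTexture = [self textureFromView:startView];
        }
        return self;
    }

    // Snapshot a view with Quartz, then upload the bitmap as a GL texture.
    // (Real code also has to cope with ES 1.1's power-of-two texture rules.)
    - (GLuint)textureFromView:(UIView *)view {
        UIGraphicsBeginImageContext(view.bounds.size);
        [view.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        CGImageRef cgImage = snapshot.CGImage;
        size_t width  = CGImageGetWidth(cgImage);
        size_t height = CGImageGetHeight(cgImage);
        void *pixels = calloc(width * height * 4, 1);
        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
        CGContextRef bitmap = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                                    rgb, kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), cgImage);

        GLuint texture = 0;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        CGContextRelease(bitmap);
        CGColorSpaceRelease(rgb);
        free(pixels);
        return texture;
    }

    // Step 4: capture the destination view, then run the custom flame effect.
    - (void)transitionToView:(UIView *)endView {
        _endTexture = [self textureFromView:endView];
        // Custom OpenGL ES drawing goes here, burning _startTexture away to reveal _endTexture.
    }

    @end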
That said, even if I didn't already have that code, OpenGL ES would still be the first thing I'd look at for doing this, since it gives the best results in terms of realism, consistent timing, and raw performance.
Alternatively, you can look at Core Animation, which may well be sufficient for your needs.
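If a stock transition (no actual flame) turns out to be acceptable, a CATransition on the image view's layer is only a few lines; kCATransitionFade and the CATransition API are standard Core Animation, while imageView and nextImagePath are placeholders:

    #import <QuartzCore/QuartzCore.h>

    // Swap the image and let Core Animation cross-fade from old to new contents.
    CATransition *transition = [CATransition animation];
    transition.type     = kCATransitionFade;   // no flame, but dead simple
    transition.duration = 0.5;
    [imageView.layer addAnimation:transition forKey:nil];
    imageView.image = [UIImage imageWithContentsOfFile:nextImagePath];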
Related
I've been working on custom drawing using drawRect: in UIView subclasses. That's cool, but you have to wait until the end of the run loop for drawRect: to be called, and I'm wondering how you can control frame-by-frame animations where the drawing changes over time, or whether that's even possible. Perhaps Quartz isn't really designed for this type of animated graphics? I gather it is perhaps designed for static drawings that don't change so frequently.
Quartz by itself is not able to sustain a high frame rate, because it needs to redraw everything each time. But you can have Quartz work together with Core Animation to build Quartz-based animations. The idea is that you cache previously drawn content inside CALayer objects and then use Core Animation to create the continuous drawing effect.
A good example of this technique can be seen in the AccelerometerGraph sample code provided by Apple. Inside this sample, the UIView subclass that uses the technique is the "GraphView" object. Basically, this object draws from scratch only a portion of the graph (the newly generated segments), caches it in a dedicated layer, and then animates the layers in order to produce the "scrolling graph" effect.
Clearly this technique works only when you have full control of the drawing elements and can manage this incremental way of adding objects to the screen. Of course, things become much more complicated when you must redraw many different parts of the screen and need to modify previously generated layers.
Anyway, have a look at the sample code mentioned above: it is quite interesting.
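The general shape of the trick, purely as an illustration (this is not the actual GraphView code; kSegmentWidth, -drawNewestSegmentInContext: and the assumed UIView subclass are made-up), is: do the expensive Quartz drawing once per new segment, cache the result in a layer, and let Core Animation move the cached layers:

    static const CGFloat kSegmentWidth = 32.0f;   // arbitrary segment width

    // Draw one new segment with Quartz and cache it in its own CALayer.
    - (void)appendSegment {
        UIGraphicsBeginImageContext(CGSizeMake(kSegmentWidth, self.bounds.size.height));
        [self drawNewestSegmentInContext:UIGraphicsGetCurrentContext()];  // expensive drawing, done once
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        CALayer *segment = [CALayer layer];
        segment.frame = CGRectMake(self.bounds.size.width, 0,
                                   kSegmentWidth, self.bounds.size.height);
        segment.contents = (id)image.CGImage;     // cached bitmap; no drawRect: needed again
        [self.layer addSublayer:segment];
    }

    // Slide every cached segment left by one segment width. Because these are
    // standalone sublayers, Core Animation animates the position changes implicitly.
    - (void)scroll {
        for (CALayer *segment in self.layer.sublayers) {
            segment.position = CGPointMake(segment.position.x - kSegmentWidth,
                                           segment.position.y);
        }
    }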
Your app should return to the run loop before each frame; do all your custom frame-animation setup between frames. Frame-by-frame drawing in drawRect: should then work just fine. An iOS app can sustain a 60 Hz frame update rate this way, not just for static views, as long as everything you run between frame times, as well as your drawRect: itself, is fast enough. Chop the work up if needed.
I'm developing an application for iPad.
I want to make nice animations so that I can beautify my application.
For example, there are 4 main buttons/images in a view.
When tapping on one of them, a few more buttons/images will branch out.
It's like the 'parent button' branching out into a few 'child buttons'.
How are these kinds of animations done?
Are there any good references or code snippets to refer to?
Thanks.
A good starting point would be the Core Animation demos here:
https://github.com/neror/CA360
Run them in the iOS Simulator and check out the code that creates the magic.
UIView animations would also be suitable for your example, and are a little easier to implement. There is a nice tutorial here:
http://www.raywenderlich.com/2454/how-to-use-uiview-animation-tutorial
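For the specific 'parent branches out into children' effect, a UIView block animation is usually all it takes. A minimal sketch, where parentButton, childButton1 and childButton2 are placeholder outlets wired up elsewhere:

    // Called when the parent button is tapped; the child buttons start out
    // hidden underneath it and fan out to either side.
    - (IBAction)parentTapped:(id)sender {
        childButton1.center = parentButton.center;
        childButton2.center = parentButton.center;
        childButton1.alpha = 0.0;
        childButton2.alpha = 0.0;

        [UIView animateWithDuration:0.3 animations:^{
            childButton1.center = CGPointMake(parentButton.center.x - 80,
                                              parentButton.center.y + 60);
            childButton2.center = CGPointMake(parentButton.center.x + 80,
                                              parentButton.center.y + 60);
            childButton1.alpha = 1.0;
            childButton2.alpha = 1.0;
        }];
    }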
Search on the term "Core Animation iOS" on your favorite search engine. You'll find information from Apple's Developer Central site, particularly the Core Animation Guide and Cookbook.
There are really two main methods.
One is to use Core Animation, if it provides the path and animation you want.
The other is to use an animation game loop, where the app periodically calls a routine to redraw the view every frame time. An NSTimer or a CADisplayLink can periodically (say at 24, 30, or 60 Hz) call a routine that does a setNeedsDisplay, which then causes the view's drawRect: to be called, and so on. Some other periodic code can change state (moving some X/Y button positions, etc.) during or between frames to provide the appearance of movement or another animation effect when the view is redrawn. Or OpenGL can be used to redraw an animated 3D world as things move. You can even have each frame change in response to user input. This is the most flexible way to animate, and it allows you to customize animations in ways that are impossible with Core Animation, but it uses more power and can be CPU-intensive enough to end up a lot slower than Core Animation.
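A minimal version of that second approach, using CADisplayLink to drive setNeedsDisplay inside a UIView subclass (startAnimating, tick: and updateButtonPositions are illustrative names):

    // Start a simple animation loop: CADisplayLink fires once per screen refresh
    // (60 Hz; raise frameInterval for 30 Hz) and asks the view to redraw itself.
    - (void)startAnimating {
        CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                          selector:@selector(tick:)];
        [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
    }

    - (void)tick:(CADisplayLink *)link {
        [self updateButtonPositions];   // mutate whatever state drives the drawing
        [self setNeedsDisplay];         // drawRect: runs before the next frame is shown
    }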
Is there a way to draw on the iPhone screen (on a UIView in a UIWindow) outside of that view's drawRect() method? If so, how do I obtain the graphics context?
The graphics guide mentions the class NSGraphicsContext, but the relevant chapter seems like a blind copy/paste from the Mac OS X docs, and there's no such class in the iPhone SDK.
EDIT: I'm trying to modify the contents of the view in a touch event handler, i.e. highlight the touched visual element. In Windows, I'd use GetDC()/ReleaseDC() rather than the full cycle of InvalidateRect()/WM_PAINT, and I'm trying to do the same here. Arranging the active (touchable) elements as subviews carries a huge performance penalty, since there are about a hundred of them.
No. Drawing is drawRect:'s (or a CALayer's) job. Even if you could draw elsewhere, it would be a code smell (as it is on the Mac). Any other code should simply update your model state, then set yourself as needing display.
When you need display, moving the display code elsewhere isn't going to make it go any faster. When you don't need display (and so haven't been set as needing display), the display code won't run if it's in drawRect:.
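In concrete terms, the touch handler only records what changed and invalidates the affected rectangle, roughly like this (hitTestElementAt: and the highlightedElement property are placeholders for however the model tracks those elements):

    // In the UIView subclass that draws the elements:
    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        CGPoint point = [[touches anyObject] locationInView:self];
        // Update model state only -- no drawing here.
        self.highlightedElement = [self hitTestElementAt:point];
        // Invalidate just the touched element's rectangle; drawRect: does the drawing.
        [self setNeedsDisplayInRect:self.highlightedElement.frame];
    }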
I'm trying to modify the contents of the view in a touch event handler - highlight the touched visual element. In Windows, I'd use [Windows code]. … Arranging the active (touchable) elements as subviews is a huge performance penalty, since there are ~hundred of them.
It sounds like Core Animation might be more appropriate for this.
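For instance, the highlight itself could be a single dedicated CALayer that is simply repositioned over whichever element was touched, so nothing has to be redrawn at all. A sketch, with -frameOfElementAt: and touchPoint as placeholders:

    // Created once, e.g. in the view's init:
    CALayer *highlight = [CALayer layer];
    highlight.backgroundColor = [UIColor colorWithWhite:1.0 alpha:0.3].CGColor;
    [self.layer addSublayer:highlight];

    // In the touch handler: just move it. Because 'highlight' is a standalone
    // sublayer (not a view's backing layer), Core Animation animates the change.
    highlight.frame = [self frameOfElementAt:touchPoint];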
I don't think you'll be able to draw outside drawRect:... but to get the current graphics context all you do is CGContextRef c = UIGraphicsGetCurrentContext(); Hope that helps.
I'm working on an iPhone OS app whose primary view is a 2-D OpenGL view (this is a subclass of Apple's EAGLView class, basically setting up an ortho-projected 2D environment) that the user interacts with directly.
Sometimes (not at all times) I'd like to render some controls on top of this baseline GL view-- think like a Heads-Up Display. Note that the baseline view underneath may be scrolling/animating while controls should appear to be fixed on the screen above.
I'm good with Cocoa views in general, and I'm pretty good with CoreGraphics, but I'm green with OpenGL, and EAGLView's operations (and its relationship to CALayers) are fairly opaque to me. I'm not sure how to mix in other elements most effectively (read: best performance, least hassle, etc.). I know that in a pinch I can create and keep around geometry for all the other controls, render it on top of my baseline geometry every time I paint/swap, and thus keep everything the user sees in one single view. But I'm less certain about other techniques, such as having another view on top (UIKit/CG or GL?) or somehow creating other layers in my single view, etc.
If people would be so kind to write up some brief observations if they've travelled these roads before, or at least point me to documentation or existing discussion on this issue, I'd greatly appreciate it.
Thanks.
Create your animated view as normal. Render it to a render target. What does this mean? Well, usually, when you 'draw' the polygons to the screen, you're actually drawing to a normal surface (the primary surface), which just so happens to be the one that eventually goes to the screen. Instead of rendering to the screen surface, you can render to any old surface.
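On the iPhone, that 'any old surface' means a framebuffer object with a texture attached. A rough sketch using the OpenGL ES 1.1 OES extension entry points (needs <OpenGLES/ES1/gl.h> and <OpenGLES/ES1/glext.h>; the 512x512 size is arbitrary):

    // Create a texture-backed framebuffer to render the animated scene into.
    GLuint fbo, sceneTexture;
    glGenTextures(1, &sceneTexture);
    glBindTexture(GL_TEXTURE_2D, sceneTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffersOES(1, &fbo);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                              GL_TEXTURE_2D, sceneTexture, 0);

    // Draw the scene here; it lands in sceneTexture instead of on screen.
    // Afterwards, rebind the screen framebuffer, draw a full-screen quad
    // textured with sceneTexture, draw the HUD on top, then present.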
Now, your HUD. Will this be exactly the same all the time or will it change? Will only bits of it change?
If all of it changes, you'll need to keep all the HUD geometry and textures in memory, and will have to render them onto your 'scrolling' surface as normal. You can then apply this final, composite render to the screen. I wouldn't worry too much about hassle and performance here -- the HUD can hardly be as complex as the background. You'll have a few textured quads at most?
If all of the HUD is static, then you can render it to a separate surface when your app starts, then each frame render from that surface onto the animated surface you're drawing. This way you can unload all the HUD geometry and textures right at the start. Of course, it might be the case that the extra surface takes up more memory -- it depends on what resources your app needs most.
If half of your HUD changes and half doesn't, then technically you can pre-render the static parts and render the other parts as you go along, but this is more hassle than the other two options.
Your two main options depend on how dynamic the HUD is. If it moves, you will need to redraw it onto your scene every frame. It sucks, but I can hardly imagine its geometry being complex compared to the rest of the scene. If it's static, you can pre-render it and just alpha blend one surface onto another before sending it to the screen.
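In that static case, compositing the pre-rendered HUD over the scene each frame is just a textured quad drawn with blending enabled. A rough ES 1.1 fixed-function sketch, where hudTexture is assumed to hold the HUD rendered at startup:

    // Composite the pre-rendered HUD texture over the scene each frame.
    const GLfloat quad[] = { -1,-1,   1,-1,   -1,1,   1,1 };
    const GLfloat uv[]   = {  0, 0,   1, 0,    0,1,   1,1 };

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, hudTexture);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, quad);
    glTexCoordPointer(2, GL_FLOAT, 0, uv);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glDisable(GL_BLEND);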
As I said, it all depends on what resources your app will have spare.
Currently, I have a UIView subclass that "stamps" a single 2px by 2px CGLayerRef across the screen, up to 160 x 240 times.
I currently animate this by moving the UIView (actually a UIImageView) "up" the screen 2 pixels and then drawing the next "row".
Would using multiple CALayer layers speed up performance of rendering this animation?
Are there tutorials, sample applications or code snippets for use of CALayer with the iPhone SDK?
The reason I ask is that most of the code snippets I find that demonstrate simple examples of CALayer employ method calls that do not work with the iPhone SDK. I appreciate any advice or pointers.
Okay, well, if you want something with good examples of Core Animation code that draws things like that and works on the phone, I recommend the GeekGameBoard code that Jens Alfke published (it is an improved version of some Apple demo code).
Based on what you are describing, I think you are doing something way more complicated than it needs to be. My impression is that you basically want a static view that you are animating by shifting its position so that it is partially off screen. If you just need to draw some static content in your drawRect:, going through layers is not going to be faster than just calling CGContextFillRect() with your color. After that you could just use implicit animations and UIView's built-in animation support to move the view. I suspect you could even get rid of the custom drawRect: implementation entirely with a patterned UIColor, but I honestly have not benchmarked the difference between the two.
What CALayer methods are you seeing that don't work on the iPhone? Aside from animation features tied to Core Image, I have not noticed much that is missing. The big things you are likely to notice are that all views are layer backed (so you do not need to do anything special to use layers; you can just grab a UIView's layer through its layer accessor method), and that the coordinate system has a top-left origin.
In any event, having more things is generally slower than having fewer things. If you are just repeating the same pattern over and over again, you are likely to find that the best performance comes from implementing a custom UIView/CALayer/UIColor that knows how to draw what you want, rather than placing visually identical layers or views next to each other.
Having said that, generally layers are lighter weight than views, so if you have a lot of separate elements that you need to keep logically separated you will find that moving to layers can be a win over using views.
You might want to look at -[UIColor initWithPatternImage:] depending on exactly what you are trying to do. If you are using this two pixel pattern as a background color you could just make a UIColor that draws it and set the background.
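A minimal sketch of that idea, building the 2x2 stamp in code (you could equally load it from a file) and letting UIColor tile it, so no custom drawRect: is needed:

    // Build the 2x2 stamp once as a UIImage...
    UIGraphicsBeginImageContext(CGSizeMake(2, 2));
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(ctx, [UIColor blackColor].CGColor);
    CGContextFillRect(ctx, CGRectMake(0, 0, 2, 2));
    CGContextSetFillColorWithColor(ctx, [UIColor darkGrayColor].CGColor);
    CGContextFillRect(ctx, CGRectMake(0, 0, 1, 1));
    UIImage *stamp = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // ...then let UIColor tile it across the view.
    view.backgroundColor = [UIColor colorWithPatternImage:stamp];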
What CALayer methods are you seeing that don't work on iPhone?
As one example, I tried implementing the grid demo here, without much luck. It looks like CAConstraintLayoutManager and CAConstraint are not available in QuartzCore.h.
In another attempt, I tried a very simple, small 20x20 CALayer object as a sublayer of my UIView's layer property, but that didn't show up.
Right now, I have a custom UIView in which I override the drawRect: method. In drawRect: I grab a context and render two types of CGLayerRefs:
At "off" cells I draw the background color across the entire 320x480 canvas.
At "on" cells, I either draw a single CGLayerRef across a grid of 320x480 pixels (initialization) or across a 320x2 row (animation).
During animation, I make a UIImageView clip view from 320x478 pixels, and draw a single row. This "pushes" my bitmap up the screen two pixels at a time.
Basically, I'd like to test whether or not using CALayer will accomplish two things:
Make my rendering faster, if CALayer has less overhead than what I'm doing now
Make my animation smoother, by letting me transition a layer up the screen smoothly
Unfortunately, I can't seem to get a basic CALayer working at the moment, and haven't found a good chunk of sample code to look at and play with.
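For concreteness, the kind of minimal test I've been trying (from memory, so the details may be off) is essentially:

    #import <QuartzCore/QuartzCore.h>

    // Inside my UIView subclass:
    CALayer *testLayer = [CALayer layer];
    testLayer.frame = CGRectMake(10, 10, 20, 20);
    testLayer.backgroundColor = [UIColor redColor].CGColor;
    [self.layer addSublayer:testLayer];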