How to animate a polygon mask with good performance? - iPhone

I'm having a performance issue.
I've created a UIView and overridden its drawRect: method. In drawRect: I draw a large image and, over it, a white square covering the entire screen with a polygon cut out of it using CGContextEOFillPath. The result is a white screen with only the portion of the image defined by the polygon showing through.
After that, I wrote a function to animate the transition from that polygon to another one. Besides the polygon animation, the image also has to be scaled and moved so that the right part stays on screen. I did this with an NSTimer: the polygon animation consists of calculating the distance between each pair of corresponding vertices and moving them to an interpolated position based on the elapsed time. It works just fine in the simulator, but it stutters badly on the device.
Reading about performance here on Stack Overflow, I found the alternative of using beginAnimations and commitAnimations. I'm changing everything to use that approach for the image, but what can I do about the polygon? The polygon is drawn with CGContextMoveToPoint and CGContextAddLineToPoint, so I believe it can't be animated with beginAnimations. Am I correct? Is there a better approach?
The desired result is something like this comic reader app: http://www.comixology.com/iphoneapp (click on the guided tour; in the middle of the video they show the "automatic masking" feature).

My suggestion would be to use a CAShapeLayer overlaid on your main image view, with the CAShapeLayer being the size of the view you want to mask and having a polygon path for a hole in the center of it. CAShapeLayers let you animate from one CGPathRef to another smoothly, as long as the two paths have the same number of control points. You will need to use a CABasicAnimation here to do that animating, rather than a UIView begin / commitAnimations block, but it's not too difficult.
Joe Ricioppo has a nice example of animating CAShapeLayer paths in his post here.
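A rough sketch of what that could look like (variable names such as maskLayer, startPath, and endPath are placeholders, and both paths are assumed to contain the full-screen rectangle plus a polygon with the same number of points):
// The even-odd fill rule leaves the polygon area unfilled, so the image
// underneath shows through the "hole" in the white layer.
CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = imageView.bounds;
maskLayer.fillRule = kCAFillRuleEvenOdd;
maskLayer.fillColor = [UIColor whiteColor].CGColor;
maskLayer.path = startPath; // CGPathRef: screen rect + first polygon
[imageView.layer addSublayer:maskLayer];
// Animate to the second polygon (same number of control points).
CABasicAnimation *pathAnimation = [CABasicAnimation animationWithKeyPath:@"path"];
pathAnimation.fromValue = (id)startPath;
pathAnimation.toValue = (id)endPath;
pathAnimation.duration = 0.5;
pathAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
[maskLayer addAnimation:pathAnimation forKey:@"path"];
maskLayer.path = endPath; // set the model value so it sticks after the animation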

With Core Animation you can animate "animatable" (sic) properties. Apple's documentation enumerates animatable properties in Mac OS X:
http://url.akosma.com/55
In the case of the iPhone, the UIView documentation explicitly says "animatable" when a given property is, well, animatable. The most powerful of these (IMHO) are UIView's "transform" property, which takes CGAffineTransform structs as input, and CALayer's "transform" property, which takes CATransform3D structs. Both are animatable and give you tremendous power to create any kind of transition you want.
Now, in your case, indeed, you can't animate the polygon in an "easy" way. My bet would be to work out the CGAffineTransforms that fit your needs (scale, translation) and animate them on a fixed, non-animated view whose contents are drawn with your Quartz code.
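As a rough sketch of that idea (the view name, offsets, and zoom factor below are just placeholders), the image view could be scaled and translated inside a standard animation block:
[UIView beginAnimations:@"zoomToPanel" context:NULL];
[UIView setAnimationDuration:0.5];
CGAffineTransform t = CGAffineTransformMakeTranslation(dx, dy); // dx, dy: offset to the target panel
t = CGAffineTransformScale(t, zoom, zoom);                      // zoom: scale factor for the panel
imageView.transform = t;
[UIView commitAnimations];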
I hope I'm clear enough :)

Related

How to detect if the touch point is contained by draw content of CALayer?

There are three layers added to a UIView: one draws a rectangle, one draws a circle, and one draws a polygon. The layers are not opaque (opaque is set to NO), and all three fill the whole view. When I touch the polygon, I want to get back the layer that drew the polygon. I have implemented this, but I don't know whether there is a better solution. My way is like this:
1. Draw the content in -drawLayer:inContext: and store the CGPath that was used.
2. In the UIView's -touchesEnded:withEvent: method, use CGPathContainsPoint() to detect whether the touch point is contained by that CGPath.
Maybe this is a clumsy way to solve it. Can anyone tell me how to do it better?
If you need an accurate hit test for paths, I'm afraid you have to iterate over the layer hierarchy yourself and check whether the point is inside each path using CGPathContainsPoint(), as you suggested.
While iterating you can optimize by skipping layers whose frame doesn't contain the point.
For less fine-grained control you can get the touched layer by using CALayer's
- (CALayer *)hitTest:(CGPoint)thePoint
method.
If you have a layer hierarchy with a nesting level < 1000 (which is almost always true) I would not worry too much.
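A rough sketch of the manual iteration described above, assuming each drawing layer keeps the CGPathRef it drew in a property named path on a hypothetical MyShapeDrawingLayer subclass (neither of those is a CALayer built-in):
- (CALayer *)layerForTouchPoint:(CGPoint)point {
    for (MyShapeDrawingLayer *layer in self.layer.sublayers) {
        // Cheap rejection: skip layers whose frame doesn't even contain the point.
        if (!CGRectContainsPoint(layer.frame, point))
            continue;
        // Convert into the layer's own coordinate space before the path test.
        CGPoint layerPoint = [self.layer convertPoint:point toLayer:layer];
        if (CGPathContainsPoint(layer.path, NULL, layerPoint, false))
            return layer;
    }
    return nil;
}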

Zooming in/out and painting in OpenGL

I've recently had some issues implementing a zooming feature into a painting application. Please let me start off by giving you some background information.
First, I started off by modifying Apple's glPaint demo app. I think it's a great source, since it shows you how to set up the EAGLView, etc...
Now, what I wanted to do next, was to implement zooming functionality. After doing some research, I tried two different approaches.
1) use glOrthof
2) change the frame size of my EAGLView.
While both ways allow me to perfectly zoom in / out, I experience different problems, when it actually comes to painting while zoomed in.
When I use (1), I have to render the view like this:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(left, right, bottom, top, -1.0f, 1.0f); //those values have been previously calculated
glDisable(GL_BLEND);
//I'm using Apple's Texture2D class here to render an image
[_textures[kTexture_MyImage] drawInRect:[self bounds]];
glEnable(GL_BLEND);
[self swapBuffers];
Now, let's assume I zoom in a little, THEN I paint, and after that I want to zoom out again. In order to get this to work, I need to make sure that "kTexture_MyImage" always contains the latest changes. In order to do that, I need to capture the screen contents after changes have been made and merge them with the original image. The problem here is that when I zoom in, my screen only shows part of the image (enlarged), and I haven't found a proper way to deal with this yet.
I tried to calculate which part of the screen was enlarged, then do the capturing. After that I'd resize this part to its original size and use yet another method to paste it into the original image at the correct position.
Now, I could go more into detail on how I achieved this, but it's really complicated and I figured, there has to be an easier way. There are already several apps out there, that perfectly do, what I'm trying to achieve, so it must be possible.
As far as approach (2) goes, I can avoid most of the above, since I only change the size of my EAGLView window. However, when painting, the strokes are way off their expected position. I probably need to take the zoom level into account when painting and re-calculate the CGPoints in a different way.
If you have done similar things in the past or can give me a hint on how I could implement zooming in my painting app, I'd really appreciate it.
Thanks in advance.
Yes, it is definitely possible.
When it comes to paint programs, you should be keeping a linked list or tree of objects to draw for easy insertion / removal. When the user stops painting, (i.e. touchesEnded), you add objects to the data structure containing your scene.
When your user zooms you need to modulate the coordinates of the objects you are drawing with respect to the current viewport, projection, and modelview transforms. In your case, you're not changing the viewport or the modelview transforms so you need only account for the projection transform. You could also implement your zoom using a translation and scale on the modelview matrix but I'll ignore that case for simplicity because it involves inverting the transforms.
The good news is that you are using an orthographic projection so world coordinates correspond to window coordinates when no zooming is in effect. The "world" in your case is a simple canvas that probably corresponds to the size of the device in window coordinates.
Before you add an object to your scene data structure, convert all of the coordinates, using the current projection transform (i.e. the parameters to the glOrthof() call) to world coordinates (i.e. full canvas coordinates). You'll only remain sane if you keep all things in your model in the same coordinate space.
To convert the coordinates (assuming you can never zoom out past the full device dimensions in your glOrthof() call), scale them by the ratio of your zoomed ortho dimensions to your unzoomed ortho dimensions, then bias them by the difference between your zoomed ortho left/bottom values and those of the original unzoomed ortho.
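In code form, the conversion might look roughly like this (all of the variable names are assumptions standing in for whatever you pass to glOrthof(), and the input point is assumed to already be in GL window coordinates with a bottom-left origin):
// Convert a point in window coordinates to full-canvas (world) coordinates,
// given the current zoomed ortho rect and the unzoomed canvas size.
CGPoint windowToCanvas(CGPoint p, float zoomLeft, float zoomBottom,
                       float zoomWidth, float zoomHeight,
                       float canvasWidth, float canvasHeight) {
    CGPoint canvas;
    canvas.x = zoomLeft   + p.x * (zoomWidth  / canvasWidth);   // scale by the ratio, then bias by the zoomed left edge
    canvas.y = zoomBottom + p.y * (zoomHeight / canvasHeight);  // same for the vertical axis
    return canvas;
}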

Implementing stretchable dialog borders in the iPhone SDK

I want to implement dialog borders that scale to the size I require the dialog to be. Perhaps there is a better, more conventional name for this sort of thing. If there is, if someone would edit the title, that'd be great.
Anyhow, I'd like to do this so I can have dialogs of any size without the visual artifacts that come with scaling border art to small, large, or wildly disproportionate dimensions. I have a few ideas on how this is done, but am not sure which is better for the iPhone. I have a few questions.
1) Should I make a containing view object that basically overloads its drawRect method and draws the images where they should be at their appropriate scale when the method is called, or should I make a containing view object that simply contains 8 UIImageViews? I suspect the latter approach won't work if I need to actively scale the resulting dialog class, as in an animation.
1b) If overloading drawRect is the way to go, does someone have some sample code or a link to an example that demonstrates drawing an image directly from drawRect()?
2) Is it generally better to create
a) a single 3x3 image where each segment sits in its own cell of the grid? If so, is it simple to draw a portion of this image onto my target view in drawRect (if the former assumption is correct that I should use drawRect)?
b) The pieces separately in 8 different files?
UPDATE:
To clarify, the idea is to take any customized border art and be able to stretch the 2nd, 4th, 6th, and 8th cells (in a 3x3-cell grid) to form a border of any size with just those assets. Stretching a plain image would distort the corners, so I'd like to stretch only those even-numbered cells as needed and tack on the corners so there is no distortion. I'd seen this done before, so I thought it might be a standard technique with a standard name other than what I called it.
Anyhow, I was advised that adding 8 UIImageViews to a container would not be as efficient as drawing the UIImages on the fly in drawRect, so I took that approach, using CGContextDrawImage() after applying the necessary transformations to the context to translate and flip the Y axis. Because this function draws from the bottom-left corner of an image onto a top-left-origined UIView, the image comes out upside down without the Y-axis flip (sketched below). I noticed the suggestion to use UIImage methods like drawAtPoint:, which work similarly but without the flip, since UIImage draws in the same orientation as UIViews. I will continue my implementation with the former and see how it goes.
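For reference, the flip described above looks roughly like this in drawRect (cellImage and cellRect are placeholder names for one piece of the border and its destination rectangle):
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
// Quartz draws images with a bottom-left origin, so flip the Y axis
// around the destination rect to match UIKit's top-left origin.
CGContextTranslateCTM(ctx, 0, cellRect.origin.y + cellRect.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGContextDrawImage(ctx, CGRectMake(cellRect.origin.x, 0, cellRect.size.width, cellRect.size.height), cellImage.CGImage);
CGContextRestoreGState(ctx);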
One other question, though: would someone happen to know which of these two approaches is more efficient, faster, etc.?
I'm not sure I follow, but here's my best shot at an answer...
Using drawRect: or adding individual UIImageViews to a parent view is entirely up to you. UIImageView gives you a bit of encapsulated functionality for free, but otherwise they are the same as far as appearances go.
If you do want to go the drawRect route, you just need to use UIImage's drawAtPoint: method. Do the math for where you want it to be, and draw it. You can calculate your points based on the parent view's dimensions.
As far as scaling goes, it's impossible to resize these images without scaling them, so I'd plan ahead and make your originals as large as or larger than you ever expect to display them.
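A minimal sketch of that drawRect: approach, assuming UIImage properties for the four corners, four edges, and a center piece (all names are placeholders, and all corners are assumed to share one size):
- (void)drawRect:(CGRect)rect {
    CGSize corner = topLeft.size;
    CGFloat w = self.bounds.size.width, h = self.bounds.size.height;
    // Corners keep their natural size.
    [topLeft     drawAtPoint:CGPointMake(0, 0)];
    [topRight    drawAtPoint:CGPointMake(w - corner.width, 0)];
    [bottomLeft  drawAtPoint:CGPointMake(0, h - corner.height)];
    [bottomRight drawAtPoint:CGPointMake(w - corner.width, h - corner.height)];
    // Edges and center stretch to fill whatever space is left.
    [topEdge     drawInRect:CGRectMake(corner.width, 0, w - 2 * corner.width, corner.height)];
    [bottomEdge  drawInRect:CGRectMake(corner.width, h - corner.height, w - 2 * corner.width, corner.height)];
    [leftEdge    drawInRect:CGRectMake(0, corner.height, corner.width, h - 2 * corner.height)];
    [rightEdge   drawInRect:CGRectMake(w - corner.width, corner.height, corner.width, h - 2 * corner.height)];
    [centerPiece drawInRect:CGRectMake(corner.width, corner.height, w - 2 * corner.width, h - 2 * corner.height)];
}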
Hope that helps a little?
Cheers
If you want a border on a dialog box, assuming the box is a UIView (or subclass), then set the layer's border properties and let the system draw the border for you.
#import <QuartzCore/QuartzCore.h>
// ...
view.layer.borderWidth = 2;
view.layer.borderColor = [UIColor whiteColor].CGColor;
view.layer.cornerRadius = 0; // 0=square corners, >0 for rounded

Overlaying 2D paths on UIImage without scaling artifacts

I need to draw a path along the shape of an image in such a way that it always matches its position on the image, independent of the image scale. Think of this like the hybrid view of Google Maps, where street names and roads are superimposed on top of the aerial pictures.
Furthermore, this path will be drawn by the user's finger movements, and I need to be able to retrieve the path keypoints in the image's pixel coordinates. The user zooms in in order to set the path's location more precisely.
I manage to somehow make it work using this approach:
-Create a custom UIView called CanvasView that handles touch interaction and delivers scaling, rotation, and translation values to either the UIImageView or the PathsView (see below) depending on a flag: deliverToImageOrPaths.
-Create a UIImageView holding the base image. This is set as a child of CanvasView.
-Create a custom UIView called PathsView that keeps track of the 2D path geometry and draws itself with a custom drawRect. This is set as a child of the UIImageView.
So the hierarchy is: CanvasView -> UIImageView -> PathsView
In this way, when deliverToImageOrPaths is YES, finger gestures transform both the UIImageView and its child PathsView. When deliverToImageOrPaths is NO, the gestures affect only the PathsView, altering its geometry. So far so good.
QUESTION:
The problem I have is that when scaling the base UIImageView (via its .transform property) the PathsView is scaled with aliasing artifacts. drawRect is still being called on the PathsView but I guess it's performing the drawing using the original buffer size and then interpolating.
How can I solve this issue? Are there better ways to implement these features?
PS: I tried changing the PathsView layer class to CATiledLayer with levelsOfDetailBias 4 and levelsOfDetail 4. It solves the aliasing problem to some extent, but it's unacceptably slow to render.
If you're holding onto the path values as they are being drawn, you can capture the % scaling of the underlying image, then reposition the path values and redraw them in the PathsView's drawRect method.
It would involve a bit of math (mostly along the lines of map projections) but instead of scaling bitmaps and getting artifacts, you would be scaling point distances and redrawing using vectors, which would make it smooth at any resolution (especially if you're connecting dots with beziers instead of pixels or lines).
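A rough sketch of that redraw, assuming the PathsView stores its points in image-pixel coordinates in an array called points and keeps the current zoom factor in an imageScale property (both names are assumptions):
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(ctx, 2.0);
    CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);
    // points holds NSValue-wrapped CGPoints in image coordinates;
    // multiply by the current scale so the path lands on the zoomed image.
    for (NSUInteger i = 0; i < [self.points count]; i++) {
        CGPoint p = [[self.points objectAtIndex:i] CGPointValue];
        p.x *= self.imageScale;
        p.y *= self.imageScale;
        if (i == 0)
            CGContextMoveToPoint(ctx, p.x, p.y);
        else
            CGContextAddLineToPoint(ctx, p.x, p.y);
    }
    CGContextStrokePath(ctx);
}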
If you're laying paint along the paths as the user draws them and throwing away the point data, then the only solution is to do bitmap anti-aliasing which is essentially what CATiledLayer is doing for you and as you've discovered, can be slow.
If you are targeting iPhone OS 3.0+ you could use a CAShapeLayer for your path. You assign a CGPathRef to the layer and it will handle all the drawing and scaling for you. You just set the layer's transform.

How do I use CALayer with the iPhone?

Currently, I have a UIView subclass that "stamps" a single 2px by 2px CGLayerRef across the screen, up to 160 x 240 times.
I currently animate this by moving the UIView "up" the screen 2 pixels (actually, a UIImageView) and then drawing the next "row".
Would using multiple CALayer layers speed up performance of rendering this animation?
Are there tutorials, sample applications or code snippets for use of CALayer with the iPhone SDK?
The reason I ask is that most of the code snippets I find that demonstrate simple examples of CALayer employ method calls that do not work with the iPhone SDK. I appreciate any advice or pointers.
Okay, well, if you want something that has some good examples of Core Animation code that draws things like that and works on the phone, I recommend the GeekGameBoard code that Jens Alfke published (it is an improved version of some Apple demo code).
Based on what you are describing, I think you are doing something way more complicated than it needs to be. My impression is that you want basically a static view that you are animating by shifting its position so that it is partially off screen. If you just need to set some static content in your drawRect, going through layers is not going to be faster than just calling CGContextFillRect() with your color. After that you could just use a UIView animation block (begin/commitAnimations) to move the view. I suspect you could even get rid of the custom drawRect: implementation with a patterned UIColor, but I honestly have not benchmarked the difference between the two.
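For example, the move could be as simple as this (a minimal sketch; patternView and the 2-pixel step are just placeholders matching the animation described above):
[UIView beginAnimations:@"scrollRows" context:NULL];
[UIView setAnimationDuration:0.1];
patternView.center = CGPointMake(patternView.center.x, patternView.center.y - 2.0); // shift the view up two pixels
[UIView commitAnimations];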
What CALayer methods are you seeing that don't work on iPhone? Aside from animation features tied to Core Image, I have not noticed much that is missing. The big thing you are likely to notice is that all views are layer backed (so you do not need to do anything special to use layers; you can just grab a UIView's layer through its layer accessor method), and the coordinate system has a top-left origin.
In any event, generally having more things is slower than having fewer things. If you are just repeating the same pattern over and over again you are likely to find the best performance is implementing a custom UIView/CALayer/UIColor that knows how to draw what you want, rather than placing visually identical layers or views next to each other.
Having said that, generally layers are lighter weight than views, so if you have a lot of separate elements that you need to keep logically separated you will find that moving to layers can be a win over using views.
You might want to look at -[UIColor initWithPatternImage:] depending on exactly what you are trying to do. If you are using this two pixel pattern as a background color you could just make a UIColor that draws it and set the background.
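A minimal sketch of that pattern-color idea (the image file name is a placeholder for your 2x2 tile):
UIImage *tile = [UIImage imageNamed:@"pattern-2x2.png"];
view.backgroundColor = [UIColor colorWithPatternImage:tile]; // tiles the image across the view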
What CALayer methods are you seeing that don't work on iPhone?
As one example, I tried implementing the grid demo here, without much luck. It looks like CAConstraintLayoutManager and CAConstraint are not available in QuartzCore.h.
In another attempt, I tried a very simple, small 20x20 CALayer object as a sublayer of my UIView's layer property, but that didn't show up.
Right now, I have a custom UIView of which I override the drawRect method. In drawRect I grab a context and render two types of CGLayerRefs:
At "off" cells I draw the background color across the entire 320x480 canvas.
At "on" cells, I either draw a single CGLayerRef across a grid of 320x480 pixels (initialization) or across a 320x2 row (animation).
During animation, I make a UIImageView clip view from 320x478 pixels, and draw a single row. This "pushes" my bitmap up the screen two pixels at a time.
Basically, I'd like to test whether or not using CALayer will accomplish two things:
Make my rendering faster, if CALayer has less overhead than what I'm doing now
Make my animation smoother, by letting me transition a layer up the screen smoothly
Unfortunately, I can't seem to get a basic CALayer working at the moment, and haven't found a good chunk of sample code to look at and play with.
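(For reference, a minimal sublayer setup looks roughly like this; a bare CALayer with no backgroundColor, no contents image, and no delegate drawing is invisible, which is a common reason a test sublayer never appears.)
CALayer *square = [CALayer layer];
square.frame = CGRectMake(20, 20, 20, 20);           // small 20x20 test layer
square.backgroundColor = [UIColor redColor].CGColor; // give it something visible to draw
[myView.layer addSublayer:square];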