How does CATiledLayer know when to provide a new tile? - iPhone

For various reasons, I am considering making my own implementation of CATiledLayer. I have done some investigation, but I can't seem to figure out how CATiledLayer knows which tile to provide.
For example, when you scroll the layer, setPosition: and setBounds: are never called. It looks like the background thread just calls the delegate's drawLayer:inContext: out of the blue, without any visible trigger.
I have found out that CATiledLayer calls setContents: with an instance of "CAImageProvider", and all calls to drawLayer:inContext: originate from that class. So that class is probably the key to determining which tile to draw, but I cannot find any documentation on it.
So... does anybody know how this is working, and how I might be able to override it?
As for the disadvantages of CATiledLayer:
It always uses the screen resolution (or x2, x4, etc.); you cannot set it to the native resolution of your source images.
You cannot specify any scaling factor other than 2.
You have to specify levelsOfDetail and levelsOfDetailBias, for which I see no implementation reason at all. If you have content that is infinitely scalable, like fractals, this is very limiting.
Most importantly: if you restrict it to zooming in only one direction (I do that by forcing the scale factor of one direction to 1 in setTransform:, as in the sketch below), it acts all weird.
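For context, this is roughly what I mean by restricting the zoom, assuming the restriction lives in a CATiledLayer subclass (the class name here is just for illustration):

    #import <QuartzCore/QuartzCore.h>

    // Hypothetical subclass that only allows horizontal zooming:
    // whatever transform is applied, the vertical scale is pinned to 1.
    @interface HorizontalZoomTiledLayer : CATiledLayer
    @end

    @implementation HorizontalZoomTiledLayer

    - (void)setTransform:(CATransform3D)t
    {
        t.m22 = 1.0;   // force the y scale back to 1
        [super setTransform:t];
    }

    @end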

In drawLayer:inContext:, you can get the bounding box using CGContextGetClipBoundingBox. CGContextGetCTM should give you information about the current resolution.
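A rough sketch of how those two calls fit together in the CATiledLayer delegate callback (the fill at the end is only a placeholder for your real drawing):

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
    {
        // The context is already clipped to the tile being requested.
        CGRect tileRect = CGContextGetClipBoundingBox(ctx);

        // The CTM encodes the current level of detail; its scale components
        // tell you how zoomed in this tile is (d may be negative if the
        // context is flipped).
        CGAffineTransform ctm = CGContextGetCTM(ctx);
        NSLog(@"tile %@ at scale %gx%g",
              NSStringFromCGRect(tileRect), ctm.a, fabs(ctm.d));

        // Draw only the content that intersects tileRect, at that scale.
        CGContextSetFillColorWithColor(ctx, [[UIColor lightGrayColor] CGColor]);
        CGContextFillRect(ctx, tileRect);
    }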

Related

UIGesture recognition on different areas of a UIImageView

I have this image:
What I want to do is add a UITapGestureRecognizer to this image (or split the image into the parts it consists of and add a UITapGestureRecognizer to each part) so that different actions run depending on which leaf is tapped. If I split the image into separate images, one per leaf, the UIImageViews will probably overlap and a tap on one might be recognized as a tap on another. Keeping just one image means I need to know which points of the screen belong to which leaf.
Any clues on how to do it would be really appreciated.
Thanks
Change your behavior by examining the gesture recognizer's locationInView:.
If you handle the image as one unit, implement this in your gesture recognizer call back to decide which "leaf" (if any) was tapped.
If you handle the image as multiple images, you could also implement it in your callback, or you could also implement in, e.g., your delegate's gestureRecognizerShouldBegin: to suppress events for touches outside the leaf as drawn.
EDIT: I didn't realize that you might also be looking for assistance on figuring out whether a point lies within a leaf. @PhillipMills is correct on this point: we need to know how you are drawing the image.
FOLLOW-UP: This is somewhat outside my area of expertise.
The easiest approach (from a hit-testing standpoint) is to do what @PhillipMills suggested, using Quartz drawing and CGPathContainsPoint(). If you have detailed graphics that you need rendered as a PNG, you could certainly construct a simple path that would be (virtually) overlaid to allow hit testing.
Your other options, AFAIK, are to do the hit testing mathematically (which basically amounts to reimplementing CGPathContainsPoint() without a path), or to employ various tricks that look at the color of the pixel at your touch point. Googling will turn up some useful results if you go that route, but honestly, for a shape as simple as what you've drawn, just recreate it in code with UIBezierPath.
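For example, a sketch of the single-image approach, assuming you keep one UIBezierPath per leaf in a leafPaths property (that property and the handler name are assumptions):

    #import <UIKit/UIKit.h>

    // Target of the UITapGestureRecognizer attached to the image view.
    // self.leafPaths is assumed to hold one UIBezierPath per leaf, defined
    // in the image view's coordinate system.
    - (void)handleTap:(UITapGestureRecognizer *)recognizer
    {
        CGPoint point = [recognizer locationInView:recognizer.view];

        [self.leafPaths enumerateObjectsUsingBlock:^(UIBezierPath *path,
                                                     NSUInteger index,
                                                     BOOL *stop) {
            if ([path containsPoint:point]) {
                NSLog(@"tapped leaf %lu", (unsigned long)index);
                // ...perform the action for this leaf...
                *stop = YES;
            }
        }];
    }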
Not sure if this will be helpful but if you get stuck on figuring out which leaf was clicked, you could use an old image map trick we used to use in CD-ROM projects for pixel accurate click tracking on images.
You have your full size image. Make a 25% (or less) scaled version of it. Fill each of the leaf regions you want to track clicks on with a different color; anything you want to ignore make black. When the full size image is clicked, get the x/y coordinates and scale them by the percentage of your scaled image. Then get the pixel color of the scaled image at the scaled x/y coordinate. By determining the pixel color you will know which leaf was clicked.
Sounds clunky but it works really well and is fast.
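Roughly, the pixel lookup could be sketched like this (assumptions: the scaled map image is RGBA, and you scale the tap point down yourself before calling it):

    #import <UIKit/UIKit.h>

    // Returns the color of the pixel at `point` (in the image's own
    // coordinates, origin top-left) by drawing the image into a 1x1 bitmap.
    static UIColor *PixelColorInImage(UIImage *image, CGPoint point)
    {
        unsigned char rgba[4] = {0, 0, 0, 0};
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(rgba, 1, 1, 8, 4, space,
                               kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGColorSpaceRelease(space);

        // Shift the image so the pixel of interest lands on the 1x1 context.
        CGContextTranslateCTM(ctx, -point.x, point.y - image.size.height);
        CGContextDrawImage(ctx,
                           CGRectMake(0, 0, image.size.width, image.size.height),
                           image.CGImage);
        CGContextRelease(ctx);

        return [UIColor colorWithRed:rgba[0] / 255.0 green:rgba[1] / 255.0
                                blue:rgba[2] / 255.0 alpha:rgba[3] / 255.0];
    }

Compare the returned color against the fill colors you used in the map to decide which leaf was hit.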
(all that said, I don't think alpha areas of images trigger the gesture recognizer - so breaking the image up would be less complicated/code intensive.)
If you can break the shape apart into its constituent elements, then you can put each into its own layer and use the method discussed in this stackoverflow discussion to determine which was touched: Hit Testing with CALayer using the alpha properties of the CALayer contents

Finding a free space within current bounds of view on iOS

I have an infinite scrollview in which I add images as the user scrolls. Those images have varying heights and I've been trying to come up with the best way of finding a clear space inside the current bounds of the view that would allow me to add the image view.
Is there anything built-in that would make my search more efficient?
The problem is I want the images to be sort of glued to one another with no blank space between them. Making the search through 320x480 pixels tends to be quite a CPU hog. Does anyone know an efficient method to do it?
Thanks!
It seems that you're scrolling this thing vertically (you mentioned varying image heights).
There's nothing built into UIScrollView that will do this for you. You'll have to track your UIImageView subviews manually. You could simply maintain the max y coordinate occupied by your images as you add them.
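For the simple stacked case, something like this sketch is probably enough (maxY here is an assumed CGFloat property on your controller):

    #import <UIKit/UIKit.h>

    // Append an image directly below everything added so far.
    - (void)appendImage:(UIImage *)image toScrollView:(UIScrollView *)scrollView
    {
        UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
        imageView.frame = CGRectMake(0.0, self.maxY,
                                     image.size.width, image.size.height);
        [scrollView addSubview:imageView];

        // Remember where the next image starts, and grow the content size.
        self.maxY = CGRectGetMaxY(imageView.frame);
        scrollView.contentSize = CGSizeMake(scrollView.bounds.size.width, self.maxY);
    }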
You might consider using UITableView instead, and implementing a very customized tableView:heightForRowAtIndexPath: in your delegate. You would probably need to do something special with the actual cells as well, but it would seem to make your job a little easier.
Also, for what it's worth, you might find a way to avoid making your solution infinite. Be careful about your memory footprint! iOS will shut your app off if things get out of hand.
UPDATE
Ok, now I understand what you're going for. I had imagined that you were presenting photographs or something rectangular like that. If I were trying to cover a scroll view with UILeafs (wah wah) I would take a statistical approach. I would 'paint' leaves randomly along horizontal/vertical strips as the user scrolls. Perhaps that's what you're doing already? Whatever you're doing I think it looks good.
Now I guess that the reason you're asking is to prevent the little random white spots that show through - is that right? If I may suggest a different solution: try coloring the background of your scroll view something earthy that looks good if it shows through here and there.
Also, it occurred to me that you could use a larger template image -- something that already has a nice distribution of leaves -- with transparency all along the outside outline of the leaves but nowhere else. Then you could tile these, but with overlap, so that the alpha just shows through to the leaves below. You could have a number of these images so that it doesn't look obvious. This would take away all of the uncertainty and make your retiling very efficient.
Also, consider learning about CoreAnimation (CALayer in particular) and CoreGraphics/Quartz 2D. Proper use of these libraries will probably yield great improvements in rendering speed.
UPDATE 2:
If your images are all 150px wide, then split your scrollview into columns and add/remove based on those (as discussed in chat).
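One way to read that, sketched under the assumption of a fixed 150pt column width and a columnBottoms array property (both assumptions): drop each new image into whichever column is currently shortest, so the columns stay packed with no gaps.

    #import <UIKit/UIKit.h>

    static const CGFloat kColumnWidth = 150.0;

    // self.columnBottoms is assumed to be an NSMutableArray of NSNumber,
    // one entry per column, holding that column's current bottom edge.
    - (void)appendImage:(UIImage *)image toScrollView:(UIScrollView *)scrollView
    {
        NSUInteger shortestColumn = 0;
        CGFloat shortestBottom = CGFLOAT_MAX;
        for (NSUInteger i = 0; i < self.columnBottoms.count; i++) {
            CGFloat bottom = [self.columnBottoms[i] floatValue];
            if (bottom < shortestBottom) {
                shortestBottom = bottom;
                shortestColumn = i;
            }
        }

        UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
        imageView.frame = CGRectMake(shortestColumn * kColumnWidth, shortestBottom,
                                     kColumnWidth, image.size.height);
        [scrollView addSubview:imageView];

        self.columnBottoms[shortestColumn] = @(CGRectGetMaxY(imageView.frame));
    }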
Good luck!

Improving drawing performance on custom UIView

I have a custom UIView which is composed of many images whose positions change in response to the user's touch.
The view must track the user's touch, and I'm experiencing a performance bottleneck in the drawing of this view that prevents me from following the input in real time.
At the beginning I was drawing everything in the [UIView drawRect:] method, and of course it was way too slow because everything was redrawn even when not necessary.
Then I used multiple CALayers to update only the layer that was changing, and this gave me much better responsiveness.
But still, when I have to draw the same image many times on a layer it takes up to 500 ms.
Since the images are placed at fixed positions, is there a way to pre-draw them? Should I consider putting them in many CALayers and just hiding/showing them?
Also, I don't understand why [CALayer setNeedsDisplayInRect:] exists when the delegate has (apparently) no way to know what the invalid rect is in order to optimize the drawing.
Solution
Following the advice in the answer, I finally created many CALayers for the images and set the contents property the first time each layer was shown. This is a lazy-loading compromise: on a first attempt I set the contents of every layer at creation time, but that pre-drew every possible image at program launch, freezing the application for seconds.
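A sketch of what that compromise looks like (the frameForImageAtIndex: and renderImageForIndex: helpers are placeholders for my own code):

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    // Create the layer up front, but leave its bitmap empty.
    - (CALayer *)layerForImageAtIndex:(NSUInteger)index
    {
        CALayer *layer = [CALayer layer];
        layer.frame = [self frameForImageAtIndex:index];   // placeholder helper
        layer.hidden = YES;
        return layer;
    }

    // Render the bitmap only the first time the layer actually becomes visible.
    - (void)showLayer:(CALayer *)layer forImageAtIndex:(NSUInteger)index
    {
        if (layer.contents == nil) {
            UIImage *image = [self renderImageForIndex:index]; // placeholder helper
            layer.contents = (id)image.CGImage;
        }
        layer.hidden = NO;
    }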
From the documentation for -[CALayer drawInContext:]:
Default implementation does nothing. The context may be clipped to protect valid layer content. Subclasses that wish to find the actual region to draw can call CGContextGetClipBoundingBox. Called by the display method when the contents property is being updated.
The default implementation of display calls drawInContext: on an automatically created context, presumably with the clip bounding box already set (and presumably that is what ends up being passed through to drawRect:).
If you're drawing several static images, I'd just stick each one in its own UIView; I don't think the overhead is that big (if it is, the CALayer overhead should be smaller). If they all animate, I'd definitely use UIView/CALayer. If some of them don't animate (much) and you notice significant slowness, you can pre-render those. It's a trade-off between rendering in drawRect: (or similar) and layer compositing on the GPU, but in general I'd assume that the latter is much faster.
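Pre-rendering in that sense might be sketched like this; drawStaticContentInContext: stands in for whatever drawing code you already have:

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    // Draw the static content once into a bitmap and hand it to a layer,
    // so the expensive drawing no longer runs while animating.
    - (CALayer *)preRenderedLayerWithSize:(CGSize)size
    {
        UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
        [self drawStaticContentInContext:UIGraphicsGetCurrentContext()]; // placeholder
        UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        CALayer *layer = [CALayer layer];
        layer.frame = CGRectMake(0.0, 0.0, size.width, size.height);
        layer.contents = (id)snapshot.CGImage;
        return layer;
    }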

Performance issues scaling multiple CALayers

I have two CALayer subclasses, each with its own drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx delegate. They are both simple layers (some single-color shapes drawn with CG paths), but I need to scale about 12 instances simultaneously, and I'm having some issues with frame rates. I marked all of the layers as opaque to try to free up some cycles, and have tried using implicit and explicit basic animations (on the bounds property itself), as well as assigning CATransform3D matrices to the transform property.
Does anyone know of a good way to quickly resize objects while maintaining a good frame-rate?
This doesn't sound like it should be beyond the capabilities of the iPhone.
One solution might be to render them to an image and scale that; this is (more or less) what CoreAnimation would do anyway. It sounds like you have a defect somewhere, though - maybe you should post your code so people can look at it.
Where are you performing the redraw and what are you redrawing?
I agree with Roger.
Check how often your drawLayer:inContext: methods (or whatever you use to draw) are being called. A simple NSLog can accomplish that. If they are being called constantly, consider Roger's idea of rendering to an image and scaling that.
You will likely have to fire up the performance tools to find your bottleneck.
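If the drawing does turn out to be the bottleneck, the render-to-an-image idea might look roughly like this sketch (the method names are illustrative):

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    // Snapshot a layer's current appearance into a UIImage.
    - (UIImage *)imageFromLayer:(CALayer *)layer
    {
        UIGraphicsBeginImageContextWithOptions(layer.bounds.size, layer.opaque, 0.0);
        [layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return image;
    }

    // Scale a bitmap-backed copy instead of the original layer, so the
    // custom drawing code is not re-run on every frame of the animation.
    - (void)scaleSnapshotOfLayer:(CALayer *)layer by:(CGFloat)factor
    {
        CALayer *snapshot = [CALayer layer];
        snapshot.frame = layer.frame;
        snapshot.contents = (id)[self imageFromLayer:layer].CGImage;
        [layer.superlayer addSublayer:snapshot];
        layer.hidden = YES;

        snapshot.transform = CATransform3DMakeScale(factor, factor, 1.0);
    }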

How do I use CALayer with the iPhone?

Currently, I have a UIView subclass that "stamps" a single 2px by 2px CGLayerRef across the screen, up to 160 x 240 times.
I currently animate this by moving the UIView (actually, a UIImageView) "up" the screen 2 pixels and then drawing the next "row".
Would using multiple CALayer layers speed up performance of rendering this animation?
Are there tutorials, sample applications or code snippets for use of CALayer with the iPhone SDK?
The reason I ask is that most of the code snippets I find that demonstrate simple examples of CALayer employ method calls that do not work with the iPhone SDK. I appreciate any advice or pointers.
Okay, well, if you want something with good examples of CA code that draws things like that and works on the phone, I recommend the GeekGameBoard code that Jens Alfke published (it is an improved version of some Apple demo code).
Based on what you are describing, I think you are doing something way more complicated than it needs to be. My impression is that you basically want a static view that you are animating by shifting its position so that it is partially off screen. If you just need to draw some static content in your drawRect:, going through layers is not going to be faster than simply calling CGContextFillRect() with your color. After that you could just use implicit animations and the animator proxy on UIView to move the view. I suspect you could even get rid of the custom drawRect: implementation with a patterned UIColor, but I honestly have not benchmarked the difference between the two.
What CALayer methods are you seeing that don't work on iPhone? Aside from animation features tied to CoreImage, I have not noticed much that is missing. The big thing you are likely to notice is that all views are layer backed (so you do not need to do anything special to use layers; you can just grab a UIView's layer through the layer accessor method), and that the coordinate system has a top-left origin.
In any event, generally having more things is slower than having fewer things. If you are just repeating the same pattern over and over again you are likely to find the best performance is implementing a custom UIView/CALayer/UIColor that knows how to draw what you want, rather than placing visually identical layers or views next to each other.
Having said that, generally layers are lighter weight than views, so if you have a lot of separate elements that you need to keep logically separated you will find that moving to layers can be a win over using views.
You might want to look at -[UIColor initWithPatternImage:] depending on exactly what you are trying to do. If you are using this two pixel pattern as a background color you could just make a UIColor that draws it and set the background.
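For example, something along these lines (the image name is made up):

    #import <UIKit/UIKit.h>

    // Tile a small pattern image across the whole view via the background
    // color, instead of stamping it repeatedly in drawRect:.
    - (void)applyPatternBackgroundToView:(UIView *)view
    {
        UIImage *pattern = [UIImage imageNamed:@"dot2x2.png"];  // hypothetical asset
        view.backgroundColor = [[UIColor alloc] initWithPatternImage:pattern];
    }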
What CALayer methods are you seeing that don't work on iPhone?
As one example, I tried implementing the grid demo here, without much luck. It looks like CAConstraintLayoutManager and CAConstraint are not available in QuartzCore.h.
In another attempt, I tried a very simple, small 20x20 CALayer object as a sublayer of my UIView's layer property, but that didn't show up.
Right now, I have a custom UIView whose drawRect: method I override. In drawRect: I grab a context and render two types of CGLayerRefs:
At "off" cells I draw the background color across the entire 320x480 canvas.
At "on" cells, I either draw a single CGLayerRef across a grid of 320x480 pixels (initialization) or across a 320x2 row (animation).
During animation, I make a UIImageView clip view from 320x478 pixels, and draw a single row. This "pushes" my bitmap up the screen two pixels at a time.
Basically, I'd like to test whether or not using CALayer will accomplish two things:
Make my rendering faster, if CALayer has less overhead than what I'm doing now
Make my animation smoother, by letting me transition a layer up the screen smoothly
Unfortunately, I can't seem to get a basic CALayer working at the moment, and haven't found a good chunk of sample code to look at and play with.
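For reference, the bare minimum that usually makes a sublayer visible looks something like this sketch; a plain CALayer with no backgroundColor, no contents, and no delegate draws nothing, which is a common reason a test layer never shows up:

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    // e.g. in viewDidLoad of the view controller:
    CALayer *box = [CALayer layer];
    box.frame = CGRectMake(20.0, 20.0, 20.0, 20.0);        // 20x20 at (20, 20)
    box.backgroundColor = [UIColor redColor].CGColor;      // visible without contents
    [self.view.layer addSublayer:box];

    // For custom drawing instead, the layer also needs a delegate (or a
    // subclass overriding drawInContext:) plus a call to [box setNeedsDisplay];
    // otherwise drawLayer:inContext: is never invoked.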