Do I have to move the layer's frame or apply a translate matrix transform to the layer? Or can I move the contents inside the layer? If the contents are not movable inside the layer, how are they positioned initially?
A CALayer has a frame (or, equivalently, a bounds and a position), which is used logically to determine what to draw. When drawInContext: or an equivalent is called, it's the frame that determines how the contents are produced.
However, like OS X, iOS uses a compositing window manager, which means that views know how to draw their output to a buffer, and the buffers are combined to produce what you see on screen, with the window manager figuring out what to do about caching and video memory management in between.
If you adjust the transform property of the view or of the layer, then you adjust how the compositing happens. However, the results of drawInContext: are expected to stay the same, so the window manager knows it can keep using the cached image.
So, for example, if you set a frame of size 128x128 and then a transform that scales the CALayer up to double, you'll occupy a 256x256 area of the screen but the image used for compositing will be only 128x128 in size, making each source pixel into four target pixels. If you set a frame of size 256x256 and the identity transform, you'll cover the same amount of screen space but with each source pixel being 1:1 related to a target pixel.
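To make the difference concrete, here's a minimal Objective-C sketch of the two approaches described above (the layer variable and sizes are placeholders):

    CALayer *layer = [CALayer layer];

    // Option 1: draw at 128x128 and let the compositor scale it up 2x.
    // The backing image stays 128x128; each source pixel becomes four
    // screen pixels, and no redraw is triggered.
    layer.frame = CGRectMake(0, 0, 128, 128);
    layer.transform = CATransform3DMakeScale(2.0, 2.0, 1.0);

    // Option 2: draw at 256x256 with the identity transform. Source
    // pixels map 1:1 to screen pixels, but (per the text above) the
    // frame change prompts a redraw from first principles.
    layer.transform = CATransform3DIdentity;   // reset before touching frame
    layer.frame = CGRectMake(0, 0, 256, 256);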
A side effect is that changing the frame causes a redraw from first principles. Changing the transform doesn't. So the latter is usually faster, and is also the thing to do if you decide to use something like CATiledLayer (as used in Safari, Maps, etc) that draws in a separate thread and may take a while to come up with results.
As a rule of thumb, you use the frame to set the initial position and update the frame for ordinary positioning work. You play with the transform for transitions and other special effects. However, all of the frame and transform properties of a CALayer are animatable in the Core Animation sense, so it's really still at your discretion.
Most people don't work at the level of CALayer, but prefer to work with UIViews, in which case the comments are mostly the same, with the caveat that you can adjust the [2d] transform on the view or the [3d] transform on the view's layer and have the compositor figure it all out, but should change the frame to prompt a redraw.
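At the UIView level, the same split might look like this (a sketch; someView is a placeholder):

    // Composited: scales/rotates what was already drawn, no redraw.
    someView.transform = CGAffineTransformMakeScale(2.0, 2.0);              // 2D, on the view
    someView.layer.transform = CATransform3DMakeRotation(M_PI_4, 0, 1, 0);  // 3D, on the layer

    // Redrawn: prompts drawRect: at the new size (assuming the view's
    // contentMode is UIViewContentModeRedraw).
    someView.transform = CGAffineTransformIdentity;
    someView.frame = CGRectMake(0, 0, 256, 256);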
I have tried using the methods Transform.scale (for zooming) and Transform.translate (for moving), but they seem to trigger the paint method in the CustomPainter (even though shouldRepaint returns false; in fact, that method is not even invoked).
Maybe there is some other way of doing what I want, i.e. being able to move and zoom something I have created with a Canvas without executing the paint method again?
In the example code below there are three sliders: one for zooming out (i.e. reducing the size), one for moving horizontally, and one for moving vertically.
The "paint" method simply draws a polygon (see the below attached screenshot picture).
While the example code below is simple (a small amount of hardcoded vector data, fast to render), I want to emphasize that the solution I am looking for needs to support MANY complicated drawings (with LOTS OF data, slow to render); it is not acceptable to suggest, for example, manually converting this one vector image to a raster image (e.g. PNG/JPG/GIF) instead.
Below I try to describe a scenario that needs to be supported:
Imagine you want to implement an app with a huge dropdown list with lots of different vector data images to be selected.
The data of those VECTOR images may be retrieved from the internet or from a big local SQLite database.
IMPORTANT: The DATA in those images is NOT raster data such as JPG, PNG, or GIF; the VECTOR data to be retrieved is defined as lots of screen coordinates for points, lines, polygons, textual labels, icons, color values, and so on.
Such VECTOR data will then be used for creating the image, and as far as I understand you should use CustomPainter with its paint method, unless there are better options?
Also imagine that each such selected vector image is HUGE, with MANY THOUSANDS of lines, polygons, icons, and so on, and that the paint method might take seconds to create the image.
BUT, once it is drawn, the data will not change. So, since the paint method might take seconds to render a huge amount of vector data, you want to avoid invoking it frequently when moving or zooming.
Therefore I think it would be desirable to be able to use the method shouldRepaint and return false, but it seems that method is not even invoked at all when resizing or moving with the Transform methods scale and translate.
But maybe there is some other solution to support the scenario described above, maybe some class other than CustomPainter that does not automatically trigger the paint method when applying Transform.scale/translate?
I hope there is a solution where the Flutter framework is somehow able to automatically reuse the bitmap (i.e. the pixel data) that was created, potentially slowly, by a paint method, and can scale/zoom and move it faster than by executing the paint method again.
If you just want to zoom, scale, and pan on your images, then you can try the new InteractiveViewer widget.
InteractiveViewer class

A widget that enables pan and zoom interactions with its child. The user can transform the child by dragging to pan or pinching to zoom.

https://api.flutter.dev/flutter/widgets/InteractiveViewer-class.html
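A minimal Dart sketch of that approach, with a trivial stand-in painter for the expensive vector drawing (the painter, sizes, and scale limits are placeholders to adapt):

    import 'package:flutter/material.dart';

    void main() => runApp(MaterialApp(home: const ZoomDemo()));

    // Trivial stand-in for the expensive vector painter from the question.
    class PolygonPainter extends CustomPainter {
      @override
      void paint(Canvas canvas, Size size) {
        final paint = Paint()..color = Colors.teal;
        final path = Path()
          ..moveTo(size.width / 2, 0)
          ..lineTo(size.width, size.height)
          ..lineTo(0, size.height)
          ..close();
        canvas.drawPath(path, paint);
      }

      @override
      bool shouldRepaint(covariant CustomPainter oldDelegate) => false;
    }

    class ZoomDemo extends StatelessWidget {
      const ZoomDemo({super.key});

      @override
      Widget build(BuildContext context) {
        return Scaffold(
          body: InteractiveViewer(
            minScale: 0.1,   // how far out the user may zoom
            maxScale: 10.0,  // how far in
            boundaryMargin: const EdgeInsets.all(double.infinity), // pan freely
            // RepaintBoundary caches the painted picture as its own layer,
            // so the pan/zoom transform should not re-run paint.
            child: RepaintBoundary(
              child: CustomPaint(
                size: const Size(400, 400),
                painter: PolygonPainter(),
              ),
            ),
          ),
        );
      }
    }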
How would I efficiently draw a CGPath on a CATiledLayer? I'm currently checking if the bounding box of the tile intersects the bounding box of the path like this:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context {
    CGRect boundingBox = CGPathGetPathBoundingBox(drawPath);
    CGRect rect = CGContextGetClipBoundingBox(context);
    if (!CGRectIntersectsRect(boundingBox, rect))
        return;

    // Draw path...
}
This is not very efficient, as drawLayer:inContext: is called multiple times from multiple threads, and it results in drawing the path many times.
Is there a better, more efficient way to do this?
The simplest option is to draw your curve into a large image and then tile the image. But if you're tiling, it probably means the image would be too large, or you would have just drawn the path in the first place, right?
So you probably need to split your path up. The simplest approach is to split it up element by element using CGPathApply. For each element, you can check its bounding box and determine if that element falls in your bounds. If not, just keep track of the last end point. If so, then move to the last end point you saw and add the element to a new path for this tile. When you're done, each tile will draw its own path.
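A minimal sketch of that per-tile split, handling only move and line elements (curve elements would add their control points to the box test); the struct and names are illustrative:

    typedef struct {
        CGRect tileRect;             // the tile we're building a path for
        CGMutablePathRef tilePath;   // accumulates elements relevant to it
        CGPoint lastPoint;           // end point of the previous element
        bool needsMoveTo;            // true after skipping elements
    } TileSplitter;

    static void SplitApplier(void *info, const CGPathElement *element) {
        TileSplitter *s = info;
        switch (element->type) {
            case kCGPathElementMoveToPoint:
                s->lastPoint = element->points[0];
                s->needsMoveTo = true;
                break;
            case kCGPathElementAddLineToPoint: {
                CGPoint end = element->points[0];
                CGRect box = CGRectStandardize(CGRectMake(s->lastPoint.x,
                                                          s->lastPoint.y,
                                                          end.x - s->lastPoint.x,
                                                          end.y - s->lastPoint.y));
                // Guard against zero-width/height boxes from axis-aligned lines.
                box = CGRectInset(box, -1.0, -1.0);
                if (CGRectIntersectsRect(box, s->tileRect)) {
                    if (s->needsMoveTo) {
                        // Re-anchor at the last end point we tracked.
                        CGPathMoveToPoint(s->tilePath, NULL,
                                          s->lastPoint.x, s->lastPoint.y);
                        s->needsMoveTo = false;
                    }
                    CGPathAddLineToPoint(s->tilePath, NULL, end.x, end.y);
                } else {
                    s->needsMoveTo = true;  // element skipped; remember position
                }
                s->lastPoint = end;
                break;
            }
            default:  // quad/cubic/close cases omitted in this sketch
                break;
        }
    }

    // Usage: build the per-tile path once, then cache it.
    TileSplitter splitter = { tileRect, CGPathCreateMutable(), CGPointZero, true };
    CGPathApply(drawPath, &splitter, SplitApplier);
    // ...draw/cache splitter.tilePath, and CGPathRelease it when done.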
Technically you will "draw" things that go outside your bounds here (such as a line that extends beyond the tile), but this is much cheaper than it sounds. Core Graphics is going to clip single elements very easily. The goal is to avoid calculating elements that are not in your bounding box at all.
Be sure to cache the resulting path. You don't need to calculate the path for every tile; just the ones you're drawing. But avoid recalculating it every time the tile draws. Whenever the data changes, dump your cache. If there are a very large number of tiles, you can also use NSCache to optimize this even better.
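For example, a per-tile cache might look like this (a sketch; the key scheme, the tilePathCache property, and the buildPathForTileRect: helper are assumptions). Note that NSCache is thread-safe, which matters since CATiledLayer draws from multiple threads:

    // Lazily build and cache the per-tile path, keyed by the tile's rect.
    NSString *key = NSStringFromCGRect(tileRect);
    UIBezierPath *tilePath = [self.tilePathCache objectForKey:key];
    if (tilePath == nil) {
        tilePath = [self buildPathForTileRect:tileRect]; // e.g. via CGPathApply
        [self.tilePathCache setObject:tilePath forKey:key];
    }

    // Whenever the underlying data changes, dump the cache:
    [self.tilePathCache removeAllObjects];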
You don't show where the path gets created. If possible, you might try building the path up in the -drawLayer:inContext: method, only creating the portion of it needed for the tile being drawn.
As with all performance problems, you should use Instruments to profile your code and find out exactly where the bottlenecks are. Have you tried that already, and if so, what did you find?
As a side note, is there a reason you're using CGPath instead of UIBezierPath? From Apple's documentation:
For creating paths in iOS, it is recommended that you use UIBezierPath instead of CGPath functions unless you need some of the capabilities that only Core Graphics provides, such as adding ellipses to paths. For more on creating and rendering paths in UIKit, see “Drawing Shapes Using Bezier Paths.”
I'm having an issue with drawing to areas outside of the MKMapRect passed to drawMapRect:zoomScale:inContext: in my MKOverlayView-derived class. I'm trying to draw a triangle for each coordinate in a collection, and the problem occurs when a coordinate is near the edge of the MKMapRect. See the image below for an example of the problem.
In the image, the light red boxes indicate the MKMapRect being rendered in each call to drawMapRect. The problem is illustrated in the red circle where, as you can see, only part of the triangle is being rendered. I'm assuming that it's being clipped to the MKMapRect, though the documentation for MKOverlayView's drawMapRect: makes me think this shouldn't be happening.
From the documentation:
You should also not make assumptions that the view’s frame matches the bounding rectangle of the overlay. The view’s frame is actually bigger than the bounding rectangle to allow you to draw lines for things like roads that might be located directly on the border of that rectangle.
My current solution is to draw objects more than once if they are in a map rect that is slightly larger than the map rect given to drawMapRect, but this causes me to draw some things more than needed.
Does anyone know of a way to increase the size of the clipping area in drawMapRect so this isn't an issue? Any other suggestions are also welcome.
I ended up adding a buffer to the rect passed in to drawMapRect:zoomScale:inContext: and using that to determine which objects to draw. This results in more objects being drawn than needed, but not by much.
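A sketch of that buffering approach (the padding factor and the self.coordinates collection are assumptions to adapt):

    - (void)drawMapRect:(MKMapRect)mapRect
              zoomScale:(MKZoomScale)zoomScale
              inContext:(CGContextRef)context {
        // Grow the rect by roughly the on-screen size of one triangle;
        // negative insets expand. zoomScale is screen points per map
        // point, so this converts ~30 screen points into map units.
        double pad = 30.0 / zoomScale;
        MKMapRect paddedRect = MKMapRectInset(mapRect, -pad, -pad);

        for (NSValue *value in self.coordinates) {  // hypothetical collection
            CLLocationCoordinate2D coord;
            [value getValue:&coord];
            MKMapPoint point = MKMapPointForCoordinate(coord);
            if (MKMapRectContainsPoint(paddedRect, point)) {
                // ...draw the triangle for this coordinate into context...
            }
        }
    }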
Greetings... I come in peace, shoot to kill...
I have a container of type UIView (a grid) and add many sublayers to the UIView's layer (CALayers representing cells within the grid).
Within a cell, I render many UIImages at different locations using CGContextDrawImage. I am well aware of the need to translate and scale, but the scaling (flipping) is with reference to the superview's (grid's) coordinates, and the origin of the cell CALayer is not (0,0). Therefore my rendering is all over the shop (mostly off screen). What is the best way to handle the translating and scaling when the UIImage is not at (0,0)? Is there an established design pattern I should be using?
I solved this issue by manually offsetting the translation by double the y origin.
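In code, that fix might look something like this (a sketch; it assumes the flip is done per cell layer inside drawLayer:inContext:, and imageRect/image are placeholders):

    - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
        CGRect bounds = layer.bounds;
        CGContextSaveGState(ctx);
        // The usual UIKit-style vertical flip translates by the height;
        // when the layer's y origin isn't 0, add twice that origin too.
        CGContextTranslateCTM(ctx, 0,
                              2.0 * CGRectGetMinY(bounds) + CGRectGetHeight(bounds));
        CGContextScaleCTM(ctx, 1.0, -1.0);
        CGContextDrawImage(ctx, imageRect, image.CGImage);  // placeholders
        CGContextRestoreGState(ctx);
    }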
Alright, I have a UIView that displays a CGPath which is rather wide. It can be scrolled horizontally within a UIScrollView. Right now, I have the view using a CATiledLayer, because it's more than 1024 pixels wide.
So my question is: is this efficient?
- (void)drawRect:(CGRect)rect {
    CGContextRef g = UIGraphicsGetCurrentContext();
    CGContextAddPath(g, path);
    // Note: CGContextSetStrokeColor expects an array of color components
    // in the context's current stroke color space.
    CGContextSetStrokeColor(g, color);
    CGContextDrawPath(g, kCGPathStroke);
}
Essentially, I'm drawing the whole path every time a tile in the layer is drawn. Does anybody know if this is a bad idea, or is CGContext relatively smart about only drawing the parts of the path that are within the clipping rect?
The path is mostly set up in such a way that I could break it up into blocks that are similar in size and shape to the tiles, but it would require more work on my part, introduce some redundancy amongst the paths (for shapes that cross tile boundaries), and also take some calculating to find which path or paths to draw.
Is it worth it to move in this direction, or is CGPath already drawing relatively quickly?
By necessity, Quartz must clip everything it draws against the dimensions of the CGContext it's drawing into. But it will still save CPU if you only send it geometry that is visible within that tile. If you do this by preparing multiple paths, one per tile, you're doing that clipping once (when you create your multiple paths) versus Quartz doing it every time you draw a tile. The efficiency gain comes down to how complex your path is: if it's a few simple shapes, no big deal; if it's a vector-drawn map of the national road network, it could be huge! Of course, you have to trade this speedup off against the increased complexity of your code.
What you could do is use Instruments to see how much time is being spent in Quartz before you go crazy optimizing things.