I am creating a vector-based drawing app, and since it is truly vector-based I need a "limitless" canvas. So I have created the following structure:
UIView (canvas view)
|
CALayer (canvas container)
|
CAShapeLayer (the drawn shapes)
...
Each shape that has been drawn by the user is a CAShapeLayer with a CGPath assigned to it.
When the user zooms out the container (say the container has a size of about 10000x10000 points) and draws a new shape, I get the points at scale 1 and then assign a scale transform to the new CAShapeLayer (that's the only way I found that works).
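Roughly, the shape creation looks like this (just a sketch; canvasScale and canvasContainerLayer are illustrative names for the container's current zoom factor and the container layer):
// Build the shape at scale 1, then scale the layer to match the
// canvas container's current zoom (canvasScale is assumed).
CAShapeLayer *shapeLayer = [CAShapeLayer layer];
shapeLayer.path = newPath;  // points captured at scale 1
shapeLayer.strokeColor = [UIColor blackColor].CGColor;
shapeLayer.fillColor = NULL;
shapeLayer.transform = CATransform3DMakeScale(canvasScale, canvasScale, 1.0);
[canvasContainerLayer addSublayer:shapeLayer];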
The problem is that in my app I can draw a connection between two objects, so I use a CAShapeLayer to draw the connection. But when zoomed out (say the distance between the two shapes is about 5000 pts) I draw a line between two points that are about 5000 pts apart as well, and when I do that I believe SpringBoard is restarted.
I've read that CAShapeLayer caches only the pixels of the drawn path, and maybe that's my problem.
So I'm asking for your help, guys, because I've run out of ideas.
Is there a different approach to drawing these huge shapes that makes them easy to work with?
Thank you in advance.
Actually, I'm migrating a game from another platform, and I need to generate a sprite with two images.
The first image is something like a form, pattern, or stamp, and the second is just a rectangle that gives the first its color. If the color were plain it would be easy: I could use sprite.color and sprite.colorBlendFactor to play with it. But there are levels where the second image is a rectangle with two colors (red and green, for example).
Is there any way to implement this with Sprite Kit?
I mean, something like using a Core Image filter such as CIBlendWithAlphaMask, but with only an input image and a mask image (https://developer.apple.com/library/ios/documentation/graphicsimaging/Reference/CoreImageFilterReference/Reference/reference.html#//apple_ref/doc/uid/TP40004346).
Thanks.
Look into the SKCropNode class - it allows you to set a mask for the nodes underneath it.
In short, you would create two SKSpriteNodes - one with your stamp, the other with your coloured rectangle. Then:
SKCropNode *myCropNode = [SKCropNode node];
[myCropNode addChild:colouredRectangle]; // the colour to be rendered by the form/pattern
myCropNode.maskNode = stampNode; // the pattern sprite node
[self addChild:myCropNode];
Note that the results will probably be closer to CIBlendWithMask than CIBlendWithAlphaMask, since the crop node masks out any pixels whose alpha value is below 0.05 and renders all pixels above that level, so the edges will be jagged rather than smoothly faded. Just don't use any semi-transparent areas in your mask and you'll be fine.
I am drawing a line over 5 custom UIViews (like UITableView rows), from one position to another (x and y axes), using CAShapeLayer.
My issue is that I want to know which view the CAShapeLayer (the line, in my case) is currently in. Is there anything like a CGRect intersection test for a CALayer? I am trying to create a graph using Core Animation instead of a chart API.
You could check whether the shape layer intersects a specific layer by testing the bounding box of its path against the frame of the other layer.
BOOL doesIntersect = CGRectIntersectsRect(CGPathGetBoundingBox(path), layer.frame);
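For example, to find which of the row views the line currently passes through, you could convert each view's bounds into the shape layer's coordinate space first. A rough sketch, where rowViews and shapeLayer are assumed names:
CGRect pathBounds = CGPathGetBoundingBox(shapeLayer.path);
for (UIView *row in rowViews) {
    // Convert the row's bounds into the shape layer's coordinate space
    // so both rectangles are measured against the same origin.
    CGRect rowRect = [row.layer convertRect:row.bounds toLayer:shapeLayer];
    if (CGRectIntersectsRect(pathBounds, rowRect)) {
        NSLog(@"Line passes through %@", row);
    }
}
Keep in mind the bounding box is a coarse test: a diagonal line's box can intersect rows the line itself never touches.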
I'm trying to build this:
Where the white background is in fact transparent. I know how to clip a CGPath to a set region, but this seems to be the other way around, since I need to subtract regions from a filled CGPath.
I guess the right way to go would be to subtract the whole outer circles from the CGPath and then draw smaller circles at my CGPoints, but I'm not sure how to execute the former. Can anyone point me in the right direction?
Here's what I would do:
1) Draw your general line.
2) Call CGContextSetBlendMode(context, kCGBlendModeClear) so that subsequent drawing clears the context (see the sketch after this list).
3) Draw your bigger circles.
4) Call CGContextSetBlendMode(context, kCGBlendModeNormal) to return to normal drawing.
5) Draw your little circles.
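A minimal sketch of those five steps inside drawRect: (the coordinates and radii are placeholder values; the view must be non-opaque with a clear background for the punched-out areas to actually show as transparent):
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // 1) Draw the general line.
    CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);
    CGContextSetLineWidth(ctx, 4.0);
    CGContextMoveToPoint(ctx, 20.0, 50.0);
    CGContextAddLineToPoint(ctx, 300.0, 50.0);
    CGContextStrokePath(ctx);

    // 2) Switch to the clear blend mode so drawing erases pixels.
    CGContextSetBlendMode(ctx, kCGBlendModeClear);

    // 3) Punch out the bigger circle(s).
    CGContextFillEllipseInRect(ctx, CGRectMake(48.0, 38.0, 24.0, 24.0));

    // 4) Return to normal compositing.
    CGContextSetBlendMode(ctx, kCGBlendModeNormal);

    // 5) Draw the smaller circle(s) on top.
    CGContextSetFillColorWithColor(ctx, [UIColor blackColor].CGColor);
    CGContextFillEllipseInRect(ctx, CGRectMake(54.0, 44.0, 12.0, 12.0));
}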
You could instead start a transparency layer, draw the lines, then erase the larger circles, then draw the smaller black circles. When you end the transparency layer, it composites exactly what you want back onto the context.
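One way to do the erasing inside the layer is the clear blend mode again; the difference is that the erase stays within the transparency layer instead of cutting through whatever is already on the context. A sketch with the same placeholder geometry as above:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextBeginTransparencyLayer(ctx, NULL);

// Draw the line inside the transparency layer.
CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);
CGContextSetLineWidth(ctx, 4.0);
CGContextMoveToPoint(ctx, 20.0, 50.0);
CGContextAddLineToPoint(ctx, 300.0, 50.0);
CGContextStrokePath(ctx);

// Erase the larger circle from this layer only.
CGContextSetBlendMode(ctx, kCGBlendModeClear);
CGContextFillEllipseInRect(ctx, CGRectMake(48.0, 38.0, 24.0, 24.0));
CGContextSetBlendMode(ctx, kCGBlendModeNormal);

// Draw the small black circle on top.
CGContextSetFillColorWithColor(ctx, [UIColor blackColor].CGColor);
CGContextFillEllipseInRect(ctx, CGRectMake(54.0, 44.0, 12.0, 12.0));

// Ending the layer composites the result back onto the context.
CGContextEndTransparencyLayer(ctx);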
I'm having an issue with drawing to areas outside of the MKMapRect passed to drawMapRect:zoomScale:inContext: in my MKOverlayView subclass. I'm trying to draw a triangle for each coordinate in a collection, and the problem occurs when a coordinate is near the edge of the MKMapRect. See the below image for an example of the problem.
In the image, the light red boxes indicate the MKMapRect being rendered in each call to drawMapRect:. The problem is illustrated in the red circle where, as you can see, only part of the triangle is being rendered. I'm assuming it's being clipped to the MKMapRect, though the documentation for drawMapRect:zoomScale:inContext: makes me think this shouldn't be happening.
From the documentation:
You should also not make assumptions that the view’s frame matches the bounding rectangle of the overlay. The view’s frame is actually bigger than the bounding rectangle to allow you to draw lines for things like roads that might be located directly on the border of that rectangle.
My current solution is to draw objects more than once if they fall within a map rect slightly larger than the one given to drawMapRect:, but this causes me to draw some things more often than needed.
Does anyone know of a way to increase the size of the clipping area in drawMapRect so this isn't an issue? Any other suggestions are also welcome.
I ended up adding a buffer to the rect passed in to drawMapRect:zoomScale:inContext: and using that to determine which objects to draw. This results in more objects being drawn than needed, but not by much.
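In case it helps anyone, the buffering can be done with MKMapRectInset and negative insets. A rough sketch, where self.coordinates and the 0.5 padding factor are just placeholders:
- (void)drawMapRect:(MKMapRect)mapRect
          zoomScale:(MKZoomScale)zoomScale
          inContext:(CGContextRef)context
{
    // Negative insets grow the rect; pad by half its size on each side.
    // The padding factor is arbitrary -- tune it to your triangle size.
    MKMapRect paddedRect = MKMapRectInset(mapRect,
                                          -mapRect.size.width * 0.5,
                                          -mapRect.size.height * 0.5);

    for (NSValue *wrapper in self.coordinates) {  // assumed array of boxed CLLocationCoordinate2Ds
        CLLocationCoordinate2D coord;
        [wrapper getValue:&coord];
        MKMapPoint point = MKMapPointForCoordinate(coord);
        if (MKMapRectContainsPoint(paddedRect, point)) {
            // ... draw the triangle for this coordinate ...
        }
    }
}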
Alright, I have a UIView which displays a CGPath that is rather wide. It can be scrolled horizontally within a UIScrollView. Right now the view uses a CATiledLayer, because it's more than 1024 pixels wide.
So my question is: is this efficient?
- (void)drawRect:(CGRect)rect {
    CGContextRef g = UIGraphicsGetCurrentContext();
    CGContextAddPath(g, path);                   // path is a CGPathRef ivar
    CGContextSetStrokeColorWithColor(g, color);  // color is a CGColorRef ivar
    CGContextDrawPath(g, kCGPathStroke);
}
Essentially, I'm drawing the whole path every time a tile in the layer is drawn. Does anybody know if this is a bad idea, or is CGContext relatively smart about only drawing the parts of the path that are within the clipping rect?
The path is mostly set up in such a way that I could break it into blocks similar in size and shape to the tiles, but that would require more work on my part, would introduce some redundancy among the paths (for shapes that cross tile boundaries), and would take some calculating to find which path or paths to draw.
Is it worth it to move in this direction, or is CGPath already drawing relatively quickly?
By necessity, Quartz must clip everything it draws against the bounds of the CGContext it's drawing into. But it will still save CPU if you only send it geometry that is visible within that tile. If you do this by preparing multiple paths, one per tile, the clipping happens once (when you create your multiple paths) rather than every time Quartz draws a tile. The efficiency gain comes down to how complex your path is: if it's a few simple shapes, no big deal; if it's a vector-drawn map of the national road network, it could be huge! Of course, you have to trade this speedup off against the increased complexity of your code.
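A minimal sketch of the per-tile idea, assuming the path has been pre-split into an array of UIBezierPath objects, one per horizontal tile (tilePaths and kTileWidth are placeholder names):
- (void)drawRect:(CGRect)rect {
    CGContextRef g = UIGraphicsGetCurrentContext();

    // Work out which tile-sized buckets overlap the rect being drawn.
    NSInteger firstTile = MAX(0, (NSInteger)(CGRectGetMinX(rect) / kTileWidth));
    NSInteger lastTile  = MIN((NSInteger)self.tilePaths.count - 1,
                              (NSInteger)(CGRectGetMaxX(rect) / kTileWidth));

    // Add only the geometry that falls near this tile.
    for (NSInteger i = firstTile; i <= lastTile; i++) {
        UIBezierPath *tilePath = self.tilePaths[i];
        CGContextAddPath(g, tilePath.CGPath);
    }
    CGContextSetStrokeColorWithColor(g, color);  // color as in the question
    CGContextDrawPath(g, kCGPathStroke);
}
Whether that bookkeeping is worth it depends, as above, on how complex the path is.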
What you could do is profile with Instruments to see how much time is actually being spent in Quartz before you go crazy optimizing stuff.