Clipping in MKOverlayView's drawMapRect: - iPhone

I'm having an issue with drawing to areas outside of the MKMapRect passed to drawMapRect:zoomScale:inContext: in my MKOverlayView subclass. I'm trying to draw a triangle for each coordinate in a collection, and the problem occurs when a coordinate is near the edge of the MKMapRect. See the image below for an example of the problem.
In the image, the light red boxes indicate the MKMapRect being rendered in each call to drawMapRect:. The problem is illustrated in the red circle where, as you can see, only part of the triangle is being rendered. I'm assuming that it's being clipped to the MKMapRect, though the documentation for MKOverlayView's drawMapRect: makes me think this shouldn't be happening.
From the documentation:
You should also not make assumptions that the view’s frame matches the bounding rectangle of the overlay. The view’s frame is actually bigger than the bounding rectangle to allow you to draw lines for things like roads that might be located directly on the border of that rectangle.
My current solution is to draw objects more than once if they fall within a map rect slightly larger than the one given to drawMapRect:, but this causes me to draw some things more often than necessary.
Does anyone know of a way to increase the size of the clipping area in drawMapRect so this isn't an issue? Any other suggestions are also welcome.

I ended up adding a buffer to the rect passed to drawMapRect:zoomScale:inContext: and using that expanded rect to determine which objects to draw. This results in a few more objects being drawn than strictly necessary, but not many.
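A minimal sketch of that buffered-rect test, assuming a hypothetical coordinateValues array, a hypothetical triangleSizeInMapPoints property, and a helper that actually draws one triangle:

- (void)drawMapRect:(MKMapRect)mapRect
          zoomScale:(MKZoomScale)zoomScale
          inContext:(CGContextRef)context
{
    // Pad the rect so triangles that straddle a tile boundary are
    // drawn in every tile they touch. triangleSizeInMapPoints is a
    // hypothetical property describing roughly how large the
    // triangles are in map points.
    double pad = self.triangleSizeInMapPoints;
    MKMapRect paddedRect = MKMapRectInset(mapRect, -pad, -pad);

    for (NSValue *value in self.coordinateValues) {  // hypothetical array of CLLocationCoordinate2D values
        CLLocationCoordinate2D coord = [value MKCoordinateValue];
        MKMapPoint point = MKMapPointForCoordinate(coord);
        if (MKMapRectContainsPoint(paddedRect, point)) {
            // Hypothetical helper that draws a single triangle.
            [self drawTriangleAtMapPoint:point zoomScale:zoomScale inContext:context];
        }
    }
}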

Related

CIDetector to detect any object's bounding box

Imagine having an array of images like these.
The background is always white (even in the 3rd pic, the main object there is that big brown rectangle with shapes inside).
No matter the type of image, you would need to:
1) find the main object's bounding rectangle,
2) crop it out like this,
3) and place it in the center of a blank square image.
How would you achieve this? I already know how to crop out anything given its rectangle and place it anywhere; I just need to know the best way to do the first step.
The Vision API can detect rectangles, faces, and barcodes, but it seems what I need is even simpler.
I just need to find the leftmost, rightmost, topmost, and bottommost non-white pixels, and those will be my bounds.
Is there any way to do this other than iterating over the pixel buffer pixel by pixel?
What is the type of these images? UIImage? CAShapeLayer? In most cases you should be able to get the .frame from each image in the array, which gives you a CGRect with the X and Y origin coordinates as well as the width and height. You should also have access to .midX and .midY, or .center.x and .center.y, to find the midpoint you're looking for. If, however, you're talking about taking in a flattened bitmap like a .jpg or .png and running shape detection on its contents, then you would need something like Vision to accomplish what you're trying to do.
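If the images really are flattened bitmaps, the brute-force scan the question describes is straightforward. A rough sketch, assuming an 8-bit RGBA bitmap and an arbitrary whiteness threshold:

#import <UIKit/UIKit.h>

// Returns the bounding box of all non-white pixels, or CGRectNull if
// the image is entirely white. Note the result is in the bitmap's own
// (bottom-left origin) coordinate space.
static CGRect NonWhiteBoundingBox(CGImageRef image) {
    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    uint8_t *data = calloc(width * height * 4, 1);

    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, 8, width * 4,
                                             space, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);

    size_t minX = width, minY = height, maxX = 0, maxY = 0;
    const uint8_t whiteThreshold = 250;  // arbitrary cutoff for "white"
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            const uint8_t *px = data + (y * width + x) * 4;
            bool isWhite = px[0] > whiteThreshold &&
                           px[1] > whiteThreshold &&
                           px[2] > whiteThreshold;
            if (!isWhite) {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }

    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    free(data);

    if (maxX < minX) return CGRectNull;  // no non-white pixel found
    return CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
}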

CAShapeLayer annoying clipping error

I am working on a map feature. The map is built up out of multiple CAShapeLayers with CGPaths from calculated coordinates. I have a clipping problem; see the screenshot below, where Alaska is badly clipped. The coordinates of the Alaska path extend beyond the bounds of my container layer. In fact, if I make my container layer big enough, the clipping effect disappears (of course).
You see a dark line because the bottom of Alaska is solid from left to right. The line is also darker than the rest of the map because the map has opacity (overlapping areas add up and get darker).
I drilled down into the problem and have narrowed it down to this single big polygon (no other polygons are responsible for the clipping error).
As a workaround, I make the layer bigger to hide the line, then shrink the UIView again.
I'd like to know what is causing the issue rather than relying on workarounds.
After a lot of digging, I managed to find an answer to my own question.
I was rendering the layers to a UIImage for improved performance. The background layer was scaled up by a UIScrollView, and then several things went wrong:
Apparently, setting masksToBounds:YES has no effect when using renderInContext:, just as the mask property of a CALayer has no effect there. masksToBounds (or clipsToBounds) only applies to child layers.
When scaling a bitmap, be sure to pass an integral value for the scale argument of UIGraphicsBeginImageContextWithOptions. If you don't, the image ends up with fractional dimensions, e.g. 24.2323 x 34.3290. By the way, that scale argument is meant to produce crisp detail on Retina screens, but it can be misused to zoom in on CAShapeLayer drawings.
When you use a fractionally sized image as a background layer, you get distortion at the edge.
The clipping effect disappeared after I updated my layer-to-image function. This one did the trick:
- (UIImage *)getImageWithSize:(CGSize)size opaque:(BOOL)opaque contentScale:(CGFloat)scale
{
    // Round the size and scale to whole numbers so the resulting
    // bitmap never has fractional dimensions.
    size = CGSizeMake(ceilf(size.width), ceilf(size.height));
    scale = roundf(scale);

    UIGraphicsBeginImageContextWithOptions(size, opaque, scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self renderInContext:context];
    UIImage *outputImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImg;
}
Using ceilf, roundf, or floorf didn't really matter, as long as you lose the fractions.
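Assuming the method above lives in a category on CALayer (the category itself isn't shown here), calling it on the map's background layer would look something like this; mapLayer and zoomLevel are placeholder names:

CGFloat screenScale = [UIScreen mainScreen].scale;
UIImage *flattened = [mapLayer getImageWithSize:mapLayer.bounds.size
                                         opaque:NO
                                   contentScale:screenScale * zoomLevel]; // zoomLevel: hypothetical zoom factor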
Sorry if my stupidity wasted any of your time, but perhaps others have the same issue.

Efficiently draw CGPath on CATiledLayer

How would I efficiently draw a CGPath on a CATiledLayer? I'm currently checking if the bounding box of the tile intersects the bounding box of the path like this:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context {
    CGRect boundingBox = CGPathGetPathBoundingBox(drawPath);
    CGRect rect = CGContextGetClipBoundingBox(context);
    if (!CGRectIntersectsRect(boundingBox, rect))
        return;
    // Draw path...
}
This is not very efficient because drawLayer:inContext: is called multiple times from multiple threads, which results in drawing the path many times.
Is there a better, more efficient way to do this?
The simplest option is to draw your curve into a large image and then tile the image. But if you're tiling, it probably means the image would be too large, or you would have just drawn the path in the first place, right?
So you probably need to split your path up. The simplest approach is to split it up element by element using CGPathApply. For each element, you can check its bounding box and determine if that element falls in your bounds. If not, just keep track of the last end point. If so, then move to the last end point you saw and add the element to a new path for this tile (a sketch follows this answer). When you're done, each tile will draw its own path.
Technically you will "draw" things that go outside your bounds here (such as a line that extends beyond the tile), but this is much cheaper than it sounds. Core Graphics is going to clip single elements very easily. The goal is to avoid calculating elements that are not in your bounding box at all.
Be sure to cache the resulting path. You don't need to calculate the path for every tile; just the ones you're drawing. But avoid recalculating it every time the tile draws. Whenever the data changes, dump your cache. If there are a very large number of tiles, you can also use NSCache to optimize this even better.
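A rough sketch of that element-by-element split, assuming a path made of move and line elements (curves would need the same treatment using their control points for the bounding box). TileSplitInfo, TileSplitApplier, tileRect, and fullPath are placeholder names:

typedef struct {
    CGRect           tileRect;   // bounds of the tile being built
    CGMutablePathRef tilePath;   // sub-path accumulated for this tile
    CGPoint          lastPoint;  // end point of the previous element
} TileSplitInfo;

static void TileSplitApplier(void *info, const CGPathElement *element) {
    TileSplitInfo *split = (TileSplitInfo *)info;
    CGPoint *pts = element->points;

    switch (element->type) {
        case kCGPathElementMoveToPoint:
            split->lastPoint = pts[0];
            break;
        case kCGPathElementAddLineToPoint: {
            // Bounding box of the segment from lastPoint to pts[0].
            CGRect box = CGRectStandardize(CGRectMake(split->lastPoint.x,
                                                      split->lastPoint.y,
                                                      pts[0].x - split->lastPoint.x,
                                                      pts[0].y - split->lastPoint.y));
            if (CGRectIntersectsRect(box, split->tileRect)) {
                CGPathMoveToPoint(split->tilePath, NULL, split->lastPoint.x, split->lastPoint.y);
                CGPathAddLineToPoint(split->tilePath, NULL, pts[0].x, pts[0].y);
            }
            split->lastPoint = pts[0];
            break;
        }
        default:
            // Curve and close elements omitted in this sketch.
            break;
    }
}

// Build (and then cache) the sub-path for one tile:
TileSplitInfo split = { tileRect, CGPathCreateMutable(), CGPointZero };
CGPathApply(fullPath, &split, TileSplitApplier);
// split.tilePath now contains only the elements that touch tileRect.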
You don't show where the path gets created. If possible, you might try building the path up in the -drawLayer:inContext: method, only creating the portion of it needed for the tile being drawn.
As with all performance problems, you should use Instruments to profile your code and find out exactly where the bottlenecks are. Have you tried that already, and if so, what did you find?
As a side note, is there a reason you're using CGPath instead of UIBezierPath? From Apple's documentation:
For creating paths in iOS, it is recommended that you use UIBezierPath instead of CGPath functions unless you need some of the capabilities that only Core Graphics provides, such as adding ellipses to paths. For more on creating and rendering paths in UIKit, see "Drawing Shapes Using Bezier Paths."
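For what it's worth, switching doesn't require rebuilding your path: an existing CGPath can be wrapped in a UIBezierPath and stroked into the current UIKit graphics context, something like:

UIBezierPath *bezier = [UIBezierPath bezierPathWithCGPath:drawPath];
bezier.lineWidth = 2.0;   // arbitrary width for illustration
[bezier stroke];          // strokes into the current graphics context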

How to clip or subtract a CGMutablePathRef by another CGMutablePathRef?

I have a rectangular CGMutablePathRef and I want to subtract a circle that is centered exactly on one edge of that rectangle, so the edge no longer crosses the circle.
There seem to be no functions to intersect or subtract one path from another. How can I do it?
You need to look at the CGContext you are drawing into and use clipping on the context rather than on the path.
Apple's documentation is here.
If I understand your question, you can draw your rectangle into the context and then "clip out" the circle path. If you are filling the paths, you'll need to pay attention to the winding rules.
Alternatively, you can build your path with a series of commands such as CGPathAddLineToPoint and CGPathAddArcToPoint, and then stroke the path in your context. If you use this approach, you can then apply transforms to the final path for scaling and rotating as needed. Depending on what you are trying to accomplish, this may be the better approach.
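A rough sketch of the "clip out" idea from the first suggestion, assuming you are inside drawRect:; rectPath is your rectangle path and circleRect (the circle's bounding square) is a placeholder value:

CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);

// Build a clip region that is "everything except the circle": add a
// rect covering the whole clip area plus the circle, then apply the
// even-odd clip rule.
CGContextAddRect(ctx, CGContextGetClipBoundingBox(ctx));
CGContextAddEllipseInRect(ctx, circleRect);
CGContextEOClip(ctx);

// Anything drawn now is masked away inside the circle, so the
// rectangle's edge no longer appears to cross it.
CGContextAddPath(ctx, rectPath);
CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);
CGContextStrokePath(ctx);

CGContextRestoreGState(ctx);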

What's the most efficient way to draw a large CGPath?

Alright, I have a UIView that displays a rather wide CGPath. It can be scrolled horizontally within a UIScrollView. Right now the view uses a CATiledLayer, because it's more than 1024 pixels wide.
So my question is: is this efficient?
- (void)drawRect:(CGRect)rect {
    CGContextRef g = UIGraphicsGetCurrentContext();
    CGContextAddPath(g, path);
    CGContextSetStrokeColor(g, color);
    CGContextDrawPath(g, kCGPathStroke);
}
Essentially, I'm drawing the whole path every time a tile in the layer is drawn. Does anybody know if this is a bad idea, or is CGContext relatively smart about only drawing the parts of the path that are within the clipping rect?
The path is mostly set up in such a way that I could break it up into blocks similar in size and shape to the tiles, but that would require more work on my part, would introduce some redundancy among the paths (for shapes that cross tile boundaries), and would also take some calculation to determine which path or paths to draw.
Is it worth it to move in this direction, or is CGPath already drawing relatively quickly?
By necessity, Quartz must clip everything it draws against the dimensions of the CGContext it's drawing into. But it will still save CPU if you only send it geometry that is visible within that tile. If you do this by preparing multiple paths, one per tile, you're talking about doing that clipping once (when you create your multiple paths) versus Quartz doing it every time you draw a tile. The efficiency gain comes down to how complex your path is: if it's a few simple shapes, no big deal; if it's a vector-drawn map of the national road network, it could be huge! Of course, you have to trade this speedup off against the increased complexity of your code.
What you could do is use Instruments to see how much time is being spent in Quartz before you go crazy optimizing stuff.