I'm having an issue with drawing to areas outside of the MKMapRect passed to drawMapRect:zoomScale:inContext: in my MKOverlayView-derived class. I'm trying to draw a triangle for each coordinate in a collection, and the problem occurs when the coordinate is near the edge of the MKMapRect. See the image below for an example of the problem.
In the image, the light red boxes indicate the MKMapRect being rendered in each call to drawMapRect. The problem is illustrated in the red circle where, as you can see, only part of the triangle is being rendered. I'm assuming that it's being clipped to the MKMapRect, though the MKOverlayView documentation for drawMapRect:zoomScale:inContext: makes me think this shouldn't be happening.
From the documentation:
You should also not make assumptions that the view’s frame matches the bounding rectangle of the overlay. The view’s frame is actually bigger than the bounding rectangle to allow you to draw lines for things like roads that might be located directly on the border of that rectangle.
My current solution is to draw objects more than once if they fall in a map rect that is slightly larger than the one given to drawMapRect, but this causes me to draw some things more often than needed.
Does anyone know of a way to increase the size of the clipping area in drawMapRect so this isn't an issue? Any other suggestions are also welcome.
I ended up adding a buffer to the rect passed in to drawMapRect:zoomScale:inContext: and using that to determine which objects to draw. This results in more objects being drawn than needed, but not by much.
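A minimal sketch of that buffered-rect idea; the 50% padding and the annotations collection are illustrative assumptions rather than the actual values:

- (void)drawMapRect:(MKMapRect)mapRect
          zoomScale:(MKZoomScale)zoomScale
          inContext:(CGContextRef)context
{
    // Grow the rect so shapes that straddle a tile edge are also drawn by
    // the neighbouring tiles; negative insets enlarge an MKMapRect.
    MKMapRect paddedRect = MKMapRectInset(mapRect,
                                          -mapRect.size.width * 0.5,
                                          -mapRect.size.height * 0.5);

    for (id <MKAnnotation> annotation in self.annotations) {  // hypothetical collection
        MKMapPoint point = MKMapPointForCoordinate(annotation.coordinate);
        if (MKMapRectContainsPoint(paddedRect, point)) {
            // ... draw the triangle for this coordinate into `context` ...
        }
    }
}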
I'm putting an image into a CALayer that could be irregularly transparent:
theCardLayer.front = [CALayer layer];
theCardLayer.front.contents = (id)[cardDrawing CGImage];
In other words, it might be a square filling the layer or it might be an octagon that leaves the corners see-through.
I want to sometimes darken this layer, but without darkening the see-through bits. Any suggestions for how to do so in a programmatic way?
Take a look at CGBlendMode; a multiply blend, done by creating a new CGBitmapContext, drawing the image and then a grey fill, and assigning the resulting image to your layer, should work nicely.
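A rough sketch of that, using the names from the question; the CGContextClipToMask call is an extra step assumed here so the grey fill never lands on the transparent pixels:

CGImageRef source = [cardDrawing CGImage];
size_t width = CGImageGetWidth(source);
size_t height = CGImageGetHeight(source);
CGRect bounds = CGRectMake(0.0, 0.0, width, height);

CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, rgb,
                                         kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(rgb);

CGContextDrawImage(ctx, bounds, source);            // the original pixels
CGContextClipToMask(ctx, bounds, source);           // confine drawing to the opaque parts
CGContextSetBlendMode(ctx, kCGBlendModeMultiply);   // darken rather than cover
CGContextSetRGBFillColor(ctx, 0.6, 0.6, 0.6, 1.0);  // grey; smaller values darken more
CGContextFillRect(ctx, bounds);

CGImageRef darkened = CGBitmapContextCreateImage(ctx);
theCardLayer.front.contents = (id)darkened;
CGImageRelease(darkened);
CGContextRelease(ctx);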
You can use a CAShapeLayer. Set its path to the shape you want to draw. You can also use shape layers as masks for other layers, if that's what you want.
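If you go the mask route, a sketch along these lines might do it, assuming the octagon (or whatever the shape is) is available as a CGPathRef called cardOutlinePath:

CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = theCardLayer.front.bounds;
maskLayer.path = cardOutlinePath;           // opaque wherever the path fills

CALayer *dimmer = [CALayer layer];
dimmer.frame = theCardLayer.front.bounds;
dimmer.backgroundColor = [[UIColor colorWithWhite:0.0 alpha:0.4] CGColor];
dimmer.mask = maskLayer;                    // dimming stays inside the shape
[theCardLayer.front addSublayer:dimmer];    // remove it again to undo the darkening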
This topic has been scratched once or twice, but I am still puzzled. And Google was not friendly either.
Since Quartz allows for arbitrary coordinate systems using affine transforms, I want to be able to draw things such as floorplans using real-life coordinate, e.g. feet.
So basically, for the sake of an example, I want to scale the view so that when I draw a 10x10 rectangle (think a 10-inch box for example), I get a 60x60 pixels rectangle.
It works, except the rectangle I get is quite fuzzy. Another question here got an answer that explains why. However, I'm not sure I understood the reason, and moreover, I don't know how to fix it. Here is my code:
I set my coordinate system in my awakeFromNib custom view method:
- (void)awakeFromNib {
    CGAffineTransform scale = CGAffineTransformMakeScale(6.0, 6.0);
    self.transform = scale;
}
And here is my draw routine:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect r = CGRectMake(10., 10., 10., 10.);
    CGFloat lineWidth = 1.0;
    CGContextStrokeRectWithWidth(context, r, lineWidth);
}
The square I get is scaled just fine, but totally fuzzy. Playing with lineWidth doesn't help: when lineWidth is set smaller, it gets lighter, but not crisper.
So is there a way to set up a view to have a scaled coordinate system, so that I can use my domain coordinates? Or should I go back to implementing scaling in my drawing routines?
Note that this issue doesn't occur for translation or rotation.
Thanks
the [stroked] rectangle I get is quite fuzzy.
Usually, this is because you plotted the rectangle on whole-number co-ordinates and your line width is 1.
In PostScript (and thus in its descendants: AppKit, PDF, and Quartz), drawing units default to points, 1 point being exactly 1/72 inch. The Mac and iPhone currently* treat every such point as 1 pixel, regardless of the actual resolution of the screen(s), so, in a practical sense, points (by default, on the Mac and iPhone) are equal to pixels.
In PostScript and its descendants, integral co-ordinates run between points. 0, 0, for example, is the lower-left corner of the lower-left point. 1, 0 is the lower-right corner of that same point (and the lower-left corner of the next point to the right).
A stroke is centered on the path you're stroking. Thus, half will be inside the path, half outside.
In the (conceptually) 72-dpi world of the Mac, these two facts combine to produce a problem. If 1 pt is equal to 1 pixel, and you apply a 1-pt stroke between two pixels, then half of the stroke will hit each of those pixels.
Quartz, at least, will render this by painting the current color into both pixels at one-half of the color's alpha. It determines this by how much of the pixel is covered by the conceptual stroke; if you used a 1.5-pt line width, half of that is 0.75 pt, which is three-quarters of each 1-pt pixel, so the color will be rendered at 0.75 alpha. This, of course, goes to the natural conclusion: If you use a 2-pt line width, each pixel is completely covered, so the alpha will be 1. That's why you can see this effect with a 1-pt stroke and not a 2-pt stroke.
There are several workarounds:
Half-point translation: Exactly what it says on the box. You translate up and right by half a point, compensating for the aforementioned 1-pt-cut-in-half division.
This works in simple cases, but flakes out when you involve any other co-ordinate transformations except whole-point translations. That is to say, you can translate by 30, 20 and it'll still work, but if you translate by 33+1/3, 25.252525…, or if you scale or rotate at all, your half-point translation will be useless.
Inner stroke: Clip first, then double the line width (because you're only going to draw half of it), then stroke.
This can require gstate juggling if you have a lot of other drawing to do, since you don't want that clipping path affecting your other drawing.
Outer stroke: Essentially the same as an inner stroke, except that you reverse the path before clipping.
Can be better (less gstate juggling) than an inner stroke if you're sure that the paths you want to stroke won't overlap. On the other hand, if you also want to fill the path, the gstate juggling returns.
*This won't last forever. Apple's been dropping hints for some time that they're going to change at least the Mac's drawing resolution at some point. The API foundation for such a change is pretty much all there now; it's all a matter of Apple throwing the switch.
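To make the first workaround concrete, here is a minimal sketch of a half-point translation in an otherwise untransformed drawRect:; the rectangle itself is just an arbitrary example:

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Shift by half a point so whole-number coordinates land on pixel
    // centres instead of the boundaries between pixels.
    CGContextTranslateCTM(context, 0.5, 0.5);
    CGContextStrokeRectWithWidth(context, CGRectMake(10.0, 10.0, 60.0, 60.0), 1.0);
}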
Well, as often, explaining the issue led me to a solution.
The problem is that the view's transform property is applied after the view has been drawn into a bitmap. The scaling transform has to be applied before drawing, i.e. in the drawRect method. So scratch the awakeFromNib I gave, and here is a correct drawRect:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGAffineTransform scale = CGAffineTransformMakeScale(6.0, 6.0);
    CGContextConcatCTM(context, scale);
    CGRect r = CGRectMake(10., 10., 10., 10.);
    CGFloat lineWidth = 0.1;
    CGContextStrokeRectWithWidth(context, r, lineWidth);
}
Ultimately I'm working on a box blur function for use on iPhone.
That function would take a UIImage and draw transparent copies, first to the sides, then take that image and draw transparent copies above and below, returning a nicely blurred image.
Reading the Drawing with Quartz 2D Programming Guide, I see that it recommends using CGLayers for this kind of operation.
The example code in the guide is a little dense for me to understand, so I would like someone to show me a very simple example of taking a UIImage and converting it to a CGLayer that I would then draw copies of and return as a UIImage.
It would be OK if values were hard-coded (for simplicity). This is just for me to wrap my head around, not for production code.
UIImage *myImage = …;
CGLayerRef layer = CGLayerCreateWithContext(destinationContext, myImage.size, /*auxiliaryInfo*/ NULL);
if (layer) {
    CGContextRef layerContext = CGLayerGetContext(layer);
    CGContextDrawImage(layerContext, (CGRect){ CGPointZero, myImage.size }, myImage.CGImage);
    //Use CGContextDrawLayerAtPoint or CGContextDrawLayerInRect as many times as necessary. Whichever function you choose, be sure to pass destinationContext to it—you can't draw the layer into itself!
    CFRelease(layer);
}
That is technically my first ever iPhone code (I only program on the Mac), so beware. I have used CGLayer before, though, and as far as I know, Quartz is no different on the iPhone.
… and return as a UIImage.
I'm not sure how to do this part, having never worked with UIKit.
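One possible way to do that last step is to draw the finished CGLayer into a UIKit image context (before the CFRelease above) and grab the result; the flip accounts for UIKit's image contexts being upside down relative to Quartz. A sketch, assuming the layer and myImage from the snippet above:

UIGraphicsBeginImageContext(myImage.size);
CGContextRef imageContext = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(imageContext, 0.0, myImage.size.height);
CGContextScaleCTM(imageContext, 1.0, -1.0);               // back to Quartz orientation
CGContextDrawLayerAtPoint(imageContext, CGPointZero, layer);
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();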
Alright, I have a UIView that displays a rather wide CGPath and can be scrolled horizontally within a UIScrollView. Right now, I have the view using a CATiledLayer, because it's more than 1024 pixels wide.
So my question is: is this efficient?
- (void)drawRect:(CGRect)rect {
    CGContextRef g = UIGraphicsGetCurrentContext();
    CGContextAddPath(g, path);
    CGContextSetStrokeColor(g, color);
    CGContextDrawPath(g, kCGPathStroke);
}
Essentially, I'm drawing the whole path every time a tile in the layer is drawn. Does anybody know if this is a bad idea, or is CGContext relatively smart about only drawing the parts of the path that are within the clipping rect?
The path is mostly set up in such a way that I could break it up into blocks that are similar in size and shape to the tiles, but it would require more work on my part, would require some redundancy amongst the paths (for shapes that cross tile boundaries), and would also take some calculating to find which path or paths to draw.
Is it worth it to move in this direction, or is CGPath already drawing relatively quickly?
By necessity Quartz must clip everything it draws against the dimensions of the CGContext it's drawing into. But it will still save CPU if you only send it geometry that is visible within that tile. If you do this by preparing multiple paths, one per tile, you're talking about doing that clipping once (when you create your multiple paths) versus Quartz doing it every time you draw a tile. The efficiency gain will come down to how complex your path is: if it's a few simple shapes, no big deal; if it's a vector drawn map of the national road network, it could be huge! Of course you have to trade this speedup off against the increased complexity of your code.
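As a rough sketch of that per-tile idea (the tilePaths array of pre-split CGPaths is hypothetical, and the colour handling is kept the same as in the question):

- (void)drawRect:(CGRect)rect {
    CGContextRef g = UIGraphicsGetCurrentContext();
    // For a CATiledLayer-backed view, the clip bounding box is the tile
    // currently being drawn.
    CGRect visible = CGContextGetClipBoundingBox(g);
    for (NSValue *wrapper in tilePaths) {
        CGPathRef subpath = (CGPathRef)[wrapper pointerValue];
        // A bounding-box test is cheap next to stroking the geometry.
        if (CGRectIntersectsRect(visible, CGPathGetBoundingBox(subpath))) {
            CGContextAddPath(g, subpath);
        }
    }
    CGContextSetStrokeColor(g, color);
    CGContextDrawPath(g, kCGPathStroke);
}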
What you could do is use Instruments to see how much time is being spent in Quartz before you go crazy optimizing stuff.