Ultimately I'm working on a box blur function for use on iPhone.
That function would take a UIImage, draw transparent copies of it first to either side, then take that result and draw transparent copies above and below, returning a nicely blurred image.
The Drawing with Quartz 2D Programming Guide recommends using CGLayers for this kind of operation.
The example code in the guide is a little dense for me to understand, so I would like someone to show me a very simple example of taking a UIImage and converting it to a CGLayer that I would then draw copies of and return as a UIImage.
It would be OK if values were hard-coded (for simplicity). This is just for me to wrap my head around, not for production code.
UIImage *myImage = …;
// destinationContext is whatever context you ultimately want to draw into
// (for example, a bitmap image context; see below).
CGLayerRef layer = CGLayerCreateWithContext(destinationContext, myImage.size, /*auxiliaryInfo*/ NULL);
if (layer) {
    CGContextRef layerContext = CGLayerGetContext(layer);
    // Note that CGContextDrawImage uses Quartz's bottom-left origin, so the image
    // comes out vertically flipped unless you flip the layer context first.
    CGContextDrawImage(layerContext, (CGRect){ CGPointZero, myImage.size }, myImage.CGImage);
    // Use CGContextDrawLayerAtPoint or CGContextDrawLayerInRect as many times as
    // necessary. Whichever function you choose, be sure to pass destinationContext
    // to it; you can't draw the layer into itself!
    CFRelease(layer);
}
That is technically my first ever iPhone code (I only program on the Mac), so beware. I have used CGLayer before, though, and as far as I know, Quartz is no different on the iPhone.
… and return as a UIImage.
I'm not sure how to do this part, having never worked with UIKit.
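For what it's worth, here is a minimal, untested sketch of that part, assuming an image-sized bitmap context that doubles as the destinationContext from the snippet above:

UIGraphicsBeginImageContextWithOptions(myImage.size, NO, 0.0); // 0.0 = device scale
CGContextRef destinationContext = UIGraphicsGetCurrentContext();
// … create the CGLayer against destinationContext and draw into it, as above …
CGContextDrawLayerAtPoint(destinationContext, CGPointZero, layer);
UIImage *blurredImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();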
I am working on map functionality. The map is built up out of multiple CAShapeLayers with CGPaths computed from calculated coordinates, and I have a clipping problem. Look at the screenshot below, where Alaska is badly clipped. The coordinates of the Alaska path extend beyond the bounds of my container layer; indeed, if I make the container layer big enough, the clipping effect is gone (of course).
You see a dark line because the bottom edge of Alaska is solid from left to right. The line is also darker than the rest of the map because the map has opacity, so overlapping areas add up.
I drilled down into the problem and narrowed it down to a single big polygon (no other polygons are responsible for the clipping error).
As a workaround, I make the layer bigger and then shrink the UIView again, so the line is hidden.
I'd like to know what is causing the issue instead of relying on workarounds.
After a lot of digging, I managed to find an answer to my own question.
I was rendering the layers to a UIImage for improved performance. The background layer was scaled up by a UIScrollView, and then several things went wrong:
- Apparently, setting masksToBounds:YES has no effect when using renderInContext:, and neither does the mask property of a CALayer. masksToBounds (or clipsToBounds) only applies to child layers.
- When scaling a bitmap, be sure to pass an integral value as the scale argument of UIGraphicsBeginImageContextWithOptions. Otherwise the image ends up with a fractional size, e.g. 24.2323 x 34.3290. (By the way, that scale argument exists to provide Retina-level detail, but it can be misused to zoom in on CAShapeLayer drawings.)
- When you use a fractional-size image as a background layer, you get distortion at the edges.
The clipping effect disappeared after I updated my layer-to-image function. This one did the trick:
- (UIImage *)getImageWithSize:(CGSize)size opaque:(BOOL)opaque contentScale:(CGFloat)scale
{
    // Force integral values; fractional sizes/scales cause edge distortion.
    size = CGSizeMake(ceilf(size.width), ceilf(size.height));
    scale = roundf(scale);

    UIGraphicsBeginImageContextWithOptions(size, opaque, scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self renderInContext:context];
    UIImage *outputImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImg;
}
Whether I used ceilf, roundf, or floorf didn't really matter, as long as the fractions were lost.
Sorry if my stupidity wasted any of your time, but perhaps others will run into the same issue.
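For context, this method lives in a CALayer category (it calls renderInContext: on self), so a call site might look like the following sketch; mapView is just a placeholder name:

CALayer *mapLayer = mapView.layer; // the layer you want to flatten
UIImage *flattened = [mapLayer getImageWithSize:mapLayer.bounds.size
                                         opaque:NO
                                   contentScale:[UIScreen mainScreen].scale];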
I have read what I believe to be the relevant parts of the Quartz 2D Programming Guide, but cannot find an answer to the following (the guide doesn't seem to say much about iOS):
My application displays a drawing in a UIView. Every now and then I have to update the drawing in some way, e.g. change the fill colour of one of the shapes (I keep CGPathRefs to the important shapes to be able to redraw them with a different fill colour later). As described in the Section "Drawing With a CGLayer" on page 169 of the aforementioned document, I was thinking of drawing the entire drawing into a CGContext that I would obtain from a CGLayer, like so:
CGContextRef offscreenContext = CGLayerGetContext(offscreenLayer);
Then I could do my updating off-screen into the CGContext and draw the CGLayer into my UIView in the UIView's drawRect: method, like so:
CGContextDrawLayerAtPoint(viewContext, CGPointZero, offscreenLayer);
The problem I am having is: where do I get my CGLayer from? My understanding is that I have to make it using CGLayerCreateWithContext, supplying a CGContext as a parameter from which it inherits most of its properties. Obviously, the right context would be the context of the UIView, which I am getting with
CGContextRef viewContext = UIGraphicsGetCurrentContext();
but if I am not mistaken, I can only get that within the drawRect: method, and it is not valid to assume that the context I am given there will be the same one the next time the method is called. In other words, I can only use that CGContext locally within the method.
So, how can I get a CGContext that I can use to initialise my CGLayer to create an offscreen CGContext to draw into, and then draw the entire layer back into my UIView's CGContext?
PS: While you're at it, if anything above does not make sense or is not sane, please let me know. I am just starting to get my head around Quartz 2D.
First of all, if you are working in an iOS environment, I think you are right. The documentation clearly says that the way to obtain the current CGContextRef is
CGContextRef ctx = UIGraphicsGetCurrentContext();
Then you use that context to create the CGLayer with
CGLayerRef layer = CGLayerCreateWithContext(ctx, mySize, NULL); // mySize: the layer's size in user space units
And if you want to draw on that layer, you have to draw with the context you get from the layer (it is somewhat different from the context you passed in to create the CGLayer). I'm guessing CGLayerCreateWithContext saves some, but not all, of the information it can get from the context passed in. (One example is the color space information, which you have to re-specify when you fill something using the CGLayer's context.)
You can get the CGLayer context reference from the CGLayerGetContext() function and use that to draw.
CGContextRef layerCtx = CGLayerGetContext(layer);
CGContextBeginPath(layerCtx);
CGContextMoveToPoint(layerCtx, -10, 10); // note: starts offscreen (negative x)
CGContextAddLineToPoint(layerCtx, 100, 10);
CGContextAddLineToPoint(layerCtx, 100, 100);
CGContextClosePath(layerCtx);
CGContextStrokePath(layerCtx); // without a stroke or fill, the path is never drawn
One thing I found out: when you draw something offscreen, it is automatically clipped (which makes sense, so that things that can't be seen aren't drawn). But when you then move the layer (using a matrix transformation), the clipped part does not show up; it's missing.
One last thing: if you save the reference to a layer in a variable and later on you want to draw it, you can use the CGContextDrawLayerAtPoint() function, like
CGContextDrawLayerAtPoint(ctx, (CGPoint){ newPointX, newPointY }, layer);
It will sort of "stamp" or "draw" the layer at that (newPointX, newPointY) coordinate.
I hope that answers your question; if it doesn't, please let me know.
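To tie the pieces together, here is a rough sketch of how this could sit in a UIView subclass; the offscreenLayer ivar (a CGLayerRef) and the red rectangle are just placeholder examples:

- (void)drawRect:(CGRect)rect
{
    CGContextRef viewContext = UIGraphicsGetCurrentContext();
    if (offscreenLayer == NULL) {
        // Create the layer lazily on the first draw, using the view's context.
        offscreenLayer = CGLayerCreateWithContext(viewContext, self.bounds.size, NULL);
        CGContextRef layerCtx = CGLayerGetContext(offscreenLayer);
        CGContextSetRGBFillColor(layerCtx, 1, 0, 0, 1); // re-specify the color
        CGContextFillRect(layerCtx, CGRectMake(10, 10, 50, 50));
    }
    // Stamp the cached layer into the view; only this part runs on later redraws.
    CGContextDrawLayerAtPoint(viewContext, CGPointZero, offscreenLayer);
}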
I've read all of Apple's documentation on iPhone development, and nothing describes how to do it.
Drawing on the iPhone is not pixel-oriented in this way. You can draw a rectangle of size 1x1 using the Quartz API in your drawRect: method if you want:
CGContextRef context = UIGraphicsGetCurrentContext();
// CGColorCreateGenericRGB is Mac-only; on iOS, set the fill color directly:
CGContextSetRGBFillColor(context, r, g, b, a);
// This next line does the drawing; repeat for as many x,y pairs as you want.
CGContextFillRect(context, CGRectMake(x, y, 1, 1));
But you wouldn't want to fill the entire screen using this method; it's crazy slow. You should think in terms of vectors: lines, rectangles, and other polygons. This applies to both Quartz (like the above) and OpenGL.
If you really are doing something that ends up rendering pixel-at-a-time to fill a whole screen, your best bet may be to use Quartz to create an offscreen bitmap context, write to the bitmap memory directly per-pixel, and then draw the whole thing into the drawing context inside drawRect:. You could also use OpenGL to draw a series of single-pixel-sized GL_POINTs (instead of triangles), which may be faster depending on what you're doing.
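A rough sketch of that bitmap-context route (the dimensions and the pixel pattern are just placeholders):

size_t width = 320, height = 480;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                            colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Write pixels directly; each pixel is 4 bytes (RGBA, premultiplied alpha).
uint8_t *pixels = CGBitmapContextGetData(bitmap);
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        uint8_t *p = pixels + (y * width + x) * 4;
        p[0] = x % 256; p[1] = y % 256; p[2] = 0; p[3] = 255; // some pattern
    }
}

// Then, inside drawRect:, hand the whole thing to Quartz in one call:
CGImageRef image = CGBitmapContextCreateImage(bitmap);
CGContextDrawImage(UIGraphicsGetCurrentContext(), self.bounds, image);
CGImageRelease(image);
// Release the bitmap context with CGContextRelease() when you're done with it.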
I am making an iPhone application that loads an image from the camera, and then the user can select a second image from the library, move/scale/rotate that second image, and then save the result. I use two UIImageViews in IB as placeholders, and then apply transformations while touching/pinching.
The problem comes when I have to save both images together. I use a rect of the size of the first image and pass it to UIGraphicsBeginImageContext, and then I tried to use CGContextConcatCTM, but I can't understand how it works:
CGRect rect = CGRectMake(0, 0, img1.size.width, img1.size.height); // img1 from camera
UIGraphicsBeginImageContext(rect.size); // Start drawing
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextClearRect(ctx, rect); // Clear whole thing
[img1 drawAtPoint:CGPointZero]; // Draw background image at 0,0
// The transform lives on the UIImageView (UIImage has no transform property):
CGContextConcatCTM(ctx, imgView2.transform); // Apply the 2nd image view's transformations
But what do I need to do next? What information is held in that transform matrix? The documentation for CGContextConcatCTM unfortunately doesn't help me much.
Right now I'm trying to solve it by calculating the points and the angle using trigonometry (with the help of this answer), but since the transformation is already there, there has to be an easier and more elegant way to do this, right?
Take a look at this excellent answer: you need to create a bitmap/image context, draw into it, and get the resultant image out. You can then save that. See: iOS UIImagePickerController result image orientation after upload
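In that spirit, here is a sketch of how the view's transform could be applied while flattening. It assumes imgView2 is the UIImageView holding the second image, that both image views share the background image's coordinate space, and that the transform pivots around the view's center (the default anchor point):

UIGraphicsBeginImageContext(img1.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[img1 drawAtPoint:CGPointZero]; // background image at 0,0

// A view's transform is applied about its center, so move the origin there,
// concatenate the transform, then draw the image centered on that point.
CGContextTranslateCTM(ctx, imgView2.center.x, imgView2.center.y);
CGContextConcatCTM(ctx, imgView2.transform);
CGContextTranslateCTM(ctx,
                      -imgView2.bounds.size.width / 2,
                      -imgView2.bounds.size.height / 2);
[img2 drawInRect:imgView2.bounds];

UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();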
I have created a CGContext that is 800 pixels wide and 1200 pixels high. Over this context I have created a CGLayer that has been transformed (scaled, translated, and rotated). Because the CGLayer is bigger than the context and has been translated, rotated, etc., not all of the CGLayer falls inside the context. See the next picture:
(screenshot: layer and context)
As you can see in the picture, some parts of the layer fall outside the context area. When I render the final composition using
CGContextDrawLayerInRect(context, superRect, objectLayer);
it renders the full layer, including those unnecessary parts outside the context.
My problem: if I can make it draw just the relevant parts inside the context, I can make it render faster and save memory.
Is there any way to do that?
Note: the layer contains transparency.
Please refrain from giving solutions that don't involve CGLayers.
Thanks in advance.
You can clip the context using CGContextClip, CGContextClipToMask, or CGContextClipToRect.
But I think it's actually cheaper/faster to simply 'dump' the pixels into a context than to calculate the clipping bounds and 'draw less'.
The surplus drawing doesn't (normally) use up extra memory.
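If you do want to clip, a minimal sketch using the 800 x 1200 context from the question could look like:

CGContextSaveGState(context);
// Restrict drawing to the context's own bounds before stamping the layer.
CGContextClipToRect(context, CGRectMake(0, 0, 800, 1200));
CGContextDrawLayerInRect(context, superRect, objectLayer);
CGContextRestoreGState(context);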
Can you use a CATiledLayer? It should lazy-load tiles in squares, à la Google Maps...
// In your UIView subclass, back the view with a CATiledLayer:
+ (Class)layerClass
{
    return [CATiledLayer class];
}

- (id)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
        tiledLayer.tileSize = CGSizeMake(256, 256); // example values; tune for your content
        tiledLayer.levelsOfDetail = 4;
        tiledLayer.levelsOfDetailBias = 2;
    }
    return self;
}