iPhone: Transforming an image using Quartz 2D

I am trying to apply some transformations to images using a CGContextRef. I am using the CGContextTranslateCTM, CGContextScaleCTM and CGContextRotateCTM functions, but to keep things simple let's focus on just the first. I was wondering why the following code produces exactly the original image? Am I missing something?
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef g = CGBitmapContextCreate((void *)pixelData,
                                       width,
                                       height,
                                       RGBA_8_BIT,
                                       bytesPerRow,
                                       colorSpace,
                                       kCGImageAlphaPremultipliedLast);
CGContextSetShouldAntialias(g, YES);
CGContextSetInterpolationQuality(g, kCGInterpolationHigh);
CGContextTranslateCTM( g,translateX, translateY );
CGImageRef tempImg = CGBitmapContextCreateImage (g);
CGContextDrawImage( g, CGRectMake (0, 0, width, height), tempImg );
CGContextRelease(g);
CGColorSpaceRelease( colorSpace );
Also, after translating, how do I draw another image over this one, but with partial transparency (e.g. alpha = 0.5)?
I searched a lot but didn't find an answer; any help is appreciated... :)
Please note that I am creating the context from pixelData, and that tempImg is created after the translation. There is nothing wrong with the initialization, as the original image is currently being produced correctly; the issue is with the translation, I suppose.

Transformations to the graphics state only affect subsequent drawing operations - they don't change the existing image data. If you want to apply transforms to an image, try something like this:
1. Create an empty CGContext (on iPhone, use UIGraphicsBeginImageContext).
2. Translate, scale, or rotate its graphics state.
3. Draw the existing image into it.
4. Get the image from the new CGContext (on iPhone, use UIGraphicsGetImageFromCurrentImageContext).
When you perform step 3, the existing image is drawn into your new graphics context with the transformations applied. The trick here is that in order to apply the transformations, we have to actually draw something.
You can do some really cool things with transformations this way. You can draw half your image, apply some transforms, and draw some more.
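For example, here is a minimal sketch of the four steps above (sourceImage, tx and ty are placeholders for your own image and translation offsets, not names from the question):
UIGraphicsBeginImageContext(sourceImage.size); // 1. empty image context
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, tx, ty); // 2. transform the graphics state
[sourceImage drawInRect:CGRectMake(0, 0, sourceImage.size.width, sourceImage.size.height)]; // 3. draw the existing image, now subject to the CTM
UIImage *result = UIGraphicsGetImageFromCurrentImageContext(); // 4. grab the transformed result
UIGraphicsEndImageContext();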

As noted in other answers, transformations only apply to subsequent drawing operations; they don't affect the pixel buffer you started with.
So you need a drawing operation. The solution is to create a CGImage; drawing that image is a drawing operation, so it will be subject to the current transformation matrix (CTM).
Step-by-step:
Create the context with empty pixel data. (If you pass NULL for the buffers, Quartz should create them for you. That works on the Mac, anyway.)
Create the image with the pixel data you want to draw transformed.
Transform the CTM in the context.
Draw the image.
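The same idea in pure Core Graphics might look roughly like this (sourceImage, width, height, translateX and translateY are placeholders; passing NULL and 0 lets Quartz allocate the pixel buffer and pick the row stride; step 2, creating the CGImage from your own pixel data, is assumed already done in sourceImage):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast); // context with empty pixel data
CGContextTranslateCTM(ctx, translateX, translateY); // transform the CTM first
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), sourceImage); // the drawing operation is transformed
CGImageRef transformedImage = CGBitmapContextCreateImage(ctx); // read the result back out
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);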

You have to call CGBitmapContextCreateImage() after you draw the image.
Then you can draw another image on top of the first one and call CGBitmapContextCreateImage() again to get the composited image. You can set the alpha using CGContextSetAlpha(ctx, alphaValue).
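For the 50% transparency part of the original question, the second draw might look roughly like this, using the g context from the question's code (secondImg is a placeholder for the CGImage you want to composite on top):
CGContextSaveGState(g);
CGContextSetAlpha(g, 0.5); // subsequent drawing is composited at 50% opacity
CGContextDrawImage(g, CGRectMake(0, 0, width, height), secondImg);
CGContextRestoreGState(g);
CGImageRef combinedImg = CGBitmapContextCreateImage(g); // grab the composited result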

Related

How to create a CGLayer from a UIView for off-screen drawing

I have read what I believe to be the relevant parts of the Quartz 2D Programming Guide, but cannot find an answer to the following (they don't seem to talk a lot about iOS in the document):
My application displays a drawing in a UIView. Every now and then I have to update the drawing in some way, e.g. change the fill colour of one of the shapes (I keep CGPathRefs to the important shapes to be able to redraw them with a different fill colour later). As described in the Section "Drawing With a CGLayer" on page 169 of the aforementioned document, I was thinking of drawing the entire drawing into a CGContext that I would obtain from a CGLayer, like so:
CGContextRef offscreenContext = CGLayerGetContext(offscreenLayer);
Then I could do my updating off-screen into the CGContext and draw the CGLayer into my UIView in the UIView's drawRect: method, like so:
CGContextDrawLayerAtPoint(viewContext, CGPointZero, offscreenLayer);
The problem I am having is, where do I get my CGLayer from? My understanding is that I have to make it using CGLayerCreateWithContext and supply a CGContext as a parameter, from which it inherits most of its properties. Obviously, the right context would be the context of the UIView, which I am getting with
CGContextRef viewContext = UIGraphicsGetCurrentContext();
but if I am not mistaken, I can only get that within the drawRect: method and it is not valid to assume that the context I am given there will be the same one next time the method is called, i.e. I can only use that CGContext locally within the method.
So, how can I get a CGContext that I can use to initialise my CGLayer to create an offscreen CGContext to draw into and then draw the entire layer back into my UIView's CGContext?
PS: While you're at it; if anything above does not make sense or is not sane, please let me know. I am just starting to get my head around Quartz 2D.
First of all, if you are working in an iOS environment, I think you are right. The documentation clearly says that the only way to obtain a CGContextRef is by
CGContextRef ctx = UIGraphicsGetCurrentContext();
Then you use that context for creating the CGLayer with
CGLayerRef layer = CGLayerCreateWithContext(ctx, self.bounds.size, NULL);
And if you want to draw on that layer, you have to draw with the context you get from the layer (it is somewhat different from the context you passed in earlier to create the CGLayer). I'm guessing CGLayerCreateWithContext saves the information it can get from the context passed in, but not everything (one example is the color space information, which you have to re-specify when you fill something with the context from the CGLayer).
You can get the CGLayer context reference from the CGLayerGetContext() function and use that to draw.
CGContextRef layerCtx = CGLayerGetContext(layer);
CGContextBeginPath(layerCtx);
CGContextMoveToPoint(layerCtx, -10, 10);
CGContextAddLineToPoint(layerCtx, 100, 10);
CGContextAddLineToPoint(layerCtx, 100, 100);
CGContextClosePath(layerCtx);
One point that I found out is that when you draw something offscreen, it is automatically clipped (which makes sense, so Quartz doesn't draw things that can't be seen). But when you later move the layer (using a matrix transformation), the part that was clipped away is missing.
One last thing: if you save the reference to a layer in a variable and later want to draw it, you can use the CGContextDrawLayerAtPoint() function, like this:
CGContextDrawLayerAtPoint(ctx, (CGPoint) {newPointX, newPointY}, layer);
It will sort of "stamp" or "draw" the layer at the (newPointX, newPointY) coordinate.
I hope that answers your question; if not, please let me know.
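Putting the answer together, one way to arrange it (a sketch only, assuming an ivar named offscreenLayer of type CGLayerRef that starts out NULL) is to create and fill the layer the first time drawRect: runs, then just stamp it on every subsequent pass:
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    if (offscreenLayer == NULL) {
        // Create the layer from the view's context once and keep it for later redraws.
        offscreenLayer = CGLayerCreateWithContext(ctx, self.bounds.size, NULL);
        CGContextRef layerCtx = CGLayerGetContext(offscreenLayer);
        CGContextSetRGBFillColor(layerCtx, 1.0, 0.0, 0.0, 1.0); // re-specify color info for the layer's context
        CGContextBeginPath(layerCtx);
        CGContextMoveToPoint(layerCtx, 10, 10);
        CGContextAddLineToPoint(layerCtx, 100, 10);
        CGContextAddLineToPoint(layerCtx, 100, 100);
        CGContextClosePath(layerCtx);
        CGContextFillPath(layerCtx);
    }
    // "Stamp" the prepared layer into the view's context.
    CGContextDrawLayerAtPoint(ctx, CGPointZero, offscreenLayer);
}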

UIImage stretchableImageWithLeftCapWidth doesn't work when drawing to a quartz context

I'm drawing an image with stretchable caps defined like the following:
[bubbleImg stretchableImageWithLeftCapWidth:17 topCapHeight:12];
And that was working fine when inside a UIImageView but it doesn't work when drawing the image like the following:
CGContextDrawImage(context, CGRectMake(x, y, width, height), image.CGImage);
.. which I kind of suspected wouldn't work. Is there a solution for stretchable cap images when drawing to a CG context?
When drawing a stretchable image, don't use the CGImage property, because it returns the underlying unstretched bitmap. Instead use UIImage's drawing methods directly.
If you are drawing in an unusual context (outside of UIView/CALayer drawing methods in a pure Core Graphics context), you may need to wrap UIKit drawing in UIGraphicsPushContext()/UIGraphicsPopContext() calls.
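A rough sketch of both points together (bubbleImg, context, and the geometry come from the question; x, y, width and height stand in for your own frame):
UIImage *stretched = [bubbleImg stretchableImageWithLeftCapWidth:17 topCapHeight:12];
UIGraphicsPushContext(context); // only needed if 'context' is not already the current UIKit context
[stretched drawInRect:CGRectMake(x, y, width, height)]; // the caps are honored here
UIGraphicsPopContext();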

How do you display a pixel of color C at iPhone coordinates (x, y)?

I've read all Apple documentation on the iPhone development and there's nothing that describes how to do it.
Drawing on the iPhone is not pixel-oriented in this way. You can draw a rectangle of size 1x1 using the Quartz API if you want, in your drawRect: method:
// Create the color from a device RGB color space (available on iOS).
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGFloat components[4] = { r, g, b, a };
CGColorRef colorRef = CGColorCreate(colorSpace, components);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, colorRef);
// This next line does the drawing; repeat for as many x,y pairs as you want.
CGContextFillRect(context, CGRectMake(x, y, 1, 1));
CGColorRelease(colorRef);
CGColorSpaceRelease(colorSpace);
But you wouldn't want to fill the entire screen using this method; it's crazy slow. You should think in terms of vectors: lines, rectangles, other polygons. This applies to both Quartz (like the above) and OpenGL.
If you really are doing something that ends up rendering pixel-at-a-time filling a whole screen, your best bet may be using Quartz to create an offscreen bitmap context, writing to the bitmap memory directly per-pixel, and then drawing the whole thing to the drawing context inside drawRect:. You could also use OpenGL to draw a series of single-pixel sized GL_POINTs (instead of triangles), which may be faster depending on what you're doing.
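A rough sketch of that offscreen-bitmap idea (the buffer layout, sizes, and names here are illustrative, not a definitive implementation):
size_t width = 320, height = 480;
size_t bytesPerRow = width * 4; // RGBA, 8 bits per component
uint8_t *pixels = calloc(height, bytesPerRow);

// Write pixels directly, e.g. set (x, y) to opaque red.
size_t x = 10, y = 20;
size_t offset = y * bytesPerRow + x * 4;
pixels[offset + 0] = 255; // R
pixels[offset + 1] = 0;   // G
pixels[offset + 2] = 0;   // B
pixels[offset + 3] = 255; // A

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapCtx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef image = CGBitmapContextCreateImage(bitmapCtx);
// Later, inside drawRect:, draw the whole bitmap in one call:
// CGContextDrawImage(UIGraphicsGetCurrentContext(), self.bounds, image);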

How to reset the context to the original rectangle after clipping it for drawing?

I am trying to draw a sequence of pattern images (different repeated patterns in one view).
So what I did is this, in a loop:
CGContextRef context = UIGraphicsGetCurrentContext();
// clip to the drawing rectangle to draw the pattern for this portion of the view
CGContextClipToRect(context, drawingRect);
// the first call here works fine... but for the next nothing will be drawn
CGContextDrawTiledImage(context, CGRectMake(0, 0, 2, 31), [img CGImage]);
I think that after I've clipped the context to draw the pattern in that specific rectangle, I have cut a snippet out of the big canvas, and the next time around the rest of the canvas is gone, so I can't cut out another snippet. So I must somehow reset that clipping in order to be able to draw another pattern somewhere else?
Edit: In the documentation I found this for CGContextClip: "... Therefore, to re-enlarge the paintable area by restoring the clipping path to a prior state, you must save the graphics state before you clip and restore the graphics state after you’ve completed any clipped drawing. ..."
Well then, how do I save the graphics state before clipping, and how do I restore it afterwards?
The functions you are looking for are:
CGContextSaveGState(context);
and
CGContextRestoreGState(context);
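Applied to the loop from the question, that looks roughly like this (drawingRects, patternImages and rectCount are placeholders for your own per-segment data):
CGContextRef context = UIGraphicsGetCurrentContext();
for (NSUInteger i = 0; i < rectCount; i++) {
    CGContextSaveGState(context); // remember the unclipped state
    CGContextClipToRect(context, drawingRects[i]);
    CGContextDrawTiledImage(context, CGRectMake(0, 0, 2, 31), [patternImages[i] CGImage]);
    CGContextRestoreGState(context); // the clip is discarded, ready for the next rectangle
}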

iPhone: How to use CGContextConcatCTM for saving a transformed image properly?

I am making an iPhone application that loads an image from the camera, and then the user can select a second image from the library, move/scale/rotate that second image, and then save the result. I use two UIImageViews in IB as placeholders, and then apply transformations while touching/pinching.
The problem comes when I have to save both images together. I use a rect of the size of the first image and pass it to UIGraphicsBeginImageContext. Then I tried to use CGContextConcatCTM but I can't understand how it works:
CGRect rect = CGRectMake(0, 0, img1.size.width, img1.size.height); // img1 from camera
UIGraphicsBeginImageContext(rect.size); // Start drawing
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextClearRect(ctx, rect); // Clear whole thing
[img1 drawAtPoint:CGPointZero]; // Draw background image at 0,0
CGContextConcatCTM(ctx, img2.transform); // Apply the transformations of the 2nd image
But what do I need to do next? What information is being held in the img2.transform matrix? The documentation for CGContextConcatCTM doesn't help me that much, unfortunately.
Right now I'm trying to solve it by calculating the points and the angle using trigonometry (with the help of this answer), but since the transformation is already there, there has to be an easier and more elegant way to do this, right?
Take a look at this excellent answer: iOS UIImagePickerController result image orientation after upload. You need to create a bitmap/image context, draw into it, and get the resulting image out. You can then save that.
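For the composition itself, a common approach is to translate to the second view's center before concatenating its transform, then draw the image centered on that origin. A hedged sketch, picking up right after [img1 drawAtPoint:CGPointZero] in the question's code (it assumes imgView2 is the UIImageView whose transform the user manipulated, img2 is its image, and img1 is displayed 1:1 in the same coordinate space as the image views):
CGContextSaveGState(ctx);
// Move the origin to where the second view's center ended up, apply its
// transform there, then draw the image centered on that origin.
CGContextTranslateCTM(ctx, imgView2.center.x, imgView2.center.y);
CGContextConcatCTM(ctx, imgView2.transform);
[img2 drawInRect:CGRectMake(-img2.size.width / 2.0, -img2.size.height / 2.0, img2.size.width, img2.size.height)];
CGContextRestoreGState(ctx);

UIImage *combined = UIGraphicsGetImageFromCurrentImageContext(); // the flattened result to save
UIGraphicsEndImageContext();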