UIImage stretchableImageWithLeftCapWidth doesn't work when drawing to a Quartz context - iPhone

I'm drawing an image with stretchable caps defined like the following:
[bubbleImg stretchableImageWithLeftCapWidth:17 topCapHeight:12];
That was working fine inside a UIImageView, but it doesn't work when drawing the image like the following:
CGContextDrawImage(context, CGRectMake(x, y, width, height), image.CGImage);
…which I kind of suspected wouldn't work. Is there a solution for stretchable-cap images when drawing to a CG context?

When drawing a stretchable image, don't use the CGImage property, because it returns the underlying unstretched bitmap. Instead use UIImage's drawing methods directly.
If you are drawing in an unusual context (outside of UIView/CALayer drawing methods in a pure Core Graphics context), you may need to wrap UIKit drawing in UIGraphicsPushContext()/UIGraphicsPopContext() calls.
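For example, here is a minimal sketch using the cap values from the question (context is assumed to be the CGContextRef you were passing to CGContextDrawImage):

UIImage *stretchable = [bubbleImg stretchableImageWithLeftCapWidth:17 topCapHeight:12];

// Inside drawRect: the UIKit context is already current, and the caps are respected:
[stretchable drawInRect:CGRectMake(x, y, width, height)];

// In a pure Core Graphics context, make it current for UIKit first:
UIGraphicsPushContext(context);
[stretchable drawInRect:CGRectMake(x, y, width, height)];
UIGraphicsPopContext();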

Remove shape from UIView to create transparency

Here's a sample view I have right now:
Ideally, I'd like to take a "chunk" out of the top, so any views underneath become visible through that removed area (i.e. transparency).
I've tried creating a path and then using CGContextClip to attempt a clip, but it doesn't seem to clip the shape as intended. Any ideas of how to do this, or even if it's possible at all?
Typically to do something like this, one would make a UIView with a transparent background color, and then draw the "background" manually through Core Graphics. For instance, to make a view that is essentially a circle with a black background, you could do something like this in the drawRect: method:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor(context, 0, 0, 0, 1); // opaque black
    CGContextFillEllipseInRect(context, self.bounds);
}
For something more complicated than a simple circle, you should use CGContextBeginPath() in conjunction with the CGContextMoveToPoint() and CGContextAddLineToPoint() functions. This will allow you to make a transparent view with any opaque shape that you want for a background.
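For instance, here is a sketch of a view whose opaque background is a simple triangle (the shape is just illustrative; substitute your own points):

CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(context, 0, 0, 0, 1);
CGContextBeginPath(context);
CGContextMoveToPoint(context, 0, self.bounds.size.height);
CGContextAddLineToPoint(context, self.bounds.size.width / 2, 0);
CGContextAddLineToPoint(context, self.bounds.size.width, self.bounds.size.height);
CGContextClosePath(context);
CGContextFillPath(context);

Everything outside the filled path stays transparent, assuming the view's background color is clear.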
EDIT: To clip a background image to a certain path, you could do something like this:
CGContextSaveGState(context);
CGImageRef image = [[UIImage imageNamed:@"backgroundImage.png"] CGImage];
CGContextBeginPath(context);
// add your shape to the path
CGContextClip(context);
CGContextDrawImage(context, self.bounds, image);
CGContextRestoreGState(context);
Obviously for repeating patterns you could use more than one call to CGContextDrawImage, but this is basically what needs to be done. Like I said above, you could use basic line drawing functions to add lines, rectangles, circles, and anything else to your path.
Use [UIColor clearColor] as the view's background color to make it transparent; you will probably have to set the view's opaque property to NO as well.
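A minimal sketch, with myView standing in for your view:

myView.backgroundColor = [UIColor clearColor];
myView.opaque = NO;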

How do you display a pixel of color C to iPhone coordinates (x, y)?

I've read all the Apple documentation on iPhone development, and there's nothing that describes how to do it.
Drawing on the iPhone is not pixel-oriented in this way. You can draw a 1x1 rectangle using the Quartz API in your drawRect: method if you want:
CGContextRef context = UIGraphicsGetCurrentContext();
// CGColorCreateGenericRGB is Mac-only; on the iPhone, set the fill color components directly.
CGContextSetRGBFillColor(context, r, g, b, a);
// This next line does the drawing; repeat for as many x,y pairs as you want.
CGContextFillRect(context, CGRectMake(x, y, 1, 1));
But you wouldn't want to fill the entire screen using this method; it's crazy slow. You should think in terms of vectors: lines, rectangles, other polygons. This applies to both Quartz (like the above) and OpenGL.
If you really are doing something that ends up rendering a whole screen pixel by pixel, your best bet may be using Quartz to create an offscreen bitmap context, writing to the bitmap memory directly per-pixel, and then drawing the whole thing to the drawing context inside drawRect:. You could also use OpenGL to draw a series of single-pixel-sized GL_POINTs (instead of triangles), which may be faster depending on what you're doing.
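As a rough sketch of the offscreen-bitmap approach (the buffer size and pixel layout here are assumptions; adjust them for your case):

// Allocate a 32-bit RGBA buffer and wrap it in a bitmap context.
size_t width = 320, height = 480; // assumed screen size
size_t bytesPerRow = width * 4;
uint8_t *pixels = calloc(height, bytesPerRow);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                            colorSpace, kCGImageAlphaPremultipliedLast);

// Write pixels directly: set pixel (x, y), counted from the buffer's top-left row,
// to opaque red (R, G, B, A, premultiplied).
size_t x = 10, y = 20;
uint8_t *p = pixels + y * bytesPerRow + x * 4;
p[0] = 255; p[1] = 0; p[2] = 0; p[3] = 255;

// Later, inside drawRect:, draw the whole bitmap at once.
CGImageRef image = CGBitmapContextCreateImage(bitmap);
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, width, height), image);
CGImageRelease(image);
// When finished: CGContextRelease(bitmap); CGColorSpaceRelease(colorSpace); free(pixels);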

iPhone: How to use CGContextConcatCTM for saving a transformed image properly?

I am making an iPhone application that loads an image from the camera, and then the user can select a second image from the library, move/scale/rotate that second image, and then save the result. I use two UIImageViews in IB as placeholders, and then apply transformations while touching/pinching.
The problem comes when I have to save both images together. I use a rect of the size of the first image and pass it to UIGraphicsBeginImageContext. Then I tried to use CGContextConcatCTM but I can't understand how it works:
CGRect rect = CGRectMake(0, 0, img1.size.width, img1.size.height); // img1 from camera
UIGraphicsBeginImageContext(rect.size); // Start drawing
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextClearRect(ctx, rect); // Clear whole thing
[img1 drawAtPoint:CGPointZero]; // Draw background image at 0,0
CGContextConcatCTM(ctx, img2.transform); // Apply the transformations of the 2nd image
But what do I need to do next? What information is held in the img2.transform matrix? The documentation for CGContextConcatCTM doesn't help me that much, unfortunately.
Right now I'm trying to solve it by calculating the points and the angle using trigonometry (with the help of this answer), but since the transformation is already there, there has to be an easier and more elegant way to do this, right?
Take a look at this excellent answer: you need to create a bitmap/image context, draw to it, and get the resultant image out. You can then save that. See iOS UIImagePickerController result image orientation after upload.
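As a rough sketch of how that could look here (assuming img2 is displayed in a UIImageView named imageView2; a UIView's transform is applied about its center, so we translate there before concatenating):

CGRect rect = CGRectMake(0, 0, img1.size.width, img1.size.height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[img1 drawAtPoint:CGPointZero]; // background image at 0,0

CGContextSaveGState(ctx);
// Move the origin to where the second view's center sits, then apply its transform.
CGContextTranslateCTM(ctx, imageView2.center.x, imageView2.center.y);
CGContextConcatCTM(ctx, imageView2.transform);
// Draw the second image centered on the (now transformed) origin.
[img2 drawInRect:CGRectMake(-img2.size.width / 2, -img2.size.height / 2,
                            img2.size.width, img2.size.height)];
CGContextRestoreGState(ctx);

UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();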

How to use a CGLayer to draw multiple images offscreen

Ultimately I'm working on a box blur function for use on iPhone.
That function would take a UIImage and draw transparent copies, first to the sides, then take that image and draw transparent copies above and below, returning a nicely blurred image.
Reading the Drawing with Quartz 2D Programming Guide, it recommends using CGLayers for this kind of operation.
The example code in the guide is a little dense for me to understand, so I would like someone to show me a very simple example of taking a UIImage and converting it to a CGLayer that I would then draw copies of and return as a UIImage.
It would be OK if values were hard-coded (for simplicity). This is just for me to wrap my head around, not for production code.
UIImage *myImage = …;
CGLayerRef layer = CGLayerCreateWithContext(destinationContext, myImage.size, /*auxiliaryInfo*/ NULL);
if (layer) {
    CGContextRef layerContext = CGLayerGetContext(layer);
    CGContextDrawImage(layerContext, (CGRect){ CGPointZero, myImage.size }, myImage.CGImage);
    // Use CGContextDrawLayerAtPoint or CGContextDrawLayerInRect as many times as necessary.
    // Whichever function you choose, be sure to pass destinationContext to it; you can't
    // draw the layer into itself!
    CFRelease(layer);
}
That is technically my first ever iPhone code (I only program on the Mac), so beware. I have used CGLayer before, though, and as far as I know, Quartz is no different on the iPhone.
… and return as a UIImage.
I'm not sure how to do this part, having never worked with UIKit.
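On the iPhone, a minimal sketch of that part might look like this: UIGraphicsBeginImageContext supplies the destination context, and UIGraphicsGetImageFromCurrentImageContext pulls the result out as a UIImage.

UIGraphicsBeginImageContext(myImage.size);
CGContextRef destinationContext = UIGraphicsGetCurrentContext();

CGLayerRef layer = CGLayerCreateWithContext(destinationContext, myImage.size, NULL);
CGContextRef layerContext = CGLayerGetContext(layer);
// Note: CGContextDrawImage uses Quartz coordinates, so the UIImage may come out flipped;
// wrapping UIKit drawing in UIGraphicsPushContext(layerContext) is one way around that.
CGContextDrawImage(layerContext, (CGRect){ CGPointZero, myImage.size }, myImage.CGImage);

// Stamp the layer into the destination as many times as needed.
CGContextDrawLayerAtPoint(destinationContext, CGPointZero, layer);
CGLayerRelease(layer);

UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();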

iPhone: Transforming an image using Quartz 2D

I am trying to apply some transformations on images using a CGContextRef. I am using the CGContextTranslateCTM, CGContextScaleCTM and CGContextRotateCTM functions, but to keep things simple let's focus on just the first. I was wondering why the following code produces exactly the original image?! Am I missing something?
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef g = CGBitmapContextCreate((void *)pixelData,
                                       width,
                                       height,
                                       RGBA_8_BIT, // bits per component
                                       bytesPerRow,
                                       colorSpace,
                                       kCGImageAlphaPremultipliedLast);
CGContextSetShouldAntialias(g, YES);
CGContextSetInterpolationQuality(g, kCGInterpolationHigh);
CGContextTranslateCTM( g,translateX, translateY );
CGImageRef tempImg = CGBitmapContextCreateImage (g);
CGContextDrawImage( g, CGRectMake (0, 0, width, height), tempImg );
CGContextRelease(g);
CGColorSpaceRelease( colorSpace );
Also, after translating, how do I draw another image over this one, but with partial transparency (e.g. alpha = 0.5)?
I searched a lot but didn't find an answer; any help is appreciated... :)
Please note that I am creating the context from pixelData, and that tempImg is created after the translation. There is nothing wrong with the initialization, as the original image is currently being produced correctly; the issue is with the translation, I suppose.
Transformations to the graphics state only affect subsequent drawing operations - they don't change the existing image data. If you want to apply transforms to an image, try something like this:
1. Create an empty CGContext (on iPhone, use UIGraphicsBeginImageContext).
2. Translate, scale, or rotate its graphics state.
3. Draw the existing image into it.
4. Get the image from the new CGContext (on iPhone, use UIGraphicsGetImageFromCurrentImageContext).
When you perform step 3, the existing image is drawn into your new graphics context with the transformations applied. The trick here is that in order to apply the transformations, we have to actually draw something.
You can do some really cool things with transformations this way. You can draw half your image, apply some transforms, and draw some more.
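A minimal sketch of those four steps, reusing translateX/translateY from the question (image is the UIImage you want transformed):

UIGraphicsBeginImageContext(image.size);                             // step 1
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, translateX, translateY);                  // step 2
[image drawAtPoint:CGPointZero];                                     // step 3
UIImage *transformed = UIGraphicsGetImageFromCurrentImageContext();  // step 4
UIGraphicsEndImageContext();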
As noted in other answers, transformations only apply to subsequent drawing operations; they don't affect the pixel buffer you started with.
So you need a drawing operation. The solution is to create a CGImage; drawing that image is a drawing operation, so it will be subject to the current transformation matrix (CTM).
Step-by-step:
1. Create the context with empty pixel data. (If you pass NULL for the buffers, Quartz should create them for you. That works on the Mac, anyway.)
2. Create the image with the pixel data you want to draw transformed.
3. Transform the CTM in the context.
4. Draw the image.
You have to call CGBitmapContextCreateImage() after you draw the image.
Then you can draw another image on top of the first one and call CGBitmapContextCreateImage() again to get the second image. You can set the alpha using CGContextSetAlpha(ctx, alphaValue);
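A sketch of the alpha-blended overlay, under the same bitmap-context setup as the question (secondImage is a hypothetical CGImageRef for the overlay):

CGContextSaveGState(g);
CGContextSetAlpha(g, 0.5); // subsequent drawing is blended at 50% opacity
CGContextDrawImage(g, CGRectMake(0, 0, width, height), secondImage);
CGContextRestoreGState(g);
CGImageRef composited = CGBitmapContextCreateImage(g);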