Get the UIImage (or CGImage) from drawRect - iPhone

I'm calling the following code from drawRect:
- (void)drawPartial:(UIImage *)img colour:(UIColor *)colour {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, 0.0, self.bounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, self.frame, img.CGImage);
    if (colour != nil) {
        CGContextSetBlendMode(context, kCGBlendModeColor);
        CGContextClipToMask(context, self.bounds, img.CGImage);
        CGContextSetFillColor(context, CGColorGetComponents(colour.CGColor));
        CGContextFillRect(context, self.bounds);
    }
    self.currentImage = UIGraphicsGetImageFromCurrentImageContext();
    CGContextRestoreGState(context);
}
In fact I call it multiple times to build a composite image that is dynamically coloured in. Each img passed in is effectively a transparent overlay with a different part of the image on it.
I can build my compound image by calling this multiple times in drawRect. The issue comes when I want to update one part of the image. Ideally I would be able to call the above function and just change the one part; I've tried playing with self.clearsContextBeforeDrawing to no avail.
Then I thought I would try to keep a copy of the image from each draw, as a cached image of the state - that way I just need to overlay the new part on that: two calls rather than 15.
However the line self.currentImage=UIGraphicsGetImageFromCurrentImageContext() is not returning me the current image so I can't build a cached copy.
Any help on either approach would really be appreciated. (or obviously point me the way I should be doing this!)
EDIT
I also tried compositing the image separately using virtually the same code, and then drawing that, but again I don't get the image out...
- (UIImage *)overlayImage:(UIImage *)srcImage withImage:(UIImage *)overlayImage ofColour:(UIColor *)colour {
    CGSize size = srcImage.size;
    CGRect box = CGRectMake(0, 0, size.width, size.height);
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0.0, size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, box, srcImage.CGImage);
    if (colour != nil) {
        CGContextSetBlendMode(context, kCGBlendModeColor); //kCGBlendModeMultiply
        CGContextClipToMask(context, box, overlayImage.CGImage); // respect alpha mask
        CGContextSetFillColor(context, CGColorGetComponents(colour.CGColor));
        CGContextFillRect(context, box);
    }
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    //UIImage *result = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
    UIGraphicsEndImageContext();
    return result;
}

As a rule, you want -drawRect: to be as simple as possible.
Move your drawing code from -drawRect: into a method that creates and draws the entire image you want. Reduce -drawRect: to just compositing this image into the current context.
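A minimal sketch of that split, assuming a cached currentImage property and a rebuild method on the view (rebuildImage and parts are my names, not the asker's):
- (void)rebuildImage {
    // Rebuild the full composite off-screen whenever a part changes.
    UIGraphicsBeginImageContext(self.bounds.size);
    for (UIImage *part in self.parts) {          // `parts` is a hypothetical array of transparent overlays
        [part drawInRect:self.bounds];           // per-part tinting could be applied here too
    }
    self.currentImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    // drawRect reduces to compositing the cached image into the current context.
    [self.currentImage drawInRect:self.bounds];
}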

It's entirely possible that your UIImage object simply isn't being retained (UIGraphicsGetImageFromCurrentImageContext() returns an autoreleased object). You should also know that UIImage is not a mutable object type, and you'll get memory leaks unless you release the old image before reassigning it. Try this:
UIImage *result = [UIGraphicsGetImageFromCurrentImageContext() retain];
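Equivalently, if currentImage is declared as a retain property (an assumption about the asker's class), assigning through the property setter does the retain/release bookkeeping for you:
// Assuming: @property (nonatomic, retain) UIImage *currentImage;
self.currentImage = UIGraphicsGetImageFromCurrentImageContext(); // setter retains the new image and releases the old one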

Related

How to redraw an erased image with touches moved in iPhone

I am erasing an image with touch using the destination-out blend mode, and that works. I am actually reducing the alpha with each touch, so I can control the strength as well.
My problem now is redrawing the erased part of the image with touch (i.e. I want to draw the image back, or make the alpha darker again). For that I keep a backup of the original image, crop the touched part and merge it back into the image. The problem is that it draws more than it should.
Note that the redraw procedure darkens the image beyond the original wherever the drawing overlaps (I need to set an upper bound). So how can I avoid redrawing at a point where I have already drawn the image back, so that the original does not get darker?
I have also attached the code.
// Code to erase an image
UIGraphicsBeginImageContext(self._overlayImage.image.size);
CGRect rect = CGRectMake(0, 0, self._overlayImage.image.size.width, self._overlayImage.image.size.height);
CGContextRef context = UIGraphicsGetCurrentContext();
CGImageRef imageRef = self._overlayImage.image.CGImage;
if (imageRef) {
    // Restore the screen that was previously saved
    CGContextTranslateCTM(context, 0, rect.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, rect, imageRef);
    //CGImageRelease(imageRef);
    CGContextTranslateCTM(context, 0, rect.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
}
// Erase the background -- raise the alpha to clear more away with each swipe
// [[UIImage imageNamed:@"eraser222.png"] drawAtPoint:point blendMode:kCGBlendModeDestinationOut alpha:.2];
[[UIImage imageNamed:@"eraser222.png"] drawInRect:CGRectMake(newPoint.x - self.imgOrignal.size.width*2*radius/self._overlayImage.bounds.size.width,
                                                             newPoint.y - self.imgOrignal.size.height*2*radius/self._overlayImage.bounds.size.height,
                                                             self.imgOrignal.size.width*2*radius/self._overlayImage.bounds.size.width,
                                                             self.imgOrignal.size.height*2*radius/self._overlayImage.bounds.size.height)
                                         blendMode:kCGBlendModeDestinationOut
                                             alpha:strength/3];
self._overlayImage.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// code to draw an image
UIImage *cropped = [self imageByCropping:self.imgOrignal toRect:CGRectMake(newPoint.x - self.imgOrignal.size.width*2*radius/self._overlayImage.bounds.size.width,
                                                                           newPoint.y - self.imgOrignal.size.height*2*radius/self._overlayImage.bounds.size.height,
                                                                           self.imgOrignal.size.width*2*radius/self._overlayImage.bounds.size.width,
                                                                           self.imgOrignal.size.height*2*radius/self._overlayImage.bounds.size.height)];
UIGraphicsBeginImageContext(self._overlayImage.image.size);
CGRect rect = CGRectMake(0, 0, self._overlayImage.image.size.width, self._overlayImage.image.size.height);
CGContextRef context = UIGraphicsGetCurrentContext();
CGImageRef imageRef = self._overlayImage.image.CGImage;
if (imageRef) {
    // Restore the screen that was previously saved
    CGContextTranslateCTM(context, 0, rect.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, rect, imageRef);
    //CGImageRelease(imageRef);
    CGContextTranslateCTM(context, 0, rect.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
}
[cropped drawInRect:CGRectMake(newPoint.x - self.imgOrignal.size.width*2*radius/self._overlayImage.bounds.size.width,
                               newPoint.y - self.imgOrignal.size.height*2*radius/self._overlayImage.bounds.size.height,
                               self.imgOrignal.size.width*2*radius/self._overlayImage.bounds.size.width,
                               self.imgOrignal.size.height*2*radius/self._overlayImage.bounds.size.height)
          blendMode:kCGBlendModeNormal
              alpha:strength];
cropped = [UIImage imageWithData:UIImagePNGRepresentation(cropped)];
UIImage *finalimage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self._overlayImage.image = finalimage;
I know it is too late to answer here, but I reached a solution and I would like to share it, as it may help someone in need. It was not possible to delete and redraw the same image, so I worked with two images: one was the original image, and the second started out nil and was used to draw the path where the user touches the screen. I then created a new context, drew the original image, and then drew the path image with the kCGBlendModeDestinationOut blend mode. kCGBlendModeDestinationOut is the main hero of this method: it accomplishes the main task and gives the required effect. Check my blog here.
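A minimal sketch of that compositing step, assuming an originalImage and a separately maintained pathImage (both names are mine, not the author's):
- (UIImage *)compositeOriginal:(UIImage *)originalImage withPathImage:(UIImage *)pathImage {
    UIGraphicsBeginImageContext(originalImage.size);
    CGRect box = CGRectMake(0, 0, originalImage.size.width, originalImage.size.height);
    [originalImage drawInRect:box];                      // untouched original first
    [pathImage drawInRect:box
                blendMode:kCGBlendModeDestinationOut     // punch out wherever the path was drawn
                    alpha:1.0];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}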

How do I create a new image by drawing on top of an existing one using Quartz?

I have a view with a UIImageView whose image I assign from the camera. Now I want to do some drawing onto that image using Core Graphics. I want to select an area by touching and drawing a line; when the line joins up into something like a circle or any closed shape, I want to change that particular area into something else, for example change its colour or turn it into grayscale. So far I am able to draw the line; here is an image of a line drawn over the UIImageView.
But I am unable to figure out how to draw into the image view's image, i.e. how do I modify the UIImageView's image?
I also want to restore the image when a clear button is tapped, something like undo. Does anyone know how to achieve this?
And:
How do I create a rectangle when a crop button is tapped, move the rectangle anywhere on the screen, then push a button to crop the image, and then save the cropped image?
These are the steps:
Create a CGBitmapContext matching the image's colorspace and dimensions.
Draw the image into that context.
Draw whatever you want on top of the image.
Create a new image from the context.
Dispose of the context.
Here's a method that takes an image, draws something on top of it and returns a new UIImage with modified contents:
- (UIImage *)modifiedImageWithImage:(UIImage *)uiImage
{
    // build context to draw in
    CGImageRef image = uiImage.CGImage;
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL,
                                             CGImageGetWidth(image), CGImageGetHeight(image),
                                             8, CGImageGetWidth(image) * 4,
                                             colorspace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorspace);

    // draw original image
    CGRect r = CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image));
    CGContextSetBlendMode(ctx, kCGBlendModeCopy);
    CGContextDrawImage(ctx, r, image);
    CGContextSetBlendMode(ctx, kCGBlendModeNormal);

    // draw something
    CGContextAddEllipseInRect(ctx, CGRectInset(r, 10, 10));
    CGContextSetRGBStrokeColor(ctx, 1.0f, 1.0f, 1.0f, 0.5f);
    CGContextSetLineWidth(ctx, 16.0f);
    CGContextDrawPath(ctx, kCGPathStroke);

    CGContextAddEllipseInRect(ctx, CGRectInset(r, 10, 10));
    CGContextSetRGBStrokeColor(ctx, 0.7f, 0.0f, 0.0f, 1.0f);
    CGContextSetLineWidth(ctx, 4.0f);
    CGContextDrawPath(ctx, kCGPathStroke);

    // create resulting image
    image = CGBitmapContextCreateImage(ctx);
    UIImage *newImage = [[[UIImage alloc] initWithCGImage:image] autorelease];
    CGImageRelease(image);
    CGContextRelease(ctx);

    return newImage;
}
To restore to old image, just keep a reference to it.
The cropping thing is not related to the above and you should create a new question for that.
A much easier solution would be:
- (UIImage *)modifyImage:(UIImage *)inputImage
{
    UIGraphicsBeginImageContext(inputImage.size);
    [inputImage drawInRect:CGRectMake(0, 0, inputImage.size.width, inputImage.size.height)];
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    //Drawing code using above context goes here
    /*
     *
     */
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
Take a look at Overview of Quartz 2D for information on using Quartz 2D on iPhone.

Export CGPath as JPG or PNG

Is it possible to take a path drawn in a UIView with CGPath and export it as a PNG?
Assuming you want a UIImage, not a png file, you can do something like this:
UIGraphicsBeginImageContext(size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 1.0);
CGContextSetLineWidth(context, 2.0);
CGContextSetLineCap(context, kCGLineCapSquare);
//DRAW YOUR PATH HERE
CGContextStrokePath(context);
myUIImage = UIGraphicsGetImageFromCurrentImageContext();
[myUIImage retain];
UIGraphicsEndImageContext();
The cleanest way to do that is to perform the whole drawing again in an image context produced by UIGraphicsBeginImageContext(), get an UIImage out of it, then save it via the UIImagePNG/JPEGRepresentation() functions.
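For reference, that last export step might look like this (the documents-directory path and filename are only examples):
// Sketch: turn the rendered UIImage into PNG data and write it to disk.
NSData *pngData = UIImagePNGRepresentation(myUIImage);
NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *path = [docsDir stringByAppendingPathComponent:@"path.png"]; // example filename
[pngData writeToFile:path atomically:YES];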
Note that UIViews do not "hold" images. You can re-render a UIView's layer, but it's a gross violation of MVC (you're using views to store model data!), and it doesn't look clean to me.

Drawing incrementally in a UIView (iPhone)

As far as I have understood so far, every time I draw something in the drawRect: of a UIView, the whole context is erased and then redrawn.
So I have to do something like this to draw a series of dots:
Method A: drawing everything on every call
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawImage(context, self.bounds, maskRef);  //draw the mask
    CGContextClipToMask(context, self.bounds, maskRef); //respect alpha mask
    CGContextSetBlendMode(context, kCGBlendModeColorBurn); //set blending mode
    for (Drop *drop in myPoints) {
        CGContextAddEllipseInRect(context, CGRectMake(drop.point.x - drop.size/2, drop.point.y - drop.size/2, drop.size, drop.size));
    }
    CGContextSetRGBFillColor(context, 0.5, 0.0, 0.0, 0.8);
    CGContextFillPath(context);
}
Which means I have to store all my Dots (that's fine) and re-draw them all, one by one, every time I want to add a new one. Unfortunately this gives me terrible performance, and I am sure there is some other way of doing this more efficiently.
EDIT: Using MrMage's code I did the following, which unfortunately is just as slow and the color blending doesn't work. Any other method I could try?
Method B: saving the previous draws in a UIImage and only drawing the new stuff and this image
- (void)drawRect:(CGRect)rect
{
    //draw on top of the previous stuff
    UIGraphicsBeginImageContext(self.frame.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext(); // ctx is now the image's context
    [cachedImage drawAtPoint:CGPointZero];
    if ([myPoints count] > 0)
    {
        Drop *drop = [myPoints objectAtIndex:[myPoints count]-1];
        CGContextClipToMask(ctx, self.bounds, maskRef); //respect alpha mask
        CGContextAddEllipseInRect(ctx, CGRectMake(drop.point.x - drop.dropSize/2, drop.point.y - drop.dropSize/2, drop.dropSize, drop.dropSize));
        CGContextSetRGBFillColor(ctx, 0.5, 0.0, 0.0, 1.0);
        CGContextFillPath(ctx);
    }
    [cachedImage release];
    cachedImage = [UIGraphicsGetImageFromCurrentImageContext() retain];
    UIGraphicsEndImageContext();

    //draw on the current context
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawImage(context, self.bounds, maskRef); //draw the mask
    CGContextSetBlendMode(context, kCGBlendModeColorBurn); //set blending mode
    [cachedImage drawAtPoint:CGPointZero]; //draw the cached image
}
EDIT:
In the end I combined one of the methods mentioned below with redrawing only the new rect. The result is:
FAST METHOD:
- (void)addDotAt:(CGPoint)point
{
    if ([myPoints count] < kMaxPoints) {
        Drop *drop = [[[Drop alloc] init] autorelease];
        drop.point = point;
        [myPoints addObject:drop];
        [self setNeedsDisplayInRect:CGRectMake(drop.point.x - drop.dropSize/2, drop.point.y - drop.dropSize/2, drop.dropSize, drop.dropSize)]; //redraw
    }
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawImage(context, self.bounds, maskRef);  //draw the mask
    CGContextClipToMask(context, self.bounds, maskRef); //respect alpha mask
    CGContextSetBlendMode(context, kCGBlendModeColorBurn); //set blending mode
    if ([myPoints count] > 0)
    {
        Drop *drop = [myPoints objectAtIndex:[myPoints count]-1];
        CGPathAddEllipseInRect(dotsPath, NULL, CGRectMake(drop.point.x - drop.dropSize/2, drop.point.y - drop.dropSize/2, drop.dropSize, drop.dropSize));
    }
    CGContextAddPath(context, dotsPath);
    CGContextSetRGBFillColor(context, 0.5, 0.0, 0.0, 1.0);
    CGContextFillPath(context);
}
Thanks everyone!
If you are only actually changing a small portion of the UIView's content every time you draw (and the rest of the content generally stays the same), you can use this. Rather than redrawing all the content of the UIView every single time, you can mark only the areas of the view that need redrawing using -[UIView setNeedsDisplayInRect:] instead of -[UIView setNeedsDisplay]. You also need to make sure that the graphics content is not cleared before drawing, by setting view.clearsContextBeforeDrawing = NO;
Of course, all this also means that your drawRect: implementation needs to respect the rect parameter, which should then be a small subsection of your full view's rect (unless something else dirtied the entire rect), and only draw in that portion.
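As a rough sketch of what "respecting the rect" can look like, using the Drop/myPoints types from the question (an illustration of the idea, not the answerer's code):
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClipToRect(context, rect); // nothing outside the dirty rect gets touched
    for (Drop *drop in myPoints) {
        CGRect dropRect = CGRectMake(drop.point.x - drop.dropSize/2, drop.point.y - drop.dropSize/2, drop.dropSize, drop.dropSize);
        if (CGRectIntersectsRect(dropRect, rect)) { // only redraw drops inside the dirty area
            CGContextAddEllipseInRect(context, dropRect);
        }
    }
    CGContextSetRGBFillColor(context, 0.5, 0.0, 0.0, 1.0);
    CGContextFillPath(context);
}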
You can save your CGPath as a member of your class and use that in the draw method. You will only need to create the path when the dots change, not every time the view is redrawn; if the dots are incremental, just keep adding the ellipses to the path. In the drawRect method you will only need to add the path:
CGContextAddPath(context, dotsPath);
- (CGMutablePathRef)createPath
{
    CGMutablePathRef dotsPath = CGPathCreateMutable();
    for (Drop *drop in myPoints) {
        CGPathAddEllipseInRect(dotsPath, NULL,
                               CGRectMake(drop.point.x - drop.size/2, drop.point.y - drop.size/2, drop.size, drop.size));
    }
    return dotsPath;
}
If I understand your problem correctly, I would try drawing to a CGBitmapContext instead of the screen directly. Then in the drawRect, draw only the portion of the pre-rendered bitmap that is necessary from the rect parameter.
How many ellipses are you going to draw? In general, Core Graphics should be able to draw a lot of ellipses quickly.
You could, however, cache your old drawings to an image (though I don't know whether this solution is more performant):
UIGraphicsBeginImageContext(self.frame.size);
CGContextRef ctx = UIGraphicsGetCurrentContext(); // ctx is now the image's context
[cachedImage drawAtPoint:CGPointZero];
// only plot new ellipses here...
[cachedImage release];
cachedImage = [UIGraphicsGetImageFromCurrentImageContext() retain];
UIGraphicsEndImageContext();
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawImage(context, self.bounds, maskRef); //draw the mask
CGContextClipToMask(context, self.bounds, maskRef); //respect alpha mask
CGContextSetBlendMode(context, kCGBlendModeColorBurn); //set blending mode
[cachedImage drawAtPoint:CGPointZero];
If you are able to cache the drawing as an image, you can take advantage of UIView's CoreAnimation backing. This will be much faster than using Quartz, as Quartz does its drawing in software.
- (CGImageRef)cachedImage {
    // Draw to an image and return it (app-specific; left as pseudocode here)
}

- (void)refreshCache {
    myView.layer.contents = (id)[self cachedImage]; // CALayer's contents property takes an id
}

- (void)actionThatChangesWhatNeedsToBeDrawn {
    [self refreshCache];
}

UIView: how to do non-destructive drawing?

My original question:
I'm creating a simple drawing application and need to be able to draw over existing, previously drawn content in my drawRect. What is the proper way to draw on top of existing content without entirely replacing it?
Based on answers received here and elsewhere, here is the deal.
You should be prepared to redraw the entire rectangle whenever drawRect is called.
You cannot prevent the contents from being erased by doing the following:
[self setClearsContextBeforeDrawing: NO];
This is merely a hint to the graphics engine that there is no point in having it pre-clear the view for you, since you will likely need to re-draw the whole area anyway. It may prevent your view from being automatically erased, but you cannot depend on it.
To draw on top of your view without erasing, do your drawing to an off-screen bitmap context (which is never cleared by the system.) Then in your drawRect, copy from this off-screen buffer to the view.
Example:
- (id)initWithCoder:(NSCoder *)coder {
    if (self = [super initWithCoder:coder]) {
        self.backgroundColor = [UIColor clearColor];
        CGSize size = self.frame.size;
        drawingContext = [self createOffscreenContext:size];
    }
    return self;
}

- (CGContextRef)createOffscreenContext:(CGSize)size {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, size.width, size.height, 8, size.width*4, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextTranslateCTM(context, 0, size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    return context;
}

- (void)drawRect:(CGRect)rect {
    UIGraphicsPushContext(drawingContext);
    CGImageRef cgImage = CGBitmapContextCreateImage(drawingContext);
    UIImage *uiImage = [[UIImage alloc] initWithCGImage:cgImage];
    UIGraphicsPopContext();
    CGImageRelease(cgImage);
    [uiImage drawInRect:rect];
    [uiImage release];
}
TODO: can anyone optimize the drawRect so that only the (usually tiny) modified rectangle region is used for the copy?
It is fairly common to draw everything in an offscreen image, and simply display this image when drawing the screen. You can read: Creating a Bitmap Graphics Context.
On optimizing drawRect
Try this:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGImageRef cgImage = CGBitmapContextCreateImage(drawingContext);
    CGContextClipToRect(context, rect);
    CGContextDrawImage(context, CGRectMake(0, 0, self.frame.size.width, self.frame.size.height), cgImage);
    CGImageRelease(cgImage);
}
After that you should also comment these lines in your code:
//CGContextTranslateCTM(context, 0, size.height);
//CGContextScaleCTM(context, 1.0, -1.0);
I also created a separate question to make sure this is the optimal way.
This seems like a better method than the one I've been using. That is, in a touches event I make a copy of the view about to be updated; then in drawRect I take that image, draw it to the view, and make my other view changes at the same time.
That seems inefficient, but it was the only way I had figured out how to do it.
This prevents your view from being erased before drawRect is done:
[self.layer setNeedsDisplay];
Also, I find it is better to do all the drawing in the drawRect method (unless you have a good reason not to). Drawing offscreen and transferring takes more time and adds more complexity than simply drawing everything once.