Drawing incrementally in a UIView (iPhone)

As far as I understand, every time I draw something in a UIView's drawRect:, the whole context is erased and then redrawn.
So I have to do something like this to draw a series of dots:
Method A: drawing everything on every call
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawImage(context, self.bounds, maskRef);      // draw the mask
    CGContextClipToMask(context, self.bounds, maskRef);     // respect alpha mask
    CGContextSetBlendMode(context, kCGBlendModeColorBurn);  // set blending mode
    for (Drop *drop in myPoints) {
        CGContextAddEllipseInRect(context, CGRectMake(drop.point.x - drop.size/2, drop.point.y - drop.size/2, drop.size, drop.size));
    }
    CGContextSetRGBFillColor(context, 0.5, 0.0, 0.0, 0.8);
    CGContextFillPath(context);
}
Which means I have to store all my dots (that's fine) and redraw them all, one by one, every time I want to add a new one. Unfortunately this gives me terrible performance, and I am sure there is a more efficient way of doing this.
EDIT: Using MrMage's code I did the following, which unfortunately is just as slow, and the color blending doesn't work. Any other method I could try?
Method B: saving the previous draws in a UIImage and only drawing the new stuff and this image
- (void)drawRect:(CGRect)rect
{
    // draw on top of the previous stuff
    UIGraphicsBeginImageContext(self.frame.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext(); // ctx is now the image's context
    [cachedImage drawAtPoint:CGPointZero];
    if ([myPoints count] > 0)
    {
        Drop *drop = [myPoints objectAtIndex:[myPoints count]-1];
        CGContextClipToMask(ctx, self.bounds, maskRef); // respect alpha mask
        CGContextAddEllipseInRect(ctx, CGRectMake(drop.point.x - drop.dropSize/2, drop.point.y - drop.dropSize/2, drop.dropSize, drop.dropSize));
        CGContextSetRGBFillColor(ctx, 0.5, 0.0, 0.0, 1.0);
        CGContextFillPath(ctx);
    }
    [cachedImage release];
    cachedImage = [UIGraphicsGetImageFromCurrentImageContext() retain];
    UIGraphicsEndImageContext();

    // draw on the current context
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawImage(context, self.bounds, maskRef);     // draw the mask
    CGContextSetBlendMode(context, kCGBlendModeColorBurn); // set blending mode
    [cachedImage drawAtPoint:CGPointZero];                  // draw the cached image
}
EDIT:
In the end, I combined one of the methods mentioned below with redrawing only the newly dirtied rect. The result is:
FAST METHOD:
- (void)addDotAt:(CGPoint)point
{
    if ([myPoints count] < kMaxPoints) {
        Drop *drop = [[[Drop alloc] init] autorelease];
        drop.point = point;
        [myPoints addObject:drop];
        [self setNeedsDisplayInRect:CGRectMake(drop.point.x - drop.dropSize/2, drop.point.y - drop.dropSize/2, drop.dropSize, drop.dropSize)]; // redraw
    }
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawImage(context, self.bounds, maskRef);      // draw the mask
    CGContextClipToMask(context, self.bounds, maskRef);     // respect alpha mask
    CGContextSetBlendMode(context, kCGBlendModeColorBurn);  // set blending mode
    if ([myPoints count] > 0)
    {
        Drop *drop = [myPoints objectAtIndex:[myPoints count]-1];
        CGPathAddEllipseInRect(dotsPath, NULL, CGRectMake(drop.point.x - drop.dropSize/2, drop.point.y - drop.dropSize/2, drop.dropSize, drop.dropSize));
    }
    CGContextAddPath(context, dotsPath);
    CGContextSetRGBFillColor(context, 0.5, 0.0, 0.0, 1.0);
    CGContextFillPath(context);
}
Thanks everyone!

If you are only actually changing a small portion of the UIView's content each time you draw (and the rest of the content generally stays the same), you can use this approach: rather than redrawing all of the view's content every single time, mark only the areas that need redrawing using -[UIView setNeedsDisplayInRect:] instead of -[UIView setNeedsDisplay]. You also need to make sure the existing graphics content is not cleared before drawing, by setting view.clearsContextBeforeDrawing = NO;
Of course, all this also means that your drawRect: implementation needs to respect the rect parameter, which should then be a small subsection of your full view's rect (unless something else dirtied the entire rect), and only draw in that portion.
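For example, a minimal sketch of a rect-aware drawRect: (reusing the myPoints / Drop names from the question, so treat it as an illustration rather than a drop-in):
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Everything outside rect is still valid, so restrict drawing to the dirty area.
    CGContextClipToRect(context, rect);
    for (Drop *drop in myPoints) {
        CGRect dropRect = CGRectMake(drop.point.x - drop.size/2, drop.point.y - drop.size/2, drop.size, drop.size);
        // Skip dots that don't touch the area being redrawn.
        if (!CGRectIntersectsRect(dropRect, rect)) continue;
        CGContextAddEllipseInRect(context, dropRect);
    }
    CGContextSetRGBFillColor(context, 0.5, 0.0, 0.0, 0.8);
    CGContextFillPath(context);
}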

You can store your CGPath as a member of your class and use it in the draw method. You only need to rebuild the path when the dots change, not every time the view is redrawn; if the dots are only ever added, just keep appending ellipses to the same path. In drawRect: you then only need to add the path:
CGContextAddPath(context, dotsPath);
- (CGMutablePathRef)createPath
{
    CGMutablePathRef dotsPath = CGPathCreateMutable();
    for (Drop *drop in myPoints) {
        CGPathAddEllipseInRect(dotsPath, NULL,
                               CGRectMake(drop.point.x - drop.size/2, drop.point.y - drop.size/2, drop.size, drop.size));
    }
    return dotsPath;
}

If I understand your problem correctly, I would try drawing into a CGBitmapContext instead of the screen directly. Then, in drawRect:, draw only the portion of the pre-rendered bitmap that the rect parameter asks for.
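A rough sketch of that idea (assuming an offscreenContext ivar created once with CGBitmapContextCreate, as in the last answer on this page; watch out for Quartz's flipped coordinates):
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Snapshot the pre-rendered offscreen bitmap...
    CGImageRef cached = CGBitmapContextCreateImage(offscreenContext);
    // ...and copy only the dirty portion of it to the screen.
    CGContextClipToRect(context, rect);
    CGContextDrawImage(context, self.bounds, cached);
    CGImageRelease(cached);
}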

How many ellipses are you going to draw? In general, Core Graphics should be able to draw a lot of ellipses quickly.
You could, however, cache your old drawings to an image (I don't know whether this is actually faster, though):
// 1) Update the cached image (e.g. whenever a new dot is added):
UIGraphicsBeginImageContext(self.frame.size);
CGContextRef ctx = UIGraphicsGetCurrentContext(); // ctx is now the image's context
[cachedImage drawAtPoint:CGPointZero];            // start from what was drawn before
// only plot new ellipses here...
[cachedImage release];
cachedImage = [UIGraphicsGetImageFromCurrentImageContext() retain];
UIGraphicsEndImageContext();

// 2) In drawRect:, composite the cache onto the view:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawImage(context, self.bounds, maskRef);      // draw the mask
CGContextClipToMask(context, self.bounds, maskRef);     // respect alpha mask
CGContextSetBlendMode(context, kCGBlendModeColorBurn);  // set blending mode
[cachedImage drawAtPoint:CGPointZero];

If you are able to cache the drawing as an image, you can take advantage of UIView's CoreAnimation backing. This will be much faster than using Quartz, as Quartz does its drawing in software.
- (CGImageRef)cachedImage {
    // Draw to an image, return that
}

- (void)refreshCache {
    // layer.contents expects an id, so cast the CGImageRef
    myView.layer.contents = (id)[self cachedImage];
}

- (void)actionThatChangesWhatNeedsToBeDrawn {
    [self refreshCache];
}
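A hypothetical cachedImage implementation (sketch only, reusing the myPoints array from the question) could render into an image context:
- (CGImageRef)cachedImage {
    UIGraphicsBeginImageContext(myView.bounds.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor(ctx, 0.5, 0.0, 0.0, 0.8);
    for (Drop *drop in myPoints) {
        CGContextAddEllipseInRect(ctx, CGRectMake(drop.point.x - drop.size/2, drop.point.y - drop.size/2, drop.size, drop.size));
    }
    CGContextFillPath(ctx);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // The CGImage is owned by the autoreleased UIImage; assigning it to
    // layer.contents retains it, so this is safe for the refreshCache use above.
    return image.CGImage;
}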

Related

Multiple CGContextRef in drawRect

In overriding a UIView's drawRect:, I draw the main image with CGContextDrawImage. Over it I need to draw another image (with a multiply blend mode), so I actually need to draw on top of it.
This second image needs to be prepared first, since it is generated dynamically (it has to be masked, may be a different size, and so on). How can I get a second context where I can draw and mask this second image before compositing it over the main one? If I draw on the current context, it gets drawn directly on the main image before I can mask it.
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextDrawImage(ctx, CGRectMake(0, 0, rect.size.width, rect.size.height), [UIImage imageNamed:@"actionBg.png"].CGImage);
    // prevent the drawings from being flipped
    CGContextTranslateCTM(ctx, 0, rect.size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    // generate the overlay
    if ([self isActive] == NO && self.fullDelay != 0) { // TODO: remove fullDelay check!
        int segmentSize = (ACTION_SIZE / [self fullDelay]);
        for (int i = 0; i < [self fullDelay]; i++) {
            float alpha = 0.9 - (([self fullDelay] * 0.1) - (i * 0.1));
            [[UIColor colorWithRed:120.0/255.0 green:14.0/255.0 blue:14.0/255.0 alpha:alpha] setFill];
            if (currentDelay > i) {
                CGRect r = CGRectMake(0, i * segmentSize, ACTION_SIZE, segmentSize);
                CGContextFillRect(ctx, r);
            }
            [[UIColor colorWithRed:1 green:1 blue:1 alpha:0.3] setFill];
            CGRect line = CGRectMake(0, (i * segmentSize) + segmentSize - 1, ACTION_SIZE, 1);
            CGContextFillRect(ctx, line);
        }
        UIImage *overlay = UIGraphicsGetImageFromCurrentImageContext();
        UIImage *overlayMasked = [TDUtilities maskImage:overlay withMask:[UIImage imageNamed:@"actionMask.png"]];
    }
}
Here overlayMasked ends up containing the correct image, but since I've prepared it using the main context, the drawing is now all messed up.
Thanks
You can create a bitmap context using either UIGraphicsBeginImageContextWithOptions or CGBitmapContextCreate. After you're finished drawing in the bitmap context, you can get an image using UIGraphicsGetImageFromCurrentImageContext or CGBitmapContextCreateImage (as appropriate), and then release the context using either UIGraphicsEndImageContext or CGContextRelease.
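A sketch of how that could look here (keeping your overlay-drawing loop, just pointed at an image context instead of the view's context; the final drawInRect:blendMode:alpha: call is one way to composite with multiply):
UIGraphicsBeginImageContextWithOptions(CGSizeMake(ACTION_SIZE, ACTION_SIZE), NO, 0.0);
CGContextRef overlayCtx = UIGraphicsGetCurrentContext();
// ... draw the delay segments into overlayCtx instead of the main ctx ...
UIImage *overlay = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage *overlayMasked = [TDUtilities maskImage:overlay withMask:[UIImage imageNamed:@"actionMask.png"]];
// back in drawRect:, composite the prepared overlay over the background
[overlayMasked drawInRect:rect blendMode:kCGBlendModeMultiply alpha:1.0];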

How to draw a line with a custom style using Core Graphics?

I'm currently drawing a line using Core Graphics. It's really bare bones and simple.
- (void)drawRect:(CGRect)rect {
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGFloat red[4] = {1.0f, 0.0f, 0.0f, 1.0f};
    CGContextSetStrokeColor(c, red);
    CGContextBeginPath(c);
    CGContextMoveToPoint(c, 5.0f, 5.0f);
    CGContextAddLineToPoint(c, 300.0f, 600.0f);
    CGContextSetLineWidth(c, 25);
    CGContextSetLineCap(c, kCGLineCapRound);
    CGContextStrokePath(c);
}
This works well. But let's say we wanted to draw a custom-styled line, for example one that imitates the look of a crayon, and that the designer handed you these crayon-style images: http://imgur.com/a/N40ig
To accomplish this effect, I think I need to do something like this:
Create specially colored versions of crayonImage1-crayonImage4.
Every time you add a point to the line, stamp one of the crayon images there.
Alternate the crayon images every time you draw a point.
Step 1 makes sense. I can use the following method:
- (UIImage *)image:(UIImage *)img withColor:(UIColor *)color {
    // begin a new image context, to draw our colored image onto
    UIGraphicsBeginImageContext(img.size);
    // get a reference to that context we created
    CGContextRef context = UIGraphicsGetCurrentContext();
    // set the fill color
    [color setFill];
    // translate/flip the graphics context (for transforming from CG* coords to UI* coords)
    CGContextTranslateCTM(context, 0, img.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    // set the blend mode to color burn, and draw the original image
    CGContextSetBlendMode(context, kCGBlendModeColorBurn);
    CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
    CGContextDrawImage(context, rect, img.CGImage);
    // set a mask that matches the shape of the image, then draw (color burn) a colored rectangle
    CGContextClipToMask(context, rect, img.CGImage);
    CGContextAddRect(context, rect);
    CGContextDrawPath(context, kCGPathFill);
    // generate a new UIImage from the graphics context we drew onto
    UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // return the color-burned image
    return coloredImg;
}
I'm unsure how to complete steps 2 and 3. Is there an API in Core Graphics for using an image as the points of a line? If so, what is it and how can I use it?
Thanks in advance,
-David
Start with the following example: http://www.ifans.com/forums/showthread.php?t=132024
But for brushes, don't draw a line. Simply draw the brush image using CGContextDrawImage.
Basically, you simply draw an image for every touch.
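For example, a sketch of stamping a brush image at each touch (canvasImage and brushImage are hypothetical names; drawRect: would then just draw canvasImage):
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint point = [[touches anyObject] locationInView:self];
    UIGraphicsBeginImageContext(self.bounds.size);
    // keep everything drawn so far, then stamp the (pre-colored) crayon image at the new point
    [self.canvasImage drawInRect:self.bounds];
    [brushImage drawAtPoint:CGPointMake(point.x - brushImage.size.width / 2, point.y - brushImage.size.height / 2)];
    self.canvasImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self setNeedsDisplay];
}
Alternating between the four crayon images on each stamp (step 3) is then just a matter of cycling an index into an array of those images.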

how to darken a UIImageView

I need to darken a UIImageView when it gets touched, almost exactly like the icons on the springboard (home screen).
Should I add a UIView with 0.5 alpha and a black background? That seems clumsy. Should I be using layers or something (CALayer)?
I would let a UIImageView handle the actual drawing of the image, but toggle the image to one that's been darkened in advance. Here's some code I've used to generate darkened images with alpha maintained:
+ (UIImage *)darkenImage:(UIImage *)image toLevel:(CGFloat)level
{
    // Create a temporary view to act as a darkening layer
    // (renderInContext: below needs #import <QuartzCore/QuartzCore.h>)
    CGRect frame = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
    UIView *tempView = [[UIView alloc] initWithFrame:frame];
    tempView.backgroundColor = [UIColor blackColor];
    tempView.alpha = level;

    // Draw the image into a new graphics context
    UIGraphicsBeginImageContext(frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [image drawInRect:frame];

    // Flip the context vertically so we can draw the dark layer via a mask that
    // aligns with the image's alpha pixels (Quartz uses flipped coordinates)
    CGContextTranslateCTM(context, 0, frame.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextClipToMask(context, frame, image.CGImage);
    [tempView.layer renderInContext:context];

    // Produce a new image from this context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *toReturn = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    UIGraphicsEndImageContext();
    [tempView release];
    return toReturn;
}
How about subclassing UIView and adding a UIImage ivar (called image)? Then you could override -drawRect: with something like this, provided you have a boolean ivar called pressed that is set while touched:
- (void)drawRect:(CGRect)rect
{
    [image drawAtPoint:CGPointMake(0.0, 0.0)];
    // if pressed, fill the rect with a dark translucent color
    if (pressed)
    {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSaveGState(ctx);
        CGContextSetRGBFillColor(ctx, 0.5, 0.5, 0.5, 0.5);
        CGContextFillRect(ctx, rect);
        CGContextRestoreGState(ctx);
    }
}
You would want to experiment with RGBA values above. And, of course, non-rectangular shapes would require a bit more work - like a CGMutablePathRef.
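For instance, a purely illustrative sketch of restricting the dark fill to a non-rectangular (here elliptical) shape by clipping to a path first:
CGContextSaveGState(ctx);
CGMutablePathRef shape = CGPathCreateMutable();
CGPathAddEllipseInRect(shape, NULL, rect);   // or any non-rectangular outline
CGContextAddPath(ctx, shape);
CGContextClip(ctx);                          // confine the fill to the shape
CGContextSetRGBFillColor(ctx, 0.5, 0.5, 0.5, 0.5);
CGContextFillRect(ctx, rect);
CGContextRestoreGState(ctx);
CGPathRelease(shape);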
UIImageView can have multiple images; you could have two versions of the image and switch to the darker one when needed.
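For example (assuming imageView, normalImage and darkenedImage ivars; the darkened version could be produced with a helper like the darkenImage:toLevel: method above):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    imageView.image = darkenedImage;  // swap in the pre-darkened version while touched
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    imageView.image = normalImage;    // restore the normal version on release
}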

How do I create a new image by drawing on top of an existing one using Quartz?

I have a view with a UIImageView whose image comes from the camera. Now I want to do some drawing onto that image using Core Graphics: select an area by touching and drawing a line, and when the line closes into something like a circle or any other shape, change that particular area into something else, for example change its color or turn it to grayscale. So far I am able to draw the line (here is an image of a line drawn over the UIImageView), but I cannot figure out how to draw onto the image view's image itself, i.e. how to modify it.
I also want to restore the original image when a clear button is tapped, something like undo. Does anyone know how to achieve this?
And how do I create a rectangle when a crop button is tapped, move that rectangle anywhere on the screen, and then push a button to crop the image and save the cropped one?
These are the steps:
Create a CGBitmapContext matching the image's colorspace and dimensions.
Draw the image into that context.
Draw whatever you want on top of the image.
Create a new image from the context.
Dispose of the context.
Here's a method that takes an image, draws something on top of it and returns a new UIImage with modified contents:
- (UIImage *)modifiedImageWithImage:(UIImage *)uiImage
{
    // build context to draw in
    CGImageRef image = uiImage.CGImage;
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL,
                                             CGImageGetWidth(image), CGImageGetHeight(image),
                                             8, CGImageGetWidth(image) * 4,
                                             colorspace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorspace);

    // draw original image
    CGRect r = CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image));
    CGContextSetBlendMode(ctx, kCGBlendModeCopy);
    CGContextDrawImage(ctx, r, image);
    CGContextSetBlendMode(ctx, kCGBlendModeNormal);

    // draw something
    CGContextAddEllipseInRect(ctx, CGRectInset(r, 10, 10));
    CGContextSetRGBStrokeColor(ctx, 1.0f, 1.0f, 1.0f, 0.5f);
    CGContextSetLineWidth(ctx, 16.0f);
    CGContextDrawPath(ctx, kCGPathStroke);
    CGContextAddEllipseInRect(ctx, CGRectInset(r, 10, 10));
    CGContextSetRGBStrokeColor(ctx, 0.7f, 0.0f, 0.0f, 1.0f);
    CGContextSetLineWidth(ctx, 4.0f);
    CGContextDrawPath(ctx, kCGPathStroke);

    // create resulting image
    image = CGBitmapContextCreateImage(ctx);
    UIImage *newImage = [[[UIImage alloc] initWithCGImage:image] autorelease];
    CGImageRelease(image);
    CGContextRelease(ctx);
    return newImage;
}
To restore the old image, just keep a reference to it.
The cropping thing is not related to the above and you should create a new question for that.
A much easier solution would be:
- (UIImage *)modifyImage:(UIImage *)inputImage
{
    UIGraphicsBeginImageContext(inputImage.size);
    [inputImage drawInRect:CGRectMake(0, 0, inputImage.size.width, inputImage.size.height)];
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Drawing code using the above context goes here
    /*
     *
     */
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
Take a look at Overview of Quartz 2D for information on using Quartz 2D on iPhone.

UIView: how to do non-destructive drawing?

My original question:
I'm creating a simple drawing application and need to be able to draw over existing, previously drawn content in my drawRect:. What is the proper way to draw on top of existing content without entirely replacing it?
Based on answers received here and elsewhere, here is the deal.
You should be prepared to redraw the entire rectangle whenever drawRect is called.
You cannot prevent the contents from being erased by doing the following:
[self setClearsContextBeforeDrawing: NO];
This is merely a hint to the graphics engine that there is no point in having it pre-clear the view for you, since you will likely need to re-draw the whole area anyway. It may prevent your view from being automatically erased, but you cannot depend on it.
To draw on top of your view without erasing, do your drawing to an off-screen bitmap context (which is never cleared by the system.) Then in your drawRect, copy from this off-screen buffer to the view.
Example:
- (id)initWithCoder:(NSCoder *)coder {
    if (self = [super initWithCoder:coder]) {
        self.backgroundColor = [UIColor clearColor];
        CGSize size = self.frame.size;
        drawingContext = [self createOffscreenContext:size];
    }
    return self;
}

- (CGContextRef)createOffscreenContext:(CGSize)size {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, size.width, size.height, 8, size.width * 4, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextTranslateCTM(context, 0, size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    return context;
}
- (void)drawRect:(CGRect)rect {
    UIGraphicsPushContext(drawingContext);
    CGImageRef cgImage = CGBitmapContextCreateImage(drawingContext);
    UIImage *uiImage = [[UIImage alloc] initWithCGImage:cgImage];
    UIGraphicsPopContext();
    CGImageRelease(cgImage);
    [uiImage drawInRect:rect];
    [uiImage release];
}
TODO: can anyone optimize the drawRect so that only the (usually tiny) modified rectangle region is used for the copy?
It is fairly common to draw everything in an offscreen image, and simply display this image when drawing the screen. You can read: Creating a Bitmap Graphics Context.
On optimizing drawRect
Try this:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGImageRef cgImage = CGBitmapContextCreateImage(drawingContext);
    CGContextClipToRect(context, rect);
    CGContextDrawImage(context, CGRectMake(0, 0, self.frame.size.width, self.frame.size.height), cgImage);
    CGImageRelease(cgImage);
}
After that you should also comment these lines in your code:
//CGContextTranslateCTM(context, 0, size.height);
//CGContextScaleCTM(context, 1.0, -1.0);
I've also created a separate question to confirm whether this is the optimal way.
This seems like a better method than the one I've been using: in a touch event I make a copy of the view about to be updated, then in drawRect I draw that image into the view and make my other changes at the same time.
That always seemed inefficient, but it was the only way I had figured out how to do it.
This prevents your view from being erased before drawRect is done:
[self.layer setNeedsDisplay];
Also, I find it is better to do all the drawing in the drawRect method (unless you have a good reason not to). Drawing offscreen and transferring takes more time and adds more complexity than simply drawing everything once.