Problems when using Quartz 2D in an NSOperation - iPhone

I need to paint an image from some data and show it in my iPhone application. Since the painting takes significant time (2-3 seconds on the device), I want to perform it on a different thread. I also want to be able to cancel the painting, change something in the data, and start it again, so NSOperation seems like the best fit.
Now, when I do the drawing on the main thread, everything looks fine.
When I do exactly the same thing using an NSOperation subclass, everything looks fine, but only about 95% of the time. Sometimes it doesn't draw the full picture, sometimes it doesn't draw the text, sometimes it uses different colors, sometimes there are red/green/blue dots scattered all over the image, and so on.
I made a very short example to illustrate this:
First, we do all the painting on a main thread in a regular method:
//setting up bitmap context
size_t width = 400;
size_t height = 400;
size_t bitsPerComponent = 8;
size_t bytesPerRow = 4 * width;
void* imageData = malloc(bytesPerRow * height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(imageData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CFRelease(colorSpace);
//transforming it to usual coordinate system
CGRect mapRect = CGRectMake(0, 0, width, height);
UIGraphicsPushContext(context);
CGContextTranslateCTM(context, 0, mapRect.size.height);
CGContextScaleCTM(context, 1, -1);
//actual drawing - nothing complicated here, 2 lines and 3 text strings on a white background
CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
CGContextFillRect(context, mapRect);
CGContextSetLineWidth(context, 3);
CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
CGContextMoveToPoint(context, 10, 10);
CGContextAddLineToPoint(context, 20, 20);
CGContextStrokePath(context);
CGContextMoveToPoint(context, 20, 20);
CGContextAddLineToPoint(context, 100, 100);
CGContextStrokePath(context);
[[UIColor blackColor] set];
[[NSString stringWithString:@"tag1"] drawInRect:CGRectMake(10, 10, 40, 15) withFont:[UIFont systemFontOfSize:15]];
[[NSString stringWithString:@"tag2"] drawInRect:CGRectMake(20, 20, 40, 15) withFont:[UIFont systemFontOfSize:15]];
[[NSString stringWithString:@"tag3"] drawInRect:CGRectMake(100, 100, 40, 15) withFont:[UIFont systemFontOfSize:15]];
//getting UIImage from bitmap context
CGImageRef _trueMap = CGBitmapContextCreateImage(context);
if (_trueMap) {
UIImage* _map = [UIImage imageWithCGImage:_trueMap];
CFRelease(_trueMap);
//displaying what we got
//self.map leads to UIImageView
self.map = _map;
}
//releasing context and memory
UIGraphicsPopContext();
CFRelease(context);
free(imageData);
No errors here. Always works.
Now, I'll subclass NSOperation and copy-paste this code there. The interface:
@interface Painter : NSOperation {
//The controller which contains UIImageView we will use to display image
MapViewController* mapViewController;
CGContextRef context;
void* imageData;
}
@property (nonatomic, assign) MapViewController* mapViewController;
- (id) initWithRootController:(MapViewController*)mvc__;
@end
Now the methods:
- (id) initWithRootController:(MapViewController*)mvc__ {
if (self = [super init]) {
self.mapViewController = mvc__;
size_t width = 400;
size_t height = 400;
size_t bitsPerComponent = 8;
size_t bytesPerRow = 4 * width;
imageData = malloc(bytesPerRow * height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
context = CGBitmapContextCreate(imageData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CFRelease(colorSpace);
}
return self;
}
- (void) main {
size_t width = 400;
size_t height = 400;
//transforming it to usual coordinate system
CGRect mapRect = CGRectMake(0, 0, width, height);
UIGraphicsPushContext(context);
CGContextTranslateCTM(context, 0, mapRect.size.height);
CGContextScaleCTM(context, 1, -1);
//actual drawing - nothing complicated here, 2 lines and 3 text strings on a white background
CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
CGContextFillRect(context, mapRect);
CGContextSetLineWidth(context, 3);
CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
CGContextMoveToPoint(context, 10, 10);
CGContextAddLineToPoint(context, 20, 20);
CGContextStrokePath(context);
CGContextMoveToPoint(context, 20, 20);
CGContextAddLineToPoint(context, 100, 100);
CGContextStrokePath(context);
[[UIColor blackColor] set];
[[NSString stringWithString:@"tag1"] drawInRect:CGRectMake(10, 10, 40, 15) withFont:[UIFont systemFontOfSize:15]];
[[NSString stringWithString:@"tag2"] drawInRect:CGRectMake(20, 20, 40, 15) withFont:[UIFont systemFontOfSize:15]];
[[NSString stringWithString:@"tag3"] drawInRect:CGRectMake(100, 100, 40, 15) withFont:[UIFont systemFontOfSize:15]];
//getting UIImage from bitmap context
CGImageRef _trueMap = CGBitmapContextCreateImage(context);
if (_trueMap) {
UIImage* _map = [UIImage imageWithCGImage:_trueMap];
CFRelease(_trueMap);
//displaying what we got
[mapViewController performSelectorOnMainThread:@selector(setMap:) withObject:_map waitUntilDone:YES];
}
//releasing context and memory
UIGraphicsPopContext();
CFRelease(context);
free(imageData);
}
Again, there are no significant code changes between these two pieces of code. I start the operation like this:
NSOperationQueue* repaintQueue = [[NSOperationQueue alloc] init];
repaintQueue.maxConcurrentOperationCount = 1;
[repaintQueue addOperation:[[[Painter alloc] initWithRootController:self] autorelease]];
It will work, but not always; sometimes the image will contain artifacts.
I've also made a few screenshots to illustrate the issue, but I couldn't post them =(
Anyway, one screenshot shows the red line and the 3 text strings (which is fine), and another shows the red line, no text strings, and "tag2" written upside down on the tab bar controller.
So what's the problem?
Can't I use Quartz with NSOperation? Is there some kind of restriction on drawing on separate threads? If so, is there a way to work around it?
If anyone has ever seen this problem, please reply.

A few calls in your code are from UIKit (the ones prefixed UI), and UIKit is not thread-safe. Any UI operations should be performed on the main thread, or you risk weird things happening, such as the artifacts you're seeing.
I can't speak for Quartz 2D (or Core Graphics) itself, as I haven't used a lot of it directly. But I do know that UIKit is not thread-safe.
The method drawInRect:withFont: comes from a category that UIKit adds to NSString (UIStringDrawing). These methods are not thread-safe, and that is why you are seeing strange behaviour.
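If you want the whole operation to stay off UIKit, one option is to draw the three labels with plain Core Graphics text calls instead of the NSString category. Here is a rough sketch of that idea, reusing the question's flipped CTM; the font name and baseline offsets are illustrative assumptions, not exact equivalents of the original drawInRect: calls:
//drawing text with pure Core Graphics so no UIKit is touched on the background thread
CGContextSelectFont(context, "Helvetica", 15, kCGEncodingMacRoman);
CGContextSetTextDrawingMode(context, kCGTextFill);
CGContextSetRGBFillColor(context, 0, 0, 0, 1);
//the CTM is already flipped to UIKit orientation, so flip the text matrix back to keep the glyphs upright
CGContextSetTextMatrix(context, CGAffineTransformMakeScale(1.0f, -1.0f));
//y values are approximate baselines for the rects used in the question; 4 is the C-string length
CGContextShowTextAtPoint(context, 10, 22, "tag1", 4);
CGContextShowTextAtPoint(context, 20, 32, "tag2", 4);
CGContextShowTextAtPoint(context, 100, 112, "tag3", 4);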

Related

Anti-aliasing UIImage looks blurred or jagged

I want to draw 12 images in a circle representing the watch numbers. I have read all the topics on Stack Overflow regarding images with transparent borders, but it's not working in my case.
-(UIImage *)addImageNumber_:(UIImage *)img {
int w = img.size.width;
int h = img.size.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(0, 0, w, h), img.CGImage);
CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 1);
CGContextSetShouldAntialias(context,YES);
CGContextSetAllowsAntialiasing( context ,YES );
CGAffineTransform transform;
for (int x=0; x<=11; x++) {
UIImage *timg1 = [UIImage imageNamed:@"2.png"];
CGRect imageRect = CGRectMake(0, 0, timg1.size.width+2, timg1.size.height+2);
UIGraphicsBeginImageContext(imageRect.size);
[timg1 drawInRect:CGRectMake(1,1,timg1.size.width,timg1.size.height)];
timg1 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
transform = CGAffineTransformIdentity;
CGContextDrawImage(context, CGRectMake((w-26)/2, 0, 26, 30), timg1.CGImage);
transform = CGAffineTransformConcat(transform, CGAffineTransformMakeTranslation(-w/2, -w/2));
transform = CGAffineTransformConcat(transform, CGAffineTransformMakeRotation(radians(30)));
transform = CGAffineTransformConcat(transform, CGAffineTransformMakeTranslation(w/2, w/2));
CGContextConcatCTM(context, transform);
}
CGImageRef imageMasked = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
return [UIImage imageWithCGImage:imageMasked];
}
UIViewEdgeAntialiasing = YES;
UIImage *img = [UIImage imageNamed:@"test.png"];
UIImage *img2 = [self addImageNumber_:img ];
R1.image = img2;
[self.view addSubview:R1];
test.png is the background of the watch; 2.png is a transparent PNG with transparent borders.
The numbers at 12 o'clock and 6 o'clock look OK because they are not rotated; the rest are jagged.
Never say UIGraphicsBeginImageContext(imageRect.size). Say UIGraphicsBeginImageContextWithOptions(imageRect.size, NO, 0). That way, on a double-resolution screen, you'll get a double-resolution graphics context.
You could even try a scale value of 4 to increase the resolution even further.
Of course, the fact that you're starting with a pre-drawn image of a "2" might limit your resolution; you can't make a silk purse out of a sow's ear. You could be a lot better off drawing the "2" from scratch as a string.
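Applied to the loop in the question, that change would look roughly like this (only the image-context call changes; everything else is the question's own code):
UIImage *timg1 = [UIImage imageNamed:@"2.png"];
CGRect imageRect = CGRectMake(0, 0, timg1.size.width+2, timg1.size.height+2);
//passing 0 as the scale makes the context match the screen scale (2x on Retina)
UIGraphicsBeginImageContextWithOptions(imageRect.size, NO, 0);
[timg1 drawInRect:CGRectMake(1,1,timg1.size.width,timg1.size.height)];
timg1 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();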
Try:
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
Swift 3:
context!.interpolationQuality = .high

CGContextShowTextAtPoint producing non-Retina output on Retina device

I'm trying to add text to a UIImage and I'm getting a pixelated drawing.
I have tried some other answers with no success:
Drawing on the retina display using CoreGraphics - Image pixelated
Retina display core graphics font quality
Drawing with Core Graphics looks chunky on Retina display
My code:
-(UIImage *)addText:(UIImage *)imgV text:(NSString *)text1
{
int w = self.frame.size.width * 2;
int h = self.frame.size.height * 2;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate
(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(0, 0, w, h), imgV.CGImage);
CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 1);
char* text = (char *)[text1 cStringUsingEncoding:NSASCIIStringEncoding];
CGContextSelectFont(context, "Arial", 24, kCGEncodingMacRoman);
// Adjust text;
CGContextSetTextDrawingMode(context, kCGTextInvisible);
CGContextShowTextAtPoint(context, 0, 0, text, strlen(text));
CGPoint pt = CGContextGetTextPosition(context);
float posx = (w/2 - pt.x)/2.0;
float posy = 54.0;
CGContextSetTextDrawingMode(context, kCGTextFill);
CGContextSetRGBFillColor(context, 255, 255, 255, 1.0);
CGContextShowTextAtPoint(context, posx, posy, text, strlen(text));
CGImageRef imageMasked = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
return [UIImage imageWithCGImage:imageMasked];
}
Like Peter said, use UIGraphicsBeginImageContextWithOptions. You might want to pass image.scale to the scale attribute (where image.scale is the scale of one of the images you're drawing), or simply use [UIScreen mainScreen].scale.
This code can be made simpler overall. Try something like:
// Create the image context
UIGraphicsBeginImageContextWithOptions(_baseImage.size, NO, _baseImage.scale);
// Draw the image
CGRect rect = CGRectMake(0, 0, _baseImage.size.width, _baseImage.size.height);
[_baseImage drawInRect:rect];
// Get a vertically centered rect for the text drawing
rect.origin.y = (rect.size.height - FONT_SIZE) / 2 - 2;
rect = CGRectIntegral(rect);
rect.size.height = FONT_SIZE;
// Draw the text
UIFont *font = [UIFont boldSystemFontOfSize:FONT_SIZE];
[[UIColor whiteColor] set];
[text drawInRect:rect withFont:font lineBreakMode:NSLineBreakByClipping alignment:NSTextAlignmentCenter];
// Get and return the new image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
Where text is an NSString object.
I was facing the exact same problem and posted a question last night to which no one replied. I have combined fnf's suggested changes with Clif Viegas's answer to the following question:
CGContextDrawImage draws image upside down when passed UIImage.CGImage
to come up with the solution. My addText method is slightly different from yours:
+(UIImage *)addTextToImage:(UIImage *)img text:(NSString *)text1{
int w = img.size.width;
int h = img.size.height;
CGSize size = CGSizeMake(w, h);
if (UIGraphicsBeginImageContextWithOptions != NULL) {
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
} else {
UIGraphicsBeginImageContext(size);
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, 0,h);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, CGRectMake(0, 0, w, h), img.CGImage);
CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 1);
char* text = (char *)[text1 cStringUsingEncoding:NSASCIIStringEncoding]; // e.g. "05/05/09"
CGContextSelectFont(context, "Times New Roman", 14, kCGEncodingMacRoman);
CGContextSetTextDrawingMode(context, kCGTextFill);
CGContextSetRGBFillColor(context, 0, 0, 0, 1);
//rotate text
CGContextSetTextMatrix(context, CGAffineTransformMakeRotation( -M_PI/8 ));
CGContextShowTextAtPoint(context, 70, 88, text, strlen(text));
CGColorSpaceRelease(colorSpace);
UIImage *newImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImg;
}
In summary, what I have done is add
if (UIGraphicsBeginImageContextWithOptions != NULL) {
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
} else {
UIGraphicsBeginImageContext(size);
}
Remove
// CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
and put
CGContextRef context = UIGraphicsGetCurrentContext();
instead. But this causes the image to be upside down. Hence I put
CGContextTranslateCTM(context, 0,h);
CGContextScaleCTM(context, 1.0, -1.0);
just before
CGContextDrawImage(context, CGRectMake(0, 0, w, h), img.CGImage);
This way you can use CG functions instead of using drawInRect etc.
Don't forget
UIGraphicsEndImageContext
at the end. Hope this helps.

A UIImage created in Core Graphics isn't repeating

I have this code, which should repeat the same UIImage:
UIView *paperMiddle = [[UIView alloc] initWithFrame:CGRectMake(0, 34, 320, rect.size.height - 34)];
UIImage *paperPattern = paperBackgroundPattern(context);
paperMiddle.backgroundColor = [UIColor colorWithPatternImage:paperPattern];
[self addSubview:paperMiddle];
And this is the paperBackgroundPattern method:
UIImage *paperBackgroundPattern(CGContextRef context) {
CGRect paper3 = CGRectMake(10, -15, 300, 16);
CGRect paper2 = CGRectMake(13, -15, 294, 16);
CGRect paper1 = CGRectMake(16, -15, 288, 16);
//Shadow
CGContextSetShadowWithColor(context, CGSizeMake(0,0), 10, [[UIColor colorWithWhite:0 alpha:0.5]CGColor]);
CGPathRef path = createRoundedRectForRect(paper3, 0);
CGContextSetFillColorWithColor(context, [[UIColor blackColor] CGColor]);
CGContextAddPath(context, path);
CGContextFillPath(context);
//Layers of paper
CGContextSaveGState(context);
drawPaper(context, paper3);
drawPaper(context, paper2);
drawPaper(context, paper1);
CGContextRestoreGState(context);
UIGraphicsBeginImageContextWithOptions(CGSizeMake(320, 1), NO, 0);
UIImage *paperImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return paperImage;
}
It isn't repeating the image. One result of having the image there, though, is that it shows up as the top pixel of the screen (which isn't the frame I've given it).
Any ideas why?
I don't know what the context is that you're passing in, but whatever it is, you shouldn't be drawing into it. And you aren't drawing anything into the context that you made with UIGraphicsBeginImageContextWithOptions.
If you want to generate an image, you don't need to pass in a context, just use the one that UIGraphicsBeginImageContextWithOptions makes for you.
UIImage *paperBackgroundPattern() {
UIGraphicsBeginImageContextWithOptions(CGSizeMake(320, 1), NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// draw into context, then...
UIImage *paperImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return paperImage;
}
Also -- are you really trying to make an image that's 320 pt wide and 1 high? It seems odd that you are drawing such elaborate stuff into such a tiny image.

Show an image in a CGContextRef

What I am doing: I downloaded code for a calendar, and now I want to show images on its tiles (for the dates).
What I am trying is shown in the code below:
- (void)drawTextInContext:(CGContextRef)ctx
{
CGContextSaveGState(ctx);
CGFloat width = self.bounds.size.width;
CGFloat height = self.bounds.size.height;
CGFloat numberFontSize = floorf(0.3f * width);
CGContextSetFillColorWithColor(ctx, kDarkCharcoalColor);
CGContextSetTextDrawingMode(ctx, kCGTextClip);
for (NSInteger i = 0; i < [self.text length]; i++) {
NSString *letter = [self.text substringWithRange:NSMakeRange(i, 1)];
CGSize letterSize = [letter sizeWithFont:[UIFont boldSystemFontOfSize:numberFontSize]];
CGContextSaveGState(ctx); // I will need to undo this clip after the letter's gradient has been drawn
[letter drawAtPoint:CGPointMake(4.0f+(letterSize.width*i), 0.0f) withFont:[UIFont boldSystemFontOfSize:numberFontSize]];
if ([self.date isToday]) {
CGContextSetFillColorWithColor(ctx, kWhiteColor);
CGContextFillRect(ctx, self.bounds);
} else {
// CGContextDrawLinearGradient(ctx, TextFillGradient, CGPointMake(0,0), CGPointMake(0, height/3), kCGGradientDrawsAfterEndLocation);
CGDataProviderRef dataProvider = CGDataProviderCreateWithFilename("left-arrow.png");
CGImageRef image = CGImageCreateWithPNGDataProvider(dataProvider, NULL, NO, kCGRenderingIntentDefault);
//UIImage* image = [UIImage imageNamed:@"left-arrow.png"];
//CGImageRef imageRef = image.CGImage;
CGContextDrawImage(ctx, CGRectMake(8.0f+(letterSize.width*i), 0.0f, 5, 5), image);
//im.image=[UIImage imageNamed:@"left-arrow.png"];
}
CGContextRestoreGState(ctx); // get rid of the clip for the current letter
}
CGContextRestoreGState(ctx);
}
In the else condition I want to show images on the tile, so I am converting the image objects into a CGImageRef.
Please help me.
I am not sure whether it should be done this way or some other way; please suggest how you would do it.
Thanks a lot.
The file path of the image seems to be problematic. You can retrieve the correct path with the NSBundle methods. Also, you're leaking a lot of memory, because you don't release your images and data providers. To make a long story short, try this:
[[UIImage imageNamed:@"left-arrow.png"] drawInRect:...]
or even simpler:
[[UIImage imageNamed:@"left-arrow.png"] drawAtPoint:...]
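In the question's else branch that might look roughly like this (a sketch only; the rect simply reuses the one from the original CGContextDrawImage call, and it assumes ctx is the current UIKit graphics context, as it is inside -drawRect:):
//sketch: draw the PNG via UIKit so the bundle path lookup and orientation are handled for you
UIImage *arrow = [UIImage imageNamed:@"left-arrow.png"];
[arrow drawInRect:CGRectMake(8.0f+(letterSize.width*i), 0.0f, 5, 5)];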

Flipped NSString drawing in CGContext

I'm trying to draw a string into this texture:
http://picasaweb.google.it/lh/photo/LkYWBv_S_9v2d6BAfbrhag?feat=directlink
but the green numbers appear vertically flipped.
I've created my context in this way:
colorSpace = CGColorSpaceCreateDeviceRGB();
data = malloc(height * width * 4);
context = CGBitmapContextCreate(data, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
and I draw the strings like this:
UIGraphicsPushContext(context);
for(int i=0;i<packs.size();++i)
{
CGPoint points[4] =
{
gltextures[i].texCoords[0] * size.width, //0
gltextures[i].texCoords[1] * size.height, //1
gltextures[i].texCoords[2] * size.width, //2
gltextures[i].texCoords[3] * size.height, //3
gltextures[i].texCoords[4] * size.width, //4
gltextures[i].texCoords[5] * size.height, //5
gltextures[i].texCoords[6] * size.width, //6
gltextures[i].texCoords[7] * size.height //7
};
CGRect debugRect = CGRectMake
(
gltextures[i].texCoords[0] * size.width,
gltextures[i].texCoords[1] * size.height,
gltextures[i].width,
gltextures[i].height
);
CGContextSetStrokeColorWithColor(context, [[UIColor redColor] CGColor]);
CGContextAddLines(context, points, 4);
CGContextClosePath(context);
CGContextDrawPath(context, kCGPathStroke);
NSString* s = [NSString stringWithFormat:@"%d", gltextures[i].texID];
UIFont* font = [UIFont fontWithName:@"Arial" size:12];
CGContextSetRGBFillColor(context, 0, 1, 0, 1);
[s drawAtPoint:points[0] withFont:font];
}
UIGraphicsPopContext();
The transformation matrix seems to be the identity matrix... no CGAffineTransform is applied to this context.
If the string is flipped, maybe all my images are flipped too!
Any suggestions?
PS: sorry for my English ;)
As described here, the coordinate system for a context within a UIView or its layer is inverted compared to the normal Quartz drawing space. However, when using Quartz to draw into an image, this is no longer the case. The UIStringDrawing Cocoa Touch category on NSString that gives you the -drawAtPoint:withFont: method assumes an inverted layout, and that's why you're seeing flipped text in your image.
One way to work around this would be to save the graphics state, apply an invert transform to the image, draw the text, and restore the old graphics state. This would look something like the following:
CGContextSaveGState(context);
CGContextTranslateCTM(context, 0.0f, self.bounds.size.height);
CGContextScaleCTM(context, 1.0f, -1.0f);
[text drawAtPoint:CGPointMake(0.0f, 0.0f) withFont:font];
CGContextRestoreGState(context);
This is a fairly crude example, in that I believe it just draws the text at the bottom of the image, but you should be able to adjust the translation or the coordinate of the text to place it where it's needed.
Better to save the text matrix before drawing; otherwise it will affect other text drawn by Core Graphics:
CGContextSaveGState(ctx);
CGAffineTransform save = CGContextGetTextMatrix(ctx);
CGContextTranslateCTM(ctx, 0.0f, self.bounds.size.height);
CGContextScaleCTM(ctx, 1.0f, -1.0f);
[str drawAtPoint:point withFont:font];
CGContextSetTextMatrix(ctx, save);
CGContextRestoreGState(ctx);