What I'm trying to Accomplish
I have a UICollectionView in which I'm trying to render all of the drawing in the background, then display it when finished with a fade-in animation.
I'm already doing this well with images, but some of the drawing is text only. I need to size the text appropriately, then draw it in the background.
It's potentially a lot of text, and it creates stuttering when done on the main thread.
How I'm trying to do it
I was using CGBitmapContextCreate for images, so I tried to do it with text as well:
-(void)drawTextFromBundle
{
    UIFont *font = [UIFont AvenirLTStdBlackObliqueWithSize:35]; //custom font
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL, 250, backgroundHeight - 112, 8, 250 * 4, colorSpaceRef, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
        CGColorSpaceRelease(colorSpaceRef);

        [_text drawInRect:CGRectMake(0, 0, 250, backgroundHeight - 112) withFont:font];

        CGImageRef outputImage = CGBitmapContextCreateImage(context);
        imageRef = outputImage;
        [self performSelectorOnMainThread:@selector(finishDrawingImage) withObject:nil waitUntilDone:YES];
        CGContextRelease(context);
        CGImageRelease(outputImage);
    });
}
Details
This is obviously not the right way to do it, because I'm getting many errors, all involving Core Graphics text functions, similar to <Error>: CGContextSetFont: invalid context 0x0.
I know there is UIGraphicsGetCurrentContext, but I wasn't sure whether it was thread-safe; I've heard it isn't.
To note, this method is indeed getting called from within a -drawRect: method. The exact same context parameters are working for my images.
What can I do to get the text drawn with any customization I want, all done safely in the background? Bonus points if you can tell me how to do it while shrinking the text to fit the appropriate size.
Thanks again SO team.
When you change the UI, you need to do it on the main thread (main queue).
Try putting the code below in the main queue:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, 250, backgroundHeight - 112, 8, 250 * 4, colorSpaceRef, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
    CGColorSpaceRelease(colorSpaceRef);
    dispatch_async(dispatch_get_main_queue(), ^{
        [_text drawInRect:CGRectMake(0, 0, 250, backgroundHeight - 112) withFont:font];
        CGImageRef outputImage = CGBitmapContextCreateImage(context);
        imageRef = outputImage;
        [self performSelectorOnMainThread:@selector(finishDrawingImage) withObject:nil waitUntilDone:YES];
        CGContextRelease(context);
        CGImageRelease(outputImage);
    });
});
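For what it's worth, the string-drawing calls just need a current UIKit context, and Apple's iOS 4 release notes describe UIKit string and image drawing as thread-safe, so another option is to stay entirely on the background queue and push the bitmap context yourself before drawing. Treat the following as a sketch, not a verified drop-in: it reuses _text, backgroundHeight, imageRef, and finishDrawingImage from the question, and uses the single-line drawAtPoint:forWidth:... variant of string drawing to get the shrink-to-fit behavior asked about.

-(void)drawTextFromBundle
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        UIFont *font = [UIFont AvenirLTStdBlackObliqueWithSize:35]; //custom font
        CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL, 250, backgroundHeight - 112, 8, 250 * 4, colorSpaceRef, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
        CGColorSpaceRelease(colorSpaceRef);

        // Flip to UIKit's top-left origin so the text isn't drawn upside down.
        CGContextTranslateCTM(context, 0, backgroundHeight - 112);
        CGContextScaleCTM(context, 1, -1);

        // Make this context current for UIKit drawing on this thread.
        UIGraphicsPushContext(context);
        [[UIColor blackColor] set];
        // Single-line shrink-to-fit; for multi-line text, loop
        // sizeWithFont:constrainedToSize: down to a minimum size instead.
        [_text drawAtPoint:CGPointZero
                  forWidth:250
                  withFont:font
               minFontSize:14
            actualFontSize:NULL
             lineBreakMode:UILineBreakModeTailTruncation
        baselineAdjustment:UIBaselineAdjustmentAlignBaselines];
        UIGraphicsPopContext();

        CGImageRef outputImage = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        dispatch_async(dispatch_get_main_queue(), ^{
            imageRef = outputImage;
            [self finishDrawingImage];
            CGImageRelease(outputImage);
        });
    });
}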
Related
I am developing an iPad application in iOS 6 that shows designs of houses with different colors and textures. For that I am using cocos2d, and for showing the used textures and colors on the house, I am using UIKit views.
Now I want to take a screenshot of this view, which contains both the cocos2d layer and the UIKit views.
If I take the screenshot using cocos2d, like:
UIImage *screenshot = [AppDelegate screenshotWithStartNode:n];
then it only captures the cocos2d layer.
And if I take the screenshot using UIKit, like:
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
then it only captures the UIKit components and blacks out the cocos2d part.
I want both of them in the same screenshot...
Try this method, and adjust the code to your requirements:
-(UIImage*) screenshotUIImage
{
    CGSize displaySize = [self displaySize];
    CGSize winSize = [self winSize];

    //Create buffer for pixels
    GLuint bufferLength = displaySize.width * displaySize.height * 4;
    GLubyte *buffer = (GLubyte *)malloc(bufferLength);

    //Read pixels from OpenGL
    glReadPixels(0, 0, displaySize.width, displaySize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    //Make data provider with data
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);

    //Configure image
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * displaySize.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef iref = CGImageCreate(displaySize.width, displaySize.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    uint32_t *pixels = (uint32_t *)malloc(bufferLength);
    CGContextRef context = CGBitmapContextCreate(pixels, winSize.width, winSize.height, 8, winSize.width * 4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

    CGContextTranslateCTM(context, 0, displaySize.height);
    CGContextScaleCTM(context, 1.0f, -1.0f);

    switch (deviceOrientation_)
    {
        case CCDeviceOrientationPortrait:
            break;
        case CCDeviceOrientationPortraitUpsideDown:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(180));
            CGContextTranslateCTM(context, -displaySize.width, -displaySize.height);
            break;
        case CCDeviceOrientationLandscapeLeft:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(-90));
            CGContextTranslateCTM(context, -displaySize.height, 0);
            break;
        case CCDeviceOrientationLandscapeRight:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(90));
            CGContextTranslateCTM(context, displaySize.width * 0.5f, -displaySize.height);
            break;
    }

    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, displaySize.width, displaySize.height), iref);

    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *outputImage = [UIImage imageWithCGImage:imageRef];

    //Dealloc
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGImageRelease(iref);
    CGColorSpaceRelease(colorSpaceRef);
    CGContextRelease(context);
    free(buffer);
    free(pixels);

    return outputImage;
}

- (Texture2D*) screenshotTexture {
    return [[Texture2D alloc] initWithImage:[self screenshotUIImage]];
}
For more info, see this link, and read through all the answers and comments there; it's very interesting.
I hope this helps you.
I researched this a lot, and at present I can't find any code that can take a screenshot of a screen containing both cocos2d and UIKit. There is some code available, but it is not acceptable on the App Store; if you use that code, your app will be rejected.
So for now, I found a temporary solution to achieve this:
First I took the screenshot of my cocos2d layer, then loaded that screenshot into a UIImageView and added that UIImageView to my screen behind all the existing UIViews, so that the user can't see it happen:
[self.view insertSubview:imgView atIndex:1];
At index 1, because my cocos2d layer is at index 0, so it sits just above that.
Now that the cocos2d picture was part of my UIKit view hierarchy, I took the screenshot of the current screen in the normal UIKit way. And there we are: I now had a screenshot containing both views.
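For reference, the "normal UIKit way" here is usually a renderInContext: pass over the window's layer. A minimal sketch, with an illustrative method name:

#import <QuartzCore/QuartzCore.h>

- (UIImage *)combinedScreenshot
{
    UIWindow *window = [[UIApplication sharedApplication] keyWindow];
    // Scale 0 means "use the screen's scale", so retina devices get a
    // full-resolution screenshot.
    UIGraphicsBeginImageContextWithOptions(window.bounds.size, NO, 0);
    // renderInContext: walks the whole layer tree, so it also picks up
    // the UIImageView holding the cocos2d snapshot at index 1.
    [window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenshot;
}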
This works for me for now. If anyone finds a proper solution for this, you're most welcome to share it!
I'll be waiting for any feasible solution.
Thanks all for the help!
What I'm trying to Accomplish
Draw an image in a background thread
Convert the CGImage to a UIImage and add it to a UIImageView on the main thread.
Fade in the image view, which is on a subclassed UICollectionViewCell, from alpha value 0 to 1.
Do it all so there's no choppiness when scrolling the UICollectionView.
The Problem
The behavior acts normally at first, but quickly digresses into unpredictability, usually resulting in EXC_BAD_ACCESS errors somewhere in the process of converting UIImage to CGImage or vice versa.
The Code
//setup for background drawing
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, 250, backgroundHeight - 112, 8, 250 * 4, colorSpaceRef, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
CGColorSpaceRelease(colorSpaceRef);

//Take the image I want drawn, convert to cgimage for background drawing
CGImageRef image = contentImage.CGImage;
CGContextDrawImage(context, CGRectMake(0, 0, 250, backgroundHeight - 112), image);
CGImageRelease(image);

CGImageRef outputImage = CGBitmapContextCreateImage(context);
imageRef = outputImage;
[self performSelectorOnMainThread:@selector(addImageToView) withObject:nil waitUntilDone:YES];
CGContextRelease(context);
CGImageRelease(outputImage);
The addImageToView method simply creates an image and adds it to my UIImageView:
UIImage * image = [UIImage imageWithCGImage:imageRef];
[photoView setImage:image];
These methods get called during the cellForRowAtIndexPath method, along with my fadeInImage method:
[UIView animateWithDuration:.6 delay:0 options:UIViewAnimationOptionCurveEaseIn animations:^{
photoView.alpha = 1;
} completion:nil];
When I run, I get the bad-access crashes. I'm guessing it has something to do with the main thread and the background threads passing the images between one another. Any ideas? Thanks, guys.
I think that as long as you didn't create the image with CGImageCreate or use CGImageRetain, you don't have to call CGImageRelease(image); it should be released automatically once you stop using it. Check out the
Apple documentation
Don't release the first image: CGImageRelease(image);
And it is probably better to change:
CGImageRef outputImage = CGBitmapContextCreateImage(context);
imageRef = outputImage;
[self performSelectorOnMainThread:#selector(addImageToView) withObject:nil waitUntilDone:YES];
To:
CGImageRef outputImage = CGBitmapContextCreateImage(context);
UIImage *result = [UIImage imageWithCGImage:outputImage];
[self performSelectorOnMainThread:@selector(addImageToView:) withObject:result waitUntilDone:YES];
And then:
- (void) addImageToView:(UIImage*) image {
[photoView setImage:image];
}
Or even:
[photoView performSelectorOnMainThread:@selector(setImage:) withObject:result waitUntilDone:YES];
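If you prefer GCD over performSelectorOnMainThread:, an equivalent hand-off looks like this (a sketch; imageWithCGImage: retains the CGImage, so the early release is safe):

CGImageRef outputImage = CGBitmapContextCreateImage(context);
UIImage *result = [UIImage imageWithCGImage:outputImage];
CGImageRelease(outputImage); // result now keeps its own reference
dispatch_async(dispatch_get_main_queue(), ^{
    [photoView setImage:result];
});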
Wondering if there is a way to isolate a single color in an image, either using masks or perhaps even a custom color space. I'm ultimately looking for a fast way to isolate 14 colors out of an image; I figured that if there were a masking method, it might be faster than walking through the pixels.
Any help is appreciated!
You could use a custom color space (documentation here) and then substitute it for "CGColorSpaceCreateDeviceGray()" in the following code:
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);

    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray(); // <- SUBSTITUTE HERE

    // Create bitmap context with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);

    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [image CGImage]);

    // Create bitmap image info from pixel data in current context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);

    // Create a new UIImage object
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];

    // Release colorspace, context and bitmap information
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);

    // Return the new grayscale image
    return newImage;
}
This code is from this blog, which is worth a look for removing colors from images.
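And if the color-space route doesn't pan out, the pixel walk the question mentions is simpler than it sounds. Here is a sketch that keeps pixels near any of the target colors and clears the rest; the 14-entry palette, the tolerance of 16, and the RGBA byte order are all assumptions you would adjust:

#include <stdlib.h>

static BOOL matchesPalette(const uint8_t *p, const uint8_t palette[][3],
                           size_t paletteCount, int tolerance)
{
    for (size_t c = 0; c < paletteCount; c++) {
        if (abs((int)p[0] - palette[c][0]) <= tolerance &&
            abs((int)p[1] - palette[c][1]) <= tolerance &&
            abs((int)p[2] - palette[c][2]) <= tolerance)
            return YES;
    }
    return NO;
}

// pixels: RGBA bytes from a CGBitmapContext backed by your own buffer
static void isolateColors(uint8_t *pixels, size_t pixelCount,
                          const uint8_t palette[][3], size_t paletteCount)
{
    for (size_t i = 0; i < pixelCount; i++) {
        uint8_t *p = pixels + i * 4;
        if (!matchesPalette(p, palette, paletteCount, 16)) {
            p[0] = p[1] = p[2] = p[3] = 0; // clear non-matching pixels
        }
    }
}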
I'm trying to take my GL buffer and turn it into a UIImage while retaining the per-pixel alpha within that buffer. It doesn't seem to work; the result I'm getting is the buffer without alpha. Can anyone help? I feel like I must be missing a few key steps somewhere. I would really love any advice on this.
Basically I do:
//Read pixels from OpenGL
glReadPixels(0, 0, miDrawBufferWidth, miDrawBufferHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

//Make data provider with data
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, len, NULL);

//Configure image
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(miDrawBufferWidth, miDrawBufferHeight, 8, 32, (4 * miDrawBufferWidth), colorSpaceRef, kCGBitmapByteOrderDefault, provider, NULL, NO, kCGRenderingIntentDefault);

// use device's orientation's width/height to determine context dimensions (and consequently resulting image's dimensions)
uint32 *pixels = (uint32 *)IQ_NEW(kHeapGfx, "snapshot_pixels") GLubyte[len];

// use kCGImageAlphaLast? :-/
CGContextRef context = CGBitmapContextCreate(pixels, iRotatedWidth, iRotatedHeight, 8, (4 * iRotatedWidth), CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, miDrawBufferWidth, miDrawBufferHeight), iref);

UIImage *outputImage = [[UIImage alloc] initWithCGImage:CGBitmapContextCreateImage(context)];

//cleanup
CGDataProviderRelease(provider);
CGImageRelease(iref);
CGContextRelease(context);

return outputImage;
Yes! Luckily, it seems someone has solved this exact problem here: http://www.iphonedevsdk.com/forum/iphone-sdk-development/23525-cgimagecreate-alpha.html
It boiled down to an extra kCGImageAlphaLast flag being passed into CGImageCreate to incorporate the alpha (along with the kCGBitmapByteOrderDefault flag). :)
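In other words, the CGImageCreate call from the question becomes something like this (same variables as above):

// kCGImageAlphaLast tells Core Graphics that the fourth byte of each
// RGBA pixel is alpha, so the per-pixel transparency survives.
CGImageRef iref = CGImageCreate(miDrawBufferWidth, miDrawBufferHeight,
                                8, 32, (4 * miDrawBufferWidth), colorSpaceRef,
                                kCGBitmapByteOrderDefault | kCGImageAlphaLast,
                                provider, NULL, NO, kCGRenderingIntentDefault);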
I need to paint an image using some data and show it in my iPhone application. Since the painting takes significant time (2-3 seconds on the device), I want to perform the painting on a different thread. I also want to be able to cancel the painting, change something in the data, and start it again, so NSOperation is the best fit for me.
Now, when I do the drawing on the main thread, everything looks fine.
When I do exactly the same thing using an NSOperation subclass, everything looks fine, but only 95% of the time. Sometimes it doesn't draw the full picture. Sometimes it doesn't draw the text. Sometimes it uses different colors; there might be red/green/blue dots scattered all over the image, and so on.
I made a very short example to illustrate this:
First, we do all the painting on the main thread in a regular method:
//setting up bitmap context
size_t width = 400;
size_t height = 400;
size_t bitsPerComponent = 8;
size_t bytesPerRow = 4 * width;
void *imageData = malloc(bytesPerRow * height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(imageData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CFRelease(colorSpace);

//transforming it to the usual coordinate system
CGRect mapRect = CGRectMake(0, 0, width, height);
UIGraphicsPushContext(context);
CGContextTranslateCTM(context, 0, mapRect.size.height);
CGContextScaleCTM(context, 1, -1);

//actual drawing - nothing complicated here, 2 lines and 3 text strings on a white background
CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
CGContextFillRect(context, mapRect);
CGContextSetLineWidth(context, 3);
CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
CGContextMoveToPoint(context, 10, 10);
CGContextAddLineToPoint(context, 20, 20);
CGContextStrokePath(context);
CGContextMoveToPoint(context, 20, 20);
CGContextAddLineToPoint(context, 100, 100);
CGContextStrokePath(context);
[[UIColor blackColor] set];
[[NSString stringWithString:@"tag1"] drawInRect:CGRectMake(10, 10, 40, 15) withFont:[UIFont systemFontOfSize:15]];
[[NSString stringWithString:@"tag2"] drawInRect:CGRectMake(20, 20, 40, 15) withFont:[UIFont systemFontOfSize:15]];
[[NSString stringWithString:@"tag3"] drawInRect:CGRectMake(100, 100, 40, 15) withFont:[UIFont systemFontOfSize:15]];

//getting a UIImage from the bitmap context
CGImageRef _trueMap = CGBitmapContextCreateImage(context);
if (_trueMap) {
    UIImage *_map = [UIImage imageWithCGImage:_trueMap];
    CFRelease(_trueMap);
    //displaying what we got
    //self.map leads to a UIImageView
    self.map = _map;
}

//releasing context and memory
UIGraphicsPopContext();
CFRelease(context);
free(imageData);
No errors here. Always works.
Now, I'll subclass NSOperation and copy-paste this code there. The interface:
@interface Painter : NSOperation {
    //The controller which contains UIImageView we will use to display image
    MapViewController *mapViewController;
    CGContextRef context;
    void *imageData;
}

@property (nonatomic, assign) MapViewController *mapViewController;

- (id)initWithRootController:(MapViewController *)mvc__;

@end
Now the methods:
- (id)initWithRootController:(MapViewController *)mvc__ {
    if (self = [super init]) {
        self.mapViewController = mvc__;

        size_t width = 400;
        size_t height = 400;
        size_t bitsPerComponent = 8;
        size_t bytesPerRow = 4 * width;
        imageData = malloc(bytesPerRow * height);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        context = CGBitmapContextCreate(imageData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
        CFRelease(colorSpace);
    }
    return self;
}
- (void)main {
    size_t width = 400;
    size_t height = 400;

    //transforming it to the usual coordinate system
    CGRect mapRect = CGRectMake(0, 0, width, height);
    UIGraphicsPushContext(context);
    CGContextTranslateCTM(context, 0, mapRect.size.height);
    CGContextScaleCTM(context, 1, -1);

    //actual drawing - nothing complicated here, 2 lines and 3 text strings on a white background
    CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextFillRect(context, mapRect);
    CGContextSetLineWidth(context, 3);
    CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
    CGContextMoveToPoint(context, 10, 10);
    CGContextAddLineToPoint(context, 20, 20);
    CGContextStrokePath(context);
    CGContextMoveToPoint(context, 20, 20);
    CGContextAddLineToPoint(context, 100, 100);
    CGContextStrokePath(context);
    [[UIColor blackColor] set];
    [[NSString stringWithString:@"tag1"] drawInRect:CGRectMake(10, 10, 40, 15) withFont:[UIFont systemFontOfSize:15]];
    [[NSString stringWithString:@"tag2"] drawInRect:CGRectMake(20, 20, 40, 15) withFont:[UIFont systemFontOfSize:15]];
    [[NSString stringWithString:@"tag3"] drawInRect:CGRectMake(100, 100, 40, 15) withFont:[UIFont systemFontOfSize:15]];

    //getting a UIImage from the bitmap context
    CGImageRef _trueMap = CGBitmapContextCreateImage(context);
    if (_trueMap) {
        UIImage *_map = [UIImage imageWithCGImage:_trueMap];
        CFRelease(_trueMap);
        //displaying what we got
        [mapViewController performSelectorOnMainThread:@selector(setMap:) withObject:_map waitUntilDone:YES];
    }

    //releasing context and memory
    UIGraphicsPopContext();
    CFRelease(context);
    free(imageData);
}
Again, there are no significant code changes between these two pieces of code. And when I start the operation like this:
NSOperationQueue* repaintQueue = [[NSOperationQueue alloc] init];
repaintQueue.maxConcurrentOperationCount = 1;
[repaintQueue addOperation:[[[Painter alloc] initWithRootController:self] autorelease]];
It works, but not always; sometimes the image contains artifacts.
I've also made a few screenshots to illustrate the issue, but couldn't post them =(
Anyway, there is a screenshot which shows the red line and the 3 text lines (which is fine), and a screenshot which shows the red line, no text lines, and "tag2" written upside down on the tab bar controller.
So what's the problem?
Can't I use Quartz with NSOperation? Is there some kind of restriction on drawing on separate threads? If so, is there a way to bypass those restrictions?
If anyone has ever seen this problem, please reply.
A few calls in your code are from UIKit (the ones prefixed UI), and UIKit is not thread-safe. Any UI operations should be called on the main thread, or you risk weird things happening, such as the artifacts you're seeing.
I can't speak for Quartz 2D (or Core Graphics) itself, as I haven't used a lot of it directly. But I do know that UIKit is not thread-safe.
The method drawInRect:withFont: is from a category added to NSString by UIKit (UIStringDrawing). These methods are not thread-safe, and that is why you are seeing strange behaviour.
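If the drawing really has to stay on the NSOperation, one hedged workaround is to replace the UIStringDrawing calls with pure Core Graphics text calls, which take the context explicitly and don't touch UIKit. A sketch for the three tags (CGContextSelectFont only handles MacRoman-encoded text, and the baseline y-coordinates below are rough approximations of the original rects):

// Thread-safe replacement for the drawInRect:withFont: calls.
CGContextSelectFont(context, "Helvetica", 15, kCGEncodingMacRoman);
CGContextSetTextDrawingMode(context, kCGTextFill);
CGContextSetRGBFillColor(context, 0, 0, 0, 1); // black, as before

// The CTM was flipped to UIKit orientation earlier, so flip the text
// matrix back or the glyphs will render mirrored.
CGContextSetTextMatrix(context, CGAffineTransformMakeScale(1, -1));

CGContextShowTextAtPoint(context, 10, 23, "tag1", 4);
CGContextShowTextAtPoint(context, 20, 33, "tag2", 4);
CGContextShowTextAtPoint(context, 100, 113, "tag3", 4);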