I want to be able to create a greyscale image with no alpha channel from a PNG in the app bundle.
This works, and I get an image created:
// Create graphics context the size of the overlapping rectangle
UIGraphicsBeginImageContext(rectangleOfOverlap.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// More stuff
CGContextDrawImage(ctx, drawRect2, [UIImage imageNamed:@"Image 01.png"].CGImage);
// Create the new UIImage from the context
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
However, the resulting image is 32 bits per pixel and has an alpha channel, so when I use CGImageCreateWithMask it doesn't work. I've tried creating a bitmap context like this:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef ctx = CGBitmapContextCreate(nil, rectangleOfOverlap.size.width, rectangleOfOverlap.size.height, 8, rectangleOfOverlap.size.width, colorSpace, kCGImageAlphaNone);
UIGraphicsGetImageFromCurrentImageContext then returns nil and the resulting image is not created. Am I doing something dumb here?
Any help would be greatly appreciated.
Regards
Dave
OK, I think I have an answer (or several of them)...
Firstly, if the context creation fails, the console gives a reasonable error message and tells you why it failed. In this case the context is created, so there is no error.
Secondly, the list of supported parameter combinations for CGBitmapContextCreate is here:
http://developer.apple.com/mac/library/qa/qa2001/qa1037.html
Thirdly, UIGraphicsGetImageFromCurrentImageContext() only works with a context created using UIGraphicsBeginImageContext(). For my bitmap context I needed to use:
CGImageRef cgImage = CGBitmapContextCreateImage(ctx); // follows the Create rule, so CGImageRelease it once the UIImage is made
UIImage *newImage = [UIImage imageWithCGImage:cgImage];
And fourthly, I cannot get at the underlying pixel data with CGBitmapContextGetData(ctx) if I create the context using NULL as the first parameter, even though the docs imply that from 10.3 onwards the memory is handled for you. To get around this I created a method called:
- (CGContextRef) newGreyScaleBitmapContextOfSize:(CGSize) size;
The method creates the context by malloc'ing the memory and returns a context ref. I am not sure if that is comprehensive, but as I've spent days on this I thought I'd let you know what I have discovered so far.
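For reference, a minimal sketch of what such a method might look like (this is a reconstruction under the assumptions above, not necessarily the exact code):
- (CGContextRef) newGreyScaleBitmapContextOfSize:(CGSize) size
{
    size_t width = (size_t)size.width;
    size_t height = (size_t)size.height;
    size_t bytesPerRow = width;                      // 8 bits per pixel, one channel, no alpha
    void *bitmapData = malloc(bytesPerRow * height); // freed by the caller after CGContextRelease
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(bitmapData, width, height, 8,
                                             bytesPerRow, colorSpace, kCGImageAlphaNone);
    CGColorSpaceRelease(colorSpace);
    return ctx;                                      // "new" prefix: the caller owns the context
}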
Hope this helps,
Dave
I'm looking into this method because I would like to convert a rather large NSImage to a smaller CGImage in order to assign it to a CALayer's contents.
From Apple's documentation I get that the proposedRect is supposed to be the size of the CGImage that will be returned and that, if I pass nil for the proposedRect, I will get a CGImage the size of the original NSImage. (Please correct me if I'm wrong.)
I tried calling it with nil for the proposed rect and it works perfectly, but when I try giving it some rectangle like (0,0,400,300), the resulting CGImage still is the size of the original image. The bit of code I'm using is as follows.
var r = NSRect(x: 0, y: 0, width: 400, height: 300)
let img = NSImage(contentsOf: url)?.cgImage(forProposedRect: &r, context: nil, hints: nil)
There must be something about this that I understood wrong. I really hope someone can tell me what that is.
This method is not for producing scaled images. The basic idea is that drawing the NSImage to the input rect in the context would produce a certain result. This method creates a CGImage such that, if it were drawn to the output rect in that same context, it would produce the same result.
So, it's perfectly valid for the method to return a CGImage the size of the original image. The scaling would occur when that CGImage is drawn to the rect.
There's some documentation about this that only exists in the historical release notes from when it was first introduced. Search for "NSImage, CGImage, and CoreGraphics impedance matching".
To produce a scaled-down image, you should create a new image of the size you want, lock focus on it, and draw the original image to it. Or, if you weren't aware, you can just assign your original image as the layer's contents and see if that's performant enough.
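If it helps, here is a rough sketch of that lock-focus approach in Objective-C. The names and the 400x300 size are just taken from your example, and NSCompositingOperationSourceOver is the modern name for the compositing constant (older SDKs call it NSCompositeSourceOver):
NSImage *original = [[NSImage alloc] initWithContentsOfURL:url];
NSImage *scaled = [[NSImage alloc] initWithSize:NSMakeSize(400, 300)];
[scaled lockFocus];
[original drawInRect:NSMakeRect(0, 0, 400, 300)
            fromRect:NSZeroRect
           operation:NSCompositingOperationSourceOver
            fraction:1.0];
[scaled unlockFocus];
// or, as noted above, just assign the original NSImage directly to layer.contents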
I need to convert each page of a PDF document to PNG images. These images are then displayed in a scroll view. I need to create two images per page: one at the screen size, and one 2.5 times bigger (used when the user zooms in the scroll view).
My problem is that I have sometimes memory warnings and crashes when I create big images.
The way I do it is well-known:
CGRect pageRect = CGPDFPageGetBoxRect(pdfPage, kCGPDFMediaBox);
float pdfScale = 2.5*self.view.frame.size.height/pageRect.size.height;
pageRect.size = CGSizeMake(pageRect.size.width*pdfScale, pageRect.size.height*pdfScale);
UIGraphicsBeginImageContext(pageRect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(context, 1.0,1.0,1.0,1.0);
CGContextFillRect(context,pageRect);
CGContextSaveGState(context);
CGContextTranslateCTM(context, 0.0, pageRect.size.height);
CGContextScaleCTM(context, pdfScale,-pdfScale);
CGContextDrawPDFPage(context, pdfPage);
CGContextRestoreGState(context);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The problem occurs on iPhone 3G/3GS and iPod Touch.
How can I limit the memory consumption while still having a zoom scale of 2.5?
Thanks!
You could consider using a web view to display the PDF; it takes care of the zooming and memory issues for you. Rendering a vector-format PDF to a bitmap PNG at a large size is possibly not the most appropriate design decision, but without knowing what your requirements for the PNG are, it's hard to comment.
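For what it's worth, a minimal sketch of the web view approach (the file name here is just an example; UIWebView was the relevant class at the time, WKWebView being the modern replacement):
NSURL *pdfURL = [[NSBundle mainBundle] URLForResource:@"document" withExtension:@"pdf"];
UIWebView *webView = [[UIWebView alloc] initWithFrame:self.view.bounds];
webView.scalesPageToFit = YES;   // enables pinch-to-zoom of the PDF
[webView loadRequest:[NSURLRequest requestWithURL:pdfURL]];
[self.view addSubview:webView];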
I've noticed examples of two ways to put images into core graphics. One is much simpler than the other. So what's the advantage of the second, more sophisticated, approach? Is it faster?
example 1
UIImage *myImage = [UIImage imageNamed:@"picture.png"];
CGRect imageRect = CGRectMake(70, 330, 40, 40);
[myImage drawInRect:imageRect];
// (no release needed; imageNamed: returns an image you do not own)
example 2
// Load image from application bundle
NSString* imageFileName = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"picture.png"];
CGDataProviderRef provider = CGDataProviderCreateWithFilename([imageFileName UTF8String]);
CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
// Draw image ('context' is assumed to be the current graphics context, e.g. from UIGraphicsGetCurrentContext() in drawRect:)
CGContextTranslateCTM(context, 70, 370);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, CGRectMake(0, 0, 40, 40), image);
CGImageRelease(image);
Directly comparing the two, the advantage of the second method is that it doesn't put the image into the +[UIImage imageNamed:] cache. It's not possible for you to remove images from that cache, so you may be better off not using imageNamed:.
However, you can get the best of both worlds.
First off, use -[NSBundle pathForResource:ofType:] to get the path to the image. This simplifies that step over getting the resource directory path and appending the filename to it yourself.
Second, use +[UIImage imageWithContentsOfFile:] to create the image, passing the path you obtained from the NSBundle.
From there, continue on as in your first example. The whole thing is still only a handful of lines (two of them replacing the first line of your first example).
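Put together, a sketch of what that looks like (using the file name and rect from your first example):
NSString *imagePath = [[NSBundle mainBundle] pathForResource:@"picture" ofType:@"png"];
UIImage *myImage = [UIImage imageWithContentsOfFile:imagePath]; // bypasses the imageNamed: cache
CGRect imageRect = CGRectMake(70, 330, 40, 40);
[myImage drawInRect:imageRect];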
In example 1 you are just going to show the image in the view as it is, which is fine for static images.
In example 2 you are using Core Graphics to display the image, so you can do more with it than just show a static picture: you can rotate it, scale it, translate it, reduce its alpha value, and more.
So it depends on how you want to use it: whether you just want to show the image or do something more with it. If you are a beginner writing utility apps, I don't think you will ever need option 2; but if you want to develop games or animate the image, you should use option 2.
Cheers
I don't know of any advantages to the second way. It has the disadvantage of having more steps. If you use the first way and have a UIImage but need a CGImageRef, you can access it through the CGImage property. I find myself using the first most often. When you have a UIImage you can put it in your own UIImageView to add it to a view hierarchy, or assign it directly to some UIKit objects such as a UITableViewCell's imageView. If you start from a CGImageRef (the second way) you would need to call imageWithCGImage: first...
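For completeness, converting between the two is a one-liner in each direction:
CGImageRef cgImage = myImage.CGImage;                   // UIImage -> CGImageRef (no copy)
UIImage *wrapped = [UIImage imageWithCGImage:cgImage];  // CGImageRef -> UIImage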
My app stores images as NSData objects. However, when these are loaded on an iPhone 4, they are displayed at double the size because the default scale factor is 1. I have two questions I would appreciate help with:
1. Is there any way to set the scale of the UIImage without using initWithCGImage:scale:orientation:?
2. If the answer to 1 is no, what is the most efficient way to load the NSData into a UIImage using the method above? At present it seems I will have to create a UIImage from the NSData and then create another UIImage using the method noted in 1 above.
Thank you.
UIImage is immutable, so I guess there is no way to do so without hacking.
UIImage is just a wrapper around CGImage, so I think using initWithCGImage:scale:orientation: as you describe won't have any noticeable performance impact. If you really worry about that, you can load the data into a CGImageRef first.
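For reference, a sketch of the two-step route from the question (imageData stands for your stored NSData; the scale here is assumed to be the main screen's scale):
UIImage *raw = [UIImage imageWithData:imageData];
UIImage *scaled = [[UIImage alloc] initWithCGImage:raw.CGImage
                                             scale:[UIScreen mainScreen].scale
                                       orientation:raw.imageOrientation];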
I have a method that needs to parse through a bunch of large PNG images pixel by pixel (the PNGs are 600x600 pixels each). It seems to work great on the Simulator, but on the device (iPad), I get an EXC_BAD_ACCESS in some internal memory-copying function. It seems the size is the culprit, because if I try it on smaller images everything works. Here's the memory-related meat of the method below.
+ (CGRect) getAlphaBoundsForUImage: (UIImage*) image
{
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
memset(rawData,0,height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
/* non-memory related stuff */
free(rawData);
When I run this on a bunch of images, it runs 12 times and then craps out, while on the simulator it runs no problem. Do you guys have any ideas?
Running 12 times and then crashing sounds like an out-of-memory problem. It might be that internally the CGContext is creating some large autoreleased structures. Since you're doing this in a loop, they are not getting freed, so you run out of memory and die.
I'm not sure how Core Foundation deals with temporary objects though. I don't think CF objects have the equivalent of autorelease, and a Core Graphics context is almost certainly dealing with CF objects rather than NSObjects.
To reduce the memory churn in your code, I would suggest refactoring it to create an offscreen CGContext once before you start processing, and use it repeatedly to process each image. Then release it when you are done. That is going to be faster in any case (since you aren't allocating huge data structures on each pass through the loop.)
I'll wager that will eliminate your crash problem, and I bet it also makes your code much, much faster. Memory allocation is very slow compared to other operations, and you're slinging around some pretty big data structures to handle 600x600 pixel RGBA images.
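As a rough sketch of that refactor (assuming, as in your question, that every image is 600x600; 'images' here stands for whatever collection you are looping over):
size_t width = 600, height = 600;
unsigned char *rawData = calloc(height * width * 4, 1);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8, width * 4, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
for (UIImage *image in images) {
    memset(rawData, 0, height * width * 4);   // clear the buffer between images
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
    // ... scan rawData for the alpha bounds ...
}
CGContextRelease(context);
free(rawData);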
Go to Product -> Edit Scheme and put a tick mark next to Enable Zombie Objects. Now build and run. It can give you a better, more to-the-point description of the EXC_BAD_ACCESS error.
I was getting a similar crash on my iPad (iPhoneOS 3.2 of course) using CGImageCreate(). Seeing your difficulty gave me a hint. I solved the problem by aligning my bytesPerRow to the next largest power of 2.
size_t bytesPerRowPower2 = (size_t) round( pow( 2.0, trunc( log((double) bytesPerRow) / log(2.0) ) + 1.0 ) );
Let us know if providing power of 2 row alignment also solves your problem. You would need to allocate *rawData with the adjusted size and pass bytesPerRowPower2 to CGBitmapContextCreate()... The height does not seem to need alignment.
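Concretely, the allocation and context creation in the original snippet would then become (all other variables as in the question's code):
unsigned char *rawData = malloc(height * bytesPerRowPower2);
memset(rawData, 0, height * bytesPerRowPower2);
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent,
                                             bytesPerRowPower2, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
// remember to use bytesPerRowPower2 (not width * 4) as the row stride when scanning pixels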
Perhaps CGImageGetBytesPerRow (power of two sounds excessive).
I used a power-of-two-aligned row size in an app that was defining the row size directly, as I was creating images programmatically. But I would also recommend that toastie try CGImageGetBytesPerRow before trying my suggestion.
Try running the Instruments Allocations tool while the application is running on the device; it could be a memory-related issue. If you are in a loop, create an autorelease pool inside that loop, as sketched below.
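Something along these lines, in pre-ARC style to match the era (with ARC you would use an @autoreleasepool block instead; 'images' and 'YourClass' are placeholders for your own collection and the class that declares getAlphaBoundsForUImage:):
for (UIImage *image in images) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    CGRect bounds = [YourClass getAlphaBoundsForUImage:image];
    // ... do something with bounds ...
    [pool drain];   // releases any objects autoreleased during this iteration
}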
Maybe replace
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
with
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGColorSpaceRelease(colorSpace);
as the color space ref might still be needed somehow? Just a random guess...
And maybe even put it behind the release of the context?
// cleanup
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
I solved the problem by making a square context (width = height). I had a 512x256 texture and it crashed every time I sent the data to OpenGL. Now I allocate a 512x512 buffer BUT STILL render 512x256. Hope this helps.
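A rough sketch of that workaround, using the sizes mentioned above (colorSpace and imageRef as in the question's code):
size_t side = 512;                                  // max(width, height), padded up to a square
unsigned char *buffer = calloc(side * side * 4, 1);
CGContextRef squareCtx = CGBitmapContextCreate(buffer, side, side, 8, side * 4, colorSpace,
                                               kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(squareCtx, CGRectMake(0, 0, 512, 256), imageRef);  // still render only 512x256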