I have a UITableView, and every cell displays a UIImage created from a PDF. But the performance is very bad. Here's the code I use to generate the UIImage from the PDF.
Creating the CGPDFDocumentRef and UIImageView (in the cellForRowAtIndexPath: method):
...
CFURLRef pdfURL = CFBundleCopyResourceURL(CFBundleGetMainBundle(), (CFStringRef)formula.icon, NULL, NULL);
CGPDFDocumentRef documentRef = CGPDFDocumentCreateWithURL(pdfURL);
CFRelease(pdfURL);
UIImageView *imageView = [[UIImageView alloc] initWithImage:[self imageFromPDFWithDocumentRef:documentRef]];
CGPDFDocumentRelease(documentRef); // release the document once the image has been rendered; the original leaked it
...
Generate UIImage:
- (UIImage *)imageFromPDFWithDocumentRef:(CGPDFDocumentRef)documentRef {
    CGPDFPageRef pageRef = CGPDFDocumentGetPage(documentRef, 1);
    CGRect pageRect = CGPDFPageGetBoxRect(pageRef, kCGPDFCropBox);

    UIGraphicsBeginImageContext(pageRect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Flip the context: PDF pages have a bottom-left origin.
    CGContextTranslateCTM(context, CGRectGetMinX(pageRect), CGRectGetMaxY(pageRect));
    CGContextScaleCTM(context, 1, -1);
    CGContextTranslateCTM(context, -pageRect.origin.x, -pageRect.origin.y);

    CGContextDrawPDFPage(context, pageRef);

    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return finalImage;
}
What can I do to increase the speed and keep memory usage low?
Generating the image from a PDF for each cell at runtime is a huge performance hit.
Consider performing the actual rendering on a background thread, caching the rendered UIImages, and populating the UITableView with the cached results. This will at least free up the main thread and keep the application responsive.
For cells that have not been rendered yet, you can display a 'loading' placeholder or an activity-indicator spinner.
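Here is a minimal sketch of that approach, assuming iOS 4 or later (where drawing into an image context off the main thread is supported); the method name, the NSCache, and the dispatch queue are my assumptions, not part of the original code:

- (void)loadPDFImageForCell:(UITableViewCell *)cell iconName:(NSString *)iconName
{
    static NSCache *imageCache = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{ imageCache = [[NSCache alloc] init]; });

    // Serve previously rendered images straight from the cache.
    UIImage *cached = [imageCache objectForKey:iconName];
    if (cached) {
        cell.imageView.image = cached;
        return;
    }

    // Render on a background queue; the cell shows its placeholder meanwhile.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        CFURLRef pdfURL = CFBundleCopyResourceURL(CFBundleGetMainBundle(), (CFStringRef)iconName, NULL, NULL);
        CGPDFDocumentRef documentRef = CGPDFDocumentCreateWithURL(pdfURL);
        CFRelease(pdfURL);
        UIImage *image = [self imageFromPDFWithDocumentRef:documentRef];
        CGPDFDocumentRelease(documentRef);

        dispatch_async(dispatch_get_main_queue(), ^{
            [imageCache setObject:image forKey:iconName];
            // A real app should check that the (reused) cell still belongs to this row.
            cell.imageView.image = image;
            [cell setNeedsLayout];
        });
    });
}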
Related
I am trying to make a UITableView with different cells containing a UITextField, a UITextView, and a UIImage. I want to generate a report-like PDF containing all the cells, including images and text.
I have the code for taking a screenshot of one UITableViewCell, but I don't know how to take multiple screenshots and merge them together into one PDF file.
Below is the code I have for taking the UITableViewCell screenshot. Thanks.
NSUInteger index[] = {0, 0}; // section, row
NSIndexPath *indexPath = [[NSIndexPath alloc] initWithIndexes:index length:2];
// Get the cell
UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:indexPath];
// Render a graphics context to turn your cell into a UIImage
CGSize imageSize = cell.bounds.size;
UIGraphicsBeginImageContext(imageSize);
CGContextRef imageContext = UIGraphicsGetCurrentContext();
[cell.layer renderInContext:imageContext];
// Retrieve the screenshot image
UIImage *imagefinal = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(imagefinal, nil, nil, nil);
I don't know how to take multiple screenshots, but I have written some code for merging two images; you will need to adapt it to merge more than two (and see the PDF sketch after it for combining everything into one file). Here is the code:
CGSize endImageSize = CGSizeMake(CELL_IMAGE_WIDTH, DesiredHeight);
UIGraphicsBeginImageContext(endImageSize);
// draw the images into this context, stacked vertically
[anOriginalImage1 drawInRect:CGRectMake(0, 0, endImageSize.width, IMAGE1_HEIGHT)];
[anOriginalImage2 drawInRect:CGRectMake(0, IMAGE1_HEIGHT, endImageSize.width, IMAGE2_HEIGHT)];
// convert the context into a UIImage
UIImage *endImage = UIGraphicsGetImageFromCurrentImageContext();
// cleanup
UIGraphicsEndImageContext();
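For the original goal, one PDF containing every cell, a sketch along these lines might work. It uses the UIKit PDF functions (iPhone OS 3.2 and later); the page size and the single-section assumption are mine, and note that cellForRowAtIndexPath: only returns cells that are currently on screen, so off-screen rows would need to be built via the data source instead:

- (void)writeTableToPDFAtPath:(NSString *)path
{
    CGRect pageRect = CGRectMake(0, 0, 612, 792); // US Letter, in points (an assumption)
    UIGraphicsBeginPDFContextToFile(path, pageRect, nil);

    CGFloat y = 0;
    NSInteger rows = [self.tableView numberOfRowsInSection:0];
    for (NSInteger row = 0; row < rows; row++) {
        NSIndexPath *indexPath = [NSIndexPath indexPathForRow:row inSection:0];
        UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:indexPath];
        if (cell == nil) continue; // off-screen cells return nil

        // Start a new page whenever the next cell would not fit on the current one.
        CGFloat cellHeight = cell.bounds.size.height;
        if (y == 0 || y + cellHeight > pageRect.size.height) {
            UIGraphicsBeginPDFPage();
            y = 0;
        }

        // Render the cell's layer directly into the PDF context at the current offset.
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSaveGState(ctx);
        CGContextTranslateCTM(ctx, 0, y);
        [cell.layer renderInContext:ctx];
        CGContextRestoreGState(ctx);
        y += cellHeight;
    }

    UIGraphicsEndPDFContext();
}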
I was wondering if anyone could provide an example of how to take a screenshot that mixes OpenGL and UIKit elements. Ever since Apple made UIGetScreenImage() private, this has become a pretty difficult task, because the two common replacement methods capture either only UIKit or only OpenGL content.
This similar question references Apple's Technical Q&A QA1714, but that QA only describes how to handle elements from the camera and UIKit. How do you go about rendering the UIKit view hierarchy into an image context and then drawing the image of your OpenGL ES view on top of it, as the answer to the similar question suggests?
This should do the trick. Basically, you render everything into Core Graphics and create a single image you can then do whatever you like with.
// In your UIViewController
- (UIImage *)createSavableImage:(UIImage *)plainGLImage
{
    // Use the GL image passed in (the original ignored its parameter and re-rendered).
    UIImageView *glImage = [[UIImageView alloc] initWithImage:plainGLImage];
    glImage.transform = CGAffineTransformMakeScale(1, -1); // glReadPixels delivers the image upside down

    UIGraphicsBeginImageContext(self.view.bounds.size);

    // The order of rendering determines the stacking;
    // this draws the UIKit view on top of the GL image.
    [glImage.layer renderInContext:UIGraphicsGetCurrentContext()];
    [someUIView.layer renderInContext:UIGraphicsGetCurrentContext()];

    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Do something with the resulting image
    return finalImage;
}
// In your GL view
- (UIImage *)drawGlToImage
{
    // Read the OpenGL color buffer into a pixel buffer (assumes a 320x480 view).
    unsigned char buffer[320 * 480 * 4];
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // Copy the pixels so the CGImage does not point at the stack buffer after we return.
    NSData *pixelData = [NSData dataWithBytes:buffer length:320 * 480 * 4];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)pixelData);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef imageRef = CGImageCreate(320, 480, 8, 32, 320 * 4, colorSpace,
                                        kCGImageAlphaLast, provider, NULL, true,
                                        kCGRenderingIntentDefault);

    UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];

    // Release what we created; the original version leaked the provider, image, and colorspace.
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);

    // The image comes out upside down; the caller compensates with a flip transform.
    return image;
}
Then, to create a screenshot:
UIImage *glImage = [myGlView drawGlToImage];
UIImage *screenshot = [self createSavableImage:glImage];
I created a masked image using a function from an iPhone blog:
UIImage *imgToSave = [self maskImage:[UIImage imageNamed:@"pic.jpg"] withMask:[UIImage imageNamed:@"sd-face-mask.png"]];
It looks good in a UIImageView:
UIImageView *imgView = [[UIImageView alloc] initWithImage:imgToSave];
imgView.center = CGPointMake(160.0f, 140.0f);
[self.view addSubview:imgView];
Then I use UIImagePNGRepresentation to save it to disk:
[UIImagePNGRepresentation(imgToSave) writeToFile:[self findUniqueSavePath] atomically:YES];
UIImagePNGRepresentation returns NSData for an image that looks different: the output is the inverse of the image mask.
The area that was cut out in the app is visible in the file, and the area that was visible in the app is removed; the visibility is exactly opposite.
My mask is designed to remove everything but the face area of the picture. The UIImage looks right in the app, but after I save it to disk the file looks like the opposite: the face is removed, but everything else is there.
Please let me know if you can help!
In Quartz you can mask either with an image mask (black lets pixels through, white blocks them) or with a normal image (white lets pixels through, black blocks them), which is the opposite. It seems that, for some reason, saving treats the image mask as a normal image to mask with. One workaround is to render the masked result into a bitmap context and then create the image to be saved from that.
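A minimal sketch of that workaround, assuming imgToSave is the masked image from above (the helper name flattenedImage: is mine, not from the original):

// Flatten the masked image into a plain bitmap so the mask is baked in before saving.
- (UIImage *)flattenedImage:(UIImage *)maskedImage
{
    UIGraphicsBeginImageContext(maskedImage.size);
    [maskedImage drawInRect:CGRectMake(0, 0, maskedImage.size.width, maskedImage.size.height)];
    UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return flattened;
}

You would then save the flattened result instead:
[UIImagePNGRepresentation([self flattenedImage:imgToSave]) writeToFile:[self findUniqueSavePath] atomically:YES];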
I had the exact same issue: the saved file was one way, but the image returned in memory was the exact opposite.
The culprit, and the solution, was UIImagePNGRepresentation(). It fixes the in-app image before it is saved to disk, so I simply inserted that function as the last step in creating the masked image and returned the result.
This may not be the most elegant solution, but it works. I copied some code from my app and condensed it; I'm not sure if the code below works exactly as-is, but if not, it's close... maybe just some typos.
Enjoy. :)
// MyImageHelperObj.h
@interface MyImageHelperObj : NSObject
+ (UIImage *)createGrayScaleImage:(UIImage *)originalImage;
+ (UIImage *)createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage;
@end
// MyImageHelperObj.m
#import <QuartzCore/QuartzCore.h>
#import "MyImageHelperObj.h"

@implementation MyImageHelperObj

+ (UIImage *)createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage
{
    // create image size rect
    CGRect newRect = CGRectZero;
    newRect.size = newSize;

    // draw source image
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0f);
    [sourceImage drawInRect:newRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();

    // draw mask image into the same context
    [maskImage drawInRect:newRect blendMode:kCGBlendModeNormal alpha:1.0f];
    maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // create a grayscale version of the mask image to make the "image mask"
    UIImage *grayScaleMaskImage = [MyImageHelperObj createGrayScaleImage:maskImage];
    size_t width = CGImageGetWidth(grayScaleMaskImage.CGImage);
    size_t height = CGImageGetHeight(grayScaleMaskImage.CGImage);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(grayScaleMaskImage.CGImage);
    size_t bytesPerRow = CGImageGetBytesPerRow(grayScaleMaskImage.CGImage);
    CGDataProviderRef providerRef = CGImageGetDataProvider(grayScaleMaskImage.CGImage);
    CGImageRef imageMask = CGImageMaskCreate(width, height, 8, bitsPerPixel, bytesPerRow, providerRef, NULL, false);
    CGImageRef maskedImage = CGImageCreateWithMask(newImage.CGImage, imageMask);
    CGImageRelease(imageMask);
    newImage = [UIImage imageWithCGImage:maskedImage];
    CGImageRelease(maskedImage);

    // run the result through UIImagePNGRepresentation so the saved file matches the in-app image
    return [UIImage imageWithData:UIImagePNGRepresentation(newImage)];
}
+ (UIImage *)createGrayScaleImage:(UIImage *)originalImage
{
    // create a gray device colorspace
    CGColorSpaceRef space = CGColorSpaceCreateDeviceGray();
    // create an 8-bit bitmap context without an alpha channel
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height, 8, 0, space, kCGImageAlphaNone);
    CGColorSpaceRelease(space);
    // draw the image
    CGRect bounds = CGRectMake(0.0, 0.0, originalImage.size.width, originalImage.size.height);
    CGContextDrawImage(bitmapContext, bounds, originalImage.CGImage);
    // get an image from the bitmap context
    CGImageRef grayScaleImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);
    // the image is inverted; UIImage inverts orientation while converting a CGImage to a UIImage
    UIImage *image = [UIImage imageWithCGImage:grayScaleImage];
    CGImageRelease(grayScaleImage);
    return image;
}

@end
Is there any way to get the content of a UIWebView and convert it to a PDF or PNG file? I'd like output similar to what you get on the Mac by choosing the PDF button when printing from Safari, for example. I'm assuming this isn't possible/built in yet, but hopefully I'll be surprised and find a way to get the content of a web view into a file.
Thanks!
You can use the following category on UIView to create a PDF file:
#import <QuartzCore/QuartzCore.h>

@implementation UIView (PDFWritingAdditions)

- (void)renderInPDFFile:(NSString *)path
{
    CGRect mediaBox = self.bounds;
    CGContextRef ctx = CGPDFContextCreateWithURL((CFURLRef)[NSURL fileURLWithPath:path], &mediaBox, NULL);

    CGPDFContextBeginPage(ctx, NULL);
    // Flip the coordinate system: PDF contexts have a bottom-left origin.
    CGContextScaleCTM(ctx, 1, -1);
    CGContextTranslateCTM(ctx, 0, -mediaBox.size.height);
    [self.layer renderInContext:ctx];
    CGPDFContextEndPage(ctx);

    CGPDFContextClose(ctx); // finalize the file before releasing the context
    CFRelease(ctx);
}

@end
Bad news: UIWebView does not emit nice vector shapes and text into the PDF; it renders itself as an image into the PDF.
Creating an image from a web view is simple:
UIImage *image = nil;
UIGraphicsBeginImageContext(offscreenWebView_.frame.size);
{
    [offscreenWebView_.layer renderInContext:UIGraphicsGetCurrentContext()];
    image = UIGraphicsGetImageFromCurrentImageContext();
}
UIGraphicsEndImageContext();
Once you have the image you can save it as a PNG.
Creating PDFs is also possible in a very similar way, but only on a yet unreleased iPhone OS version.
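For reference, here is a minimal sketch of that route using the UIKit PDF functions that shipped in iPhone OS 3.2 (offscreenWebView_ is the web view from the snippet above):

NSString *pdfPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"WebView.pdf"];
UIGraphicsBeginPDFContextToFile(pdfPath, offscreenWebView_.bounds, nil);
UIGraphicsBeginPDFPage();
// UIKit's PDF context is already flipped, so the layer can be rendered directly.
[offscreenWebView_.layer renderInContext:UIGraphicsGetCurrentContext()];
UIGraphicsEndPDFContext();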
@mjdth, try fileURLWithPath:isDirectory: instead. URLWithString wasn't working for me either.
@implementation UIView (PDFWritingAdditions)

- (void)renderInPDFFile:(NSString *)path
{
    CGRect mediaBox = self.bounds;
    CGContextRef ctx = CGPDFContextCreateWithURL((CFURLRef)[NSURL fileURLWithPath:path isDirectory:NO], &mediaBox, NULL);

    CGPDFContextBeginPage(ctx, NULL);
    CGContextScaleCTM(ctx, 1, -1);
    CGContextTranslateCTM(ctx, 0, -mediaBox.size.height);
    [self.layer renderInContext:ctx];
    CGPDFContextEndPage(ctx);

    CGPDFContextClose(ctx); // finalize the file before releasing the context
    CFRelease(ctx);
}

@end
The code below converts the (full) content of a UIWebView to a UIImage.
After rendering the UIImage, I write it to disk as a PNG to see the result.
Of course, you can do whatever you like with the UIImage.
UIImage *image = nil;
CGRect oldFrame = webView.frame;

// Resize the UIWebView: the contentSize can be larger than the visible size
[webView sizeToFit];
CGSize fullSize = webView.scrollView.contentSize;

// Render the layer content into the image
UIGraphicsBeginImageContext(fullSize);
CGContextRef resizedContext = UIGraphicsGetCurrentContext();
[webView.layer renderInContext:resizedContext];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Revert the UIWebView to its old size
webView.frame = oldFrame;

// Write the UIImage to disk as a PNG so we can see the result
NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Test.png"];
[UIImagePNGRepresentation(image) writeToFile:path atomically:YES];
Note: make sure the UIWebView has fully loaded (use UIWebViewDelegate or the loading property).
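A quick sketch of waiting for that via the delegate (captureWebView: is a hypothetical method wrapping the snippet above):

- (void)webViewDidFinishLoad:(UIWebView *)webView
{
    // webViewDidFinishLoad: can fire once per frame, so also check the loading property.
    if (!webView.loading) {
        [self captureWebView:webView]; // hypothetical wrapper around the capture code
    }
}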
I have an image in a UIScrollView that can be scrolled and zoomed.
When the user presses a button, I want the code to create an image from whatever part of the UIScrollView is inside an area I specify with a CGRect.
I've seen code to crop a UIImage, but I can't adapt it to do the same for a view, because that code uses CGContextDrawImage.
Any thoughts?
Cheers,
Andre
I've managed to get it working.
Here's my solution, based on a few different ones from the web:
- (UIImage *)imageByCropping:(UIScrollView *)imageToCrop toRect:(CGRect)rect
{
    CGSize pageSize = rect.size;
    UIGraphicsBeginImageContext(pageSize);
    CGContextRef resizedContext = UIGraphicsGetCurrentContext();
    // Shift the context by the current scroll offset so the visible region lands in the image.
    CGContextTranslateCTM(resizedContext, -imageToCrop.contentOffset.x, -imageToCrop.contentOffset.y);
    [imageToCrop.layer renderInContext:resizedContext];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
which you call by using:
CGRect clippedRect = CGRectMake(0, 0, 320, 300);
picture.image = [self imageByCropping:myScrollView toRect:clippedRect];
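One small refinement: UIGraphicsBeginImageContext always creates a context at a scale of 1.0, so the crop comes out soft on Retina displays. If that matters, you could open the context with UIGraphicsBeginImageContextWithOptions instead (available since iOS 4); the rest of the method stays the same:

// Pass 0.0 as the scale to use the device's screen scale.
UIGraphicsBeginImageContextWithOptions(pageSize, NO, 0.0f);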