Is it possible to generate a PDF file from a UITableView? - iPhone

My application has a table view with custom cells (each containing a text field, a label, and a button), and I want to generate a PDF file from the table view.
Below is the code (for now, assume the table view's content is fully visible):
UIGraphicsBeginImageContext(self._tableView.bounds.size);
UIGraphicsBeginPDFPageWithInfo(CGRectMake(0, 0, self._tableView.bounds.size.width, self._tableView.bounds.size.height), nil);
// what's the code here?
// [self._tableView drawInRect:];
UIGraphicsEndPDFContext();
I always end up with a blank PDF file.
Currently what I do is render the table view into an image, draw that image into the current PDF context, and get the PDF from that. Is there a nicer approach?
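For reference, a minimal sketch of that image-based workaround, assuming the same self._tableView as above and that its visible content is all you need; the output path is just a hypothetical example:
// Render the visible table view into an image.
UIGraphicsBeginImageContext(self._tableView.bounds.size);
[self._tableView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *tableImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Draw that image into a one-page PDF (hypothetical output path).
NSString *pdfPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"table.pdf"];
CGRect pageRect = self._tableView.bounds;
UIGraphicsBeginPDFContextToFile(pdfPath, pageRect, nil);
UIGraphicsBeginPDFPageWithInfo(pageRect, nil);
[tableImage drawInRect:pageRect];
UIGraphicsEndPDFContext();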

Hi, you can convert the current view to an image with the following code, and then use that image to create the PDF file:
- (UIImage *)captureView:(UIView *)view {
    CGRect screenRect = [[UIScreen mainScreen] bounds];
    UIGraphicsBeginImageContext(screenRect.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    [[UIColor blackColor] set];
    CGContextFillRect(ctx, screenRect);
    [view.layer renderInContext:ctx];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

To get a better-quality image (rendered at the device's screen scale), use
UIGraphicsBeginImageContextWithOptions(screenRect.size, NO, 0.0)
instead of
UIGraphicsBeginImageContext(screenRect.size)
@JeffWood: thanks a million for your answer :)


How to take a screenshot programmatically in iOS?

I have a UIView that uses both UIKit controls and OpenGL. I'd like to get a screenshot of that view programmatically.
If I use UIGraphicsGetImageFromCurrentImageContext(), the OpenGL content is blank;
if I use glReadPixels(...), the UIKit content is blank.
I'm confused about how to take a complete screenshot.
Thank you!
Use the following code to take a screenshot:
- (UIImage *)screenshot
{
    // Hard-coded 320x480 (3.5-inch screen); adjust for other screen sizes.
    CGRect rect = CGRectMake(0, 0, 320, 480);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.view.layer renderInContext:context];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
- (UIImage *)screenshot
{
    // UIGetScreenImage is a private/undocumented API; declare it before use.
    CGImageRef UIGetScreenImage(void);

    CGImageRef screen = UIGetScreenImage();
    UIImage *screenImage = [UIImage imageWithCGImage:screen];
    CGImageRelease(screen); // you need to call this.
    return screenImage;
}
Well, there are a few ways of capturing the iPhone screen programmatically:
Using UIKit: http://developer.apple.com/library/ios/#qa/qa1703/_index.html
Using the AVFoundation framework: http://developer.apple.com/library/ios/#qa/qa1702/_index.html
Using OpenGL ES: http://developer.apple.com/library/ios/#qa/qa1704/_index.html
Starting from iOS 7, you can also use the snapshot API discussed in "Why does my programmatically created screenshot look so bad on iOS 7?" (a sketch follows at the end of this answer).
Using UIWindow
CGRect screenRect = [[UIScreen mainScreen] bounds];
UIGraphicsBeginImageContext(screenRect.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[[UIColor blackColor] set];
CGContextFillRect(ctx, screenRect);
[self.window.layer renderInContext:ctx];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Using UIView
UIGraphicsBeginImageContext(self.view.frame.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viImage=UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Out of these six options, I found the first one the most convenient for copy-pasting and for applying a compression level, since it gives an image with true pixel data. I also like option 4, as the API comes with the SDK.
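For completeness, a minimal sketch of the iOS 7 snapshot API mentioned as option 4 above, assuming you want an image of self.view:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
// Draws the view hierarchy as it currently appears on screen.
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();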
This gives you the resulting UIImage without losing image quality:
- (UIImage *)captureView:(UIView *)view {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *data = UIImagePNGRepresentation(image);
UIImage *screenShot = [UIImage imageWithData:data];
You know where the OpenGL content is located in your view, so it should be easy to draw the result of glReadPixels on top of the UIKit snapshot.
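A rough sketch of that compositing idea, assuming you have already turned the glReadPixels buffer into a UIImage (for example as in Apple's QA1704) and know the GL view's frame; the method name is just an illustration:
- (UIImage *)compositeScreenshotWithGLImage:(UIImage *)glImage inRect:(CGRect)glRect
{
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
    // The UIKit hierarchy renders here; the OpenGL view comes out blank.
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    // Draw the OpenGL snapshot on top, at the GL view's position.
    [glImage drawInRect:glRect];
    UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return combined;
}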
Maybe you can use something from here:
https://stackoverflow.com/questions/12413460/cocos2d-2-0-screenshots-on-ios-6
and here:
Why is glReadPixels() failing in this code in iOS 6.0?
Here is how you can take a simple screenshot programmatically and save it to the photo album. The example captures self.view; substitute whatever view you want to capture.
UIGraphicsBeginImageContext(self.view.frame.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Save the captured image to the photo album.
UIImageWriteToSavedPhotosAlbum(viImage, nil, nil, nil);

How to get a snapshot of the touched part of the screen programmatically?

I am implementing an iPhone application in which I want to implement the following feature:
When the user touches the iPhone screen, a snapshot of the touched area of the screen should be generated and saved to the photo library.
I have done some googling but haven't had any success.
Please help me with this.
Thanks in advance.
If you are looking for sample code to control the camera, here is a bare-bones camera application that takes a picture and saves it to the library.
Check out this method: pass the touched view to it, and it will give you the image and save it to the library.
- (UIImage *)giveScreentshotsOfView:(UIView *)view
{
    if (view == nil)
        return nil;

    UIGraphicsBeginImageContext(view.bounds.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
+ (UIImage *)captureView:(UIView *)view
{
    CGRect screenRect = [[UIScreen mainScreen] bounds];
    UIGraphicsBeginImageContext(screenRect.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    [[UIColor blackColor] set];
    CGContextFillRect(ctx, screenRect);
    [view.layer renderInContext:ctx];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Save the image to the photo album.
    UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil);
    return newImage; // the method is declared to return the captured image
}
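If you only need the area around the touch point, a hedged sketch might crop the captured image like this (assuming the captureView: method above lives on a hypothetical ScreenshotHelper class and a 100x100-point box around the touch is enough):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint p = [[touches anyObject] locationInView:self.view];

    // Capture the whole view first.
    UIImage *full = [ScreenshotHelper captureView:self.view]; // hypothetical class holding captureView: above

    // Crop a 100x100-point box centred on the touch.
    CGFloat scale = full.scale;
    CGRect box = CGRectMake((p.x - 50.0) * scale, (p.y - 50.0) * scale,
                            100.0 * scale, 100.0 * scale);
    CGImageRef croppedRef = CGImageCreateWithImageInRect(full.CGImage, box);
    UIImage *touchArea = [UIImage imageWithCGImage:croppedRef
                                             scale:scale
                                       orientation:full.imageOrientation];
    CGImageRelease(croppedRef);

    UIImageWriteToSavedPhotosAlbum(touchArea, nil, nil, nil);
}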

iPhone: UIImage obtained after rendering a view is solid black

I'm using this method to render a UIView into a UIImage:
+ (UIImage *)imageWithView:(UIView *)view {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
The resulting image is the right size (as shown by the UIImage's size property), but its contents are solid black. The view passed in definitely contains some graphics, but they are not rendered. Any idea why?
This is the code I use to capture an image of a UIView:
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);
else
    UIGraphicsBeginImageContext(self.view.bounds.size);

CGContextRef context = UIGraphicsGetCurrentContext();
[self.view.layer renderInContext:context];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
Your code looks fine to me. You can try swapping mine in to see if it helps, but I suspect it's a separate issue from the code you posted.

Get a PDF/PNG as output from a UIWebView or UIView

Is there any way to get the content of a UIWebView and convert it to a PDF or PNG file? I'd like to get similar output to that available on the Mac by selecting the PDF button when printing from Safari, for example. I'm assuming this isn't possible/built in yet, but hopefully I'll be surprised and find a way to get the content from a webview to a file.
Thanks!
You can use the following category on UIView to create a PDF file:
#import <QuartzCore/QuartzCore.h>

@implementation UIView (PDFWritingAdditions)

- (void)renderInPDFFile:(NSString *)path
{
    CGRect mediaBox = self.bounds;
    CGContextRef ctx = CGPDFContextCreateWithURL((CFURLRef)[NSURL fileURLWithPath:path], &mediaBox, NULL);

    CGPDFContextBeginPage(ctx, NULL);
    // Flip the coordinate system: PDF contexts have their origin at the bottom left.
    CGContextScaleCTM(ctx, 1, -1);
    CGContextTranslateCTM(ctx, 0, -mediaBox.size.height);
    [self.layer renderInContext:ctx];
    CGPDFContextEndPage(ctx);

    CFRelease(ctx);
}

@end
Bad news: UIWebView does not create nice shapes and text in the PDF, but renders itself as an image into the PDF.
Creating an image from a web view is simple:
UIImage *image = nil;
UIGraphicsBeginImageContext(offscreenWebView_.frame.size);
{
    [offscreenWebView_.layer renderInContext:UIGraphicsGetCurrentContext()];
    image = UIGraphicsGetImageFromCurrentImageContext();
}
UIGraphicsEndImageContext();
Once you have the image you can save it as a PNG.
Creating PDFs is also possible in a very similar way, but only on a yet unreleased iPhone OS version.
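For what it's worth, the author is presumably alluding to the UIKit PDF functions that shipped later (iOS 3.2). A minimal sketch under that assumption, reusing offscreenWebView_ from the snippet above and writing to a hypothetical temporary path:
NSString *pdfPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"page.pdf"]; // hypothetical path
UIGraphicsBeginPDFContextToFile(pdfPath, offscreenWebView_.bounds, nil);
UIGraphicsBeginPDFPageWithInfo(offscreenWebView_.bounds, nil);
// The web view still renders as an image here, as noted above.
[offscreenWebView_.layer renderInContext:UIGraphicsGetCurrentContext()];
UIGraphicsEndPDFContext();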
@mjdth, try fileURLWithPath:isDirectory: instead. URLWithString wasn't working for me either.
@implementation UIView (PDFWritingAdditions)

- (void)renderInPDFFile:(NSString *)path
{
    CGRect mediaBox = self.bounds;
    CGContextRef ctx = CGPDFContextCreateWithURL((CFURLRef)[NSURL fileURLWithPath:path isDirectory:NO], &mediaBox, NULL);

    CGPDFContextBeginPage(ctx, NULL);
    CGContextScaleCTM(ctx, 1, -1);
    CGContextTranslateCTM(ctx, 0, -mediaBox.size.height);
    [self.layer renderInContext:ctx];
    CGPDFContextEndPage(ctx);

    CFRelease(ctx);
}

@end
The code below will convert the (full) content of a UIWebView to a UIImage.
After rendering the UIImage, I write it to disk as a PNG to see the result.
Of course, you can do whatever you like with the UIImage.
UIImage *image = nil;
CGRect oldFrame = webView.frame;
// Resize the UIWebView, contentSize could be > visible size
[webView sizeToFit];
CGSize fullSize = webView.scrollView.contentSize;
// Render the layer content onto the image
UIGraphicsBeginImageContext(fullSize);
CGContextRef resizedContext = UIGraphicsGetCurrentContext();
[webView.layer renderInContext:resizedContext];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Revert the UIWebView back to its old size
webView.frame = oldFrame;
// Write the UIImage to disk as PNG so that we can see the result
NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Test.png"];
[UIImagePNGRepresentation(image) writeToFile:path atomically:YES];
Note: make sure the UIWebView is fully loaded (UIWebViewDelegate or loading property).
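A small sketch of that note, assuming your controller is the web view's delegate and captureFullWebView: is a hypothetical wrapper around the code above:
- (void)webViewDidFinishLoad:(UIWebView *)webView
{
    // Frames can still be loading when this fires, so check the loading flag.
    if (!webView.loading) {
        [self captureFullWebView:webView]; // hypothetical wrapper around the snippet above
    }
}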

How to create an image from a UIView / UIScrollView

I have an image in an UIScrollView, that can be scrolled and zoomed.
When the user presses a button, I want the code to create an image from whatever part of the UIScrollView is inside an area I specify with a CGRect.
I've seen code to crop UIImages, but I can't adapt it to do the same for a view, because it uses CGContextDrawImage.
Any thoughts?
Cheers,
Andre
I've managed to get it.
Here's my solution, based on a few different ones from the web:
- (UIImage *)imageByCropping:(UIScrollView *)imageToCrop toRect:(CGRect)rect
{
    CGSize pageSize = rect.size;
    UIGraphicsBeginImageContext(pageSize);

    CGContextRef resizedContext = UIGraphicsGetCurrentContext();
    // Shift the context by the scroll view's content offset so the currently
    // visible portion lands at the origin of the new image.
    CGContextTranslateCTM(resizedContext, -imageToCrop.contentOffset.x, -imageToCrop.contentOffset.y);
    [imageToCrop.layer renderInContext:resizedContext];

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
which you call by using:
CGRect clippedRect = CGRectMake(0, 0, 320, 300);
picture.image = [self imageByCropping:myScrollView toRect:clippedRect];