iOS - Generating PDF from UIWebView on iPhone

I am trying to generate a PDF report from a UIWebView. Using graphics contexts I can capture images of the screen frame and draw them into the PDF.
The problem with the current method is that the images are blurred (bad quality), so the report PDF is not readable.
How can I get high-quality images of a screen frame? Or is there a better solution to my problem?
P.S. The report includes multiple background colors and images, so I think it would be hard to draw them directly into the PDF.

+ (UIImage *)imageWithView:(UIView *)view
{
    // A scale of 0.0 means "use the device's main screen scale", so the
    // snapshot is rendered at full Retina resolution rather than 1x.
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}

Have you tried it like this? Rendering the web view's layer directly into the PDF context records text and vector content as PDF drawing commands instead of a bitmap, which generally avoids the blurriness:
NSMutableData *pdfData = [NSMutableData data];
UIGraphicsBeginPDFContextToData(pdfData, webview.bounds, nil);
UIGraphicsBeginPDFPage();
CGContextRef pdfContext = UIGraphicsGetCurrentContext();
[webview.layer renderInContext:pdfContext];
UIGraphicsEndPDFContext();
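To persist the result, the pdfData can then be written to disk; a minimal sketch (the file name is illustrative):
NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *pdfPath = [docsDir stringByAppendingPathComponent:@"report.pdf"]; // illustrative name
[pdfData writeToFile:pdfPath atomically:YES];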

Related

Is there a way to create a UIImage without orientation?

I am trying to crop images, but I am facing an orientation issue when creating the image using CGImageCreateWithImageInRect.
CGImageCreateWithImageInRect crops the image based on the UIImage orientation, so I cannot get the images I want.
I want the plain image from the UIImage (possibly a camera image) without the orientation metadata.
Is there a way to achieve this?
EDIT:
I have a selected rect of the UIImage. If I apply the crop to the UIImage, it gives a different output with some other orientation.
Try this:
CGImageRef imageRef = CGImageCreateWithImageInRect(src.CGImage, croppingRect);
UIImage *result = [UIImage imageWithCGImage:imageRef scale:1.0f orientation:src.imageOrientation];
CGImageRelease(imageRef); // the Create rule applies, so the caller must release
Passing the source's imageOrientation back into imageWithCGImage:scale:orientation: makes the cropped result display the right way up, since the crop itself operates on the raw, un-rotated CGImage.
This could help someone!
- (UIImage *)removeImageOrientation:(UIImage *)imgTakenByUser
{
    // Redrawing the image bakes the orientation into the pixels, so the
    // returned image carries no orientation metadata. Passing the source's
    // scale keeps Retina images at full resolution.
    UIGraphicsBeginImageContextWithOptions(imgTakenByUser.size, NO, imgTakenByUser.scale);
    [imgTakenByUser drawInRect:CGRectMake(0, 0, imgTakenByUser.size.width, imgTakenByUser.size.height)];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Thanks!
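A hedged sketch tying the two answers together (srcImage and croppingRect are assumed from the question): normalize the orientation first, then crop in the now-upright coordinate space:
UIImage *upright = [self removeImageOrientation:srcImage];
CGImageRef croppedRef = CGImageCreateWithImageInRect(upright.CGImage, croppingRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef scale:upright.scale orientation:UIImageOrientationUp];
CGImageRelease(croppedRef);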

Screenshot of webView with full image quality

I am trying to take a screenshot of a webView and convert it to a PDF. I have used the code from this project: http://www.ioslearner.com/wp-content/uploads/2012/01/HtmlToPdfDemo.zip
It works fine on iPad, but it doesn't cover the full width on iPhone. I have used -sizeThatFits: on the webView, but that gives unreadable images for large HTML pages. I have searched a lot, but everything I could find was for Android, not iPhone. Please help me. Thanks!
By changing the webView's frame to its full content size before you render it, you can capture the entire page:
NSMutableData *pdfData = [NSMutableData data];
CGRect contentRect = CGRectMake(0, 0, webView.scrollView.contentSize.width, webView.scrollView.contentSize.height);
[webView setFrame:contentRect];
UIGraphicsBeginPDFContextToData(pdfData, contentRect, nil);
UIGraphicsBeginPDFPage();
CGContextRef context = UIGraphicsGetCurrentContext();
[webView.layer renderInContext:context];
UIGraphicsEndPDFContext();
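Two caveats worth noting: the scroll view's contentSize is only meaningful once the page has finished loading, and you will probably want to restore the original frame afterwards. A minimal sketch:
- (void)webViewDidFinishLoad:(UIWebView *)webView
{
    CGRect originalFrame = webView.frame;
    // ... run the PDF rendering code above ...
    [webView setFrame:originalFrame];
}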

Is it possible to render AVCaptureVideoPreviewLayer in a graphics context?

This seems like a simple task, yet it is driving me nuts. Is it possible to convert a UIView containing an AVCaptureVideoPreviewLayer as a sublayer into an image to be saved? I want to create an augmented reality overlay and have a button save the picture to the camera roll. Pressing the power button + home button captures the screenshot to the camera roll, meaning that all of my capture logic is working AND the task is possible. But I cannot seem to make it work programmatically.
I'm capturing a live preview of the camera's image using AVCaptureVideoPreviewLayer. All of my attempts to render the image fail:
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
//start the session, etc...

//this saves a white screen
- (IBAction)saveOverlay:(id)sender {
    NSLog(@"saveOverlay");
    UIGraphicsBeginImageContext(appDelegate.window.bounds.size);
    UIGraphicsBeginImageContext(scrollView.frame.size);
    [previewLayer.presentationLayer renderInContext:UIGraphicsGetCurrentContext()];
    // [appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(screenshot, self,
                                   @selector(image:didFinishSavingWithError:contextInfo:), nil);
}

//this renders everything, EXCEPT for the preview layer, which is blank.
[appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
I've read somewhere that this may be due to security issues of the iPhone. Is this true?
Just to be clear: I don't want to save the image from the camera by itself. I want to save the transparent preview layer superimposed over another image, creating a composite. Yet for some reason I cannot make it work.
I like @Roma's suggestion of using GPUImage - great idea. However, if you want a pure Cocoa Touch approach, here's what to do:
Implement AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage (rotated to the correct orientation) from the sample buffer data
    if (_captureFrame)
    {
        [captureSession stopRunning];
        _captureFrame = NO;
        UIImage *image = [ImageTools imageFromSampleBuffer:sampleBuffer];
        image = [image rotate:UIImageOrientationRight];
        _frameCaptured = YES;
        if (delegate != nil)
        {
            [delegate cameraPictureTaken:image];
        }
    }
}
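This assumes the capture session already has a video data output wired to this delegate. A minimal setup sketch (the queue name is illustrative):
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// 32BGRA matches the kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst
// layout assumed by imageFromSampleBuffer: below
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[videoOutput setSampleBufferDelegate:self
                               queue:dispatch_queue_create("capture.queue", DISPATCH_QUEUE_SERIAL)];
if ([captureSession canAddOutput:videoOutput]) {
    [captureSession addOutput:videoOutput];
}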
Convert the sample buffer to a UIImage as follows:
+ (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data (assumes 32BGRA)
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
Blend the UIImage with the overlay
Now that you have the UIImage, add it to a new UIView, add the overlay on top as a subview, and capture the new UIView (a usage sketch follows the helper below):
+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, [UIScreen mainScreen].scale);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
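Putting the steps together, a hedged sketch (cameraImage is the frame from the delegate above; overlayView and the ImageTools class name are illustrative):
UIView *composite = [[UIView alloc] initWithFrame:CGRectMake(0, 0, cameraImage.size.width, cameraImage.size.height)];
UIImageView *imageView = [[UIImageView alloc] initWithImage:cameraImage];
[composite addSubview:imageView];   // camera frame at the back
[composite addSubview:overlayView]; // transparent AR overlay on top
UIImage *result = [ImageTools imageWithView:composite];
UIImageWriteToSavedPhotosAlbum(result, nil, NULL, NULL);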
I can advise you to try GPUImage:
https://github.com/BradLarson/GPUImage
It uses OpenGL, so it's rather fast. It can process pictures from the camera and add filters to them (there are a lot of them), including edge detection, motion detection, and far more.
It's like OpenCV, but in my experience GPUImage is easier to connect to your project, and the language is Objective-C.
A problem could appear if you decide to use Box2D for physics - it uses OpenGL too, and you will need to spend some time until those two frameworks stop fighting.
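For a flavor of the API, a hedged sketch based on the project's README (check the repository for current class names): a live camera feed run through a filter and shown in a GPUImageView:
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
[videoCamera addTarget:sepiaFilter];
GPUImageView *filteredView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[sepiaFilter addTarget:filteredView];
[self.view addSubview:filteredView];
[videoCamera startCameraCapture];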

Rendering UIWebView as Image/PDF Has Visual Artifacts

I'm rendering a UIWebView's layer into a graphics context and then using the UIGraphicsBeginPDFPageWithInfo() family of functions to include it in a PDF.
My problem is that the output includes an extra set of gray lines that aren't part of my data set. I'm hoping someone can shed some light on where they're coming from.
An example of the output is included below. The HTML document that is being rendered contains nothing but the text 'THIS IS A TEST' - the boxes you see are coming from the rendering process somewhere. When rendered on the screen, it's just black text on a white screen - no lines/boxes.
Anyone have any ideas what's going on? Thanks!
Here's the code I'm using to render this web view as a PDF:
NSMutableData *pdfData = [NSMutableData data];
UIGraphicsBeginPDFContextToData(pdfData, CGRectZero, nil);
CGRect viewBounds = webView.bounds;
UIGraphicsBeginPDFPageWithInfo(viewBounds, nil);
CGContextRef pdfContext = UIGraphicsGetCurrentContext();
[webView.layer renderInContext:pdfContext];
UIGraphicsEndPDFContext();
Also, here's a screenshot of what I'm seeing as output (image omitted):
I ran into the same issue. It would happen whenever I tried to render the UIWebView into the PDF context with a frame width greater than 512. I was not able to diagnose the exact cause, but I worked around it by rendering the UIWebView into a UIImage first, and then rendering the UIImage into the PDF context.
Code as follows:
UIGraphicsBeginPDFPageWithInfo(CGRectMake(0, 0, kWidth, kHeight), nil);
CGContextRef currentContext = UIGraphicsGetCurrentContext();

UIImage *image = nil;
UIGraphicsPushContext(currentContext);
UIGraphicsBeginImageContext(self.webview.frame.size);
{
    [self.webview.layer renderInContext:UIGraphicsGetCurrentContext()];
    image = UIGraphicsGetImageFromCurrentImageContext();
}
UIGraphicsEndImageContext();
UIGraphicsPopContext();

[image drawInRect:CGRectMake(0, 0, kWidth, kHeight)];
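One hedged refinement (not from the original answer): if the intermediate bitmap comes out soft on Retina devices, render it at device scale instead:
UIGraphicsBeginImageContextWithOptions(self.webview.frame.size, YES, 0.0); // 0.0 = device scale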

Adding tags to images - a cross between Facebook tags & map pointers

I intend to make an app that adds a layer of text or pointers to images - kind of like adding another layer of text/pointers over an image displayed on the iPhone.
Also, upon exporting, I want the image with the added text over it to be flattened into the new "exported" image.
I'm puzzled as to how to start this.
Any ideas or references would be greatly appreciated.
Well, the easiest way is to load the UIImageView and the other views into a single parent view, and then snapshot that parent:
@implementation UIView (Imaging)

- (UIImage *)getSnapshotImage
{
    UIGraphicsBeginImageContext(self.bounds.size);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}

@end
Once you have the UIImage, convert it to JPEG or PNG data using
NSData * UIImageJPEGRepresentation(UIImage *image, CGFloat compressionQuality);
which you can then store in a file with NSData's
- (BOOL)writeToFile:(NSString *)path atomically:(BOOL)flag
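Putting it together, a minimal sketch (parentView and the file name are illustrative):
UIImage *snapshot = [parentView getSnapshotImage];
NSData *jpegData = UIImageJPEGRepresentation(snapshot, 0.9); // 0.9 = high quality
NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
BOOL saved = [jpegData writeToFile:[docsDir stringByAppendingPathComponent:@"tagged.jpg"] atomically:YES];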