Generate a PDF from an OpenGL ES context on iPhone/iPad

Hi, I'm a noob at PDF generation. Currently I'm displaying a 3D bar chart in my application in a GL view which is inside a UIView. Now I want to generate a PDF document from the UIView, which contains a text view and a GL view. I'm able to generate a PDF from the text, but for the GL view I'm unable to proceed. I searched for a solution on the net but couldn't find any info regarding this. Can someone help me by telling me whether this is possible, and how? Thanks in advance.

You can ask any UIView to give you an image (which might be a start) via something like:
- (UIImage *)imageRepresentation {
    // Render myself (or more correctly, my layer) to an image.
    UIGraphicsBeginImageContext(self.bounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetShouldSmoothFonts(context, false);
    CGContextSetShouldAntialias(context, false);
    // renderInContext: draws the layer and its sublayers; there is no need to
    // call drawLayer:inContext: directly as well.
    [self.layer renderInContext:context];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
After that, you might be able to get a PDF representation of it, or investigate calls like CGPDFContextCreateWithURL (CGPDFDocumentCreateWithURL is for opening an existing PDF) to construct one yourself from the image.

You can use Core Graphics' PDF APIs to create and draw into a PDF.
See Apple's guide for more details.
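For example, here is a minimal sketch of that approach in Objective-C. It assumes you already have a UIImage snapshot of the view (e.g. from the imageRepresentation method above) and an output path of your choosing; the method name writeImage:toPDFAtPath: is just for illustration.
// Minimal sketch: write a single-page PDF containing a captured UIImage.
// "snapshot" and "pdfPath" are placeholders; supply your own image and path.
- (void)writeImage:(UIImage *)snapshot toPDFAtPath:(NSString *)pdfPath {
    CGRect pageRect = CGRectMake(0, 0, snapshot.size.width, snapshot.size.height);
    // Open a PDF drawing context backed by a file; each BeginPDFPage call adds a page.
    UIGraphicsBeginPDFContextToFile(pdfPath, pageRect, nil);
    UIGraphicsBeginPDFPageWithInfo(pageRect, nil);
    // Draw the snapshot into the current (PDF) context.
    [snapshot drawInRect:pageRect];
    UIGraphicsEndPDFContext();
}
You could call it with the image returned by imageRepresentation and a path inside your app's Documents directory.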

Related

Generate document thumbnails

I am trying to display a grid of documents (saved in the documents directory) but I don't know how to generate the thumbnails for the documents. The documents can be anything that a QLPreviewController can display. PDFs and images are fine to do, but other things like .doc files I don't know about. Any guidance would help.
Since you have a view that can display any of these documents, you could just take a shortcut:
- Create an instance of your preview controller with the document displayed
- Do not add this view/controller to anything
- Create an image from its layer
This might help:
+ (UIImage *)imageFromView:(UIView *)view {
    CALayer *layer = view.layer;
    // Render the view's layer into an image context of the same size.
    UIGraphicsBeginImageContext([layer frame].size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
Play around a bit with the layers and the initialization, as I didn't test the code.
A better option: you can use a UIWebView, in which you just load the file by giving it the file path. Then take the screenshot using the code given above by Matic Oblak and you are done.
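A rough sketch of that UIWebView approach, assuming an off-screen web view and a hypothetical DocumentThumbnailer class (you must wait for webViewDidFinishLoad: before snapshotting, since loading is asynchronous):
#import <UIKit/UIKit.h>

// Sketch: load a document into an off-screen UIWebView and snapshot it once
// loading finishes. Class name, frame size and callback shape are assumptions.
@interface DocumentThumbnailer : NSObject <UIWebViewDelegate>
@property (nonatomic, strong) UIWebView *webView;
@property (nonatomic, copy) void (^completion)(UIImage *thumbnail);
@end

@implementation DocumentThumbnailer

- (void)thumbnailForDocumentAtURL:(NSURL *)fileURL
                       completion:(void (^)(UIImage *thumbnail))completion {
    self.completion = completion;
    // The web view is never added to the view hierarchy.
    self.webView = [[UIWebView alloc] initWithFrame:CGRectMake(0, 0, 200, 260)];
    self.webView.delegate = self;
    [self.webView loadRequest:[NSURLRequest requestWithURL:fileURL]];
}

- (void)webViewDidFinishLoad:(UIWebView *)webView {
    // Same snapshot technique as imageFromView: above.
    UIGraphicsBeginImageContext(webView.bounds.size);
    [webView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    if (self.completion) self.completion(thumbnail);
}

@end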

Adding picture frame to a photo

I am making an app that adds a picture frame to a photo. I would like to know how to have my Save button save both images (the photo and the frame) as one image. Right now it only saves one of the images.
In Interface Builder I have the save action saving the image that is loaded into an image view, with the frame image view overlaying that image.
I'd like to merge the two photos into one, so the save action can save the image with the frame.
Thanks!
For this you may need to use masking on the iPhone, where the unnecessary part of the image is automatically removed and the result is attached to the frame.
I think this will help you implement it best for your application.
You can refer to the following link for a download, a tutorial, and the source as well.
Reference link
You need to do some drawing using Core Graphics. This code should do what you want, possibly with some tweaks to the rectangles/sizes:
UIGraphicsBeginImageContext(image.size);
// Draw the photo first, then the frame on top, both filling the full size.
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
[frameImage drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
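If the Save button should then write the merged image to the photo library, something like the following could be used (assuming you just want it in the camera roll and don't need a completion callback):
// Save the merged photo+frame image to the user's photo library.
UIImageWriteToSavedPhotosAlbum(result, nil, nil, NULL);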

Custom image mask in iOS

I have an issue with masking images. I am making a "puzzle" game and have to produce custom-shaped images. I found and tried two ways of custom cropping:
1. Using the CALayer.mask property.
2. Using CGImage masking.
In the first option I create my custom path, assign it to the CAShapeLayer.path property, then assign the CAShapeLayer to the CALayer.mask property. At the end I have a custom-cropped image.
In the second option I first use the CGImageMaskCreate() function (with previously created black mask images of the puzzle pieces), then CGContextClipToMask().
With either option I have a performance problem (mostly when I crop the image into 16 puzzle pieces and drag them over the screen).
Are there any other approaches to cropping an image in a custom shape?
(I don't know how to solve the performance problem.)
Thanks in advance.
There are lots of UIImage-categories out there you can use for this. Give me a moment and I'll post some links here:
Cropping an UIImage (not really a category though, but it'll fit)
UIImage: Resize, then Crop
https://sites.google.com/a/injoit.com/knowledge-base/for-developers/graphics/uiimage-routines-scaling-cropping-rotating-etc
http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/
http://maybelost.com/2010/11/cropping-a-uiimage-on-iphone/
Try this:
- (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
    // Crop by creating a new CGImage limited to the given rect.
    CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
    UIImage *cropped = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return cropped;
}
...
UIImage *temp_image = [self imageByCropping:original_image toRect:clipping_rectangle];
Maybe you should consider drawing the image into a new image with an alpha background and then overdrawing the current background. I mean: all pixels inside the jigsaw piece keep their normal colour, and all pixels outside the jigsaw piece become transparent. Then try to blend it onto the new background or overdraw it.
Just my 2 cents. :)
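Here is a minimal sketch of that idea, assuming you already have the piece's outline as a UIBezierPath; the method name and parameters are made up for illustration. It draws the source image into a transparent context clipped to the path, so every pixel outside the piece ends up fully transparent:
// Sketch: render "image" clipped to "piecePath" over a transparent background.
- (UIImage *)pieceImageFromImage:(UIImage *)image clippedToPath:(UIBezierPath *)piecePath {
    // Opaque = NO so the area outside the path stays transparent.
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [piecePath addClip]; // restrict all further drawing to the piece outline
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *pieceImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return pieceImage;
}
You can then composite the resulting piece images onto any background, which avoids live CALayer masking while the pieces are being dragged.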

Copying the bitmap contents of a UIView's context to that of another UIView

Basically what I want to do is copy the already-rendered content (a PDF drawn into the UIView's graphics context using CGContextDrawPDFPage()) onto a similar UIView, without having to re-render the PDF. The idea is that I'd then be able to perform an animated transform on the UIView and later re-render the PDF with more accuracy. For both UIViews I'm using a larger-than-screen CATiledLayer to make it easier to re-render the PDF once the user zooms in, if that makes any difference.
Any tips? I'm kind of lost here.
Assuming you have rendered a PDF page in a graphics context using code similar to the following
CGPDFDocumentRef document = CGPDFDocumentCreateWithURL (filename_url);
CGPDFPageRef page = CGPDFDocumentGetPage (document, pageNumber);
CGContextDrawPDFPage (context, page);
CGPDFDocumentRelease (document);
This code will save the contents of pdfView to a UIImage
UIGraphicsBeginImageContext(pdfView.bounds.size);
[pdfView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *pdfViewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
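To show that bitmap in the second view without re-rendering the PDF, one option (secondView here stands in for your other UIView, an assumption about your setup) is to hand the image straight to its layer:
// Use the snapshot as the second view's backing image; no PDF re-rendering needed.
// Under ARC, cast with (__bridge id) instead.
secondView.layer.contents = (id)pdfViewImage.CGImage;
You can then apply your animated transform to secondView and swap a freshly rendered page back in afterwards.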

iPhone: Get camera preview

I'd like to get the image that is being displayed by the UIImagePickerController when the user uses the camera. Once I have it, I want to process the image and display it instead of the regular camera view.
But the problem is that when I try to get the camera view, the image is just a black rectangle.
Here's my code:
UIView *cameraView = [[[[[[imagePicker.view subviews] objectAtIndex:0]
                                            subviews] objectAtIndex:0]
                                            subviews] objectAtIndex:0];
UIGraphicsBeginImageContext(CGSizeMake(320, 427));
[cameraView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageToDisplay.image = [PixelProcessing processImage:viewImage]; // In this case the image is black
//imageToDisplay.image = viewImage; // In this case the image is black too
//imageToDisplay.image = [UIImage imageNamed:@"icon.png"]; // In this case the image is displayed properly
What am I doing wrong?
Thanks.
This one also works quite well. Use it when the camera preview is open:
UIImage *viewImage = [[(id)objc_getClass("PLCameraController")
                          performSelector:@selector(sharedInstance)]
                          performSelector:@selector(_createPreviewImage)];
But as far as I found out, it gives the same result as the following solution, which takes a "screenshot" of the current screen:
extern CGImageRef UIGetScreenImage();
CGImageRef cgoriginal = UIGetScreenImage();
CGImageRef cgimg = CGImageCreateWithImageInRect(cgoriginal, rect);
UIImage *viewImage = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgoriginal);
CGImageRelease(cgimg);
A problem I still haven't found a fix for: how can one get the camera image very quickly, without any overlays?
The unofficial call is:
UIGetScreenImage()
which you declare above the @implementation as:
extern CGImageRef UIGetScreenImage();
There may be a documented way to do this in 3.1, but I'm not sure. If not, please file a Radar with Apple asking them to make some kind of screen-grab access public!
(Filing a Radar uses the same Apple ID you log in to the iPhone developer portal with.)
Update: This call is still not documented, but Apple has explicitly given the OK to use it in App Store apps.
At least for now, there's no way to do this. (Certainly no official, documented way, and as far as I know nobody has figured out an unofficial way either.)
The camera preview data is drawn by the OS in some way that bypasses the normal graphics methods.