iOS SDK - Programmatically generate a PDF file (iPhone)

Using the CoreGraphics framework is tedious work, in my honest opinion, when it comes to programmatically drawing a PDF file.
I would like to programmatically create a PDF, using various objects from views throughout my app.
I am interested to know if there are any good PDF tutorials around for the iOS SDK, or maybe a drop-in library.
I've seen this tutorial, PDF Creation Tutorial, but it was mostly written in C and I'm looking for something more Objective-C-style. It also seems like a ridiculous way to write a PDF file, having to calculate where lines and other objects will be placed.
void CreatePDFFile (CGRect pageRect, const char *filename)
{
    // This code block sets up our PDF context so that we can draw to it
    CGContextRef pdfContext;
    CFStringRef path;
    CFURLRef url;
    CFMutableDictionaryRef myDictionary = NULL;

    // Create a CFString from the filename we provide to this method when we call it
    path = CFStringCreateWithCString (NULL, filename, kCFStringEncodingUTF8);

    // Create a CFURL using the CFString we just defined
    url = CFURLCreateWithFileSystemPath (NULL, path, kCFURLPOSIXPathStyle, 0);
    CFRelease (path);

    // This dictionary contains extra options, mostly for 'signing' the PDF
    myDictionary = CFDictionaryCreateMutable(NULL, 0,
                                             &kCFTypeDictionaryKeyCallBacks,
                                             &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(myDictionary, kCGPDFContextTitle, CFSTR("My PDF File"));
    CFDictionarySetValue(myDictionary, kCGPDFContextCreator, CFSTR("My Name"));

    // Create our PDF context with the CFURL, the CGRect we provide, and the above-defined dictionary
    pdfContext = CGPDFContextCreateWithURL (url, &pageRect, myDictionary);

    // Clean up our mess
    CFRelease(myDictionary);
    CFRelease(url);

    // Done creating our PDF context, now it's time to draw to it
    // Start our first page
    CGContextBeginPage (pdfContext, &pageRect);

    // Draw a black rectangle around the page, inset by 50 on all sides
    CGContextStrokeRect(pdfContext, CGRectMake(50, 50, pageRect.size.width - 100, pageRect.size.height - 100));

    // This code block creates an image that we then draw to the page
    const char *picture = "Picture";
    CGImageRef image;
    CGDataProviderRef provider;
    CFStringRef picturePath;
    CFURLRef pictureURL;

    picturePath = CFStringCreateWithCString (NULL, picture, kCFStringEncodingUTF8);
    pictureURL = CFBundleCopyResourceURL(CFBundleGetMainBundle(), picturePath, CFSTR("png"), NULL);
    CFRelease(picturePath);
    provider = CGDataProviderCreateWithURL (pictureURL);
    CFRelease (pictureURL);
    image = CGImageCreateWithPNGDataProvider (provider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease (provider);
    CGContextDrawImage (pdfContext, CGRectMake(200, 200, 207, 385), image);
    CGImageRelease (image);
    // End image code

    // Add some text on top of the image we just added
    CGContextSelectFont (pdfContext, "Helvetica", 16, kCGEncodingMacRoman);
    CGContextSetTextDrawingMode (pdfContext, kCGTextFill);
    CGContextSetRGBFillColor (pdfContext, 0, 0, 0, 1);
    const char *text = "Hello World!";
    CGContextShowTextAtPoint (pdfContext, 260, 390, text, strlen(text));
    // End text

    // We are done drawing to this page, so end it
    // We could add as many pages as we wanted using CGContextBeginPage/CGContextEndPage
    CGContextEndPage (pdfContext);

    // We are done with our context now, so we release it
    CGContextRelease (pdfContext);
}
EDIT: Here's an example on GitHub using libHaru in an iPhone project.

A couple things...
First, there is a bug with CoreGraphics PDF generation in iOS that results in corrupted PDFs. I know this issue exists up to and including iOS 4.1 (I haven't tested iOS 4.2). The issue is related to fonts and only shows up if you include text in your PDF. The symptom is that, when generating the PDF, you'll see errors in the debug console that look like this:
<Error>: can't get CIDs for glyphs for 'TimesNewRomanPSMT'
The tricky aspect is that the resulting PDF will render fine in some PDF readers, but fail to render in other places. So, if you have control over the software that will be used to open your PDF, you may be able to ignore this issue (e.g., if you only intend to display the PDF on the iPhone or Mac desktops, then you should be fine using CoreGraphics). However, if you need to create a PDF that works anywhere, then you should take a closer look at this issue. Here's some additional info:
http://www.iphonedevsdk.com/forum/iphone-sdk-development/15505-pdf-font-problem-cant-get-cids-glyphs.html#post97854
As a workaround, I've used libHaru successfully on iPhone as a replacement for CoreGraphics PDF generation. It was a little tricky getting libHaru to build with my project initially, but once I got my project setup properly, it worked fine for my needs.
Second, depending on the format/layout of your PDF, you might consider using Interface Builder to create a view that serves as a "template" for your PDF output. You would then write code to load the view, fill in any data (e.g., set text for UILabels, etc.), then render the individual elements of the view into the PDF. In other words, use IB to specify coordinates, fonts, images, etc. and write code to render various elements (e.g., UILabel, UIImageView, etc.) in a generic way so you don't have to hard-code everything. I used this approach and it worked out great for my needs. Again, this may or may not make sense for your situation depending on the formatting/layout needs of your PDF.
EDIT: (response to 1st comment)
My implementation is part of a commercial product meaning that I can't share the full code, but I can give a general outline:
I created a .xib file with a view and sized the view to 850 x 1100 (my PDF was targeting 8.5 x 11 inches, so this makes it easy to translate to/from design-time coordinates).
In code, I load the view:
- (UIView *)loadTemplate
{
    NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"ReportTemplate" owner:self options:nil];
    for (id view in nib) {
        if ([view isKindOfClass:[UIView class]]) {
            return view;
        }
    }
    return nil;
}
I then fill in various elements. I used tags to find the appropriate elements, but you could do this in other ways. Example:
UILabel *label = (UILabel *)[templateView viewWithTag:TAG_FIRST_NAME];
if (label != nil) {
    label.text = (firstName != nil) ? firstName : @"None";
}
Then I call a function to render the view to the PDF file. This function recursively walks the view hierarchy and renders each subview. For my project, I need to support only Label, ImageView, and View (for nested views):
- (void)addObject:(UIView *)view
{
    if (view != nil && !view.hidden) {
        if ([view isKindOfClass:[UILabel class]]) {
            [self addLabel:(UILabel *)view];
        } else if ([view isKindOfClass:[UIImageView class]]) {
            [self addImageView:(UIImageView *)view];
        } else if ([view isKindOfClass:[UIView class]]) {
            [self addContainer:view];
        }
    }
}
As an example, here's my implementation of addImageView (HPDF_ functions are from libHaru):
- (void)addImageView:(UIImageView *)imageView
{
    NSData *pngData = UIImagePNGRepresentation(imageView.image);
    if (pngData != nil) {
        HPDF_Image image = HPDF_LoadPngImageFromMem(_pdf, [pngData bytes], [pngData length]);
        if (image != NULL) {
            CGRect destRect = [self rectToPDF:imageView.frame];

            float x = destRect.origin.x;
            float y = destRect.origin.y - destRect.size.height;
            float width = destRect.size.width;
            float height = destRect.size.height;

            HPDF_Page page = HPDF_GetCurrentPage(_pdf);
            HPDF_Page_DrawImage(page, image, x, y, width, height);
        }
    }
}
Hopefully that gives you the idea.

It's a late reply, but since I struggled a lot with PDF generation, I thought it was worth sharing my views. Instead of creating the context with Core Graphics, you can also use UIKit methods to generate a PDF.
Apple documents this well in the Drawing and Printing Guide for iOS.
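For reference, here is a minimal sketch of the UIKit-based approach described in that guide; the method name, output path, and "logo.png" resource are placeholders, not anything from the guide itself:
// A minimal sketch of UIKit PDF generation (names and resources are placeholders).
// UIGraphicsBeginPDFContextToFile creates the PDF context and makes it current,
// so ordinary UIKit drawing calls end up in the PDF file.
- (void)createSamplePDFAtPath:(NSString *)path
{
    UIGraphicsBeginPDFContextToFile(path, CGRectZero, nil);   // CGRectZero = default 612 x 792 page

    // Start a page (8.5 x 11 inches at 72 dpi).
    UIGraphicsBeginPDFPageWithInfo(CGRectMake(0, 0, 612, 792), nil);

    // Draw with UIKit conveniences instead of raw Core Graphics calls.
    [@"Hello PDF" drawAtPoint:CGPointMake(72, 72)
                     withFont:[UIFont systemFontOfSize:24]];

    UIImage *logo = [UIImage imageNamed:@"logo.png"];          // hypothetical resource
    [logo drawInRect:CGRectMake(72, 144, 128, 128)];

    UIGraphicsEndPDFContext();
}
You can call UIGraphicsBeginPDFPageWithInfo again for each additional page before ending the context.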

The PDF functions on iOS are all CoreGraphics based, which makes sense, because the draw primitives in iOS are also CoreGraphics based. If you want to be rendering 2D primitives directly into a PDF, you'll have to use CoreGraphics.
There are some shortcuts for objects that live in UIKit as well, like images. Drawing a PNG to a PDF context still requires the call to CGContextDrawImage, but you can do it with the CGImage that you can get from a UIImage, e.g.:
UIImage *myPNG = [UIImage imageNamed:@"mypng.png"];
CGContextDrawImage (pdfContext, CGRectMake(200, 200, 207, 385), [myPNG CGImage]);
If you want more guidance on overall design, please be more explicit about what you mean by "various objects throughout your app" and what you're trying to accomplish.

Late, but possibly helpful to others. It sounds like a PDF-template style of operation might be the approach you want, rather than building the PDF in code. Since you want to email it off anyway, you'll be connected to the net, so you could use something like the Docmosis cloud service, which is effectively a mail-merge service: send it the data/images to merge with your template. That type of approach has the benefit of a lot less code and offloads most of the processing from your app. I've seen it used in an iPad app and it was nice.

Related

Text Formatting in PDF Reader and Writer in iOS

In my application I have to show a PDF reader (PDF from a server) and copy everything to another PDF. I am using Reader to show the PDF.
But it is unable to format the text inside the PDF: bold text is shown in a different, narrower font rather than in bold. Can you please give me a solution?
Again, for writing I am using "Saving a PDF document to disk using Quartz", but I am facing the same problem there: the new PDF shows a font other than the one used in the source PDF.
I also tried the code below, but I still get the same output. Please help me with this. I have not tested it with other text styles like italic or underline; it should work for those as well.
- (void)generatePdfWithFilePath:(NSString *)thefilePath
{
    UIGraphicsBeginPDFContextToFile(thefilePath, CGRectZero, NULL);

    BOOL done = NO;
    do {
        // Start a new page.
        UIGraphicsBeginPDFPageWithInfo(CGRectMake(0, 0, pageSize.width, pageSize.height), nil);

        NSString *filePath = [[NSBundle mainBundle] pathForResource:@"MyPdf" ofType:@"pdf"];
        CFURLRef url = (CFURLRef)[NSURL fileURLWithPath:filePath];

        CGPDFDocumentRef pdf = CGPDFDocumentCreateWithURL(url);
        CGPDFPageRef page = CGPDFDocumentGetPage(pdf, 1);
        CGRect paperSize = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);

        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextTranslateCTM(context, 0, paperSize.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextDrawPDFPage(context, page);

        CGPDFDocumentRelease(pdf);   // release the source document
        done = YES;
    } while (!done);

    UIGraphicsEndPDFContext();
}
This happens when the source PDF was generated somewhere other than on an iOS device.
A couple of thoughts:
The font you're looking to display is not available on iOS devices, nor is it embedded in the PDF, so iOS substitutes a font it thinks is similar.
You don't have to use Reader to display a PDF: you can use a web view, a UIDocumentInteractionController, or render the pages yourself with CGPDFDocument. You might have more luck using one of these built-in approaches rather than Reader.
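As a rough sketch of the first two options (localPDFPath and the surrounding view controller are assumptions, and error handling is omitted):
// A minimal sketch, assuming the PDF has already been saved to a local path.
// UIWebView renders PDFs natively and generally honors embedded fonts.
NSURL *pdfURL = [NSURL fileURLWithPath:localPDFPath];   // localPDFPath is a placeholder
UIWebView *webView = [[UIWebView alloc] initWithFrame:self.view.bounds];
webView.scalesPageToFit = YES;
[webView loadRequest:[NSURLRequest requestWithURL:pdfURL]];
[self.view addSubview:webView];

// Or hand the file to a UIDocumentInteractionController preview; the delegate must
// implement documentInteractionControllerViewControllerForPreview:.
UIDocumentInteractionController *docController =
    [UIDocumentInteractionController interactionControllerWithURL:pdfURL];
docController.delegate = self;
[docController presentPreviewAnimated:YES];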

Document thumbnails generated from preview content

I'm creating a grid of cells that represent downloaded files of various types (PDF, images, videos, HTML, .pages, .numbers, Word, etc.) in an iOS application, and I am looking for a way to preemptively create preview thumbnails for these documents to show in the cells. I'm currently using UIDocumentInteractionController for previewing these documents after the user selects one, and was hoping that its icons array would return the preview images. Unfortunately, it only returns a generic icon.
I see that Pages and Numbers do this, but they own the documents and the format. I was hoping that the Quicklook framework would provide a solution, but I haven't yet found it. Has anybody else found a way to easily generate these thumbnails?
I'd prefer to use an actual UIImage and not a UIWebView as others have suggested. That solution seems to be a hack and I have to believe there's a better solution out there.
My next option is to generate a preview after they open the document by essentially capturing the view as an image, but that still seems hackish.
Anybody have any thoughts or clearer options?
Thanks,
My solution:
I never found a complete solution to this particular problem, but I was able to handle PDFs fairly efficiently. I honestly don't remember if I wrote this or found an example; if it's your code that was posted somewhere else, thanks! The end result here is that docImage is either nil or contains the first page of the PDF as an image. When loading my table rows, I set the image preview to a default image and then run this code on a background thread using blocks. Once complete, I simply animate the preview back into the cell.
NSURL *pdfFileURL = [NSURL fileURLWithPath:localFilePath];
CGPDFDocumentRef pdfDoc = CGPDFDocumentCreateWithURL((__bridge CFURLRef) pdfFileURL);
NSUInteger numPages = CGPDFDocumentGetNumberOfPages(pdfDoc);
if (numPages > 0) {
    docImage = [UIImage imageFromPDFDocumentRef:pdfDoc];
}
CGPDFDocumentRelease(pdfDoc);
UIImage Category:
@implementation UIImage (CGPDFDocument)

+ (UIImage *)imageFromPDFDocumentRef:(CGPDFDocumentRef)documentRef
{
    return [self imageFromPDFDocumentRef:documentRef page:1];
}

+ (UIImage *)imageFromPDFDocumentRef:(CGPDFDocumentRef)documentRef page:(NSUInteger)page
{
    CGPDFPageRef pageRef = CGPDFDocumentGetPage(documentRef, page);
    CGRect pageRect = CGPDFPageGetBoxRect(pageRef, kCGPDFCropBox);

    UIGraphicsBeginImageContext(pageRect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, CGRectGetMinX(pageRect), CGRectGetMaxY(pageRect));
    CGContextScaleCTM(context, 1, -1);
    CGContextTranslateCTM(context, -pageRect.origin.x, -pageRect.origin.y);
    CGContextDrawPDFPage(context, pageRef);

    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return finalImage;
}

@end
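The background-thread wiring mentioned above might look roughly like this; cell, previewImageView, and localFilePath are placeholders from this example, and cell-reuse handling is simplified:
// A rough sketch of dispatching the thumbnail generation to a background queue
// and animating the result back into the cell on the main thread.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *docImage = nil;
    NSURL *pdfFileURL = [NSURL fileURLWithPath:localFilePath];
    CGPDFDocumentRef pdfDoc = CGPDFDocumentCreateWithURL((__bridge CFURLRef)pdfFileURL);
    if (pdfDoc != NULL && CGPDFDocumentGetNumberOfPages(pdfDoc) > 0) {
        docImage = [UIImage imageFromPDFDocumentRef:pdfDoc];
    }
    CGPDFDocumentRelease(pdfDoc);

    dispatch_async(dispatch_get_main_queue(), ^{
        // Cross-dissolve the generated preview into the cell's image view.
        [UIView transitionWithView:cell.previewImageView
                          duration:0.25
                           options:UIViewAnimationOptionTransitionCrossDissolve
                        animations:^{ cell.previewImageView.image = docImage; }
                        completion:nil];
    });
});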

Is it possible to determine if a UIImage is stretchable?

I'm trying to reuse a small chunk of code inside a custom button class. For this to work I need to pass in either non-stretchable images (an icon) or a stretchable image (a 'swoosh'). Within the code I need to set the rect to draw, so ideally I'd like to simply determine whether the image is stretchable or not: if it isn't, I draw it at the size of the image; if it is, I draw it at the bounds of the containing rect.
From my investigation so far, capInsets (iOS 5) and leftCapWidth/topCapHeight (pre-iOS 5) are not useful for this.
Is there something buried in the core or quartz information I can use?
Just curious, for now I'm coding around it with an extra parameter.
** I've since read through CGImageRef and the CI equivalent **
As far as I can tell there is no such information that we can access to identify such images, which begs the question how does the OS know?
There is no way to detect this unless you have some intense image analysis (which won't be 100% correct). UIImage is essentially some pixels with meta-information, all obtained from the file that you loaded it from. No file formats would have that information.
However, you can encode some information into the file name of the image. If you have an image called foo.png that is stretchable, why not call it foo.stretch.png? Your loading routines can analyse the file name and extract meta-information that you can associate with the UIImage (see http://labs.vectorform.com/2011/07/objective-c-associated-objects/ for associated objects) or by creating your own class that composites a UIImage with meta-information.
Good luck in your research.
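A hedged sketch of the filename-convention idea; the category name, the helper method, and the associated-object key are inventions for illustration, not an existing API:
// Mark "foo.stretch.png" style images as stretchable via an associated object.
#import <objc/runtime.h>

static const void *kStretchableKey = &kStretchableKey;

@implementation UIImage (StretchableFlag)

+ (UIImage *)imageNamedWithStretchInfo:(NSString *)name
{
    UIImage *image = [UIImage imageNamed:name];
    // Treat "foo.stretch.png" as stretchable and remember that fact on the instance.
    BOOL stretchable = ([name rangeOfString:@".stretch."].location != NSNotFound);
    objc_setAssociatedObject(image, kStretchableKey,
                             [NSNumber numberWithBool:stretchable],
                             OBJC_ASSOCIATION_RETAIN_NONATOMIC);
    return image;
}

- (BOOL)isMarkedStretchable
{
    return [objc_getAssociatedObject(self, kStretchableKey) boolValue];
}

@end
Note that imageNamed: caches instances, so the flag sticks to the cached image as well.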
When you create a UIImage, its .size property is absolute.
If by "stretchable" you mean relative to your button's view, just check the scale, for example:
- (BOOL)stretchableImage:(UIImage *)aImage toView:(UIView *)aView
{
    CGFloat scaleW = aView.bounds.size.width / aImage.size.width;
    CGFloat scaleH = aView.bounds.size.height / aImage.size.height;
    if (scaleW == scaleH) {
        if (scaleW < 1) {
            return YES;
        }
    }
    return NO;
}
You can check its class.
UIImage *img = [UIImage imageNamed:@"back"];
NSString *imgClass = NSStringFromClass(img.class);
UIImage *imgStretch = [img stretchableImageWithLeftCapWidth:10 topCapHeight:10];
NSString *imgStrClass = NSStringFromClass(imgStretch.class);
NSLog(@"Normal class:\t%@\nStretchable class:\t%@", imgClass, imgStrClass);
Console:
Normal class: UIImage
Stretchable class: _UIResizableImage

Fastest way to draw a screen buffer on the iPhone

I have a "software renderer" that I am porting from PC to the iPhone. what is the fastest way to manually update the screen with a buffer of pixels on the iphone? for instance in windows the fastest function I have found is SetDIBitsToDevice.
I don't know much about the iphone, or the libraries, and there seem to be so many layers and different types of UI elements, so I might need a lot of explanation...
for now I'm just going to constantly update a texture in opengl and render that to the screen, I very much doubt that this is going to be the best way to do it.
UPDATE:
I have tried the OpenGL screen-sized texture method:
I got 17fps...
I used a 512x512 texture (because it needs to be a power of two).
Just the call to
glTexSubImage2D(GL_TEXTURE_2D,0,0,0,512,512,GL_RGBA,GL_UNSIGNED_BYTE, baseWindowGUI->GetBuffer());
seemed to be responsible for pretty much ALL of the slowdown.
Commenting it out, and leaving in all my software-rendering GUI code and the rendering of the now non-updating texture, resulted in 60fps, 30% renderer usage, and no notable spikes from the CPU.
Note that GetBuffer() simply returns a pointer to the software back buffer of the GUI system; there is no re-jigging or resizing of the buffer in any way. It is properly sized and formatted for the texture, so I am fairly certain the slowdown has nothing to do with the software renderer, which is the good news. It looks like if I can find a way to update the screen at 60fps, software rendering should work for the time being.
I tried the texture update call with 512x320 rather than 512x512, and this was oddly even slower... running at 10fps. It also says the render utilization is only about 5%, and all the time is being wasted in a call to Untwiddle32bpp inside OpenGL ES.
I can change my software renderer to natively render to any pixel format, if it would result in a more direct blit.
FYI, tested on a 2.2.1 iPod touch 2G (so like an iPhone 3G on steroids).
UPDATE 2:
I have just finished writing the CoreAnimation/CoreGraphics method. It looks good, but I am a little worried about how it updates the screen each frame, basically ditching the old CGImage and creating a brand new one... check it out in 'someRandomFunction' below:
Is this the quickest way to update the image? Any help would be greatly appreciated.
//
// catestAppDelegate.m
// catest
//
// Created by User on 3/14/10.
// Copyright __MyCompanyName__ 2010. All rights reserved.
//
#import "catestAppDelegate.h"
#import "catestViewController.h"
#import "QuartzCore/QuartzCore.h"
const void * GetBytePointer(void *info)
{
    // this is currently only called once
    return info; // info is a pointer to the buffer
}

void ReleaseBytePointer(void *info, const void *pointer)
{
    // don't care, just using the one static buffer at the moment
}

size_t GetBytesAtPosition(void *info, void *buffer, off_t position, size_t count)
{
    // I don't think this ever gets called
    memcpy(buffer, ((char *)info) + position, count);
    return count;
}

CGDataProviderDirectCallbacks providerCallbacks =
    { 0, GetBytePointer, ReleaseBytePointer, GetBytesAtPosition, 0 };

static CGImageRef cgIm;
static CGDataProviderRef dataProvider;
unsigned char *imageData;
const size_t imageDataSize = 320 * 480 * 4;
NSTimer *animationTimer;
NSTimeInterval animationInterval = 1.0f / 60.0f;
@implementation catestAppDelegate

@synthesize window;
@synthesize viewController;

- (void)applicationDidFinishLaunching:(UIApplication *)application {
    [window makeKeyAndVisible];

    const size_t byteRowSize = 320 * 4;
    imageData = malloc(imageDataSize);
    for (int i = 0; i < imageDataSize / 4; i++)
        ((unsigned int *)imageData)[i] = 0xFFFF00FF; // just set it to some random init color, currently yellow

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    dataProvider = CGDataProviderCreateDirect(imageData, imageDataSize,
                                              &providerCallbacks); // currently global

    cgIm = CGImageCreate(320, 480,
                         8, 32, 320 * 4, colorSpace,
                         kCGImageAlphaNone | kCGBitmapByteOrder32Little,
                         dataProvider, 0, false, kCGRenderingIntentDefault); // also global, probably doesn't need to be

    self.window.layer.contents = cgIm; // set the UIWindow's CALayer's contents to the image, yay works!

    // CGImageRelease(cgIm); // we should do this at some stage...
    // CGDataProviderRelease(dataProvider);

    animationTimer = [NSTimer scheduledTimerWithTimeInterval:animationInterval
                                                      target:self
                                                    selector:@selector(someRandomFunction)
                                                    userInfo:nil
                                                     repeats:YES];
    // set up a timer in the attempt to update the image
}
float col = 0;

- (void)someRandomFunction
{
    // update the original buffer
    for (int i = 0; i < imageDataSize; i++)
        imageData[i] = (unsigned char)(int)col;
    col += 256.0f / 60.0f;

    // and currently the only way I know how to apply that buffer update to the screen is to
    // create a new image and bind it to the layer...???
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    cgIm = CGImageCreate(320, 480,
                         8, 32, 320 * 4, colorSpace,
                         kCGImageAlphaNone | kCGBitmapByteOrder32Little,
                         dataProvider, 0, false, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);

    self.window.layer.contents = cgIm;
    // and that currently works, updating the screen, but I don't know how well it runs...
}
- (void)dealloc {
    [viewController release];
    [window release];
    [super dealloc];
}

@end
The fastest App Store approved way to do CPU-only 2D graphics is to create a CGImage backed by a buffer using CGDataProviderCreateDirect and assign that to a CALayer's contents property.
For best results use the kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little or kCGImageAlphaNone | kCGBitmapByteOrder32Little bitmap types and double buffer so that the display is never in an inconsistent state.
Edit: this should be faster than drawing to an OpenGL texture in theory, but as always, profile to be sure.
Edit 2: CADisplayLink is a useful class no matter which compositing method you use.
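A rough sketch of the double-buffering plus CADisplayLink idea described above; the buffers, images, and targetLayer are placeholders that would be created elsewhere with CGDataProviderCreateDirect/CGImageCreate, as in the question's code, and renderSceneInto: stands in for the software renderer:
// Two CGImages, each backed by its own pixel buffer; only one is on screen at a time.
static unsigned char *frontBuffer, *backBuffer;
static CGImageRef frontImage, backImage;
static CALayer *targetLayer;

- (void)startDisplayLink
{
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(pushFrame:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
}

- (void)pushFrame:(CADisplayLink *)link
{
    // Render the next frame into the buffer that is NOT currently on screen...
    [self renderSceneInto:backBuffer];   // placeholder for the software renderer

    // ...then swap buffers and images, and hand the freshly drawn image to the layer.
    unsigned char *tmpBuf = frontBuffer; frontBuffer = backBuffer; backBuffer = tmpBuf;
    CGImageRef tmpImg = frontImage; frontImage = backImage; backImage = tmpImg;

    targetLayer.contents = (id)frontImage;
}
This way the layer never composites a buffer that is still being written to.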
The fastest way is to use IOFrameBuffer/IOSurface, which are private frameworks.
So OpenGL seems to be the only possible way for AppStore apps.
Just to post my comment to @rpetrich's answer in the form of an answer, I will say that in my tests I found OpenGL to be the fastest way. I've implemented a simple object (a UIView subclass) called EEPixelViewer that does this generically enough that it should work for most people, I think.
It uses OpenGL to push pixels in a wide variety of formats (24bpp RGB, 32-bit RGBA, and several YpCbCr formats) to the screen as efficiently as possible. The solution achieves 60fps for most pixel formats on almost every single iOS device, including older ones. Usage is super simple and requires no OpenGL knowledge:
pixelViewer.pixelFormat = kCVPixelFormatType_32RGBA;
pixelViewer.sourceImageSize = CGSizeMake(1024, 768);
EEPixelViewerPlane plane;
plane.width = 1024;
plane.height = 768;
plane.data = pixelBuffer;
plane.rowBytes = plane.width * 4;
[pixelViewer displayPixelBufferPlanes: &plane count: 1 withCompletion:nil];
Repeat the displayPixelBufferPlanes call for each frame (which loads the pixel buffer to the GPU using glTexImage2D), and that's pretty much all there is to it. The code is smart in that it tries to use the GPU for any kind of simple processing required such as permuting the color channels, converting YpCbCr to RGB, etc.
There is also quite a bit of logic for honoring scaling using the UIView's contentMode property, so UIViewContentModeScaleToFit/Fill, etc. all work as expected.
Perhaps you could abstract the methods used in the software renderer to a GPU shader... might get better performance. You'd need to send the encoded "video" data as a texture.
A faster method than both CGDataProvider and glTexSubImage is to use CVOpenGLESTextureCache. The CVOpenGLESTextureCache allows you to directly modify an OpenGL texture in graphics memory without re-uploading.
I used it for a fast animation view you can see here:
https://github.com/justinmeiners/image-sequence-streaming
It is a little tricky to use and I came across it after asking my own question about this topic: How to directly update pixels - with CGImage and direct CGDataProvider
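For orientation, a hedged sketch of the CVOpenGLESTextureCache setup; glContext (your EAGLContext), the 480x320 size, and the frame-writing step are assumptions, and texture/cache cleanup is omitted:
// Create a texture cache bound to the GL context, back a texture with an
// IOSurface-backed CVPixelBuffer, and write pixels directly into that buffer.
#import <CoreVideo/CVOpenGLESTextureCache.h>
#import <OpenGLES/ES2/glext.h>

CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, glContext, NULL, &textureCache);

// The pixel buffer must be IOSurface-backed for the cache to map it into GL.
NSDictionary *attrs = [NSDictionary dictionaryWithObject:[NSDictionary dictionary]
                                                  forKey:(id)kCVPixelBufferIOSurfacePropertiesKey];
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, 480, 320, kCVPixelFormatType_32BGRA,
                    (__bridge CFDictionaryRef)attrs, &pixelBuffer);

CVOpenGLESTextureRef texture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer,
                                             NULL, GL_TEXTURE_2D, GL_RGBA, 480, 320,
                                             GL_BGRA_EXT, GL_UNSIGNED_BYTE, 0, &texture);

// Writing into the pixel buffer's base address updates the texture without a
// separate glTexSubImage2D upload.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *pixels = CVPixelBufferGetBaseAddress(pixelBuffer);
// ... write the frame into 'pixels' here ...
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
// Release the CVOpenGLESTextureRef and flush the cache when done with the frame.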

Annotate PDF within iPhone SDK

I have managed to implement a very basic PDF viewer within my application, but was wondering if it was possible to add annotations to the PDF. I have looked through the SDK docs, but not found anything. I have a few questions really:
Is it possible to do this?
What is the best approach to take?
Is there a framework or library that I can include to assist with this?
Thanks.
You can do annotation by reading in a PDF page, drawing it onto a new PDF graphics context, then drawing extra content onto that graphics context. Here is some code that adds the words 'Example annotation' at position (100.0, 100.0) to an existing PDF. The method getPDFFileName returns the path of the original PDF; getTempPDFFileName returns the path of the new PDF, the one that is the original plus the annotation.
To vary the annotations, just add in more drawing code in place of the drawInRect:withFont: method. See the Drawing and Printing Guide for iOS for more on how to do that.
- (void)exampleAnnotation
{
    NSURL *url = [NSURL fileURLWithPath:[self getPDFFileName]];
    CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((CFURLRef)url);
    size_t count = CGPDFDocumentGetNumberOfPages(document);
    if (count == 0)
    {
        NSLog(@"PDF needs at least one page");
        CGPDFDocumentRelease(document);
        return;
    }

    CGRect paperSize = CGRectMake(0.0, 0.0, 595.28, 841.89);
    UIGraphicsBeginPDFContextToFile([self getTempPDFFileName], paperSize, nil);
    UIGraphicsBeginPDFPageWithInfo(paperSize, nil);
    CGContextRef currentContext = UIGraphicsGetCurrentContext();

    // flip context so page is right way up
    CGContextTranslateCTM(currentContext, 0, paperSize.size.height);
    CGContextScaleCTM(currentContext, 1.0, -1.0);

    CGPDFPageRef page = CGPDFDocumentGetPage(document, 1); // grab page 1 of the PDF
    CGContextDrawPDFPage(currentContext, page);            // draw page 1 into the graphics context

    // flip context so annotations are right way up
    CGContextScaleCTM(currentContext, 1.0, -1.0);
    CGContextTranslateCTM(currentContext, 0, -paperSize.size.height);

    [@"Example annotation" drawInRect:CGRectMake(100.0, 100.0, 200.0, 40.0)
                             withFont:[UIFont systemFontOfSize:18.0]];

    UIGraphicsEndPDFContext();
    CGPDFDocumentRelease(document);
}
I am working on this stuff and have created a GitHub project. Please check it out here.
I don't think PDFAnnotation or PDFKit were ported to iPhone from the desktop... probably a great excuse to file a radar. However, Haru may get you by in the meantime.
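If you go the libHaru route, a minimal sketch of adding a true PDF text annotation (a sticky-note style comment) might look like this; the output path is a placeholder and error handling is reduced to a single NULL check:
// Create a one-page document with a text annotation using libHaru.
#include "hpdf.h"

void addAnnotationExample(const char *outputPath)
{
    HPDF_Doc pdf = HPDF_New(NULL, NULL);   // NULL error handler: check return values instead
    if (!pdf) return;

    HPDF_Page page = HPDF_AddPage(pdf);
    HPDF_Page_SetSize(page, HPDF_PAGE_SIZE_LETTER, HPDF_PAGE_PORTRAIT);

    // Annotation rectangle in PDF coordinates (origin at bottom-left of the page).
    HPDF_Rect rect = {100, 650, 150, 700};
    HPDF_Page_CreateTextAnnot(page, rect, "Example annotation", NULL);

    HPDF_SaveToFile(pdf, outputPath);
    HPDF_Free(pdf);
}
Unlike the drawing-on-top approach above, this produces an annotation object that PDF readers can show, hide, or edit.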