I am working on a PDF reader application. I am using CALayer to render the PDF contents. Currently one PDF page is rendered at a time and displayed in the visible view. I want to buffer a few pages (say, one previous page and one next page) in advance while the user is reading the current page. Can anyone suggest a good way of achieving this buffering mechanism? Thanks in advance.
You can take a look at this open source PDF viewer for iOS, it implements the features you asked about: http://www.vfr.org/2011/09/pdf-reader-viewer-v2-2/
If you want to draw some content in the background, you can look into using the Grand Central Dispatch API and drawing with Core Graphics. You will need to be careful about thread safety, such as checking or waiting for the background drawing to finish before trying to push the results to the display.
I found quite a useful post, Image manipulation and drawing using Quartz in background threads, on ensuring that you only use thread-safe calls to create your drawing context. The example creates a bitmap context, which is also what you want here: render each page into its own bitmap context with CGContextDrawPDFPage. (CGPDFContextCreate is for writing new PDF files rather than rendering existing ones, so it isn't what you need for display.)
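For illustration, a minimal sketch of that pattern (assuming you keep the CGPDFDocumentRef alive elsewhere and decide the caching policy, e.g. current page ± 1, yourself; the method name is a placeholder, not from the linked post):

    // Render one PDF page to a bitmap on a background queue, then hand the
    // image back to the main thread (e.g. to set it as a layer's contents).
    - (void)renderPageNumber:(size_t)pageNumber
                  ofDocument:(CGPDFDocumentRef)document
                        size:(CGSize)size
                  completion:(void (^)(UIImage *image))completion
    {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            CGContextRef context = CGBitmapContextCreate(NULL, size.width, size.height,
                                                         8, 0, colorSpace,
                                                         kCGImageAlphaPremultipliedLast);
            CGColorSpaceRelease(colorSpace);

            // White background, then scale the page to fit the requested size.
            CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
            CGContextFillRect(context, CGRectMake(0, 0, size.width, size.height));

            CGPDFPageRef page = CGPDFDocumentGetPage(document, pageNumber);   // pages are 1-indexed
            CGContextConcatCTM(context, CGPDFPageGetDrawingTransform(page, kCGPDFMediaBox,
                                          CGRectMake(0, 0, size.width, size.height), 0, true));
            CGContextDrawPDFPage(context, page);

            CGImageRef cgImage = CGBitmapContextCreateImage(context);
            CGContextRelease(context);
            UIImage *image = [UIImage imageWithCGImage:cgImage];
            CGImageRelease(cgImage);

            dispatch_async(dispatch_get_main_queue(), ^{
                completion(image);   // push the prefetched page to your visible/buffered layer
            });
        });
    }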
My app will let users cut out things from photos. They'll be able to either select a photo already in their iPhone's photo library, or take a new one with the camera. From what I understand, UIImagePicker is the simplest way to accomplish picking a photo from the library or taking a new one. However, I also understand that it only provides basic image editing (zoom, crop). I want my image editing to allow for the creation of Bezier curves that, once all joined together, will cut out the enclosed area, saving it without the surrounding background.
The official Apple documentation on UIImagePicker suggested that the AV Framework is required for providing custom image editing, as opposed to the basic zoom and crop. So my first questions are:
Is the AV Framework indeed what I want to use?
Will it get used in conjunction with UIImagePicker (i.e., UIImagePicker is used to select the photo or take a new one, and then my AV Framework code takes over for the image editing)?
Can anyone offer good resources on getting started on learning the code for this process?
My final question is about the actual Bezier curve generation/manipulation. It appears that the Core Graphics Framework has support for this, but there is also the UIBezierPath object, which is apparently some kind of wrapper for the Core Graphics tools I would otherwise use.
So my final question: will I want to use the UIBezierPath object, or does what I previously described require more fine-grained control that UIBezierPath can't provide, thereby forcing me to use the Core Graphics framework directly?
Thanks!
The AV Foundation framework allows you to talk to the camera, to configure it in various ways, and to receive a live feed from it. So it's good for taking new pictures or movies, but not for selecting them from the camera roll or for editing them. You could use AV Foundation to replace the image capture duties that UIImagePicker supplies, but either way you'll probably want a UIImagePicker with allowsEditing set to NO so that you can provide your own entirely separate editing interface.
No, it's a different sort of task.
I'm unaware of any tutorials on this sort of thing, but the docs are pretty good. I've posted the full code for capturing a live feed from the camera in answers like this one; that may be a helpful way to see how some of the AV Foundation classes chain together.
What you'll probably end up doing in order to edit an image is starting with a UIImage, creating a Core Graphics bitmap context (which is something you can draw to), doing some sort of compositing into that, then converting the result back into an image and saving it out to the camera roll.
UIBezierPath is a wrapper over the Core Graphics stuff, but will probably do what you want. addClip can set a defined path to be the new clipping path on the current context, or you can use the CGPath property if you need to go a bit further afield than UIKit's idea of a current context.
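To make the clip-and-composite step concrete, here is a rough sketch (the function name is made up; it assumes you already have a closed UIBezierPath, in the image's coordinate space, from your editing UI):

    UIImage *MaskImageWithPath(UIImage *source, UIBezierPath *path)
    {
        UIGraphicsBeginImageContextWithOptions(source.size, NO, source.scale);
        [path addClip];                         // everything outside the path is discarded
        [source drawAtPoint:CGPointZero];       // composite the photo through the clip
        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result;                          // save with UIImageWriteToSavedPhotosAlbum() if desired
    }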
Look for The iPhone Developer's Cookbook; you may still be able to find a copy online.
Chapter 7 has everything you need: camera, overlay, loading, picking, editing, snapping, saving documents, sending images, an image scroller, thumbnails, masking, and more.
I am looking to create thumbnails for a number of different document types (mp4, pdf, png, and ppt). I have seen different methods for doing them individually, e.g. MPMoviePlayerController's requestThumbnailImagesAtTimes: method, or rendering the current layer into the context from UIGraphicsGetCurrentContext (effectively a screenshot).
Is there a better way to get thumbnails of these files?
What is the preferred method of getting thumbnails of these items? A different method for each?
As far as I'm aware, there isn't a generic way of doing this. I'd love to be proven wrong though.
I have the same requirement in an app I'm currently working on, and wrote the thumbnail generator yesterday. The approach I took was to pass a path to the file and a completion handler block through to the thumbnail generator object.
The thumbnail generator has an NSOperationQueue that spawns the thumbnail generation process in a background thread and immediately returns a placeholder thumbnail.
When the thumbnail is generated, the thumbnail generator calls the completion handler on the main thread. You can do this with an NSInvocation object, or simply by dispatching back to the main queue.
Doing it synchronously results in a noticeable delay if you have more than a couple of thumbnails to generate. Using the placeholder+completion handler block approach means that the UI remains responsive.
It's important to call the completion handler block on the main thread because it will almost certainly be updating your views, which should only ever be done on the main thread. If you don't do this, you'll get some very strange errors, such as scroll views not showing their contents until you scroll them.
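A minimal sketch of that placeholder-plus-completion-handler pattern (the queue property and the generateThumbnailForFileAtPath: helper are illustrative, not the actual code from this answer):

    typedef void (^ThumbnailCompletion)(UIImage *thumbnail);

    - (UIImage *)thumbnailForFileAtPath:(NSString *)path completion:(ThumbnailCompletion)completion
    {
        [self.thumbnailQueue addOperationWithBlock:^{        // self.thumbnailQueue is an NSOperationQueue
            UIImage *thumbnail = [self generateThumbnailForFileAtPath:path];  // slow, type-specific work
            [[NSOperationQueue mainQueue] addOperationWithBlock:^{
                completion(thumbnail);                        // update views on the main thread only
            }];
        }];
        return [UIImage imageNamed:@"placeholder"];           // shown until the real thumbnail arrives
    }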
You shouldn't really need to use MPMoviePlayerController to get thumbnails of videos, though; AVAssetImageGenerator is the "Apple-approved" way of doing this, and there's an example in the AV Foundation Programming Guide.
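For the video case, a hedged example of the AVAssetImageGenerator route (assuming videoURL is a file URL for the mp4 and a frame near the one-second mark is an acceptable thumbnail):

    #import <AVFoundation/AVFoundation.h>

    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoURL options:nil];
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    generator.appliesPreferredTrackTransform = YES;          // respect the video's rotation

    NSError *error = nil;
    CGImageRef cgImage = [generator copyCGImageAtTime:CMTimeMakeWithSeconds(1.0, 600)
                                           actualTime:NULL
                                                error:&error];
    UIImage *thumbnail = cgImage ? [UIImage imageWithCGImage:cgImage] : nil;
    if (cgImage) CGImageRelease(cgImage);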
Is UIWebView the only way to open a PDF document on the iPhone? How can I interact with the document, e.g. get the current page number?
You should read the Apple documentation about drawing with Quartz 2D. There's a section just about handling PDF documents.
If you don't want to show the PDF in a UIWebView, you can create your own view. Just subclass UIView and override -(void)drawRect:(CGRect)rect to do your custom drawing. Get the CGContextRef and draw your PDF directly into that context using the PDF-specific Core Graphics functions. Core Graphics provides a lot of other functions for PDF documents, as Kalle already mentioned.
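As a rough sketch of that approach (self.document and self.pageNumber are placeholder properties you would add to your subclass; zooming, caching, etc. omitted):

    - (void)drawRect:(CGRect)rect
    {
        CGContextRef context = UIGraphicsGetCurrentContext();

        // Quartz/PDF and UIKit use flipped coordinate systems, so flip before drawing the page.
        CGContextTranslateCTM(context, 0.0, self.bounds.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);

        CGPDFPageRef page = CGPDFDocumentGetPage(self.document, self.pageNumber);
        CGContextConcatCTM(context, CGPDFPageGetDrawingTransform(page, kCGPDFMediaBox,
                                                                 self.bounds, 0, true));
        CGContextDrawPDFPage(context, page);
    }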
If you're not really used to Core Graphics, it can be difficult at times and you'll probably need a lot of time to get used to it, so I recommend using UIWebView to display a PDF. It's relatively simple.
It's not the only way. There's a whole toolkit built in which lets you render PDF pages to a UIView. Check out:
CGPDFDocumentCreateWithURL
CGPDFDocumentGetNumberOfPages
CGPDFDocumentGetPage
CGContextDrawPDFPage
to name a few of the Core Graphics functions that relate to PDF rendering.
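A small example of how those pieces fit together (the bundled file name is just an illustration; error handling omitted):

    NSURL *pdfURL = [[NSBundle mainBundle] URLForResource:@"manual" withExtension:@"pdf"];
    CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((__bridge CFURLRef)pdfURL);
    size_t pageCount = CGPDFDocumentGetNumberOfPages(document);
    CGPDFPageRef firstPage = CGPDFDocumentGetPage(document, 1);   // pages are 1-indexed
    // ... draw firstPage with CGContextDrawPDFPage in your view's drawRect: ...
    CGPDFDocumentRelease(document);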
Besides UIWebView, if you want a high-level interaction class, check out UIDocumentInteractionController or QLPreviewController.
Depending on your needs, check out the accepted answer to this question.
Check Apple's ZoomingPDFViewer for a simple example: http://developer.apple.com/library/ios/#samplecode/ZoomingPDFViewer/Introduction/Intro.html
Does anyone know how to use Core Graphics to draw a PDF like iBooks does? I can already draw a PDF page using Core Graphics, but I'm curious how iBooks shows a lower-quality view of each page so it loads fast, and then, when you stay on a page longer, renders it at full quality. This lets it open the PDF without making the user wait, unlike most magazine apps you see on the iPad. Any ideas would help!
Apple have some "ZoomingPDFViewer" sample code:
http://developer.apple.com/library/ios/#samplecode/ZoomingPDFViewer/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010281
I suspect that might give you some good ideas :-)
I assume they use multiple layers: the first layer loads the PDF at low resolution while the higher-resolution version is prepared in the background. When ready, the layers are swapped.
Have a look at CGPDFDocumentRef and CATiledLayer in the documentation.
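A sketch of the CATiledLayer idea, loosely following the pattern in Apple's sample (self.page is a placeholder CGPDFPageRef property; tile sizes and detail levels are arbitrary):

    @implementation TiledPDFView   // a UIView subclass

    + (Class)layerClass
    {
        return [CATiledLayer class];   // back the view with a tiling layer
    }

    - (id)initWithFrame:(CGRect)frame
    {
        if ((self = [super initWithFrame:frame])) {
            CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
            tiledLayer.levelsOfDetail = 4;       // how many downsampled levels are kept
            tiledLayer.levelsOfDetailBias = 3;   // extra levels of detail when zoomed in
            tiledLayer.tileSize = CGSizeMake(512.0, 512.0);
        }
        return self;
    }

    // Intentionally empty: the per-tile drawing is done in drawLayer:inContext: below.
    - (void)drawRect:(CGRect)rect
    {
    }

    // CATiledLayer calls this on background threads, one tile at a time, starting
    // with coarse tiles and refining them, which gives the iBooks-style effect.
    - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context
    {
        CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
        CGContextFillRect(context, self.bounds);

        // Flip into PDF coordinates and fit the page to the view.
        CGContextTranslateCTM(context, 0.0, self.bounds.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextConcatCTM(context, CGPDFPageGetDrawingTransform(self.page, kCGPDFMediaBox,
                                                                 self.bounds, 0, true));
        CGContextDrawPDFPage(context, self.page);
    }

    @end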
What is the best way to use custom UI graphics on the iPhone?
I have come across CGContextDrawPDFPage and Panic's Shrinkit. Should I be using PDFs to store my vector UI graphics and loading them with CGContextDrawPDFPage to draw them?
Previously I asked how Apple stores their UI graphics, and the answer was crushed PNGs. The options I can think of are below, but I'm also interested in any other techniques...
PNG (bitmapped image)
Custom UIView drawing code (generated from Opacity)
PDF (I've not used this method, is it with CGContextDrawPDFPage?)
This question is for vector graphics only (but I guess some people may only use bitmapped images?). I'm looking for what is standard / most effective / most efficient.
Edit: Bounty added. I'm interested to hear the process of anyone who works with UI designers, or is themselves a UI designer. Also, pointers on resolution independence, i.e. for iPad / iPhone HD future-proofing.
Many thanks
Ross
I can suggest 3 different ways, 2 of which you already mentioned:
Creating a custom UIView.
Drawing in a CGLayer.
Loading from a PDF.
Each have their advantages, depending on what you want to do:
UIView vs CGLayer
In terms of performance (for one-time drawing) and ease of use there shouldn't be much difference between the two (there are minor differences, but nothing serious). Apparently Opacity can export source code for both (I haven't personally used it). That said, there are things you should consider before choosing:
If you have a fixed image (as your question suggests), use CGLayer. CGLayer objects will be cached on the graphics device, so re-using them is much faster (see the sketch after this section). Even if the cache is cleared, you're still using the same object for redrawing, meaning there's no need for re-creating it.
On the other hand, if you need to change your drawing as the user interacts with the app, UIView could be faster, as you have the flexibility of updating just one part of the image instead of the whole view.
CGLayer is independent of the UI. So the same code works fine for Mac/iPhone/iPad, or even for saving to files.
Conclusion: Use CGLayer, unless it's a special case.
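As a small illustration of the CGLayer reuse point above (the iconLayer property and the drawing itself are placeholders):

    // Assumes a placeholder property: @property (nonatomic) CGLayerRef iconLayer;
    - (void)drawRect:(CGRect)rect
    {
        CGContextRef context = UIGraphicsGetCurrentContext();

        if (self.iconLayer == NULL) {
            // Draw the artwork into the layer once; it is then cached for later use.
            self.iconLayer = CGLayerCreateWithContext(context, CGSizeMake(64.0, 64.0), NULL);
            CGContextRef layerContext = CGLayerGetContext(self.iconLayer);
            CGContextSetFillColorWithColor(layerContext, [UIColor redColor].CGColor);
            CGContextFillEllipseInRect(layerContext, CGRectMake(0.0, 0.0, 64.0, 64.0)); // your real drawing here
        }

        // Stamping the cached layer is much cheaper than re-issuing the drawing commands.
        CGContextDrawLayerAtPoint(context, CGPointMake(10.0, 10.0), self.iconLayer);
        CGContextDrawLayerAtPoint(context, CGPointMake(90.0, 10.0), self.iconLayer);
    }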
CGLayer (In code drawing) vs PDF (loading from file)
I don't have any benchmarks for this, but I expect CGLayer to be slightly faster: (1) there's no need to read a file; (2) the PDF commands have to be converted to the system's graphic elements, which is more or less the same work as creating a CGLayer; (3) I'm not sure how PDF pages are cached, but I don't expect it to be faster than CGLayer. Anyway, none of this should make much difference unless you want to optimize down to the last millisecond. Again, the choice should be based on your use case:
CGLayer gives you more flexibility in the code. Your only access to a PDF page is through CGContextDrawPDFPage, which means even simple tasks such as scaling/transforming the drawing will be harder.
Using PDFs, on the other hand, is more flexible after the code is finished. You can simply replace the PDF file with a new one whenever you want, load it from the web, etc.
Creating a PDF can be easier than coding the drawing. You can use any app you want, and you don't need to worry about the API and system resources. After all, drawing code can output a PDF file, but not the other way around.
Conclusion: If you don't need to do much with the drawing (you just want to show an icon or something), go with the PDF. If you need to work on it in the app, consider CGLayer.
Of course you could always mix the approaches as you see fit: e.g. Load a pdf file, put it in a CGLayer to adjust it, draw it with a UIView where you can put a badge on it!
I stumbled over this question because I have a question about PDFs too. I'm working together with a UI designer and we are successfully using PDFs to create UI elements. For example: for a button we have 3 PDFs for ON, OFF and the shadow. I wrote a piece of code that transforms a PDF into a UIImage. It can scale the resulting image and even colorize it, so we have one template for many styles of buttons. It works pretty well :)
Our problem is that we can't scale up the rendered graphics without quality loss. That's why we decided to use graphics that are big enough so we only ever have to scale them down. But I still wonder if there's a way to scale up a PDF before drawing it to a context and creating a UIImage. Here's my post.
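For what it's worth, one way to do this is to scale the context to the target size before drawing the page, so the vector data is rasterized at that size rather than scaled up afterwards. A rough sketch (the function name is illustrative; it assumes a single-page asset PDF):

    UIImage *ImageFromPDF(NSURL *pdfURL, CGSize targetSize)
    {
        CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((__bridge CFURLRef)pdfURL);
        CGPDFPageRef page = CGPDFDocumentGetPage(document, 1);
        CGRect mediaBox = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);

        UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);   // 0.0 = screen scale
        CGContextRef context = UIGraphicsGetCurrentContext();

        // Flip into PDF coordinates and scale the page up (or down) to the target size.
        CGContextTranslateCTM(context, 0.0, targetSize.height);
        CGContextScaleCTM(context, targetSize.width / mediaBox.size.width,
                          -targetSize.height / mediaBox.size.height);
        CGContextTranslateCTM(context, -mediaBox.origin.x, -mediaBox.origin.y);
        CGContextDrawPDFPage(context, page);

        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        CGPDFDocumentRelease(document);
        return image;
    }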