I was wondering whether the following scenario is possible: let's say a user draws their own drawing (including text and lines) on top of another image; could the result be saved and then uploaded?
Thanks in advance!
Yes, this is perfectly possible. You will want to look into creating your own UIView subclass for drawing, and you can save the resulting image using Core Graphics. To upload the image to a server of your choice you could use the NSMutableURLRequest and NSURLConnection classes. If you are a novice developer the first part of this might prove challenging, but it can be done.
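A rough sketch of what that flow could look like (the container view, the endpoint URL, and the lack of error handling are placeholders, not a finished implementation):

// Flatten a container view (photo plus drawing overlay) into a UIImage,
// then POST the PNG bytes. The URL is a placeholder for your own server.
#import <QuartzCore/QuartzCore.h>

- (void)saveAndUploadCompositeFromView:(UIView *)containerView
{
    // Render the whole view hierarchy into an image context.
    UIGraphicsBeginImageContextWithOptions(containerView.bounds.size, NO, 0.0);
    [containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *composite = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSData *pngData = UIImagePNGRepresentation(composite);

    // Build a simple POST request and send it asynchronously.
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
        [NSURL URLWithString:@"http://example.com/upload"]];
    [request setHTTPMethod:@"POST"];
    [request setValue:@"image/png" forHTTPHeaderField:@"Content-Type"];
    [request setHTTPBody:pngData];

    [NSURLConnection sendAsynchronousRequest:request
                                       queue:[NSOperationQueue mainQueue]
                           completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
        // Inspect the response or error here.
    }];
}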
For every developer, the day arrives to improve the user interface experience, because apps are judged largely on how carefully the UI is done.
So I took a look around the web and found some PSDs to use as a starting point for designing my apps.
My question is: how do I transform a PSD prototype into a well-working app?
I don't understand how a mockup can help a developer build a UI...
Can someone clarify the situation for me?
Well, I'd be careful to make a distinction between the graphics an app uses and the actual User Interface. Certainly the graphics are part of the UI, but the UI is so much more than that. Depending on how they are done, Photoshop mock-ups can range from simple graphics you can use for your interface to complex 'scenes' describing how the app functions. In the latter case, the mock-up can be useful for UI design; in the former case it just gives you pretty images to use (which can certainly be useful).
But to more directly answer your question, most people take 'slices' (individual pieces) of the Photoshop image and export them as .png (or .jpg) images. If the .psd file doesn't already have the images 'sliced', look up 'photoshop image slicing' on Google. You can then import them into Xcode and use them as background images for the controls you want to use; especially since iOS 5.0, images can be used for a lot of controls. Also, you'll probably want to make the image resizable with proper UIEdgeInsets. This will allow the image to resize without pixelation by defining an area that can be tiled within the image.
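For example (a minimal sketch; the file name, the insets and myButton are placeholders for your own sliced artwork and control):

// Use a sliced PNG as a stretchable button background (iOS 5+).
UIImage *background = [UIImage imageNamed:@"button_bg.png"];

// The edges given by the insets stay fixed; the middle area is tiled.
UIImage *stretchable = [background resizableImageWithCapInsets:
    UIEdgeInsetsMake(10.0, 12.0, 10.0, 12.0)];

[myButton setBackgroundImage:stretchable forState:UIControlStateNormal];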
My app will let users cut out things from photos. They'll be able to either select a photo already in their iPhone's photo library, or take a new one with the camera. From what I understand, UIImagePicker is the simplest way to accomplish picking a photo from the library or taking a new one. However, I also understand that it only provides basic image editing (zoom, crop). I want my image editing to allow for the creation of Bezier curves that, once all joined together, will cut out the enclosed area, saving it without the surrounding background.
The official Apple documentation on UIImagePicker suggested that the AV Framework is required for providing custom image editing, as opposed to the basic zoom and crop. So my first questions are:
Is the AV Framework indeed what I want to use?
Will it get used in conjunction with UIImagePicker (i.e., UIImagePicker is used to select the photo or take a new one, and then my AV Framework code takes over for the image editing)?
Can anyone offer good resources on getting started on learning the code for this process?
My final question is about the actual Bezier curve generation/manipulation. It appears that the Core Graphics Framework has support for this, but there is also the UIBezierPath object, which is apparently some kind of wrapper for the Core Graphics tools I would otherwise use.
So my final question: will I want to use the UIBezierPath object, or does what I previously described require more fine-grained control that UIBezierPath can't provide, thereby forcing me to use the Core Graphics framework directly?
Thanks!
The AV Foundation framework allows you to talk to the camera, to configure it in various ways, and to receive a live feed from it. So it's good for taking new pictures or movies, but not for selecting them from the camera roll or for editing them. You'd likely want to use AV Foundation to replace the image capture duties that UIImagePicker supplies. Probably you'll want to use a UIImagePicker with allowsEditing set to NO so that you can provide your own entirely separate editing interface.
No, it's a different sort of task.
I'm unaware of any tutorials on this sort of thing, but the docs are pretty good. I've posted complete code for capturing a live feed from the camera in answers like this one; I'm not sure if that's a more helpful way to see how some of the AV Foundation classes can be chained together.
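The basic shape of it is something like this (a rough sketch with error handling omitted; in real code, hold on to the session or preview layer in a property):

#import <AVFoundation/AVFoundation.h>

- (void)startCameraPreviewInView:(UIView *)hostView
{
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetPhoto;

    // Use the default camera as the session's input.
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input) [session addInput:input];

    // Show the live feed inside the host view.
    AVCaptureVideoPreviewLayer *preview = [AVCaptureVideoPreviewLayer layerWithSession:session];
    preview.frame = hostView.bounds;
    [hostView.layer addSublayer:preview];

    [session startRunning];
}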
What you'll probably end up doing in order to edit an image is starting with a UIImage, creating a CoreGraphics bitmap context (which is something you can draw to), doing some sort of compositing to that and then converting the result into an image and saving it back out to the camera roll.
UIBezierPath is a wrapper over the Core Graphics stuff, but will probably do what you want. addClip can set a defined path to be the new clipping path on the current context, or you can use the CGPath property if you need to go a bit further afield than UIKit's idea of a current context.
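Putting those last two points together, a minimal sketch of the cut-out step could look like this (it assumes the path is closed and expressed in the image's coordinate space):

// Cut a region out of a photo along a closed UIBezierPath and save the
// result, with a transparent background, to the camera roll.
- (void)saveCutoutFromImage:(UIImage *)photo alongPath:(UIBezierPath *)closedPath
{
    // Transparent bitmap context the same size (and scale) as the photo.
    UIGraphicsBeginImageContextWithOptions(photo.size, NO, photo.scale);

    // Everything drawn after addClip is confined to the path's interior.
    [closedPath addClip];
    [photo drawInRect:CGRectMake(0.0, 0.0, photo.size.width, photo.size.height)];

    UIImage *cutout = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Write the cut-out back to the camera roll.
    UIImageWriteToSavedPhotosAlbum(cutout, nil, NULL, NULL);
}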
Look for the iPhone cookbook; maybe kickasstorrents still has it.
C07 has everything you need: camera, overlay, loading, picking, editing, snapping, hiking camera, saving documents, sending images, image scroller, thumbnails, masking, etc.
I am building an app with several UIViews which are generated dynamically, based on user inputs. These UIViews may contain labels, images and text. They take some time to generate so I would like the user to be able to load them up quickly on future launches of the app without having to redraw them again. One requirement is that they need to keep their interactive state so the user can continue to edit them.
I looked into NSKeyedArchiver, but it doesn't seem to support UIImage. Also, I can't just save them as PNGs, since I would like to retain their interactive state.
Is there any way to do this?
You should consider keeping the model of your data separate from the interface. You can then use this stored model to generate the interface. I know you specifically said that you don't want to do this. However, any built in method is going to have to rebuild the UIViews in exactly the same way.
If the processing of the model data is the issue, try to come up with a way to efficiently represent the state of the interface so that you don't have to start from scratch. However, that will be a lot more work.
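As a rough sketch of that model-first idea (CardModel and its properties are hypothetical; store whatever you need to rebuild each view and keep editing it), you could archive an array of objects like this with NSKeyedArchiver:

#import <UIKit/UIKit.h>

// Hypothetical model object; the view is rebuilt from this on the next launch.
@interface CardModel : NSObject <NSCoding>
@property (nonatomic, copy)   NSString *text;
@property (nonatomic, assign) CGRect    frame;
@property (nonatomic, retain) NSData   *imageData;   // PNG bytes instead of a raw UIImage
@end

@implementation CardModel

- (void)encodeWithCoder:(NSCoder *)coder
{
    [coder encodeObject:self.text forKey:@"text"];
    [coder encodeCGRect:self.frame forKey:@"frame"];
    [coder encodeObject:self.imageData forKey:@"imageData"];
}

- (id)initWithCoder:(NSCoder *)coder
{
    if ((self = [super init])) {
        self.text      = [coder decodeObjectForKey:@"text"];
        self.frame     = [coder decodeCGRectForKey:@"frame"];
        self.imageData = [coder decodeObjectForKey:@"imageData"];
    }
    return self;
}

@end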
Is UIWebView the only way to open a PDF document on the iPhone? How can I interact with the document, e.g. getting the current page number?
You should read the Apple documentation about drawing with Quartz 2D. There's a section just about handling PDF documents.
If you don't want to show the PDF in a UIWebView, you can create your own view. Just subclass UIView and override the method -(void)drawRect:(CGRect)rect for your custom drawing. Get a CGContextRef and draw your PDF directly into that context using the PDF-specific Core Graphics functions. Core Graphics provides a lot of other functions for PDF documents, like Kalle already mentioned.
If you're not really used to Core Graphics, it can be difficult at first and you'll probably need a lot of time to get used to it, so I recommend using UIWebView to display a PDF. It's relatively simple.
It's not the only way. There's a whole toolkit built in which lets you render PDF pages to a UIView. Check out:
CGPDFDocumentCreateWithURL
CGPDFDocumentGetNumberOfPages
CGPDFDocumentGetPage
CGContextDrawPDFPage
to name a few (core graphics) functions that relate to PDF rendering.
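For example, a minimal sketch of how they fit together inside a UIView subclass's drawRect: (the bundled file name is a placeholder, and note that PDF coordinates are flipped relative to UIKit):

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    NSURL *pdfURL = [[NSBundle mainBundle] URLForResource:@"document" withExtension:@"pdf"];
    CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((__bridge CFURLRef)pdfURL);

    if (CGPDFDocumentGetNumberOfPages(document) > 0) {
        CGPDFPageRef page = CGPDFDocumentGetPage(document, 1);   // pages are 1-based

        // Flip the context, since PDF space is flipped relative to UIKit.
        CGContextSaveGState(context);
        CGContextTranslateCTM(context, 0.0, self.bounds.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextDrawPDFPage(context, page);
        CGContextRestoreGState(context);
    }
    CGPDFDocumentRelease(document);
}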
Besides UIWebView, if you want a high-level interaction class, check out UIDocumentInteractionController or QLPreviewController.
Depending on your needs, check out the accepted answer to this question.
Check Apple's ZoomingPDFViewer for a simple example: http://developer.apple.com/library/ios/#samplecode/ZoomingPDFViewer/Introduction/Intro.html
What is the best way to use the custom UI graphics on the iPhone?
I have come across CGContextDrawPDFPage and Panic's Shrinkit. Should I be using PDFs to store my vector UI graphics and loading them using CGContextDrawPDFPage to draw them?
Previously I asked how Apple stores their UI graphics and the answer was crushed PNG. The options I can think of are below, but I'm also interested in any other techniques...
PNG (bitmapped image)
Custom UIView drawing code (generated from Opacity)
PDF (I've not used this method, is it with CGContextDrawPDFPage?)
This question is for vector graphics only (but I guess some people may only use bitmapped?). Looking for what is standard / most effective / most efficient.
Edit: Bounty added, I'm interested to hear the process of anyone who works with UI designers, or are themselves a UI designer. And pointers on resolution independence i.e. for iPad / iPhone HD future proofing.
Many thanks
Ross
I can suggest 3 different ways, 2 of which you already mentioned:
Creating custom UIView.
Drawing in a CGLayer.
Loading from PDF.
Each have their advantages, depending on what you want to do:
UIView vs CGLayer
In terms of performance (for one-time drawing) and ease of use there shouldn't be much difference between the two (there are minor differences, but nothing serious). Apparently Opacity can export source code for both (I haven't personally used it). That said, there are things you should consider before choosing:
If you have a fixed image (as your question suggests), use CGLayer. CGLayer objects are cached on the graphics device, so re-using them is much faster. Even if the cache is cleared, you're still using the same object for redrawing, meaning there's no need to re-create it.
On the other hand, if you need to change your drawing as the user interacts with the app, UIView could be faster, as you have the flexibility of updating just one part of the image instead of the whole view.
CGLayer is independent of the UI. So the same code works fine for Mac/iPhone/iPad, or even for saving to files.
Conclusion: Use CGLayer, unless it's a special case.
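A rough sketch of the CGLayer pattern (assuming the view keeps the layer in a CGLayerRef property called iconLayer; the red circle just stands in for the real drawing code):

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    if (self.iconLayer == NULL) {
        // Draw once into the layer; its contents are cached by the graphics system.
        self.iconLayer = CGLayerCreateWithContext(context, CGSizeMake(32.0, 32.0), NULL);
        CGContextRef layerContext = CGLayerGetContext(self.iconLayer);
        CGContextSetFillColorWithColor(layerContext, [UIColor redColor].CGColor);
        CGContextFillEllipseInRect(layerContext, CGRectMake(0.0, 0.0, 32.0, 32.0));
    }

    // Stamping the cached layer is cheap compared to repeating the drawing code.
    CGContextDrawLayerAtPoint(context, CGPointMake(10.0, 10.0), self.iconLayer);
    CGContextDrawLayerAtPoint(context, CGPointMake(60.0, 10.0), self.iconLayer);
}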
CGLayer (In code drawing) vs PDF (loading from file)
I don't have any benchmarks for this, but I expect CGLayer to be slightly faster: (1) there's no need to read a file; (2) the PDF commands have to be converted into the system's graphics elements, which is more or less the same work as creating a CGLayer; (3) I'm not sure how PDF pages are cached, but I don't expect it to be faster than CGLayer. Anyway, none of this should make much difference unless you want to optimize down to the last millisecond. Again, the choice should be based on your use case:
CGLayer gives you more flexibility in the code. Your only access to a PDF page is through CGContextDrawPDFPage, which means even simple tasks such as scaling/transforming the drawing will be harder.
Using PDFs, on the other hand, is more flexible after the code is finished. You can simply update the PDF file with a new one whenever you want, load it from the web, etc.
Creating a PDF could be easier than coding the drawing. You can use any app you want, and you don't need to worry about the API and system resources. After all, the code can output a PDF file, not the other way around.
Conclusion: If you don't need to do much with the drawing (you just want to show an icon or something), go with the PDF. If you need to work on it in the app, consider CGLayer.
Of course you could always mix the approaches as you see fit: e.g. load a PDF file, draw it into a CGLayer to adjust it, then draw that with a UIView where you can put a badge on it!
I stumbled over this question because I have a question about PDFs too. I'm working together with a UI designer and we are successfully using PDFs to create UI elements. For example: for a button we have 3 PDFs for ON, OFF and the shadow. I wrote a piece of code that transforms a PDF into a UIImage. It can scale the resulting image and even colorize it to have one template for many styles of buttons. It works pretty well :)
Our problem is that we can't scale up the vector graphics without quality loss. That's why we decided to use graphics that are big enough so that we only ever have to scale them down. But I still wonder if there's a way to scale up a PDF before drawing it to a context and creating a UIImage. Here's my post.
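A simplified sketch of that kind of PDF-to-UIImage conversion, for reference (the file name, page number and scale factor are placeholders, and error handling is omitted):

// Render one page of a bundled PDF into a UIImage, scaled by "scale".
- (UIImage *)imageFromPDFNamed:(NSString *)name page:(size_t)pageNumber scale:(CGFloat)scale
{
    NSURL *url = [[NSBundle mainBundle] URLForResource:name withExtension:@"pdf"];
    CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((__bridge CFURLRef)url);
    CGPDFPageRef page = CGPDFDocumentGetPage(document, pageNumber);

    // Size the bitmap context from the page's media box, multiplied by the scale.
    CGRect box = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);
    CGSize size = CGSizeMake(box.size.width * scale, box.size.height * scale);

    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Flip the context (PDF space is flipped relative to UIKit) and apply the scale.
    CGContextTranslateCTM(context, 0.0, size.height);
    CGContextScaleCTM(context, scale, -scale);
    CGContextDrawPDFPage(context, page);

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGPDFDocumentRelease(document);
    return image;
}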