Image filters for iPhone SDK development - iPhone

I am planning to develop an iPhone app which makes use of image filters like blurring, sharpening, etc. I noticed that there are a few approaches for this:

1. Use OpenGL ES. I even found example code on the Apple iPhone dev site. How easy is OpenGL for somebody who has never used it? Can image filters be implemented using the OpenGL framework?
2. There is a Quartz demo posted on the Apple iPhone dev site as well. Has anybody used this framework for doing image processing? How does this approach compare to the OpenGL framework?
3. Don't use the OpenGL or Quartz frameworks at all: access the raw pixels of the image and do the manipulation myself.
4. Make use of a custom-built image processing library like this one. Do you know of any other libraries like this one?

Can anybody provide insights/suggestions on which option is the best? Your opinions are highly appreciated. Thanks!

Here is another alternative for image filtering. They provide lots of filters using the Core Image framework.
http://www.binpress.com/app/photo-effects-sdk-for-ios/801

Quartz doesn't have access to Core Image on iPhone OS yet, so you can't use the Core Image filters the way you do on Mac OS X.
I would go with a dedicated library. There's a lot of overhead in OpenGL ES that you don't want to mess with if you're not using it for anything else.

If your app supports iOS 6, then use Core Graphics and Core Image. Core Image contains many filters, and you can combine them to build composite effects (see the sketch below).
If you are not on iOS 6, then you can use the GPUImage framework or ImageMagick.
The last option is to manipulate the pixel values yourself, but that requires implementing the filter algorithms on your own.
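As a concrete illustration of the Core Image route, here is a minimal sketch that applies one built-in filter to a UIImage; the function name ApplySepiaFilter and the 0.8 intensity are illustrative choices, not anything prescribed by the framework.

```objc
#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>

// Minimal Core Image usage (iOS 5+): wrap the UIImage, run one filter,
// and render the result back into a UIImage.
UIImage *ApplySepiaFilter(UIImage *input) {
    CIImage *ciInput = [CIImage imageWithCGImage:input.CGImage];

    // Look up a built-in filter by name and set its parameters.
    CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
    [sepia setValue:ciInput forKey:kCIInputImageKey];
    [sepia setValue:[NSNumber numberWithFloat:0.8f] forKey:kCIInputIntensityKey];

    // Render the filter's output through a Core Image context.
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgOutput = [context createCGImage:sepia.outputImage
                                        fromRect:[sepia.outputImage extent]];
    UIImage *result = [UIImage imageWithCGImage:cgOutput];
    CGImageRelease(cgOutput);
    return result;
}
```

Chaining works the same way: pass one filter's outputImage as the next filter's input image.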

Related

Best way to build a camera app on iPhone

I am thinking of building a camera application with the ability to do image processing (adjust contrast, apply different image filters) while you are taking a picture or after the picture has been taken.
The app will also have the ability of drag and drop icons.
At the end you are able to export the edited images either to the camera roll or app memory.
There are already many apps like this out there (Line Camera, etc.).
Just wondering what is the best way to build such an app.
Can I build the app purely with Objective-C and the iOS SDK, or do I need to build it with C++/cocos2d, etc.?
Thanks for your help!
Your question is very broad, so here is a broad answer...
Accessing the camera/photo library
First you'll need to access the camera using UIImagePickerController to either take a new photo or grab one from your photo library. You can read up on how to accomplish this here: Camera Programming Topics for iOS
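A minimal sketch of that step, assuming self is a view controller that adopts UIImagePickerControllerDelegate and UINavigationControllerDelegate:

```objc
// Present the camera if available, otherwise fall back to the photo library.
- (void)showImagePicker {
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.delegate = self;
    if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    } else {
        picker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
    }
    [self presentViewController:picker animated:YES completion:nil];
}

// Delegate callback: grab the picked image and dismiss the picker.
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    // ... hand the image off to your filtering/processing code here ...
    [self dismissViewControllerAnimated:YES completion:nil];
}
```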
Image Manipulation
AviarySDK has much of this already built for you. Very easy to set up and use in your apps. You can download their sample app for free in the app store if you want to see what it can do. Check it out here: http://aviary.com/
Alternatively, read up on Core Image if you'd like to avoid third-party libraries. See Core Image Programming Guide for more information.
There is absolutely no need for cocos2d, which is a game engine.
You can accomplish everything you mentioned using only Objective-C.
If you want real-time effects you will need to dive into OpenGL; you can use GLKit if you target iOS 5 and above.
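The GPUImage framework mentioned in an earlier answer wraps most of that OpenGL ES work for you. A rough sketch of a live camera filter, assuming GPUImage is linked into the project and this code runs inside a view controller (the camera object must be kept in an ivar or property so it isn't deallocated):

```objc
#import "GPUImage.h"

// Live sepia preview: camera -> filter -> on-screen OpenGL view.
GPUImageVideoCamera *camera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
camera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageSepiaFilter *sepia = [[GPUImageSepiaFilter alloc] init];
GPUImageView *previewView = [[GPUImageView alloc] initWithFrame:self.view.bounds];

[camera addTarget:sepia];
[sepia addTarget:previewView];
[self.view addSubview:previewView];
[camera startCameraCapture];
```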

Mapsforge analog for iPhone

I need some framework for my iPhone app, which uses maps. Right now these maps are raster images, and I'd like to optimize my app by using vector maps instead. I know that my colleagues in Android development used the Mapsforge framework for this purpose. Is there any analog of this library for iPhone? I need a framework that can quickly render vector maps using hardware acceleration, cache maps, render offline, and (optionally) be cross-platform. Any suggestions? Thanks!
OK, I've overcome my laziness and decided to move my forgotten, almost year-old work to GitHub. This is Mapsforge for iOS; the code is dirty, but it should work without any additional setup. It can read .map files and asynchronously render tiles with vector objects to a map view. You can find it here: https://github.com/medvedNick/Mapsforge_iOS
Have a look at the following post: https://groups.google.com/forum/?fromgroups=#!topic/route-me-map/wbBa4h0R_iw
There is … libosmscout, which does vector map drawing (Unix, Windows, Qt, iOS, Android, …), routing, and searching: https://sourceforge.net/projects/libosmscout/
I did the iOS/OS X drawing code; it's not finished but works quite well.

How to implement photo effects in iPhone app?

Is it possible to bring the Photo Booth effects (twirl, squeeze, bulge) to an image?
The normal way to apply such filters on OS X is Core Image. Core Image isn't part of the current iPhone SDK, but Apple have announced that it'll be in iOS 5. That's currently under NDA, so it can't be discussed beyond the details Apple have made public on sites such as this, but given what is public I think it'd be a very good idea to ask again or to update your question once iOS 5 is released.
In the meantime, if you want to do the effect live you're probably best off uploading the image to OpenGL and applying some sort of pixel shader, which is quite an undertaking, but previous Stack Overflow answers such as this one are likely to be helpful.
Core Image is now part of iOS 5. It has many filters you can use for creating photo effects simply by combining them. You can use the Photo Effects SDK, which is built on Core Image; it comes with 35+ ready-to-use photo effects, and you can also easily create your own photo effects with the built-in filter chaining support.
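For the specific Photo Booth-style effects asked about, Core Image's distortion filters (added in iOS 6) map onto them directly: CITwirlDistortion, CIPinchDistortion, and CIBumpDistortion. A hedged sketch of a squeeze-style effect; photo is an assumed existing UIImage and the radius/scale values are arbitrary:

```objc
// "Squeeze" via CIPinchDistortion (iOS 6+), centered on the image.
CIImage *input = [CIImage imageWithCGImage:photo.CGImage];

CIFilter *pinch = [CIFilter filterWithName:@"CIPinchDistortion"];
[pinch setValue:input forKey:kCIInputImageKey];
[pinch setValue:[CIVector vectorWithX:input.extent.size.width / 2.0
                                    Y:input.extent.size.height / 2.0]
         forKey:kCIInputCenterKey];
[pinch setValue:[NSNumber numberWithFloat:300.0f] forKey:kCIInputRadiusKey];
[pinch setValue:[NSNumber numberWithFloat:0.5f] forKey:kCIInputScaleKey];

// Crop the distorted output back to the original bounds and render it.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgOutput = [context createCGImage:pinch.outputImage fromRect:input.extent];
UIImage *squeezed = [UIImage imageWithCGImage:cgOutput];
CGImageRelease(cgOutput);
```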

How to do image morphing as in the FatBooth app - iPhone

I want to build an app similar to FatBooth and want some ideas on how to do this. I googled for image morphing on the iPhone but didn't find anything. Should I use some server-side language to morph the image?
Any help would be much appreciated!
Thanks
Saurabh
The only languages you can really use are C# or C++. The maths is very complicated, although I am sure you can get a book or two that cover image manipulation.
I don't know if there is an open source morphing framework, but that is your best option; it doesn't have to be specific to the iPhone, but the integration will be hard.
I don't know what you mean about server side, unless there is a server you know of that does it and you want a wrapper around it.
For starters, take a look at displacement mapping: http://www.imagemagick.org/Usage/mapping/
Basically, you use grayscale images to bloat/shrink/squeeze/etc. parts of a person's face. I made an app called FaceCraze which had to implement a poor man's FatBooth as part of the face transformation, and I used grayscale images very similar to the examples in http://www.imagemagick.org/Usage/mapping/#spherical
Edit: you can use the ImageMagick library in your iOS apps too: Link
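The core of the displacement idea is simple enough to sketch over raw RGBA pixel buffers. This is an illustrative toy, not ImageMagick's API: the function name, the horizontal-only shift, and the 128-means-no-shift convention are all assumptions made for the example.

```objc
#include <stdlib.h>

// Shift each pixel horizontally by an amount read from a grayscale
// displacement map of the same dimensions. A map value of 128 means
// "no shift"; darker/lighter values shift the sample left/right.
void DisplaceHorizontally(const unsigned char *src, unsigned char *dst,
                          const unsigned char *map,
                          size_t width, size_t height, float strength) {
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            float shift = ((float)map[y * width + x] - 128.0f) / 128.0f * strength;
            long sx = (long)x + (long)shift;
            if (sx < 0) sx = 0;
            if (sx >= (long)width) sx = (long)width - 1;
            // Copy all four RGBA channels from the displaced source pixel.
            for (int c = 0; c < 4; c++)
                dst[(y * width + x) * 4 + c] = src[(y * width + (size_t)sx) * 4 + c];
        }
    }
}
```

A bulge or squeeze works the same way with a radial displacement map instead of a horizontal one.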

Image processing on the iPhone

I would like to apply image processing on pictures taken on the iPhone.
This processing would involve 2D matrix convolutions etc.
I'm afraid that the performance with nested NSArrays would be pretty bad. What is the right way to manipulate pixel-based images? Should I simply use C arrays allocated with malloc?
Have you looked at the Quartz 2D engine available in the iPhone SDK? Or perhaps Core Graphics? Apple has a nice overview document describing all the different imaging technologies available on the iPhone. Unfortunately there isn't anything as nice as ImageKit on the iPhone yet.
I suggest using the OpenCV image processing library, since it contains well-optimized algorithms for almost anything you want. OpenCV will definitely be faster than manual processing with NSArray.
But there is one major drawback: the OpenCV library is written in C/C++, so you will have to convert your UIImage to the native OpenCV image format to do the processing. But it's really easy to google how to do this.
I use OpenCV in my own iPhone project; here is a small how-to post on building OpenCV for iPhone: http://computer-vision-talks.com/2010/12/building-opencv-for-ios/
Yes, you would use a C array, since that's how you get back the pixel data anyway.
As mentioned, you should look and see if you can use Quartz 2D to do the manipulations you are interested in, as it would probably perform better being hardware-accelerated. If not, just do your own processing over the array of pixels, as in the sketch below.
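A minimal sketch of that approach: Core Graphics copies the UIImage's pixels into a malloc'ed RGBA buffer, and a plain 3x3 box-blur convolution (one of the 2D matrix convolutions the question mentions) runs over it. The function names are illustrative.

```objc
#import <UIKit/UIKit.h>
#include <stdlib.h>
#include <string.h>

// Draw the image into a bitmap context backed by a malloc'ed buffer,
// giving direct access to 8-bit RGBA pixels. Caller must free().
unsigned char *CopyRGBAPixels(UIImage *image, size_t *outWidth, size_t *outHeight) {
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    unsigned char *pixels = malloc(width * height * 4);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);

    *outWidth = width;
    *outHeight = height;
    return pixels;
}

// In-place 3x3 box blur over the RGB channels, leaving alpha untouched.
void BoxBlur3x3(unsigned char *pixels, size_t width, size_t height) {
    unsigned char *src = malloc(width * height * 4);
    memcpy(src, pixels, width * height * 4);
    for (size_t y = 1; y + 1 < height; y++) {
        for (size_t x = 1; x + 1 < width; x++) {
            for (int c = 0; c < 3; c++) {
                int sum = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        sum += src[((y + dy) * width + (x + dx)) * 4 + c];
                pixels[(y * width + x) * 4 + c] = (unsigned char)(sum / 9);
            }
        }
    }
    free(src);
}
```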
The iPhone's GPU also has far more processing power than its CPU, but note that OpenCL is not publicly available on iOS; GPU-based processing goes through OpenGL ES shaders instead.