Image effects with the iPhone SDK

Are there any tutorials for creating image effects on the iPhone, like glow, paper effect, etc.?
Can anyone tell me where to start?

A glow effect is not supported by default within the iPhone SDK (specifically CoreGraphics). As for the paper effect, I am not sure what you are looking for.
If you insist on effects not supported by the SDK, you should try to find less platform-specific sources and adapt them to the iPhone:
Glow and Shadow Effects (Windows GDI)
Another possibly great source of effect know-how is the ImageMagick source code.
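To give a rough sense of what a glow effect involves (this is not SDK code, just a pure-Python sketch with a hypothetical glow() helper): extract the bright pixels, blur them, and add the resulting halo back onto the original image. A minimal 1-D grayscale version:

```python
def glow(pixels, cutoff=200, radius=1):
    """Naive glow: box-blur the bright pixels and add the halo back in.

    pixels is a flat list of 0-255 grayscale values.
    """
    bright = [p if p >= cutoff else 0 for p in pixels]
    out = []
    for i, p in enumerate(pixels):
        # Box blur of the highlights around position i.
        window = bright[max(0, i - radius):i + radius + 1]
        halo = sum(window) // len(window)
        out.append(min(255, p + halo))
    return out

# A single bright pixel gains a soft halo on both sides.
print(glow([0, 0, 255, 0, 0]))  # -> [0, 85, 255, 85, 0]
```

A real 2-D implementation would use a Gaussian rather than a box blur and operate per channel, but the structure (threshold, blur, additive composite) is the same.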

Take a look at this project: http://code.google.com/p/simple-iphone-image-processing/
It includes code that can do various image effects such as canny edge detection, histogram equalisation, skeletonisation, thresholding, gaussian blur, brightness normalisation, connected region extraction, and resizing.
Another, more low-level option is to take a look at ImageMagick or FreeImage, which are more general image processing libraries.
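Several of the effects that project lists reduce to simple per-pixel operations on grayscale data. As a language-agnostic illustration (not iOS code), here is thresholding as a minimal pure-Python sketch, with the image flattened to a list of 0-255 intensities:

```python
def threshold(pixels, cutoff=128):
    """Binarize grayscale pixels: values at or above cutoff become
    white (255), everything below becomes black (0)."""
    return [255 if p >= cutoff else 0 for p in pixels]

# A tiny "image" flattened into a list of intensities.
print(threshold([10, 200, 128, 90, 255, 0]))
# -> [0, 255, 255, 0, 255, 0]
```

On the device you would run the same idea over the raw bitmap bytes obtained from a CGImage, one byte per pixel for grayscale.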

Related

Edge detection and removal on iOS using opencv

I have a project similar to what iPhone scanner apps do (DocScanner, Scanner Pro, etc.), but I'm new to OpenCV and Objective-C. The app is supposed to detect and remove the edges/background of a document/paper photographed with an iPhone.
I've seen DETECT the Edge of a Document in iPhoneSDK, which is what I want to do. I've seen what Canny does, but all it shows are the edges of every shape in the image, not the paper I want to separate.
I think this is what I'm supposed to do: OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection, but I can't make it work in Xcode.
Being a newbie, I don't know how to do it. I've looked hard everywhere and couldn't find a way to detect the edges of a document and crop them from the image; or maybe I found something but didn't understand it. I'm supposed to code it in Xcode and Objective-C.

iOS: Decompose UIImageView image into shapes and change their colors

I have been making an iPhone app where I need to identify and decompose different shapes (e.g. wall, chair, book, etc.) in a UIImageView's image and change their color. So far I have implemented code that allows the user to select a color and apply it to a selected area (pixel-based) using a gesture recogniser, but what I am looking for is far more than what I have done!
Is it possible to detect the different shapes in a given image and change their color?
Thanks.
Whatever algorithm you use, you should build it on top of one of the best frameworks for computer vision: OpenCV for iOS.
Then you might check other projects in other languages that do this image segmentation using OpenCV, and with the theory maybe roll your own solution ;)
Good luck.
Object recognition and detection is a very wide topic in computer science and, as far as I know, is not supported by UIImage's public methods. I think you have a long way to go in order to achieve your goal. Try to look up open source iOS projects that handle object detection, or maybe even look into non-native libraries that have iOS wrappers, such as OpenCV. Good luck, don't give up.

Shape recognition (recognizes hand drawn basic shapes - rectangles, ellipses, triangles etc.)?

I want to detect hand drawn basic shapes - rectangles, ellipses, triangles etc.
Does anybody have an idea how to implement this?
Maybe you can try the OpenCV library. This library actually focuses on computer vision, i.e. analyzing pixel data of images and video, and might be too heavy for your task. But on the other hand it is very powerful and available on many platforms (even on iOS). And a hand-drawn image with shapes is also just a set of pixels, isn't it ;-)
You might have a look at the manual:
http://www.sciweavers.org/books/opencv-open-source-computer-vision-reference-manual
There is plenty of information about OpenCV here on Stack Overflow as well. Some pointers:
DETECT the Edge of a Document in iPhoneSDK
and here
iPhone and OpenCV
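For the shape classification itself, the usual OpenCV route is to simplify each contour (cv::approxPolyDP) and count the corners that survive. As a language-agnostic sketch of that idea (pure Python, with hypothetical simplify() and classify() helpers built on Ramer-Douglas-Peucker simplification):

```python
import math

def _point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len = math.hypot(dx, dy)
    if seg_len == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / seg_len

def simplify(points, epsilon):
    """Ramer-Douglas-Peucker polyline simplification."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the line joining the endpoints.
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= epsilon:
        return [points[0], points[-1]]
    left = simplify(points[:idx + 1], epsilon)
    right = simplify(points[idx:], epsilon)
    return left[:-1] + right

def classify(closed_contour, epsilon=2.0):
    """Guess a shape from the number of corners left after simplification."""
    # Close the contour so the first/last segment is simplified too.
    pts = simplify(closed_contour + [closed_contour[0]], epsilon)
    corners = len(pts) - 1  # first point is repeated at the end
    return {3: "triangle", 4: "rectangle"}.get(corners, "ellipse/other")

# A hand-drawn square: corner points plus slightly wobbly edge midpoints.
square = [(0, 0), (5, 1), (10, 0), (9, 5), (10, 10), (5, 9), (0, 10), (1, 5)]
print(classify(square, epsilon=2.0))  # -> rectangle
```

The epsilon tolerance is what absorbs the wobble of a hand-drawn stroke; tune it relative to the contour's size. Shapes whose corner count doesn't match any polygon can then be tested against an ellipse fit.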

Photo Edge Detection using a mask on iPhone

I'm looking for code to detect edges in a photo based on contrast.
Basically, the user will roughly paint the mask with their finger on an iPhone, iPod or iPad. Then the code would detect the edges and adjust the mask to the edges.
Thanks for your help!
http://www.image-y.com/before.jpg
http://www.image-y.com/after.jpg
I recommend taking a look at OpenCV (which also compiles on iOS; take a look at https://github.com/aptogo/OpenCVForiPhone). A nice addition (with explanations) is this article: http://b2cloud.com.au/tutorial/uiimage-pre-processing-category.
Once you have gained a basic understanding of what you can do with OpenCV, I'd personally try some kind of thresholding and contour detection (take a look at cv::findContours). Afterwards you could filter the found contours using the input given by your user.
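The "filter by user input" step can be pictured like this: threshold the image, then keep only the connected region containing the user's touch point. A minimal pure-Python sketch (not OpenCV code; region_at() is a hypothetical helper, and a real app would use cv::findContours or flood fill on the actual bitmap):

```python
from collections import deque

def region_at(image, seed, cutoff=128):
    """Threshold a grayscale image (2-D list of 0-255 values) and return
    the set of (row, col) pixels in the connected foreground region that
    contains the user's touch point `seed`."""
    rows, cols = len(image), len(image[0])
    fg = [[p >= cutoff for p in row] for row in image]
    sr, sc = seed
    if not fg[sr][sc]:
        return set()  # the touch landed on background
    region, queue = {seed}, deque([seed])
    while queue:  # breadth-first flood fill over 4-connected neighbours
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and fg[nr][nc] \
                    and (nr, nc) not in region:
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Two bright blobs; a touch at (0, 0) selects only the left one.
img = [[200, 200,  10,  10, 220],
       [200, 200,  10,  10, 220],
       [ 10,  10,  10,  10,  10]]
print(sorted(region_at(img, (0, 0))))
# -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The returned pixel set is exactly the adjusted mask: snap the user's rough finger-painted selection to the region the threshold found.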

How to improve edge detection in iPhone apps?

I'm currently developing an iPhone app that uses edge detection. I took some sample pictures and noticed that they came out pretty dark indoors. Flash is obviously an option, but it usually blinds the camera and misses some edges.
Update: I'm more interested in iPhone tips, if there is a way to get better pictures.
Have you tried playing with contrast and/or brightness? If you increase contrast before doing the edge detection, you should get better results (although it depends on the edge detection algorithm you're using and whether it auto-magically fixes contrast first).
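The simplest contrast fix is a linear stretch: map the darkest observed value to 0 and the brightest to 255 before running edge detection. A pure-Python sketch (stretch_contrast() is a hypothetical helper; pixels are flat 0-255 grayscale values):

```python
def stretch_contrast(pixels):
    """Linearly rescale grayscale values so the darkest pixel maps to 0
    and the brightest to 255, spreading out a narrow intensity range."""
    lo, hi = min(pixels), max(pixels)
    if lo == hi:  # flat image: nothing to stretch
        return list(pixels)
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

# A dim indoor shot occupying only the 40-90 range.
print(stretch_contrast([40, 60, 90, 50]))  # -> [0, 102, 255, 51]
```

This preserves the relative ordering of intensities, so gradient-based edge detectors see the same edges, just with larger magnitudes.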
Histogram equalisation may prove useful here as it should allow you to maintain approximately equal contrast levels between pictures. I'm sure there's an algorithm implemented in OpenCV to handle it (although I've never used it on iOS, so I can't be sure).
UPDATE: I found this page on performing Histogram Equalization in OpenCV
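For reference, histogram equalisation itself is short enough to sketch directly: remap each intensity through the image's normalised cumulative histogram, so crowded intensity ranges get stretched apart. A pure-Python sketch (not the OpenCV API; equalize() is a hypothetical helper over flat 0-255 grayscale pixels):

```python
def equalize(pixels):
    """Classic histogram equalisation for 8-bit grayscale pixels."""
    n = len(pixels)
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the histogram.
    cdf, total = [0] * 256, 0
    for i, h in enumerate(hist):
        total += h
        cdf[i] = total
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # only one intensity present: leave unchanged
        return list(pixels)
    return [round((cdf[p] - cdf_min) * 255 / (n - cdf_min)) for p in pixels]

# Values bunched in the dark end get spread across the full range.
print(equalize([50, 50, 60, 60, 70, 80]))  # -> [0, 0, 128, 128, 191, 255]
```

In OpenCV the equivalent one-liner operates on a whole grayscale matrix, but seeing the CDF remapping written out makes it clear why equalised pictures end up with comparable contrast regardless of the original lighting.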