Edge detection and removal on iOS using OpenCV - iPhone

I have a project similar to what iPhone scanner apps do (DocScanner, Scanner Pro, etc.),
but I'm new to OpenCV and Objective-C. The app is supposed to detect and remove the edges/background of a document/paper photographed with an iPhone.
I've seen DETECT the Edge of a Document in iPhoneSDK, which is what I want to do. I've seen what Canny does, but it only shows the edges of all the shapes in the image; it doesn't separate out the paper I want.
I think this is what I'm supposed to do: OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection, but I can't make it work in Xcode.
And, being a newbie, I don't know how to do it. I've looked hard everywhere and couldn't find a way to detect the edges of a document and crop the image to them, or maybe I found something and didn't understand it. I'm supposed to code it in Xcode and Objective-C.
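
For reference, a minimal sketch of the approach from the linked Square Detection answer, written against the OpenCV C++ API (callable from an Objective-C++ .mm file). The Canny thresholds, the "page is the largest convex quad" assumption, and the fixed output size are illustrative guesses, not a definitive implementation:

    #include <opencv2/imgproc/imgproc.hpp>
    #include <cmath>
    #include <vector>

    // Find the largest 4-sided convex contour (assumed to be the page)
    // and warp it to an upright, axis-aligned image.
    // Returns an empty Mat when no quad is found.
    cv::Mat extractPaper(const cv::Mat& bgr)
    {
        cv::Mat gray, edges;
        cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);
        cv::Canny(gray, edges, 50, 150);      // thresholds are a guess; tune per lighting
        cv::dilate(edges, edges, cv::Mat());  // close small gaps in the page outline

        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        std::vector<cv::Point> best;
        double bestArea = 0;
        for (size_t i = 0; i < contours.size(); i++) {
            std::vector<cv::Point> approx;
            cv::approxPolyDP(contours[i], approx,
                             0.02 * cv::arcLength(contours[i], true), true);
            double area = std::fabs(cv::contourArea(approx));
            if (approx.size() == 4 && cv::isContourConvex(approx) && area > bestArea) {
                best = approx;
                bestArea = area;
            }
        }
        if (best.empty()) return cv::Mat();

        // Warp the quad to a fixed A4-ish size. A real app must first sort
        // the corners (top-left, top-right, bottom-right, bottom-left).
        std::vector<cv::Point2f> src(best.begin(), best.end());
        std::vector<cv::Point2f> dst;
        dst.push_back(cv::Point2f(0, 0));
        dst.push_back(cv::Point2f(595, 0));
        dst.push_back(cv::Point2f(595, 842));
        dst.push_back(cv::Point2f(0, 842));
        cv::Mat warped;
        cv::warpPerspective(bgr, warped,
                            cv::getPerspectiveTransform(src, dst),
                            cv::Size(595, 842));
        return warped;
    }

Wrap the usual UIImage/cv::Mat conversion helpers around this and call it from a .mm file so the C++ compiles.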

Related

iOS: Decompose UIImageView image into shapes and change their colors

I have been making an iPhone app where I need to identify and decompose different shapes (e.g. wall, chair, book, etc.) in a UIImageView's image and change their color. So far I have implemented code to allow the user to select a color and apply it to a selected area (pixel-based) using a gesture recogniser, but what I am looking for is far more than what I have done!
Is it possible to detect the different shapes in a given image and change their color?
Thanks.
Whatever algorithm you use, you should place it on top of one of the best frameworks for computer vision: OpenCV for iOS.
Then you might check projects in other languages that do this kind of image segmentation using OpenCV, and with the theory maybe roll your own solution ;)
Good luck
Object recognition and detection is a very wide topic in computer science and, as far as I know, is not supported by UIImage's public methods. I think you have a long way to go in order to achieve your goal. Try looking up open source iOS projects that handle object detection, or maybe even look into non-native libraries that have iOS wrappers, such as OpenCV. Good luck, don't give up. (A tiny flood-fill sketch of the simpler "recolor what the user touched" idea follows below.)
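
As mentioned above, a tiny sketch of the "recolor the region the user touched" idea, using OpenCV's flood fill from C++. Full object recognition is far beyond this, and the tolerance values are assumptions:

    #include <opencv2/imgproc/imgproc.hpp>

    // Recolor the roughly uniform colour region around the touched pixel.
    // loDiff/upDiff control how similar neighbouring pixels must be to be filled.
    void recolorRegion(cv::Mat& bgr, cv::Point touch, cv::Scalar newColor)
    {
        cv::floodFill(bgr, touch, newColor, 0,
                      cv::Scalar(12, 12, 12),   // lower tolerance (assumed)
                      cv::Scalar(12, 12, 12));  // upper tolerance (assumed)
    }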

Shape recognition (recognizes hand drawn basic shapes - rectangles, ellipses, triangles etc.)?

I want to detect hand drawn basic shapes - rectangles, ellipses, triangles etc.
Does anybody have an idea how to implement this?
Maybe you can try the OpenCV library. This library focuses on computer vision, i.e. analyzing the pixel data of images and video, and might be too heavy for your task. But on the other hand it is very powerful and available on many platforms (even on iOS). And a hand-drawn image with shapes is also just a set of pixels, isn't it ;-) (A minimal sketch of the contour-based route follows after the links below.)
You might have a look at the manual:
http://www.sciweavers.org/books/opencv-open-source-computer-vision-reference-manual
There is plenty of information about OpenCV here on Stack Overflow as well. Some hints are here:
DETECT the Edge of a Document in iPhoneSDK
and here
iPhone and OpenCV
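
As promised above, a minimal sketch of the contour-approximation route with the OpenCV C++ API. The threshold, the epsilon factor, and the vertex-count heuristic are assumptions, not a complete recognizer:

    #include <opencv2/imgproc/imgproc.hpp>
    #include <string>
    #include <vector>

    // Very rough shape labelling: approximate each contour with a polygon
    // and decide by vertex count. Real hand-drawn input will need tuning.
    std::vector<std::string> labelShapes(const cv::Mat& gray)
    {
        cv::Mat bin;
        cv::threshold(gray, bin, 128, 255, cv::THRESH_BINARY_INV); // assumes dark strokes on light paper

        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        std::vector<std::string> labels;
        for (size_t i = 0; i < contours.size(); i++) {
            std::vector<cv::Point> poly;
            cv::approxPolyDP(contours[i], poly,
                             0.03 * cv::arcLength(contours[i], true), true);
            if (poly.size() == 3)      labels.push_back("triangle");
            else if (poly.size() == 4) labels.push_back("rectangle");
            else if (poly.size() > 6)  labels.push_back("ellipse-like"); // many vertices ~ smooth curve
            else                       labels.push_back("unknown");
        }
        return labels;
    }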

Photo Edge Detection using a mask on iPhone

I'm looking for code that can detect edges in a photo by contrast.
Basically, the user will roughly paint the mask with their finger on iPhone, iPod or iPad. Then the code would detect the edges and adjust the mask to them.
Thanks for your help!
http://www.image-y.com/before.jpg
http://www.image-y.com/after.jpg
I recommend taking a look at OpenCV, which can also be compiled for iOS (take a look at https://github.com/aptogo/OpenCVForiPhone). A nice addition (with explanations) could be provided by this article: http://b2cloud.com.au/tutorial/uiimage-pre-processing-category.
Once you have gained a basic understanding of what you can do with OpenCV, I'd personally try some kind of thresholding and contour detection (take a look at cv::findContours). Afterwards you could filter the found contours using the input given by your user.
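
A rough sketch of that idea in the OpenCV C++ API; reducing the painted mask to a single touch point and the use of Otsu thresholding are simplifying assumptions:

    #include <opencv2/imgproc/imgproc.hpp>
    #include <vector>

    // Return the contour that contains the user's touch point, or an
    // empty contour if none does. `gray` is the photo in grayscale.
    std::vector<cv::Point> contourUnderTouch(const cv::Mat& gray, cv::Point2f touch)
    {
        cv::Mat bin;
        // Otsu picks the threshold automatically; adapt as needed.
        cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(bin, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

        for (size_t i = 0; i < contours.size(); i++) {
            // pointPolygonTest > 0 means the point lies inside the contour.
            if (cv::pointPolygonTest(contours[i], touch, false) > 0)
                return contours[i];
        }
        return std::vector<cv::Point>();
    }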

Detecting particular objects in the image i.e image segmentation with opencv

I have to select any particular object visible in my image on iPhone.
Basically my project is to segment image objects on the basis of my touch.
The method I am following is to first detect contours of the image and then select a particular sequence based on finger touch.
Is there any other method that would be more robust, because I have to run it on video frames?
I am using OpenCV and iPhone for the project.
Please help if there is any other idea that has been implemented or is feasible to implement.
Have you looked at SIFT or SURF implementations? They both track object features and are resilient (to a certain degree) to rotation, translation and scale.
Also check out FAST, a corner detection algorithm that might help you; they have an app on the App Store showing how quick it is, too.
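
To give a feel for the FAST route, a minimal sketch with the OpenCV C++ API; the threshold value is illustrative:

    #include <opencv2/features2d/features2d.hpp>
    #include <vector>

    // Detect FAST corners in a grayscale frame; suitable for per-frame use
    // on video because FAST is cheap compared to SIFT/SURF.
    std::vector<cv::KeyPoint> detectCorners(const cv::Mat& gray)
    {
        std::vector<cv::KeyPoint> keypoints;
        int threshold = 30;            // intensity difference; tune per scene
        bool nonmaxSuppression = true; // keep only the strongest corner in a neighbourhood
        cv::FAST(gray, keypoints, threshold, nonmaxSuppression);
        return keypoints;
    }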

Overlay "Structured Glas" Effect on iPhone Camera Feed - General Directions

I'm currently trying to write an app that would be able to show the effects of glass, as seen through the iPhone camera.
I'm not talking about simple, uniform glass but glass like this:
Now I have already broken this into two problems:
1) Apply some image filter to the 2D frames presented by the iPhone camera. This has been done and seems possible, e.g. in the app: faceman
2) I need to get the individual lighting properties of a sheet of glass that my client supplies me with. Basically, there must be a way to read the information about how the glass distorts and skews the image. I think it might somehow be possible to take a high-res picture of the glass plate, laid on a checkerboard image, and somehow analyze this.
Now, I'm mostly searching for literature and web links on how you think I could start on 2. It doesn't need to be exact; in the end I just need something that looks approximately like the sheet of glass I want to show. And I don't even know where to search: physics, image filtering, or computational photography books.
EDIT: I'm currently thinking that one easy solution could be bump-mapping the texture on top of the camera feed; I asked another question on this here.
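
A hedged sketch of the checkerboard idea from point 2, using OpenCV's chessboard detector; the board size and the use of per-corner displacement as a stand-in for the distortion are assumptions:

    #include <opencv2/calib3d/calib3d.hpp>
    #include <vector>

    // Compare checkerboard corners photographed through the glass with the
    // corners of a reference shot taken without the glass; the per-corner
    // displacement approximates how the glass bends light at that spot.
    std::vector<cv::Point2f> distortionAtCorners(const cv::Mat& withGlassGray,
                                                 const cv::Mat& withoutGlassGray)
    {
        cv::Size board(9, 6); // inner corners of the printed checkerboard (assumed)
        std::vector<cv::Point2f> seen, reference, displacement;

        if (!cv::findChessboardCorners(withGlassGray, board, seen) ||
            !cv::findChessboardCorners(withoutGlassGray, board, reference))
            return displacement; // empty: detection failed

        for (size_t i = 0; i < seen.size(); i++)
            displacement.push_back(seen[i] - reference[i]);
        return displacement;
    }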
You need to start with OpenGL. You effectively want a texture, similar to the one you've got above, to displace the texture below it (the live camera view) to give the impression of depth and distortion. This is a non-trivial problem: while it's fairly standard in its field, if you're coming from a background with no graphics or OpenGL experience you can expect a very steep learning curve.
So in short, the only way you can achieve this realistically on iOS is to use OpenGL, and that should be your starting point. Apple has a few guides on the matter, but you'll be better off looking elsewhere. There are some useful books, such as the OpenGL ES 2.0 Programming Guide, that can get you off on the right track, but where you start will depend on how comfortable you are with 3D graphics and C.
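
To make the displacement idea concrete, a minimal GLSL ES fragment shader carried as a C++ string; the uniform names and the 0.05 strength factor are illustrative assumptions, not from any shipped example:

    // Offsets the camera-texture lookup by a normal map of the glass:
    // a cheap fake refraction. vTexCoord is assumed to come from the
    // vertex shader; both textures are assumed already bound.
    static const char* kRefractFragmentShader =
        "precision mediump float;\n"
        "varying vec2 vTexCoord;\n"
        "uniform sampler2D uCameraTex;  // live camera frame\n"
        "uniform sampler2D uNormalMap;  // per-pixel surface normals of the glass\n"
        "void main() {\n"
        "    // Map the normal from [0,1] back to [-1,1] and use its x/y\n"
        "    // components as a small lookup offset.\n"
        "    vec3 n = texture2D(uNormalMap, vTexCoord).xyz * 2.0 - 1.0;\n"
        "    vec2 offset = n.xy * 0.05;\n"
        "    gl_FragColor = texture2D(uCameraTex, vTexCoord + offset);\n"
        "}\n";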
Just wanted to add that I solved this old question using the refraction example in the Khronos OpenGL ES SDK.
Wrote a blog entry with pictures about it:
simulating windows with refraction