I'm looking for code that can detect edges in a photo based on contrast.
Basically, the user will roughly paint the mask with their finger on an iPhone, iPod touch, or iPad. Then the code would detect the edges and snap the mask to them.
Thanks for your help!
http://www.image-y.com/before.jpg
http://www.image-y.com/after.jpg
I recommend taking a look at OpenCV, which can also be compiled for iOS (take a look at https://github.com/aptogo/OpenCVForiPhone). A nice addition, with explanations, is this article: http://b2cloud.com.au/tutorial/uiimage-pre-processing-category.
Once you have a basic understanding of what you can do with OpenCV, I'd personally try some kind of thresholding and contour detection (take a look at cv::findContours). Afterwards you could filter the found contours using the input your user painted.
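For illustration, here is a minimal sketch of that idea. Everything here is an assumption on my part: refineMask and userMask are names I made up, the user's rough finger painting is assumed to arrive as an 8-bit mask the same size as the photo, and Otsu thresholding is just one possible choice.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Keep only the detected contours that overlap the region the user painted.
    cv::Mat refineMask(const cv::Mat& gray, const cv::Mat& userMask)
    {
        cv::Mat binary;
        // Otsu picks a global threshold automatically; cv::adaptiveThreshold
        // is an alternative if lighting varies across the photo.
        cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(binary, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

        cv::Mat refined = cv::Mat::zeros(gray.size(), CV_8UC1);
        for (size_t i = 0; i < contours.size(); i++) {
            // Rasterize this contour and keep it only if it touches the
            // user's rough mask.
            cv::Mat region = cv::Mat::zeros(gray.size(), CV_8UC1);
            cv::drawContours(region, contours, (int)i, cv::Scalar(255), -1); // -1 = filled
            if (cv::countNonZero(region & userMask) > 0)
                refined |= region;
        }
        return refined;
    }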
Related
I am new to OpenCV and need to know which OpenCV methods detect different shapes (circle, square, rectangle, triangle, ellipse) in an image captured by the iPhone camera.
Could someone point me in the right direction (references/articles/anything) as to which techniques are best suited to get this done?
Thanks.
iOmi
First you will probably need to use an edge detector such as Canny to extract the shapes into a binary image (although this may be expensive on the iPhone).
For circles, I would have a look at HoughCircles.
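A hedged sketch of that, where every parameter value is an illustrative starting point rather than something tuned for the iPhone:

    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<cv::Vec3f> detectCircles(const cv::Mat& gray)
    {
        cv::Mat blurred;
        cv::GaussianBlur(gray, blurred, cv::Size(9, 9), 2); // reduce false hits
        std::vector<cv::Vec3f> circles; // each entry is (center x, center y, radius)
        cv::HoughCircles(blurred, circles, cv::HOUGH_GRADIENT, // CV_HOUGH_GRADIENT in older OpenCV
                         1,             // accumulator resolution
                         gray.rows / 8, // minimum distance between centers
                         100, 30);      // Canny high threshold, accumulator threshold
        return circles;
    }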
For squares and rectangles, look at the findContours method and the sample code squares.cpp in the samples directory of your OpenCV download.
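A condensed sketch of the squares.cpp approach (the Canny thresholds, the approximation epsilon, and the area limit are arbitrary starting points you will want to tune):

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    std::vector<std::vector<cv::Point>> findSquares(const cv::Mat& gray)
    {
        cv::Mat edges;
        cv::Canny(gray, edges, 50, 150);
        cv::dilate(edges, edges, cv::Mat()); // close small gaps in the edges

        std::vector<std::vector<cv::Point>> contours, squares;
        cv::findContours(edges, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

        for (const auto& c : contours) {
            std::vector<cv::Point> approx;
            // Approximate the contour with a polygon; keep convex quads.
            cv::approxPolyDP(c, approx, cv::arcLength(c, true) * 0.02, true);
            if (approx.size() == 4 && cv::isContourConvex(approx) &&
                std::fabs(cv::contourArea(approx)) > 1000) // area limit is arbitrary
                squares.push_back(approx);
        }
        return squares;
    }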
With a quick Google search I was able to find an article about detecting shapes in C#, which roughly corresponds to the methods you would use with the OpenCV library in another language.
I have not used OpenCV on iOS, but I hope this helps you get started.
I'm developing an iPhone app to recognize some well known symbols from pictures.
I'm basically following these tutorials, http://aishack.in/tutorials/sudoku-grabber-with-opencv-detection/ and http://sudokugrab.blogspot.it/2009/07/how-does-it-all-work.html, using OpenCV for template matching and GPUImage for image processing.
When all images have the same luminance level, I can adjust the threshold of GPUImageLuminanceThresholdFilter and everything works smoothly, but of course I can't count on the luminance being constant.
So I need a simple adaptive threshold filter, like the one in those tutorials, which calculates the threshold from the luminance of the area surrounding each pixel.
The GPUImageAdaptiveThresholdFilter doesn't fit my needs, because it detects and sharpens edges, while I need to enhance the symbols.
How can I implement that kind of filter?
When asked, the awesome Brad Larson added a blur-size property to the box blur and modified the adaptive threshold filter, so it now works as expected!
Thanks @BradLarson!
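For reference, the same mean-of-neighbourhood thresholding can be sketched on the CPU with OpenCV's cv::adaptiveThreshold; this is my OpenCV analogue of the GPUImage filter discussed above, not its actual implementation, and the block size and constant are illustrative guesses:

    #include <opencv2/opencv.hpp>

    cv::Mat adaptiveBinarize(const cv::Mat& gray)
    {
        cv::Mat binary;
        // Each pixel is compared against the mean of a blockSize x blockSize
        // neighbourhood minus a small constant C; 15 and 5 are starting points.
        cv::adaptiveThreshold(gray, binary, 255,
                              cv::ADAPTIVE_THRESH_MEAN_C,
                              cv::THRESH_BINARY, 15, 5);
        return binary;
    }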
I have to select a particular object visible in my image on the iPhone.
Basically, my project is to segment image objects based on where I touch.
The method I am following is to first detect the contours in the image and then select a particular contour based on the finger touch, as in the sketch below.
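A minimal sketch of that selection step, assuming the contours are already extracted and touch is the finger position mapped into image coordinates (the function name is mine):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Returns the index of the contour that contains (or is nearest to)
    // the touch point, or -1 if there are no contours.
    int pickContour(const std::vector<std::vector<cv::Point>>& contours,
                    cv::Point2f touch)
    {
        int bestIdx = -1;
        double bestDist = -1e9;
        for (size_t i = 0; i < contours.size(); i++) {
            // Signed distance to the contour: positive inside, negative outside.
            double d = cv::pointPolygonTest(contours[i], touch, true);
            if (d > bestDist) { bestDist = d; bestIdx = (int)i; }
        }
        return bestIdx;
    }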
Is there any other method that would be more robust, given that I have to run it on video frames?
I am using OpenCV and the iPhone for this project.
Please help if there is any other idea that has been implemented or is feasible to implement.
Have you looked at SIFT or SURF implementations? They both track object features and are resilient (to a certain degree) to rotation, translation and scale.
Also check out FAST, which is a corner-detection algorithm that might help you; they have an app on the App Store showing how quick it is, too.
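A quick sketch of FAST in OpenCV (the intensity threshold is an illustrative value to tune per scene):

    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<cv::KeyPoint> fastCorners(const cv::Mat& gray)
    {
        std::vector<cv::KeyPoint> keypoints;
        // Detect corners; the second flag enables non-maximum suppression
        // so nearby duplicate corners are discarded.
        cv::FAST(gray, keypoints, 30, true);
        return keypoints;
    }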
I am building a night-vision application, but I can't find any useful algorithm to apply to dark images to make them clearer. Can anyone please suggest a good algorithm?
Thanks in advance
With the size of the iPhone's lens and sensor, you are going to have a lot of noise no matter what you do. I would practice manipulating the image in Photoshop first; you'll probably find it useful to select a white point from a sample of the brighter pixels in the image and to apply a curve. You'll probably also need to run a noise-reduction filter and a smoother. Edge detection or condensation may let you emphasize some areas of the image. As for specific algorithms to perform each of these filters, there are plenty of computer science books and lists on the subject. Here is one list:
http://www.efg2.com/Lab/Library/ImageProcessing/Algorithms.htm
Many OpenGL implementations can be found once you know the standard name of the algorithm you need.
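As a rough sketch of the white-point-and-denoise idea in OpenCV code (my own approximation: using the global maximum as the white point is naive, and a percentile of the brighter pixels would be closer to what's described above):

    #include <opencv2/opencv.hpp>
    #include <algorithm>

    cv::Mat brighten(const cv::Mat& bgr)
    {
        cv::Mat denoised, stretched;
        cv::medianBlur(bgr, denoised, 3); // knock down sensor noise first

        // Find the brightest component and scale it up to 255 with a simple
        // linear gain; adding a gamma curve would lift shadows more gently.
        double minVal, whitePoint;
        cv::minMaxLoc(denoised.reshape(1), &minVal, &whitePoint);
        denoised.convertTo(stretched, -1, 255.0 / std::max(whitePoint, 1.0), 0);
        return stretched;
    }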
Real (useful) night vision typically uses an infrared light and an infrared-tuned camera. I think you're out of luck.
Of course using the iPhone 4's camera light could be considered "night vision" ...
Your real problem is the camera, not the algorithm.
You can apply algorithms to clarify images, but they won't turn a dark shot into a well-lit one by magic ^^
But if you want to try some algorithms, you should take a look at OpenCV (http://opencv.willowgarage.com/wiki/); there are ports for iOS, such as http://ildan.blogspot.com/2008/07/creating-universal-static-opencv.html.
I suppose there are two ways to refine a dark image: the first is active, which uses infrared; the other is passive, which manipulates the pixels of the image.
The images will be noisy, but you can always try scaling up the pixel values (all of the RGB components, or just the luminance in HSV; either linearly or with some sort of curve; either globally or locally in just the darker areas) and saturating them, and/or applying a contrast or edge-enhancement filter.
If the camera and subject matter are sufficiently motionless (tripod, etc.), you could try summing each pixel over several image captures. Or you could do what some HDR apps do and align the images before processing pixels across time.
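A minimal sketch of that frame-summing idea, assuming the frames are already aligned (the tripod case):

    #include <opencv2/opencv.hpp>
    #include <vector>

    cv::Mat averageFrames(const std::vector<cv::Mat>& frames)
    {
        CV_Assert(!frames.empty());
        cv::Mat acc = cv::Mat::zeros(frames[0].size(), CV_32FC3);
        for (const cv::Mat& f : frames)
            cv::accumulate(f, acc); // running sum in float to avoid clipping
        acc /= (double)frames.size();
        cv::Mat out;
        acc.convertTo(out, CV_8UC3); // random noise averages out across frames
        return out;
    }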
I haven't seen any documentation on whether the iPhone's camera sensor has a wider wavelength gamut than the human eye.
I suggest conducting a simple test before trying to actually implement this:
1. Save a photo taken in a dark room.
2. Open it in GIMP (or a similar application).
3. Apply the "Stretch HSV" algorithm (or equivalent).
4. Check whether the resulting image quality is good enough.
This should give you an idea as to whether your camera is good enough to try it.
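If you'd rather run the same test in code, here is a sketch of a "Stretch HSV" equivalent using OpenCV; this is my approximation of what GIMP does (stretching the V channel to the full range), not GIMP's exact algorithm:

    #include <opencv2/opencv.hpp>
    #include <vector>

    cv::Mat stretchValue(const cv::Mat& bgr)
    {
        cv::Mat hsv;
        cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
        std::vector<cv::Mat> ch;
        cv::split(hsv, ch);
        // Stretch the brightness channel to cover the full 0-255 range.
        cv::normalize(ch[2], ch[2], 0, 255, cv::NORM_MINMAX);
        cv::merge(ch, hsv);
        cv::Mat out;
        cv::cvtColor(hsv, out, cv::COLOR_HSV2BGR);
        return out;
    }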
How do I morph two images in iPhone programming?
Your question is not iPhone-specific; the kind of algorithm you are looking for is language-agnostic, since it just works on images.
By the way, it's quite complex to morph two images. Usually you have to:
embed a grid of points over the two images that links the features that should be morphed. For example, if you have two faces you would use a grid that connects the eyes, the mouth, the ears, the nose, the edge of the face, and so on: these two grids tell the morpher how to "translate" a point into another one while blending the two images.
The previous step can be done automatically (with specific software) or by hand; the more points you place, the better your results will be.
Then you can do the real morphing sequence: basically you do an interpolation between the two images, in which the parameter you use decides how similar the final result is to the first or the second image.
You should also apply some blending effect to actually create a believable result, again using a parametric function according to the morphing position.
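As a minimal illustration of just the interpolation/blending step (the mesh warp, which is the hard part, is omitted here), assuming both images are the same size and type:

    #include <opencv2/opencv.hpp>

    // t runs from 0.0 (pure image a) to 1.0 (pure image b).
    cv::Mat crossDissolve(const cv::Mat& a, const cv::Mat& b, double t)
    {
        cv::Mat out;
        cv::addWeighted(a, 1.0 - t, b, t, 0.0, out);
        return out;
    }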
You can use UIView animation to transition from one UIView to another. This should provide some sort of lame morphing.
You can use XMRM, which is written in C++: http://www.cg.tuwien.ac.at/~xmrm/
There is no image morphing API in the iOS SDK.
No, there isn't an API for it. You'll have to do it yourself.
...ask a short question, get a short answer...