I want to detect hand-drawn basic shapes - rectangles, ellipses, triangles, etc.
Does anybody have an idea how to implement this?
Maybe you can try the OpenCV library. This library focuses on computer vision, i.e. analyzing the pixel data of images and video, and might be too heavy for your task. But on the other hand it is very powerful and available on many platforms (even on iOS). And a hand-drawn image with shapes is also just a set of pixels, isn't it ;-)
You might have a look at the manual:
http://www.sciweavers.org/books/opencv-open-source-computer-vision-reference-manual
There is plenty of information about OpenCV on Stack Overflow as well. Some starting points:
DETECT the Edge of a Document in iPhoneSDK
and here
iPhone and OpenCV
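If you do go the OpenCV route, a minimal C++ sketch of one common approach - threshold, find the contours, then count the vertices of a coarse polygon approximation - might look like this (the file name and thresholds are placeholder assumptions, not a definitive recipe):

```cpp
// Minimal OpenCV sketch: classify hand-drawn shapes by the number of
// vertices left after polygon approximation. Assumes "drawing.png" is
// dark strokes on light paper (hypothetical file name).
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat img = cv::imread("drawing.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;

    // Binarize so the strokes become white on black.
    cv::Mat bin;
    cv::threshold(img, bin, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& c : contours) {
        if (cv::contourArea(c) < 100) continue;  // skip small noise blobs

        // Approximate the contour with a coarse polygon; the 0.02 factor
        // is a common starting point, tune it for your drawings.
        std::vector<cv::Point> poly;
        cv::approxPolyDP(c, poly, 0.02 * cv::arcLength(c, true), true);

        if (poly.size() == 3)      std::cout << "triangle\n";
        else if (poly.size() == 4) std::cout << "rectangle-ish\n";
        else                       std::cout << "ellipse/circle?\n";  // many vertices
    }
    return 0;
}
```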
I'm working on collecting aerial images of farms (taken from a helicopter pointing straight down) that I want to stitch together into one photo of the covered area, on which I'd then run analytics.
I'm assuming the images will come with [latitude, longitude] coordinates to help me determine where to place the images.
To understand the issues with this technology, I tried manually stitching pictures of a sample area in my back yard taken with my phone. I found that the edges don't usually look the same, because the camera sees them from different sides or angles. I guess this is a distortion in the image that could potentially be fixed by ortho-rectification (not completely sure).
I quickly created the following picture to help explain my problem.
My question to you:
What are the algorithms/techniques used to do ortho-rectification?
What tools would best suit my needs: OpenCV, Processing, MATLAB, or any other tool that could easily help in rectifying the images and creating a mosaic photo?
What other issues should be considered in doing aerial imagery mosaic-ing and analytics?
Thank you!
Image stitching usually assumes that the camera center is fixed across all photos, and uses homographies to transform the images so that they appear continuous. When the fixed-camera-center assumption does not strictly hold, artifacts/distortions may appear due to the 3D structure of the scene. If the camera center moved only a small distance compared to the relief of the scene, "seamless image blending" techniques may be sufficient to blur out the distortions.
In more extreme cases, ortho-rectification is required. Ortho-rectification (Wikipedia entry) is the task of transforming an image observed from a given perspective camera into an orthographic (Wikipedia entry) and usually vertical point of view. The orthographic property is interesting because it makes the stitching of several images much easier. The following picture from Wikipedia is particularly clear (left is an orthographic or directional projection, right is a perspective or central projection):
The task of ortho-rectification usually requires a 3D model of the scene, in order to map the intensities observed by the perspective camera to their correct locations with respect to the orthographic camera. In the context of aerial/satellite images, Digital Elevation Models (DEMs) are often used for that purpose, but they generally have the serious drawback of not including man-made structures (only Earth relief). NASA freely provides the DEMs acquired by the SRTM missions (DEM link).
Another approach: if you have two images acquired from different positions, you could try a 3D reconstruction using one of the stereo matching techniques, and then generate the ortho-rectified image by mapping the two images as seen by a third, orthographic and vertical, camera.
OpenCV has several interesting functions for that purpose (e.g. stereo reconstruction, image mapping functions, etc.) and might be more appropriate for intensive usage. Matlab probably has interesting functions as well, and might be more appropriate for quick tests.
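For a quick test of the plain-stitching route, recent OpenCV versions also ship a high-level Stitcher class; here is a minimal sketch (the file names are hypothetical, and the SCANS mode - intended for a moving camera over a roughly planar scene, which matches nadir aerial shots better than the default panorama mode - may not exist in older OpenCV versions):

```cpp
// Minimal sketch using OpenCV's high-level Stitcher class.
#include <opencv2/opencv.hpp>
#include <opencv2/stitching.hpp>
#include <vector>

int main() {
    std::vector<cv::Mat> imgs = {
        cv::imread("aerial1.jpg"),  // hypothetical input files
        cv::imread("aerial2.jpg")
    };

    // SCANS mode assumes a translating camera over a roughly flat scene.
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::SCANS);

    cv::Mat pano;
    if (stitcher->stitch(imgs, pano) == cv::Stitcher::OK)
        cv::imwrite("mosaic.jpg", pano);
    return 0;
}
```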
First, rectification is a kind of warping, but not the one you need. Regular rectification is used in stereo to ensure that matching points lie on the same row - not your case. Ortho-rectification warps a perspective projection into an orthographic one - again not your case. Not only do you lack the 3D model needed to calculate this warping, you also don't need it, since your perspective distortions are negligible and your images are already pretty close to ortho (that is, when the size of the objects is small compared to the viewing distance, perspective effects are small).
Your problems in aligning two images stem from small camera rotations between shots. To start fixing the problem you need to ensure that your images actually overlap by, say, 30%. To read about this, see chapter 9 of this book.
What you need is to review regular image stitching techniques that use a homography to map one image onto another. Note that doing so assumes the images are essentially flat. To find the homography you can manually select 4 points in one image and 4 matching points in the other, then run the OpenCV function findHomography(). Note that overlap is required to find the matches (in your picture there is no overlap). warpPerspective() can warp the images for you once the homography is found.
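A minimal sketch of that manual workflow (the point coordinates below are made-up placeholders; in practice you would pick them by hand in each image):

```cpp
// Sketch: four hand-picked point pairs -> findHomography() ->
// warpPerspective(). File names and coordinates are placeholders.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat img1 = cv::imread("left.jpg");
    cv::Mat img2 = cv::imread("right.jpg");

    // Four matching points, same order in both images (hypothetical values).
    std::vector<cv::Point2f> pts1 = {{120,85},{640,90},{630,420},{115,410}};
    std::vector<cv::Point2f> pts2 = {{ 15,80},{530,95},{525,415},{ 10,400}};

    // Homography mapping img1 coordinates into img2's frame.
    cv::Mat H = cv::findHomography(pts1, pts2);

    // Warp img1 onto a canvas wide enough for both images, then paste
    // img2 over its own region.
    cv::Mat canvas;
    cv::warpPerspective(img1, canvas, H,
                        cv::Size(img1.cols + img2.cols, img2.rows));
    img2.copyTo(canvas(cv::Rect(0, 0, img2.cols, img2.rows)));
    cv::imwrite("stitched.jpg", canvas);
    return 0;
}
```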
"What are the algorithms/techniques used to do ortho-rectification?
If you want a good overview of the techniques, then the book "Multiple View Geometry in Computer Vision" by Hartley & Zisserman might be a good place to start: http://www.robots.ox.ac.uk:5000/~vgg/hzbook/
Andrew Zisserman also has some tutorials available at www.robots.ox.ac.uk/~az/tutorials/, which might be more accessible and make it easier for you to find the particular technique you want to use.
"What tools would best suit my needs: opencv, or processing or matlab or any other tool that could easily help in rectification of images and creating a mosaic photo?"
OpenCV has a fair number of tools available - take a look at Images stitching for starters. There's also a lot available for correcting distortion. However, it doesn't have to be the tool you use; there are others!
I need to create an ellipse with a width of 52 pixels and a height of 47 pixels. Using the Chipmunk engine, I've found that you can create circles with a certain radius, as well as polygons. I'm new to working with Chipmunk, and the documentation for the engine is quite brief.
How do I create ellipses in Chipmunk? I'm currently working with iPhones, using Objective-c and cocos2d.
I know it may seem pointless to go into such detail, but I need the shape to be as precise as possible.
Thank you!
The recommendation from Chipmunk's author, slembcke, seems to be “approximate it using a polygon”. See this forum post.
If a polygon approximation isn't good enough, you will have to modify Chipmunk to add a new ellipse shape type, since it has no built-in support for ellipses - and adding that support is probably a significant amount of work.
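If you go the polygon route, generating the vertices is straightforward; here is a small sketch in plain C++ (the mention of cpPolyShapeNew is an assumption - the exact constructor signature varies between Chipmunk versions, so check your headers):

```cpp
// Sketch: approximate a 52x47-pixel ellipse with an N-sided convex
// polygon. The resulting vertices (counterclockwise, centered on the
// body) could then be fed to Chipmunk's polygon shape constructor
// (cpPolyShapeNew in the versions I've seen; verify against your headers).
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec2 { double x, y; };

std::vector<Vec2> ellipseVerts(double width, double height, int n) {
    const double kTwoPi = 6.283185307179586;
    const double a = width / 2.0, b = height / 2.0;  // semi-axes
    std::vector<Vec2> verts;
    for (int i = 0; i < n; ++i) {
        double t = kTwoPi * i / n;  // counterclockwise winding
        verts.push_back({a * std::cos(t), b * std::sin(t)});
    }
    return verts;
}

int main() {
    // More sides = closer fit, but slower collision checks.
    for (const Vec2& v : ellipseVerts(52.0, 47.0, 16))
        std::printf("(%.2f, %.2f)\n", v.x, v.y);
    return 0;
}
```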
You can also use PhysicsEditor to design any shape.
If you already have an image of an ellipse, then you can use that image to allow PhysicsEditor to trace the borders of the image. Either way this is a lot easier than actually programming the shape.
I am new to OpenCV and need to know which OpenCV methods detect different shapes (circle, square, rectangle, triangle, ellipse) in a camera-captured image on the iPhone.
So, could someone point me in the right direction (references/articles/anything) as to which techniques are best to get this done?
Thanks..
iOmi
First you will probably need to look at an edge detector such as Canny to extract the shapes into a binary image (although this may be expensive on the iPhone).
For circles I would have a look at HoughCircles.
For squares and rectangles you should look at the findContours method and the sample code squares.cpp in the samples directory of your OpenCV download.
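A quick C++ sketch of the Canny/HoughCircles part of that pipeline (the parameter values are rough starting points to tune, and the input file is a placeholder; the contour-based shape classification follows the squares.cpp sample):

```cpp
// Sketch: detect circles with HoughCircles and build a Canny edge map
// for contour-based shape detection. "photo.jpg" is a placeholder.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat gray = cv::imread("photo.jpg", cv::IMREAD_GRAYSCALE);
    cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2);  // tame sensor noise

    std::vector<cv::Vec3f> circles;  // each entry: (x, y, radius)
    cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT,
                     1,              // accumulator resolution = image resolution
                     gray.rows / 8,  // minimum distance between circle centers
                     100, 30);       // Canny high threshold, accumulator threshold

    // Binary edge map, the usual input for findContours-based detection.
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 100);
    return 0;
}
```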
With a quick Google search I was able to find an article about detecting shapes in C#, which roughly corresponds to the methods you would use in another language with the OpenCV library.
I have not used OpenCV on iOS, but I hope this will help get you started.
I am building a night vision application, but I can't find any useful algorithm to apply to dark images to make them clearer. Can anyone please suggest a good algorithm?
Thanks in advance
With the size of the iPhone lens and sensor, you are going to have a lot of noise no matter what you do. I would practice manipulating the image in Photoshop first; you'll probably find that it is useful to select a white point out of a sample of the brighter pixels in the image and to apply a curve. You'll probably also need to run a noise-reduction filter and a smoother. Edge detection or condensation may allow you to bring out some areas of the image. As for specific algorithms to perform each of these filters, there are a lot of computer science books and lists on the subject. Here is one list:
http://www.efg2.com/Lab/Library/ImageProcessing/Algorithms.htm
Many OpenGL implementations can be found if you find a standard name for an algorithm you need.
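As a rough translation of the white-point-plus-curve idea into code, here is an OpenCV C++ sketch (just one possible rendering of the Photoshop workflow; the gamma value is an arbitrary starting point):

```cpp
// Sketch: stretch a dark image to a chosen white point, lift midtones
// with a gamma curve, then lightly denoise. "dark.jpg" is a placeholder.
#include <opencv2/opencv.hpp>
#include <cmath>

int main() {
    cv::Mat img = cv::imread("dark.jpg");

    // White point: ideally a high percentile of brightness; approximated
    // here by the global maximum for simplicity.
    double maxVal;
    cv::minMaxLoc(img.reshape(1), nullptr, &maxVal);
    if (maxVal < 1.0) maxVal = 1.0;  // avoid dividing by zero on a black frame

    // Linear stretch so the white point maps to 255...
    cv::Mat stretched;
    img.convertTo(stretched, -1, 255.0 / maxVal, 0);

    // ...then a gamma "curve" to lift the midtones (gamma < 1 brightens).
    cv::Mat lut(1, 256, CV_8U);
    for (int i = 0; i < 256; ++i)
        lut.at<uchar>(i) =
            cv::saturate_cast<uchar>(255.0 * std::pow(i / 255.0, 0.6));
    cv::LUT(stretched, lut, stretched);

    // Light denoise pass, since amplification also amplifies noise.
    cv::medianBlur(stretched, stretched, 3);
    cv::imwrite("brightened.jpg", stretched);
    return 0;
}
```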
Real (useful) night vision typically uses an infrared light and an infrared-tuned camera. I think you're out of luck.
Of course using the iPhone 4's camera light could be considered "night vision" ...
Your real problem is the camera and not the algorithm.
You can apply algorithms to clarify images, but they won't turn a dark image into a well-lit one by magic ^^
But if you want to try some algorithms you should take a look at OpenCV (http://opencv.willowgarage.com/wiki/); there are some iPhone ports, e.g. http://ildan.blogspot.com/2008/07/creating-universal-static-opencv.html
I suppose there are two ways to refine a dark image: the first is active, which uses infrared; the other is passive, which manipulates the pixels of the image...
The images will be noisy, but you can always try scaling up the pixel values (all of the RGB components, or just the luminance in HSV; either linearly or by applying some sort of curve; either globally or locally to just the darker areas) and saturating them, and/or using a contrast edge enhancement filter.
If the camera and subject matter are sufficiently motionless (tripod, etc.) you could try summing each pixel over several image captures. Or you could do what some HDR apps do, and try aligning images before pixel processing across time.
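A minimal sketch of that frame-summing idea, assuming the captures are already aligned (file names are placeholders):

```cpp
// Sketch: average N captures of a static scene to reduce noise.
// The random noise averages out while the signal does not.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    std::vector<cv::Mat> frames = {
        cv::imread("f0.jpg"), cv::imread("f1.jpg"),  // hypothetical captures
        cv::imread("f2.jpg"), cv::imread("f3.jpg")
    };

    // Accumulate in floating point to avoid 8-bit overflow.
    cv::Mat acc = cv::Mat::zeros(frames[0].size(), CV_32FC3);
    for (const cv::Mat& f : frames) {
        cv::Mat f32;
        f.convertTo(f32, CV_32FC3);
        acc += f32;
    }
    acc /= static_cast<double>(frames.size());

    cv::Mat out;
    acc.convertTo(out, CV_8UC3);
    cv::imwrite("averaged.jpg", out);
    return 0;
}
```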
I haven't seen any documentation on whether the iPhone's camera sensor has a wider wavelength gamut than the human eye.
I suggest conducting a simple test before trying to actually implement this:
Save a photo made in a dark room.
Open in GIMP (or a similar application).
Apply "Stretch HSV" algorithm (or equivalent).
Check if the resulting image quality is good enough.
This should give you an idea as to whether your camera is good enough to try it.
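If the GIMP test looks promising, a rough OpenCV equivalent of "Stretch HSV" might look like this (this sketch only stretches the V channel, whereas GIMP stretches all three, so treat it as an approximation):

```cpp
// Sketch: stretch the V (value) channel of an HSV image to the full
// 0-255 range, leaving hue and saturation alone. File name is a placeholder.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat img = cv::imread("darkroom.jpg"), hsv;
    cv::cvtColor(img, hsv, cv::COLOR_BGR2HSV);

    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);
    cv::normalize(ch[2], ch[2], 0, 255, cv::NORM_MINMAX);  // stretch V
    cv::merge(ch, hsv);

    cv::cvtColor(hsv, img, cv::COLOR_HSV2BGR);
    cv::imwrite("stretched.jpg", img);
    return 0;
}
```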
Are there any tutorials for creating image effects on the iPhone, like glow, paper effect, etc.?
Can anyone tell me where to start?
A glow effect is not supported by default within the iPhone SDK (specifically CoreGraphics). For the paper effect I am not sure what you are looking for.
If you insist on effects not supported by the SDK, you should try to find less platform-specific sources and adapt them to the iPhone:
Glow and Shadow Effects (Windows GDI)
Another possibly great source of effect know-how is the ImageMagick source code.
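A common glow recipe is simply to blur a copy of the image and add it back, so bright areas bleed outward; here is a minimal OpenCV C++ sketch of that idea (not taken from the linked sources - the sigma and blend weight are arbitrary starting points):

```cpp
// Sketch: basic glow via blur-and-add. "photo.jpg" is a placeholder.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("photo.jpg");

    cv::Mat glow;
    cv::GaussianBlur(img, glow, cv::Size(0, 0), 10);  // sigma sets halo size

    cv::Mat out;
    cv::addWeighted(img, 1.0, glow, 0.6, 0, out);  // 0.6 = glow strength
    cv::imwrite("glow.jpg", out);
    return 0;
}
```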
Take a look at this project: http://code.google.com/p/simple-iphone-image-processing/
It includes code for various image effects such as Canny edge detection, histogram equalisation, skeletonisation, thresholding, Gaussian blur, brightness normalisation, connected region extraction, and resizing.
Another, more low-level option is to take a look at ImageMagick or FreeImage, which are more general image processing libraries.