Boundary shape recognition ignoring scaling - MATLAB

I need some shape recognition advice. To be specific, I am data mining photos to recognize the presence of particular shapes, and these shapes may also be connected together, e.g. the widget1 and widget2 I am interested in may be joined by some frame.
These widgets may also be of different sizes, which can cause problems for template-matching techniques. For example, widget1 could be roughly 20x20 pixels in one picture and 100x100 pixels in another, and widget2 can be scaled differently in the same pictures. There can also be issues with pre-processing out some of the labeling/text that may appear on these widgets, so as not to confuse whatever matching technique is used.
Do you have any advice on which areas of image processing I should explore?
In summary, the issues are:
1) identifying known shapes
2) scaling differences can exist in the widgets between the photos
3) labeling on the widgets may confuse the algorithms above, so it should be pre-processed out
Thanks a bunch. If you can give some advice on suggested techniques and resources I should read up on, that would be a great help!

This isn't necessarily a MATLAB or OpenCV question, but I would suggest having a look at MATLAB's Computer Vision Toolbox and browsing the Shape Descriptors section of the OpenCV manual. Also, I realize this isn't a full answer, but there are many solutions to shape and object recognition, and the right method depends heavily on the specifics of your problem.
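If you do go the shape-descriptor route in MATLAB, here is a minimal sketch: binarize a widget image, keep the largest blob, and compute a few descriptors (eccentricity, solidity, circularity) that are invariant to scale, so a 20x20 and a 100x100 widget of the same shape give similar values. The file name and segmentation polarity are assumptions to adapt to your data:

    % Scale-invariant shape features for a binarized widget (sketch).
    % Assumes the widget is the largest dark blob on a light background.
    bw = ~imbinarize(rgb2gray(imread('widget.png')));  % hypothetical input
    bw = bwareafilt(imfill(bw, 'holes'), 1);           % keep the biggest blob
    s  = regionprops(bw, 'Area', 'Perimeter', 'Eccentricity', 'Solidity');
    circularity = 4 * pi * s.Area / s.Perimeter^2;     % 1.0 for a perfect disk
    descriptor  = [s.Eccentricity, s.Solidity, circularity];

Matching each photo's blobs against stored descriptors of widget1 and widget2 (e.g. by nearest neighbor) sidesteps the scaling problem, and filling holes first also suppresses some of the labeling/text inside the widgets.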

Algorithm for ortho-rectification; mosaic-ing aerial images [closed]

I'm working on ways of collecting farm aerial images (images collected from a helicopter, camera perpendicular to the ground) that I want to stitch together to build a photo of the whole area being covered, and then run analytics on it.
I'm assuming the images will come with [latitude, longitude] coordinates, to help me determine where to place each image.
To understand the issues with this technology, I tried manually stitching pictures taken from my phone of a sample area in my back yard. I found that the edges don't usually look the same, because the camera sees them from different sides or angles. I guess this is an image distortion that could potentially be fixed by ortho-rectification (not completely sure).
I quickly created the following picture to help explain my problem.
My question to you:
What are the algorithms/techniques used to do ortho-rectification?
What tools would best suit my needs: opencv, or processing or matlab or any other tool that could easily help in rectification of images and creating a mosaic photo?
What other issues should be considered in doing aerial imagery mosaic-ing and analytics?
Thank you!
Image stitching usually assumes that the camera center is fixed across all photos, and uses homographies to transform the images so that they appear continuous. When the fixed-camera-center assumption is not strictly valid, artifacts/distortions may appear due to the 3D structure of the scene. If the camera center moved only a small distance compared to the relief of the scene, "seamless image blending" techniques may be sufficient to blur out the distortions.
In more extreme cases, ortho-rectification is required. Ortho-rectification (Wikipedia entry) is the task of transforming an image observed from a given perspective camera into an orthographic (Wikipedia entry) and usually vertical point of view. The orthographic property is interesting because it makes the stitching of several images much easier. The following picture from Wikipedia is particularly clear (left is an orthographic or directional projection, right is a perspective or central projection):
The task of ortho-rectification usually requires a 3D model of the scene, in order to map the intensities observed by the perspective camera to their correct locations with respect to the orthographic camera. In the context of aerial/satellite images, Digital Elevation Models (DEMs) are often used for that purpose, but they generally have the serious drawback of not including man-made structures (only Earth relief). NASA freely provides the DEMs acquired by the SRTM missions (DEM link).
Another approach: if you have two images acquired from different positions, you could try a 3D reconstruction using a stereo matching technique, and then generate the ortho-rectified image by mapping the two images as seen by a third, orthographic and vertical, camera.
OpenCV has several interesting functions for this purpose (e.g. stereo reconstruction, image mapping functions, etc.) and might be more appropriate for intensive usage. MATLAB probably has interesting functions as well, and might be more appropriate for quick tests.
First, rectification is a kind of warping, but not the one you need. Regular rectification is used in stereo to ensure that matching points lie on the same row - not your case. Ortho-rectification warps a perspective projection into an orthographic one - again, not your case. Not only do you lack the 3D model required to compute such a warp, you don't need one: your perspective distortions are negligible, and your images are already close to orthographic (when the size of the objects is small compared to the viewing distance, perspective effects are small).
Your problems in aligning two images stem from small camera rotations between shots. To start fixing the problem, ensure that your images actually overlap by, say, 30%. To read about this, see chapter 9 of this book.
What you need is to review regular image-stitching techniques that use a homography to map one image onto another. Note that doing so assumes the scene is essentially flat. To find the homography, you can manually select 4 points in one image and 4 matching points in the other, then run the OpenCV function findHomography(). Note that overlap is required to find the matches (in your picture there is no overlap). warpPerspective() can warp the images for you after the homography is found.
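If you end up doing quick tests in MATLAB rather than OpenCV, the same select-4-points-and-warp experiment can be sketched with the Image Processing Toolbox: fitgeotrans with 'projective' plays the role of findHomography() and imwarp that of warpPerspective(). File names and coordinates below are made up; pick real points by hand (e.g. with cpselect) in the overlapping region:

    % MATLAB analogue of findHomography()/warpPerspective() (sketch).
    fixed  = imread('aerial1.jpg');                    % hypothetical files
    moving = imread('aerial2.jpg');
    movingPoints = [10 10; 200 15; 190 180; 20 170];   % 4 points in aerial2
    fixedPoints  = [50 40; 240 45; 230 210; 60 200];   % their matches in aerial1
    tform  = fitgeotrans(movingPoints, fixedPoints, 'projective');  % homography
    R      = imref2d([size(fixed,1) size(fixed,2)]);
    warped = imwarp(moving, tform, 'OutputView', R);
    imshowpair(fixed, warped, 'blend');                % eyeball the alignment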
"What are the algorithms/techniques used to do ortho-rectification?
If you want a good overview of the techniques, then the book "Multiple View Geometry in Computer Vision" by Hartley & Zisserman might be a good place to start: http://www.robots.ox.ac.uk:5000/~vgg/hzbook/
Andrew Zisserman also has some tutorials available at www.robots.ox.ac.uk/~az/tutorials/, which might be more accessible and make it easier for you to find the particular technique you want to use.
"What tools would best suit my needs: opencv, or processing or matlab or any other tool that could easily help in rectification of images and creating a mosaic photo?"
OpenCV has a fair number of tools available - take a look at Images stitching for starters. There's also a lot available for correcting distortion. However, it doesn't have to be the tool you use; there are others!

Image-based object detection and segmentation

I am currently studying image processing and learning MATLAB for my project.
I need to know if there is any method to detect a car in a traffic or parking-lot image and then segment it out.
I have googled a lot, but most of the content is video-based, and I don't know anything about image processing.
Language preferred: MATLAB
I am supposed to do this on images only, not videos.
It's a very difficult problem in general. I'd suggest that the easier way is to constrain the problem as much as possible: control the lighting, the size and orientation of the cars to detect, and allow no occlusions.
This kind of constraining is the philosophy image processing followed until recently. The trend now is, instead of constraining your problem, to obtain a massive amount of example data and train a supervised learning algorithm. In fact, you may be able to use a pre-trained model that lets you detect cars, as suggested in a previous answer.
There has recently been massive progress in object detection in images; here are a few state-of-the-art approaches based on neural networks:
OverFeat
Rich feature hierarchies for accurate object detection and semantic segmentation (R-CNN paper)
Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition (paper)
Frameworks you could use include:
Caffe: http://caffe.berkeleyvision.org/
Theano
Torch
You can use the detection by parts method:
http://www.cs.berkeley.edu/~rbg/latent/
It contains a trained model for "car" which you can use to detect cars, surround them with bounding boxes, and then extract them from the images.
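Whichever detector you use, once it returns a bounding box the extraction step in MATLAB is just a crop; a minimal sketch with a made-up box:

    % Extract a detected car from the image (sketch; bbox is made up).
    img  = imread('parking.jpg');     % hypothetical image
    bbox = [120 80 64 48];            % [x y width height] from your detector
    car  = imcrop(img, bbox);         % segment the detection out of the image
    imshow(car);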

Shape detection using MATLAB

I am working on car parking system project. For that, I would like to detect the presence of a car.
Can anybody tell me how I can accomplish this using MATLAB?
Also, what is the algorithm for detecting a car?
There's a whole world of methods for object detection in images, and you need to learn a little about image processing to solve this problem. I suggest you read about template matching or, more generally, object recognition. Specifically for car detection, if you know the cars will be seen from a certain angle (head on, for example), I'd try Viola-Jones detection, which is implemented in OpenCV as Haar-feature-based cascade detection. Although OpenCV is not a MATLAB library, you can probably find something in MATLAB's image processing toolboxes that does a similar job (or interfaces with OpenCV).
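For reference, MATLAB's Computer Vision Toolbox does wrap this kind of cascade in vision.CascadeObjectDetector. The shipped models are for faces and people, so the car model below is a hypothetical one you would first train with trainCascadeObjectDetector on labeled car images:

    % Viola-Jones style cascade detection in MATLAB (sketch).
    detector = vision.CascadeObjectDetector('carCascade.xml'); % hypothetical model
    img    = imread('lot.jpg');                                % hypothetical image
    bboxes = step(detector, img);                              % one [x y w h] row per hit
    out    = insertObjectAnnotation(img, 'rectangle', bboxes, 'car');
    imshow(out);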
Background subtraction would be a simple place to start.
In a nutshell:
Capture an image of your empty parking lot. This is your reference image.
Compare the current image of your parking lot with the reference image. The parts that are different will be of interest.
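In MATLAB those two steps might look like this sketch (file names and thresholds are placeholders to tune):

    % Background subtraction sketch: reference vs. current frame.
    ref  = rgb2gray(imread('lot_empty.jpg'));   % hypothetical reference image
    cur  = rgb2gray(imread('lot_now.jpg'));     % hypothetical current image
    d    = imabsdiff(cur, ref);                 % parts that changed
    mask = imbinarize(d, 0.15);                 % fixed threshold; tune per scene
    mask = bwareaopen(mask, 500);               % drop small noise blobs
    stats = regionprops(mask, 'BoundingBox');   % candidate car regions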
Problems:
You need to keep updating your reference image to stay current with the conditions (e.g. day, night, cloudy, raining). Sometimes this may not be possible, because your reference image needs to have no cars in it for the approach to work.
Moving things in the background (like trees shaking in the wind) will show up as false positives.
Have you considered using 3D/stereoscopic imaging in addition to 'normal' images? If so, you could open up a whole new world of methods and intelligent tricks for removing objects based on their distance from the camera. Any object at a certain, fixed distance from the camera (e.g. your background) is then easily removable, and you can process just the new parts of the image (e.g. cars).
If this interests you I can supply you with an algorithm I have developed to detect animals in a livestock pen, which is a similar concept.

iPhone image processing

I am building a night-vision application, but I can't find any useful algorithm to apply to dark images to make them clearer. Can anyone please suggest a good algorithm?
Thanks in advance
With the size of the iPhone lens and sensor, you are going to have a lot of noise no matter what you do. I would practice manipulating the image in Photoshop first; you'll probably find that it is useful to select a white point from a sample of the brighter pixels in the image and to apply a curve. You'll probably also need to run an anti-noise filter and smoother. Edge detection or condensation may let you emphasize some areas of the image. As for specific algorithms to implement each of these filters, there are a lot of computer science books and lists on the subject. Here is one list:
http://www.efg2.com/Lab/Library/ImageProcessing/Algorithms.htm
Once you know the standard name for an algorithm you need, you can find many OpenGL implementations of it.
Real (useful) night vision typically uses an infrared light and an infrared-tuned camera. I think you're out of luck.
Of course using the iPhone 4's camera light could be considered "night vision" ...
Your real problem is the camera and not the algorithm.
You can apply algorithms to clarify images, but they won't make a dark image look like real life by magic ^^
But if you want to try some algorithms, you should take a look at OpenCV (http://opencv.willowgarage.com/wiki/); there are ports, like the one here: http://ildan.blogspot.com/2008/07/creating-universal-static-opencv.html
I suppose there are two ways to refine a dark image: the first is active, which uses infrared; the other is passive, which manipulates the pixels of the image.
The images will be noisy, but you can always try scaling up the pixel values (all of the components in RGB, or just the luminance in HSV; either linearly or by applying some sort of curve; either globally or locally in just the darker areas) and saturating them, and/or using a contrast edge-enhancement filter algorithm.
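As a concrete sketch of the luminance-curve idea in MATLAB (the gamma value and file name are guesses to tune):

    % Lift shadows by boosting only the V (luminance) channel (sketch).
    rgb = im2double(imread('dark.jpg'));   % hypothetical dark photo
    hsv = rgb2hsv(rgb);
    hsv(:,:,3) = hsv(:,:,3) .^ 0.4;        % gamma < 1 lifts dark areas; tune
    hsv(:,:,3) = medfilt2(hsv(:,:,3));     % cheap pass against amplified noise
    imshow(hsv2rgb(hsv));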
If the camera and subject matter are sufficiently motionless (tripod, etc.) you could try summing each pixel over several image captures. Or you could do what some HDR apps do, and try aligning images before pixel processing across time.
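The summing idea is also easy to prototype; a sketch assuming a burst of already-aligned shots named frame_*.jpg in the current folder:

    % Average several captures of a static scene to suppress noise (sketch).
    files = dir('frame_*.jpg');            % hypothetical burst of shots
    acc = 0;
    for k = 1:numel(files)
        acc = acc + im2double(imread(files(k).name));
    end
    avg = acc / numel(files);              % noise falls roughly as sqrt(N)
    imshow(avg);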
I haven't seen any documentation on whether the iPhone's camera sensor has a wider wavelength gamut than the human eye.
I suggest conducting a simple test before trying to actually implement this:
Save a photo made in a dark room.
Open in GIMP (or a similar application).
Apply "Stretch HSV" algorithm (or equivalent).
Check if the resulting image quality is good enough.
This should give you an idea as to whether your camera is good enough to try it.
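If you prefer to script the test instead of using GIMP, a rough MATLAB stand-in for "Stretch HSV" is to contrast-stretch the V channel (the file name is a placeholder):

    % Approximate GIMP's "Stretch HSV": stretch luminance to full range (sketch).
    hsv = rgb2hsv(im2double(imread('darkroom.jpg')));  % hypothetical test shot
    v   = hsv(:,:,3);
    hsv(:,:,3) = imadjust(v, stretchlim(v), []);       % stretch V to [0,1]
    imshow(hsv2rgb(hsv));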

How to morph two images in iPhone programming

How do I morph two images in iPhone programming?
Your question is not iPhone-related: the kind of algorithm you are looking for is language-agnostic, since it just works with images.
By the way, it's quite complex to morph two images; usually you have to:
embed a grid of points over the two images that links the features to be morphed. For example, if you have two faces, you would use a grid that connects the eyes, the mouth, the ears, the nose, the edge of the face and so on: these two grids tell the morpher how to "translate" a point into another while blending the two images
the previous step can be done automatically (with specific software) or by hand; the more points you place, the better your results will be
then you can do the real morphing sequence: basically you do an interpolation between the two images (the parameter you use decides how similar the final result is to the first or the second image)
you should also apply some blending effect to actually create a believable result, always using a parametric function driven by the morphing position
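The pipeline above is language-agnostic, so here is a minimal MATLAB sketch of it: warp both images toward an intermediate point grid, then cross-dissolve at the same morph position. The control points and file names are hypothetical; in practice you place many points on matching features, and piecewise warps give better results than the single projective warp used here:

    % Minimal two-image morph at position t (sketch; inputs same size).
    A = im2double(imread('face1.jpg'));     % hypothetical inputs
    B = im2double(imread('face2.jpg'));
    ptsA = [30 40; 90 42; 60 80; 60 110];   % hand-placed points in A
    ptsB = [34 38; 86 44; 58 84; 62 106];   % their matches in B
    t = 0.5;                                % morph position: 0 = A, 1 = B
    ptsT = (1 - t) * ptsA + t * ptsB;       % intermediate grid
    R = imref2d([size(A,1) size(A,2)]);
    warpA = imwarp(A, fitgeotrans(ptsA, ptsT, 'projective'), 'OutputView', R);
    warpB = imwarp(B, fitgeotrans(ptsB, ptsT, 'projective'), 'OutputView', R);
    imshow((1 - t) * warpA + t * warpB);    % blend at the morph position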
You can use UIView animation to transition from one UIView to another. This should provide some sort of lame morphing.
You can use XMRM, which is written in C++: http://www.cg.tuwien.ac.at/~xmrm/
There is no image morphing API in the iOS SDK.
No, there isn't an API for it. You'll have to do it yourself.
...ask a short question, get a short answer...