How to match markers between two images taken from different perspectives? - matlab

I have a robot fitted with circular markers and two images taken from different perspectives, as shown (the white circular rings are the markers):
I want to match the markers between the two images; by matching I mean that the bottommost marker in the first image should be treated as the correspondence point of the bottommost marker in the second image, and so on.
The finger-like robot shown in the images can bend in any direction in space (it can also bend into a U shape).
If it helps, the camera geometry is fixed and known beforehand.
I am lost: a simple correspondence algorithm will not work, since the perspectives are very different. How should I go about matching the two images?

You can start like this:
You know the position of the mounting point on the base panel for each perspective.
You know the positions of the white rings for each perspective as discussed here.
You can derive the direction of the arm at each ring from the ring's tilt.
So you can easily determine the sequence of positions, starting at the mounting point and stepping from ring to ring, even if the arm is bent. With this you can match the rings from both images. If you have any situation where this fails, please add a corresponding example to your question!
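A minimal sketch of that chaining step in MATLAB, assuming the ring centroids for one view are already available (e.g. from regionprops) along with the mounting-point coordinates; the coordinates below are made-up example data so the snippet runs:

    % Greedy nearest-neighbour chaining of ring centroids, starting at the mounting point.
    centroids  = [52 310; 55 260; 61 212; 70 168; 83 129; 99 95; 118 66; 139 43; 162 27];
    mountPoint = [50 350];                     % known base mounting point in this view

    n = size(centroids, 1);
    order = zeros(n, 1);                       % order(k) = index of the k-th ring along the arm
    remaining = true(n, 1);
    current = mountPoint;
    for k = 1:n
        d = sqrt(sum((centroids - current).^2, 2));   % distances from the current position to all rings
        d(~remaining) = inf;                   % ignore rings already placed in the chain
        [~, idx] = min(d);
        order(k) = idx;
        remaining(idx) = false;
        current = centroids(idx, :);
    end
    disp(order')                               % ring indices ordered from base to tip

Running this once per image gives two index sequences; the k-th ring of one sequence is then matched to the k-th ring of the other, however the arm is bent.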

Unfortunately, you don't have matching points but matching curves. You might try to fit ellipses to the rings and take the ellipse centers as the points to be matched.
This is an approximation, as the center of a circle does not project exactly to the center of the fitted ellipse, but I don't think this will be the major source of error: since you only see half of each circle, the fit will not be that accurate anyway.
If all nine circles remain visible and are ordered vertically, the matching of the centers is trivial. If they are not ordered but don't form a loop, you can probably start from the lowest and follow the chain of nearest neighbors.
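If the rings are segmented as white blobs in a binary mask, regionprops can supply approximate ellipse centres cheaply (it fits the ellipse with matching second moments to each blob, a stand-in for a true ellipse fit); a rough sketch, where the mask name is an assumption:

    % Approximate each ring by an ellipse and keep its centre as a candidate match point.
    % 'ringMask' is assumed to be a binary image in which the white rings are foreground.
    stats = regionprops(ringMask, 'Centroid', 'MajorAxisLength', ...
                        'MinorAxisLength', 'Orientation');
    centers = cat(1, stats.Centroid);       % N-by-2 [x y] ellipse centres

    % Starting point for the nearest-neighbour chain: the lowest ring in the image
    % (largest row coordinate), as suggested above.
    [~, startIdx] = max(centers(:,2));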

Related

Remove small disconnected blobs in OpenCV

I've got the image:
I'd like to remove small blobs like these (not all of them are marked):
Median filtering and erosion don't suit me because they also destroy the thin, line-like edges that I need.
My idea is to move a sliding window of a specified size and check whether there is a contour (blob) that does not touch the window borders, i.e. it fits completely inside the window and therefore should be removed.
Is there an existing algorithm that does this, or do I have to implement the idea above myself (which would probably be less optimized than a library implementation)?
Actually, once we have found the contours, we can circumscribe every contour with a rectangle using cv2.minAreaRect(cnt) and then check whether the width and height of that rectangle exceed our minimum contour size.
All contours (yellow edges) are circumscribed by red rectangles.
The same image, but excluding contours whose circumscribed rectangle has sides smaller than the specified threshold:
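For reference, the same size-based filtering sketched in MATLAB (an axis-aligned bounding box from regionprops stands in for the rotated rectangle of cv2.minAreaRect, and the threshold is only an example):

    % Drop blobs that would fit entirely inside a small window, keep long, line-like ones.
    % 'bw' is assumed to be the binary image containing the blobs/edges.
    minSide = 15;                               % example threshold, tune for your image
    cc = bwconncomp(bw);
    stats = regionprops(cc, 'BoundingBox');
    keep = false(cc.NumObjects, 1);
    for i = 1:cc.NumObjects
        box = stats(i).BoundingBox;             % [x y width height], axis-aligned
        keep(i) = box(3) >= minSide || box(4) >= minSide;   % keep if long in at least one direction
    end
    cleaned = ismember(labelmatrix(cc), find(keep));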

Adjacent irregularly shaped images

Is it possible to have irregularly shaped images positioned adjacent to each other, where each individual image is clickable within its own boundaries?
For example, if I had a map of the US and I want to click each state and have a separate segue for each:
(https://upload.wikimedia.org/wikipedia/commons/thumb/a/a5/Map_of_USA_with_state_names.svg/2000px-Map_of_USA_with_state_names.svg.png)
I appreciate any tips/pointers in the right direction. Thanks!
Whether the map is really a bunch of irregularly shaped images or just one image is immaterial (the latter will be easier). You can just define separate UIBezierPath objects that outline each of the states, and then use the UIBezierPath method containsPoint to determine whether a tap point falls within the respective state.
Frankly, you might consider how much accuracy you really need. For example, if you are looking at a map of the US at continental scale, you don't need extremely accurate bezier paths. Often a simple irregular polygon can approximate the boundaries and is more than sufficient for hit tests.
In fact, you might sometimes deliberately use a much bigger bezier path. For example, you might draw a single path that goes around all of the Hawaiian islands, with some leeway, so that you don't have to tap right on an actual island, just somewhere close. Or, for Rhode Island, you might allow a tap on the text "Rhode Island" as well as the state itself.
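Stripped of the UIKit specifics, the hit test is plain point-in-polygon. As a language-agnostic illustration (sketched here in MATLAB with inpolygon; the polygon and tap coordinates are made up), the check that containsPoint performs boils down to:

    % Generic point-in-polygon hit test, analogous to UIBezierPath's containsPoint.
    stateX = [0 4 5 3 1];                 % toy polygon standing in for a traced state outline
    stateY = [0 0 3 5 2];
    tapX = 2; tapY = 2;                   % the tapped point
    if inpolygon(tapX, tapY, stateX, stateY)
        disp('Tap landed inside this state -> trigger its segue.');
    end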

MATLAB image processing of small circles

I have an image which looks like this:
I have a task in which I should circle all the bottles around their openings. I created a simple algorithm and started working on it. My algorithm is as follows (a rough code sketch of these steps appears after the list):
Threshold the original image
Do some morphological opening in it
Fill the empty holes
Select regions using regionprops such that only areas comparable to the mouth of a bottle are kept.
Find the centroid of each remaining region and draw a circle around each bottle.
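A rough sketch of that pipeline in MATLAB; the file name, structuring-element size and area bounds are placeholders that would need tuning:

    % Steps 1-5: threshold, open, fill holes, keep mouth-sized regions, draw circles.
    img   = imread('bottles.png');                 % placeholder file name
    bw    = imbinarize(rgb2gray(img));             % 1. threshold the original image
    bw    = imopen(bw, strel('disk', 3));          % 2. morphological opening
    bw    = imfill(bw, 'holes');                   % 3. fill the empty holes
    stats = regionprops(bw, 'Area', 'Centroid');   % 4. keep regions about the size of a bottle mouth
    areas = [stats.Area];
    keep  = areas > 200 & areas < 2000;            %    example area bounds
    centers = cat(1, stats(keep).Centroid);        % 5. centroids of the kept regions ...
    radii   = sqrt(areas(keep)' / pi);             %    ... with an equivalent-area radius
    imshow(img); hold on;
    viscircles(centers, radii, 'Color', 'g');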
I followed the algorithm above, but I end up with some extra portions of the image around which a circle is drawn. This is because I selected regions by area, and the area of the bottle mouths and of the remaining noise is almost the same. So I ended up with a figure like this.
The processing applied to the image looks like this:
And my final image after plotting the circle over the original image is like this:
I think I can deal with the extra circle, which is there because some white portion of the image remains, as shown in figure 2 below. That could be filtered out using regionprops with eccentricity. Is that a good idea, or are there other approaches? And how would I deal with the bottles behind the glass and select them as well?
Nice example images you provide for your question!
One thing you can use to detect the remaining bottles (if there are any) is the well-defined structure of the bottle placement.
The 4-by-5 grid of bottles should be relatively easy to locate, and once the grid is located you can test whether a bottle is detected at each expected bottle location.
With respect to the extra detected bottle, you can use shape features like
eccentricity,
the first Hu moment
the ratio of the perimeter squared to the area (which is minimized for a circle); details here
If you are able to detect the grid, it should be easy to locate the extra detection as an outlier (far from any expected bottle location) and discard it accordingly.
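A rough sketch of that grid check, assuming the grid is roughly axis-aligned in the image, the outermost bottles were detected, and the detected mouth centroids are already in an N-by-2 matrix called centers:

    % Exploit the known 4-by-5 layout to flag grid cells with no nearby detection.
    nCols = 5; nRows = 4;
    xs = linspace(min(centers(:,1)), max(centers(:,1)), nCols);
    ys = linspace(min(centers(:,2)), max(centers(:,2)), nRows);
    [gx, gy] = meshgrid(xs, ys);
    expected = [gx(:), gy(:)];                          % 20 expected mouth positions
    tol = 0.4 * (xs(2) - xs(1));                        % slack per cell, an arbitrary choice
    for k = 1:size(expected, 1)
        d = sqrt(sum((centers - expected(k,:)).^2, 2));
        if min(d) > tol
            fprintf('No bottle detected near grid cell at (%.0f, %.0f)\n', expected(k,:));
        end
    end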
Good luck with your project!
I've used the same approach as midtiby's third suggestion, the ratio between area and perimeter known as the shape factor:
shape factor = 4π * Area / Perimeter^2
to detect circles in a contour-traced image (obtained from the thresholded image), with great success;
http://www.empix.com/NE%20HELP/functions/glossary/morphometric_param.htm
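A sketch of that shape-factor filter in MATLAB, assuming the cleaned binary image of candidate regions is available as bw; the 0.8 cutoff is only an example:

    % Keep only blobs whose shape factor 4*pi*Area/Perimeter^2 is close to 1 (a circle).
    stats  = regionprops(bw, 'Area', 'Perimeter', 'Centroid');
    areas  = [stats.Area]';
    perims = [stats.Perimeter]';
    shapeFactor = 4 * pi * areas ./ (perims .^ 2);   % 1 for a perfect circle, smaller otherwise
    isCircular  = shapeFactor > 0.8;                 % example cutoff, tune on your images

    % Draw circles around the blobs that passed the test.
    centers = cat(1, stats(isCircular).Centroid);
    radii   = sqrt(areas(isCircular) / pi);          % equivalent-area radius
    imshow(bw); hold on;
    viscircles(centers, radii, 'Color', 'r');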
Regarding the 4 undetected bottles, this is rather tricky without some a priori knowledge of what you're looking at (as discussed, using the 4 x 5 grid and then looking from the centre of each cell). I did think that, from the list of contours, most would be bottle tops (which you can test using the shape factor), whereas one would be a large rectangle. If you could find the extremities of that rectangle (the largest contour by area) and remove it from the third image, you'd be left with partial circles. Contour-tracing those partial circles and using a mixture of shape factor, curve detection etc. may then help. And yes, good luck again!

Is there a way to figure out 3D distance/view angle from a 2D environment using the iPhone/iPad camera?

Maybe I'm asking this too soon in my research, but I'd better know whether this is possible sooner rather than later.
Imagine I have the following square printed on a paper on top of a table:
The table is brown, so it does not match any of the colors in the square. Is there a way for me, using a common iPhone camera (non-stereo view), to figure out the distance and angle from which I'm looking at the square on the table?
In the end, what I'm looking for is to be able to draw a 3D square on top of this one using the camera image, but I'm not sure whether I can figure out the distance and position of the object in space using only a 2D image. Any hints are much appreciated.
Short answer: http://weblog.bocoup.com/javascript-augmented-reality
Big answer:
First posterize, then vectorize. With the vectors in hand, you may need to do some math tricks to derive, from the vector positions, the perspective and then the camera position.
Maybe these help:
www.pixastic.com/lib/docs/actions/posterize/
github.com/selead/cl-vectorizer
vectormagic.com/home
autotrace.sourceforge.net
www.scipy.org/PyLab
raphaeljs.com/
technabob.com/blog/2007/12/29/video-games-get-vectorized/
superuser.com/questions/88415/is-there-an-open-source-alternative-to-vector-magic
It ought to be possible. Scan the image for the red/blue/yellow pattern, then do edge detection to figure out how warped the squares are (they'll be parallelograms in anything but a straight-on view). Distance would depend on the camera's zoom setting and scan resolution, but basically you'd count how many pixels are visible in each of the squares, run that past the camera's specs, and you should be able to determine a rough distance.
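As a back-of-the-envelope sketch of that estimate (pinhole model; every number below is a made-up example, and the focal length in pixels would come from the camera specs or a calibration):

    % Pinhole-camera distance estimate from the apparent size of a known square:
    % distance = focalLengthPx * realSize / apparentSizePx
    focalLengthPx  = 1500;    % focal length expressed in pixels (example value)
    realSizeMeters = 0.10;    % the printed square is assumed to be 10 cm wide
    apparentSizePx = 220;     % measured width of the square in the image, in pixels
    distanceMeters = focalLengthPx * realSizeMeters / apparentSizePx;
    fprintf('Estimated camera-to-square distance: %.2f m\n', distanceMeters);

    % The viewing angle is encoded in how the square is sheared into a parallelogram;
    % fitting a projective transform to its four detected corners recovers that warp.
    squareCorners = [0 0; 1 0; 1 1; 0 1] * realSizeMeters;   % square in its own plane (metres)
    imageCorners  = [410 300; 630 310; 600 520; 395 505];    % detected corners (example pixels)
    tform = fitgeotrans(squareCorners, imageCorners, 'projective');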

Problem drawing a polygon on data clusters in MATLAB

I have some data points that I have divided into clusters with a clustering algorithm, as in the picture below (it might take some time for the image to appear):
http://www.freeimagehosting.net/uploads/05a807bc42.png
Each color represents a different cluster. I have to draw polygons around each cluster. I use convhull for this, but as you can see the polygon for the red cluster is very big and covers a lot of empty area, which is not what I am looking for. I need to draw lines (polygons) that follow my data sets exactly. For example, in the picture above I want a polygon drawn tightly around the red cluster, following its 3 branches, rather than the big convex polygon that covers the whole area. Can anyone help me with this?
Please note that the solution should be general, because the clusters change with each run of the algorithm.
I am not sure this is a fully specified question; I see variants of it come up quite often.
Here is why it cannot really be answered as posed: imagine six points, three forming an equilateral triangle, with another three forming a smaller equilateral triangle inside it in the same orientation.
What is the correct hull around this? Is it just the convex hull? Is it the inner triangle with three line spurs coming out from it? Does it matter what the relative sizes of the triangles are? Should you have to specify that parameter then?
If your clusters are very compact, you could try the following:
Create a grid, say with a spacing of 0.1.
Set every pixel in the grid to 1 if there is at least one data point covering it, and to 0 otherwise.
You may need to run imclose on your mask in order to fill little holes inside that have not been colored due to sheer bad luck.
Extract the border pixels using, e.g., bwperim. This is the outline of the polygon you're looking for; a short sketch of these steps follows below.
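A sketch of those steps in MATLAB, assuming the points of one cluster (say the red one) are in an N-by-2 matrix redPoints; the grid spacing and structuring-element size are example values:

    % Rasterize the cluster onto a grid, close small gaps, and extract the outline.
    spacing = 0.1;                                         % grid spacing, as suggested above
    xEdges = min(redPoints(:,1)) : spacing : max(redPoints(:,1)) + spacing;
    yEdges = min(redPoints(:,2)) : spacing : max(redPoints(:,2)) + spacing;

    % Mark every grid cell that contains at least one data point.
    counts = histcounts2(redPoints(:,1), redPoints(:,2), xEdges, yEdges);
    mask = counts' > 0;                                    % transpose so rows correspond to y

    % Fill small holes that are empty only by bad luck, then take the border pixels.
    mask = imclose(mask, strel('disk', 2));                % structuring-element size is a guess
    outline = bwperim(mask);                               % outline of the cluster shape

    imagesc(xEdges, yEdges, outline); axis xy equal tight; % visualize the extracted outline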