Detect the position, orientation and color in MATLAB of non-overlapped tiles to be picked by a robot - matlab

I am currently working on a project where I need to find the square tiles in a pile that are not overlapped,
and to determine the orientation, position (center), and color of each of those tiles.
The orientations and positions will be used as input for a robot, which will pick the tiles
and sort them into specific locations. I am using MATLAB and I have to transfer the data over TCP/IP.
I've been experimenting with edge detection (Canny, Sobel), boundary tracing, and segmentation
using thresholding and FCM, but I haven't found a reliable way to determine which tiles are
not overlapped. I am trying to use template/shape matching, but I don't know how to do that.
This needs to run in real time, as the frames come from a USB camera attached to the PC.
Could someone offer a reliable way to detect the square tiles that are not overlapped?
Here is a sample image.

You've separated the image into tiles and background. So now simply label all the connected components. Take each one and test for single-tile-ness. If you know the approximate size of the tiles, first exclude components by area. Then calculate the centroid and the extreme left, right, top and bottom points. If the component is a single tile, the lines joining top-bottom and left-right will intersect approximately at the centroid, and will run perpendicular to the tile edges. So rotate the component to axis-align it, take the bounding box, and count the unset pixels inside it, which should be almost zero for a rectangular tile.
(You'll probably need to do a morphological operation or two to clean up the images if the tile / background separation is a bit dicey).
Check out the binary image processing library: http://malcolmmclean.github.io/binaryimagelibrary
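A minimal MATLAB sketch of that test, assuming you already have a binary tile/background image bw; the expected tile area (in pixels) and the 25% tolerance are placeholder values you would calibrate for your camera and tile size:
% bw: binary image, tiles = 1, background = 0 (from your own segmentation)
CC    = bwconncomp(bw);
stats = regionprops(CC, 'Area', 'Centroid', 'Orientation', 'Solidity');
expectedArea = 2000;                     % assumed pixel area of one 2.5 x 2.5 cm tile
keep = abs([stats.Area] - expectedArea) < 0.25 * expectedArea & ...  % one tile, not a clump
       [stats.Solidity] > 0.95;                                      % solid, convex blob
centers = vertcat(stats(keep).Centroid);   % [x y] per isolated tile, for the robot
angles  = [stats(keep).Orientation];       % tile angle in degrees (can be unstable for near-perfect squares)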

Thanks for your quick reply. I already did some morphological operations and found the connected components; below is my MATLAB code. Each tile is 2.5 x 2.5 cm.
a = imread('origenal image.jpg');
I = rgb2gray(a);
imshow(I)
threshold = graythresh(I);        % Otsu threshold level
bw0 = im2bw(I, threshold);        % binarize before the morphology
se1 = strel('diamond', 2);
I1 = imerode(bw0, se1);           % remove small specks
figure(1)
imshow(I1);
bw = imclose(I1, ones(25));       % close gaps inside the tiles
imshow(bw)
CC = bwconncomp(bw);              % label connected components
L = labelmatrix(CC);

Related

How to find the distance between black points in an image using image processing

How to find the distance between black points in an image using image processing? The image is taken by a web camera and is a snapshot of a moving belt covered by white paper with black dots.
There are lots of ways of doing it.
Firstly you need to identify the dots. Use Otsu thresholding to separate foreground from background. Then convert to binary, and label the connected components. Eliminate everything that is smaller or larger than a threshold, or anything that isn't roughly circular.
Then you get a list of detections per frame, so you need a blob-following algorithm. Eliminate any stationary blob (not on the paper).
Finally, output the distances based on the blob identifications.
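A minimal MATLAB sketch for a single frame, assuming dark dots on white paper; 'frame.png', the size limits, and the eccentricity cut-off are placeholders, and blob-following across frames is left out:
% Otsu threshold, keep roughly circular dots, then compute pairwise centroid distances
I  = rgb2gray(imread('frame.png'));        % placeholder frame from the camera
bw = ~im2bw(I, graythresh(I));             % dots are dark, so invert after Otsu
bw = bwareaopen(bw, 20);                   % drop tiny specks (assumed minimum size)
stats = regionprops(bw, 'Area', 'Centroid', 'Eccentricity');
keep  = [stats.Area] < 500 & [stats.Eccentricity] < 0.8;   % assumed dot size/shape limits
pts   = vertcat(stats(keep).Centroid);     % N x 2 dot centroids in pixels
n = size(pts, 1);
D = zeros(n);                              % pairwise distance matrix in pixels
for i = 1:n
    for j = i+1:n
        D(i,j) = norm(pts(i,:) - pts(j,:));
        D(j,i) = D(i,j);
    end
end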

How to get the region shown in the image?

I want to get the red region specified in the image below.
Remember that the red region shown in the image is just for clarification; it is not present in the original image. The original image is attached below.
I also have the iris point in this region (I already have that point); if it can help, I can share that image too.
Can someone help me with this?
For this specific image, let's call it BW, you can find the center region as:
BWnoBorder= imclearborder(BW); %# remove the white that touches the border
OnlyCenter = bwareaopen(BWnoBorder,1000); %# remove all small pixel areas
A more robust method might be an active contour (snake) or region-growing algorithm.
It seems like you thresholded an eye illuminated by IR or something similar. To answer your question (or even to ask it correctly) you have to show a number of images, so the stability and noise in the eye-socket region can be evaluated. Otherwise one can come up with a solution that works for the image above but not in general.
For example, I can invert your image, get the largest connected component (the dark region) and erode it until it becomes thin, see below. It is easy to fit an ellipse to this binary mask, but will it work in the general case with your noisy input?
A good place to start is to state what you expect to find. Say you are looking for an eye, a dark surrounding area and a bright skin tone - model them as three mixture components fitted simultaneously in EM fashion. Provide some shape priors to increase accuracy, and think about other visual cues such as specks on the iris, saccades, FA from blinks, etc.
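A minimal sketch of that invert / largest-component / erode idea in MATLAB, reusing the thresholded image BW from above; the disk radius is an assumed value to tune:
% Invert, keep the largest connected component, thin it, then fit an ellipse
bw    = ~BW;                                   % dark region becomes foreground
CC    = bwconncomp(bw);
areas = cellfun(@numel, CC.PixelIdxList);
[~, biggest] = max(areas);                     % index of the largest dark region
mask  = false(size(bw));
mask(CC.PixelIdxList{biggest}) = true;
thin  = imerode(mask, strel('disk', 5));       % radius 5 is an assumption
ell   = regionprops(thin, 'Centroid', 'MajorAxisLength', 'MinorAxisLength', 'Orientation');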

How to detect any 4-sided polygon in an image and adjust it to a rectangle?

For a TV screen recognition project, I need to clip the TV screen from an image.
The TV screen is actually a rectangle, but it is obviously distorted in the image from the phone camera. My questions are:
How to detect the 4-sided polygon (it's not a rectangle) in the image?
Once I know the polygon area in the image, how do I extract that area into a Mat?
After solving question 2, how do I convert the 4-sided-polygon Mat into a rectangular Mat with a fixed W/H ratio?
It would be very helpful to have some sample code for reference.
Thanks for your answers!
If you want to detect the edges of your TV screen you can use some edge
detection (like Canny) and then use the Hough transform to obtain the lines.
If you then extract the points corresponding to the intersections of the lines
you can create a homography matrix H (3x3). Finally, using this homography you can
"deform" your original image to a reference frame (in our case the rectangle
with a given aspect ratio). The homography is a transformation from plane
to plane, so it's exactly what you need here.
If you're going to use OpenCV (which is always a good choice!),
here are the functions that you could use:
Canny() - find edges in the image
HoughLines() - detect lines
findHomography() - this function finds the homography matrix from a set of
correspondences. In your case, you will need to pass the method as 0.
warpPerspective() - the function you're going to use to "deform"
the image to a reference frame.
Obviously, you can find similar functions for MATLAB and others...
I hope this helps you.
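Since the rest of this thread is MATLAB-based, here is a rough MATLAB equivalent of the same pipeline; it assumes the four screen corners have already been found (e.g. from the Hough-line intersections), and the file name, corner coordinates, and output size are placeholders:
% Rectify a quadrilateral screen to a fixed-ratio rectangle (Image Processing Toolbox)
img     = imread('tv.jpg');                          % placeholder image file
corners = [120 80; 520 60; 560 340; 100 360];        % example detected corners [x y], ordered TL,TR,BR,BL
W = 640; H = 360;                                    % assumed target size (16:9)
target  = [1 1; W 1; W H; 1 H];                      % corresponding rectangle corners
tform   = fitgeotrans(corners, target, 'projective');% homography from the correspondences
screen  = imwarp(img, tform, 'OutputView', imref2d([H W]));   % rectified screen image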

How to track the center point of the moving feature in the given picture (preferably using MATLAB)?

Suggest a method/algorithm to track the center point of the feature;
the feature is part of a video. As the video plays, the feature keeps moving around but never leaves the rectangle of the size shown in the figure.
I wish to track the center point over the duration of the video.
*The red point is not part of the image; I have overlaid it to show the center point I wish to track.
A very simple way:
create an image with the pattern to recognize
do cross-correlation along X and Y with your frames
select the peaks of the X and Y correlation signals to identify position
There must be a lot of material around; start here: http://en.wikipedia.org/wiki/Video_tracking
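A minimal sketch of that cross-correlation idea in MATLAB, using normxcorr2 for 2-D normalized cross-correlation; the template and video file names are placeholders:
% Locate a template in each frame by normalized cross-correlation and record its center
template = rgb2gray(imread('feature.png'));        % placeholder: the pattern to recognize
v = VideoReader('video.avi');                      % placeholder video file
centers = [];
while hasFrame(v)
    frame = rgb2gray(readFrame(v));
    c = normxcorr2(template, frame);               % correlation surface
    [~, idx] = max(c(:));
    [peakY, peakX] = ind2sub(size(c), idx);        % peak = bottom-right of the best match
    % Convert the peak position to the template center in frame coordinates
    centers(end+1, :) = [peakX - size(template,2)/2, peakY - size(template,1)/2]; %#ok<AGROW>
end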
Try using vision.PointTracker in the Computer Vision System Toolbox.

MATLAB image processing of small circles

I have an image which looks like this:
I have a task in which I should circle all the bottles around their openings. I created a simple algorithm and started working on it. My algorithm is as follows:
Threshold the original image
Do some morphological opening in it
Fill the empty holes
Separate the portion of the image using region props such that only the area equivalent to the mouth of the bottles is selected.
Find the centroid of each region and draw a circle around each bottle.
I followed the algorithm above, but some extra portion of the image also gets circled. This is because the area of the remaining noise is almost the same as the area of a bottle mouth, so I end up with a figure like this.
The processing applied to the image looks like this:
And my final image, after plotting the circles over the original image, looks like this:
I think I can deal with the extra circle, which is caused by some white portion of the image that remains, as shown in figure 2 below; this can be filtered out using regionprops with eccentricity. Is that a good idea, or are there other approaches? And how would I deal with the other bottles behind the glass and select them?
Nice example images you provide for your question!
One thing you can use to detect the remaining bottles (if there are any) is the well-defined structure of the bottle placement.
The 4-by-5 grid of bottles should be relatively easy to locate, and once the grid is located you can test whether a bottle is detected at each expected bottle location.
With respect to the extra detected bottle, you can use shape features like
eccentricity,
the first Hu moment,
the ratio of the perimeter length squared to the area (which is minimized for a circle).
If you are able to detect the grid, it should be easy to identify the extra detection as an outlier (far from any expected bottle location) and discard it accordingly.
Good luck with your project!
I've used the same approach as midtiby's third suggestion, the ratio between area and perimeter known as the shape factor:
4π * Area / perimeter^2
which I used to detect circles from a contour-traced image (derived from the thresholded image) with great success:
http://www.empix.com/NE%20HELP/functions/glossary/morphometric_param.htm
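As a rough MATLAB illustration of that shape factor test; bwThresholded is a hypothetical name for your thresholded binary image, and the 0.85 cut-off is an assumed value to tune:
% Keep only roughly circular blobs: 4*pi*Area / Perimeter^2 is 1 for a perfect circle
stats = regionprops(bwThresholded, 'Area', 'Perimeter', 'Centroid');
shapeFactor = 4 * pi * [stats.Area] ./ [stats.Perimeter].^2;
isCircle    = shapeFactor > 0.85;             % assumed circularity threshold
mouthCenters = vertcat(stats(isCircle).Centroid);   % centers of the detected bottle mouths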
Regarding the 4 unfound bottles, this is rather tricky without some a priori knowledge of what it is you're looking at (as discussed using the 4 x 5 grid, then looking from the centre of each cell). I did think that from the list of contours, most would be of the bottle tops (which you can test using the shape factor stuff), however, one would be of a large rectangle. If you could find the extremities of the rectangle (from the largest contour in terms of area), then remove it from the third image, you'd be left with partial circles. If you then contour traced those partial circles and used a mixture of shape factor/curve detection etc. may help? And yes, good luck again!