I need some help with corner detection.
I printed a checkerboard and captured an image of it with a webcam. The problem is that the webcam has a low resolution, so it does not find all the corners. I therefore increased the number of corners to search for. Now it finds all the corners, but several different points for the same physical corner.
All points are stored in a matrix, so I don't know which element belongs to which corner.
(I cannot use the checkerboard function because it is not available in my MATLAB version.)
I am currently using the MATLAB function corner.
My question:
Is it possible to search for the extrema of all the point clouds to get one point for each corner? Or does somebody have an idea what else I could do? Please see the attached photo.
Thanks for your help!
Looking at the image my guess is that the false positives of the corner detection are caused by compression artifacts introduced by the lossy compression algorithm used by your webcam's image acquisition software. You can clearly spot ringing artifacts around the edges of the checkerboard fields.
You could try two different things:
Check in your webcam's acquisition software whether you can disable the compression or switch to a lossless compression.
Working with the image you already have, you could try to alleviate the impact of the compression by binarising the image using a simple thresholding operation (which, in the case of a checkerboard, would not even mean losing information, since the image is intrinsically binary).
In case you want to go for option 2), I would suggest the following steps (a short sketch putting them together follows the list). Let's assume the variable storing your image is called img:
Look at the distribution of grey values using e.g. the imhist function, like so: imhist(img)
Ideally you would see a clean bimodal distribution with no overlap. Choose an intensity value I in the middle between the two peaks.
Then simply binarize by assigning img(img < I) = 0; img(img >= I) = 255 (assuming img is of type uint8).
Then run the corner algorithm again and see if the outliers have disappeared.
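A minimal sketch of these steps; the threshold value 128 is only a placeholder to be read off your histogram:

imhist(img);                    % step 1: inspect the grey-value histogram
I = 128;                        % assumed value between the two peaks
img(img < I) = 0;               % step 2: binarize (img is uint8)
img(img >= I) = 255;
C = corner(img);                % step 3: re-run the corner detector
imshow(img); hold on;
plot(C(:,1), C(:,2), 'r*');     % overlay detected corners to check outliers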
I have two images of the same shoe sole, one taken with a scanning machine and another with a digital camera. I want to scale one of the images so that it can be easily aligned with the other without having to do it all by hand.
My thought was to use edge detection, connect all the points on the outside of the shoe, scale one image to fit right inside the other, and then scale the original image at the same rate.
I've messed around using different tools in the Image Processing Toolbox in MATLAB, but am making no progress.
Is there a better way to go about this?
My advice would be to first use the function activecontour to obtain the outer contour of the shoe in both images, and then use the function procrustes with the binary images as input.
[~, CameraFittedToScan] = procrustes(Scan,Camera);
This transforms the camera image to best fit the scanned image. If the scan and camera images are not the same size, this needs to be adjusted first using the function imresize.
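A minimal sketch of that pipeline; scanImg, camImg and the rectangular seed masks are illustrative assumptions, not fixed names:

maskS = false(size(scanImg)); maskS(10:end-10, 10:end-10) = true;
Scan = activecontour(scanImg, maskS, 300);      % outer contour of the scan
maskC = false(size(camImg)); maskC(10:end-10, 10:end-10) = true;
Camera = activecontour(camImg, maskC, 300);     % outer contour of the photo
Camera = imresize(Camera, size(Scan));          % sizes must match first
[~, CameraFittedToScan] = procrustes(double(Scan), double(Camera));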
Two cameras take two images of a wooden plank. The images have an overlapping region of the plank, which I need to stitch together so that the result looks natural and preferably seamless to the human eye, for inspection purposes. The images are cropped to the same size and masked to remove the background and most of the non-overlapping areas, but the plank can have a slight tilt on the conveyor belt.
Currently I'm using the normxcorr2 function on the general overlap area, following the ideas from the MATLAB tutorial for normxcorr2, to try to locate one of the images in the other and work out an overlay offset. However, this fails quite often, as normxcorr2 returns a zero offset, resulting in bad stitching:
c = normxcorr2(plank_part1,plank_part2);
Find the peak in the cross-correlation:
[ypeak, xpeak] = find(c==max(c(:)));
Account for the padding that normxcorr2 adds (the template here is plank_part1, not the onion image from the tutorial):
yoffSet = ypeak-size(plank_part1,1);
xoffSet = xpeak-size(plank_part1,2);
[xoffSet,yoffSet]
ans =
0 0
It would seem normxcorr2 cannot always find the correct overlay of the images, or any overlay at all(?), even though I try to make it easier by increasing the grayscale contrast with the function histeq. My guess is that the amount of "gray-ish" area from the sapwood overwhelms the distinct knots, which are the important parts to stitch properly.
Does anyone know a way to increase the likelihood that this stitching succeeds, maybe through some more preprocessing, or any other MATLAB functions that would make this work better?
P.S. I cannot use anything but freely accessible scripts, as anything else would probably cause license/copyright issues for my project.
Thank you for your time in trying to help!
You should look at the following link. The term you should be looking for is image registration. There are more advanced methods than normxcorr2.
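For example, a sketch of phase-correlation-based registration with imregcorr (assuming the Image Processing Toolbox and the variable names from the question):

tform = imregcorr(plank_part1, plank_part2, 'translation'); % phase correlation
Rfixed = imref2d(size(plank_part2));
registered = imwarp(plank_part1, tform, 'OutputView', Rfixed); % align to part2
imshowpair(plank_part2, registered, 'blend');   % visual check of the overlap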
The project is about measuring different objects under the Kinect 2. The image acquisition code sample from the SDK was adapted to save the depth information over its whole range, without limiting the values to 0-255.
The Kinect is mounted on a beam hoist and moved in a straight line over the model. At every stop, multiple images are taken, the mean is calculated, and the error correction is applied. Afterwards the images are put together to form one big depth map instead of multiple depth images.
To limit the influence of the noisy edges, the images are reduced to a size of 350x300 pixels. For the moment, the test is done with three images to be put together. As in the final program, I know in which direction the images are taken. Due to the beam hoist there is no rotation, only translation.
In MATLAB the images are saved as matrices with depth values ranging from 0 to 8000. As I could only find ideas on how to treat images, the depth maps are turned into images via a figure with a colorbar; only the color part is saved and fed into the stitching script, i.e. not the axes and the grey border around the image.
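For reference, the conversion from such a depth matrix to a grayscale image can also be done directly, skipping the colorbar figure; a sketch, where depthMap stands for one of the 350x300 matrices:

gray = mat2gray(depthMap, [0 8000]);   % scale the depth range to [0, 1]
img = im2uint8(gray);                  % 8-bit grayscale image for stitching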
The stitching algorithm doesn't work. It seems to me that the grayscale images don't have enough contrast for the algorithms to work with. Maybe I am just searching in the wrong direction?
Any ideas on how to treat this challenge?
In an image I need to find a "table", which is a simple rectangle.
The problem is the edge recognition, because the potential photos will be dark.
I tried edge detection (Sobel, Canny, LoG, ...) followed by a Hough transform and line finding, but these algorithms are not good enough for this task.
Some things that can help me:
- it is a rectangle, only seen in perspective (something like fitting a perspective rectangle?)
- the object MUST cover at least, for example, 90% of the photo (so I know I need to look near the photo edges)
- the rectangle has an almost uniform color (for example a wooden dining table)
- I need to find at least "only" the 4 corners (though finding the edges of the table would be even better)
I know how, for example, the Sobel, Canny or LoG algorithms work, and the Hough transform as well. Naturally those algorithms fail on dark or low-contrast images. But is there another method, for example one based on "fitting"?
Images showing the kind of photo I can get (you can see it would be dark) and what I need to find:
And this is a really "nice" picture (without noise). I tested it on noisier pictures and the result was simply horrible.
Result for this picture with the current algorithm, LoG (with the other ones it looks the same):
I know image and edge recognition is not a simple challenge, but are there some newer, better methods or anything else I could try?
In one of the posts here I found the LSD algorithm. It seems very nicely described, and it seems to recognize straight lines really well. Do you think it would be better to use it instead of Canny or Sobel detection?
Another solution would be corner detection; on my sample images it works better, but it recognizes too many points and there would be a problem with time, since I would need to connect all the points and "find" the table.
Another solution:
I thought about point-to-point mapping: I would have some "virtual" table and try to map the photographed table onto that "virtual" one (a simple 2D square drawn in a paint program). But I think point-to-point mapping would give me big errors, or it would not work at all.
Does someone have advice on which algorithm to use?
I tried recognizing the edges in FIJI and then putting the edge-detected image into MATLAB, but with Hough it works badly as well.
What do you think would be best to use? In short, I need an algorithm that works on low-contrast, dark images.
I'd try a modified snakes algorithm:
You parameterize your rectangle with 4 points and initialize them somewhere near the image corners. Then you move the points towards image features using some optimization algorithm (e.g. gradient descent, simulated annealing, etc.).
The image features could be a combination of edge features (e.g. Sobel directly, or Sobel of a Gaussian-filtered image) to be evaluated on the lines between those four points, and corner features to be evaluated at those 4 points.
Additionally you can penalize unlikely rectangles (maybe depending on the angles between the points or on the distance to the image boundary).
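A sketch of this idea, with made-up function and variable names; img is a grayscale double image, and a penalty term for unlikely rectangles could be added to the cost:

function pBest = fitRectangleSnake(img)
% Optimize 4 corner points so the rectangle edges lie on strong gradients.
    Gmag = imgradient(img);                    % Sobel-based edge map
    [h, w] = size(img);
    p0 = [10 10; w-10 10; w-10 h-10; 10 h-10]; % initialize near image corners
    cost = @(p) -edgeScore(Gmag, reshape(p, 4, 2));
    pBest = reshape(fminsearch(cost, p0(:)), 4, 2);
end

function s = edgeScore(Gmag, p)
% Mean gradient magnitude sampled along the four sides of the rectangle
% defined by the corner points p (4x2, one [x y] per row).
    s = 0;
    for k = 1:4
        q = p(mod(k, 4) + 1, :);               % next corner, wrapping around
        prof = improfile(Gmag, [p(k,1) q(1)], [p(k,2) q(2)], 50);
        s = s + mean(prof, 'omitnan');         % NaN for out-of-image samples
    end
end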
I have a stack of images with a bar close to the center. As the stack progresses the bar pivots around one end and the entire stack contains images with the bar rotated at many different angles up to 45 degrees above or below horizontal.
As shown here:
I'm looking for a way to rotate the bar and/or the entire image and align everything horizontally before I do my other processing. Ideally this would be done in MATLAB / ImageJ / ImageMagick. I'm currently trying to work out a method using Canny edge detection, followed by a Hough transform, followed by an image rotation, but I'm hoping this is a specific case of a more general problem which has already been solved.
If you have the Image Processing Toolbox you can use regionprops with the 'Orientation' property to find the angle.
http://www.mathworks.com/help/images/ref/regionprops.html#bqkf8ji
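A minimal sketch, assuming bw is a binary image in which the bar is the largest blob:

stats = regionprops(bw, 'Orientation', 'Area');
[~, idx] = max([stats.Area]);               % pick the largest blob
angle = stats(idx).Orientation;             % degrees, CCW from horizontal
aligned = imrotate(img, -angle, 'bilinear', 'crop');  % rotate bar to horizontal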
The problem you are solving is known as image registration or image alignment.
-The first thing you need to do is to threshold the image, so you end up with a black-and-white image. This will simplify the process.
-Then you need to calculate the mass center of the images and translate them to match each other's centers.
-Then you need to rotate the images to match each other. This can be done using the principal axis measure. The principal axes are the two axes that explain most of the variance in the pixel population, which basically gives you a vector showing which way your bar is pointing. Then all you need to do is rotate the bars into the same direction (a sketch of these steps follows below).
-After the principal axis transformation you can try rotating the pictures a little bit more in each direction to try to optimise the rotation.
All the way through your translation and rotation you need a measure showing how good a fit your transformation is. This measure can be many things. If the picture is black and white, a simple subtraction of the pictures is enough; otherwise you can use measures like mutual information.
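A sketch of the mass-center and principal-axis steps, assuming bw is one of the thresholded (logical) images:

[y, x] = find(bw);                      % coordinates of the object pixels
cx = mean(x); cy = mean(y);             % mass center
C = cov([x - cx, y - cy]);              % 2x2 covariance of the point cloud
[V, D] = eig(C);
[~, i] = max(diag(D));                  % eigenvector with the largest variance
angle = atan2d(V(2,i), V(1,i));         % principal-axis angle in degrees
aligned = imrotate(double(bw), angle, 'crop');  % sign depends on your
                                                % coordinate convention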
You can also look at Procrustes analysis; see the MATLAB function procrustes.
You might want to look into the SIFT transform.
Take as your template the rectangle that represents a worst-case guess for your bar, and determine the rotation matrix from that.
See http://www.vlfeat.org/overview/sift.html
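A minimal sketch using VLFeat's MATLAB interface (assuming the toolbox from the link is installed and on the path; imA and imB are placeholder images):

[fA, dA] = vl_sift(single(imA));          % keypoint frames + descriptors, image A
[fB, dB] = vl_sift(single(imB));          % same for image B
[matches, scores] = vl_ubcmatch(dA, dB);  % nearest-neighbour descriptor matching
ptsA = fA(1:2, matches(1,:))';            % matched coordinates, usable for
ptsB = fB(1:2, matches(2,:))';            % estimating the rotation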
Use the StackReg plugin of ImageJ. I'm not 100% sure but I think it already comes installed with FIJI (FIJI Is Just ImageJ).
EDIT: I think I misread your question; that is not a stack of images you are trying to fix, right? In that case, a simple approach (probably not the most efficient, but it definitely works) is the following algorithm:
threshold the image (seems easy, since your background is always white)
take a long horizontal line as a structuring element and dilate the image with it
rotate the structuring element and keep dilating the image, measuring the size of the dilation each time
the angle that maximizes it is the rotation angle you'll need to fix your image (see the sketch after this list)
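A sketch of this idea, using erosion (the dual of the dilation described) so that the maximum directly gives the bar angle; bw is assumed to be a binary image with the bar as white foreground:

angles = -45:45;
score = zeros(size(angles));
for k = 1:numel(angles)
    se = strel('line', 100, angles(k));   % long line SE at this angle
    score(k) = nnz(imerode(bw, se));      % pixels surviving the erosion
end
[~, best] = max(score);                   % the SE aligned with the bar survives best
barAngle = angles(best);
fixed = imrotate(bw, -barAngle, 'crop');  % sign may need flipping for your convention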
There are several approaches to this problem, as suggested by the other answers. One approach, possibly similar to what you are already trying, is to use the Hough transform. The Hough transform is good at detecting line orientations. Combining this with morphological processing and image rotation after detecting the angle, you can create a system that corrects for angular variations. The basic steps would be:
Use morphological operations to make the bar a single line blob.
Use the Hough transform on this image.
Find the maximum in the transform output and use it to find the orientation angle.
Use the angle to fix the original image.
A full example of this method comes with the Computer Vision System Toolbox; see the link below, followed by a minimal sketch of steps 2-4.
http://www.mathworks.com/help/vision/examples/rotation-correction-1.html
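A minimal sketch of steps 2-4, where bw is the blob image from step 1:

BW = edge(bw, 'canny');                  % edge map of the blob image
[H, theta, rho] = hough(BW);
peak = houghpeaks(H, 1);                 % strongest line in the transform
t = theta(peak(2));                      % Hough angle of the line's normal
barAngle = t - sign(t) * 90;             % line angle relative to horizontal
fixed = imrotate(img, barAngle, 'crop'); % sign may need flipping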
You can try the Givens or Householder transform; I prefer Givens.
It requires an angle: use cos(angle) and sin(angle) to build the Givens matrix.
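For reference, a sketch of the Givens rotation matrix built from such an angle (angle in radians; x and y are placeholder point coordinates):

G = [cos(angle), -sin(angle);
     sin(angle),  cos(angle)];           % 2x2 Givens (plane) rotation
rotated = G * [x(:)'; y(:)'];            % rotate a set of 2-D points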