Can anyone guide me on how the hough transform function works in MATLAB? The problem is that I have an image containing two straight rectangles, and one rectangle is tilted at some angle. I would expect the Hough transform to give a line structure of 1x6, but I am getting a structure of 1x14. Can anyone help me? I have also uploaded the images:
You can't restrict the Hough transform to give a structure of 1x6; it doesn't produce stable results. It also doesn't work well when looking further ahead on curved roads. You should not expect a 1x6 structure from each frame. Instead, take all returned line segments and use some logic to determine the lane markings.
First of all, your image actually looks slightly blurred. I don't know if it actually is, but if so, you need to run an edge detection algorithm so your Hough transform does not detect the blurred parts of the lines.
Second, you need to reduce the number of detected lines by removing any line that does not have enough points voting for it. This can be done by thresholding the H variable in [H,t,r] = hough(image).
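As a minimal sketch (assuming bw is a binary edge image of your rectangles; the peak count and threshold below are illustrative starting points, not recommended values):

    % Keep only strong Hough peaks before extracting lines.
    % 'bw' is an assumed binary edge image of the two rectangles.
    [H, theta, rho] = hough(bw);

    % Ask for at most 6 peaks and threshold the accumulator so weak,
    % spurious lines are discarded (0.5*max is houghpeaks' default).
    peaks = houghpeaks(H, 6, 'Threshold', 0.5 * max(H(:)));
    lines = houghlines(bw, theta, rho, peaks);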
Additional Sources:
http://en.wikipedia.org/wiki/Hough_transform
http://www.mathworks.com/help/toolbox/images/ref/hough.html
I am stuck on a current project:
I have an input picture showing the ground with some shapes on it. I have to find a specific shape with a given template.
I have to use a distance transform followed by skeletonization. My question now is: how can I compare two skeletons? As far as I have noticed and been told, most methods from the Image Processing Toolbox for matching templates don't work here, since they are neither scale-invariant nor rotation-invariant.
Also, some skeletons really show the shapes, while others are just one or two short lines, from which I couldn't identify the shapes if I didn't already know what they should be.
I've used edge detection and region growing on the input, so only the interesting shapes are left.
On the template I used distance transformation and skeletonization.
Really looking forward to some tips.
Greetings :)
You could look into convolutions?
Basically move your template over your image and see if there is a match, and where.
The maximum value of the resulting array gives the [x,y] location of your object in the image.
MATLAB has a built-in 2-D convolution function, conv2, for this.
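A minimal sketch, assuming grayscale images loaded from hypothetical file names:

    % Template matching via 2-D convolution.
    % 'scene.png' and 'template.png' are hypothetical file names.
    img  = im2double(imread('scene.png'));
    tmpl = im2double(imread('template.png'));

    % conv2 flips its kernel, so flip the template to get correlation.
    score = conv2(img, rot90(tmpl, 2), 'same');

    % The maximum response marks the most likely template location.
    [~, idx] = max(score(:));
    [y, x] = ind2sub(size(score), idx);

Note that plain convolution is neither scale- nor rotation-invariant (the issue you mention), so you may need to repeat it over a small set of scaled and rotated versions of the template; normxcorr2 (normalized cross-correlation) is also more robust to brightness differences than raw conv2.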
I am working on a lane detection project. Figure 1 shows one of the frames of the dashboard recording.
I have taken the following steps:
Removed the noise.
Detected the edges using Canny Edge Detector.
Masked out the top half of the image.
Applied morphological closing to make the lines continuous.
Applied morphological skeletonization to get the thinnest lines.
Performed a Hough transform on the binary mask to obtain the Hough matrix.
Identified the peaks in the Hough matrix.
Extracted lines from the binary mask using houghlines function.
Calculated the line and marker positions from the previous step.
Visualized the line ([165,1,28]) and marker ([255,157,11]) on Figure 1.
But, as can be seen, the white dashed lanes on the RHS are not detected. Where do you think the issue is, and do you have any suggestions to fix it? Thanks.
I have tried thresholding the image by creating a binary mask from the image histogram and obtained better results, but that method required some parameter tuning, which I am trying to avoid, as this code will (eventually) be tested on continuous video instead of a single frame.
The MATLAB implementation of houghlines accepts the arguments 'MinLength' and 'FillGap'.
'MinLength'
By playing around with (i.e., decreasing) the 'MinLength' parameter, you should be able to make houghlines() also accept the short line on the right side. As your image looks quite clean, you can even try reducing it to a very small value, so that even the very small part in the lower right corner is detected as a line.
'FillGap'
Next, increase 'FillGap' to allow larger gaps between two line segments. This might help you produce one single line for both right segments, as in the sketch below.
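For illustration (assuming bw is the binary skeleton mask from your earlier steps; the parameter values are starting points to tune, not recommendations):

    % Tune 'MinLength' and 'FillGap' when extracting lines.
    % 'bw' is the assumed binary mask from the earlier steps.
    [H, theta, rho] = hough(bw);
    peaks = houghpeaks(H, 10);

    % A small 'MinLength' keeps the short dashes on the right; a larger
    % 'FillGap' merges collinear segments across the gaps between dashes.
    lines = houghlines(bw, theta, rho, peaks, ...
                       'MinLength', 10, 'FillGap', 40);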
Hope this helps.
In an image I need to find a "table": a simple rectangle.
The problem is with edge recognition, because the potential photos will be dark.
I tried edge detection (Sobel, Canny, LoG, ...) and after that a Hough transform and line finding. But these algorithms are not enough for this task.
Some things that can help me:
- it is a rectangle, only in perspective view (something like fitting a perspective rectangle?)
- the object MUST cover at least, for example, 90% of the photo (so I know I need to look near the photo edges)
- the rectangle has almost the same color throughout (for example a wooden dining table)
- I need to find at least "only" the 4 corners (but yes, it would be better to find the edges of the table)
I know how, for example, the Sobel, Canny, and LoG algorithms work, and Hough as well. And naturally those algorithms fail on dark or low-contrast images. But is there some other method, for example based on "fitting"?
Images showing the kind of photo I can get (you see it would be dark) and what I need to find:
and this is a really "nice" picture (without noise). I tested it on noisier pictures and the result was... simply horrible.
Result for this picture with the current algorithm, LoG (with the other ones it looks the same):
I know image and edge recognition is not a simple challenge, but are there some newer, better methods or something like that which I can try?
In one of the posts here I found the LSD algorithm. It seems very nicely described, and it seems to recognize really straight lines well too. Do you think it would be better to use it instead of Canny or Sobel detection?
Another solution would be corner detection; on my sample images it works better, but it recognizes too many points and there will be a problem with time... I would need to connect all the points and "find" the table.
Another solution:
I thought about point-to-point mapping: I would have some "virtual" table and try to map the table above onto that "virtual" table (a simple 2D square in a paint program :])... But I think point-to-point mapping will give me big errors, or it will not work.
Does someone have any advice on which algorithm to use?
I tried recognizing the edges in FIJI and then putting the edge-detected image into MATLAB, but with Hough it works badly as well :/
What do you think would be best to use? In short, I need some algorithm that works on low-contrast, dark images.
I'd try some modified snakes algorithm:
you parameterize your rectangle with 4 points and initialize them somewhere in the image corners. Then you move the points towards image features using some optimization algorithm (e.g. gradient descent, simulated annealing, etc.).
The image features could be a combination of edge features (e.g. sobel directly or sobel of some gaussian filtered image) to be evaluated on the lines between those four points and corner features to be evaluated at those 4 points.
Additionally you can penalize unlikely rectangles (maybe depending on the angles between the points or on the distance to the image boundary).
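A rough sketch of that idea under simplifying assumptions: coordinate-wise hill climbing stands in for a full snake, a smoothed Sobel-style gradient magnitude is the only image feature, the corner features and rectangle penalties mentioned above are omitted, and the file name is hypothetical:

    % Fit 4 corner points to edge evidence by simple hill climbing.
    img = im2double(rgb2gray(imread('table.jpg')));  % assumed color photo
    E = imgradient(imgaussfilt(img, 2));   % smoothed Sobel edge magnitude

    [h, w] = size(img);
    P = [5 5; w-5 5; w-5 h-5; 5 h-5];      % init corners near image corners

    for iter = 1:200                       % coordinate descent over corners
        for k = 1:4
            best = edgeScore(P, E);
            for d = [-2 0; 2 0; 0 -2; 0 2]'   % try small moves of corner k
                Q = P;
                Q(k,:) = min(max(Q(k,:) + d', [1 1]), [w h]);
                s = edgeScore(Q, E);
                if s > best, P = Q; best = s; end
            end
        end
    end

    function s = edgeScore(P, E)
    % Sum edge strength sampled along the 4 sides of polygon P ([x y] rows).
    s = 0;
    for k = 1:4
        a = P(k,:); b = P(mod(k,4)+1,:);
        t = linspace(0, 1, 100);
        xs = round(a(1) + t*(b(1)-a(1)));
        ys = round(a(2) + t*(b(2)-a(2)));
        s = s + sum(E(sub2ind(size(E), ys, xs)));
    end
    end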
I have a stack of images with a bar close to the center. As the stack progresses the bar pivots around one end and the entire stack contains images with the bar rotated at many different angles up to 45 degrees above or below horizontal.
As shown here:
I'm looking for a way to rotate the bar and/or entire image and align everything horizontally before I do my other processing. Ideally this would be done in Matlab / imageJ / ImageMagick. I'm currently trying to work out a method using first Canny edge detection, followed by a Hough transform, followed by an image rotation, but I'm hoping this is a specific case of a more general problem which has already been solved.
If you have the Image Processing Toolbox you can use regionprops with the 'Orientation' property to find the angle.
http://www.mathworks.com/help/images/ref/regionprops.html#bqkf8ji
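For example (assuming the bar becomes the largest bright blob after thresholding; invert the mask if your bar is dark on a light background, and the file name is hypothetical):

    % Level the bar using regionprops' 'Orientation'.
    img = imread('frame.png');                % hypothetical file name
    bw  = imbinarize(im2double(rgb2gray(img)));
    bw  = bwareafilt(bw, 1);                  % keep only the largest blob

    stats = regionprops(bw, 'Orientation');   % degrees from the x-axis
    fixed = imrotate(img, -stats.Orientation, 'bilinear', 'crop');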
The problem you are solving is known as image registration or image alignment.
- The first thing you need to do is threshold the image, so you end up with a black and white image. This will simplify the process.
- Then you need to calculate the center of mass of the images and translate them to match each other's centers.
- Then you need to rotate the images to match each other. This can be done using the principal axis measure. The principal axes are the two axes that explain most of the variance in the pixel population, which basically gives you a vector showing which way your bar is pointing. Then all you need to do is rotate the bars in the same direction.
- After the principal axis transformation you can try rotating the pictures a little bit more in each direction to try to optimize the rotation.
All the way through your translation and rotation you need a measure of how good a fit your transformation is. This measure can be many things. If the picture is black and white, a simple subtraction of the pictures is enough. Otherwise you can use measures like mutual information.
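A minimal sketch of the threshold / center-of-mass / principal-axis steps ('img' is an assumed grayscale double image; the rotation sign may need flipping because image rows grow downward):

    bw = imbinarize(img);                  % 1) threshold to black and white

    [ys, xs] = find(bw);                   % foreground pixel coordinates
    cx = mean(xs); cy = mean(ys);          % 2) center of mass
    bw = imtranslate(bw, [size(bw,2)/2 - cx, size(bw,1)/2 - cy]);

    % 3) principal axis: leading eigenvector of the coordinate covariance.
    [V, D] = eig(cov([xs ys]));
    [~, i] = max(diag(D));
    ang = atan2d(V(2,i), V(1,i));          % major-axis angle, in degrees

    aligned = imrotate(bw, ang, 'crop');   % rotate the bar toward horizontal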
You can also look at Procrustes analysis; MATLAB ships a procrustes function (in the Statistics Toolbox) for it.
You might want to look into the SIFT transform.
You should take as your image the rectangle that represents a worst-case guess for your bar, and determine the rotation matrix from that.
See http://www.vlfeat.org/overview/sift.html
Use the StackReg plugin of ImageJ. I'm not 100% sure but I think it already comes installed with FIJI (FIJI Is Just ImageJ).
EDIT: I think I have misread your question. That is not a stack of images you are trying to fix, right? In that case, a simple approach (probably not the most efficient, but it definitely works) is the following algorithm:
threshold the image (seems easy, your background is always white)
get a long horizontal line as a structuring element and dilate the image with it
rotate the structuring element and keep dilating the image, measuring the size of the dilation.
the angle that maximizes it is the rotation angle you'll need to fix your image.
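A sketch of that sweep, deliberately swapped to erosion: eroding with a line element keeps the most pixels when the line is aligned with the bar, which makes the best angle easy to read off (the element length and the dark-bar-on-white assumption are illustrative):

    % Sweep a linear structuring element over angles and keep the angle
    % at which erosion preserves the most foreground pixels.
    bw = ~imbinarize(im2double(img));      % assumed: dark bar on white
    bestAngle = 0; bestCount = -inf;
    for ang = -45:45
        se = strel('line', 41, ang);       % 41 px is an illustrative length
        n = nnz(imerode(bw, se));
        if n > bestCount
            bestCount = n; bestAngle = ang;
        end
    end
    fixed = imrotate(img, -bestAngle, 'crop');   % sign may need flipping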
There are several approaches to this problem, as suggested by other answers. One approach, possibly similar to what you are already trying, is to use the Hough transform. The Hough transform is good at detecting line orientations. Combining it with morphological processing and image rotation after detecting the angle, you can create a system that corrects for angular variations. The basic steps would be:
Use Morphological operations to make the bar a single line blob.
Use Hough transform on this image.
Find the maximum in the transform output and use that to find orientation angle.
Use the angle to fix original image.
A full example of this method comes with the Computer Vision System Toolbox. See
http://www.mathworks.com/help/vision/examples/rotation-correction-1.html
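A condensed sketch of those four steps (assuming 'img' is a grayscale image of the bar; the final angle arithmetic depends on your conventions, so verify the sign on a test frame):

    bw = bwmorph(imbinarize(img), 'close');     % 1) merge bar into one blob
    [H, theta, rho] = hough(bw);                % 2) Hough transform
    peak = houghpeaks(H, 1);                    % 3) strongest peak
    normalAngle = theta(peak(2));               %    line normal, in degrees

    % 4) The bar runs perpendicular to its normal; for a near-horizontal
    %    bar the normal is near +/-90, so the tilt is the remainder.
    barTilt = 90 - abs(normalAngle);
    fixed = imrotate(img, -sign(normalAngle)*barTilt, 'crop');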
You can try a Givens or Householder transform; I prefer Givens.
It requires an angle: use cos(angle) and sin(angle) to build the Givens matrix.
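For reference, a 2-D Givens rotation built that way (the angle and points are just examples):

    % Build a Givens (plane) rotation from an angle and apply it to a
    % set of 2-D points stored one per column.
    ang = deg2rad(30);                     % example angle
    G = [cos(ang) -sin(ang);
         sin(ang)  cos(ang)];
    pts = [0 1 1 0; 0 0 1 1];              % example unit square
    rotated = G * pts;                     % rotate all points at once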
I have two images of yeast plates:
Permissive:
Xgal:
The two images should be in the same spot and roughly the same size. I am trying to use one of the images to generate a grid and then apply that grid to the other image. The grid is made by looking at the colonies on the permissive plate; the plate should have 1536 colonies on it. The problem is that the camera that was used to take the images moves a bit up and down, and the images can also be shifted slightly due to the other plate not being in exactly the same place.
This means that when I use the permissive plate to generate the grid, the grid is shifted on the xgal plate. Does anyone know a way I can compensate for this? I am using Perl with the GD module. Any advice would be greatly appreciated. Thank you
I've done this in other languages in relation to motion analysis. You can mathematically determine the shift in position between two images using cross correlation.
Fortunately, you may not need to actually do the maths :) You could use something like ImageMagick, which provides a lot of image processing functions for you and is Perl-scriptable. Independent scripts already exist for tasks very much like yours.
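If you do want the maths, here is a rough sketch of the cross-correlation idea, shown in MATLAB for illustration (the file names and patch coordinates are hypothetical; the same computation can be driven from Perl via ImageMagick or CPAN modules):

    % Estimate the (dx, dy) shift between the two plate images by
    % normalized cross-correlation of a central patch.
    A = im2double(rgb2gray(imread('permissive.png')));
    B = im2double(rgb2gray(imread('xgal.png')));

    tmpl = A(201:400, 201:400);            % a central patch of image 1
    c = normxcorr2(tmpl, B);               % correlate patch with image 2

    [~, idx] = max(c(:));                  % location of the best match
    [py, px] = ind2sub(size(c), idx);
    dy = (py - size(tmpl,1) + 1) - 201;    % shift relative to patch origin
    dx = (px - size(tmpl,2) + 1) - 201;    % apply (-dx, -dy) to align B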
If you have only a few pairs of images and, as in the examples, they are very different in appearance, then an alternative to Tim Barrass's method would be:
Open the first image in gimp, find the co-ordinates of a landmark feature
Open the second image in gimp, find the co-ordinates of the same landmark
Calculate the offset
Shift the second image using ImageMagick's convert command with the affine option. Set the parameters sx=sy=1.0, rx=ry=0.0, tx= negative horizontal offset, ty= negative vertical offset
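For example, with illustrative offsets of 12 px right and 8 px down measured in GIMP (so tx = -12, ty = -8; ImageMagick's -affine takes sx,rx,ry,sy,tx,ty):

    convert xgal.png -affine 1,0,0,1,-12,-8 -transform xgal_shifted.png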