I am attempting to implement a Gabor filter in MATLAB in such a way that it discriminates "vertical" textures. By vertical textures I mean structures that run from top to bottom in the image. If this is difficult to visualize, picture a white wall with windows on it: I want to find the sides of the window frames, not the tops or bottoms. My understanding is that this should be described as a horizontal variation in contrast; please correct any error in nomenclature. What I am trying to determine is whether this search for "vertical textures" calls for an orientation of 0 or 90. When I check the documentation for the gabor function it says this:
the orientation is defined as the normal direction to the sinusoidal
plane wave.
But I cannot seem to grok that.
P.S. I know that other methods like edge detection or difference of Gaussians can do this too, but suffice it to say that I want to use gabor.
If you run the second example in the gabor documentation:
https://www.mathworks.com/help/images/ref/gabor.html
The direction of oscillation of the Gabor kernel is the same as the direction of maximum response to periodic/texture content. So, 0 degrees would be activated by vertically oriented texture of the same wavelength as the Gabor kernel.
90 degrees would be activated by horizontally oriented texture of the same wavelength.
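A quick way to convince yourself is to compare the two orientations on a synthetic vertical-stripe image; a minimal sketch (the wavelength of 8 pixels is just an illustrative choice):

I = repmat(0.5 + 0.5*sin(2*pi*(1:128)/8), 128, 1);   % vertical bars, wavelength 8 px
g = gabor(8, [0 90]);                                % two filters: orientations 0 and 90 degrees
mag = imgaborfilt(I, g);                             % magnitude response for each filter
fprintf('mean response at  0 deg: %.3f\n', mean2(mag(:,:,1)));
fprintf('mean response at 90 deg: %.3f\n', mean2(mag(:,:,2)));
% The 0-degree filter gives the clearly larger response here, i.e.
% orientation 0 is the one that picks out vertical texture.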
I have an image of a robot with yellow markers as shown.
The yellow points shown are the markers. There are two cameras, placed at an offset of 90 degrees, used to view it. The robot bends in between the cameras. A crude schematic of the setup can be found here:
https://i.stack.imgur.com/aVyDq.png
Using the two cameras I am able to get the 3D coordinates of the yellow markers. But I need to find the 3D coordinates of the central point of the robot as shown.
I need to find the 3D position of the red marker points, which lie inside the cylindrical robot. Firstly, is it even feasible? If yes, what method can I use to achieve this?
As a bonus, is there any literature on finding the 3D location of such internal points which I can refer to? (I searched, but could not find anything similar to my problem.)
I am open to a theoretical solution as well (as long as it is guaranteed to find the central point within a reasonable error), which I can later translate to code.
If you know the actual dimensions, or at least the shape (e.g. a perfect circle), of the white bands, then yes, it is feasible.
You need to do the following steps, which are quite nontrivial, and I won't do them here:
Optional but strongly recommended: calibrate your camera and undistort your images.
1) Find the equation of the projection of a 3D circle onto a 2D camera, for any given rotation. You can simplify this by assuming the white band will be completely horizontal. You want some function that takes the parameters that define a circle and a rotation (a rough sketch of this projection is given below).
2) Find all white bands in the image, segment them, and make them horizontal (rotate them).
3) Fit the points of the corrected white band to the equation from step 1. That should give you the parameters of the circle in 3D (radius, angle), if you wrote the equation right.
4) Now that you have an analytic equation of the actual circle (the equation from step 1 with the parameters from step 3), you can map any point of this circle (e.g. its centre) to its image location. Remember to undo the rotations from step 2.
This requires an understanding of curve fitting, some analytic geometry, and decent coding skills. It is not trivial, but it will give a highly accurate solution.
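As a rough illustration of step 1, here is a minimal sketch of projecting a parametric 3D circle into the image; all the names are assumptions (K is the 3x3 intrinsic matrix, C the 3x1 circle centre in the camera frame, r the radius, and R a rotation whose first two columns span the circle's plane):

t = linspace(0, 2*pi, 200);                        % parameter along the circle
P = C + r * (R(:,1) * cos(t) + R(:,2) * sin(t));   % 3xN points on the 3D circle
p = K * P;                                         % pinhole projection, no distortion
u = p(1,:) ./ p(3,:);                              % image x coordinates
v = p(2,:) ./ p(3,:);                              % image y coordinates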
For an inaccurate solution:
Find the endpoints of the white circles
Make the line connecting the endpoints
Choose the centre as the midpoint of this line
This will be inaccurate because choosing endpoints carries more error than fitting an equation to all the points, and because it ignores the cone-shaped view of the camera and the scene geometry.
But it may be good enough for what you want.
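A minimal sketch of this quick approach, assuming you already have a binary mask of one band per camera view and a calibrated stereoParams object (both names are illustrative):

[r, c] = find(bandMaskLeft);                             % pixels belonging to the band
[~, iMin] = min(c);                                      % leftmost band pixel
[~, iMax] = max(c);                                      % rightmost band pixel
midLeft = ([c(iMin) r(iMin)] + [c(iMax) r(iMax)]) / 2;   % midpoint of the chord in the left image
% Repeat for the right view to get midRight, then triangulate to 3D:
% worldMid = triangulate(midLeft, midRight, stereoParams);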
I have been able to extract the midpoint by fitting an ellipse to the arc visible to the camera. The centroid of the ellipse is the required midpoint.
There will be wrong ellipses as well, which can be ignored. The steps to extract the ellipse were:
Extract the markers
Binarise and skeletonise
Fit ellipse to the arc (found a matlab function for this)
Get the centroid of the ellipse
% extract the markers: threshold the value channel of the HSV image
hsv_img = rgb2hsv(im);
marker_th = 0.35;                      % threshold, chosen empirically
bin = hsv_img(:,:,3) > marker_th;      % binarise
% skeletonise
skel = bwskel(bin);
% use regionprops to get the pixel list of each arc
stats = regionprops(skel, 'PixelList');
for i = 1:numel(stats)
    el = fit_ellipse(stats(i).PixelList(:,1), stats(i).PixelList(:,2));
    ellipse_draw(el.a, el.b, -el.phi, el.X0_in, el.Y0_in, 'g');
end
The link for fit_ellipse function
Link for ellipse_draw function
I have two stereo-rectified edge images of a closed curve. Is it possible to find the disparity (along the x-axis in image coordinates) between the edge images and do a 3D reconstruction, since I know the camera matrix? I am using MATLAB for the process. I will not be able to use a window-based technique, since the images are binary and a window-based technique requires texture. The question is: how will I compute the disparity between the edge images? The images are available at the following links. Left edge image: https://www.dropbox.com/s/g5g22f6b0vge9ct/edge_left.jpg?dl=0 Right edge image: https://www.dropbox.com/s/wjmu3pugldzo2gw/edge_right.jpg?dl=0
For this type of images, you can easily map each edge pixel from the left image to its counterpart in the right image, and therefore calculate the disparity for those pixels as usual.
The mapping can be done in various ways, depending on how typical these images are. For example, using a DTW-like approach to match curvatures.
For all other pixels in the image, you just don't have any information.
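A minimal sketch of a per-scanline version of this idea, assuming edgeL and edgeR are the rectified binary edge images and each scanline crosses the closed curve the same number of times in both views:

[h, w] = size(edgeL);
disparity = nan(h, w);
for y = 1:h
    xL = find(edgeL(y, :));              % edge columns in the left image
    xR = find(edgeR(y, :));              % edge columns in the right image
    n = min(numel(xL), numel(xR));
    for k = 1:n                          % pair the crossings in left-to-right order
        disparity(y, xL(k)) = xL(k) - xR(k);
    end
end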
@Photon: Thanks for the suggestion. I did what you suggested: I matched each edge pixel in the left and right image in a DTW-like fashion. But there are some pixels whose y-pixel coordinate values differ by 1 or 2 pixels, even though the images are properly rectified. So I calculated the depth from those differing edge pixels (up to a 2-pixel difference along the y-axis) using a least-squares method. But I ended up getting this space curve (https://www.dropbox.com/s/xbg2q009fjji0qd/false_edge.jpg?dl=0) when it actually should have been like this (https://www.dropbox.com/s/0ib06yvzf3k9dny/true_edge.jpg?dl=0), which was obtained using the RGB images. I couldn't think of any other reason why this would be the case, since I compared by traversing along the 408 edge pixels.
I'm very new to 3D image processing. I'm working on a project to find the perspective angle of a circle.
A plate has a set of white circles; using those circles I want to find the (3D) rotation angles of the plate.
For that, I have finished the camera calibration part and obtained the camera error parameters. As the next step I captured an image and applied Sobel edge detection.
After that I am a little bit confused about the ellipse fitting algorithm. I saw a lot of ellipse fitting algorithms; which one is the best and fastest method?
After the ellipse fit is finished, I don't know how to proceed further. How do I calculate the rotation and translation matrix using that ellipse?
Can you tell me which algorithm is most suitable and easy? I need some MATLAB code to understand the concept.
Thanks in advance
Sorry for my English.
First, find the ellipse/circle centres (e.g. as Eddy_Em described in other comments).
You can then refer to Zhang's classic paper
https://research.microsoft.com/en-us/um/people/zhang/calib/
which allows you to estimate the camera pose from a single image if some camera parameters are known, e.g. the centre of projection. Note that the method fails for frontal recordings: the more of a perspective effect there is, the more accurate your estimate will be. The algorithm is fairly simple; you'll need an SVD and some cross products.
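A hedged sketch of the homography-decomposition step of Zhang-style pose estimation, assuming you already have the intrinsic matrix K, the metric positions worldPts of the circle centres on the plate (taken as the z = 0 plane), and the matching detected image centres imagePts; all of these names are assumptions:

tform = fitgeotrans(worldPts, imagePts, 'projective');   % plane-to-image homography
H = tform.T';                          % transpose: MATLAB stores the row-vector convention
Hn = K \ H;                            % remove the intrinsics, Hn ~ [r1 r2 t] up to scale
lambda = 1 / norm(Hn(:,1));            % recover the scale
r1 = lambda * Hn(:,1);
r2 = lambda * Hn(:,2);
r3 = cross(r1, r2);
t  = lambda * Hn(:,3);
R  = [r1 r2 r3];
[U, ~, V] = svd(R);                    % re-orthogonalise to get a proper rotation
R  = U * V';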
I have a binary image and I want to detect/trace curves in that image. I don't know anything about the curves (coordinates, angle, etc.). Can anyone guide me on how I should start? Suppose I have this image:
I want to separate the curves from the other lines. I am only interested in curved lines and their parameters, and I want to store the information about the curves (in an array) to use afterwards.
It really depends on what you mean by "curve".
If you want to simply identify each discrete collection of pixels as a "curve", you could use a connected-components algorithm. Each component would correspond to a collection of pixels. You could then apply some test to determine linearity or some other feature of the component.
If you're looking for straight lines, circular curves, or any other parametric curve you could use the Hough transform to detect the elements from the image.
The best approach is really going to depend on which curves you're looking for, and what information you need about the curves.
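A minimal sketch of both suggestions, assuming bw is the logical curve image (true where the strokes are):

cc = bwconncomp(bw);                       % each stroke becomes one connected component
stats = regionprops(cc, 'PixelList');      % pixel coordinates per component
[H, theta, rho] = hough(bw);               % Hough transform for straight lines
peaks = houghpeaks(H, 10);                 % strongest candidate lines
segs  = houghlines(bw, theta, rho, peaks); % line segments; what they miss is curved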
reference links:
Circular Hough Transform Demo
A Brief Description of the Application of the Hough Transform for Detecting Circles in Computer Images
A method for detection of circular arcs based on the Hough transform
Google goodness
Since you already seem to have a good binary image, it might be easiest to just separate the different connected components of the image and then calculate their parameters.
First, you can do the separation by scanning through the image; when you encounter a black pixel, you can apply a standard flood-fill algorithm to find all the pixels in that shape. If you have the MATLAB Image Processing Toolbox, you can use the bwconncomp and bwselect functions for this. If your shapes are not fully connected, you might apply a morphological closing operation to the image to connect them.
After you have segmented out the different shapes, you can filter out the curves by testing how much they deviate from a line. You can do this simply by picking up the endpoints of the curve, and calculating how far the other points are from the line defined by the endpoints. If this value exceeds some maximum, you have a curve instead of a line.
Another approach would be to measure the ratio of the length of the object (measured along the shape) to the distance between its endpoints. This ratio would be near 1 for lines and larger for curves and wiggly shapes.
If your images have angled shapes, which you wish to separate from curves, you might inspect the directional gradient of your curves. Segment the shape, pick a set of equidistant points from it, and for each point calculate the angle to the previous point and to the next point. If the difference between these angles is too large, you do not have a smooth curve but an angled shape.
Possible difficulties in implementation include thick lines, which you can handle with a skeleton transformation. For MATLAB implementations of skeletonisation and of finding curve endpoints (e.g. bwskel and bwmorph), see the Image Processing Toolbox documentation.
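A hedged sketch of the two linearity tests described above, for a single skeletonised component given as an ordered N-by-2 list of points pts; the maxDeviation threshold and the 1.1 ratio are assumed tuning values:

p1 = pts(1, :);                                 % first endpoint of the component
p2 = pts(end, :);                               % last endpoint
d  = (p2 - p1) / norm(p2 - p1);                 % unit direction of the chord
v  = pts - p1;                                  % offsets from the first endpoint
distToLine = abs(v(:,1)*d(2) - v(:,2)*d(1));    % perpendicular distance to the chord
isCurve1 = max(distToLine) > maxDeviation;      % test 1: deviation from the chord

arcLen   = sum(sqrt(sum(diff(pts).^2, 2)));     % length measured along the shape
chordLen = norm(p2 - p1);                       % straight endpoint-to-endpoint distance
isCurve2 = arcLen / chordLen > 1.1;             % test 2: ratio clearly above 1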
1) Read a book on Image Analysis
2) Scan for a black pixel, when found look for neighbouring pixels that are also black, store their location then make them white. This gets the points in one object and removes it from the image. Just keep repeating this till there are no remaining black pixels.
If you want to separate the curves from the straight lines, try line fitting and then computing the coefficient of correlation. Similar fits are available for curves, and the correlation tells you how close the points are to the idealised shape.
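A small sketch of that line-fit/correlation test, assuming x and y are column vectors of one component's pixel coordinates; the 0.99 threshold is an assumed value, and near-vertical components would need x and y swapped:

p = polyfit(x, y, 1);                    % least-squares straight-line fit
r = corrcoef(y, polyval(p, x));          % correlation between the data and the fit
isLine = abs(r(1, 2)) > 0.99;            % high correlation => essentially straight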
There is also another solution possible with the use of chain codes.
Understanding Freeman chain codes for OCR
The chain code basically assigns a value between 1 and 8 (or 0 and 7) to each pixel, saying at which position in its 8-connected neighbourhood its connected predecessor lies. Thus, as mentioned in Hackworth's suggestions, one performs connected-component labelling and then calculates the chain code for each component curve. Looking at the distribution and the gradient of the chain codes, one can easily distinguish between lines and curves. The problem with the method, though, is oscillating curves, in which case the gradient is less useful and one depends on the clustering of the chain codes.
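A rough sketch of deriving a Freeman chain code from a traced boundary and using the change in code as a curvature proxy; startPt (a pixel on the boundary) and the tolerance in the last line are assumptions:

B = bwtraceboundary(bw, startPt, 'E');                   % ordered boundary pixels [row col]
d = diff(B);                                             % unit steps between neighbours
dirs = [0 1; -1 1; -1 0; -1 -1; 0 -1; 1 -1; 1 0; 1 1];   % E, NE, N, NW, W, SW, S, SE
[~, code] = ismember(d, dirs, 'rows');
code = code - 1;                                         % Freeman codes 0..7
turn = mod(diff(code) + 4, 8) - 4;                       % signed change of direction
isStraight = all(abs(turn) <= 1);                        % nearly constant code => line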
I'm no computer vision expert, but I think you could detect lines/curves in binary images relatively easily using some basic edge-detection algorithms (e.g. a Sobel filter).
I have an image that was rotated by an unknown angle, and I don't have the original image. How do I determine the angle of rotation with MATLAB commands?
I need to rotate the image back by this angle to recover the original image.
As @High Performance Mark mentions in his comment, it is difficult to give an answer when it is unclear how you can recognize that the image is rotated, or what would make you decide that the rotation has been properly corrected.
In other words, you will first have to find a way to determine the rotation angle by analyzing the image with respect to specific features that inform you about a potential rotation. For example, if your image contains a face, you'd do face detection (for which there is plenty of code on the File Exchange) and then rotate so that the eyes are up and the mouth down. If your image contains lines that should be vertical and/or horizontal in an un-rotated image, you can apply a Hough transform to your image and find the most likely angle of rotation using houghpeaks.
Finally, to rotate your image, you can use imrotate.
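A minimal sketch of the Hough-based route, assuming a grayscale image img whose dominant lines should be vertical in the un-rotated image; the sign of the correction should be checked on a known example:

E = edge(img, 'canny');                            % emphasise the line structures
[H, theta, rho] = hough(E);                        % theta = angle of the line normal, in degrees
peak = houghpeaks(H, 1);                           % strongest line in the accumulator
rotAngle = theta(peak(2));                         % theta = 0 corresponds to a vertical line
corrected = imrotate(img, rotAngle, 'bilinear', 'crop');
% if the result comes out rotated the wrong way, use -rotAngle instead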
Without examples or a more detailed description, it's hard to give good advice. But generally, this can be done for some types of images.
For example, suppose the image shows buildings, poles, furniture or something that should have vertical edges. Run an edge detector, then take a Fourier transform. For an unrotated image there should be peaks, or some visible pattern, in the power spectrum perpendicular to those edges (i.e. along the horizontal frequency axis for vertical edges). The power spectrum rotates the same way as the image. If you can devise an algorithm to find the spectral feature that indicates vertical edges, you can measure its angle w.r.t. the origin (zero frequency). That is the angle of image rotation.
But you will have to distinguish that particular feature from all other image features that show in the power spectrum. Have fun with that - this is the kind of detail that will take most of your creativity and time.
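A very rough sketch of this spectral idea, assuming a grayscale image img; the radius-10 exclusion zone and the sign handling are assumptions that need tuning and checking:

E = edge(img, 'canny');                                 % emphasise the straight structure
F = fftshift(abs(fft2(double(E))));                     % centred magnitude spectrum
[rows, cols] = size(F);
[x, y] = meshgrid((1:cols) - floor(cols/2) - 1, (1:rows) - floor(rows/2) - 1);
F(hypot(x, y) < 10) = 0;                                % suppress the low-frequency blob
[~, idx] = max(F(:));                                   % strongest remaining peak
ang = atan2d(y(idx), x(idx));                           % its angle w.r.t. the origin
% Vertical edges concentrate spectral energy along the horizontal frequency
% axis, so the deviation of ang from 0 or 180 degrees indicates the image
% rotation; the sign and the 180-degree ambiguity need checking by hand.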