I have an image that was rotated by an unknown angle, and I don't have the original image. How do I determine the angle of rotation with MATLAB commands?
I need to rotate the image back by this angle to recover the original image.
As @High Performance Mark mentions in his comment, it is difficult to give an answer when it is unclear how you recognize that the image is rotated, or what would tell you that the rotation has been properly corrected.
In other words, you will first have to find a way to determine the rotation angle by analyzing the image for specific features that inform you about a potential rotation. For example, if your image contains a face, you could do face detection (for which there is plenty of code on the File Exchange) and then rotate so that the eyes are up and the mouth is down. If your image contains lines that should be vertical and/or horizontal in the un-rotated image, you can apply a Hough transform to your image and find the most likely angle of rotation using houghpeaks.
Finally, to rotate your image, you can use imrotate.
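As a rough sketch of the Hough route (the file name is hypothetical, and the sign of the correction angle depends on whether your dominant lines should end up vertical or horizontal, so verify it on a test image):

% Estimate the rotation from the dominant line orientation.
img = rgb2gray(imread('rotated.png'));   % hypothetical input image
bw  = edge(img, 'canny');
[H, theta, ~] = hough(bw);               % theta runs from -90 to 89 degrees
peak = houghpeaks(H, 1);                 % strongest line in the accumulator
ang  = theta(peak(2));                   % 0 corresponds to a vertical line
img_corrected = imrotate(img, -ang, 'bilinear', 'crop');  % undo the rotation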
Without examples or a more detailed description, it's hard to give good advice. But generally, this can be done for some types of images.
For example, suppose the image shows buildings, poles, furniture or something else that should have vertical edges. Run an edge detector, then take a Fourier transform. For an un-rotated image there should be peaks, or some visible pattern in the power spectrum, along the Y axis. The power spectrum rotates the same way as the image does. If you can devise an algorithm to find the spectral feature that indicates vertical edges, you can measure its angle with respect to the origin (zero frequency); that is the angle of image rotation.
But you will have to distinguish that particular feature from all the other image features that show up in the power spectrum. Have fun with that - this is the kind of detail that will take most of your creativity and time.
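If it helps, here is one hedged way to turn that idea into code; the file name is made up, and the Radon-based peak picking is only one of several ways to measure the dominant spectral orientation:

% Find the orientation of the strongest line of spectral energy.
img = im2double(rgb2gray(imread('scene.png')));   % hypothetical input
E   = edge(img, 'canny');
P   = log1p(abs(fftshift(fft2(E))).^2);   % centred log power spectrum
angles = 0:0.5:179.5;
R = radon(P, angles);                     % energy along lines through centre
[~, idx] = max(max(R, [], 1));            % angle with the strongest ridge
orientation = angles(idx);                % compare against 90 (the y axis)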
I have an image of a robot with yellow markers, as shown.
The yellow points shown are the markers. Two cameras, placed at an offset of 90 degrees, are used to view the robot. The robot bends in between the cameras. A crude schematic of the setup is here:
https://i.stack.imgur.com/aVyDq.png
Using the two cameras I am able to get the 3D coordinates of the yellow markers. But I need to find the 3D coordinates of the central point of the robot, as shown.
I need to find the 3D position of the red marker points, which lie inside the cylindrical robot. Firstly, is this even feasible? If yes, what method can I use to achieve it?
As a bonus, is there any literature on finding the 3D location of such internal points that I could refer to? (I searched, but could not find anything similar to my problem.)
I would also welcome a theoretical solution (as long as it can find the central point within a reasonable error), which I can later translate into code.
If you know the actual dimensions, or at least the shape (e.g. a perfect circle), of the white bands, then yes, it is feasible and possible.
You need to do the following steps, which are quite non-trivial, and which I won't carry out here:
Optional but strongly suggested: calibrate your camera and undistort the image.
1. Find the equation of the projection of a 3D circle into a 2D camera, for any given rotation. You can simplify this by assuming the white band will be completely horizontal. You want some function that takes the parameters that define a circle and a rotation.
2. Find all white bands in the image, segment them, and make them horizontal (rotate them).
3. Fit the points in the corrected white band to the equation from step 1. If you wrote the equation correctly, that should give you the parameters of the circle in 3D (radius, angle).
4. Now that you have an analytic equation of the actual circle (the equation from step 1 with the parameters from step 3), you can map any point on this circle (e.g. its center) to an image location. Remember to undo the rotation applied in step 2.
This requires an understanding of curve fitting, some analytic geometry, and decent coding skills. It is not trivial, but it will give you a highly accurate solution.
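For reference, the projection in step 1 is a standard projective-geometry result: writing the circle as a conic, a point homography $H$ from the circle's plane to the image maps the conic as

$$\mathbf{x}^\top C\,\mathbf{x} = 0 \;\longmapsto\; C' = H^{-\top} C\, H^{-1},$$

so the ellipse you fit in the image ($C'$) constrains the circle parameters and the rotation encoded in $H$.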
For an inaccurate solution:
1. Find the end points of the white bands.
2. Draw the line connecting the end points.
3. Choose the center as the midpoint of this line.
This will be inaccurate because choosing end points carries more error than fitting an equation to all the points, and it ignores the cone-shaped view of the camera and the geometry.
But it may be good enough for what you want.
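A minimal sketch of this inaccurate variant, assuming markers3d is an N-by-3 array of triangulated 3D marker points on one band (the variable name and the axis used to pick the extremes are made up):

% Take the two extreme marker points and use their midpoint as the centre.
[~, iMin] = min(markers3d(:,1));          % extremes along one axis; picking
[~, iMax] = max(markers3d(:,1));          % the pair with maximum pairwise
endpoints = markers3d([iMin, iMax], :);   % distance would be more robust
center3d  = mean(endpoints, 1);           % midpoint of the connecting line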
I have been able to extract the midpoint by fitting an ellipse to the arc visible to the camera. The centroid of the ellipse is the required midpoint.
Some wrong ellipses will be fitted as well; these can be ignored. The steps to extract the ellipse were:
Extract the markers
Binarise and skeletonise
Fit an ellipse to the arc (I found a MATLAB function for this)
Get the centroid of the ellipse
hsv_img = rgb2hsv(im);            % im is the input RGB image
marker_th = 0.35;                 % threshold, chosen empirically
bin = hsv_img(:,:,3) > marker_th; % binarise on the value channel
% skeletonise
skel = bwskel(bin);
% use regionprops to get the pixel list of each arc
stats = regionprops(skel, 'PixelList');
for i = 1:numel(stats)
    el = fit_ellipse(stats(i).PixelList(:,1), stats(i).PixelList(:,2));
    ellipse_draw(el.a, el.b, -el.phi, el.X0_in, el.Y0_in, 'g');
end
The link for the fit_ellipse function
The link for the ellipse_draw function
I am attempting to implement a Gabor filter in MATLAB in such a way that it discriminates "vertical" textures. Vertical textures means structures that run from top to bottom in the image. If this is difficult to visualize, picture a white wall with windows in it: I want to find the sides of the window frames, not the tops or bottoms. My understanding is that this should be described as a horizontal variation in contrast; please correct any error in nomenclature. What I am trying to determine is whether this search for "vertical textures" calls for an orientation of 0 or 90 degrees. The documentation for the gabor function says this:
the orientation is defined as the normal direction to the sinusoidal plane wave.
But I cannot seem to grok that.
P.S. I know that other methods like edge finding or difference of Gaussians can do this too, but suffice it to say that I want to use gabor.
If you run the second example in the gabor documentation, you can see this directly:
https://www.mathworks.com/help/images/ref/gabor.html
The direction of oscillation of the Gabor kernel is the same as the direction of maximum response to periodic/texture content. So 0 degrees would be activated by vertically oriented texture of the same wavelength as the Gabor kernel.
90 degrees would be activated by horizontally oriented texture of the same wavelength.
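A quick hedged way to convince yourself (the image name is hypothetical):

% Compare the responses of 0- and 90-degree Gabor kernels on one image.
img = im2gray(imread('wall.png'));   % hypothetical test image
g   = gabor(4, [0 90]);              % wavelength 4 px at both orientations
mag = imgaborfilt(img, g);           % one magnitude response per kernel
montage({rescale(mag(:,:,1)), rescale(mag(:,:,2))});
% The 0-degree response lights up on vertical texture and the 90-degree
% response on horizontal texture, matching the convention above.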
I am currently taking my first steps in the field of computer vision and image processing.
One of the tasks I'm working on is finding the center coordinates of (overlapping and occluded) circles.
Here is a sample image:
Here is another sample image showing two overlapping circles:
Further information about the problem:
Always a monochrome, grayscale image
Rather low resolution images
Radii of the circles are unknown
Number of circles in a given image is unknown
Center of circle is to be determined, preferably with sub-pixel accuracy
Radii do not have to be determined
Relatively low overhead of the algorithm is important; the processing is supposed to be carried out on real-time camera images
For the first sample image, it is relatively easy to calculate the center of the circle by finding the center of mass. Unfortunately, this is not going to work for the second image.
Things I tried are mainly based on the Circle Hough Transform and the Distance Transform.
The Circle Hough Transform seemed relatively expensive computationally, since I have no information about the radii and the range of possible radii is large. Furthermore, the low resolution of the image makes it hard to identify the (appropriate) pixels along the edge.
As for the Distance Transform, I have trouble identifying the centers of the circles, and the fact that the image needs to be binarized implies a certain loss of information.
Now I am looking for viable alternatives to the aforementioned algorithms.
Some more sample images (images like the two samples above are extracted from images like the following):
Just thinking aloud to try and get the ball rolling for you... I would be thinking of a blob, or connected-component, analysis to separate out your blobs.
Then I would start looking at each blob individually. First thing is to see how square the bounding box is for each blob. If it is pretty square AND the centroid of the blob is central within the square, then you have a single circle. If it is not square, or the centroid is not central, you have more than one circle.
Now I am going to start looking at where the white areas touch the edges of the bounding box for some clues as to where the centres are...
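As a hedged sketch of that first check, assuming bw is the binarised image (the tolerances are guesses you would need to tune):

% Flag blobs whose bounding box is square and whose centroid is central.
stats = regionprops(bw, 'BoundingBox', 'Centroid');
for k = 1:numel(stats)
    bb = stats(k).BoundingBox;                   % [x y width height]
    aspect = bb(3) / bb(4);
    boxCentre = [bb(1) + bb(3)/2, bb(2) + bb(4)/2];
    offset = norm(stats(k).Centroid - boxCentre);
    if abs(aspect - 1) < 0.1 && offset < 2       % guessed tolerances
        fprintf('blob %d: single circle at (%.1f, %.1f)\n', k, stats(k).Centroid);
    else
        fprintf('blob %d: probably overlapping circles\n', k);
    end
end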
I'm very new to 3D image processing. I'm working on a project to find the perspective angle of a circle.
A plate has a set of white circles on it; using those circles I want to find the 3D rotation angles of the plate.
For that, I have finished the camera calibration part and obtained the camera error parameters. As the next step, I captured an image and applied Sobel edge detection.
After that I am a little confused about the ellipse fitting. I have seen a lot of ellipse fitting algorithms; which one is the best and fastest method?
And after the ellipse fit is done, I don't know how to proceed. How do I calculate the rotation and translation matrix using that ellipse?
Can you tell me which algorithm is most suitable and easiest? I need some MATLAB code to understand the concept.
Thanks in advance, and sorry for my English.
First, find the ellipse/circle centres (e.g. as Eddy_Em described in other comments).
You can then refer to Zhang's classic paper
https://research.microsoft.com/en-us/um/people/zhang/calib/
which allows you to estimate the camera pose from a single image if some camera parameters, e.g. the centre of projection, are known. Note that the method fails for frontal recordings; the stronger the perspective effect, the more accurate your estimate will be. The algorithm is fairly simple: you'll need an SVD and some cross products.
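As a hedged sketch of that decomposition, assuming worldPts holds the known circle centres on the plate (z = 0), imagePts the fitted ellipse centres, and K the 3x3 intrinsic matrix from your calibration:

% Homography from the plate plane to the image, then Zhang-style pose.
tform = fitgeotrans(worldPts, imagePts, 'projective');
H = tform.T';                  % transpose: MATLAB uses row-vector convention
h = K \ H;                     % remove the intrinsics
lambda = 1 / norm(h(:,1));     % scale so rotation columns have unit length
r1 = lambda * h(:,1);
r2 = lambda * h(:,2);
r3 = cross(r1, r2);
t  = lambda * h(:,3);          % plate position in the camera frame
[U, ~, V] = svd([r1, r2, r3]); % snap to the nearest true rotation matrix
R = U * V';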
I am currently doing a project called eye-controlled cursor using MATLAB.
There are a few stages before I extract the center of the iris (which can be considered the pupil location): face detection -> eye detection -> iris detection -> and finally I obtain the center of the iris, as shown in the figure.
Now I am trying to map this position (X,Y) to my computer screen pixels (1366 x 768). Most of the journal papers I have found require a reference point such as the lips, nose or an eye corner, but I am only able to extract the center of the iris by thresholding. How can I map this position (X,Y) to my computer screen pixels (1366 x 768)?
Well, you either have to fix the head in a certain position (which isn't very practical) or you will have to adapt to the face position. Depending on your image, you will have to choose points that are always in the image and are easy to detect. If you just have one point (like the nose), you can only adjust for the x/y shift of your head. If you have more points (like the four corners of the eyes, the nose, and maybe the corners of the mouth), you can also extract the three rotational values of the head and therefore estimate the direction of sight much better. For a first approach, I guess just the two inner corners of the eyes (they are "easy" to detect) will do.
I would also recommend using a calibration sequence: present the user with a sequence of four red points in the corners of the screen and have him look at them. You can then record the positions of the pupils and interpolate between them.
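A minimal sketch of that calibration step, where pupilCal is a hypothetical 4-by-2 array of pupil positions recorded while the user looked at the four corners, and pupilXY is a new measurement:

% Map pupil coordinates to screen pixels via a projective fit to 4 corners.
screenCal = [1 1; 1366 1; 1366 768; 1 768];       % corner targets in pixels
tform = fitgeotrans(pupilCal, screenCal, 'projective');
cursor = transformPointsForward(tform, pupilXY);  % estimated gaze position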