I have a video of a giant whirlpool, similar to the image below.
Can anyone suggest an algorithm or code to detect spiral optical flow?
Is it possible to fit a spiral curve over it based on the spiral optical flow? If yes, how?
Thank you.
You can compute optical flow using the vision.OpticalFlow object in the Computer Vision System Toolbox. As for determining whether the flow is spiral, that seems to be the crux of your project.
Optical flow takes a pair of consecutive frames and attempts to give you a vector at every pixel describing its motion from frame 1 to frame 2.
If you do not care about the motion of every single pixel, you can instead track a sparse set of points over time using vision.PointTracker.
Edit:
If you have a recent version of the Computer Vision System Toolbox, try the new optical flow functions: opticalFlowHS, opticalFlowLK, opticalFlowLKDoG, and opticalFlowFarneback.
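As a starting point, here is a minimal, untested sketch of how one of those functions could be used to measure how "spiral" the flow is. The video file name and the vortex centre (taken here as the image centre) are assumptions you would need to replace:

```matlab
% Sketch (untested): estimate dense optical flow and measure the swirl
% about an assumed vortex centre (cx, cy).
vr = VideoReader('whirlpool.mp4');   % hypothetical file name
of = opticalFlowFarneback;
frame = rgb2gray(readFrame(vr));
estimateFlow(of, frame);             % initialise with the first frame
frame = rgb2gray(readFrame(vr));
flow = estimateFlow(of, frame);      % flow.Vx, flow.Vy per pixel

[X, Y] = meshgrid(1:size(frame,2), 1:size(frame,1));
cx = size(frame,2)/2;  cy = size(frame,1)/2;   % assumed centre
dx = X - cx;  dy = Y - cy;
r  = hypot(dx, dy) + eps;
% Decompose each flow vector into tangential and radial components:
tangential = (-dy.*flow.Vx + dx.*flow.Vy) ./ r;  % signed swirl
radial     = ( dx.*flow.Vx + dy.*flow.Vy) ./ r;  % inward/outward
% A spiral flow should show a consistently signed tangential component
% together with a net inward radial component:
fprintf('mean tangential %.3f, mean radial %.3f\n', ...
        mean(tangential(:)), mean(radial(:)));
```

If both means are clearly non-zero, the radial and tangential components together describe a logarithmic-spiral-like field, which you could then use to fit a spiral curve.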
Related
I process a bunch of 2D images, each having a sinusoidal feature randomly located in it:
Whereas the amplitude and the period of the sine are known in advance, the exact position is not.
I want to find an exact position of the sine in each image using MATLAB. Standard fitting techniques like Surface Fit won't work here because I only need to fit one feature, not the whole image.
The best idea that comes to my mind is to generate a reference image containing a sine at a known location, and then use cross-correlation (xcorr2) to find the offset between the two. Can you suggest a faster or simpler solution?
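The template-matching idea you describe can be sketched as follows (untested; the amplitude, period, and template size here are placeholder assumptions, and normxcorr2 is used instead of xcorr2 because it is normalized against local image energy):

```matlab
% Sketch (untested): locate a sine feature of known amplitude A and
% period T in image I by cross-correlating with a synthetic template.
A = 10;  T = 40;                       % assumed known parameters
w = 2*T;  h = 2*ceil(A) + 5;           % template size, an assumption
[tx, ty] = meshgrid(1:w, 1:h);
% A thin binary curve tracing one stretch of the sine:
template = double(abs(ty - h/2 - A*sin(2*pi*tx/T)) < 1);

c = normxcorr2(template, double(I));   % I is your input image
[~, idx] = max(c(:));
[peakY, peakX] = ind2sub(size(c), idx);
% Offset of the template's top-left corner within I:
offsetX = peakX - size(template, 2);
offsetY = peakY - size(template, 1);
```

Since the period is known, you could also restrict the search to one period of horizontal shift, which keeps the correlation cheap.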
Is it possible to measure the ISNT (Inferior, Superior, Nasal, Temporal) distances by detecting the optic cup and optic disc with a circular Hough transform in the following image?
Over on the MathWorks File Exchange there is a function for detecting circles using the Hough transform. You should be able to calculate distances in pixels and convert those to whatever unit you wish, as long as you have a reference measurement to scale against.
I haven't personally used the function, so I can't say whether it is exactly what you need, but it's a place to start.
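If you have the Image Processing Toolbox, imfindcircles offers a built-in circular Hough transform. Below is a rough, untested sketch; the file name, radius ranges, and the assumption that the strongest detection in each range is the disc/cup are all placeholders you would need to tune, and it ignores cup decentring within the disc:

```matlab
% Sketch (untested): detect optic disc and cup as circles, then
% approximate the four ISNT rim widths in pixels.
I = rgb2gray(imread('fundus.png'));            % hypothetical file name
[discC, discR] = imfindcircles(I, [80 150], 'ObjectPolarity', 'bright');
[cupC,  cupR ] = imfindcircles(I, [30  80], 'ObjectPolarity', 'bright', ...
                               'Sensitivity', 0.95);
% Rim width = distance between disc edge and cup edge in each direction
% (image y grows downward; nasal/temporal depend on which eye it is):
inferior = (discC(1,2) + discR(1)) - (cupC(1,2) + cupR(1));
superior = (cupC(1,2)  - cupR(1))  - (discC(1,2) - discR(1));
right    = (discC(1,1) + discR(1)) - (cupC(1,1) + cupR(1));
left     = (cupC(1,1)  - cupR(1))  - (discC(1,1) - discR(1));
```

With a known reference (e.g. a typical disc diameter in mm), the pixel widths convert directly to physical units.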
I am well aware of the existence of this question but mine will differ. I also know that there could be significant errors with this approach but I want to understand the configuration also theoretically.
I have some basic questions which I find hard to answer for myself clearly. There is a lot of information about accelerometers and gyroscopes but I still haven't found an explanation "from first principles" of some basic properties.
So I have a plate sensor that contains an accelerometer and gyroscope. There is also a magnetometer which I skip for now.
The accelerometer gives, at each time t, the instantaneous acceleration vector a = (ax, ay, az) in m/s^2 in the coordinate system fixed to the sensor.
The gyroscope gives a 3D vector in deg/s describing the instantaneous rotation rates about the three axes (Ox, Oy and Oz). From this information one can compute a rotation matrix R that corresponds to the infinitesimal rotation of the coordinate system relative to the previous moment. Here is some explanation of how to obtain a quaternion that represents R.
So the infinitesimal movement can be calculated using the fact that acceleration is the second derivative of position.
Imagine that your sensor is attached to your hand or leg. In the first moment we can consider its point in 3D space as (0,0,0) and the initial coordinate system also attached in this physical point. So for the very first time step we will have
r(1) = 0.5a(0)dt^2
where r is the infinitesimal movement vector, a(0) is the acceleration vector.
In each of the following steps we will use the calculations
r(t+1) = 0.5a(t)dt^2 + v(t)dt + r(t)
where v(t) is the velocity vector, which must be estimated in some way, for example as (r(t) - r(t-1)) / dt.
Also, after each infinitesimal movement we will have to take into account the data from the gyroscope. We will use the rotation matrix to rotate the vector r(t+1).
In this way, probably with tremendous error, I will get some trajectory in the initial coordinate system.
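For concreteness, here is an untested sketch of the loop described above, written with the body-frame acceleration rotated into the initial frame at each step (an equivalent but tidier form of "integrate, then rotate"). Note that gravity is not removed here, which a real implementation must do, and drift accumulates very quickly:

```matlab
% Sketch (untested) of the dead-reckoning loop described above.
% a: 3xN accelerometer samples (m/s^2), w: 3xN gyro samples (rad/s),
% dt: sample period.
N = size(a, 2);
R = eye(3);          % orientation of sensor frame w.r.t. initial frame
r = zeros(3, N);     % position trajectory
v = zeros(3, 1);     % velocity
for t = 1:N-1
    % Rotate the body-frame acceleration into the initial frame:
    aWorld = R * a(:, t);
    r(:, t+1) = r(:, t) + v*dt + 0.5*aWorld*dt^2;
    v = v + aWorld*dt;
    % Update orientation with a small rotation from the gyro rates,
    % via the matrix exponential of the skew-symmetric rate matrix:
    wx = [    0     -w(3,t)   w(2,t);
           w(3,t)      0     -w(1,t);
          -w(2,t)   w(1,t)      0   ];
    R = R * expm(wx * dt);
end
```

Integrating v directly from aWorld avoids the noisy finite-difference estimate (r(t) - r(t-1)) / dt.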
My queries are:
Is this algorithm correct in principle? If not, where am I wrong?
I would very much appreciate some resources with a working example where the first principles are not skipped.
How should I proceed with a Kalman filter to obtain a better trajectory? In what way exactly do I pass the IMU data (accelerometer, gyroscope and magnetometer) to the Kalman filter?
Your conceptual framework is correct, but the equations need some work. The acceleration is measured in the platform frame, which can rotate very quickly, so it is not advisable to integrate acceleration in the platform frame and then rotate the position change. Rather, the accelerations are transformed into a relatively slowly rotating frame, and the integration to velocity change and position change is done there. Typically this is a locally-level frame (e.g. North-East-Down or Wander Azimuth) or an Earth-centered frame (ECEF or ECI). Gravity and the Coriolis force must be included in the acceleration.
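A single integration step in a locally-level North-East-Down frame might look like the following untested fragment (variable names are assumptions; Coriolis and transport-rate terms are omitted for brevity):

```matlab
% Sketch (untested): one integration step in the NED frame.
% fBody: specific force from the accelerometer (m/s^2),
% Rbn: body-to-NED rotation matrix maintained from the gyro,
% v, p: velocity and position in NED, dt: sample period.
g = [0; 0; 9.81];          % gravity; "down" is positive in NED
aNav = Rbn * fBody + g;    % accelerometers measure specific force,
                           % so gravity must be added back in
v = v + aNav * dt;         % integrate in the slowly rotating frame
p = p + v * dt;
```

The key differences from the platform-frame loop are that the rotation is applied to the acceleration before integrating, and that gravity is explicitly restored, since an accelerometer at rest reads the reaction to gravity, not zero.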
Derivations from first principles can be found in many references, one of my favorites is Strapdown Inertial Navigation Technology by Titterton and Weston. Derivations of the inertial navigation equations in locally-level and Earth-fixed frames are given in Chapter 3.
As you've recognized in your question - the initial velocity is an unknown constant of integration. Without some estimate of initial velocity the trajectory resulting from integrating the inertial data can be wildly wrong.
I'm very new to 3D image processing. I'm working on a project to find the perspective angle of a circle.
A plate carries a set of white circles; using those circles I want to find the 3D rotation angles of the plate.
For that, I have finished the camera calibration part and obtained the camera error parameters. Next I captured an image and applied Sobel edge detection.
After that I am a little confused about the ellipse fitting algorithm. I have seen a lot of ellipse fitting algorithms; which one is the best and fastest method?
After finishing the ellipse fit, I don't know how to proceed further. How do I calculate the rotation and translation matrices using that ellipse?
Can you tell me which algorithm is more suitable and easy? I need some MATLAB code to understand the concept.
Thanks in advance.
Sorry for my English.
First, find the ellipse/circle centres (e.g. as Eddy_Em described in other comments).
You can then refer to Zhang's classic paper
https://research.microsoft.com/en-us/um/people/zhang/calib/
which allows you to estimate camera pose from a single image if some camera parameters (e.g. the centre of projection) are known. Note that the method fails for frontal views: the stronger the perspective effect, the more accurate your estimate will be. The algorithm is fairly simple; you'll need an SVD and some cross products.
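The core of the method can be sketched as below (untested). It assumes you have at least four circle centres with known plane coordinates worldPts, their detected image positions imgPts, and a calibrated intrinsic matrix K:

```matlab
% Sketch (untested): Zhang-style pose from a single view of a plane.
% worldPts: Nx2 plane coordinates of the circle centres,
% imgPts:   Nx2 detected centres in the image, N >= 4,
% K:        3x3 camera intrinsic matrix.
tform = fitgeotrans(worldPts, imgPts, 'projective');
H = tform.T';                     % MATLAB stores the transpose
Kinv = inv(K);
h1 = Kinv * H(:,1);  h2 = Kinv * H(:,2);  h3 = Kinv * H(:,3);
lambda = 1 / norm(h1);            % scale fixed by unit rotation columns
r1 = lambda * h1;  r2 = lambda * h2;
r3 = cross(r1, r2);               % third column from the cross product
t  = lambda * h3;                 % translation of the plane
% Re-orthogonalise [r1 r2 r3] into a proper rotation with an SVD:
[U, ~, V] = svd([r1 r2 r3]);
Rmat = U * V';
```

The Euler angles of the plate then come directly from Rmat. Near-frontal views make h1 and h2 nearly parallel, which is why the estimate degrades there.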
I have a video of moving parts taken using a static camera. I wish to track and analyze the co-ordinates of various parts in the video, but the co-ordinate values are affected by camera movement. How do I compensate for camera shake? I don't have any static point in the video (except for the top and bottom edges of the frame).
All I wish to get is the co-ordinates of (centroids, may be) moving parts adjusted for camera shake. I use MATLAB's computer vision toolbox to process the video.
I've worked on super-resolution algorithms in the past, and as a side effect I got image stabilization using phase correlation. It's very resilient to noise, and it's quite fast. You should be able to achieve sub-pixel accuracy using a weighted centroid around the peak location, or some kind of peak-fitting routine. Running phase correlation on successive frames will tell you the translational shift that occurs frame to frame. You can then use an affine warp to remove the shift.
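Phase correlation itself is only a few lines. A minimal, untested sketch for two same-sized grayscale frames f1 and f2 (the sign convention of the recovered shift depends on which frame you treat as the reference, so check it on a known displacement):

```matlab
% Sketch (untested): translational shift between frames f1 and f2
% (grayscale, same size, double) via phase correlation.
F1 = fft2(f1);  F2 = fft2(f2);
% Normalised cross-power spectrum; eps guards against division by zero:
crossPower = (F1 .* conj(F2)) ./ (abs(F1 .* conj(F2)) + eps);
corrSurf = abs(ifft2(crossPower));  % sharp peak at the shift
[~, idx] = max(corrSurf(:));
[py, px] = ind2sub(size(corrSurf), idx);
shifts = [py px] - 1;
% Map wrap-around peaks to signed (negative) shifts:
sz = size(f1);
shifts(shifts > sz/2) = shifts(shifts > sz/2) - sz(shifts > sz/2);
```

Fitting a centroid or paraboloid to corrSurf around the peak refines this to sub-pixel accuracy, as mentioned above.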
A similar, but slower, approach is shown in this example using normalized cross-correlation.
If you're using MATLAB R2013a or later, video stabilization can be done using Point Matching or Template Matching. I guess they're available in R2012b as well, but I haven't tested that.