Suppose some objects make complex spiral movements in 3D and we get their trajectories projected onto a plane.
How can I find the median trajectory of such movements and estimate the amplitudes of the spirals?
I assume this requires averaging the coordinates of the trajectories, then somehow finding the distances from the extreme points of the trajectories to the midline. But I don't know a concrete algorithm for this. Can someone suggest one?
By median trajectory I mean a line that goes between the path waves, something like the line in the picture below.
This is a case for a Kalman filter, but this method is a little complicated.
A simpler one is a moving average, with a number of samples that covers as close as possible to a full period (which you can estimate visually).
Regarding the distances, you can compute the shortest Euclidean distance of every point to the midline (using a point-to-segment distance function). This will yield an alternating plot, which you can smooth with a moving maximum (rather than average) over a period.
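A minimal Matlab sketch of this idea, assuming the projected trajectory is given as column vectors x and y sampled at roughly even intervals, and that periodLen is your visually estimated number of samples per spiral period (here the distance is simply taken per sample index rather than via a full point-to-segment search):

periodLen = 50;                       % hypothetical period length in samples
xMid = movmean(x, periodLen);         % moving average gives the midline
yMid = movmean(y, periodLen);

% Distance of each trajectory point to the corresponding midline point
d = hypot(x - xMid, y - yMid);

% Envelope of the oscillation: moving maximum over one period
amplitude = movmax(d, periodLen);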
Suppose I want to uniformly sample points inside a convex polygon.
One of the most common approaches described here, and on the internet in general, consists of triangulating the polygon and generating uniformly distributed random points inside each triangle using different schemes.
The one I find most practical is to generate exponential distributions from uniform ones, for instance by taking -log(U), and normalizing their sum to one.
Within Matlab, we would have this code to sample uniformly inside a triangle:
vertex=[0 0;1 0;0.5 0.5];       %vertex coordinates in the 2D plane
x=rand(10000,size(vertex,1));   %uniform generation of random coefficients
x=-log(x);                      %make the uniform distribution exponential
x=bsxfun(@rdivide,x,sum(x,2));  %normalize such that each row sums to one
unif_samples=x*vertex;          %calculate the 2D coordinates of each sample inside the triangle
And this works just fine:
However, using the exact same scheme for anything other than a triangle just fails. For instance for a quadrilateral, we get the following result:
Clearly, sampling is not uniform anymore and the more vertices you add, the more difficult it is to "reach" the corners.
If I triangulate the polygon first then uniform sampling in each triangle is easy and obviously gets the job done.
But why? Why is it necessary to triangulate first?
Which specific property do triangles (and simplexes in general, since this behaviour seems to extend to n-dimensional constructions) have that makes this work for them and not for other polygons?
I would be grateful if someone could give me an intuitive explanation of the phenomenon, or just point to some reference that could help me understand what is going on.
I should point out that it's not strictly necessary to triangulate a polygon in order to sample uniformly from it. Another way to sample a shape is rejection sampling, which proceeds as follows.
1. Determine a bounding box that covers the entire shape. For a polygon, this is as simple as finding the highest and lowest x and y coordinates of the polygon.
2. Choose a point uniformly at random in the bounding box.
3. If the point lies inside the shape, return that point. (For a polygon, algorithms that determine this are collectively called point-in-polygon predicates.) Otherwise, go to step 2.
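A minimal Matlab sketch of this procedure for a 2D polygon, assuming the vertices are given as column vectors px and py and using inpolygon as the point-in-polygon predicate:

px = [0; 2; 3; 1];  py = [0; 0; 2; 3];  % hypothetical quadrilateral vertices

xmin = min(px); xmax = max(px);         % bounding box
ymin = min(py); ymax = max(py);

while true
    x = xmin + (xmax - xmin) * rand();  % uniform point in the bounding box
    y = ymin + (ymax - ymin) * rand();
    if inpolygon(x, y, px, py)          % accept only if inside the polygon
        break;
    end
end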
However, there are two things that affect the running time of this algorithm:
The time complexity depends greatly on the shape in question. In general, the acceptance rate of this algorithm is the volume of the shape divided by the volume of the bounding box. (In particular, the acceptance rate is typically very low for high-dimensional shapes, in part because of the curse of dimensionality: typical shapes cover a much smaller volume than their bounding boxes.)
Also, the algorithm's efficiency depends on how fast it is to determine whether a point lies in the shape in question. Because of this, complex shapes are often built up from simpler shapes, such as triangles, circles, and rectangles, for which it's easy to determine whether a point lies inside them and to compute their bounding boxes.
Note that rejection sampling can be applied, in principle, to sample any shape of any dimension, not just convex 2-dimensional polygons. It thus works for circles, ellipses, and curved shapes, among others.
And indeed, a polygon could, in principle, be decomposed into a myriad of shapes other than triangles, one of those shapes sampled in proportion to its area, and a point in that shape sampled at random via rejection sampling.
Now, to explain a little about the phenomenon you give in your second image:
What you have there is not a 4-sided (2-dimensional) polygon, but rather a 3-dimensional simplex (namely a tetrahedron) that was projected to 2-dimensional space. (See also the previous answer.) This projection explains why points inside the "polygon" appear denser in the interior than in the corners. You can see why if you picture the "polygon" as a tetrahedron with its four corners at different depths. With higher dimensions of simplex, this phenomenon becomes more and more acute, again due partly to the curse of dimensionality.
Well, there are less expensive methods to sample uniformly in the triangle. You're sampling a Dirichlet distribution in the (d+1)-simplex and taking a projection, computing exponentials and such. I would refer you to the code sample and paper reference here: it uses only square roots and is a lot simpler as an algorithm.
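For reference, a minimal Matlab sketch of that commonly cited square-root parameterization (two uniform draws per point; the exact code the link shows may differ), assuming a, b, c are 1x2 row vectors with the triangle vertices:

a = [0 0]; b = [1 0]; c = [0.5 0.5];    % hypothetical triangle vertices
r1 = rand(); r2 = rand();               % the only two RNG calls needed
p = (1 - sqrt(r1)) * a ...
  + sqrt(r1) * (1 - r2) * b ...
  + sqrt(r1) * r2 * c;                  % uniform sample inside the triangle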
Concerning uniform sampling in complex areas (a quadrilateral in your case), the general approach is quite simple:
Triangulate. You'll get two triangles with vertices (a,b,c)0 and (a,b,c)1.
Compute the triangle areas A0 and A1 using, e.g., Heron's formula.
Randomly select one of the triangles based on its area:
if (random() < A0/(A0+A1)) select triangle 0, else select triangle 1, where random() returns a float in the range [0...1].
Sample a point in the selected triangle using the method mentioned above.
This approach easily extends to sampling with uniform density from any complex area: with N triangles, a categorical distribution with probabilities proportional to the areas gives you the selected triangle, then you sample a point in that triangle.
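A minimal Matlab sketch of this pipeline, assuming the convex polygon is given as an M-by-2 vertex matrix V (for a convex polygon, a Delaunay triangulation of the vertices covers the whole area):

V = [0 0; 2 0; 3 2; 1 3];                 % hypothetical quadrilateral
T = delaunay(V(:,1), V(:,2));             % triangulate (fine for a convex polygon)

% Area of each triangle via the cross product
a = V(T(:,1),:); b = V(T(:,2),:); c = V(T(:,3),:);
A = 0.5 * abs((b(:,1)-a(:,1)).*(c(:,2)-a(:,2)) - (b(:,2)-a(:,2)).*(c(:,1)-a(:,1)));

nSamples = 10000;
cdf = cumsum(A) / sum(A);                 % categorical distribution over triangles
samples = zeros(nSamples, 2);
for k = 1:nSamples
    t  = find(rand() <= cdf, 1);          % pick a triangle proportional to its area
    r1 = rand(); r2 = rand();             % two more RNG calls
    samples(k,:) = (1 - sqrt(r1)) * a(t,:) ...
                 + sqrt(r1) * (1 - r2) * b(t,:) ...
                 + sqrt(r1) * r2 * c(t,:);   % square-root triangle sampling
end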
UPDATE
We have to triangulate because we know a good (fast, reliable, only 2 RNG calls, ...) algorithm to sample with uniform density in a triangle. Then we can build on it (good software is all about reusability): pick one triangle (at the cost of another RNG call) and then sample from it, for a total of three RNG calls to get uniform-density sampling from ANY area, convex and concave alike. A pretty universal method, I would say. And triangulation is a solved problem:
basically you do it once (triangulate and build the weights array Ai/Atotal) and sample till infinity.
Another part of the answer is that we (me, to be precise, but I've worked with random sampling for ~20 years) don't know a good algorithm to sample precisely with uniform density from an arbitrary convex closed polygon with more than three vertices. You proposed an algorithm based on a hunch and it didn't work out. And it shouldn't work, because what you use is a Dirichlet distribution in the (d+1)-simplex projected back onto the d-dimensional hyperplane. It is not extendable even to a quadrilateral, let alone to an arbitrary convex polygon. And I would conjecture that even if such an algorithm existed, an n-vertex polygon would require n-1 calls to the RNG, which means there is no triangulation setup, but each call to get a point would be rather expensive.
A few words on the complexity of the sampling. Assuming you did the triangulation, then with 3 calls to the RNG you'll get one point sampled uniformly inside your polygon.
But the complexity of sampling with respect to the number of triangles N would be O(log(N)) at best: you basically do a binary search over the partial sums of Ai/Atotal.
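A minimal sketch of that lookup, assuming cdf is the vector of partial sums Ai/Atotal precomputed during the triangulation setup:

r  = rand();
lo = 1; hi = numel(cdf);
while lo < hi
    mid = floor((lo + hi) / 2);
    if r <= cdf(mid)
        hi = mid;                 % target triangle is at mid or before it
    else
        lo = mid + 1;             % target triangle is after mid
    end
end
t = lo;                           % index of the selected triangle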
You could do a bit better: there is O(1) (constant-time) triangle selection using alias sampling. The cost is a bit more setup time, but it could be fused with the triangulation. It also requires one more RNG call, so for four RNG calls you get a constant per-point sampling time, independent of the complexity of your polygon, and it works for any shape.
I am using a 3D cross-correlation technique to track a particle in 3D. It is very robust, but my z dimension has 4x lower resolution than my x and y. The cross-correlation produces a 3D image with a single maximum. I would like to localise this point with sub-pixel accuracy, using interpolation of some sort I expect.
Any help welcome!
Craig
You could use bicubic (tricubic in 3D) or similar interpolation around the peak, as used for image scaling, to better localise the peak. This is commonly done in image processing, for example when localising peaks in difference-of-Gaussian stacks for blob detection, by performing a cubic approximation in each dimension with the respective neighbouring pixels.
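As an illustration, a minimal Matlab sketch of the separable approach, here using a quadratic (parabolic) fit through the peak and its two neighbours in each dimension rather than a full cubic, assuming C is the 3D correlation volume and the peak is not on the volume boundary:

[~, idx] = max(C(:));
[i, j, k] = ind2sub(size(C), idx);        % integer peak position

offset = zeros(1, 3);
for d = 1:3
    sub = {i, j, k};
    sub{d} = sub{d} - 1;  fm = C(sub{:}); % neighbour below the peak along dimension d
    sub{d} = sub{d} + 2;  fp = C(sub{:}); % neighbour above the peak along dimension d
    f0 = C(i, j, k);
    offset(d) = 0.5 * (fm - fp) / (fm - 2*f0 + fp);  % vertex of the fitted parabola
end
peak = [i, j, k] + offset;                % sub-pixel peak coordinates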
I have multiple plants in a single binary image. How would I identify each leaf in the image, assuming that each leaf is approximately elliptical?
example input: http://i.imgur.com/BwhLVmd.png
I was thinking a good place to start would be finding the tip of each leaf and then getting the center of each plant. Then I could fit the curves starting from the tip and then going to the center. I've been looking online and saw something involving a watershed method, but I do not know where to begin with that idea.
You should be aware that these things are tricky to get working robustly - there will always be a failure case.
This said, I think your idea is not bad.
You could start as follows:
Identify the boundary curve of each plant (i.e. pixels with both foreground and background in their neighbourhood).
Compute the centroid of each plant.
Convert each plant boundary to a polar coordinate system, with the centroid as the origin. This amounts to setting up a coordinate system with the distance of each boundary curve point on the Y axis and the angle on the X axis.
In this representation of the boundary curve, try to identify maxima; these are the tips of the leaves. You will probably need to do some smoothing. Use the parts of the curve before and after the maxima to start fitting your ellipses or some other shape.
Generally, a polar coordinate system is always useful for analysing stuff that's roughly circular.
To fit your ellipses, once you have a rough initial position, I would probably try an EM-style approach.
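A minimal Matlab sketch of the boundary/centroid/polar steps for a single plant, assuming BW is a binary mask containing just that plant (findpeaks needs the Signal Processing Toolbox; the angular wrap-around at +/-pi is ignored here):

B  = bwboundaries(BW);                  % boundary curve of the plant
b  = B{1};                              % N-by-2 list of [row, col] boundary points

s  = regionprops(BW, 'Centroid');       % centroid of the plant
cx = s(1).Centroid(1);  cy = s(1).Centroid(2);

% Polar representation of the boundary, centroid as origin
[theta, rho] = cart2pol(b(:,2) - cx, b(:,1) - cy);
[theta, order] = sort(theta);           % sort boundary points by angle
rho = rho(order);

rhoSmooth = smoothdata(rho, 'movmean', 15);   % some smoothing of the radius curve
[~, tipIdx] = findpeaks(rhoSmooth);           % maxima = candidate leaf tips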
I would do something like this (I is your binary image)
I=bwmorph(bwmorph(I, 'bridge'), 'clean');  % bridge small gaps, remove isolated pixels
SK=bwmorph(I, 'skel', Inf);                % skeletonize each plant
endpts = bwmorph(SK,'endpoints');          % skeleton endpoints, roughly the leaf tips
props=regionprops(I, 'All');               % region properties, including centroids
And then connect every segment from the centroids listed in props.Centroid to the elements of endpts, which should give you your leaves (petals?).
A bit of filtering is probably necessary, bwmorph is your friend. Have fun!
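A minimal sketch of that last step for inspection, assuming the variables I, SK, endpts and props from the snippet above and one connected plant per region:

[er, ec] = find(endpts);                    % endpoint coordinates (row, col)
L = bwlabel(I);                             % which plant each endpoint belongs to
imshow(I); hold on;
for k = 1:numel(er)
    plant = L(er(k), ec(k));                % region index of this endpoint
    c = props(plant).Centroid;              % that plant's centroid [x, y]
    plot([c(1) ec(k)], [c(2) er(k)], 'r-'); % segment from centroid to leaf tip
end
hold off;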
So I have a 3 dimensional matrix of points that (presumably) define a surface. For my purposes, X and Y can be random values but when plotted along with their Z coordinates, they will define some undulating surface. I'd like to measure the local curvatures of said surface, and in order to do that, I need to be able to find the gradient of said surface, at which point calculating the curvature is trivial.
I have not yet found an implementation of how to measure this curvature that doesn't make use of Matlab's gradient function. The problem with Matlab's gradient function is that it assumes that the points are in some sort of order, similar to diff(X). This would suffice if my points were spaced along a grid, which is not necessarily the case.
One possible solution to measuring the gradient is to give in and assign each point to a discrete coordinate in a grid in the XY plane, thus overcoming this issue. However, this solution seems somewhat inelegant, and I was curious to see if anyone had suggestions. Thanks!
You can use griddata to interpolate from your scattered data points to grid spaced points and then calculate the gradient.
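A minimal Matlab sketch of that approach, assuming x, y, z are vectors of the scattered surface samples (grid size and interpolation method are arbitrary choices here):

xi = linspace(min(x), max(x), 100);       % regular grid in the XY plane
yi = linspace(min(y), max(y), 100);
[XI, YI] = meshgrid(xi, yi);

ZI = griddata(x, y, z, XI, YI, 'cubic');  % interpolate scattered data onto the grid

dx = xi(2) - xi(1);  dy = yi(2) - yi(1);
[ZX, ZY] = gradient(ZI, dx, dy);          % numerical gradient on the regular grid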