Constrained Delaunay triangulation vs ear clipping - triangulation

I'm not an expert in triangulation, so I decided to ask. :)
There is the simple ear clipping algorithm, which has complexity O(n^2),
and there is the constrained Delaunay algorithm, which has complexity O(n log n).
So the question is: is the Delaunay algorithm faster than ear clipping? I ask because I understand that if the constant factor per step is significantly bigger for Delaunay, it may be slower after all.
P.S. http://code.google.com/p/poly2tri/ - Delaunay,
http://www.geometrictools.com/Documentation/TriangulationByEarClipping.pdf - Ear clipping
P.P.S. By the way, is constrained Delaunay the fastest one?

The sweepline Delaunay algorithm is O(n log n), not O(log n).
With a small number of points, an implementation with worst case O(n^2) can be faster than an O(n log n) implementation.
One reason is that the O(n log n) algorithm might have to use a hierarchical data structure; constantly adding and removing points and rebalancing a tree can be costly and make the algorithm run slower in practice.

In real-world settings you can observe near-linear running time for Delaunay triangulations. At least for C++ there are libraries that triangulate more than a million points per second (a quick practical check follows the links):
www.cgal.org
http://www.geom.at/fade2d/html/
http://www.cs.cmu.edu/~quake/triangle.html
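For a rough feel of that kind of throughput, here is a small Python check using scipy.spatial.Delaunay (which wraps Qhull, a different library from the three above, and is unconstrained rather than constrained); the numbers it prints only illustrate practical scaling, they are not a benchmark of those C++ libraries:

    # Rough throughput check with scipy.spatial.Delaunay (wraps Qhull).
    # Only meant to illustrate practical, near-linear scaling; it is not a
    # benchmark of the C++ libraries linked above.
    import time
    import numpy as np
    from scipy.spatial import Delaunay

    for n in (10_000, 100_000, 1_000_000):
        pts = np.random.rand(n, 2)        # uniform random points in the unit square
        t0 = time.perf_counter()
        tri = Delaunay(pts)               # unconstrained Delaunay triangulation
        dt = time.perf_counter() - t0
        print(f"{n:>9} points: {dt:.2f} s, {len(tri.simplices)} triangles")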

You can also try to lift the points onto the paraboloid z = x^2 + y^2 and project the lower convex hull of the lifted points back to the 2D plane. The result is the Delaunay triangulation: https://cs.stackexchange.com/questions/2400/brute-force-delaunay-triangulation-algorithm-complexity
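A minimal Python sketch of that lifting construction, using scipy.spatial.ConvexHull (note it yields the unconstrained Delaunay triangulation and assumes the points are in general position):

    # Lift each 2D point (x, y) to (x, y, x^2 + y^2), take the 3D convex hull,
    # and keep the downward-facing facets (the lower hull). Projected back to
    # the plane, those facets are the Delaunay triangles.
    import numpy as np
    from scipy.spatial import ConvexHull

    def delaunay_via_lifting(points2d):
        pts = np.asarray(points2d, dtype=float)
        lifted = np.column_stack([pts, (pts ** 2).sum(axis=1)])  # z = x^2 + y^2
        hull = ConvexHull(lifted)
        # hull.equations rows are (nx, ny, nz, offset); nz < 0 means the facet
        # faces downward, i.e. it belongs to the lower hull.
        lower = hull.equations[:, 2] < 0
        return hull.simplices[lower]      # triangles as index triples into points2d

    rng = np.random.default_rng(0)
    triangles = delaunay_via_lifting(rng.random((100, 2)))
    print(len(triangles), "Delaunay triangles")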

Related

Fast libraries for merging two tangled convex hulls accessible from Python?

I'm looking for something that is incremental (with accessible state), which likely means some merge method is exposed.
In general I want to start with a set of points that has a ConvexHull calculated and add a point to it (a single point trivially has itself as its convex hull). I was looking for alternatives to Bowyer-Watson via convex hull merges. I'm not sure if this is a bad idea, or whether this should be a CS question, except that it's about finding a real solution in the Python ecosystem.
I see some related content here.
Merging two tangled convex hulls
And Qhull (which scipy's Delaunay and ConvexHull use) has a lot of options I do not yet understand:
http://www.qhull.org/html/qh-optq.htm
You can use Andrew's modification of the Graham scan algorithm (the monotone chain algorithm).
Here is a reference to some short Python code implementing it.
What makes it suited to your needs is that after the points are sorted in xy-order, the upper and lower hulls are computed in linear time. Since you already have the convex hull (possibly both convex hulls), the xy-sorting of the convex hull points will take linear time (e.g., reverse the lower hulls and merge the four sorted lists). The rest of the algorithm also takes linear time in the number of points on the convex hulls, which may well be much smaller than the original number of points.
All the functionality for this implementation is in the code referenced above, and for the merge you can use the code from this SO answer or implement your own.
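A rough Python sketch of that combination (the function names here are mine, not from the linked code): merge the already-sorted hull vertex lists, then run the monotone chain pass, which is linear in the number of hull points.

    from heapq import merge

    def cross(o, a, b):
        """2D cross product of OA and OB; > 0 means a left turn."""
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def monotone_chain(points):
        """Convex hull of points already in xy-sorted order (Andrew's monotone chain)."""
        pts = [tuple(p) for p in points]
        if len(pts) <= 2:
            return pts
        lower, upper = [], []
        for p in pts:
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]    # counter-clockwise, without duplicates

    def merge_hulls(hull_a, hull_b):
        """Merge two convex hulls given as vertex lists into one convex hull."""
        a, b = sorted(map(tuple, hull_a)), sorted(map(tuple, hull_b))
        # Hulls are small, so sorting is cheap; with CCW vertex order you could
        # produce the xy-order in linear time instead, as described above.
        return monotone_chain(list(merge(a, b)))

    # Adding a single point is just merging with a one-point "hull":
    # new_hull = merge_hulls(current_hull, [(x, y)])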

Distance metrics for clustering non-normally distributed data

The dataset I want to cluster consists of ~1000 samples and 10 features, which have different scales and ranges (negative, positive, both). Using scipy.stats.normaltest() I found that none of the features are normally-distributed (all p-values < 1e-4, small enough to reject the null hypothesis that the data are taken from a normal distribution). But all of the distance measures that I'm aware of assume normally-distributed data (I was using Mahalanobis until I realized how non-uniform the data was). What distance measures would one use in this situation? Or is this where one simply has to normalize every feature and hope that that doesn't introduce bias?
Why do you think all distances assume normally-distributed data (which, by the way, is not the same as uniformly-distributed)?
Consider Euclidean distance. In many physical applications this distance makes perfect sense, because it is "as the crow flies". Manhattan distance makes a lot of sense when movement is constrained to two axes that cannot be used at the same time. Both are completely appropriate for non-normally distributed data.
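A tiny illustration of that point: Euclidean and Manhattan distances make no distributional assumption at all; what they are sensitive to is scale, which is why features with very different ranges are usually standardized first. This is only a sketch, and the z-scoring step is one common choice, not the only one:

    import numpy as np
    from scipy.spatial.distance import cdist

    rng = np.random.default_rng(0)
    # Two clearly non-normal features on very different scales.
    X = rng.exponential(scale=[1.0, 100.0], size=(1000, 2))

    X_std = (X - X.mean(axis=0)) / X.std(axis=0)   # per-feature z-scoring

    d_euclidean = cdist(X_std[:5], X_std[:5], metric="euclidean")
    d_manhattan = cdist(X_std[:5], X_std[:5], metric="cityblock")
    print(d_euclidean.round(2))
    print(d_manhattan.round(2))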

Which triangulation algorithm is used by OpenGL?

Which triangulation algorithm is the fastest among the existing ones? Does one exist with complexity O(N)? Which algorithm is used by OpenGL? I implemented an algorithm with a dynamic cache for triangle search, but it is slow.
You can use an incremental algorithm and a space-filling curve ("monster curve") to presort the points: convert the x and y coordinates to binary, interleave the bits into a single key, and sort the points by that key (a Z-order/Morton key). I think it can work with other triangulations, but I recommend trying it with Bowyer-Watson. You can look into the CGAL source code; it uses a space-filling curve together with Bowyer-Watson.
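A rough Python sketch of the presorting step only (a Z-order/Morton key on a 16-bit grid; the grid resolution is an assumption, and the incremental triangulation itself is left out):

    # Presort points along a Z-order (Morton) curve so that successive
    # insertions into an incremental triangulation (e.g. Bowyer-Watson)
    # stay spatially close to each other.
    import numpy as np

    def interleave_bits(v):
        """Spread the lower 16 bits of v so there is a zero bit between each bit."""
        v = (v | (v << 8)) & 0x00FF00FF
        v = (v | (v << 4)) & 0x0F0F0F0F
        v = (v | (v << 2)) & 0x33333333
        v = (v | (v << 1)) & 0x55555555
        return v

    def morton_key(ix, iy):
        """Morton (Z-order) key of 16-bit integer grid coordinates."""
        return interleave_bits(ix) | (interleave_bits(iy) << 1)

    def zorder_sort(points):
        pts = np.asarray(points, dtype=float)
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        span = np.where(hi > lo, hi - lo, 1.0)             # avoid division by zero
        grid = ((pts - lo) / span * 65535).astype(np.uint32)
        keys = [morton_key(int(x), int(y)) for x, y in grid]
        return pts[np.argsort(keys)]

    # pts_sorted = zorder_sort(pts)   # then insert pts_sorted one by one into
    #                                 # your incremental Bowyer-Watson triangulation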

How to make a smooth plot in MATLAB

I have about 100 data points which mostly satisfy a certain function (but some points are off). I would like to plot all those points as a smooth curve, but the problem is that the points are not uniformly distributed. Is there any way to get a smooth curve? I am thinking of interpolating some points in between, but the only way that comes to mind is to linearly insert some artificial points between two data points. That will show a pretty weird shape (with some sharp corners). Any better idea? Thanks.
If you know more or less what the actual curve should be, you can try to fit that curve to your points (e.g. using polyfit). Depending on how many points are off and how far, you can get by with least squares regression (which is fairly easy to get working). If you have too many outliers (or they are much too large/small), you can also try robust regression (e.g. least absolute deviation fitting) using the robustfit function.
If you can manually determine the outliers, you can also fit a curve through the other points to get better results or even use interpolation methods (e.g. interp1 in MATLAB) on those points to get a smoother curve.
If you know which function describes your data, robust fitting (using, e.g. ROBUSTFIT, or the new convenient functions LINEARMODEL and NONLINEARMODEL with the robust option) is a good way to go if there are outliers in your data.
If you don't know the function that describes your data, but want a smooth trendline that is little affected by outliers, SMOOTHN from the File Exchange does an excellent job in my experience.
Have you looked at the use of smoothing splines? They are like interpolating splines, but with the knot points and coefficients chosen to minimise a least-squares error function. There is an excellent implementation available from MATLAB Central which I have used successfully.
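In case you end up doing this step outside MATLAB, the same smoothing-spline idea is available in Python via scipy.interpolate.UnivariateSpline; a minimal sketch (the smoothing factor s is an illustrative value you would tune, and large outliers may still need the robust treatment mentioned above):

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(0)
    x = np.sort(rng.random(100)) * 10               # non-uniformly spaced sample points
    y = np.sin(x) + 0.1 * rng.standard_normal(100)  # noisy data roughly on a known curve
    y[::17] += 1.5                                  # a few points that are clearly off

    spline = UnivariateSpline(x, y, s=5.0)          # s controls the smoothing strength
    x_fine = np.linspace(x.min(), x.max(), 500)
    y_smooth = spline(x_fine)                       # evaluate the smooth curve densely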

Smoothing of series data

I need to smooth this kind of plot better. I've already used a moving average (10 points) to get this plot, but it's not yet perfect. I want to remove all these little peaks due to noise; I need to consider only the bigger ones because I'm counting the number of beats from a sensor.
(i.e.: in the first 30 seconds I should have just one peak instead of several successive little peaks)
I thought of using a cubic spline, but it isn't simple to implement in C and it's going to take almost 1-2 weeks of work.
Is there a simpler method/algorithm to use to achieve this? I'm working on this project for the iOS (iPhone) environment.
(plot screenshot: http://img15.imageshack.us/img15/1929/schermata022455973alle1o.png)
The answer to your question depends a lot on the underlying data. Is the jaggedness really noise, or is the data genuinely jagged?
Strategies you could try:
window the data and take the median/mean in each window -- e.g. each window spanning 50 units on your x-axis (see the sketch after this list)
sample the data
nonlinear least squares curve fit (you'd probably have to use a C++ library for that; here is an open-source version you could port: http://www.ics.forth.gr/~lourakis/levmar/)
some sort of naive Bézier smoothing should be pretty easy.
All of these methods have ramifications and none are without problems. Good luck.
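A rough Python/scipy sketch of the first option (windowed median) plus a naive peak count, just to show the shape of the approach; the window length and peak constraints below are placeholders to tune to your sample rate, and the same logic is only a few lines of plain C for iOS:

    import numpy as np
    from scipy.signal import medfilt, find_peaks

    def count_beats(samples, window=51, min_distance=50, min_height=None):
        """Median-smooth the signal, then count peaks on the smoothed version.
        window       -- odd median window length, in samples (placeholder value)
        min_distance -- minimum number of samples between two beats (placeholder value)
        min_height   -- optional amplitude threshold for a peak
        """
        samples = np.asarray(samples, dtype=float)
        smooth = medfilt(samples, kernel_size=window)     # kernel_size must be odd
        peaks, _ = find_peaks(smooth, distance=min_distance, height=min_height)
        return len(peaks), smooth, peaks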