Which triangulation algorithm is used by OpenGL?

Which triangulation algorithm is the fastest among the existing ones? Does one exist with complexity O(N)? Which algorithm is used by OpenGL? I implemented an incremental algorithm with a dynamic cache for the triangle search, but it is slow.

You can use an incremental algorithm and presort the points along a space-filling curve: quantize the x and y coordinates, interleave their bits into a single Morton (Z-order) key, and sort the points by that key (see the sketch below). I think this works with other incremental triangulations as well, but I recommend trying it with Bowyer-Watson. You can also look at the CGAL source code: it presorts along a space-filling curve and uses Bowyer-Watson-style incremental insertion.
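For illustration, here is a minimal Python sketch of the presorting step, assuming coordinates already normalized to [0, 1) and a 16-bit Morton (Z-order) key. CGAL actually sorts along a Hilbert curve; Z-order is a simpler variant of the same locality idea.

```python
# Sketch: presort points along a Z-order (Morton) curve before incremental
# (Bowyer-Watson) insertion. Assumes coordinates normalized to [0, 1);
# 16 bits per coordinate is an arbitrary resolution choice.

def spread_bits(v: int) -> int:
    """Spread the lower 16 bits of v so a zero bit sits between each pair."""
    v &= 0xFFFF
    v = (v | (v << 8)) & 0x00FF00FF
    v = (v | (v << 4)) & 0x0F0F0F0F
    v = (v | (v << 2)) & 0x33333333
    v = (v | (v << 1)) & 0x55555555
    return v

def morton_key(x: float, y: float) -> int:
    """Quantize x and y, then interleave their bits into one Z-order index."""
    return (spread_bits(int(x * 0xFFFF)) << 1) | spread_bits(int(y * 0xFFFF))

points = [(0.1, 0.7), (0.8, 0.2), (0.5, 0.5), (0.9, 0.9)]
points.sort(key=lambda p: morton_key(*p))
# Inserting the points in this order keeps consecutive insertions spatially
# close, so the walk to the containing triangle stays short and a dynamic
# triangle cache stays warm.
```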

Fast libraries for merging two tangled convex hulls, accessible from Python?

I am looking for something that is incremental (with accessible state), so that likely means some merge method is exposed.
In general, I want to start with a set of points that has a ConvexHull calculated and add a point to it (a single point trivially has itself as its convex hull). I was looking for alternatives to Bowyer-Watson via convex hull merges; I am not sure whether this is a bad idea, or whether this should really be a CS question, except that it is about finding a real solution in the Python ecosystem.
I see some related content here.
Merging two tangled convex hulls
And Qhull (scipy's Delaunay and ConvexHull use it) has a lot of options I do not yet understand:
http://www.qhull.org/html/qh-optq.htm
You can use Andrew's modification of the Graham scan algorithm.
Here is a reference to some short Python code implementing it.
What makes it suited to your needs is that after the points are sorted in xy-order, the upper and lower hulls are computed in linear time. Since you already have the convex hulls, the xy-sorting of the hull points also takes linear time (e.g., reverse the lower hulls and merge four already-sorted lists). The rest of the algorithm then runs in linear time in the number of points on the convex hulls, which may well be much smaller than the original number of points.
All the functionality for this implementation is in the code referenced above; for the merge you can use the code from this SO answer or implement your own. A sketch of the whole construction follows below.
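For illustration, here is a minimal Python sketch of the monotone chain construction under the assumptions above. The sample hull vertex lists are placeholders, and the call to sorted() can be replaced by the linear merge of the pre-sorted hull chains described earlier.

```python
# Sketch: Andrew's monotone chain. After the points are in xy-order, the two
# half-hull passes below run in linear time.

def cross(o, a, b):
    """Cross product of vectors OA and OB; positive means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    points = sorted(set(points))  # xy-order; replace with a linear merge of pre-sorted hulls
    if len(points) <= 2:
        return points
    lower = []
    for p in points:  # lower hull, left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(points):  # upper hull, right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # endpoints are shared, drop duplicates

# Merging two (possibly tangled) hulls: combine their vertices and rebuild.
hull_a = [(0, 0), (2, 0), (1, 2)]
hull_b = [(1, 1), (3, 1), (2, 3)]
print(convex_hull(hull_a + hull_b))
```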

"Constrained" triangulation

Note: this question applies to all languages.
I am using triangulation in MATLAB for a Monte Carlo simulation of a physical surface. The triangulation represents a tethered-sphere network, and I have a particular constraint on it: the tether lengths, represented by the edges, must lie within a certain range. Note that this is not the typical constraint used in constrained triangulation. How can I triangulate a surface such that every edge length lies between a given minimum and maximum?
If there is an easier way to do this in another language, I am also willing to consider that.

Constrained Delaunay triangulation vs. ear clipping

I'm not an expert on triangulation questions, so I decided to ask. :)
There is a simple ear clipping algorithm with complexity O(n^2),
and there is a constrained Delaunay algorithm with complexity O(n log n).
So the question is: is the Delaunay algorithm actually faster than ear clipping? I ask because I understand that if the constant factor is significantly larger for Delaunay, it may be slower after all.
P.S. http://code.google.com/p/poly2tri/ - Delaunay,
http://www.geometrictools.com/Documentation/TriangulationByEarClipping.pdf - Ear clipping
P.P.S. By the way, is constrained Delaunay the fastest one?
A sweepline Delaunay algorithm can be O(n log n), not O(log n).
With a small number of points, an implementation with worst-case O(n^2) can be faster than an O(n log n) implementation.
One reason is that the O(n log n) algorithm may have to maintain a hierarchical data structure; constantly adding and removing points and rebalancing a tree can be costly and slow the algorithm down.
In real-world settings you can observe near-linear running time for Delaunay triangulations. At least for C++ there are libraries that triangulate more than a million points per second:
www.cgal.org
http://www.geom.at/fade2d/html/
http://www.cs.cmu.edu/~quake/triangle.html
You can also try to lift the points onto a paraboloid and project the lower convex hull of the lifted points back to the 2D plane; the result is the Delaunay triangulation: https://cs.stackexchange.com/questions/2400/brute-force-delaunay-triangulation-algorithm-complexity
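A small Python sketch of this lifting trick using scipy's ConvexHull; the random points are placeholder data.

```python
# Sketch: map each (x, y) to (x, y, x^2 + y^2) on the paraboloid, take the 3D
# convex hull, and keep the downward-facing facets; projected back to the
# plane, those facets form the Delaunay triangulation.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
pts = rng.random((20, 2))
lifted = np.column_stack([pts, (pts ** 2).sum(axis=1)])

hull = ConvexHull(lifted)
# A facet belongs to the lower hull when its outward normal points downward
# (negative z component in the facet's plane equation).
lower_facets = hull.simplices[hull.equations[:, 2] < 0]
print(lower_facets)  # each row is a Delaunay triangle, as indices into pts
```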

How to make a smooth plot in MATLAB

I have about 100 data points which mostly satisfy a certain function (but some points are off). I would like to plot all those points as a smooth curve, but the problem is that the points are not uniformly distributed. Is there any way to get a smooth curve? I am thinking of interpolating some points in between, but the only way that comes to mind is to linearly insert artificial points between pairs of data points, and that produces a pretty odd shape (with sharp corners). Any better ideas? Thanks.
If you know more or less what the actual curve should be, you can try to fit that curve to your points (e.g. using polyfit). Depending on how many points are off and how far, you can get by with least squares regression (which is fairly easy to get working). If you have too many outliers (or they are much too large/small), you can also try robust regression (e.g. least absolute deviation fitting) using the robustfit function.
If you can manually determine the outliers, you can also fit a curve through the other points to get better results or even use interpolation methods (e.g. interp1 in MATLAB) on those points to get a smoother curve.
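For illustration, the same fit-then-evaluate idea sketched in Python; numpy's polyfit/polyval mirror the MATLAB functions of the same names, and the quadratic model and synthetic data here are assumptions.

```python
# Sketch: least-squares polynomial fit, then evaluate on a dense grid so the
# plotted curve is smooth even though the samples are non-uniformly spaced.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 10.0, 100))                       # non-uniform sample locations
y = 0.5 * x**2 - 2.0 * x + rng.normal(scale=2.0, size=x.size)  # mostly quadratic, some noise

coeffs = np.polyfit(x, y, deg=2)              # least-squares fit (cf. MATLAB polyfit)
x_dense = np.linspace(x.min(), x.max(), 500)  # dense grid between the data limits
y_dense = np.polyval(coeffs, x_dense)         # evaluate the fit (cf. MATLAB polyval)

plt.plot(x, y, ".", label="data")
plt.plot(x_dense, y_dense, "-", label="degree-2 fit")
plt.legend()
plt.show()
```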
If you know which function describes your data, robust fitting (using, e.g. ROBUSTFIT, or the new convenient functions LINEARMODEL and NONLINEARMODEL with the robust option) is a good way to go if there are outliers in your data.
If you don't know the function that describes your data, but want a smooth trendline that is little affected by outliers, SMOOTHN from the File Exchange does an excellent job in my experience.
Have you looked at smoothing splines? They are like interpolating splines, but with the knot points and coefficients chosen to minimise a least-squares error function. There is an excellent implementation available from MATLAB Central which I have used successfully.
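As a hedged sketch of the same idea outside MATLAB, scipy's UnivariateSpline is a smoothing spline of this kind; the smoothing factor s and the synthetic data below are assumptions.

```python
# Sketch: smoothing spline; s trades closeness to the data against roughness
# (s=0 interpolates exactly, larger s smooths away more of the noise).
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0.0, 2.0 * np.pi, 100))  # non-uniform, strictly increasing
y = np.sin(x) + 0.1 * rng.normal(size=x.size)

spline = UnivariateSpline(x, y, s=1.0)           # knots/coefficients chosen by least squares
x_dense = np.linspace(x.min(), x.max(), 500)
y_smooth = spline(x_dense)                       # evaluate the smooth curve
```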

How to do clustering with similarity as a measure?

I read about spherical k-means, but I did not come across an implementation. To be clear, similarity is simply the dot product of two document unit vectors. I have read that standard k-means uses distance as its measure. Is the distance being referred to the Euclidean vector distance, as in coordinate geometry: sqrt((x2-x1)^2 + (y2-y1)^2)?
There are more clustering methods than k-means. The problem with k-means is not so much that it is built on Euclidean distance, but that the mean must reduce the distances for the algorithm to converge.
However, there are plenty of other clustering algorithms that do not need to compute a mean or rely on the triangle inequality. If you read the Wikipedia article on DBSCAN, it also mentions a version called GDBSCAN, Generalized DBSCAN. You should be able to plug your similarity function straight into GDBSCAN. Most likely you can also just convert similarity into a distance, e.g. 1 - similarity (the cosine distance for unit vectors), and use that as the distance function, unless the algorithm requires the triangle inequality. This trick should work with DBSCAN and OPTICS, for example, and probably also with hierarchical clustering, k-medians, and k-medoids (PAM). A sketch follows below.
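A minimal Python sketch of this trick, using scikit-learn's DBSCAN with a precomputed distance matrix as a stand-in for GDBSCAN; the random unit vectors, eps, and min_samples are assumptions to tune for real data.

```python
# Sketch: turn dot-product similarity on unit vectors into a cosine distance
# and feed it to a density-based clusterer that accepts precomputed distances.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(4)
docs = rng.random((50, 20))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)  # unit document vectors

similarity = docs @ docs.T                           # pairwise dot products
distance = np.clip(1.0 - similarity, 0.0, None)      # cosine distance, clipped for float error

labels = DBSCAN(eps=0.3, min_samples=3, metric="precomputed").fit_predict(distance)
print(labels)                                        # cluster ids; -1 marks noise
```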