*Note this question applies to all languages
I am using triangulation in MATLAB for a Monte Carlo simulation of a physical surface. The triangulation represents a tethered sphere network, and I have a particular constraint on the triangulation: the lengths of the tethers, represented by the edges, must lie within a certain range. Note that this is not the typical constraint used for constrained triangulation. How can I triangulate a surface such that the edges have lengths between a minimum and a maximum length?
If there is an easier way to do this in another language, I am also willing to consider that.
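For reference, checking whether a candidate mesh satisfies the length constraint is straightforward in MATLAB. A minimal sketch, assuming you already have a candidate surface mesh with connectivity list T and vertex coordinates P; the bounds Lmin and Lmax are placeholders for the tether-length limits:

```matlab
% Sketch: test whether all edges of a candidate surface mesh satisfy the
% tether-length constraint. T is the M-by-3 connectivity list, P the N-by-3
% vertex coordinates; Lmin and Lmax are placeholder bounds.
TR  = triangulation(T, P);
E   = edges(TR);                                      % unique edges, one row per edge
len = sqrt(sum((P(E(:,1),:) - P(E(:,2),:)).^2, 2));   % Euclidean length of each edge
ok  = all(len >= Lmin & len <= Lmax);                 % true if the constraint holds
bad = E(len < Lmin | len > Lmax, :);                  % offending edges, if any
```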
I was developing an analysis of the performance of different edge detectors (Canny, Sobel, and Roberts). MATLAB gives us the function edge, which takes a threshold parameter as one of its inputs. I gave the same threshold (=0.1) to all of them (MATLAB automatically generated the low threshold for Canny's detector). The result, given the code that I wrote, was:
(Ignore the LoG detector; I think I can interpret those results.)
After that, I tested those same filters but with a different threshold (=0.8, which gave a 0.32 low-threshold for Canny's detector). However, now only Canny detects boundaries that are associated with stronger edges (stronger gradients associated with boundaries that separate structures with higher contrast):
(Image: results for the higher threshold; some methods find no edges.)
I can't understand those results: if Canny detects stronger boundaries, and Sobel is more sensitive to stronger boundaries (as we saw for threshold = 0.1, where it detects almost only abrupt changes of intensity), then why does Sobel not seem to compute an estimate of the gradient that is comparable to the one given by Canny?
That raises another question: what does the threshold value for Canny, Sobel, and Roberts really mean? I would guess it is a value of the gradient magnitude, somehow normalized, because it has to belong to [0,1] (which I don't understand either: normalized relative to what?).
Different edge detectors have no reason to require equal thresholds, because they respond differently to different types of edges (in terms of contrast, sharpness, and noise), and no single threshold will produce the same edge set across detectors.
In addition, the formulas can have different "scaling factors", depending on the implementation. The best you can hope for is that if you pick thresholds that suit you for different methods on the same image, the thresholds will vary proportionally on other images.
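As a concrete illustration of why the thresholds are not comparable, here is a hedged sketch of the kind of comparison described in the question, using MATLAB's edge function on an arbitrary grayscale image I (the threshold values are the ones mentioned above):

```matlab
% Sketch: the same nominal threshold means different things to different
% detectors. I is any grayscale image, e.g. I = imread('cameraman.tif').
t = 0.1;                               % try 0.8 as well, as in the question
bwCanny   = edge(I, 'canny',   t);     % Canny derives its own low threshold (0.4*t)
bwSobel   = edge(I, 'sobel',   t);     % threshold applied to the Sobel gradient magnitude
bwRoberts = edge(I, 'roberts', t);     % threshold applied to the Roberts gradient magnitude
figure;
subplot(1,3,1), imshow(bwCanny),   title('Canny')
subplot(1,3,2), imshow(bwSobel),   title('Sobel')
subplot(1,3,3), imshow(bwRoberts), title('Roberts')
```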
I have a greyscale image, represented by a histogram below (x and y axes are pixels, z axis is pixel intensity).
Each cluster of bars represents an object, with the local maxima fairly approximating the centroid of the object. My goal is to find the Full Width Half Max of each object – so I'm roughly approximating each object as a Gaussian distribution.
How can I detect each cluster individually? I understand how to mathematically calculate the FWHM, but I'm not sure how to detect each cluster based on its (roughly) Gaussian features. (e.g., in the example below I would want to detect 6 clusters. One can see a small cluster in the middle but its amplitude is so small that I am okay with missing it).
I appreciate any advice - and efficiency is not a major issue, so I can implement relatively expensive solutions.
To find the centers of each of these groupings you could use an A*-style search, or a similar greedy local-search algorithm.
It will find its way to the maximum of a grouping. The issue after that is that you won't know whether you are at a local maximum (which in your scenario is likely). After your current search has converged at the highest point and you have calculated the FWHM for that area, you could set all the nodes your A* has traversed to 0 (or mark each node as visited so that it is not visited again), and start the A* algorithm again, until all nodes have been seen and all groupings found.
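A hedged sketch of that idea in MATLAB, swapping the A* traversal for a simpler "take the highest remaining peak, grab its above-half-maximum region, then zero it out" loop. It assumes the data is a 2-D array Z and that the Image Processing Toolbox is available; minPeak is an arbitrary cutoff for ignoring tiny clusters:

```matlab
% Sketch: iteratively peel off peaks and estimate each cluster's FWHM extent.
% Z is the 2-D intensity array; minPeak is an assumed cutoff for tiny clusters.
Zwork   = Z;
minPeak = 0.1 * max(Z(:));
clusters = struct('peak', {}, 'fwhmRows', {}, 'fwhmCols', {});
while true
    [peakVal, idx] = max(Zwork(:));
    if peakVal < minPeak, break; end
    [pr, pc] = ind2sub(size(Zwork), idx);
    mask   = Zwork >= peakVal / 2;           % everything above half maximum
    comps  = bwconncomp(mask);               % split into connected regions
    lbl    = labelmatrix(comps);
    region = (lbl == lbl(pr, pc));           % the region containing this peak
    [rows, cols] = find(region);
    clusters(end+1) = struct('peak', [pr pc], ...
        'fwhmRows', max(rows) - min(rows) + 1, ...
        'fwhmCols', max(cols) - min(cols) + 1); %#ok<AGROW>
    Zwork(region) = 0;                       % "mark visited", as suggested above
end
```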
Which triangulation algorithm is the fastest among existing ones? Does one exist with complexity O(N)? Which algorithm does OpenGL use? I implemented an algorithm with a dynamic cache for the triangle search, but it is slow.
You can use an incremental algorithm and a space-filling ("monster") curve to presort the points: translate the x and y coordinates to binary, concatenate (interleave) the bits, and sort the points by the resulting key. I think it can work with other triangulations as well, but I recommend trying it with Bowyer-Watson. You can look into the CGAL source code; it uses a space-filling curve together with Bowyer-Watson.
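If it helps, here is a hedged sketch of the bit-interleaving step in MATLAB (the function name mortonKey is made up, and it assumes the coordinates have already been scaled to non-negative 16-bit integers):

```matlab
% Sketch (mortonKey.m): Z-order / Morton key used to presort points before
% incremental insertion. x and y are assumed to be non-negative integers < 2^16.
function key = mortonKey(x, y)
    key = uint32(0);
    for b = 0:15
        key = bitor(key, bitshift(bitand(uint32(x), bitshift(uint32(1), b)), b));     % bit b of x -> bit 2b
        key = bitor(key, bitshift(bitand(uint32(y), bitshift(uint32(1), b)), b + 1)); % bit b of y -> bit 2b+1
    end
end

% Usage: keys = arrayfun(@(i) mortonKey(xi(i), yi(i)), 1:numel(xi));
%        [~, order] = sort(keys);  pts = pts(order, :);
```

Sorting the points by this key and inserting them in that order keeps consecutive insertions spatially close, which is what speeds up the point location step.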
I was trying to implement Shape Context (in MATLAB). I was trying to achieve rotation invariance.
The general approach for shape context is to compute distances and angles between each pair of interest points in a given image. You then bin these into a histogram based on whether the calculated values fall into certain ranges. You do this for both a standard and a test image. To match the two images, you use a chi-square function to estimate a "cost" between each possible pair of points in the two sets of histograms. Finally, you use an optimization technique such as the Hungarian algorithm to find an optimal assignment of points and then sum up the total cost, which will be lower for good matches.
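For reference, the chi-square cost mentioned above is small enough to show inline. A sketch in MATLAB, assuming g and h are two shape-context histograms flattened to vectors and normalized to sum to 1:

```matlab
% Sketch: chi-square cost between two normalized shape-context histograms.
chi2cost = 0.5 * sum((g - h).^2 ./ (g + h + eps));   % eps guards against empty bins (0/0)
```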
I've checked several websites and papers, and they say that to make the above approach rotation invariant, you need to calculate each angle between each pair of points using the tangent vector as the x-axis (e.g., http://www.cs.berkeley.edu/~malik/papers/BMP-shape.pdf, page 513).
What exactly does this mean? No one seems to explain it clearly. Also, from which of each pair of points would you get the tangent vector - would you average the two?
A couple of other people suggested I could use gradients (which are easy to find in MATLAB) as a substitute for the tangent vectors, though I do not seem to get reasonable cost scores with this. Is it feasible to do this with gradients?
Should gradient work for this dominant orientation?
What do you mean by ordering the bins with respect to that orientation? I was originally going to have a square matrix of bins - with the radius between two given points determining the column in the matrix and the calculated angle between two given points determining the row.
Thank you for your insight.
One way of achieving (somewhat) rotation invariance is to make sure that wherever you compute your image descriptor, its orientation (that is, the ordering of the bins) is (roughly) the same. To achieve that, you pick the dominant orientation at the point where you extract each descriptor and order the bins with respect to that orientation. This way you can compare different descriptors bin-to-bin, knowing that their ordering is the same: with respect to their local dominant orientation.
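A hedged sketch of what "ordering the bins with respect to that orientation" can look like in practice: measure every angle relative to the dominant orientation theta0 before binning, so a rotated copy of the shape fills the same bins. The variable names (pts, p, theta0) and bin counts are assumptions, not from the original post:

```matlab
% Sketch: one shape-context histogram with angles measured relative to the
% dominant orientation theta0 at the reference point p. pts is N-by-2 and is
% assumed not to contain p itself.
nAngleBins  = 12;
nRadiusBins = 5;
d    = pts - p;                                     % offsets from the reference point
r    = sqrt(sum(d.^2, 2));
ang  = mod(atan2(d(:,2), d(:,1)) - theta0, 2*pi);   % rotate so theta0 becomes "angle zero"
aBin = min(floor(ang / (2*pi/nAngleBins)) + 1, nAngleBins);
rBin = min(floor(nRadiusBins * log(1 + r) / log(1 + max(r))) + 1, nRadiusBins);  % log-spaced radii
H    = accumarray([rBin aBin], 1, [nRadiusBins nAngleBins]);  % rows: radius, cols: relative angle
```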
From my personal experience (which is not extensive), these methods look better on paper than in practice.
I am new to neural networks in MATLAB. I want to create a neural network using a MATLAB simulation.
This MATLAB simulation uses pattern recognition.
I am running on a Windows XP platform.
For example, I have a sets of waveforms of circular shape.
I have extracted out the poles.
These poles will teach my neural network that the shape is circular, so whenever I input another, slightly different circular waveform, the neural network is able to distinguish between the shapes.
Currently, I have extracted the poles of these 3 shapes: cylinder, circle, and rectangle.
But I am clueless of how I should go about creating my Neural Network.
I'd recommend using a SOM (self-organizing map) for pattern recognition, since it's really robust. There's also a SOM Toolbox for MATLAB you might be interested in. However, to make it learn waves while neglecting their offsets, you'd need to make some changes to the "similarity function". These changes will increase the SOM's training time quite a lot, but if that's not a problem, keep reading.
For the SOM you'll have to sample your waves into constant-sized vectors, say:
sin x -> sin_vector = (a1, a2, a3, ..., aN)
cos x -> cos_vector = (b1, b2, b3, ..., bN)
Usually the similarity of "SOM vectors" is calculated with the Euclidean distance. The Euclidean distance between those two vectors is huge, since they have a different offset; in your case they should be considered similar, i.e., the distance should be small. So, if you don't sample all the similar waves from the same starting point, they will be classified into different classes. That is probably a problem. But! The similarity of vectors in a SOM is calculated in order to find the BMU (best-matching unit) in the map and to pull the BMU's and its neighborhood's vectors towards the values of the given sample. So all you need to change is the way those vectors are compared and the way the vectors' values are pulled towards the sample, so that both are "offset-tolerant".
A slow but working solution is to first find the best offset index for each vector. The best offset index is the one that produces the smallest Euclidean distance to the sample. The smallest distance calculated over the nodes of the net then determines the BMU. The BMU's and its neighborhood's vectors are then pulled towards the given sample using the offset index calculated for each node just before. Everything else should work out of the box.
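A hedged sketch of that offset search in MATLAB (the function name offsetTolerantDist and the brute-force loop over every circular shift are my own illustration of the idea, not an existing SOM Toolbox routine):

```matlab
% Sketch: compare a sampled wave against a map node under every circular
% shift and keep the best one; both inputs are length-N vectors.
function [bestDist, bestShift] = offsetTolerantDist(sample, node)
    N = numel(sample);
    bestDist  = inf;
    bestShift = 0;
    for s = 0:N-1
        d = norm(circshift(sample(:), s) - node(:));  % plain Euclidean distance at this shift
        if d < bestDist
            bestDist  = d;
            bestShift = s;   % remember the offset to reuse when pulling the node towards the sample
        end
    end
end
```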
This solution is relatively slow but should work great. I'd recommend studying the concept of the SOM thoroughly and then reading this post (and the angry comments) again :)
PLEASE comment if you know of a mathematical solution that would be better than the previous one!
You can try MATLAB's neural network pattern recognition tool, nprtool, as it is specialized for training and testing neural networks for pattern recognition.
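If you would rather script it than use the GUI, here is a minimal sketch using patternnet, which is the same kind of network the nprtool GUI creates. It assumes the pole features are stored column-wise in X and the three shape classes are one-hot encoded column-wise in T (both names are placeholders):

```matlab
% Sketch: a small pattern-recognition network trained on the extracted poles.
% X is (numFeatures x numSamples); T is (3 x numSamples) one-hot class targets.
net = patternnet(10);            % one hidden layer with 10 neurons (arbitrary size)
net = train(net, X, T);          % training/validation/test split is handled internally
scores = net(X);                 % class scores for each sample
[~, predictedClass] = max(scores, [], 1);   % 1 = cylinder, 2 = circle, 3 = rectangle (assumed order)
```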