I would like to measure geodesic distances interactively in MeshLab. I couldn't find how to do this in the software, nor by looking online. I see that there is a filter "Colorize by geodesic distance from a given point", but it only colors the mesh and does not give the distance values themselves.
Other quantities I'm also interested in measuring are the surface area and volume of meshes. I couldn't find functions to compute those either.
I'm using MeshLab 2016.12 on Ubuntu.
"Colorize by geodesic distance from a given point" saves the actual distance values in the multipurpose scalar "vertex quality" (vq) field. If you save the resulting mesh as a ply file you can view the actual values (for example, save as an ascii ply file and open with a text editor). Not really interactive, but possible.
Volume and surface area can be found with the "Compute Geometric Measures" filter, as well as many other parameters.
Context: I want to create an interactive heatmap with areas separated by ZIP code. I've found no way of displaying it directly (i.e. using Google Maps or OSM), so I want to create curves or lines separating those areas and visualize them on a map.
I have a set of points, represented by their coordinates and their corresponding class (ZIP code). I want to get a curve separating the classes. The problem is that these points are not linearly separable.
I tried to use softmax regression, but that doesn't work well with non-linearly separable classes. The only methods I know of that can separate classes non-linearly are nearest neighbors and neural networks, but such classifiers only classify; they don't tell me the borders between the classes.
Is there a way to get the borders somehow?
If you have a dense cloud of known points within each ZIP code with coordinates [latitude, longitude, ZIP code], using machine learning to find the boundary enclosing those points sounds like overkill.
You could probably get a good approximation of the boundary by using computational geometry, e.g. finding the 2D convex hull of each ZIP code's set of points using the MATLAB convhull function:
K = convhull(X,Y)
The result K is a vector of indices into the input points X, Y, tracing the convex hull in order; these can be used to draw a polygon enclosing the points.
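A hedged example of how this could be used for one ZIP code, assuming lon and lat are vectors with the coordinates of that ZIP code's points (hypothetical variable names):

X = lon; Y = lat;
K = convhull(X, Y);                  % indices of the hull vertices, in order
plot(X, Y, 'b.', X(K), Y(K), 'r-');  % points and their enclosing polygon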
The only complication would be what coordinate system to work in; you might need to do a bit of work going between (lat, lon) and map (x, y) coordinates. If you do not have the MATLAB Mapping Toolbox, you could look at the third-party library M_Map (see the M_Map home page), which offers some of the same functionality.
Edit: If the cloud of points for a ZIP code has a bounding region that is non-convex, you may need a more general computational geometry technique to find a better approximation of the bounding region. Performing a Voronoi tessellation of the region, as suggested in the comments, is one such possibility.
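A rough sketch of the Voronoi idea, colouring each point's Voronoi cell by its ZIP class so that the edges between differently coloured cells approximate the borders. It assumes pts is an N-by-2 matrix of point coordinates and zipClass an N-by-1 numeric class vector (hypothetical names); unbounded cells are simply skipped:

dt = delaunayTriangulation(pts(:,1), pts(:,2));
[V, R] = voronoiDiagram(dt);          % V: Voronoi vertices, R: vertex indices of each point's cell
classes = unique(zipClass);
cmap = lines(numel(classes));
figure; hold on;
for i = 1:numel(R)
    cellI = R{i};                     % Voronoi cell of point i
    if any(cellI == 1), continue; end % vertex 1 is the point at infinity (unbounded cell)
    k = find(classes == zipClass(i));
    patch(V(cellI, 1), V(cellI, 2), cmap(k, :), 'EdgeColor', 'k');
end
hold off;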
As part of an artificial neural network MATLAB code, I want to find the nearest points of two convex polygons.
I saw
dsearchn(X,T,XI)
command's description here, but that finds the closest points between two sets of points, and polygons (convex ones included) contain infinitely many points.
So can you suggest any way/idea?
Note: I'm using MATLAB 2014a. I have the coordinates of each polygon's vertices.
If you are not happy with what dsearchn provides, then, if I were you, I would do one of the following two things:
1. Find nearest neighbours on the vertices (for example, which vertex of polygon A is the NN of a given vertex of polygon B); see the sketch below.
2. Pick a random point inside polygon A (you may want to compute the convex hull of A, but you may skip that and take into account only the vertices you already know). That random point is the query. Find an NN of that point among the vertices of polygon B.
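For the first option, a minimal sketch, assuming A and B are N-by-2 and M-by-2 matrices holding the two polygons' vertex coordinates (hypothetical names):

[idx, d] = dsearchn(B, A);   % for each vertex of A, its nearest vertex in B and the distance
[dmin, iA] = min(d);         % overall closest pair
iB = idx(iA);                % A(iA,:) and B(iB,:) are the nearest vertices of the two polygons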
You may want to ask in Software recommendations for more.
Edit:
Another approach is this:
Create a representative dataset of polygon A. Set the size of the dataset yourself and fill it with sample points that lie inside the polygon, chosen uniformly at random.
Then take a point of polygon B (either a vertex or a random point inside polygon B) and that's the query point, for which you will seek Nearest Neighbour(s) inside the representative dataset of polygon A.
Of course that's just an approximation, but I can't think of something else now.
Notice that you can of course do the same for polygon B.
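A rough sketch of that sampling idea, assuming Ax, Ay and Bx, By are column vectors with the vertex coordinates of polygons A and B (hypothetical names):

n = 1000;                                    % size of the representative dataset, chosen by you
P = zeros(0, 2);
while size(P, 1) < n
    xr = min(Ax) + (max(Ax) - min(Ax)) * rand(n, 1);
    yr = min(Ay) + (max(Ay) - min(Ay)) * rand(n, 1);
    in = inpolygon(xr, yr, Ax, Ay);          % keep only samples that fall inside polygon A
    P = [P; xr(in), yr(in)];
end
P = P(1:n, :);
% query with a vertex of B: find its nearest neighbour in the representative set of A
[k, d] = dsearchn(P, [Bx(1), By(1)]);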
With this file from the File Exchange, I found the solution.
With a small code modification, I drew the perpendicular bisector I wanted. Of course, this approach is time-consuming.
I need some help with MATLAB and detected features. I'll start by saying that I am not very good with MATLAB.
I am trying to automate the morphing process by combining feature point detection with morphing. I use CascadeObjectDetector to find the features of a human face in two images, and I find their strongest features and positions with Location. What I need is to sort the x, y coordinates of the detected features so that the coordinates of points in pic1 match the nearest coordinates in pic2, because the main thing in morphing is to have pairs of corresponding points.
There are a lot of similar questions but I can't get a clear answer out of them. So, I want to represent latitude and longitude in a 2D space such that I can calculate the distances if necessary.
There is the equirectangular approximation, which can calculate distances, but this is not exactly what I want.
There is UTM, but it has many zones and letter designations, so a distance calculation would have to take zone changes into account, which is not trivial.
I want a representation in which I can treat x, y as numbers in Euclidean space and apply the standard distance formula to them, without multiplying by the Earth's radius every time I need to calculate the distance between two points.
Is there anything in MATLAB that can convert lat/long to x, y in Euclidean space?
I am not a MATLAB specialist, but the answer is not limited to MATLAB. Generally in GIS, when you want to perform calculations in Euclidean space you have to apply a 'projection' to the data. There are various types of projections, one of the most popular being Transverse Mercator.
The common feature of such projections is that you cannot precisely represent the whole world with them. The projection is based on a chosen meridian and is accurate enough only up to some distance from it (e.g. the Gauss-Krüger projection is quite accurate within roughly ±500 km of its central meridian).
You will always have to choose some kind of 'zone' or 'meridian', regardless of what projection you choose, because it is impossible to transform a sphere into a plane without some deformation (be it of distance, angle, or area).
So if you are working on a set of data located around some geographical area, you can simply transform (project) the data and treat it as a normal Euclidean 2D space.
But if you plan to process data spread around the whole world, you will have to cluster it properly and project each cluster using the appropriate zone.
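If the data covers a limited area, a minimal sketch of such a local projection (a simple equirectangular projection centred on the data, with lat and lon in degrees; all variable names are assumptions) could look like this:

R = 6371000;                            % mean Earth radius in metres
lat0 = mean(lat);  lon0 = mean(lon);    % reference point of the local projection
x = R * (pi/180) * (lon - lon0) .* cosd(lat0);
y = R * (pi/180) * (lat - lat0);
% the ordinary Euclidean formula now gives distances in metres
d12 = hypot(x(2) - x(1), y(2) - y(1));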
The output of some processing consists of a binary map with several connected areas.
The objective is, for each area, to compute and draw on the image a line crossing the area along its longest axis, but not extending beyond it. It is very important that the line lie strictly inside the area, so ellipse fitting is not a good option.
Any hint on how to achieve this result in an efficient way?
If you have the Image Processing Toolbox you can use regionprops, which will give you several standard measures of any binary connected region. This includes the major and minor axis lengths of each region.
You can also get the tightest rectangular bounding box, centroid, perimeter, and orientation. These will all help you with ellipse fitting.
Depending on how you would like to draw your lines, the regionprops function also returns the major and minor axis lengths of 2-D connected regions and does it on a per-connected-region basis, giving you a vector of axis lengths. If you specify 4-connected regions you can be fairly sure that the length lies exclusively within the connected region, but this is not guaranteed, since regionprops calculates the major axis length of an ellipse that has the same normalized second central moments as the connected region.
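A hedged sketch of how those measures could be used to draw a major-axis line per region, assuming bw is the binary image (a name introduced here); as noted, the line is not guaranteed to stay inside a non-convex region:

cc = bwconncomp(bw, 4);                          % 4-connected regions
stats = regionprops(cc, 'Centroid', 'MajorAxisLength', 'Orientation');
imshow(bw); hold on;
for k = 1:numel(stats)
    c   = stats(k).Centroid;
    len = stats(k).MajorAxisLength;
    th  = -stats(k).Orientation * pi/180;        % minus sign: image y axis points down
    dx  = 0.5 * len * cos(th);
    dy  = 0.5 * len * sin(th);
    plot([c(1) - dx, c(1) + dx], [c(2) - dy, c(2) + dy], 'r-', 'LineWidth', 2);
end
hold off;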
My first inclination would be to treat the pixels as 2D points and use principal component analysis (PCA). PCA will give you the major axis of each region (princomp, if you have the Statistics Toolbox).
Regarding drawing line segments rather than full lines: without knowing anything about the shape of these regions, an efficient method doesn't occur to me. Assuming the regions can have arbitrary shapes, you could just trace along each line until you reach the edge of the region, then repeat in the other direction.
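A rough sketch of this idea for a single region, assuming bw is a binary mask containing just that region (the walk starts from the centroid, so it works best for reasonably convex regions; all names are assumptions):

[r, c] = find(bw);                 % pixel coordinates of the region
pts = [c, r];                      % treat pixels as 2D points (x = column, y = row)
mu = mean(pts, 1);
coeff = princomp(pts);             % use pca(pts) in newer releases
dirv = coeff(:, 1)';               % unit vector along the major axis
endpts = zeros(2, 2);
for s = [1, -1]                    % walk in both directions until leaving the region
    t = 0;
    while true
        p = round(mu + s * (t + 1) * dirv);
        if p(1) < 1 || p(2) < 1 || p(1) > size(bw, 2) || p(2) > size(bw, 1) || ~bw(p(2), p(1))
            break;
        end
        t = t + 1;
    end
    endpts((3 - s) / 2, :) = mu + s * t * dirv;
end
imshow(bw); hold on;
plot(endpts(:, 1), endpts(:, 2), 'g-', 'LineWidth', 2);
hold off;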
I assumed you already have the binary image divided into regions. If this isn't true you could use bwlabel (if the regions aren't touching) or k-means (if they are) first.