How to get area-proportional nodes in Gephi?

Gephi has a handy feature for sizing nodes based on a given variable. The nodes seem to be sized so that the radius of the circle is linearly proportional to the value of the variable. How do I get the area of the circle to be proportional to the value of the variable instead?
Using the spline functionality to get exact proportionality seems quite complicated. I imagine one solution might be to export the node table, calculate the square root of the variable, and re-import the data. I was wondering whether I am missing any more direct solution.

Taking the square root is the right idea, and not only for squares: a circle's radius also scales with the square root of its area. Strictly, for a circle of area A the radius is sqrt(A/π), so the Excel formula would be:
=SQRT(*value here*/PI())
Since Gephi only uses the column for relative sizing, the constant divisor does not change the proportions, so plain =SQRT(*value here*) works just as well.
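If you do take the export/re-import route from the question, here is a minimal sketch in Python of the round trip, assuming the node table has been exported from Gephi's Data Laboratory as nodes.csv with a numeric column named value (both names are hypothetical):
# Add a square-rooted copy of a node attribute so that circle *area*
# tracks the original value once Gephi sizes radii by the new column.
import csv
import math
with open("nodes.csv", newline="") as src, open("nodes_sqrt.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames + ["value_sqrt"])
    writer.writeheader()
    for row in reader:
        # radius proportional to sqrt(value)  =>  area proportional to value
        row["value_sqrt"] = math.sqrt(float(row["value"]))
        writer.writerow(row)
Re-import nodes_sqrt.csv and size the nodes by the value_sqrt column.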

Related

Change the scale of pixels to non-square dimensions in GEE

I want to export an image in Google Earth Engine, and I want the pixel size to match some in-situ plots with dimensions of 2 m x 30 m. How can I set the scale parameter to match these dimensions?
What I currently have (for a pixel size of 30 m x 30 m):
var myimage = sat_image.reduceRegions(my_points, ee.Reducer.first(), 30);
print(myimage);
Export.table.toDrive(myimage, "pts30", "insitu_points");
Instead of specifying the scale parameter, specify a crsTransform parameter. It is "a row-major ordering of the 3x2 transform matrix", as the documentation puts it, which means that you should specify a list of numbers like
crsTransform=[xScaling, 0, 0, 0, yScaling, 0]
Note that these numbers are not the same as the scale parameter. The scale parameter specifies meters of nominal scale per pixel. These numbers specify distance in the projection's coordinate system per pixel. For example, if the projection's numerical units are degrees, then the crsTransform factors are degrees per pixel. Thus, it is usually a good idea to specify a crs together with crsTransform so that you know which projection you're measuring in.
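For concreteness, here is a rough sketch of what that could look like for 2 m x 30 m pixels, written with the Earth Engine Python API and reusing the question's sat_image and my_points; the EPSG code and the sign of the y factor are assumptions you would need to check for your own site:
import ee
ee.Initialize()
# Hypothetical projected CRS in metres covering the study area.
crs = "EPSG:32633"
# 2 m per pixel in x, 30 m per pixel in y (negative for a north-up grid);
# the remaining entries are the shear and translation terms, left at zero.
transform = [2, 0, 0, 0, -30, 0]
sampled = sat_image.reduceRegions(
    collection=my_points,
    reducer=ee.Reducer.first(),
    crs=crs,
    crsTransform=transform,
)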
Also, this may not be the best option for such very non-square pixels. Consider, instead of using reduceRegions on points, converting your points to rectangle features and then using reduceRegion with a mean reducer. This will have a similar effect, but lets you choose the exact shape you want to sample with.
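A sketch of that rectangle-feature alternative, again with the Python API; the projection, the plot orientation (2 m east-west by 30 m north-south), and the helper name are all assumptions:
proj = ee.Projection("EPSG:32633")  # hypothetical metric projection
def to_plot_rectangle(feature):
    # Build a 2 m x 30 m rectangle centred on the point, in projected coordinates.
    centre = feature.geometry().transform(proj, 1).coordinates()
    x = ee.Number(centre.get(0))
    y = ee.Number(centre.get(1))
    rect = ee.Geometry.Rectangle(
        [x.subtract(1), y.subtract(15), x.add(1), y.add(15)], proj, False)
    return feature.setGeometry(rect)
plots = my_points.map(to_plot_rectangle)
sampled = sat_image.reduceRegions(plots, ee.Reducer.mean(), 2)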
I'm not sure how well either option will work or whether there are further issues to deal with, since I haven't done anything like this myself. But I looked around and there is very little discussion of crsTransform at all, so I figured it was worth writing up.

Spatial data visualization level of detail

I have a 3D point cloud data set with different attributes that I visualize as points so far, and I want to have LOD based on distance from the set. I want a generalized view from far away, with fewer and larger points, and as I zoom in I want more points, correctly spaced out, to appear automatically.
Kind of like this video below, behavior wise: http://vimeo.com/61148577
I thought one solution would be to use an adaptive octree, but I'm not sure if that is a good solution. I've been looking into hierarchical clustering with seamless transitions, but I'm not sure which solution I should go with that fits my goal.
Any ideas, tips on where to start? Or some specific method?
Thanks
The video you linked uses 2D metaballs. When metaballs clump together, they form blobs, not larger circles. Are you okay with that?
You should read an intro to metaballs before continuing. Just google 2D metaballs.
So, hopefully you've read about metaball threshold values and falloff functions. Your falloff function should have a radius--a distance at which the function falls to zero.
We can achieve an LOD effect by tuning the threshold and the radius. Basically, as you zoom out, increase radius so that points have influence over a larger area and start to clump together. Also, adjust threshold so that areas with insufficient density of points start to disappear.
I found this existing jsfiddle 2D metaballs demo and I've modified it to showcase LOD:
LOD 0: Individual points as circles. (http://jsfiddle.net/TscNZ/370/)
LOD 1: Isolated points start to shrink, but clusters of points start to form blobs. (http://jsfiddle.net/TscNZ/374/)
LOD 2: Isolated points have disappeared. Blobs are fewer and larger. (change above URL to jsfiddle revision 377)
LOD 3: Blobs are even fewer and even larger. (change above URL to jsfiddle revision 380)
As you can see in the different jsfiddle revisions, changing LOD just requires tuning a few variables:
threshold = 1,
max_alpha = 1,
point_radius = 10,
A crucial point that many metaballs articles don't touch on: you need to use a convention where only values above your threshold are considered "inside" the metaball. Then, when zoomed far out, you need to set your threshold value above the peak value of your falloff function. This will cause an isolated point to disappear completely, leaving only clumps visible.
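Here is a minimal numerical sketch of that idea in plain Python (rather than the jsfiddle's JavaScript); the quadratic falloff and the specific numbers are only illustrative assumptions:
# Field contributed by one metaball at distance d, falling to 0 at `radius`.
def falloff(d, radius):
    if d >= radius:
        return 0.0
    t = 1.0 - d / radius           # 1 at the centre, 0 at the edge
    return t * t                   # so an isolated point peaks at 1.0
def field(sample, points, radius):
    # Brute force: sum the contribution of every point at this sample location.
    return sum(falloff(((sample[0] - px) ** 2 + (sample[1] - py) ** 2) ** 0.5, radius)
               for px, py in points)
points = [(0.0, 0.0), (3.0, 0.0), (50.0, 50.0)]  # two near neighbours, one isolated
# Zoomed in: small radius, threshold below the peak, so every point is visible.
print(field((50.0, 50.0), points, radius=5.0) > 0.8)    # True: isolated point shows
# Zoomed out: larger radius and a threshold above the single-point peak of 1.0,
# so the isolated point vanishes while the clump still exceeds the threshold.
print(field((50.0, 50.0), points, radius=20.0) > 1.2)   # False: isolated point gone
print(field((1.5, 0.0), points, radius=20.0) > 1.2)     # True: clump survives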
Rendering metaballs is a whole topic in itself. This jsfiddle demo takes a very inefficient brute-force approach, but there's also the more efficient "marching squares".

How do I optimize point-to-circle matching?

I have a table that contains a bunch of Earth coordinates (latitude/longitude) and associated radii. I also have a table containing a bunch of points that I want to match with those circles, and vice versa. Both are dynamic; that is, a new circle or a new point can be added or deleted at any time. When either is added, I want to be able to match the new circle or point with all applicable points or circles, respectively.
I currently have a PostgreSQL module containing a C function to find the distance between two points on earth given their coordinates, and it seems to work. The problem is scalability. In order for it to do its thing, the function currently has to scan the whole table and do some trigonometric calculations against each row. Both tables are indexed by latitude and longitude, but the function can't use them. It has to do its thing before we know whether the two things match. New information may be posted as often as several times a second, and checking every point every time is starting to become quite unwieldy.
I've looked at PostgreSQL's geometric types, but they seem more suited to rectangular coordinates than to points on a sphere.
How can I arrange/optimize/filter/precalculate this data to make the matching faster and lighten the load?
You haven't mentioned PostGIS - why have you ruled that out as a possibility?
http://postgis.refractions.net/documentation/manual-2.0/PostGIS_Special_Functions_Index.html#PostGIS_GeographyFunctions
Thinking out loud a bit here... you have a point (lat/long) and a radius, and you want to find all existing point-radius combinations that may overlap? (or something like that...)
It seems you might be able to store a few more bits of information along with those numbers that could help you rule out, during your query, candidates that are nowhere close. This might avoid a lot of trig operations.
For example, with point x,y and radius r, you could easily calculate a range of feasible lat/long values (a squarish area) that could be used to rule out needless calculations against another point.
You could then store the max and min lat and long along with that point in the database. Then, before running your trig on every row, you could filter your results to eliminate points that are obviously out of bounds.
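As a small sketch of that precalculation (Python, with the usual rough metres-per-degree approximations; the function name is made up):
import math
METRES_PER_DEGREE_LAT = 111320.0  # coarse average, good enough for a filter
def bounding_box(lat, lon, radius_m):
    # Return (min_lat, max_lat, min_lon, max_lon) of a box containing the circle.
    dlat = radius_m / METRES_PER_DEGREE_LAT
    # Longitude degrees shrink with latitude; guard against division by ~0 at the poles.
    dlon = radius_m / (METRES_PER_DEGREE_LAT * max(math.cos(math.radians(lat)), 1e-6))
    return lat - dlat, lat + dlat, lon - dlon, lon + dlon
print(bounding_box(52.0, 13.4, 500))
Storing those four numbers with each circle lets a query use ordinary indexed comparisons (point latitude between min_lat and max_lat, and likewise for longitude) to discard most rows before running the exact great-circle distance check.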
If I understand you correctly, my first idea would be to cache some data and eliminate most of the checking.
Imagine your circle is actually a box with 4 sides.
You could store the coordinates of those sides, much like the grid lines (a mesh) on a real map: store the east, west, north, and south edge of each circle.
If your coordinate is outside of that box, you can be sure it won't be inside the circle either, since the box is bigger than the circle.
If it isn't, you still have to check as you do now, but I guess you can already eliminate most of the work.

Unity TerrainData not compatible with absolute elevations?

Is it possible for the Unity TerrainData structure to take absolute elevations? I have a terrain generator that generates absolute elevations, but they are huge. The Perlin octave with the highest amplitude is the one that decides what altitude the entire map is at, with an amplitude of 2500 and a wavelength of 10000. In order for my map to tile properly and transition between altitudes seamlessly, I need to be able to use this system of absolute altitudes. I could scale down my generator's output to fit in the limited space (between 0 and 1) and stretch the y scale of the TerrainData, but that would lose too much precision.
What can I do? Is there a way I can use elevations that may vary by as much as 2500 meters?
One thing that might be important is that there will never be that much variation in the space of a single Terrain object, but across many, many Terrain objects, it is possible for the player to traverse that kind of altitude.
I've tested changing different variables, and I've reached the following conclusion...
Heightmap Resolution does not mean the precision of the data (some people I asked believed it determined the number of possible height values). It means the number of samples per row and column. This, along with Size, determines how far apart samples are, and effectively how large the polygons of the terrain are. My impression is that there is no way to improve precision, although I now know how to increase the height of the terrain object. Instead, since I will never have 2500 meters of elevation difference within a single terrain object, I will put each piece of terrain produced by my generator into a Terrain object that is positioned and sized to contain all of the data in that square. The data will also have to be converted so that it fits, but other than that, I see no drawbacks to this method.
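As a rough illustration of that conversion (plain Python standing in for the C# you would actually write in Unity; the sample heights and names are made up):
# Convert one tile's absolute elevations (metres) into the 0..1 heightmap values
# TerrainData expects, plus the y position and y size to give that tile.
def normalise_tile(absolute_heights):
    lo = min(min(row) for row in absolute_heights)
    hi = max(max(row) for row in absolute_heights)
    size_y = max(hi - lo, 1e-6)        # use as the TerrainData height (size.y)
    normalised = [[(h - lo) / size_y for h in row] for row in absolute_heights]
    return normalised, lo, size_y      # place the Terrain object at y = lo
tile = [[1200.0, 1201.5], [1203.2, 1207.9]]  # hypothetical 2x2 sample
heights01, base_y, size_y = normalise_tile(tile)
print(base_y, size_y, heights01)
Because each tile's 0..1 range only spans its local relief rather than the full 2500 meters, the heightmap precision is spent where it matters.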
Important note: the resolution must be 2^n + 1, where n is an integer. If you provide a different value for the resolution, the next permitted value below your choice will be selected.

Problem drawing a polygon on data clusters in MATLAB

I have some data points which I have divided into clusters with some clustering algorithms, as in the picture below (it might take some time for the image to appear):
http://www.freeimagehosting.net/uploads/05a807bc42.png
Each color represents a different cluster. I have to draw polygons around each cluster, and I use convhull for this. But as you can see, the polygon for the red cluster is very big and covers a lot of area, which is not what I am looking for. I need to draw lines (polygons) exactly around my data sets. For example, in the picture above I want a polygon drawn exactly around the red cluster with its 3 branches. In other words, in this case I need a polygon with 3 branches to cover my red cluster, not that big polygon that covers the whole area. Can anyone help me with this?
Please note that the solution should be general, because the clusters will change in each run of the algorithm.
I am not sure this is a fully specified question. I see variants of this question come up quite often.
Here is why this cannot really be answered as asked: imagine six points, three in an equilateral triangle, with another three in a smaller equilateral triangle inside it in the same orientation.
What is the correct hull around this? Is it just the convex hull? Is it the inner triangle with three line spurs coming out from it? Does it matter what the relative sizes of the triangles are? Should you have to specify that parameter then?
If your clusters are very compact, you could try the following:
Create a grid, say with a spacing of 0.1.
Set every pixel in the grid to 1 if there is at least one data point covering it; set it to 0 otherwise.
You may need to run imclose on your mask to fill small holes that were left uncovered by sheer bad luck.
Extract the border pixels using, e.g., bwperim. This is the outline of the polygon you're looking for.
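For reference, here is a rough Python translation of those steps (SciPy's binary closing and erosion standing in for MATLAB's imclose and bwperim; the grid spacing and structuring element are arbitrary choices):
import numpy as np
from scipy import ndimage
def cluster_outline(points, spacing=0.1):
    # Rasterise one cluster's points onto a grid and return mask and border.
    pts = np.asarray(points, dtype=float)
    idx = np.floor((pts - pts.min(axis=0)) / spacing).astype(int)
    mask = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    mask[idx[:, 0], idx[:, 1]] = True                 # pixel covered by a point
    # Fill small gaps left by unlucky sampling (imclose equivalent).
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    # Border pixels = mask minus its erosion (bwperim equivalent).
    border = mask & ~ndimage.binary_erosion(mask)
    return mask, border
mask, border = cluster_outline([(0.0, 0.0), (0.1, 0.0), (0.2, 0.1), (0.1, 0.2)])
print(border.astype(int))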