Change the scale of pixels to non-square dimensions in GEE - export-to-csv

I want to export an image in Google Earth Engine and I want the pixel size to match some in-situ plots with dimensions of 2 m x 30 m. How can I set the scale parameter to match these dimensions?
What I currently have (for a 30 m x 30 m pixel size):
var myimage = sat_image.reduceRegions(my_points, ee.Reducer.first(), 30);
print(myimage);
Export.table.toDrive(myimage, "pts30", "insitu_points");

Instead of specifying the scale parameter, specify a crsTransform parameter. It is "a row-major ordering of the 3x2 transform matrix", as the documentation puts it, which means that you should specify a list of numbers like
crsTransform=[xScaling, 0, 0, 0, yScaling, 0]
Note that these numbers are not the same as the scale parameter. The scale parameter specifies meters of nominal scale per pixel. These numbers specify distance in the projection's coordinate system per pixel. For example, if the projection's numerical units are degrees, then the crsTransform factors are degrees per pixel. Thus, it is usually a good idea to specify a crs together with crsTransform so that you know which projection you're measuring with.
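For example, here is a minimal, untested sketch of that option for 2 m x 30 m pixels. 'EPSG:32633' and the export names are placeholders; substitute a metric projection that covers your plots.

// Sample with 2 m x 30 m pixels by passing crs and crsTransform instead of scale.
var sampled = sat_image.reduceRegions({
  collection: my_points,
  reducer: ee.Reducer.first(),
  crs: 'EPSG:32633',
  crsTransform: [2, 0, 0, 0, -30, 0]  // 2 m per pixel in x, 30 m in y (negative y is the usual north-up convention)
});
Export.table.toDrive({
  collection: sampled,
  description: 'pts2x30',
  folder: 'insitu_points'
});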
Also, this may not be the best option for pixels this far from square. Consider, instead of using reduceRegions on points, converting your points to rectangle features and then using reduceRegion with a mean reducer. This will have a similar effect, but it lets you choose the exact shape you want to sample with.
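Here is a minimal sketch of that second option for a single plot, assuming you know the plot's corner coordinates in a metric CRS (the coordinates and 'EPSG:32633' below are placeholders):

// Build a 2 m x 30 m rectangle in projected metres and average the pixels it covers.
var plot = ee.Geometry.Rectangle({
  coords: [500000, 4649900, 500002, 4649930],  // xmin, ymin, xmax, ymax
  proj: 'EPSG:32633',
  geodesic: false
});
var plotMean = sat_image.reduceRegion({
  reducer: ee.Reducer.mean(),
  geometry: plot,
  scale: 30,  // the image's nominal resolution; adjust to your data
  crs: 'EPSG:32633'
});
print(plotMean);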
I'm not sure how well either option will work or whether there are further issues to deal with; I haven't done anything like this myself. But I looked around and there is very little discussion of crsTransform at all, so I figured it was worth writing up.

Compare two nonlinear transformed (monochromatic) images

Given are two monochromatic images of the same size. Both are pre-aligned/anchored to one common point. Some points of the original image have moved to new positions in the new image, but not in a linear fashion.
Below you see a picture of an overlay of the original (red) and transformed image (green). What I am looking for now is a measure of how much the "individual" points shifted.
At first I thought of a simple average correlation of the whole matrix or some kind of phase correlation, but I was wondering whether there is a better way of doing so.
I already found that link, but it didn't help that much. Currently I'm implementing this in Matlab, but that shouldn't really matter here, I guess.
Update: For clarity, I have hundreds of these image pairs and I want to compare how similar each pair is. It doesn't have to be the fanciest algorithm; rather, it should be easy to implement and yield a good estimate of similarity.
An unorthodox approach uses RASL to align an image pair. A python implementation is here: https://github.com/welch/rasl and it also provides a link to the RASL authors' original MATLAB implementation.
You can give RASL a pair of related images, and it will solve for the transformation (scaling, rotation, translation, you choose) that best overlays the pixels in the images. A transformation parameter vector is found for each image, and the difference in parameters tells how "far apart" they are (in terms of transform parameters).
This is not the intended use of RASL, which is designed to align large collections of related images while being indifferent to changes in alignment and illumination. But I just tried it out on a pair of jittered images and it worked quickly and well.
I may add a shell command that explicitly does this (I'm the author of the python implementation) if I receive encouragement :) (today, you'd need to write a few lines of python to load your images and return the resulting alignment difference).
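As a library-free illustration of the "difference in parameters" idea above, here is a sketch (not part of the rasl package) of turning two parameter vectors into a single distance:

// Distance between two transform parameter vectors, e.g. [scale, rotation, tx, ty].
function transformDistance(paramsA, paramsB) {
  var sum = 0;
  for (var i = 0; i < paramsA.length; i++) {
    var d = paramsA[i] - paramsB[i];
    sum += d * d;
  }
  return Math.sqrt(sum);
}
// Identical scale and rotation but a (3, 4) pixel translation gives a distance of 5.
console.log(transformDistance([1.0, 0.0, 0, 0], [1.0, 0.0, 3, 4]));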
You can try using optical flow: http://www.mathworks.com/discovery/optical-flow.html
It is usually used to measure the movement of objects from frame T to frame T+1, but you can also use it in your case. You would get a map that tells you the "offset" by which each point in Image1 moved to reach its position in Image2.
Then, if you want a metric that gives you a "distance" between the images, you can perhaps average those offset values or something similar.
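A minimal sketch of that averaging step, assuming the flow has already been computed and is given as flat arrays u and v of per-pixel offsets:

// Mean magnitude of a dense flow field as a single "how far apart" number.
function meanFlowMagnitude(u, v) {
  var total = 0;
  for (var i = 0; i < u.length; i++) {
    total += Math.sqrt(u[i] * u[i] + v[i] * v[i]);
  }
  return total / u.length;
}
// Three pixels shifted by (1,0), (0,2) and (3,4) give a mean offset of about 2.67.
console.log(meanFlowMagnitude([1, 0, 3], [0, 2, 4]));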

3d reconstruction from 2 views

I'm doing some study on 3D reconstruction from two views with a fixed, known camera focal length. Something that is unclear to me: does triangulation give us the real-world scale of an object, or is the scale of the result different from the actual one? If the scale is different from the actual size, how can I find the depth of points from it? I was wondering if there is more information that I need in order to recover the real-world scale of the object.
Scale is arbitrary in structure-from-motion (SfM) tasks, so the result may differ between reconstructions, since points are initially projected at an arbitrary depth.
You need at least one known distance in your scene to recover the absolute (real-world) scale. You can include one object of known size in your scene so you will be able to convert your scale afterwards.
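A minimal sketch of that conversion (plain arithmetic; the point list and distances below are made up):

// Rescale a reconstruction so that one known real-world distance is honoured.
function rescalePoints(points, reconstructedDist, realDist) {
  var s = realDist / reconstructedDist;
  return points.map(function (p) {
    return [p[0] * s, p[1] * s, p[2] * s];  // depths scale by the same factor
  });
}
// Two markers came out 0.4 units apart but are really 1.2 m apart, so scale by 3.
console.log(rescalePoints([[0.1, 0.2, 0.5], [0.3, 0.2, 0.9]], 0.4, 1.2));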

are GL_TEXTURE_EXTERNAL_OES texture2D coordinates normalized or no?

I understand that most textures use normalized coordinates, the exception being GL_TEXTURE_RECTANGLE.
However, I can't find this information for GL_TEXTURE_EXTERNAL_OES. Are the coordinates normalized, or are they in the range [0, imageWidth], [0, imageHeight]?
I would also appreciate it if you mentioned where you got the information from; I couldn't find it on the Khronos website.
They use normalized texture coordinates. You can address them with texture coordinates in the range [0.0, 1.0]. While it might have been nice to point that out in the extension spec, they probably thought it was not necessary because it's just like all other textures in OpenGL ES.
Source: Tried it on a Kindle Fire HDX 7" tablet.
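So if your source coordinates are in pixels, convert them to [0, 1] before sampling. A small sketch of that conversion (sampling at texel centres, hence the half-texel offset; the 1920x1080 size is just an example):

function pixelToTexCoord(px, py, imageWidth, imageHeight) {
  return [
    (px + 0.5) / imageWidth,   // s in [0, 1]
    (py + 0.5) / imageHeight   // t in [0, 1]
  ];
}
console.log(pixelToTexCoord(0, 0, 1920, 1080));        // top-left texel centre
console.log(pixelToTexCoord(1919, 1079, 1920, 1080));  // bottom-right texel centre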
Like you I frustratingly couldn't quickly find a definitive statement. However...
The extension documentation for OES_EGL_image_external mentions both that:
Their default min filter is LINEAR. It is an INVALID_ENUM error to set the min filter value to anything other than LINEAR or NEAREST.
And:
The default s and t wrap modes are CLAMP_TO_EDGE and it is an INVALID_ENUM error to set the wrap mode to any other value.
Which are pretty clear clues that coordinates aren't normalised if you're used to dealing with non-power-of-two textures. Indeed the whole tenor of the extension — that one to three hardware sampling units may be used, that some varyings may be lost and that only a single level-of-detail is permitted — strongly reserves the right for an implementation to do the exact same thing as if you'd sampled Y, U and V separately from non-power-of-two sources and combined them arithmetically yourself.
But in terms of providing a thorough finger-on-paper answer, CLAMP_TO_EDGE is defined by the appropriate man page as:
GL_CLAMP_TO_EDGE causes coordinates to be clamped to the range (1/2N, 1 - 1/2N), where N is the size of the texture in the direction of clamping.
... which, again, would make little sense if coordinates were normalised (though it wouldn't actually be undefined).
So I'm willing to gamble strongly that they're not normalised.

Spatial data visualization level of detail

I have a 3D point cloud data set with different attributes that I visualize as points so far, and I want to have LOD based on distance from the set. I want a generalized view from far away with fewer and larger points, and as I zoom in I want more points, correctly spaced out, to appear automatically.
Kind of like the behavior in this video: http://vimeo.com/61148577
I thought one solution would be to use an adaptive octree, but I'm not sure if that is a good fit. I've also been looking into hierarchical clustering with seamless transitions, but I'm not sure which approach best fits my goal.
Any ideas, tips on where to start? Or some specific method?
Thanks
The video you linked uses 2D metaballs. When metaballs clump together, they form blobs, not larger circles. Are you okay with that?
You should read an intro to metaballs before continuing. Just google 2D metaballs.
So, hopefully you've read about metaball threshold values and falloff functions. Your falloff function should have a radius--a distance at which the function falls to zero.
We can achieve an LOD effect by tuning the threshold and the radius. Basically, as you zoom out, increase radius so that points have influence over a larger area and start to clump together. Also, adjust threshold so that areas with insufficient density of points start to disappear.
I found this existing jsfiddle 2D metaballs demo and I've modified it to showcase LOD:
LOD 0: Individual points as circles. (http://jsfiddle.net/TscNZ/370/)
LOD 1: Isolated points start to shrink, but clusters of points start to form blobs. (http://jsfiddle.net/TscNZ/374/)
LOD 2: Isolated points have disappeared. Blobs are fewer and larger. (change above URL to jsfiddle revision 377)
LOD 3: Blobs are even fewer and even larger. (change above URL to jsfiddle revision 380)
As you can see in the different jsfiddle revisions, changing LOD just requires tuning a few variables:
threshold = 1,
max_alpha = 1,
point_radius = 10,
A crucial point that many metaballs articles don't touch on: you need to use a convention where only values above your threshold are considered "inside" the metaball. Then, when zoomed far out, you need to set your threshold value above the peak value of your falloff function. This will cause an isolated point to disappear completely, leaving only clumps visible.
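To make that concrete, here is a small sketch of a radius-limited falloff with a threshold test. The falloff shape is just one common choice; it peaks at 1, so a threshold above 1 makes isolated points vanish, as described above.

// Smooth falloff that reaches zero at `radius`.
function falloff(distance, radius) {
  if (distance >= radius) return 0;
  var x = 1 - (distance * distance) / (radius * radius);
  return x * x;  // equals 1 at distance 0
}
// Sum the influence of every point at a pixel.
function fieldValue(px, py, points, radius) {
  return points.reduce(function (sum, p) {
    var dx = px - p[0], dy = py - p[1];
    return sum + falloff(Math.sqrt(dx * dx + dy * dy), radius);
  }, 0);
}
// A pixel is "inside" a metaball only if the summed field exceeds the threshold.
function isInside(px, py, points, radius, threshold) {
  return fieldValue(px, py, points, radius) > threshold;
}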
Rendering metaballs is a whole topic in itself. This jsfiddle demo takes a very inefficient brute-force approach, but there's also the more efficient "marching squares".

Unity TerrainData not compatible with absolute elevations?

Is it possible for the Unity TerrainData structure to take absolute elevations? I have a terrain generator that generates absolute elevations, but they are huge. The Perlin octave with the highest amplitude is the one that decides what altitude the entire map is at, with an amplitude of 2500 and a wavelength of 10000. In order for my map to tile properly and transition between altitudes seamlessly, I need to be able to use this system of absolute altitude. I could scale down my generator's output to fit in the limited space (between 0 and 1) and stretch the y scale of the TerrainData, but that would lose too much precision.
What can I do? Is there a way I can use elevations that may vary by as much as 2500 meters?
One thing that might be important is that there will never be that much variation in the space of a single Terrain object, but across many, many Terrain objects, it is possible for the player to traverse that kind of altitude.
I've tested changing different variables, and I've reached the following conclusion...
Heightmap Resolution does not mean the precision of the data (some people I asked believed it determined the number of possible height values). It means the number of samples per row and column. This, along with the terrain size, determines how far apart samples are, and effectively how large the polygons of the terrain are. It's my impression that there is no way to improve precision, although I now know how to increase the height of the terrain object. Instead, since I will never have 2500 meters of elevation difference within a single Terrain object, I will put each piece of terrain produced by my generator into a Terrain object that is positioned and sized to contain all of the data in that square. The data will also have to be converted so that it fits, but other than that, I see no drawbacks to this method.
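Here is a sketch of that conversion step in plain arithmetic (not Unity API calls); the Terrain object would then be positioned at terrainY and given terrainHeight as its height size:

// Normalise one chunk's absolute elevations (a 2D array of metres) into [0, 1].
function normaliseChunk(elevations) {
  var flat = [].concat.apply([], elevations);
  var minElev = Math.min.apply(null, flat);
  var maxElev = Math.max.apply(null, flat);
  var range = Math.max(maxElev - minElev, 1);  // avoid dividing by zero on flat chunks
  var heights = elevations.map(function (row) {
    return row.map(function (e) { return (e - minElev) / range; });
  });
  return { heights: heights, terrainY: minElev, terrainHeight: range };
}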
Important note: the resolution must be 2^n + 1, where n is a positive integer. If you provide a different value for the resolution, the next permitted value below your choice will be selected.
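A quick sketch of rounding a requested resolution down to a permitted value:

// Heightmap resolutions must be 2^n + 1; round a requested sample count down.
function permittedResolution(requested) {
  var n = Math.floor(Math.log2(requested - 1));
  return Math.pow(2, n) + 1;
}
console.log(permittedResolution(513));  // 513 (already 2^9 + 1)
console.log(permittedResolution(700));  // 513 (rounded down, as described above)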