Mapbox Terrain RGB image

I am new to Mapbox and am trying to get the Terrain-RGB data. I have followed the example in the documentation here:
https://www.mapbox.com/help/access-elevation-data/
and used the following query:
api.mapbox.com/v4/mapbox.terrain-rgb/{z}/{x}/{y}.pngraw?access_token={my_access_token}
which works fine for zoom levels 0 through 5 and returns RGB tiles of elevation data at a high level:
api.mapbox.com/v4/mapbox.terrain-rgb/5/0/0.pngraw?access_token={my_access_token}
I need to get the data at a much higher zoom level than 5, but once I use a zoom level above 5 it returns 'Tile Does Not Exist'.
The documentation says that there is data up to a zoom of 15. My access token works, and I have tried x/y tiles of 0, 0 (which should exist at all levels). Does anyone have any help or suggestions?

I had a similar issue; in my case the mistake was assuming that, given an {x}/{y} tuple, the zoom {z} was free to change. It isn't: when the zoom changes, the x/y indices change too.
To get the correct tile indices I used mercantile (pip install mercantile):
import mercantile
print(mercantile.tile(-71.0638031, 42.3578952, 15))
>>> Tile(x=9915, y=12120, z=15)
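If it helps, here is a minimal sketch of fetching and decoding one of those tiles in Python. It assumes the requests and Pillow packages; my_access_token is a placeholder, and the decoding formula is the one given in Mapbox's Terrain-RGB documentation:

import mercantile
import requests
from io import BytesIO
from PIL import Image

my_access_token = "YOUR_MAPBOX_TOKEN"  # placeholder

# Tile indices for a given lon/lat at zoom 15, as above.
tile = mercantile.tile(-71.0638031, 42.3578952, 15)  # Tile(x=9915, y=12120, z=15)
url = ("https://api.mapbox.com/v4/mapbox.terrain-rgb/"
       f"{tile.z}/{tile.x}/{tile.y}.pngraw?access_token={my_access_token}")
img = Image.open(BytesIO(requests.get(url).content)).convert("RGB")

# Decode the elevation (in metres) of the pixel at the tile centre,
# using the documented Terrain-RGB formula.
r, g, b = img.getpixel((128, 128))
elevation = -10000 + (r * 256 * 256 + g * 256 + b) * 0.1
print(elevation)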


Fusioncharts Timeseries

I am running into an issue in FusionCharts where the TimeSeries view is inaccurate when using min or max aggregation on step-line charts. The data is accurate when zoomed in, but upon zooming out it becomes warped and inaccurate. I cannot find anything in the documentation about limiting the zoom so that users are unable to zoom out to a distorted view. While the chart's features are great, the distortion is severe enough to render the chart useless. Does anyone know a way to limit zooming out to a maximum time range, or to increase the number of points shown on the graph?
You can use the binning feature available in FusionTime.
Read more about it here: https://www.fusioncharts.com/dev/fusioncharts-aspnet-visualization/components/fusiontime-components/fusioncharts-net-binning

Setting the CRS for a non-geographical SVG map

I'm trying to show a custom non-geographical map with CRS.Simple, as explained here:
In a CRS.Simple, one horizontal map unit is mapped to one horizontal pixel, and idem with vertical
However, I wish to use an SVG vector image as an overlay, and I don't get how the map unit is decided in this case, since vector images don't really have a resolution.
Also, how could I set the CRS origin's location to a specific point?
Thanks for helping

What projection is this GeoTIFF and how do I convert it?

I am viewing weather data online, and the website I use (University of Oklahoma) provides a link to the displayed data as a GeoTIFF for research purposes. This is a direct link to the GeoTIFF that I am requesting help with.
I am trying to use the image in a Mapbox map, but there seems to be an issue with the projection. I use the GDAL tools (though I am a novice) and I can't even figure out what projection it is in to start with. When I use gdalinfo, I get the following result:
Warning 1: RowsPerStrip not defined ... assuming all one strip.
Raster dataset parameters:
Projection:
RasterCount: 1
RasterSize (7000,3500)
Using driver GeoTIFF
Image Structure Metadata:
0: COMPRESSION=DEFLATE
1: INTERLEAVE=BAND
Corner Coordinates:
Upper Left (-130, 55)
Lower Left (-130, 20)
Upper Right (-60, 55)
Lower Right (-60, 20)
Center (-95, 37.5)
Coordinate System is:
Band 1 :
DataType: Float32
ColorInterpretation: Gray
Description:
Size (7000,3500)
BlockSize (7000,3500)
NoDataValue: -999
Offset: 0
Scale: 1
I have been successfully converting other GeoTIFF files for Mapbox use with the following command:
gdalwarp -t_srs EPSG:3857 example.tif example-projected.tif
... The above command always works, except with the file I need help with. I am very new to GDAL, and though I have been trying, it is difficult for me. What am I not doing right, and how would I do this the correct way?
Your GeoTIFF has no embedded coordinate system, although the corners are obviously geographic coordinates (unprojected). WGS84 is my guess.
My suggestion is to define a source coordinate system in your command ([-s_srs srs_def]) and see the result:
-s_srs EPSG:4326
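For example, combined with the reprojection command from the question (a sketch; the file names are placeholders):

gdalwarp -s_srs EPSG:4326 -t_srs EPSG:3857 example.tif example-projected.tif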
If that doesn't work, you'd better ask OU; they should know best.

Incorrect coordinates in mbtiles generated with Tippecanoe

I generated an mbtiles file using Tippecanoe with just -zg and --drop-densest-as-needed as extra parameters. I uploaded the file to Mapbox Studio and everything works well, both in Studio and when loading the tiles through a mobile app.
I then tried my luck at self-hosting the tiles, using a very basic HTTP server in Go. Tiles were transferred from SQLite to a PostgreSQL database (the reason for this is Go + PSQL is the existing stack for the app).
For some reason the features are shifted depending on the zoom level. At level 1, data that's supposed to be in the US is in the Antarctic, at zoom level 2 it's off the coast of Chile, etc. The only one properly working is level 0 as there's only one tile.
I checked what tiles Mapbox was requesting when in San Francisco for zoom level 11: column 327, row 791. No tile exists for this row/col combination in the .mbtiles file although there's data there.
Are there additional things to be done to the mbtiles besides looking them up in the database using z/x/y? Or maybe something to configure on the app side?
Server code:
row := db.QueryRow(`
SELECT tile_data FROM tiles
WHERE
zoom_level = $1
AND tile_column = $2
AND tile_row = $3
`,
z, x, y,
)
On Android:
map.addSource(
VectorSource(
"tiles",
TileSet("2.2.0", "http://my.local.server:4000/tiles/{z}/{x}/{y}.mvt?key=2448A697EACDDC41432AAD9A1833E")
)
)
I tried setting the VectorSource's center and bounds found in the mbtiles metadata but it didn't change anything.
So I looked into existing server implementations, and it turns out the offset is because mbtiles are stored in the TMS scheme, in which the Y coordinate is flipped. We just need to convert the Y coordinate from the XYZ scheme to get the proper tile:
From Mapbox's own Node implementation:
// Flip Y coordinate because MBTiles files are TMS.
y = (1 << z) - 1 - y;
1 << z is the number of rows for a given zoom level, or two to the power of z.
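As a quick sanity check, here is the same conversion in Python, applied to the zoom-11 San Francisco tile from the question:

def xyz_to_tms_row(z, y):
    # MBTiles store rows in TMS order, so flip the XYZ row.
    return (1 << z) - 1 - y

print(xyz_to_tms_row(11, 791))  # -> 1256: the tile_row to look up in the mbtiles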
More info about XYZ vs TMS can also be found here.

Interpolation between two images with different pixel sizes

For my application, I want to interpolate between two images (CT to PET).
Therefore I map between them like this:
[X,Y,Z] = ndgrid(linspace(1,size(imagedata_ct,1),size_pet(1)),...
linspace(1,size(imagedata_ct,2),size_pet(2)),...
linspace(1,size(imagedata_ct,3),size_pet(3)));
new_imageData_CT=interp3(imagedata_ct,X,Y,Z,'nearest',-1024);
The size of my new image new_imageData_CT now matches the PET image. The problem is that the data in my new image is not correctly scaled, so it is compressed. I think the reason is that the voxel sizes of the two images are different and are not involved in the interpolation. So, for example:
CT image size : 512x512x1027
CT voxel size[mm] : 1.5x1.5x0.6
PET image size : 192x126x128
PET voxel size[mm] : 2.6x2.6x3.12
So how can I account for the voxel size in the interpolation?
You need to perform the matching in the patient coordinate system, but there is more to consider than just the resolution and the voxel size. You need to synchronize the positions (and maybe the orientations as well, but this is unlikely) of the two volumes.
You may find this thread helpful for finding out which DICOM tags describe the volume and how to calculate the transformation matrices for converting between the patient coordinate system (x, y, z in millimeters) and the volume coordinate system (column, row, slice number).
You have to make sure that the volume positions are comparable, as the positions of the slices in the CT and PET do not necessarily refer to the same origin. The easy way to do this is to compare the DICOM attribute Frame Of Reference UID (0020,0052) of the CT and PET slices. For all slices that share the same Frame Of Reference UID, the position of the slice in the DICOM header refers to the same origin.
If the datasets do not contain this tag, it is going to be much more difficult, unless you simply take it as an assumption. There are methods to deduce the matching slices of two different volumes from the contents of the pixel data, referred to as "registration", but this is a science of its own. See the link from Hugues Fontenelle.
BTW: in your example, you are not going to find a matching voxel in both volumes for each position, as the volumes have different physical extents. E.g. for the x-direction:
CT: 512 * 1.5 = 768 millimeters
PET: 192 * 2.6 = 499 millimeters
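To make the idea concrete, here is a minimal sketch of interpolating in millimetre space (in Python with NumPy/SciPy rather than MATLAB). It assumes both volumes share the same origin and orientation, which, as noted above, you must verify via the Frame Of Reference UID:

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Stand-in for the real CT data (512 x 512 x 1027 voxels in the question).
ct = np.zeros((512, 512, 1027), dtype=np.float32)
ct_voxel = (1.5, 1.5, 0.6)      # CT voxel size in mm
pet_shape = (192, 126, 128)     # PET image size
pet_voxel = (2.6, 2.6, 3.12)    # PET voxel size in mm

# Physical (mm) coordinate of every CT voxel centre along each axis,
# assuming a shared origin and orientation.
ct_axes = [np.arange(n) * v for n, v in zip(ct.shape, ct_voxel)]
interp = RegularGridInterpolator(ct_axes, ct, method="nearest",
                                 bounds_error=False, fill_value=-1024)

# Sample the CT at the physical position of every PET voxel centre.
pet_axes = [np.arange(n) * v for n, v in zip(pet_shape, pet_voxel)]
grid = np.stack(np.meshgrid(*pet_axes, indexing="ij"), axis=-1)
ct_on_pet_grid = interp(grid)   # same shape as the PET volume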
I'll leave it to someone else to answer the question, but I think that you're asking the wrong one. I lack context, of course, but at first glance MATLAB isn't the right tool for the job.
Have a look at ITK (a C++ library with Python wrappers) and the "Multi-modal 3D image registration" article.
Try 3D Slicer (it has a GUI for the previous tool).
Try FreeSurfer (similar, but focused on brain scans).
After you've done that registration step, you could export the resulting images (now of identical size and spacing) and continue with your interpolation in MATLAB if you wish (or with the same tools).
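For instance, with ITK's Python wrapper SimpleITK, resampling one volume onto the other's grid is only a few lines. This is a sketch: the file names are placeholders, and it relies on correct origin/spacing/orientation metadata in the images:

import SimpleITK as sitk

# Hypothetical file names; any format SimpleITK reads (DICOM series, .mha, ...) works.
ct = sitk.ReadImage("ct_volume.mha")
pet = sitk.ReadImage("pet_volume.mha")

# Resample the CT onto the PET grid using each image's stored metadata
# (assumes both volumes share a frame of reference; identity transform).
ct_on_pet = sitk.Resample(ct, pet, sitk.Transform(),
                          sitk.sitkNearestNeighbor, -1024.0)
sitk.WriteImage(ct_on_pet, "ct_resampled_to_pet.mha")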
There is a toolbox in Slicer called PETCTFusion which aligns the PET scan to the CT image. You can install it in the new version of Slicer.
The module's Display panel provides options to select a colorizing scheme for the PET dataset:
Grey will provide white-to-black colorization, with black indicating the highest count values.
Heat will provide a warm color scale, with dark red the lowest and white the highest count values.
Spectrum will provide a color scale that goes from cooler (dark blue) at the low-count end to white at the highest.
This panel also provides a means to adjust the window and level of both PET and CT volumes.
I normally use the resampleinplace tool after the registration. You can find it in the Registration package, under Resample Image.
If you would like to know more about PETCTFusion, there is a link below:
https://www.slicer.org/wiki/Modules:PETCTFusion-Documentation-3.6
Since Slicer is compatible with Python, you can use the Python interactor to run your own code too.
Let me know if you face any problems.