What projection is this GeoTIFF and how do I convert it?

I am viewing weather data online, and the website I use (University of Oklahoma) provides a link to the data as a GeoTIFF for research purposes. This is a direct link to the GeoTIFF that I am requesting help with.
I am trying to use the image in a Mapbox map, but there seems to be an issue with the projection. I use the GDAL tools (though I am a novice), and I can't even figure out what projection the file is in to start with. When I run gdalinfo, I get the following result:
Warning 1: RowsPerStrip not defined ... assuming all one strip.
Raster dataset parameters:
Projection:
RasterCount: 1
RasterSize (7000,3500)
Using driver GeoTIFF
Image Structure Metadata:
0: COMPRESSION=DEFLATE
1: INTERLEAVE=BAND
Corner Coordinates:
Upper Left (-130, 55)
Lower Left (-130, 20)
Upper Right (-60, 55)
Lower Right (-60, 20)
Center (-95, 37.5)
Coordinate System is:
Band 1 :
DataType: Float32
ColorInterpretation: Gray
Description:
Size (7000,3500)
BlockSize (7000,3500)
NoDataValue: -999
Offset: 0
Scale: 1
I have been successfully converting other GeoTIFF files for Mapbox use with the following command:
gdalwarp -t_srs EPSG:3857 example.tif example-projected.tif
... The above command always works except with the file I need help with. I am very new to GDAL, and although I have been trying, it is difficult for me. What am I doing wrong, and what is the correct way to do this?

Your GeoTIFF has no coordinate system embedded, although the corner coordinates are obviously geographic (unprojected). WGS84 is my guess.
My suggestion is to define a source coordinate system in your command with [-s_srs srs_def] and see the result:
gdalwarp -s_srs EPSG:4326 -t_srs EPSG:3857 example.tif example-projected.tif
If that doesn't work, you'd better ask OU. They should know better.
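If you prefer to do the same from Python, the GDAL bindings expose the same two options; a minimal sketch (file names taken from the command above, otherwise hypothetical):
from osgeo import gdal

gdal.UseExceptions()
# Declare the missing source CRS (srcSRS) and reproject to Web Mercator
# (dstSRS) in a single warp, mirroring the gdalwarp command above.
# Note the argument order: destination file comes first in gdal.Warp.
gdal.Warp("example-projected.tif", "example.tif",
          srcSRS="EPSG:4326", dstSRS="EPSG:3857")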

Related

Problem with creating a 3D mask of DICOM-RT contour data using MATLAB

I am having trouble extracting a tumor from a DICOM image using an RT mask. Due to GDPR I am not allowed to share the DICOM files, even though they are anonymized; however, I am allowed to share the images themselves. I want to extract the drawn tumor (the GTV stored as an RT structure) from the CT images using MATLAB.
Let's say that the directory where my CT images are stored is called DicomCT and that the RT-struct DICOM file is called rtStruct.dcm.
I can read and visualize my CT images as follows:
V = dicomreadVolume("DicomCT");
V = squeeze(V);
volshow(V)
(Image: volshow rendering of the 3D CT volume V)
I can load my RT structure using:
Info = dicominfo("rtStruct.dcm");
rtContours = dicomContours(Info);
I get a plot showing the different contours:
plotContour(rtContours)
(Image: contours for the GTV of the CT image)
I used this link for the information on how to create the mask such that I can apply it to the 3D CT image: https://nl.mathworks.com/help/images/create-and-display-3-d-mask-of-dicom-rt-contour-data.html#d124e5762
The DICOM information tells me the image should have 3 mm slices, hence I took 3x3x3 for the referenceInfo.
referenceInfo = imref3d(size(V),3,3,3);
rtMask = createMask(rtContours, 1, referenceInfo)
When I plot my rtMask, I get a grey screen without any trace of the mask. I think something is wrong with the way I define the referenceInfo, but I have no idea what is wrong or how to fix it.
volshow(rtMask)
(Image: volshow plot of the RT mask)
What would be the best way forward?
I was actually having a somewhat similar problem a couple of days ago. I think you might have two possible problems (neither of them your fault).
Your grey screen might be a rendering error that isn't being reported, caused by how volshow() works internally. I found it does some things I don't understand with graphics memory when representing numeric volumes vs. logical volumes. I learned this the hard way on my work PC, which only has Intel HD graphics. Using
iptsetpref('VolumeViewerUseHardware',true)
for logical volumes worked fine for me. You can also test this by re-plotting the mask as a double instead of a logical:
rtMask = double(rtMask)
volshow(rtMask)
If it's not a rendering error caused by the interaction between your system and volshow(), it might be genuine confusion about the reference info createMask needs (not helped by the poor explanation in the tutorial you linked). Using pixel-size info instead of the actual axes limits can make the segmentation show up partially, or not at all, because of the scale mismatch. This nice person explained it more elegantly in the post below, using the actual geometric info of the DICOM contours as limits:
https://es.mathworks.com/support/search.html/answers/1630195-how-to-convert-dicom-rt-structure-to-binary-mask.html?fq%5B%5D=asset_type_name:answer&fq%5B%5D=category:images/basic-import-and-export&page=1
Basically, use the following in addition to your code, and it might work:
% plot the contours to obtain axes whose limits are in world coordinates
plotContour(rtContours);
ax = gca;
% build the spatial reference from the plot's world limits
% rather than from the voxel size alone
referenceInfo = imref3d(size(V),ax.XLim,ax.YLim,ax.ZLim);
rtMask = createMask(rtContours, 1, referenceInfo)
I hope this could be of help to you.

Vega-Lite - Error in the scales of circles and lines

I'm trying to use Vega-Lite to draw a plot similar to a graph, where nodes are connected to other nodes by edges. The nodes vary in size according to a variable, and the edge widths vary as well.
The information for the nodes is stored in a different dataset from the information for the edges.
The problem is that once I try to plot everything together ("nodes" + "edges"), Vega-Lite draws the nodes very small (almost invisible). If I don't use "size" in my edges plot, everything comes out normal.
Here is an example of what I'm doing (note that the notation is a bit different from native Vega-Lite, because I'm programming in Julia):
v1 = @vlplot(
mark={:circle,opacity=1},
x={μ[:,1],type="quantitative",axis=nothing},
y={μ[:,2],type="quantitative",axis=nothing},
width=600,
height=400,
size={μ_n,legend=nothing,type="q"})
v2 = @vlplot(
mark={"type"=:circle,color="red",opacity=1},
x={ν[:,1],type="quantitative",axis=nothing},
y={ν[:,2],type="quantitative",axis=nothing},
width=600,
height=400,
size={ν_m,legend=nothing,type="q"})
Then, when I create the visualization for the edges, and plot everything together:
v3 = @vlplot(
mark={"type"=:line,color="black",clip=false},
data = df,
encoding={
x={"edges_x:q",axis=nothing},
y={"edges_y:q",axis=nothing},
opacity={"ew:q",legend=nothing},
size={"ew:q",scale={range=[0,2]},legend=nothing},
detail={"pe:o"}},
width=600,
height=400
)
@vlplot(view={stroke=nothing})+v3+v2+v1
Any ideas on why this is happening and how to fix it?
(Note that the "scale" attribute is not the reason. Even if I remove it, the problem persists).
When rendering compound charts, Vega-Lite uses shared scales by default (see https://vega.github.io/vega-lite/docs/resolve.html): it looks like sharing the size scale between your line and circle plots is what leads to the poor results.
I'm not familiar with the VegaLite.jl syntax, but the JSON specification you'll want on the top-level chart is:
"resolve": {"scale": {"size": "independent"}}

Mapbox Terrain-RGB image

I am new to Mapbox and am trying to get the Terrain-RGB data. I have followed the example from the documentation here:
https://www.mapbox.com/help/access-elevation-data/
and used the following query:
api.mapbox.com/v4/mapbox.terrain-rgb/{z}/{x}/{y}.pngraw?access_token={my_access_token}
which works fine for zoom levels from 0 to 5 and returns RGB tiles of elevation data from a high level.
api.mapbox.com/v4/mapbox.terrain-rgb/5/0/0.pngraw?access_token={my_access_token}
I need to get the data at a much higher zoom level than 5, but once I use a zoom level above 5 it returns 'Tile Does Not Exist'.
The documentation says that there is data up to a zoom of 15. My access token works, and I have tried x,y tiles of 0,0 (which should exist at all levels). Does anyone have any help or suggestions?
I had a similar issue, and in my case the mistake was assuming that, given an {x}/{y} pair, the zoom {z} was free to change. It isn't: if the zoom changes, x and y change too.
To get the correct tile indices I used mercantile (pip install mercantile):
import mercantile
# which tile covers lon -71.0638, lat 42.3579 (Boston) at zoom 15?
print(mercantile.tile(-71.0638031, 42.3578952, 15))
>>> Tile(x=9915, y=12120, z=15)
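If you'd rather avoid the dependency, the underlying Web Mercator tile math is short; a sketch of the standard slippy-map formula (equivalent to what mercantile computes):
import math

def lonlat_to_tile(lon, lat, zoom):
    # number of tiles along each axis at this zoom level
    n = 1 << zoom
    x = int((lon + 180.0) / 360.0 * n)
    # Web Mercator: project the latitude, then scale to tile rows (top-down)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

print(lonlat_to_tile(-71.0638031, 42.3578952, 15))  # (9915, 12120)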

Incorrect coordinates in mbtiles generated with Tippecanoe

I generated an mbtiles file using Tippecanoe with just -zg and --drop-densest-as-needed as extra parameters. I uploaded the file to Mapbox Studio and everything works well, both in Studio and when loading the tiles through a mobile app.
I then tried my luck at self-hosting the tiles, using a very basic HTTP server in Go. Tiles were transferred from SQLite to a PostgreSQL database (the reason for this is that Go + PostgreSQL is the existing stack for the app).
For some reason the features are shifted depending on the zoom level. At zoom level 1, data that's supposed to be in the US is in the Antarctic; at zoom level 2 it's off the coast of Chile, and so on. The only level working properly is level 0, as there's only one tile.
I checked which tiles Mapbox was requesting over San Francisco at zoom level 11: column 327, row 791. No tile exists for that row/column combination in the .mbtiles file, although there is data there.
Are there additional things to be done to the mbtiles besides looking them up in the database by z/x/y? Or maybe something to configure on the app side?
Server code:
row := db.QueryRow(`
SELECT tile_data FROM tiles
WHERE
zoom_level = $1
AND tile_column = $2
AND tile_row = $3
`,
z, x, y,
)
On Android:
map.addSource(
VectorSource(
"tiles",
TileSet("2.2.0", "http://my.local.server:4000/tiles/{z}/{x}/{y}.mvt?key=2448A697EACDDC41432AAD9A1833E")
)
)
I tried setting the VectorSource's center and bounds found in the mbtiles metadata but it didn't change anything.
So I looked into existing server implementations, and it turns out the offset occurs because mbtiles are stored in the TMS scheme, in which the Y coordinate is flipped. So we just need to convert the Y from the XYZ scheme to get the proper tile.
From Mapbox's own Node implementation:
// Flip Y coordinate because MBTiles files are TMS.
y = (1 << z) - 1 - y;
1 << z is the number of rows for a given zoom level, or two to the power of z.
More info about XYZ vs TMS can also be found here.
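The same flip in Python, using the tile from the question as a worked example (the helper name is mine):
def xyz_to_tms_row(z, y):
    # MBTiles rows are numbered bottom-up (TMS); XYZ rows are top-down.
    return (1 << z) - 1 - y

# Mapbox requested z=11, row 791 over San Francisco; the MBTiles file
# stores that tile as row 2^11 - 1 - 791 = 1256.
print(xyz_to_tms_row(11, 791))  # 1256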

Interpolation between two images with different pixel sizes

For my application, I want to interpolate between two images (CT to PET).
Therefore I map between them like this:
[X,Y,Z] = ndgrid(linspace(1,size(imagedata_ct,1),size_pet(1)),...
linspace(1,size(imagedata_ct,2),size_pet(2)),...
linspace(1,size(imagedata_ct,3),size_pet(3)));
new_imageData_CT=interp3(imagedata_ct,X,Y,Z,'nearest',-1024);
The size of my new image new_imageData_CT now matches the PET image. The problem is that the data in my new image is not scaled correctly, so it looks compressed. I think the reason is that the pixel sizes of the two images are different and are not taken into account by the interpolation. For example:
CT image size : 512x512x1027
CT voxel size[mm] : 1.5x1.5x0.6
PET image size : 192x126x128
PET voxel size[mm] : 2.6x2.6x3.12
So how can I take the voxel size into account in the interpolation?
You need to perform the matching in the patient coordinate system, but there is more to consider than just the resolution and the voxel size. You need to synchronize the positions (and maybe the orientations as well, but a mismatch there is unlikely) of the two volumes.
You may find this thread helpful for finding out which DICOM tags describe the volume and how to calculate transformation matrices for converting between the patient coordinate system (x, y, z in millimeters) and the volume coordinate system (column, row, slice number).
You have to make sure that the volume positions are comparable, as the positions of the slices in the CT and PET do not necessarily refer to the same origin. The easy way to check this is to compare the DICOM attribute Frame of Reference UID (0020,0052) of the CT and PET slices. For all slices that share the same Frame of Reference UID, the position of the slice in the DICOM header refers to the same origin.
If the datasets do not contain this tag, it is going to be much more difficult, unless you simply take a common origin as an assumption. There are methods to deduce the matching slices of two different volumes from the contents of the pixel data, referred to as "registration", but that is a science of its own. See the link from Hugues Fontenelle.
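To make the tag check and the voxel-to-millimeter mapping concrete, here is a minimal sketch with pydicom and NumPy (file names are hypothetical; the mapping is the standard one from DICOM PS3.3 C.7.6.2.1.1):
import numpy as np
import pydicom

ct = pydicom.dcmread("ct_slice.dcm")
pet = pydicom.dcmread("pet_slice.dcm")

# Positions are only comparable within a shared frame of reference:
assert ct.FrameOfReferenceUID == pet.FrameOfReferenceUID

def index_to_patient_mm(ds, row, col):
    # Map pixel (row, col) of one slice to patient (x, y, z) in millimeters.
    origin = np.asarray(ds.ImagePositionPatient, dtype=float)
    row_cos = np.asarray(ds.ImageOrientationPatient[:3], dtype=float)  # along a row
    col_cos = np.asarray(ds.ImageOrientationPatient[3:], dtype=float)  # down a column
    d_row, d_col = (float(v) for v in ds.PixelSpacing)  # mm between rows / columns
    return origin + col * d_col * row_cos + row * d_row * col_cos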
BTW: in your example, you are not going to find a matching voxel in both volumes for every position, as the volumes cover different physical extents. E.g. for the x-direction:
CT: 512 * 1.5 = 768 millimeters
PET: 192 * 2.6 ≈ 499 millimeters
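Once both volumes are expressed in millimeters, taking the voxel size into account just means building the interpolation grid in physical units instead of raw voxel indices; a sketch with the sizes from the question (assuming, for illustration only, a shared origin and axis-aligned volumes):
import numpy as np

ct_voxel = np.array([1.5, 1.5, 0.6])    # mm, from the question
pet_shape = (192, 126, 128)
pet_voxel = np.array([2.6, 2.6, 3.12])  # mm

# millimeter positions of the PET voxel centers along each axis ...
axes_mm = [pet_voxel[a] * (np.arange(pet_shape[a]) + 0.5) for a in range(3)]
# ... converted to fractional CT voxel indices; grids like these can be fed
# to interp3 in MATLAB or scipy.ndimage.map_coordinates in Python.
axes_ct = [mm / ct_voxel[a] - 0.5 for a, mm in enumerate(axes_mm)]
X, Y, Z = np.meshgrid(*axes_ct, indexing="ij")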
I'll leave answering the question to someone else, but I think you're asking the wrong one. I lack context, of course, but at first glance MATLAB isn't the right tool for the job.
Have a look at ITK (a C++ library with Python wrappers) and the "Multi-modal 3D image registration" article.
Try 3D Slicer (it has a GUI for the previous tool).
Try FreeSurfer (similar, but focused on brain scans).
After you've done that registration step, you can export the resulting images (now of identical size and spacing) and continue with your interpolation in MATLAB if you wish (or with the same tools).
There is a toolbox in Slicer called PETCTFusion which aligns the PET scan to the CT image. You can install it in the new version of Slicer.
The module's Display panel provides options to select a colorizing scheme for the PET dataset:
Grey provides white-to-black colorization, with black indicating the highest count values.
Heat provides a warm color scale, with dark red for the lowest and white for the highest count values.
Spectrum provides a scale that runs from cool (dark blue) at the low-count end to white at the highest.
This panel also provides a means to adjust the window and level of both PET and CT volumes.
I normally use the resample-in-place tool after the registration; you can find it in the Registration package, under Resample Image.
If you would like to know more about PETCTFusion, there is a link below:
https://www.slicer.org/wiki/Modules:PETCTFusion-Documentation-3.6
Since Slicer is scriptable in Python, you can use the Python interactor to run your own code too.
And let me know if you face any problem.