How to convert `mapbox.mapbox-terrain-v2` tiles to heightmap tiles?

Mapbox provides a kind of map tile, `mapbox.mapbox-terrain-v2`, which is stored in PBF format and served with an `.mvt` suffix. The height data is represented as contour lines.
I want to generate terrain with a satellite texture and this height data in Unity3D. How can I convert this PBF data to a heightmap (one pixel per height value)?
Here is an example tile:
https://api.mapbox.com/v4/mapbox.mapbox-terrain-v2/12/1171/1566.jpg?access_token=pk.eyJ1Ijoib2xlb3RpZ2VyIiwiYSI6ImZ2cllZQ3cifQ.2yDE9wUcfO_BLiinccfOKg
And the corresponding MVT file:
https://api.mapbox.com/v4/mapbox.mapbox-terrain-v2/12/1171/1566.mvt?access_token=pk.eyJ1Ijoib2xlb3RpZ2VyIiwiYSI6ImZ2cllZQ3cifQ.2yDE9wUcfO_BLiinccfOKg
And the Mapbox documentation:
https://www.mapbox.com/vector-tiles/mapbox-terrain/
https://www.mapbox.com/vector-tiles/specification/

Mapbox has built a Unity3D package: the Mapbox Unity SDK.
SDK here: https://www.mapbox.com/unity/
Just click download.
This is an asset you can open in Unity directly.
Launch Unity3D, go to Menu > Assets > Import Package > Custom Package
and select your downloaded file.
It will unpack some files and folders; among them you will find some example scene files to help you.

The current vector terrain layer isn't designed to be turned into a heightmap: we've processed the terrain into elevation contours and lines, so turning those back into raw data would be difficult (much like the opposite direction: we do a lot of processing precisely because transferring raw data and deriving visual data from it would also be difficult).
A new and improved vector terrain model that supports your use case is on the way. In the meantime, we've also introduced RGB terrain, which was designed specifically to address cases like Unity: decoding RGB-encoded elevation tiles tends to be much simpler in software.
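For reference, the documented decode for Mapbox's RGB-encoded terrain tiles is height = -10000 + (R * 256 * 256 + G * 256 + B) * 0.1 metres. Below is a minimal Java sketch of that decode; the tile file name is a placeholder, and the example URL assumes the `mapbox.terrain-rgb` tileset served as `.pngraw`:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class TerrainRgbDecode {
    public static void main(String[] args) throws Exception {
        // A Terrain-RGB tile fetched beforehand, e.g. from
        // https://api.mapbox.com/v4/mapbox.terrain-rgb/{z}/{x}/{y}.pngraw?access_token=...
        BufferedImage tile = ImageIO.read(new File("tile.png"));
        float[][] heights = new float[tile.getHeight()][tile.getWidth()];
        for (int y = 0; y < tile.getHeight(); y++) {
            for (int x = 0; x < tile.getWidth(); x++) {
                int rgb = tile.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                // Documented Terrain-RGB decode: -10000 m offset, 0.1 m per step
                heights[y][x] = -10000f + (r * 256 * 256 + g * 256 + b) * 0.1f;
            }
        }
        System.out.println("height at (128,128): " + heights[128][128] + " m");
    }
}

The resulting heights array maps directly onto a Unity terrain heightmap once you rescale it to the terrain's height range.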

Related

How can I generate mbtiles with a smaller extent with openmaptiles?

I'm using OpenMapTiles to download OSM data and create an MBTiles file of Mapbox vector tiles. This all works great, except that I'm targeting an embedded platform.
At zoom level 14 with the default extent of 4096, a single tile can be over 1 MB and cover an entire city. Not only is that a huge file to process on an embedded platform, it also means you're potentially sifting through every house in an entire city. I went as far as writing a streaming protobuf parser, but it takes 10 minutes just to parse such a file.
How can I generate Mapbox vector tiles with a smaller extent?
I found what appears to be a parameter for it, but I can't figure out where it actually gets used to generate tiles or how to modify it: https://github.com/openmaptiles/openmaptiles-tools/blob/4cc6e88dfdef83de69bd49845e0f23908d9edecc/openmaptiles/sqltomvt.py#L25
I'm not married to OpenMapTiles, but it's what I'm currently using to download and process OpenStreetMap data.
In case others need it: use https://github.com/mapbox/tilelive and specify the maxzoom you want:
bin/tilelive-copy --minzoom=0 --maxzoom=10 ~/Apps/tileserver/tiles/osm-2017-07-03-v3.6.1-planet.mbtiles ~/Apps/tileserver/tiles/small-osm-2017-07-03-v3.6.1-planet.mbtiles
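As an aside, if you want to verify which extent your generated tiles actually use: the extent is stored per layer in the MVT protobuf (the extent field, default 4096). A rough Java sketch using classes generated from the vector-tile .proto; the package name is an assumption and varies per binding:

import java.nio.file.Files;
import java.nio.file.Paths;
import vector_tile.VectorTile; // generated from the vector-tile .proto (name assumed)

public class ExtentCheck {
    public static void main(String[] args) throws Exception {
        // Note: tile blobs stored inside an .mbtiles file are usually
        // gzip-compressed; decompress before parsing.
        byte[] bytes = Files.readAllBytes(Paths.get("tile.mvt"));
        VectorTile.Tile tile = VectorTile.Tile.parseFrom(bytes);
        for (VectorTile.Tile.Layer layer : tile.getLayersList()) {
            System.out.println(layer.getName() + " extent=" + layer.getExtent());
        }
    }
}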

Mapbox geometries get degraded when exported to tilesets

I'm trying out Mapbox for the first time, and playing around with drawing some polygons in the dataset editor for export to a tileset. However, the polygons in the resulting tileset are not the same as what I create in the editor. The polygons are only very rough, simplified approximations of the originals.
In dataset editor:
In map layer as tileset export:
I understand that Mapbox does vector simplification at certain zoom levels, but these changes are not zoom-dependent. I zoom in all the way and the shapes are still like this.
Moreover, such extreme degradation of the geometries makes tilesets essentially useless for features that require any sort of accuracy, like property lot lines.
Am I missing something, or is this really the expected behavior? Is there really no way to get accurate geometries into a tileset?
UPDATE: It appears this is only happening with shapes I create by drawing in the Mapbox data editor. So far, the geometries that I've uploaded as GeoJSON files have been converted to tilesets accurately...
I suspect this is because the maxzoom is too low.
When you create a Mapbox Tileset, either by uploading GeoJSON directly as a new Tileset or by exporting your Dataset to a Tileset, Mapbox will try to guess an appropriate minzoom and maxzoom for the Tileset.
Sometimes the min/max zooms used aren't suitable for the map you're trying to create. Since there is no way to specify a maxzoom with either of the two approaches, the only alternative is to create your Tileset locally with https://github.com/mapbox/tippecanoe, specifying an appropriate maxzoom for your data, and then upload the resulting .mbtiles as a Mapbox Tileset.
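For example, something along these lines (flags as in the tippecanoe README; the file names and zoom values are placeholders to adjust for your data):
tippecanoe -o lots.mbtiles -Z 0 -z 16 lots.geojson
Here -z sets the maxzoom, so full detail is kept up to zoom 16 instead of whatever the upload pipeline guesses.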

Mapbox vector tile MVT is missing data

I'm trying to write my own renderer for Mapbox MVT vector tiles, but I have hit an obstacle and couldn't find the answer. My problem is that the MVT tile downloaded from the Mapbox server contains all roads, but only a few building numbers (there should be many more for the given area), no land-use types (green areas on the map), and no buildings.
Has anyone had the same problem, or does anyone know the answer?
The link I use to download the tile is:
https://api.mapbox.com/v4/mapbox.mapbox-terrain-v2,mapbox.mapbox-streets-v7/18/143415/87627.mvt?access_token={access_token}
Below is the raster tile for the same area:
The trick is in decoding the MVT tiles (using mapbox-vector-tile-java): to parse them properly, MvtReader.RING_CLASSIFIER_V1 needs to be used:
final JtsMvt result = MvtReader.loadMvt(f, new GeometryFactory(), new TagKeyValueMapConverter(), MvtReader.RING_CLASSIFIER_V1);
Resolved thanks to boldtrn's commit on mapsforge/vtm.
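For completeness, here is the same call in a fuller, self-contained sketch; the import paths follow the mapbox-vector-tile-java artifact as I recall them, so treat them as assumptions:

import java.nio.file.Path;
import java.nio.file.Paths;
import org.locationtech.jts.geom.GeometryFactory;
import com.wdtinc.mapbox_vector_tile.adapt.jts.MvtReader;
import com.wdtinc.mapbox_vector_tile.adapt.jts.TagKeyValueMapConverter;
import com.wdtinc.mapbox_vector_tile.adapt.jts.model.JtsLayer;
import com.wdtinc.mapbox_vector_tile.adapt.jts.model.JtsMvt;

public class MvtDump {
    public static void main(String[] args) throws Exception {
        // Tile fetched from the v4 endpoint above; decompress it first if it
        // arrives gzip-encoded.
        Path f = Paths.get("87627.mvt");
        JtsMvt mvt = MvtReader.loadMvt(f, new GeometryFactory(),
                new TagKeyValueMapConverter(), MvtReader.RING_CLASSIFIER_V1);
        for (JtsLayer layer : mvt.getLayers()) {
            System.out.println(layer.getName() + ": "
                    + layer.getGeometries().size() + " geometries");
        }
    }
}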

Kinect v1 with Processing to .obj to Unity [closed]

I want to create a hologram, captured with a Kinect and displayed on a HoloLens. But it's very slow.
I use this tutorial to collect point cloud data, and this library to export my data as a 3D object in .obj format. The library that exports .obj doesn't accept points, so I had to draw little triangles. I save the .obj, .png and .mtl files on my local XAMPP server.
Next, I download the files with a Unity script and a WWW object. I also use the Runtime OBJ Importer from Unity's Asset Store to create a 3D object at runtime.
The last part is to export the Unity app to a HoloLens. (I will do that next.)
But before that:
The process works but is very slow. I want the hologram to be fluid. A lot of time is wasted on:
taking depth and RGB data from the Kinect
exporting the data to .obj, .png and .mtl files
downloading the files in Unity as frequently as possible
rendering the files
I am thinking of streaming, but does Unity need a complete .obj file to render? If I compress the .png to .jpg, will I gain some time?
Do you have some pointers to help me?
Currently the way your question is phrased is confusing: it's unclear whether you want to record point clouds that you later load and render in Unity, or you want to somehow stream the point cloud with the aligned RGB texture to Unity in close to real time.
Your initial attempts are using Processing.
In terms of recording data, I recommend using the SimpleOpenNI library which can record both depth and RGB data to an .oni file (see the RecorderPlay example).
Once you have a recording, you can loop through each frame and for each frame store the vertices to a file.
In terms of saving to .obj you'll need to convert the point cloud to a mesh (triangulate the vertices in 3D).
Another option would be to store the point cloud to a format like .ply.
You can find more info on writing to a .ply file in Processing in this answer.
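For illustration, here is a bare-bones ASCII .ply writer in plain Java; the header fields follow the PLY spec, and points is a hypothetical per-frame list of valid vertices:

import java.io.PrintWriter;
import java.util.List;

public class PlyWriter {
    // Write one frame of vertices as an ASCII .ply file.
    static void writePly(String path, List<float[]> points) throws Exception {
        PrintWriter out = new PrintWriter(path);
        out.println("ply");
        out.println("format ascii 1.0");
        out.println("element vertex " + points.size());
        out.println("property float x");
        out.println("property float y");
        out.println("property float z");
        out.println("end_header");
        for (float[] p : points) {
            out.println(p[0] + " " + p[1] + " " + p[2]);
        }
        out.close();
    }
}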
In terms of streaming the data, this will be complicated:
if you stream all the vertices, that's a lot of data: up to 921,600 floats ((640 × 480 = 307,200 points) × 3)
if you stream both the depth (11-bit 640×480) and RGB (8-bit 640×480) images, that will be even more data.
One option might be to send only the vertices that have a depth reading, and to skip points overall (e.g. send every 3rd point). In terms of sending the data you can try OSC; a rough sketch follows below.
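As an illustration of that decimate-and-send idea (a sketch only: it assumes SimpleOpenNI for the point cloud and the oscP5/netP5 libraries, and it chunks messages to stay under typical UDP datagram limits):

import SimpleOpenNI.*;
import oscP5.*;
import netP5.*;

SimpleOpenNI context;
OscP5 osc;
NetAddress unity;

void setup() {
  context = new SimpleOpenNI(this);
  context.enableDepth();
  osc = new OscP5(this, 12000);
  unity = new NetAddress("127.0.0.1", 12001); // Unity-side OSC listener (address assumed)
}

void draw() {
  context.update();
  PVector[] points = context.depthMapRealWorld();
  OscMessage msg = new OscMessage("/cloud");
  int packed = 0;
  for (int i = 0; i < points.length; i += 3) {  // send every 3rd point
    if (points[i].z > 0) {                      // skip points with no depth reading
      msg.add(points[i].x);
      msg.add(points[i].y);
      msg.add(points[i].z);
      packed++;
      if (packed == 300) {                      // keep each UDP datagram small
        osc.send(msg, unity);
        msg = new OscMessage("/cloud");
        packed = 0;
      }
    }
  }
  if (packed > 0) osc.send(msg, unity);
}

On the Unity side you would listen on the matching port with any C# OSC library and rebuild the point positions from the float triples.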
Once you get the points into Unity, you should be able to render them as a point cloud there.
What would be ideal in terms of network performance is a codec (compressor/decompressor) for the depth data. I haven't used one so far, but from a quick search I see there are options like this one (very dated).
You'll need to do a bit of research to see what Kinect v1 depth-streaming libraries are already out there, then test and see what works best for your scenario.
Ideally, if the library is written in C#, there's a chance you'll be able to use it to decode the received data in Unity.

TileMill creates jagged edges for world map

I'm trying to create a custom world map texture with TileMill to load into Leaflet. I have downloaded a free .tiff file from Natural Earth Data and loaded it into TileMill.
When I want to export, however, I notice a lot of jagged edges, mainly around Greenland/Canada, at the lowest zoom level.
A few zoom levels down it seems OK again. After exporting the tiles to PNGs, the jagged edges remain. How can I improve the quality of these images?
How can i improve the quality of these images?
By using more detailed input data.
By the looks of it, you are projecting a raster image in the EPSG:4326 projection into the EPSG:3857 "web mercator" projection. In the original data, each pixel spans the same number of degrees of longitude and latitude. In a Mercator projection, each pixel spans the same amount of longitude but a different amount of latitude, so near the poles a single source pixel has to be stretched over several output pixels. The artifacts you are experiencing are akin to a Tissot's indicatrix.
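To make the stretching concrete, here is the forward web-mercator formula for latitude in a small Java sketch; a one-degree band near 70 N comes out roughly three times taller in projected units than the same band at the equator:

public class MercatorStretch {
    // Forward web-mercator y for a latitude in degrees (unit sphere).
    static double mercY(double latDeg) {
        double lat = Math.toRadians(latDeg);
        return Math.log(Math.tan(Math.PI / 4 + lat / 2));
    }

    public static void main(String[] args) {
        double equator = mercY(1) - mercY(0);   // projected height of the 0-1 deg band
        double arctic  = mercY(71) - mercY(70); // projected height of the 70-71 deg band
        System.out.printf("equator band: %.4f, 70N band: %.4f (~%.1fx taller)%n",
                equator, arctic, arctic / equator);
    }
}

So at Greenland's latitudes each source pixel has to cover about three output pixels vertically, which is exactly where the jaggedness shows up.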
You can try using a different value for the raster-scaling symbolizer option in your TileMill stylesheet, but that's going to make the artifacts different, not get rid of them.