When zooming into the map, typically at level 16 and above, vertex positions tend to jump around. This can be seen in the Mapbox demo: https://docs.mapbox.com/mapbox-gl-js/example/custom-style-layer/. However, the jitter does not occur in the GeoJSON LineStrings example (https://docs.mapbox.com/mapbox-gl-js/example/geojson-line/).
Are there any workarounds for this?
Related
I was wondering how I can go about storing and displaying small, but geographically accurate, distances in the Mapbox Unity SDK?
I'm storing radii around markers on a map. I get the value in meters (from ~0.5m-10m), and then, adaptively with the zoom level, I want to accurately display those meters in Unity world space (draw an ellipse) using these stored values. The problem is that, from my understanding, the Mapbox API only lets you convert lat/long to Unity world coordinates, and I'm running into precision errors. I can get adequate precision when using the CheapRuler class and meters, but as soon as I use the _map.GeoToWorld(latlon) method the precision is lost.
How would I go about keeping adequate precision? Is there a way I can use the marker as the reference point and the radius as the offset, and get the relative Unity world coordinate distance (of the radius) that way? I know you can also store scale relative to the Mapbox tiles, but I'm not sure how I can convert that back to a Unity world distance. I'm operating on very small distances, so any warping due to lat/long being a Mercator projection can probably be ignored.
I figured out a round-about solution.
First I convert the meters into Unity world space using whatever IMapScalingStrategy Mapbox is currently using.
Then I convert from world to the view space of whatever camera I want to scale to the given bounds.
After that, I find the scale of the bounds, solving for:
UnityRelativeScaleChange = 2^(Map Zoom Level Change), which (to my estimation) is the relationship between Unity scale and Mapbox zoom levels.
This solution works great as long as you don't have to zoom in/out by too much; otherwise you'll run into precision problems, since the functions rely on the relative view-based size of a given bounds to do their calculations, which leads to unstable results if those bounds initially take up a tiny portion of the screen.
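For reference, a minimal sketch of that zoom/scale relationship (the function name and TypeScript form are mine, not part of the Mapbox Unity SDK):

```typescript
// Sketch only: each +1 in Mapbox zoom halves the meters-per-pixel, so the
// Unity-space size of a fixed geographic extent doubles per zoom level.
function scaleForZoomChange(zoomDelta: number): number {
  return Math.pow(2, zoomDelta);
}

// Example: zooming from level 15 to 17 means the drawn ellipse should be
// scaled up by a factor of 4 in Unity world space.
const factor = scaleForZoomChange(17 - 15); // 4
```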
I'm trying to display a 3D skyline/panoramic view of Manhattan using Mapbox, yet with Mapbox GL extruded 3D buildings start to disappear below a zoom level of 15. Is there a way to render/display them at a zoom level around 11-12? Or an alternative way, by clipping/filtering the rendered/loaded data in Mapbox?
I have tried to filter the map to the area of Manhattan and Roosevelt islands, but later found out it's not possible to clip/filter Mapbox to a geographical area.
As the final result, I'm hoping to render the buildings in the view shown in the last image.
I'm trying out Mapbox for the first time, and playing around with drawing some polygons in the dataset editor for export to a tileset. However, the polygons in the resulting tileset are not the same as what I create in the editor. The polygons are only very rough, simplified approximations of the originals.
In dataset editor:
In map layer as tileset export:
I understand that Mapbox does vector simplification at certain zoom levels, but these changes are not zoom-dependent. I zoom in all the way and the shapes are still like this.
Moreover, such extreme degradation of the geometries makes tilesets essentially useless for features that require any sort of accuracy, like property lot lines.
Am I missing something, or is this really the expected behavior? Is there really no way to get accurate geometries into a tileset?
UPDATE: It appears this is only happening with shapes I create by drawing in the Mapbox data editor. So far the geometries that I've uploaded as geojson files have gotten converted to tilesets accurately...
I suspect this is because the maxzoom is too low.
When you create a Mapbox Tileset, either by uploading GeoJSON directly as a new Tileset or by exporting your Dataset to a Tileset, Mapbox will try to guess an appropriate minzoom and maxzoom for the Tileset.
Sometimes the min/max zooms used aren't suitable for the map you're trying to create. Since there is no way to specify a maxzoom in either of the two approaches, the only alternative is to create your Tileset locally with https://github.com/mapbox/tippecanoe, specifying an appropriate maxzoom for your data, and then upload the resulting .mbtiles as a Mapbox Tileset.
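As a rough example (the file names and zoom values here are placeholders), the tippecanoe invocation would look something like this; the resulting .mbtiles can then be uploaded to Mapbox as a Tileset:

```
# Build the tileset locally with an explicit zoom range (values are illustrative)
tippecanoe -o lots.mbtiles --minimum-zoom=10 --maximum-zoom=16 lots.geojson
```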
Is it possible to apply fill-extrusion for a GeoJSON LineString feature?
Basically I'm looking for a way to draw lines (can be 1 line or multiple connected) in a 3d mode with z-offset.
If that's not possible, maybe this can be done with a polygon instead?
Like converting my lines to polygons (how can I do that?)
What you're asking for isn't yet implemented, but ticketed in Mapbox GL JS at https://github.com/mapbox/mapbox-gl-js/issues/3993.
For now you'll need to opt for your second suggestion of converting the LineString feature to a Polygon. You can do this with Turf's buffer function: http://turfjs.org/docs#buffer.
The whole line/polygon will be offset at the same height, so depending on your application you could use Turf's lineChunk (http://turfjs.org/docs#lineChunk) to break it up into smaller features which you can assign different height properties to, as sketched below.
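Here's a rough sketch of that approach, assuming an existing mapbox-gl map instance; the source/layer ids and the height/base values are made up for illustration:

```typescript
import * as turf from '@turf/turf';

// A LineString with hypothetical extrusion properties attached.
const line: GeoJSON.Feature<GeoJSON.LineString> = {
  type: 'Feature',
  properties: { height: 30, base: 10 }, // illustrative z-offset values
  geometry: {
    type: 'LineString',
    coordinates: [[-122.42, 37.77], [-122.41, 37.78]],
  },
};

// Buffer the line by a couple of meters so it becomes a Polygon with area.
const polygon = turf.buffer(line, 2, { units: 'meters' });

map.addSource('extruded-line', { type: 'geojson', data: polygon });
map.addLayer({
  id: 'extruded-line',
  type: 'fill-extrusion',
  source: 'extruded-line',
  paint: {
    'fill-extrusion-color': '#4a90d9',
    'fill-extrusion-height': ['get', 'height'],
    'fill-extrusion-base': ['get', 'base'],
  },
});
```

To vary the height along the route, you could run lineChunk first and buffer each chunk separately with its own height property.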
I'm looking for an alternative technique for rendering reflections in OpenGL ES on the iPhone. Usually I would do this by using the stencil buffer to mark where the reflection can be seen (the reflective surface) and then render the reversed image only in those pixels. Thus, when the reflected object moves off the surface, its reflection is no longer seen. However, since the iPhone's implementation doesn't support the stencil buffer, I can't determine how to hide the portions of the reflection that fall outside of the surface.
To clarify, the issue isn't rendering the reflections themselves, but hiding them when they wouldn't be visible.
Any ideas?
Render the reflected scene first; copy out to a texture using glCopyTexImage2D; clear the framebuffer; draw the scene proper, applying the copied texture to the reflective surface.
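A hedged outline of that sequence, written in WebGL terms rather than the iPhone's OpenGL ES (the calls correspond closely); the three draw callbacks stand in for your own rendering code:

```typescript
function renderWithReflection(
  gl: WebGLRenderingContext,
  reflectionTex: WebGLTexture,
  width: number,
  height: number,
  drawReflectedScene: () => void,
  drawScene: () => void,
  drawReflectiveSurface: (tex: WebGLTexture) => void
): void {
  // 1. Render the mirrored scene into the framebuffer.
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  drawReflectedScene();

  // 2. Copy the framebuffer contents into the reflection texture.
  gl.bindTexture(gl.TEXTURE_2D, reflectionTex);
  gl.copyTexImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 0, 0, width, height, 0);

  // 3. Clear and draw the real scene; the copied texture is applied only to
  //    the reflective surface, so reflections never show outside it.
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  drawScene();
  drawReflectiveSurface(reflectionTex);
}
```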
I don't have an answer for reflections, but here's how I'm doing shadows without the stencil buffer, perhaps it will give you an idea:
I perform basic front-face/back-face determination of the mesh from the point of view of the light source. I then get a list of all edges that connect a front triangle to a back triangle, and treat this edge list as a line "loop". I project the vertices of this loop along the object-light ray until they intersect the ground. These intersection points are then used to calculate a 2D polygon on the same plane as the ground. I then use a tessellation algorithm to turn that polygon into triangles. (This works fine as long as your light sources or objects don't move too often.)
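A small sketch of that projection step, assuming a point light and a ground plane at y = groundY (the names and TypeScript form are illustrative, not from the original code):

```typescript
type Vec3 = { x: number; y: number; z: number };

// Intersect the ray from the light through a silhouette vertex with the ground
// plane y = groundY: P(t) = light + t * (vertex - light), solved for P(t).y = groundY.
function projectOntoGround(light: Vec3, vertex: Vec3, groundY = 0): Vec3 {
  const t = (groundY - light.y) / (vertex.y - light.y);
  return {
    x: light.x + t * (vertex.x - light.x),
    y: groundY,
    z: light.z + t * (vertex.z - light.z),
  };
}
```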
Once I have the triangles, I render them with a slight offset such that the depth buffer will allow the shadow to pass. Alternatively you can use a decaling algorithm such as the one in the Red Book.
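One common way to get that slight offset is polygon offset, which biases the shadow triangles' depth values so they pass the depth test against the ground; a minimal sketch in WebGL terms (the draw callback is a placeholder):

```typescript
function drawShadowWithOffset(
  gl: WebGLRenderingContext,
  drawShadowTriangles: () => void
): void {
  gl.enable(gl.POLYGON_OFFSET_FILL);
  gl.polygonOffset(-1.0, -1.0); // pull the shadow slightly toward the camera
  drawShadowTriangles();
  gl.disable(gl.POLYGON_OFFSET_FILL);
}
```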