I'm trying to get through the learning curve of not just the Mapbox API but of how map applications work in general. Currently I'm having difficulty understanding the calculation used for sizing and placing tiles based on LngLat and zoom level.
I checked out the slippy maps wiki, but it does not seem to align with how Mapbox works (or, more likely, my understanding is incorrect).
I'm hoping someone can point me to a resource that clearly explains the calculations behind mapbox-gl tile placement.
Thanks!
More specifically: I'm trying to figure out how to cover a tile with a 3D plane using threebox. To do this I need to:
get the tile's size (which changes depending on zoom level)
get the tile's position (which I can get using bbox; however, I don't think my calculations are correct, because at zoom level 2 the 3D plane's latitude is off by 40.97 degrees when placed using threebox)
My calculation for placing the tiles:
var offset = 40.97; // temporarily used to fix placement
var loc_x = bounds[0] + ((bounds[2] - bounds[0]) / 2); // midpoint of west/east edges, works as expected
var loc_y = bounds[1] + offset; // south edge plus the magic offset
var loc_z = 0;
if (bounds[1] < 0) {
    loc_y = bounds[3] - offset; // south of the equator: offset down from the north edge
}
Found the reason I needed the offset value: the 3D plane's registration point (its 0,0 coordinates) needed to match the tile's. By default the 3D plane's registration point was in the center of the mesh rather than at the bottom left.
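For reference, here is a minimal sketch of the standard slippy-map tile math (from the OSM wiki; Mapbox tiles follow the same z/x/y scheme), which gives a tile's bounds, and from those its size and center, at any zoom level:

// Bounds of tile (x, y) at zoom z, in degrees. Longitude is linear in x;
// latitude comes from the inverse Web Mercator formula
// lat = atan(sinh(PI * (1 - 2 * y / 2^z))).
function tileBounds(z, x, y) {
    function lng(xt) { return xt / Math.pow(2, z) * 360 - 180; }
    function lat(yt) {
        var n = Math.PI - 2 * Math.PI * yt / Math.pow(2, z);
        return 180 / Math.PI * Math.atan(0.5 * (Math.exp(n) - Math.exp(-n)));
    }
    return { west: lng(x), east: lng(x + 1), north: lat(y), south: lat(y + 1) };
}

// Center of the tile in degrees, matching the loc_x / loc_y calculation above
function tileCenter(z, x, y) {
    var b = tileBounds(z, x, y);
    return [(b.west + b.east) / 2, (b.south + b.north) / 2];
}

Note that Mercator is non-linear in latitude, so the degree midpoint of south/north is not the tile's on-screen center; if you need the projected center, average the projected y values instead.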
So I've got a 2000x2000 image as a CRS map where I need to fit coordinates from -600000 to +600000, but the image becomes so zoomed in that I'm obviously not doing it right. I guess the program does not know how to calculate on its own how to fit the units into the pixels, or does it have a setting for that?
I guess it still makes each pixel of the image approximately 600 units big, but I guess that's accurate enough for what I'm trying to do.
The coordinates themselves seem to be working, as the center of the image is exactly spot on.
// Map options (note: with L.CRS.Simple, 1 map unit equals 1 pixel at zoom 0)
var center = [11999, 9199];
var mapOptions = {
    crs: L.CRS.Simple,
    center: center,
    minZoom: 0,
    maxZoom: 0
};

// Creating a map object (L.map takes the element id and a single options object)
var map = L.map('bigmapleaf', mapOptions);

// Image bounds as [[y, x], [y, x]] corner pairs
var bounds = [[600000, -600000], [-600000, 600000]];
var image = L.imageOverlay('../pictures/map.png', bounds).addTo(map);

// Adding the layer to the map and fitting the view
map.fitBounds(bounds);
map.setView(center, 1); // zoom 1 will be clamped to maxZoom
Edit:
Now I have managed to find the math that makes the map the right size without any zooming, with good precision, but I need the map to respond to those kinds of large units, which I believe requires another conversion back to the original units, and that does not feel right. I wish I could work with the large units and let Leaflet do all the map view conversions. Maybe I'll make a script that translates both ways between the two unit systems.
Still waiting in case someone knows an easy answer to my troubles.
Besides being easier to work with, the large units would also let me make the map with absolute accuracy.
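One way to let Leaflet work in the large units directly is a custom CRS whose transformation scales world units down to pixels. A minimal sketch, assuming the 2000x2000 px image spans -600000 to +600000 on both axes (600 units per pixel):

var unitsPerPixel = 600; // 1,200,000 units across 2000 px

// L.Transformation(a, b, c, d) maps (x, y) to (a*x + b, c*y + d);
// L.CRS.Simple uses (1, 0, -1, 0), so we only add the scale factor
var LargeUnitCRS = L.extend({}, L.CRS.Simple, {
    transformation: new L.Transformation(1 / unitsPerPixel, 0, -1 / unitsPerPixel, 0)
});

var map = L.map('bigmapleaf', { crs: LargeUnitCRS, minZoom: 0, maxZoom: 4 });
var bounds = [[-600000, -600000], [600000, 600000]];
L.imageOverlay('../pictures/map.png', bounds).addTo(map);
map.fitBounds(bounds); // everything else can stay in the original units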
So I'm currently working on deck.gl and React, and I need to load a GeoJSON file which has an additional property named "floors" or something similar that tells how many floors the building has.
Is there any way to extrude alternating floors horizontally just a little bit, so that it looks like a floor edge, like some of the buildings in this image (although most of them just get thinner at the top)? I searched deck.gl but there is no such thing. I looked around and found that Mapbox-GL-JS has something called an extrusion base height, which lets you add a polygon above another, but there is nothing for extruding horizontally to make one floor thinner and then go back to the original size. This would give an edge wherever a new floor starts.
I have scoured the deck.gl docs but couldn't find anything on extruding horizontally, or in other words on changing the polygon's area/size so that I can draw multiple sizes of polygon on the same spot.
Here is another picture showing clearly what I'm trying to do.
Things I want to do:
The red polygon is tilted. I need to make its orientation the same as the green one while reducing its area at the same time.
Move the red polygon's base to the top of the green polygon.
The test data I'm using is given below:
var offset = 0.00001;
var data = [
    {
        polygon: [
            [-77.014904, 38.816248],
            [-77.014842, 38.816395],
            [-77.015056, 38.816449],
            [-77.015117, 38.816302],
            [-77.014904, 38.816248]
        ],
        height: 30
    },
    {
        polygon: [
            [-77.014904 + offset, 38.816248],
            [-77.014842 - offset, 38.816395 - offset],
            [-77.015056 - offset, 38.816449 - offset],
            [-77.015117 + offset, 38.816302],
            [-77.014904 + offset, 38.816248]
        ],
        height: 40
    }
];
EDIT: I think the proper way would be to convert longitude/latitude to Cartesian coordinates, get the vectors to the four corners of the polygon, translate the vectors toward the center by the offset amount, then convert back. But this would only work with quad/rectangle polygons; for buildings made up of multiple quads I'd need another way.
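For what it's worth, shrinking the footprint toward its centroid works for arbitrary simple polygons, not just quads. A rough sketch, treating lng/lat as planar (acceptable at building scale):

// Shrink a closed polygon ring toward its vertex centroid by a factor
function shrinkRing(ring, factor) {
    // Average the vertices, skipping the duplicated closing point
    var pts = ring.slice(0, ring.length - 1);
    var cx = 0, cy = 0;
    pts.forEach(function (pt) { cx += pt[0]; cy += pt[1]; });
    cx /= pts.length;
    cy /= pts.length;
    // Move every vertex toward the centroid
    return ring.map(function (pt) {
        return [cx + (pt[0] - cx) * factor, cy + (pt[1] - cy) * factor];
    });
}

// e.g. a 90% copy of the first test polygon:
var upperFloor = shrinkRing(data[0].polygon, 0.9);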
If I'm understanding correctly, your problem boils down to: given a polygon (the footprint of the lower part of the building), generate a slightly smaller version of the same polygon, centered within it.
Fortunately, this is really easy using Turf's transformScale method.
So your steps will be:
Convert your polygon data into GeoJSON. (Which I assume you have some mechanism to do, in order to display it in Mapbox-GL-JS in the first place.)
Generate a smaller polygon using turf.transformScale(base, 0.9)
Add the new polygon with map.addSource
Display the new polygon with map.addLayer, setting the extrusion base height etc as required.
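Put together, a sketch of those steps might look like this (the source/layer ids are placeholders, and the 30/40 heights are taken from the question's test data):

// 1-2. Build the base footprint as GeoJSON and scale it down
var base = turf.polygon([data[0].polygon]);
var smaller = turf.transformScale(base, 0.9); // 90% copy, centered within the base

// 3. Add the new polygon as a source
map.addSource('upper-floor', { type: 'geojson', data: smaller });

// 4. Display it as an extrusion starting where the lower block ends
map.addLayer({
    id: 'upper-floor-extrusion',
    type: 'fill-extrusion',
    source: 'upper-floor',
    paint: {
        'fill-extrusion-base': 30,   // top of the lower block
        'fill-extrusion-height': 40, // top of the upper block
        'fill-extrusion-color': '#aaaaaa'
    }
});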
In order to cache tiles for offline use, I tried to calculate tile coordinates for a certain zoom level. The calculated x coordinates were correct, but the y coordinates were not.
This old example compares the actually received coordinates with the calculated ones (click on the map to display the results).
I was using map.project(latlng, zoom) to get the projected coordinates and then dividing by the tileSize, which is 256. Is this approach even correct?
EDIT:
Thanks to Ivan Sanchez for pointing me toward the y inversion in TMS. After projecting the point with map.project(latlng, zoom), you need to invert the y coordinate as follows:
You calculate _globalTileRange(zoom) for the corresponding zoom level, then:
invertedY = _globalTileRange(zoom).max.y - y;
Here is another link that shows the correct calculation of the y coordinate for the current zoom of the map; for other zoom levels, the global tile range needs to be recalculated accordingly.
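Putting the pieces together, a minimal sketch (assuming the standard EPSG:3857 setup, where the tile range at a given zoom runs from 0 to 2^zoom - 1 and tiles are 256 px):

function latLngToTile(map, latlng, zoom) {
    var tileSize = 256;
    var p = map.project(latlng, zoom);         // pixel coordinates at that zoom
    var x = Math.floor(p.x / tileSize);
    var y = Math.floor(p.y / tileSize);        // XYZ scheme: y grows southward
    var maxTileY = Math.pow(2, zoom) - 1;      // _globalTileRange(zoom).max.y for EPSG:3857
    return { x: x, y: y, tmsY: maxTileY - y }; // TMS scheme: y grows northward
}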
Your approach is correct. However:
In order to get the tile coordinates loaded by Leaflet, you are looping through all the loaded images and outputting the min/max of those values.
The problem with this approach is that Leaflet doesn't immediately unload off-screen tiles. See the keepBuffer option, bug #4039 and PR #4650.
In order to fetch the bounds of tiles visible within the map bounds, see the private methods used internally by L.GridLayer around this line of code.
In TMS, the y coordinate goes up, and in non-TMS tiles it goes down. This is because TMS was designed by geographers, for whom the y coordinate is the northing, while non-TMS tiles were initially done by computer programmers, who interpret the y coordinate as downward pixels.
For more background, read https://wiki.openstreetmap.org/wiki/TMS#The_Y_coordinate and https://wiki.osgeo.org/wiki/Tile_Map_Service_Specification#TileMap_Diagram and https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames#X_and_Y
I have a web based map that is using Mapbox / Leaflet JS API.
On the map, I have several stationary markers and other markers that I am moving around based on GPS data that is pushed to the browser. When a moving marker is dropped on a stationary marker, I want to identify the two markers that were involved.
I have implemented a handler for the moving marker's "dragend" event which enables me to identify the marker that was dragged/dropped.
My question is, how can I identify the marker that it was dropped on?
That's quite hard to do, because the only thing that lets you correctly identify a marker is its latitude/longitude position. So if you try to drop a marker onto a marker at lat/lng 0,0, you need to drop it exactly onto that position, which turns out to be a very hard thing to do.
You could of course build some sort of tolerance into it, but that tolerance will need to vary with the zoom level, which I think will be very hard to get right. You could do something like this:
// Drag has ended
marker.on('dragend', function (e) {
    // Get position of dropped marker
    var latLng = e.target.getLatLng();

    // Object to hold nearest marker and distance
    var nearest = {};

    // Loop over the layer which holds the rest of the markers
    featureLayer.eachLayer(function (layer) {
        // Calculate distance between each marker and the dropped marker
        var distance = latLng.distanceTo(layer.getLatLng());

        if (!nearest.marker) {
            // Set the first as nearest
            nearest.marker = layer;
            nearest.distance = distance;
        } else if (distance < nearest.distance) {
            // If this marker is nearer, set this marker as nearest
            nearest.marker = layer;
            nearest.distance = distance;
        }
    });
});
Example on Plunker: http://plnkr.co/edit/GDixNNDGqW9rvO4R1dku?p=preview
Now the nearest object will hold the marker that is closest to your drop position. The closest distance will vary with your zoom level: at zoom level 1 it may look like you've dropped it exactly on the other marker while you are actually thousands of miles off; at zoom 18 the difference will be much smaller, but dropping it on exactly the same lat/lng is still virtually impossible. Otherwise you could simply compare all the latLngs against the dropped latLng, but that won't work in practice.
So now that you have the nearest marker and its distance to the dropped marker, you can implement a tolerance, something along the lines of if (nearest.distance < (x / y)), where x is a base distance and y the zoom level. It's something you'll need to play with to get right. Once you've figured out the correct tolerance you can implement it right along with the distance comparison in the handler.
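For example, inside the dragend handler after the loop (the 1000 m base value is an arbitrary placeholder to tune, not a recommendation):

// Tolerance shrinks as the zoom level grows
var tolerance = 1000 / Math.max(map.getZoom(), 1); // metres
if (nearest.marker && nearest.distance < tolerance) {
    console.log('Dropped onto marker:', nearest.marker);
}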
Good luck, hope this helps
I'm currently working on a mapping app for iPhone. I've created some custom maps of various sizes, but I've run into an issue:
I would like to implement the ability for users' locations to be checked automatically, but since I'm not using a MapView this is much more difficult (see below).
Given the different coordinate systems, I would like to receive a geolocation (the green dot) and translate it into a pixel location on a custom map.
I've got the geolocations for the four corners, but the rect is askew. I've calculated the angle of rotation, but I'm just generally confused.
Note: the maps aren't big enough for the spherical nature of the earth to come into the calculation.
Any help is appreciated!
To convert a geolocation to a point you first need to understand the projection. Assuming you are using Mercator:

x = R * long
y = R * ln((1 + sin(lat)) / cos(lat))

where lat and long are in radians and R is the radius of the earth. For the full globe, x runs from -R*PI to R*PI, so to get it within view.frame.size you will have to divide by a scale factor.

For the difference between points:

x2 - x1 = R * (long2 - long1)
y2 - y1 = R * (ln((1 + sin(lat2)) / cos(lat2)) - ln((1 + sin(lat1)) / cos(lat1)))
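A minimal JavaScript sketch of these formulas (R is taken as 1 since only relative positions matter; note this does not yet handle the askew rectangle, which would need an extra rotation around the map's center):

// Mercator projection of a geolocation (angles converted to radians)
function mercator(latDeg, lngDeg) {
    var lat = latDeg * Math.PI / 180;
    var lng = lngDeg * Math.PI / 180;
    return {
        x: lng,                                          // x = R * long
        y: Math.log((1 + Math.sin(lat)) / Math.cos(lat)) // y = R * ln((1+sin lat)/cos lat)
    };
}

// Pixel position relative to two known, axis-aligned reference corners
function toPixel(lat, lng, topLeft, bottomRight, widthPx, heightPx) {
    var p  = mercator(lat, lng);
    var tl = mercator(topLeft.lat, topLeft.lng);
    var br = mercator(bottomRight.lat, bottomRight.lng);
    return {
        px: (p.x - tl.x) / (br.x - tl.x) * widthPx,
        py: (tl.y - p.y) / (tl.y - br.y) * heightPx // screen y grows downward
    };
}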