Calculate lat long on static mapbox img for 256px tiles - mapbox

I have a static image for which I only know the center point lat/long (for example https://api.mapbox.com/styles/v1/mapbox/light-v9/static/-78.4649,42.5128,5,0,0/300x200), and I want to put some markers (lat/long) on this map using a canvas.
But I need to somehow calculate the x/y coordinates for those markers.
So I know the center of the map (lat/long) and the markers' lat/long coordinates. Is there any way to convert lat/long to x/y knowing only the zoom level and the center?
Or, if I know the x/y of the center lat/long (it will always be the same: 150px, 100px) and the zoom level, could I calculate the x/y for the other markers?
I have a lot of markers (>200, all custom generated SVG and so on) to place on this map. I can't use the Mapbox static map markers because of their limitations and so on.
UPD: Based on the comments I updated the question.
How to calculate it for 256px square tiles?

Based on the OP's comment I'm assuming that the requested image is square, for the sake of simplicity (TILE_SIZE could be decomposed into TILE_SIZE_X and TILE_SIZE_Y components). I'm also assuming that the image is 256 pixels wide (TILE_SIZE = 256).
I'm giving both the pixel coordinates relative to the center of the image (the distanceInPixels function) and to the Lower Left Corner (the imageCoordinates function). Changing to the Upper Left Corner, in case that's necessary, should be trivial (X stays the same and Y becomes TILE_SIZE - Y).
<!DOCTYPE html>
<html>
<body>
<p id="demo"></p>
<script>
  var latLngMarker = {};
  var latLngCenter = {};
  // Image dimensions in pixels
  var TILE_SIZE = 256;
  var zoom = 5;
  // Coordinates of the marker to be projected on the image
  latLngMarker.lat = 41.850;
  latLngMarker.lng = -87.650;
  // Coordinates of the image center
  latLngCenter.lat = 41.850;
  latLngCenter.lng = -87.650;
  // Coordinates projected on the cartographic plane (Web Mercator)
  var centerProjected = project(latLngCenter);
  var markerProjected = project(latLngMarker);
  // The result should be X=Y=0, because Marker Lat/Lng = Center Lat/Lng
  var distanceFromCenter = distanceInPixels(centerProjected, markerProjected);
  alert("X: " + distanceFromCenter.x + " Y: " + distanceFromCenter.y);
  // The result should be X=Y=256/2=128 for the same reason
  var coords = imageCoordinates(centerProjected, markerProjected);
  alert("X: " + coords.x + " Y: " + coords.y);

  // The horizontal distance represented by one pixel for a given latitude and zoom level
  function pixelResolution(latLng, zoom) {
    var radius = 6378137.0; // semi-major axis of the WGS84 ellipsoid
    var circumference = 2 * Math.PI * radius;
    var distancePerImage = circumference * Math.cos(latLng.lat * Math.PI / 180.0) / Math.pow(2, zoom);
    var distancePerPixel = distancePerImage / TILE_SIZE;
    return distancePerPixel;
  }

  // Web Mercator projection: lat/lng to "world" coordinates in the range [0, TILE_SIZE]
  function project(latLng) {
    var siny = Math.sin(latLng.lat * Math.PI / 180);
    siny = Math.min(Math.max(siny, -0.9999), 0.9999);
    var xy = {};
    xy.x = TILE_SIZE * (0.5 + latLng.lng / 360);
    xy.y = TILE_SIZE * (0.5 - Math.log((1 + siny) / (1 - siny)) / (4 * Math.PI));
    return xy;
  }
  // Marker pixel coordinates relative to the image Center
  function distanceInPixels(centerProjected, markerProjected) {
    var delta = {};
    // Metres per projected (zoom-0) unit and metres per image pixel at the requested zoom;
    // their ratio is simply Math.pow(2, zoom), but going through pixelResolution keeps the
    // ground-distance reasoning explicit.
    var worldSpacing = pixelResolution(latLngCenter, 0);
    var spacing = pixelResolution(latLngCenter, zoom);
    delta.x = Math.round((centerProjected.x - markerProjected.x) * worldSpacing / spacing);
    delta.y = Math.round((centerProjected.y - markerProjected.y) * worldSpacing / spacing);
    return delta;
  }
  // Marker pixel coordinates relative to the Lower Left Corner
  function imageCoordinates(centerProjected, markerProjected) {
    var pixelCoordinates = {};
    var deltaPixels = distanceInPixels(centerProjected, markerProjected);
    pixelCoordinates.x = TILE_SIZE / 2 - deltaPixels.x;
    pixelCoordinates.y = TILE_SIZE / 2 - deltaPixels.y;
    return pixelCoordinates;
  }
</script>
</body>
</html>
Note: I can confirm that the pixelResolution function only works with square image tiles whose dimensions are powers of 2. The Math.pow(2, zoom); snippet gives the game away!
Web Mercator function based on:
https://developers-dot-devsite-v2-prod.appspot.com/maps/documentation/javascript/examples/map-coordinates
Horizontal distance represented by one pixel from:
https://wiki.openstreetmap.org/wiki/Zoom_levels
See also:
https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames#Resolution_and_Scale
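One way to use the functions above for the OP's actual goal (drawing a couple of hundred markers on a canvas laid over the static image): the difference between two project()ed points, multiplied by 2^zoom, is their pixel offset at that zoom, regardless of the image dimensions. A rough, untested sketch; the markers array and the canvas element are assumptions, not part of the answer above:

var markers = [{ lat: 42.0, lng: -78.0 }, { lat: 43.1, lng: -77.2 }]; // hypothetical data
var canvas = document.getElementById("overlay");                      // hypothetical canvas, sized like the static image (e.g. 300x200)
var ctx = canvas.getContext("2d");
var scale = Math.pow(2, zoom);            // zoom, latLngCenter and project() as in the answer above
var centerWorld = project(latLngCenter);
markers.forEach(function (m) {
  var w = project(m);
  // projected y grows southwards, which matches the canvas y axis (origin at the top left)
  var x = canvas.width / 2 + (w.x - centerWorld.x) * scale;
  var y = canvas.height / 2 + (w.y - centerWorld.y) * scale;
  ctx.fillRect(x - 2, y - 2, 4, 4);       // or ctx.drawImage() with a rasterised SVG marker
});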

If you're going to linearly interpolate, you'd need to know the lat/long and x/y for two points. It isn't possible with only the center point unless you also have a conversion metric for pixels, i.e. 50 pixels per 0.1 delta lat/long.
If you have the lat/long and x/y for two points you can compute the ratio as (y1 - y2) / (lat1 - lat2) or (x1 - x2) / (long1 - long2), each of which should result in the same value.
Then it'd be relatively easy: assume the ratio is 5, meaning 5 px per unit of lat/long. If you had a point that was (3, -4) away from that center point, you'd simply multiply to find the pixel offset (15, -20) and add that to the center = (165, 80).
Since all of your images are zoomed by the same amount, you could manually calculate the ratio once and store it as a constant.
Pseudo/untested Python:
def getRatio(latlongs=[(1, 1), (0, 0)], xys=[(5, 5), (0, 0)]):
    # pixels per degree, derived from two known points
    return (xys[0][1] - xys[1][1]) / (latlongs[0][0] - latlongs[1][0])

centerLatLong = (5, 5)
centerXY = (150, 100)

def getCoord(lat, long, ratio):
    # pixel position of (lat, long), offset from the image center
    y = (lat - centerLatLong[0]) * ratio + centerXY[1]
    x = (long - centerLatLong[1]) * ratio + centerXY[0]
    return x, y

Related

Compute coordinates position with projection

Given 2 coordinates (points 1 and 2, in red) in WGS84, I need to find the coordinates of the point perpendicular to the line (point 3) at a given distance.
I managed to do the math to compute this perpendicular point, but when it is displayed on the map the point seems to be in the wrong place, probably because of the projection.
What I want on a map:
And what I have instead on the map:
How can I take into account the projection so that the point on the map appears perpendicular to the line? The algorithm below to compute the point comes from here: https://math.stackexchange.com/questions/93424/calculate-rectangle-coordinates-from-line-and-height
public static Coords ComputePerpendicularPoint(Coords first, Coords last, double distance)
{
    double slope = -(last.Lon.Value - first.Lon.Value) / (last.Lat.Value - first.Lat.Value);

    // number of km per degree = ~111 km (111.32 in Google Maps, but the range varies between
    // 110.567 km at the equator and 111.699 km at the poles)
    // 1 km in degrees = 1 / 111.32 = 0.0089
    // 1 m in degrees  = 0.0089 / 1000 = 0.0000089
    distance = distance * 0.0000089 / 100; // 0.0000089 => roughly 1 m in WGS84; /100 because distance is in cm

    double t = distance / Math.Sqrt(1 + (slope * slope));

    Coords perp_coord = new Coords();
    perp_coord.Lon = first.Lon + t;
    perp_coord.Lat = first.Lat + (t * slope);
    return perp_coord;
}
Thank you in advance!

Get the exact satellite image for a given Lat/Long bbox rectangle?

For a visualization I need an optical satellite image for a specific rectangular AOI, defined by two lat/long coordinates. I tried the Mapbox Static Images API, which takes a lat/long bounding box and a width/height resolution in pixels for the output. The problem is that, as far as I can tell, if the aspect ratio of the lat/long box doesn't match the w/h pixel ratio, it adds padding to the lat/long bounding box to fill the w/h of the pixel image.
And this would prevent me from combining the optical image with the other data, because I would not know which image pixel would (roughly) correspond to which lat/long coordinate.
I see three "solutions", but I don't know how to achieve any of them.
1. "Make" Mapbox return the images without padding.
2. Compute the correct w/h pixel ratio from the lat/long coordinates, so there would be no padding. Maybe with https://en.wikipedia.org/wiki/Equirectangular_projection like discussed here: https://stackoverflow.com/a/16271669/380038?
3. Find a way to determine the lat/long coordinates of the optical satellite image so I can cut off the possible padding.
I checked How can I extract a satellite image from google maps given a Lat Long Rectangle?, but I would prefer to use my existing paid Mapbox account and I got the impression that I still wouldn't get the exact optical image or the exact corner coordinates of the optical image.
Mapbox Static Images API serves maps
You have an optical image from another source
You want to overlay these data
Right?
Note the Red and Green pins: the waypoints are at opposite corners on Mapbox.
After the equirectangular correction Mapbox matches OpenStreetMap (little wonder), but the Google coordinates are quite close too.
curl -g "https://api.mapbox.com/styles/v1/mapbox/streets-v11/static/[17.55490,47.10434,17.55718,47.10543]/600x419?access_token=YOUR_TOKEN_HERE" --output example-walk-600x419-nopad.png
What is your scale? 1 km - 100 km?
What is your source of optical image?
What is the required accuracy?
Just to mention, optical images have their own sources of distortions.
In practice:
You must have the extent of your non-optical satellite data (let's preserve the mist around it...). I'll call it ((x1, y1), (x2, y2)). We are coders, not cartographers, right!?
If you feed your extent to https://docs.mapbox.com/playground/static/ as
min longitude = x1, min latitude = y1, max longitude = x2, max latitude = y2
Select the "Bounding box" entry! Do you see the Mapbox map around your data!? Don't mind the exact dimensions, just check whether the Mapbox image relates to your data! Maybe you have to swap some values to get to the right corner of the globe.
If you have the right ((x1, y1), (x2, y2)) coordinates, do the equirectangular transformation to get the right pixel size.
You've called it Solution #2.
Let's say the width of your non-optical satellite data is Wd and the height is Hd.
The Mapbox image will fit your data if you ask for Wm width and Hm height of Mapbox data, where
Wm = Wd
Hm = Wd * (y2 - y1) / ((x2 - x1) * cos(y1))
Now you can pull the mapbox by
curl -g "https://api.mapbox.com/styles/v1/mapbox/streets-v11/static/[<x1>,<y1>,<x2>,<y2>]/<Wm>x<Hm>?access_token=<YOUR_TOKEN>" --output overlay.png
If (Hd == Hm)
then {you are lucky :) the two images just fit each other}
else { the two images are for the same area, but you have to scale the height of one of the images to make them match }
Well... almost. You have not revealed what size of area you want to cover. The equation above is just an approximation which works up to the size of a smaller country (~100 km or so). For continent scale you probably have to apply more accurate formulas.
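A small sketch of the sizing step described in this answer (untested; x1/y1/x2/y2 reuse the numbers from the curl example above, and Wd, the style and the token are placeholders):

// Bounding box of the non-optical data: x = longitude, y = latitude (degrees)
var x1 = 17.55490, y1 = 47.10434, x2 = 17.55718, y2 = 47.10543; // placeholder values
var Wd = 600;                                                   // width of the data in pixels
var Wm = Wd;
var Hm = Math.round(Wd * (y2 - y1) / ((x2 - x1) * Math.cos(y1 * Math.PI / 180)));
var url = "https://api.mapbox.com/styles/v1/mapbox/streets-v11/static/[" +
          [x1, y1, x2, y2].join(",") + "]/" + Wm + "x" + Hm +
          "?access_token=YOUR_TOKEN_HERE";

With these numbers Hm comes out at roughly 421, close to the 600x419 request used in the earlier curl example.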
In my opinion, your #2 idea is the way to go. You do have the LLng bbox, so all that remains is to calculate its "real" size in pixels.
Let us say that you want (or can allow, or can afford) a resolution of 50m per pixel, and the area is small enough not to have distortions (i.e., a rectangle of say 1 arcsecond of latitude and 1 arcsecond of longitude has top and bottom sides of the same length, with an error less than your chosen resolution). These are, I believe, very loose requisites and easy to fulfill.
Then, you just need to calculate the distance between the (Lat1, Lon1) and (Lat1, Lon2) points, and between (Lat1, Lon1) and (Lat2, Lon1). Divide that distance in meters by 50, and you'll get the exact number of pixels:
Lon1 Lon2
Lat1 +---------------+
| |
| |
Lat2 +---------------+
And you have a formula for that - the haversine formula.
If you need higher precision, you could resort to Vincenty's formulae for an oblate spheroid (here a Javascript library). On the MT site (first link) there is a live calculator that you can use to plug in data from your calls and verify whether the approach is indeed working: you plug in your bounding box, get the distance in meters, divide and get the pixel size of the image. If the image is good, chances are that you can go with the simpler haversine. If it isn't, then there has to be some further quirk in the maps API, its projection perhaps, that doesn't return the expected bounding box. But it seems unlikely.
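A rough sketch of that calculation (untested; the bounding box values and the 50 m per pixel target are placeholders):

// Haversine distance in metres between two lat/lng points (degrees)
function haversine(lat1, lon1, lat2, lon2) {
  var R = 6371000; // mean Earth radius in metres
  var toRad = Math.PI / 180;
  var dLat = (lat2 - lat1) * toRad;
  var dLon = (lon2 - lon1) * toRad;
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(lat1 * toRad) * Math.cos(lat2 * toRad) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}

var lat1 = 47.10543, lon1 = 17.55490, lat2 = 47.10434, lon2 = 17.55718; // placeholder bbox corners
var metresPerPixel = 50;
var widthPx  = Math.round(haversine(lat1, lon1, lat1, lon2) / metresPerPixel);
var heightPx = Math.round(haversine(lat1, lon1, lat2, lon1) / metresPerPixel);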
I've had this exact problem when using a satellite image on an Apple Watch. I overlay some markers and a path, and I convert everything from coordinates to pixels. Below is my code to determine the exact bbox result:
var maxHoleLat = 52.5738902
var maxHoleLon = 4.9577606
var minHoleLat = 52.563994
var minHoleLon = 4.922364
var mapMaxLat = 0.0
var mapMaxLon = 0.0
var mapMinLat = 0.0
var mapMinLon = 0.0
let token = "your token"
var resX = 1000.0
var resY = 1000.0
let screenX = 184.0
let screenY = 224.0 // 448/2 = 224 - navbarHeight
let navbarHeight = 0.0
var latDist = 111000.0
var lonDist = 111000.0
var dx = 0.0
var dy = 0.0
var imageUrl: URL? // URL of the requested static map image

func latLonDist() {
    // calgary.rasc.ca/latlong.htm
    let latRad = maxHoleLat * .pi / 180
    // distance covered by 1 degree of longitude at the given latitude
    self.lonDist = 111412.88 * cos(latRad) - 0.09350 * cos(3 * latRad) + 0.00012 * cos(5 * latRad)
    print("lonDist = \(self.lonDist)")
    // distance covered by 1 degree of latitude at the given latitude
    self.latDist = 111132.95 - 0.55982 * cos(2 * latRad) + 0.00117 * cos(4 * latRad)
    print("latDist = \(self.latDist)")
}

func getMapUrl() {
    self.dx = (maxHoleLon - minHoleLon) * lonDist
    self.dy = (maxHoleLat - minHoleLat) * latDist
    // the map is square, but the hole is not
    // check if the hole covers less x than y
    if dx < dy {
        mapMaxLat = maxHoleLat
        mapMinLat = minHoleLat
        let midLon = (maxHoleLon + minHoleLon) / 2
        mapMaxLon = midLon + dy / 2 / lonDist
        mapMinLon = midLon - dy / 2 / lonDist
    } else {
        mapMaxLon = maxHoleLon
        mapMinLon = minHoleLon
        let midLat = (maxHoleLat + minHoleLat) / 2
        mapMaxLat = midLat + dx / 2 / latDist
        mapMinLat = midLat - dx / 2 / latDist
    }
    self.imageUrl = URL(string: "https://api.mapbox.com/styles/v1/mapbox/satellite-v9/static/[\(mapMinLon),\(mapMinLat),\(mapMaxLon),\(mapMaxLat)]/1000x1000?logo=false&access_token=\(token)")
    print("\(imageUrl)")
}

Dividing long and lat coordinates into sub-coordinates (smaller squares)?

I have two long/lat points of a rectangle (bottom left and top right) and I want to divide this rectangle into smaller ones based on a base area (long and lat) I already have. I know that I can't treat long and lat as distances measured in meters or kilometres, but as degrees on an approximation of the Earth's surface.
The points are extracted by Leaflet with a 4326 SRID, and so are the original points. I need the centres of the "smaller squares", or their long and lat coordinates.
For example, this is my base rectangle 24.639567,46.782406 24.641452,46.785413 and for the rectangle, I want to divide 24.584749,46.612782 24.603323,46.653809.
First, let's turn your two points into a leaflet bounds object:
const bounds = L.latLngBounds(point1, point2);
Now let's pick a sample interval, meaning how many sub-rectangles across the width and height of your bounds. For example, a sampling size of 10 would give 100 sub-rectangles (10 x 10), though if your sub-rectangles don't need the same aspect-ratio as your main bounds, you could choose two separate sampling intervals (one for x and one for y)
const samplingInterval = 10 // easy to change
To properly interpolate through your main bounds, we'll grab the corners of it, as well as the width in longitude degrees, and height in latitude degrees, called dLat and dLng (for delta):
const sw = bounds.getSouthWest();
const nw = bounds.getNorthWest();
const ne = bounds.getNorthEast();
const dLat = ne.lat - sw.lat;
const dLng = ne.lng - nw.lng;
Now we can build an array of new bounds extrapolated from the original:
let subBounds = [];
for (let i = 0; i < samplingInterval; i++) {
  for (let j = 0; j < samplingInterval; j++) {
    const corner1 = [
      sw.lat + (dLat * i) / samplingInterval,
      sw.lng + (dLng * j) / samplingInterval
    ];
    const corner2 = [
      sw.lat + (dLat * (i + 1)) / samplingInterval,
      sw.lng + (dLng * (j + 1)) / samplingInterval
    ];
    subBounds.push(L.latLngBounds(corner1, corner2));
  }
}
Now to get the centers of these bounds, you can call .getCenter() on them:
const centerPoints = subBounds.map(bounds => bounds.getCenter());
Working codesandbox
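If you also want to see the grid and the centre points on the map, a possible follow-up, assuming an existing Leaflet map instance named map:

// draw each sub-rectangle and its centre on the map
subBounds.forEach(b => L.rectangle(b, { weight: 1, fill: false }).addTo(map));
centerPoints.forEach(c => L.circleMarker(c, { radius: 2 }).addTo(map));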

Map distance to zoom in Google Static Maps

I am using Google Static Maps to display maps in my AppleTV app. What I need is to somehow map a distance of e.g. 1km to the zoom parameter of the Static Maps API.
In other words, I have an imageView in which I want to load the map image. If I know that the height of my imageView is 400px and I want the map to show a real Earth surface of 1000m north to south, how would I tell the API to return the map at exactly this zoom?
I found a very similar question here, however no suitable answer is provided.
As stated at Google Maps Documentation:
Because the basic Mercator Google Maps tile is 256 x 256 pixels.
Also note that at every zoom level n, the map is 2^n tiles wide (and tall).
Meaning that at zoom level 2, the pixels in any direction of the map are 256 * 2² = 1024px.
Taking into account that the Earth has a perimeter of ~40,000 kilometers, at zoom 0 every pixel ≈ 40,000 km / 256 = 156.25 km.
At zoom 9 the map is 256 * 2⁹ = 131072 pixels wide: 1px = 40,000 km / 131072 = 0.305 km ... and so on.
If we want 400px = 1km, we have to choose the closest approximation possible, so: 1px = 1km/400 = 0.0025km
I tried zoom = 15 and obtained 1px = 0.00478 km, and zoom = 16 gave me 1px = 0.00238 km.
Meaning that you should use zoom = 16, and you will get 0.955 km per 400px, at the Equator and only for x coordinates.
As you go north or south in latitude, the perimeter gets smaller, which changes the distance. Of course it also changes the relationship along the y axis, as the projection of a sphere is tricky.
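The trial-and-error over zoom levels can also be written as a small helper, since the metres-per-pixel value halves with every zoom step. A rough sketch (the helper names are mine, not part of the Google API):

// Metres represented by one pixel at a given latitude and zoom (Web Mercator, 256px tiles)
function metresPerPixel(lat, zoom) {
  var earthCircumference = 40075016.686; // metres at the Equator
  return earthCircumference * Math.cos(lat * Math.PI / 180) / Math.pow(2, zoom + 8);
}

// Zoom level whose resolution is closest to the requested metres-per-pixel
function zoomForResolution(lat, targetMetresPerPixel) {
  // metresPerPixel halves every zoom step, so solve for the exponent and round
  return Math.round(Math.log2(metresPerPixel(lat, 0) / targetMetresPerPixel));
}

zoomForResolution(0, 1000 / 400); // 1000 m over 400 px = 2.5 m/px => zoom 16 at the Equator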
If you want to calculate the exact distance with a function, you should use the one provided by Google in their documentation:
// Describe the Gall-Peters projection used by these tiles.
gallPetersMapType.projection = {
  fromLatLngToPoint: function(latLng) {
    var latRadians = latLng.lat() * Math.PI / 180;
    return new google.maps.Point(
        GALL_PETERS_RANGE_X * (0.5 + latLng.lng() / 360),
        GALL_PETERS_RANGE_Y * (0.5 - 0.5 * Math.sin(latRadians)));
  },
  fromPointToLatLng: function(point, noWrap) {
    var x = point.x / GALL_PETERS_RANGE_X;
    var y = Math.max(0, Math.min(1, point.y / GALL_PETERS_RANGE_Y));
    return new google.maps.LatLng(
        Math.asin(1 - 2 * y) * 180 / Math.PI,
        -180 + 360 * x,
        noWrap);
  }
};

Convert coordinates between two rotated systems

I have a map with coordinates in meters and an overlaying building plan with pixel coordinates:
I already know the scale factor and I am able to convert coordinates between the two systems if they are aligned (i.e. the overlay image is exactly horizontal, with no rotation), using conversionfactor (= the number of overlay pixels in one meter on the map):
MAPx(ImageX) = centerpointX + ImageX * conversionfactor
MAPy(ImageY) = centerpointY + ImageY * conversionfactor
How can I convert between the coordinates if the overlay is rotated, assuming that I have the above formulas and want to include a rotation angle?
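For reference, the usual way to fold a rotation angle into formulas like the ones above is a 2D rotation matrix applied to the image coordinates before scaling. A rough, untested sketch (the function names are mine, and the angle sign convention is an assumption that may need to be flipped for a particular setup):

// Rotate the image-space point by `angle` (radians, counter-clockwise), then scale and translate
function imageToMap(imageX, imageY, angle, conversionfactor, centerpointX, centerpointY) {
  var rotatedX = imageX * Math.cos(angle) - imageY * Math.sin(angle);
  var rotatedY = imageX * Math.sin(angle) + imageY * Math.cos(angle);
  return {
    x: centerpointX + rotatedX * conversionfactor,
    y: centerpointY + rotatedY * conversionfactor
  };
}

// The inverse direction rotates by -angle after undoing the scale and translation
function mapToImage(mapX, mapY, angle, conversionfactor, centerpointX, centerpointY) {
  var dx = (mapX - centerpointX) / conversionfactor;
  var dy = (mapY - centerpointY) / conversionfactor;
  return {
    x:  dx * Math.cos(angle) + dy * Math.sin(angle),
    y: -dx * Math.sin(angle) + dy * Math.cos(angle)
  };
}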
EDIT (#tsauerwein):
Here is the marker style that you have requested:
planStyle = function(feature, resolution) {
  var style = new ol.style.Style({
    image: new ol.style.Icon({
      src: feature.dataURL,
      scale: feature.resolution / resolution,
      rotateWithView: true,
      rotation: feature.rotation * (Math.PI / 180),
      anchor: [.5, .5],
      anchorXUnits: 'fraction',
      anchorYUnits: 'fraction',
      opacity: feature.opacity
    })
  });
  return [style];
};
Assuming that you are using ol.source.ImageStatic: when you configure your layer, you have the size of the image in pixels (e.g. width=500, height=200) and also the extent that this image covers in coordinates.
Now, if you have a coordinate, you can easily check if the coordinate is inside the image extent (ol.extent.containsXY(extent, x, y)). Then you can also translate the real-world coordinate to a pixel coordinate:
// image size
var width = 500;
var height = 250;
// image extent
var extent = [2000, 0, 4000, 1000];
// coordinates
var x = 3000;
var y = 500;
if (ol.extent.containsXY(extent, x, y)) {
  var pixelX = width * (x - extent[0]) / ol.extent.getWidth(extent);
  var pixelY = height * (y - extent[1]) / ol.extent.getHeight(extent);
}
Doing it like this, it doesn't matter if the map is rotated or not.