How to correctly find UV on sphere - openstreetmap

I have a sphere and a texture for it.
The texture consists of 16 OSM tiles at zoom = 2. Tile size is 256x256.
At the top and bottom I added space to cover the latitude ranges [90, 85.0511] and [-85.0511, -90] proportionally, so the texture size was 1024x1083.
I also tried a texture without these two strips; its size was 1024x1024 (map tiles only).
The problem is that after UV mapping, objects along the Y-axis appear smaller at the equator and larger near the poles.
There are two kinds of formulas:
u = (lon + 180) / 360; // lon = [-180, 180]
v = (lat + 90) / 180; // lat = [-85.0511, 85.0511]
----
u = Math.atan2(z, x) / (2 * Math.PI) + 0.5; // x, y, z are vertex coordinates
v = Math.asin(y) / Math.PI + 0.5;
I tried all 8 variations: two textures, two u-formulas and two v-formulas.
The result looks like the image above, or worse.
What am I doing wrong? Is it the texture, the UV formulas, or something else?
P.S.: for the poles (vertices in the latitude ranges [-90, -85.0511] and [85.0511, 90]) I don't sample the texture in the fragment shader; I just use a solid color.

OSM uses the Web Mercator projection; see also the OSM wiki.
The conversion from world (x,y,z) to texture (u,v) coordinates would be:
lon = atan2(y, x)
lat = atan2(z, sqrt(x*x+y*y))
u = (lon + pi)/(2*pi)
v = (log(tan(lat/2 + pi/4)) + pi)/(2*pi)
(I assume that z points north like in WGS-84 and all coordinates are right-handed.)
This projection doesn't cover the entire sphere: as the latitude approaches the poles, the v coordinate blows up to infinity. Therefore extending the map to the north or south direction is not going to be helpful.
Instead keep the original square 1024x1024 texture and render a texture-mapped sphere capped at latitude ±85.051129° (that's where v = 0 and 1), using the above coordinate mapping.
Alternatively (and this is more in-line with Web Mercator spirit), render each tile regular in the UV coordinates, and calculate the XYZ coordinates by reversing the above transformation.
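For reference, a minimal Python sketch of that forward mapping (assuming a unit sphere with z pointing north and right-handed coordinates; in a shader you would evaluate the same expressions per vertex or per fragment):

import math

def sphere_to_web_mercator_uv(x, y, z):
    # map a unit-sphere point (z pointing north) to Web Mercator (u, v)
    lon = math.atan2(y, x)                 # [-pi, pi]
    lat = math.atan2(z, math.hypot(x, y))  # [-pi/2, pi/2]
    u = (lon + math.pi) / (2 * math.pi)
    v = (math.log(math.tan(lat / 2 + math.pi / 4)) + math.pi) / (2 * math.pi)
    return u, v  # v stays in [0, 1] only for |lat| <= ~85.0511 degrees

# Example: a vertex at latitude 85.051129 deg, longitude 0 maps to v ~= 1
lat_max = math.radians(85.051129)
print(sphere_to_web_mercator_uv(math.cos(lat_max), 0.0, math.sin(lat_max)))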

Related

Depth to world registration hololens2 unity

I'm working on a program using HoloLens 2 research mode in Unity. The HoloLens gives us a depth image: for every pixel, the distance from the depth sensor to the object in front of it.
What I do is, for every pixel, project it onto the image plane and then back-project it according to the depth distance captured by the depth sensor; this gives me the xyz in the depth sensor's coordinate frame. Now this coordinate needs to be transformed into the global coordinate system. To do so I get the camera pose from Unity via cam_pose = Camera.main.transform, and on the other hand I have the saved depth sensor extrinsic matrix.
From these two matrices I create depth_to_world = cam_pose @ inv(extrinsic). Then for every xyz from the depth image I compute global_xyz = depth_to_world @ xyz to get the point in the real world. The problem is that it returns a point with a 10-15 cm error. What am I doing wrong? (The code is in Python.)
x = self.us[Depth_i, Depth_j] # projection from pixels to image plane
y = self.vs[Depth_i, Depth_j] # projection from pixels to image plane
D = distance_img[Depth_i, Depth_j] #distance_img is depth image
distance = 1000*float(D) / np.sqrt(x * x + y * y + 1) #distance according to spherical image plane D is in millimeter
depth_to_world = cam_pose @ np.linalg.inv(Constants.camera_extrinsic)
X = (np.array([x * distance, y * distance, 1.0 * distance, 1])).reshape(4, 1)
point = (depth_to_world @ X)[0:3, 0]
I got it! According to https://github.com/petergu684/HoloLens2-ResearchMode-Unity, first I passed the Unity world origin to a WinRT plugin, and depth_to_world should be depth_to_world = inv(extrinsic) * cam_pose, where cam_pose is given by TryLocateAtTimeStamp. The other point is that Unity's coordinate system is left-handed (surprisingly!), so we should multiply z by -1 (z <- -z).
My original depth_to_world transformation was close but not correct.
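A minimal Python sketch of the corrected pipeline described above (assumptions: cam_pose is the 4x4 rig pose from TryLocateAtTimeStamp, extrinsic is the 4x4 depth-sensor extrinsic, and x, y are the image-plane coordinates as in the question):

import numpy as np

def depth_pixel_to_world(x, y, D, cam_pose, extrinsic):
    # same ray-length computation as in the question (D from the depth image)
    distance = 1000 * float(D) / np.sqrt(x * x + y * y + 1)
    point_cam = np.array([x * distance, y * distance, 1.0 * distance, 1.0]).reshape(4, 1)
    # order reversed compared to the original attempt: inv(extrinsic) @ cam_pose
    depth_to_world = np.linalg.inv(extrinsic) @ cam_pose
    point_world = (depth_to_world @ point_cam)[0:3, 0]
    point_world[2] *= -1.0  # Unity is left-handed, so flip z
    return point_world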

Get the exact satellite image for a given Lat/Long bbox rectangle?

For a visualization I need an optical satellite image for a specific rectangular AOI defined by two lat/long coordinates. I tried the Mapbox Static Images API, which takes a lat/long bounding box and a width/height in pixels for the output. The problem is that, as far as I can tell, if the aspect ratio of the lat/long box does not match the w/h pixel ratio, it pads the lat/long bounding box to fill the w/h of the pixel image.
This would prevent me from combining the optical image with the other data, because I would not know which image pixel (roughly) corresponds to which lat/long coordinate.
I see three "solutions", but I don't know how to achieve any of them:
1. "Make" Mapbox return the images without padding.
2. Compute the correct w/h pixel ratio from the lat/long coordinates, so there would be no padding. Maybe with https://en.wikipedia.org/wiki/Equirectangular_projection as discussed here: https://stackoverflow.com/a/16271669/380038?
3. Find a way to determine the lat/long coordinates of the returned satellite image so I can cut off the possible padding.
I checked How can I extract a satellite image from google maps given a Lat Long Rectangle?, but I would prefer to use my existing paid Mapbox account, and I got the impression that I still wouldn't get the exact optical image or the exact corner coordinates of the optical image.
The Mapbox Static Images API serves maps.
You have an optical image from another source.
You want to overlay these data.
Right?
Note the red and green pins: the waypoints are at opposite corners on Mapbox.
After the equirectangular correction, Mapbox matches OpenStreetMap (little wonder), but the Google coordinates are quite close too.
curl -g "https://api.mapbox.com/styles/v1/mapbox/streets-v11/static/[17.55490,47.10434,17.55718,47.10543]/600x419?access_token=YOUR_TOKEN_HERE" --output example-walk-600x419-nopad.png
What is your scale? 1 km - 100 km?
What is your source of optical image?
What is the required accuracy?
Just to mention, optical images have their own sources of distortions.
In practice:
You must have the extent of your non-optical satellite data (let's preserve the mist around it...). I'll call it ((x1, y1), (x2, y2)). We are coders, not cartographers, right!?
If you feed your extent to https://docs.mapbox.com/playground/static/ as
min longitude = x1, min latitude = y1, max longitude = x2, max latitude = y2
select the "Bounding box" entry! Do you see Mapbox around your data? Don't mind the exact dimensions, just check whether the map is related to your data. Maybe you have to swap some values to get to the right corner of the globe.
If you have the right ((x1, y1), (x2, y2)) coordinates, do the equirectangular transformation to get the right pixel size.
You've called it Solution #2.
Let's say the width of your non-optical satellite data is Wd and its height is Hd.
The Mapbox image will fit your data if you ask for a Mapbox width Wm and height Hm where
Wm = Wd
Hm = Wd * (y2 - y1) / ((x2 - x1) * cos(y1))
(the cos(y1) factor compensates for one degree of longitude spanning fewer meters away from the equator)
Now you can pull the Mapbox image with
curl -g "https://api.mapbox.com/styles/v1/mapbox/streets-v11/static/[<x1>,<y1>,<x2>,<y2>]/<Wm>x<Hm>?access_token=<YOUR_TOKEN>" --output overlay.png
If (Hd == Hm)
then { you are lucky :) the two images just fit each other }
else { the two images cover the same area, but you have to scale the height of one of them to make them match }
Well... almost. You have not revealed what size of area you want to cover. The equation above is just an approximation which works up to the size of a smaller country (~100 km or so). For continental scale you probably have to apply more accurate formulas.
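As a concrete illustration, here is a small Python sketch of that sizing rule, reusing the bbox from the curl example above (the numbers are only illustrative, and the rule is the same small-area equirectangular approximation):

import math

def mapbox_size_for_bbox(x1, y1, x2, y2, Wd):
    # keep the requested width equal to the data width Wd and derive the
    # height so the bbox needs no padding (small-area approximation)
    Wm = Wd
    Hm = round(Wd * (y2 - y1) / ((x2 - x1) * math.cos(math.radians(y1))))
    return Wm, Hm

Wm, Hm = mapbox_size_for_bbox(17.55490, 47.10434, 17.55718, 47.10543, 600)
url = ("https://api.mapbox.com/styles/v1/mapbox/streets-v11/static/"
       "[17.55490,47.10434,17.55718,47.10543]/"
       + str(Wm) + "x" + str(Hm) + "?access_token=YOUR_TOKEN_HERE")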
In my opinion, your #2 idea is the way to go. You do have the lat/long bbox, so all that remains is to calculate its "real" size in pixels.
Let us say that you want (or can allow, or can afford) a resolution of 50 m per pixel, and the area is small enough not to have distortions (i.e., a rectangle of, say, 1 arcsecond of latitude by 1 arcsecond of longitude has top and bottom sides of the same length, with an error less than your chosen resolution). These are, I believe, very loose requirements and easy to fulfill.
Then you just need to calculate the distance between the (Lat1, Lon1) and (Lat1, Lon2) points, and between (Lat1, Lon1) and (Lat2, Lon1). Divide each distance in meters by 50, and you'll get the exact number of pixels:
Lon1 Lon2
Lat1 +---------------+
| |
| |
Lat2 +---------------+
And you have a formula for that - the haversine formula.
If you need higher precision, you could resort to Vincenty's formulae for an oblate spheroid (there is a JavaScript library for it). On the MT site (first link) there is a live calculator you can use to plug in data from your calls and verify whether the approach is indeed working. I.e. you plug in your bounding box, get the distance in meters, divide, and get the pixel size of the image. (If the image is good, chances are you can go with the simpler haversine. If it isn't, then there has to be some further quirk in the maps API, its projection perhaps, that doesn't return the expected bounding box. But that seems unlikely.)
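A minimal Python sketch of that calculation, assuming the 50 m per pixel resolution mentioned above and a spherical Earth:

import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def haversine_m(lat1, lon1, lat2, lon2):
    # great-circle distance in meters between two lat/lon points (degrees)
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def bbox_size_px(lat1, lon1, lat2, lon2, meters_per_pixel=50.0):
    width_m = haversine_m(lat1, lon1, lat1, lon2)   # top edge (Lat1,Lon1)-(Lat1,Lon2)
    height_m = haversine_m(lat1, lon1, lat2, lon1)  # left edge (Lat1,Lon1)-(Lat2,Lon1)
    return round(width_m / meters_per_pixel), round(height_m / meters_per_pixel)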
I've had this exact problem when using a satellite image on an Apple Watch. I overlay some markers and a path, and I convert everything from coordinates to pixels. Below is my code to determine the exact bbox result.
var maxHoleLat = 52.5738902
var maxHoleLon = 4.9577606
var minHoleLat = 52.563994
var minHoleLon = 4.922364
var mapMaxLat = 0.0
var mapMaxLon = 0.0
var mapMinLat = 0.0
var mapMinLon = 0.0
let token = "your token"
var resX = 1000.0
var resY = 1000.0
let screenX = 184.0
let screenY = 224.0 // 448/2 = 224 - navbarHeight
let navbarHeight = 0.0
var latDist = 111000.0
var lonDist = 111000.0
var dx = 0.0
var dy = 0.0
var imageUrl: URL?
func latLonDist() {
    // calgary.rasc.ca/latlong.htm
    let latRad = maxHoleLat * .pi / 180
    // distance spanned by 1 degree of longitude at the given latitude
    self.lonDist = 111412.88 * cos(latRad) - 0.09350 * cos(3 * latRad) + 0.00012 * cos(5 * latRad)
    print("lonDist = \(self.lonDist)")
    // distance spanned by 1 degree of latitude at the given latitude
    self.latDist = 111132.95 - 0.55982 * cos(2 * latRad) + 0.00117 * cos(4 * latRad)
    print("latDist = \(self.latDist)")
}
func getMapUrl() {
    self.dx = (maxHoleLon - minHoleLon) * lonDist
    self.dy = (maxHoleLat - minHoleLat) * latDist
    // the map is square, but the hole is not
    // check if the hole has less x than y
    if dx < dy {
        mapMaxLat = maxHoleLat
        mapMinLat = minHoleLat
        let midLon = (maxHoleLon + minHoleLon) / 2
        mapMaxLon = midLon + dy / 2 / lonDist
        mapMinLon = midLon - dy / 2 / lonDist
    } else {
        mapMaxLon = maxHoleLon
        mapMinLon = minHoleLon
        let midLat = (maxHoleLat + minHoleLat) / 2
        mapMaxLat = midLat + dx / 2 / latDist
        mapMinLat = midLat - dx / 2 / latDist
    }
    self.imageUrl = URL(string: "https://api.mapbox.com/styles/v1/mapbox/satellite-v9/static/[\(mapMinLon),\(mapMinLat),\(mapMaxLon),\(mapMaxLat)]/1000x1000?logo=false&access_token=\(token)")
    print("\(imageUrl)")
}

Get XY Tile Coordinate at Z Zoom Level with Leaflet

I have figured out how to get XYZ coordinates by extending Leaflet with a createTile function.
But what I want to know is: how do I get the XY tile name/coordinate at a fixed Z zoom level around my GPS coordinates, even if I'm not zoomed in?
Why? I'm working on a P2P/decentralized version of Uber, and the XY tile coordinates are a good common/shared location index for users to look up/subscribe/query against. That is, everybody within that X-mile radius will know the same XY coordinate name and can use it as a deterministic key to find each other.
This project will "Convert lon, lat to screen pixel x, y from 0, 0 origin, at a certain zoom level." https://github.com/mapbox/sphericalmercator
UPDATED:
function lng2tile(lon,z) { return (Math.floor((lon+180)/360*Math.pow(2,z))) }
function lat2tile(lat,z) { return (Math.floor((1-Math.log(Math.tan(lat*Math.PI/180) + 1/Math.cos(lat*Math.PI/180))/Math.PI)/2 *Math.pow(2,z))) }
Or try this:
var row = Math.floor((location.lng + 180) / (360 / Math.pow(2, zoomLevel)));
var col = Math.floor((90 + (location.lat * -1)) / (180 / Math.pow(2, (zoomLevel - 1))));
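The same (standard OSM/Leaflet) tile math as the first pair of functions, as a small Python sketch with the deterministic-key usage described in the question (the coordinates and zoom level are only illustrative):

import math

def lat_lng_to_tile(lat, lng, z):
    # standard slippy-map tile indices (same math as lng2tile/lat2tile above)
    n = 2 ** z
    x = int((lng + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

zoom = 14
x, y = lat_lng_to_tile(51.5074, -0.1278, zoom)
tile_key = str(zoom) + "/" + str(x) + "/" + str(y)  # shared lookup key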

Draw circle using latitude and longitude

I want to plot a latitude and longitude using MATLAB. Using that latitude and longitude as the center of the circle, I want to plot a circle with a radius of 5 nautical miles.
r = 5/60;
nseg = 100;
x = 25.01;
y = 55.01;
theta = 0 : (2 * pi / nseg) : (2 * pi);
pline_x = r * cos(theta) + x;
pline_y = r * sin(theta) + y;
hold all
geoshow(pline_x, pline_y)
geoshow(x, y)
The circle does not look like what I expected.
Drawing a circle on the Earth is more complex than it looks.
Drawing a line or a polyline is simple, because the vertices are given.
Not so with a circle.
A circle is defined by all points having the same distance from the center (in meters! not in degrees!).
Unfortunately, lat and lon coordinates do not have the same scale.
(The distance spanned by one degree of latitude is always approx. 111.3 km, while for longitude this is only true at the equator. Towards the poles the distance spanned by one degree of longitude approaches zero. In Europe the factor is about 0.6, i.e. cos(48°).)
There are two solutions; the first is more universal and useful for nearly all problems (a sketch follows below):
1. Convert the spherical coordinates (of the circle center) to a Cartesian plane with unit = 1 m, using a transformation (e.g. the equidistant, also called equirectangular, projection; it works with the cos(centerLat) compensation factor).
2. Calculate the points (e.g. the circle points) in the x,y plane using school mathematics.
3. Transform all (x, y) points back to spherical (lat, lon) coordinates, using the inverse of the transformation from step 1.
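A minimal Python sketch of this first approach (111,320 m per degree of latitude is an assumed rough constant, 1 nautical mile is taken as 1852 m, and the center from the question is treated as latitude 25.01, longitude 55.01):

import math

METERS_PER_DEG_LAT = 111320.0  # rough average, fine for small circles

def circle_on_earth(center_lat, center_lon, radius_m, nseg=100):
    # local equirectangular projection around the center, circle in meters,
    # then back to lat/lon degrees (steps 1-3 above)
    cos_lat = math.cos(math.radians(center_lat))
    points = []
    for i in range(nseg + 1):
        theta = 2 * math.pi * i / nseg
        dx = radius_m * math.cos(theta)  # east offset in meters
        dy = radius_m * math.sin(theta)  # north offset in meters
        lat = center_lat + dy / METERS_PER_DEG_LAT
        lon = center_lon + dx / (METERS_PER_DEG_LAT * cos_lat)
        points.append((lat, lon))
    return points

circle = circle_on_earth(25.01, 55.01, 5 * 1852.0)  # 5 NM radius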
The other solution (see the sketch after these steps):
1. Write a function which draws an ellipse in a defined rectangle (all Cartesian x, y).
2. Define the bounding box of the circle to draw:
2a. Calculate the north-south diameter of the circle in degrees. This is a bit tricky: the distance is defined in meters, so you need a transformation to get the latitude span. One degree of latitude is approx. 111.3 km (Earth circumference / 360.0). With this meters_per_degree value, calculate the N-S distance in degrees.
2b. Calculate the E-W span in degrees. Now it gets trickier: calculate as in 2a, but divide by cos(centerLatitude) to compensate for the fact that E-W distances need more degrees further north to cover the same number of meters.
Now draw the ellipse in the rectangle, using the N-S and E-W spans as height and width.
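And a small Python sketch of steps 2a/2b, using the circumference/360 approximation named above:

import math

METERS_PER_DEG = 40075017.0 / 360.0  # Earth circumference / 360, ~111.3 km

def circle_bounding_spans(center_lat, radius_m):
    # N-S and E-W diameters of the circle, in degrees
    ns_span_deg = 2 * radius_m / METERS_PER_DEG
    ew_span_deg = ns_span_deg / math.cos(math.radians(center_lat))
    return ns_span_deg, ew_span_deg

ns, ew = circle_bounding_spans(25.01, 5 * 1852.0)  # e.g. a 5 NM circle at 25.01 deg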
But a circle on a sphere only looks like a circle on the projected monitor display (or paper) near the center of the projection. This is illustrated by:
Tissot's indicatrix (error ellipse)

How to draw real world coordinates rotated relative to the device around a center coordinate?

I'm working on a simple location-aware game where the current location of the user is shown on a game map, as well as the locations of other players around him. It's not using MKMapView but a custom game map with no streets.
How can I translate the lat/long coordinates of other players into CGPoint values to represent them on the world-scale game map, with a fixed scale like 50 meters = 50 points on screen, and orient all the points such that the user can see in which direction he would have to go to reach another player?
The key goal is to generate CGPoint values from lat/long coordinates for a flat, top-down view, but to orient the points around the user's current location, similar to the orient-map feature (the arrow) of Google Maps, so you know where is what.
Are there frameworks which do the calculations?
First you have to transform lon/lat to Cartesian x, y in meters.
Next is the direction to the other players: the direction is given by dy and dx, where dy = player2.y - me.y, and likewise for dx. Normalize dy and dx by dividing by the distance between player2 and me.
You receive
ny = dy / sqrt(dx*dx + dy*dy)
nx = dx / sqrt(dx*dx + dy*dy)
Multiply by 50. Now you have a point 50 m in the direction of player2:
comp2x = 50 * nx;
comp2y = 50 * ny;
Now center the map on me.x/me.y and apply the screen-to-meter scale.
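A language-agnostic sketch of that math in Python (the 111,320 m per degree constant, the 1 point per meter scale, and the screen-center values are assumptions for illustration; on iOS you would build a CGPoint from the result):

import math

METERS_PER_DEG_LAT = 111320.0  # rough value for the sketch

def player_to_screen(me_lat, me_lon, other_lat, other_lon,
                     points_per_meter=1.0, screen_center=(50.0, 50.0)):
    # equirectangular projection around "me", offsets in meters,
    # then scaled into screen points around screen_center
    cos_lat = math.cos(math.radians(me_lat))
    dx = (other_lon - me_lon) * METERS_PER_DEG_LAT * cos_lat  # east, meters
    dy = (other_lat - me_lat) * METERS_PER_DEG_LAT            # north, meters
    sx = screen_center[0] + dx * points_per_meter  # 50 m east = 50 points right
    sy = screen_center[1] - dy * points_per_meter  # screen y grows downward
    return sx, sy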
You want MKMapPointForCoordinate from MapKit. This converts from latitude-longitude pairs to a flat surface defined by an x and y. Take a look at the documentation for MKMapPoint which describes the projection. You can then scale and rotate those x,y pairs into CGPoints as needed for your display. (You'll have to experiment to see what scaling factors work for your game.)
To center the points around your user, just subtract the value of their x and y position (in MKMapPoints) from the points of all other objects. Something like:
MKMapPoint userPoint = MKMapPointForCoordinate(userCoordinate);
MKMapPoint otherObjectPoint = MKMapPointForCoordinate(otherCoordinate);
otherObjectPoint.x -= userPoint.x; // center around your user
otherObjectPoint.y -= userPoint.y;
CGPoint otherObjectCenter = CGPointMake(otherObjectPoint.x * 0.001, otherObjectPoint.y * 0.001);
// Using (50, 50) as an example for where your user view is placed.
userView.center = CGPointMake(50, 50);
otherView.center = CGPointMake(50 + otherObjectCenter.x, 50 + otherObjectCenter.y);