Swift: plot coordinates on a UIImageView with a world map

I am trying to plot some coordinates on Earth onto a UIImage which contains a map of the world. (I don't want to use maps)
See an example of the UIImageView below:
As you can see it's working out pretty well, but the mapping from coordinates to x/y is incorrect!
Amsterdam's coordinates are: (52.36666489, 4.883333206) and the Center's are (0,0).
I've done the following things to try to make this happen but unfortunately this isn't working out:
I first tried to 'normalize' the coordinates, since latitude ranges from -90 to 90 and longitude from -180 to 180. This is done by adding 90 to the real latitude and 180 to the real longitude, which yields the 'normalized' versions:
let normalizedLat = location.coordinate.latitude + 90.0
let normalizedLng = location.coordinate.longitude + 180.0
After that I calculated the scale factors that normalizedLat and normalizedLng should be multiplied by:
let heightScaleFactor = mapImageView.frame.height / 180.0
let widthScaleFactor = mapImageView.frame.width / 360.0
After I've got the scaling factors, I can finally calculate the coordinates:
let x = Double(widthScaleFactor * CGFloat(normalizedLng))
let y = Double(heightScaleFactor * CGFloat(normalizedLat))
dot.frame = CGRect(x: x, y: y, width: Double(dot.frame.width), height: Double(dot.frame.height))
But for some strange reason Amsterdam is not on the Amsterdam spot and the Center is not on the Center spot.
I am quite sure that my calculations have gone wrong. Any ideas?

Remember, in iOS the origin is in the top-left, not the bottom-left. Positive-y goes down, not up.
You need to factor that in.
dot.frame = CGRect(x: x, y: mapImageView.frame.height - y, width: Double(dot.frame.width), height: Double(dot.frame.height))
Also note that the equator in your image is not in the middle. It's lower in the image so you need to add an additional offset in your calculation of the y value based on the equator's offset in the image.
dot.frame = CGRect(x: x, y: mapImageView.frame.height - y + equatorOffset, width: Double(dot.frame.width), height: Double(dot.frame.height))
It's also possible that your map projection doesn't have a simple linear latitude scale. 0-10 degrees might be 12 pixels while 10-20 degrees might be 11 pixels, etc. and 80-90 is only 3 pixels (or whatever).
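Putting those corrections together, here is a minimal Swift sketch, assuming an equirectangular (plate carrée) map image; equatorOffset is a value you would have to measure from your particular image, and the dot is centred on the computed point rather than having its frame origin there:
import UIKit
import CoreLocation

// Convert a coordinate to a point inside the image view. The y value is flipped
// because iOS's origin is top-left while latitude grows towards the top of the map.
// dot and mapImageView are the views from the question.
func point(for coordinate: CLLocationCoordinate2D,
           in mapImageView: UIImageView,
           equatorOffset: CGFloat = 0) -> CGPoint {
    let width = mapImageView.frame.width
    let height = mapImageView.frame.height
    let x = width * CGFloat(coordinate.longitude + 180.0) / 360.0
    let y = height - height * CGFloat(coordinate.latitude + 90.0) / 180.0
    return CGPoint(x: x, y: y + equatorOffset)
}

// Amsterdam should now land in the upper half, slightly right of centre,
// and (0, 0) at the image centre shifted down by the equator offset.
let amsterdam = CLLocationCoordinate2D(latitude: 52.36666489, longitude: 4.883333206)
dot.center = point(for: amsterdam, in: mapImageView)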


Get the exact satellite image for a given Lat/Long bbox rectangle?

For a visualization I need an optical satellite image for a specific rectangular AOI defined by two lat/long coordinates. I tried the Mapbox Static Images API, which takes a lat/long bounding box and a width/height resolution in pixels for the output. The problem is that it looks to me like, if the aspect ratio of the lat/long box is not the same as the w/h in pixels, padding is added to the lat/long bounding box to fill the w/h of the pixel image.
That would prevent me from combining the optical image with the other data, because I would not know which image pixel (roughly) corresponds to which lat/long coordinate.
I see three "solutions", but I don't know how to achieve any of them:
1. "Make" Mapbox return the images without padding.
2. Compute the correct w/h pixel ratio from the lat/long coordinates, so there would be no padding. Maybe with https://en.wikipedia.org/wiki/Equirectangular_projection as discussed here: https://stackoverflow.com/a/16271669/380038?
3. Find a way to determine the lat/long coordinates of the optical satellite image so I can cut off the possible padding.
I checked How can I extract a satellite image from google maps given a Lat Long Rectangle?, but I would prefer to use my existing paid Mapbox account and I got the impression that I still wouldn't get the exact optical image or the exact corner coordinates of the optical image.
The Mapbox Static Images API serves maps.
You have an optical image from another source.
You want to overlay these data.
Right?
Note the Red and Green pins: the waypoints are at opposite corners on Mapbox.
After equirectangular correction, Mapbox matches OpenStreetMap (little wonder), but the Google coordinates are quite close too.
curl -g "https://api.mapbox.com/styles/v1/mapbox/streets-v11/static/[17.55490,47.10434,17.55718,47.10543]/600x419?access_token=YOUR_TOKEN_HERE" --output example-walk-600x419-nopad.png
What is your scale? 1 km - 100 km?
What is your source of optical image?
What is the required accuracy?
Just to mention, optical images have their own sources of distortions.
In practice:
You must have the extent of your non-optical satellite data (let's preserve the mist around it...). I'll call it ((x1, y1), (x2, y2)). We are coders, not cartographers, right!?
If you feed your extent to https://docs.mapbox.com/playground/static/ as
min longitude = x1, min latitude = y1, max longitude = x2, max latitude = y2
Select "Bounding box" entry! Do you see mapbox around your data!? Don't mind the exact dimensions, just check if mapbox is related to your data! May be you have to swap some values to get to the right corner of the globe.
If you have the right ((x1, y1), (x2, y2)) coordinates, do the equirectangular transformation to get the right pixel size.
You've called it Solution #2.
Let's say the width of your non-optical satellite data is Wd and the height is Hd.
The Mapbox image will fit your data if you ask for a width of Wm and a height of Hm of Mapbox data, where
Wm = Wd
Hm = Wd * (y2 - y1) / ((x2 - x1) * cos(y1)), with cos taking the latitude y1 in radians
Now you can pull the Mapbox image by
curl -g "https://api.mapbox.com/styles/v1/mapbox/streets-v11/static/[<x1>,<y1>,<x2>,<y2>]/<Wm>x<Hm>?access_token=<YOUR_TOKEN>" --output overlay.png
If (Hd == Hm)
then { you are lucky :) the two images just fit each other }
else { the two images cover the same area, but you have to scale the height of one of them to make them match }
Well... almost. You have not revealed what size of area you want to cover. The equation above is just an approximation which works up to the size of a smaller country (~100 km or so). For continent scale you probably have to apply more accurate formulas.
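Here is a small Swift sketch of that calculation (the function name is hypothetical; x values are longitudes and y values latitudes, in degrees):
import Foundation

// Pixel height that gives the Mapbox static image the same aspect ratio
// as the equirectangular footprint of the bounding box.
func mapboxHeight(widthPixels: Double,
                  x1: Double, y1: Double, x2: Double, y2: Double) -> Int {
    let latRad = y1 * .pi / 180                   // cos() wants the latitude in radians
    let groundWidth = (x2 - x1) * cos(latRad)     // the longitude span shrinks with latitude
    let groundHeight = y2 - y1
    return Int((widthPixels * groundHeight / groundWidth).rounded())
}

// The bbox from the example curl call above comes out close to the 419 px used there:
let Hm = mapboxHeight(widthPixels: 600,
                      x1: 17.55490, y1: 47.10434, x2: 17.55718, y2: 47.10543)  // ~421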
In my opinion, your #2 idea is the way to go. You do have the LLng bbox, so all that remains is to calculate its "real" size in pixels.
Let us say that you want (or can allow, or can afford) a resolution of 50m per pixel, and the area is small enough not to have distortions (i.e., a rectangle of say 1 arcsecond of latitude and 1 arcsecond of longitude has top and bottom sides of the same length, with an error less than your chosen resolution). These are, I believe, very loose requisites and easy to fulfill.
Then, you just need to calculate the distance between the (Lat1, Lon1) and (Lat1, Lon2) points, and between (Lat1, Lon1) and (Lat2, Lon1). Divide those distances in meters by 50, and you'll get the exact number of pixels:
      Lon1            Lon2
Lat1  +---------------+
      |               |
      |               |
Lat2  +---------------+
And you have a formula for that - the haversine formula.
If you need higher precision, you could resort to Vincenty's formula for an oblate spheroid (there is a JavaScript library for it). On the MT site (first link) there is a live calculator that you can use to plug in data from your calls and verify whether the approach is indeed working: plug in your bounding box, get the distance in meters, divide, and you get the pixel size of the image. If the image is good, chances are you can go with the simpler haversine; if it isn't, then there has to be some further quirk in the maps API (its projection, perhaps) that keeps it from returning the expected bounding box. But that seems unlikely.
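A Swift sketch of that idea (hypothetical names), using the haversine formula and a chosen resolution of 50 m per pixel:
import Foundation

// Great-circle distance in metres between two lat/lon points (haversine formula).
func haversineMeters(lat1: Double, lon1: Double, lat2: Double, lon2: Double) -> Double {
    let r = 6_371_000.0                           // mean Earth radius in metres
    let p1 = lat1 * .pi / 180, p2 = lat2 * .pi / 180
    let dLat = (lat2 - lat1) * .pi / 180
    let dLon = (lon2 - lon1) * .pi / 180
    let a = sin(dLat / 2) * sin(dLat / 2) + cos(p1) * cos(p2) * sin(dLon / 2) * sin(dLon / 2)
    return 2 * r * atan2(sqrt(a), sqrt(1 - a))
}

// Corners as in the diagram above (here filled with the bbox from the earlier curl example).
let (lat1, lon1) = (47.10543, 17.55490)
let (lat2, lon2) = (47.10434, 17.55718)
let metersPerPixel = 50.0
let widthPx  = haversineMeters(lat1: lat1, lon1: lon1, lat2: lat1, lon2: lon2) / metersPerPixel
let heightPx = haversineMeters(lat1: lat1, lon1: lon1, lat2: lat2, lon2: lon1) / metersPerPixel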
I've had this exact problem when using a satellite image on an Apple Watch. I overlay some markers and a path, and I convert everything from coordinates to pixels. Below is my code to determine the exact bbox and request the image:
var maxHoleLat = 52.5738902
var maxHoleLon = 4.9577606
var minHoleLat = 52.563994
var minHoleLon = 4.922364
var mapMaxLat = 0.0
var mapMaxLon = 0.0
var mapMinLat = 0.0
var mapMinLon = 0.0
var imageUrl: URL?   // declared here so the snippet is complete; holds the request URL
let token = "your token"
var resX = 1000.0
var resY = 1000.0
let screenX = 184.0
let screenY = 224.0 // 448/2 = 224 - navbarHeight
let navbarHeight = 0.0
var latDist = 111000.0
var lonDist = 111000.0
var dx = 0.0
var dy = 0.0

func latLonDist() {
    // calgary.rasc.ca/latlong.htm
    let latRad = maxHoleLat * .pi / 180
    // distance covered by 1 degree of longitude at the given latitude
    self.lonDist = 111412.88 * cos(latRad) - 0.09350 * cos(3 * latRad) + 0.00012 * cos(5 * latRad)
    print("lonDist = \(self.lonDist)")
    // distance covered by 1 degree of latitude at the given latitude
    self.latDist = 111132.95 - 0.55982 * cos(2 * latRad) + 0.00117 * cos(4 * latRad)
    print("latDist = \(self.latDist)")
}

func getMapUrl() {
    self.dx = (maxHoleLon - minHoleLon) * lonDist
    self.dy = (maxHoleLat - minHoleLat) * latDist
    // the map is square, but the hole is not
    // check if the hole spans less in x than in y
    if dx < dy {
        mapMaxLat = maxHoleLat
        mapMinLat = minHoleLat
        let midLon = (maxHoleLon + minHoleLon) / 2
        mapMaxLon = midLon + dy / 2 / lonDist
        mapMinLon = midLon - dy / 2 / lonDist
    } else {
        mapMaxLon = maxHoleLon
        mapMinLon = minHoleLon
        let midLat = (maxHoleLat + minHoleLat) / 2
        mapMaxLat = midLat + dx / 2 / latDist
        mapMinLat = midLat - dx / 2 / latDist
    }
    self.imageUrl = URL(string: "https://api.mapbox.com/styles/v1/mapbox/satellite-v9/static/[\(mapMinLon),\(mapMinLat),\(mapMaxLon),\(mapMaxLat)]/1000x1000?logo=false&access_token=\(token)")
    print("\(String(describing: imageUrl))")
}

How does UIScreen.main.nativeBounds relate to x and y coordinates?

If I print UIScreen.main.nativeBounds.height it returns 2048 for my iPad simulator, which is fine. But how does this relate to the x and y coordinates? For example, the middle of the screen is x: 0, y: 0. If I move to the left on the x axis I use minus, like x: -100, and if I move to the right I use plus, like x: 100.
In my head this would mean that UIScreen.main.nativeBounds.height is divided by 2, so that I can move between x: -1024 and x: 1024. Is that correct? Well, it seems that it's not, because this code:
portrait = gui.addPortrait(width: 80, height: 80, x:-350 , y: 180)
addChild(portrait)
will take me to the far end of the device's x-axis, meaning that the total x-axis must be something like 750. What happened to 1024? I think I must be misunderstanding how the x coordinates relate to UIScreen.main.nativeBounds.height. The reason I want to understand this is that I want to set the x position dynamically so that it works on all devices, something like x: -(UIScreen.main.nativeBounds.height / 3). What am I missing?
In contrast to UIScreen.bounds, which specifies "[t]he bounding rectangle of the screen, measured in points" (docs), UIScreen.nativeBounds specifies "[t]he bounding rectangle of the physical screen, measured in pixels" (docs).
Thus, you'll want to transform the values into points with the help of UIScreen.nativeScale:
let width = round(UIScreen.main.nativeBounds.width / UIScreen.main.nativeScale)
let height = round(UIScreen.main.nativeBounds.height / UIScreen.main.nativeScale)
You can get the height and width of the screen in points like this:
let height = UIScreen.main.bounds.height
let width = UIScreen.main.bounds.width
No point of the screen has negative coordinates:
The top left-hand corner of the screen is x: 0, y: 0
The top right-hand corner of the screen is x: width, y: 0
The bottom left-hand corner of the screen is x: 0, y: height
The bottom right-hand corner of the screen is x: width, y: height
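For the original goal of a device-independent x offset, here is a minimal sketch, assuming the scene's coordinate space is centred on the screen and matches the screen's point size (the ~750-point span you observed suggests your scene may have its own size or scale mode, in which case use the scene's size instead):
import UIKit

// Work in points (bounds), not pixels (nativeBounds).
let screenWidth = UIScreen.main.bounds.width   // use .height instead if your x axis maps to the long side
let dynamicX = -(screenWidth / 3)              // one third of the screen to the left of centre
// e.g. portrait = gui.addPortrait(width: 80, height: 80, x: Double(dynamicX), y: 180)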

Map distance to zoom in Google Static Maps

I am using Google Static Maps to display maps in my AppleTV app. What I need is to somehow map a distance of e.g. 1km to the zoom parameter of the Static Maps API.
In other words, I have an imageView in which I wish to load the map image. If I know that the height of my imageView is 400px and I wish for this map to show a real Earth surface of 1000m from North to South, how would I tell the API to return the map with this exact zoom?
I found a very similar question here, however no suitable answer is provided.
As stated at Google Maps Documentation:
Because the basic Mercator Google Maps tile is 256 x 256 pixels.
Also note that at every zoom level n, the map is 2^n tiles across.
Meaning that at zoom level 2, the pixels in any direction of the map are 256 * 2² = 1024px.
Taking into account that the earth has a perimeter of ~40,000 kilometers, in zoom 0, every pixel ~= 40,000 km/256 = 156.25 km
At zoom 9, pixels are 131072: 1px = 40,000 km / 131072 = 0.305 km ... and so on.
If we want 400px = 1km, we have to choose the closest approximation possible, so: 1px = 1km/400 = 0.0025km
I tried zoom = 15 and obtained 1px = 0.00478 and zoom = 16 that gave me 1px = 0.00238km
Meaning that you should use zoom = 16, and you will have 0.955 km every 400px at the Equator, and only for x coordinates.
As you go north or south in latitude, the perimeter gets smaller each time, which changes the distance per pixel. And of course it also changes the correlation on the y axis, since the projection of a sphere is tricky.
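Here is a quick Swift sketch of that arithmetic (the helper names are hypothetical; it assumes 256 px Web Mercator tiles and an equatorial circumference of roughly 40,075 km):
import Foundation

// Ground distance covered by one pixel at a given zoom level and latitude.
func metersPerPixel(zoom: Int, latitude: Double) -> Double {
    let equatorMeters = 40_075_016.686
    let pixelsAcross = 256.0 * pow(2.0, Double(zoom))   // 256 * 2^zoom
    return equatorMeters * cos(latitude * .pi / 180) / pixelsAcross
}

// Smallest zoom at which the given number of pixels covers no more than the given distance.
func zoom(for meters: Double, pixels: Double, latitude: Double) -> Int {
    for z in 0...20 where metersPerPixel(zoom: z, latitude: latitude) * pixels <= meters {
        return z
    }
    return 20
}

// 1000 m over 400 px near the equator gives 16, matching the reasoning above.
let z = zoom(for: 1000, pixels: 400, latitude: 0)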
If you want to calculate the exact distance with a function, you should use the one provided by Google in their documentation:
// Describe the Gall-Peters projection used by these tiles.
gallPetersMapType.projection = {
    fromLatLngToPoint: function(latLng) {
        var latRadians = latLng.lat() * Math.PI / 180;
        return new google.maps.Point(
            GALL_PETERS_RANGE_X * (0.5 + latLng.lng() / 360),
            GALL_PETERS_RANGE_Y * (0.5 - 0.5 * Math.sin(latRadians)));
    },
    fromPointToLatLng: function(point, noWrap) {
        var x = point.x / GALL_PETERS_RANGE_X;
        var y = Math.max(0, Math.min(1, point.y / GALL_PETERS_RANGE_Y));
        return new google.maps.LatLng(
            Math.asin(1 - 2 * y) * 180 / Math.PI,
            -180 + 360 * x,
            noWrap);
    }
};

How to draw real world coordinates rotated relative to the device around a center coordinate?

I'm working on a simple location-aware game where the current location of the user is shown on a game map, as well as the locations of other players around him. It's not using MKMapView but a custom game map with no streets.
How can I translate the lat/long coordinates of the other players into CGPoint values to represent them on the world-scale game map, with a fixed scale like 50 meters = 50 points on screen, and orient all the points such that the user can see in which direction he would have to go to reach another player?
The key goal is to generate CGPoint values from lat/long coordinates for a flat, top-down view, but to orient the points around the user's current location, similar to the orient-map feature (the arrow) of Google Maps, so you know where is what.
Are there frameworks which do the calculations?
First you have to transform lon/lat to Cartesian x, y in meters.
Next is the direction in degrees to each of the other players. The direction is given by dy/dx, where dy = player2.y - me.y, and the same for dx. Normalize dy and dx by dividing by the distance between player2 and me.
You receive
ny = dy / sqrt(dx*dx + dy*dy)
nx = dx / sqrt(dx*dx + dy*dy)
Multiply by 50. Now you have a point 50 m in the direction of player2:
comp2x = 50 * nx;
comp2y = 50 * ny;
Now center the map on me.x/me.y and apply the screen-to-meter scale.
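A rough Swift sketch of this approach (the helper is hypothetical; it uses the simple approximation that one degree of latitude is about 111,320 m and one degree of longitude about 111,320 * cos(latitude) m, which is fine at game scale):
import CoreGraphics
import CoreLocation

// Convert another player's coordinate into a point relative to me,
// with 1 metre = 1 point by default.
func gamePoint(for player: CLLocationCoordinate2D,
               relativeTo me: CLLocationCoordinate2D,
               metersPerPoint: Double = 1.0) -> CGPoint {
    let metersPerDegreeLat = 111_320.0
    let metersPerDegreeLon = 111_320.0 * cos(me.latitude * .pi / 180)
    let dx = (player.longitude - me.longitude) * metersPerDegreeLon   // east-west, metres
    let dy = (player.latitude - me.latitude) * metersPerDegreeLat     // north-south, metres
    // Screen y grows downwards, so flip dy; then scale metres to points.
    return CGPoint(x: dx / metersPerPoint, y: -dy / metersPerPoint)
}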
You want MKMapPointForCoordinate from MapKit. This converts from latitude-longitude pairs to a flat surface defined by an x and y. Take a look at the documentation for MKMapPoint which describes the projection. You can then scale and rotate those x,y pairs into CGPoints as needed for your display. (You'll have to experiment to see what scaling factors work for your game.)
To center the points around your user, just subtract the value of their x and y position (in MKMapPoints) from the points of all other objects. Something like:
MKMapPoint userPoint = MKMapPointForCoordinate(userCoordinate);
MKMapPoint otherObjectPoint = MKMapPointForCoordinate(otherCoordinate);
otherObjectPoint.x -= userPoint.x; // center around your user
otherObjectPoint.y -= userPoint.y;
CGPoint otherObjectCenter = CGPointMake(otherObjectPoint.x * 0.001, otherObjectPoint.y * 0.001);
// Using (50, 50) as an example for where your user view is placed.
userView.center = CGPointMake(50, 50);
otherView.center = CGPointMake(50 + otherObjectCenter.x, 50 + otherObjectCenter.y);
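For the orientation part (the Google Maps style arrow), here is a minimal Swift sketch of a hypothetical helper that rotates an already-centred point by the user's compass heading before it is scaled into view coordinates; flip the sign of the angle if the rotation comes out mirrored for your coordinate convention:
import CoreGraphics
import Foundation

// Rotate a point around the origin so that the direction the user is facing
// (heading, in degrees clockwise from north) ends up pointing "up" on screen.
func rotate(_ p: CGPoint, byHeadingDegrees heading: Double) -> CGPoint {
    let a = -heading * .pi / 180              // rotate the world opposite to the device
    let x = Double(p.x), y = Double(p.y)
    return CGPoint(x: x * cos(a) - y * sin(a),
                   y: x * sin(a) + y * cos(a))
}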

Can't Understand Angle of Inclination Calculation using Accelerometer on iPhone

double rollingZ = acceleration.x;
double rollingX = acceleration.y;

if (rollingZ > 0.0) {
    self.centerCoordinate.inclination = atan(rollingX / rollingZ) + M_PI / 2.0;  // LINE 1
} else if (rollingZ < 0.0) {
    self.centerCoordinate.inclination = atan(rollingX / rollingZ) - M_PI / 2.0;  // LINE 2
} else if (rollingX < 0) {
    self.centerCoordinate.inclination = M_PI / 2.0;  // atan returns a radian
} else if (rollingX >= 0) {
    self.centerCoordinate.inclination = 3 * M_PI / 2.0;
}
I'm just trying to fully understand this piece of code. I'm looking to build AR apps on the iPhone, and this code has the function of calculating the angle of inclination of the device using the accelerometer readings.
My understanding is this:
Assuming a portrait orientation: if I roll the device forward, the accelerometer's x value moves towards -1.0 (i.e. the device is laid flat with the screen facing up). If I tilt the device towards me, the x value moves towards 1.0 (until the device is flat, facing the ground).
The y axis value varies between -1.0 and 0.0 (0 implies the device is horizontal).
If we take some example readings, say x = 0.5 (a -45 degree angle, tilting the device towards me) and y = 0.8, and I plot this on a Cartesian graph with y (rollingX) as the vertical axis and x (rollingZ) as the horizontal axis and draw a line to that point, I understand that I can use the inverse tangent function (atan) to calculate the angle. My confusion comes on LINE 1: I don't understand why that line adds 90 degrees (in radians) to the angle given by the atan function.
I just can't seem to visualise on a graph what's going on. If someone could shed some light on this, that would be much appreciated.
I suppose that these +90 degrees (or -90 degrees in the case of negative rollingZ) are added to bring the inclination value into the widely used polar coordinate convention, with angles between -180 and 180 degrees.
Assuming that the Z axis projects upward when you look at the screen of the device, i.e. the Z axis points at you out of the screen, the result of the calculations above will give you the angle between the screen plane and the horizontal plane.
Let us assume that the acceleration value is positive when it goes "inside" the device:
1) The device is in vertical position: we have rollingZ = 1, rollingX = 0. The code returns 90 degrees.
2) The device is tilted towards the user: let rollingZ be 0.7 and rollingX be -0.7. This gives a 45 degree angle.
3) The device is upside down: now we have rollingZ = -1 and rollingX = 0, and the result is -90 degrees.
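As a quick numeric check of those three cases, here is the same branch logic transcribed to Swift (same variable names as the snippet in the question):
import Foundation

func inclination(rollingZ: Double, rollingX: Double) -> Double {
    if rollingZ > 0.0 { return atan(rollingX / rollingZ) + Double.pi / 2.0 }
    if rollingZ < 0.0 { return atan(rollingX / rollingZ) - Double.pi / 2.0 }
    return rollingX < 0 ? Double.pi / 2.0 : 3 * Double.pi / 2.0
}

print(inclination(rollingZ: 1.0, rollingX: 0.0) * 180 / Double.pi)    //  90.0 (vertical)
print(inclination(rollingZ: 0.7, rollingX: -0.7) * 180 / Double.pi)   //  45.0 (tilted towards the user)
print(inclination(rollingZ: -1.0, rollingX: 0.0) * 180 / Double.pi)   // -90.0 (upside down)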