I'm trying to create a view that presents a bunch of coordinates without using a map.
The user's coordinates should be at the center of the screen (in the middle of the circle),
and the rest of the coordinates will be laid out relative to one another according to their real latitudes and longitudes.
Something like this:
I understood that I can't do this with MapKit because it would be a violation of the Google license, so I need a way to place and manage the coordinates myself.
What is the best practice for something like this? How should I convert the coordinates to screen points?
I'd go about it as follows:
1) I'd start by normalizing around the user's current location (translate it to (0,0), then apply the same translation to the rest of the coordinates).
2) Once you've done that, use a distance function to find out which coordinate in your list is furthest from your current location.
3) Use that furthest coordinate to determine the scale of your view.
4) Calculate the X & Y screen coordinates of all your locations based on the scale you came up with in #3.
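Here is a minimal sketch of those four steps in Swift. The equirectangular scaling (multiplying the longitude offset by cos(latitude)) is my own assumption for keeping east-west and north-south offsets comparable over small areas, and the type and property names are purely illustrative.

```swift
import CoreLocation
import CoreGraphics

// A rough sketch of the four steps above (hypothetical helper type).
struct RadarLayout {
    let userLocation: CLLocationCoordinate2D
    let viewSize: CGSize

    /// Returns a screen point for each coordinate, with the user at the center of the view.
    func points(for coordinates: [CLLocationCoordinate2D]) -> [CGPoint] {
        // Step 1: translate every coordinate so the user sits at (0, 0).
        // Longitude is scaled by cos(latitude) so both axes use comparable units (assumption).
        let offsets: [(dx: Double, dy: Double)] = coordinates.map { c in
            let dx = (c.longitude - userLocation.longitude) * cos(userLocation.latitude * .pi / 180)
            let dy = (c.latitude - userLocation.latitude)
            return (dx, dy)
        }

        // Step 2: find the coordinate furthest from the user.
        let maxDistance = offsets.map { sqrt($0.dx * $0.dx + $0.dy * $0.dy) }.max() ?? 1

        // Step 3: scale so the furthest coordinate just fits inside the view.
        let radius = Double(min(viewSize.width, viewSize.height)) / 2
        let scale = maxDistance > 0 ? radius / maxDistance : 1

        // Step 4: convert to screen points (y is flipped because screen y grows downward).
        let center = CGPoint(x: viewSize.width / 2, y: viewSize.height / 2)
        return offsets.map { o in
            CGPoint(x: center.x + CGFloat(o.dx * scale),
                    y: center.y - CGFloat(o.dy * scale))
        }
    }
}
```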
I've been tasked with creating a Google Earth Web link programmatically when given coordinates. I have the street address as well, where I'd ideally like to drop a pin.
For example, I can get a link to the White House using its lat/lon at a distance of 150 meters like this:
https://earth.google.com/web/#38.8976633,-77.0365739,150d
If I search using the Google Earth web app, I can generate a link with a pin, where a few of the parameters in the link change slightly:
https://earth.google.com/web/#38.8976763,-77.0365298,18.0497095a,800.41606338d,35y,0h,45t,0r/data=ChIaEAoIL20vMDgxc3EYAiABKAIoAg
Am I able to dynamically generate the data element, or whichever element creates the pin, at my desired location? I've also had trouble finding the correct distance (d) and elevation (a) parameters for my links.
As you found, you can generate links to specific views in the Google Earth web client by adding the correct parameters to the URL, including the latitude, longitude, and altitude (a) of the view target, and the distance (d) of the camera from that target. Note that altitude and distance are both in meters, and altitude is above sea level, not above ground elevation. If you look at the a and d parameters that Earth puts in the URL as you fly around, the altitude will often be the terrain (or building-top) elevation at the target lat/lon, and the distance will be how far the camera is from that altitude. The other available parameters include heading (h) and roll (r).
So long as your tilt (t) remains zero, altitude and distance are effectively interchangeable; if both are > 0, they are summed for the final camera height above sea level. But if you add a tilt (zero degrees is looking straight down), then the altitude determines the elevation of the view target (above the lat/lon location), and the distance determines how far the camera is from that point. If you make d=0, the altitude defines both the view target and the camera height above sea level. If you make a=0, the distance is measured from the lat/lon at sea level (even if that is underground).
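As a rough sketch, a view URL following that pattern can be assembled like this in Swift. The helper name and the parameter order are simply copied from the example links above, not from any documented API, so treat the exact format as an assumption.

```swift
import Foundation

/// Builds an Earth for Web view URL following the pattern of the example links above.
/// The parameter order (altitude `a`, distance `d`, fov `y`, heading `h`, tilt `t`, roll `r`)
/// is inferred from those URLs; there is no documented API for this, so it is a guess.
func earthWebURL(latitude: Double, longitude: Double,
                 altitude: Double, distance: Double,
                 heading: Double = 0, tilt: Double = 0, roll: Double = 0) -> URL? {
    let fragment = String(format: "%.7f,%.7f,%.2fa,%.2fd,35y,%.0fh,%.0ft,%.0fr",
                          latitude, longitude, altitude, distance, heading, tilt, roll)
    return URL(string: "https://earth.google.com/web/#" + fragment)
}

// Example: roughly the White House view from the first link.
let url = earthWebURL(latitude: 38.8976633, longitude: -77.0365739, altitude: 0, distance: 150)
```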
Unfortunately there's no way to manually construct the data parameter, as it can contain many different things. To do that properly would require an API, which Earth for Web currently does not provide. Hopefully that kind of functionality will come after Earth finishes its work to become cross-browser compatible via WebAssembly. Until then, there is no way to add a point to the map via just a URL.
In order to cache tiles for offline use, I tried to calculate tile coordinates for a certain zoom level. The calculated x coordinates were correct, but the y coordinates were not.
This old example compares the actually received coordinates with the calculated ones (click on the map to display the results).
I was using map.project(latlng, zoom) to get the projected coordinates and then dividing by the tileSize, which is 256. Is this approach even correct?
EDIT :
Thanks to Ivan Sanchez for the pointer about the y inversion in TMS. After projecting the point with map.project(latlng, zoom), you need to invert the y coordinate as follows:
You calculate _globalTileRange(zoom) for the corresponding zoom level, then
InvertedY = _globalTileRange(zoom).max.y - y ;
Here is another link that shows the correct calculation of the y coordinates for the current zoom of the map; for other zoom levels, the globalTileRange needs to be recalculated accordingly.
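For reference, here is the standard slippy-map tile math as a sketch in Swift (not Leaflet's own code). It assumes a square tile pyramid, where _globalTileRange(zoom).max.y equals 2^zoom - 1, which is what makes the TMS inversion below equivalent to the formula above.

```swift
import Foundation

/// XYZ ("slippy map") tile coordinates for a lat/lng at a given zoom,
/// equivalent to projecting the point and dividing by a 256px tile size.
func tileCoordinates(lat: Double, lng: Double, zoom: Int) -> (x: Int, y: Int) {
    let n = pow(2.0, Double(zoom))
    let latRad = lat * .pi / 180
    let x = Int(floor((lng + 180) / 360 * n))
    let y = Int(floor((1 - log(tan(latRad) + 1 / cos(latRad)) / .pi) / 2 * n))
    return (x, y)
}

/// TMS flips the y axis; for a square pyramid, max.y at a zoom is 2^zoom - 1,
/// so this matches invertedY = _globalTileRange(zoom).max.y - y above.
func tmsY(fromXYZ y: Int, zoom: Int) -> Int {
    return (1 << zoom) - 1 - y
}
```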
Regards,
Your approach is correct. However:
In order to get the tile coordinates loaded by Leaflet, you are looping through all the loaded images and outputting the min/max of those values.
The problem with this approach is that Leaflet doesn't immediately unload off-screen tiles. See the keepBuffer option, bug #4039 and PR #4650.
In order to fetch the bounds of tiles visible within the map bounds, see the private methods used internally by L.GridLayer around this line of code.
In TMS, the y coordinate goes up, while in non-TMS tiles it goes down. This is because TMS was designed by geographers, for whom the y coordinate is the northing, whereas non-TMS tiles were initially created by computer programmers, who interpret the y coordinate as downward pixels.
For more background, read https://wiki.openstreetmap.org/wiki/TMS#The_Y_coordinate and https://wiki.osgeo.org/wiki/Tile_Map_Service_Specification#TileMap_Diagram and https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames#X_and_Y
I have the x, y coordinates of a feature from a single photograph, and I know the camera parameters. How can I get the 3D coordinates of that feature (in MATLAB)? Please help me.
Take a look at this: http://www.cim.mcgill.ca/~langer/558/4-cameramodel.pdf
Systems like this that I've seen before require that you know where the camera is (latitude and longitude) and in which direction it is pointing (azimuth and elevation), together with its field of view. Then you project this onto the geodata of the environment, and from there you can do all kinds of things, like finding the 3D position of an object based on its location in the photograph.
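As a rough illustration (in Swift rather than MATLAB, and assuming a simple pinhole model with known focal lengths, principal point, and camera pose), a single image only gives you a ray per pixel; the 3D point comes from intersecting that ray with something you know, such as a ground plane or terrain model:

```swift
import simd

/// Back-projects a pixel into a world-space ray and intersects it with a flat
/// ground plane at height 0 (a stand-in for real terrain data). Sketch only:
/// fx, fy are focal lengths in pixels, (cx, cy) the principal point,
/// R the camera-to-world rotation, and t the camera position in world coordinates.
func groundPoint(px: Double, py: Double,
                 fx: Double, fy: Double, cx: Double, cy: Double,
                 cameraRotation R: simd_double3x3,
                 cameraPosition t: simd_double3) -> simd_double3? {
    // Ray direction in camera coordinates (z pointing forward).
    let dirCamera = simd_normalize(simd_double3((px - cx) / fx, (py - cy) / fy, 1))
    // Rotate the ray into world coordinates.
    let dirWorld = R * dirCamera
    // Intersect with the plane z = 0.
    guard abs(dirWorld.z) > 1e-9 else { return nil }
    let s = -t.z / dirWorld.z
    guard s > 0 else { return nil }          // intersection is behind the camera
    return t + s * dirWorld
}
```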
I'm currently working on a mapping app for iPhone. I've created some custom maps of various sizes, but I've run into an issue:
I would like to implement the ability for users' locations to be checked automatically, but since I'm not using a MapView this is much more difficult (see below).
Given the different coordinate systems, I would like to receive a geolocation (green dot) and translate it into a pixel location on a custom map.
I've got the geolocations of the 4 corners, but the rect is askew. I've calculated the angle of rotation, but I'm just generally confused.
Note: the maps aren't big enough for the spherical nature of the Earth to come into the calculation.
Any help is appreciated!
To convert a geolocation to a point you first need to understand the mapping; I'll assume you are using Mercator.
x = R * long
y = R * ln( (1 + sin(lat)) / cos(lat) )
where lat and long are in radians and R is the radius of the Earth. The projected x values span roughly -π*R to π*R,
so to get it within view.frame.size you may have to divide by a scale factor.
For the difference between two points:
x2 - x1 = R * (long2 - long1)
y2 - y1 = R * ( ln((1 + sin(lat2)) / cos(lat2)) - ln((1 + sin(lat1)) / cos(lat1)) )
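A small Swift sketch of that math, including the log term, might look like the following. The corner-based scaling into the view is an assumption for a small custom map, and all names here are illustrative.

```swift
import CoreLocation
import CoreGraphics

let earthRadius = 6_378_137.0   // meters

/// Mercator projection of a coordinate to meters: x = R*long, y = R*ln((1+sin(lat))/cos(lat)).
func mercator(_ c: CLLocationCoordinate2D) -> (x: Double, y: Double) {
    let lat = c.latitude * .pi / 180
    let lng = c.longitude * .pi / 180
    return (earthRadius * lng, earthRadius * log((1 + sin(lat)) / cos(lat)))
}

/// Maps a coordinate into a view, given the coordinates of the map's
/// top-left and bottom-right corners (hypothetical inputs for this sketch).
func point(for c: CLLocationCoordinate2D,
           topLeft: CLLocationCoordinate2D,
           bottomRight: CLLocationCoordinate2D,
           in size: CGSize) -> CGPoint {
    let p = mercator(c), tl = mercator(topLeft), br = mercator(bottomRight)
    let x = (p.x - tl.x) / (br.x - tl.x) * Double(size.width)
    let y = (tl.y - p.y) / (tl.y - br.y) * Double(size.height)   // screen y grows downward
    return CGPoint(x: x, y: y)
}
```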
I want to know how to convert longitude and latitude to the equivalent x, y coordinate components in iPhone programming. (I am using only Core Location and want to show a point on the iPhone screen without any map.)
thanks
Well, the exact conversion depends on exactly which part of the Earth you want to show, and the stretching along longitude varies with latitude, at least in Mercator.
That being said, even if you don't want to display an actual MapKit map, it would probably be easiest to create an MKMapView and keep it off to one side. If you set the area you want to display appropriately on it (by setting the region property), you can use convertCoordinate:toPointToView: to map from longitude and latitude to a 2D screen location.
Note that MKMapView adjusts the region you set so that it makes sense for the viewport it's been given (e.g., if you gave it a region that was a short, fat rectangle, but the view it had was a tall, thin rectangle, it'd pick the smallest region that covers the entire short, fat rectangle but has the shape of a tall, thin rectangle), so don't get confused if you specify a region whose top left is a particular geolocation, but then that geolocation isn't at the exact top left of the view.
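For example, a minimal sketch in Swift (the frame, region, and coordinate values are placeholders):

```swift
import MapKit

// Keep an MKMapView off screen and use it purely as a projection helper.
let mapView = MKMapView(frame: CGRect(x: 0, y: 0, width: 320, height: 480))
mapView.region = MKCoordinateRegion(
    center: CLLocationCoordinate2D(latitude: 38.8977, longitude: -77.0365),
    span: MKCoordinateSpan(latitudeDelta: 0.05, longitudeDelta: 0.05))

// Convert a coordinate to a point in the map view's own coordinate space;
// if your display view has the same frame, the point can be used directly.
let coordinate = CLLocationCoordinate2D(latitude: 38.90, longitude: -77.03)
let screenPoint = mapView.convert(coordinate, toPointTo: mapView)
```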