In order to cache tiles for offline use, I tried to calculate tile coordinates for a given zoom level. The calculated x coordinates were correct, but the y coordinates were not.
This example compares the actually received tile coordinates with the calculated ones (click on the map to display the results).
I was using map.project(latlng, zoom) to get the projected coordinates and then dividing by the tileSize, which is 256. Is this approach even correct?
EDIT:
Thanks to Ivan Sanchez for pointing out the y inversion in TMS. After projecting the point with map.project(latlng, zoom), you need to invert the y coordinate as follows:
You calculate _globalTileRange(zoom) for the corresponding zoom level, then:
invertedY = _globalTileRange(zoom).max.y - y;
Here is another link that shows the correct calculation of the y coordinate for the map's current zoom; for other zoom levels, the _globalTileRange needs to be recalculated accordingly.
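For illustration, here is a minimal self-contained sketch of the whole calculation (written in Objective-C; logTileForCoordinate is a hypothetical name, and it assumes the standard 256-pixel Web Mercator scheme that map.project(latlng, zoom) uses):

#import <Foundation/Foundation.h>
#import <math.h>

// Hypothetical helper: computes the XYZ tile indices for a lat/lng at a given
// zoom (equivalent to map.project(latlng, zoom) divided by tileSize = 256),
// then derives the inverted (TMS) y as described above.
static void logTileForCoordinate(double lat, double lng, int zoom) {
    double n = pow(2.0, zoom);                 // tiles per axis at this zoom
    double latRad = lat * M_PI / 180.0;
    long x = (long)floor((lng + 180.0) / 360.0 * n);
    long y = (long)floor((1.0 - log(tan(latRad) + 1.0 / cos(latRad)) / M_PI) / 2.0 * n);
    long invertedY = (long)n - 1 - y;          // _globalTileRange(zoom).max.y is 2^zoom - 1
    NSLog(@"z=%d x=%ld y=%ld invertedY=%ld", zoom, x, y, invertedY);
}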
Your approach is correct. However:
In order to get the tile coordinates loaded by Leaflet, you are looping through all the loaded images and outputting the min/max of those values.
The problem with this approach is that Leaflet doesn't immediately unload off-screen tiles. See the keepBuffer option, bug #4039 and PR #4650.
In order to fetch the bounds of tiles visible within the map bounds, see the private methods used internally by L.GridLayer around this line of code.
In TMS, the y coordinate goes up; in non-TMS tiles it goes down. This is because TMS was designed by geographers, for whom the y coordinate is the northing, whereas non-TMS tiles were initially done by computer programmers, who interpret the y coordinate as downward pixels.
For more background, read https://wiki.openstreetmap.org/wiki/TMS#The_Y_coordinate and https://wiki.osgeo.org/wiki/Tile_Map_Service_Specification#TileMap_Diagram and https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames#X_and_Y
I need to represent some points on the map (iPhone), such as (88, 60) or (90, 55), but the custom annotations representing these points get deallocated. I also noticed that these points are not actually displayed on the Google map; they are somehow above the visible map. This happens for any point above 85° N latitude or below 85° S latitude.
I know it's a really old post, but just to answer your question.
Most (almost all) commercial maps are displayed in the Mercator projection. This is what you see in MapKit or on Google Maps, and it means that the latitude and longitude lines run horizontally and vertically.
If you changed this to, for instance, a polar projection (the world seen from the top), it would become far too difficult to calculate the position of objects, because the latitude and longitude lines converge rapidly...
So it's just for ease of use.
I would like to create an MKCoordinateRegion (to zoom to the right region on the map) from the northeast and southwest points given by Google. For that I need to compute the coordinate of the center between these two coordinates. Any clue? I could do simple math, but I would have problems with the equator...
Thanks!!!
Assuming you mean the anti-meridian and not the equator, here goes. (While all this works on a flattened map and should be good enough for your purpose, it's completely bung on a sphere; see the note at the bottom.)
What I've done in other cases is start at either point, and if the other point is more than 180 degrees to the east, convert it so that it is less than 180 to the west, like so:
if (pointa.lon - pointb.lon > 180)
    pointb.lon += 360;
else if (pointa.lon - pointb.lon < -180)
    pointb.lon -= 360;
At this point pointb.lon might be an invalid longitude like 190, but you can at least work out the midpoint between pointa and pointb, because they are now on a continuous scale. So you might have longitudes 175 and 190: the midpoint between them is 182.5, and converting that back into the usual limits gives -177.5 as the longitude between the two points. Working out the latitude is easy. (A compact sketch follows the note below.)
Of course on a sphere this is wrong because the midpoint between (-180,89) and (180,89) is (0*,90) not (0,89).
* = could be anything
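Putting the above together, here is a compact sketch (midpointBetween is an illustrative name; this is the flat-map version, and the spherical caveat above still applies):

#import <CoreLocation/CoreLocation.h>

static CLLocationCoordinate2D midpointBetween(CLLocationCoordinate2D a,
                                              CLLocationCoordinate2D b) {
    double lonB = b.longitude;
    // Unwrap b so both longitudes sit on one continuous scale.
    if (a.longitude - lonB > 180.0)       lonB += 360.0;
    else if (a.longitude - lonB < -180.0) lonB -= 360.0;

    double midLat = (a.latitude + b.latitude) / 2.0;   // latitude is the easy part
    double midLon = (a.longitude + lonB) / 2.0;
    // Re-normalize to the usual [-180, 180] range.
    if (midLon > 180.0)       midLon -= 360.0;
    else if (midLon < -180.0) midLon += 360.0;
    return CLLocationCoordinate2DMake(midLat, midLon);
}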
Also, couldn't you just use zoomToRect with the defined corners? It would save you this calculation, and the next one too, which would be to work out what zoom level you need when centered on that point to include the two corners you know about. Since the Maps app doesn't appear to scroll over the anti-meridian, I assume MKMapView can't either, so your rectangle is going to have the northeast coordinate as the top right and the southwest as the bottom left.
This SO post has the code to zoom a map view to fit all its annotations.
Can anyone confirm that setRegion "snaps" to predefined zoom levels and whether or not this behavior is as designed (although undocumented) or a known bug? Specifically, it appears that setRegion snaps to the same zoom levels that correspond to the zoom levels used when the user double-taps the map.
I'm trying to restore a previously saved region but this behavior makes it impossible if the saved region was set via a pinch zoom and not a double-tap zoom.
A big clue to me that things are broken on the MapKit side is what occurs if I call regionThatFits on the map's current region. It should return the same region (since it obviously fits the map's frame), but it returns the region that corresponds to the next higher predefined zoom level instead.
setVisibleMapRect behaves similarly.
Any further insight or information would be appreciated.
I found these related posts, but neither includes a solution or a definitive confirmation that this is in fact a MapKit bug:
MKMapView setRegion: odd behavior?
MKMapView show incorrectly saved region
EDIT:
Here is an example that demonstrates the problem. All values are valid for my map view's aspect ratio:
MKCoordinateRegion initialRegion;
initialRegion.center.latitude = 47.700200f;
initialRegion.center.longitude = -122.367109f;
initialRegion.span.latitudeDelta = 0.065189f;
initialRegion.span.longitudeDelta = 0.067318f;
[map setRegion:initialRegion animated:NO];
NSLog(@"DEBUG initialRegion: %f %f %f %f", initialRegion.center.latitude, initialRegion.center.longitude, initialRegion.span.latitudeDelta, initialRegion.span.longitudeDelta);
NSLog(@"DEBUG map.region: %f %f %f %f", map.region.center.latitude, map.region.center.longitude, map.region.span.latitudeDelta, map.region.span.longitudeDelta);
OUTPUT:
DEBUG initialRegion: 47.700199 -122.367111 0.065189 0.067318
DEBUG map.region: 47.700289 -122.367096 0.106287 0.109863
Note the discrepancy in the latitude/longitude delta values. The map's values are almost double what I requested. The larger values correspond to one of the zoom levels used when the user double-taps the map.
Yes, it snaps to discrete levels. I've done quite a bit of experimentation, and it seems to like multiples of 2.68220906e-6 degrees of longitude per pixel.
So if your map fills the whole width of the screen, the first level spans .0008583 degrees; the next level up you can get is twice that, .001717, the next one twice that again, .003433, and so on. I'm not sure why they chose to normalize by longitude; it means that the fixed zoom levels vary depending on what part of the world you are looking at.
I've also spent a lot of time trying to understand the significance of that number, 2.68220906e-6 degrees. It comes out to about 30 cm at the equator, which kind of makes sense, since the high-resolution photos used by Google Maps have a 30 cm resolution. But I would have expected them to use latitude instead of longitude to establish the zoom levels; that way, at maximum zoom, you always get the native resolution of the satellite images. Who knows, they probably have some smart-people reason for making it work like that.
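If it helps, here is a rough sketch of that observation (the base unit and the 320-pixel map width are empirical assumptions, not documented behavior):

// Assumed: MapKit snaps to spans that are power-of-two multiples of an
// observed base unit of 2.68220906e-6 degrees of longitude per pixel.
double degreesPerPixelBase = 2.68220906e-6;
double mapWidthPixels = 320.0;                 // full-screen map width
double requestedSpan = 0.067318;               // the longitudeDelta you asked for

double snappedSpan = degreesPerPixelBase * mapWidthPixels;  // finest level, .0008583
while (snappedSpan < requestedSpan) {
    snappedSpan *= 2.0;                        // each level doubles the span
}
NSLog(@"requested %f, expect roughly %f", requestedSpan, snappedSpan);

With the numbers from the question above, this lands on 0.109863, which matches the map.region output in the question.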
In my application I need to display a certain range of latitude. I'm gonna work on some code to try to zoom the map as close as possible to that. If anyone is interested, contact me.
I found a solution.
If the snapped zoom level you receive is, let's say, a factor of 1.2 bigger than the desired one, use this algorithm to correct it.
Assumption: you want the map view to show exactly longitudinalMeters from left to right.
1) Calculate the correction scale:
Calculate the ratio of the longitudinal span you actually got to the one you asked for.
MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(center, 0, longitudinalMeters);
MKCoordinateRegion regionFits = [mapView regionThatFits: region];
double correctionFactor = regionFits.span.longitudeDelta / region.span.longitudeDelta;
2) Create the transformation and apply it to the map:
CGAffineTransform mapTransform = CGAffineTransformMakeScale(correctionFactor, correctionFactor);
CGAffineTransform pinTransform = CGAffineTransformInvert(mapTransform);
[mapView setTransform:mapTransform];
3) Apply the inverse transformation to the map pins, to keep them at their original size:
for (id<MKAnnotation> annotation in self.mapView.annotations)
{
    [[self.mapView viewForAnnotation:annotation] setTransform:pinTransform];
}
The weird behavior seems to be due to the fact that while you request a particular region or view size, the actual API call to Google is invoked with a center point and a zoom level, e.g.:
map.setCenter(new google.maps.LatLng(47.7002, -122.3671), 15);
Now it would be possible for Apple to request the appropriate zoom level and then adjust the sizing of the view to accommodate the actual region request, but it seems they fail to do so. I am drawing bus routes on a map, and one of my routes barely triggers a larger zoom level and thus scales too small (under-zooms) and looks ugly and smashed.
@pseudopeach, please update me on the progress of your attempts to work around this issue. If one could detect the boundaries of a zoom level, the region request could then be deliberately under-scaled to avoid the under-zoom. Since you are onto this, I would be interested in seeing your code before I have to attempt it myself.
There is an interesting category that the author of the blog Backspace Prolog has written to enable direct manipulation of the Google Maps API by emulating its setCenter(centerPoint, zoomLevel) call signature. You can find it here. I haven't spent the time yet, but the math could probably be reverse-engineered to yield a means of calculating the zoom level for a given region or MKMapRect. Depending on how far it is within the zoom level's range - i.e. how far it is over the threshold that triggers the lower zoom level - it could decide whether to go to the lower level or keep to the higher one by under-requesting.
This is clearly a behavioral bug that needs to be fixed so that MKMapView can be used in a more refined manner.
This is an old question, but I recently investigated Google maps in detail, and can share some insight. I don't know whether this is also valid for the current Apple maps.
The reason the resolution snaps to predefined zoom levels is that the original maps fetched from Google's servers are drawn at those zoom levels. The features on those maps are drawn with a certain resolution in mind. For example, the width (in pixels) of a road is always the same. On higher-resolution maps, more secondary roads are drawn, but their width is always the same. The resolution snaps to predefined levels to make sure those features are always depicted at the same size. That is, it is not a bug but a feature.
Those predefined resolutions vary with latitude because of the Mercator projection of the maps. The Mercator projection is easy to work with because latitude lines are depicted straight and horizontal and longitude lines straight and vertical. But with the Mercator projection, the top of the map has a slightly higher resolution than the bottom (on the Northern hemisphere). That has consequences for fitting maps together at the northern and southern edges.
That is, when you start at the equator and drive north, the resolution of the Mercator maps you drive across gradually increases. The longitude lines remain vertical, so the longitude span remains the same. But the resolution increases, and therefore the latitude span decreases. Still, on all those maps the roads have the same width in pixels, texts are depicted in the same font size, etc.
Google uses a Mercator projection in which the equator is 256 pixels long at zoom level 0. Each next zoom level doubles that: at zoom level 1 the equator is 512 pixels long, at zoom level 2 it is 1024 pixels, etc. The earth model they use is an FAI globe with a radius of exactly 6371 km, i.e. a circumference of 40030 km.
Therefore the resolution at zoom level 0 is 156.37 km/pixel at the equator; at zoom level 1 it is 78.19 km/pixel, etc. Anywhere else on earth, those resolutions vary with the cosine of the latitude.
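As a sketch of that arithmetic (metersPerPixel is an illustrative helper, using the 40030 km circumference given above):

#include <math.h>

// Resolution at a given zoom level and latitude, for a 256-pixel zoom-0 equator.
static double metersPerPixel(int zoomLevel, double latitudeDegrees) {
    double equatorMeters = 40030.0 * 1000.0;             // FAI globe circumference
    double equatorPixels = 256.0 * pow(2.0, zoomLevel);  // doubles per zoom level
    return (equatorMeters / equatorPixels) * cos(latitudeDegrees * M_PI / 180.0);
}
// metersPerPixel(0, 0.0) is about 156,367 m, i.e. the 156.37 km/pixel above.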
MKCoordinateRegion region;
region.center.latitude = latitude;
region.center.longitude = longitude;
region.span.latitudeDelta = 5.0;
region.span.longitudeDelta = 5.0;
[mapView setRegion:region animated:YES];
I restore the region with no problem and with no variance as you describe. It is really impossible to tell what is specifically wrong in your case without some code to look at but here's what works for me:
Save both the center and span values somewhere. When you are restoring them specifically set both the center and span.
Restoring should look like this:
MKCoordinateRegion initialRegion;
initialRegion.center.latitude = Value you've stored
initialRegion.center.longitude = Value you've stored
initialRegion.span.latitudeDelta = Value you've stored
initialRegion.span.longitudeDelta = Value you've stored
[self.mapView setRegion:initialRegion animated:NO];
Also remember that these methods are available in 4.0: mapRectThatFits: and mapRectThatFits:edgePadding:. mapRectThatFits: helpfully adds a reasonable border to ensure that, say, a map annotation on the edge is not obscured and the rect you're attempting to display is fully visible. If you want to control the border, use the variant that lets you set edgePadding as well.
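For example (a sketch; someRect stands for whatever MKMapRect you want shown):

MKMapRect fitted = [mapView mapRectThatFits:someRect
                                edgePadding:UIEdgeInsetsMake(20, 20, 20, 20)];
[mapView setVisibleMapRect:fitted animated:YES];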
If you set up the MapView in InterfaceBuilder, make sure you don't do this:
_mapView = [[MKMapView alloc] init];
As soon as I removed this init line, my map view suddenly began responding properly to all the updates I sent it. I suspect that if you do the alloc/init, you're actually creating another map view that isn't shown anywhere. The one you see on screen is the one initialized by your nib; the one you alloc/init is something else entirely, and it's not going to do anything visible.
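In other words, if the map view lives in your nib, just declare and connect the outlet and let the nib create the instance. A minimal sketch (MyViewController is an illustrative name):

#import <MapKit/MapKit.h>

@interface MyViewController : UIViewController
// Connected in Interface Builder; the nib allocates the map view for you.
@property (nonatomic, retain) IBOutlet MKMapView *mapView;
@end

// Do NOT also do: _mapView = [[MKMapView alloc] init];
// Just send messages to self.mapView and the on-screen map will respond.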
I have a static map image with a bunch of circles and squares on it that depict cities. I have loaded the image into an image view nested inside a scroll view so that I can capture user touches and zoom/scroll across the map. My challenge is that I want to pop up a label whenever a user touches one of these circles/squares, to tell them which city it is and possibly load a detail view for it. I figured I could pre-load all the cities' CGPoints relative to the map image into a dictionary so I can reference them during a touchesBegan event, but I'm quickly getting in over my head and possibly going about this the wrong way.
So far everything is working, and I can capture the CGPoint x and y coordinates of touches. The biggest issue I have is determining the proximity of a user's touch to a discrete point in the dictionary. In other words, if the dictionary has "Boston = NSPoint: {235, 118};", how can I tell that a user is close to that point without making them repeat the touch until it is exact? Is there an easy way to determine whether a touch is "close" to a pre-existing point? Am I going about this the right way?
Any advice or slaps in the back of the head are welcome.
Thanks, Mike
You could use UIButtons to represent the cities. You'd then get the standard touch, highlight, etc. behaviors with less effort. Adding the buttons as subviews of your map image should make them scale and scroll along with the map.
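A minimal sketch of that idea (the point {235, 118} is the Boston example from the question, and cityTapped: is a hypothetical action method):

imageView.userInteractionEnabled = YES;   // required for subview buttons to get touches

UIButton *cityButton = [UIButton buttonWithType:UIButtonTypeCustom];
cityButton.frame = CGRectMake(235 - 22, 118 - 22, 44, 44); // 44x44 hit area centered on the city
[cityButton addTarget:self action:@selector(cityTapped:)
     forControlEvents:UIControlEventTouchUpInside];
cityButton.tag = 1;                       // map tag -> city however you prefer
[imageView addSubview:cityButton];        // scales and scrolls with the map image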
If I understand correctly, you want to know whether the point at which the user tapped is "close" enough to a point that is marked as a city.
You would have to quantify "close", i.e. set a threshold distance: taps farther than it are ignored, taps within it count as a hit.
Once you do that, calculate the Euclidean distance sqrt((x1-x2)^2 + (y1-y2)^2) for each element (read: each dictionary entry with a city's x,y values) and store the results in another array, then take the minimum. The index of that minimum is the city closest to the tap, provided the distance is less than the threshold; a sketch follows below.
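A sketch of that loop (cityPoints is a hypothetical NSDictionary mapping city names to CGPoints boxed in NSValues):

CGPoint tap = [touch locationInView:imageView];
CGFloat threshold = 50.0 * 50.0;          // squared cutoff, e.g. 50 px
NSString *closestCity = nil;
CGFloat bestDistance = CGFLOAT_MAX;

for (NSString *city in cityPoints) {
    CGPoint p = [[cityPoints objectForKey:city] CGPointValue];
    CGFloat dx = tap.x - p.x, dy = tap.y - p.y;
    CGFloat d = dx * dx + dy * dy;        // squared distance; no sqrt needed
    if (d < bestDistance) { bestDistance = d; closestCity = city; }
}
if (closestCity != nil && bestDistance <= threshold) {
    // show the label / detail view for closestCity
}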
You can either use an R-tree, or you can calculate the proximity of the touch to each visible point in the current view. To calculate the proximity you would normally use the Pythagorean theorem, but in this case you can skip the square root because you're only comparing relative sizes. You can also declare a distance cutoff if you like, say 50 pixels, squared to 2500. So you'd put each result into an object containing the distance and a reference to the point, add the objects to an NSMutableArray (not adding results over your cutoff), and select the minimum result.
So if you have a touched point pT, then for each point pN, you'd calculate:
d=(pT.x-pN.x)*(pT.x-pN.x) + (pT.y-pN.y)*(pT.y-pN.y); //d is the squared distance
The point pN with the minimum d is the point that was closest to pT. And like I said if you want only touches within 10 pixels to count, you can test that d <= 10*10;
The method of testing for touches within a 20x20 square area works too, except that if two points lie within 20 pixels of each other, you need to know which of them is closest to the touch.