In 3.1 I've been using an "offscreen" MKMapView to create map images that I can rotate, crop and so forth before presenting them to the user. In 3.2 and 4.0 this technique no longer works quite right. Here's some code that illustrates the problem, followed by my theory.
// create map view
_mapView = [[MKMapView alloc] initWithFrame:CGRectMake(0, 0, MAP_FRAME_SIZE, MAP_FRAME_SIZE)];
_mapView.zoomEnabled = NO;
_mapView.scrollEnabled = NO;
_mapView.delegate = self;
_mapView.mapType = MKMapTypeSatellite;
// zoom in to something enough to fill the screen
MKCoordinateRegion region;
CLLocationCoordinate2D center = {30.267222, -97.763889};
region.center = center;
MKCoordinateSpan span = {0.1, 0.1 };
region.span = span;
_mapView.region = region;
// set scrollview content size to fill the imageView
_scrollView.contentSize = _imageView.frame.size;
// force it to load
#ifndef __IPHONE_3_2
// in 3.1 we can render to an offscreen context to force a load
UIGraphicsBeginImageContext(_mapView.frame.size);
[_mapView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIGraphicsEndImageContext();
#else
// in 3.2 and above, the renderInContext trick doesn't work...
// this at least causes the map to render, but it's clipped to what appears to be
// the viewPort size, plus some padding
[self.view addSubview:_mapView];
#endif
When the map is done loading, I snap a picture of it and stuff it into my scroll view:
- (void)mapViewDidFinishLoadingMap:(MKMapView *)mapView {
    NSLog(@"[MapBuilder] mapViewDidFinishLoadingMap");
    // render the map to a UIImage
    UIGraphicsBeginImageContext(mapView.bounds.size);
    // the first sublayer is just the map, the second is the Google layer; this sublayer structure might change of course
    [[[mapView.layer sublayers] objectAtIndex:0] renderInContext:UIGraphicsGetCurrentContext()];
    // we are done with the mapView at this point, we need its RAM!
    _mapView.delegate = nil;
    [_mapView removeFromSuperview];
    [_mapView release];
    _mapView = nil;
    UIImage* mapImage = [UIGraphicsGetImageFromCurrentImageContext() retain];
    UIGraphicsEndImageContext();
    _imageView.image = mapImage;
    [mapImage release], mapImage = nil;
}
The first problem is that in 3.1, rendering to a context would trigger the map to begin loading. This no longer works in 3.2 or 4.0. The only thing I have found that will trigger the load is to temporarily add the map to the view (i.e. make it visible). The problem is that the map then only renders to the visible area of the screen, plus a little padding. The frame/bounds are fine, but MapKit appears to "helpfully" optimize the loading, limiting the tiles to those visible on the screen or close to it.
Any ideas how to force the map to load at full size? Anyone else have this issue?
I'm trying to trace a route freehand on an MKMapView using overlays (MKOverlay).
Each time the finger moves, I extend the polyline from the last coordinate to the new coordinate. Everything works fine, except that while the polyline overlay is being extended, the whole overlay blinks on the device (only sometimes), so I can't trace the problem.
The code I have tried is given below.
- (void)viewDidLoad
{
    [super viewDidLoad];
    j = 0;
    coords1 = malloc(2 * sizeof(CLLocationCoordinate2D));
    coordinatearray = [[NSMutableArray alloc] init];
    UIPanGestureRecognizer *gestureRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(gestureDetacted:)];
    [self.myMapView addGestureRecognizer:gestureRecognizer];
}
- (void)gestureDetacted:(UIPanGestureRecognizer *)recognizer
{
    if (UIGestureRecognizerStateBegan == recognizer.state)
    {
        CGPoint point = [recognizer locationInView:self.myMapView];
        CLLocationCoordinate2D tapPoint = [self.myMapView convertPoint:point toCoordinateFromView:self.view];
        CLLocation *curLocation = [[CLLocation alloc] initWithLatitude:tapPoint.latitude longitude:tapPoint.longitude];
        [coordinatearray addObject:curLocation];
    }
    coords1[0] = [[coordinatearray objectAtIndex:j] coordinate];
    if (UIGestureRecognizerStateChanged == recognizer.state)
    {
        j++;
        CGPoint point = [recognizer locationInView:self.myMapView];
        CLLocationCoordinate2D tapPoint = [self.myMapView convertPoint:point toCoordinateFromView:self.view];
        CLLocation *curLocation = [[CLLocation alloc] initWithLatitude:tapPoint.latitude longitude:tapPoint.longitude];
        [coordinatearray addObject:curLocation];
        coords1[1] = CLLocationCoordinate2DMake(tapPoint.latitude, tapPoint.longitude);
        polyLine = [MKPolyline polylineWithCoordinates:coords1 count:2];
        [self.myMapView addOverlay:polyLine];
    }
}
In the overlay delegate:
- (MKOverlayView *)mapView:(MKMapView *)mapView viewForOverlay:(id <MKOverlay>)overlay {
    if ([overlay isKindOfClass:[MKPolyline class]]) {
        MKPolylineView *polylineView = [[MKPolylineView alloc] initWithPolyline:overlay];
        polylineView.strokeColor = [UIColor orangeColor];
        polylineView.lineWidth = 20;
        polylineView.fillColor = [[UIColor orangeColor] colorWithAlphaComponent:.1];
        return polylineView;
    }
    return nil;
}
Does anybody know why this flickering or blinking effect occurs and how to remove it?
Thanks in advance.
Rather than adding hundreds of really small overlay views (which is computationally expensive), I would remove the polyline overlay and add a new one containing all of the points at each change of the pan recognizer (for a smoother effect, you can first add the new one and then remove the old one). Use your coordinatearray to create an MKPolyline overlay that contains all of the points, rather than just the last two.
You could do something like:
[coordinatearray addObject:curLocation];
CLLocationCoordinate2D* coordArray = malloc(sizeof(CLLocationCoordinate2D)*[coordinatearray count]);
memcpy(coordArray, self.coordinates, sizeof(CLLocationCoordinate2D)*([coordinatearray count]-1));
coordArray[[coordinatearray count]-1] = curLocation.coordinate;
free(self.coordinates);
self.coordinates = coordArray;
MKPolyline *polyline = [MKPolyline polylineWithCoordinates:coordArray count:[coordinatearray count]];
MKPolyline *old = [[self.mapView overlays] lastObject];
[self.mapView addOverlay:polyline];
[self.mapView removeOverlay:old];
Here self.coordinates is a property of type CLLocationCoordinate2D*; this way you can memcpy the existing array into a new one (memcpy is really efficient) and only need to append the last point, rather than having to loop through every point of the NSArray *coordinatearray.
You also have to change your if (UIGestureRecognizerStateBegan == recognizer.state) branch to store the first tapped point in self.coordinates.
Just something like:
self.coordinates = malloc(sizeof(CLLocationCoordinate2D));
self.coordinates[0] = curLocation.coordinate;
EDIT: I think the problem is that map overlays draw themselves at discrete zoom levels. At certain zoom levels it appears that the overlay first draws itself at the larger zoom level and then at the smaller one (in fact drawing over the previously drawn overlay). When you keep updating the overlay while the user pans at such a zoom level, that is why it keeps flickering. A possible solution would be to use a transparent view that you put on top of the map view, where the drawing is performed while the user keeps moving the finger. Only once the panning gesture ends do you create a map overlay that you "pin" to the map, and you then clear the drawing view. You also need to be careful when redrawing that view: each time specify only the rect that needs to be redrawn and redraw only inside that rect, as redrawing the whole thing each time would cause a flicker in this view as well. This is definitely more coding, but it should work. Check this question for how to incrementally draw on a view.
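A minimal sketch of that transparent-drawing-view idea, assuming a hypothetical DrawingOverlayView laid on top of the map (ARC assumed; the class name and wiring are illustrative, not part of the original code):
// DrawingOverlayView.h (hypothetical helper)
#import <UIKit/UIKit.h>

@interface DrawingOverlayView : UIView
- (void)addPoint:(CGPoint)point;   // append a point and redraw only the affected rect
- (void)clear;                     // remove the temporary trace once the MKPolyline is added
@end

// DrawingOverlayView.m
#import "DrawingOverlayView.h"

@implementation DrawingOverlayView {
    NSMutableArray *_points; // boxed CGPoints in this view's coordinate space
}

- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        _points = [[NSMutableArray alloc] init];
        self.backgroundColor = [UIColor clearColor];
        self.userInteractionEnabled = NO; // let touches fall through to the pan recognizer
    }
    return self;
}

- (void)addPoint:(CGPoint)point {
    CGPoint last = _points.count ? [[_points lastObject] CGPointValue] : point;
    [_points addObject:[NSValue valueWithCGPoint:point]];
    // redraw only the rect covering the new segment (padded by the line width) to avoid flicker
    CGRect dirty = CGRectUnion(CGRectMake(last.x, last.y, 1, 1), CGRectMake(point.x, point.y, 1, 1));
    [self setNeedsDisplayInRect:CGRectInset(dirty, -12, -12)];
}

- (void)clear {
    [_points removeAllObjects];
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    if (_points.count < 2) return;
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(ctx, [UIColor orangeColor].CGColor);
    CGContextSetLineWidth(ctx, 20);
    CGContextSetLineJoin(ctx, kCGLineJoinRound);
    CGPoint first = [[_points objectAtIndex:0] CGPointValue];
    CGContextMoveToPoint(ctx, first.x, first.y);
    for (NSUInteger i = 1; i < _points.count; i++) {
        CGPoint p = [[_points objectAtIndex:i] CGPointValue];
        CGContextAddLineToPoint(ctx, p.x, p.y);
    }
    CGContextStrokePath(ctx);
}

@end
In the pan handler you would call addPoint: with [recognizer locationInView:drawingView] on every change, and on UIGestureRecognizerStateEnded build a single MKPolyline from coordinatearray, add it to the map, and call clear on the drawing view.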
I want to implement something like the iPhone Photos app (the default iPhone app). I ran into difficulties when I load a big image (2500 x 3700). When I scroll from one image to another, I see something like stuttering. To display images I use ImageScrollView from Apple's sample code. Its display method in ImageScrollView.m is:
- (void)displayImage:(UIImage *)image
{
    // clear the previous imageView
    [imageView removeFromSuperview];
    [imageView release];
    imageView = nil;
    // reset our zoomScale to 1.0 before doing any further calculations
    self.zoomScale = 1.0;
    self.imageView = [[[UIImageView alloc] initWithImage:image] autorelease];
    [self addSubview:imageView];
    self.contentSize = [image size];
    [self setMaxMinZoomScalesForCurrentBounds];
    self.zoomScale = self.minimumZoomScale;
}
- (void)setMaxMinZoomScalesForCurrentBounds
{
    CGSize boundsSize = self.bounds.size;
    CGSize imageSize = imageView.bounds.size;
    // calculate min/max zoomscale
    CGFloat xScale = boundsSize.width / imageSize.width;   // the scale needed to perfectly fit the image width-wise
    CGFloat yScale = boundsSize.height / imageSize.height; // the scale needed to perfectly fit the image height-wise
    CGFloat minScale = MIN(xScale, yScale);                // use minimum of these to allow the image to become fully visible
    // on high resolution screens we have double the pixel density, so we will be seeing every pixel if we limit the
    // maximum zoom scale to 0.5.
    CGFloat maxScale = 1.0 / [[UIScreen mainScreen] scale];
    // don't let minScale exceed maxScale. (If the image is smaller than the screen, we don't want to force it to be zoomed.)
    if (minScale > maxScale) {
        minScale = maxScale;
    }
    self.maximumZoomScale = maxScale;
    self.minimumZoomScale = minScale;
}
For my app I have 600 x 600 images, which I show first. When the user scrolls to the next image he initially sees only the 600 x 600 image. Then, in the background, I load the 3600 x 3600 image:
[operationQueue addOperationWithBlock:^{
    UIImage *image = [self getProperBIGImage];
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        ImageScrollView *scroll = [self getCurrentScroll];
        [scroll displayImage:image];
    }];
}];
I see that when the image dimensions are 3600 x 3600 and I want to display it on a 640 x 960 screen, the iPhone spends about 1 second of main-queue time scaling the image, and that's why I can't scroll to the next image during that second.
I want to scale the image because I need the user to be able to zoom it. I tried to use this approach, but it didn't help.
I see some possible solutions:
1) Scale the image for the UIImageView in the background (but I know that the UI should only be changed on the main thread).
2) Use - (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollViewCalled to show only the 600 x 600 image at the beginning and then load the big image when the user tries to zoom (but I tried this, and I still lose 1 second when I init the UIImageView with the big image and return it; and I can't even implement it properly, because the scroll view's scrolling behavior goes wrong (difficult to explain) when I return a different view for different scales):
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollViewCalled
{
    if (!zooming)
    {
        ImageScrollView *scroll = (ImageScrollView *)scrollViewCalled;
        UIImageView *imageView = (UIImageView *)[scroll imageView];
        return imageView;
    }
    else
    {
        UIImageView *bigImageView = [self getBigImageView];
        return bigImageView;
    }
}
Unfortunately, that image is too large for the iPhone to handle. I know on the iPad, the limit is roughly the size of the device screen. Any bigger than that and you'll have to use a CATiledLayer.
I would take a look at the WWDC UIScrollView presentations over the last three years, starting with 2010. In them, they discuss ways to handle large images on the iPhone. The sample code will also get you on your way.
Good luck!
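If the full CATiledLayer route is more than you need, one way to act on option (1) from the question is to do the expensive downscaling on the background queue and only hand the finished bitmap to the main thread. A rough sketch (operationQueue, getProperBIGImage, getCurrentScroll and displayImage: come from the code above; the target size is an assumption you would tune):
[operationQueue addOperationWithBlock:^{
    UIImage *bigImage = [self getProperBIGImage];

    // pick a target size big enough for the zoom you allow, but far smaller than 3600 x 3600
    CGSize targetSize = CGSizeMake(bigImage.size.width / 2.0, bigImage.size.height / 2.0);

    // as of iOS 4, UIImage drawing and the UIGraphics image-context functions are safe
    // to use off the main thread, so the expensive scaling happens here
    UIGraphicsBeginImageContextWithOptions(targetSize, YES, 1.0);
    [bigImage drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        // only the cheap view update runs on the main queue
        ImageScrollView *scroll = [self getCurrentScroll];
        [scroll displayImage:scaledImage];
    }];
}];
You can still keep the full-resolution image around (or move to CATiledLayer) for deep zooming; the point is only that the one-second scaling no longer blocks the main queue.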
I am using this category method for rendering an MKMapView instance into an image:
@implementation UIView (Ext)
- (UIImage*) renderToImage
{
    UIGraphicsBeginImageContext(self.frame.size);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
@end
This works fine. But on an iPhone 4 the rendered image doesn't have the same resolution as it really has on the device. On the device the map view is rendered at 640x920, but the rendered image has a resolution of 320x460.
Then I doubled the size provided to the UIGraphicsBeginImageContext() function, but that filled only the top-left part of the image.
Question: Is there any way to get the map rendered to an image at the full 640x920 resolution?
Try using UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext:
UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
See QA1703 for more details. It says:
Note: Starting from iOS 4, UIGraphicsBeginImageContextWithOptions allows you to provide a scale factor. A scale factor of zero sets it to the scale factor of the device's main screen. This enables you to get the sharpest, highest-resolution snapshot of the display, including a Retina Display.
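Applied to the category from the question, only the context creation changes (a sketch):
@implementation UIView (Ext)
- (UIImage *)renderToImage
{
    // passing 0.0 as the scale uses the device's main screen scale,
    // so on a Retina device the bitmap comes out at the full 2x resolution
    UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
@end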
iOS 7 introduced a new way to generate screenshots of an MKMapView. It is now possible to use the new MKMapSnapshotter API as follows:
MKMapView *mapView = [..your mapview..];
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = mapView.region;
options.mapType = MKMapTypeStandard;
options.showsBuildings = NO;
options.showsPointsOfInterest = NO;
options.size = CGSizeMake(1000, 500);
MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithQueue:dispatch_get_main_queue() completionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (error) {
        NSLog(@"An error occurred: %@", error);
    } else {
        [UIImagePNGRepresentation(snapshot.image) writeToFile:@"/Users/<yourAccountName>/map.png" atomically:YES];
    }
}];
Note that overlays and annotations are currently not rendered into the snapshot. You have to render them onto the resulting snapshot image yourself afterwards. The provided MKMapSnapshot object has a handy helper method to do the mapping between coordinates and points:
CGPoint point = [snapshot pointForCoordinate:locationCoordinate2D];
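As an illustration, compositing a standard pin image onto the snapshot might look roughly like this (a sketch: annotationCoordinate is a placeholder, and it assumes MKPinAnnotationView exposes its pin artwork through its image property):
UIImage *image = snapshot.image;
MKAnnotationView *pinView = [[MKPinAnnotationView alloc] initWithAnnotation:nil reuseIdentifier:nil];
UIImage *pinImage = pinView.image;

UIGraphicsBeginImageContextWithOptions(image.size, YES, image.scale);
[image drawAtPoint:CGPointZero];

// map the annotation's coordinate into the snapshot's image space
CGPoint point = [snapshot pointForCoordinate:annotationCoordinate];
// rough offset so the pin's tip sits on the point
point.x -= pinImage.size.width / 2.0;
point.y -= pinImage.size.height;
[pinImage drawAtPoint:point];

UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();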
I'm implementing a subclass of UIView that displays a gauge dial with a sprite for the indicator. It has an angle property that I can vary to make the needle point at different angles. It works, but the same values for the position of the needle make it show up in different locations on the phone and in the simulator. It's an iPhone 4, so I'm sure the double-resolution thing is behind this, but I don't know what to do about it. I tried setting the UIView's layer's contentScaleFactor but that fails. I thought UIView got the resolution handling for free. Any suggestions?
I should note that the NSLog statements report 150 for both .frame.size dimensions, in both the simulator and on the device.
Here's the .m file
UPDATE: In the simulator, I found how to set the hardware to iPhone 4, and it now looks just like the device; both scale and position the sprite at half size.
UPDATE 2: I made a workaround. I set the .scale of my sprite equal to the UIView's contentScaleFactor and then use it to divide the UIView in half if it's a lo-res screen, and use the full width if it's hi-res. I still don't see why this is necessary, as I should be working in points now, not pixels. It must have something to do with the custom drawing code in the Sprite or VectorSprite classes.
I'd still appreciate some feedback if anyone has some...
#import "GaugeView.h"
@implementation GaugeView
@synthesize needle;
#define kVectorArtCount 4
static CGFloat kVectorArt[] = {
3,-4,
2,55,
-2,55,
-3,-4
};
- (id)initWithCoder:(NSCoder *)coder {
    if (self = [super initWithCoder:coder]) {
        needle = [VectorSprite withPoints:kVectorArt count:kVectorArtCount];
        needle.scale = (float)self.contentScaleFactor; // returns 1 for lo-res, 2 for hi-res
        NSLog(@" needle.scale = %1.1f", needle.scale);
        needle.x = self.frame.size.width / ((float)(-self.contentScaleFactor) + 3.0); // divisor = 1 for hi-res, 2 for lo-res
        NSLog(@" needle.x = %1.1f", needle.x);
        needle.y = self.frame.size.height / ((float)(-self.contentScaleFactor) + 3.0);
        NSLog(@" needle.y = %1.1f", needle.y);
        needle.r = 0.0;
        needle.g = 0.0;
        needle.b = 0.0;
        needle.alpha = 1.0;
    }
    self.backgroundColor = [UIColor clearColor];
    return self;
}
- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        // Initialization code
    }
    return self;
}

// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect {
    // Drawing code
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGAffineTransform t0 = CGContextGetCTM(context);
    t0 = CGAffineTransformInvert(t0);
    CGContextConcatCTM(context, t0);
    [needle updateBox];
    [needle draw:context];
}
- (void)dealloc {
    [needle release];
    [super dealloc];
}

@end
I believe the answer is that iOS takes care of the resolution scaling automatically in drawRect methods, but in custom drawing code, you have to do it yourself.
In my example, I used the UIView's contentScaleFactor to scale my sprite. In the future, in my custom draw method (not shown), I'll query [[UIScreen mainScreen] scale] and scale accordingly there.
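A rough sketch of what that could look like in the drawRect: above (illustrative only, not the actual VectorSprite drawing code): after undoing UIKit's transform you are working in raw pixels, so you re-apply the screen scale yourself to get back to point-sized units:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);

    // undo UIKit's transform, as in the original code: coordinates are now raw pixels
    CGAffineTransform t0 = CGContextGetCTM(context);
    CGContextConcatCTM(context, CGAffineTransformInvert(t0));

    // re-apply the screen scale (1 on non-Retina, 2 on Retina) so one unit is one point again
    CGFloat screenScale = [[UIScreen mainScreen] scale];
    CGContextScaleCTM(context, screenScale, screenScale);

    [needle updateBox];
    [needle draw:context];

    CGContextRestoreGState(context);
}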
For an app I'm building, I have a custom "old map" effect PNG that I'd like to overlay on the MKMapView view. Unfortunately, neither way I've tried to do this is exactly right:
1) Add a subview to the MKMapView (issue: the UIImageView for the PNG gets placed above the annotation pins, which looks weird).
2) Make the image an annotation view itself and place it below the other ones (issue: on scroll, I have to move the annotation view image, and for a while there's just the regular map instead of the old-timey effect).
What I basically want to accomplish is a subview that is layered above the map tiles but below the annotation views, and that holds steady while the user scrolls around. Any thoughts?
Note that the whole MKMapView subview hierarchy, annotations, behaviour etc. live in the internal, private MKMapViewInternal class, so changing anything in this structure (in the way I suggest or otherwise) may cause your application to be rejected from the App Store.
I do not have a full solution, just an idea: make an overlay view share the same superview as the annotation views, and ensure that the annotations are placed above our overlay. In the MKMapViewDelegate we implement:
- (void)mapView:(MKMapView *)aMapView didAddAnnotationViews:(NSArray *)views {
    if (views.count > 0) {
        UIView* tView = [views objectAtIndex:0];
        UIView* parView = [tView superview];
        // look for an existing overlay in the shared superview
        UIView* overlay = [parView viewWithTag:2000];
        if (overlay == nil) {
            overlay = [[UIImageView alloc] initWithImage:[UIImage imageNamed:MY_IMAGE_NAME]];
            overlay.frame = parView.frame; // frame equals (0,0,16384,16384)
            overlay.tag = 2000;
            overlay.alpha = 0.7;
            [parView addSubview:overlay];
            [overlay release];
        }
        for (UIView* view in views)
            [parView bringSubviewToFront:view];
    }
}
This code does half of what you want: you get a view placed between the map tiles and the annotations. The remaining task is to change the custom overlay view's frame according to map scrolling/zooming (I'll post if I find the way).
I did something similar, but in a different way in the end. I wanted to fade the map view out by adding a white overlay, but I didn't want to fade the annotations. I added an overlay to the map (actually I had to add two, as it didn't like me trying to add a single overlay for the entire globe).
CLLocationCoordinate2D pt1, pt2, pt3, pt4;
pt1 = CLLocationCoordinate2DMake(85, 0);
pt2 = CLLocationCoordinate2DMake(85, 180);
pt3 = CLLocationCoordinate2DMake(-85, 180);
pt4 = CLLocationCoordinate2DMake(-85, 0);
CLLocationCoordinate2D coords[] = {pt1, pt2, pt3, pt4};
MKPolygon *p = [MKPolygon polygonWithCoordinates:coords count:4];
[self.mapView addOverlay:p];
pt1 = CLLocationCoordinate2DMake(85, 0);
pt2 = CLLocationCoordinate2DMake(85, -180);
pt3 = CLLocationCoordinate2DMake(-85, -180);
pt4 = CLLocationCoordinate2DMake(-85, 0);
CLLocationCoordinate2D coords2[] = {pt1, pt2, pt3, pt4};
MKPolygon *p2 = [MKPolygon polygonWithCoordinates:coords2 count:4];
[self.mapView addOverlay:p2];
Then you need to implement the map delegate method to draw them:
- (MKOverlayRenderer *)mapView:(MKMapView *)mapView rendererForOverlay:(id<MKOverlay>)overlay {
    if ([overlay isKindOfClass:MKPolygon.class]) {
        MKPolygonRenderer *polygonView = [[MKPolygonRenderer alloc] initWithOverlay:overlay];
        polygonView.fillColor = [UIColor colorWithRed:255.0/255.0 green:255.0/255.0 blue:255.0/255.0 alpha:0.4];
        return polygonView;
    }
    return nil;
}
It's pretty old, but maybe still relevant for some of you.
I wanted to make the map darker, but not the annotation views.
What I did was very simple, and I don't yet know if it has any drawbacks...
I put a UIView with a semi-transparent black background above the MKMapView, and then added the following code:
- (void)mapView:(MKMapView *)mapView didAddAnnotationViews:(NSArray<MKAnnotationView *> *)views
{
    for (MKAnnotationView *view in views) {
        view.center = [self.mapView convertCoordinate:view.annotation.coordinate toPointToView:self.viewDark];
        [self.viewDark addSubview:view];
    }
}
Hope it helps. And of course, if you find any issues with this solution, please let me know :)
Edit #1:
Forgot to mention that self.viewDark should have userInteraction disabled.
Edit #2:
Another thing that helped me is setting self.viewDark's autoresizesSubviews to NO.
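For completeness, a sketch of how self.viewDark itself might be set up, based on the description and the two edits above (the alpha value and frame are just examples, not from the original answer):
- (void)viewDidLoad {
    [super viewDidLoad];

    // semi-transparent black view that darkens the map but not the annotations
    self.viewDark = [[UIView alloc] initWithFrame:self.mapView.frame];
    self.viewDark.backgroundColor = [[UIColor blackColor] colorWithAlphaComponent:0.4];

    // per the edits above: let touches reach the map, and keep the moved-in
    // annotation views from being resized along with the overlay
    self.viewDark.userInteractionEnabled = NO;
    self.viewDark.autoresizesSubviews = NO;

    [self.view insertSubview:self.viewDark aboveSubview:self.mapView];
}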