Get part of the map view as an image in iOS - iPhone

I am creating an app in which I have to show just the part of the map where my place is plotted, similar to the one below.
Tapping on this image will take my app further, but here it's just a static image generated from my latitude and longitude.
Any help would be really appreciated.
Thanks

Here's the working code, in case anybody is still searching for the answer.
NSString *staticMapUrl = [NSString stringWithFormat:@"http://maps.google.com/maps/api/staticmap?markers=color:red|%f,%f&%@&sensor=true", yourLatitude, yourLongitude, @"zoom=10&size=270x70"];
NSURL *mapUrl = [NSURL URLWithString:[staticMapUrl stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]];
UIImage *image = [UIImage imageWithData: [NSData dataWithContentsOfURL:mapUrl]];
Thanks

You can try the Google Static Maps API.
Here's some sample code:
NSString *urlString = @"http://maps.googleapis.com/maps/api/staticmap?center=Berkeley,CA&zoom=14&size=200x200&sensor=false";
NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:urlString]];
UIImage *imageData = [UIImage imageWithData:data];
[yourImageView setImage:imageData];
This way you can get a static image based on your coordinates.
But I'm afraid customizing the annotation may not be possible.
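That said, the Static Maps API does accept a custom marker image via an icon: descriptor in the markers parameter, so some customization may be possible. A minimal sketch, where the icon URL is a placeholder you would host yourself:
// Sketch: Static Maps URL with a custom marker icon (placeholder icon URL).
NSString *staticMapUrl = [NSString stringWithFormat:
    @"http://maps.google.com/maps/api/staticmap?markers=icon:http://example.com/pin.png|%f,%f&zoom=14&size=200x200&sensor=false",
    yourLatitude, yourLongitude];
// Percent-escape the string before building the NSURL, as in the accepted answer.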

You could also use MKMapSnapshotter, which lets you take a snapshot of the map:
MKMapSnapshotOptions *snapOptions = [[MKMapSnapshotOptions alloc] init];
self.options = snapOptions;
// 'long' is a C keyword, so the longitude property is assumed to be named 'lon' here.
CLLocation *location = [[CLLocation alloc] initWithLatitude:self.lat longitude:self.lon];
MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(location.coordinate, 300, 300);
self.options.region = region;
self.options.size = self.view.frame.size;
self.options.scale = [[UIScreen mainScreen] scale];
MKMapSnapshotter *mapSnapShot = [[MKMapSnapshotter alloc] initWithOptions:self.options];
[mapSnapShot startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (error) {
        NSLog(@"[Error] %@", error);
        return;
    }
    UIImage *image = snapshot.image; // the rendered map image
    NSData *data = UIImagePNGRepresentation(image); // e.g. to save or share
}];

What you want to do is fundamental MapKit API functionality. Check out the documentation here:
http://code.google.com/apis/maps/articles/tutorial-iphone.html
and one of my favorite iOS blogs has a MapKit tutorial here:

Today there is this, without Google:
https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/LocationAwarenessPG/MapKit/MapKit.html
Find the code in the section "Creating a Snapshot of a Map".
That way the result looks consistent with Apple Maps.

I would like to add to @DevC's answer by providing a method (with a completion block) that adds an annotation in the middle of the image. At first I was interested in seeing if I could add an annotation before the snapshot begins; however, MKMapSnapshotter does not support this. So the solution I came up with is to create a pin, draw it on top of the map image, and then create another image of the map + pin. Here is my custom method:
-(void)create_map_screenshot:(MKCoordinateRegion)region :(map_screenshot_completion)map_block {
    // Set the map snapshot properties.
    MKMapSnapshotOptions *snap_options = [[MKMapSnapshotOptions alloc] init];
    snap_options.region = region;
    snap_options.size = custom_view.frame.size; // Set the frame size, e.g. your view's frame size.
    snap_options.scale = [[UIScreen mainScreen] scale];
    // Initialise the map snapshot camera.
    MKMapSnapshotter *map_camera = [[MKMapSnapshotter alloc] initWithOptions:snap_options];
    // Take a picture of the map.
    [map_camera startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
        // Check if the map image was created.
        if ((error == nil) && (snapshot.image != nil)) {
            // Create the pin image view.
            MKAnnotationView *pin = [[MKPinAnnotationView alloc] initWithAnnotation:nil reuseIdentifier:nil];
            // Get the map image data.
            UIImage *image = snapshot.image;
            // Create a map + location pin image.
            UIGraphicsBeginImageContextWithOptions(image.size, YES, image.scale); {
                [image drawAtPoint:CGPointMake(0.0f, 0.0f)];
                CGRect rect = CGRectMake(0.0f, 0.0f, image.size.width, image.size.height);
                // Create the pin co-ordinate point.
                CGPoint point = [snapshot pointForCoordinate:region.center];
                if (CGRectContainsPoint(rect, point)) {
                    // Draw the pin in the middle of the map.
                    point.x = (point.x + pin.centerOffset.x - (pin.bounds.size.width / 2.0f));
                    point.y = (point.y + pin.centerOffset.y - (pin.bounds.size.height / 2.0f));
                    [pin.image drawAtPoint:point];
                }
            }
            // Get the new map + pin image.
            UIImage *map_plus_pin = UIGraphicsGetImageFromCurrentImageContext();
            // Return the new image data.
            UIGraphicsEndImageContext();
            map_block(map_plus_pin);
        }
        else {
            map_block(nil);
        }
    }];
}
This method also requires a completion block type to be declared, like so:
typedef void(^map_screenshot_completion)(UIImage *);
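For reference, a call to the method above might look like this (a sketch; the coordinates and the image_view property are illustrative):
// Sketch: snapshot a 300m x 300m region centred on an example coordinate.
CLLocationCoordinate2D center = CLLocationCoordinate2DMake(51.5074, -0.1278); // example coordinates
MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(center, 300.0, 300.0);
[self create_map_screenshot:region :^(UIImage *map_plus_pin) {
    if (map_plus_pin != nil) {
        // Display the composited map + pin image.
        self.image_view.image = map_plus_pin;
    }
}];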
I hope this is of some use to people looking to add annotations to the map image.

To be clear, there is no way to get an image for a specific location through MapKit alone.
You have to use an MKMapView, disable panning/zooming, add an annotation, and handle taps on the annotation to do whatever you wanted to do.
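A minimal sketch of that approach (the mapView outlet and coordinates are illustrative):
// Lock the map down and add a single tappable annotation.
self.mapView.scrollEnabled = NO;
self.mapView.zoomEnabled = NO;
MKPointAnnotation *annotation = [[MKPointAnnotation alloc] init];
annotation.coordinate = CLLocationCoordinate2DMake(yourLatitude, yourLongitude);
[self.mapView addAnnotation:annotation];
// Then respond to taps in the MKMapViewDelegate callback mapView:didSelectAnnotationView:.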

Related

MKMapView to UIImage iOS 7

The code to render an MKMapView to a UIImage no longer works in iOS 7. It returns an empty image with nothing but the word "Legal" at the bottom and a black compass at the top right. The map itself is missing. Below is my code:
UIGraphicsBeginImageContext(map.bounds.size);
[map.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
map is an IBOutlet that points to an MKMapView. Is there any way to render an MKMapView correctly in iOS 7?
From this SO post:
You can use MKMapSnapshotter and grab the image from the resulting MKMapSnapshot. See the discussion of it in the WWDC 2013 session video, Putting Map Kit in Perspective.
For example:
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = self.mapView.region;
options.scale = [UIScreen mainScreen].scale;
options.size = self.mapView.frame.size;
MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    UIImage *image = snapshot.image;
    NSData *data = UIImagePNGRepresentation(image);
    [data writeToFile:[self snapshotFilename] atomically:YES];
}];
Having said that, the renderInContext solution still works for me. There are notes about doing that only on the main queue in iOS 7, but it still seems to work. Still, MKMapSnapshotter seems like the more appropriate solution for iOS 7.
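If you do stay with renderInContext, forcing it onto the main queue might look like this sketch:
// Sketch: render the map layer on the main queue, per the iOS 7 notes.
dispatch_async(dispatch_get_main_queue(), ^{
    UIGraphicsBeginImageContextWithOptions(self.mapView.bounds.size, NO, 0.0);
    [self.mapView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Use image here.
});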

incorrect image size in iPhone

I need to use the image height and width. I have an 800*800 pixel image in my iOS simulator, but when I read the size of the image using
image.size.width and image.size.height
it gives me 315*120, which is incorrect. Why is that?
- (void)elcImagePickerController:(ELCImagePickerController *)picker didFinishPickingMediaWithInfo:(NSArray *)info
{
    NSLog(@"%@", info);
    int temp = [[DataManager sharedObj] imageCount];
    for (NSDictionary *dict in info)
    {
        [[DataManager sharedObj] setImageCount:temp+1];
        NSMutableDictionary *dataDic = [[NSMutableDictionary alloc] init];
        UIImage *img = [dict objectForKey:UIImagePickerControllerOriginalImage];
        NSLog(@"%.2f", img.size.width);
        NSLog(@"%.2f", img.size.height);
    }
}
Try calling image.frame = CGRectMake(x, y, 800, 800).
If you have a UIImageView *image instead of a UIImage *image, you are probably getting the frame of the view, not the size of the image.
SOLVED:
Change fullScreenImage to fullResolutionImage on line 33 of your ELCImagePickerController.m.
You can get the image size by using this
UIImage *imageS = [dict objectForKey:UIImagePickerControllerOriginalImage];
NSLog(@"width = %.2f, height = %.2f", imageS.size.width, imageS.size.height);
NSLog(@"Image Size = %@", NSStringFromCGSize(imageS.size));
Both lines will work.
It prints correctly for me. Please check your simulator image again. If you added it via Google, maybe you saved the image thumbnail instead of the original image; that's why you are getting the wrong size. Otherwise, the method of getting the image size is correct.
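Also worth checking: UIImage reports its size in points, not pixels, so on a Retina device the pixel dimensions are the point size multiplied by image.scale. A quick sketch:
// UIImage.size is in points; multiply by scale to get pixels.
UIImage *imageS = [dict objectForKey:UIImagePickerControllerOriginalImage];
CGSize pixelSize = CGSizeMake(imageS.size.width * imageS.scale,
                              imageS.size.height * imageS.scale);
NSLog(@"Pixel size = %@", NSStringFromCGSize(pixelSize));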

Coreplot Library - Extract entire graph image

I am creating an image of my graph using this code:
UIImage *newImage = [graph imageOfLayer];
NSData *newPNG = UIImageJPEGRepresentation(newImage, 1.0);
NSString *filePath = [NSString stringWithFormat:@"%@/graph.jpg", [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject]];
if ([newPNG writeToFile:filePath atomically:YES])
    NSLog(@"Created new file successfully");
But I only get the visible area (320*460) in the image. How can I get the whole graph image, including the axes?
Please provide some code snippet showing how to do it with Core Plot.
Thanks in advance...
Make a new graph the size of the desired output image. It doesn't have to be added to a hosting view—that's only needed for displaying it on screen.
CPTXYGraph *graph = [(CPTXYGraph *)[CPTXYGraph alloc] initWithFrame:desiredFrame];
// set up the graph as usual
UIImage *newImage = [graph imageOfLayer];
// process output image
My approach to this problem was to create a method to extract the image.
In that method I momentarily make the hosting view, scroll view, graph bounds, and plot area frame bounds bigger, then convert the graph to an image.
I then remove the hosting view from its container to take it off screen, and call the doPlot method to reinitialise the plot and its data. My code is below.
There might be a visual jitter while this is carried out. However, you could always disguise it with an alert, either one asking for an email address for the export or just a simple "image exported" alert.
//=============================================================================
/**
 Gets the image of the chart to export via email.
 */
//=============================================================================
-(UIImage *) getImage
{
    // Temporarily make the plot bigger.
    // CGRect rect = self.hostingView.bounds;
    CGRect rect = self.scroller.bounds;
    rect.size.height = rect.size.height - 100;
    rect.origin.x = 0;
    rect.size.width = rect.size.width + [fields count] * 100.0;
    [self.hostingView setBounds:rect];
    [scroller setContentSize:hostingView.frame.size];
    graph.plotAreaFrame.bounds = rect;
    graph.bounds = rect;
    UIImage *image = [graph imageOfLayer]; // get image of plot
    // Redraw the plot back at its normal size.
    [self.hostingView removeFromSuperview];
    self.hostingView = nil;
    [self doPlot];
    return image;
}//============================================================================

Rendering MKMapView to UIImage with real resolution

I am using this function to render an MKMapView instance into an image:
@implementation UIView (Ext)
- (UIImage *)renderToImage
{
    UIGraphicsBeginImageContext(self.frame.size);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
@end
This works fine, but on an iPhone 4 the rendered image doesn't have the same resolution as it really has on the device. On the device the map view is rendered at 640x920, but the resulting image is 320x460.
Then I doubled the size passed to the UIGraphicsBeginImageContext() function, but that filled only the top-left part of the image.
Question: is there any way to render the map to an image at the full 640x920 resolution?
Try using UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext:
UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
See QA1703 for more details. It says:
Note: Starting from iOS 4, UIGraphicsBeginImageContextWithOptions allows you to provide a scale factor. A scale factor of zero sets it to the scale factor of the device's main screen. This enables you to get the sharpest, highest-resolution snapshot of the display, including a Retina Display.
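Putting that together, the category method might be rewritten like this (a sketch of the same method using the options variant):
@implementation UIView (Ext)
- (UIImage *)renderToImage
{
    // A scale of 0.0 uses the main screen's scale factor (Retina-aware).
    UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
@end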
iOS 7 introduced a new method to generate screenshots of an MKMapView. It is now possible to use the new MKMapSnapshotter API as follows:
MKMapView *mapView = [..your mapview..]
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = mapView.region;
options.mapType = MKMapTypeStandard;
options.showsBuildings = NO;
options.showsPointsOfInterest = NO;
options.size = CGSizeMake(1000, 500);
MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithQueue:dispatch_get_main_queue() completionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (error) {
        NSLog(@"An error occurred: %@", error);
    } else {
        [UIImagePNGRepresentation(snapshot.image) writeToFile:@"/Users/<yourAccountName>/map.png" atomically:YES];
    }
}];
Currently, overlays and annotations are not rendered. You have to render them onto the resulting snapshot image yourself afterwards. The provided MKMapSnapshot object has a handy helper method to do the mapping between coordinates and points:
CGPoint point = [snapshot pointForCoordinate:locationCoordinate2D];
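For example, a simple red dot could be composited onto the snapshot at that point (a sketch; it assumes locationCoordinate2D falls inside the snapshot's region):
// Draw the snapshot, then a dot at the coordinate's point, into a new image.
UIGraphicsBeginImageContextWithOptions(snapshot.image.size, YES, snapshot.image.scale);
[snapshot.image drawAtPoint:CGPointZero];
CGPoint point = [snapshot pointForCoordinate:locationCoordinate2D];
[[UIColor redColor] setFill];
[[UIBezierPath bezierPathWithOvalInRect:CGRectMake(point.x - 4.0f, point.y - 4.0f, 8.0f, 8.0f)] fill];
UIImage *composited = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();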

Why isn't my ScrollView/CATiledView zooming properly?

I have a UIScrollView backed by a CATiledLayer. The layer pulls tiles from a server and displays whatever is requested when drawLayer:inContext: is called. Here's what it looks like:
-(void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx{
    ASIHTTPRequest *mapRequest;
    WMSServer *root = self.mapLayer.parentServer;
    while (root == nil) {
        root = self.mapLayer.parentLayer.parentServer;
    }
    //Get where we're drawing
    CGRect clipBox = CGContextGetClipBoundingBox(ctx);
    double minX = CGRectGetMaxY(clipBox);
    double minY = CGRectGetMinX(clipBox);
    double maxX = CGRectGetMinY(clipBox);
    double maxY = CGRectGetMaxX(clipBox);
    //URL for the request made here using the min/max values from above
    NSURL *url = [[NSURL alloc] initWithString:path];
    mapRequest = [ASIHTTPRequest requestWithURL:url];
    [url release];
    [mapRequest startSynchronous];
    //Wait for it to come back...
    //Turn the data into an image
    NSData *response = [mapRequest responseData];
    //Create the entire image
    CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((CFDataRef)response);
    CGImageRef image = CGImageCreateWithPNGDataProvider(dataProvider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(dataProvider);
    //Now paint the PNG on the context
    CGContextDrawImage(ctx, clipBox, image);
}
The problem occurs when I zoom in. The size of the tiles (as indicated by the clipBox rectangle) shrinks; if it's at, say, 32 by 32 pixels, the URL is properly formed for 32 pixels, but what is displayed on the screen is scaled up to the default tileSize (128 by 128 pixels in this case). Is there some way I can get the layer to draw these images properly?
Found it. Get the CGAffineTransform with CGContextGetCTM(CGContextRef), and use the value of its a property (the horizontal scale; d is the vertical equivalent) as the zoom. Get the image at a resolution zoomed appropriately (multiply the height and width by the zoom), then draw that image into the clip box.
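In code, that might look like the following sketch (the URL-building and download parts are elided):
// Read the zoom from the context's transform; 'a' is the horizontal scale.
CGAffineTransform ctm = CGContextGetCTM(ctx);
CGFloat zoom = ctm.a;
CGRect clipBox = CGContextGetClipBoundingBox(ctx);
// Request the tile at (clipBox.size.width * zoom) x (clipBox.size.height * zoom) pixels,
// then draw the returned image into clipBox as before:
// CGContextDrawImage(ctx, clipBox, image);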