MKMapView to UIImage iOS 7 - iPhone

The code to render an MKMapView to a UIImage no longer works in iOS 7. It returns an empty image with nothing but the word "Legal" at the bottom and a black compass in the top right. The map itself is missing. Below is my code:
UIGraphicsBeginImageContext(map.bounds.size);
[map.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
map is an IBOutlet that points to an MKMapView. Is there any way to render an MKMapView correctly in iOS 7?

From this SO post:
You can use MKMapSnapshotter and grab the image from the resulting MKMapSnapshot. See the discussion of it in the WWDC 2013 session video, Putting Map Kit in Perspective.
For example:
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = self.mapView.region;
options.scale = [UIScreen mainScreen].scale;
options.size = self.mapView.frame.size;

MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (error) {
        NSLog(@"Snapshot failed: %@", error);
        return;
    }
    UIImage *image = snapshot.image;
    NSData *data = UIImagePNGRepresentation(image);
    [data writeToFile:[self snapshotFilename] atomically:YES];
}];
Having said that, the renderInContext: solution still works for me. There are notes about only doing that on the main queue in iOS 7, but it still seems to work. MKMapSnapshotter does seem like the more appropriate solution for iOS 7, though.
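For reference, a minimal sketch of the renderInContext: fallback run explicitly on the main queue (self.mapView is assumed to be your map outlet):
dispatch_async(dispatch_get_main_queue(), ^{
    UIGraphicsBeginImageContextWithOptions(self.mapView.bounds.size, NO, 0.0);
    [self.mapView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Use viewImage here (e.g. save or display it).
});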

Related

iOS 6 Face Detection Not Working

I have used the following code to detect faces on iOS 5:
CIImage *cIImage = [CIImage imageWithCGImage:image.CGImage];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                              forKey:CIDetectorAccuracy]];
NSArray *features = [detector featuresInImage:cIImage];
if ([features count] == 0)
{
    // Retry, hinting the detector with EXIF orientation 6 (typical of portrait iPhone photos)
    NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6]
                                                              forKey:CIDetectorImageOrientation];
    features = [detector featuresInImage:cIImage options:imageOptions];
}
With this code I am able to detect faces on iOS 5, but recently we upgraded to Xcode 4.4 and iOS 6, and now face detection is not working properly.
What changes do I need to make to detect faces on iOS 6?
Any kind of help is highly appreciated.
I have noticed that face detection in iOS 6 is not as good as in iOS 5.
Try it with a selection of images: you will likely find that it works for most of them on iOS 6, but not for all.
I have been testing the same set of images on:
1. The simulator running iOS 6.
2. An iPhone 5 (iOS 6).
3. An iPhone 3GS (iOS 5).
The 3GS detects more faces than the other two.
Here's the code; it works on both, just not as well on iOS 6:
- (void)analyseFaces:(UIImage *)facePicture {
    // Create CI image of the face picture
    CIImage *image = [CIImage imageWithCGImage:facePicture.CGImage];

    // Create face detector with high accuracy
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                  forKey:CIDetectorAccuracy]];

    // Create array of detected faces
    NSArray *features = [detector featuresInImage:image];

    // Read through the faces and add each face image to the facesFound mutable array
    for (CIFaceFeature *faceFeature in features)
    {
        // Core Image uses a bottom-left origin, UIKit a top-left one, so flip the y-coordinate
        CGSize parentSize = facePicture.size;
        CGRect origRect = faceFeature.bounds;
        CGRect flipRect = origRect;
        flipRect.origin.y = parentSize.height - (origRect.origin.y + origRect.size.height);

        CGImageRef imageRef = CGImageCreateWithImageInRect([facePicture CGImage], flipRect);
        UIImage *faceImage = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);

        if (faceImage)
            [facesFound addObject:faceImage];
    }
}
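facesFound is assumed to be an NSMutableArray owned by the same class; a minimal call site might look like this (the image name is just a placeholder):
// Assumes an ivar or property: NSMutableArray *facesFound;
facesFound = [NSMutableArray array];
[self analyseFaces:[UIImage imageNamed:@"group_photo.jpg"]];
NSLog(@"Found %lu face images", (unsigned long)[facesFound count]);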
I hope this is helpful.
Add CoreImage.framework to your project.
- (void)faceDetector
{
    // Load the picture for face detection
    UIImageView *image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"facedetectionpic.jpg"]];

    // Draw the face detection image
    [self.window addSubview:image];

    // Run the face-marking method
    [self markFaces:image];
}
The same method, this time running markFaces: in the background and flipping the coordinate system to match Core Image:
- (void)faceDetector
{
    // Load the picture for face detection
    UIImageView *image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"facedetectionpic.jpg"]];

    // Draw the face detection image
    [self.window addSubview:image];

    // Execute the method used to markFaces in the background
    [self performSelectorInBackground:@selector(markFaces:) withObject:image];

    // Flip the image on the y-axis to match the coordinate system used by Core Image
    [image setTransform:CGAffineTransformMakeScale(1, -1)];

    // Flip the entire window to make everything right side up
    [self.window setTransform:CGAffineTransformMakeScale(1, -1)];
}
And check these two links as well:
http://maniacdev.com/2011/11/tutorial-easy-face-detection-with-core-image-in-ios-5/
http://i.ndigo.com.br/2012/01/ios-facial-recognition/
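For completeness, a minimal sketch of the markFaces: method these snippets assume, based on the approach in the linked tutorials (the red-border drawing is my own illustration, not code from the answer above):
- (void)markFaces:(UIImageView *)facePicture
{
    // Build a Core Image face detector with high accuracy
    CIImage *image = [CIImage imageWithCGImage:facePicture.image.CGImage];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                  forKey:CIDetectorAccuracy]];
    NSArray *features = [detector featuresInImage:image];

    // UI work must happen on the main thread, since this may run in the background
    dispatch_async(dispatch_get_main_queue(), ^{
        for (CIFaceFeature *faceFeature in features) {
            // The face bounds are in Core Image coordinates; the flipped transforms above compensate
            UIView *faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
            faceView.layer.borderWidth = 1.0f;   // layer properties require QuartzCore
            faceView.layer.borderColor = [UIColor redColor].CGColor;
            [facePicture addSubview:faceView];
        }
    });
}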

Incorrect image size in iPhone

I need to use the image height and width. I have an 800x800 pixel image in my iOS simulator, but when I read the size of the image using
image.size.width and image.size.height
it gives me 315x120, which is incorrect. Why is that?
- (void)elcImagePickerController:(ELCImagePickerController *)picker didFinishPickingMediaWithInfo:(NSArray *)info
{
    NSLog(@"%@", info);
    int temp = [[DataManager sharedObj] imageCount];
    for (NSDictionary *dict in info)
    {
        [[DataManager sharedObj] setImageCount:temp+1];
        NSMutableDictionary *dataDic = [[NSMutableDictionary alloc] init];
        UIImage *img = [dict objectForKey:UIImagePickerControllerOriginalImage];
        NSLog(@"%.2f", img.size.width);
        NSLog(@"%.2f", img.size.height);
    }
}
Try calling image.frame = CGRectMake(x, y, 800, 800).
If you have a UIImageView *image instead of a UIImage *image, you are probably getting the frame of the view, not the size of the image.
SOLVED:
change fullScreenImage to fullResolutionImage in line 33 of your ELCImagePickerController.m.
You can get the image size like this:
UIImage *imageS = [dict objectForKey:UIImagePickerControllerOriginalImage];
NSLog(@"width = %.2f, height = %.2f", imageS.size.width, imageS.size.height);
NSLog(@"Image Size = %@", NSStringFromCGSize(imageS.size));
Both lines will work.
They will print the correct values. Please check your simulator image once again: if you added it via Google, you may have saved the image thumbnail instead of the original image, which is why you are getting the wrong size. Otherwise, this method of getting the image size is correct.

Orientation does not behave correctly with Photo in ALAsset

I currently have an app that uses ALAssetsLibrary to fetch photos. I place the photo in an image view and I am able to upload it to the server. When I tested on a real device after taking some photos, I found that photos taken in portrait come out as landscape.
At first I used this:
UIImage *image = [UIImage imageWithCGImage:[representation fullResolutionImage]];
Then I called a different method to get the CGImage, like this:
UIImage *image = [UIImage imageWithCGImage:[representation fullResolutionImage] scale:1.0 orientation:(UIImageOrientation)[representation orientation]];
I thought the version with scale and orientation would give me the orientation the photo was taken in, but it didn't solve the problem.
Am I missing anything necessary to get the correct photo orientation?
The correct orientation handling depends on the iOS version you are using.
On iOS 4 and iOS 5 the thumbnail is already correctly rotated, so you can initialize your UIImage without specifying any rotation parameters.
For fullScreenImage, however, the behavior differs between versions: on iOS 5 the image is already rotated, on iOS 4 it is not.
So on iOS 4 you should use:
ALAssetRepresentation *defaultRep = [asset defaultRepresentation];
UIImage *_image = [UIImage imageWithCGImage:[defaultRep fullScreenImage]
scale:[defaultRep scale] orientation:(UIImageOrientation)[defaultRep orientation]];
On iOS5 the following code should work correctly:
ALAssetRepresentation *defaultRep = [asset defaultRepresentation];
UIImage *_image = [UIImage imageWithCGImage:[defaultRep fullScreenImage] scale:[defaultRep scale] orientation:0];
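If you need to support both versions, a combined runtime check along these lines should work (a sketch based on the two cases above):
ALAssetRepresentation *defaultRep = [asset defaultRepresentation];
UIImage *_image;
if ([[[UIDevice currentDevice] systemVersion] floatValue] >= 5.0) {
    // iOS 5: fullScreenImage is already rotated correctly
    _image = [UIImage imageWithCGImage:[defaultRep fullScreenImage]
                                 scale:[defaultRep scale]
                           orientation:UIImageOrientationUp];
} else {
    // iOS 4: apply the asset's orientation manually
    _image = [UIImage imageWithCGImage:[defaultRep fullScreenImage]
                                 scale:[defaultRep scale]
                           orientation:(UIImageOrientation)[defaultRep orientation]];
}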
Cheers,
Hendrik
Try this code:-
UIImage* img = [UIImage imageWithCGImage:asset.thumbnail];
img = [UIImage imageWithCGImage:img.CGImage scale:1.0 orientation:UIImageOrientationUp];
This may help you.
My experience is limited to iOS 5.x, but I can tell you that the thumbnail and fullScreenImage are oriented properly. It's the fullResolutionImage that comes out horizontal when the photo was shot vertically. My solution is to use a category on UIImage that I got from here:
http://www.catamount.com/forums/viewtopic.php?f=21&t=967&start=0
It provides a nice rotation method on UIImage, used like this:
UIImage *tmp = [UIImage imageWithCGImage:startingFullResolutionImage];
startingFullResolutionImage = [[tmp imageRotatedByDegrees:-90.0f] CGImage];
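If you'd rather not pull in the forum category, a minimal sketch of an equivalent imageRotatedByDegrees: category might look like this (my own sketch, not the catamount.com code):
@implementation UIImage (Rotation)

- (UIImage *)imageRotatedByDegrees:(CGFloat)degrees
{
    CGFloat radians = degrees * M_PI / 180.0f;

    // Size of the bounding box that contains the rotated image
    CGRect rotatedRect = CGRectApplyAffineTransform((CGRect){CGPointZero, self.size},
                                                    CGAffineTransformMakeRotation(radians));
    CGSize rotatedSize = rotatedRect.size;

    UIGraphicsBeginImageContextWithOptions(rotatedSize, NO, self.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Rotate around the centre of the new canvas, then draw the image centred on it
    CGContextTranslateCTM(context, rotatedSize.width / 2.0f, rotatedSize.height / 2.0f);
    CGContextRotateCTM(context, radians);
    [self drawInRect:CGRectMake(-self.size.width / 2.0f, -self.size.height / 2.0f,
                                self.size.width, self.size.height)];

    UIImage *rotatedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return rotatedImage;
}

@end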
For fullResolutionImage, I'd like to offer the following solution:
ALAssetRepresentation *rep = [asset defaultRepresentation];

// First, create the UIImage with the asset's EXIF orientation applied
UIImage *image = [UIImage imageWithCGImage:[rep fullResolutionImage]
                                     scale:rep.scale
                               orientation:(UIImageOrientation)rep.orientation];

// Second, redraw the image so the pixels are upright and the orientation flag can be dropped
if (image.imageOrientation != UIImageOrientationUp) {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:(CGRect){0, 0, image.size}];
    UIImage *normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    image = normalizedImage;
}

// Third, compress to JPEG
NSData *imageData = UIImageJPEGRepresentation(image, 1.0);
imageData is what you want; just upload it to your photo server.
By the way, if you still need the EXIF metadata, you can add it back to normalizedImage as you wish.

Get part of the map view as an image in iOS

I am creating an app in which I have to show just the part of the map where my place is plotted, similar to the one below.
Tapping on this image will take my app further, but here it's just a static image generated from my longitude and latitude.
Any help would be really appreciated.
Thanks
Here's the working code, in case anybody's still searching for the answer:
NSString *staticMapUrl = [NSString stringWithFormat:@"http://maps.google.com/maps/api/staticmap?markers=color:red|%f,%f&%@&sensor=true", yourLatitude, yourLongitude, @"zoom=10&size=270x70"];
NSURL *mapUrl = [NSURL URLWithString:[staticMapUrl stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]];
UIImage *image = [UIImage imageWithData:[NSData dataWithContentsOfURL:mapUrl]];
Thanks
You can try the Google Static Maps API. Here's sample code:
NSString *urlString = @"http://maps.googleapis.com/maps/api/staticmap?center=Berkeley,CA&zoom=14&size=200x200&sensor=false";
NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:urlString]];
UIImage *imageData = [UIImage imageWithData:data];
[yourImageView setImage:imageData];
Through this you can get a static image based on your coordinates.
But I'm afraid customizing the annotation may not be possible.
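One caveat: dataWithContentsOfURL: blocks the calling thread, so in practice you would fetch the static map off the main thread; a minimal sketch using the same urlString and yourImageView as above:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:urlString]];
    UIImage *mapImage = [UIImage imageWithData:data];
    dispatch_async(dispatch_get_main_queue(), ^{
        // UI updates go back to the main thread
        [yourImageView setImage:mapImage];
    });
});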
You could also use MKMapSnapshotter; this lets you take a snapshot image of the map:
MKMapSnapshotOptions *snapOptions = [[MKMapSnapshotOptions alloc] init];
self.options = snapOptions;

// Assuming latitude/longitude properties on self hold your coordinates
CLLocation *location = [[CLLocation alloc] initWithLatitude:self.latitude longitude:self.longitude];
MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(location.coordinate, 300, 300);
self.options.region = region;
self.options.size = self.view.frame.size;
self.options.scale = [[UIScreen mainScreen] scale];

MKMapSnapshotter *mapSnapshotter = [[MKMapSnapshotter alloc] initWithOptions:self.options];
[mapSnapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (error) {
        NSLog(@"[Error] %@", error);
        return;
    }
    UIImage *image = snapshot.image; // the map image
    NSData *data = UIImagePNGRepresentation(image);
}];
What you want to do is fundamental MapKit functionality. Check out the documentation here:
http://code.google.com/apis/maps/articles/tutorial-iphone.html
One of my favorite iOS blogs also has a MapKit tutorial.
Today there is this, without Google:
https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/LocationAwarenessPG/MapKit/MapKit.html
Find the code in the section "Creating a Snapshot of a Map".
The result looks consistent with Apple Maps.
I would like to add to @DevC's answer by providing a method (with a completion block) that adds an annotation in the middle of the image. At first I was interested in seeing whether I could add an annotation before the snapshot begins, but MKMapSnapshotter does not support this. So the solution I came up with is to create a pin, draw it on top of the map image, and then create another image of the map plus pin. Here is my custom method:
- (void)create_map_screenshot:(MKCoordinateRegion)region :(map_screenshot_completion)map_block {
    // Set the map snapshot properties.
    MKMapSnapshotOptions *snap_options = [[MKMapSnapshotOptions alloc] init];
    snap_options.region = region;
    snap_options.size = // Set the frame size e.g.: custom_view.frame.size;
    snap_options.scale = [[UIScreen mainScreen] scale];

    // Initialise the map snapshot camera.
    MKMapSnapshotter *map_camera = [[MKMapSnapshotter alloc] initWithOptions:snap_options];

    // Take a picture of the map.
    [map_camera startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
        // Check if the map image was created.
        if ((error == nil) && (snapshot.image != nil)) {
            // Create the pin image view.
            MKAnnotationView *pin = [[MKPinAnnotationView alloc] initWithAnnotation:nil reuseIdentifier:nil];

            // Get the map image data.
            UIImage *image = snapshot.image;

            // Create a map + location pin image.
            UIGraphicsBeginImageContextWithOptions(image.size, YES, image.scale); {
                [image drawAtPoint:CGPointMake(0.0f, 0.0f)];
                CGRect rect = CGRectMake(0.0f, 0.0f, image.size.width, image.size.height);

                // Create the pin co-ordinate point.
                CGPoint point = [snapshot pointForCoordinate:region.center];
                if (CGRectContainsPoint(rect, point)) {
                    // Draw the pin in the middle of the map.
                    point.x = (point.x + pin.centerOffset.x - (pin.bounds.size.width / 2.0f));
                    point.y = (point.y + pin.centerOffset.y - (pin.bounds.size.height / 2.0f));
                    [pin.image drawAtPoint:point];
                }
            }

            // Get the new map + pin image.
            UIImage *map_plus_pin = UIGraphicsGetImageFromCurrentImageContext();

            // Return the new image data.
            UIGraphicsEndImageContext();
            map_block(map_plus_pin);
        }
        else {
            map_block(nil);
        }
    }];
}
This method also requires the completion block type to be declared, like so:
typedef void(^map_screenshot_completion)(UIImage *);
I hope this is of some use to people looking to add annotations to the map image.
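A call site might look like this (the coordinates and span are placeholders):
MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(CLLocationCoordinate2DMake(37.331793, -122.029584), 500, 500);
[self create_map_screenshot:region :^(UIImage *map_plus_pin) {
    if (map_plus_pin != nil) {
        // Use the map + pin image here, e.g. assign it to an image view or upload it.
        NSLog(@"Snapshot size: %@", NSStringFromCGSize(map_plus_pin.size));
    }
}];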
To be clear, there is no way to get an image for a specific location through MapKit itself.
You have to use an MKMapView, disable panning/zooming, add an annotation, and handle taps on the annotation to do whatever you want to do.

Rendering MKMapView to UIImage with real resolution

I am using this method to render an MKMapView instance into an image:
@implementation UIView (Ext)

- (UIImage *)renderToImage
{
    UIGraphicsBeginImageContext(self.frame.size);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

@end
This works fine, but on iPhone 4 the rendered image doesn't have the same resolution as the view on the device: the map view is 640x920 pixels on screen, while the rendered image comes out at 320x460.
I then doubled the size passed to UIGraphicsBeginImageContext(), but that only filled the top-left part of the image.
Question: is there any way to render the map to an image at the full 640x920 resolution?
Try using UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext:
UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
See QA1703 for more details. It says:
Note: Starting from iOS 4, UIGraphicsBeginImageContextWithOptions allows you to provide a scale factor. A scale factor of zero sets it to the scale factor of the device's main screen. This enables you to get the sharpest, highest-resolution snapshot of the display, including a Retina display.
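Applied to the category from the question, a sketch of the updated method might look like this:
- (UIImage *)renderToImage
{
    // A scale of 0.0 uses the main screen's scale (2.0 on Retina),
    // so the output is 640x920 on an iPhone 4 instead of 320x460.
    UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}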
iOS 7 introduced a new way to generate screenshots of an MKMapView. It is now possible to use the new MKMapSnapshotter API as follows:
MKMapView *mapView = [..your mapview..];

MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = mapView.region;
options.mapType = MKMapTypeStandard;
options.showsBuildings = NO;
options.showsPointsOfInterest = NO;
options.size = CGSizeMake(1000, 500);

MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithQueue:dispatch_get_main_queue() completionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (error) {
        NSLog(@"An error occurred: %@", error);
    } else {
        [UIImagePNGRepresentation(snapshot.image) writeToFile:@"/Users/<yourAccountName>/map.png" atomically:YES];
    }
}];
Currently, overlays and annotations are not rendered; you have to draw them onto the resulting snapshot image yourself afterwards. The provided MKMapSnapshot object has a handy helper method to map a coordinate to a point in the image:
CGPoint point = [snapshot pointForCoordinate:locationCoordinate2D];
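For example, a sketch of stamping each annotation's pin onto the snapshot (mapView and snapshot as above; a standard MKPinAnnotationView image is assumed):
UIGraphicsBeginImageContextWithOptions(snapshot.image.size, YES, snapshot.image.scale);
[snapshot.image drawAtPoint:CGPointZero];

// Draw a pin image at every annotation that falls inside the snapshot
MKAnnotationView *pinView = [[MKPinAnnotationView alloc] initWithAnnotation:nil reuseIdentifier:nil];
for (id<MKAnnotation> annotation in mapView.annotations) {
    CGPoint point = [snapshot pointForCoordinate:annotation.coordinate];
    if (CGRectContainsPoint((CGRect){CGPointZero, snapshot.image.size}, point)) {
        point.x += pinView.centerOffset.x - (pinView.bounds.size.width / 2.0f);
        point.y += pinView.centerOffset.y - (pinView.bounds.size.height / 2.0f);
        [pinView.image drawAtPoint:point];
    }
}

UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();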