I have a UIScrollView backed by a CATiledLayer. The layer pulls tiles from a server and displays them in response to each drawLayer:inContext: call. Here's what it looks like:
-(void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx{
ASIHTTPRequest *mapRequest;
WMSServer *root = self.mapLayer.parentServer;
if(root == nil){
root = self.mapLayer.parentLayer.parentServer;
}
//Get where we're drawing
CGRect clipBox = CGContextGetClipBoundingBox(ctx);
double minX = CGRectGetMinX(clipBox);
double minY = CGRectGetMinY(clipBox);
double maxX = CGRectGetMaxX(clipBox);
double maxY = CGRectGetMaxY(clipBox);
//URL for the request made here using the min/max values from above
NSURL *url = [[NSURL alloc] initWithString:path];
mapRequest = [ASIHTTPRequest requestWithURL:url];
[url release];
[mapRequest startSynchronous];
//Wait for it to come back...
//Turn the Data into an image
NSData *response = [mapRequest responseData];
//Create the entire image
CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((CFDataRef)response);
CGImageRef image = CGImageCreateWithPNGDataProvider(dataProvider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(dataProvider);
//Now paint the PNG on the context
CGContextDrawImage(ctx, clipBox, image);
CGImageRelease(image);
}
The problem occurs when I zoom in. The size of the tiles (as indicated by the clipBox rectangle) shrinks down; if it's at, say, 32 by 32 pixels, the URL is properly formed for 32 pixels, but what is displayed on the screen is scaled up to the default tileSize (128 by 128 pixels in this case). Is there some way I can get the layer to draw these images properly?
Found it. Get the CGAffineTransform with CGContextGetCTM(ctx), and use the value of its a (or d) property as the zoom. Request the image at a pixel size scaled appropriately (multiply the clip box's width and height by the zoom), then draw that image into the clip box.
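For anyone hitting the same issue, here is a minimal sketch of that fix inside drawLayer:inContext:. The WMS URL building and tile fetch are omitted (as in the original question), so the pixelWidth/pixelHeight names are just illustrative:
-(void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx{
//Where we're drawing, in layer coordinates
CGRect clipBox = CGContextGetClipBoundingBox(ctx);
//The context's transform tells us how far the user has zoomed in
CGAffineTransform ctm = CGContextGetCTM(ctx);
double zoom = fabs(ctm.a); //x scale; ctm.d holds the (possibly negated) y scale
//Ask the server for the tile at its on-screen pixel size, not the clip box size
size_t pixelWidth = (size_t)round(clipBox.size.width * zoom);
size_t pixelHeight = (size_t)round(clipBox.size.height * zoom);
//...build the WMS URL with pixelWidth/pixelHeight and the clipBox bounds,
//fetch the PNG and create the CGImageRef exactly as before...
//Drawing the higher-resolution image into the same clipBox keeps it sharp
//CGContextDrawImage(ctx, clipBox, image);
}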
In my iPad app I'm using the OpenGL framework to crop an image. I'm not very familiar with OpenGL. In my app I need to crop some portion of an image and display it in an image view; for the cropping effect I use the OpenGL framework. The cropping effect works fine, but when I assign that image to the image view, a full black image is displayed instead of the cropped image. I'm not sure where I went wrong. If anyone has worked on this, please guide me, and please post/suggest links related to my requirement.
I'm using this code: _sourceImage is the captured image.
-(void)showResult
{
UIImage *imageCrop;
float scaleCrop;
if (_sourceImage.size.width >= IMAGEWIDTH)
{
scaleCrop = IMAGEWIDTH / _sourceImage.size.width;
imageCrop = [ImageCropViewController scaleImage:_sourceImage with:CGSizeMake(_sourceImage.size.width*scaleCrop, _sourceImage.size.height*scaleCrop)];
}
else
{
scaleCrop = 1;
imageCrop = _sourceImage;
}
float scale = _sourceImage.size.width / resizeImage.size.width * 2;
IplImage *iplImage = [ImageCropViewController CreateIplImageFromUIImage:imageCrop] ;
Quadrilateral rectan;
rectan.point[0].x = _touchLayer.rectan.pointA.x*scale*scaleCrop;
rectan.point[0].y = _touchLayer.rectan.pointA.y*scale*scaleCrop;
rectan.point[1].x = _touchLayer.rectan.pointB.x*scale*scaleCrop;
rectan.point[1].y = _touchLayer.rectan.pointB.y*scale*scaleCrop;
rectan.point[2].x = _touchLayer.rectan.pointC.x*scale*scaleCrop;
rectan.point[2].y = _touchLayer.rectan.pointC.y*scale*scaleCrop;
rectan.point[3].x = _touchLayer.rectan.pointD.x*scale*scaleCrop;
rectan.point[3].y = _touchLayer.rectan.pointD.y*scale*scaleCrop;
IplImage* dest = cropDoc2(iplImage,rectan);
IplImage *image = cvCreateImage(cvGetSize(dest), IPL_DEPTH_8U, dest->nChannels);
cvCvtColor(dest, image, CV_BGR2RGB);
cvReleaseImage(&dest);
UIImage *tempImage = [ImageCropViewController UIImageFromIplImage:image withImageOrientation:_sourceImage.imageOrientation];
[self crop:tempImage];
cvReleaseImage(&image);
}
-(void)crop:(UIImage*)image
{
//Adjust the image size: scale the image to a width of 1000
float targetWidth = 1000.0f;
float scale = targetWidth / image.size.width;
float scaleheight = image.size.height * scale;
UIImage *imageToSent = [ImageCropViewController scaleImage:image with:CGSizeMake(targetWidth, scaleheight)];
// Image data with compression
imageData = UIImageJPEGRepresentation(imageToSent,0.75);
NSDate *now = [NSDate dateWithTimeIntervalSinceNow:0];
NSString *caldate = [now description];
appDelegate.imagefilePath= [NSString stringWithFormat:@"%@/%@.jpg", DOCUMENTS_FOLDER,caldate];
[imageData writeToFile:appDelegate.imagefilePath atomically:YES];
appDelegate.cropimage=imageToSent;
}
Black usually means you failed to pull the data out of OpenGL and ended up with a blank image.
How are you getting the image back from OpenGL? OpenGL doesn't work with normal images - you have to issue custom OpenGL calls to read the image back.
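For reference, the usual read-back pattern looks roughly like the sketch below (modeled on Apple's OpenGL ES snapshot sample, QA1704). The framebuffer binding and the width/height parameters are assumptions about your rendering setup:
//Assumes the framebuffer you rendered the cropped image into is currently bound,
//and that width/height are its dimensions in pixels.
- (UIImage *)imageFromCurrentFramebufferWithWidth:(GLint)width height:(GLint)height
{
NSInteger dataLength = width * height * 4;
GLubyte *buffer = (GLubyte *)malloc(dataLength);
//Read the rendered pixels back from OpenGL into CPU memory
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
//Wrap the raw pixels in a CGImage
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, dataLength, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef imageRef = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
provider, NULL, true, kCGRenderingIntentDefault);
//OpenGL's rows are bottom-up; drawing into a UIKit bitmap context flips them upright
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), NO, 1.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(context, kCGBlendModeCopy);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), imageRef);
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//Clean up
CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
free(buffer);
return result;
}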
I need to use the image's height and width. I have an 800x800 pixel image in my iOS simulator, but when I read its size using
image.size.width and image.size.height
it gives me 315x120, which is incorrect. Why is that?
- (void)elcImagePickerController:(ELCImagePickerController *)picker didFinishPickingMediaWithInfo:(NSArray *)info
{
NSLog(#"%#",info);
int temp = [[DataManager sharedObj] imageCount];
for(NSDictionary *dict in info)
{
[[DataManager sharedObj] setImageCount:temp+1];
NSMutableDictionary* dataDic = [[NSMutableDictionary alloc] init];
UIImage* img = [dict objectForKey:UIImagePickerControllerOriginalImage];
NSLog(#"%.2f",img.size.width);
NSLog(#"%.2f",img.size.height);
}
}
Try calling image.frame = CGRectMake(x, y, 800, 800).
If you have a UIImageView *image instead of a UIImage *image, you are probably getting the frame of the view, not the size of the image.
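In other words (a small illustrative snippet; imageView is a hypothetical UIImageView holding your picture):
CGSize viewSize = imageView.frame.size; //the on-screen size of the view, e.g. 315 x 120
CGSize imageSize = imageView.image.size; //the underlying bitmap's size in points, e.g. 800 x 800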
SOLVED:
change fullScreenImage to fullResolutionImage in line 33 of your ELCImagePickerController.m.
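For context, that line in ELCImagePickerController builds a UIImage from the picked ALAsset; the full-resolution version looks roughly like this (a sketch assuming asset is the chosen ALAsset, not the library's exact code):
ALAssetRepresentation *rep = [asset defaultRepresentation];
//fullResolutionImage returns the original pixels; fullScreenImage returns a screen-sized version
UIImage *img = [UIImage imageWithCGImage:[rep fullResolutionImage]
scale:[rep scale]
orientation:(UIImageOrientation)[rep orientation]];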
You can get the image size by using this
UIImage *imageS = [dict objectForKey:UIImagePickerControllerOriginalImage];
NSLog(#"width = %.2f, height = %.2f", imageS.size.width, imageS.size.height);
NSLog(#"Image Size = %#",NSStringFromCGSize(imageS.size));
Both lines will work.
It will print the right size. Please check your simulator image once again: if you added it via Google, you may have saved the image thumbnail instead of the original image, which is why you are getting the wrong size. Otherwise, the method of getting the image size is correct.
I am creating an image for a graph using this code:
UIImage *newImage = [graph imageOfLayer];
NSData *newPNG= UIImageJPEGRepresentation(newImage, 1.0);
NSString *filePath=[NSString stringWithFormat:@"%@/graph.jpg", [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject]];
if([newPNG writeToFile:filePath atomically:YES])
NSLog(#"Created new file successfully");
But I only get the visible area (320x460) in the image. How can I get the whole graph image, including the axes?
Please provide a code snippet showing how I can do this with Core Plot.
Thanks In Advance...
Make a new graph the size of the desired output image. It doesn't have to be added to a hosting view—that's only needed for displaying it on screen.
CPTXYGraph *graph = [(CPTXYGraph *)[CPTXYGraph alloc] initWithFrame:desiredFrame];
// set up the graph as usual
UIImage *newImage=[graph imageOfLayer];
// process output image
My approach to this problem was to create a method to extract the image.
In that method I momentarily make the hosting view, scroll view, graph bounds, and plot area frame bounds bigger, and then convert the graph to an image.
I then remove the hosting view from its container to take it off screen, and call the doPlot method to reinitialise the plot and its data. My code is below.
There might be a visual jitter whilst this is carried out. However, you could always disguise this with an alert, either one asking for an email address for the export or just a simple alert saying the image was exported.
//=============================================================================
/**
Gets the image of the chart to export via email.
*/
//=============================================================================
-(UIImage *) getImage
{
//Temporarily make the plot bigger.
// CGRect rect = self.hostingView.bounds;
CGRect rect = self.scroller.bounds;
rect.size.height = rect.size.height -100;
rect.origin.x = 0;
rect.size.width = rect.size.width + [fields count] * 100.0;
[self.hostingView setBounds:rect];
[scroller setContentSize: hostingView.frame.size];
graph.plotAreaFrame.bounds = rect;
graph.bounds = rect;
UIImage * image =[graph imageOfLayer];//get image of plot.
//Redraw the plot back at its normal size;
[self.hostingView removeFromSuperview];
self.hostingView = nil;
[self doPlot];
return image;
}//============================================================================
I am creating an app in which I have to show just the part of the area where my place will be plotted, similar to the one below.
Clicking on this image will take my app further. But here it's just a static image generated from my longitude and latitude.
Any help would be really appreciated.
Thanks
Here's the working code, in case anybody is still searching for the answer.
NSString *staticMapUrl = [NSString stringWithFormat:@"http://maps.google.com/maps/api/staticmap?markers=color:red|%f,%f&%@&sensor=true",yourLatitude, yourLongitude,@"zoom=10&size=270x70"];
NSURL *mapUrl = [NSURL URLWithString:[staticMapUrl stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]];
UIImage *image = [UIImage imageWithData: [NSData dataWithContentsOfURL:mapUrl]];
Thanks
You can try Google Static Maps API
Here's sample code
NSString *urlString = @"http://maps.googleapis.com/maps/api/staticmap?center=Berkeley,CA&zoom=14&size=200x200&sensor=false";
NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:urlString]];
UIImage *imageData = [UIImage imageWithData:data];
[yourImageView setImage:imageData];
Through this you can get a static image based on your coordinates.
But I'm afraid customization of the annotation may not be possible.
You could also use MKMapSnapshotter; this allows you to take a screenshot of the map:
MKMapSnapshotOptions *snapOptions = [[MKMapSnapshotOptions alloc] init];
self.options = snapOptions;
CLLocation *location = [[CLLocation alloc] initWithLatitude:self.lat longitude:self.lon];
MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(location.coordinate, 300, 300);
self.options.region = region;
self.options.size = self.view.frame.size;
self.options.scale = [[UIScreen mainScreen] scale];
MKMapSnapshotter *mapSnapshotter = [[MKMapSnapshotter alloc] initWithOptions:self.options];
[mapSnapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
if (error) {
NSLog(@"[Error] %@", error);
return;
}
UIImage *image = snapshot.image;//map image
NSData *data = UIImagePNGRepresentation(image);
}];
What you want to do is fundamental MapKit API functionality. Check out the documentation here:
http://code.google.com/apis/maps/articles/tutorial-iphone.html
and one of my favorite iOS blogs has a MapKit tutorial here:
Today there is this, without Google:
https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/LocationAwarenessPG/MapKit/MapKit.html
Find the code at paragraph "Creating a Snapshot of a Map".
So the result looks consistent with Apple Maps.
I would like to add to @DevC's answer by providing a method (with a completion block) that adds an annotation in the middle of the image. At first I was interested in seeing whether I could add an annotation before the snapshot begins, but MKMapSnapshotter does not support this. So the solution I came up with is to create a pin, draw it on top of the map image, and then create another image of the map + pin. Here is my custom method:
-(void)create_map_screenshot:(MKCoordinateRegion)region :(map_screenshot_completion)map_block {
// Set the map snapshot properties.
MKMapSnapshotOptions *snap_options = [[MKMapSnapshotOptions alloc] init];
snap_options.region = region;
snap_options.size = custom_view.frame.size; // Set the desired output size, e.g. from the view that will display it (custom_view is a placeholder).
snap_options.scale = [[UIScreen mainScreen] scale];
// Initialise the map snapshot camera.
MKMapSnapshotter *map_camera = [[MKMapSnapshotter alloc] initWithOptions:snap_options];
// Take a picture of the map.
[map_camera startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
// Check if the map image was created.
if ((error == nil) && (snapshot.image != nil)) {
// Create the pin image view.
MKAnnotationView *pin = [[MKPinAnnotationView alloc] initWithAnnotation:nil reuseIdentifier:nil];
// Get the map image data.
UIImage *image = snapshot.image;
// Create a map + location pin image.
UIGraphicsBeginImageContextWithOptions(image.size, YES, image.scale); {
[image drawAtPoint:CGPointMake(0.0f, 0.0f)];
CGRect rect = CGRectMake(0.0f, 0.0f, image.size.width, image.size.height);
// Create the pin co-ordinate point.
CGPoint point = [snapshot pointForCoordinate:region.center];
if (CGRectContainsPoint(rect, point)) {
// Draw the pin in the middle of the map.
point.x = (point.x + pin.centerOffset.x - (pin.bounds.size.width / 2.0f));
point.y = (point.y + pin.centerOffset.y - (pin.bounds.size.height / 2.0f));
[pin.image drawAtPoint:point];
}
}
// Get the new map + pin image.
UIImage *map_plus_pin = UIGraphicsGetImageFromCurrentImageContext();
// Return the new image data.
UIGraphicsEndImageContext();
map_block(map_plus_pin);
}
else {
map_block(nil);
}
}];
}
This method also requires a completion block type to be declared, like so:
typedef void(^map_screenshot_completion)(UIImage *);
I hope this is of some use to people looking to add annotations to the map image.
To be clear, there is no way to get an image for a specific location through MapKit.
You have to use an MKMapView, prevent the user from panning/zooming, add the annotation, and handle the tap on the annotation to do whatever you want to do.
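A minimal sketch of that setup (assuming a mapView outlet and your place's coordinates; the annotation title is just an example):
//Lock the map in place so it behaves like a static image
self.mapView.scrollEnabled = NO;
self.mapView.zoomEnabled = NO;
//Drop a pin at the place to plot
MKPointAnnotation *annotation = [[MKPointAnnotation alloc] init];
annotation.coordinate = CLLocationCoordinate2DMake(yourLatitude, yourLongitude);
annotation.title = @"My place";
[self.mapView addAnnotation:annotation];
//React to taps in the MKMapViewDelegate callback mapView:didSelectAnnotationView: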
I am using this function for rendering an MKMapView instance into an image:
@implementation UIView (Ext)
- (UIImage*) renderToImage
{
UIGraphicsBeginImageContext(self.frame.size);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
@end
This works fine, but on iPhone 4 the rendered image doesn't have the same resolution as it does on the device. On the device the map view is rendered at 640x920, while the rendered image is only 320x460.
Then I doubled the size passed to UIGraphicsBeginImageContext(), but that filled only the top-left part of the image.
Question: Is there any way to render the map to an image at the full 640x920 resolution?
Try using UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext:
UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
See QA1703 for more details. It says:
Note: Starting from iOS 4, UIGraphicsBeginImageContextWithOptions allows you to provide with a scale factor. A scale factor of zero sets it to the scale factor of the device's main screen. This enables you to get the sharpest, highest-resolution snapshot of the display, including a Retina Display.
iOS 7 introduced a new way to generate screenshots of an MKMapView. It is now possible to use the new MKMapSnapshot API as follows:
MKMapView *mapView = [..your mapview..]
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc]init];
options.region = mapView.region;
options.mapType = MKMapTypeStandard;
options.showsBuildings = NO;
options.showsPointsOfInterest = NO;
options.size = CGSizeMake(1000, 500);
MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc]initWithOptions:options];
[snapshotter startWithQueue:dispatch_get_main_queue() completionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
if( error ) {
NSLog( #"An error occurred: %#", error );
} else {
[UIImagePNGRepresentation( snapshot.image ) writeToFile:@"/Users/<yourAccountName>/map.png" atomically:YES];
}
}];
Note that overlays and annotations are currently not rendered; you have to draw them onto the resulting snapshot image yourself afterwards. The provided MKMapSnapshot object has a handy helper method to do the mapping between coordinates and points:
CGPoint point = [snapshot pointForCoordinate:locationCoordinate2D];