MKOverlayView is blurred - iPhone

I'm trying to add a png image as a custom map using MKOverlayView. I'm almost there - I am able to get the image lined up in the right place, and I know that the -drawMapRect: method in the subclass of MKOverlayView is being called periodically; I just can't seem to get the image to render properly. It's totally blurry, almost beyond recognition. I also know the image is large enough (it is 1936 × 2967). Here is my code for -drawMapRect:
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context{
// Load image from application bundle
NSString* imageFileName = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"map.jpg"];
CGDataProviderRef provider = CGDataProviderCreateWithFilename([imageFileName UTF8String]);
CGImageRef image = CGImageCreateWithJPEGDataProvider(provider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
// save context before screwing with it
CGContextSaveGState(context);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextSetAlpha(context, 1.0);
// get the overlay bounds
MKMapRect theMapRect = [self.overlay boundingMapRect];
CGRect theRect = [self rectForMapRect:theMapRect];
// Draw image
CGContextDrawImage(context, theRect, image);
CGImageRelease(image);
CGContextRestoreGState(context);
}
Does anyone have a clue what's going on?
Thanks!
-Matt

I've had a similar problem. In my case the boundingMapRect was defined incorrectly. At low zoom levels the full image gets rendered scaled down into a tile. When the map is then zoomed, not all of the tiles covering the image fall inside the boundingMapRect, so they are never redrawn at the new scale and the scaled-down version just gets magnified. At least that's what I think happens.
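For reference, here is a minimal sketch of an overlay that derives its boundingMapRect from the image's corner coordinates; the class and property names are illustrative, not taken from the question:
#import <MapKit/MapKit.h>
@interface MapImageOverlay : NSObject <MKOverlay>
// Geographic corners the image should cover (illustrative names).
@property (nonatomic, assign) CLLocationCoordinate2D topLeftCoordinate;
@property (nonatomic, assign) CLLocationCoordinate2D bottomRightCoordinate;
@end
@implementation MapImageOverlay
- (MKMapRect)boundingMapRect {
// Convert both corners to map points and return the rect that exactly
// covers the image; MapKit clips drawMapRect: tiles to this rect.
MKMapPoint topLeft = MKMapPointForCoordinate(self.topLeftCoordinate);
MKMapPoint bottomRight = MKMapPointForCoordinate(self.bottomRightCoordinate);
return MKMapRectMake(topLeft.x, topLeft.y, bottomRight.x - topLeft.x, bottomRight.y - topLeft.y);
}
- (CLLocationCoordinate2D)coordinate {
// Center of the bounding rect, as required by MKOverlay.
MKMapRect bounds = [self boundingMapRect];
return MKCoordinateForMapPoint(MKMapPointMake(MKMapRectGetMidX(bounds), MKMapRectGetMidY(bounds)));
}
@end
If the rect returned here is smaller than the area the image actually covers, zoomed-in tiles outside of it are never redrawn and you get exactly the blur described above.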
Hope this helps.

Get rid of CGContextScaleCTM(context, 1.0, -1.0); and do a vertical flip on your image in Preview instead. MapKit seems to use the context information to determine which part of the image to render more clearly. I know it's been a while, but hope it helps!

Thanks Rob, you made my day. My blurry overlay image became sharp when I replaced
CGContextScaleCTM(context, 1.0, -1.0);
with
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, theRect.size.height);
CGContextConcatCTM(context, flipVertical);

What's wrong with the way I have tried to flip (mirror) a UIImage?

I have been attempting this for a few days now. I'm creating a sprite sheet loader, however I must also be able to load the sprites facing the opposite direction. This involves flipping the images that I already have loaded.
I have already attempted to do this using UIImageOrientation / UIImageOrientationUpMirrored; however, this has absolutely no effect and simply draws the frame with the exact same orientation as before.
I have since attempted a slightly more complicated way, which I will include below. But still, it simply draws the image exactly as it was loaded into the application (not mirrored).
I've included the method below (along with my comments so that you can maybe follow my thought pattern), can you see what I am doing wrong?
- (UIImage*) getFlippedFrame:(UIImage*) imageToFlip
{
//create a context to draw that shizz into
UIGraphicsBeginImageContext(imageToFlip.size);
CGContextRef currentContext = UIGraphicsGetCurrentContext();
//WHERE YOU LEFT OFF. you're attempting to find a way to flip the image in imagetoflip. and return it as a new UIimage. But no luck so far.
[imageToFlip drawInRect:CGRectMake(0, 0, imageToFlip.size.width, imageToFlip.size.height)];
//take the current context with the old frame drawn in and flip it.
CGContextScaleCTM(currentContext, -1.0, 1.0);
//create a UIImage made from the flipped context. However will the transformation survive the transition to UIImage? UPDATE: Apparently not.
UIImage* flippedFrame = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return flippedFrame;
}
Thank you,
Guy.
I would have expected that you have to change the transform of the context first and then draw. Also, you need to translate, because the flip moves the drawing into negative coordinates. So replace
[imageToFlip drawInRect:CGRectMake(0, 0, imageToFlip.size.width, imageToFlip.size.height)];
CGContextScaleCTM(currentContext, -1.0, 1.0);
with (edited based on comments)
CGContextTranslateCTM(currentContext, imageToFlip.size.width, 0);
CGContextScaleCTM(currentContext, -1.0, 1.0);
[imageToFlip drawInRect:CGRectMake(0, 0, imageToFlip.size.width, imageToFlip.size.height)];
NOTE: From the comments, a category you can use:
@implementation UIImage (Flip)
- (UIImage*)horizontalFlip {
UIGraphicsBeginImageContext(self.size);
CGContextRef current_context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(current_context, self.size.width, 0);
CGContextScaleCTM(current_context, -1.0, 1.0);
[self drawInRect:CGRectMake(0, 0, self.size.width, self.size.height)];
UIImage *flipped_img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return flipped_img;
}
@end
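With that category in place, mirroring a loaded frame is a one-liner (spriteFrame is just an illustrative variable name for one of your loaded sprites):
UIImage* mirroredFrame = [spriteFrame horizontalFlip];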

Saving an image from a page of a CGPDFDocument does not fit perfectly in a UIImageView

I am having some trouble saving a PDF page as a UIImage. The PDF is loaded from the internet and has one page (the original PDF has been split into several). But the converted image is sometimes cropped, and sometimes it is small and leaves white space when it is placed in the UIImageView.
here is the code
-(UIImage *)imageFromPdf:(NSString *) pdfUrl{
NSURL *pdfUrlStr=[NSURL URLWithString:pdfUrl];
CFURLRef docURLRef=(CFURLRef)pdfUrlStr;
UIGraphicsBeginImageContext(CGSizeMake(768, 1024)); //840, 960
NSLog(@"save begin");
CGContextRef context = UIGraphicsGetCurrentContext();
//CFURLRef pdfURL = CFBundleCopyResourceURL(CFBundleGetMainBundle(), CFSTR("/file.pdf"), NULL, NULL);
CGPDFDocumentRef pdf = CGPDFDocumentCreateWithURL(docURLRef);
NSLog(@"save complete");
CGContextTranslateCTM(context, 0.0, 900);//320
CGContextScaleCTM(context, 1.0, -1.0);
CGPDFPageRef page = CGPDFDocumentGetPage(pdf, 1);
CGContextSaveGState(context);
CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, CGRectMake(0, 0, 768, 1024), 0, true);
CGContextConcatCTM(context, pdfTransform);
CGContextDrawPDFPage(context, page);
CGContextRestoreGState(context);
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return resultingImage;
}
By the way, I have prepared my UIImageView like this:
self.PDFImageVIew.contentMode = UIViewContentModeScaleAspectFit;
self.PDFImageVIew.clipsToBounds = YES;
I just want this image to fit the UIImageView perfectly, and the conversion may also be reducing the image quality. Can you suggest how I can keep the quality as well? Please help and give me some suggestions.
thanks
CGContextTranslateCTM(context, 0.0, 900);//320
Here the last parameter of the translate operation should generally be the height of the context (or of the rectangle you are creating the image for). Since your image context is 1024 points tall, it should be 1024 rather than 900 (assuming the status bar is not involved). That should eliminate the cropping issue. One more thing I noticed in your code: you should save the graphics state before doing any operations on the context; you do save it, but only after a few operations.
The code above fits the page by height, so if the actual page is taller than your context it will be scaled down and you will obviously see white space around the page.
One more thing: if your original PDF page has white space in it, then as far as I know there is no way to eliminate that.
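Putting those suggestions together, here is a rough sketch of the method with the translation using the full context height and the graphics state saved before any transform. The fixed 768 x 1024 size is carried over from the question, and using a retina-scale context to preserve quality is an assumption on my part:
-(UIImage *)imageFromPdf:(NSString *) pdfUrl{
NSURL *pdfUrlStr=[NSURL URLWithString:pdfUrl];
CFURLRef docURLRef=(CFURLRef)pdfUrlStr; // under ARC this cast needs __bridge
CGPDFDocumentRef pdf = CGPDFDocumentCreateWithURL(docURLRef);
CGPDFPageRef page = CGPDFDocumentGetPage(pdf, 1);
CGSize contextSize = CGSizeMake(768, 1024);
// A scale of 0.0 renders at the device's screen scale, which helps keep quality on retina displays.
UIGraphicsBeginImageContextWithOptions(contextSize, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Save the state before any transform, then flip using the full context height (1024), not 900.
CGContextSaveGState(context);
CGContextTranslateCTM(context, 0.0, contextSize.height);
CGContextScaleCTM(context, 1.0, -1.0);
// Fit the page's crop box into the context rectangle.
CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, CGRectMake(0, 0, contextSize.width, contextSize.height), 0, true);
CGContextConcatCTM(context, pdfTransform);
CGContextDrawPDFPage(context, page);
CGContextRestoreGState(context);
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGPDFDocumentRelease(pdf);
return resultingImage;
}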

Proper use of MKOverlayView

I am writing an iPhone app in which I place a large PNG image (1936 × 2967) on an MKMapView using MKOverlayView. I am a little confused about how to appropriately implement the drawMapRect: function in MKOverlayView - should I manually segment my image before drawing it? Or should I let the mechanisms of MKOverlayView handle all that?
My impression from other posts is that before MKOverlayView was available, you were expected to segment images yourself for this kind of task, and use a CATiledLayer. I thought maybe MKOverlayView took care of all the dirty work.
The real reason I ask though is because when I run my app through Instruments using the allocations tool, I find that the number of live bytes my app is using steadily increases with the introduction of the custom image on the map. Right now I am NOT segmenting my image, but I also am seeing no record of memory leaks in the leaks tool in Instruments. Here is my drawMapRect: function:
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context{
// Load image from application bundle
NSString* imageFileName = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"map.png"];
CGDataProviderRef provider = CGDataProviderCreateWithFilename([imageFileName UTF8String]);
CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
MKMapRect overlayMapRect = [self.overlay boundingMapRect];
CGRect overlayRect = [self rectForMapRect:overlayMapRect];
// draw image
CGContextSaveGState(context);
CGContextDrawImage(context, overlayRect, image);
CGContextRestoreGState(context);
CGImageRelease(image);
}
If my drawMapRect: function is not the cause of these memory issues, does anybody know what it might be? I know through debugging that my viewForOverlay: function for the mapView only gets called once for each overlay, so it's not that memory is leaking there or something.
Any advice is welcome!
Thanks, -Matt
EDIT: So it turns out that the memory issue is actually being caused by MKMapView itself. Every time I move the map, the memory usage goes up steadily and never comes down. This doesn't seem good :(
A bit of a late answer, leaving it here in case somebody else hits the same problem in the future. The flaw here is trying to render the whole image, while the documentation clearly says:
In addition, you should avoid drawing the entire contents of the overlay each time this method is called. Instead, always take the mapRect parameter into consideration and avoid drawing content outside that rectangle.
So you have to draw only the part of the image that lies in the area defined by mapRect.
Updated: keep in mind that the draw rect here can be larger than mapRect; you need to adjust the paint and cut regions accordingly.
let overlayMapRect = overlay.boundingMapRect
let overlayDrawRect = self.rect(for: overlayMapRect)
// watch out for draw rect adjustment here --
let drawRect = self.rect(for: mapRect).intersection(overlayDrawRect)
let scaleX = CGFloat(image.width) / overlayDrawRect.width
let scaleY = CGFloat(image.height) / overlayDrawRect.height
let transform = CGAffineTransform.init(scaleX: scaleX, y: scaleY)
let imageCut = drawRect.applying(transform)
// omitting optionals checks, you should not
let cutImage = image.cropping(to: imageCut)
// the usual vertical flip issue with image.draw
context.translateBy(x: 0, y: drawRect.maxY + drawRect.origin.y)
context.scaleBy(x: 1, y: -1)
context.draw(cutImage!, in: drawRect, byTiling: false)
Here is the objc version based on epolyakov's answer. It works great, but only without any rotation.
- (void) drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context
{
UIImage* overlayUIImage = <your_uiimage>; // or the rotated copy described below
CGImageRef overlayImage = overlayUIImage.CGImage;
CGRect overlayRect = [self rectForMapRect:[self.overlay boundingMapRect]];
CGRect drawRect = [self rectForMapRect:mapRect];
CGRect rectPortion = CGRectIntersection(overlayRect, drawRect);
CGFloat scaleX = overlayUIImage.size.width / overlayRect.size.width;
CGFloat scaleY = overlayUIImage.size.height / overlayRect.size.height;
CGAffineTransform transform = CGAffineTransformMakeScale(scaleX, scaleY);
CGRect imagePortion = CGRectApplyAffineTransform(rectPortion, transform);
CGImageRef cutImage = CGImageCreateWithImageInRect(overlayImage, imagePortion);
CGRect finalRect = rectPortion;
CGContextTranslateCTM(context, 0, finalRect.origin.y + CGRectGetMaxY(finalRect));
CGContextScaleCTM(context, 1.0, -1.0);
CGContextSetAlpha(context, self.alpha);
CGContextDrawImage(context, finalRect, cutImage);
}
If you also need to handle rotation of your image, I found a trick using a rotated version of the original image (this is because the map rendering always draws vertical rects, and rotating the image inside this method would cut it).
Using a rotated version of the original image lets you render with the vertical rects the map expects:
UIImage* overlayUIImage = [self rotatedImage:<your_uiimage> withAngle:<angle_of_image>];
CGImageRef overlayImage = overlayUIImage.CGImage;
And this is the method that produces a rotated image within its bounding rect:
- (UIImage*) rotatedImage:(UIImage*)image withAngle:(CGFloat)angle
{
float radians = degreesToRadians(angle); // degreesToRadians(x) is assumed to be a helper equal to x * M_PI / 180.0
CGAffineTransform xfrm = CGAffineTransformMakeRotation(radians);
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
CGRect rotatedImageBoundingRect = CGRectApplyAffineTransform (imageRect, xfrm);
UIGraphicsBeginImageContext(rotatedImageBoundingRect.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM (ctx, rotatedImageBoundingRect.size.width/2., rotatedImageBoundingRect.size.height/2.);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGContextRotateCTM (ctx, radians);
CGContextDrawImage (ctx, CGRectMake(-image.size.width / 2, -image.size.height / 2, image.size.width, image.size.height), image.CGImage);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}

Image drawing problem when using CGContext. Image is mirrored horizontally

When I draw a UIImage* image on a UIView using the code below, the image is mirrored horizontally. For example, if I draw a 4, it comes out as its mirror image.
CGRect rect = CGRectMake(x, y, imageWidth, imageHeight);
CGContextDrawImage((CGContextRef) g, rect, ((UIImage*)image).CGImage);
What is the problem? What am I doing wrong? If somebody knows how to fix it, please let me know. I really appreciate it in advance.
Thanks a loooooooooot.
See: CGContextDrawImage draws image upside down when passed UIImage.CGImage
Use [image drawInRect:rect] instead of CGContextDrawImage.
You can turn the picture the right way around using:
CGAffineTransform transform =CGAffineTransformMakeTranslation(0.0, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
//draw image in the context
CGContextDrawImage(context, rect, ((UIImage*)image).CGImage);
Using [image drawInRect:rect] uses the default context, i.e. the screen; you can not give it your own current context, e.g. if you want to put it as part of a button's image.
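For what it's worth, -drawInRect: draws into whatever graphics context is current, so a common pattern for drawing into your own CGContextRef is to push that context first and let UIKit handle the flip. A small sketch (context, image, and rect stand for your own variables):
// Make the target CGContextRef the current UIKit context, draw, then restore the previous one.
UIGraphicsPushContext(context);
[image drawInRect:rect];
UIGraphicsPopContext();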

Capturing EAGLview content WITH alpha channel on iPhone

I have been struggling with this issue for quite some time now and couldn't find an answer so far. Basically, what I want to do is capture the content of my EAGLview and then use it to merge it with other images. Anyway, the main problem is that everything transparent in my EAGLview renders opaque when saving it to the photo album or putting it into a UIImageView. Let me share some code with you that I found somewhere else:
- (CGImageRef) glToUIImage {
unsigned char buffer[320*480*4];
glReadPixels(0,0,320,480,GL_RGBA,GL_UNSIGNED_BYTE,&buffer);
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, &buffer, 320*480*4, NULL);
CGImageRef iref = CGImageCreate(320,480,8,32,320*4,CGColorSpaceCreateDeviceRGB(),kCGBitmapByteOrderDefault,ref,NULL,true,kCGRenderingIntentDefault);
size_t width = CGImageGetWidth(iref);
size_t height = CGImageGetHeight(iref);
size_t length = width*height*4;
uint32_t *pixels = (uint32_t *)malloc(length);
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width*4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextTranslateCTM(context, 0.0, height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, CGRectMake(0.0, 0.0, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
UIImage *outputImage = [UIImage imageWithCGImage:outputRef];
UIImageWriteToSavedPhotosAlbum(outputImage, nil, nil, nil);
return outputRef;
}
As I already mentioned, this perfectly grabs the content of my EAGLview, but I can not get the image with its alpha values.
Any help appreciated. Thanks!
Two places I can see that you might be losing your transparency:
when you're drawing your scene: does your scene have a transparent background? make sure you're doing a glClear to something like (0,0,0,0) rather than (0,0,0,1).
when you're drawing the image to flip it over: what is the default background color here? Seems likely it's a non-transparent black and you'll end up with that where the transparent parts of your scene used to be.
You could check if #2 is your problem by saving the image before you flip it over, and if it is, you could avoid the flipping over process by flipping the memory in your pixels buffer directly rather than using Core Graphics calls.
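For the second option, here is a rough sketch of flipping the glReadPixels buffer in place by swapping rows; buffer and the 320 x 480 RGBA layout are taken from the code in the question:
// Swap row y with row (height - 1 - y); this flips the image vertically without touching the alpha channel.
const size_t width = 320, height = 480, bytesPerRow = width * 4;
unsigned char tempRow[320 * 4];
for (size_t y = 0; y < height / 2; y++) {
unsigned char *top = buffer + y * bytesPerRow;
unsigned char *bottom = buffer + (height - 1 - y) * bytesPerRow;
memcpy(tempRow, top, bytesPerRow);
memcpy(top, bottom, bytesPerRow);
memcpy(bottom, tempRow, bytesPerRow);
}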