How to sharpen/blur a UIImage on iPhone?

I have a view with a UIImageView and a UIImage set on it. How do I sharpen or blur the image using Core Graphics?

Apple has a great sample program called GLImageProcessing that includes a very fast blur/sharpen effect using OpenGL ES 1.1 (meaning it works on all iPhones, not just the 3GS).
If you're not fairly experienced with OpenGL, the code may make your head hurt.

Going down the OpenGL route felt like insane overkill for my needs (blurring a touched point on an image). Instead I implemented a simple blurring process that takes a touch point, creates a rect containing that point, samples the image in that rect, and then redraws the sample image upside down on top of the source rect several times, slightly offset and with slightly different opacity each time. This produces a pretty nice poor man's blur effect without an insane amount of code and complexity. Code follows:
- (UIImage *)imageWithBlurAroundPoint:(CGPoint)point {
    CGRect bnds = CGRectZero;
    UIImage *copy = nil;
    CGContextRef ctxt = nil;
    CGImageRef imag = self.CGImage;
    CGRect rect = CGRectZero;
    CGAffineTransform tran = CGAffineTransformIdentity;
    int indx = 0;

    rect.size.width = CGImageGetWidth(imag);
    rect.size.height = CGImageGetHeight(imag);
    bnds = rect;

    UIGraphicsBeginImageContext(bnds.size);
    ctxt = UIGraphicsGetCurrentContext();

    // Cut a sample out of the image around the touch point
    CGRect fillRect = CGRectMake(point.x - 10, point.y - 10, 20, 20);
    CGImageRef sampleImageRef = CGImageCreateWithImageInRect(self.CGImage, fillRect);

    // Flip the image right side up & draw
    CGContextSaveGState(ctxt);
    CGContextScaleCTM(ctxt, 1.0, -1.0);
    CGContextTranslateCTM(ctxt, 0.0, -rect.size.height);
    CGContextConcatCTM(ctxt, tran);
    CGContextDrawImage(UIGraphicsGetCurrentContext(), rect, imag);

    // Restore the context so that the coordinate system is restored
    CGContextRestoreGState(ctxt);

    // Redraw the sample over the source rect several times, shifting the
    // opacity and the positioning slightly to produce a blurred effect
    for (indx = 0; indx < 5; indx++) {
        CGRect myRect = CGRectOffset(fillRect, 0.5 * indx, 0.5 * indx);
        CGContextSetAlpha(ctxt, 0.2 * indx);
        CGContextScaleCTM(ctxt, 1.0, -1.0);
        CGContextDrawImage(ctxt, myRect, sampleImageRef);
    }

    CGImageRelease(sampleImageRef); // release the sample so it doesn't leak
    copy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return copy;
}

What you really need are the image filters in the Core Image API. Unfortunately Core Image is not supported on the iPhone (unless that changed recently and I missed it). Be careful here: IIRC, the filters are available in the Simulator, but not on the device.
AFAIK there is no other way to do it properly with the native libraries, although I've sort of faked a blur before by creating an extra layer over the top which is a copy of what's below, offset by a pixel or two and with a low alpha value. For a proper blur effect, though, the only way I've been able to do it is offline in Photoshop or similar.
Would be keen to hear if there is a better way too, but to my knowledge that is the situation currently.
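For what it's worth, a minimal sketch of that layered fake-blur idea; sourceView, the 1-point offset, and the 0.3 alpha are illustrative placeholders rather than anything from the original answer:
// Snapshot the view you want to appear blurred
UIGraphicsBeginImageContext(sourceView.bounds.size);
[sourceView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Overlay a slightly offset, low-alpha copy of the snapshot on top of the original
UIImageView *ghost = [[UIImageView alloc] initWithImage:snapshot];
ghost.frame = CGRectOffset(sourceView.frame, 1.0, 1.0);
ghost.alpha = 0.3;
[sourceView.superview addSubview:ghost];
[ghost release]; // assuming manual reference counting, as in the rest of this thread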

Have a look at the following libraries:
https://github.com/coryleach/UIImageAdjust
https://github.com/esilverberg/ios-image-filters
https://github.com/cmkilger/CKImageAdditions

Related

Box2d fixture and body out of sync on retina display

I'm trying to make a cocos2d/box2d game work on iPad, iPhone and iPhone retina.
My problem is that the fixture and body don't line up on the Retina simulator; please click on the screenshot below for illustration (as a new Stack Overflow member, I'm not allowed to post the screenshot here).
screenshot
(please disregard the different shapes, I want the 4 corners to line up)
I've done quite a bit of research on this over the last couple of days, and the closest I found was this:
link
But the solution offered there with PTM_RATIO and CC_CONTENT_SCALE_FACTOR() doesn't seem to work in my case. I think it has to do with the fact that I don't load an image from file into my sprite. Most solutions to this problem are based on loading -hd image files for the Retina display, but I don't want to use files in my game at all. I basically want to draw the polygons myself at runtime.
My code looks as follows:
-(CCSprite*)addSprite
{
    CGSize contextsize = CGSizeMake(200, 200);
    UIGraphicsBeginImageContext(contextsize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextFlush(context);
    CGContextSetAllowsAntialiasing(context, true);
    CGContextTranslateCTM(context, 0, contextsize.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGFloat components[] = {0.0, 0.0, 1.0, 1.0};
    CGColorRef color = CGColorCreate(colorspace, components);
    CGContextSetStrokeColorWithColor(context, color);
    UIBezierPath* aPath;
    aPath = [UIBezierPath bezierPathWithArcCenter:CGPointMake(100, 100)
                                           radius:100
                                       startAngle:0
                                         endAngle:1.57
                                        clockwise:YES];
    [aPath addArcWithCenter:CGPointMake(100, 100)
                     radius:50
                 startAngle:1.57
                   endAngle:0
                  clockwise:NO];
    [aPath stroke];
    CGContextStrokePath(context);
    CGColorSpaceRelease(colorspace);
    CGColorRelease(color);
    UIImage *graphImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CCTexture2D *tex = [[[CCTexture2D alloc] initWithImage:graphImage] autorelease];
    CCSprite *sprite = [CCSprite spriteWithTexture:tex];
    return sprite;
}
-(void) addFixture:(CCSprite *)fixsprite
{
    b2Vec2 arcdots[] = {
        b2Vec2(50.0f / PTM_RATIO, 0.0f / PTM_RATIO),
        b2Vec2(100.0f / PTM_RATIO, 0.0f / PTM_RATIO),
        b2Vec2(0.0f / PTM_RATIO, 100.0f / PTM_RATIO),
        b2Vec2(0.0f / PTM_RATIO, 50.0f / PTM_RATIO)
    };
    b2PolygonShape p_shape;
    b2FixtureDef fixtureDef;
    b2BodyDef bodyDef;
    bodyDef.type = b2_kinematicBody;
    bodyDef.position.Set(100/PTM_RATIO, 100/PTM_RATIO);
    bodyDef.userData = fixsprite;
    b2Body *body = world->CreateBody(&bodyDef);
    p_shape.Set(arcdots, 4);
    fixtureDef.shape = &p_shape;
    fixtureDef.density = 1.0f;
    fixtureDef.friction = 0.3f;
    body->CreateFixture(&fixtureDef);
}
And I call these functions from the main routine as follows:
CCSprite *sprite2 = [self addSprite];
sprite2.position = ccp(0, 0);
[self addChild:sprite2 z:0];
[self addFixture:sprite2];
I have these lines uncommented in the delegate file:
if( ! [director enableRetinaDisplay:YES] )
    CCLOG(@"Retina Display Not supported");
Please let me know if further information is required. And please be gentle, I'm only starting to learn this. Thanks for your time.
Unless otherwise mentioned, all coordinates in cocos2d (and most of UIKit) are given in points, not pixels. On a Retina display device you still have a point resolution of 480x320 points (960x640 pixels).
From that it follows: when you calculate in actual pixels, multiply or divide by CC_CONTENT_SCALE_FACTOR(). If you deal with point coordinates, do nothing. Since you're rendering your own polys, I assume you know whether you're using actual pixel coordinates or not. If you use OpenGL directly, then you'll be working with pixel coordinates.
I'm not sure if enabling Retina display mode does anything for you if you don't use cocos2d to render your content.
Lastly, a common misunderstanding is that the Box2D world is using point coordinates and must be transformed to pixels or vice versa. Neither is the case. The Box2D world is completely oblivious to a specific coordinate system. The use of PTM_RATIO is done only to ensure that Box2D coordinates are within reasonable ranges for the Box2D engine, since it works best with objects that are 1 meter in size/diameter, and most objects should range from 0.1 to 10 meters in diameter.
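To make the distinction concrete, here is a small sketch of the conversions involved, assuming the usual PTM_RATIO define from the Box2D templates and cocos2d's ccp()/CC_CONTENT_SCALE_FACTOR() macros; the helper names are illustrative:
// Box2D works in meters; cocos2d positions are in points; pixels only matter
// when you talk to OpenGL directly.
static inline b2Vec2 pointsToMeters(CGPoint p) {
    return b2Vec2(p.x / PTM_RATIO, p.y / PTM_RATIO);   // what bodyDef.position.Set expects
}
static inline CGPoint metersToPoints(b2Vec2 v) {
    return ccp(v.x * PTM_RATIO, v.y * PTM_RATIO);      // what sprite.position expects
}
static inline CGPoint pointsToPixels(CGPoint p) {
    return ccp(p.x * CC_CONTENT_SCALE_FACTOR(),        // only needed for raw OpenGL coordinates
               p.y * CC_CONTENT_SCALE_FACTOR());
}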

Most efficient way to draw part of an image in iOS

Given a UIImage and a CGRect, what is the most efficient way (in memory and time) to draw the part of the image corresponding to the CGRect (without scaling)?
For reference, this is how I currently do it:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect frameRect = CGRectMake(frameOrigin.x + rect.origin.x, frameOrigin.y + rect.origin.y, rect.size.width, rect.size.height);
    CGImageRef imageRef = CGImageCreateWithImageInRect(image_.CGImage, frameRect);
    CGContextTranslateCTM(context, 0, rect.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, rect, imageRef);
    CGImageRelease(imageRef);
}
Unfortunately this seems extremely slow with medium-sized images and a high setNeedsDisplay frequency. Playing with UIImageView's frame and clipsToBounds produces better results (with less flexibility).
I guess you are doing this to display part of an image on the screen, since you mentioned UIImageView. And optimization problems always need to be defined specifically.
Trust Apple for Regular UI stuff
Actually, UIImageView with clipsToBounds is one of the fastest/simplest ways to achieve your goal if your goal is just clipping a rectangular region of an image (not too big). Also, you don't need to send the setNeedsDisplay message.
Or you can try putting the UIImageView inside an empty UIView and setting clipping on the container view. With this technique, you can transform your image freely by setting its transform property in 2D (scaling, rotation, translation).
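A rough sketch of that container approach; image and cropRect are placeholders for your own values:
// The container clips; the image view inside it can be moved or transformed freely.
UIView *container = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
container.clipsToBounds = YES;
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
// Shift the image view so the desired sub-rect lands inside the container.
imageView.frame = CGRectMake(-cropRect.origin.x, -cropRect.origin.y,
                             image.size.width, image.size.height);
[container addSubview:imageView];
// Any 2D transform (scale, rotate, translate) can then be applied to the image view.
imageView.transform = CGAffineTransformMakeScale(1.5, 1.5);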
If you need 3D transformation, you can still use CALayer with the masksToBounds property, but using CALayer directly will usually gain you only negligible extra performance.
Anyway, you need to know all of the low-level details to use them properly for optimization.
Why is that one of the fastest ways?
UIView is just a thin layer on top of CALayer, which is implemented on top of OpenGL, which is a virtually direct interface to the GPU. This means UIKit is GPU-accelerated.
So if you use them properly (I mean, within their designed limitations), they will perform as well as a plain OpenGL implementation. If you use just a few images to display, you'll get acceptable performance with a UIView implementation because it gets the full acceleration of the underlying OpenGL (which means GPU acceleration).
Anyway, if you need extreme optimization for hundreds of animated sprites with finely tuned pixel shaders, as in a game app, you should use OpenGL directly, because CALayer lacks many options for optimization at lower levels. In any case, at least for optimization of UI stuff, it's incredibly hard to do better than Apple.
Why is your method slower than UIImageView?
What you should know about is GPU acceleration. On all recent computers, fast graphics performance is achieved only with the GPU. The point, then, is whether the method you're using is implemented on top of the GPU or not.
IMO, CGImage drawing methods are not implemented with the GPU.
I think I read a mention of this in Apple's documentation, but I can't remember where, so I'm not certain. Anyway, I believe CGImage is implemented on the CPU because:
Its API looks like it was designed for the CPU, such as the bitmap editing interface and text drawing; they don't fit a GPU interface very well.
The bitmap context interface allows direct memory access, which means its backing storage is located in CPU memory. Maybe this is somewhat different on a unified memory architecture (and also with the Metal API), but in any case, the initial design intention of CGImage must have been for the CPU.
Many more recently released Apple APIs mention GPU acceleration explicitly, which implies their older APIs were not GPU-accelerated. If there's no special mention, it's usually done on the CPU by default.
So it seems to be done on the CPU. Graphics operations done on the CPU are a lot slower than on the GPU.
Simply clipping an image and compositing image layers are very simple and cheap operations for the GPU (compared to the CPU), so you can expect the UIKit library to utilize this, because the whole of UIKit is implemented on top of OpenGL.
Here's another thread about whether the CoreGraphics on iOS is using OpenGL or not: iOS: is Core Graphics implemented on top of OpenGL?
About Limitations
Because optimization is a kind of micro-management, specific numbers and small facts are very important. What counts as medium-sized? OpenGL on iOS usually limits the maximum texture size to 1024x1024 pixels (maybe larger in recent releases). If your image is larger than this, it will not work, or performance will degrade greatly (I think UIImageView is optimized for images within the limits).
If you need to display huge images with clipping, you have to use another optimization like CATiledLayer and that's a totally different story.
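For reference, a CATiledLayer-backed view is usually set up roughly like the sketch below; the class name and tile sizes are illustrative, and drawRect: is then called once per tile:
#import <QuartzCore/QuartzCore.h>

@interface TiledImageView : UIView
@end

@implementation TiledImageView
// Back this view with a CATiledLayer instead of a plain CALayer.
+ (Class)layerClass {
    return [CATiledLayer class];
}
- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
        tiledLayer.tileSize = CGSizeMake(256, 256);
        tiledLayer.levelsOfDetail = 4;
    }
    return self;
}
- (void)drawRect:(CGRect)rect {
    // Called per tile: draw only the portion of the huge image that intersects rect,
    // e.g. with CGImageCreateWithImageInRect + CGContextDrawImage.
}
@end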
And don't go down the OpenGL route unless you want to understand every detail of OpenGL. It requires a full understanding of low-level graphics and at least 100 times more code.
About the Future
Though it is not very likely to happen, CGImage stuff (or anything else) doesn't need to be stuck on the CPU forever. Don't forget to check the base technology of the API you're using. Still, the GPU is a very different monster from the CPU, so API designers usually mention GPU acceleration explicitly and clearly.
It would ultimately be faster, with a lot less image creation from sprite atlases, if you could set not only the image for a UIImageView, but also the top-left offset to display within that UIImage. Maybe this is possible.
Meanwhile, I created these useful functions in a utility class that I use in my apps. They create a UIImage from part of another UIImage, with options to rotate, scale, and flip using standard UIImageOrientation values.
My app creates a lot of UIImages during initialization, and this necessarily takes time. But some images aren't needed until a certain tab is selected. To give the appearance of quicker load I could create them in a separate thread spawned at startup, then just wait till it's done if that tab is selected.
+ (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture {
    return [ChordCalcController imageByCropping:imageToCrop toRect:aperture withOrientation:UIImageOrientationUp];
}

// Draw a full image into a crop-sized area and offset to produce a cropped, rotated image
+ (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture withOrientation:(UIImageOrientation)orientation {
    // convert y coordinate to origin bottom-left
    CGFloat orgY = aperture.origin.y + aperture.size.height - imageToCrop.size.height,
            orgX = -aperture.origin.x,
            scaleX = 1.0,
            scaleY = 1.0,
            rot = 0.0;
    CGSize size;

    switch (orientation) {
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            size = CGSizeMake(aperture.size.height, aperture.size.width);
            break;
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
        case UIImageOrientationUp:
        case UIImageOrientationUpMirrored:
            size = aperture.size;
            break;
        default:
            assert(NO);
            return nil;
    }

    switch (orientation) {
        case UIImageOrientationRight:
            rot = 1.0 * M_PI / 2.0;
            orgY -= aperture.size.height;
            break;
        case UIImageOrientationRightMirrored:
            rot = 1.0 * M_PI / 2.0;
            scaleY = -1.0;
            break;
        case UIImageOrientationDown:
            scaleX = scaleY = -1.0;
            orgX -= aperture.size.width;
            orgY -= aperture.size.height;
            break;
        case UIImageOrientationDownMirrored:
            orgY -= aperture.size.height;
            scaleY = -1.0;
            break;
        case UIImageOrientationLeft:
            rot = 3.0 * M_PI / 2.0;
            orgX -= aperture.size.height;
            break;
        case UIImageOrientationLeftMirrored:
            rot = 3.0 * M_PI / 2.0;
            orgY -= aperture.size.height;
            orgX -= aperture.size.width;
            scaleY = -1.0;
            break;
        case UIImageOrientationUp:
            break;
        case UIImageOrientationUpMirrored:
            orgX -= aperture.size.width;
            scaleX = -1.0;
            break;
    }

    // set the draw rect to pan the image to the right spot
    CGRect drawRect = CGRectMake(orgX, orgY, imageToCrop.size.width, imageToCrop.size.height);

    // create a context for the new image
    UIGraphicsBeginImageContextWithOptions(size, NO, imageToCrop.scale);
    CGContextRef gc = UIGraphicsGetCurrentContext();

    // apply rotation and scaling
    CGContextRotateCTM(gc, rot);
    CGContextScaleCTM(gc, scaleX, scaleY);

    // draw the image to our clipped context using the offset rect
    CGContextDrawImage(gc, drawRect, imageToCrop.CGImage);

    // pull the image from our cropped context
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();

    // pop the context to get back to the default
    UIGraphicsEndImageContext();

    // Note: this is autoreleased
    return cropped;
}
A very simple way to move a big image inside a UIImageView is as follows.
Say we have an image of size (100, 400) representing 4 states of some picture, one below another, and we want to show the 2nd picture, which has offsetY = 100, in a square UIImageView of size (100, 100).
The solution is:
UIImageView *iView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
CGRect contentFrame = CGRectMake(0, 0.25, 1, 0.25);
iView.layer.contentsRect = contentFrame;
iView.image = [UIImage imageNamed:@"NAME"];
Here contentFrame is a normalized frame relative to the real UIImage size.
So, "0" means that we start the visible part of the image from the left border,
"0.25" means that we have a vertical offset of 100,
"1" means that we want to show the full width of the image,
and finally, "0.25" means that we want to show only 1/4 of the image's height.
Thus, in local image coordinates we show the following frame
CGRect visibleAbsoluteFrame = CGRectMake(0*100, 0.25*400, 1*100, 0.25*400)
or CGRectMake(0, 100, 100, 100);
Rather than creating a new image (which is costly because it allocates memory), how about using CGContextClipToRect?
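A minimal sketch of that suggestion applied to the drawRect: from the question; the offset math is illustrative and may need adjusting for your coordinate setup:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Clip to the dirty rect so nothing outside it is composited.
    CGContextClipToRect(context, rect);
    // Flip the coordinate system for CGContextDrawImage, then draw the whole
    // image offset so the desired portion lands inside the clipped area;
    // no cropped CGImage is ever created.
    CGContextTranslateCTM(context, 0, rect.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context,
                       CGRectMake(-frameOrigin.x, -frameOrigin.y,
                                  image_.size.width, image_.size.height),
                       image_.CGImage);
}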
The quickest way is to use an image mask: an image that is the same size as the image to mask but with a certain pixel pattern indicating which portion of the image to mask out when rendering ->
// maskImage is used to block off the portion that you do not want rendered
// note that rect is not actually used because the image mask defines the rect that is rendered
-(void) drawRect:(CGRect)rect maskImage:(UIImage*)maskImage {
    // redraw the mask at the size of the image being masked
    UIGraphicsBeginImageContext(image_.size);
    [maskImage drawInRect:CGRectMake(0, 0, image_.size.width, image_.size.height)];
    maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef maskedImageRef = CGImageCreateWithMask([image_ CGImage], mask);
    image_ = [UIImage imageWithCGImage:maskedImageRef scale:1.0f orientation:image_.imageOrientation];
    CGImageRelease(mask);
    CGImageRelease(maskedImageRef);
}
The newer Swift API is simpler and easier to use:
func crop(img: UIImage, with rect: CGRect) -> UIImage? {
    guard let cgImg = img.cgImage else { return nil }
    // Create bitmap image from context using the rect
    if let imageRef: CGImage = cgImg.cropping(to: rect) {
        // Create a new image based on the imageRef and rotate back to the original orientation
        let image: UIImage = UIImage(cgImage: imageRef, scale: img.scale, orientation: img.imageOrientation)
        return image
    } else {
        return nil
    }
}
ps: I translated @Scott Lahteine's answer into Swift, and the result is really weird.

How do I rotate a UIImageView by 90 degrees inside a UIScrollView with correct image size and scrolling?

I have an image inside a UIImageView which is within a UIScrollView. What I want to do is rotate this image 90 degrees so that it is in landscape by default, set the initial zoom of the image so that the entire image fits into the scroll view, and then allow it to be zoomed up to 100% and back down to the minimum zoom again.
This is what I have so far:
self.imageView.transform = CGAffineTransformMakeRotation(-M_PI/2);
float minimumScale = scrollView.frame.size.width / self.imageView.frame.size.width;
scrollView.minimumZoomScale = minimumScale;
scrollView.zoomScale = minimumScale;
scrollView.contentSize = CGSizeMake(self.imageView.frame.size.height,self.imageView.frame.size.width);
The problem is that if I set the transform, nothing shows up in the scrollview. However if I commented out the transform, everything works except the image is not in the landscape orientation that I want it to be!
If I apply the transform and remove the code that sets the minimumZoomScale and zoomScale properties, then the image shows up in the correct orientation, but with the incorrect zoomScale, and it seems the contentSize property isn't set correctly either - the view doesn't scroll to the edge of the image in the left/right direction, but does scroll top and bottom, and goes well past the edge.
NB: image is being loaded from a URL
Maybe rotating the image itself fits your needs:
UIImage* rotateUIImage(const UIImage* src, float angleDegrees) {
    UIView* rotatedViewBox = [[UIView alloc] initWithFrame: CGRectMake(0, 0, src.size.width, src.size.height)];
    float angleRadians = angleDegrees * ((float)M_PI / 180.0f);
    CGAffineTransform t = CGAffineTransformMakeRotation(angleRadians);
    rotatedViewBox.transform = t;
    CGSize rotatedSize = rotatedViewBox.frame.size;
    [rotatedViewBox release];

    UIGraphicsBeginImageContext(rotatedSize);
    CGContextRef bitmap = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(bitmap, rotatedSize.width/2, rotatedSize.height/2);
    CGContextRotateCTM(bitmap, angleRadians);
    CGContextScaleCTM(bitmap, 1.0, -1.0);
    CGContextDrawImage(bitmap, CGRectMake(-src.size.width / 2, -src.size.height / 2, src.size.width, src.size.height), [src CGImage]);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
I believe the easiest way (and thread safe too) is to do:
//assume that the image is loaded in landscape mode from disk
UIImage * LandscapeImage = [UIImage imageNamed: imgname];
UIImage * PortraitImage = [[UIImage alloc] initWithCGImage: LandscapeImage.CGImage
scale: 1.0
orientation: UIImageOrientationLeft];
Any calculations that you do based on the imageView's frame should probably be done before you apply any transformations to it. But I would actually suggest doing those calculations based on the size of the UIImage, not the UIImageView. Then set both the UIImageView's frame and the UIScrollView's contentSize based on that.
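For example, a rough sketch of doing the calculations from the UIImage itself before applying the rotation; the exact ordering and frame handling may need tweaking for your setup:
// After a 90-degree rotation the image's height becomes the on-screen width,
// so base the sizes on the UIImage rather than the (transformed) UIImageView.
CGSize imageSize = self.imageView.image.size;
CGSize rotatedSize = CGSizeMake(imageSize.height, imageSize.width);

self.imageView.bounds = CGRectMake(0, 0, imageSize.width, imageSize.height);
scrollView.contentSize = rotatedSize;

float minimumScale = scrollView.frame.size.width / rotatedSize.width;
scrollView.minimumZoomScale = minimumScale;
scrollView.zoomScale = minimumScale;

// Apply the rotation last, once the zoom and content size are in place.
self.imageView.transform = CGAffineTransformMakeRotation(-M_PI/2);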
Max's suggestion is a good one, although with a larger image it could be a performance killer. Are you displaying this image from your app's resources? If so, why not just rotate the images before you even build the app?
There's a much easier solution that is also faster, just do this:
- (void) imageRotateTapped:(id)sender
{
    [UIView animateWithDuration:0.33f animations:^()
    {
        self.imageView.transform = CGAffineTransformMakeRotation(RADIANS(self.rotateDegrees += 90.0f));
        self.imageView.frame = self.imageView.superview.bounds; // change this to whatever rect you want
    }];
}
When the user is done, you will need to actually create a new rotated image, but that is very easy to do.
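One way to do that final step is to re-render the transformed image view into a bitmap context, roughly as sketched below (assuming the view's frame already reflects the accumulated rotation):
// Render the transformed image view into a new UIImage of its on-screen size.
CGSize finalSize = self.imageView.frame.size; // frame already includes the transform
UIGraphicsBeginImageContext(finalSize);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// renderInContext: ignores the view's own transform, so re-apply it about the center.
CGContextTranslateCTM(ctx, finalSize.width / 2, finalSize.height / 2);
CGContextConcatCTM(ctx, self.imageView.transform);
CGContextTranslateCTM(ctx, -self.imageView.bounds.size.width / 2,
                           -self.imageView.bounds.size.height / 2);
[self.imageView.layer renderInContext:ctx];
UIImage *rotatedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();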
I was using the accepted answer for a while until we noticed that non-square rotations of images taken directly from the camera appeared stretched (they were rotated as desired, but the frame width/height wasn't adjusted).
Great explanation/post here from Trevor: http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
In the end, it was a very simple import of Trevor's code, which uses categories to add a resizedImage:interpolationQuality method to UIImage. So yeah, user beware: if it still works for you, great. But if it doesn't, I'd take a look at that library instead.

Proper use of MKOverlayView

I am writing an iPhone app in which I place a large PNG image (1936 × 2967) on an MKMapView using MKOverlayView. I am a little confused about how to appropriately implement the drawMapRect: function in MKOverlayView - should I manually segment my image before drawing it? Or should I let the mechanisms of MKOverlayView handle all that?
My impression from other posts is that before MKOverlayView was available, you were expected to segment images yourself for this kind of task, and use a CATiledLayer. I thought maybe MKOverlayView took care of all the dirty work.
The real reason I ask though is because when I run my app through Instruments using the allocations tool, I find that the number of live bytes my app is using steadily increases with the introduction of the custom image on the map. Right now I am NOT segmenting my image, but I also am seeing no record of memory leaks in the leaks tool in Instruments. Here is my drawMapRect: function:
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context{
    // Load image from application bundle
    NSString* imageFileName = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"map.png"];
    CGDataProviderRef provider = CGDataProviderCreateWithFilename([imageFileName UTF8String]);
    CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    MKMapRect overlayMapRect = [self.overlay boundingMapRect];
    CGRect overlayRect = [self rectForMapRect:overlayMapRect];
    // draw image
    CGContextSaveGState(context);
    CGContextDrawImage(context, overlayRect, image);
    CGContextRestoreGState(context);
    CGImageRelease(image);
}
If my drawMapRect: function is not the cause of these memory issues, does anybody know what it might be? I know through debugging that my viewForOverlay: function for the mapView only gets called once for each overlay, so it's not that memory is leaking there or something.
Any advice is welcome!
Thanks, -Matt
EDIT: so it turns out that the memory issue is actually being caused by MKMapView - every time I move the map at all the memory usage goes up very steadily and never comes down - this doesn't seem good :(
A bit of a late answer; leaving it here in case somebody else hits the same problem in the future. The flaw here is trying to render the whole image, while the documentation clearly says:
In addition, you should avoid drawing the entire contents of the overlay each time this method is called. Instead, always take the mapRect parameter into consideration and avoid drawing content outside that rectangle.
So you have to draw only the part of the image that falls in the area defined by mapRect.
Updated: keep in mind that the draw rect here can be larger than mapRect, so you need to adjust the paint and cut regions accordingly.
// `image` here is the overlay's CGImage
let overlayMapRect = overlay.boundingMapRect
let overlayDrawRect = self.rect(for: overlayMapRect)
// watch out for draw rect adjustment here --
let drawRect = self.rect(for: mapRect).intersection(overlayDrawRect)
let scaleX = CGFloat(image.width) / overlayDrawRect.width
let scaleY = CGFloat(image.height) / overlayDrawRect.height
let transform = CGAffineTransform.init(scaleX: scaleX, y: scaleY)
let imageCut = drawRect.applying(transform)
// omitting optionals checks, you should not
let cutImage = image.cropping(to: imageCut)!
// the usual vertical flip issue with image.draw
context.translateBy(x: 0, y: drawRect.maxY + drawRect.origin.y)
context.scaleBy(x: 1, y: -1)
context.draw(cutImage, in: drawRect, byTiling: false)
Here is the objc version based on epolyakov's answer. It works great, but only without any rotation.
- (void) drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context
{
    UIImage *overlayUIImage = <your_uiimage>;
    CGImageRef overlayImage = overlayUIImage.CGImage;
    CGRect overlayRect = [self rectForMapRect:[self.overlay boundingMapRect]];
    CGRect drawRect = [self rectForMapRect:mapRect];
    CGRect rectPortion = CGRectIntersection(overlayRect, drawRect);
    CGFloat scaleX = overlayUIImage.size.width / overlayRect.size.width;
    CGFloat scaleY = overlayUIImage.size.height / overlayRect.size.height;
    CGAffineTransform transform = CGAffineTransformMakeScale(scaleX, scaleY);
    CGRect imagePortion = CGRectApplyAffineTransform(rectPortion, transform);
    CGImageRef cutImage = CGImageCreateWithImageInRect(overlayImage, imagePortion);
    CGRect finalRect = rectPortion;
    CGContextTranslateCTM(context, 0, finalRect.origin.y + CGRectGetMaxY(finalRect));
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextSetAlpha(context, self.alpha);
    CGContextDrawImage(context, finalRect, cutImage);
    CGImageRelease(cutImage); // release the cropped portion
}
If you also need to handle rotation of your image, I found a trick using a rotated version of the original image (this is because the map rendering always draws axis-aligned rects, and rotating the image inside this method would cut it).
So using a rotated version of the original image allows rendering with the vertical rects the map expects:
UIImage* rotatedImage = [self rotatedImage:<your_uiimage> withAngle:<angle_of_image>];
CGImageRef overlayImage = rotatedImage.CGImage;
And this is the method that produces a rotated image in a bounding rect:
- (UIImage*) rotatedImage:(UIImage*)image withAngle:(CGFloat)angle
{
    float radians = degreesToRadians(angle);
    CGAffineTransform xfrm = CGAffineTransformMakeRotation(radians);
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
    CGRect rotatedImageBoundingRect = CGRectApplyAffineTransform(imageRect, xfrm);

    UIGraphicsBeginImageContext(rotatedImageBoundingRect.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(ctx, rotatedImageBoundingRect.size.width/2., rotatedImageBoundingRect.size.height/2.);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    CGContextRotateCTM(ctx, radians);
    CGContextDrawImage(ctx, CGRectMake(-image.size.width / 2, -image.size.height / 2, image.size.width, image.size.height), image.CGImage);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

Save a UIImage from a UIImageView with CGAffineTransform

I have a UIImageView within a UIScrollView, on which I have enabled the user to perform any number of flip and rotation operations. I have this all working, allowing the user to zoom, pan, flip and rotate. Now I want to be able to save the final image out to a PNG.
However, it is doing my head in trying to work this out...
I have seen quite a few other posts similar to this but most only require applying a single transform such as a rotation eg Creating a UIImage from a rotated UIImageView
I would like to apply whatever transform the user has "created", which will be a series of flips and rotations concatenated together.
As the user is applying various rotations, flips etc, I store the concatenated transform using CGAffineTransformConcat. For example when they rotate I do:
CGAffineTransform newTransform = CGAffineTransformMakeRotation(angle);
self.theFullTransform = CGAffineTransformConcat(self.theFullTransform, newTransform);
self.fullPhotoImageView.transform = self.theFullTransform;
The following method is the best I have gotten so far for creating a UIImage with the full transform, but the image is always translated to the wrong place, i.e. the image is "offset". My guess is this is related to the wrong bounds being set in either CGAffineTransformTranslate or CGContextDrawImage.
Does anyone have any ideas? This seems a lot harder than I thought it should be...
- (UIImage *) translateImageFromImageView: (UIImageView *) imageView withTransform:(CGAffineTransform) aTransform
{
    UIImage *rotatedImage;

    // Get image width, height of the bounding rectangle
    CGRect boundingRect = CGRectApplyAffineTransform(imageView.bounds, aTransform);

    // Create a graphics context the size of the bounding rectangle
    UIGraphicsBeginImageContext(boundingRect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGAffineTransform transform = CGAffineTransformIdentity;
    //I think this translation is the problem?
    transform = CGAffineTransformTranslate(transform, boundingRect.size.width/2, boundingRect.size.height/2);
    transform = CGAffineTransformScale(transform, 1.0, -1.0);
    transform = CGAffineTransformConcat(transform, aTransform);
    CGContextConcatCTM(context, transform);

    // Draw the image into the context
    // or the boundingRect is incorrect here?
    CGContextDrawImage(context, boundingRect, imageView.image.CGImage);

    // Get an image from the context
    rotatedImage = [UIImage imageWithCGImage: CGBitmapContextCreateImage(context)];

    // Clean up
    UIGraphicsEndImageContext();
    return rotatedImage;
}
Is the offset predictable, like always half the image, or does it depend on aTransform?
struct CGAffineTransform {
    CGFloat a, b, c, d;
    CGFloat tx, ty;
};
If the latter, set tx and ty to zero in aTransform before using it.
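For example, a trivial sketch of that suggestion:
CGAffineTransform drawTransform = aTransform;
drawTransform.tx = 0.0;
drawTransform.ty = 0.0;
// use drawTransform in place of aTransform when building the drawing CTM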