Save a UIImage from a UIImageView with CGAffineTransform - iPhone

I have a UIImageView within a UIScrollView which I have enabled the user to perform any number of flip and rotation operations on. I have this all working which allows the user to zoom, pan, flip and rotate. Now I want to be able to save the final image out to a png.
However, it is doing my head in trying to work this out...
I have seen quite a few other posts similar to this, but most only require applying a single transform such as a rotation, e.g. Creating a UIImage from a rotated UIImageView.
I would like to apply whatever transform the user has "created", which will be a series of flips and rotations concatenated together.
As the user applies various rotations, flips, etc., I store the concatenated transform using CGAffineTransformConcat. For example, when they rotate I do:
CGAffineTransform newTransform = CGAffineTransformMakeRotation(angle);
self.theFullTransform = CGAffineTransformConcat(self.theFullTransform, newTransform);
self.fullPhotoImageView.transform = self.theFullTransform;
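A horizontal flip can be accumulated the same way; here's a minimal sketch using the same properties (the flip itself is just a negative x scale):
CGAffineTransform flipTransform = CGAffineTransformMakeScale(-1.0, 1.0);
self.theFullTransform = CGAffineTransformConcat(self.theFullTransform, flipTransform);
self.fullPhotoImageView.transform = self.theFullTransform;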
The following method is the best I have gotten so far for creating a UIImage with the full transform, however the image is always translated to the wrong place, i.e. the image is "offset". My guess is that this relates either to the wrong bounds being passed to CGAffineTransformTranslate or to CGContextDrawImage.
Does anyone have any ideas? This seems a lot harder than I thought it should be...
- (UIImage *) translateImageFromImageView: (UIImageView *) imageView withTransform:(CGAffineTransform) aTransform
{
    UIImage *rotatedImage;
    // Get image width, height of the bounding rectangle
    CGRect boundingRect = CGRectApplyAffineTransform(imageView.bounds, aTransform);
    // Create a graphics context the size of the bounding rectangle
    UIGraphicsBeginImageContext(boundingRect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGAffineTransform transform = CGAffineTransformIdentity;
    // I think this translation is the problem?
    transform = CGAffineTransformTranslate(transform, boundingRect.size.width/2, boundingRect.size.height/2);
    transform = CGAffineTransformScale(transform, 1.0, -1.0);
    transform = CGAffineTransformConcat(transform, aTransform);
    CGContextConcatCTM(context, transform);
    // Draw the image into the context
    // or the boundingRect is incorrect here?
    CGContextDrawImage(context, boundingRect, imageView.image.CGImage);
    // Get an image from the context
    rotatedImage = [UIImage imageWithCGImage: CGBitmapContextCreateImage(context)];
    // Clean up
    UIGraphicsEndImageContext();
    return rotatedImage;
}

Is the offset predictable, like always half the image, or does it depend on aTransform?
struct CGAffineTransform {
    CGFloat a, b, c, d;
    CGFloat tx, ty;
};
If the latter, set tx and ty to zero in aTransform before using it.
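For example, you could strip the translation components before using the transform (a minimal sketch; CGAffineTransform is a plain struct, so the fields are directly assignable):
CGAffineTransform cleaned = aTransform;
cleaned.tx = 0.0; // zero out the translation
cleaned.ty = 0.0;
// then build the drawing transform from cleaned instead of aTransform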

Related

How do I rotate a UIImageView by 90 degrees inside a UIScrollView with correct image size and scrolling?

I have an image inside an UIImageView which is within a UIScrollView. What I want to do is rotate this image 90 degrees so that it is in landscape by default, and set the initial zoom of the image so that the entire image fits into the scrollview and then allow it to be zoomed up to 100% and back down to minimum zoom again.
This is what I have so far:
self.imageView.transform = CGAffineTransformMakeRotation(-M_PI/2);
float minimumScale = scrollView.frame.size.width / self.imageView.frame.size.width;
scrollView.minimumZoomScale = minimumScale;
scrollView.zoomScale = minimumScale;
scrollView.contentSize = CGSizeMake(self.imageView.frame.size.height,self.imageView.frame.size.width);
The problem is that if I set the transform, nothing shows up in the scrollview. However, if I comment out the transform, everything works except that the image is not in the landscape orientation I want!
If I apply the transform and remove the code that sets the minimumZoomScale and zoomScale properties, the image shows up in the correct orientation, but with the wrong zoomScale, and the contentSize property doesn't seem to be set correctly either: the view doesn't scroll all the way to the edge of the image horizontally, but scrolls well past the edge vertically.
NB: image is being loaded from a URL
Maybe rotating the image itself fits your needs:
UIImage* rotateUIImage(const UIImage* src, float angleDegrees) {
    // Use a throwaway view to compute the bounding box of the rotated image
    UIView* rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, src.size.width, src.size.height)];
    float angleRadians = angleDegrees * ((float)M_PI / 180.0f);
    CGAffineTransform t = CGAffineTransformMakeRotation(angleRadians);
    rotatedViewBox.transform = t;
    CGSize rotatedSize = rotatedViewBox.frame.size;
    [rotatedViewBox release];

    UIGraphicsBeginImageContext(rotatedSize);
    CGContextRef bitmap = UIGraphicsGetCurrentContext();
    // Move the origin to the center of the new canvas, rotate, and flip
    CGContextTranslateCTM(bitmap, rotatedSize.width/2, rotatedSize.height/2);
    CGContextRotateCTM(bitmap, angleRadians);
    CGContextScaleCTM(bitmap, 1.0, -1.0);
    // Draw the source image centered on the transformed origin
    CGContextDrawImage(bitmap, CGRectMake(-src.size.width / 2, -src.size.height / 2, src.size.width, src.size.height), [src CGImage]);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
I believe the easiest way (and thread safe too) is to do:
// Assume that the image is loaded in landscape mode from disk
UIImage *landscapeImage = [UIImage imageNamed:imgname];
UIImage *portraitImage = [[UIImage alloc] initWithCGImage:landscapeImage.CGImage
                                                    scale:1.0
                                              orientation:UIImageOrientationLeft];
Any calculations that you do based on the imageView's frame should probably be done before you apply any transformations to it. But I would actually suggest doing those calculations based on the size of the UIImage, not the UIImageView. Then set both the UIImageView's frame and the UIScrollView's contentSize based on that.
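A rough sketch of that ordering, using the names from the question (the scale math assumes the -90 degree rotation, so the displayed width corresponds to the image's height):
// Size from the UIImage, not from the (possibly transformed) view frame
CGSize imageSize = self.imageView.image.size;
float minimumScale = scrollView.frame.size.width / imageSize.height;
self.imageView.transform = CGAffineTransformMakeRotation(-M_PI/2);
scrollView.minimumZoomScale = minimumScale;
scrollView.zoomScale = minimumScale;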
Max's suggestion is a good one, although with a larger image it could be a performance killer. Are you displaying this image from your app's resources? If so, why not just rotate the images before you even build the app?
There's a much easier solution that is also faster; just do this:
- (void)imageRotateTapped:(id)sender
{
    [UIView animateWithDuration:0.33f animations:^()
    {
        // RADIANS() is assumed to be a degrees-to-radians macro,
        // e.g. #define RADIANS(deg) ((deg) * M_PI / 180.0)
        self.imageView.transform = CGAffineTransformMakeRotation(RADIANS(self.rotateDegrees += 90.0f));
        self.imageView.frame = self.imageView.superview.bounds; // change this to whatever rect you want
    }];
}
When the user is done, you will need to actually create a new rotated image, but that is very easy to do.
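One way to do that last step is to render the container view's layer into an image context; renderInContext: should apply the sublayer's affine transform for you (a sketch, assuming the image view sits in a container that frames the result you want):
UIView *container = self.imageView.superview;
UIGraphicsBeginImageContextWithOptions(container.bounds.size, NO, 0.0);
[container.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *rotatedResult = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();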
I was using the accepted answer for a while until we noticed that non-square rotations based on images taken directly from the camera seemed stretched (they were rotated as desired, just the frame width/height wasn't adjusted).
Great explanation/post here from Trevor: http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
In the end, it was a very simple matter of importing Trevor's code, which uses a category to add a resizedImage:interpolationQuality: method to UIImage. So yeah, user beware: if the accepted answer still works for you, great, but if it doesn't, I'd take a look at that library instead.
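For reference, usage of that category method looks roughly like this (photo and the target size are placeholders; UIImage+Resize.h is the header from Trevor's code):
#import "UIImage+Resize.h"
UIImage *fixed = [photo resizedImage:CGSizeMake(640.0, 480.0)
                interpolationQuality:kCGInterpolationHigh];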

Proper use of MKOverlayView

I am writing an iPhone app in which I place a large PNG image (1936 × 2967) on an MKMapView using MKOverlayView. I am a little confused about how to appropriately implement the drawMapRect: function in MKOverlayView - should I manually segment my image before drawing it? Or should I let the mechanisms of MKOverlayView handle all that?
My impression from other posts is that before MKOverlayView was available, you were expected to segment images yourself for this kind of task, and use a CATiledLayer. I thought maybe MKOverlayView took care of all the dirty work.
The real reason I ask though is because when I run my app through Instruments using the allocations tool, I find that the number of live bytes my app is using steadily increases with the introduction of the custom image on the map. Right now I am NOT segmenting my image, but I also am seeing no record of memory leaks in the leaks tool in Instruments. Here is my drawMapRect: function:
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context
{
    // Load image from application bundle
    NSString *imageFileName = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"map.png"];
    CGDataProviderRef provider = CGDataProviderCreateWithFilename([imageFileName UTF8String]);
    CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    MKMapRect overlayMapRect = [self.overlay boundingMapRect];
    CGRect overlayRect = [self rectForMapRect:overlayMapRect];
    // Draw image
    CGContextSaveGState(context);
    CGContextDrawImage(context, overlayRect, image);
    CGContextRestoreGState(context);
    CGImageRelease(image);
}
If my drawMapRect: function is not the cause of these memory issues, does anybody know what it might be? I know through debugging that my viewForOverlay: function for the mapView only gets called once for each overlay, so it's not that memory is leaking there or something.
Any advice is welcome!
Thanks, -Matt
EDIT: It turns out that the memory issue is actually being caused by MKMapView - every time I move the map at all, the memory usage goes up steadily and never comes down - this doesn't seem good :(
A bit of a late answer, leaving it here in case somebody else hits the same problem in the future. The flaw here is trying to render the whole image, while the documentation clearly says:
In addition, you should avoid drawing the entire contents of the overlay each time this method is called. Instead, always take the mapRect parameter into consideration and avoid drawing content outside that rectangle.
So you have to draw only the part of the image that falls within the area defined by mapRect.
Updated: keep in mind that drawRect here can be larger than mapRect, so you need to adjust the paint and cut regions accordingly.
let overlayMapRect = overlay.boundingMapRect
let overlayDrawRect = self.rect(for: overlayMapRect)
// watch out for draw rect adjustment here --
let drawRect = self.rect(for: mapRect).intersection(overlayDrawRect)
let scaleX = CGFloat(image.width) / overlayDrawRect.width
let scaleY = CGFloat(image.height) / overlayDrawRect.height
let transform = CGAffineTransform(scaleX: scaleX, y: scaleY)
let imageCut = drawRect.applying(transform)
// omitting optionals checks, you should not
let cutImage = image.cropping(to: imageCut)!
// the usual vertical flip issue with image.draw
context.translateBy(x: 0, y: drawRect.maxY + drawRect.origin.y)
context.scaleBy(x: 1, y: -1)
context.draw(cutImage, in: drawRect, byTiling: false)
Here is the objc version based on epolyakov's answer. It works great, but only without any rotation.
- (void) drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context
{
    CGImageRef overlayImage = <your_uiimage>.CGImage;
    CGRect overlayRect = [self rectForMapRect:[self.overlay boundingMapRect]];
    CGRect drawRect = [self rectForMapRect:mapRect];
    CGRect rectPortion = CGRectIntersection(overlayRect, drawRect);
    // Map the visible portion into the image's pixel space
    CGFloat scaleX = CGImageGetWidth(overlayImage) / overlayRect.size.width;
    CGFloat scaleY = CGImageGetHeight(overlayImage) / overlayRect.size.height;
    CGAffineTransform transform = CGAffineTransformMakeScale(scaleX, scaleY);
    CGRect imagePortion = CGRectApplyAffineTransform(rectPortion, transform);
    CGImageRef cutImage = CGImageCreateWithImageInRect(overlayImage, imagePortion);
    CGRect finalRect = rectPortion;
    // Flip vertically around the drawn rect
    CGContextTranslateCTM(context, 0, finalRect.origin.y + CGRectGetMaxY(finalRect));
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextSetAlpha(context, self.alpha);
    CGContextDrawImage(context, finalRect, cutImage);
    CGImageRelease(cutImage);
}
If you also need to manage rotation of your image, I found a trick using a rotated version of the original image (this is because the map rendering always draws vertical rects, and rotating the image inside this method would cut it).
Using a rotated version of the original image allows rendering with the vertical rects the map expects:
UIImage* rotatedImage = [self rotatedImage:<your_uiimage> withAngle:<angle_of_image>];
CGImageRef overlayImage = rotatedImage.CGImage;
And this is the method that produces a rotated image in its bounding rect:
- (UIImage*) rotatedImage:(UIImage*)image withAngle:(CGFloat)angle
{
    // degreesToRadians() is assumed to be a degrees-to-radians helper/macro
    float radians = degreesToRadians(angle);
    CGAffineTransform xfrm = CGAffineTransformMakeRotation(radians);
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
    CGRect rotatedImageBoundingRect = CGRectApplyAffineTransform(imageRect, xfrm);

    UIGraphicsBeginImageContext(rotatedImageBoundingRect.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Center, flip, and rotate, then draw the image centered on the origin
    CGContextTranslateCTM(ctx, rotatedImageBoundingRect.size.width/2., rotatedImageBoundingRect.size.height/2.);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    CGContextRotateCTM(ctx, radians);
    CGContextDrawImage(ctx, CGRectMake(-image.size.width / 2, -image.size.height / 2, image.size.width, image.size.height), image.CGImage);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

How to compensate the flipped coordinate system of core graphics for easy drawing?

It's really a pain, but whenever I draw a UIImage in -drawRect:, it's upside-down.
When I flip the coordinates, the image draws correctly, but at the cost of all other CG functions drawing "wrong" (flipped).
What's your strategy when you have to draw images and other things? Is there any rule of thumb how to not get stuck in this problem over and over again?
Also, one nasty thing when I flip the y-axis is that the CGRect from the UIImageView frame is wrong: instead of the origin appearing at (10, 10) upper left as expected, it appears at the bottom.
But at the same time, all the normal line-drawing functions of CGContext take the coordinates I expect. Drawing a line in -drawRect: with origin (10, 10) upper left really starts at the upper left. That's strange, because Core Graphics supposedly has a flipped coordinate system with y = 0 at the bottom.
So it seems like something is really inconsistent there. Drawing with CGContext functions takes coordinates as "expected" (come on, nobody thinks in coordinates starting from the bottom left; that's silly), while drawing any kind of image still works the "wrong" way.
Do you use helper methods to draw images? Or is there anything useful that makes image drawing not a pain in the butt?
Problem: Origin is at lower-left corner; positive y goes upward (negative y goes downward).
Goal: Origin at upper-left corner; positive y going downward (negative y going upward).
Solution:
Move origin up by the view's height.
Negate (multiply by -1) the y axis.
The way to do this in code is to translate up by the view bounds' height and scale by (1, -1), in that order.
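In code, that comes down to two CTM calls at the top of -drawRect: (a minimal sketch):
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Move the origin up by the view's height, then negate the y axis
CGContextTranslateCTM(ctx, 0.0, self.bounds.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
// From here on, the origin is at the upper left and y grows downward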
There are a couple of portions of the Quartz 2D Programming Guide that are relevant to this topic, including “Drawing to a Graphics Context on iPhone OS” and the whole chapter on Transforms. Of course, you really should read the whole thing.
You can do that by applying an affine transform to the point you want to convert into UIKit coordinates. The following is an example.
// Create an affine transform object
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
// First translate by the image view's height
transform = CGAffineTransformTranslate(transform, 0, -imageView.bounds.size.height);
// Then, whenever you want a point or rect in UIKit coordinates, apply this transformation to it.
// To get a transformed point:
CGPoint newPointForUIKit = CGPointApplyAffineTransform(oldPointInCGKit, transform);
// To get a transformed rect:
CGRect newRectForUIKit = CGRectApplyAffineTransform(oldRectInCGKit, transform);
The better answer to this problem is to use the UIImage method drawInRect: to draw your image. I'm assuming you want the image to span the entire bounds of your view. This is what you'd type in your drawRect: method.
Instead of:
CGContextRef ctx = UIGraphicsGetCurrentContext();
UIImage *myImage = [UIImage imageNamed:@"theImage.png"];
CGImageRef img = [myImage CGImage];
CGRect bounds = [self bounds];
CGContextDrawImage(ctx, bounds, img);
Write this:
UIImage *myImage = [UIImage imageNamed:@"theImage.png"];
CGRect bounds = [self bounds];
[myImage drawInRect:bounds];
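If you only need to position the image rather than fill the bounds, drawAtPoint: respects UIKit coordinates in the same way:
[myImage drawAtPoint:CGPointMake(10.0, 10.0)]; // origin at the upper left, as expected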
It's really a pain, but always when I draw an UIImage in -drawRect:, it's upside-down.
Are you telling the UIImage to draw, or getting its CGImage and drawing that?
As noted in “Drawing to a Graphics Context on iPhone OS”, UIImages are aware of the difference in co-ordinate spaces and should draw themselves correctly without you having to flip your co-ordinate space yourself.
CGImageRef flip (CGImageRef im) {
    CGSize sz = CGSizeMake(CGImageGetWidth(im), CGImageGetHeight(im));
    UIGraphicsBeginImageContextWithOptions(sz, NO, 0);
    // Drawing a CGImage through a UIKit image context flips it vertically
    CGContextDrawImage(UIGraphicsGetCurrentContext(),
                       CGRectMake(0, 0, sz.width, sz.height), im);
    // Note: the returned CGImageRef is owned by an autoreleased UIImage;
    // retain it if it needs to outlive the current autorelease pool
    CGImageRef result = [UIGraphicsGetImageFromCurrentImageContext() CGImage];
    UIGraphicsEndImageContext();
    return result;
}
Call the above method using the code below:
This code gets the left half of an image from an existing UIImageView and sets the generated image on a new image view, imgViewLeft:
// sz and leftReference are assumed from the surrounding code:
// sz is the source image size, leftReference a CGImageRef of its left half
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con,
                   CGRectMake(0, 0, sz.width/2.0, sz.height),
                   flip(leftReference));
imgViewLeft = [[UIImageView alloc] initWithImage:UIGraphicsGetImageFromCurrentImageContext()];

Image drawing on PDF fails with transformations and produces wrong output

I have a problem with drawing images on a PDF. I apply scaling, rotation, translation, etc. to an image and draw that image on the PDF. But the drawing does not come out correctly: when I rotate the image, it just scales and doesn't rotate.
Here, I explain in more detail:
I place an image on a UIWebView to fake the effect of the image sitting exactly on the PDF. Then, while generating the PDF, I draw the image that is on the UIWebView into it.
But when I prepare the PDF with the modified image, which has had lots of transformations applied, the image scales but does not rotate.
Here, img_copyright is a custom class inheriting from UIImageView.
// We're getting the X,Y of the image here to decide where to put it on the PDF.
CGFloat orgY = (newY - whiteSpace) + img_copyright.frame.size.height;
CGFloat modY = pageRect.size.height - orgY;
CGFloat orgX = img_copyright.frame.origin.x - PDF_PAGE_PADDING_WIDTH;
CGFloat modX = orgX;
// Preparing the rectangle in which the image is to be drawn.
CGRect drawRect = CGRectMake(modX, modY, img_copyright.frame.size.width, img_copyright.frame.size.height);
// Actually drawing the image.
CGContextDrawImage(pdfContext, drawRect, [img_copyright.image CGImage]);
[img_copyright release];
But when I view the PDF, the image is not drawn into it properly.
What would you suggest? Is the problem due to drawing the image based on its X,Y?
How could we decide where to put the image on the PDF if we don't depend on X,Y?
What is the correct way to draw an image into a PDF with rotation and scale?
The image when I insert it onto the UIWebView's scroll view: http://www.freeimagehosting.net/
The image when I draw it onto the PDF: http://www.freeimagehosting.net/
// Wrapped in a method for context; the signature is assumed, not from the original post.
- (UIImage *)rotatedImage:(UIImage *)Image byDegrees:(CGFloat)Angle
{
    CGFloat angleInRadians = -1 * Angle * (M_PI / 180.0);
    CGAffineTransform transform = CGAffineTransformMakeRotation(angleInRadians);
    // Bounding rect of the rotated image
    CGRect rotatedRect = CGRectApplyAffineTransform(CGRectMake(0, 0, Image.size.width, Image.size.height), transform);

    UIGraphicsBeginImageContext(rotatedRect.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Flip the context vertically
    CGContextTranslateCTM(ctx, 0, rotatedRect.size.height);
    CGContextScaleCTM(ctx, 1, -1);
    // Shift so the rotated bounding rect starts at the origin, then rotate
    CGContextTranslateCTM(ctx, -rotatedRect.origin.x, -rotatedRect.origin.y);
    CGContextRotateCTM(ctx, angleInRadians);
    CGImageRef temp = [Image CGImage];
    CGContextDrawImage(ctx, CGRectMake(0, 0, Image.size.width, Image.size.height), temp);
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return viewImage;
}
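Assuming the snippet above is wrapped in a method like the -rotatedImage:byDegrees: shown (that signature is an assumption), feeding the rotated result into the PDF drawing from the question might look like:
UIImage *rotated = [self rotatedImage:img_copyright.image byDegrees:45.0]; // 45 is a placeholder angle
CGContextDrawImage(pdfContext, drawRect, rotated.CGImage);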

How to sharpen/blur a UIImage on the iPhone?

I have a view with a UIImageView and a UIImage set on it. How do I sharpen or blur the image using Core Graphics?
Apple has a great sample program called GLImageProcessing that includes a very fast blur/sharpen effect using OpenGL ES 1.1 (meaning it works on all iPhones, not just the 3GS).
If you're not fairly experienced with OpenGL, the code may make your head hurt.
Going down the OpenGL route felt like insane overkill for my needs (blurring a touched point on an image). Instead I implemented a simple blurring process that takes a touch point, creates a rect containing that touch point, samples the image at that point, and then redraws the sampled image upside down on top of the source rect several times, slightly offset and with slightly different opacity each time. This produces a pretty nice poor man's blur effect without an insane amount of code and complexity. Code follows:
- (UIImage*)imageWithBlurAroundPoint:(CGPoint)point {
    CGRect bnds = CGRectZero;
    UIImage *copy = nil;
    CGContextRef ctxt = nil;
    CGImageRef imag = self.CGImage;
    CGRect rect = CGRectZero;
    CGAffineTransform tran = CGAffineTransformIdentity;
    int indx = 0;

    rect.size.width = CGImageGetWidth(imag);
    rect.size.height = CGImageGetHeight(imag);
    bnds = rect;

    UIGraphicsBeginImageContext(bnds.size);
    ctxt = UIGraphicsGetCurrentContext();

    // Cut a sample out of the image
    CGRect fillRect = CGRectMake(point.x - 10, point.y - 10, 20, 20);
    CGImageRef sampleImageRef = CGImageCreateWithImageInRect(self.CGImage, fillRect);

    // Flip the image right side up & draw
    CGContextSaveGState(ctxt);
    CGContextScaleCTM(ctxt, 1.0, -1.0);
    CGContextTranslateCTM(ctxt, 0.0, -rect.size.height);
    CGContextConcatCTM(ctxt, tran);
    CGContextDrawImage(ctxt, rect, imag);

    // Restore the context so that the coordinate system is restored
    CGContextRestoreGState(ctxt);

    // Redraw the sample image over the source rect several times,
    // shifting the opacity and the positioning slightly
    // to produce a blurred effect
    for (indx = 0; indx < 5; indx++) {
        CGRect myRect = CGRectOffset(fillRect, 0.5 * indx, 0.5 * indx);
        CGContextSetAlpha(ctxt, 0.2 * indx);
        CGContextScaleCTM(ctxt, 1.0, -1.0);
        CGContextDrawImage(ctxt, myRect, sampleImageRef);
    }

    CGImageRelease(sampleImageRef); // balance CGImageCreateWithImageInRect
    copy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return copy;
}
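Assuming the method above is declared in a UIImage category (it uses self.CGImage), usage is just:
UIImage *blurred = [photo imageWithBlurAroundPoint:touchPoint]; // photo and touchPoint are placeholders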
What you really need are the image filters in the Core Image API. Unfortunately, Core Image is not supported on the iPhone (unless that changed recently and I missed it). Be careful here: as far as I recall, the filters are available in the simulator but not on the device.
AFAIK there is no other way to do it properly with the native libraries, although I've sort of faked a blur before by creating an extra layer over the top which is a copy of what's below, offset by a pixel or two and with a low alpha value (see the sketch below). For a proper blur effect, though, the only way I've been able to do it is offline in Photoshop or similar.
Would be keen to hear if there is a better way too, but to my knowledge that is the situation currently.
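For what it's worth, the layered fake described above might look something like this (view names are hypothetical):
UIImageView *ghost = [[UIImageView alloc] initWithImage:baseImageView.image];
ghost.frame = CGRectOffset(baseImageView.frame, 1.0, 1.0); // offset by a pixel
ghost.alpha = 0.3; // low alpha so it reads as a smear
[baseImageView.superview addSubview:ghost];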
Have a look at the following libraries:
https://github.com/coryleach/UIImageAdjust
https://github.com/esilverberg/ios-image-filters
https://github.com/cmkilger/CKImageAdditions