How to render a rotated UILabel - iPhone

I'm currently developing a Photoshop-like app for iPad, and when I flatten my array of layers, drawTextInRect: doesn't preserve the transform (CGAffineTransformMakeRotation) of my UILabel. Does anybody have any idea?

Draw your text into another context and get a CGImage of it, then see if drawing the image in your rotated context will work (it should).
EDIT: I have not done this myself, but I believe it will work. The label probably should not be inserted into a view when you do this, but it might work that way anyway. You may have to experiment:
UIGraphicsBeginImageContextWithOptions( myLabel.bounds.size, NO, 0); // 0 == [UIScreen mainScreen].scale
CGContextRef context = UIGraphicsGetCurrentContext();
[myLabel.layer renderInContext:context];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
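To complete the first suggestion, here is a sketch (untested, and assuming the flatten pass is currently drawing into a UIGraphics image context) of drawing that snapshot back with the label's rotation applied:
CGContextRef flattenContext = UIGraphicsGetCurrentContext(); // the context you are flattening the layers into
CGContextSaveGState(flattenContext);
// Move to the label's position in the flattened image, then apply its rotation transform.
CGContextTranslateCTM(flattenContext, myLabel.frame.origin.x, myLabel.frame.origin.y);
CGContextConcatCTM(flattenContext, myLabel.transform);
// Draw the snapshot taken above; drawInRect: draws into the current UIKit context.
[image drawInRect:CGRectMake(0, 0, myLabel.bounds.size.width, myLabel.bounds.size.height)];
CGContextRestoreGState(flattenContext);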

Translate the context to the label's origin, and then apply the label's transform to the context.
The code below is in Swift.
let context = UIGraphicsGetCurrentContext()!
context.saveGState()
// Move to the label's origin in the destination context, then apply its transform.
context.translateBy(x: myLabel.frame.origin.x, y: myLabel.frame.origin.y)
context.concatenate(myLabel.transform)
myLabel.layer.render(in: context)
context.restoreGState()

Related

Connection between UIGraphicsGetImageFromCurrentImageContext and drawInRect

I have found the following code to resize a UIImage:
CGSize newSize = CGSizeMake(self.image.size.width*0.25, self.image.size.height*0.25);
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[self.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
self.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
but there are a couple of things I don't understand.
First, I'm trying to resize the original image to 25% of its original size - but this method resizes it to 50% of the original size instead. Why?
What is the connection between drawInRect and UIGraphicsGetImageFromCurrentImageContext? As I see it, UIGraphicsGetImageFromCurrentImageContext overwrites the current image, which would make the call to drawInRect redundant.
I would be grateful if someone could help me understand what's going on in details.
Thanks in advance.
First: because it's a Retina screen, a scale of 0.0 means "use the device's screen scale", so the bitmap comes out at twice the point size you asked for. Set the scale to 1.0 if you want exactly newSize pixels:
UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
Once you have called UIGraphicsBeginImageContextWithOptions, any painting you do ends up in the context you just created. The drawInRect: call in your code paints the image into that context, so it is not redundant; if you remove it, you get an empty image back.
Finally, UIGraphicsGetImageFromCurrentImageContext() just builds the merged UIImage from whatever you have drawn into the context. It does not overwrite anything by itself; if you don't assign the result back, nothing changes:
self.image = UIGraphicsGetImageFromCurrentImageContext();
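Putting those pieces together, a resize helper could look roughly like this (a sketch only; the method name resizedImage:scaledBy: is mine, not from the question):
- (UIImage *)resizedImage:(UIImage *)image scaledBy:(CGFloat)factor {
    CGSize newSize = CGSizeMake(image.size.width * factor, image.size.height * factor);
    // A scale of 1.0 gives a bitmap of exactly newSize pixels; 0.0 would multiply by the screen scale.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
    // Paint the source image into the new context; without this the context stays empty.
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    // Build a UIImage from what has been drawn into the context.
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}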

Getting a resized screenshot from a UIView

I'm trying to take a screenshot of a UIView shrunk down to thumbnail size with the following code:
UIGraphicsBeginImageContext(size);
[canvas.layer renderInContext:UIGraphicsGetCurrentContext()];
result = [UIGraphicsGetImageFromCurrentImageContext() retain];
UIGraphicsEndImageContext();
The above code simply grabs the top-left portion of the view at its original, unshrunk size instead.
I'm sure I've done this before, but I just can't get it working. Anyone know what's off here?
Supposing that you have a CGSize origSize which is the original size (e.g. 768x1024) and a CGSize size which is the required size, this can be done like so:
CGFloat scaleX = size.width / origSize.width;
CGFloat scaleY = size.height / origSize.height;
UIGraphicsBeginImageContextWithOptions(origSize, NO, scaleX > scaleY ? scaleY : scaleX);
[canvas.layer renderInContext:UIGraphicsGetCurrentContext()];
result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Note that we pass origSize to the context, not size; the scale parameter shrinks the rendered bitmap, so the resulting image comes out at (roughly) the requested size.
Update (roughly a year later): note that this technique interferes with (or is interfered with by) transforms on the UIView being snapshotted. If the above is not working and you're doing scale transforms on the view (or its layer), you may want to go with this solution instead: How to scale down a UIImage and make it crispy / sharp at the same time instead of blurry?
I find that this solution generates thumbnails that are the right size.
let thumbRect = CGRect(x: 0, y: 0, width: 512, height: 666)
UIGraphicsBeginImageContext(thumbRect.size)
let context = UIGraphicsGetCurrentContext()!
self.view.frame = thumbRect
self.view.layer.render(in: context)
let thumbImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
However, the resized image adopts the trait collection of the original view controller. So although the size is correct, some Auto Layout features still end up visible in the resulting image.

Proper use of MKOverlayView

I am writing an iPhone app in which I place a large PNG image (1936 × 2967) on an MKMapView using MKOverlayView. I am a little confused about how to appropriately implement the drawMapRect: function in MKOverlayView - should I manually segment my image before drawing it? Or should I let the mechanisms of MKOverlayView handle all that?
My impression from other posts is that before MKOverlayView was available, you were expected to segment images yourself for this kind of task, and use a CATiledLayer. I thought maybe MKOverlayView took care of all the dirty work.
The real reason I ask though is because when I run my app through Instruments using the allocations tool, I find that the number of live bytes my app is using steadily increases with the introduction of the custom image on the map. Right now I am NOT segmenting my image, but I also am seeing no record of memory leaks in the leaks tool in Instruments. Here is my drawMapRect: function:
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context{
// Load image from application bundle
NSString* imageFileName = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"map.png"];
CGDataProviderRef provider = CGDataProviderCreateWithFilename([imageFileName UTF8String]);
CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
MKMapRect overlayMapRect = [self.overlay boundingMapRect];
CGRect overlayRect = [self rectForMapRect:overlayMapRect];
// draw image
CGContextSaveGState(context);
CGContextDrawImage(context, overlayRect, image);
CGContextRestoreGState(context);
CGImageRelease(image);
}
If my drawMapRect: function is not the cause of these memory issues, does anybody know what it might be? I know through debugging that my viewForOverlay: function for the mapView only gets called once for each overlay, so it's not that memory is leaking there or something.
Any advice is welcome!
Thanks, -Matt
EDIT: so it turns out that the memory issue is actually being caused by MKMapView - every time I move the map at all the memory usage goes up very steadily and never comes down - this doesn't seem good :(
A bit of a late answer, leaving it here in case somebody else hits the same problem in the future. The flaw here is trying to render the whole image at once, while the documentation clearly says:
In addition, you should avoid drawing the entire contents of the overlay each time this method is called. Instead, always take the mapRect parameter into consideration and avoid drawing content outside that rectangle.
So you have to draw only the part of the image that falls inside the area defined by mapRect.
Updated: keep in mind that the draw rect here can be larger than mapRect, so you need to adjust the paint and cut regions accordingly.
let overlayMapRect = overlay.boundingMapRect
let overlayDrawRect = self.rect(for: overlayMapRect)
// watch out for draw rect adjustment here --
let drawRect = self.rect(for: mapRect).intersection(overlayDrawRect)
let scaleX = CGFloat(image.width) / overlayDrawRect.width
let scaleY = CGFloat(image.height) / overlayDrawRect.height
let transform = CGAffineTransform(scaleX: scaleX, y: scaleY)
let imageCut = drawRect.applying(transform)
// optionals checks are omitted here for brevity; you should not omit them
let cutImage = image.cropping(to: imageCut)
// the usual vertical flip needed when drawing a CGImage
context.translateBy(x: 0, y: drawRect.maxY + drawRect.origin.y)
context.scaleBy(x: 1, y: -1)
context.draw(cutImage!, in: drawRect, byTiling: false)
Here is the Objective-C version, based on epolyakov's answer. It works great, but only without any rotation.
- (void) drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context
{
CGImageRef overlayImage = <your_uiimage>.CGImage;
CGRect overlayRect = [self rectForMapRect:[self.overlay boundingMapRect]];
CGRect drawRect = [self rectForMapRect:mapRect];
CGRect rectPortion = CGRectIntersection(overlayRect, drawRect);
CGFloat scaleX = CGImageGetWidth(overlayImage) / overlayRect.size.width;
CGFloat scaleY = CGImageGetHeight(overlayImage) / overlayRect.size.height;
CGAffineTransform transform = CGAffineTransformMakeScale(scaleX, scaleY);
CGRect imagePortion = CGRectApplyAffineTransform(rectPortion, transform);
CGImageRef cutImage = CGImageCreateWithImageInRect(overlayImage, imagePortion);
CGRect finalRect = rectPortion;
CGContextTranslateCTM(context, 0, finalRect.origin.y + CGRectGetMaxY(finalRect));
CGContextScaleCTM(context, 1.0, -1.0);
CGContextSetAlpha(context, self.alpha);
CGContextDrawImage(context, finalRect, cutImage);
}
If you also need to handle rotation of your image, I found a trick using a rotated version of the original image (this is because the map rendering always draws axis-aligned rects, and rotating the image inside this method would clip it).
Using a rotated version of the original image lets you render with the axis-aligned rects the map expects:
UIImage* rotatedImage = [self rotatedImage:<your_uiimage> withAngle:<angle_of_image>];
CGImageRef overlayImage = rotatedImage.CGImage;
And this is the method that produces a rotated image inside its bounding rect:
- (UIImage*) rotatedImage:(UIImage*)image withAngle:(CGFloat)angle
{
float radians = degreesToRadians(angle);
CGAffineTransform xfrm = CGAffineTransformMakeRotation(radians);
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
CGRect rotatedImageBoundingRect = CGRectApplyAffineTransform (imageRect, xfrm);
UIGraphicsBeginImageContext(rotatedImageBoundingRect.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM (ctx, rotatedImageBoundingRect.size.width/2., rotatedImageBoundingRect.size.height/2.);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGContextRotateCTM (ctx, radians);
CGContextDrawImage (ctx, CGRectMake(-image.size.width / 2, -image.size.height / 2, image.size.width, image.size.height), image.CGImage);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}

Texture from UIColor?

I am drawing a pie chart, and each slice has a different color. I need to give the slices a textured look, not just a plain color. Any ideas how to do this? I don't want to use a separate texture image for every possible color, so I need to generate the texture somehow. Thank you!
PS: this is an iPhone project (I can't use Core Image).
Use colorWithPatternImage with UIColor.
Edit: sorry, I should have read the question properly.
You will need to use a UIGraphics image context to create an image you can pass to colorWithPatternImage. I would suggest using a grayscale image that you can load in, tint with a method similar to this, and then use as a pattern in UIColor.
So you would have a method along the lines of this:
- (UIColor *)texturedPatternWithTint:(UIColor *)tint {
UIImage *texture = [UIImage imageNamed:@"texture.png"];
CGRect wholeImage = CGRectMake(0, 0, texture.size.width, texture.size.height);
UIGraphicsBeginImageContextWithOptions(texture.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawImage(context, wholeImage, texture.CGImage);
CGContextSetBlendMode(context, kCGBlendModeMultiply);
CGContextSetFillColorWithColor(context, tint.CGColor);
CGContextFillRect(context, wholeImage);
UIImage *tintedTexture = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return [UIColor colorWithPatternImage:tintedTexture];
}
(not tested)
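Usage would then look something like this in the pie chart's drawing code (a sketch; slicePath stands in for the UIBezierPath of one wedge):
// Fill one wedge of the pie with the tinted, textured pattern.
UIColor *sliceColor = [self texturedPatternWithTint:[UIColor redColor]];
[sliceColor setFill];
[slicePath fill];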

Any quick and dirty anti-aliasing techniques for a rotated UIImageView?

I've got a UIImageView (full frame and rectangular) that I'm rotating with a CGAffineTransform. The UIImage of the UIImageView fills the entire frame. When the image is rotated and drawn, the edges appear noticeably jagged. Is there anything I can do to make it look better? It's clearly not being anti-aliased with the background.
The edges of CoreAnimation layers aren't antialiased by default on iOS. However, there is a key that you can set in Info.plist that enables antialiasing of the edges: UIViewEdgeAntialiasing.
https://developer.apple.com/library/content/documentation/General/Reference/InfoPlistKeyReference/Articles/iPhoneOSKeys.html
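In the Info.plist source, that key is just a boolean entry:
<key>UIViewEdgeAntialiasing</key>
<true/>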
If you don't want the performance overhead of enabling this option, a work-around is to add a 1px transparent border around the edge of the image. This means that the 'edges' of the image are no longer on the edge, so don't need special treatment!
New API – iOS 6/7
Also works for iOS 6, as noted by @Chris, but wasn't made public until iOS 7.
Since iOS 7, CALayer has a new property allowsEdgeAntialiasing which does exactly what you want in this case, without incurring the overhead of enabling it for all views in your application! This is a property of CALayer, so to enable this for a UIView you use myView.layer.allowsEdgeAntialiasing = YES.
Just add a 1px transparent border to your image:
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
UIGraphicsBeginImageContextWithOptions(imageRect.size, NO, 0.0);
[image drawInRect:CGRectMake(1,1,image.size.width-2,image.size.height-2)];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Remember to set the appropriate anti-alias options:
CGContextSetAllowsAntialiasing(theContext, true);
CGContextSetShouldAntialias(theContext, true);
just add "Renders with edge antialiasing" with YES in plist and it will work.
I would totally recommend the following library.
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
It contains lots of useful extensions to UIImage that solve this problem and also include code for generating thumbnails etc.
Enjoy!
The best way I've found to have smooth edges and a sharp image is to do this:
CGRect imageRect = CGRectMake(0, 0, self.photo.image.size.width, self.photo.image.size.height);
UIGraphicsBeginImageContextWithOptions(imageRect.size, NO, 0.0);
[self.photo.image drawInRect:CGRectMake(1, 1, self.photo.image.size.width - 2, self.photo.image.size.height - 2)];
self.photo.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Adding the Info.plist key like some people describe has a big hit on performance and if you use that then you're basically applying it to everything instead of just the one place you need it.
Also, don't just use UIGraphicsBeginImageContext(imageRect.size); otherwise the layer will be blurry. You have to use UIGraphicsBeginImageContextWithOptions like I've shown.
I found this solution here, and it's perfect:
+ (UIImage *)renderImageFromView:(UIView *)view withRect:(CGRect)frame transparentInsets:(UIEdgeInsets)insets {
CGSize imageSizeWithBorder = CGSizeMake(frame.size.width + insets.left + insets.right, frame.size.height + insets.top + insets.bottom);
// Create a new context of the desired size to render the image
UIGraphicsBeginImageContextWithOptions(imageSizeWithBorder, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Clip the context to the portion of the view we will draw
CGContextClipToRect(context, (CGRect){{insets.left, insets.top}, frame.size});
// Translate it, to the desired position
CGContextTranslateCTM(context, -frame.origin.x + insets.left, -frame.origin.y + insets.top);
// Render the view as image
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
// Fetch the image
UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
// Cleanup
UIGraphicsEndImageContext();
return renderedImage;
}
usage:
UIImage *image = [UIImage renderImageFromView:view withRect:view.bounds transparentInsets:UIEdgeInsetsZero];