I'm trying to create an image for a custom style UIButton using an image from the camera roll on iPhone. The button has a circular background and effectively appears as a circle. Now I need an image to go in the middle of the button that also appears round.
How do I cut a square UIImage to appear round with transparency outside of the round area?
If masking is involved, do I need to pre-render a mask, or can I create one programmatically (e.g. a circle)?
Thank you!
I have never done anything like that, but try using the QuartzCore framework and the layer's cornerRadius property. Example:
#import <QuartzCore/QuartzCore.h>
//some other code ...
UIImageView *imgView = [[UIImageView alloc]initWithFrame:CGRectMake(0, 0, 100, 100)];
imgView.layer.cornerRadius = 10.0f;
Play around with it a bit and you will get what you want.
Hope it helps
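A small follow-up sketch (my own addition, assuming a square image view and a placeholder asset named "avatar"): for the image to actually be clipped you also need masksToBounds, and for a full circle the radius should be half the side length.

import UIKit

let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
imageView.image = UIImage(named: "avatar")                  // placeholder asset name
imageView.layer.cornerRadius = imageView.bounds.width / 2   // half the side gives a circle
imageView.layer.masksToBounds = true                        // without this the image is not clipped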
Yes, you can use Core Graphics to draw the mask dynamically and then create the masked image.
Example for masking:
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage
{
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);

    CGImageRef maskedImageRef = CGImageCreateWithMask([image CGImage], mask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedImageRef];

    CGImageRelease(maskedImageRef);
    CGImageRelease(mask);

    return maskedImage;
}
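To answer the "can I create one programmatically?" part of the question: you can skip a pre-rendered mask entirely by clipping to a circular path while redrawing the image. A minimal Swift sketch (my own, not part of this answer; the function name is arbitrary):

import UIKit

// Produce a round image with transparency outside the circle, no mask file needed.
func circularImage(from image: UIImage, diameter: CGFloat) -> UIImage {
    let size = CGSize(width: diameter, height: diameter)
    let renderer = UIGraphicsImageRenderer(size: size)   // non-opaque by default, so corners stay transparent
    return renderer.image { _ in
        // Everything drawn after addClip() is confined to the circle.
        UIBezierPath(ovalIn: CGRect(origin: .zero, size: size)).addClip()
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}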
I started looking into this a couple of weeks back. I tried all the suggestions here, none of which worked well. In the great tradition of RTFM I went and read Apple's documentation on Quartz 2D Programming and came up with this. Please try it out and let me know how you go.
The code could fairly easily be altered to crop to an ellipse, or any other shape defined by a path (see the sketch after the code below).
Make sure you include Quartz 2D in your project.
#include <math.h>
- (UIImage *)circularScaleNCrop:(UIImage *)image :(CGRect)rect
{
    // This function returns a newImage, based on image, that has been:
    // - scaled to fit in (CGRect) rect
    // - and cropped within a circle of radius: rectWidth/2

    // Create the bitmap graphics context
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(rect.size.width, rect.size.height), NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Get the widths and heights
    CGFloat imageWidth = image.size.width;
    CGFloat imageHeight = image.size.height;
    CGFloat rectWidth = rect.size.width;
    CGFloat rectHeight = rect.size.height;

    // Calculate the scale factor
    CGFloat scaleFactorX = rectWidth/imageWidth;
    CGFloat scaleFactorY = rectHeight/imageHeight;

    // Calculate the centre of the circle
    CGFloat imageCentreX = rectWidth/2;
    CGFloat imageCentreY = rectHeight/2;

    // Create and CLIP to a CIRCULAR Path
    // (This could be replaced with any closed path if you want a different shaped clip)
    CGFloat radius = rectWidth/2;
    CGContextBeginPath(context);
    CGContextAddArc(context, imageCentreX, imageCentreY, radius, 0, 2*M_PI, 0);
    CGContextClosePath(context);
    CGContextClip(context);

    // Set the SCALE factor for the graphics context
    // All future draw calls will be scaled by this factor
    CGContextScaleCTM(context, scaleFactorX, scaleFactorY);

    // Draw the IMAGE
    CGRect myRect = CGRectMake(0, 0, imageWidth, imageHeight);
    [image drawInRect:myRect];

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return newImage;
}
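As an illustration of swapping in a different clip shape, here is a hedged Swift sketch (my own, not part of the answer) that clips to an ellipse filling the target rect instead of a circle; any other closed path would work the same way.

import UIKit

// Same idea as circularScaleNCrop above, but with an elliptical clip.
func ellipticalScaleAndCrop(_ image: UIImage, in rect: CGRect) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: rect.size)
    return renderer.image { ctx in
        // Replace the single-radius arc with an ellipse that fills the rect.
        ctx.cgContext.addEllipse(in: CGRect(origin: .zero, size: rect.size))
        ctx.cgContext.clip()
        image.draw(in: CGRect(origin: .zero, size: rect.size))  // image scaled to fill the rect
    }
}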
Include the following code in your UIView class, replacing "monk2.png" with your own image name.
- (void)drawRect:(CGRect)rect
{
    UIImage *originalImage = [UIImage imageNamed:@"monk2.png"];
    CGFloat oImageWidth = originalImage.size.width;
    CGFloat oImageHeight = originalImage.size.height;

    // Draw the original image at the origin
    CGRect oRect = CGRectMake(0, 0, oImageWidth, oImageHeight);
    [originalImage drawInRect:oRect];

    // Set the newRect to half the size of the original image
    CGRect newRect = CGRectMake(0, 0, oImageWidth/2, oImageHeight/2);
    UIImage *newImage = [self circularScaleNCrop:originalImage :newRect];
    CGFloat nImageWidth = newImage.size.width;
    CGFloat nImageHeight = newImage.size.height;

    // Draw the scaled and cropped image next to the original
    CGRect thisRect = CGRectMake(oImageWidth+10, 0, nImageWidth, nImageHeight);
    [newImage drawInRect:thisRect];
}
Here is a quick way to create rounded corners on a square UIImageView to make it look like a perfect circle. Basically you apply a corner radius equal to half the width (width == height for a square image).
#import <QuartzCore/QuartzCore.h> //you need QuartzCore
...
float width = imageView.bounds.size.width; // bounds works as well as frame here, since we only care about the size and not the position
imageView.layer.cornerRadius = width/2;
imageView.layer.masksToBounds = YES; // required so the image is actually clipped to the rounded layer
imageView.layer.borderWidth = 0;
UIImage category to mask an image with a circle:
UIImage *originalImage = [UIImage imageNamed:@"myimage.png"];
UIImage *myRoundedImage = [UIImage roundedImageWithImage:originalImage];
Get it here.
I have another solution:
- (UIImage *)roundedImageWithRect:(CGRect)rect radius:(CGFloat)radius
{
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0);

    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:CGRectMake(0, 0, rect.size.width, rect.size.height) cornerRadius:radius];
    CGFloat imageRatio = self.size.width / self.size.height;
    CGSize imageSize = CGSizeMake(rect.size.height * imageRatio, rect.size.height);
    CGRect imageRect = CGRectMake(0, 0, imageSize.width, imageSize.height);

    [path addClip];
    [self drawInRect:imageRect];

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}
This variant performs better than setting cornerRadius directly on the layer.
Personally, I'd create a transparent circle image with opaque corners to overlay the photo. This solution is only suitable where you place the image in one spot in the UI, and it assumes the opaque corners blend in with the background.
The following is the answer I gave in How to crop UIImage on oval shape or circle shape? to make the image circular. It works for me.
Download the Support archive file
#import "UIImage+RoundedCorner.h"
#import "UIImage+Resize.h"
The following lines resize the image and round it with the given radius:
UIImage *mask = [UIImage imageNamed:@"mask.jpg"];
mask = [mask resizedImage:CGSizeMake(47, 47) interpolationQuality:kCGInterpolationHigh];
mask = [mask roundedCornerImage:23.5 borderSize:1];
Just use
_profilePictureImgView.layer.cornerRadius = 32.0f;
_profilePictureImgView.layer.masksToBounds = YES;
OK, I have searched all over and I can't seem to find the answer I need. :-(
Here is what I am doing and what I need to happen. I have a UIImageView that I am able to transform using UIPanGestureRecognizer, UIRotationGestureRecognizer, and UIPinchGestureRecognizer; all of that works great. The problem comes when it is time to save those transformations to my Photo Album. The results are not what I am expecting. So far, here is the code that I am using (albeit very incomplete):
-(IBAction)saveface:(id)sender
{
    static int kMaxResolution = 640;

    CGImageRef imgRef = face.image.CGImage;
    CGFloat width = CGImageGetWidth(imgRef);
    CGFloat height = CGImageGetHeight(imgRef);

    CGRect bounds = CGRectMake(0, 0, width, height);
    if (width > kMaxResolution || height > kMaxResolution) {
        CGFloat ratio = width/height;
        if (ratio > 1) {
            bounds.size.width = kMaxResolution;
            bounds.size.height = bounds.size.width / ratio;
        } else {
            bounds.size.height = kMaxResolution;
            bounds.size.width = bounds.size.height * ratio;
        }
    }

    UIGraphicsBeginImageContext(bounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextScaleCTM(context, 1, -1);
    CGContextTranslateCTM(context, 0, -height);

    NSLog(@"SAVE TX:%f TY:%f A:%f B:%f C:%f D:%f", face.transform.tx, face.transform.ty, face.transform.a, face.transform.b, face.transform.c, face.transform.d);

    CGFloat x = face.transform.tx;
    CGFloat y = face.transform.ty;
    CGContextTranslateCTM(context, x, -y);
    CGContextConcatCTM(context, face.transform);

    CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, width, height), imgRef);

    UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIImageWriteToSavedPhotosAlbum(imageCopy, self, nil, nil);
}
From this method I am able to save the image; however, the coordinates are way off. Any scaling I did works OK, and I can see my rotation, but it seems to be backwards and the panning is WAY off.
The coordinates are all wrong and for the life of me I can't figure out what is wrong. I admit I am new to Objective-C, I have never used Quartz 2D before, and my understanding of transform matrices is limited. I do understand that my transformations are not applied to the image data itself, as saving the image outright without applying this context to it does nothing. So please, can anyone set me straight on this?
You generally want to translate before you scale.
You've flipped your coordinate system but you still assign the old x and y values. I think you should have:
CGFloat y = face.transform.tx;
CGFloat x = face.transform.ty;
The face object is not defined in this code. If it is off, everything else will be off. If it is a property you should use the self.face reference form to make sure it is accessed properly.
I would recommend performing the scale transform last.
If none of this works. Comment out all but one transform and see if that single transform works. Then add the others in until it fails.
This kind of work is much easier if you put the image in a UIView, transform and scale that, and then render the view's layer. It saves all kinds of hassle later.
Something like:
CGSize vscaledSize = myOriginalImage.size;
//add in the scaled bits of the view
//scaling
CGFloat wratio = vscaledSize.width/self.pictureView.frame.size.width;
CGFloat vhightScaled = self.pictureView.frame.size.height * wratio;
vscaledSize.height = vhightScaled;
CGFloat hratio = vscaledSize.height/self.pictureView.frame.size.height;
//create context
UIGraphicsBeginImageContext(myOriginalImage.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context); //1 original context
// translate/flip the graphics context (for transforming from CG* coords to UI* coords
CGContextTranslateCTM(context, 0, vscaledSize.height);
CGContextScaleCTM(context, 1.0, -1.0);
//original image
CGContextDrawImage(context, CGRectMake(0,0,vscaledSize.width,vscaledSize.height), myOriginalImage.CGImage);
CGContextRestoreGState(context);//1 restore to original;
//scale context to match view size
CGContextSaveGState(context); //1 pre-scaled size
CGContextScaleCTM(context, wratio, hratio);
//render
[self.pictureView.layer renderInContext:context];
CGContextRestoreGState(context);//1 restore to pre-scaled size;
UIImage *exportImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You'll notice that I flip the coord system for the image render, but restore it to the original coordinate system for the UIView.layer render.
Hope this helps
I'm developing an application in which I process the image using its pixels, but that image processing takes a lot of time. I therefore want to crop the UIImage to only its middle part (i.e. remove/crop the bordered part of the image). The code I have so far is:
- (NSInteger)processImage1:(UIImage *)image
{
    CGFloat width = image.size.width;
    CGFloat height = image.size.height;

    struct pixel *pixels = (struct pixel *)calloc(1, image.size.width * image.size.height * sizeof(struct pixel));
    if (pixels != nil)
    {
        // Create a new bitmap
        CGContextRef context = CGBitmapContextCreate(
            (void *)pixels,
            image.size.width,
            image.size.height,
            8,
            image.size.width * 4,
            CGImageGetColorSpace(image.CGImage),
            kCGImageAlphaPremultipliedLast
        );
        if (context != NULL)
        {
            // Draw the image in the bitmap
            CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage);
            NSUInteger numberOfPixels = image.size.width * image.size.height;
            NSMutableArray *numberOfPixelsArray = [[[NSMutableArray alloc] initWithCapacity:numberOfPixels] autorelease];
        }
How do I crop away the border and take only the middle part of the UIImage?
Try something like this:
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
Note: cropRect is a smaller rectangle containing the middle part of the image...
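As a rough illustration of building that cropRect, here is a hedged Swift sketch (my own, not from the answer) that crops the centre half of an image with CGImage's cropping(to:); the function name is arbitrary.

import UIKit

// Crop the middle 50% (per side) of a UIImage.
func middleCrop(of image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    // CGImage dimensions are in pixels, so no point/pixel conversion is needed here.
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let cropRect = CGRect(x: width * 0.25, y: height * 0.25,
                          width: width * 0.5, height: height * 0.5)
    guard let cropped = cgImage.cropping(to: cropRect) else { return nil }
    // Preserve the original scale and orientation when wrapping back into a UIImage.
    return UIImage(cgImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}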
I was looking for a way to get an arbitrary rectangular crop (ie., sub-image) of a UIImage.
Most of the solutions I tried do not work if the orientation of the image is anything but UIImageOrientationUp.
For example:
http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/
Typically, if you use your iPhone camera, you will have other orientations like UIImageOrientationLeft, and you will not get a correct crop with the above. This is because CGImageRef/CGContextDrawImage use a coordinate system that differs from UIImage's.
The code below uses UI* methods (no CGImageRef), and I have tested this with up/down/left/right oriented images, and it seems to work great.
// get sub image
- (UIImage *)getSubImageFrom:(UIImage *)img WithRect:(CGRect)rect {
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // translated rectangle for drawing sub image
    CGRect drawRect = CGRectMake(-rect.origin.x, -rect.origin.y, img.size.width, img.size.height);

    // clip to the bounds of the image context
    // not strictly necessary as it will get clipped anyway?
    CGContextClipToRect(context, CGRectMake(0, 0, rect.size.width, rect.size.height));

    // draw image
    [img drawInRect:drawRect];

    // grab image
    UIImage *subImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return subImage;
}
Because I needed it just now, here is M-V's code in Swift 4:
func imageWithImage(image: UIImage, croppedTo rect: CGRect) -> UIImage {
    UIGraphicsBeginImageContext(rect.size)
    let context = UIGraphicsGetCurrentContext()

    let drawRect = CGRect(x: -rect.origin.x, y: -rect.origin.y,
                          width: image.size.width, height: image.size.height)

    context?.clip(to: CGRect(x: 0, y: 0,
                             width: rect.size.width, height: rect.size.height))

    image.draw(in: drawRect)

    let subImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    return subImage!
}
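A quick usage sketch of the function above (my addition; "photo", the rect values, and imageView are placeholders):

// Crop a 200x200 region starting 50 points in from the top-left corner.
if let photo = UIImage(named: "photo") {
    let thumb = imageWithImage(image: photo, croppedTo: CGRect(x: 50, y: 50, width: 200, height: 200))
    imageView.image = thumb   // assuming some UIImageView called imageView
}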
It would ultimately be faster, with a lot less image creation from sprite atlases, if you could set not only the image for a UIImageView, but also the top-left offset to display within that UIImage. Maybe this is possible. It would certainly eliminate a lot of effort!
Meanwhile, I created these useful functions in a utility class that I use in my apps. It creates a UIImage from part of another UIImage, with options to rotate, scale, and flip using standard UIImageOrientation values to specify. The pixel scaling is preserved from the original image.
My app creates a lot of UIImages during initialization, and this necessarily takes time. But some images aren't needed until a certain tab is selected. To give the appearance of quicker load I could create them in a separate thread spawned at startup, then just wait till it's done when that tab is selected.
This code is also posted at Most efficient way to draw part of an image in iOS
+ (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture {
    return [ChordCalcController imageByCropping:imageToCrop toRect:aperture withOrientation:UIImageOrientationUp];
}

// Draw a full image into a crop-sized area and offset to produce a cropped, rotated image
+ (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture withOrientation:(UIImageOrientation)orientation {
    // convert y coordinate to origin bottom-left
    CGFloat orgY = aperture.origin.y + aperture.size.height - imageToCrop.size.height,
            orgX = -aperture.origin.x,
            scaleX = 1.0,
            scaleY = 1.0,
            rot = 0.0;
    CGSize size;

    switch (orientation) {
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            size = CGSizeMake(aperture.size.height, aperture.size.width);
            break;
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
        case UIImageOrientationUp:
        case UIImageOrientationUpMirrored:
            size = aperture.size;
            break;
        default:
            assert(NO);
            return nil;
    }

    switch (orientation) {
        case UIImageOrientationRight:
            rot = 1.0 * M_PI / 2.0;
            orgY -= aperture.size.height;
            break;
        case UIImageOrientationRightMirrored:
            rot = 1.0 * M_PI / 2.0;
            scaleY = -1.0;
            break;
        case UIImageOrientationDown:
            scaleX = scaleY = -1.0;
            orgX -= aperture.size.width;
            orgY -= aperture.size.height;
            break;
        case UIImageOrientationDownMirrored:
            orgY -= aperture.size.height;
            scaleY = -1.0;
            break;
        case UIImageOrientationLeft:
            rot = 3.0 * M_PI / 2.0;
            orgX -= aperture.size.height;
            break;
        case UIImageOrientationLeftMirrored:
            rot = 3.0 * M_PI / 2.0;
            orgY -= aperture.size.height;
            orgX -= aperture.size.width;
            scaleY = -1.0;
            break;
        case UIImageOrientationUp:
            break;
        case UIImageOrientationUpMirrored:
            orgX -= aperture.size.width;
            scaleX = -1.0;
            break;
    }

    // set the draw rect to pan the image to the right spot
    CGRect drawRect = CGRectMake(orgX, orgY, imageToCrop.size.width, imageToCrop.size.height);

    // create a context for the new image
    UIGraphicsBeginImageContextWithOptions(size, NO, imageToCrop.scale);
    CGContextRef gc = UIGraphicsGetCurrentContext();

    // apply rotation and scaling
    CGContextRotateCTM(gc, rot);
    CGContextScaleCTM(gc, scaleX, scaleY);

    // draw the image to our clipped context using the offset rect
    CGContextDrawImage(gc, drawRect, imageToCrop.CGImage);

    // pull the image from our cropped context
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();

    // pop the context to get back to the default
    UIGraphicsEndImageContext();

    // Note: this is autoreleased
    return cropped;
}
A very small and simple Swift 5 version. You shouldn't mix UI and CG objects; they sometimes have very different coordinate spaces. This can make you sad.
Note 👉 self.draw(at:)
private prefix func - (right: CGPoint) -> CGPoint
{
    return CGPoint(x: -right.x, y: -right.y)
}

extension UIImage
{
    public func cropped(to cropRect: CGRect) -> UIImage?
    {
        let renderer = UIGraphicsImageRenderer(size: cropRect.size)
        return renderer.image
        {
            _ in
            self.draw(at: -cropRect.origin)
        }
    }
}
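A usage sketch (my addition; the asset name and rect values are placeholders):

let source = UIImage(named: "landscape")
let detail = source?.cropped(to: CGRect(x: 120, y: 80, width: 300, height: 300))
// `detail` now contains just that 300x300 (point) region of the original image.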
Using the function
CGContextClipToRect(context, CGRectMake(0, 0, size.width, size.height));
Here's some example code; it was written for a different purpose, but it clips fine.
- (UIImage *)aspectFillToSize:(CGSize)size
{
    CGFloat imgAspect = self.size.width / self.size.height;
    CGFloat sizeAspect = size.width / size.height;

    CGSize scaledSize;
    if (sizeAspect > imgAspect) { // increase width, crop height
        scaledSize = CGSizeMake(size.width, size.width / imgAspect);
    } else { // increase height, crop width
        scaledSize = CGSizeMake(size.height * imgAspect, size.height);
    }

    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextClipToRect(context, CGRectMake(0, 0, size.width, size.height));
    [self drawInRect:CGRectMake(0.0f, 0.0f, scaledSize.width, scaledSize.height)];

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}
If you want a portrait crop down the centre of every photo, use @M-V's solution and replace cropRect with the following (a Swift sketch combining the two follows the snippet below).
CGFloat height = imageTaken.size.height;
CGFloat width = imageTaken.size.width;
CGFloat newWidth = height * 9 / 16;
CGFloat newX = fabs(width - newWidth) / 2;
CGRect cropRect = CGRectMake(newX, 0, newWidth, height);
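For reference, a hedged Swift sketch of the same 9:16 centre crop, reusing the imageWithImage(image:croppedTo:) function from the Swift 4 answer above (imageTaken stands in for your captured UIImage):

// Portrait (9:16) crop down the centre of the photo.
let height = imageTaken.size.height
let width = imageTaken.size.width
let newWidth = height * 9 / 16
let newX = abs(width - newWidth) / 2
let cropRect = CGRect(x: newX, y: 0, width: newWidth, height: height)
let portrait = imageWithImage(image: imageTaken, croppedTo: cropRect)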
I wanted to be able to crop from a region based on an aspect ratio, and scale to a size based on an outer bounding extent. Here is my variation:
import AVFoundation
import ImageIO

class Image {

    class func crop(image: UIImage, source: CGRect, aspect: CGSize, outputExtent: CGSize) -> UIImage {

        let sourceRect = AVMakeRectWithAspectRatioInsideRect(aspect, source)
        let targetRect = AVMakeRectWithAspectRatioInsideRect(aspect, CGRect(origin: CGPointZero, size: outputExtent))

        let opaque = true, deviceScale: CGFloat = 0.0 // use scale of device's main screen
        UIGraphicsBeginImageContextWithOptions(targetRect.size, opaque, deviceScale)

        let scale = max(
            targetRect.size.width / sourceRect.size.width,
            targetRect.size.height / sourceRect.size.height)

        // Note: relies on CGPoint/CGSize arithmetic helpers (unary - and *) defined elsewhere in the project.
        let drawRect = CGRect(origin: -sourceRect.origin * scale, size: image.size * scale)
        image.drawInRect(drawRect)

        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        return scaledImage
    }
}
There are a couple of things I found confusing: the separate concerns of cropping and resizing. Cropping is handled by the origin of the rect you pass to drawInRect, and scaling is handled by the size portion. In my case, I needed to relate the size of the cropping rect on the source to my output rect of the same aspect ratio. The scale factor is then output / input, and this needs to be applied to the drawRect (passed to drawInRect).
One caveat is that this approach effectively assumes that the image you are drawing is larger than the image context. I have not tested this, but I think you could use this code to handle cropping/zooming by explicitly passing the aforementioned scale factor as the context's scale parameter. By default, UIKit applies a multiplier based on the screen resolution.
Finally, it should be noted that this UIKit approach is higher level than the Core Graphics/Quartz and Core Image approaches, and it seems to handle image orientation issues. It is also worth mentioning that it is pretty fast, second to ImageIO, according to this post: http://nshipster.com/image-resizing/
After I load an image from the device, I need to rotate it 37.8 degrees then display it on a View.
Is there a function in Objective-C that can do the image rotation?
Ian
To rotate the view:
imageView.transform = CGAffineTransformMakeRotation(37.8 * M_PI / 180); // the angle is in radians
To rotate the image itself (see the Swift sketch below for these steps put together):
Calculate the width and height that the image will occupy after rotation.
Create a CGContext with UIGraphicsBeginImageContext.
CGContextRotateCTM(UIGraphicsGetCurrentContext(), 37.8 * M_PI / 180);
[yourImage drawAtPoint:...];
Call UIGraphicsGetImageFromCurrentImageContext(); and use this image instead.
Release the context with UIGraphicsEndImageContext.
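Putting those steps together, a hedged Swift sketch (my own, not from this answer; the function name is arbitrary):

import UIKit

// Rotate a UIImage by an arbitrary angle in degrees, expanding the canvas to fit.
func rotated(_ image: UIImage, byDegrees degrees: CGFloat) -> UIImage {
    let radians = degrees * .pi / 180
    // Bounding box of the rotated image.
    let newSize = CGRect(origin: .zero, size: image.size)
        .applying(CGAffineTransform(rotationAngle: radians))
        .size
    let renderer = UIGraphicsImageRenderer(size: newSize)
    return renderer.image { ctx in
        // Move the origin to the centre, rotate, then draw the image centred on that point.
        ctx.cgContext.translateBy(x: newSize.width / 2, y: newSize.height / 2)
        ctx.cgContext.rotate(by: radians)
        image.draw(in: CGRect(x: -image.size.width / 2, y: -image.size.height / 2,
                              width: image.size.width, height: image.size.height))
    }
}

For the original question, something like rotated(loadedImage, byDegrees: 37.8) would produce the image to display.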
Yes, see my answer to this question: Question about rotating a slider
To convert degrees to radians (for the positionInRadians arg) use this function:
CGFloat DegreesToRadians(CGFloat degrees) {return degrees * M_PI / 180;};
To rotate the image, try this:
-(IBAction)rotateImageClick:(id)sender {
    UIImage *image2 = [[UIImage alloc] init];
    image2 = [self imageRotatedByDegrees:self.roateImageView.image deg:(90)]; // Angle of 90 degrees
    self.roateImageView.image = image2;
    imgData = UIImageJPEGRepresentation(image2, 0.9f);
}
This method allows you to rotate an image an arbitrary amount:
- (UIImage *)imageRotatedByDegrees:(UIImage *)oldImage deg:(CGFloat)degrees {
    // calculate the size of the rotated view's containing box for our drawing space
    UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, oldImage.size.width, oldImage.size.height)];
    CGAffineTransform t = CGAffineTransformMakeRotation(degrees * M_PI / 180);
    rotatedViewBox.transform = t;
    CGSize rotatedSize = rotatedViewBox.frame.size;

    // Create the bitmap context
    UIGraphicsBeginImageContext(rotatedSize);
    CGContextRef bitmap = UIGraphicsGetCurrentContext();

    // Move the origin to the middle of the image so we will rotate and scale around the center.
    CGContextTranslateCTM(bitmap, rotatedSize.width/2, rotatedSize.height/2);

    // Rotate the image context
    CGContextRotateCTM(bitmap, (degrees * M_PI / 180));

    // Now, draw the rotated/scaled image into the context
    CGContextScaleCTM(bitmap, 1.0, -1.0);
    CGContextDrawImage(bitmap, CGRectMake(-oldImage.size.width / 2, -oldImage.size.height / 2, oldImage.size.width, oldImage.size.height), [oldImage CGImage]);

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return newImage;
}
See this link for more information.
I'm trying to get rounded corners on a UIImage. From what I've read so far, the easiest way is to use a mask image. For this I used code from the TheElements iPhone example and some image-resize code I found. My problem is that resizedImage is always nil and I can't find the error...
- (UIImage *)imageByScalingProportionallyToSize:(CGSize)targetSize
{
CGSize imageSize = [self size];
float width = imageSize.width;
float height = imageSize.height;
// scaleFactor will be the fraction that we'll
// use to adjust the size. For example, if we shrink
// an image by half, scaleFactor will be 0.5. the
// scaledWidth and scaledHeight will be the original,
// multiplied by the scaleFactor.
//
// IMPORTANT: the "targetHeight" is the size of the space
// we're drawing into. The "scaledHeight" is the height that
// the image actually is drawn at, once we take into
// account the ideal of maintaining proportions
float scaleFactor = 0.0;
float scaledWidth = targetSize.width;
float scaledHeight = targetSize.height;
CGPoint thumbnailPoint = CGPointMake(0,0);
// since not all images are square, we want to scale
// proportionately. To do this, we find the longest
// edge and use that as a guide.
if ( CGSizeEqualToSize(imageSize, targetSize) == NO )
{
// use the longeset edge as a guide. if the
// image is wider than tall, we'll figure out
// the scale factor by dividing it by the
// intended width. Otherwise, we'll use the
// height.
float widthFactor = targetSize.width / width;
float heightFactor = targetSize.height / height;
if ( widthFactor < heightFactor )
scaleFactor = widthFactor;
else
scaleFactor = heightFactor;
// ex: 500 * 0.5 = 250 (newWidth)
scaledWidth = width * scaleFactor;
scaledHeight = height * scaleFactor;
// center the thumbnail in the frame. if
// wider than tall, we need to adjust the
// vertical drawing point (y axis)
if ( widthFactor < heightFactor )
thumbnailPoint.y = (targetSize.height - scaledHeight) * 0.5;
else if ( widthFactor > heightFactor )
thumbnailPoint.x = (targetSize.width - scaledWidth) * 0.5;
}
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;
colorSpace = CGColorSpaceCreateDeviceRGB();
// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate (NULL, targetSize.width, targetSize.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
// free the rgb colorspace
CGColorSpaceRelease(colorSpace);
if (mainViewContentContext==NULL)
return NULL;
//CGContextSetFillColorWithColor(mainViewContentContext, [[UIColor whiteColor] CGColor]);
//CGContextFillRect(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height));
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);
// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
CGImageRef maskImage = [[UIImage imageNamed:@"Mask.png"] CGImage];
CGImageRef resizedImage = CGImageCreateWithMask(mainViewContentBitmapContext, maskImage);
CGImageRelease(mainViewContentBitmapContext);
// convert the finished resized image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:resizedImage];
// image is retained by the property setting above, so we can
// release the original
CGImageRelease(resizedImage);
// return the image
return theImage;
}
If you are using a UIImageView to display the image you can simply do the following:
imageView.layer.cornerRadius = 5.0;
imageView.layer.masksToBounds = YES;
And to add a border:
imageView.layer.borderColor = [UIColor lightGrayColor].CGColor;
imageView.layer.borderWidth = 1.0;
I believe that you'll have to import <QuartzCore/QuartzCore.h> and link against it for the above code to work.
How about these lines...
// Get your image somehow
UIImage *image = [UIImage imageNamed:@"image.jpg"];

// Begin a new image that will be the new image with the rounded corners
// (here with the size of an UIImageView)
UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 1.0);

// Add a clip before drawing anything, in the shape of a rounded rect
[[UIBezierPath bezierPathWithRoundedRect:imageView.bounds
                            cornerRadius:10.0] addClip];

// Draw your image
[image drawInRect:imageView.bounds];

// Get the image, here setting the UIImageView image
imageView.image = UIGraphicsGetImageFromCurrentImageContext();

// Lets forget about that we were drawing
UIGraphicsEndImageContext();
I created a UIImage extension in Swift, based on @epatel's great answer:
extension UIImage{
var roundedImage: UIImage {
let rect = CGRect(origin:CGPoint(x: 0, y: 0), size: self.size)
UIGraphicsBeginImageContextWithOptions(self.size, false, 1)
defer {
// End context after returning to avoid memory leak
UIGraphicsEndImageContext()
}
UIBezierPath(
roundedRect: rect,
cornerRadius: self.size.height
).addClip()
self.drawInRect(rect)
return UIGraphicsGetImageFromCurrentImageContext()
}
}
Tested in a storyboard.
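A quick usage sketch (my addition; "avatar" is a placeholder asset name):

let avatar = UIImage(named: "avatar")
let round = avatar?.roundedImage   // fully rounded copy of the image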
The problem was the use of CGImageCreateWithMask which returned an all black image. The solution I found was to use CGContextClipToMask instead:
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;

colorSpace = CGColorSpaceCreateDeviceRGB();

// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate(NULL, targetSize.width, targetSize.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);

// free the rgb colorspace
CGColorSpaceRelease(colorSpace);

if (mainViewContentContext == NULL)
    return NULL;

CGImageRef maskImage = [[UIImage imageNamed:@"mask.png"] CGImage];
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height), maskImage);
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);

// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);

// convert the finished resized image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];

// image is retained by the property setting above, so we can
// release the original
CGImageRelease(mainViewContentBitmapContext);

// return the image
return theImage;
Extending Besi's excellent answer, with correct scale, in Swift 4:
extension UIImage {
public func rounded(radius: CGFloat) -> UIImage {
let rect = CGRect(origin: .zero, size: size)
UIGraphicsBeginImageContextWithOptions(size, false, 0)
UIBezierPath(roundedRect: rect, cornerRadius: radius).addClip()
draw(in: rect)
return UIGraphicsGetImageFromCurrentImageContext()!
}
}
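A usage sketch (my addition; the asset name, radius value, and imageView are placeholders):

let photo = UIImage(named: "profile")
imageView.image = photo?.rounded(radius: 12)   // rounded copy of the image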
You aren't actually doing anything other than scaling there. What you need to do is to "mask" the corners of the image by clipping it with a CGPath. For instance -
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextBeginTransparencyLayerWithRect(context, self.frame, NULL);
    CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 1.0);

    CGFloat roundRadius = (radius) ? radius : 12.0;
    CGFloat minx = CGRectGetMinX(self.frame), midx = CGRectGetMidX(self.frame), maxx = CGRectGetMaxX(self.frame);
    CGFloat miny = CGRectGetMinY(self.frame), midy = CGRectGetMidY(self.frame), maxy = CGRectGetMaxY(self.frame);

    // draw the arcs, handle paths
    CGContextMoveToPoint(context, minx, midy);
    CGContextAddArcToPoint(context, minx, miny, midx, miny, roundRadius);
    CGContextAddArcToPoint(context, maxx, miny, maxx, midy, roundRadius);
    CGContextAddArcToPoint(context, maxx, maxy, midx, maxy, roundRadius);
    CGContextAddArcToPoint(context, minx, maxy, minx, midy, roundRadius);
    CGContextClosePath(context);
    CGContextDrawPath(context, kCGPathFill);

    CGContextEndTransparencyLayer(context);
}
I suggest checking out the Quartz 2D programming guide or some other samples.
static void addRoundedRectToPath(CGContextRef context, CGRect rect, float ovalWidth, float ovalHeight)
{
    float fw, fh;
    if (ovalWidth == 0 || ovalHeight == 0) {
        CGContextAddRect(context, rect);
        return;
    }

    CGContextSaveGState(context);
    CGContextTranslateCTM(context, CGRectGetMinX(rect), CGRectGetMinY(rect));
    CGContextScaleCTM(context, ovalWidth, ovalHeight);

    fw = CGRectGetWidth(rect) / ovalWidth;
    fh = CGRectGetHeight(rect) / ovalHeight;

    CGContextMoveToPoint(context, fw, fh/2);
    CGContextAddArcToPoint(context, fw, fh, fw/2, fh, 1);
    CGContextAddArcToPoint(context, 0, fh, 0, fh/2, 1);
    CGContextAddArcToPoint(context, 0, 0, fw/2, 0, 1);
    CGContextAddArcToPoint(context, fw, 0, fw, fh/2, 1);
    CGContextClosePath(context);

    CGContextRestoreGState(context);
}
+ (UIImage *)imageWithRoundCorner:(UIImage *)img andCornerSize:(CGSize)size
{
    UIImage *newImage = nil;

    if (nil != img)
    {
        @autoreleasepool {
            int w = img.size.width;
            int h = img.size.height;

            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);

            CGContextBeginPath(context);
            CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
            addRoundedRectToPath(context, rect, size.width, size.height);
            CGContextClosePath(context);
            CGContextClip(context);

            CGContextDrawImage(context, CGRectMake(0, 0, w, h), img.CGImage);

            CGImageRef imageMasked = CGBitmapContextCreateImage(context);
            CGContextRelease(context);
            CGColorSpaceRelease(colorSpace);

            // Note: do not release img here; this method does not own it.
            newImage = [[UIImage imageWithCGImage:imageMasked] retain];
            CGImageRelease(imageMasked);
        }
    }

    return [newImage autorelease];
}
I think this could be very related:
In iOS 11 there is a very elegant way of rounding each single corner of an (image) view.
let imageView = UIImageView(image: UIImage(named: "myImage"))
imageView.layer.maskedCorners = [.layerMinXMinYCorner, .layerMaxXMinYCorner]
imageView.layer.cornerRadius = 10.0
I liked @samwize's answer; however, it caused nasty memory leaks for me when used with a collection view.
To fix it, I found that UIGraphicsEndImageContext() was missing:
extension UIImage {
    /**
     Rounds the corners of a UIImage.

     - Parameter proportion: Proportion relative to the minimum dimension (width or height),
       so the corner radius looks the same independently of aspect ratio and actual size
     */
    func roundCorners(proportion: CGFloat) -> UIImage {
        let minValue = min(self.size.width, self.size.height)
        let radius = minValue / proportion
        let rect = CGRect(origin: CGPoint(x: 0, y: 0), size: self.size)
        UIGraphicsBeginImageContextWithOptions(self.size, false, 1)
        UIBezierPath(roundedRect: rect, cornerRadius: radius).addClip()
        self.draw(in: rect)
        let image = UIGraphicsGetImageFromCurrentImageContext() ?? self
        UIGraphicsEndImageContext()
        return image
    }
}
Feel free to just pass the radius instead of a proportion. proportion is used because I have a scrolling collection view and the images have different sizes, so with a constant radius the corners actually look different in terms of proportions (example: two images, one 1000x1000 and another 2000x2000; a corner radius of 30 will look different on each of them).
So if you do image.roundCorners(proportion: 20), all the pictures look like they have the same corner radius.
This answer is also an updated version.
The reason it worked with clipping but not with masking seems to be the color space.
Apple's documentation says:
mask
A mask. If the mask is an image, it must be in the DeviceGray color space, must not have an alpha component, and may not itself be masked by an image mask or a masking color. If the mask is not the same size as the image specified by the image parameter, then Quartz scales the mask to fit the image.
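A hedged Swift sketch (my own, not from the answer) of redrawing an arbitrary mask image into the DeviceGray, alpha-free format the documentation asks for, before handing it to CGImage's masking(_:):

import UIKit

// Convert a mask UIImage into a DeviceGray CGImage with no alpha component.
func grayscaleMask(from maskImage: UIImage) -> CGImage? {
    guard let source = maskImage.cgImage else { return nil }
    let width = source.width
    let height = source.height
    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: width,
                                  space: CGColorSpaceCreateDeviceGray(),
                                  bitmapInfo: CGImageAlphaInfo.none.rawValue) else { return nil }
    context.draw(source, in: CGRect(x: 0, y: 0, width: width, height: height))
    return context.makeImage()
}

// Usage (assumed names): let masked = photo.cgImage?.masking(grayscaleMask(from: mask)!)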
Hi guys, try this code:
+ (UIImage *)roundedRectImageFromImage:(UIImage *)image withRadious:(CGFloat)radious {
    if (radious == 0.0f)
        return image;

    if (image != nil) {
        CGFloat imageWidth = image.size.width;
        CGFloat imageHeight = image.size.height;
        CGRect rect = CGRectMake(0.0f, 0.0f, imageWidth, imageHeight);
        UIWindow *window = [[[UIApplication sharedApplication] windows] objectAtIndex:0];
        const CGFloat scale = window.screen.scale;
        UIGraphicsBeginImageContextWithOptions(rect.size, NO, scale);

        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextBeginPath(context);
        CGContextSaveGState(context);
        CGContextTranslateCTM(context, CGRectGetMinX(rect), CGRectGetMinY(rect));
        CGContextScaleCTM(context, radious, radious);

        CGFloat rectWidth = CGRectGetWidth(rect) / radious;
        CGFloat rectHeight = CGRectGetHeight(rect) / radious;

        CGContextMoveToPoint(context, rectWidth, rectHeight/2.0f);
        CGContextAddArcToPoint(context, rectWidth, rectHeight, rectWidth/2.0f, rectHeight, radious);
        CGContextAddArcToPoint(context, 0.0f, rectHeight, 0.0f, rectHeight/2.0f, radious);
        CGContextAddArcToPoint(context, 0.0f, 0.0f, rectWidth/2.0f, 0.0f, radious);
        CGContextAddArcToPoint(context, rectWidth, 0.0f, rectWidth, rectHeight/2.0f, radious);
        CGContextRestoreGState(context);
        CGContextClosePath(context);
        CGContextClip(context);

        [image drawInRect:CGRectMake(0.0f, 0.0f, imageWidth, imageHeight)];

        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return newImage;
    }

    return nil;
}
Cheers !!!
It's very easy to create a rounded image when you make use of the image dimension.
cell.messageImage.layer.cornerRadius = image.size.width / 2
cell.messageImage.layer.masksToBounds = true
I found the best and simplest way of doing it is as follows (no other answer did this):
UIImageView *imageView;
imageView.layer.cornerRadius = imageView.frame.size.width/2.0f;
imageView.layer.masksToBounds = TRUE;
Pretty simple, and it does this right.
See here...
IMO unless you absolutely need to do it in code, just overlay an image on top.
Something along the lines of...
- (void)drawRect:(CGRect)rect
{
    // Drawing code
    [backgroundImage drawInRect:rect];
    [buttonOverlay drawInRect:rect];
}
To create a round-corner image we can use QuartzCore.
First, how do you add the QuartzCore framework?
Click the project - Targets
-> project
-> Build Phases
-> Link Binary with Libraries
-> then click the + symbol, select QuartzCore.framework from the list, and add it
or else
Click the project - Targets
-> Targets
-> General
-> Linked Frameworks and Libraries
-> then click the + symbol, select QuartzCore.framework from the list, and add it
Now import
#import <QuartzCore/QuartzCore.h>
in your ViewController
Then in viewDidLoad method
self.yourImageView.layer.cornerRadius = 5.0;
self.yourImageView.layer.borderWidth = 1.0f;
self.yourImageView.layer.borderColor = [UIColor blackColor].CGColor;
self.yourImageView.layer.masksToBounds = YES;
I was struggling to round the corners of a UIImage box in my storyboard. I had an IBOutlet for my UIImage called image. After reading a bunch of posts on here, I simply added three lines and that worked perfectly.
import UIKit
Then in viewDidLoad:
image.layer.cornerRadius = 20.0
image.layer.masksToBounds = true
This is for iOS 11.1 in Xcode 9.