I'm trying to get rounded corners on a UIImage; from what I've read so far, the easiest way is to use a mask image. For this I used code from the TheElements iPhone example and some image-resize code I found. My problem is that resizedImage is always nil and I can't find the error...
- (UIImage *)imageByScalingProportionallyToSize:(CGSize)targetSize
{
CGSize imageSize = [self size];
float width = imageSize.width;
float height = imageSize.height;
// scaleFactor will be the fraction that we'll
// use to adjust the size. For example, if we shrink
// an image by half, scaleFactor will be 0.5. the
// scaledWidth and scaledHeight will be the original,
// multiplied by the scaleFactor.
//
// IMPORTANT: the "targetHeight" is the size of the space
// we're drawing into. The "scaledHeight" is the height that
// the image actually is drawn at, once we take into
// account the ideal of maintaining proportions
float scaleFactor = 0.0;
float scaledWidth = targetSize.width;
float scaledHeight = targetSize.height;
CGPoint thumbnailPoint = CGPointMake(0,0);
// since not all images are square, we want to scale
// proportionately. To do this, we find the longest
// edge and use that as a guide.
if ( CGSizeEqualToSize(imageSize, targetSize) == NO )
{
// use the longest edge as a guide. if the
// image is wider than tall, we'll figure out
// the scale factor by dividing it by the
// intended width. Otherwise, we'll use the
// height.
float widthFactor = targetSize.width / width;
float heightFactor = targetSize.height / height;
if ( widthFactor < heightFactor )
scaleFactor = widthFactor;
else
scaleFactor = heightFactor;
// ex: 500 * 0.5 = 250 (newWidth)
scaledWidth = width * scaleFactor;
scaledHeight = height * scaleFactor;
// center the thumbnail in the frame. if
// wider than tall, we need to adjust the
// vertical drawing point (y axis)
if ( widthFactor < heightFactor )
thumbnailPoint.y = (targetSize.height - scaledHeight) * 0.5;
else if ( widthFactor > heightFactor )
thumbnailPoint.x = (targetSize.width - scaledWidth) * 0.5;
}
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;
colorSpace = CGColorSpaceCreateDeviceRGB();
// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate (NULL, targetSize.width, targetSize.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
// free the rgb colorspace
CGColorSpaceRelease(colorSpace);
if (mainViewContentContext==NULL)
return NULL;
//CGContextSetFillColorWithColor(mainViewContentContext, [[UIColor whiteColor] CGColor]);
//CGContextFillRect(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height));
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);
// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
CGImageRef maskImage = [[UIImage imageNamed:@"Mask.png"] CGImage];
CGImageRef resizedImage = CGImageCreateWithMask(mainViewContentBitmapContext, maskImage);
CGImageRelease(mainViewContentBitmapContext);
// convert the finished resized image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:resizedImage];
// image is retained by the property setting above, so we can
// release the original
CGImageRelease(resizedImage);
// return the image
return theImage;
}
If you are using a UIImageView to display the image you can simply do the following:
imageView.layer.cornerRadius = 5.0;
imageView.layer.masksToBounds = YES;
And to add a border:
imageView.layer.borderColor = [UIColor lightGrayColor].CGColor;
imageView.layer.borderWidth = 1.0;
I believe that you'll have to import <QuartzCore/QuartzCore.h> and link against it for the above code to work.
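For reference, a rough Swift equivalent of the same layer-based approach (a sketch only, assuming an imageView outlet; in recent SDKs UIKit already pulls in QuartzCore, so no extra import is needed):
// round the image view's corners and clip its contents
imageView.layer.cornerRadius = 5.0
imageView.layer.masksToBounds = true
// optional border
imageView.layer.borderColor = UIColor.lightGray.cgColor
imageView.layer.borderWidth = 1.0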
How about these lines...
// Get your image somehow
UIImage *image = [UIImage imageNamed:@"image.jpg"];
// Begin a new image that will be the new image with the rounded corners
// (here with the size of an UIImageView)
UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 1.0);
// Add a clip before drawing anything, in the shape of an rounded rect
[[UIBezierPath bezierPathWithRoundedRect:imageView.bounds
cornerRadius:10.0] addClip];
// Draw your image
[image drawInRect:imageView.bounds];
// Get the image, here setting the UIImageView image
imageView.image = UIGraphicsGetImageFromCurrentImageContext();
// Lets forget about that we were drawing
UIGraphicsEndImageContext();
I created a UIImage extension in Swift, based on @epatel's great answer:
extension UIImage{
var roundedImage: UIImage {
let rect = CGRect(origin:CGPoint(x: 0, y: 0), size: self.size)
UIGraphicsBeginImageContextWithOptions(self.size, false, 1)
defer {
// End context after returning to avoid memory leak
UIGraphicsEndImageContext()
}
UIBezierPath(
roundedRect: rect,
cornerRadius: self.size.height
).addClip()
self.drawInRect(rect)
return UIGraphicsGetImageFromCurrentImageContext()
}
}
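Usage is then as simple as this (the asset name is just a placeholder):
// returns a copy of the image with rounded corners
let rounded = UIImage(named: "avatar")?.roundedImage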
Tested in a storyboard:
The problem was the use of CGImageCreateWithMask which returned an all black image. The solution I found was to use CGContextClipToMask instead:
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;
colorSpace = CGColorSpaceCreateDeviceRGB();
// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate (NULL, targetSize.width, targetSize.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
// free the rgb colorspace
CGColorSpaceRelease(colorSpace);
if (mainViewContentContext==NULL)
return NULL;
CGImageRef maskImage = [[UIImage imageNamed:@"mask.png"] CGImage];
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height), maskImage);
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);
// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
// convert the finished resized image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];
// image is retained by the property setting above, so we can
// release the original
CGImageRelease(mainViewContentBitmapContext);
// return the image
return theImage;
Extending Besi's excellent answer, with correct scale, in Swift 4:
extension UIImage {
public func rounded(radius: CGFloat) -> UIImage {
let rect = CGRect(origin: .zero, size: size)
UIGraphicsBeginImageContextWithOptions(size, false, 0)
UIBezierPath(roundedRect: rect, cornerRadius: radius).addClip()
draw(in: rect)
return UIGraphicsGetImageFromCurrentImageContext()!
}
}
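Usage might look like this (a sketch, assuming an imageView and a placeholder asset name):
// round the corners with a 12 pt radius before displaying
imageView.image = UIImage(named: "photo")?.rounded(radius: 12)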
You aren't actually doing anything other than scaling there. What you need to do is to "mask" the corners of the image by clipping it with a CGPath. For instance -
- (void)drawRect:(CGRect)rect {
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextBeginTransparencyLayerWithRect(context, self.bounds, NULL);
CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 1.0);
CGFloat roundRadius = (radius) ? radius : 12.0;
CGFloat minx = CGRectGetMinX(self.bounds), midx = CGRectGetMidX(self.bounds), maxx = CGRectGetMaxX(self.bounds);
CGFloat miny = CGRectGetMinY(self.bounds), midy = CGRectGetMidY(self.bounds), maxy = CGRectGetMaxY(self.bounds);
// draw the arcs, handle paths
CGContextMoveToPoint(context, minx, midy);
CGContextAddArcToPoint(context, minx, miny, midx, miny, roundRadius);
CGContextAddArcToPoint(context, maxx, miny, maxx, midy, roundRadius);
CGContextAddArcToPoint(context, maxx, maxy, midx, maxy, roundRadius);
CGContextAddArcToPoint(context, minx, maxy, minx, midy, roundRadius);
CGContextClosePath(context);
CGContextDrawPath(context, kCGPathFill);
CGContextEndTransparencyLayer(context);
}
I suggest checking out the Quartz 2D programming guide or some other samples.
static void addRoundedRectToPath(CGContextRef context, CGRect rect, float ovalWidth, float ovalHeight)
{
float fw, fh;
if (ovalWidth == 0 || ovalHeight == 0) {
CGContextAddRect(context, rect);
return;
}
CGContextSaveGState(context);
CGContextTranslateCTM (context, CGRectGetMinX(rect), CGRectGetMinY(rect));
CGContextScaleCTM (context, ovalWidth, ovalHeight);
fw = CGRectGetWidth (rect) / ovalWidth;
fh = CGRectGetHeight (rect) / ovalHeight;
CGContextMoveToPoint(context, fw, fh/2);
CGContextAddArcToPoint(context, fw, fh, fw/2, fh, 1);
CGContextAddArcToPoint(context, 0, fh, 0, fh/2, 1);
CGContextAddArcToPoint(context, 0, 0, fw/2, 0, 1);
CGContextAddArcToPoint(context, fw, 0, fw, fh/2, 1);
CGContextClosePath(context);
CGContextRestoreGState(context);
}
+ (UIImage *)imageWithRoundCorner:(UIImage*)img andCornerSize:(CGSize)size
{
UIImage * newImage = nil;
if( nil != img)
{
@autoreleasepool {
int w = img.size.width;
int h = img.size.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
CGContextBeginPath(context);
CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
addRoundedRectToPath(context, rect, size.width, size.height);
CGContextClosePath(context);
CGContextClip(context);
CGContextDrawImage(context, CGRectMake(0, 0, w, h), img.CGImage);
CGImageRef imageMasked = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// note: img must not be released here; this method does not own it
newImage = [[UIImage imageWithCGImage:imageMasked] retain]; // retained so it survives the autorelease pool
CGImageRelease(imageMasked);
}
}
return [newImage autorelease];
}
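A call site might look like this (ImageUtils and photo are just placeholders for whatever class hosts the helper and your own image):
UIImage *rounded = [ImageUtils imageWithRoundCorner:photo andCornerSize:CGSizeMake(10.0, 10.0)];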
I think this could be very related:
In iOS 11 there is a very elegant way of rounding each individual corner of an (image) view.
let imageView = UIImageView(image: UIImage(named: "myImage"))
imageView.layer.maskedCorners = [.layerMinXMinYCorner, .layerMaxXMinYCorner]
imageView.layer.cornerRadius = 10.0
imageView.layer.masksToBounds = true // required so the image content is actually clipped
I liked @samwize's answer, however it caused nasty memory leaks for me when used with a collection view.
To fix it, I found that UIGraphicsEndImageContext() was missing:
extension UIImage {
/**
Rounds corners of UIImage
- Parameter proportion: Proportion of the minimum parameter (width or height)
in order to have the same look of the corner radius independently
of aspect ratio and actual size
*/
func roundCorners(proportion: CGFloat) -> UIImage {
let minValue = min(self.size.width, self.size.height)
let radius = minValue/proportion
let rect = CGRect(origin: CGPoint(x: 0, y: 0), size: self.size)
UIGraphicsBeginImageContextWithOptions(self.size, false, 1)
UIBezierPath(roundedRect: rect, cornerRadius: radius).addClip()
self.draw(in: rect)
let image = UIGraphicsGetImageFromCurrentImageContext() ?? self
UIGraphicsEndImageContext()
return image
}
}
Feel free to just pass the radius instead of a proportion. proportion is used because I have a collection view and the images have different sizes, so a constant radius actually looks different in terms of proportions (example: two images, one 1000x1000 and another 2000x2000; a corner radius of 30 will look different on each of them).
So if you do image.roundCorners(proportion: 20), all the pictures look like they have the same corner radius.
This answer is also an updated version.
The reason it worked with clipping but not with masking seems to be the color space.
Apple's documentation is quoted below.
mask
A mask. If the mask is an image, it must be in the DeviceGray color space, must not have an alpha component, and may not itself be masked by an image mask or a masking color. If the mask is not the same size as the image specified by the image parameter, then Quartz scales the mask to fit the image.
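If you do want to stick with CGImageCreateWithMask, one option (a minimal sketch, assuming Mask.png is a grayscale image with no alpha channel, and mainViewContentBitmapContext is the CGImageRef from the question's code) is to rebuild the mask as a true image mask first:
CGImageRef maskRef = [UIImage imageNamed:@"Mask.png"].CGImage;
// re-wrap the bitmap as an image mask so Quartz treats it as mask data rather than an RGB image
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                    CGImageGetHeight(maskRef),
                                    CGImageGetBitsPerComponent(maskRef),
                                    CGImageGetBitsPerPixel(maskRef),
                                    CGImageGetBytesPerRow(maskRef),
                                    CGImageGetDataProvider(maskRef), NULL, false);
CGImageRef maskedImage = CGImageCreateWithMask(mainViewContentBitmapContext, mask);
CGImageRelease(mask);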
Try this code:
+ (UIImage *)roundedRectImageFromImage:(UIImage *)image withRadious:(CGFloat)radious {
if(radious == 0.0f)
return image;
if( image != nil) {
CGFloat imageWidth = image.size.width;
CGFloat imageHeight = image.size.height;
CGRect rect = CGRectMake(0.0f, 0.0f, imageWidth, imageHeight);
UIWindow *window = [[[UIApplication sharedApplication] windows] objectAtIndex:0];
const CGFloat scale = window.screen.scale;
UIGraphicsBeginImageContextWithOptions(rect.size, NO, scale);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextBeginPath(context);
CGContextSaveGState(context);
CGContextTranslateCTM (context, CGRectGetMinX(rect), CGRectGetMinY(rect));
CGContextScaleCTM (context, radious, radious);
CGFloat rectWidth = CGRectGetWidth (rect)/radious;
CGFloat rectHeight = CGRectGetHeight (rect)/radious;
CGContextMoveToPoint(context, rectWidth, rectHeight/2.0f);
CGContextAddArcToPoint(context, rectWidth, rectHeight, rectWidth/2.0f, rectHeight, radious);
CGContextAddArcToPoint(context, 0.0f, rectHeight, 0.0f, rectHeight/2.0f, radious);
CGContextAddArcToPoint(context, 0.0f, 0.0f, rectWidth/2.0f, 0.0f, radious);
CGContextAddArcToPoint(context, rectWidth, 0.0f, rectWidth, rectHeight/2.0f, radious);
CGContextRestoreGState(context);
CGContextClosePath(context);
CGContextClip(context);
[image drawInRect:CGRectMake(0.0f, 0.0f, imageWidth, imageHeight)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
return nil;
}
Cheers !!!
It's very easy to create a rounded image when you make use of the image dimension.
cell.messageImage.layer.cornerRadius = image.size.width / 2
cell.messageImage.layer.masksToBounds = true
I found the simplest way of doing it is as follows (no other answer did it this way):
UIImageView *imageView;
imageView.layer.cornerRadius = imageView.frame.size.width/2.0f;
imageView.layer.masksToBounds = TRUE;
Pretty simple, and it does the job.
See here...
IMO unless you absolutely need to do it in code, just overlay an image on top.
Something along the lines of...
- (void)drawRect:(CGRect)rect
{
// Drawing code
[backgroundImage drawInRect:rect];
[buttonOverlay drawInRect:rect];
}
To create a round-corner image we can use QuartzCore.
First, how do you add the QuartzCore framework?
Click the project -> Targets
-> Build Phases
-> Link Binary With Libraries
-> click the + symbol, then select QuartzCore.framework from the list and add it
or else
Click the project -> Targets
-> General
-> Linked Frameworks and Libraries
-> click the + symbol, then select QuartzCore.framework from the list and add it
Now import
#import <QuartzCore/QuartzCore.h>
in your ViewController
Then, in the viewDidLoad method:
self.yourImageView.layer.cornerRadius = 5.0;
self.yourImageView.layer.borderWidth = 1.0f;
self.yourImageView.layer.borderColor = [UIColor blackColor].CGColor;
self.yourImageView.layer.masksToBounds = YES;
I was struggling to round the corners of a UIImageView in my storyboard. I had an IBOutlet for it called image. After reading a bunch of posts on here, I simply added three lines and that worked perfectly.
import UIKit
Then in viewDidLoad:
image.layer.cornerRadius = 20.0
image.layer.masksToBounds = true
This is for iOS 11.1 in Xcode 9.
Related
If I override drawRect in order to display an image and place a dynamically generated overlay on it (see code), whenever I scale up the image it is drawn in a very blurry way as the result of the scaling.
The image is composed of two pieces: one is drawn from a PNG (whose original size is 2x the wanted one, so it should not give problems when scaled, but it does), and the other is dynamically generated according to the rect size, so it should also adapt to the current rect size, but it doesn't.
Any help?
- (void) drawRect:(CGRect)rect
{
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextDrawImage(ctx, CGRectMake(0, 0, rect.size.width, rect.size.height), [UIImage imageNamed:@"actionBg.png"].CGImage);
// generate the overlay
if ([self isActive] == NO && self.fullDelay != 0) { // TODO: remove fullDelay check!
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef overlayCtx = UIGraphicsGetCurrentContext();
int segmentSize = (rect.size.height / [self fullDelay]);
for (int i=0; i<[self fullDelay]; i++) {
float alpha = 0.9 - (([self fullDelay] * 0.1) - (i * 0.1));
[[UIColor colorWithRed:120.0/255.0 green:14.0/255.0 blue:14.0/255.0 alpha:alpha] setFill];
if (currentDelay > i) {
CGRect r = CGRectMake(0, i * segmentSize, rect.size.width, segmentSize);
CGContextFillRect(overlayCtx, r);
}
[[UIColor colorWithRed:1 green:1 blue:1 alpha:0.3] setFill];
CGRect line = CGRectMake(0, (i * segmentSize) + segmentSize - 1 , rect.size.width, 1);
CGContextFillRect(overlayCtx, line);
}
UIImage *overlay = UIGraphicsGetImageFromCurrentImageContext();
UIImage *overlayMasked = [TDUtilities maskImage:overlay withMask:[UIImage imageNamed:@"actionMask.png"]];
// prevent the drawings to be flipped
CGContextTranslateCTM(overlayCtx, 0, rect.size.height);
CGContextScaleCTM(overlayCtx, 1.0, -1.0);
CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
CGContextDrawImage(ctx, rect, overlayMasked.CGImage);
CGContextSetBlendMode(ctx, kCGBlendModeNormal);
UIGraphicsEndImageContext();
}
}
The problem is that you are drawing overlayMasked as a CGImage with CGContextDrawImage, which knows nothing of scale. Either you will have to double the size yourself manually if you're in a double-scale situation, or you should use UIImage, which knows about scale.
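A minimal sketch of the UIKit route, against the question's drawRect code (assuming you first call UIGraphicsEndImageContext() so the view's own context is current again):
UIGraphicsEndImageContext();
// UIImage drawing respects the image's scale factor, so the overlay stays sharp on retina screens
[overlayMasked drawInRect:rect blendMode:kCGBlendModeMultiply alpha:1.0];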
I want to rotate a UIImage (not a UIImageView) by a custom number of degrees.
I followed this post but it didn't work for me.
Anyone can help? Thanks.
UPDATE:
The code below does some of the job, but I lose part of the image after rotating it:
What should I change to get it right? (BTW, the yellow color in the screenshots is my UIImageView background.)
- (UIImage *) rotate: (UIImage *) image
{
double angle = 20;
CGSize s = {image.size.width, image.size.height};
UIGraphicsBeginImageContext(s);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, 0,image.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGContextRotateCTM(ctx, 2*M_PI*angle/360);
CGContextDrawImage(ctx,CGRectMake(0,0,image.size.width, image.size.height),image.CGImage);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
This method returns the image rotated by your angle (here hard-coded to 90 degrees; see the comment for arbitrary angles):
#pragma mark -
#pragma mark Rotate Image
- (UIImage *)scaleAndRotateImage:(UIImage *)image {
CGImageRef imgRef = image.CGImage;
CGFloat width = CGImageGetWidth(imgRef);
CGFloat height = CGImageGetHeight(imgRef);
CGAffineTransform transform = CGAffineTransformIdentity;
CGRect bounds = CGRectMake(0, 0, width, height);
CGFloat boundHeight;
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeScale(-1.0, 1.0);
transform = CGAffineTransformRotate(transform, M_PI / 2.0); // for an arbitrary angle use: degrees / 180.0 * M_PI
UIGraphicsBeginImageContext(bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imgRef);
UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return imageCopy;
}
- (UIImage *)rotate:(UIImage *)image radians:(float)rads
{
float newSide = MAX([image size].width, [image size].height);
CGSize size = CGSizeMake(newSide, newSide);
UIGraphicsBeginImageContext(size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, newSide/2, newSide/2);
CGContextRotateCTM(ctx, rads);
CGContextDrawImage(UIGraphicsGetCurrentContext(),CGRectMake(-[image size].width/2,-[image size].height/2,size.width, size.height),image.CGImage);
//CGContextTranslateCTM(ctx, [image size].width/2, [image size].height/2);
UIImage *i = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return i;
}
This function rotates any image about its center, but the resulting image becomes a square, so I would suggest referencing the image's center when drawing it after calling this function.
You need to address two things to make this work.
You are rotating about the bottom corner of the image instead of the centre
The bounding rectangle of the resulting image needs to be larger now the image is rotated for it to fit in.
To solve the rotation about the centre, first perform a translate to the centre, then rotate, then translate back.
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, boundingRect.size.width/2, boundingRect.size.height/2);
transform = CGAffineTransformRotate(transform, angle);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
// Draw the image into the context
CGContextDrawImage(context, CGRectMake(-imageView.image.size.width/2, -imageView.image.size.height/2, imageView.image.size.width, imageView.image.size.height), imageView.image.CGImage);
// Get an image from the context
rotatedImage = [UIImage imageWithCGImage: CGBitmapContextCreateImage(context)];
To calculate the size of the bounding rectangle that the rotated image needs to fit into, use this:
- (CGRect) getBoundingRectAfterRotation: (CGRect) rectangle byAngle: (CGFloat) angleOfRotation {
// Calculate the width and height of the bounding rectangle using basic trig
CGFloat newWidth = rectangle.size.width * fabs(cosf(angleOfRotation)) + rectangle.size.height * fabs(sinf(angleOfRotation));
CGFloat newHeight = rectangle.size.height * fabs(cosf(angleOfRotation)) + rectangle.size.width * fabs(sinf(angleOfRotation));
// Calculate the position of the origin
CGFloat newX = rectangle.origin.x + ((rectangle.size.width - newWidth) / 2);
CGFloat newY = rectangle.origin.y + ((rectangle.size.height - newHeight) / 2);
// Return the rectangle
return CGRectMake(newX, newY, newWidth, newHeight);
}
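Putting the two pieces together might look roughly like this (a sketch only; names such as image and angle are assumptions, and getBoundingRectAfterRotation:byAngle: is the helper above):
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
CGRect boundingRect = [self getBoundingRectAfterRotation:imageRect byAngle:angle];
UIGraphicsBeginImageContextWithOptions(boundingRect.size, NO, image.scale);
CGContextRef context = UIGraphicsGetCurrentContext();
// move the origin to the centre, rotate, then flip so the CGImage is not drawn upside down
CGContextTranslateCTM(context, boundingRect.size.width / 2, boundingRect.size.height / 2);
CGContextRotateCTM(context, angle);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, CGRectMake(-image.size.width / 2, -image.size.height / 2, image.size.width, image.size.height), image.CGImage);
UIImage *rotatedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();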
You can find these techniques in my previous posts and answers here:
Creating a UIImage from a rotated UIImageView
and here:
Saving 2 UIImages
Hope this helps,
Dave
To rotate an image you can use this IBAction; on each button click, the image is rotated by 90 degrees:
-(IBAction)rotateImageClick:(id)sender{
UIImage *image2=[[UIImage alloc]init];
image2 = [self imageRotatedByDegrees:self.roateImageView.image deg:(90)]; //Angle by 90 degree
self.roateImageView.image = image2;
imgData= UIImageJPEGRepresentation(image2,0.9f);
}
To rotate an image you only have to pass the UIImage and the rotation in degrees to the following method:
- (UIImage *)imageRotatedByDegrees:(UIImage*)oldImage deg:(CGFloat)degrees
//------------------------------------------------------------------------
#pragma mark - imageRotatedByDegrees Method
- (UIImage *)imageRotatedByDegrees:(UIImage*)oldImage deg:(CGFloat)degrees{
// calculate the size of the rotated view's containing box for our drawing space
UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0,0,oldImage.size.width, oldImage.size.height)];
CGAffineTransform t = CGAffineTransformMakeRotation(degrees * M_PI / 180);
rotatedViewBox.transform = t;
CGSize rotatedSize = rotatedViewBox.frame.size;
// Create the bitmap context
UIGraphicsBeginImageContext(rotatedSize);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
// Move the origin to the middle of the image so we will rotate and scale around the center.
CGContextTranslateCTM(bitmap, rotatedSize.width/2, rotatedSize.height/2);
// Rotate the image context
CGContextRotateCTM(bitmap, (degrees * M_PI / 180));
// Now, draw the rotated/scaled image into the context
CGContextScaleCTM(bitmap, 1.0, -1.0);
CGContextDrawImage(bitmap, CGRectMake(-oldImage.size.width / 2, -oldImage.size.height / 2, oldImage.size.width, oldImage.size.height), [oldImage CGImage]);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
I think this link will help: Rotate Original Image by clicking button in Objective C
http://adrianmobileapplication.blogspot.com/2015/03/rotate-original-image-by-clicking.html
You have to do something like this:
YourContainer.transform = CGAffineTransformMakeRotation( 270.0/180*M_PI );
I think you can figure out the rest.
I'm using the following code to add rounded corners to my UIImage, but the problem is that the rounded corners show a "white" area instead of being transparent or "clear". What am I doing wrong here?
- (UIImage *)makeRoundCornerImageWithCornerWidth:(int)cornerWidth cornerHeight:(int)cornerHeight {
UIImage * newImage = nil;
if (self != nil) {
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
int w = self.size.width;
int h = self.size.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
CGContextBeginPath(context);
CGRect rect = CGRectMake(0, 0, self.size.width, self.size.height);
[self addRoundedRectToPath:context rect:rect width:cornerWidth height:cornerHeight];
CGContextClosePath(context);
CGContextClip(context);
CGContextDrawImage(context, CGRectMake(0, 0, w, h), self.CGImage);
CGImageRef imageMasked = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
newImage = [[UIImage imageWithCGImage:imageMasked] retain];
CGImageRelease(imageMasked);
[pool release];
}
return [newImage autorelease];
}
- (void)addRoundedRectToPath:(CGContextRef)context rect:(CGRect)rect width:(float)ovalWidth height:(float)ovalHeight {
float fw, fh;
// If the width or height of the corner oval is zero, then it reduces to a right angle,
// so instead of a rounded rectangle we have an ordinary one.
if (ovalWidth == 0 || ovalHeight == 0) {
CGContextAddRect(context, rect);
return;
}
// Save the context's state so that the translate and scale can be undone with a call
// to CGContextRestoreGState.
CGContextSaveGState(context);
// Translate the origin of the context to the lower left corner of the rectangle.
CGContextTranslateCTM (context, CGRectGetMinX(rect), CGRectGetMinY(rect));
//Normalize the scale of the context so that the width and height of the arcs are 1.0
CGContextScaleCTM (context, ovalWidth, ovalHeight);
// Calculate the width and height of the rectangle in the new coordinate system.
fw = CGRectGetWidth (rect) / ovalWidth;
fh = CGRectGetHeight (rect) / ovalHeight;
// CGContextAddArcToPoint adds an arc of a circle to the context's path (creating the rounded
// corners). It also adds a line from the path's last point to the beginning of the arc, making
// the sides of the rectangle.
CGContextMoveToPoint(context, fw, fh/2); // Start at lower right corner
CGContextAddArcToPoint(context, fw, fh, fw/2, fh, 1); // Top right corner
CGContextAddArcToPoint(context, 0, fh, 0, fh/2, 1); // Top left corner
CGContextAddArcToPoint(context, 0, 0, fw/2, 0, 1); // Lower left corner
CGContextAddArcToPoint(context, fw, 0, fw, fh/2, 1); // Back to lower right
// Close the path
CGContextClosePath(context);
// Restore the context's state. This removes the translation and scaling
// but leaves the path, since the path is not part of the graphics state.
CGContextRestoreGState(context);
}
Here's a simpler formulation using UIKit calls:
- (UIImage*) roundCorneredImage: (UIImage*) orig radius:(CGFloat) r {
UIGraphicsBeginImageContextWithOptions(orig.size, NO, 0);
[[UIBezierPath bezierPathWithRoundedRect:(CGRect){CGPointZero, orig.size}
cornerRadius:r] addClip];
[orig drawInRect:(CGRect){CGPointZero, orig.size}];
UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return result;
}
Notice the NO parameter - this makes the image context transparent, so the clipped-out region is transparent.
https://github.com/detroit-labs/AmazeKit
sounds like a job for a library
Right after creating the bitmap context clear it with:
CGContextClearRect (context, CGRectMake(0, 0, w, h));
lukya's comment below your question is what you probably want to do.
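In context, relative to the bitmap-context code from the question, that looks like (a sketch):
CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
// start from fully transparent pixels so the clipped-out corners stay clear instead of white
CGContextClearRect(context, CGRectMake(0, 0, w, h));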
Make sure you import QuartzCore:
#import <QuartzCore/QuartzCore.h>
Then, if you have a UIImageView of your image that you want to have rounded corners, just call (assuming imageView is a property and cornerRadius is the desired corner radius):
self.imageView.layer.cornerRadius = cornerRadius;
self.imageView.clipsToBounds = YES;
Since you already have self.CGImage, you could do this to create a UIImageView:
UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageWithCGImage:self.CGImage]];
Just make sure to release the imageView after you add it as a subview.
profileImageView.layer.cornerRadius = profileImageView.frame.size.height/2;
profileImageView.clipsToBounds = YES;
Possible Duplicate:
Resize UIImage with aspect ratio?
The following piece of code is resizing the image perfectly, but the problem is that it messes up the aspect ratio (resulting in a skewed image). Any pointers?
// Change image resolution (auto-resize to fit)
+ (UIImage *)scaleImage:(UIImage*)image toResolution:(int)resolution {
CGImageRef imgRef = [image CGImage];
CGFloat width = CGImageGetWidth(imgRef);
CGFloat height = CGImageGetHeight(imgRef);
CGRect bounds = CGRectMake(0, 0, width, height);
//if already at the minimum resolution, return the original image, otherwise scale
if (width <= resolution && height <= resolution) {
return image;
} else {
CGFloat ratio = width/height;
if (ratio > 1) {
bounds.size.width = resolution;
bounds.size.height = bounds.size.width / ratio;
} else {
bounds.size.height = resolution;
bounds.size.width = bounds.size.height * ratio;
}
}
UIGraphicsBeginImageContext(bounds.size);
[image drawInRect:CGRectMake(0.0, 0.0, bounds.size.width, bounds.size.height)];
UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return imageCopy;
}
I used this single line of code to create a new UIImage which is scaled. Set the scale and orientation params to achieve what you want. The first line of code just grabs the image.
// grab the original image
UIImage *originalImage = [UIImage imageNamed:@"myImage.png"];
// scaling set to 2.0 makes the image 1/2 the size.
UIImage *scaledImage =
[UIImage imageWithCGImage:[originalImage CGImage]
scale:(originalImage.scale * 2.0)
orientation:(originalImage.imageOrientation)];
That's OK, it's not a big problem. The thing is, you have to find the proportional width and height.
For example, if a 2048.0 x 1360.0 image has to be resized for a 320 x 480 resolution, the resulting image size should be 722.0 x 480.0.
Here is the formula, where w, h are the original dimensions and x, y are the resulting ones:
w/h = x/y
=>
x = (w/h) * y
Substituting w = 2048, h = 1360, y = 480 gives x = 722.0 (here width > height; if height > width, take x to be 320 and calculate y instead).
You can also check the numbers on this web page: ARC
Confused? Alright, here is a category on UIImage which will do the whole thing for you.
@interface UIImage (UIImageFunctions)
- (UIImage *) scaleToSize: (CGSize)size;
- (UIImage *) scaleProportionalToSize: (CGSize)size;
@end
@implementation UIImage (UIImageFunctions)
- (UIImage *) scaleToSize: (CGSize)size
{
// Scalling selected image to targeted size
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, size.width, size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
CGContextClearRect(context, CGRectMake(0, 0, size.width, size.height));
if(self.imageOrientation == UIImageOrientationRight)
{
CGContextRotateCTM(context, -M_PI_2);
CGContextTranslateCTM(context, -size.height, 0.0f);
CGContextDrawImage(context, CGRectMake(0, 0, size.height, size.width), self.CGImage);
}
else
CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), self.CGImage);
CGImageRef scaledImage=CGBitmapContextCreateImage(context);
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
UIImage *image = [UIImage imageWithCGImage: scaledImage];
CGImageRelease(scaledImage);
return image;
}
- (UIImage *) scaleProportionalToSize: (CGSize)size1
{
if(self.size.width>self.size.height)
{
NSLog(#"LandScape");
size1=CGSizeMake((self.size.width/self.size.height)*size1.height,size1.height);
}
else
{
NSLog(#"Potrait");
size1=CGSizeMake(size1.width,(self.size.height/self.size.width)*size1.width);
}
return [self scaleToSize:size1];
}
@end
The following is the appropriate call, if img is the UIImage instance:
img=[img scaleProportionalToSize:CGSizeMake(320, 480)];
This fixes the math so the image scales to fit within both the target width and the target height, rather than just one of them depending on the original's dimensions:
- (UIImage *) scaleProportionalToSize: (CGSize)size
{
float widthRatio = size.width/self.size.width;
float heightRatio = size.height/self.size.height;
if(widthRatio > heightRatio)
{
size=CGSizeMake(self.size.width*heightRatio,self.size.height*heightRatio);
} else {
size=CGSizeMake(self.size.width*widthRatio,self.size.height*widthRatio);
}
return [self scaleToSize:size];
}
This change worked for me:
// The size returned by CGImageGetWidth(imgRef) & CGImageGetHeight(imgRef) is incorrect as it doesn't respect the image orientation!
// CGImageRef imgRef = [image CGImage];
// CGFloat width = CGImageGetWidth(imgRef);
// CGFloat height = CGImageGetHeight(imgRef);
//
// This returns the actual width and height of the photo (and hence solves the problem
CGFloat width = image.size.width;
CGFloat height = image.size.height;
CGRect bounds = CGRectMake(0, 0, width, height);
Try making the bounds' size integral:
#include <math.h>
....
if (ratio > 1) {
bounds.size.width = resolution;
bounds.size.height = round(bounds.size.width / ratio);
} else {
bounds.size.height = resolution;
bounds.size.width = round(bounds.size.height * ratio);
}
How can I change a UIImage's color programmatically? When I pass in a UIImage, its color needs to change. Changing the RGB values through bitmap handling did not work for me. Any help, please?
If you only need it to look different, just use imageView.tintColor (iOS 7+). The catch is, setting tintColor doesn't do anything by default:
To make it work, use imageWithRenderingMode:
var image = UIImage(named: "stackoverflow")!
image = image.imageWithRenderingMode(.AlwaysTemplate)
let imageView = ...
imageView.tintColor = UIColor(red: 0.35, green: 0.85, blue: 0.91, alpha: 1)
imageView.image = image
And now it will work:
Link to documentation.
Performance
Setting the image after configuring the UIImageView avoids repeating expensive operations:
// Good usage
let imageView = ...
imageView.tintColor = yourTintColor
var image = UIImage(named: "stackoverflow")!
image = image.imageWithRenderingMode(.AlwaysTemplate)
imageView.image = image // Expensive
// Bad usage
var image = UIImage(named: "stackoverflow")!
image = image.imageWithRenderingMode(.AlwaysTemplate)
let imageView = ...
imageView.image = image // Expensive
imageView.frame = ... // Expensive
imageView.tintColor = yourTint // Expensive
Getting & setting the image asynchronously reduces scrolling and animation lag (especially when tinting an image inside of a UICollectionViewCell or UITableViewCell):
let imageView = cell.yourImageView
imageView.image = nil // Clear out old image
imageView.tintColor = UIColor(red: 0.35, green: 0.85, blue: 0.91, alpha: 1)
// Setting the image asynchronously reduces stuttering
// while scrolling. Remember, the image should be set as
// late as possible to avoid repeating expensive operations
// unnecessarily.
dispatch_async(dispatch_get_main_queue(), { () -> Void in
var image = UIImage(named: "stackoverflow")!
image = image.imageWithRenderingMode(.AlwaysTemplate)
imageView.image = image
})
One way to accomplish this is to desaturate your image, and add a tint on top of that image with the color you desire.
Desaturate
-(UIImage *) getImageWithUnsaturatedPixelsOfImage:(UIImage *)image {
const int RED = 1, GREEN = 2, BLUE = 3;
CGRect imageRect = CGRectMake(0, 0, image.size.width*2, image.size.height*2);
int width = imageRect.size.width, height = imageRect.size.height;
uint32_t * pixels = (uint32_t *) malloc(width*height*sizeof(uint32_t));
memset(pixels, 0, width * height * sizeof(uint32_t));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), [image CGImage]);
for(int y = 0; y < height; y++) {
for(int x = 0; x < width; x++) {
uint8_t * rgbaPixel = (uint8_t *) &pixels[y*width+x];
uint32_t gray = (0.3*rgbaPixel[RED]+0.59*rgbaPixel[GREEN]+0.11*rgbaPixel[BLUE]);
rgbaPixel[RED] = gray;
rgbaPixel[GREEN] = gray;
rgbaPixel[BLUE] = gray;
}
}
CGImageRef newImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
free(pixels);
UIImage * resultUIImage = [UIImage imageWithCGImage:newImage scale:2 orientation:0];
CGImageRelease(newImage);
return resultUIImage;
}
Overlay With Color
-(UIImage *) getImageWithTintedColor:(UIImage *)image withTint:(UIColor *)color withIntensity:(float)alpha {
CGSize size = image.size;
UIGraphicsBeginImageContextWithOptions(size, FALSE, 2);
CGContextRef context = UIGraphicsGetCurrentContext();
[image drawAtPoint:CGPointZero blendMode:kCGBlendModeNormal alpha:1.0];
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextSetBlendMode(context, kCGBlendModeOverlay);
CGContextSetAlpha(context, alpha);
CGContextFillRect(UIGraphicsGetCurrentContext(), CGRectMake(CGPointZero.x, CGPointZero.y, image.size.width, image.size.height));
UIImage * tintedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return tintedImage;
}
How-To
//For a UIImageView
yourImageView.image = [self getImageWithUnsaturatedPixelsOfImage:yourImageView.image];
yourImageView.image = [atom getImageWithTintedColor:yourImageView.image withTint:[UIColor redColor] withIntensity:0.7];
//For a UIImage
yourImage = [self getImageWithUnsaturatedPixelsOfImage:yourImage];
yourImage = [atom getImageWithTintedColor:yourImage withTint:[UIColor redColor] withIntensity:0.7];
You can change the color of the tint to whatever you desire.
There's a great post about this here:
http://coffeeshopped.com/2010/09/iphone-how-to-dynamically-color-a-uiimage
The one caveat that I have with the current code is that using it on retina images will result in a loss of the higher 'resolution' for these images. I am currently looking for a solution for this...
Check out my post (mostly just remixing code).
Edit: This code basically creates a new CGContext, draws a layer on it with the new color, and returns a new UIImage from that. I haven't gone in depth on this code in a while, but it seems to just draw a UIImage with the same shape as the original, so that's a limit (loses any detail in the image).
If you need high performance, I strongly recommend you to use GPUImage.
You may download it at https://github.com/BradLarson/GPUImage
The RGB data you are operating on is just a copy. After you finish making changes, you need to turn that data back into an image.
I first make a new bitmap:
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
ctx = CGBitmapContextCreate( malloc(dataSize), width, height,
8, // CGImageGetBitsPerComponent(cgImage),
bytesPerRow, //CGImageGetBytesPerRow(cgImage),
space,
//kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big );
kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);
//kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
CGColorSpaceRelease( space );
// now draw the image into the context
CGRect rect = CGRectMake( 0, 0, CGImageGetWidth(cgImage), CGImageGetHeight(cgImage) );
CGContextDrawImage( ctx, rect, cgImage );
And get the pixels:
pixels = CGBitmapContextGetData( ctx );
Assuming that your pixel data came from pixels = CGBitmapContextGetData( ctx ); then take that context and build a new image from it:
CGImageRef newImg = CGBitmapContextCreateImage(ctx);
[[UIImage imageWithCGImage:newImg] drawInRect:rect];
CGImageRelease(newImg);
I think you can create another context, set that context's fill color to the RGB color you want, then draw your UIImage into that context and use the result instead of your original picture. This is just a concept: you are creating an offscreen buffer with a colored image. I didn't try this in Cocoa, only in Carbon, but I suppose it will work in the same way.
Hmmm -- isn't the order of the bytes supposed to be RGBA? You are setting them as ARGB...
Try this:
- (UIImage *)imageWithOverlayColor:(UIColor *)color
{
CGRect rect = CGRectMake(0.0f, 0.0f, self.size.width, self.size.height);
if (UIGraphicsBeginImageContextWithOptions) {
CGFloat imageScale = 1.0f;
if ([self respondsToSelector:@selector(scale)]) // The scale property is new with iOS 4.
imageScale = self.scale;
UIGraphicsBeginImageContextWithOptions(self.size, NO, imageScale);
}
else {
UIGraphicsBeginImageContext(self.size);
}
[self drawInRect:rect];
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(context, kCGBlendModeSourceIn);
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillRect(context, rect);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
The great post mentioned by user576924 worked great for me:
iPhone: How to Dynamically Color a UIImage
and in swift:
extension UIImage {
func imageWithColor( color : UIColor ) -> UIImage {
// begin a new image context, to draw our colored image onto
UIGraphicsBeginImageContext(self.size)
// get a reference to that context we created
let context = UIGraphicsGetCurrentContext();
// set the fill color
color.setFill()
// translate/flip the graphics context (for transforming from CG* coords to UI* coords
CGContextTranslateCTM(context, 0, self.size.height)
CGContextScaleCTM(context, 1.0, -1.0)
// set the blend mode to color burn, and the original image
CGContextSetBlendMode(context, kCGBlendModeColor)
let rect = CGRect(origin: CGPointZero, size: self.size)
CGContextDrawImage(context, rect, self.CGImage)
// set a mask that matches the shape of the image, then draw (color burn) a colored rectangle
CGContextClipToMask(context, rect, self.CGImage)
CGContextAddRect(context, rect)
CGContextDrawPath(context,kCGPathFill)
// generate a new UIImage from the graphics context we drew onto
let coloredImg = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
//return the color-burned image
return coloredImg
}
}
Note that I also changed "kCGBlendModeColorBurn" to "kCGBlendModeColor" as mentioned in the post's comments section.
For me this worked:
extension UIImage {
class func image(image: UIImage, withColor color: UIColor) -> UIImage {
UIGraphicsBeginImageContextWithOptions(CGSizeMake(image.size.width, image.size.height), false, image.scale)
let context = UIGraphicsGetCurrentContext()
color.set()
CGContextTranslateCTM(context, 0, image.size.height)
CGContextScaleCTM(context, 1, -1)
let rect = CGRectMake(0, 0, image.size.width, image.size.height)
CGContextClipToMask(context, rect, image.CGImage)
CGContextFillRect(context, rect)
let coloredImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return coloredImage
}
}