High Quality Rounded-Corner Image on iPhone

In my app I want a high quality image. The image is loaded from the Facebook friend list. When the image is loaded at a small size (50 × 50), its quality is fine. But when I try to get the image at a bigger size (280 × 280), the quality diminishes.
For the rounded corners I'm doing this:
self.mImageView.layer.cornerRadius = 10.0;
self.mImageView.layer.borderColor = [UIColor blackColor].CGColor;
self.mImageView.layer.borderWidth = 1.0;
self.mImageView.layer.masksToBounds = YES;
To get the image I'm using the following code:
self.mImageView.image = [self imageWithImage:profileImage scaledToSize:CGSizeMake(280, 280)];
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
UIGraphicsBeginImageContextWithOptions(newSize, YES,0.0);
CGContextRef context = CGContextRetain(UIGraphicsGetCurrentContext());
CGContextTranslateCTM(context, 0.0, newSize.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextSetInterpolationQuality(context, kCGInterpolationLow);
CGContextSetAllowsAntialiasing (context, TRUE);
CGContextSetShouldAntialias(context, TRUE);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, newSize.width, newSize.height),image.CGImage);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
I have checked my code several times, but I could not figure out how to make the image look right. So, how can the quality of the image be improved?
Thanks in advance...

…the quality diminishes.
The 'quality' of the image is still present. (Technically, you are introducing a small amount of error by resizing it, but that's not the real problem…)
So, you want to scale a 50x50px image to 280x280px? The information/detail simply does not exist in the source signal. Ideally, you would download an image sized more appropriately for the size at which you want to display it.
If that's not an option, you could reduce pixelation by means of proper resampling and/or interpolation. This would simply smooth out the pixels your program magnifies by 5.6 -- the image would then look like a cross between pixelated and blurred (see CGContextSetAllowsAntialiasing, CGContextSetShouldAntialias, CGContextSetInterpolationQuality and related APIs to accomplish this using Quartz).
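For example, a minimal sketch of such a resample in Quartz might look like this (the method name smoothlyScaledImage:toSize: and the switch to kCGInterpolationHigh are my own choices, not from the original answer):
- (UIImage *)smoothlyScaledImage:(UIImage *)sourceImage toSize:(CGSize)targetSize {
    // Build a context at the device scale (0.0) so the result stays sharp on Retina screens
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Ask Quartz to smooth the magnified pixels instead of replicating them
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGContextSetShouldAntialias(context, true);
    // drawInRect: handles the flipped coordinate system for you
    [sourceImage drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}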

Related

Image quality problems when UILabel is rescaled and rasterized

I have some problems rescaling the content of a UILabel object when it is stored as an image. Since the rendered image has to be bigger than the original UILabel, I have computed the scale imageScale needed to rescale the original image and saved it in a CGSize variable. Below I explain the two approaches I adopted (both failing).
Code used for rendering the image
The following code is used for rendering the extracted image on the canvas.
[labelImage drawInRect:CGRectMake(xCoordinate/imageScale.width,
yCoordinate/imageScale.height,
newSize.width,
newSize.height)
blendMode:kCGBlendModeNormal alpha:0.8];
where the variable newSize is computed as follows:
newSize.width = originalWidth/imageScale.width;
newSize.height = originalHeight/imageScale.height;
Approach 1
I extracted the label using the following code:
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[[label layer] renderInContext: UIGraphicsGetCurrentContext()];
UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
where label is the UILabel variable and newSize is the size that the rescaled image should have (see above for details).
However, the image I obtain is obviously wrong, since the content is very small and not centered.
Approach 2
I extracted the label using the following code:
UIGraphicsBeginImageContextWithOptions([label bounds].size, NO, 0.0);
[[label layer] renderInContext: UIGraphicsGetCurrentContext()];
UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
However, since I am using the original label size to extract the image, the text in the balloon does not have a high resolution and is not displayed properly.
The question
How can I correct one of the two approaches so that the image is rendered at high resolution?
It seems you just need to set an appropriate scale for the generated image.
This is the function:
void UIGraphicsBeginImageContextWithOptions(
CGSize size,
BOOL opaque,
CGFloat scale
);
You set the scale to 0.0. Try replacing it with [UIScreen mainScreen].scale.
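For example, a minimal sketch of Approach 2 with an explicit scale (reusing the label and snapshotImage names from the question):
// Render at the screen's scale so the rasterized label stays sharp on Retina displays
UIGraphicsBeginImageContextWithOptions([label bounds].size, NO, [UIScreen mainScreen].scale);
[[label layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();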

Pixelated iPhone UIImageView

I've been having issues rendering images with the UIImageView class. The pixelation seems to occur mostly on the edges of the image I am trying to show.
I have tried changing the property 'Render with edge antialiasing' to no avail.
The image files contain images that are larger than what will appear on the screen.
The scaling seems to be royally messing with the quality of the image before displaying it. I tried to post images here, but StackOverflow is denying me that privilege, so here's a link to what's going on.
http://i.imgur.com/QpUOTOF.png
The sun in this image is the problem I'm speaking of. Any ideas?
On-the-fly image resizing is quick and of low quality. For bundled images, it is worth the extra bundle space to include downsized versions. For downloaded images, you can achieve better results by resizing with Core Graphics into a new UIImage before you set the image property.
CGSize newSize = CGSizeMake(newWidth, newHeight);
UIGraphicsBeginImageContextWithOptions(newSize, // context size
NO, // opaque?
0); // image scale. 0 means "device screen scale"
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
[bigImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Use the following method to get an image with a specific width and height:
+ (UIImage*)resizeImage:(UIImage*)image withWidth:(int)width withHeight:(int)height
{
CGSize newSize = CGSizeMake(width, height);
float widthRatio = newSize.width/image.size.width;
float heightRatio = newSize.height/image.size.height;
if(widthRatio > heightRatio)
{
newSize=CGSizeMake(image.size.width*heightRatio,image.size.height*heightRatio);
}
else
{
newSize=CGSizeMake(image.size.width*widthRatio,image.size.height*widthRatio);
}
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
This method returns a new image that fits within the size you specified, preserving the aspect ratio.
How big is your image and what is the size of the imageView? Don't rely on UIImageView to scale it down for you. You probably need to resize it manually. This would also be a bit more memory efficient.
I use categories like these:
>>>github link <<<
to do image resizing.
This also gives you some other nice functions, for rounded corners etc.
Also keep in mind that you need a transparent border at the edge of an image if you want to rotate it, to avoid aliasing.

<Error>: CGBitmapContextCreate: unsupported parameter combination vs. lower resolution image

- (UIImage *)roundedCornerImage:(NSInteger)cornerSize borderSize:(NSInteger)borderSize {
// If the image does not have an alpha layer, add one
UIImage *image = [self imageWithAlpha];
// Build a context that's the same dimensions as the new size
CGBitmapInfo info = CGImageGetBitmapInfo(image.CGImage);
CGContextRef context = CGBitmapContextCreate(NULL,
image.size.width,
image.size.height,
CGImageGetBitsPerComponent(image.CGImage),
0,
CGImageGetColorSpace(image.CGImage),
CGImageGetBitmapInfo(image.CGImage));
// Create a clipping path with rounded corners
CGContextBeginPath(context);
[self addRoundedRectToPath:CGRectMake(borderSize, borderSize, image.size.width - borderSize * 2, image.size.height - borderSize * 2)
context:context
ovalWidth:cornerSize
ovalHeight:cornerSize];
CGContextClosePath(context);
CGContextClip(context);
// Draw the image to the context; the clipping path will make anything outside the rounded rect transparent
CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
// Create a CGImage from the context
CGImageRef clippedImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
// Create a UIImage from the CGImage
UIImage *roundedImage = [UIImage imageWithCGImage:clippedImage];
CGImageRelease(clippedImage);
return roundedImage;
}
I have the method above and am adding rounded corners to Twitter profile images. For most of the images this works awesome. There are a few that cause the following error to occur:
<Error>: CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component color space; kCGImageAlphaLast; 96 bytes/row.
I have done some debugging, and it looks like the only difference between the images causing errors and the ones that are not is the parameter CGImageGetBitmapInfo(image.CGImage) used when creating the context. This throws the error and results in the context being NULL. I tried setting the last parameter to kCGImageAlphaPremultipliedLast, to no avail either: the image is drawn this time, but with much lower quality. Is there a way to get a higher quality image on par with the rest of them? The images come from Twitter, so I am not sure whether they serve different versions you can pull.
I have seen the other questions regarding this error too. None of them have solved this issue. I saw this post, but the errored images are completely blurry after that. Casting the width and height to NSInteger also didn't work. Below is a screenshot of the two profile images and their quality as well; the first one is causing the error.
Does anyone have any idea what the issue is here?
Thanks a ton. This has been killing me.
iOS does not support kCGImageAlphaLast. You need to use kCGImageAlphaPremultipliedLast.
You also need to handle the scale of your initial image. Your current code doesn't, so it downsamples the image if its scale is 2.0.
You can write the entire function more simply by using UIKit functions and classes. UIKit will take care of the scale for you; you just have to pass in the original image's scale when you ask it to create the graphics context.
- (UIImage *)roundedCornerImage:(NSInteger)cornerSize borderSize:(NSInteger)borderSize {
// If the image does not have an alpha layer, add one
UIImage *image = [self imageWithAlpha];
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale); {
CGRect imageRect = (CGRect){ CGPointZero, image.size };
CGRect borderRect = CGRectInset(imageRect, borderSize, borderSize);
UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:borderRect
byRoundingCorners:UIRectCornerAllCorners
cornerRadii:CGSizeMake(cornerSize, cornerSize)];
[path addClip];
[image drawAtPoint:CGPointZero];
}
UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return roundedImage;
}
If your imageWithAlpha method itself creates a UIImage from another UIImage, it needs to propagate the scale also.
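For example, a sketch of that last step, assuming the category has already produced a CGImage with an alpha channel in a variable called alphaImageRef (a hypothetical name):
// Keep the original image's scale and orientation when rebuilding the UIImage
UIImage *result = [UIImage imageWithCGImage:alphaImageRef
                                      scale:self.scale
                                orientation:self.imageOrientation];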

Saving an image from a page of a CGPDFDocument does not fit perfectly in a UIImageView

I am having some trouble saving a PDF page as a UIImage... the PDF is loaded from the internet and has one page (the original PDF has been split into several)... but the converted image is sometimes cropped... sometimes it is small and leaves white space when it is put into the UIImageView...
Here is the code:
-(UIImage *)imageFromPdf:(NSString *) pdfUrl{
NSURL *pdfUrlStr=[NSURL URLWithString:pdfUrl];
CFURLRef docURLRef=(CFURLRef)pdfUrlStr;
UIGraphicsBeginImageContext(CGSizeMake(768, 1024)); //840, 960
NSLog(@"save begin");
CGContextRef context = UIGraphicsGetCurrentContext();
//CFURLRef pdfURL = CFBundleCopyResourceURL(CFBundleGetMainBundle(), CFSTR("/file.pdf"), NULL, NULL);
CGPDFDocumentRef pdf = CGPDFDocumentCreateWithURL(docURLRef);
NSLog(@"save complete");
CGContextTranslateCTM(context, 0.0, 900);//320
CGContextScaleCTM(context, 1.0, -1.0);
CGPDFPageRef page = CGPDFDocumentGetPage(pdf, 1);
CGContextSaveGState(context);
CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, CGRectMake(0, 0, 768, 1024), 0, true);
CGContextConcatCTM(context, pdfTransform);
CGContextDrawPDFPage(context, page);
CGContextRestoreGState(context);
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return resultingImage;
}
By the way, I have prepared my UIImageView like this:
self.PDFImageVIew.contentMode = UIViewContentModeScaleAspectFit;
self.PDFImageVIew.clipsToBounds = YES;
I just want the image to fit perfectly in the UIImageView; maybe the scaling is reducing the quality of the image. Do you have any suggestions on how I can also keep the quality? Please help and give me some suggestions.
Thanks
CGContextTranslateCTM(context, 0.0, 900);//320
Generally, the last parameter of the translate operation should be the height of the context, or the height of the rectangle you are creating the image for. So I think it should be 1024 (you created the image context with a height of 1024, so I am assuming the status bar is not present). This may eliminate the cropping issue. One more thing I noticed in your code: you should save the graphics state before performing any operation on the context. You are saving it, but only after a few operations.
The code above tries to make the page fit by height, so if the height of the actual page is bigger than your context height it will be scaled down, and you will obviously see white space around the page.
One more thing: if your original PDF page has white space in it, there is no way to eliminate that, as far as I know.
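In other words, a sketch of the suggested change (only the translation value and the placement of CGContextSaveGState differ from the question's code):
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);               // save the state before touching the CTM
CGContextTranslateCTM(context, 0.0, 1024);  // use the context height instead of 900
CGContextScaleCTM(context, 1.0, -1.0);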

Any quick and dirty anti-aliasing techniques for a rotated UIImageView?

I've got a UIImageView (full frame and rectangular) that I'm rotating with a CGAffineTransform. The UIImage of the UIImageView fills the entire frame. When the image is rotated and drawn, the edges appear noticeably jagged. Is there anything I can do to make it look better? It's clearly not being anti-aliased against the background.
The edges of Core Animation layers aren't antialiased by default on iOS. However, there is a key that you can set in Info.plist that enables antialiasing of the edges: UIViewEdgeAntialiasing.
https://developer.apple.com/library/content/documentation/General/Reference/InfoPlistKeyReference/Articles/iPhoneOSKeys.html
If you don't want the performance overhead of enabling this option, a work-around is to add a 1px transparent border around the edge of the image. This means that the 'edges' of the image are no longer on the edge, so don't need special treatment!
New API – iOS 6/7
Also works for iOS 6, as noted by @Chris, but wasn't made public until iOS 7.
Since iOS 7, CALayer has a new property allowsEdgeAntialiasing which does exactly what you want in this case, without incurring the overhead of enabling it for all views in your application! This is a property of CALayer, so to enable this for a UIView you use myView.layer.allowsEdgeAntialiasing = YES.
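For example (a minimal sketch; myImageView is just a placeholder for your rotated view):
// Enable edge antialiasing for this one layer only; the respondsToSelector:
// check guards against running on OS versions without the property.
if ([myImageView.layer respondsToSelector:@selector(setAllowsEdgeAntialiasing:)]) {
    myImageView.layer.allowsEdgeAntialiasing = YES;
}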
Just add a 1px transparent border to your image:
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
UIGraphicsBeginImageContextWithOptions(imageRect.size, NO, 0.0);
[image drawInRect:CGRectMake(1,1,image.size.width-2,image.size.height-2)];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Remember to set the appropriate anti-alias options:
CGContextSetAllowsAntialiasing(theContext, true);
CGContextSetShouldAntialias(theContext, true);
Just add "Renders with edge antialiasing" with the value YES in the Info.plist and it will work.
I would totally recommend the following library.
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
It contains lots of useful extensions to UIImage that solve this problem and also includes code for generating thumbnails etc.
Enjoy!
The best way I've found to have smooth edges and a sharp image is to do this:
CGRect imageRect = CGRectMake(0, 0, self.photo.image.size.width, self.photo.image.size.height);
UIGraphicsBeginImageContextWithOptions(imageRect.size, NO, 0.0);
[self.photo.image drawInRect:CGRectMake(1, 1, self.photo.image.size.width - 2, self.photo.image.size.height - 2)];
self.photo.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Adding the Info.plist key like some people describe has a big performance hit, and if you use that you're basically applying it to everything instead of just the one place you need it.
Also, don't just use UIGraphicsBeginImageContext(imageRect.size); otherwise the layer will be blurry. You have to use UIGraphicsBeginImageContextWithOptions as I've shown.
I found this solution from here, and it's perfect:
+ (UIImage *)renderImageFromView:(UIView *)view withRect:(CGRect)frame transparentInsets:(UIEdgeInsets)insets {
CGSize imageSizeWithBorder = CGSizeMake(frame.size.width + insets.left + insets.right, frame.size.height + insets.top + insets.bottom);
// Create a new context of the desired size to render the image
UIGraphicsBeginImageContextWithOptions(imageSizeWithBorder, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Clip the context to the portion of the view we will draw
CGContextClipToRect(context, (CGRect){{insets.left, insets.top}, frame.size});
// Translate it, to the desired position
CGContextTranslateCTM(context, -frame.origin.x + insets.left, -frame.origin.y + insets.top);
// Render the view as image
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
// Fetch the image
UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
// Cleanup
UIGraphicsEndImageContext();
return renderedImage;
}
usage:
UIImage *image = [UIImage renderImageFromView:view withRect:view.bounds transparentInsets:UIEdgeInsetsZero];