Pixelated UIImageView on iPhone

I've been having issues rendering images with the UIImageView class. The pixelation seems to occur mostly on the edges of the image I am trying to show.
I have tried changing the property 'Render with edge antialiasing' to no avail.
The image files contain images that are larger than what will appear on the screen.
UIImageView seems to be royally messing with the quality of the image when it scales it down for display. I tried to post images here, but Stack Overflow is denying me that privilege, so here's a link to what's going on.
http://i.imgur.com/QpUOTOF.png
The sun in this image is the problem I'm speaking of. Any ideas?

On-the-fly image resizing is quick and of low quality. For bundled images, it is worth the extra bundle space to include downsized versions. For downloaded images, you can achieve better results by resizing with Core Graphics into a new UIImage before you set the image property.
CGSize newSize = CGSizeMake(newWidth, newHeight);
UIGraphicsBeginImageContextWithOptions(newSize, // context size
                                       NO,      // opaque?
                                       0);      // image scale; 0 means "device screen scale"
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
[bigImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Use the following method to get an image scaled to a specific width and height:
+ (UIImage *)resizeImage:(UIImage *)image withWidth:(int)width withHeight:(int)height
{
    CGSize newSize = CGSizeMake(width, height);
    float widthRatio = newSize.width / image.size.width;
    float heightRatio = newSize.height / image.size.height;

    // Scale by the smaller of the two ratios so the result fits inside
    // the requested size while keeping the original aspect ratio.
    if (widthRatio > heightRatio) {
        newSize = CGSizeMake(image.size.width * heightRatio, image.size.height * heightRatio);
    } else {
        newSize = CGSizeMake(image.size.width * widthRatio, image.size.height * widthRatio);
    }

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
This method returns a new image that fits within the size you specified while preserving the aspect ratio.

How big is your image and what is the size of the imageView? Don't rely on UIImageView to scale it down for you. You probably need to resize it manually. This would also be a bit more memory efficient.
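A minimal sketch of that manual resize before assignment (imageView and bigImage are placeholder names), along the lines of the Core Graphics approach in the answer above:
// Draw the large image into a context that matches the image view's bounds,
// so UIImageView never has to scale a huge bitmap itself.
CGSize targetSize = imageView.bounds.size;
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0); // 0 = device screen scale
[bigImage drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
imageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();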

I use categories like these:
>>>github link <<<
to do image resizing.
These also give you some other nice functions, e.g. for rounded corners.
Also keep in mind that you need a transparent border at the edge of an image if you want to rotate it without aliasing.

Related

Image quality problems when UILabel is rescaled and rasterized

I have some problems rescaling the content of a UILabel when it is rendered into an image. Since the rendered image has to be bigger than the original UILabel, I computed the scale factor imageScale needed to rescale the original image and saved it in a CGSize variable. Below I explain the two approaches I tried, both of which fail.
Code used for rendering the image
The following code is used for rendering the extracted image on the canvas.
[labelImage drawInRect:CGRectMake(xCoordinate/imageScale.width,
                                  yCoordinate/imageScale.height,
                                  newSize.width,
                                  newSize.height)
             blendMode:kCGBlendModeNormal
                 alpha:0.8];
where the variable newSize is computed as follows:
newSize.width = originalWidth/imageScale.width;
newSize.height = originalHeight/imageScale.height;
Approach 1
I extracted the label using the following code:
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[[label layer] renderInContext: UIGraphicsGetCurrentContext()];
UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
where label is the UILabel variable and newSize is the size that the rescaled image should have (see above for details).
However, I obtain the following image, which is obviously wrong, since the content is rendered very small and is not centered:
Approach 2
I extracted the label using the following code:
UIGraphicsBeginImageContextWithOptions([label bounds].size, NO, 0.0);
[[label layer] renderInContext: UIGraphicsGetCurrentContext()];
UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
However, since I am using the original image size in order to extract the image, the effect I obtain is the following:
As you can see, the text in the balloon is not rendered at high resolution, so it does not display properly.
The question
How can I correct one of the two approaches so that the image is rendered at high resolution?
It seems like you just need to set an appropriate scale for the generated image.
This is the function:
void UIGraphicsBeginImageContextWithOptions(
CGSize size,
BOOL opaque,
CGFloat scale
);
You pass 0.0 for the scale. Try replacing it with [UIScreen mainScreen].scale.
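For example, applying that change to the snapshot code from Approach 2 (a sketch; only the scale argument changes):
UIGraphicsBeginImageContextWithOptions([label bounds].size, NO, [UIScreen mainScreen].scale);
[[label layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();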

High-quality rounded-corner image on iPhone

In my app, I want to show a high-quality image. The image is loaded from the Facebook friend list. When the image is loaded at a small size (50 × 50), its quality is fine, but when I try to get it at a bigger size (280 × 280), the quality of the image is diminished.
For the rounded corners I'm doing this:
self.mImageView.layer.cornerRadius = 10.0;
self.mImageView.layer.borderColor = [UIColor blackColor].CGColor;
self.mImageView.layer.borderWidth = 1.0;
self.mImageView.layer.masksToBounds = YES;
To get the image I'm using the following code:
self.mImageView.image = [self imageWithImage:profileImage scaledToSize:CGSizeMake(280, 280)];
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
UIGraphicsBeginImageContextWithOptions(newSize, YES,0.0);
CGContextRef context = CGContextRetain(UIGraphicsGetCurrentContext());
CGContextTranslateCTM(context, 0.0, newSize.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextSetInterpolationQuality(context, kCGInterpolationLow);
CGContextSetAllowsAntialiasing (context, TRUE);
CGContextSetShouldAntialias(context, TRUE);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, newSize.width, newSize.height),image.CGImage);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
I have checked my code several times but could not figure out how to make the image look right. How can the quality of the image be improved?
Thanks in advance...
…quality of image diminished.
The 'quality' of the image is still present. (Technically, you are introducing a small amount of error by resizing it, but that's not the real problem…)
So, you want to scale a 50x50px image to 280x280px? The information/detail does not exist in the source signal. Ideally, you would download a more appropriately sized image, for the size you want to display at.
If that's not an option, you could reduce pixelation by means of proper resampling and/or interpolation. This would simply smooth out the pixels your program magnifies by 5.6 -- the image would then look like a cross between pixelated and blurred (see CGContextSetAllowsAntialiasing, CGContextSetShouldAntialias, CGContextSetInterpolationQuality and related APIs to accomplish this using quartz).
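A hedged sketch of that resampling step (the smallImage variable and the 280 × 280 target are placeholders), drawing the small image into a larger context with high-quality interpolation:
CGSize targetSize = CGSizeMake(280, 280);
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Smooth the ~5.6x magnification instead of leaving hard pixel blocks
CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
CGContextSetShouldAntialias(ctx, true);
[smallImage drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
UIImage *smoothedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();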

Connection between UIGraphicsGetImageFromCurrentImageContext and drawInRect

I have found the following code to resize an UIImage:
CGSize newSize = CGSizeMake(self.image.size.width*0.25, self.image.size.height*0.25);
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[self.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
self.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
but there are some couple of things I don't understand.
First I'm trying to resize the original image to 25% of the original size - but this method resizes it to 50% of the original size. Why?
What is the connection between drawInRect and UIGraphicsGetImageFromCurrentImageContext? As I see it, UIGraphicsGetImageFromCurrentImageContext overwrites the current image, making the call to drawInRect redundant.
I would be grateful if someone could help me understand what's going on in details.
Thanks in advance.
First, because it's a Retina screen, you should set the scale to 1.0; passing 0.0 uses the screen scale, so the bitmap ends up twice as large:
UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
Once you call UIGraphicsBeginImageContextWithOptions, any drawing you do is rendered into the context you just created.
The drawInRect call in your code paints the image into that context, so it is not redundant; if you remove it, you get an empty image.
Finally, UIGraphicsGetImageFromCurrentImageContext() simply creates a UIImage from whatever you have drawn into the context.
If you don't assign the result back, nothing changes:
self.image = UIGraphicsGetImageFromCurrentImageContext();

A simple way to put a UIImage in a UIButton

I have a UIButton in my iPhone app. I set its size to 100x100.
I have an image that is 400x200 that I wish to display in the button.
The button STILL needs to stay at 100x100... and I want the image to downsize to fit... but
keep the correct aspect ratio.
I thought that's what "Aspect Fit" was used for.
Do I include my image with setImage, or setBackgroundImage?
Do I set AspectFit for the button? or the image? or the background image?
(I need smaller images to INCREASE in size, while larger images should DECREASE in size,
always keeping the button at 100x100.)
I've tried countless combinations of all of the above... and I just can't seem to get
everything to work at the same time:
Don't change the button's size of 100x100.
Don't destroy the image's aspect ratio.
Always increase small images to fit the button.
Always decrease large images to fit the button.
Never clip any of the image edges.
Never require the "put UIButtons over all your UIimages" hack.
Don't require everyone to upgrade to v4.2.1 just to use new framework methods.
I see so many apps, with so many fully-working image-buttons... that I can't believe I can't figure out this very simple, very common thing.
Ugh.
UIButton is broken. That's the short answer. The UIImageViews in its image and backgroundImage properties don't respect UIViewContentMode settings. They're read-only properties, and while the UIImage contents of those UIImageViews can be set through setImage: and setBackgroundImage: methods, the content mode can't be.
The solution is either to provide properly-sized images in your bundle to begin with, or to put a UIImageView down, configure it the way you want it, and then put a clear "custom" UIButton over top of it. That's the hack all those fancy professional apps you've seen have used, I promise. We're all having to do it.
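A rough sketch of that overlay hack (the view names and the tap handler are made up for illustration):
// The image view does the aspect-fit scaling
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
imageView.contentMode = UIViewContentModeScaleAspectFit;
imageView.image = [UIImage imageNamed:@"yourImageName"];
[self.view addSubview:imageView];
// A transparent custom button on top catches the taps
UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
button.frame = imageView.frame;
[button addTarget:self action:@selector(imageTapped:) forControlEvents:UIControlEventTouchUpInside];
[self.view addSubview:button];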
UIImage *img = [UIImage imageNamed:@"yourImageName"];
button.imageView.contentMode = UIViewContentModeScaleAspectFit;
[button setImage:img forState:UIControlStateNormal];
To do this correctly, I would actually programmatically resize and manipulate the image to get the desired aspect ratio. This avoids the need for any view hierarchy hacks, and also reduces any performance hit to a single operation, instead of every redraw.
This (untested) code should help illustrate what I mean:
CGSize imageSize = image.size;
CGFloat currentAspect = imageSize.width / imageSize.height;
// for purposes of illustration
CGFloat targetWidth = 100;
CGFloat targetHeight = 100;
CGFloat targetAspect = targetWidth / targetHeight;
CGFloat newWidth, newHeight;
if (currentAspect > targetAspect) {
    // width will end up at 100, height needs to be smaller
    newWidth = targetWidth;
    newHeight = targetWidth / currentAspect;
} else {
    // height will end up at 100, width needs to be smaller
    newHeight = targetHeight;
    newWidth = targetHeight * currentAspect;
}
size_t bytesPerPixel = 4;
// although the image will be resized to { newWidth, newHeight }, it needs
// to be padded with empty space to provide the aspect fit behavior
//
// use calloc() to clear the data as it's allocated
void *imageData = calloc(targetWidth * targetHeight, bytesPerPixel);
if (!imageData) {
    // error out
    return;
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
if (!colorSpace) {
    // error out; free the buffer allocated above to avoid a leak
    free(imageData);
    return;
}
CGContextRef context = CGBitmapContextCreate(
    imageData,
    targetWidth,
    targetHeight,
    8,                           // bits per component
    targetWidth * bytesPerPixel, // bytes per row
    colorSpace,
    kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst
);
CGColorSpaceRelease(colorSpace);
// now we have a context to draw the original image into
// in doing so, we want to center it, so prepare the geometry
CGRect drawRect = CGRectMake(
    floor((targetWidth - newWidth) / 2),
    floor((targetHeight - newHeight) / 2),
    round(newWidth),
    round(newHeight)
);
CGContextDrawImage(context, drawRect, image.CGImage);
// now that the bitmap context contains the aspect fit image with transparency
// letterboxing, we want to pull out a new image from it
CGImageRef newImage = CGBitmapContextCreateImage(context);
// destroy the temporary context
CGContextRelease(context);
free(imageData);
// and, finally, create a new UIImage
UIImage *newUIImage = [UIImage imageWithCGImage:newImage];
CGImageRelease(newImage);
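The resulting newUIImage can then be assigned to the button as usual (a usage sketch; the button variable is hypothetical):
[button setImage:newUIImage forState:UIControlStateNormal];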
Let me know if any part of that is unclear.
I think what Dan is trying to say (but without ever saying it) is to do this:
Use a "temp image" to do the resizing for you.
The temp-image needs to be set to ASPECT FIT and HIDDEN.
Make sure your button is set to your desired size, and NOT set to ASPECT FIT.
// Make a frame the same size as your button
CGRect aFrame = CGRectMake(0, 0, myButton.frame.size.width, myButton.frame.size.height);
// Set your temp-image to the size of your button
imgTemp.frame = aFrame;
// Put your image into the temp-image
imgTemp.image = anImage;
// Copy that resized temp-image to your button
[myButton setBackgroundImage:imgTemp.image forState:UIControlStateNormal];
Since none of my attempts have worked....
Maybe I should be asking this instead. When using a UIButton:
When DO I use setImage instead of setBackgroundImage? (Why are there both?)
When DO I use "Aspect Fit" instead of "Center"? (Why do both seem to stretch my images when I expect them to "keep aspect ratio" and "don't resize anything", respective.)
And the big question: Why is such a common thing... such a huge mess?
It would all be solved if I could find a work-around method like: Just use UIImage instead and detect TAPS. (But that seems to be even a LARGER nightmare of code.)
Apple, if you've tried to make my job easier... you have instead made it 400 times more confusing.
Place an image view over the button and set your image on the image view, not on the button.
All the best.
I would resize the image to 100x100 while maintaining the aspect ratio of the content, then set the backgroundImage property of the UIButton to that image.
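For example (a sketch that reuses the aspect-preserving resizeImage:withWidth:withHeight: helper shown in an earlier answer; the ImageUtils class name and variables are placeholders):
UIImage *scaled = [ImageUtils resizeImage:originalImage withWidth:100 withHeight:100];
[myButton setBackgroundImage:scaled forState:UIControlStateNormal];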
I faced the same issue a few days back and resolved it. Please try this:
[_profilePicBtn setImage:profilePic forState:UIControlStateNormal];
_profilePicBtn.imageView.contentMode = UIViewContentModeScaleAspectFit;

Any quick and dirty anti-aliasing techniques for a rotated UIImageView?

I've got a UIImageView (full frame and rectangular) that i'm rotating with a CGAffineTransform. The UIImage of the UIImageView fills the entire frame. When the image is rotated and drawn the edges appear noticeably jagged. Is there anything I can do to make it look better? It's clearly not being anti-aliased with the background.
The edges of CoreAnimation layers aren't antialiased by default on iOS. However, there is a key that you can set in Info.plist that enables antialiasing of the edges: UIViewEdgeAntialiasing.
https://developer.apple.com/library/content/documentation/General/Reference/InfoPlistKeyReference/Articles/iPhoneOSKeys.html
If you don't want the performance overhead of enabling this option, a work-around is to add a 1px transparent border around the edge of the image. This means that the 'edges' of the image are no longer on the edge, so don't need special treatment!
New API – iOS 6/7
This also works on iOS 6, as noted by @Chris, but it wasn't made public until iOS 7.
Since iOS 7, CALayer has a new property allowsEdgeAntialiasing which does exactly what you want in this case, without incurring the overhead of enabling it for all views in your application! This is a property of CALayer, so to enable this for a UIView you use myView.layer.allowsEdgeAntialiasing = YES.
Just add a 1px transparent border to your image:
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
UIGraphicsBeginImageContextWithOptions(imageRect.size, NO, 0.0);
[image drawInRect:CGRectMake(1,1,image.size.width-2,image.size.height-2)];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Remember to set the appropriate anti-alias options:
CGContextSetAllowsAntialiasing(theContext, true);
CGContextSetShouldAntialias(theContext, true);
just add "Renders with edge antialiasing" with YES in plist and it will work.
I would totally recommend the following library.
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
It contains lots of useful extensions to UIImage that solve this problem and also include code for generating thumbnails etc.
Enjoy!
The best way I've found to have smooth edges and a sharp image is to do this:
CGRect imageRect = CGRectMake(0, 0, self.photo.image.size.width, self.photo.image.size.height);
UIGraphicsBeginImageContextWithOptions(imageRect.size, NO, 0.0);
[self.photo.image drawInRect:CGRectMake(1, 1, self.photo.image.size.width - 2, self.photo.image.size.height - 2)];
self.photo.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Adding the Info.plist key like some people describe has a big hit on performance and if you use that then you're basically applying it to everything instead of just the one place you need it.
Also, don't just use UIGraphicsBeginImageContext(imageRect.size); otherwise the layer will be blurry. You have to use UIGraphicsBeginImageContextWithOptions like I've shown.
I found this solution from here, and it's perfect:
+ (UIImage *)renderImageFromView:(UIView *)view withRect:(CGRect)frame transparentInsets:(UIEdgeInsets)insets {
CGSize imageSizeWithBorder = CGSizeMake(frame.size.width + insets.left + insets.right, frame.size.height + insets.top + insets.bottom);
// Create a new context of the desired size to render the image
UIGraphicsBeginImageContextWithOptions(imageSizeWithBorder, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Clip the context to the portion of the view we will draw
CGContextClipToRect(context, (CGRect){{insets.left, insets.top}, frame.size});
// Translate it, to the desired position
CGContextTranslateCTM(context, -frame.origin.x + insets.left, -frame.origin.y + insets.top);
// Render the view as image
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
// Fetch the image
UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
// Cleanup
UIGraphicsEndImageContext();
return renderedImage;
}
usage:
UIImage *image = [UIImage renderImageFromView:view withRect:view.bounds transparentInsets:UIEdgeInsetsZero];