Stretching a UIImage while preserving the corners - iPhone

I'm trying to stretch a navigation arrow image while preserving the edges so that the middle stretches and the ends are fixed.
Here is the image that I'm trying to stretch:
The following iOS 5 code produces an image that, when resized, stretches only the center portion defined by the UIEdgeInsets:
[[UIImage imageNamed:@"arrow.png"] resizableImageWithCapInsets:UIEdgeInsetsMake(15, 7, 15, 15)];
This results in an image that looks like this (if the image's frame is set to 70 pixels wide):
This is actually what I want, but resizableImageWithCapInsets: is only supported on iOS 5 and later.
Prior to iOS 5, the only similar method is stretchableImageWithLeftCapWidth:topCapHeight:, but you can only specify the top and left caps, which means the image has to have equally shaped edges.
Is there an iOS 4 way of resizing the image in the same way as iOS 5's resizableImageWithCapInsets: method, or another way of doing this?

Your assumption here is wrong:
Prior to iOS 5, the only similar method is stretchableImageWithLeftCapWidth:topCapHeight:, but you can only specify the top and left caps, which means the image has to have equally shaped edges.
The caps are figured out as follows - I'll step through the left cap, but the same principle applies to the top cap.
Say your image is 20px wide.
Left cap width - this is the part on the left hand side of the image that cannot be stretched. In the stretchableImage method you send a value of 10 for this.
Stretchable part - this is assumed to be one pixel in width, so it will be the pixels in column "11", for want of a better description.
This means there is an implied right cap of the remaining 9px of your image - this will also not be distorted.
This is taken from the documentation:
leftCapWidth
End caps specify the portion of an image that should not be resized when an image is stretched. This technique is used to implement buttons and other resizable image-based interface elements. When a button with end caps is resized, the resizing occurs only in the middle of the button, in the region between the end caps. The end caps themselves keep their original size and appearance.
This property specifies the size of the left end cap. The middle (stretchable) portion is assumed to be 1 pixel wide. The right end cap is therefore computed by adding the size of the left end cap and the middle portion together and then subtracting that value from the width of the image:
rightCapWidth = image.size.width - (image.leftCapWidth + 1);
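A minimal sketch of that arithmetic for the hypothetical 20px-wide image above (someImage and example.png are placeholders, not from the question):
// someImage is a placeholder for the hypothetical 20px-wide image
UIImage *someImage = [UIImage imageNamed:@"example.png"];
// left cap of 10 leaves one stretchable column (column "11")
UIImage *stretchable = [someImage stretchableImageWithLeftCapWidth:10 topCapHeight:0];
// rightCapWidth = 20 - (10 + 1) = 9, so both ends keep their original pixels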

UIImage *image = [UIImage imageNamed:@"img_loginButton.png"];
UIEdgeInsets edgeInsets;
edgeInsets.left = 0.0f;
edgeInsets.top = 0.0f;
edgeInsets.right = 5.0f; //Assume 5px will be the constant portion in your image
edgeInsets.bottom = 0.0f;
image = [image resizableImageWithCapInsets:edgeInsets];
//Use this image as your controls image

Your example is perfectly possible using stretchableImageWithLeftCapWidth:topCapHeight: with a left cap of 15 (apparently, from reading your code). That will horizontally stretch the button by repeating the middle column.
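For instance, a sketch along these lines (using the left-cap value suggested above; the right value depends on your artwork):
// iOS 4-compatible: fixed left cap, 1px stretchable column, implied fixed right cap
UIImage *arrow = [[UIImage imageNamed:@"arrow.png"] stretchableImageWithLeftCapWidth:15 topCapHeight:0];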

You can extend UIImage to allow stretching an image with custom edge protection (thereby stretching the interior of the image, instead of tiling it):
UIImage+utils.h:
#import <UIKit/UIKit.h>

@interface UIImage (util_extensions)

// extract a portion of a UIImage instance
- (UIImage *)cutout:(CGRect)coords;

// create a stretchable rendition of a UIImage instance, protecting edges as specified in cornerCaps
- (UIImage *)stretchImageWithCapInsets:(UIEdgeInsets)cornerCaps toSize:(CGSize)size;

@end
UIImage+utils.m:
#import "UIImage+utils.h"
#implementation UIImage(util_extensions)
-(UIImage *) cutout: (CGRect) coords {
UIGraphicsBeginImageContext(coords.size);
[self drawAtPoint: CGPointMake(-coords.origin.x, -coords.origin.y)];
UIImage *rslt = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return rslt;
}
-(UIImage *) stretchImageWithCapInsets: (UIEdgeInsets) cornerCaps toSize: (CGSize) size {
UIGraphicsBeginImageContext(size);
[[self cutout: CGRectMake(0,0,cornerCaps.left,cornerCaps.top)] drawAtPoint: CGPointMake(0,0)]; //topleft
[[self cutout: CGRectMake(self.size.width-cornerCaps.right,0,cornerCaps.right,cornerCaps.top)] drawAtPoint: CGPointMake(size.width-cornerCaps.right,0)]; //topright
[[self cutout: CGRectMake(0,self.size.height-cornerCaps.bottom,cornerCaps.left,cornerCaps.bottom)] drawAtPoint: CGPointMake(0,size.height-cornerCaps.bottom)]; //bottomleft
[[self cutout: CGRectMake(self.size.width-cornerCaps.right,self.size.height-cornerCaps.bottom,cornerCaps.right,cornerCaps.bottom)] drawAtPoint: CGPointMake(size.width-cornerCaps.right,size.height-cornerCaps.bottom)]; //bottomright
[[self cutout: CGRectMake(cornerCaps.left,0,self.size.width-cornerCaps.left-cornerCaps.right,cornerCaps.top)]
drawInRect: CGRectMake(cornerCaps.left,0,size.width-cornerCaps.left-cornerCaps.right,cornerCaps.top)]; //top
[[self cutout: CGRectMake(0,cornerCaps.top,cornerCaps.left,self.size.height-cornerCaps.top-cornerCaps.bottom)]
drawInRect: CGRectMake(0,cornerCaps.top,cornerCaps.left,size.height-cornerCaps.top-cornerCaps.bottom)]; //left
[[self cutout: CGRectMake(cornerCaps.left,self.size.height-cornerCaps.bottom,self.size.width-cornerCaps.left-cornerCaps.right,cornerCaps.bottom)]
drawInRect: CGRectMake(cornerCaps.left,size.height-cornerCaps.bottom,size.width-cornerCaps.left-cornerCaps.right,cornerCaps.bottom)]; //bottom
[[self cutout: CGRectMake(self.size.width-cornerCaps.right,cornerCaps.top,cornerCaps.right,self.size.height-cornerCaps.top-cornerCaps.bottom)]
drawInRect: CGRectMake(size.width-cornerCaps.right,cornerCaps.top,cornerCaps.right,size.height-cornerCaps.top-cornerCaps.bottom)]; //right
[[self cutout: CGRectMake(cornerCaps.left,cornerCaps.top,self.size.width-cornerCaps.left-cornerCaps.right,self.size.height-cornerCaps.top-cornerCaps.bottom)]
drawInRect: CGRectMake(cornerCaps.left,cornerCaps.top,size.width-cornerCaps.left-cornerCaps.right,size.height-cornerCaps.top-cornerCaps.bottom)]; //interior
UIImage *rslt = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return [rslt resizableImageWithCapInsets: cornerCaps];
}
#end
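A hypothetical usage, reusing the insets from the question (someImageView is a placeholder for whatever UIImageView displays the arrow):
#import "UIImage+utils.h"

// Stretch the arrow to 70pt wide while protecting the caps on all four sides
UIImage *arrow = [UIImage imageNamed:@"arrow.png"];
UIImage *stretched = [arrow stretchImageWithCapInsets:UIEdgeInsetsMake(15, 7, 15, 15)
                                               toSize:CGSizeMake(70, arrow.size.height)];
someImageView.image = stretched;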

Swift 3.0 version of Vicky's answer.
var imageInset:UIEdgeInsets = UIEdgeInsets()
imageInset.left = 10.0
imageInset.top = 10.0
imageInset.bottom = 10.0
imageInset.right = 10.0
self.myImageView.image = myimage.resizableImage(withCapInsets: imageInset)


divide image into two parts using divider

I'm working on an app where I need to divide an image into two parts using a red line:
left part for labels
right part for prices
Question 1.
How can I draw a red line on the image?
Question 2.
How can I divide the image into two parts using the red line? (The red line's position is not fixed; the user can move it wherever they want.)
Question 3.
How can I get the line's current position, and how can I use that position to divide the image?
Thanks in advance
I would approach this in somewhat the same manner as koray was suggesting:
1) I am assuming that your above image/view is going to be managed by a view controller, which I will call ImageSeperatorViewController from here on.
Inside of ImageSeperatorViewController, insert koray's code into the -(void)viewDidLoad{} method. Make sure you change the imageToSplit variable to be a UIImageView instead of a plain UIView.
2) Next, I assume that you know how to detect user gestures. You will detect these gestures and determine whether the user has selected the view (i.e. bar in koray's code). Once the user has grabbed bar, just update its frame's X origin with the touch position:
CGRect barFrame = bar.frame;
barFrame.origin.x = touchLocation.x; // the X location of the user's touch (touchLocation is a placeholder)
bar.frame = barFrame;
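One way to wire that up, sketched with a UIPanGestureRecognizer (this assumes bar from koray's code has been promoted to an instance variable of ImageSeperatorViewController, and handlePan: is a hypothetical selector name):
// In viewDidLoad, after creating and adding bar:
UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self
                                                                      action:@selector(handlePan:)];
[bar addGestureRecognizer:pan];

// Elsewhere in ImageSeperatorViewController:
- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
    // move the bar to follow the user's finger
    CGRect barFrame = bar.frame;
    barFrame.origin.x = [gesture locationInView:self.view].x;
    bar.frame = barFrame;
}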
3) For cropping, I would not use github.com/bilalmughal/NLImageCropper, it will not do what you need to do.
Try this on for size:
Header:
@interface UIImage (ImageDivider)

- (UIImage*)imageWithDividerAt:(CGFloat)position width:(CGFloat)width color:(UIColor*)color;
- (UIImage*)imageWithDividerAt:(CGFloat)position patternImage:(UIImage*)patternImage;
- (NSArray*)imagesBySlicingAt:(CGFloat)position;

@end
Implementation:
@implementation UIImage (ImageDivider)

- (UIImage*)imageWithDividerAt:(CGFloat)position patternImage:(UIImage*)patternImage
{
    //pattern image
    UIColor *patternColor = [UIColor colorWithPatternImage:patternImage];
    CGFloat width = patternImage.size.width;

    //set up context
    UIGraphicsBeginImageContext(self.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    //draw the existing image into the context
    [self drawAtPoint:CGPointZero];

    //set the fill color from the pattern image color
    CGContextSetFillColorWithColor(context, patternColor.CGColor);

    //this is your divider's area
    CGRect dividerRect = CGRectMake(position - (width / 2.0f), 0, width, self.size.height);

    //the joy of image color patterns being based on a 0,0 origin! must set the phase
    CGContextSetPatternPhase(context, CGSizeMake(dividerRect.origin.x, 0));

    //fill the divider rect with the repeating pattern from the image
    CGContextFillRect(context, dividerRect);

    //get your new image and voilà!
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
- (UIImage*)imageWithDividerAt:(CGFloat)position width:(CGFloat)width color:(UIColor *)color
{
    //set up context
    UIGraphicsBeginImageContext(self.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    //draw the existing image into the context
    [self drawAtPoint:CGPointZero];

    //set the fill color for your divider
    CGContextSetFillColorWithColor(context, color.CGColor);

    //this is your divider's area
    CGRect dividerRect = CGRectMake(position - (width / 2.0f), 0, width, self.size.height);

    //fill the divider's rect with the provided color
    CGContextFillRect(context, dividerRect);

    //get your new image and voilà!
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
- (NSArray*)imagesBySlicingAt:(CGFloat)position
{
    NSMutableArray *slices = [NSMutableArray array];

    //first image
    {
        //context!
        UIGraphicsBeginImageContext(CGSizeMake(position, self.size.height));

        //draw the existing image into the context
        [self drawAtPoint:CGPointZero];

        //get your new image and voilà!
        [slices addObject:UIGraphicsGetImageFromCurrentImageContext()];
        UIGraphicsEndImageContext();
    }

    //second
    {
        //context!
        UIGraphicsBeginImageContext(CGSizeMake(self.size.width - position, self.size.height));

        //draw the existing image into the context
        [self drawAtPoint:CGPointMake(-position, 0)];

        //get your new image and voilà!
        [slices addObject:UIGraphicsGetImageFromCurrentImageContext()];
        UIGraphicsEndImageContext();
    }

    return slices;
}

@end
The concept is simple - you want an image with the divider drawn over it. You could just overlay a view, or override drawRect:, or any number of other solutions. I'd rather give you this category. It just uses some quick Core Graphics calls to generate an image with your desired divider, be it pattern image or color, at the specified position. If you want support for horizontal dividers as well, it is rather trivial to modify this accordingly. Bonus: you can use a tiled image as your divider!
Now to answer your primary question. Using the category is rather self-explanatory - just call one of the two divider methods on your source background image to generate one with the divider, and then apply that image rather than the original source image.
Now, the second question is simple - when the divider has been moved, regenerate the image based on the new divider position. This is a relatively inefficient way of doing it, but it ought to be lightweight enough for your purposes, and it is only an issue while the divider is being moved. Premature optimization is just as much a sin.
The third question is also simple - call imagesBySlicingAt: - it will return an array of two images, as generated by slicing through the image at the provided position. Use them as you wish.
This code has been tested to be functional. I strongly suggest that you fiddle around with it, not for any purpose of utility, but to better understand the mechanisms used so that next time, you can be on the answering side of things.
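For illustration, usage might look something like this (sourceImage, dividerPosition, and imageView are placeholder names, not part of the question's code):
// Redraw the divider whenever its position changes
imageView.image = [sourceImage imageWithDividerAt:dividerPosition width:2.0f color:[UIColor redColor]];

// When the user is done, split the original (divider-free) image in two
NSArray *slices = [sourceImage imagesBySlicingAt:dividerPosition];
UIImage *labelsImage = [slices objectAtIndex:0]; // left part
UIImage *pricesImage = [slices objectAtIndex:1]; // right part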
For cropping, you can try this:
UIImage *image = [UIImage imageNamed:@"yourImage.png"];
CGImageRef tmpImgRef = image.CGImage;
CGImageRef topImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(0, 0, image.size.width, image.size.height / 2.0));
UIImage *topImage = [UIImage imageWithCGImage:topImgRef];
CGImageRelease(topImgRef);
CGImageRef bottomImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(0, image.size.height / 2.0, image.size.width, image.size.height / 2.0));
UIImage *bottomImage = [UIImage imageWithCGImage:bottomImgRef];
CGImageRelease(bottomImgRef);
hope this can help you, :)
If you want to draw a line, you could just use a UIView with a red background, making its height the height of your image and its width around 5 pixels.
UIView *imageToSplit; //the view with the image I'm trying to split using a red bar
CGRect i = imageToSplit.frame;
int x = i.origin.x + i.size.width/2;
int y = i.origin.y;
int width = 5;
int height = i.size.height;
UIView *bar = [[[UIView alloc] initWithFrame:CGRectMake(x, y, width, height)] autorelease];
bar.backgroundColor = [UIColor redColor];
[self.view addSubview:bar];

UIImage from UIView: higher than on-screen resolution?

I've got a UIView which I'm rendering to a UIImage via the typical UIGraphicsBeginImageContextWithOptions method, using a scale of 2.0 so the image output will always be the "retina display" version of what would show up onscreen, regardless of the user's actual screen resolution.
The UIView I'm rendering contains both images and text (UIImages and UILabels).  The image is appearing in the rendered UIImage at its full resolution, and looks great.  But the UILabels appear to have been rasterized at a 1.0 scale and then upscaled to 2.0, resulting in blurry text.
Is there something I'm doing wrong, or is there some way to get the text to render nice and crisp at the higher scale level?  Or is there some way to do this other than using the scaling parameter of UIGraphicsBeginImageContextWithOptions that would have better results?   Thanks!
The solution is to change the label's contentsScale to 2 before you draw it, then set it back immediately thereafter. I just coded up a project to verify it, and it's working just fine making a 2x image on a normal retina phone (simulator). [If you have a public place I can put it, let me know.]
EDIT: the extended code walks the subviews and any container UIViews to set/unset the scale
- (IBAction)snapShot:(id)sender
{
    [self changeScaleforView:snapView scale:2];
    UIGraphicsBeginImageContextWithOptions(snapView.bounds.size, snapView.opaque, 2);
    [snapView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    imageDisplay.image = img;
    imageDisplay.contentMode = UIViewContentModeScaleAspectFit;
    [self changeScaleforView:snapView scale:1];
}

- (void)changeScaleforView:(UIView *)aView scale:(CGFloat)scale
{
    [aView.subviews enumerateObjectsUsingBlock:^void(UIView *v, NSUInteger idx, BOOL *stop)
    {
        if ([v isKindOfClass:[UILabel class]]) {
            // labels
            v.layer.contentsScale = scale;
        } else if ([v isKindOfClass:[UIImageView class]]) {
            // images
            // v.layer.contentsScale = scale; won't work
            // if the image is not "@2x", you could subclass UIImageView and set the name of the @2x
            // version on it as a property, then here you would set that imageNamed as the image, then undo it later
        } else if ([v isMemberOfClass:[UIView class]]) {
            // container view - recurse into it
            [self changeScaleforView:v scale:scale];
        }
    }];
}
Try rendering to an image with double size, and then create the scaled image:
UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
// Do stuff
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
newImage=[UIImage imageWithCGImage:[newImage CGImage] scale:2.0 orientation:UIImageOrientationUp];
Where:
size = realSize * scale;
I have been struggling with much the same oddities in the context of textview to PDF rendering. I found out that there are some documented properties on the CALayer objects which make up the view. Maybe setting the rasterizationScale of the relevant (sub)layer(s) helps.
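For what it's worth, a sketch of that suggestion might look like this (blurryLabel is a placeholder; note that rasterizationScale only takes effect while shouldRasterize is YES):
#import <QuartzCore/QuartzCore.h>

// Before rendering the parent view into the image context:
blurryLabel.layer.shouldRasterize = YES;
blurryLabel.layer.rasterizationScale = 2.0;
// ... render the parent view into the context here ...
// Afterwards, restore the defaults:
blurryLabel.layer.shouldRasterize = NO;
blurryLabel.layer.rasterizationScale = [UIScreen mainScreen].scale;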

How do I rotate a UIImageView by 90 degrees inside a UIScrollView with correct image size and scrolling?

I have an image inside a UIImageView, which is within a UIScrollView. What I want to do is rotate this image 90 degrees so that it is in landscape by default, set the initial zoom of the image so that the entire image fits into the scroll view, and then allow it to be zoomed up to 100% and back down to the minimum zoom again.
This is what I have so far:
self.imageView.transform = CGAffineTransformMakeRotation(-M_PI/2);
float minimumScale = scrollView.frame.size.width / self.imageView.frame.size.width;
scrollView.minimumZoomScale = minimumScale;
scrollView.zoomScale = minimumScale;
scrollView.contentSize = CGSizeMake(self.imageView.frame.size.height,self.imageView.frame.size.width);
The problem is that if I set the transform, nothing shows up in the scroll view. However, if I comment out the transform, everything works except that the image is not in the landscape orientation that I want.
If I apply the transform and remove the code that sets the minimumZoomScale and zoomScale properties, the image shows up in the correct orientation, but with the wrong zoomScale, and it seems like the contentSize property isn't set correctly either - the view doesn't scroll all the way to the edge of the image in the left/right direction, while top and bottom scroll well past the edge.
NB: image is being loaded from a URL
Maybe rotating the image itself fits your needs:
UIImage* rotateUIImage(const UIImage* src, float angleDegrees) {
    UIView* rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, src.size.width, src.size.height)];
    float angleRadians = angleDegrees * ((float)M_PI / 180.0f);
    CGAffineTransform t = CGAffineTransformMakeRotation(angleRadians);
    rotatedViewBox.transform = t;
    CGSize rotatedSize = rotatedViewBox.frame.size;
    [rotatedViewBox release];

    UIGraphicsBeginImageContext(rotatedSize);
    CGContextRef bitmap = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(bitmap, rotatedSize.width/2, rotatedSize.height/2);
    CGContextRotateCTM(bitmap, angleRadians);
    CGContextScaleCTM(bitmap, 1.0, -1.0);
    CGContextDrawImage(bitmap, CGRectMake(-src.size.width / 2, -src.size.height / 2, src.size.width, src.size.height), [src CGImage]);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
I believe the easiest way (and thread safe too) is to do:
//assume that the image is loaded in landscape mode from disk
UIImage *LandscapeImage = [UIImage imageNamed:imgname];
UIImage *PortraitImage = [[UIImage alloc] initWithCGImage:LandscapeImage.CGImage
                                                    scale:1.0
                                              orientation:UIImageOrientationLeft];
Any calculations that you do based on the imageView's frame should probably be done before you apply any transformations to it. But I would actually suggest doing those calculations based on the size of the UIImage, not the UIImageView. Then set both the UIImageView's frame and the UIScrollView's contentSize based on that.
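For example, a sketch of that ordering (assuming you rotate the UIImage itself, as above, rather than transforming the view; rotatedImage and originalImage are placeholders):
UIImage *rotatedImage = rotateUIImage(originalImage, -90.0f); // or the initWithCGImage:scale:orientation: approach
self.imageView.image = rotatedImage;
self.imageView.frame = CGRectMake(0, 0, rotatedImage.size.width, rotatedImage.size.height);
scrollView.contentSize = rotatedImage.size;

CGFloat minimumScale = scrollView.frame.size.width / rotatedImage.size.width;
scrollView.minimumZoomScale = minimumScale;
scrollView.maximumZoomScale = 1.0;
scrollView.zoomScale = minimumScale;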
Max's suggestion is a good one, although with a larger image it could be a performance killer. Are you displaying this image from your app's resources? If so, why not just rotate the images before you even build the app?
There's a much easier solution that is also faster; just do this:
// RADIANS() is assumed to be a degrees-to-radians macro, e.g.
// #define RADIANS(deg) ((deg) * M_PI / 180.0)
- (void)imageRotateTapped:(id)sender
{
    [UIView animateWithDuration:0.33f animations:^()
    {
        self.imageView.transform = CGAffineTransformMakeRotation(RADIANS(self.rotateDegrees += 90.0f));
        self.imageView.frame = self.imageView.superview.bounds; // change this to whatever rect you want
    }];
}
When the user is done, you will need to actually create a new rotated image, but that is very easy to do.
I was using the accepted answer for a while until we noticed that non-square rotations based on images taken directly from the camera seemed stretched (they were rotated as desired, just the frame width/height wasn't adjusted).
Great explanation/post here from Trevor: http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
In the end, it was a very simple import of Trevor's code, which uses categories to add a resizedImage:interpolationQuality: method to UIImage. So yeah, user beware: if it still works for you, great. But if it doesn't, I'd take a look at that library instead.

How do I achieve a frame around an image

I like the way this application (http://shakeitphoto.com/) puts a border around the image. I would like to do something similar in my application, but I'm not sure how I should go about doing it.
Any ideas on how given a UIImage can I wrap a frame around it?
From that website, it appears you want a border with a shadow. There are two reasonable options, or three if you don't care about the shadow.
If you don't care about the shadow, you can just do something like
#import <QuartzCore/QuartzCore.h> // this should be at the top
// inside your view layout code
myImageView.layer.borderColor = [UIColor whiteColor].CGColor;
myImageView.layer.borderWidth = 5;
This will give you a 5-pixel white border inset into the view, layered on top of the view's contents (e.g. the image). What it won't give you is a shadow. If you want the shadow, there are two other options.
You could just create an image that includes the border and the shadow, and nothing else. Just make everything else alpha-transparent. Then you can simply layer this image on top of the one you want to display (either with 2 imageviews, or by creating a third image out of the 2). This should work fine, but it won't scale to different image sizes. In the case of the linked app, the image size is always the same so they could be using this.
The other option is to simply draw the border and shadow on top of your image in a new image. Here's a bit of sample code that will do this - it creates a new image the same size as your original, but with a white, shadowed border:
- (UIImage *)borderedImage:(UIImage *)image {
    // the following NO means the new image has an alpha channel
    // If you know the source image is fully-opaque, you may want to set that to YES
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];

    CGContextRef ctx = UIGraphicsGetCurrentContext();
    const CGFloat shadowRadius = 5;
    // second argument is the shadow offset (CGSize), third is the blur radius
    CGContextSetShadowWithColor(ctx, CGSizeZero, shadowRadius, [UIColor blackColor].CGColor);
    [[UIColor whiteColor] set];

    CGRect rect = (CGRect){CGPointZero, image.size};
    const CGFloat frameWidth = 5;
    rect = CGRectInset(rect, frameWidth / 2.0f, frameWidth / 2.0f);
    UIBezierPath *path = [UIBezierPath bezierPathWithRect:rect];
    path.lineWidth = frameWidth;
    [path stroke];

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    // note: getting the new image this way throws away the orientation data from the original
    // You could create a third image by doing something like
    // newImage = [UIImage imageWithCGImage:newImage.CGImage scale:newImage.scale orientation:image.orientation]
    // but I am unsure as to how orientation actually affects rendering (if at all)
    UIGraphicsEndImageContext();
    return newImage;
}
(note: this code has not been compiled and could contain bugs)

UITableViewCell has square corners with image

I have a grouped UITableView that contains several cells (just standard UITableViewCells), all of which are of UITableViewCellStyleSubtitle style. Bueno. However, when I insert images into them (using the provided imageView property), the corners on the left side become square.
Example Image http://files.lithiumcube.com/tableView.png
The code being used to assign the values into the cell is:
cell.textLabel.text = currentArticle.descriptorAndTypeAndDifferentiator;
cell.detailTextLabel.text = currentArticle.stateAndDaysWorn;
cell.imageView.image = currentArticle.picture;
and currentArticle.picture is a UIImage (also the pictures, as you can see, display just fine with the exception of the square corners).
It displays the same on my iPhone 3G, in the iPhone 4 simulator and in the iPad simulator.
What I'm going for is something similar to the UITableViewCells that Apple uses in its iTunes app.
Any ideas about what I'm doing wrong?
Thanks,
-Aaron
cell.imageView.layer.cornerRadius = 16; // 16 is just a guess
cell.imageView.clipsToBounds = YES;
This will round the UIImageView so it does not draw over the cell. It will also round all the corners of all your images, but that may be OK.
Otherwise, you will have to add your own image view that will just round the one corner. You can do that by setting up a clip region in drawRect: before calling super. Or just add your own image view that is not so close to the left edge.
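If you do want to round just the left corners of the image view itself, one alternative sketch (not the drawRect: approach described above) is a CAShapeLayer mask; this requires QuartzCore, and 16 is the same guessed radius as above:
#import <QuartzCore/QuartzCore.h>

// Apply after the cell has been laid out, so imageView.bounds is non-zero
UIBezierPath *maskPath = [UIBezierPath bezierPathWithRoundedRect:cell.imageView.bounds
                                               byRoundingCorners:(UIRectCornerTopLeft | UIRectCornerBottomLeft)
                                                     cornerRadii:CGSizeMake(16, 16)];
CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = cell.imageView.bounds;
maskLayer.path = maskPath.CGPath;
cell.imageView.layer.mask = maskLayer;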
You can add a category on UIImage and include this method:
// Return the image, but with rounded corners. Useful for masking an image
// being used in a UITableView which has the grouped style
- (UIImage *)imageWithRoundedCorners:(UIRectCorner)corners radius:(CGFloat)radius {
    // We need to create a CGPath to set a clipping context
    CGRect aRect = CGRectMake(0.f, 0.f, self.size.width, self.size.height);
    CGPathRef clippingPath = [UIBezierPath bezierPathWithRoundedRect:aRect byRoundingCorners:corners cornerRadii:CGSizeMake(radius, radius)].CGPath;

    // Begin drawing
    // Starting a context with a scale of 0.0 uses the current device scale, so that this doesn't
    // unnecessarily drop resolution on a retina display.
    // Use UIGraphicsBeginImageContext(aRect.size) instead for pre-iOS 4 compatibility.
    UIGraphicsBeginImageContextWithOptions(aRect.size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddPath(context, clippingPath);
    CGContextClip(context);
    [self drawInRect:aRect];

    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return croppedImage;
}
Then when you're configuring your cells, in the table view controller, call something like:
if ( *SINGLE_ROW* ) {
    // We need to clip to both corners
    cell.imageView.image = [image imageWithRoundedCorners:(UIRectCornerTopLeft | UIRectCornerBottomLeft) radius:radius];
} else if (indexPath.row == 0) {
    cell.imageView.image = [image imageWithRoundedCorners:UIRectCornerTopLeft radius:radius];
} else if (indexPath.row == *NUMBER_OF_ITEMS* - 1) {
    cell.imageView.image = [image imageWithRoundedCorners:UIRectCornerBottomLeft radius:radius];
} else {
    cell.imageView.image = image;
}
but replace the SINGLE_ROW etc. with real logic to determine whether you've got a single row in a section, or whether it's the last row. One thing to note here is that I've found (experimentally) that the radius for a group-style table is 12, which works perfectly in the simulator, but not on an iPhone. I've not been able to test it on a non-retina device. A radius of 30 looks good on the iPhone 4 (so I'm wondering if this is an image scale thing, as the images I'm using are from the AddressBook and so don't have an implied scale factor). Therefore, I've got some code before this that modifies the radius...
CGFloat radius = GroupStyleTableCellCornerRadius;
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)] && [[UIScreen mainScreen] scale] == 2) {
    // iPhone 4
    radius = GroupStyleTableCellCornerRadiusForRetina;
}
hope that helps.