UITableViewCell has square corners with image - iPhone

I have a grouped UITableView that contains several cells (just standard UITableViewCells), all of which are of UITableViewCellStyleSubtitle style. So far, so good. However, when I insert images into them (using the provided imageView property), the corners on the left side become square.
Example Image http://files.lithiumcube.com/tableView.png
The code being used to assign the values into the cell is:
cell.textLabel.text = currentArticle.descriptorAndTypeAndDifferentiator;
cell.detailTextLabel.text = currentArticle.stateAndDaysWorn;
cell.imageView.image = currentArticle.picture;
and currentArticle.picture is a UIImage (also the pictures, as you can see, display just fine with the exception of the square corners).
It displays the same on my iPhone 3G, in the iPhone 4 simulator and in the iPad simulator.
What I'm going for is something similar to the UITableViewCells that Apple uses in its iTunes app.
Any ideas about what I'm doing wrong?
Thanks,
-Aaron

#import <QuartzCore/QuartzCore.h> // needed for the layer's cornerRadius property
cell.imageView.layer.cornerRadius = 16; // 16 is just a guess
cell.imageView.clipsToBounds = YES;
This will round the UIImageView so it does not draw over the cell. It will also round all the corners of all your images, but that may be OK.
Otherwise, you will have to add your own image view that will just round the one corner. You can do that by setting up a clip region in drawRect: before calling super. Or just add your own image view that is not so close to the left edge.
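If you take the custom-view route, a minimal sketch might look like the following. It is a plain UIView subclass rather than a UIImageView subclass (UIImageView does not normally call drawRect:), and the class and property names are just illustrative:
// RoundedCornerImageView.h (illustrative, not part of UIKit)
#import <UIKit/UIKit.h>
@interface RoundedCornerImageView : UIView
@property (nonatomic, retain) UIImage *image;        // image to draw (release in dealloc under MRC)
@property (nonatomic, assign) UIRectCorner corners;  // which corners to round
@property (nonatomic, assign) CGFloat cornerRadius;
@end

// RoundedCornerImageView.m
@implementation RoundedCornerImageView
@synthesize image, corners, cornerRadius;
- (void)drawRect:(CGRect)rect
{
    // Clip to a path that rounds only the requested corners, then draw the image.
    UIBezierPath *clip = [UIBezierPath bezierPathWithRoundedRect:self.bounds
                                               byRoundingCorners:self.corners
                                                     cornerRadii:CGSizeMake(self.cornerRadius, self.cornerRadius)];
    [clip addClip];
    [self.image drawInRect:self.bounds];
}
@end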

You can add a category on UIImage and include this method:
// Return the image, but with rounded corners. Useful for masking an image
// being used in a UITableView which has grouped style
- (UIImage *)imageWithRoundedCorners:(UIRectCorner)corners radius:(CGFloat)radius {
// We need to create a CGPath to set a clipping context
CGRect aRect = CGRectMake(0.f, 0.f, self.size.width, self.size.height);
CGPathRef clippingPath = [UIBezierPath bezierPathWithRoundedRect:aRect byRoundingCorners:corners cornerRadii:CGSizeMake(radius, radius)].CGPath;
// Begin drawing
// Starting a context with a scale of 0.0 uses the current device scale, so this doesn't unnecessarily drop resolution on a retina display.
// Use `UIGraphicsBeginImageContext(aRect.size)` instead for pre-iOS 4 compatibility.
UIGraphicsBeginImageContextWithOptions(aRect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextAddPath(context, clippingPath);
CGContextClip(context);
[self drawInRect:aRect];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return croppedImage;
}
Then when you're configuring your cells, in the table view controller, call something like:
if ( *SINGLE_ROW* ) {
// We need to clip to both corners
cell.imageView.image = [image imageWithRoundedCorners:(UIRectCornerTopLeft | UIRectCornerBottomLeft) radius:radius];
} else if (indexPath.row == 0) {
cell.imageView.image = [image imageWithRoundedCorners:UIRectCornerTopLeft radius:radius];
} else if (indexPath.row == *NUMBER_OF_ITEMS* - 1) {
cell.imageView.image = [image imageWithRoundedCorners:UIRectCornerBottomLeft radius:radius];
} else {
cell.imageView.image = image;
}
but replace SINGLE_ROW etc. with real logic to determine whether you've got a single row in a section, or whether it's the first or last row. One thing to note: I've found (experimentally) that the radius for a grouped-style table is 12, which works perfectly in the simulator, but not on an iPhone. I've not been able to test it on a non-retina device. A radius of 30 looks good on the iPhone 4 (so I wonder whether this is an image-scale thing, as the images I'm using come from the Address Book and so don't have an implied scale factor). Therefore, I've got some code before this that modifies the radius...
CGFloat radius = GroupStyleTableCellCornerRadius;
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)] && [[UIScreen mainScreen] scale] == 2){
// iPhone 4
radius = GroupStyleTableCellCornerRadiusForRetina;
}
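For completeness, the constants and the row checks behind the SINGLE_ROW / NUMBER_OF_ITEMS placeholders could look roughly like this (a sketch only; the constant values are the ones discussed above, and the checks are assumed to run inside tableView:cellForRowAtIndexPath:):
// Illustrative constants - tune the values for your own app.
static const CGFloat GroupStyleTableCellCornerRadius = 12.0f;
static const CGFloat GroupStyleTableCellCornerRadiusForRetina = 30.0f;

// Inside tableView:cellForRowAtIndexPath:
NSInteger rowCount = [tableView numberOfRowsInSection:indexPath.section];
BOOL isSingleRow = (rowCount == 1);                 // section has exactly one row
BOOL isFirstRow  = (indexPath.row == 0);            // round the top-left corner
BOOL isLastRow   = (indexPath.row == rowCount - 1); // round the bottom-left corner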
hope that helps.

Related

divide image into two parts using divider

I'm working on an app where I need to divide an image into two parts using a red line.
left part for labels
right part for prices
Question 1.
How can I draw a red line on the image?
Question 2.
How can I divide the image into two parts using the red line? (The red line's position is not fixed; the user can move it wherever they want.)
Question 3.
How can I get the line's current position, and how can I use that position to divide the image?
Thanks in advance
I would approach this in somewhat the same manner as koray was suggesting:
1) I am assuming that your above image/view is going to be managed by a view controller, which I will call ImageSeperatorViewController from here on.
Inside of ImageSeperatorViewController, insert koray's code in the -(void) viewDidLoad{} method. Make sure you change the imageToSplit variable to be a UIImageView instead of a plain UIView.
2) Next, I assume that you know how to detect user gestures. You will detect these gestures, and determine if the user has selected the view (i.e. bar in koray's code). Once you have determined if the user has selected bar, just update its origin's X position with the touch position.
CGRect barFrame = bar.frame;
barFrame.origin.x = *X location of the users touch*
bar.frame = barFrame;
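A minimal gesture sketch, assuming bar and imageToSplit are kept in ivars of ImageSeperatorViewController (UIPanGestureRecognizer needs iOS 3.2 or later):
// In viewDidLoad, after creating `bar`:
bar.userInteractionEnabled = YES;
UIPanGestureRecognizer *pan = [[[UIPanGestureRecognizer alloc]
    initWithTarget:self action:@selector(handlePan:)] autorelease];
[bar addGestureRecognizer:pan];

// Elsewhere in ImageSeperatorViewController:
- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
    CGPoint location = [gesture locationInView:self.view];
    CGRect barFrame = bar.frame;
    // Keep the divider inside the image's horizontal bounds.
    CGFloat minX = CGRectGetMinX(imageToSplit.frame);
    CGFloat maxX = CGRectGetMaxX(imageToSplit.frame) - barFrame.size.width;
    barFrame.origin.x = MAX(minX, MIN(maxX, location.x));
    bar.frame = barFrame;
}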
3) For cropping, I would not use github.com/bilalmughal/NLImageCropper; it will not do what you need.
Try this on for size:
Header:
@interface UIImage (ImageDivider)
- (UIImage*)imageWithDividerAt:(CGFloat)position width:(CGFloat)width color:(UIColor*)color;
- (UIImage*)imageWithDividerAt:(CGFloat)position patternImage:(UIImage*)patternImage;
- (NSArray*)imagesBySlicingAt:(CGFloat)position;
@end
Implementation:
@implementation UIImage (ImageDivider)
- (UIImage*)imageWithDividerAt:(CGFloat)position patternImage:(UIImage*)patternImage
{
//pattern image
UIColor *patternColor = [UIColor colorWithPatternImage:patternImage];
CGFloat width = patternImage.size.width;
//set up context
UIGraphicsBeginImageContext(self.size);
CGContextRef context = UIGraphicsGetCurrentContext();
//draw the existing image into the context
[self drawAtPoint:CGPointZero];
//set the fill color from the pattern image color
CGContextSetFillColorWithColor(context, patternColor.CGColor);
//this is your divider's area
CGRect dividerRect = CGRectMake(position - (width / 2.0f), 0, width, self.size.height);
//the joy of image color patterns being based on 0,0 origin! must set phase
CGContextSetPatternPhase(context, CGSizeMake(dividerRect.origin.x, 0));
//fill the divider rect with the repeating pattern from the image
CGContextFillRect(context, dividerRect);
//get your new image and voilà!
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
- (UIImage*)imageWithDividerAt:(CGFloat)position width:(CGFloat)width color:(UIColor *)color
{
//set up context
UIGraphicsBeginImageContext(self.size);
CGContextRef context = UIGraphicsGetCurrentContext();
//draw the existing image into the context
[self drawAtPoint:CGPointZero];
//set the fill color for your divider
CGContextSetFillColorWithColor(context, color.CGColor);
//this is your divider's area
CGRect dividerRect = CGRectMake(position - (width / 2.0f), 0, width, self.size.height);
//fill the divider's rect with the provided color
CGContextFillRect(context, dividerRect);
//get your new image and voilà!
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
- (NSArray*)imagesBySlicingAt:(CGFloat)position
{
NSMutableArray *slices = [NSMutableArray array];
//first image
{
//context!
UIGraphicsBeginImageContext(CGSizeMake(position, self.size.height));
//draw the existing image into the context
[self drawAtPoint:CGPointZero];
//get your new image and voilà!
[slices addObject:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
}
//second
{
//context!
UIGraphicsBeginImageContext(CGSizeMake(self.size.width - position, self.size.height));
//draw the existing image into the context
[self drawAtPoint:CGPointMake(-position, 0)];
//get your new image and voilà!
[slices addObject:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
}
return slices;
}
@end
The concept is simple - you want an image with the divider drawn over it. You could just overlay a view, or override drawRect:, or any number of any solutions. I'd rather give you this category. It just uses some quick Core Graphics calls to generate an image with your desired divider, be it pattern image or color, at the specified position. If you want support for horizontal dividers as well, it is rather trivial to modify this as such. Bonus: You can use a tiled image as your divider!
Now to answer your primary question. Using the category is rather self explanatory - just call one of the two methods on your source background to generate one with the divider, and then apply that image rather than the original source image.
Now, the second question is simple - when the divider has been moved, regenerate the image based on the new divider position. This is a relatively inefficient way of doing it, but it ought to be lightweight enough for your purposes, and it is only an issue while the divider is being moved. Premature optimization is just as much a sin.
Third question is also simple - call imagesBySlicingAt: - it will return an array of two images, as generated by slicing through the image at the provided position. Use them as you wish.
This code has been tested to be functional. I strongly suggest that you fiddle around with it, not for any purpose of utility, but to better understand the mechanisms used, so that next time you can be on the answering side of things.
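For reference, usage might look something like this (sourceImage, imageView and dividerPosition stand in for your own objects and values):
// Draw the red divider over the image and show it:
UIImage *withDivider = [sourceImage imageWithDividerAt:dividerPosition
                                                 width:5.0f
                                                 color:[UIColor redColor]];
imageView.image = withDivider;

// When the user settles on a position, split the original image:
NSArray *parts = [sourceImage imagesBySlicingAt:dividerPosition];
UIImage *leftPart  = [parts objectAtIndex:0];  // labels side
UIImage *rightPart = [parts objectAtIndex:1];  // prices side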
For cropping, you can try this:
UIImage *image = [UIImage imageNamed:@"yourImage.png"];
CGImageRef tmpImgRef = image.CGImage;
CGImageRef topImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(0, 0, image.size.width, image.size.height / 2.0));
UIImage *topImage = [UIImage imageWithCGImage:topImgRef];
CGImageRelease(topImgRef);
CGImageRef bottomImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(0, image.size.height / 2.0, image.size.width, image.size.height / 2.0));
UIImage *bottomImage = [UIImage imageWithCGImage:bottomImgRef];
CGImageRelease(bottomImgRef);
hope this can help you, :)
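The same idea can be adapted to a left/right split at the user's divider position rather than a fixed top/bottom split. A sketch (dividerX is illustrative; note that CGImageCreateWithImageInRect works in pixels, so multiply by the image's scale on iOS 4 and later):
CGImageRef imgRef = image.CGImage;
CGFloat splitX = dividerX * image.scale; // divider position in pixel coordinates
CGImageRef leftRef = CGImageCreateWithImageInRect(imgRef,
    CGRectMake(0, 0, splitX, CGImageGetHeight(imgRef)));
CGImageRef rightRef = CGImageCreateWithImageInRect(imgRef,
    CGRectMake(splitX, 0, CGImageGetWidth(imgRef) - splitX, CGImageGetHeight(imgRef)));
UIImage *leftImage  = [UIImage imageWithCGImage:leftRef];
UIImage *rightImage = [UIImage imageWithCGImage:rightRef];
CGImageRelease(leftRef);
CGImageRelease(rightRef);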
If you want to draw a line, you could just use a UIView with a red background; make its height the height of your image and its width around 5 pixels.
UIView *imageToSplit; //the image im trying to split using a red bar
CGRect i = imageToSplit.frame;
int x = i.origin.x + i.size.width/2;
int y = i.origin.y;
int width = 5;
int height = i.size.height;
UIView *bar = [[[UIView alloc] initWithFrame:CGRectMake(x, y, width, height)] autorelease];
bar.backgroundColor = [UIColor redColor];
[self.view addSubview:bar];

UIImageView with big image issue

I want to implement the iPhone Photos app (the default iPhone app). I ran into difficulties when I want to load a big image (2500 * 3700). When I scroll from one image to another, I see something like stuttering. To display images I use ImageScrollView from the Apple site. Its displaying method (ImageScrollView.m):
- (void)displayImage:(UIImage *)image
{
// clear the previous imageView
[imageView removeFromSuperview];
[imageView release];
imageView = nil;
// reset our zoomScale to 1.0 before doing any further calculations
self.zoomScale = 1.0;
self.imageView = [[[UIImageView alloc] initWithImage:image] autorelease];
[self addSubview:imageView];
self.contentSize = [image size];
[self setMaxMinZoomScalesForCurrentBounds];
self.zoomScale = self.minimumZoomScale;
}
- (void)setMaxMinZoomScalesForCurrentBounds
{
CGSize boundsSize = self.bounds.size;
CGSize imageSize = imageView.bounds.size;
// calculate min/max zoomscale
CGFloat xScale = boundsSize.width / imageSize.width; // the scale needed to perfectly fit the image width-wise
CGFloat yScale = boundsSize.height / imageSize.height; // the scale needed to perfectly fit the image height-wise
CGFloat minScale = MIN(xScale, yScale); // use minimum of these to allow the image to become fully visible
// on high resolution screens we have double the pixel density, so we will be seeing every pixel if we limit the
// maximum zoom scale to 0.5.
CGFloat maxScale = 1.0 / [[UIScreen mainScreen] scale];
// don't let minScale exceed maxScale. (If the image is smaller than the screen, we don't want to force it to be zoomed.)
if (minScale > maxScale) {
minScale = maxScale;
}
self.maximumZoomScale = maxScale;
self.minimumZoomScale = minScale;
}
For my app I have 600*600 images, which I show first. When the user scrolls to the next image he sees only the 600*600 image. Then, in the background, I load the 3600 * 3600 image:
[operationQueue addOperationWithBlock:^{
UIImage *image = [self getProperBIGImage];
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
ImageScrollView *scroll = [self getCurrentScroll];
[scroll displayImage:image];
}];
}];
I see that when the dimensions of the image are 3600 * 3600 and I want to display it on a 640 * 960 screen, the iPhone wastes about 1 second of main-queue time scaling the image, and that's why I can't scroll to the next image during that second.
I want to scale the image because I need the user to be able to zoom it. I tried to use this approach, but it didn't help.
I see some possible solutions:
1) scale the image for the UIImageView in the background (but I know that the UI should only be changed on the main thread)
2) use - (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollViewCalled to show only the 600 * 600 image at the beginning and then load the big image when the user tries to zoom (but I tried this, and I still lose the 1 second when I init the UIImageView with the big image and return it; and I can't even implement it properly, because the scrolling behavior goes wrong - difficult to explain - when I return a different view for different scales)
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollViewCalled
{
if (!zooming)
{
ImageScrollView *scroll = (ImageScrollView *)scrollViewCalled;
UIImageView *imageView = (UIImageView *)[scroll imageView];
return imageView;
}
else
{
UIImageView *bigImageView = [self getBigImageView];
return bigImageView;
}
}
Unfortunately, that image is too large for the iPhone to handle. I know on the iPad, the limit is roughly the size of the device screen. Any bigger than that and you'll have to use a CATiledLayer.
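A bare-bones CATiledLayer-backed view looks roughly like this (a sketch modelled on Apple's PhotoScroller sample; you still have to supply the pre-cut tile images yourself):
#import <QuartzCore/QuartzCore.h>

@interface TiledImageView : UIView
@end

@implementation TiledImageView
+ (Class)layerClass
{
    return [CATiledLayer class]; // back the view with a tiled layer
}
- (id)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
        tiledLayer.tileSize = CGSizeMake(256.0, 256.0);
        tiledLayer.levelsOfDetail = 4; // however many zoom levels you have tiles for
    }
    return self;
}
- (void)drawRect:(CGRect)rect
{
    // Called once per tile, on a background thread; draw only the tile covering `rect`.
    // UIImage *tile = ...look up the tile for `rect` and the current scale...;
    // [tile drawInRect:rect];
}
@end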
I would take a look at the WWDC UIScrollView presentations over the last three years, starting with 2010. In those, they discuss ways to handle large images on the iPhone. The sample code will also get you on your way.
Good luck!

How to Define UIImageView size as UIImage resolution?

I have a scenario in which I am getting images using a web service, and the images all have different resolutions. Now my requirement is that I want the resolution of each image, and using that I want to define the size of the UIImageView so I can prevent my images from getting blurred.
For example, if the image resolution is 326 pixels/inch, the image view should be sized so that the image can be displayed fully without any blur.
UIImage *img = [UIImage imageNamed:@"foo.png"];
CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
UIImageView *imgView = [[UIImageView alloc] initWithFrame:rect];
[imgView setImage:img];
Image size IS its resolution.
Your problem might be the Retina display!
Check for a Retina display and, if present, make the UIImageView's width/height twice as small (so that each UIImageView point is backed by four UIImage pixels on a Retina display).
How to check for retina display:
https://stackoverflow.com/a/7607087/894671
How to check image size (without actually loading image in memory):
NSString *mFullPath = [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject]
stringByAppendingPathComponent:@"imageName.png"];
NSURL *imageFileURL = [NSURL fileURLWithPath:mFullPath];
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((CFURLRef)imageFileURL, NULL);
if (imageSource == NULL)
{
// Error loading image ...
}
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool:NO], (NSString *)kCGImageSourceShouldCache, nil];
CFDictionaryRef imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, (CFDictionaryRef)options);
NSNumber *mImgWidth;
NSNumber *mImgHeight;
if (imageProperties)
{
//loaded image width
mImgWidth = (NSNumber *)CFDictionaryGetValue(imageProperties, kCGImagePropertyPixelWidth);
//loaded image height
mImgHeight = (NSNumber *)CFDictionaryGetValue(imageProperties, kCGImagePropertyPixelHeight);
CFRelease(imageProperties);
}
if (imageSource != NULL)
{
CFRelease(imageSource);
}
So - for example:
UIImageView *mImgView = [[UIImageView alloc] init];
[mImgView setImage:[UIImage imageNamed:@"imageName.png"]];
[[self view] addSubview:mImgView];
if ([UIScreen instancesRespondToSelector:@selector(scale)])
{
CGFloat scale = [[UIScreen mainScreen] scale];
if (scale > 1.0)
{
//iphone retina screen
[mImgView setFrame:CGRectMake(0,0,[mImgWidth intValue]/2,[mImgHeight intValue]/2)];
}
else
{
//iphone screen
[mImgView setFrame:CGRectMake(0,0,[mImgWidth intValue],[mImgHeight intValue])];
}
}
Hope that helps!
You can get the image size using the following code. So, first calculate the downloaded image's size and then make the image view accordingly.
UIImage *Yourimage = [UIImage imageNamed:@"image.png"];
CGFloat width = Yourimage.size.width;
CGFloat height = Yourimage.size.height;
Hope, this will help you..
UIImage *oldimage = [UIImage imageWithContentsOfFile:imagePath]; // or you can set from url with NSURL
CGSize imgSize = [oldimage size];
imgview.frame = CGRectMake(10, 10, imgSize.width,imgSize.height);
[imgview setImage:oldimage];
100% working ....
To solve this problem, we need to take care of the device's display resolution.
For example, suppose you have an image with a resolution of 326ppi, which is the same as the iPhone 4, iPhone 4S and iPod touch 4th gen. Then you can simply use the solutions suggested by @Nit and @Peko. But for other devices (or for images with a different resolution on these devices) you will need to do some maths to calculate the size for proper display.
Now suppose you have a 260ppi image (with dimensions W x H) and you wish to display it on an iPhone 4S. Since the information it contains per inch is less than the display resolution of the iPhone, we need to scale it by a factor of 260/326. So the size you use for the image view is
imageViewWidth = W*(260/326);
imageViewHeight = H*(260/326);
In general:
resizeFactor = imageResolution/deviceDisplayResolution;
imageViewWidth = W*resizeFactor;
imageViewHeight = H*resizeFactor;
Here I am assuming that when we set an image in the image view and resize it, pixels are neither removed from nor added to the image.
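A worked example of that formula (the ppi numbers, image and imageView are illustrative):
CGFloat imageResolution = 260.0f;          // ppi of the downloaded image
CGFloat deviceDisplayResolution = 326.0f;  // ppi of an iPhone 4/4S screen
CGFloat resizeFactor = imageResolution / deviceDisplayResolution; // about 0.8
CGRect frame = imageView.frame;
frame.size.width  = image.size.width  * resizeFactor;
frame.size.height = image.size.height * resizeFactor;
imageView.frame = frame;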
Let the UIImageView do the work by utilizing the contentMode property to do your image resizing for you.
You probably want to display your UIImageView with a static size (the frame property) that represents the maximum size of the image you want to display, and allow the images to resize within that frame relative to their own particular size requirements (overall size, aspect ratio, etc.). You can let the UIImageView do the heavy lifting of dealing with different-sized images by mastering the contentMode property. It has many different settings, one of which is UIViewContentModeScaleAspectFit, which will downsize your image as necessary to fit within the UIImageView; if the image is smaller, it will simply be displayed centered. You can play with the settings to get the results you want.
Note that with this approach, there is nothing special you need to do to deal with scaling issues associated with a Retina display.
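A small sketch of that approach (the frame size and downloadedImage are illustrative):
// A fixed-size image view; contentMode copes with whatever size the web service returns.
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 300, 300)];
imageView.contentMode = UIViewContentModeScaleAspectFit; // downscale to fit, center if smaller
imageView.image = downloadedImage;
[self.view addSubview:imageView];
[imageView release]; // under MRC; drop this line if using ARC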
As per the requirement you stated in the question body, I believe you need not change UIImageView size.
The image can be displayed fully, without any blur, using this line of code:
imageView.contentMode = UIViewContentModeScaleAspectFit;

UIImage stretchableImageWithLeftCapWidth

In iOS, does any UIImage support stretchableImageWithLeftCapWidth:? Does that mean it autoresizes the UIImage?
First, this is deprecated, replaced by the more powerful resizableImageWithCapInsets:. However, that is only supported by iOS 5.0 and above.
stretchableImageWithLeftCapWidth:topCapHeight: does not resize the image you call it on. It returns a new UIImage. All UIImages may be drawn at different sizes, but a capped image responds to resizing by drawing its caps at the corners, and then filling the remaining space.
When is this useful? When we want to make buttons out of an image, as in this tutorial for the iOS 5 version.
The following code is a UIView drawRect method which illustrates the difference between a regular UIImage and a stretchable image with caps. The image used for stretch.png came from http://commons.wikimedia.org/wiki/Main_Page.
- (void) drawRect:(CGRect)rect;
{
CGRect bounds = self.bounds;
UIImage *sourceImage = [UIImage imageNamed:@"stretch.png"];
// Cap sizes should be carefully chosen for an appropriate part of the image.
UIImage *cappedImage = [sourceImage stretchableImageWithLeftCapWidth:64 topCapHeight:71];
CGRect leftHalf = CGRectMake(bounds.origin.x, bounds.origin.y, bounds.size.width/2, bounds.size.height);
CGRect rightHalf = CGRectMake(bounds.origin.x+bounds.size.width/2, bounds.origin.y, bounds.size.width/2, bounds.size.height);
[sourceImage drawInRect:leftHalf];
[cappedImage drawInRect:rightHalf];
UIFont *font = [UIFont systemFontOfSize:[UIFont systemFontSize]];
[#"Stretching a standard UIImage" drawInRect:leftHalf withFont:font];
[#"Stretching a capped UIImage" drawInRect:rightHalf withFont:font];
}
Output:
I have written a category method for maintaining compatibility:
- (UIImage *) resizableImageWithSize:(CGSize)size
{
if( [self respondsToSelector:@selector(resizableImageWithCapInsets:)] )
{
return [self resizableImageWithCapInsets:UIEdgeInsetsMake(size.height, size.width, size.height, size.width)];
} else {
return [self stretchableImageWithLeftCapWidth:size.width topCapHeight:size.height];
}
}
Just put that into the UIImage category you already have (or make a new one).
This only supports the old style of stretchable resizing; if you need more complex stretchable-image resizing, you can only do that on iOS 5 using resizableImageWithCapInsets: directly.
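Typical usage of the category method might be (the button and image names are illustrative):
UIImage *background = [[UIImage imageNamed:@"buttonBackground.png"]
                       resizableImageWithSize:CGSizeMake(12.0f, 12.0f)];
[myButton setBackgroundImage:background forState:UIControlStateNormal];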

UIImage from UIView: higher than on-screen resolution?

I've got a UIView which I'm rendering to a UIImage via the typical UIGraphicsBeginImageContextWithOptions method, using a scale of 2.0 so the image output will always be the "retina display" version of what would show up onscreen, regardless of the user's actual screen resolution.
The UIView I'm rendering contains both images and text (UIImages and UILabels).  The image is appearing in the rendered UIImage at its full resolution, and looks great.  But the UILabels appear to have been rasterized at a 1.0 scale and then upscaled to 2.0, resulting in blurry text.
Is there something I'm doing wrong, or is there some way to get the text to render nice and crisp at the higher scale level?  Or is there some way to do this other than using the scaling parameter of UIGraphicsBeginImageContextWithOptions that would have better results?   Thanks!
The solution is to change the labels' contentsScale to 2 before you draw, then set it back immediately thereafter. I just coded up a project to verify it, and it's working just fine, making a 2x image on a normal retina phone (simulator). [If you have a public place I can put it, let me know.]
EDIT: the extended code walks the subviews and any container UIViews to set/unset the scale
- (IBAction)snapShot:(id)sender
{
[self changeScaleforView:snapView scale:2];
UIGraphicsBeginImageContextWithOptions(snapView.bounds.size, snapView.opaque, 2);
[snapView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageDisplay.image = img; // contentsScale
imageDisplay.contentMode = UIViewContentModeScaleAspectFit;
[self changeScaleforView:snapView scale:1];
}
- (void)changeScaleforView:(UIView *)aView scale:(CGFloat)scale
{
[aView.subviews enumerateObjectsUsingBlock:^void(UIView *v, NSUInteger idx, BOOL *stop)
{
if([v isKindOfClass:[UILabel class]]) {
v.layer.contentsScale = scale;
} else
if([v isKindOfClass:[UIImageView class]]) {
// labels and images
// v.layer.contentsScale = scale; won't work
// if the image is not "@2x", you could subclass UIImageView and set the name of the @2x
// on it as a property, then here you would set this imageNamed as the image, then undo it later
} else
if([v isMemberOfClass:[UIView class]]) {
// container view
[self changeScaleforView:v scale:scale];
}
} ];
}
Try rendering to an image with double size, and then create the scaled image:
UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
// Do stuff
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
newImage=[UIImage imageWithCGImage:[newImage CGImage] scale:2.0 orientation:UIImageOrientationUp];
Where:
size = realSize * scale;
I have been struggling with much the same oddities in the context of text-view-to-PDF rendering. I found out that there are some documented properties on the CALayer objects which make up the view. Maybe setting the rasterizationScale of the relevant (sub)layer(s) helps.
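Something along these lines might be worth trying (a sketch only; textView stands in for whatever view you are rendering, and accessing the layer properties requires importing QuartzCore):
// Bump the scale-related layer properties before rendering, restore them afterwards.
CGFloat renderScale = 2.0f;
textView.layer.contentsScale = renderScale;
textView.layer.rasterizationScale = renderScale;
UIGraphicsBeginImageContextWithOptions(textView.bounds.size, textView.opaque, renderScale);
[textView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
textView.layer.contentsScale = [[UIScreen mainScreen] scale];
textView.layer.rasterizationScale = [[UIScreen mainScreen] scale];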