Rounded Rect / Rounded corners for images in UITableView - iPhone

I use this category to create images for my UITableView that are all the same size. Is there a way to have the images have rounded corners as well? Thanks!
+ (UIImage *)scale:(UIImage *)image toSize:(CGSize)size
{
    UIGraphicsBeginImageContext(size);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
Edit: I then put the image, along with other object info, into an NSDictionary that backs the UITableView. I tried changing the UIImageView's layer in cellForRowAtIndexPath, but it doesn't seem to do the trick:
cell.TitleLabel.text = [dict objectForKey:@"Name"];
cell.CardImage.image = [dict objectForKey:@"Image"];
cell.CardImage.layer.cornerRadius = 5.0;

You can add clipping to the drawing operation; the UIBezierPath class makes this super easy.
Extend your code to:
+ (UIImage *)scale:(UIImage *)image toSize:(CGSize)size
{
    UIGraphicsBeginImageContext(size);
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    // clip all subsequent drawing in this context to a rounded rect
    [[UIBezierPath bezierPathWithRoundedRect:rect cornerRadius:5] addClip];
    [image drawInRect:rect];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
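For context, a minimal sketch of how this might be called from cellForRowAtIndexPath, assuming the category method is declared on UIImage (the class name and target size here are assumptions):
cell.TitleLabel.text = [dict objectForKey:@"Name"];
// scale and round-corner the raw image before handing it to the cell
cell.CardImage.image = [UIImage scale:[dict objectForKey:@"Image"] toSize:CGSizeMake(50.0, 50.0)];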

Try this
image.layer.cornerRadius = 5;

Include the QuartzCore framework and import CALayer.h, then:
image.layer.cornerRadius = 5;

As Sisu and the.evangelist said: image.layer.cornerRadius = 5;
But you may need to also add:
[image.layer setMasksToBounds:YES];
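Putting the two together for the question's cell (a sketch; CardImage is the UIImageView from the question):
cell.CardImage.image = [dict objectForKey:@"Image"];
cell.CardImage.layer.cornerRadius = 5.0;
cell.CardImage.layer.masksToBounds = YES; // without this, the corner radius never shows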

Related

Extract a part of UIImageView

I was wondering if it's possible to "extract" part of a UIImageView.
For example, I select part of the UIImageView using Warp Affine, and I know the frame of the selected part.
Is it possible to get only the selected part from the original UIImageView without losing quality?
Get the snapshot of the view via a category method:
@implementation UIView (Snapshot)
- (UIImage *)makeSnapshot
{
    CGRect wholeRect = self.bounds;
    UIGraphicsBeginImageContextWithOptions(wholeRect.size, YES, [UIScreen mainScreen].scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // fill with black first so transparent regions have a defined background
    [[UIColor blackColor] set];
    CGContextFillRect(ctx, wholeRect);
    [self.layer renderInContext:ctx]; // renderInContext: requires QuartzCore
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
@end
then crop it to your rect via another category method:
@implementation UIImage (Crop)
- (UIImage *)cropFromRect:(CGRect)fromRect
{
    // convert from points to pixels, since CGImage works in pixels
    fromRect = CGRectMake(fromRect.origin.x * self.scale,
                          fromRect.origin.y * self.scale,
                          fromRect.size.width * self.scale,
                          fromRect.size.height * self.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, fromRect);
    UIImage *crop = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return crop;
}
@end
in your VC:
UIImage* snapshot = [self.imageView makeSnapshot];
UIImage* imageYouNeed = [snapshot cropFromRect:selectedRect];
selectedRect should be in self.imageView's coordinate system; if it isn't, convert it first:
selectedRect = [self.imageView convertRect:selectedRect fromView:...];
Yes, it's possible. First you should get the UIImageView's image, using this property:
@property (nonatomic, retain) UIImage *image;
Then use UIImage's:
@property (nonatomic, readonly) CGImageRef CGImage;
Then you get the cut image:
CGImageRef cutImage = CGImageCreateWithImageInRect(yourCGImageRef, CGRectMake(x, y, w, h));
If you want a UIImage again, you should use this UIImage method:
+ (UIImage *)imageWithCGImage:(CGImageRef)cgImage;
PS: I don't know how to do it directly, without converting to a CGImageRef; maybe there's a way.
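Regarding the PS: one way to stay in UIKit entirely is to draw the image offset into a context the size of the crop, so only the selected part lands inside it. A minimal sketch of that idea (the method name is made up):
// crop by drawing: shift the image so fromRect lines up with the context origin
- (UIImage *)cropImage:(UIImage *)image toRect:(CGRect)fromRect
{
    UIGraphicsBeginImageContextWithOptions(fromRect.size, NO, image.scale);
    [image drawAtPoint:CGPointMake(-fromRect.origin.x, -fromRect.origin.y)];
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return cropped;
}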

How to merge 2 images without using set alpha?

I am a new iPhone developer.
I want to merge two images and get just one image in a UIImageView, without setting alpha.
This is my code. It works, but it uses alpha;
I want to do it without setting alpha.
My code:
- (UIImage *)maskingImage:(UIImage *)image
{
    CGSize sizeR = CGSizeMake(200, 220);
    // UIImage *textureImage = [UIImage imageNamed:@"tt.png"];
    UIImage *textureImage = imgView2.image;
    UIGraphicsBeginImageContextWithOptions(sizeR, YES, textureImage.scale);
    [textureImage drawInRect:CGRectMake(0.0, 0.0, 200, 220)];
    UIImage *bottomImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext(); // balance the first context
    UIImage *upperImage = image;
    CGSize newSize = sizeR;
    UIGraphicsBeginImageContext(newSize);
    [bottomImage drawInRect:CGRectMake(0.0, 0.0, 200, 220)];
    [upperImage drawInRect:CGRectMake(0.0, 0.0, 200, 220) blendMode:kCGBlendModeNormal alpha:0.5];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Thanks in advance.
UIGraphicsBeginImageContext(YOUR SIZE);
//FIRST IMAGE
[FIRST_IMAGE drawInRect:CGRectMake(0, 0, YOUR_SIZE_WIDTH/2, YOUR_SIZE_HEIGHT)];
//SECOND IMAGE
[SECOND_IMAGE drawInRect:CGRectMake(YOUR_SIZE_WIDTH/2, 0, YOUR_SIZE_WIDTH/2, YOUR_SIZE_HEIGHT)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
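Wrapped into a helper, under assumed names (and using UIGraphicsBeginImageContextWithOptions so the result respects retina scale), this might look like:
- (UIImage *)imageByJoining:(UIImage *)left with:(UIImage *)right size:(CGSize)size
{
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    // left image fills the left half, right image fills the right half
    [left drawInRect:CGRectMake(0, 0, size.width / 2, size.height)];
    [right drawInRect:CGRectMake(size.width / 2, 0, size.width / 2, size.height)];
    UIImage *joined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return joined;
}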
use this function:
- (UIImage *)mergeImage:(UIImage *)imageA
              withImage:(UIImage *)imageB
               strength:(float)strength
                      X:(float)x
                      Y:(float)y
{
    UIGraphicsBeginImageContextWithOptions(CGSizeMake([imageA size].width, [imageA size].height), NO, 0.0);
    [imageA drawAtPoint:CGPointMake(0, 0)];
    [imageB drawAtPoint:CGPointMake(x, y)
              blendMode:kCGBlendModeNormal // you can play with this
                  alpha:strength];         // 0 - 1
    UIImage *mergedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return mergedImage;
}
Here x and y are the placement where you want the second image to appear.
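For example (photo and badge are hypothetical image variables), stamping a second image fully opaque at (10, 10):
// strength 1.0 draws the second image with no transparency at all
UIImage *result = [self mergeImage:photo withImage:badge strength:1.0 X:10.0 Y:10.0];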
I just faced the same problem, and this is the solution I found:
CGSize newSize = CGSizeMake(320, 377);
UIGraphicsBeginImageContext( newSize );
// Use existing opacity as is
[image1 drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
// Apply supplied opacity
[image2 drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Try this; it worked like a charm for me. I hope it solves it for you as well.
If you want the top image drawn fully opaque (i.e. no transparency at all), use alpha 1.0:
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];

Break down a picture in Xcode for an iPhone app

Is there any way I can have the user upload an image into the app, say a 50x150 pixel image, and break it into three 50x50 pixel images?
If so, can someone help me select certain pixels and break it into several images?
Thank you!
Use this code...
// In the following method, the inRect:(CGRect)rect parameter should be 50x50, or whatever your requirements call for.
- (UIImage *)imageFromImage:(UIImage *)image inRect:(CGRect)rect {
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef scale:1.0 orientation:image.imageOrientation];
    CGImageRelease(newImageRef);
    return newImage;
}
Hope this helps you... enjoy!
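Applied to the question's 50x150 upload (sourceImage is a hypothetical variable), the three 50x50 pieces come from stepping the rect down the image:
UIImage *piece1 = [self imageFromImage:sourceImage inRect:CGRectMake(0, 0, 50, 50)];
UIImage *piece2 = [self imageFromImage:sourceImage inRect:CGRectMake(0, 50, 50, 50)];
UIImage *piece3 = [self imageFromImage:sourceImage inRect:CGRectMake(0, 100, 50, 50)];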
Define a category on UIImage that gives you a great cropping method:
- (UIImage *)cropImageInRect:(CGRect)cropRect
{
    CGImageRef image = CGImageCreateWithImageInRect(self.CGImage, cropRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    return croppedImage;
}
Now with this category you can easily do what you want (the 50x150 source is cut top to bottom, so the rects step down the y axis):
UIImage *original = ...;
UIImage *top = [original cropImageInRect:CGRectMake(0.0, 0.0, 50.0, 50.0)];
UIImage *middle = [original cropImageInRect:CGRectMake(0.0, 50.0, 50.0, 50.0)];
UIImage *bottom = [original cropImageInRect:CGRectMake(0.0, 100.0, 50.0, 50.0)];
I needed this, too. Added to a utils category on UIImage:
// UIImage+Utls.h
@interface UIImage (UIImage_Utls)
- (UIImage *)subimageInRect:(CGRect)rect;
- (NSArray *)subimagesHorizontally:(NSInteger)count;
@end
// UIImage+Utls.m
#import "UIImage+Utls.h"
@implementation UIImage (UIImage_Utls)
- (UIImage *)subimageInRect:(CGRect)rect {
    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], rect);
    UIImage *answer = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return answer;
}
- (NSArray *)subimagesHorizontally:(NSInteger)count {
    NSMutableArray *answer = [NSMutableArray arrayWithCapacity:count];
    CGFloat width = self.size.width / count;
    CGRect rect = CGRectMake(0.0, 0.0, width, self.size.height);
    for (int i = 0; i < count; i++) {
        [answer addObject:[self subimageInRect:rect]];
        rect = CGRectOffset(rect, width, 0.0);
    }
    return [NSArray arrayWithArray:answer];
}
@end
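Usage might look like this (a sketch with a hypothetical asset name; note that subimagesHorizontally: slices along the width, so for a tall 50x150 image you would call subimageInRect: three times instead):
#import "UIImage+Utls.h"
UIImage *sheet = [UIImage imageNamed:@"sprites.png"]; // hypothetical asset
NSArray *pieces = [sheet subimagesHorizontally:3];    // three equal-width slices
UIImage *first = [pieces objectAtIndex:0];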

merging a stretchable UIImage with a "normal" one

I'd like to combine two UIImages, one stretchable and one "normal" one. The problem is that if I merge the images using a UIGraphicsImageContext, the second image is also stretched (it is on top of the first one, as it should be, but stretched). Does anybody know how to avoid this?
Thanks a lot!
calls from my ViewController:
UIImage *stretchImage = [[UIImage imageNamed:@"stretchableLeft.png"] stretchableImageWithLeftCapWidth:0.0 topCapHeight:16.0];
stretchImage = [self imageWithImage:stretchImage scaledToSize:CGSizeMake(64.0, 64.0)];
stretchImage = [self mergeImageWithImage:stretchImage secondImage:[UIImage imageNamed:@"topImage.png"]]; // only 40x40 px
the two methods are:
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

- (UIImage *)mergeImageWithImage:(UIImage *)image secondImage:(UIImage *)image2
{
    UIGraphicsBeginImageContext(image.size);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    [image2 drawInRect:CGRectMake(10, 10, image.size.width, image.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
I think the issue is that you're asking both images to draw in the full rectangle. That is causing your second image to stretch.
Try using image2.size for image2 when merging the images. You'll have to adjust the placement using the x/y coordinates when drawing the rectangle.
- (UIImage *)mergeImageWithImage:(UIImage *)image secondImage:(UIImage *)image2
{
    UIGraphicsBeginImageContext(image.size);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    // draw the second image at its own size so it is not stretched
    [image2 drawInRect:CGRectMake(10, 10, image2.size.width, image2.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
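If the smaller image should be centered on the larger one rather than pinned at (10, 10), the origin can be derived from the two sizes; a small sketch:
CGFloat x = (image.size.width - image2.size.width) / 2.0;
CGFloat y = (image.size.height - image2.size.height) / 2.0;
[image2 drawInRect:CGRectMake(x, y, image2.size.width, image2.size.height)
         blendMode:kCGBlendModeNormal alpha:1.0];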

Setting a UIImage border

Is there a way to set a border on a UIImage? I know how to set one on a UIImageView, but my problem is that the UIImage I load into the UIImageView won't be the same size or aspect ratio as the view, so I've set the UIImageView's mode to aspect fit. Giving the UIImageView a border would outline the entire view rather than just the UIImage, and that doesn't look good when the UIImage doesn't match the UIImageView's size or aspect ratio.
Help?
There are some solutions in this question: How can i take an UIImage and give it a black border?
#import <QuartzCore/CALayer.h>
UIImageView *imageView = [[UIImageView alloc] init];
imageView.layer.masksToBounds = YES;
imageView.layer.borderColor = [UIColor blackColor].CGColor;
imageView.layer.borderWidth = 1;
imageView.layer.cornerRadius = 10; // optional
Image with border:
+ (UIImage *)imageWithBorderFromImage:(UIImage *)source
{
    CGSize size = [source size];
    UIGraphicsBeginImageContext(size);
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    [source drawInRect:rect blendMode:kCGBlendModeNormal alpha:1.0];
    CGContextRef context = UIGraphicsGetCurrentContext();
    // stroke a white, fully opaque border around the image's edge
    CGContextSetRGBStrokeColor(context, (255/255.f), (255/255.f), (255/255.f), 1.0);
    CGContextStrokeRect(context, rect);
    UIImage *testImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return testImg;
}
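A possible call site (a sketch; the answer doesn't say where the class method lives, so putting it in a UIImage category is an assumption):
// keep aspect-fit on the view; the border is now baked into the image itself
imageView.image = [UIImage imageWithBorderFromImage:sourceImage];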