Break down a picture in Xcode for an iPhone app

Is there any way that I can have the user upload an image into the app (say, a 50×150-pixel image) and break it into three 50×50-pixel images?
If so, can someone help me select the relevant pixels and split the image into several images?
Thank you!

Use this code:
// In the following method, inRect:(CGRect)rect should be 50x50,
// or whatever size your requirements call for.
- (UIImage *)imageFromImage:(UIImage *)image inRect:(CGRect)rect {
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef scale:1.0 orientation:image.imageOrientation];
    CGImageRelease(newImageRef);
    return newImage;
}
Hope this helps.

Define a category on UIImage that gives you a convenient cropping method:
- (UIImage *)cropImageInRect:(CGRect)cropRect
{
    CGImageRef image = CGImageCreateWithImageInRect(self.CGImage, cropRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    return croppedImage;
}
Now with this category you can easily do what you want (the rects step down the 50×150 image vertically):
UIImage *original = ...;
UIImage *top = [original cropImageInRect:CGRectMake(0.0, 0.0, 50.0, 50.0)];
UIImage *middle = [original cropImageInRect:CGRectMake(0.0, 50.0, 50.0, 50.0)];
UIImage *bottom = [original cropImageInRect:CGRectMake(0.0, 100.0, 50.0, 50.0)];

I needed this, too. I added it to a utility category on UIImage:
// UIImage+Utls.h
@interface UIImage (UIImage_Utls)
- (UIImage *)subimageInRect:(CGRect)rect;
- (NSArray *)subimagesHorizontally:(NSInteger)count;
@end

// UIImage+Utls.m
#import "UIImage+Utls.h"
@implementation UIImage (UIImage_Utls)
- (UIImage *)subimageInRect:(CGRect)rect {
    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], rect);
    UIImage *answer = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return answer;
}

- (NSArray *)subimagesHorizontally:(NSInteger)count {
    NSMutableArray *answer = [NSMutableArray arrayWithCapacity:count];
    CGFloat width = self.size.width / count;
    CGRect rect = CGRectMake(0.0, 0.0, width, self.size.height);
    for (int i = 0; i < count; i++) {
        [answer addObject:[self subimageInRect:rect]];
        rect = CGRectOffset(rect, width, 0.0);
    }
    return [NSArray arrayWithArray:answer];
}
@end
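Usage is then a one-liner. A sketch, assuming here a 150×50 strip laid out as three tiles side by side (the asset name is hypothetical):

```objc
// Hypothetical usage; 'strip' is assumed to be a 150x50 (points) UIImage.
UIImage *strip = [UIImage imageNamed:@"strip"];
NSArray *tiles = [strip subimagesHorizontally:3];
UIImage *first = tiles[0]; // each tile is 50x50 in points
```

Note that CGImageCreateWithImageInRect measures its rect in pixels while UIImage's size is in points, so for Retina images you may need to multiply the rects by self.scale, as later answers here do.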

Related

Image Cropping issue

I want to crop a UIImage with the following code:
- (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
    CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
    // or use the UIImage wherever you like
    UIImage *img = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return img;
}
This code works fine in the simulator but gives unusual results on the device.
Create a UIImage category and try adding this. The rect is multiplied by the image's scale, which is what the simulator/device difference usually comes down to on Retina screens:
@implementation UIImage (Crop)
- (UIImage *)crop:(CGRect)cropRect {
    // Convert from points to pixels; CGImageCreateWithImageInRect works in pixels.
    cropRect = CGRectMake(cropRect.origin.x * self.scale,
                          cropRect.origin.y * self.scale,
                          cropRect.size.width * self.scale,
                          cropRect.size.height * self.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], cropRect);
    UIImage *result = [UIImage imageWithCGImage:imageRef
                                          scale:self.scale
                                    orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}
@end
Try this one:
- (UIImage *)cropImage:(UIImage *)oldImage {
    CGSize imageSize = oldImage.size;
    // Note: the 150-point height reduction and the -80-point vertical offset
    // are hard-coded for one particular layout.
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(imageSize.width, imageSize.height - 150), NO, 0.);
    [oldImage drawAtPoint:CGPointMake(0, -80) blendMode:kCGBlendModeCopy alpha:1.];
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return croppedImage;
}
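The same draw-with-offset technique can be generalized to an arbitrary crop rect; a sketch (the method name and signature are mine, not from the answer above):

```objc
// Hypothetical generalization: crop by redrawing with a negative offset.
- (UIImage *)cropImage:(UIImage *)oldImage toRect:(CGRect)rect {
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.);
    // Shift the image so that 'rect' lands at the context's origin.
    [oldImage drawAtPoint:CGPointMake(-rect.origin.x, -rect.origin.y)
                blendMode:kCGBlendModeCopy
                    alpha:1.];
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return croppedImage;
}
```

Unlike CGImageCreateWithImageInRect, this works in points and respects the image's orientation, so no manual scale math is needed.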

Extract a part of UIImageView

I was wondering if it's possible to "extract" a part of a UIImageView.
For example, I select a part of the UIImageView using Warp Affine, and I know the selected part's frame,
like in this image:
Is it possible to get only the selected part from the original UIImageView without losing quality?
Get a snapshot of the view via a category method:
@implementation UIView (Snapshot)
- (UIImage *)makeSnapshot
{
    CGRect wholeRect = self.bounds;
    UIGraphicsBeginImageContextWithOptions(wholeRect.size, YES, [UIScreen mainScreen].scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    [[UIColor blackColor] set];
    CGContextFillRect(ctx, wholeRect);
    [self.layer renderInContext:ctx];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
@end
then crop it to your rect via another category method:
@implementation UIImage (Crop)
- (UIImage *)cropFromRect:(CGRect)fromRect
{
    fromRect = CGRectMake(fromRect.origin.x * self.scale,
                          fromRect.origin.y * self.scale,
                          fromRect.size.width * self.scale,
                          fromRect.size.height * self.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, fromRect);
    UIImage *crop = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return crop;
}
@end
in your VC:
UIImage *snapshot = [self.imageView makeSnapshot];
UIImage *imageYouNeed = [snapshot cropFromRect:selectedRect];
selectedRect should be in self.imageView's coordinate system; if it isn't, convert it with
selectedRect = [self.imageView convertRect:selectedRect fromView:...];
Yes, it's possible. First, get the UIImageView's image via this property:
@property(nonatomic, retain) UIImage *image;
And UIImage's:
@property(nonatomic, readonly) CGImageRef CGImage;
Then you get the cut image:
CGImageRef cutImage = CGImageCreateWithImageInRect(yourCGImageRef, CGRectMake(x, y, w, h));
If you want a UIImage again, use this UIImage method:
+ (UIImage *)imageWithCGImage:(CGImageRef)cgImage;
PS: I don't know how to do it directly, without converting to a CGImageRef; maybe there's a way.
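Putting those steps together, a sketch (the imageView name and the crop rect are placeholders):

```objc
// Hypothetical example combining the steps above.
UIImage *source = imageView.image;
CGImageRef cutImage = CGImageCreateWithImageInRect(source.CGImage,
                                                   CGRectMake(10, 10, 100, 100));
UIImage *result = [UIImage imageWithCGImage:cutImage];
CGImageRelease(cutImage); // CGImageCreateWithImageInRect follows the Create rule
```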

Why some UIImages don't show up on iPhone

Hi, I am currently developing a small app on iOS 4.3, using Objective-C.
As part of the app I need to manipulate an image that I have downloaded from the web.
The following code shows up as a missing image
(the original is in a class, but I put this together as a test scenario so that it can easily be copy-pasted):
- (void)viewDidLoad
{
    [super viewDidLoad];
    [self loadImage:@"http://www.night-net.net/images/ms/microsoft_vista_home_basic.jpg"];
    [self getCroped:CGRectMake(10, 50, 80, 160)];
    [self getCroped:CGRectMake(90, 50, 80, 80)];
    [self getCroped:CGRectMake(90, 130, 40, 80)];
    [self getCroped:CGRectMake(130, 130, 40, 40)];
    [self getCroped:CGRectMake(130, 170, 40, 40)];
}
- (void)loadImage:(NSString *)url
{
    _data = [NSData dataWithContentsOfURL:[NSURL URLWithString:url]];
}

- (UIImageView *)getCroped:(CGRect)imageSize {
    UIImage *temp = [[UIImage alloc] initWithData:_data];
    UIImage *myImage = [self resizedImage:temp and:CGSizeMake(160, 160) interpolationQuality:kCGInterpolationHigh];
    UIImage *image = [self croppedImage:myImage and:imageSize];
    UIImageView *imageView = [[UIImageView alloc] init];
    imageView.image = image;
    imageView.frame = imageSize;
    [[self view] addSubview:imageView];
    return imageView;
}

- (UIImage *)croppedImage:(UIImage *)image and:(CGRect)bounds {
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], bounds);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImage;
}

- (UIImage *)resizedImage:(UIImage *)image and:(CGSize)newSize interpolationQuality:(CGInterpolationQuality)quality {
    BOOL drawTransposed = NO;
    return [self resizedImage:image
                          and:newSize
                    transform:[self transformForOrientation:newSize]
               drawTransposed:drawTransposed
         interpolationQuality:quality];
}

// Returns a copy of the image that has been transformed using the given affine transform and scaled to the new size.
// The new image's orientation will be UIImageOrientationUp, regardless of the current image's orientation.
// If the new size is not integral, it will be rounded up.
- (UIImage *)resizedImage:(UIImage *)image and:(CGSize)newSize
                transform:(CGAffineTransform)transform
           drawTransposed:(BOOL)transpose
     interpolationQuality:(CGInterpolationQuality)quality {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGRect transposedRect = CGRectMake(0, 0, newRect.size.height, newRect.size.width);
    CGImageRef imageRef = image.CGImage;
    // Build a context that's the same dimensions as the new size
    CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                                newRect.size.width,
                                                newRect.size.height,
                                                CGImageGetBitsPerComponent(imageRef),
                                                0,
                                                CGImageGetColorSpace(imageRef),
                                                CGImageGetBitmapInfo(imageRef));
    // Rotate and/or flip the image if required by its orientation
    CGContextConcatCTM(bitmap, transform);
    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(bitmap, quality);
    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);
    // Get the resized image from the context as a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    // Clean up
    CGContextRelease(bitmap);
    CGImageRelease(newImageRef);
    return newImage;
}

// Returns an affine transform that takes into account the image orientation when drawing a scaled image
- (CGAffineTransform)transformForOrientation:(CGSize)newSize {
    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformTranslate(transform, newSize.width, 0);
    transform = CGAffineTransformScale(transform, -1, 1);
    return transform;
}
At first I thought this was caused by a lack of memory, but I have tested for that and it doesn't seem to be the problem. Thanks in advance, ofir
I've had issues in the past with images not appearing within UIWebViews if they contain unicode characters in the filename. I wonder if this might be the same thing. Try renaming your image?
Doing this should be possible and low on memory cost: as a test I used Flash to create an iPhone app that does the same thing, and it works.
But I would much prefer using Objective-C, so the question still stands.

Rounded Rect / Rounded corners for images in UITableView

I use this category method to make all the images for my UITableView the same size. Is there a way to give the images rounded corners as well? Thanks!
+ (UIImage *)scale:(UIImage *)image toSize:(CGSize)size
{
    UIGraphicsBeginImageContext(size);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
Edit: I then put the image, along with other object info, into an NSDictionary that feeds the UITableView. I tried changing the UIImageView's layer in cellForRowAtIndexPath, but it doesn't seem to do the trick:
cell.TitleLabel.text = [dict objectForKey:@"Name"];
cell.CardImage.image = [dict objectForKey:@"Image"];
cell.CardImage.layer.cornerRadius = 5.0;
You can add clipping to the drawing operation; the UIBezierPath class makes this super easy.
Extend your code to:
+ (UIImage *)scale:(UIImage *)image toSize:(CGSize)size
{
    UIGraphicsBeginImageContext(size);
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    [[UIBezierPath bezierPathWithRoundedRect:rect cornerRadius:5] addClip];
    [image drawInRect:rect];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
Try this:
image.layer.cornerRadius = 5;
Include the QuartzCore framework.

Import CALayer.h:
image.layer.cornerRadius = 5;
As Sisu and the.evangelist said: image.layer.cornerRadius = 5;
But you may also need to add:
[image.layer setMasksToBounds:YES];
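Put together in cellForRowAtIndexPath, a sketch (the cell and CardImage outlet names follow the question's code):

```objc
// Requires the QuartzCore framework (#import <QuartzCore/QuartzCore.h>).
cell.CardImage.image = [dict objectForKey:@"Image"];
cell.CardImage.layer.cornerRadius = 5.0;
cell.CardImage.layer.masksToBounds = YES; // without this the corners are not clipped
```

Rounding via the layer happens at display time; the clipping approach above instead bakes the rounded corners into the image itself, which is cheaper when the table scrolls.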

Mask white color out of PNG image on iPhone

I know that there are already two relevant posts on this, but they're not clear...
So the case is this:
I have a UIImage which consists of a PNG file requested from a URL on the net.
Is it possible to mask out the white color so it becomes transparent?
Everything I have tried so far with CGImageCreateWithMaskingColors returns a white image...
Any help would be precious :)
OK, since I found a solution that works fine, I am answering my own question; I believe this will be useful to many people with the same need.
First of all, I thought it would be nice to extend UIImage via a category:
UIImage+Utils.h
#import <UIKit/UIKit.h>
@interface UIImage (Utils)
- (UIImage *)imageByClearingWhitePixels;
@end
UIImage+Utils.m
#import "UIImage+Utils.h"
@implementation UIImage (Utils)
#pragma mark Private Methods
- (CGImageRef)CopyImageAndAddAlphaChannel:(CGImageRef)sourceImage {
    CGImageRef retVal = NULL;
    size_t width = CGImageGetWidth(sourceImage);
    size_t height = CGImageGetHeight(sourceImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef offscreenContext = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace, kCGImageAlphaPremultipliedFirst);
    if (offscreenContext != NULL) {
        CGContextDrawImage(offscreenContext, CGRectMake(0, 0, width, height), sourceImage);
        retVal = CGBitmapContextCreateImage(offscreenContext);
        CGContextRelease(offscreenContext);
    }
    CGColorSpaceRelease(colorSpace);
    return retVal;
}

- (UIImage *)getMaskedArtworkFromPicture:(UIImage *)image withMask:(UIImage *)mask {
    UIImage *maskedImage;
    CGImageRef imageRef = [self CopyImageAndAddAlphaChannel:image.CGImage];
    CGImageRef maskRef = mask.CGImage;
    CGImageRef maskToApply = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                               CGImageGetHeight(maskRef),
                                               CGImageGetBitsPerComponent(maskRef),
                                               CGImageGetBitsPerPixel(maskRef),
                                               CGImageGetBytesPerRow(maskRef),
                                               CGImageGetDataProvider(maskRef), NULL, NO);
    CGImageRef masked = CGImageCreateWithMask(imageRef, maskToApply);
    maskedImage = [UIImage imageWithCGImage:masked];
    CGImageRelease(imageRef);
    CGImageRelease(maskToApply);
    CGImageRelease(masked);
    return maskedImage;
}
#pragma mark Public Methods
- (UIImage *)imageByClearingWhitePixels {
    // Copy image bitmaps
    float originalWidth = self.size.width;
    float originalHeight = self.size.height;
    CGSize newSize = CGSizeMake(originalWidth, originalHeight);
    UIGraphicsBeginImageContext(newSize);
    [self drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Clear white color by masking with self
    newImage = [self getMaskedArtworkFromPicture:newImage withMask:newImage];
    return newImage;
}
@end
Finally, I am able to use it like this (after importing UIImage+Utils.h, of course):
UIImage *myImage = [...]; //A local png file or a file from a url
UIImage *myWhitelessImage = [myImage imageByClearingWhitePixels]; // Hooray!
It is not really possible; at least there is no easy way, and certainly no single API call. It is basically an image-editing problem, usually done in Photoshop. Most images that have a transparent background were created that way.