Divide an image into two parts using a divider - iPhone

I'm working on an app where I need to divide an image into two parts using a red line:
left part for labels
right part for prices
Question 1.
How can I draw a red line on image?
Question 2.
How can I divide the image into two parts using the red line? (The red line's position is not fixed; the user can move it wherever they want.)
Question 3.
How can I get the line's current position, and how can I use that position to divide the image?
Thanks in advance

I would approach this in somewhat the same manner as koray was suggesting:
1) I am assuming that your above image/view is going to be managed by a view controller, which I will call ImageSeperatorViewController from here on.
Inside of ImageSeperatorViewController, insert koray's code in the -(void)viewDidLoad method. Make sure you change the imageToSplit variable to be a UIImageView instead of a plain UIView.
2) Next, I assume that you know how to detect user gestures. You will detect these gestures and determine whether the user has selected the view (i.e. bar in koray's code). Once you have determined that the user has selected bar, just update its origin's X position with the touch position:
CGRect barFrame = bar.frame;
barFrame.origin.x = touchLocation.x; // the X location of the user's touch
bar.frame = barFrame;
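If you go the gesture-recognizer route, a minimal sketch might look like this (treating bar as a property and wiring the recognizer in viewDidLoad are assumptions on my part, not part of koray's code):

```objc
// In viewDidLoad, after creating bar:
//   UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc]
//       initWithTarget:self action:@selector(handlePan:)];
//   [bar addGestureRecognizer:pan];

- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
    // Track the finger's X position and keep the bar centered under it
    CGPoint location = [gesture locationInView:self.view];
    CGRect barFrame = self.bar.frame;
    barFrame.origin.x = location.x - (barFrame.size.width / 2.0f);
    self.bar.frame = barFrame;
}
```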
3) For cropping, I would not use github.com/bilalmughal/NLImageCropper; it will not do what you need.

Try this on for size:
Header:
@interface UIImage (ImageDivider)
- (UIImage*)imageWithDividerAt:(CGFloat)position width:(CGFloat)width color:(UIColor*)color;
- (UIImage*)imageWithDividerAt:(CGFloat)position patternImage:(UIImage*)patternImage;
- (NSArray*)imagesBySlicingAt:(CGFloat)position;
@end
Implementation:
@implementation UIImage (ImageDivider)
- (UIImage*)imageWithDividerAt:(CGFloat)position patternImage:(UIImage*)patternImage
{
//pattern image
UIColor *patternColor = [UIColor colorWithPatternImage:patternImage];
CGFloat width = patternImage.size.width;
//set up context
UIGraphicsBeginImageContext(self.size);
CGContextRef context = UIGraphicsGetCurrentContext();
//draw the existing image into the context
[self drawAtPoint:CGPointZero];
//set the fill color from the pattern image color
CGContextSetFillColorWithColor(context, patternColor.CGColor);
//this is your divider's area
CGRect dividerRect = CGRectMake(position - (width / 2.0f), 0, width, self.size.height);
//the joy of image color patterns being based on 0,0 origin! must set phase
CGContextSetPatternPhase(context, CGSizeMake(dividerRect.origin.x, 0));
//fill the divider rect with the repeating pattern from the image
CGContextFillRect(context, dividerRect);
//get your new image and voilà!
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
- (UIImage*)imageWithDividerAt:(CGFloat)position width:(CGFloat)width color:(UIColor *)color
{
//set up context
UIGraphicsBeginImageContext(self.size);
CGContextRef context = UIGraphicsGetCurrentContext();
//draw the existing image into the context
[self drawAtPoint:CGPointZero];
//set the fill color for your divider
CGContextSetFillColorWithColor(context, color.CGColor);
//this is your divider's area
CGRect dividerRect = CGRectMake(position - (width / 2.0f), 0, width, self.size.height);
//fill the divider's rect with the provided color
CGContextFillRect(context, dividerRect);
//get your new image and voilà!
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
- (NSArray*)imagesBySlicingAt:(CGFloat)position
{
NSMutableArray *slices = [NSMutableArray array];
//first image
{
//context!
UIGraphicsBeginImageContext(CGSizeMake(position, self.size.height));
//draw the existing image into the context
[self drawAtPoint:CGPointZero];
//get your new image and voilà!
[slices addObject:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
}
//second
{
//context!
UIGraphicsBeginImageContext(CGSizeMake(self.size.width - position, self.size.height));
//draw the existing image into the context
[self drawAtPoint:CGPointMake(-position, 0)];
//get your new image and voilà!
[slices addObject:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
}
return slices;
}
@end
The concept is simple - you want an image with the divider drawn over it. You could just overlay a view, or override drawRect:, or any number of any solutions. I'd rather give you this category. It just uses some quick Core Graphics calls to generate an image with your desired divider, be it pattern image or color, at the specified position. If you want support for horizontal dividers as well, it is rather trivial to modify this as such. Bonus: You can use a tiled image as your divider!
Now, to answer your primary question: using the category is rather self-explanatory. Just call one of the two methods on your source background to generate one with the divider, and then apply that image rather than the original source image.
Now, the second question is simple: when the divider has been moved, regenerate the image based on the new divider position. This is actually a relatively inefficient way of doing it, but it ought to be lightweight enough for your purposes, especially since it only matters while the divider is moving. Premature optimization is just as much a sin.
The third question is also simple: call imagesBySlicingAt:. It will return an array of two images, generated by slicing through the image at the provided position. Use them as you wish.
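For instance, a usage sketch might look like the following (self.sourceImage, self.imageView, and the callback names are assumptions about your view controller, not part of the category):

```objc
// Redraw the divider whenever its position changes.
- (void)dividerMovedTo:(CGFloat)position
{
    self.imageView.image = [self.sourceImage imageWithDividerAt:position
                                                          width:5.0f
                                                          color:[UIColor redColor]];
}

// Split into the two halves (labels on the left, prices on the right).
- (void)splitAt:(CGFloat)position
{
    NSArray *slices = [self.sourceImage imagesBySlicingAt:position];
    UIImage *leftImage  = [slices objectAtIndex:0];
    UIImage *rightImage = [slices objectAtIndex:1];
    // hand leftImage / rightImage to whatever displays labels and prices
}
```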
This code has been tested and is functional. I strongly suggest that you fiddle around with it, not for any purpose of utility, but to better understand the mechanisms used, so that next time you can be on the answering side of things.

For cropping, you can try this:
UIImage *image = [UIImage imageNamed:@"yourImage.png"];
CGImageRef tmpImgRef = image.CGImage;
CGImageRef topImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(0, 0, image.size.width, image.size.height / 2.0));
UIImage *topImage = [UIImage imageWithCGImage:topImgRef];
CGImageRelease(topImgRef);
CGImageRef bottomImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(0, image.size.height / 2.0, image.size.width, image.size.height / 2.0));
UIImage *bottomImage = [UIImage imageWithCGImage:bottomImgRef];
CGImageRelease(bottomImgRef);
Hope this helps :)

If you want to draw a line, you could just use a UIView with a red background, make its height the height of your image, and its width around 5 pixels.
UIView *imageToSplit; //the image I'm trying to split using a red bar
CGRect i = imageToSplit.frame;
int x = i.origin.x + i.size.width/2;
int y = i.origin.y;
int width = 5;
int height = i.size.height;
UIView *bar = [[[UIView alloc] initWithFrame:CGRectMake(x, y, width, height)] autorelease];
bar.backgroundColor = [UIColor redColor];
[self.view addSubview:bar];

Related

Image mask without transparent part

I've been looking into this for a while now.
I'm trying to put a gray image over a cup.
Currently I'm stretching a UIImageView with a grey background to the amount the slider has gone:
-(void)sliderChanged:(CGFloat)value {
drinksView.grayArea.frame = CGRectMake(0, -value, 372, value);
}
I know this is very dirty, not what I want...
What I want is for the grey part to only cover the part where there is a cup (i.e. the part where the image is not transparent). The image of the cup just has a transparent background.
Does anybody have an idea of how to achieve this? I'm a noob with masks, and many tutorials have led me nowhere; I don't even know whether it's possible.
P.S.: drawing a path around the cup is not possible, because the cup image can change to a glass.
The easiest way I can think of is to use a masked CALayer. This is what you need to do:
Instead of using a UIImageView, use a CALayer with gray background as your overlay. Your sliderChanged: method would remain untouched, except that drinksView.grayArea would be a layer instead of a view.
So far the effect will be exactly the same as before. Now, you need to set the grayArea's mask. Do the following:
CALayer * maskLayer = [CALayer new];
maskLayer.contents = (id)myCupImage.CGImage;
grayArea.mask = maskLayer;
I think by default the layer will stretch the content as the scale is changed. We don't want that. You can fix this by setting the layer's contentsGravity to, say, kCAGravityTop.
That should do what you want.
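Putting those steps together, a rough sketch (myCupImage, the gray color, and the frames are assumptions; under manual reference counting, prefer the autoreleased +layer over +new as shown here):

```objc
// Sketch: grayArea as a masked CALayer instead of a UIImageView.
CALayer *grayArea = [CALayer layer];
grayArea.backgroundColor = [UIColor grayColor].CGColor;

CALayer *maskLayer = [CALayer layer];
maskLayer.contents = (id)myCupImage.CGImage;
maskLayer.frame = grayArea.bounds;          // assumption: mask covers the whole layer
maskLayer.contentsGravity = kCAGravityTop;  // don't stretch the cup as the frame changes
grayArea.mask = maskLayer;

[drinksView.layer addSublayer:grayArea];
```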
One caveat: I'm not quite sure how masks cope with changing content gravity. If you have issues on that front, you can fix it by adding a container layer:
Set a fixed frame for grayArea (equal to the size of the cup image).
Instead of adding grayArea directly, introduce a container layer for it:
CALayer * container = [CALayer new];
container.masksToBounds = YES;
[container addSublayer:grayArea];
[drinksView.layer addSublayer:container];
In your sliderChanged:, change the frame of container instead of the grayArea.
Hope this works.
First of all, you will need to get familiar with this method:
// masks the item based on the MaskImage
- (UIImage*) itemMask : (UIImage*)image withMask:(UIImage*)maskImage
{
UIImage* afterMasking = nil;
CGImageRef maskRef = maskImage.CGImage;
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef), CGImageGetHeight(maskRef), CGImageGetBitsPerComponent(maskRef), CGImageGetBitsPerPixel(maskRef), CGImageGetBytesPerRow(maskRef), CGImageGetDataProvider(maskRef), NULL, false);
CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
CFRelease(mask);
afterMasking = [UIImage imageWithCGImage:masked];
CFRelease(masked);
return afterMasking;
}
What that does is you feed it your cup image and the mask image together, and it will mask your cup image. It masks only your cup image and nothing underneath, so you don't have to worry.
The problem you have is that the grey box resizes. How I would approach this is to crop the mask according to the slider value. So make a black box and change its size as you do with the grayBG BEFORE you feed it through the method. That should be quick to do, so I won't elaborate, but at the end you will have something like this. Pardon the half-elaborated graphics:
drinksView.grayLayer = [CALayer new];
drinksView.grayLayer.frame = CGRectMake(0, 0, 372, 0);
drinksView.grayLayer.contentsGravity = kCAGravityTop;
CALayer * topLayer = [CALayer new];
topLayer.frame = CGRectMake(0, 0, 372, 367);
UIImage * grayImage = [UIImage imageNamed:@"grayDrink.png"];
topLayer.contents = (id) grayImage.CGImage;
CALayer * maskLayer = [CALayer new];
maskLayer.contents = (id)cupImage.CGImage;
maskLayer.masksToBounds = YES;
maskLayer.bounds = CGRectMake(0, 0, 372, 367);
maskLayer.position = CGPointMake(186,184);
[topLayer setMask:maskLayer];
[drinksView.grayLayer addSublayer:topLayer];
[drinksView.grayLayer setMasksToBounds:YES];
[[drinksView layer]addSublayer:drinksView.grayLayer];

Scale and Save UIImage from Photo Library in iPhone?

I am trying to scale down an image I get from the photo library on touchesMoved by the user, in a similar way to when we take a picture with the camera using UIImagePicker's setEditing:YES (or like the Camera app).
I am trying to use the following method, passing in some parameters based on touchesMoved, but I am not getting the desired effect. What am I possibly doing wrong?
-(UIImage*)scaleToSize:(UIImage *)img:(CGSize)size
{
// Create a bitmap graphics context
// This will also set it as the current context
UIGraphicsBeginImageContext(size);
// Draw the scaled image in the current context
[img drawInRect:CGRectMake(0, 0, size.width, size.height)];
// Create a new image from current context
UIImage* scaledImage = UIGraphicsGetImageFromCurrentImageContext();
// Pop the current context from the stack
UIGraphicsEndImageContext();
// Return our new scaled image
return scaledImage;
}
-(void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
UIImage *img = [self scaleToSize:imgView.image:CGSizeMake(touch1.x,touch1.y)];
imgView.image=img;
}
Also, how can I save the scaled image once I have scaled it?
Building on your comment: the image will distort because it is drawn into the rectangle specified, and if the new dimensions do not have the same aspect ratio (width / height) as the original image, it will appear distorted.
You need some logic to ensure that your new width and height have the same aspect ratio, for example:
CGFloat newHeight = imageView.frame.size.height * size.width / imageView.frame.size.width;
If you make your Graphics context size.width and newHeight and then draw your image into this rect it will maintain the aspect ratio.
You will likely want to put some extra logic in there to either compute a new width given the height, or a new height given the width, depending on which dimension changed the most.
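As for saving, here is a hedged sketch combining the aspect-ratio fix with UIImageWriteToSavedPhotosAlbum (taking the target width from touch1 is an assumption based on your snippet):

```objc
// Compute a height that preserves the original aspect ratio.
CGFloat targetWidth = touch1.x;
CGFloat newHeight = imgView.image.size.height * targetWidth / imgView.image.size.width;
UIImage *scaled = [self scaleToSize:imgView.image :CGSizeMake(targetWidth, newHeight)];
// Save the scaled image into the user's photo library.
UIImageWriteToSavedPhotosAlbum(scaled, nil, nil, NULL);
```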
Hope this helps,
Dave

iPhone SDK: Problem saving one image over another

Basically, I am making an app that involves a user taking a photo, or selecting one already on their device, and then placing an overlay onto the image.
I seem to have coded everything fine apart from one thing: after the user has selected the overlay and positioned it, the size of the overlay has changed in the saved image, whereas the x and y values seem correct.
And so this is the code I use to add the overlay ("image" being the users photo):
float wid = (overlay.image.size.width);
float hei = (overlay.image.size.height);
overlay.frame = CGRectMake(0, 0, wid, hei);
[image addSubview:overlay];
And this is the code used to save the resulting image:
UIGraphicsBeginImageContext(image.image.size);
// Draw the users photo
[image.image drawInRect:CGRectMake(0, 0, image.image.size.width, image.image.size.height)];
// Draw the overlay
float xx = (overlay.center.x);
float yy = (overlay.center.y);
CGRect aaFrame = overlay.frame;
float width = aaFrame.size.width;
float height = aaFrame.size.height;
[overlay.image drawInRect:CGRectMake(xx, yy, width, height)];
UIGraphicsEndImageContext();
Any help? Thanks
The problem is that you are using the image's size rather than the image view's frame size. The image seems to be much larger than its image view, so when you use the image's size, the other image ends up much smaller in comparison, although it is still the correct size. You can modify your snippet to this –
UIGraphicsBeginImageContext(image.frame.size);
// Draw the users photo
[image.image drawInRect:CGRectMake(0, 0, image.frame.size.width, image.frame.size.height)];
[overlay.image drawInRect:overlay.frame];
UIImage * resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Avoiding loss of quality
While the above method leads to loss of resolution, trying to draw the parent image at its full resolution might have an unwanted effect on its child image: if the overlay wasn't high-resolution itself, it can end up looking stretched. However, you can try this code to draw it at the parent image's resolution (untested, let me know if you have problems) –
float verticalScale = image.image.size.height / image.frame.size.height;
float horizontalScale = image.image.size.width / image.frame.size.width;
CGRect overlayFrame = overlay.frame;
overlayFrame.origin.x *= horizontalScale;
overlayFrame.origin.y *= verticalScale;
overlayFrame.size.width *= horizontalScale;
overlayFrame.size.height *= verticalScale;
UIGraphicsBeginImageContext(image.image.size);
// Draw the users photo
[image.image drawInRect:CGRectMake(0, 0, image.image.size.width, image.image.size.height)];
[overlay.image drawInRect:overlayFrame];
UIImage * resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

ios unread message icon

I was wondering if there is a standard method in iOS to produce the numbered bubble icon for unread messages, like the ones used in Mail for iPhone and Mac.
I'm not talking about the red dots on the application icon (which is done with badgeValue), but about the blue bubble beside the mailboxes.
Of course one can do it manually using Core Graphics, but it's harder to match the dimensions and color of the standard ones used in Mail etc.
Here are three ways to do this, in order of difficulty:
1) Screenshot your Mail app on your iPhone, bring the image into Photoshop, extract the blue dot, and use it as an image in your app. To use it in a table view cell, you just set imageView.image = [UIImage imageNamed:@"blueDot.png"];
2) Same as #1, except save the image as grayscale. This way you can use Quartz and overlay your own colors on top of it, so you can make that dot any color you want. Very cool stuff.
3) Use Quartz to draw the whole thing. It's really not that hard. Let me know if you would like some code for that.
OK, twist my arm... here is the code to draw your own gradient sphere with Quartz.
Make a class that inherits from UIView and add the following code:
static float RADIANS_PER_DEGREE=0.0174532925;
-(void) drawInContext:(CGContextRef) context
{
// Drawing code
CGFloat radius = self.frame.size.width/2;
CGFloat start = 0 * RADIANS_PER_DEGREE;
CGFloat end = 360 * RADIANS_PER_DEGREE;
CGPoint startPoint = CGPointMake(0, 0);
CGPoint endPoint = CGPointMake(0, self.bounds.size.height);
//define our grayscale gradient.. we will add color later
CGFloat cc[] =
{
.70,.7,.7,1, //r,g,b,a of color1, as a percentage of full on.
.4,.4,.4,1, //r,g,b,a of color2, as a percentage of full on.
};
//set up our gradient
CGGradientRef gradient;
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
gradient = CGGradientCreateWithColorComponents(rgb, cc, NULL, sizeof(cc)/(sizeof(cc[0])*4));
CGColorSpaceRelease(rgb);
//draw the gray gradient on the sphere
CGContextSaveGState(context);
CGContextBeginPath(context);
CGContextAddArc(context, self.bounds.size.width/2, self.bounds.size.height/2, radius,start,end , 0);
CGContextClosePath(context);
CGContextClip(context);
CGContextAddRect(context, self.bounds);
CGContextDrawLinearGradient(context, gradient, startPoint, endPoint, kCGGradientDrawsBeforeStartLocation);
CGGradientRelease(gradient);
//now add our primary color. you could refactor this to draw this from a color property
UIColor *color = [UIColor blueColor];
[color setFill];
CGContextSetBlendMode(context, kCGBlendModeColor); // play with the blend mode for difference looks
CGContextAddRect(context, self.bounds); //just add a rect as we are clipped to a sphere
CGContextFillPath(context);
CGContextRestoreGState(context);
}
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
[self drawInContext:context];
}
If you want to use a graphic resource from iOS, you can find it using the UIKit-Artwork-Extractor tool. Extract everything to the desktop and find the one you want. For example, the red badge for notifications is called SBBadgeBG.png. I don't know which one you mean, so search for it yourself :P
This is what I did to use a badge, the procedure is exactly the same to show a bubble in a subview of your table:
// Badge is an image with 14+1+14 pixels width and 15+1+15 pixels height.
// Setting the caps to 14 and 15 preserves the original size of the sides, so only the pixel in the middle is stretched.
UIImage *image = [UIImage imageNamed:@"badge"];
self.badgeImage = [image stretchableImageWithLeftCapWidth:(image.size.width-1)/2 topCapHeight:(image.size.height-1)/2];
// what size do we need to show 3 digits using the given font?
self.badgeFont = [UIFont fontWithName:@"Helvetica-Bold" size:13.0];
CGSize maxStringSize = [[NSString stringWithString:@"999"] sizeWithFont:self.badgeFont];
// set the annotation frame to the max needed size
self.frame = CGRectMake(0,0,
self.badgeImage.size.width + maxStringSize.width,
self.badgeImage.size.height + maxStringSize.height);
and then override the method drawRect: of your view to paint the badge and the numbers inside:
- (void)drawRect:(CGRect)rect {
// get the string to show and calculate its size
NSString *string = [NSString stringWithFormat:@"%d", self.badgeNumber];
CGSize stringSize = [string sizeWithFont:self.badgeFont];
// paint the image after stretching it enough to accommodate the string
CGSize stretchedSize = CGSizeMake(self.badgeImage.size.width + stringSize.width,
self.badgeImage.size.height);
// -20% lets the text go into the arc of the bubble. There is a weird visual effect without abs.
stretchedSize.width -= fabsf(stretchedSize.width * 0.20f);
[self.badgeImage drawInRect:CGRectMake(0, 0,
stretchedSize.width,
stretchedSize.height)];
// color of unread messages
[[UIColor yellowColor] set];
// x is the center of the image minus half the width of the string.
// Same thing for y, but 3 pixels less because the image is a bubble plus a 6px shadow underneath.
float height = stretchedSize.height/2 - stringSize.height/2 - 3;
height -= fabsf(height * 0.1f);
CGRect stringRect = CGRectMake(stretchedSize.width/2 - stringSize.width/2,
height,
stringSize.width,
stringSize.height);
[string drawInRect:stringRect withFont:self.badgeFont];
}

How do achieve a frame around image

I like the way this application (http://shakeitphoto.com/) puts a border around the image. I would like to do something similar in my application, but I'm not sure how to go about it.
Any ideas on how, given a UIImage, I can wrap a frame around it?
From that website, it appears you want a border with a shadow. There are two reasonable options, or three if you don't care about the shadow.
If you don't care about the shadow, you can just do something like
#import <QuartzCore/QuartzCore.h> // this should be at the top
// inside your view layout code
myImageView.layer.borderColor = [UIColor whiteColor].CGColor;
myImageView.layer.borderWidth = 5;
This will give you a 5-pixel white border inset into the view, layered on top of the view's contents (e.g. the image). What it won't give you is a shadow. If you want the shadow, there are two other options.
You could just create an image that includes the border and the shadow, and nothing else; make everything else alpha-transparent. Then you can simply layer this image on top of the one you want to display (either with two image views, or by creating a third image out of the two). This should work fine, but it won't scale to different image sizes. In the case of the linked app, the image size is always the same, so they could be using this.
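That overlay approach might look something like this (photo and frameOverlay.png are hypothetical names for your photo and a pre-rendered border asset with a transparent middle):

```objc
// Stack a pre-rendered border-and-shadow image over the photo.
UIImageView *photoView = [[UIImageView alloc] initWithImage:photo];
UIImageView *frameView = [[UIImageView alloc]
    initWithImage:[UIImage imageNamed:@"frameOverlay.png"]];
frameView.frame = photoView.bounds;
[photoView addSubview:frameView];
```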
The other option is to simply draw the border and shadow on top of your image in a new image. Here's a bit of sample code that does this; it creates a new image the same size as your original, but with a white, shadowed border:
- (UIImage *)borderedImage:(UIImage *)image {
// the following NO means the new image has an alpha channel
// If you know the source image is fully-opaque, you may want to set that to YES
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
[image drawAtPoint:CGPointZero];
CGContextRef ctx = UIGraphicsGetCurrentContext();
const CGFloat shadowRadius = 5;
CGContextSetShadowWithColor(ctx, CGSizeZero, shadowRadius, [UIColor blackColor].CGColor);
[[UIColor whiteColor] set];
CGRect rect = (CGRect){CGPointZero, image.size};
const CGFloat frameWidth = 5;
rect = CGRectInset(rect, frameWidth / 2.0f, frameWidth / 2.0f);
UIBezierPath *path = [UIBezierPath bezierPathWithRect:rect];
path.lineWidth = frameWidth;
[path stroke];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
// note: getting the new image this way throws away the orientation data from the original
// You could create a third image by doing something like
// newImage = [UIImage imageWithCGImage:newImage.CGImage scale:newImage.scale orientation:image.orientation]
// but I am unsure as to how orientation actually affects rendering (if at all)
UIGraphicsEndImageContext();
return newImage;
}
(note: this code has not been compiled and could contain bugs)