In iOS, does every UIImage support stretchableImageWithLeftCapWidth:topCapHeight:? Does that mean it autoresizes the UIImage?
First, this is deprecated, replaced by the more powerful resizableImageWithCapInsets:. However, that is only supported by iOS 5.0 and above.
stretchableImageWithLeftCapWidth:topCapHeight: does not resize the image you call it on. It returns a new UIImage. All UIImages may be drawn at different sizes, but a capped image responds to resizing by drawing its caps at the corners, and then filling the remaining space.
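For example, with 10-point caps on each side, the old call and its iOS 5 replacement look like this (a sketch; the image name and cap values are placeholders that must match your artwork):
UIImage *source = [UIImage imageNamed:@"button_bg.png"]; // placeholder image name
// Pre-iOS 5:
UIImage *oldStyle = [source stretchableImageWithLeftCapWidth:10 topCapHeight:10];
// iOS 5 and later; the insets mirror the caps on all four edges:
UIImage *newStyle = [source resizableImageWithCapInsets:UIEdgeInsetsMake(10, 10, 10, 10)];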
When is this useful? When we want to make buttons out of an image, as in this tutorial for the iOS 5 version.
The following code is a UIView drawRect method which illustrates the difference between a regular UIImage and a stretchable image with caps. The image used for stretch.png came from http://commons.wikimedia.org/wiki/Main_Page.
- (void)drawRect:(CGRect)rect
{
    CGRect bounds = self.bounds;
    UIImage *sourceImage = [UIImage imageNamed:@"stretch.png"];
    // Cap sizes should be carefully chosen for an appropriate part of the image.
    UIImage *cappedImage = [sourceImage stretchableImageWithLeftCapWidth:64 topCapHeight:71];
    CGRect leftHalf = CGRectMake(bounds.origin.x, bounds.origin.y, bounds.size.width/2, bounds.size.height);
    CGRect rightHalf = CGRectMake(bounds.origin.x+bounds.size.width/2, bounds.origin.y, bounds.size.width/2, bounds.size.height);
    [sourceImage drawInRect:leftHalf];
    [cappedImage drawInRect:rightHalf];
    UIFont *font = [UIFont systemFontOfSize:[UIFont systemFontSize]];
    [@"Stretching a standard UIImage" drawInRect:leftHalf withFont:font];
    [@"Stretching a capped UIImage" drawInRect:rightHalf withFont:font];
}
Output:
I have written a category method to maintain compatibility:
- (UIImage *)resizableImageWithSize:(CGSize)size
{
    if( [self respondsToSelector:@selector(resizableImageWithCapInsets:)] )
    {
        return [self resizableImageWithCapInsets:UIEdgeInsetsMake(size.height, size.width, size.height, size.width)];
    } else {
        return [self stretchableImageWithLeftCapWidth:size.width topCapHeight:size.height];
    }
}
Just put that into a UIImage category you already have (or make a new one).
This only supports the old style of stretchable resizing; if you need more complex stretchable-image resizing, you can only do that on iOS 5 by using resizableImageWithCapInsets: directly.
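For example, to use it as a button background (a sketch; the image name, the 12-point cap size, and myButton are placeholders):
// Assumes the category above is compiled in; the caps must match your artwork.
UIImage *background = [[UIImage imageNamed:@"button_bg.png"] resizableImageWithSize:CGSizeMake(12, 12)];
[myButton setBackgroundImage:background forState:UIControlStateNormal];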
Related
I'm working on an app where I need to divide an image into two parts using a red line:
left part for labels
right part for prices
Question 1.
How can I draw a red line on the image?
Question 2.
How can I divide the image into two parts using the red line? (The red line's position is not fixed; the user can move it wherever they want.)
Question 3.
How can I get the line's current position, and how can I use that position to divide the image?
Thanks in advance
I would approach this in somewhat the same manner as koray was suggesting:
1) I am assuming that your above image/view is going to be managed by a view controller, which I will call ImageSeperatorViewController from here on.
Inside of ImageSeperatorViewController, insert koray's code in the -(void)viewDidLoad method. Make sure you change the imageToSplit variable to be a UIImageView instead of a plain UIView.
2) Next, I assume that you know how to detect user gestures. You will detect these gestures and determine whether the user has selected the view (i.e. bar in koray's code). Once the user has selected bar, just update its frame origin's X with the touch position:
CGRect barFrame = bar.frame;
barFrame.origin.x = touchLocation.x; // touchLocation: the CGPoint where the user touched
bar.frame = barFrame;
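If you need a concrete starting point for the gesture handling, a minimal touch-tracking sketch could look like this (assuming bar is an instance variable of the view controller; a pan gesture recognizer would work just as well):
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Follow the user's finger horizontally with the divider bar.
    CGPoint location = [[touches anyObject] locationInView:self.view];
    CGRect barFrame = bar.frame;
    barFrame.origin.x = location.x;
    bar.frame = barFrame;
}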
3) For cropping, I would not use github.com/bilalmughal/NLImageCropper; it will not do what you need.
Try this on for size:
Header:
@interface UIImage (ImageDivider)
- (UIImage*)imageWithDividerAt:(CGFloat)position width:(CGFloat)width color:(UIColor*)color;
- (UIImage*)imageWithDividerAt:(CGFloat)position patternImage:(UIImage*)patternImage;
- (NSArray*)imagesBySlicingAt:(CGFloat)position;
@end
Implementation:
@implementation UIImage (ImageDivider)

- (UIImage*)imageWithDividerAt:(CGFloat)position patternImage:(UIImage*)patternImage
{
    //pattern image
    UIColor *patternColor = [UIColor colorWithPatternImage:patternImage];
    CGFloat width = patternImage.size.width;
    //set up context
    UIGraphicsBeginImageContext(self.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    //draw the existing image into the context
    [self drawAtPoint:CGPointZero];
    //set the fill color from the pattern image color
    CGContextSetFillColorWithColor(context, patternColor.CGColor);
    //this is your divider's area
    CGRect dividerRect = CGRectMake(position - (width / 2.0f), 0, width, self.size.height);
    //the joy of image color patterns being based on a 0,0 origin! must set the phase
    CGContextSetPatternPhase(context, CGSizeMake(dividerRect.origin.x, 0));
    //fill the divider rect with the repeating pattern from the image
    CGContextFillRect(context, dividerRect);
    //get your new image and voilà!
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

- (UIImage*)imageWithDividerAt:(CGFloat)position width:(CGFloat)width color:(UIColor *)color
{
    //set up context
    UIGraphicsBeginImageContext(self.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    //draw the existing image into the context
    [self drawAtPoint:CGPointZero];
    //set the fill color for your divider
    CGContextSetFillColorWithColor(context, color.CGColor);
    //this is your divider's area
    CGRect dividerRect = CGRectMake(position - (width / 2.0f), 0, width, self.size.height);
    //fill the divider's rect with the provided color
    CGContextFillRect(context, dividerRect);
    //get your new image and voilà!
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

- (NSArray*)imagesBySlicingAt:(CGFloat)position
{
    NSMutableArray *slices = [NSMutableArray array];
    //first image
    {
        //context!
        UIGraphicsBeginImageContext(CGSizeMake(position, self.size.height));
        //draw the existing image into the context
        [self drawAtPoint:CGPointZero];
        //get your new image and voilà!
        [slices addObject:UIGraphicsGetImageFromCurrentImageContext()];
        UIGraphicsEndImageContext();
    }
    //second
    {
        //context!
        UIGraphicsBeginImageContext(CGSizeMake(self.size.width - position, self.size.height));
        //draw the existing image into the context
        [self drawAtPoint:CGPointMake(-position, 0)];
        //get your new image and voilà!
        [slices addObject:UIGraphicsGetImageFromCurrentImageContext()];
        UIGraphicsEndImageContext();
    }
    return slices;
}

@end
The concept is simple - you want an image with the divider drawn over it. You could just overlay a view, or override drawRect:, or any number of any solutions. I'd rather give you this category. It just uses some quick Core Graphics calls to generate an image with your desired divider, be it pattern image or color, at the specified position. If you want support for horizontal dividers as well, it is rather trivial to modify this as such. Bonus: You can use a tiled image as your divider!
Now to answer your primary question. Using the category is rather self explanatory - just call one of the two methods on your source background to generate one with the divider, and then apply that image rather than the original source image.
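For instance (a sketch; imageView, the source image name, and the divider position are placeholders):
// Regenerate the displayed image with a 5-point red divider at the given x position.
UIImage *sourceImage = [UIImage imageNamed:@"background.png"]; // placeholder name
CGFloat dividerPosition = 160.0f; // wherever the user has dragged the bar
imageView.image = [sourceImage imageWithDividerAt:dividerPosition width:5.0f color:[UIColor redColor]];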
Now, the second question is simple - when the divider has been moved, regenerate the image based on the new divider position. Regenerating the whole image is relatively inefficient, but it ought to be lightweight enough for your purposes, and it only happens while the divider is being moved. Premature optimization is just as much a sin.
Third question is also simple - call imagesBySlicingAt: - it will return an array of two images, as generated by slicing through the image at the provided position. Use them as you wish.
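For example (again a sketch, reusing the placeholder names from above):
NSArray *slices = [sourceImage imagesBySlicingAt:dividerPosition];
UIImage *leftPart = [slices objectAtIndex:0];  // labels side
UIImage *rightPart = [slices objectAtIndex:1]; // prices side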
This code has been tested to be functional. I strongly suggest that you fiddle around with it, not for any purpose of utility, but to better understand the mechanisms used, so that next time you can be on the answering side of things.
For cropping, you can try this:
UIImage *image = [UIImage imageNamed:@"yourImage.png"];
CGImageRef tmpImgRef = image.CGImage;
CGImageRef topImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(0, 0, image.size.width, image.size.height / 2.0));
UIImage *topImage = [UIImage imageWithCGImage:topImgRef];
CGImageRelease(topImgRef);
CGImageRef bottomImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(0, image.size.height / 2.0, image.size.width, image.size.height / 2.0));
UIImage *bottomImage = [UIImage imageWithCGImage:bottomImgRef];
CGImageRelease(bottomImgRef);
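Note that this splits the image into top and bottom halves; splitting into left and right parts at the divider looks much the same (a sketch, assuming dividerX is the line's x position in the image's pixel coordinates):
CGImageRef leftImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(0, 0, dividerX, image.size.height));
UIImage *leftImage = [UIImage imageWithCGImage:leftImgRef];
CGImageRelease(leftImgRef);
CGImageRef rightImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(dividerX, 0, image.size.width - dividerX, image.size.height));
UIImage *rightImage = [UIImage imageWithCGImage:rightImgRef];
CGImageRelease(rightImgRef);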
Hope this can help you :)
If you want to draw a line, you could just use a UIView with a red background, make its height the height of your image, and make its width around 5 pixels:
UIView *imageToSplit; //the image I'm trying to split using a red bar
CGRect i = imageToSplit.frame;
int x = i.origin.x + i.size.width/2;
int y = i.origin.y;
int width = 5;
int height = i.size.height;
UIView *bar = [[[UIView alloc] initWithFrame:CGRectMake(x, y, width, height)] autorelease];
bar.backgroundColor = [UIColor redColor];
[self.view addSubview:bar];
I am adding 2 images to each other (drawing one on top of the other) and wanted to know if this is a good way to do it. This code works and looks to be powerful.
So, my question really is: is this good, or is there a better way?
PS: Warning, code written by a designer.
Call the function:
- (IBAction)combineImages:(id)sender { // method name is arbitrary
    UIImage *MyFirstImage = [UIImage imageNamed:@"Image.png"];
    UIImage *MyTopImage = [UIImage imageNamed:@"Image2.png"];
    CGFloat yFloat = 50;
    CGFloat xFloat = 50;
    UIImage *newImage = [self placeImageOnImage:MyFirstImage topImage:MyTopImage x:&xFloat y:&yFloat];
}
The Function:
- (UIImage*)placeImageOnImage:(UIImage *)image topImage:(UIImage *)topImage x:(CGFloat *)x y:(CGFloat *)y {
    // if you want the image to be added next to the image, make this CGSize bigger.
    CGSize newSize = CGSizeMake(image.size.width, image.size.height);
    UIGraphicsBeginImageContext(newSize);
    [topImage drawInRect:CGRectMake(*x, *y, topImage.size.width, topImage.size.height)];
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeDestinationOver alpha:1];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Looks OK. Perhaps you don't really need the CGFloat pointers, but that's fine, too.
The main idea is correct. There is no better way to do what you want.
Minuses:
1) Consider the UIGraphicsBeginImageContextWithOptions method. UIGraphicsBeginImageContext isn't good for retina.
2) Don't pass floats as pointers. Use x:(CGFloat)x y:(CGFloat)y instead.
You should use the begin-context version UIGraphicsBeginImageContextWithOptions, which allows you to specify options for scale (pass 0 as the scale) so you don't lose any quality on retina displays.
If you want one image drawn on top of another image, just draw the one in back, then the one in front, exactly as if you were using paint. There is no need to use blend modes.
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
[topImage drawInRect:CGRectMake(*x,*y,topImage.size.width,topImage.size.height)];
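Putting both suggestions together, the helper might look like this (a sketch that keeps the original method name but drops the pointer parameters, as suggested above):
- (UIImage *)placeImageOnImage:(UIImage *)image topImage:(UIImage *)topImage x:(CGFloat)x y:(CGFloat)y
{
    CGSize newSize = image.size;
    // A scale of 0 uses the device's screen scale, so retina output stays sharp.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    // Paint back-to-front, just like with real paint; no blend mode needed.
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    [topImage drawInRect:CGRectMake(x, y, topImage.size.width, topImage.size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}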
I've got a UIView which I'm rendering to a UIImage via the typical UIGraphicsBeginImageContextWithOptions method, using a scale of 2.0 so the image output will always be the "retina display" version of what would show up onscreen, regardless of the user's actual screen resolution.
The UIView I'm rendering contains both images and text (UIImages and UILabels). The image is appearing in the rendered UIImage at its full resolution, and looks great. But the UILabels appear to have been rasterized at a 1.0 scale and then upscaled to 2.0, resulting in blurry text.
Is there something I'm doing wrong, or is there some way to get the text to render nice and crisp at the higher scale level? Or is there some way to do this other than using the scaling parameter of UIGraphicsBeginImageContextWithOptions that would have better results? Thanks!
The solution is to change the label's contentsScale to 2 before you draw it, then set it back immediately thereafter. I just coded up a project to verify it, and it's working just fine, producing a 2x image on a normal retina phone (simulator). [If you have a public place I can put it, let me know.]
EDIT: the extended code walks the subviews and any container UIViews to set/unset the scale.
- (IBAction)snapShot:(id)sender
{
    [self changeScaleforView:snapView scale:2];
    UIGraphicsBeginImageContextWithOptions(snapView.bounds.size, snapView.opaque, 2);
    [snapView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    imageDisplay.image = img; // contentsScale
    imageDisplay.contentMode = UIViewContentModeScaleAspectFit;
    [self changeScaleforView:snapView scale:1];
}

- (void)changeScaleforView:(UIView *)aView scale:(CGFloat)scale
{
    [aView.subviews enumerateObjectsUsingBlock:^void(UIView *v, NSUInteger idx, BOOL *stop)
    {
        if([v isKindOfClass:[UILabel class]]) {
            v.layer.contentsScale = scale;
        } else if([v isKindOfClass:[UIImageView class]]) {
            // labels and images
            // v.layer.contentsScale = scale; won't work
            // if the image is not "@2x", you could subclass UIImageView and set the name of the @2x
            // image on it as a property, then here you would set that imageNamed as the image, then undo it later
        } else if([v isMemberOfClass:[UIView class]]) {
            // container view
            [self changeScaleforView:v scale:scale];
        }
    } ];
}
Try rendering to an image with double size, and then create the scaled image:
UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
// Do stuff
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
newImage=[UIImage imageWithCGImage:[newImage CGImage] scale:2.0 orientation:UIImageOrientationUp];
Where:
size = realSize * scale;
I have been struggling with much the same oddities in the context of textview to PDF rendering. I found out that there are some documented properties on the CALayer objects which make up the view. Maybe setting the rasterizationScale of the relevant (sub)layer(s) helps.
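For example, something along these lines (untested; which layer actually matters depends on your view hierarchy, and label here is a placeholder):
#import <QuartzCore/QuartzCore.h>
// Hint the label's layer to rasterize at retina scale before rendering.
label.layer.contentsScale = [UIScreen mainScreen].scale;
label.layer.rasterizationScale = [UIScreen mainScreen].scale;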
I have a grouped UITableView that contains several cells (just standard UITableViewCells), all of which are of UITableViewCellStyleSubtitle style. Bueno. However, when I insert images into them (using the provided imageView property), the corners on the left side become square.
Example Image http://files.lithiumcube.com/tableView.png
The code being used to assign the values into the cell is:
cell.textLabel.text = currentArticle.descriptorAndTypeAndDifferentiator;
cell.detailTextLabel.text = currentArticle.stateAndDaysWorn;
cell.imageView.image = currentArticle.picture;
and currentArticle.picture is a UIImage (also the pictures, as you can see, display just fine with the exception of the square corners).
It displays the same on my iPhone 3G, in the iPhone 4 simulator and in the iPad simulator.
What I'm going for is something similar to the UITableViewCells that Apple uses in its iTunes app.
Any ideas about what I'm doing wrong?
Thanks,
-Aaron
cell.imageView.layer.cornerRadius = 16; // 16 is just a guess
cell.imageView.clipsToBounds = YES;
This will round the UIImageView so it does not draw over the cell. It will also round all the corners of all your images, but that may be OK.
Otherwise, you will have to add your own image view that will just round the one corner. You can do that by setting up a clip region in drawRect: before calling super. Or just add your own image view that is not so close to the left edge.
You can add a category on UIImage and include this method:
// Return the image, but with rounded corners. Useful for masking an image
// being used in a UITableView which has grouped style
- (UIImage *)imageWithRoundedCorners:(UIRectCorner)corners radius:(CGFloat)radius {
    // We need to create a CGPath to set a clipping context
    CGRect aRect = CGRectMake(0.f, 0.f, self.size.width, self.size.height);
    CGPathRef clippingPath = [UIBezierPath bezierPathWithRoundedRect:aRect byRoundingCorners:corners cornerRadii:CGSizeMake(radius, radius)].CGPath;
    // Begin drawing
    // Starting a context with a scale of 0.0 uses the current device scale, so this doesn't unnecessarily drop resolution on a retina display.
    // Use `UIGraphicsBeginImageContext(aRect.size)` instead for pre-iOS 4 compatibility.
    UIGraphicsBeginImageContextWithOptions(aRect.size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddPath(context, clippingPath);
    CGContextClip(context);
    [self drawInRect:aRect];
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return croppedImage;
}
Then when you're configuring your cells, in the table view controller, call something like:
if ( *SINGLE_ROW* ) {
    // We need to clip to both corners
    cell.imageView.image = [image imageWithRoundedCorners:(UIRectCornerTopLeft | UIRectCornerBottomLeft) radius:radius];
} else if (indexPath.row == 0) {
    cell.imageView.image = [image imageWithRoundedCorners:UIRectCornerTopLeft radius:radius];
} else if (indexPath.row == *NUMBER_OF_ITEMS* - 1) {
    cell.imageView.image = [image imageWithRoundedCorners:UIRectCornerBottomLeft radius:radius];
} else {
    cell.imageView.image = image;
}
but replace the SINGLE_ROW etc. with real logic to determine whether you've got a single row in a section, or whether it's the last row. One thing to note here is that I've found (experimentally) that the radius for a group-style table is 12, which works perfectly in the simulator, but not on an iPhone. I've not been able to test it on a non-retina device. A radius of 30 looks good on the iPhone 4 (so I'm wondering if this is an image-scale thing, as the images I'm using come from the Address Book and so don't have an implied scale factor). Therefore, I've got some code before this that modifies the radius...
CGFloat radius = GroupStyleTableCellCornerRadius;
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)] && [[UIScreen mainScreen] scale] == 2){
    // iPhone 4
    radius = GroupStyleTableCellCornerRadiusForRetina;
}
hope that helps.
I'm trying to write an animation on the iPhone, without much success; I'm getting crashes and nothing seems to work.
What I want to do appears simple: create a UIImage, and draw part of another UIImage into it. I got a bit confused with the contexts and layers and stuff.
Could someone please explain how to write something like that (efficiently), with example code?
For the record, this turns out to be fairly straightforward - everything you need to know is somewhere in the example below:
+ (UIImage*)addStarToThumb:(UIImage*)thumb
{
    CGSize size = CGSizeMake(50, 50);
    UIGraphicsBeginImageContext(size);

    CGPoint thumbPoint = CGPointMake(0, 25 - thumb.size.height / 2);
    [thumb drawAtPoint:thumbPoint];

    UIImage* starred = [UIImage imageNamed:@"starred.png"];
    CGPoint starredPoint = CGPointMake(0, 0);
    [starred drawAtPoint:starredPoint];

    UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return result;
}
I just want to add a comment about the answer above by dpjanes, because it is a good answer but will look blocky on the iPhone 4 (with its high-resolution retina display), since UIGraphicsBeginImageContext() does not render at the full resolution of an iPhone 4.
Use the "...WithOptions()" variant instead. But since WithOptions is not available until iOS 4.0, you could weak-link it (discussed here), then use the following code to use the hi-res version only when it is supported:
if (UIGraphicsBeginImageContextWithOptions != NULL) {
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
} else {
    UIGraphicsBeginImageContext(size);
}
Here is an example that merges two images of the same size into one. I don't know if this is the best way, and I don't know if this kind of code is posted somewhere else. Here are my two cents.
+ (UIImage *)mergeBackImage:(UIImage *)backImage withFrontImage:(UIImage *)frontImage
{
    UIImage *newImage;
    CGRect rect = CGRectMake(0, 0, backImage.size.width, backImage.size.height);
    // Begin context
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0);
    // draw images
    [backImage drawInRect:rect];
    [frontImage drawInRect:rect];
    // grab context
    newImage = UIGraphicsGetImageFromCurrentImageContext();
    // end context
    UIGraphicsEndImageContext();
    return newImage;
}
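Usage would be something like this (a sketch; it assumes the method lives in a UIImage category, and the image names are placeholders):
UIImage *merged = [UIImage mergeBackImage:[UIImage imageNamed:@"back.png"]
                           withFrontImage:[UIImage imageNamed:@"front.png"]];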
Hope this helps.