iPhone: How to extend a repeatable UIImage?

I have an image which I use to frame a tweet. It consists of two rounded rects, with a Twitter icon at the top left. The important part is that it is repeatable: you could copy any part of the middle section vertically and it would look the same, just longer. Here is the image I created:
My question is: how, in code, do I extend (or shrink) that image depending on how many lines are in my UITextView? Something like this to get the size:
float requiredHeight = lines * 14;
I know this is possible, because Apple does it in the SMS app :)
UPDATE: Here is the complete code for doing this:
UIImage *loadImage = [UIImage imageNamed:@"TwitPost.png"];
float w2 = loadImage.size.width / 2;
float h2 = loadImage.size.height / 2;
// I have since reduced the image size, so the top cap must be offset a little
// (otherwise it stretches the bird!):
loadImage = [loadImage stretchableImageWithLeftCapWidth:w2 topCapHeight:h2 + 15];
imageView.image = loadImage;
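To tie this to the line count from the question, a minimal sketch of growing the image view with the text (it assumes a UITextView named textView; the 40 pt padding is an illustrative allowance for the rounded-rect border, not from the original post):
CGFloat requiredHeight = textView.contentSize.height + 40; // padding value is an assumption
CGRect frame = imageView.frame;
frame.size.height = requiredHeight;
imageView.frame = frame; // the stretchable image fills the new height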
Thanks for both answers.

Use
- (UIImage *)stretchableImageWithLeftCapWidth:(NSInteger)leftCapWidth topCapHeight:(NSInteger)topCapHeight
and set leftCapWidth and topCapHeight to half the width and height of your image. You can then stretch the image in a UIImageView by changing the view's bounds/frame.
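A minimal sketch of that approach (the asset name, sizes, and frame values are illustrative, not from the question):
UIImage *frameImage = [UIImage imageNamed:@"TweetFrame.png"]; // hypothetical asset name
CGFloat capW = frameImage.size.width / 2;
CGFloat capH = frameImage.size.height / 2;
frameImage = [frameImage stretchableImageWithLeftCapWidth:capW topCapHeight:capH];
UIImageView *frameView = [[UIImageView alloc] initWithImage:frameImage];
// Making the view taller than the source image tiles the strip past the caps.
frameView.frame = CGRectMake(20, 20, frameImage.size.width, 200);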

Look at the documentation for UIImage. Specifically:
- (UIImage *)stretchableImageWithLeftCapWidth:(NSInteger)leftCapWidth topCapHeight:(NSInteger)topCapHeight
This lets you use an image in which the strip just past leftCapWidth or topCapHeight is tiled, so the image can be stretched horizontally or vertically without distorting the caps.

Related

My original size changes after combining two images into one image

I want to combine two images. My base image is 1280*1920, and I use a method to merge them into one image, but the new image comes out 320*480.
I want to keep the original pixel dimensions.
Here is the code I used:
// imageContainView - a UIView that contains 2 UIImageViews. View size: 320*480
UIImage *temp;
UIGraphicsBeginImageContext(imageContainView.bounds.size);
[imageContainView.layer renderInContext:UIGraphicsGetCurrentContext()];
temp = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
If I understand correctly what you are trying to do, the reason you are getting a small image is that you are initializing your context with a small size:
UIGraphicsBeginImageContext(imageContainView.bounds.size);
If imageContainView.bounds.size is 320x480, you will get a 320x480 image.
So, you should call:
UIGraphicsBeginImageContext(correctImageSize);
where correctImageSize is CGSizeMake(1280,1920), or the size of the bigger image you have.
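Note that enlarging the context alone leaves the 320x480 view rendered into one corner of the larger canvas, so the context also needs to be scaled. A minimal sketch, assuming imageContainView is 320x480 and the target is 1280x1920 (output quality still depends on the image views holding full-resolution images):
CGSize targetSize = CGSizeMake(1280, 1920);
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 1.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Scale so the 320x480 container fills the 1280x1920 canvas (a factor of 4).
CGContextScaleCTM(ctx, targetSize.width / imageContainView.bounds.size.width, targetSize.height / imageContainView.bounds.size.height);
[imageContainView.layer renderInContext:ctx];
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();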
Or you could try calling sizeToFit before:
[imageContainView sizeToFit];
UIGraphicsBeginImageContext(imageContainView.bounds.size);

Merge image on double Tap

I have two image views and I am merging two images. The first image is a bodyImage and the second is a tattooImage. I have already done the merging, but I want to ask:
1) I can drag the tattooImage over the bodyImage. On a double tap, I want the tattooImage to merge with the bodyImage at the tap coordinates. I hope you understand the question.
Thanks
(images: body photo + tattoo image = merged result)
And here is my code (imageView1 is my bodyImage and imageView2 is my tattooImage):
- (void)tapDetected:(UITapGestureRecognizer *)tapRecognizer
{
    int width = 500;
    int height = 500;
    NSLog(@"takephoto from twitter");
    CGSize newSize = CGSizeMake(width, height);
    UIGraphicsBeginImageContext(newSize);
    // Use existing opacity as is
    [imageView1.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    // Apply supplied opacity if applicable
    [imageView2.image drawInRect:CGRectMake(180, 200, 200, 200) blendMode:kCGBlendModeDarken alpha:0.4];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    imageView1.image = newImage;
    UIGraphicsEndImageContext();
}
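As for merging at the tap coordinates specifically, a hedged sketch of that part (it assumes imageView1's coordinate space maps directly onto the 500x500 drawing context, and reuses the 200x200 tattoo size from the code above):
CGPoint tap = [tapRecognizer locationInView:imageView1];
// Centre the 200x200 tattoo on the tap point instead of using a fixed rect.
CGRect tattooRect = CGRectMake(tap.x - 100, tap.y - 100, 200, 200);
[imageView2.image drawInRect:tattooRect blendMode:kCGBlendModeDarken alpha:0.4];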
Resize and mask an image: http://www.developers-life.com/resize-and-mask-an-image.html
You need image masking for that. I wrote a tutorial on how to use it, and how I've used it in my own application. From the Apple documentation:
Masking techniques can produce many interesting effects by controlling which parts of an image are painted. You can:
- Apply an image mask to an image. You can also use an image as a mask to achieve an effect that’s opposite from applying an image mask.
- Use color to mask parts of an image, which includes the technique referred to as chroma key masking.
- Clip a graphics context to an image or image mask, which effectively masks an image (or any kind of drawing) when Quartz draws the content to the clipped context.
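A minimal sketch of the first technique, applying an image mask with Core Graphics (the "photo.png" and greyscale "mask.png" asset names are illustrative, not from the tutorial):
UIImage *photo = [UIImage imageNamed:@"photo.png"];
UIImage *mask = [UIImage imageNamed:@"mask.png"];
CGImageRef maskRef = mask.CGImage;
// Build a mask from the greyscale image, then apply it to the photo.
CGImageRef maskImage = CGImageMaskCreate(CGImageGetWidth(maskRef), CGImageGetHeight(maskRef),
                                         CGImageGetBitsPerComponent(maskRef), CGImageGetBitsPerPixel(maskRef),
                                         CGImageGetBytesPerRow(maskRef), CGImageGetDataProvider(maskRef),
                                         NULL, false);
CGImageRef maskedRef = CGImageCreateWithMask(photo.CGImage, maskImage);
UIImage *maskedImage = [UIImage imageWithCGImage:maskedRef];
CGImageRelease(maskImage);
CGImageRelease(maskedRef);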

Is it possible to determine if a UIImage is stretchable?

I'm trying to reuse a small chunk of code inside a custom button class. For this to work I need to pass in either non-stretchable images (an icon) or a stretchable image (a 'swoosh'). Within the code I need to set the rect to draw into, so ideally I'd like to simply determine whether the image is stretchable or not: if it isn't, I draw it at the size of the image; if it is, I draw it at the bounds of the containing rect.
From my investigation so far capInsets (iOS 5) or leftCapWidth/topCapHeight (pre iOS 5) are not useful for this.
Is there something buried in the core or quartz information I can use?
Just curious, for now I'm coding around it with an extra parameter.
(I've since read through CGImageRef and the CI equivalent.)
As far as I can tell there is no such information that we can access to identify such images, which begs the question: how does the OS know?
There is no way to detect this unless you do some intense image analysis (which won't be 100% correct). UIImage is essentially some pixels plus meta-information, all obtained from the file that you loaded it from. No file format carries that information.
However, you can encode some information into the file name of the image. If you have an image called foo.png that is stretchable, why not call it foo.stretch.png? Your loading routine can analyse the file name and extract meta-information that you can associate with the UIImage (see http://labs.vectorform.com/2011/07/objective-c-associated-objects/ for associated objects), or you can create your own class that composites a UIImage with meta-information.
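A hedged sketch of the associated-object idea (the category name, key, and the foo.stretch.png naming convention are all illustrative):
#import <UIKit/UIKit.h>
#import <objc/runtime.h>

static char kStretchableKey;

@interface UIImage (StretchFlag)
- (void)setMyIsStretchable:(BOOL)flag;
- (BOOL)myIsStretchable;
@end

@implementation UIImage (StretchFlag)
- (void)setMyIsStretchable:(BOOL)flag {
    objc_setAssociatedObject(self, &kStretchableKey, [NSNumber numberWithBool:flag], OBJC_ASSOCIATION_RETAIN_NONATOMIC);
}
- (BOOL)myIsStretchable {
    return [objc_getAssociatedObject(self, &kStretchableKey) boolValue];
}
@end

// Loading routine: mark images whose file name contains ".stretch".
UIImage *image = [UIImage imageNamed:fileName];
[image setMyIsStretchable:[fileName rangeOfString:@".stretch"].location != NSNotFound];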
Good luck in your research.
When you create a UIImage, its .size property is fixed.
If by "stretchable" you mean stretched to fit your button view, just check the scale, for example:
- (BOOL)stretchableImage:(UIImage *)aImage toView:(UIView *)aView {
    CGFloat scaleW = aView.bounds.size.width / aImage.size.width;
    CGFloat scaleH = aView.bounds.size.height / aImage.size.height;
    if (scaleW == scaleH) {
        if (scaleW < 1)
            return YES;
    }
    return NO;
}
You can check its class.
UIImage *img = [UIImage imageNamed:@"back"];
NSString *imgClass = NSStringFromClass(img.class);
UIImage *imgStretch = [img stretchableImageWithLeftCapWidth:10 topCapHeight:10];
NSString *imgStrClass = NSStringFromClass(imgStretch.class);
NSLog(@"Normal class:\t%@\nStretchable class:\t%@", imgClass, imgStrClass);
Console:
Normal class: UIImage
Stretchable class: _UIResizableImage
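A hedged usage sketch based on that console output (note that _UIResizableImage is a private class name and could change in any OS release):
BOOL looksStretchable = [imgStretch isKindOfClass:NSClassFromString(@"_UIResizableImage")];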

Must I multiply the scale with the point values for retina display, in this case?

Since the retina display arrived, this piece of drawing code suddenly seems to not work anymore. The drawn image is slightly offset from where it was before and appears somewhat stretched.
I am drawing in the -drawRect: method of a UIControl subclass. I figured out that the current scale inside that UIControl is indeed 2.0. This code obtains a CGImage from a UIImage, which probably doesn't know anything about the scale. It is fed as a parameter to a method that also takes some point values:
CGContextDrawImage(context, CGRectMake(drawingRect.origin.x, drawingRect.origin.y, img.size.width, img.size.height), [img CGImage]);
Note: drawingRect is in points. img.size.width logged via NSLog outputs the correct value in points, while [img CGImage] yields the @2x image on a retina display. I did a check to verify this:
NSLog(@"image height = %f (CGImage = %zu)", img.size.height, CGImageGetHeight([img CGImage]));
Output in console: image height = 31.000000 (CGImage = 62)
How would I deal with the @2x image here? Must I multiply every value by the scale manually? But wouldn't that also screw up the actual visible rectangle on the screen?
Yes.
CGImageGetWidth([image CGImage]) == image.size.width * image.scale
CGImageGetHeight([image CGImage]) == image.size.height * image.scale
Alternatively, you can use the -[UIImage drawAtPoint:], -[UIImage drawInRect:] and other similar methods that deal with the scale automatically. If you drop down to CGImage, you have to handle scale yourself.
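A minimal sketch of that UIKit-level alternative, which keeps everything in points and handles the scale (and the flipped Quartz coordinate system) for you; drawingRect and img are the names from the question:
// Inside -drawRect:, with drawingRect in points:
[img drawInRect:CGRectMake(drawingRect.origin.x, drawingRect.origin.y, img.size.width, img.size.height)];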

Is it possible to set the position of an UIImageView's image?

I have a UIImageView that displays a bigger image. It appears to be centered, but I would like to move that image around inside the UIImageView. I looked at the MoveMe sample from Apple, but I couldn't figure out how they do it. It seems that they don't even use a UIImageView for that. Any ideas?
What you need is something like (e.g. showing the 30% by 30% of the top left corner of the original image):
imageView.layer.contentsRect = CGRectMake(0.0, 0.0, 0.3, 0.3);
Description of "contentsRect":
The rectangle, in the unit coordinate space, that defines the portion of the layer’s contents that should be used.
The original answer has been superseded by Core Animation in iOS 4.
So, as Gold Thumb says, you can do this by accessing the UIView's CALayer, specifically its contentsRect.
From the Apple docs: "The rectangle, in the unit coordinate space, that defines the portion of the layer’s contents that should be used. Animatable."
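A hedged sketch of moving the visible window by shifting the contentsRect origin (unit coordinates, so 0.1 means 10% of the image; the values are illustrative):
// Show a 30% x 30% window, starting 10% in from the left edge of the image.
imageView.layer.contentsRect = CGRectMake(0.1, 0.0, 0.3, 0.3);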
Do you want to display the image so that it is contained within the UIImageView? In that case, just change the contentMode of the UIImageView to UIViewContentModeScaleToFill (if the aspect ratio is inconsequential) or UIViewContentModeScaleAspectFit (if you want to maintain the aspect ratio).
In IB, this can be done by setting the Mode in Inspector.
In code, it can be done as
yourImageView.contentMode = UIViewContentModeScaleToFill;
In case you want to display the large image as-is inside a UIImageView, the best and easiest way would be to put the image view inside a UIScrollView. That way you will be able to zoom in and out of the image and also move it around.
Hope that helps.
It doesn't sound like the MoveMe sample does anything like what you want. The PlacardView in it is the same size as the image used. The only size change done to it is a view transform, which doesn't affect the viewport of the image. As I understand it, you have a large picture and want to show a small viewport into it. There isn't a simple class method to do this, but there is a function you can use to get the desired result: CGImageCreateWithImageInRect(CGImageRef, CGRect) will help you out.
Here's a short example using it:
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
[imageView setImage:[UIImage imageWithCGImage:imageRef]];
CGImageRelease(imageRef);
Thanks a lot. I have found a pretty simple solution that looks like this:
CGRect frameRect = myImage.frame;
CGPoint rectPoint = frameRect.origin;
CGFloat newXPos = rectPoint.x - 0.5f;
myImage.frame = CGRectMake(newXPos, 0.0f, myImage.frame.size.width, myImage.frame.size.height);
I just move the frame around. Portions of that frame go outside the iPhone's viewport, but I hope that won't matter much. There is a mask over it, so it doesn't look weird, and the user doesn't notice how it's done.
You can accomplish the same with:
UIImageView *imgVw = [[UIImageView alloc] initWithFrame:CGRectMake(x, y, width, height)];
imgVw.image = [UIImage imageNamed:@""];
[self.view addSubview:imgVw];
imgVw.contentMode = UIViewContentModeScaleToFill;
You can use NSLayoutConstraint to set the position of a UIImageView; it can be relative to other elements or to the frame of the parent view.
Here's an example snippet:
let logo = UIImage(imageLiteralResourceName: "img")
let logoImage = UIImageView(image: logo)
logoImage.translatesAutoresizingMaskIntoConstraints = false
self.view.addSubview(logoImage)
NSLayoutConstraint.activate([
    logoImage.topAnchor.constraint(equalTo: view.topAnchor, constant: 30),
    logoImage.centerXAnchor.constraint(equalTo: view.centerXAnchor),
    logoImage.widthAnchor.constraint(equalToConstant: 100),
    logoImage.heightAnchor.constraint(equalToConstant: 100)
])
This way you can also resize the image easily. The constant parameter represents how far a certain anchor should be positioned relative to the specified anchor.
Consider this,
logoImage.topAnchor.constraint(equalTo: view.topAnchor,constant: 30)
The above line sets the top anchor of the logoImage instance to be 30 points (the constant) below the parent view's top anchor. A negative value would mean the opposite direction.