Is it possible to set the position of a UIImageView's image? - iPhone

I have a UIImageView that displays a bigger image. It appears to be centered, but I would like to move that image around inside the UIImageView. I looked at the MoveMe sample from Apple, but I couldn't figure out how they do it. It seems that they don't even use a UIImageView for that. Any ideas?

What you need is something like this (e.g. showing the top-left 30% by 30% of the original image):
imageView.layer.contentsRect = CGRectMake(0.0, 0.0, 0.3, 0.3);
Description of "contentsRect":
The rectangle, in the unit coordinate space, that defines the portion of the layer’s contents that should be used.

Original answer has been superseded by Core Animation in iOS 4.
So, as Gold Thumb says, you can do this by accessing the UIView's CALayer, specifically its contentsRect.
From the Apple docs: The rectangle, in the unit coordinate space, that defines the portion of the layer’s contents that should be used. Animatable.
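Since contentsRect is animatable, here is a hedged sketch of panning the visible window with a CABasicAnimation (imageView is assumed to be the image view from above):
#import <QuartzCore/QuartzCore.h>

// Pan the visible window from the top-left 30% to the bottom-right 30% of the image.
CGRect fromRect = CGRectMake(0.0, 0.0, 0.3, 0.3);
CGRect toRect   = CGRectMake(0.7, 0.7, 0.3, 0.3);

CABasicAnimation *pan = [CABasicAnimation animationWithKeyPath:@"contentsRect"];
pan.fromValue = [NSValue valueWithCGRect:fromRect];
pan.toValue   = [NSValue valueWithCGRect:toRect];
pan.duration  = 0.5;

[imageView.layer addAnimation:pan forKey:@"panContents"];
imageView.layer.contentsRect = toRect; // set the model value so the final state sticks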

Do you want to display the image so that it is contained within the UIImageView? In that case, just change the contentMode of the UIImageView to UIViewContentModeScaleToFill (if the aspect ratio is inconsequential) or UIViewContentModeScaleAspectFit (if you want to maintain the aspect ratio).
In IB, this can be done by setting the Mode in Inspector.
In code, it can be done as
yourImageView.contentMode = UIViewContentModeScaleToFill;
In case you want to display the large image as-is inside a UIImageView, the best and easiest way to do this is to put the image view inside a UIScrollView. That way you will be able to zoom in and out of the image and also move it around.
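A rough sketch of that setup (largeImage, zoomImageView, and the delegate wiring are assumptions, not part of the original answer):
// Assumes this lives in a view controller that adopts <UIScrollViewDelegate>
// and keeps zoomImageView in an ivar so the delegate method can return it.
- (void)setUpZoomableImage
{
    UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds];
    zoomImageView = [[UIImageView alloc] initWithImage:largeImage];

    scrollView.contentSize = largeImage.size;   // scrollable area = full image size
    scrollView.minimumZoomScale = 0.5f;
    scrollView.maximumZoomScale = 3.0f;
    scrollView.delegate = self;

    [scrollView addSubview:zoomImageView];
    [self.view addSubview:scrollView];
}

// Tells the scroll view which subview should be zoomed when the user pinches.
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    return zoomImageView;
}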
Hope that helps.

It doesn't sound like the MoveMe sample does anything like what you want. The PlacardView in it is the same size as the image used. The only size change done to it is a view transform, which doesn't affect the viewport of the image. As I understand it, you have a large picture and want to show a small viewport into it. There isn't a simple class method to do this, but there is a function that you can use to get the desired result: CGImageCreateWithImageInRect(CGImageRef, CGRect) will help you out.
Here's a short example using it:
// Crop the large image to the desired viewport, then display the result.
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
[imageView setImage:[UIImage imageWithCGImage:imageRef]]; // setImage: is an instance method, so call it on your image view
CGImageRelease(imageRef);
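Note that CGImageCreateWithImageInRect works in the pixel coordinates of the underlying CGImage, so for Retina (@2x) images a crop rect expressed in points should be scaled by the image's scale first. A rough sketch of that adjustment:
CGFloat scale = largeImage.scale;
CGRect pixelRect = CGRectMake(cropRect.origin.x * scale,
                              cropRect.origin.y * scale,
                              cropRect.size.width * scale,
                              cropRect.size.height * scale);

CGImageRef croppedRef = CGImageCreateWithImageInRect([largeImage CGImage], pixelRect);
// Preserve the scale and orientation so the cropped image displays at the right size.
UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                       scale:scale
                                 orientation:largeImage.imageOrientation];
imageView.image = cropped;
CGImageRelease(croppedRef);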

Thanks a lot. I have found a pretty simple solution that looks like this:
CGRect frameRect = myImage.frame;
CGPoint rectPoint = frameRect.origin;
CGFloat newXPos = rectPoint.x - 0.5f;
myImage.frame = CGRectMake(newXPos, 0.0f, myImage.frame.size.width, myImage.frame.size.height);
I just move the frame around. It happens that portions of that frame go out of the iPhone's viewport, but I hope that this won't matter much. There is a mask over it, so it doesn't look weird. The user doesn't notice how it's done.

You can accomplish the same by:
UIImageView *imgVw = [[UIImageView alloc] initWithFrame:CGRectMake(x, y, width, height)];
imgVw.image = [UIImage imageNamed:@""];
[self.view addSubview:imgVw];
imgVw.contentMode = UIViewContentModeScaleToFill;

You can use NSLayoutConstraint to set the position of a UIImageView; the position can be relative to other elements or to the parent view's frame.
Here's an example snippet:
let logo = UIImage(imageLiteralResourceName: "img")
let logoImage = UIImageView(image: logo)
logoImage.translatesAutoresizingMaskIntoConstraints = false
self.view.addSubview(logoImage)
NSLayoutConstraint.activate([
    logoImage.topAnchor.constraint(equalTo: view.topAnchor, constant: 30),
    logoImage.centerXAnchor.constraint(equalTo: view.centerXAnchor),
    logoImage.widthAnchor.constraint(equalToConstant: 100),
    logoImage.heightAnchor.constraint(equalToConstant: 100)
])
This way you can also resize the image easily. The constant parameter represents how far a certain anchor should be positioned relative to the specified anchor.
Consider this,
logoImage.topAnchor.constraint(equalTo: view.topAnchor,constant: 30)
The above line sets the top anchor of the logoImage instance 30 points (the constant) below the parent view's top anchor. A negative value would mean the opposite direction.

Related

Merge image on double Tap

I have two image views and I am merging two images. The first image is a bodyImage and the second image is a tattooImage. I have already done the merging, but I want to ask:
1) I can drag the tattooImage over the bodyImage. On a double tap I want the tattooImage to merge with the bodyImage at the tap coordinates. I hope you understand the question.
Thanks
[body image] + [tattoo image] = [merged image]
And here is my code (imageView1 is my bodyImage and imageView2 is my tattooImage):
- (void)tapDetected:(UITapGestureRecognizer *)tapRecognizer
{
    int width = 500;
    int height = 500;
    NSLog(@"takephoto from twitter");
    CGSize newSize = CGSizeMake(width, height);
    UIGraphicsBeginImageContext(newSize);
    // Draw the body image at full size, using its existing opacity as is
    [imageView1.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    // Draw the tattoo image on top with the supplied blend mode and opacity
    [imageView2.image drawInRect:CGRectMake(180, 200, 200, 200) blendMode:kCGBlendModeDarken alpha:0.4];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    imageView1.image = newImage;
}
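To address point 1), one possible approach (a sketch, not taken from the answer below) is to use the tap location from the gesture recognizer, scale it into the 500x500 drawing context, and draw the tattoo there instead of in the hard-coded rect:
// Inside tapDetected:, instead of the fixed CGRectMake(180, 200, 200, 200):
CGPoint tapPoint = [tapRecognizer locationInView:imageView1];
// Convert from imageView1's coordinate space into the 500x500 context
CGFloat scaleX = newSize.width  / imageView1.bounds.size.width;
CGFloat scaleY = newSize.height / imageView1.bounds.size.height;
CGRect tattooRect = CGRectMake(tapPoint.x * scaleX - 100.0f,
                               tapPoint.y * scaleY - 100.0f,
                               200.0f, 200.0f); // 200x200 tattoo centered on the tap
[imageView2.image drawInRect:tattooRect blendMode:kCGBlendModeDarken alpha:0.4];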
Resize and mask an image: http://www.developers-life.com/resize-and-mask-an-image.html
You need image masking for that. I wrote a tutorial on how to use it, and how I've used it in my own application. From the Apple documentation:
Masking techniques can produce many interesting effects by controlling which parts of an image are painted. You can:
Apply an image mask to an image. You can also use an image as a mask to achieve an effect that's opposite from applying an image mask.
Use color to mask parts of an image, which includes the technique referred to as chroma key masking.
Clip a graphics context to an image or image mask, which effectively masks an image (or any kind of drawing) when Quartz draws the content to the clipped context.
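As a rough illustration of the first technique (applying an image mask to an image), with photo and maskImage as hypothetical UIImages and the mask assumed to be a grayscale image:
CGImageRef maskRef = maskImage.CGImage;
// Build a Quartz image mask from the grayscale mask image.
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                    CGImageGetHeight(maskRef),
                                    CGImageGetBitsPerComponent(maskRef),
                                    CGImageGetBitsPerPixel(maskRef),
                                    CGImageGetBytesPerRow(maskRef),
                                    CGImageGetDataProvider(maskRef),
                                    NULL,    // no decode array
                                    false);  // don't interpolate
CGImageRef maskedRef = CGImageCreateWithMask(photo.CGImage, mask);
UIImage *maskedImage = [UIImage imageWithCGImage:maskedRef];

CGImageRelease(mask);
CGImageRelease(maskedRef);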

Cocos2D - filling a sprite

UIImageViews have a property called contentMode that you can use as
imageView.contentMode = UIViewContentModeScaleAspectFit;
and it will fill the entire view with your image without distorting, even if it has to bleed the image to do that.
Is there any similar stuff on Cocos2D? Sorry about the question, but I am new to Cocos2d.
I am creating the sprite like this:
CCTexture2D *textBack = [[CCTexture2D alloc] initWithImage:image];
CCSprite *sprite = [CCSprite spriteWithTexture:textBack];
thanks.
The equivalent of UIViewContentModeScaleAspectFit would be the scale property. When a CCNode (or any of its subclasses, such as CCSprite) is first created, the scale property is 1. Keep increasing it to scale the sprite up proportionally.
sprite.scale = 2.0f; // Scales the sprite proportionally at a factor of 2
As for fitting it to a specific size, you would have to write a routine that:
Passes in the desired rect and the CCSprite's bounding box rect.
Scales the box rect to aspect-fit the desired rect.
Returns the scaling factor.
The result can then be applied to the CCSprite's scale property, as sketched below.
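A sketch of that routine, assuming Cocos2D's CCSprite/boundingBox API:
// Compute the scale factor that aspect-fits the sprite into a target rect.
- (float)scaleToAspectFitSprite:(CCSprite *)sprite inRect:(CGRect)target
{
    CGSize box = sprite.boundingBox.size;
    float scaleX = target.size.width  / box.width;
    float scaleY = target.size.height / box.height;
    return MIN(scaleX, scaleY); // use MAX here for aspect-fill instead
}

// Usage:
sprite.scale = [self scaleToAspectFitSprite:sprite inRect:desiredRect];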
You can certainly scale the sprite to do that...
sprite.scale = ?
sprite.scaleX = ?
sprite.scaleY = ?
but I don't believe there is a function to automatically fill the entire screen. If you don't get a definitive reply here I would suggest posting on the Cocos2D forums (http://www.cocos2d-iphone.org/forum/).

CGAffineTransformMakeTranslation: translate to a point rather than by a value

My question is simple.
Let us say I use this method
CGAffineTransformMakeTranslation(5.0f, 0.0f);
which translates the image view 5 pixels to the right. But is there a similar method that does the exact same thing, except that it takes the destination point as an argument rather than the values you want to move the image view by?
For example, if I wanted to move an image view to 100.0f, 0.0f what would I use?
You can use the following two options:
imgOne.center = CGPointMake(50, 50);
or
imgOne.frame = CGRectMake(50, 50, imgOne.frame.size.width, imgOne.frame.size.height);
If it's the center point you want to move to this coordinate, use:
imageView.center = CGPointMake(100.0f, 0.0f);
If it's one of the corner points, subtract/add half the view's frame's width/height to the coordinates. If you need this frequently, it's a good idea to write a small UIView category that allows you to position a view's corner on a particular coordinate.
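For instance, such a category might look like this (a sketch; the category and method names are made up):
@interface UIView (CornerPositioning)
- (void)moveTopLeftToPoint:(CGPoint)point;
@end

@implementation UIView (CornerPositioning)
- (void)moveTopLeftToPoint:(CGPoint)point
{
    CGRect frame = self.frame;
    frame.origin = point;   // keep the size, move the origin
    self.frame = frame;
}
@end

// Usage: put the image view's top-left corner at (100, 0).
[imageView moveTopLeftToPoint:CGPointMake(100.0f, 0.0f)];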

iPhone: How to extend a repeatable UIImage?

I have an image which I use to frame a tweet. It consists of two rounded rects, with a twitter icon at the top left. The important part is that it is repeatable: you could copy any part of the middle section vertically and it would be the same, just longer. Here is the image I created:
My question is: how, in code, do I extend (or shrink) that depending on how many lines are in my UITextView? Something like this to get the size:
float requiredHeight = lines * 14;
I know this is possible, because Apple does it in their SMS app :)
UPDATE: Here is the complete code for doing this:
UIImage *loadImage = [UIImage imageNamed:@"TwitPost.png"];
float w2 = loadImage.size.width/2;
float h2 = loadImage.size.height/2;
// I have now reduced the image size so the height must be offset a little (otherwise it stretches the bird!):
loadImage = [loadImage stretchableImageWithLeftCapWidth:w2 topCapHeight:h2+15];
imageView.image = loadImage;
Thanks for both answers.
Use
- (UIImage *)stretchableImageWithLeftCapWidth:(NSInteger)leftCapWidth topCapHeight:(NSInteger)topCapHeight
and set leftCapWidth and topCapHeight to half the width and height of your image. You can then stretch this image in a UIImageView by changing its bounds/frame.
Look at the documentation for UIImage. Specifically:
- (UIImage *)stretchableImageWithLeftCapWidth:(NSInteger)leftCapWidth topCapHeight:(NSInteger)topCapHeight
This allows you to use an image which repeats the portion at the leftCapWidth or topCapHeight to stretch it horizontally or vertically.
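Putting the two together, a hedged sketch of sizing the bubble from the line count (lines and the 14-point line height come from the question; imageView is assumed to be the bubble's image view):
UIImage *bubble = [UIImage imageNamed:@"TwitPost.png"];
// Tile the middle of the image so it can grow without distorting the corners.
bubble = [bubble stretchableImageWithLeftCapWidth:bubble.size.width / 2
                                     topCapHeight:bubble.size.height / 2];

float requiredHeight = lines * 14;       // one 14-point row per line of text
CGRect frame = imageView.frame;
frame.size.height = requiredHeight;
imageView.frame = frame;
imageView.image = bubble;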

bounds and frames: how do I display part of a UIImage

My goal is simple: I want to create a program that displays a UIImage and, when swiped from bottom to top, displays another UIImage. The images here could be a happy face/sad face. The sad face should be the starting point, the happy face the end point. While swiping your finger, the part below the finger should show the happy face.
So far I have tried solving this with the frame and bounds properties of the UIImageView I used for the happy face image.
What this piece of code does is wrong, because the transition starts in the center of the screen and not at the bottom. Notice that the origins of both the frame and bounds are at 0,0...
I have read numerous pages about frames and bounds, but I don't get it. Any help is appreciated!
The loadImages method is called only once.
- (void)loadImages {
    sadface = [UIImage imageNamed:@"face-sad.jpg"];
    happyface = [UIImage imageNamed:@"face-happy.jpg"];

    UIImageView *face1view = [[UIImageView alloc] init];
    face1view.image = sadface;
    [self.view addSubview:face1view];

    CGRect frame;
    CGRect contentRect = self.view.frame;
    frame = CGRectMake(0, 0, contentRect.size.width, contentRect.size.height);
    face1view.frame = frame;

    face2view = [[UIImageView alloc] init];
    face2view.layer.masksToBounds = YES;
    face2view.contentMode = UIViewContentModeScaleAspectFill;
    face2view.image = happyface;
    [self.view addSubview:face2view];
    frame = CGRectMake(startpoint.x, 0, contentRect.size.width, contentRect.size.height);
    face2view.frame = frame;
    face2view.clipsToBounds = YES;
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint movepoint = [[touches anyObject] locationInView:self.view];
    NSLog(@"movepoint: %f %f", movepoint.x, movepoint.y);
    face2view.bounds = CGRectMake(0, 0, 320, 480 - movepoint.y);
}
The UIImages and UIImageViews are properly disposed of in the dealloc function.
Indeed, you seem to be confused about frames and bounds. In fact, they are easy. Always remember that any view has its own coordinate system. The frame, center and transform properties are expressed in the superview's coordinates, while the bounds is expressed in the view's own coordinate system. If a view doesn't have a superview (it is not installed into a view hierarchy yet), it still has a frame. In iOS the frame property is calculated from the view's bounds, center and transform. You may ask what the hell frame and center mean when there's no superview. They are used when you add the view to another view, allowing you to position the view before it's actually visible.
The most common example when a view's bounds differ from its frame is when it is not in the upper left corner of its superview: its bounds.origin may be CGPointZero, while its frame.origin is not. Another classic example is UIScrollView, which frequently modifies its bounds.origin to make subviews scroll (in fact, modifying the origin of the coordinate system automatically moves every subview without affecting their frames), while its own frame is constant.
Back to your code. First of all, when you already have images to display in image views, it makes sense to init the views with their images:
UIImageView *face1view = [[UIImageView alloc] initWithImage: sadface];
That helps the image view to immediately size itself properly. It is not recommended to init views with -init because that might skip some important code in their designated initializer, -initWithFrame:.
Since you add face1view to self.view, you should really use its bounds rather than its frame:
face1view.frame = self.view.bounds;
Same goes for the happier face. Then in -touchesMoved:… you should either change face2view's frame to move it inside self.view or (if self.view does not contain any other subviews besides faces) modify self.view's bounds to move both faces inside it together. Instead, you do something weird like vertically stretching the happy face inside face2view. If you want the happy face to slide from the bottom of self.view, you should initially set its frame like this (not visible initially):
face2view.frame = CGRectOffset(face2view.frame, 0, CGRectGetHeight(self.view.bounds));
If you choose to swap faces by changing image views' frames (contrasted with changing self.view's bounds), I guess you might want to change both the views' frame origins, so that the sad face slides up out and the happy face slides up in. Alternatively, if you want the happy face to cover the sad one:
face2view.frame = face1view.frame;
Your problem seems to have something to do with the face2view.bounds in touchesMoved.
You are setting the bounds of this view to the rect x: 0, y: 0, width: 320, height: 480 - y.
x = 0 means the left edge on the x axis.
y = 0 means the top edge on the y axis.
So you are putting this image frame at the upper-left corner and making it fill the whole view. That's not what you want; the image simply becomes centered in this image view.
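For completeness, a minimal sketch of the frame-based variant suggested in the first answer (assuming face2view starts offscreen below self.view); note that in this variant the happy face slides up with the finger rather than being revealed in place:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint movePoint = [[touches anyObject] locationInView:self.view];
    CGRect frame = face2view.frame;
    frame.origin.y = movePoint.y;   // the happy face's top edge follows the finger
    face2view.frame = frame;        // everything below the finger now shows face2view
}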