UIImage in a UIImageView rendering distorted - iPhone

I have a UIImageView three levels deep in a superview. (The white region on top of the gray rectangle is my superview, the gray elongated rectangle is one subview, the black square is another subview of the elongated gray rectangle, and the cog is the image of the UIImageView, which is a subview of the black square.) The frame rectangle of the UIImageView is calculated as follows, where _normalImage is a UIImage object. I do this inside the subclass that represents the black square:
CGFloat xPoint = self.bounds.size.width/2 - _normalImage.size.width/2;
CGFloat yPoint = self.bounds.size.height/2 - _normalImage.size.height/2;
CGRect frameRect = CGRectMake(xPoint, yPoint, _normalImage.size.width, _normalImage.size.height);
self.imageHolder = [[UIImageView alloc] initWithFrame:frameRect];
_normalImage is 26×26 and should render as a perfect square. However, the image comes out distorted, as if the aspect ratio has been lost.
What's wrong?

Try using the floorf() function on your coordinates to keep them from containing a fractional part; a view positioned on non-integral coordinates is drawn with antialiasing and can look blurry or distorted.
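For example, a minimal sketch applying this to the frame calculation from the question:
CGFloat xPoint = floorf(self.bounds.size.width/2 - _normalImage.size.width/2);
CGFloat yPoint = floorf(self.bounds.size.height/2 - _normalImage.size.height/2);
CGRect frameRect = CGRectMake(xPoint, yPoint, _normalImage.size.width, _normalImage.size.height);
self.imageHolder = [[UIImageView alloc] initWithFrame:frameRect];
//CGRectIntegral(frameRect) is an alternative if you prefer to round the finished rect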

The image is stretched because, by default, UIImageView uses the UIViewContentModeScaleToFill content mode. Switch it to aspect fit:
self.imageHolder.contentMode = UIViewContentModeScaleAspectFit;

Related

Stretching a UIImage while preserving the corners

I'm trying to stretch a navigation arrow image while preserving the edges so that the middle stretches and the ends are fixed.
Here is the image that I'm trying to stretch:
The following iOS 5 code allows the image when resized to stretch the center portions of the image defined by the UIEdgeInsets.
[[UIImage imageNamed:@"arrow.png"] resizableImageWithCapInsets:UIEdgeInsetsMake(15, 7, 15, 15)];
This results in an image that looks like this (if the image's frame is set to 70 pixels wide):
This is actually what I want but resizableImageWithCapInsets is only supported on iOS 5 and later.
Prior to iOS 5 the only similar method is stretchableImageWithLeftCapWidth:topCapHeight:, but you can only specify the top and left insets, which means the image has to have equally shaped edges.
Is there an iOS 4 way of resizing the image the same as iOS 5's resizableImageWithCapInsets method, or another way of doing this?
Your assumption here is wrong:
Prior to iOS 5 the only similar method is stretchableImageWithLeftCapWidth:topCapHeight:, but you can only specify the top and left insets, which means the image has to have equally shaped edges.
The caps are figured out as follows - I'll step through the left cap, but the same principle applies to the top cap.
Say your image is 20px wide.
Left cap width - this is the part on the left hand side of the image that cannot be stretched. In the stretchableImage method you send a value of 10 for this.
Stretchable part - this is assumed to be one pixel in width, so it will be the pixels in column "11", for want of a better description.
This means there is an implied right cap of the remaining 9px of your image - this will also not be distorted.
This is taken from the documentation
leftCapWidth
End caps specify the portion of an image that should not be resized when an image is stretched. This technique is used to implement buttons and other resizable image-based interface elements. When a button with end caps is resized, the resizing occurs only in the middle of the button, in the region between the end caps. The end caps themselves keep their original size and appearance.
This property specifies the size of the left end cap. The middle (stretchable) portion is assumed to be 1 pixel wide. The right end cap is therefore computed by adding the size of the left end cap and the middle portion together and then subtracting that value from the width of the image:
rightCapWidth = image.size.width - (image.leftCapWidth + 1);
UIImage *image = [UIImage imageNamed:@"img_loginButton.png"];
UIEdgeInsets edgeInsets;
edgeInsets.left = 0.0f;
edgeInsets.top = 0.0f;
edgeInsets.right = 5.0f; //Assume 5px will be the constant portion in your image
edgeInsets.bottom = 0.0f;
image = [image resizableImageWithCapInsets:edgeInsets];
//Use this image as your control's image
Your example is perfectly possible using stretchableImageWithLeftCapWidth:topCapHeight: with a left cap of 15 (apparently, from reading your code). That will horizontally stretch the button by repeating the middle column.
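For illustration, a sketch of the pre-iOS 5 call (the cap value of 15 is taken from the reading above and may need adjusting for your actual artwork):
UIImage *arrow = [UIImage imageNamed:@"arrow.png"];
//left cap of 15; a topCapHeight of 0 leaves the full height stretchable
UIImage *stretchableArrow = [arrow stretchableImageWithLeftCapWidth:15 topCapHeight:0];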
You can extend UIImage to allow stretching an image with custom edge protection (thereby stretching the interior of the image, instead of tiling it):
UIImage+utils.h:
#import <UIKit/UIKit.h>
@interface UIImage (util_extensions)
//extract a portion of a UIImage instance
-(UIImage *) cutout: (CGRect) coords;
//create a stretchable rendition of a UIImage instance, protecting edges as specified in cornerCaps
-(UIImage *) stretchImageWithCapInsets: (UIEdgeInsets) cornerCaps toSize: (CGSize) size;
@end
UIImage+utils.m:
#import "UIImage+utils.h"
@implementation UIImage (util_extensions)
-(UIImage *) cutout: (CGRect) coords {
UIGraphicsBeginImageContext(coords.size);
[self drawAtPoint: CGPointMake(-coords.origin.x, -coords.origin.y)];
UIImage *rslt = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return rslt;
}
-(UIImage *) stretchImageWithCapInsets: (UIEdgeInsets) cornerCaps toSize: (CGSize) size {
UIGraphicsBeginImageContext(size);
[[self cutout: CGRectMake(0,0,cornerCaps.left,cornerCaps.top)] drawAtPoint: CGPointMake(0,0)]; //topleft
[[self cutout: CGRectMake(self.size.width-cornerCaps.right,0,cornerCaps.right,cornerCaps.top)] drawAtPoint: CGPointMake(size.width-cornerCaps.right,0)]; //topright
[[self cutout: CGRectMake(0,self.size.height-cornerCaps.bottom,cornerCaps.left,cornerCaps.bottom)] drawAtPoint: CGPointMake(0,size.height-cornerCaps.bottom)]; //bottomleft
[[self cutout: CGRectMake(self.size.width-cornerCaps.right,self.size.height-cornerCaps.bottom,cornerCaps.right,cornerCaps.bottom)] drawAtPoint: CGPointMake(size.width-cornerCaps.right,size.height-cornerCaps.bottom)]; //bottomright
[[self cutout: CGRectMake(cornerCaps.left,0,self.size.width-cornerCaps.left-cornerCaps.right,cornerCaps.top)]
drawInRect: CGRectMake(cornerCaps.left,0,size.width-cornerCaps.left-cornerCaps.right,cornerCaps.top)]; //top
[[self cutout: CGRectMake(0,cornerCaps.top,cornerCaps.left,self.size.height-cornerCaps.top-cornerCaps.bottom)]
drawInRect: CGRectMake(0,cornerCaps.top,cornerCaps.left,size.height-cornerCaps.top-cornerCaps.bottom)]; //left
[[self cutout: CGRectMake(cornerCaps.left,self.size.height-cornerCaps.bottom,self.size.width-cornerCaps.left-cornerCaps.right,cornerCaps.bottom)]
drawInRect: CGRectMake(cornerCaps.left,size.height-cornerCaps.bottom,size.width-cornerCaps.left-cornerCaps.right,cornerCaps.bottom)]; //bottom
[[self cutout: CGRectMake(self.size.width-cornerCaps.right,cornerCaps.top,cornerCaps.right,self.size.height-cornerCaps.top-cornerCaps.bottom)]
drawInRect: CGRectMake(size.width-cornerCaps.right,cornerCaps.top,cornerCaps.right,size.height-cornerCaps.top-cornerCaps.bottom)]; //right
[[self cutout: CGRectMake(cornerCaps.left,cornerCaps.top,self.size.width-cornerCaps.left-cornerCaps.right,self.size.height-cornerCaps.top-cornerCaps.bottom)]
drawInRect: CGRectMake(cornerCaps.left,cornerCaps.top,size.width-cornerCaps.left-cornerCaps.right,size.height-cornerCaps.top-cornerCaps.bottom)]; //interior
UIImage *rslt = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return [rslt resizableImageWithCapInsets: cornerCaps];
}
@end
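A hypothetical usage of the category (the insets, target size, and navigationArrowView outlet are placeholders, not taken from the question):
#import "UIImage+utils.h"
UIImage *arrow = [UIImage imageNamed:@"arrow.png"];
//stretch the interior to 70 points wide while protecting the edges named in the insets
UIImage *stretched = [arrow stretchImageWithCapInsets:UIEdgeInsetsMake(15, 7, 15, 15) toSize:CGSizeMake(70, arrow.size.height)];
self.navigationArrowView.image = stretched;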
Swift 3.0 version of Vicky's answer.
var imageInset:UIEdgeInsets = UIEdgeInsets()
imageInset.left = 10.0
imageInset.top = 10.0
imageInset.bottom = 10.0
imageInset.right = 10.0
self.myImageView.image = myimage.resizableImage(withCapInsets: imageInset)

bounds and frames: how do I display part of a UIImage

My goal is simple; I want to create a program that displays a UIImage and, when swiped from bottom to top, displays another UIImage. The images here could be a happy face/sad face. The sad face should be the starting point, the happy face the end point. When swiping your finger, the part below the finger should show the happy face.
So far I have tried solving this with the frame and bounds properties of the UIImageView I used for the happy face image.
What this piece of code does is wrong, because the transition starts in the center of the screen and not at the bottom. Notice that the origins of both the frame and the bounds are at 0,0...
I have read numerous pages about frames and bounds, but I don't get it. Any help is appreciated!
loadImages is called only once.
- (void)loadImages {
sadface = [UIImage imageNamed:@"face-sad.jpg"];
happyface = [UIImage imageNamed:@"face-happy.jpg"];
UIImageView *face1view = [[UIImageView alloc]init];
face1view.image = sadface;
[self.view addSubview:face1view];
CGRect frame;
CGRect contentRect = self.view.frame;
frame = CGRectMake(0, 0, contentRect.size.width, contentRect.size.height);
face1view.frame = frame;
face2view = [[UIImageView alloc]init];
face2view.layer.masksToBounds = YES;
face2view.contentMode = UIViewContentModeScaleAspectFill;
face2view.image = happyface;
[self.view addSubview:face2view];
frame = CGRectMake(startpoint.x, 0, contentRect.size.width, contentRect.size.height);
face2view.frame = frame;
face2view.clipsToBounds = YES;
}
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
CGPoint movepoint = [[touches anyObject] locationInView: self.view];
NSLog(@"movepoint: %f %f", movepoint.x, movepoint.y);
face2view.bounds = CGRectMake(0, 0, 320, 480 - movepoint.y);
}
The UIImages and UIImageViews are properly disposed of in the dealloc function.
Indeed, you seem to be confused about frames and bounds. In fact, they are easy. Always remember that any view has its own coordinate system. The frame, center and transform properties are expressed in the superview's coordinates, while the bounds is expressed in the view's own coordinate system. If a view doesn't have a superview (it is not installed into a view hierarchy yet), it still has a frame. In iOS the frame property is calculated from the view's bounds, center and transform. You may ask what the hell frame and center mean when there's no superview. They are used when you add the view to another view, allowing you to position the view before it's actually visible.
The most common example when a view's bounds differ from its frame is when it is not in the upper left corner of its superview: its bounds.origin may be CGPointZero, while its frame.origin is not. Another classic example is UIScrollView, which frequently modifies its bounds.origin to make subviews scroll (in fact, modifying the origin of the coordinate system automatically moves every subview without affecting their frames), while its own frame is constant.
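A small self-contained illustration of the difference (the sizes here are arbitrary):
UIView *container = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
UIView *child = [[UIView alloc] initWithFrame:CGRectMake(20, 30, 100, 100)];
[container addSubview:child];
//child.frame.origin is (20, 30), expressed in container's coordinate system
//child.bounds.origin is (0, 0), expressed in child's own coordinate system
//shifting container's bounds.origin moves every subview visually (this is how UIScrollView scrolls)
//without changing any subview's frame:
container.bounds = CGRectMake(0, 50, 200, 200);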
Back to your code. First of all, when you already have images to display in image views, it makes sense to init the views with their images:
UIImageView *face1view = [[UIImageView alloc] initWithImage: sadface];
That helps the image view to immediately size itself properly. It is not recommended to init views with -init because that might skip some important code in their designated initializer, -initWithFrame:.
Since you add face1view to self.view, you should really use its bounds rather than its frame:
face1view.frame = self.view.bounds;
Same goes for the happier face. Then in -touchesMoved:… you should either change face2view's frame to move it inside self.view or (if self.view does not contain any other subviews besides faces) modify self.view's bounds to move both faces inside it together. Instead, you do something weird like vertically stretching the happy face inside face2view. If you want the happy face to slide from the bottom of self.view, you should initially set its frame like this (not visible initially):
face2view.frame = CGRectOffset(face2view.frame, 0, CGRectGetHeight(self.view.bounds));
If you choose to swap faces by changing image views' frames (contrasted with changing self.view's bounds), I guess you might want to change both the views' frame origins, so that the sad face slides up out and the happy face slides up in. Alternatively, if you want the happy face to cover the sad one:
face2view.frame = face1view.frame;
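As a rough sketch of the sliding, frame-based approach described above (assuming face1view is also kept in an instance variable and both views are full-screen):
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
CGPoint movePoint = [[touches anyObject] locationInView:self.view];
CGFloat w = CGRectGetWidth(self.view.bounds);
CGFloat h = CGRectGetHeight(self.view.bounds);
//slide the sad face up and out while the happy face's top edge follows the finger,
//so everything below the finger shows the (unstretched) happy face
face1view.frame = CGRectMake(0, movePoint.y - h, w, h);
face2view.frame = CGRectMake(0, movePoint.y, w, h);
}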
Your problem seems to have something to do with the face2view.bounds in touchesMoved.
You are setting the bounds of this view to the rect, x:0, y:0, width:320, height:480 - y
x = 0 == left on the x axis
y = 0 == top on the y axis
So you are putting this image frame at the upper left corner, and making it fill the whole view. That's not what you want. The image simply becomes centered in this imageView.

Trying to create a rectangle filled with blue UIImage object

I want to create a blue rectangle image and see it in my view, but this code doesn't seem to work:
CGRect imageRect = CGRectMake(50, 50, 64, 40);
UIGraphicsBeginImageContext(imageRect.size);
[[UIColor blueColor] set];
UIRectFill(imageRect);
UIImage *aImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *myImageView = [[UIImageView alloc] initWithImage:aImage];
[self.view addSubview:myImageView];
Can someone fix it for me?
Thanks,
Sagiftw
Your context is 64 points by 40 points. You filled a rectangle starting 50 points from the origin in a 40-point-tall context. That put it out of bounds, and anything you draw outside the bounds of the context won't show up.
Set your rectangle's origin to 0,0, which is the origin of the context. Then, your 64×40-point rectangle will be completely within the bounds of your 64×40-point context.
If you actually want to draw the rectangle 50 points below and to the right of the context's origin, then you need to make the context's size at least big enough to hold that margin plus the size of the rectangle. If you also want the same amount of margin on the other side, then the context's size should be the rectangle's size plus 100 points wide by 100 points tall (50 points on each side of the rectangle on each axis).
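A corrected sketch along those lines: draw at the context's origin, then position the image view with its frame:
CGRect imageRect = CGRectMake(0, 0, 64, 40); //draw at the origin of the context
UIGraphicsBeginImageContext(imageRect.size);
[[UIColor blueColor] set];
UIRectFill(imageRect);
UIImage *aImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *myImageView = [[UIImageView alloc] initWithImage:aImage];
myImageView.frame = CGRectMake(50, 50, 64, 40); //position the view, not the drawing
[self.view addSubview:myImageView];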

Is it possible to set the position of a UIImageView's image?

I have a UIImageView that displays a bigger image. It appears to be centered, but I would like to move that image inside the UIImageView. I looked at the MoveMe sample from Apple, but I couldn't figure out how they do it. It seems that they don't even have a UIImageView for that. Any ideas?
What you need is something like this (e.g. showing the top-left 30% by 30% of the original image):
imageView.layer.contentsRect = CGRectMake(0.0, 0.0, 0.3, 0.3);
Description of "contentsRect":
The rectangle, in the unit coordinate space, that defines the portion of the layer’s contents that should be used.
The original answer has been superseded by Core Animation in iOS 4.
So, as Gold Thumb says, you can do this by accessing the UIView's CALayer, specifically its contentsRect:
From the Apple Docs: The rectangle, in the unit coordinate space, that defines the portion of the layer’s contents that should be used. Animatable.
Do you want to display the image so that it is contained within the UIImageView? In that case just change the contentMode of the UIImageView to UIViewContentModeScaleToFill (if aspect ratio is inconsequential) or UIViewContentModeScaleAspectFit (if you want to maintain the aspect ratio).
In IB, this can be done by setting the Mode in Inspector.
In code, it can be done as
yourImageView.contentMode = UIViewContentModeScaleToFill;
In case you want to display the large image as is inside a UIImageView, the best and easiest way to do this would be to have the image view inside a UIScrollView. That ways you will be able to zoom in and out in the image and also move it around.
Hope that helps.
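A rough sketch of that setup (largeImage, the zoom limits, and keeping the image view in a zoomImageView property are assumptions for illustration):
self.zoomImageView = [[UIImageView alloc] initWithImage:largeImage];
UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds];
scrollView.contentSize = self.zoomImageView.bounds.size; //scrollable area equals the full image size
scrollView.minimumZoomScale = 0.5;
scrollView.maximumZoomScale = 2.0;
scrollView.delegate = self; //the controller must adopt UIScrollViewDelegate
[scrollView addSubview:self.zoomImageView];
[self.view addSubview:scrollView];
//the delegate must return the view to zoom:
-(UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
return self.zoomImageView;
}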
It doesn't sound like the MoveMe sample does anything like what you want. The PlacardView in it is the same size as the image used. The only size change done to it is a view transform, which doesn't affect the viewport of the image. As I understand it, you have a large picture and want to show a small viewport into it. There isn't a simple class method to do this, but there is a function that you can use to get the desired results: CGImageCreateWithImageInRect(CGImageRef, CGRect) will help you out.
Here's a short example using it:
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
[imageView setImage:[UIImage imageWithCGImage:imageRef]];
CGImageRelease(imageRef);
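One caveat worth adding: CGImageCreateWithImageInRect works in the pixel coordinates of the underlying CGImage, so on Retina displays a crop rect expressed in points needs to be scaled first (a sketch; cropRect is assumed to be in points):
CGFloat scale = largeImage.scale;
CGRect pixelRect = CGRectMake(cropRect.origin.x * scale, cropRect.origin.y * scale, cropRect.size.width * scale, cropRect.size.height * scale);
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], pixelRect);
UIImage *cropped = [UIImage imageWithCGImage:imageRef scale:scale orientation:largeImage.imageOrientation];
CGImageRelease(imageRef);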
Thanks a lot. I have found a pretty simple solution that looks like this:
CGRect frameRect = myImage.frame;
CGPoint rectPoint = frameRect.origin;
CGFloat newXPos = rectPoint.x - 0.5f;
myImage.frame = CGRectMake(newXPos, 0.0f, myImage.frame.size.width, myImage.frame.size.height);
I just move the frame around. It happens that portions of that frame go out of the iPhone's viewport, but I hope that this won't matter much. There is a mask over it, so it doesn't look weird. The user doesn't notice how it's done.
You can accomplish the same by:
UIImageView *imgVw = [[UIImageView alloc] initWithFrame:CGRectMake(x, y, width, height)];
imgVw.image = [UIImage imageNamed:@""];
[self.view addSubview:imgVw];
imgVw.contentMode = UIViewContentModeScaleToFill;
You can use NSLayoutConstraint to set the position of a UIImageView; the position can be relative to other elements or with respect to the frame.
Here's an example snippet:
let logo = UIImage(imageLiteralResourceName: "img")
let logoImage = UIImageView(image: logo)
logoImage.translatesAutoresizingMaskIntoConstraints = false
self.view.addSubview(logoImage)
NSLayoutConstraint.activate([
logoImage.topAnchor.constraint(equalTo: view.topAnchor, constant: 30),
logoImage.centerXAnchor.constraint(equalTo: view.centerXAnchor),
logoImage.widthAnchor.constraint(equalToConstant: 100),
logoImage.heightAnchor.constraint(equalToConstant: 100)
])
This way you can also resize the image easily. The constant parameter represents how far a given anchor should be positioned relative to the anchor it is constrained to.
Consider this,
logoImage.topAnchor.constraint(equalTo: view.topAnchor,constant: 30)
The above line sets the top anchor of the logoImage instance to be 30 points (the constant) below the parent view's top anchor. A negative value would mean the opposite direction.

DrawRect of UIView inside UIScrollView

I'm trying to draw a small image on a UIScrollView for my iPhone app. I started with a UIImage created from a png I included in my bundle and that works ok. Every time the zooming/panning stops the delegate of the scrollview recalculates the frame of that UIImage (which is a subview of the UIScrollView's contentView) and calls setNeedsDisplay. The frame gets smaller and smaller in width/height measurements as you zoom into the scrollview because I want the image to stay the same size on the screen. While the user is zooming the image is the wrong size but as soon as the zooming stops it gets corrected. Not great but not the worst thing either.
The problem is now that I want to draw the image myself rather than have a static png. I've subclassed UIView and got into the drawRect method but I can't get the maths right to draw a smooth looking path when the view is zoomed in. I thought I could just scale the line width down and the radius of the circle I'm drawing (CGContextAddArc) but it looks as if the iPhone has tried to draw a thin lined circle over two or three pixels and then enlarged those pixels, rather than zoomed the view in and drawn a highly accurate circle over it.
This is the ViewController resizing the UIView (which is a subview of the UIScrollView's contentView)
float scaler = (1/scrollView.zoomScale);
[targetView setFrame:CGRectMake(viewCoords.x-11*scaler, viewCoords.y-11*scaler, 22*scaler,22*scaler)];
[targetView setNeedsDisplay];
This is the drawRect of the UIView
float x = rect.origin.x + rect.size.width/2.0;
float y = rect.origin.y + rect.size.height/2.0;
float radius = (rect.size.width-2.0*lineWidth)/2.0;
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(ctx,lineWidth );
CGContextSetRGBStrokeColor(ctx, 1.0, 0.0, 0.0, 1.0);
CGContextAddArc(ctx, x, y, radius , 0, 2* M_PI, 1);
CGContextClosePath(ctx);
CGContextStrokePath(ctx);
I'm open to the idea of placing the UIView subclass elsewhere, but I must have it move in sync with the UIScrollView's contentView.
See my answer to this question. The UIScrollView applies a transform to your content view when it does the pinch zooming, which is what's causing your blurriness. You can override this to draw your own content sharply if you follow the method I describe.
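The linked answer isn't reproduced here, but one common technique (my assumption, not necessarily what that answer describes) is to raise the drawing view's contentScaleFactor to match the zoom so that drawRect renders at full resolution:
//in the UIScrollView delegate; targetView is the custom-drawn subview from the question
-(void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(CGFloat)scale
{
targetView.contentScaleFactor = scale * [UIScreen mainScreen].scale;
[targetView setNeedsDisplay]; //redraws the circle sharply at the new scale
}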