How to set UISlider's MinimumTrackImage so that it won't stretch? - iphone

Ever since I updated Xcode, the MinimumTrackImage on my UISlider has been stretching, when before it was clipping like I wanted it to. The MaximumTrackImage's behavior didn't change.
How can I get the MinimumTrackImage to not stretch? Note that I use rubymotion, but a solution using obj-c is also acceptable.

I'm guessing here (you are allowed to guess on StackOverflow as long as you're honest about it)... There is a new iOS 6 feature for images, and perhaps it is getting in your way here: you can set the resizingMode and capInsets for an image. Try this:
// Objective-C
UIImage *newImage = [oldImage resizableImageWithCapInsets:UIEdgeInsetsZero resizingMode:UIImageResizingModeTile];
# RubyMotion
newImage = oldImage.resizableImageWithCapInsets(UIEdgeInsetsZero, resizingMode:UIImageResizingModeTile)
If you need to adjust the insets as well, replace UIEdgeInsetsZero with UIEdgeInsetsMake(top, left, bottom, right) where top, left, bottom, and right are floats. In RubyMotion, I believe you can just use [top, left, bottom, right].
Info came from here:
http://developer.apple.com/library/ios/#documentation/UIKit/Reference/UIImage_Class/Reference/Reference.html
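Putting that together, here is a minimal sketch of applying such an image to the slider's track (the slider variable and the image name are illustrative, not from the question):
```
// Sketch: apply a tiled (non-stretching) minimum track image to a slider.
// "track_min.png" and the slider ivar are illustrative names.
UIImage *trackImage = [UIImage imageNamed:@"track_min.png"];
UIImage *tiledImage = [trackImage resizableImageWithCapInsets:UIEdgeInsetsZero
                                                 resizingMode:UIImageResizingModeTile];
[slider setMinimumTrackImage:tiledImage forState:UIControlStateNormal];
```
With UIImageResizingModeTile the image is repeated rather than scaled, which should restore the clipping-style look.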

Related

UIImageView content pixels (ignore image padding)

The short version: How do I know what region of a UIImageView contains the image, and not aspect ratio padding?
The longer version:
I have a UIImageView of fixed size as pictured:
I am loading photos into this view controller, and I want to retain each photo's original aspect ratio, so I set the contentMode to Aspect Fit. This ensures that the entire photo is displayed within the UIImageView, but with the side effect of adding some padding (shown in red):
No problem so far.... But now I am doing face detection on the original image. The face detection code returns a list of CGRects which I then render on top of the UIImageView (I have a subclassed UIView and then laid out an instance in IB which is the same size and offset as the UIImageView).
This approach works great when the photo is not padded out to fit into the UIImageView. However, if there is padding, it introduces some skew, as seen here in green:
I need to take the image padding into account when rendering the boxes, but I do not see a way to retrieve it.
Since I know the original image size and the UIImageView size, I can do some algebra to calculate where the padding should be. However it seems like there is probably a way to retrieve this information, and I am overlooking it.
I do not use image views often, so this may not be the best solution. But since no one else has answered the question, I figured I'd throw out a simple mathematical solution that should solve your problem:
UIImage *selectedImage; // the image you want to display
UIImageView *imageView; // the image view holding selectedImage

// Aspect Fit scales the image by the smaller of the two width/height ratios,
// so use the displayed height, not the image's native height:
CGFloat scale = MIN(imageView.frame.size.width / selectedImage.size.width,
                    imageView.frame.size.height / selectedImage.size.height);
CGFloat displayedHeight = selectedImage.size.height * scale;

CGFloat yStartingLocationForGreenSquare; // set it to whatever the current location is
// take whatever you had it set to and add the value of the top padding
yStartingLocationForGreenSquare += (imageView.frame.size.height - displayedHeight) / 2.0;
So although there may be other solutions this is a pretty simple math formula to accomplish what you need. Hope it helps.
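For what it's worth, AVFoundation ships a helper that computes the aspect-fit rect directly, which avoids hand-rolled math entirely (requires linking AVFoundation; variable names match the answer above):
```
#import <AVFoundation/AVFoundation.h>

// Computes the rect the image actually occupies inside the image view
// under Aspect Fit; the offsets from the view's bounds are the padding.
CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(selectedImage.size,
                                                       imageView.bounds);
CGFloat topPadding  = imageRect.origin.y;
CGFloat leftPadding = imageRect.origin.x;
```
You can then offset your face-detection CGRects by the padding (and scale them by imageRect.size.width / selectedImage.size.width) before drawing.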

UISlider minimumValueImageRectForBounds: and maximumValueImageRectForBounds: return an empty rectangle

I subclassed UISlider in order to make the thumb smaller, and I want to override minimumValueImageRectForBounds: and maximumValueImageRectForBounds: to make their width 2 px less. So my code is:
- (CGRect)minimumValueImageRectForBounds:(CGRect)bounds
{
CGRect stdRect = [super minimumValueImageRectForBounds:bounds];
return CGRectMake(stdRect.origin.x + 2, stdRect.origin.y, stdRect.size.width - 2, stdRect.size.height);
}
The point is that stdRect is an empty rectangle (0, 0, 0, 0).
Moreover, if I explicitly set some rectangle like this:
- (CGRect)minimumValueImageRectForBounds:(CGRect)bounds
{
return CGRectMake(2, 0, 40, 8);
}
It doesn't affect the minimum value image's position at all.
Any ideas?
Haha, I just figured it out. I wasn't reading closely enough.
Setting the minimumValueImage is not the same as calling setMinimumTrackImage:forState:. The minimumValueImage is an image displayed to the left of the slider, independent of what is going on inside the slider. Overriding minimumValueImageRectForBounds: changes the dimensions of this image (the default is 0 pt wide), and the slider's frame is made narrower and shifted to the right as a result. As I understand it, there is no way to modify the rectangle of the minimumTrackImage (for example, to make it extend to the right of the thumb image); you can only change the image itself.
I haven't really figured out the point of allowing you to set the minimumValueImage. It seems like you could accomplish the same thing by changing the size of the slider and adding separate UIImageViews to the side of the slider. Who knows.
Note that everything I've said here applies in the same way to the maximumValueImage and setMaximumTrackImage:forState: methods.
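To make the distinction concrete, here is a short sketch (the slider variable and image names are illustrative):
```
// minimumValueImage: drawn to the LEFT of the track, outside the thumb's travel.
slider.minimumValueImage = [UIImage imageNamed:@"speaker_quiet"];

// setMinimumTrackImage:forState:: used to draw the track itself,
// i.e. the filled portion to the left of the thumb.
UIImage *track = [[UIImage imageNamed:@"track_blue"]
    resizableImageWithCapInsets:UIEdgeInsetsMake(0, 4, 0, 4)];
[slider setMinimumTrackImage:track forState:UIControlStateNormal];
```
Overriding minimumValueImageRectForBounds: only affects where the first of these is drawn; the track image's rect is not exposed.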
DO NOT DO THIS. This is an old, incorrect answer, by past me. I hate that guy.
--
As a general rule, you probably don't want to subclass the Apple standard UI controls. You might want to build it up and set those properties yourself instead (not sure if they are settable, but seems like they should be).

iphone: re-sizing gradient after shift from portrait to landscape

In viewDidLoad, I can create a gradient with no problem:
CAGradientLayer *blueGradient = [[CAGradientLayer layer] retain];
blueGradient.frame = CGRectMake(gradientStartX,gradientStartY,gradientWidth,gradientHeight);
where gradientWidth is device-defined as 320 or 1024 as appropriate.
What I can’t do is resize it inside willRotateToInterfaceOrientation: (and thus get rid of that empty black space off to the right) after the user changes to landscape mode. (The nav bar and tab bar behave nicely.)
(1) Recalibrating the gradient’s new dimensions according to the new mid-point, (2) using kCALayerMaxXMargin, and (3) employing bounds all looked like they would do the job. bounds looked a little more intuitive, so I tried that.
I don’t want to admit that I have made zero progress.
I will say that I’ve been reduced to the brute force method of trying every permutation of self, view, layer, bounds, blueGradient, and CGRect(gradientStartX,gradientStartY,newGradientWidth,newGradientHeight) with zero success.
This is not difficult. My lack of understanding is making it difficult. Anyone out there “Been there, done that”?
Does the layer resize itself automatically? If so, a simple
[blueGradient setNeedsDisplay];
should do the trick.
Hope this was helpful,
Paul
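One approach that has worked for me elsewhere (not from Paul's answer; it assumes blueGradient is kept in an ivar after viewDidLoad): give the layer its new frame when the rotation animation begins, with implicit layer animation disabled so the gradient doesn't lag behind the view:
```
// Sketch: resize the gradient layer to track the view during rotation.
- (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)orientation
                                         duration:(NSTimeInterval)duration
{
    [CATransaction begin];
    [CATransaction setDisableActions:YES]; // suppress the implicit frame animation
    blueGradient.frame = self.view.bounds; // adopt the view's new size
    [CATransaction commit];
}
```
Unlike views, CALayers have no autoresizing mask by default on iOS, which is why the layer stays at its portrait size unless you set its frame yourself.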

StretchableImageWithLeftCapWidth stretching wrong portions

I am trying to use a UIImage with stretchableImageWithLeftCapWidth to set the image in my UIImageView but am encountering a strange scaling bug. Basically picture my image as an oval that is 31 pixels wide. The left and right 15 pixels are the caps and the middle single pixel is the scaled portion.
This works fine if I set the left cap to 15. However, if I set it to, say, 4. I would expect to get a 'center' portion that is a bit curved as it spans the center while the ends are a little pinched.
What I get is the left cap seemingly correct, followed by a long middle portion that is as if I scaled the single pixel at pixel 5, then a portion at the right of the image where it expands and closes over a width about twice the width of the original image. The resulting image is like a thermometer bulb.
Has anyone seen odd behavior like this and might know what's going on?
Your observation is correct, Joey. stretchableImageWithLeftCapWidth: does NOT expand the whole center of the image as you would expect. It only expands the pixel column just right of the left cap and the pixel row just below the top cap!
Use UIView's contentStretch property instead, and your problem will be solved. Another advantage to this is that contentStretch can also shrink a graphic properly, whereas stretchableImageWithLeftCapWidth only works when making the graphic larger.
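Here is a sketch of the contentStretch approach, using the 31-pixel-wide oval from the question (the image name is illustrative; contentStretch takes unit coordinates, 0..1):
```
// Stretch only the middle band of the view's content. For a 31 px wide
// oval with 15 px caps, the stretchable column is 1 px starting at x = 15,
// expressed as fractions of the image width.
imageView.image = [UIImage imageNamed:@"oval"]; // illustrative name
imageView.contentStretch = CGRectMake(15.0/31.0, 0.0, 1.0/31.0, 1.0);
```
Because the stretchable region is defined on the view's content rather than baked into the image, it scales in both directions, including shrinking.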
Not sure if I got you right, but leftCapWidth etc. is made for rounded corners: everything in the rectangle inside the caps is stretched to fit the space between the 'caps' on the destination button or view.
So if your oval is taller or wider than 4 x 2 = 8, whatever is in the middle rectangle will be stretched, and yours is, so it would at least look a bit ugly. But if it's not even symmetrical, something else has affected the stretch. Maybe something to do with origin or frame, or when it's set, or maybe it's set twice, or you have two different stretched images on top of each other giving the thermometer look.
I once created two identical buttons in the same place, using the same retained object - of course throwing away the previous button. Then I wondered why the heck the button didn't disappear when I set alpha to 0... But it did, it's just that there was a 'dead' identical button beneath it :)

drawAtPoint: and drawInRect: blurry text

When drawing strings using drawAtPoint:, drawInRect: and even setting the text property of UILabels - the text can sometimes appear slightly blurry.
I tend to use Helvetica in most places, and I notice that specific font sizes cause some level of blurriness to occur, both in the simulator and on the device.
For example:
UIFont *labelFont = [UIFont fontWithName:@"Helvetica-Bold" size:12];
Will cause the resulting label to have slightly blurry text.
UIFont *labelFont = [UIFont fontWithName:@"Helvetica-Bold" size:13];
Results in crisp text.
My question is why does this occur? And is it just a matter of selecting an optimal font size for a typeface? If so, what are the optimal font sizes?
UPDATE: It seems that perhaps it is not the font size that is causing the blurriness. It may be that the center of the rect is a fractional point. Here is a comment I found on the Apple dev forums:
Check the position. It's likely on a fractional pixel. Change center to be an integer value.
I rounded off the values of all my points, but there are still places where text remains blurry. Has anyone come across this issue before?
I have resolved this.
Just make sure that the point or rect in which you are drawing does not occur on a fractional pixel.
E.g., use NSLog(@"%@", NSStringFromCGRect(theRect)) to determine whether the rect falls on a fractional pixel, then call round() on the offending values.
You might want to look at NSIntegralRect(); it does what you want. (On iOS, the CGRect equivalent is CGRectIntegral().)
Pardon my ignorance if this is incorrect; I know nothing about iPhone or Cocoa.
If you're asking for the text to be centered in the rect, you might also need to make sure the width and/or height of the rect is an even number.
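A small sketch of the rounding fix using CGRectIntegral(), the CGRect counterpart of NSIntegralRect() (the string and rect variables are illustrative):
```
// Snap the drawing rect to whole-pixel boundaries before drawing text,
// so glyphs land on pixel edges and stay crisp.
CGRect crispRect = CGRectIntegral(theRect); // origin floored, size ceiled to integers
[myString drawInRect:crispRect
            withFont:[UIFont fontWithName:@"Helvetica-Bold" size:13]];
```
CGRectIntegral() rounds the origin down and the size up, so the resulting rect always contains the original one.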
I have had this problem too, but my solution is different and may help someone.
My problem was text blur after changing the size of a UIView object through touchesBegan:
and CGAffineTransformMakeScale, then back to CGAffineTransformIdentity in touchesEnded:.
I tried both varying the text size and rounding the x and y center points, but neither worked.
The solution for my problem was to use even sizes for the width and height of my UIView !!
Hope this helps ....
From my experiments, some fonts don't render clearly at certain sizes. e.g. Helvetica-Bold doesn't render "." well at 16.0f, but they're clear at 18.0f. (If you look closely, the top pixel of the "." is blurry.)
After I noticed that, I've been irked every time I see that UIView, since it's rendered dynamically.
In my case, I drew text on a custom CALayer and it turned out pretty blurry. I solved it by setting contentsScale to the appropriate value:
Objective-C:
layer.contentsScale = [UIScreen mainScreen].scale;
Swift:
layer.contentsScale = UIScreen.main.scale