Difference in size when converting UILabel to UIImage - iPhone

I am drawing a UILabel to a UIImage.
The UILabel has a width of 193.5 while the resulting UIImage from the code below is 194 wide. Why is this?
UIGraphicsBeginImageContextWithOptions(label.bounds.size, YES, [[UIScreen mainScreen] scale]);
[label.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

How'd you manage to get half a pixel in the width of a UILabel? The graphics context probably saw that fractional width and rounded it up automatically to 194.

Consider this - there is really no way to render half a pixel. I guess you could technically blend one pixel with another, but you still need a full pixel there, so it has to round up. I'd consider it a mistake to have a non-integer value for any UI element coordinate - you'll end up with a blurry-looking UI. So try to figure out how to fix the half-pixel issue.
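As an illustration (this snippet is not from the original answer), one way to avoid the fractional size is to snap the label's frame to whole points with CGRectIntegral() before taking the snapshot:

// Round the frame outward to whole points so the snapshot size
// matches the label size exactly.
label.frame = CGRectIntegral(label.frame);
UIGraphicsBeginImageContextWithOptions(label.bounds.size, YES, [[UIScreen mainScreen] scale]);
[label.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();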

UIImageView content pixels (ignore image padding)

The short version: How do I know what region of a UIImageView contains the image, and not aspect ratio padding?
The longer version:
I have a UIImageView of fixed size as pictured:
I am loading photos into this UIViewController, and I want to retain the original photo's aspect ratio, so I set the contentMode to Aspect Fit. This ensures that the entire photo is displayed within the UIImageView, but with the side effect of adding some padding (shown in red):
No problem so far... But now I am doing face detection on the original image. The face detection code returns a list of CGRects which I then render on top of the UIImageView (I have subclassed UIView and laid out an instance in IB at the same size and offset as the UIImageView).
This approach works great when the photo is not padded out to fit into the UIImageView. However, if there is padding, it introduces some skew, as seen here in green:
I need to take the image padding into account when rendering the boxes, but I do not see a way to retrieve it.
Since I know the original image size and the UIImageView size, I can do some algebra to calculate where the padding should be. However it seems like there is probably a way to retrieve this information, and I am overlooking it.
I do not use image views often, so this may not be the best solution. But since no one else has answered the question, I figured I'd throw out a simple mathematical solution that should solve your problem:
UIImage *selectedImage; // the image you want to display
UIImageView *imageView; // the imageview to hold the selectedImage
// Note: this assumes aspect fit did not scale the image down; if it did,
// use the displayed (scaled) height rather than the image's natural height.
CGFloat heightOfView = imageView.frame.size.height;
CGFloat heightOfPicture = selectedImage.size.height;
CGFloat yStartingLocationForGreenSquare; // set it to whatever the current location is
// take whatever you had it set to and add the value of the top padding
yStartingLocationForGreenSquare += (heightOfView - heightOfPicture) / 2;
So although there may be other solutions, this is a pretty simple math formula to accomplish what you need. Hope it helps.
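As an aside (this helper is not mentioned in the answer above), AVFoundation can compute the aspect-fit rectangle directly, which also covers the case where the image is scaled down; a minimal sketch:

#import <AVFoundation/AVFoundation.h>

// The rect the aspect-fit image actually occupies inside the image view.
CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(selectedImage.size, imageView.bounds);
// The padding is the rect's origin; offset the green squares by it.
CGFloat leftPadding = CGRectGetMinX(imageRect);
CGFloat topPadding = CGRectGetMinY(imageRect);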

converting the coordinates of a 300 dpi image to coordinates of a 72 dpi image

I'm working on a Tess4J project, and using Tess4J I've gotten the coordinates of words in the image. The only problem is that these are coordinates for a TIFF image. My project involves writing a layer of text over the image in a PDF document. I take it the resolution of a PDF document is 72 DPI, so the coordinates are morphed and too widely placed. If I bring the resolution down from 300 DPI to 72 DPI and THEN pass the image to Tesseract, won't I get the coordinates I need? If not, any alternatives? I already tried multiplying the coordinates by 300/72; surprisingly, that doesn't work.
Thanks in advance!
To convert from 300 DPI to 72 DPI, you need to multiply by 72/300, not the other way round. Do it in floating point, or do the multiplication first and the division second, as in (x * 72) / 300, so integer math doesn't truncate. PDF units are always 1/72 of an inch.
Scaling down the original image is not a good idea, since the loss of information will reduce the output text quality.
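For example, a minimal sketch of that mapping (the function name is illustrative):

// Map a coordinate from an image at the given DPI into PDF user space
// (72 units per inch). Multiply before dividing so integer inputs don't truncate.
static CGFloat PixelToPDFUnits(CGFloat pixel, CGFloat dpi)
{
    return (pixel * 72.0) / dpi;
}
// A word at x = 1500 px in a 300 DPI scan lands at x = 360 PDF units.
CGFloat pdfX = PixelToPDFUnits(1500.0, 300.0);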
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSLog(@"New image has w=%f, h=%f", newImage.size.width, newImage.size.height);
    return newImage;
}
You can use this code to rescale any image to 72 DPI dimensions (pass 1.0 instead of 0.0 as the scale if you want exactly one pixel per point).

Why won't UIImage stretch properly?

I am trying to stretch a UIImage with the following code:
UIImage *stretchyImage = [[UIImage imageNamed:@"Tag@2x.png"] stretchableImageWithLeftCapWidth:10.0 topCapHeight:0.0];
UIImageView *newTag = [[UIImageView alloc] initWithImage:stretchyImage];
The image before stretching looks like this:
And after, looks like this:
Why hasn't the stretching worked properly? The corners have all gone pixelated and look stretched, when in fact only the middle should be stretched. FYI: I am running this app on iOS 6.
Your implementation doesn't work because of the values you pass to the stretchableImageWithLeftCapWidth:topCapHeight: method.
First of all, stretchableImageWithLeftCapWidth:topCapHeight: is deprecated (as of iOS 5). The new API is resizableImageWithCapInsets:.
The image has non-stretchable parts on the top, bottom, and right sides. What you told the API was "take 10 points from the left side and stretch the rest according to the size I give you".
Since the custom shape on the right side does not repeat in either height or width, we should take that piece as a whole.
So the top inset should be the height of the image (to preserve the shape of the element on the right side), the left inset should be ~20 pixels (the rounded rectangle corners), the bottom inset can be 0 (since the top inset is already the whole image height), and the right inset should be the width of the custom orange shape on the right side (which I take to be ~40 pixels).
You can play with the cap values and achieve a better result.
UIImage *image = [UIImage imageNamed:@"Tag"];
UIImage *resizableImage = [image resizableImageWithCapInsets:UIEdgeInsetsMake(image.size.height, 20, 0, 40)];
Should do the trick.
Also, -imageNamed: works fine when you get rid of the file extension and the @2x suffix.
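Putting it together with the question's image view (the frame values here are only illustrative):

UIImageView *newTag = [[UIImageView alloc] initWithImage:resizableImage];
// Widening the frame now stretches only the middle column of the image.
newTag.frame = CGRectMake(0, 0, 200, image.size.height);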

Set row height from the height of an image loaded from a plist

I have a UITableView that displays a single large image in each cell. The names of the images are stored in a plist. I would like to adjust the height of each cell to accommodate the height of its image.
Does anyone know of a way to get the height for an image and use it to set the row height?
I'm having trouble finding this one and really appreciate any help.
The best thing to do would probably be to pre-load the image heights into your plist. Otherwise, you're going to have to load the image (using UIImage's imageWithContentsOfFile: method), and then get its size (a property).
To set custom table row heights, implement -tableView:heightForRowAtIndexPath: in your table's delegate.
The property for the height of a UIImage object - let's call it image - is:
CGFloat imageHeight = image.size.height;
Here is the code that actually worked. I put it in the -tableView:heightForRowAtIndexPath: method.
UIImage *imageForHeight = [UIImage imageWithContentsOfFile:MyImagePath];
// CGImageGetHeight() returns the height in pixels (image.size.height is in points).
imageHeight = CGImageGetHeight(imageForHeight.CGImage);
return imageHeight;
Just using:
CGFloat imageHeight = image.size.height;
as Jason recommended did not work for some reason. But he got me on the right track.
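For reference, a minimal sketch of the whole delegate method, assuming a hypothetical imagePaths array that holds the full file path for each row:

- (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
    // imagePaths is hypothetical: one full path per row, built from the plist.
    UIImage *image = [UIImage imageWithContentsOfFile:[self.imagePaths objectAtIndex:indexPath.row]];
    // size.height is in points; CGImageGetHeight() returns pixels instead.
    return image ? image.size.height : 44.0; // fall back to the default row height
}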

drawAtPoint: and drawInRect: blurry text

When drawing strings using drawAtPoint:, drawInRect: and even setting the text property of UILabels - the text can sometimes appear slightly blurry.
I tend to use Helvetica in most places, and I notice that specific font sizes cause some level of blurriness to occur, both in the simulator and on the device.
For example:
UIFont *labelFont = [UIFont fontWithName:@"Helvetica-Bold" size:12];
Will cause the resulting label to have slightly blurry text.
UIFont *labelFont = [UIFont fontWithName:@"Helvetica-Bold" size:13];
Results in crisp text.
My question is why does this occur? And is it just a matter of selecting an optimal font size for a typeface? If so, what are the optimal font sizes?
UPDATE: It seems that perhaps it is not the font size that is causing the blurriness. It may be that the center of the rect falls on a fractional point. Here is a comment I found on the Apple dev forums:
Check the position. It's likely on a fractional pixel. Change center to be integer value.
I rounded off the values of all my points, but there are still places where text remains blurry. Has anyone come across this issue before?
I have resolved this.
Just make sure that the point or rect in which you are drawing does not occur on a fractional pixel.
E.g., use NSLog(@"%@", NSStringFromCGRect(theRect)) to determine which point is being drawn on a fractional pixel, then call round() on that point.
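For instance (a sketch; myLabel stands in for whatever view came out fractional):

CGPoint center = myLabel.center;
// Snap the fractional center to whole values before the next draw.
myLabel.center = CGPointMake(round(center.x), round(center.y));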
You might want to look at NSIntegralRect(); it does what you want.
Pardon my ignorance if this is incorrect, I know nothing about iPhone or Cocoa.
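On iOS, the Core Graphics counterpart is CGRectIntegral(); a minimal sketch, with textRect and text assumed to exist:

// CGRectIntegral() expands the rect outward to whole-point boundaries,
// so the text lands on pixel boundaries and stays crisp.
CGRect crispRect = CGRectIntegral(textRect);
[text drawInRect:crispRect withFont:[UIFont fontWithName:@"Helvetica-Bold" size:13]];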
If you're asking for the text to be centered in the rect, you might also need to make sure the width and/or height of the rect is an even number.
I have had this problem too, but my solution is different and may help someone.
My problem was text blur after changing the size of a UIView object through touchesBegan and CGAffineTransformMakeScale, then back to CGAffineTransformIdentity in touchesEnded.
I tried both varying the text size and rounding the x and y center points, but neither worked.
The solution for my problem was to use even sizes for the width and height of my UIView!!
Hope this helps ....
From my experiments, some fonts don't render clearly at certain sizes. For example, Helvetica-Bold doesn't render "." well at 16.0f, but renders it clearly at 18.0f. (If you look closely, the top pixel of the "." is blurry.)
After I noticed that, I've been irked every time I see that UIView, since it's rendered dynamically.
In my case, I drew text on a custom CALayer and it turned out to be pretty blurry. I solved it by setting contentsScale to the appropriate value:
Objective-C:
layer.contentsScale = [UIScreen mainScreen].scale;
Swift:
layer.contentsScale = UIScreen.main.scale