I have an image I got from the web. Let's say that the max size I can make it without pixelating is 500 x 500. With that said, should I make the @2x version of it simply the 500 x 500 version, and the regular version (i.e. for non-retina) 250 x 250? I'm just a little confused about sizing the image correctly for the right screen resolution, and any help would be appreciated.
Yes, what you said is correct.
Keep in mind, though, that once you put it on the device the @2x version will display as 250 x 250 points on a retina screen.
If the @2x version is 500 x 500 pixels, it will be treated at load time as a 250 x 250 point, double-resolution image (scale = 2).
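For what it's worth, here is a minimal Swift sketch (the asset name "icon" is hypothetical) of how UIKit reports a @2x image at load time: the 500 x 500 pixel file comes back as a 250 x 250 point image with scale 2.

import UIKit

// Minimal sketch; "icon" / "icon@2x.png" are hypothetical asset names.
// On a retina device, UIImage(named:) picks the @2x file (500 x 500 px)
// and reports its size in points with scale = 2.
if let image = UIImage(named: "icon") {
    print("point size: \(image.size)")   // (250.0, 250.0) on a 2x screen
    print("scale: \(image.scale)")       // 2.0
    let pixelWidth  = image.size.width  * image.scale   // 500
    let pixelHeight = image.size.height * image.scale   // 500
    print("pixel size: \(pixelWidth) x \(pixelHeight)")
}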
Related
I am creating a UIImage that I want to print using iOS. The printing type will be
printInfo.outputType = UIPrintInfoOutputGeneral;
or in other words, using regular paper.
As far as I have read, iOS prints at 72 dpi. So, if I want to print a UIImage at a 2 x 3 inch size on paper, I need to create this image at 144 x 216 points, but how many pixels is that for the UIImage? In other words, what size should the image be?
thanks
It looks to me like the 72 dpi is effectively saying 72 pixels per inch, so your image size of 144 x 216 pixels is actually the size you would want to print. The only reason a UIImageView is measured in points rather than pixels is because of the different screen resolutions. The original iPhone has the same size screen as the 4s, but the screen of the 4s has a lot more pixels in it. Apple uses points so that it is easier to program for all of the devices. Check out this link: http://www.scantips.com/basics1a.html. I just read through it and I hope it will answer your question. I apologize if I confused you even more; I'm just trying to help! :)
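As a rough sketch of that arithmetic in Swift (assuming 72 points per inch and an existing UIImage prepared at that size; the function name is just for illustration), the 2 x 3 inch target works out to 144 x 216 points, and the standard print controller can be set up like this:

import UIKit

// Sketch only: 72 points per inch, so 2 x 3 inches -> 144 x 216 points.
// `artwork` is assumed to be a UIImage prepared at that size.
func printArtwork(_ artwork: UIImage) {
    let pointsPerInch: CGFloat = 72
    let targetSize = CGSize(width: 2 * pointsPerInch,   // 144
                            height: 3 * pointsPerInch)  // 216
    print("target size in points: \(targetSize)")

    let printInfo = UIPrintInfo.printInfo()
    printInfo.outputType = .general        // plain paper, as in the question

    let controller = UIPrintInteractionController.shared
    controller.printInfo = printInfo
    controller.printingItem = artwork
    controller.present(animated: true, completionHandler: nil)
}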
I've gotten some graphics files for buttons etc. from the designer. Most of the retina files have one or both dimensions odd, like 29 x 30 or 79 x 61, and the dimensions of the corresponding non-retina files will be 15 x 15 or 39 x 31, for example. The dimensions of the UIImageViews that hold each image exactly match the size of the non-retina files they hold, so on a non-retina phone there is no distortion and everything looks fine.
On a retina phone, these images (icons and such) only look fine when the images happen to have even dimensions (like 30 x 30 or 46 x 80); when there's an odd dimension to the image, it gets distorted slightly.
Should the pixel dimensions of a retina image always be twice the size of the non-retina dimensions, and of the dimensions of the frame that displays it?
As the name (@2x) implies, it is indeed assumed that the retina version is exactly twice the size of the non-retina version. Otherwise, as you have seen, there can be distortion.
On a side note, this is only indirectly related to the displaying frame; think of scroll views, for example.
Ask your designer to always design the UI (not necessarily the components themselves) for the non-retina version first, and then just double up the sizes for the retina version. This way, you won't run into distortion problems. If he designs for retina first and then scales all components back to half their size, he will likely end up with odd dimensions.
Oh, and give your designer this link:
http://www.smashingmagazine.com/2010/11/17/designing-for-iphone-4-retina-display-techniques-and-workflow/
Yes, image files that have @2x appended should be exactly double the size of the non-retina image, and thus should only have even dimensions.
It would appear so.
When you create a view which is 30 points by 30 points on a regular display, the backing store (the data that gets drawn on the screen) will be created at 30 pixels by 30 pixels.
On a retina display that backing store is simply multiplied by a scale factor. Currently that scale factor is 2 for the iPhone 4 and iPhone 4s. This means that the backing stores on retina displays will always be a multiple of 2.
Your 30 point by 30 point view would have a 60 pixel by 60 pixel backing store. If your images aren't drawn properly for retina displays, it would seem that the @2x image needs to be the full size of the backing store, and hence exactly double the size of the view in points.
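A small Swift sketch of that point-to-pixel relationship (the 30 x 30 size is taken from the answer above; UIScreen.main.scale is 2 on the retina devices discussed here):

import UIKit

// Sketch: how a view's point size maps to its backing store in pixels.
// A 30 x 30 pt view on a 2x screen backs onto 60 x 60 px, so the @2x
// image should be exactly 60 x 60 px to avoid resampling.
let view = UIView(frame: CGRect(x: 0, y: 0, width: 30, height: 30))
let scale = UIScreen.main.scale            // 2.0 on iPhone 4 / 4s
view.contentScaleFactor = scale

let pixelSize = CGSize(width: view.bounds.width * scale,
                       height: view.bounds.height * scale)
print("backing store: \(pixelSize.width) x \(pixelSize.height) px")  // 60.0 x 60.0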
Does iPhone automatically use bicubic resizing, or bicubic sharpening?
Say I have an image that will be 100 x 100 in the iPhone. I know I should make it 200 x 200 and append @2x.png so iPhone 4 high-resolution screens can use that version. Assuming file size makes no difference (a huge assumption, I know)... would it matter if I just used an 800 x 800 image and let the iPhone do the resizing itself, rather than manually resizing the 800 x 800 image to 200 x 200? What difference would it make?
It would consume unnecessary memory and processor cycles.
Addition: Perceived image quality is a subjective issue, so why don't you try it and see? Be sure to test it on older devices. If you make an exact pixel double, quadruple, or, in the case of 800 x 800 down to 100 x 100, octuple version, the downsizing is more straightforward and that may help. Frankly, there are few developers who don't care about all three of disk space, memory, and processor cycles, all of which relate to UI performance and thus the end-user experience. Exporting images in the correct sizes is not that difficult a step in professional development, so you're not likely to get many more answers here, as it isn't something most people would contemplate.
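If you did want to resize at runtime rather than shipping pre-sized assets, a hedged sketch using UIGraphicsImageRenderer (a more recent API than the devices discussed above) might look like this; the function name and the 800 x 800 source are just stand-ins:

import UIKit

// Sketch: downscale a large source image once, instead of letting every
// UIImageView resample an 800 x 800 original at display time.
func downscaled(_ source: UIImage, to pointSize: CGSize) -> UIImage {
    let format = UIGraphicsImageRendererFormat.default()
    format.scale = UIScreen.main.scale      // 200 x 200 px for 100 x 100 pt on a 2x screen
    let renderer = UIGraphicsImageRenderer(size: pointSize, format: format)
    return renderer.image { _ in
        source.draw(in: CGRect(origin: .zero, size: pointSize))
    }
}

// Usage (bigImage is the hypothetical 800 x 800 source):
// let icon = downscaled(bigImage, to: CGSize(width: 100, height: 100))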
I have an image taken by my iPod touch 4 that's 720 x 960. In the simulator, calling CGImageGetBytesPerRow() on the image returns 2880 (720 * 4 bytes), which is what I expected. However, on the device CGImageGetBytesPerRow() returns 3840, the "bytes per row" along the height (960 * 4 bytes). Does anyone know why the behavior differs, even though the image I'm calling CGImageGetBytesPerRow() on has a width of 720 and a height of 960 in both cases?
Thanks in advance.
Bytes per row can be anything as long as it is sufficient to hold the image bounds, so it's best not to assume it will be the minimum needed to fit the image.
I would guess that on the device, bytes per row is dictated by some or other optimisation or hardware consideration: perhaps an image buffer that does not have to be changed if the orientation is rotated, or the image sensor transfers extra bytes of dead data per row that are then ignored instead of doing a second transfer into a buffer with minimum bytes per row, or some other reason that would only make sense if we knew the inner workings of these devices.
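To make that concrete, here is a hedged Swift sketch of reading pixel data while respecting the reported row stride; the function name is just for illustration:

import CoreGraphics
import Foundation

// Sketch: copy a CGImage's pixels into a tightly packed buffer.
// The key point: step through the source with cgImage.bytesPerRow (which
// may include padding, e.g. 3840 on the device), not width * bytesPerPixel.
func tightlyPackedPixels(of cgImage: CGImage) -> [UInt8]? {
    guard let data = cgImage.dataProvider?.data,
          let src = CFDataGetBytePtr(data) else { return nil }

    let bytesPerPixel = cgImage.bitsPerPixel / 8
    let srcStride = cgImage.bytesPerRow              // padded row length
    let dstStride = cgImage.width * bytesPerPixel    // 2880 for a 720 px wide RGBA image

    var packed = [UInt8](repeating: 0, count: dstStride * cgImage.height)
    for row in 0..<cgImage.height {
        for col in 0..<dstStride {
            packed[row * dstStride + col] = src[row * srcStride + col]
        }
    }
    return packed
}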
It may be slightly different because of the internal memory allocation: "The number of bytes used in memory for each row of the specified bitmap image (or image mask)."
Consider using NSBitmapImageRep for some special tasks.
I created an 8.5 x 11.0 inch image at a 300 dpi setting in Photoshop.
When I go to use this as a background image in report designer, the image looks huge.
It's not fitting within the 8.5 x 11.0 page.
Is there a way to resize this image correctly so that it will fit within an 8.5 x 11.0 letter-size page?
Thanks in advance,
With the information you gave, I believe your problem is probably in the size/DPI settings.
You saved an image of 8.5 x 11 inches at 300 DPI (dots per inch), which works out to an image of 2550 x 3300 pixels.
Now, if your "report designer" software looks only at the size in pixels and assumes a DPI value different from the one you used, say 72 DPI, your 2550 x 3300 pixel image would actually come out as something like 35.4 x 45.8 inches.
So my advice is: find out what characteristics your software is expecting; apparently it is not 300 DPI.
If you can't find that information, try commonly used DPI values like 72 or 150.
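A quick Swift sketch of the conversion being described (the numbers are taken from the question above):

import Foundation

// Sketch of the arithmetic: converting between inches, DPI, and pixels.
let widthInches = 8.5, heightInches = 11.0
let exportDPI = 300.0

let pixelWidth  = widthInches  * exportDPI   // 2550
let pixelHeight = heightInches * exportDPI   // 3300

// If the consuming software assumes a different DPI, say 72:
let assumedDPI = 72.0
let apparentWidth  = pixelWidth  / assumedDPI   // ~35.4 in
let apparentHeight = pixelHeight / assumedDPI   // ~45.8 in
print("appears as \(apparentWidth) x \(apparentHeight) inches at \(assumedDPI) dpi")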