Transparency on images inside opaque UIImageView - iphone

I often export PNG images from Photoshop for use in an iPhone app, using the Save For Web and Devices feature. I always leave the transparency option selected, even if there are no transparent parts to the image. This is because I assumed that it would have no effect if the image has no transparent areas, and it's easier to just leave the option selected.
I was told recently that by doing this, the opaque property of a UIImageView is effectively ignored because the UIImage will have an alpha channel, having a negative impact on performance.
Is this correct? Should I turn off the transparency option if it's not needed when exporting PNGs from Photoshop?

The image itself should have zero effect on a UIImageView or its opaque property, beyond the amount of image data that has to be loaded into the view's image property. Since an image with transparency will usually contain more data than the same image without any, it will take slightly longer to load when you set it (imageView.image = [UIImage imageNamed:@"myTransparentImage.png"];). Unless, of course, you use a different quality/compression/format/color depth/etc. when exporting from Photoshop.
You can verify all of this, and see the exact amount of time, I/O, memory, etc. used for each image, by profiling with the Instruments app using the System Usage, Time Profiler, and/or Activity Monitor templates.
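If you want to check at runtime whether an exported PNG actually ended up with an alpha channel, you can inspect the backing CGImage. A minimal sketch (the image name is just a placeholder):

```objc
#import <UIKit/UIKit.h>

UIImage *image = [UIImage imageNamed:@"myTransparentImage.png"]; // hypothetical file
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(image.CGImage);
// The "none"/"skip" variants mean there is no usable alpha channel.
BOOL hasAlpha = (alphaInfo != kCGImageAlphaNone &&
                 alphaInfo != kCGImageAlphaNoneSkipFirst &&
                 alphaInfo != kCGImageAlphaNoneSkipLast);
NSLog(@"Image %@ an alpha channel", hasAlpha ? @"has" : @"does not have");
```

That lets you confirm directly whether Photoshop's transparency option changed what gets loaded, rather than guessing from file size.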

Related

Does putting a large UIImage into a small UIImageView auto resize it?

Let's say I have a UIImage 1000x1000 and I have a UIImageView with a frame size of 50x50. If I set my UIImageView's image as the large image, is the UIImage automatically resized to 50x50? The UIImageView contentMode is ScaleToFit.
I know that the image does shrink down to 50x50 to fit in there, but in terms of memory, would it be better that I resized the image to 50x50 before setting it as the UIImageView image?
I know it won't make a difference for one image, but I have hundreds of large images and image views in a UIScrollView, so I want to make sure I get the fastest and smoothest performance.
I can think of a couple of considerations. [UIImage imageNamed:] will cache the image for the entire app run, even once it is no longer used, so if arbitrary images will be used or added (as opposed to a static set), make sure to load them with NSData from a connection, file, or database instead. Also, I haven't checked in Instruments for at least several iOS versions, but I believe the original image is kept cached for scaling.
If there is no specific issue that you actually run into, such as running out of memory or noticeable slowness, don't bother with pre-emptive optimization. Otherwise, a couple of techniques may help. First, you may want to retain only what is currently on screen; this is easier to do with a UITableView and cell reuse than with a bare UIScrollView. Second, for one app that held a lot of thumbnails, I scaled them once and stored the scaled versions in the database.
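If you do decide to scale once up front, a minimal sketch of the idea (the method name is mine, not from the question; assumes iOS 4+ for UIGraphicsBeginImageContextWithOptions):

```objc
#import <UIKit/UIKit.h>

// Scale a large image down once, so the scroll view holds small
// 50x50 bitmaps instead of 1000x1000 originals.
- (UIImage *)scaledImage:(UIImage *)image toSize:(CGSize)size
{
    // NO = not opaque (preserves alpha); 0.0 = use the screen's scale.
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}
```

Store the result (in a database or on disk, as above) and hand only the scaled versions to the image views in the scroll view.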

Creating a composited image with a transparent background

In Eclipse, I have a view that uses GEF, and for some of the figures that I display I need to paint a background.
For some of the backgrounds, I use a system of folders with a set of predefined images (top_left.png, top.png, top_right.png, left.png, middle.png, ..., bottom_right.png) and, while I can recreate the background when needed, it is highly inefficient (especially since it redraws every time the view is scrolled or a fly passes by).
To avoid having to recreate the background image every time, I want to use a cache: I create an Image object on which I paint each tile, and then I cache that image in a map (the key being the dimensions of the image).
To get rounded corners, the cached image needs to have a transparent background, and this is where I am stuck:
I have tried setting the transparent pixel and painting with that same color, but without success.
I tried using ImageData to set the alpha based on the alpha of each source image, but in that case, while the transparency comes out right, the created image is all white.
Is there a way in SWT to do a transparent background image that I can paint some images on?
Update:
I found a possible solution using a BufferedImage from AWT and converting to SWT with code found at http://www.java2s.com/Code/Java/SWT-JFace-Eclipse/ConvertbetweenSWTImageandAWTBufferedImage.htm
While a good base, this code doesn't actually handle transparency, so I modified it quickly (and quite dirtily) to do so. Is there reliable code for converting images between AWT and SWT and back?
I would prefer a solution to my problem that doesn't involve converting back and forth between image formats.

Core Graphics Vs Images for a custom button

When should I go with core graphics over images for making a custom UIButton?
Is core graphics faster? Other than resolution independence, are there any other major benefits?
Pros of Core Graphics:
Code for drawing a button will probably be smaller than an image file.
Allows dynamic modification, slight changes without adding a complete second image.
As you mentioned, resolution independent.
Less memory intensive (doesn't allocate memory to hold every pixel).
Pros of images:
Creating the image in an editor will usually be simpler than writing code which draws it perfectly. (If you use anything other than solid colors, it could be much simpler.)
The editor will let you see the image beforehand without recompiling.
Easier to work with other built in objects (i.e., you can make it the background of a UIButton).
As for running time, I am not sure. I would guess that Core Graphics would be faster for simple drawing, where most of the pixels are untouched, while images would be faster for more complex drawing where most of the pixels change (assuming you use the PNG format, so it gets optimized; otherwise, Core Graphics would probably always be faster).
As a compromise, you could draw into an image once, and use that image for future drawing. This would allow you to get some of the benefits from each.
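A minimal sketch of that compromise, drawing once with Core Graphics and reusing the result (the colors and the method name are placeholder choices, not from the answer):

```objc
#import <UIKit/UIKit.h>

// Render the button background once, then cache the returned UIImage
// and reuse it for every button instance.
- (UIImage *)buttonBackgroundOfSize:(CGSize)size
{
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(ctx, [UIColor darkGrayColor].CGColor);
    CGContextFillRect(ctx, CGRectMake(0, 0, size.width, size.height));
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
```

The Core Graphics drawing code runs only once; afterwards you pay the same cost as any other image, which is exactly the trade described above.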
Additional thoughts to ughoavgfhw's comment:
Images can be cached (as when you use [UIImage imageNamed:]), so you won't even use more memory for new buttons beyond the first one displayed (and no allocations, except memory for a new pointer).
You can use stretchable images for button creation, and avoid some (not all and not always) problems with resolution dependence:
UIImage *myImg = [UIImage imageNamed:@"myImg.png"];
UIImage *myImgStretchable = [myImg stretchableImageWithLeftCapWidth:10 topCapHeight:10];
[myButton setBackgroundImage:myImgStretchable forState:UIControlStateNormal];

Zoom out on large UIImageView within an UIScrollView

Case: I have a simple piece of code for displaying/zooming/scrolling a large image in a UIImageView inside a UIScrollView. On this UIScrollView's content I want to place buttons to create clickable areas.
Issue: When I zoom in, the quality remains good, but when I zoom out to the highest possible level, the image becomes dotted and some lines are no longer visible.
Tried: I tried regenerating the UIImage with the interpolation quality set to kCGInterpolationHigh, and tried changing the size of the image after every zoom change. As you might expect, no results yet.
I suggest you use a CATiledLayer as the backing layer, as demonstrated by Apple's PhotoScroller sample app. This allows you to prebuild the scaled versions, meaning you can precisely control the interpolation quality with Photoshop/ImageMagick/GIMP etc, rather than relying on UIScrollView's built-in scaling.
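A minimal sketch of a CATiledLayer-backed view in the style of the PhotoScroller sample (the class name, tile size, and level count are placeholder choices):

```objc
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface TiledImageView : UIView
@end

@implementation TiledImageView

// Back this view with a CATiledLayer instead of a plain CALayer.
+ (Class)layerClass
{
    return [CATiledLayer class];
}

- (id)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
        tiledLayer.tileSize = CGSizeMake(256, 256);
        tiledLayer.levelsOfDetail = 4; // number of prebuilt zoom levels
    }
    return self;
}

- (void)drawRect:(CGRect)rect
{
    // Called once per tile: load and draw the prebuilt tile image
    // covering `rect` for the current zoom level here.
}

@end
```

Because each level of detail draws from its own prebuilt tiles, the zoomed-out rendering uses images you downsampled offline with high-quality filtering, instead of UIScrollView's live scaling.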

Image strategy in iPhone app

I'm writing a card game for the iPhone, and I'm not sure about the best strategy for displaying the cards. I have a basic prototype that creates a UIImageView that can be dragged for each card with a dummy image. I wanted to use one large UIImage that contains the faces of all of the cards, and then have each draggable UIImageView display a part of that image. I must be misunderstanding what setBounds is for - I thought that controlled which part of the underlying image is displayed. So, two questions:
Is this the right approach?
How do I display just a part of the image?
Depending on your resolution, this might not be the best approach.
From Apple:
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
Now, you are talking about breaking it up into several smaller pieces, but given UIImage's caching, I am not sure what happens to memory every time you access the image and copy a sub-rect out of it. I think the approach I would take is to have an array of images, instead of one big one.
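For completeness, if you do slice cards out of one big sheet, CGImageCreateWithImageInRect copies just the sub-rectangle into a new image. A minimal sketch (the file name and card dimensions are placeholders, and the rect is in the CGImage's pixel coordinates):

```objc
#import <UIKit/UIKit.h>

UIImage *sheet = [UIImage imageNamed:@"cards.png"];  // hypothetical sprite sheet
CGRect cardRect = CGRectMake(0, 0, 73, 98);          // one card's pixel rect
CGImageRef cardRef = CGImageCreateWithImageInRect(sheet.CGImage, cardRect);
UIImage *card = [UIImage imageWithCGImage:cardRef];
CGImageRelease(cardRef); // "Create" rule: we own cardRef and must release it
// `card` can now be assigned to a draggable UIImageView's image property.
```

You could run this once at startup to build the array of per-card images suggested above, then discard the big sheet.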