Core Graphics vs. images for a custom button - iPhone

When should I go with core graphics over images for making a custom UIButton?
Is core graphics faster? Other than resolution independence, are there any other major benefits?

Pros of Core Graphics:
Code for drawing a button will probably be smaller than an image file.
Allows dynamic modification: slight changes don't require adding a complete second image.
As you mentioned, resolution independent.
Less memory intensive (doesn't allocate memory to hold every pixel).
Pros of images:
Creating the image in an editor will usually be simpler than writing code which draws it perfectly. (If you use anything other than solid colors, it could be much simpler.)
The editor will let you see the image beforehand without recompiling.
Easier to work with other built-in objects (e.g., you can make it the background of a UIButton).
As for running time, I am not sure. I would guess that CG would be faster for simple drawing, where most of the pixels aren't changed, but images would be faster for more complex drawing where most of the pixels are changed (assuming you use the PNG format, so it gets optimized; otherwise, CG would probably always be faster).
As a compromise, you could draw into an image once and use that image for future drawing, as in the sketch below. This would give you some of the benefits of each.
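A minimal sketch of that compromise, assuming a hypothetical myButton and an arbitrary piece of Core Graphics drawing (the ellipse is just a placeholder for your real artwork):
CGSize buttonSize = CGSizeMake(80, 30); // whatever your button needs
UIGraphicsBeginImageContextWithOptions(buttonSize, NO, 0.0); // 0.0 = current screen scale
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(ctx, [UIColor blueColor].CGColor);
CGContextFillEllipseInRect(ctx, CGRectMake(0, 0, buttonSize.width, buttonSize.height));
UIImage *cachedBackground = UIGraphicsGetImageFromCurrentImageContext(); // drawn once, reused from now on
UIGraphicsEndImageContext();
[myButton setBackgroundImage:cachedBackground forState:UIControlStateNormal];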

Additional thoughts on ughoavgfhw's answer:
Images can be cached (as when you use [UIImage imageNamed:]), so you won't even use more memory for new buttons beyond the first one displayed (and no allocations, except memory for a new pointer).
You can use stretchable images for button creation and avoid some (though not all, and not always) problems with resolution dependence:
UIImage *myImg = [UIImage imageNamed:@"myImg.png"];
UIImage *myImgStretchable = [myImg stretchableImageWithLeftCapWidth:10 topCapHeight:10];
[myButton setBackgroundImage:myImgStretchable forState:UIControlStateNormal];

Related

Does putting a large UIImage into a small UIImageView auto resize it?

Let's say I have a UIImage 1000x1000 and I have a UIImageView with a frame size of 50x50. If I set my UIImageView's image as the large image, is the UIImage automatically resized to 50x50? The UIImageView contentMode is ScaleToFit.
I know that the image does shrink down to 50x50 to fit in there, but in terms of memory, would it be better that I resized the image to 50x50 before setting it as the UIImageView image?
I know it won't make a difference for one image, but I have hundreds of large images and image views in a UIScrollView, so I want to make sure I get the fastest and smoothest performance.
I can think of a couple of considerations. [UIImage imageNamed:] will cache the image for the entire app run, even when it is no longer used. If arbitrary images will be used or added, as opposed to a static set, make sure to load the image with NSData from a connection, file, or database. Also, I haven't checked in Instruments for at least several iOS versions, but I believe the original image is kept around for scaling.
If there is no specific issue you actually run into with running out of memory or slowness, don't bother with pre-emptive optimization. Otherwise, there are some techniques that may help. First, you may want to retain only what is currently on screen; this is easier to do with a UITableView and cell reuse than with a bare UIScrollView. Second, for one app that held a lot of thumbnails, I scaled them once and stored the scaled versions in the database.
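If you do pre-scale, a minimal sketch of the idea, assuming a largeImage already loaded and the 50x50 target size from the question:
// Render the big image into a small bitmap context once; the scroll
// view then only has to hold 50x50 bitmaps instead of 1000x1000 ones.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(50, 50), YES, 0.0); // YES = opaque, only if the image has no transparency
[largeImage drawInRect:CGRectMake(0, 0, 50, 50)];
UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageView.image = thumbnail;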

Transparency on images inside opaque UIImageView

I often export PNG images from Photoshop for use in an iPhone app, using the Save For Web and Devices feature. I always leave the transparency option selected, even if there are no transparent parts to the image. This is because I assumed that it would have no effect if the image has no transparent areas, and it's easier to just leave the option selected.
I was told recently that by doing this, the opaque property of a UIImageView is effectively ignored because the UIImage will have an alpha channel, having a negative impact on performance.
Is this correct? Should I turn off the transparency option if it's not needed when exporting PNGs from Photoshop?
The image itself should have zero effect on a UIImageView or its opaque property, except for the amount of image data that has to be loaded into the UIImageView's image property. Since an image with transparency will usually have a larger amount of data than the same image without any transparency, it will take slightly longer to load into UIImageView.image when setting it (imageView.image = [UIImage imageNamed:@"myTransparentImage.png"];), unless, of course, you use a different quality/compression/format/color depth/etc. when exporting from Photoshop.
You can verify all of this and see the exact amount of time, I/O, memory, etc for each different image by using the Instruments App with the System Usage, Time Profiler, and/or Activity Monitor templates.
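As a quick runtime check, you can also ask Core Graphics whether a decoded image actually carries an alpha channel (a diagnostic sketch, assuming image is a loaded UIImage):
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(image.CGImage);
BOOL hasAlpha = !(alphaInfo == kCGImageAlphaNone ||
                  alphaInfo == kCGImageAlphaNoneSkipFirst ||
                  alphaInfo == kCGImageAlphaNoneSkipLast);
NSLog(@"has alpha channel: %@", hasAlpha ? @"YES" : @"NO");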

What's the best design for an image-based iPhone app?

I would appreciate some advice on an iPhone game design. I want to display some background image and other images on top of it (buildings, characters, etc.). The background is going to be large (up to 10 times the size of the screen), so only a piece of the background file will be displayed at once. The idea is to replace this piece when the character gets close to the screen borders. I need to make this background transition a smooth animation. Also, I need to have a zoom in/out feature, preferably animated. Some images on the screen will be static (buildings) and some will require animation (a character walking).
What is the best design:
1. Use Core Graphics combined with "sprite" classes, displaying each sprite's UIImage with CGContextDrawImage.
2. Use UIKit: create a UIImageView to hold every image and add them as subviews in a single-view application.
3. Use an OpenGL ES project.
Option 1) turned out to be very slow. It seems like Core Graphics is not meant to display images in a game loop. But maybe there is a way to make it efficient? Maybe combine it with Core Animation somehow?
Option 2) is my current choice. I am hoping the view caches the image it holds and is thus more efficient than CG. But will the animation provided by UIImageView be satisfactory? I think the views shouldn't be added all at once, but rather created and added (removed?) dynamically as the background moves. Is that a good idea?
Option 3) would probably give the best control over the images, but it seems like quite an overhead. I only need to display images, not vector graphics. Plus, I'm new to Mac programming and I don't want to get stuck in some complex technology.
I appreciate any advice, thanks :)
I highly recommend Cocos2D; I've used it for my own development and it was really easy to work with. I follow Ray Wenderlich's tutorials, and he provides great tools for doing everything you describe.
You asked: "The background is going to be large (up to 10 times the size of the screen) so only a piece of the background file will be displayed at once."
The tiled-image approach is very powerful and performs well. Google Maps is an example of a tiled image: scroll off to a new area and blocks appear. In a local app, you could take your image that is 10 times the size of the screen, cut it into tiles that are, say, 100px by 100px, and each screen will only load the tiles that are visible. When the user moves, only the needed tiles are loaded. This saves memory and dramatically improves speed. It is the same reason tables can fly: only the cells on screen are loaded, and as a cell scrolls off the screen its memory is reused for the next cell. A minimal sketch of the tile bookkeeping follows below.
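Something like this, assuming 100x100 tiles, a visibleRect expressed in background coordinates, and a hypothetical loadTileAtRow:column: loader you'd write yourself:
CGFloat tileSize = 100.0f;
int firstCol = (int)floorf(CGRectGetMinX(visibleRect) / tileSize);
int lastCol  = (int)floorf((CGRectGetMaxX(visibleRect) - 1) / tileSize);
int firstRow = (int)floorf(CGRectGetMinY(visibleRect) / tileSize);
int lastRow  = (int)floorf((CGRectGetMaxY(visibleRect) - 1) / tileSize);
for (int row = firstRow; row <= lastRow; row++) {
    for (int col = firstCol; col <= lastCol; col++) {
        [self loadTileAtRow:row column:col]; // hypothetical loader; also unload tiles outside this range
    }
}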
If option 2 is sufficiently performant for your needs I would stick with that - it's as easy a system as you'll get on the iPhone and fine for very simple graphics. A related option that might buy you a little bit of speed is using CALayers to implement the graphics. CALayers are almost as easy to use, but are a bit more lightweight than UIViews (in some ways you can think of UIViews as just wrappers for CALayers with additional overhead for managing things like touch events, etc.)
If you're interested I would read the Core Animation Programming Guide (I would provide a link but I think my reputation is too low, but Google should track it down for you). Core Animation is a big subject and can be pretty daunting but if you just use layers (i.e. not the animation parts of it) it's not so bad. Here's a quick example to give you a sense of what using layers looks like:
// NOTE: I haven't compiled this code so it may have typos/errors I haven't noticed
UIView* canvasView; // the view that will be the "canvas" for your game
... // initialize the canvas, etc.
CALayer* imageLayer = [CALayer layer];
UIImage* image = [UIImage imageNamed:@"MyImage.png"];
imageLayer.contents = (id)image.CGImage;
imageLayer.bounds = CGRectMake(0, 0, image.size.width, image.size.height);
imageLayer.position = CGPointMake(100, 100); // NOTE: unlike UIViews, a layer's position is measured at its anchor point, which defaults to the center
[canvasView.layer addSublayer:imageLayer];
So basically it looks a lot like working with views but with some added performance (and occasional headache).
P.S. - One thing to keep in mind is that if you change a layer property that is animatable (e.g. position, opacity, etc.), Core Animation will implicitly animate it. For example, if you write imageLayer.position = somePoint; the layer animates to that position rather than having its position set immediately. There are easy ways to work around that; one is sketched below.
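One common way to suppress the implicit animation is to wrap the change in a CATransaction with actions disabled:
[CATransaction begin];
[CATransaction setDisableActions:YES]; // no implicit animations inside this transaction
imageLayer.position = somePoint;
[CATransaction commit];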

Fastest iPhone Blit Routine?

I have a UIView subclass onto which I need to blit a UIImage. There are several ways to skin this cat depending on which series of APIs you prefer to use, and I'm interested in the fastest. Would it be UIImage's drawAtPoint or drawRect? Or perhaps the C-based CoreGraphics routines, or something else? I have no qualms about altering my source image data format if it'll make the blitting that much faster.
To describe my situation my app has anywhere from ~10 to ~200 small UIViews (64x64), a subset of which will need to be redrawn based on user interaction. My current implementation is a call to drawAtPoint inside my UIView subclass' drawRect routine. If you can think of a better way to handle this kind of scenario, I'm all ears (well, eyes).
Using an OpenGL view may be fastest of all. Keep an age cache of images (or if you know a better way to determine when certain images can be removed from the cache, by all means use that) and preload as many images as you can while the app is idle. It should be very quick, with almost no Objective-C calls involved (just -draw)
While not a "blit" at all, given the requirements of the problem (many small images with various state changes) I was able to keep the different states to redraw in their own separate UIImageView instances, and just showed/hid the appropriate instance given the state change.
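A sketch of that show/hide idea, with a hypothetical stateViews array and viewForState: lookup:
// One prebuilt UIImageView per state; a state change just flips
// hidden flags instead of triggering any redrawing.
UIImageView *activeView = [self viewForState:currentState]; // hypothetical lookup
for (UIImageView *stateView in stateViews) {
    stateView.hidden = (stateView != activeView);
}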
Since CALayer is lightweight and fast, I would give it a try.
Thierry
The fastest blit implementation you are going to find is in my AVAnimator library; it contains an ARM asm implementation of a blit for a CoreGraphics buffer. Have a look at the source. The way you could make use of it would be to create a single graphics context the size of the whole screen, blit your specific image changes into that single graphics context, then create a UIImage from it and set it as the image of a UIImageView. That would involve one GPU upload per refresh, so it will not depend on how many images you render into the buffer.
But you will likely not need to go that low-level. You should first try making each 64x64 image into a CALayer and then updating each layer with the contents of an image that is the exact size of the layer, 64x64. The only tricky thing is that you will want to decompress each of your original images if they come from PNG or JPEG files. You do that by creating another pixel buffer and rendering the original image into it; that way all the PNG or JPEG decompression is done before you start setting CALayer contents.
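That pre-decompression step can look roughly like this, assuming original is a freshly loaded UIImage and imageLayer is one of the 64x64 layers:
// Drawing into a bitmap context forces the PNG/JPEG decode now,
// instead of when the layer contents are first composited.
UIGraphicsBeginImageContextWithOptions(original.size, NO, original.scale);
[original drawAtPoint:CGPointZero];
UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageLayer.contents = (id)decoded.CGImage;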

Image strategy in iPhone app

I'm writing a card game for the iPhone, and I'm not sure about the best strategy for displaying the cards. I have a basic prototype that creates a UIImageView that can be dragged for each card with a dummy image. I wanted to use one large UIImage that contains the faces of all of the cards, and then have each draggable UIImageView display a part of that image. I must be misunderstanding what setBounds is for - I thought that controlled which part of the underlying image is displayed. So, two questions:
Is this the right approach?
How do I display just a part of the image?
Depending on your resolution, this might not be the best approach.
From Apple:
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
Now, you are talking about breaking it up into several smaller pieces, but given UIImage's caching, I am not sure what happens to memory every time you access the image and copy a sub-rect out of it. I think the approach I would take is to have an array of images instead of one big one. If you do want to pull a sub-rectangle out of one large sheet, see the crop sketch below.
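The usual way to display just part of an image is to crop a CGImage sub-rectangle; a minimal sketch, with an illustrative 72x96 card rect and hypothetical cardSheet/cardView names:
CGImageRef cardRef = CGImageCreateWithImageInRect(cardSheet.CGImage,
                                                  CGRectMake(0, 0, 72, 96)); // rect in pixels of the underlying CGImage
UIImage *cardImage = [UIImage imageWithCGImage:cardRef];
CGImageRelease(cardRef); // Create rule: we own cardRef and must release it
cardView.image = cardImage;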