What's the best design for an image-based iPhone app?

I would appreciate some advice on an iPhone game design. I want to display a background image with other images on top of it (buildings, characters, etc.). The background is going to be large (up to 10 times the size of the screen), so only a piece of the background file will be displayed at once. The idea is to replace this piece when the character gets close to the screen borders. I need to make this background transition a smooth animation. I also need a zoom in/out feature, preferably animated. Some images on the screen will be static (buildings) and some will require animation (a character walking).
What is the best design:
1) Use Core Graphics combined with "sprite" classes - displaying each sprite's UIImage with CGContextDrawImage
2) Use UIKit - create a UIImageView to hold every image and add them as subviews in a single-view application
3) Use an OpenGL ES project
Option 1) turned out to be very slow. It seems like Core Graphics is not meant to display images in a game loop. But maybe there is a way to make it efficient? Maybe combine it with Core Animation somehow?
Option 2) is my current choice. I am hoping the view will cache the image it holds and thus be more efficient than Core Graphics. But will the animation provided by UIImageView be satisfactory? I think the views shouldn't be added all at once, but rather created and added (and removed?) dynamically as the background moves. Is that a good idea?
Option 3) would probably give the best control over the images, but it seems like a lot of overhead. I only need to display images, not vector graphics. Plus, I'm new to Mac programming and I don't want to get stuck in some complex technology.
I appreciate any advice, thanks :)

I highly recommend Cocos2D; I've done my own development with it and written it up on my blog. It was really easy to do. I followed Ray Wenderlich's tutorials, and he provides great material for doing everything you describe.

You asked: "The background is going to be large (up to 10 times the size of the screen) so only a piece of the background file will be displayed at once."
A tiled image system is very powerful and performs very well. Google Maps is an example of a tiled image: scroll off to a new area and blocks of the map appear. In a local app you could take your image that is 10 times the size of the screen, cut it into tiles that are, say, 100px by 100px, and each screen will only load the tiles that are actually displayed. When the user moves, only the newly needed tiles are loaded. This saves memory and dramatically improves speed. It is the same reason table views can fly: only the cells on screen are loaded, and as a cell scrolls off the screen its memory is reused for the next cell.
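Here is a minimal sketch of that tile-loading idea using one UIImageView per tile; the tile size, the file-naming scheme, backgroundView, and the tileLoadedAtRow:column: / markTileLoadedAtRow:column: bookkeeping helpers are all illustrative assumptions, not part of any framework:
static const CGFloat kTileSize = 100.0; // illustrative tile size

- (void)loadTilesForVisibleRect:(CGRect)visibleRect
{
    NSInteger firstCol = (NSInteger)floorf(CGRectGetMinX(visibleRect) / kTileSize);
    NSInteger lastCol  = (NSInteger)floorf(CGRectGetMaxX(visibleRect) / kTileSize);
    NSInteger firstRow = (NSInteger)floorf(CGRectGetMinY(visibleRect) / kTileSize);
    NSInteger lastRow  = (NSInteger)floorf(CGRectGetMaxY(visibleRect) / kTileSize);

    for (NSInteger row = firstRow; row <= lastRow; row++) {
        for (NSInteger col = firstCol; col <= lastCol; col++) {
            if ([self tileLoadedAtRow:row column:col]) continue; // hypothetical bookkeeping
            NSString *name = [NSString stringWithFormat:@"bg_%ld_%ld.png", (long)row, (long)col];
            UIImageView *tileView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:name]];
            tileView.frame = CGRectMake(col * kTileSize, row * kTileSize, kTileSize, kTileSize);
            [self.backgroundView addSubview:tileView]; // release/autorelease tileView here if not using ARC
            [self markTileLoadedAtRow:row column:col];
        }
    }
    // Tiles that no longer intersect visibleRect can be removed here to reclaim memory.
}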

If option 2 is sufficiently performant for your needs I would stick with that - it's as easy a system as you'll get on the iPhone and fine for very simple graphics. A related option that might buy you a little bit of speed is using CALayers to implement the graphics. CALayers are almost as easy to use, but are a bit more lightweight than UIViews (in some ways you can think of UIViews as just wrappers for CALayers with additional overhead for managing things like touch events, etc.)
If you're interested, I would read the Core Animation Programming Guide (I would provide a link, but I think my reputation is too low; Google should track it down for you). Core Animation is a big subject and can be pretty daunting, but if you just use layers (i.e. not the animation parts of it) it's not so bad. Here's a quick example to give you a sense of what using layers looks like:
// NOTE: I haven't compiled this code so it may have typos/errors I haven't noticed
UIView* canvasView; // the view that will be the "canvas" for your game
... // initialize the canvas, etc.
CALayer* imageLayer = [CALayer layer];
UIImage* image = [UIImage imageNamed:@"MyImage.png"];
imageLayer.contents = (id)image.CGImage;
imageLayer.bounds = CGRectMake(0, 0, image.size.width, image.size.height);
imageLayer.position = CGPointMake(100, 100); // NOTE: Unlike a UIView's frame origin, a CALayer's position is measured at its anchor point, which defaults to the layer's center
[canvasView.layer addSublayer:imageLayer];
So basically it looks a lot like working with views but with some added performance (and occasional headache).
P.S. - One thing to keep in mind is that if you change a layer property that is animatable (e.g. position, opacity, etc.), Core Animation will implicitly animate it (e.g. if you write imageLayer.position = somePoint; the layer animates to that position rather than having its position set immediately). There are easy ways to work around that, but that's a topic for another question/answer.
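For what it's worth, one standard workaround is to wrap the change in a CATransaction with actions disabled (a minimal sketch, continuing the imageLayer example above):
[CATransaction begin];
[CATransaction setDisableActions:YES]; // suppress implicit animations for changes made in this transaction
imageLayer.position = CGPointMake(200, 300); // takes effect immediately, no animation
[CATransaction commit];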

Related

What is the exact performance cost when mixing OpenGL with UIKit in iPhone?

I need to make a design decision about how to approach an app which needs to render a few 3D objects on top of an image texture.
The result needs to be rendered and saved as a UIImage.
The graphic designer (client) expects standard UIKit controls to manipulate the 3D world.
The OpenGL view (a CAEAGLLayer-backed view) needs to be inside a UIScrollView for a cheap and natural scroll and zoom implementation.
There are just a few extra controls to manipulate rotation, scale, and translation of the few 3D objects.
There are not many triangles expected (100-200 at most) and 2-3 textures (1 mask). It does not even have to be refreshed constantly, just when a transformation or the zoom level changes.
CAEAGLLayer does not need to be opaque.
I would go with a Core Animation solution, but rendering 3D-transformed CALayers to a CGContextRef is not supported (neither is masking).
What is the real performance cost when putting a CAEAGLLayer inside a UIScrollView and mixing it with a few UIKit views?
How many triangles per second can I expect to be rendered at a smooth frame rate (30fps will do), so I can make the best decision possible?
I know there are similar questions out there already, but none of the answers provides specific numbers, which could help with estimating expected rendering results.
Per Allan Schaffer, who gives the OpenGL talks at WWDC and on the WWDC world tour, OpenGL itself is not so much a special case; it's that anything that changes will cause everything on top of it to be recomposited. I spoke to him specifically about an app with a live-updating video view underneath a live-updating OpenGL view, and he said that sort of thing is the pathological worst case. It's generally not that expensive to put a few fairly static views on top, such as using a UILabel to display the current score over an OpenGL game.
In your case, I think you can probably largely avoid the problem. Don't put the GL view inside the scroll view, but rather make the scroll view non-opaque and put the GL view behind it. Catch scrollViewDidScroll: (and the corresponding zoom messages) and make related OpenGL adjustments. I can speak from experience and say I've done exactly that on an iPad 1 with no performance issues. Particularly for the sort of model you're talking about I don't imagine a problem.
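A rough sketch of that arrangement, assuming a transparent, otherwise empty scroll view sitting in front of the GL view; the glView methods here are placeholders for whatever camera/zoom adjustments your own renderer exposes:
// UIScrollViewDelegate callbacks that forward scroll/zoom state to the GL view behind the scroll view
- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    [self.glView setPanOffset:scrollView.contentOffset]; // hypothetical method on your GL view
    [self.glView setNeedsRender];                        // hypothetical: redraw once per change
}

- (void)scrollViewDidZoom:(UIScrollView *)scrollView
{
    [self.glView setZoomScale:scrollView.zoomScale];     // hypothetical method on your GL view
    [self.glView setNeedsRender];
}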

UIImageView v/s Drawing from code

I'm developing an app that must show one of three shapes (UIImageViews) depending on which button the user taps. I'm achieving this by creating different UIImageViews with three UIImages. I was wondering, is it more efficient if I draw the shapes directly in code?
BTW, the images have transparency and are 342 px x 388 px. Thanks!
It depends on the shapes. I suggest using PNG image files unless you are having particular speed issues. This lets the designers/artists customize the look easily rather than making the programmers do it in code, which is more tedious to modify.
We have some nice gradients and shadows in our application and drawing them using Quartz was pretty slow -- fine for a single screen, but too slow when animated or scrolled.

iPhone performance with Bitmaps

I'm pretty new to iPhone / Objective-C.
I have an application that has 15-100 small images (16x16 or 8x8 PNGs) on the screen. For this example's sake, let's assume that I could create these images using CGContext if I needed to.
I would have to assume that the iPhone would perform better using that method rather than loading images (PNGs). However, the bitmap version is easier to develop and also has other advantages (like built-in touch events) that I need.
If performance is not the ultimate metric for this application, does placing 100 small images degrade performance/memory enough to even consider switching to the CGContext method? My instinct tells me that I will not see much of a performance difference either way, but I am too new to iPhone development to know enough about it to be sure.
I suppose it depends on the complexity of your image generation algorithm.
It will also depend on your application: will you be drawing these images many times per second, like in an animation? If that's the case, use UIImageViews.
I think using 100 or so UIImageViews should be fine as long as you don't need to rapidly animate them or update them at the same time. You should avoid doing anything that would change the size of the views (like resizing the view that contains them all), and if you use Core Animation to animate them, perform all of the animations inside a single animation block. (Wrap everything with one [UIView beginAnimations:context:], [UIView commitAnimations] - not one for each view)
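A minimal sketch of that single-block idea (iconViews here is just an assumed array holding your image views):
[UIView beginAnimations:nil context:NULL];
[UIView setAnimationDuration:0.25];
for (UIImageView *icon in iconViews) {
    // every change happens inside the one block, so Core Animation batches them together
    icon.center = CGPointMake(icon.center.x, icon.center.y - 20.0);
}
[UIView commitAnimations];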
Good luck!
I'd try the bitmap version first, then CGContext one if bitmap is too slow.
THEN if it's still too slow, I'd put all the icons into a GL texture.

How do I use CALayer with the iPhone?

Currently, I have a UIView subclass that "stamps" a single 2px by 2px CGLayerRef across the screen, up to 160 x 240 times.
I currently animate this by moving the UIView "up" the screen 2 pixels (actually, a UIImageView) and then drawing the next "row".
Would using multiple CALayer layers speed up performance of rendering this animation?
Are there tutorials, sample applications or code snippets for use of CALayer with the iPhone SDK?
The reason I ask is that most of the code snippets I find that demonstrate simple examples of CALayer employ method calls that do not work with the iPhone SDK. I appreciate any advice or pointers.
Okay, well, if you want something with good examples of Core Animation code that draws things like that and works on the phone, I recommend the GeekGameBoard code that Jens Alfke published (it is an improved version of some Apple demo code).
Based on what you are describing, I think you are doing something way more complicated than it needs to be. My impression is that you basically want a static view that you are animating by shifting its position so that it is partially off screen. If you just need to draw some static content in your drawRect:, going through layers is not going to be faster than just calling CGContextFillRect() with your color. After that you could just use implicit animations and UIView's animation support to move the view. I suspect you could even get rid of the custom drawRect: implementation with a patterned UIColor, but I honestly have not benchmarked the difference between the two.
What CALayer methods are you seeing that don't work on the iPhone? Aside from animation features tied to Core Image, I have not noticed much that is missing. The big things you are likely to notice are that all views are layer backed (so you do not need to do anything special to use layers, you can just grab a UIView's layer through the layer accessor method), and that the coordinate system has a top-left origin.
In any event, generally having more things is slower than having fewer things. If you are just repeating the same pattern over and over again you are likely to find the best performance is implementing a custom UIView/CALayer/UIColor that knows how to draw what you want, rather than placing visually identical layers or views next to each other.
Having said that, generally layers are lighter weight than views, so if you have a lot of separate elements that you need to keep logically separated you will find that moving to layers can be a win over using views.
You might want to look at -[UIColor initWithPatternImage:] depending on exactly what you are trying to do. If you are using this two pixel pattern as a background color you could just make a UIColor that draws it and set the background.
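Something like this quick sketch (the view and the image file name are illustrative):
// Build a pattern color from the small image and let UIKit tile it across the whole view.
UIImage *patternImage = [UIImage imageNamed:@"TwoPixelPattern.png"];
myView.backgroundColor = [UIColor colorWithPatternImage:patternImage];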
What CALayer methods are you seeing that don't work on iPhone?
As one example, I tried implementing the grid demo here, without much luck. It looks like CAConstraintLayoutManager and CAConstraint are not available in QuartzCore.h.
In another attempt, I tried a very simple, small 20x20 CALayer object as a sublayer of my UIView's layer property, but that didn't show up.
Right now, I have a custom UIView of which I override the drawRect method. In drawRect I grab a context and render two types of CGLayerRefs:
At "off" cells I draw the background color across the entire 320x480 canvas.
At "on" cells, I either draw a single CGLayerRef across a grid of 320x480 pixels (initialization) or across a 320x2 row (animation).
During animation, I make a UIImageView clip view from 320x478 pixels, and draw a single row. This "pushes" my bitmap up the screen two pixels at a time.
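For reference, the row-stamping step described above might look roughly like this inside drawRect: (onCellLayer and rowY are stand-ins for my own CGLayerRef and row offset):
CGContextRef ctx = UIGraphicsGetCurrentContext();
// stamp the prebuilt layer across one 320x2 row
for (CGFloat x = 0; x < 320; x += 2) {
    CGContextDrawLayerAtPoint(ctx, CGPointMake(x, rowY), onCellLayer);
}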
Basically, I'd like to test whether or not using CALayer will accomplish two things:
Make my rendering faster, if CALayer has less overhead than what I'm doing now
Make my animation smoother, by letting me transition a layer up the screen smoothly
Unfortunately, I can't seem to get a basic CALayer working at the moment, and haven't found a good chunk of sample code to look at and play with.

What's the best way to create a "magnifying glass" on a 2D scene?

I'm working on a game where I need to let the player look at a plane (e.g., a wall) through a lens (e.g., a magnifying glass). The game is to run on the iPhone, so my choices are Core Animation or OpenGL ES.
My first idea (that I have not yet tried) is to do this using Core Animation.
Create the wall and objects on it using CALayers.
Use CALayer's renderInContext: method to create an image of the wall as a background layer.
Crop the image to the lens shape, scale it up, then draw it over the background.
Draw the lens frame and "shiny glass" layer on top of all that.
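Just to make step 2 concrete, rendering the wall layer hierarchy into an image could look something like this (wallLayer stands in for the layer tree built in step 1):
// Snapshot the wall layer hierarchy into a UIImage.
UIGraphicsBeginImageContext(wallLayer.bounds.size);
[wallLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *wallSnapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();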
Notes:
I am a lot more familiar with Core Animation than OpenGL, so maybe there is a much better way to do this with OpenGL. (Please tell me!)
If I am using CALayers that are not attached to a view, do I have to manage all animations myself? Or is there a straightforward way to run them manually?
3D perspective is not important; I'm just magnifying a flat wall.
I'm concerned that doing all of the above will be too slow for smooth animation.
Before I commit a lot of code to writing this, my question is do you see any pitfalls in the plan above or can you recommend an easier way to do this?
I have implemented a magnifying glass on the iPhone using a UIView. CA was way too slow.
You can draw a CGImage into a UIView using its drawRect: method. Here are the steps in my drawRect::
get the current context
create a path for clipping the view (circle)
scale the current transformation matrix (CTM)
move the current transformation matrix
draw the CGImage
You can have the CGImage prerendered, then it's in the graphics memory.
If you want something dynamic, draw it from scratch instead of drawing a CGImage.
Very fast, looks great.
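A condensed sketch of those drawRect: steps; magnification, focusPoint, sourceImage, and imageSize are hypothetical properties on the lens view, not anything from the answer above:
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();   // 1. get the current context

    CGContextAddEllipseInRect(ctx, self.bounds);        // 2. clip to the circular lens shape
    CGContextClip(ctx);

    CGContextScaleCTM(ctx, self.magnification, self.magnification);     // 3. scale the CTM
    CGContextTranslateCTM(ctx, -self.focusPoint.x, -self.focusPoint.y); // 4. move the CTM to the focus point

    // 5. draw the prerendered CGImage (flip first: CGContextDrawImage uses a bottom-left origin)
    CGContextTranslateCTM(ctx, 0, self.imageSize.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    CGContextDrawImage(ctx, CGRectMake(0, 0, self.imageSize.width, self.imageSize.height), self.sourceImage);
}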
That is how I'd do it, it sounds like a good plan.
Whether you choose OGL or CA the basic principle is the same so I would stick with what you're more comfortable with.
Identify the region you wish to magnify
Render this region to a separate surface
Render any border/overlay on top of the surface
Render your surface enlarged onto the main scene, clipping appropriately.
In terms of performance you will have to try it and see (just make sure you test on actual hardware, because the simulator is far faster than the hardware). If it IS too slow, then you can look at doing steps 2/3 less frequently, e.g. every 2-3 frames. This will give some magnification lag, but it may be perfectly acceptable.
I suspect that performance between OGL and CA will be roughly equivalent. CA is built on top of the OGL libraries, but your cost is going to be the actual rendering, not the time spent in the layers.