Is there any performance loss when repeatedly calling UIGraphicsGetCurrentContext? (iPhone)

I'm building a UIView with a custom drawRect function. This is a fairly complex view, with a number of different items that need to be drawn. I've basically broken it down into one function per item that needs to be drawn.
What I'm wondering is should I pass my CGContextRef, obtained from UIGraphicsGetCurrentContext(), as a parameter to each function, or can I just call it at the start of each function? The latter option looks neater to me, but I am wondering if there is much of a performance penalty?
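For concreteness, here's a minimal sketch of the two options (the helper names are just placeholders):

    // Option 1: fetch the context once in drawRect: and pass it to each helper.
    - (void)drawRect:(CGRect)rect {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        [self drawBackgroundInContext:ctx];
        [self drawBorderInContext:ctx];
    }

    // Option 2: each helper fetches the current context itself.
    - (void)drawBackground {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetFillColorWithColor(ctx, [[UIColor whiteColor] CGColor]);
        CGContextFillRect(ctx, self.bounds);
    }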

It's the same, unless you are saving/restoring the context all around. Either way, getting the context from that method will most probably never be the bottleneck.
I suggest that if you are not saving and restoring state, you can just call UIGraphicsGetCurrentContext() in each function. However, if you are indeed saving state, you should pass the context as a parameter, since it makes your code easier to read.
It's a matter of style I guess...
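For example, if your per-item helpers do save and restore state, passing the context in keeps that explicit (a rough sketch; the helper name is made up):

    - (void)drawBorderInContext:(CGContextRef)ctx {
        CGContextSaveGState(ctx);      // don't leak line width / color changes to the caller
        CGContextSetLineWidth(ctx, 2.0f);
        CGContextSetStrokeColorWithColor(ctx, [[UIColor blackColor] CGColor]);
        CGContextStrokeRect(ctx, CGRectInset(self.bounds, 1.0f, 1.0f));
        CGContextRestoreGState(ctx);   // put the state back the way we found it
    }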

Pier-Olivier's response is good, and just grazes the key issue: don't worry about it until you have to. This is a case of premature optimization. Before spending a lot of time deciding whether to pass around your CGContextRef, you should write your application and then look at the performance. Using Instruments can help you figure out where your real bottlenecks are. If it turns out this is causing problems (which I highly doubt), then you can optimize it.

Just profile after it's implemented correctly and well tested.
If it really shows up as a hotspot, your drawing is probably best divided up and/or rendered to an offscreen context, or done with lower-level rendering.

Related

XIB files vs. defining layout in code on iPhone

Aside from the WYSIWYG editor, what are the advantages of using XIB/NIB files over defining the layout in code in iPhone/iPad/iOS?
While I don't find XIB files very useful, many iOS developers do, which makes me suspect I might not know their benefits or how to use them properly.
Easier maintenance. More often than not, clients require last-minute changes like changing the logo, changing colors, or realigning something. It's much easier to change it in a xib file and see/show the results immediately.
Decoupling. It forces you to write nicely decoupled code right from the outset, which again means easier maintenance.
Defining things in Interface Builder makes them much easier to adjust later. Also, doing interface elements in code can lead to a lot of code bloat, for setting things like exact placement, font, color, etc.
The main advantage using code directly gives you is speed. But it's usually better to start in IB and then see what might need speeding up.
Table cells are one of the main areas where you might consider drawing elements in code for speed.
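To make the code-bloat point concrete, even a single label done purely in code takes a handful of lines that a XIB would carry for you (a hypothetical snippet, assuming manual reference counting):

    UILabel *title = [[UILabel alloc] initWithFrame:CGRectMake(20.0f, 12.0f, 280.0f, 21.0f)];
    title.font = [UIFont boldSystemFontOfSize:17.0f];
    title.textColor = [UIColor darkGrayColor];
    title.backgroundColor = [UIColor clearColor];
    title.text = @"Settings";            // placeholder text for the example
    [self.view addSubview:title];
    [title release];                     // manual retain/release assumed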
Personally, I use XIBs so I can see graphically what I'm doing, rather than run the app, check the look, tweak a color, run again, check again... It lets me arrive at a better design. If you work with a designer who gives you Photoshop mock-ups, it will be less useful for you.
Second thing: once you really get the hang of XIBs, it's much faster than doing it in code (though that takes time and practice, for table views and table view cells for example).

OpenGL performance on iPhone: glAlphaFuncx on the trace

This is kind of weird, but I noticed that up to 40 percent of the rendering time is spent inside glAlphaFuncx. I know that alpha testing is very expensive, but the interesting thing is that I don't use it :) No single place in the code uses alpha testing, nor do I invoke this function in any other way.
I also checked the GL layer for blending and other sorts of settings that might cause this to happen, but it is what it is.
So, if anybody knows what might cause glAlphaFuncx to appear on the performance trace of CPU Sampler, I would be glad to hear it :)
Update: fixed the screenshot link: http://twitpic.com/2afxho/full
Update 2: the function that leads to the invocation of glAlphaFuncx contains a single line:
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
Update 3: I tried setting a breakpoint inside this function, but it seems it isn't being invoked at all. I guess the profiler is screwed up here...
It's weird that this function appears on a profiler trace, as you say you aren't using it. Try setting a breakpoint in glAlphaFuncx to see from where it is being called.
But anyway, that should not be a problem; glAlphaFunc just sets a state on the GL server side, it doesn't (or shouldn't) do any more processing than that. It shouldn't be a performance problem; maybe it's a bug in the GL implementation or in the profiler.
To be sure, you can disable alpha test with glDisable(GL_ALPHA_TEST).
From what I can see, glAlphaFuncx could just be taking the hit for setting up the rendering or pushing the pixels. It could be that it is run either first or last in the rendering.
Do you have an actual performance problem, or are you just trying to find pieces of code to slice off / optimize?
If so, you should set a breakpoint in glAlphaFuncx and see where it is called from and why. To do this, just bring up the debugger console and type "break glAlphaFuncx".
Have you tried explicitly disabling the use of alpha channels?
glDisable(GL_ALPHA_TEST);
http://www.khronos.org/opengles/documentation/opengles1_0/html/glEnable.html
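For example, you can both force it off in your one-time GL setup and sanity-check the state (a small sketch for OpenGL ES 1.x):

    // In whatever method does your one-time GL state setup:
    glDisable(GL_ALPHA_TEST);                          // make sure alpha testing is explicitly off
    GLboolean alphaTestOn = glIsEnabled(GL_ALPHA_TEST);
    NSLog(@"alpha test enabled: %d", alphaTestOn);     // sanity check: should log 0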
Regardless of system, this sort of behaviour -- time spent presenting what's been drawn -- almost always indicates that the GPU is the bottleneck. Either you are drawing too much (if the framerate is a problem), or the CPU isn't doing enough (if the framerate is fine).
Actually, there's one other possibility -- that the amount of GPU work is fine, but the system is waiting for some kind of vertical retrace period. (That seems unlikely on a device that only ever has an LCD, and doesn't support a raster scan display, but maybe things still notionally work that way internally.) The upshot is still the same as far as the amount of CPU work goes, though, in that you've got time to do more stuff without affecting the frame rate.
I can't explain exactly why glAlphaFuncx specifically is appearing in the call stack, but if it doesn't appear ever to be actually getting called then I'd consider it a red herring until proven otherwise.

How much does CGAffineTransformMakeRotation() cost?

Is that a very cost-intensive function that sucks my performance away under my feet? I guess they used that one for the wiggling buttons on the home screen plus Core Animation. I just want to know before I start wasting my time ;)
Seems unlikely that it'd be much of a performance problem - it works out to something like a Cosine, a Sine, and a few multiplications. Don't call it thousands of times a second, and you'll be fine.
Very (very) little. This is also something you can easily measure yourself; see the following URLs for examples/information on implementing high-resolution timing:
http://developer.apple.com/qa/qa2004/qa1398.html
http://code.google.com/p/plinstrument/source/browse/trunk/Source/PLInstrumentTime.h
http://code.google.com/p/plinstrument/source/browse/trunk/Source/PLInstrumentTime.m
http://code.google.com/p/plinstrument/
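For a rough, self-contained measurement along those lines, something like this (using mach_absolute_time; the iteration count is arbitrary):

    #include <mach/mach_time.h>

    mach_timebase_info_data_t info;
    mach_timebase_info(&info);                     // for converting ticks to nanoseconds

    const int kIterations = 1000000;
    uint64_t start = mach_absolute_time();
    for (int i = 0; i < kIterations; i++) {
        CGAffineTransform t = CGAffineTransformMakeRotation(0.001f * i);
        (void)t;                                   // keep the result from being flagged as unused
    }
    uint64_t elapsed = mach_absolute_time() - start;
    double nanosPerCall = (double)elapsed * info.numer / info.denom / kIterations;
    NSLog(@"~%.1f ns per CGAffineTransformMakeRotation call", nanosPerCall);

Treat the number as a ballpark; the compiler may optimize part of the loop away.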
Like with everything, it depends on how and how much you use it. It's used all the time in game development, which the iPhone is very well suited for. Core Animation is also very fast. If you're worried about it, my suggestion is to take one of the Apple-provided sample apps and run it through Instruments to perform some of your own benchmarks and see if they are acceptable for your needs.
All CGAffineTransformMakeRotation() does is fill in the matrix for a regular old CGAffineTransform. If you think you can fill (or pre-fill) the matrix faster yourself, then go for it (I'd be really surprised if this wasn't super-optimized already).
Then when it comes time to do the actual work of applying the transform, I'm pretty sure the GPU is told to take care of it so it'll run fast and your main CPU shouldn't take too much of a hit.
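For reference, the documentation describes the result as the standard 2D rotation matrix, so these two should be equivalent (a quick sketch):

    CGFloat angle = M_PI_4;  // 45 degrees, just as an example
    CGAffineTransform a = CGAffineTransformMakeRotation(angle);
    // Per the docs, the rotation matrix is [ cos(angle)  sin(angle)  -sin(angle)  cos(angle)  0  0 ]:
    CGAffineTransform b = CGAffineTransformMake(cos(angle),  sin(angle),
                                                -sin(angle), cos(angle),
                                                0.0f, 0.0f);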
If you're really worried about it, then do the transforms in 2D space on top of OpenGL to make sure it's hardware optimized.
I just want to know before I start wasting my time
Optimise the system, not the individual lines. Who knows how many times your code will call the transform? Better to get the whole system working, profile it, and then optimise the parts that are too slow.
Don't worry about individual calls that people tell you are slow. Use that information as a pointer if your code is slow, but always look at how those calls behave in your code.

Using Core Graphics/ Cocoa, can you draw to a bitmap context from a background thread?

I'm drawing offscreen to a CGContext created using CGBitmapContextCreate, then later generating a CGImage from it with CGBitmapContextCreateImage and drawing that onto my view in drawRect (I'm also drawing some other stuff on top of that - this is an exercise in isolating different levels of variability and complexity).
This all works fine when it's all running on the main thread. However, one of the motivations for splitting it out this way was so that the offscreen part could run on a background thread (which I had thought should be OK since it's not rendering to an onscreen context).
However, when I do this the resulting image is empty! I've checked over the code, and placed judicious NSLog's to verify that everything is happening in the right order.
My next step is to boil this down to the simplest code that reproduces the issue (or find some silly thing I'm missing and fix it) - at which point I'd have some code to post here if necessary. But I first wanted to check here that I'm not going down the wrong path with this. I couldn't find anything in my travels around the googlesphere that sheds light either way - but a friend did mention that he ran into a similar issue while trying to resize images in a background thread - suggesting there may be some general limitation here.
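For reference, the offscreen part is essentially this shape (simplified; the size, color space, and drawing here are just for illustration):

    // Offscreen drawing into a bitmap context (this is the part I run on the background thread).
    size_t width = 320, height = 480;
    void *data = calloc(height, width * 4);                       // RGBA, 8 bits per component
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmapCtx = CGBitmapContextCreate(data, width, height, 8, width * 4,
                                                   colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    CGContextSetRGBFillColor(bitmapCtx, 1.0f, 0.0f, 0.0f, 1.0f);
    CGContextFillRect(bitmapCtx, CGRectMake(0.0f, 0.0f, width, height));

    CGImageRef image = CGBitmapContextCreateImage(bitmapCtx);     // copies the bitmap into a CGImage
    CGContextRelease(bitmapCtx);
    // ...later, in drawRect: on the main thread:
    //     CGContextDrawImage(UIGraphicsGetCurrentContext(), self.bounds, image);
    // (CGImageRelease(image) and free(data) once the image is no longer needed)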
[edit]
Thanks for the responses so far. If nothing else they have told me that at least I'm not alone in not having an answer for this - which was part of what I wanted to find out. At this point I'm going to put the extra work into getting the simplest possible example and may come back with some code or more information. In the meantime keep any ideas coming :-)
One point to bring up: A couple of people have used the term thread safety with respect to APIs. It should be noted that there are two types of thread safety in this context:
Threadability of the API itself - i.e., can it be used at all from more than one thread? (Global state and other re-entrancy issues, such as C's strtok, are common reasons an API might not be thread safe.)
Atomicity of individual operations - can multiple threads interact with the same objects and resources through API without application level locking?
I suspect that mention so far has been of the first type, but would appreciate if you could clarify.
[edit2 - solved!]
Ok, I got it all working. Executive summary is that the problem was with me, rather than bitmap contexts themselves.
In my background thread, just before I drew into the bitmap context, I was doing some preparation on some other objects. It turns out that, indirectly, the calls to those other objects were leading to setNeedsDisplay being called on some views!
By separating the part that did that out to the main thread it now all works perfectly.
So for anyone who hits this question wondering if they can draw to a bitmap context on a background thread, the answer is you can (with the caveats that have been presented here and in the answers).
Thanks all
Just a guess, but if you are trying to call setNeedsDisplay from another thread, you need to call it via performSelectorOnMainThread instead.
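For example (a sketch; myView stands in for whatever view needs redrawing):

    // From the background thread, once the offscreen image is ready:
    [myView performSelectorOnMainThread:@selector(setNeedsDisplay)
                             withObject:nil
                          waitUntilDone:NO];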
What you're doing should work if you're working with the CGContextRef in one and only one thread. I've done this before with 8 cores working on 8 different parts of an image and then compositing the different resultant CGImageRefs together and drawing them onscreen.
Apple don't say anything about thread safety on iPhone, but Cocoa (as opposed to UIKit) is generally thread safe for drawing. As they share a lot of drawing code, I would assume drawing on iPhone is thread safe.
That said, your experience would imply there are problems. Could it be that you are using your image before it is rendered?
Not all APIs are thread-safe. Some require locking or require that they be run on the main thread. You may want to scour the documentation. I believe there is a page that summarizes which parts of the SDK are thread-safe and which aren't.
In case anyone is/was searching for exactly how to do this, I've written a blog post that describes it and wraps the whole thing in an NSOperation subclass.

Should I save strings returned by NSLocalizedString()?

I'm working on an iPhone app that we're localizing in both English and Japanese for our initial release. We frequently call NSLocalizedString() to load the appropriate localized string for display. Is it generally better to save the localized strings in instance variables for the next time we need them, or am I micro-optimizing here and should I just reload the string each time it's needed?
This is one of those "it depends" answers.
Calling NSLocalizedString involves performing a lookup in the bundle. These lookups are pretty fast but not free. Whether to cache this return value or just have the convenience of calling NSLocalizedString will depend on how it's used.
1. If you're passing the return value to the text of something like a UILabel or UITableViewCell, then the lookup will only occur when you first set the property.
2. If you're using it in a drawRect call, then the lookup will only happen when your view needs to be repainted, which could be often, infrequently, or never.
3. If you're using it in a game UI where the screen is redrawn every frame, then for a few UI elements these lookups could be happening hundreds of times each second.
I would say that for something like #3 you should start with caching the results.
For the others, write them in the way that's most convenient and if you have performance issues in your UI use Instruments to narrow down the cause. If it's NSLocalizedString then optimize it accordingly.
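If you do end up caching for a case like #3, the sketch is just fetch-once-and-reuse (the ivar and method names are hypothetical, manual retain/release assumed):

    // Fetch once, e.g. when the view is created, instead of on every frame.
    - (void)awakeFromNib {
        [super awakeFromNib];
        scoreLabelText = [NSLocalizedString(@"Score", @"HUD score label") retain];
    }

    - (void)drawHUD {
        // Reuse the cached string rather than calling NSLocalizedString every frame.
        [scoreLabelText drawAtPoint:CGPointMake(10.0f, 10.0f)
                           withFont:[UIFont systemFontOfSize:14.0f]];
    }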
Micro-optimizing. First make it work, then make it right, then make it fast. And when you get to step 3, run Shark (or Instruments), then follow its guidance.
I suspect you won't take too much of a performance hit. NSLocalizedString(key, comment) is a macro that expands to
[[NSBundle mainBundle] localizedStringForKey:(key) value:@"" table:nil]
Without benchmarking, I have no idea how expensive this is, but I suspect it's not too bad. My feeling is that this won't be a performance bottleneck for you, but you can always run Shark or Instruments and see for yourself when you run your application on the device.