Gradient backgrounds - iPhone

When is it recommended to use a gradient drawn by the iPhone itself, and when is an image the better choice?

This question asks something similar. As with most performance-related questions, you are best served by using Instruments and other tools to determine if this is an area worth spending the time to optimize.
As I state in my answer, I had noticed a significant amount of time being spent during launch in the Quartz functions for drawing a radial gradient in the background on an iPhone 3G. By switching to an image, I was able to noticeably reduce startup time of my application. However, a new image would need to be generated for each larger display size, so for the newer devices (iPad, iPhone 4), I use the Quartz radial gradient once again because of the negligible rendering time on those systems.
For linear gradients, it has been my experience that a CAGradientLayer gives you both good performance and scalability to new resolutions, but again you'll need to test this in your particular application.
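
As an illustration of that last point, a CAGradientLayer-backed view needs only a few lines and scales to any resolution. This is a minimal sketch; the class name and colors are placeholders, not code from the original answer:

```swift
import UIKit

// A linear-gradient background backed by CAGradientLayer, so the gradient
// is drawn by the layer and scales with the view on any screen size.
final class GradientBackgroundView: UIView {

    // Back the view with a CAGradientLayer instead of a plain CALayer.
    override class var layerClass: AnyClass { CAGradientLayer.self }

    override init(frame: CGRect) {
        super.init(frame: frame)
        configureGradient()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        configureGradient()
    }

    private func configureGradient() {
        guard let gradient = layer as? CAGradientLayer else { return }
        gradient.colors = [UIColor.darkGray.cgColor, UIColor.black.cgColor]
        gradient.startPoint = CGPoint(x: 0.5, y: 0.0)   // top
        gradient.endPoint = CGPoint(x: 0.5, y: 1.0)     // bottom
    }
}
```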

That is a pretty vague question; are you thinking of any particular situations? Unless you are doing very intensive processing or heavy graphics (a game), the difference in processing time is probably not going to be noticeable. I haven't measured it, but I would bet the difference in overhead is pretty minimal, so do whichever you think is easier/nicer looking until you find a problem.

Related

Analysis of canvas rendering performance

I'm developing a renderer for an isometric 3D environment made up of blocks (similar to Minecraft).
I'm drawing it in a canvas using its 2D context (and doing some math).
On page load a loop is created that adds some blocks each frame (window.requestAnimationFrame(fn)), but I'm struggling with low FPS when rendering.
This is the first time I've gone this deep into performance analysis, and I'm struggling to understand the Performance view of Chrome DevTools.
Looking at the results:
What I understand is that the frame took 115.9 ms to complete, but looking at the breakdown it seems only ~30 ms was spent on calculations using the canvas API; in the task bar (under Animation Frame Fired) I see a much longer time for the frame to complete.
Is this common behavior? Have I made some dumb mistake that is wasting performance somewhere?
(If it is common behavior, what is happening during that time? Is it the actual drawing?)
I'm stuck wondering whether I should try to improve my drawing algorithm, or look somewhere else to address the bottleneck.
I don't know if you ever got an answer to this, but one thing that jumps out at me is that in your screenshot the green "GPU" bar is nearly solid. As I understand it, this bar indicates that the browser is sending instructions and/or data to the GPU for hardware-accelerated rendering. In my experience this can be a problem if you're using a slow graphics card, depending on what you're trying to do.
The good news is that I would expect testing on a more powerful system to result in an immediate framerate improvement. The bad news is, I'm not sure how to tell exactly which canvas operations put that much load on your (bad) GPU or how to optimize to reduce GPU traffic.

Fast rotation of an image (bitmap) on an iPhone for an arbitrary degree

I need to rotate a full-size photo (about 8 MB) as fast as possible on an iPhone (4S and up), by an arbitrary angle. The code to do so with CoreImage is easy enough, but not fast: it takes about 1.5 seconds on a 4S. Please note that the purpose of this rotation is further image processing in memory, NOT display on the screen.
Is there any hope of getting this down to sub-second using, perhaps, the DSP (via the Accelerate framework) or OpenGL (keeping in mind that we have to copy the bits in and out of whatever buffer we're using)? If this is hopeless then we have other (but more complicated) ways to tackle the job. I have not written OpenGL code before and want some assurance that this will actually work before I spend significant time on it!
Thank you,
Ken
Since you have it running at 1.5s with no hardware acceleration I'd say it's safe to assume you can get it under a second with OpenGL.
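
For what it's worth, here is a rough sketch of the Accelerate (vImage) route the question mentions. The premultiplied-ARGB8888 format, the transparent fill color, and keeping the destination the same size as the source (so rotated corners get clipped) are all assumptions for illustration, not a tested solution:

```swift
import UIKit
import Accelerate

// Rotate a bitmap by an arbitrary angle (radians) using vImage.
func rotatedImage(from cgImage: CGImage, byRadians angle: Float) -> CGImage? {
    // Describe the pixel format we want to work in (assumed ARGB8888).
    var format = vImage_CGImageFormat(
        bitsPerComponent: 8,
        bitsPerPixel: 32,
        colorSpace: nil,   // nil lets vImage pick a default color space
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue),
        version: 0,
        decode: nil,
        renderingIntent: .defaultIntent)

    // Source buffer decoded straight from the CGImage.
    var source = vImage_Buffer()
    guard vImageBuffer_InitWithCGImage(&source, &format, nil, cgImage,
                                       vImage_Flags(kvImageNoFlags)) == kvImageNoError else { return nil }
    defer { free(source.data) }

    // Destination buffer the same size as the source; a real implementation
    // would enlarge it so the rotated image isn't clipped.
    var destination = vImage_Buffer()
    guard vImageBuffer_Init(&destination, source.height, source.width, 32,
                            vImage_Flags(kvImageNoFlags)) == kvImageNoError else { return nil }
    defer { free(destination.data) }

    // Transparent fill for the corners uncovered by the rotation.
    let background: [Pixel_8] = [0, 0, 0, 0]
    guard vImageRotate_ARGB8888(&source, &destination, nil, angle, background,
                                vImage_Flags(kvImageBackgroundColorFill)) == kvImageNoError else { return nil }

    return vImageCreateCGImageFromBuffer(&destination, &format, nil, nil,
                                         vImage_Flags(kvImageNoFlags), nil)?.takeRetainedValue()
}
```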

Drawing using drawRect: heavy for the performance?

I'm writing a mathematical app where the user can draw several mathematical figures like circles, squares, lines, etc... I'm drawing directly to the screen using the current graphics context, Quartz 2D, UIView and drawRect: method.
I'm not sure exactly what I'm asking, but is drawing this way, with drawRect: called every time, heavy on performance (and iPhone battery)? Thanks a lot.
You will need to profile your app's execution under heavy conditions using Instruments in order to answer your question. It may be heavy, or it may be fine. The complexity can vary greatly for a number of reasons. If the interface is visibly lagging/slow, that may indicate your drawing is taking too much time. If you suspect it will be an issue due to complexity, sample often in order to spot and correct issues as they are introduced.
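
One mitigation worth profiling is to invalidate only the region that changed rather than the whole view. A minimal sketch, with a hypothetical Shape type standing in for the app's figures:

```swift
import UIKit

// A stand-in model for the figures the user draws.
struct Shape {
    var path: UIBezierPath
    var color: UIColor
}

final class FiguresView: UIView {
    private var shapes: [Shape] = []

    func add(_ shape: Shape) {
        shapes.append(shape)
        // Redraw only the area covered by the new figure instead of the
        // whole view, which keeps each draw pass as cheap as possible.
        setNeedsDisplay(shape.path.bounds.insetBy(dx: -2, dy: -2))
    }

    override func draw(_ rect: CGRect) {
        // Stroke only the shapes that intersect the dirty rect.
        for shape in shapes where shape.path.bounds.intersects(rect) {
            shape.color.setStroke()
            shape.path.stroke()
        }
    }
}
```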

How to improve edge detection in iPhone apps?

I'm currently developing an iPhone app that uses edge detection. I took some sample pictures and noticed that they came out pretty dark indoors. Flash is obviously an option, but it usually blinds the camera and misses some edges.
Update: I'm more interested in iPhone tips, if there is a way to get better pictures.
Have you tried playing with contrast and/or brightness? If you increase contrast before doing the edge detection, you should get better results (although it depends on the edge detection algorithm you're using and whether it auto-magically fixes contrast first).
Histogram equalisation may prove useful here, as it should allow you to maintain approximately equal contrast levels between pictures. I'm sure there's an algorithm implemented in OpenCV to handle it (although I've never used it on iOS, so I can't be sure).
UPDATE: I found this page on performing Histogram Equalization in OpenCV
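
If you want to stay within the iOS frameworks rather than pulling in OpenCV, the contrast-boost idea can be sketched with Core Image along these lines; the contrast value and the use of the built-in CIEdges filter are assumptions for illustration, not a recommendation from the answer above:

```swift
import UIKit
import CoreImage

// Boost contrast, then run edge detection, returning the edge map.
func edgeMap(from image: UIImage, context: CIContext = CIContext()) -> UIImage? {
    guard let input = CIImage(image: image) else { return nil }

    // Raise contrast first so edges in dark, indoor shots stand out more.
    let contrasted = input.applyingFilter("CIColorControls",
                                          parameters: [kCIInputContrastKey: 1.4])

    // Run the built-in edge-detection filter on the adjusted image.
    let edges = contrasted.applyingFilter("CIEdges",
                                          parameters: [kCIInputIntensityKey: 2.0])

    guard let cgImage = context.createCGImage(edges, from: edges.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```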

Performance-wise: A lot of small PNGs or one large PNG?

Developing a simple game for the iPhone, which gives better performance?
Using a lot of small (10x10 to 30x30 pixels) PNGs for my UIViews' backgrounds.
Using one large PNG and clipping to the bounds of my UIViews.
My thought is that the first technique requires less memory per individual UIView, but complicates how the iPhone handles the large number of images, as it tries to combine them into a larger texture or switches between all the small textures a lot.
The second technique, on the other hand, gives the iPhone the opportunity to handle just one large PNG, but unnecessarily increases the image weight every UIView has to carry.
Am I right about the iPhone's attempts, handling the images the way I described it?
So, what is the way to go?
Seeing the answers thus far, there is still doubt. There seems to be a trade-off between two parameters: complexity and CPU-intensive coding. What would be my tipping point for deciding which technique to use?
If you end up referring back to the same CGImageRef (for example by sharing a UIImage *), the image won't be loaded multiple times by the different views. This is the technique used by the videowall Core Animation demo at the WWDC 07 keynote. That's OSX code, but UIViews are very similar to CALayers.
The way Core Graphics handles images (from my observation anyway) is heavily tweaked for just-in-time loading and aggressive releasing when memory is tight.
Using a large image you could end up loading the image at draw time if the memory for the decoded image that CGImageRef points to has been reclaimed by the system.
What makes a difference is not how many images you have, but how often UIKit traverses your code.
Both UIViews and Core Animation CALayers will only repaint if you ask them to (-setNeedsDisplay), and the bottleneck is usually your code plus transferring the rendered content into a texture for the graphics chip.
So my advice is to arrange your UIView layout so that portions which change together are updated at the same time, which turns into a single texture upload.
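
The image-sharing point above boils down to decoding the PNG once and handing the same UIImage (and therefore the same underlying CGImageRef) to every view. A minimal sketch, with an illustrative asset name:

```swift
import UIKit

// Decoded (and cached) once by UIKit; the asset name is hypothetical.
let tileImage = UIImage(named: "tile")

func makeTileViews(count: Int) -> [UIImageView] {
    (0..<count).map { _ in
        // Each UIImageView keeps a reference to the same bitmap rather
        // than loading its own copy of the PNG.
        UIImageView(image: tileImage)
    }
}
```
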
One large image mainly gives you more work and more headaches. It's harder to maintain, but is probably less RAM-intensive, because there is only one structure plus data in memory instead of many structures plus data (though probably not enough to notice).
Looking at the contents of .app bundles on regular Mac OS, it seems the generally approved method of storage is one file/resource per image.
Of course, this is assuming you're not getting the image resources from the web, where the bottleneck would be HTTP and its specified maximum of two concurrent requests.
One large image gives you better performance (assuming you have to render all the pictures anyway).
One large image will remove any overhead associated with opening and manipulating many images in memory.
I would say there is no authoritative answer to this question. A single large image cuts down on (slow) flash access and gets the decode done in one go, but a lot of smaller images give you better control over what processing happens when... though the single image is memory hungry and you do have to slice it up or mask it.
You will have to implement one solution and test it. If it isn't fast enough and you can't optimise it, implement the other. I suggest implementing the version you personally find easier to picture, because that will be the easiest to implement.
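
If you do go with one large PNG, slicing it into per-view tiles is straightforward with Core Graphics. A sketch under the assumption of a fixed tile size and a hypothetical sprite-sheet asset:

```swift
import UIKit

// Cut a large sprite sheet into tile-sized sub-images.
func tiles(fromSheetNamed name: String, tileSize: CGSize) -> [UIImage] {
    guard let sheet = UIImage(named: name)?.cgImage else { return [] }

    var result: [UIImage] = []
    let columns = Int(CGFloat(sheet.width) / tileSize.width)
    let rows = Int(CGFloat(sheet.height) / tileSize.height)

    for row in 0..<rows {
        for column in 0..<columns {
            let rect = CGRect(x: CGFloat(column) * tileSize.width,
                              y: CGFloat(row) * tileSize.height,
                              width: tileSize.width,
                              height: tileSize.height)
            // cropping(to:) references the parent bitmap; it does not copy pixels.
            if let tile = sheet.cropping(to: rect) {
                result.append(UIImage(cgImage: tile))
            }
        }
    }
    return result
}
```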