
iOS, Quartz 2D: fastest way of drawing a bitmap context into the window context?
Hello, sorry for my weak English.
I am looking hard for the fastest possible way of redrawing a bitmap context (which holds a pointer to my raw bitmap data) onto the iPhone view's window context.
In the examples I have found on the net, people do this by making a CGImage from such a bitmap context, then making a UIImage from that and drawing it onto the view.
I am wondering whether that is really the fastest way of doing it. Do I need to create and then release the CGImage? The documentation says that making a CGImage copies the data. Is it possible to send my bitmap context data straight to the window context without allocating, copying, and then releasing it in a CGImage? (The copy seems physically unnecessary.)
parade

Well, I have done some measuring and here is what I have got:
There is no need to worry about the CGImage and UIImage creation, because all of that only takes about 2 milliseconds. My own image-processing routines take the most time (about 100 ms), and drawing the UIImage at a point takes 20 ms. There is also a third thing: when I receive the image buffer in my video-frame-ready delegate, I call setNeedsDisplay via performSelectorOnMainThread, and this operation sometimes takes 2 milliseconds and sometimes about 40 milliseconds. Does anybody know what is going on there, and can I speed this up? Thanks in advance.
parade
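
For what it's worth, a non-blocking way to request the redraw from the capture thread looks roughly like this (only a sketch; previewView stands in for whatever view draws the frame):

[previewView performSelectorOnMainThread:@selector(setNeedsDisplay)
                              withObject:nil
                           waitUntilDone:NO];
// With waitUntilDone:NO the call returns immediately; with waitUntilDone:YES it
// blocks until the main run loop gets around to it, which can produce exactly
// this kind of variable delay.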

I think I see what you are getting at. You have a pointer to the bitmap data and you just want the window to display it. On the old Mac OS (9 and earlier) you could write directly to video memory, but you can't do that anymore. Back then video memory was part of RAM; now it all lives on the graphics hardware.
At some level the bitmap data will have to be copied at least once. You can either do it directly, by creating an OpenGL texture from the data and drawing that in an OpenGL context, or you can use the UIImage approach. The UIImage approach will be slower and may involve two or more copies of the bitmap data: one into the UIImage and one when rendering the UIImage.
In either case, you need to create and release the CGImage.
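
As a point of reference, the one-time OpenGL texture creation mentioned above looks roughly like this; this is only a sketch, assuming an EAGL context is already current and bitmapBytes points at width x height RGBA pixels (all names are placeholders):

GLuint textureId;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This call is where the copy from CPU memory to the GPU happens.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, bitmapBytes);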

The copy is necessary. You first have to get the bitmap onto the GPU, as only the GPU can composite a layer to the display window, and the GPU has to make a copy into its opaque (device-dependent) format. One way to do this is to create an image from your bitmap context (alternatives include uploading an OpenGL texture, etc.).
Once you have created an image you can draw it, or assign it to a visible CALayer's contents. The latter may be faster.
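
A minimal sketch of that, assuming bitmapContext is the CGBitmapContext wrapping your raw pixels and targetLayer is a visible CALayer (both names are placeholders):

CGImageRef image = CGBitmapContextCreateImage(bitmapContext); // copy-on-write snapshot of the pixels
[CATransaction begin];
[CATransaction setDisableActions:YES];   // skip the implicit animation on the contents change
targetLayer.contents = (id)image;        // use (__bridge id)image under ARC
[CATransaction commit];
CGImageRelease(image);                   // the layer retains what it needs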

Related

Displaying stream of CGImage data as fast as possible

I am receiving a series of images over a network connection, and want to display them as fast as possible, up to 30 FPS. I have derived a UIView subclass (so I can override drawRect) and exposed a UIImage property called cameraImage. The frame data arrives as fast as needed, but the actual drawing to screen takes far too much time (as a result of the lag from setNeedsDisplay), both creating lag in the video and slowing down interaction with other controls. What is a better way to do this? I've thought of using OpenGL, but the only examples I've seen aren't about drawing static images to the screen, but about adding textures to some rotating polygon.
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect screenBounds = [[UIScreen mainScreen] bounds];
    [self.cameraImage drawInRect:screenBounds];
    // Drawing code
    //CGContextDrawImage(context, screenBounds, [self.cameraImage CGImage]); // Doesn't help
}
Try setting a CALayer's contents to the image. Or, if you set it up with relatively sane parameters, setting a UIImageView's image shouldn't be much slower.
OpenGL is definitely an alternative, but it's much more difficult to work with, and it will still be limited by the same thing as the others: the texture upload speed.
(The way you're doing it now, drawing in drawRect:, you have to let UIKit allocate a bitmap for you, then draw the image into that bitmap, then upload the bitmap to the video hardware. Best to skip the middleman and upload the image directly, if you can.)
It may also depend on where the CGImage came from. Is it backed by raw bitmap data (and if so, in what format?), or is it a JPEG or PNG? If the image has to be format-converted or decoded, it will take longer.
Also, how large is the CGImage? Make sure it isn't larger than it needs to be!
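
If the frames arrive as raw pixels, one way to avoid any conversion step is to keep them in 32-bit premultiplied BGRA (commonly cited as the fast path on iOS) and wrap them in a CGImage directly. A rough sketch, with all names illustrative:

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(frameBytes, width, height, 8, bytesPerRow, colorSpace,
                                            kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef frameImage = CGBitmapContextCreateImage(bitmap);

imageView.image = [UIImage imageWithCGImage:frameImage]; // or assign it to a CALayer's contents

CGImageRelease(frameImage);
CGContextRelease(bitmap);
CGColorSpaceRelease(colorSpace);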

OpenGL ES: draw offscreen to provide the contents for a CALayer

Is it possible to use OpenGL ES to draw offscreen to create a CGImageRef to use as the contents of a CALayer?
I intend to alter the image only once. Specifically, I'm looking for an efficient way to change only the hue of an image without also changing the brightness. The other solution might be to create a pixel buffer and change the data directly, but that seems computationally expensive.
Although it's not something I've done, it should be possible.
If you check out the current OpenGL ES template in Xcode, especially EAGLView.m, you'll see that the parts that bind the OpenGL context in there to the screen are:
line 77, [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];, which tells the CAEAGLLayer to provide sufficient details to the framebuffer object being created there so that it can be displayed on screen.
line 128, success = [context presentRenderbuffer:GL_RENDERBUFFER];, which gives the CAEAGLLayer the nod that you've drawn a whole new frame and it should present that when possible.
What you should be able to do is dump the CAEAGLLayer connection entirely (and, therefore, you don't need to create a UIView subclass), use glRenderbufferStorage or glRenderbufferStorageMultisampleAPPLE to allocate a colour buffer for your framebuffer instead (so that it has storage, but wherever OpenGL feels like putting it), do all your drawing, then use glReadPixels to get the pixel contents back.
From there you can use CGDataProviderCreateWithData and CGImageCreate to convert the raw pixel data to a suitable CGImageRef.
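
A rough sketch of that glReadPixels-to-CGImage step, assuming the offscreen framebuffer (with colour storage allocated via glRenderbufferStorage) is already bound and rendered at width x height; names are placeholders:

static void releasePixels(void *info, const void *data, size_t size) {
    free((void *)data);
}

// ... after rendering into the offscreen framebuffer:
GLubyte *pixels = (GLubyte *)malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, width * height * 4, releasePixels);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                 kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast,
                                 provider, NULL, NO, kCGRenderingIntentDefault);
// OpenGL's origin is bottom-left, so the result is vertically flipped relative
// to UIKit unless you flip it when drawing it.
layer.contents = (id)image;

CGImageRelease(image);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);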
The GPU stuff should be a lot faster than you can manage on the CPU normally, but your main costs are likely to be the upload and the download. If you don't actually need it as a CGImageRef other than to show it on screen, you'll be better just using a CAEAGLLayer toting UIView subclass. They act exactly like any other view — updating if and when you push new data, compositing in exactly the same way — so there's no additional complexity. The only disadvantage, if you're new, is that most tutorials and sample code on OpenGL tend to focus on setting things up to be full screen, updating 60 times a second, etc, that being what games want.

About fast swapping of textures for OpenGL ES rendering

In my OpenGL ES context I want to animate one texture, but the data for this texture may change many times per second (like a video frame, but slower).
The idea is to animate a simple rectangular surface in my 3D scene.
My thinking is that the fastest technique would be to load the next few textures into memory (by loading CGImageRefs on another thread) and push the data into my texture just before using it.
What do you think about this?
Thanks a lot
You can look at this cover flow sample code. It stores the textures in a container (an array or a dictionary) and retrieves them as necessary. But keep this in mind: a texture occupies physical memory. If your texture is 256x256, it will occupy 256x256x4 bytes (256 KB). If you store too many textures, it is easy to get iOS memory warnings.
You might take an approach similar to UITableView's handling of row views, in which you manage a queue of some number of reusable textures. As you need frames, dequeue the next texture in that queue, load the new image into it, and add it to a different queue of loaded textures ready to be displayed at the appropriate time. After you swap out a frame, enqueue the previously displayed texture again so it can be reused later.
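
A rough sketch of the update step for a reused texture, assuming textureId was created once with glTexImage2D at the frame size and frameBytes holds the next frame's RGBA pixels (names are placeholders):

glBindTexture(GL_TEXTURE_2D, textureId);
// Overwrite the existing storage instead of recreating the texture each frame.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth, frameHeight,
                GL_RGBA, GL_UNSIGNED_BYTE, frameBytes);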

Continuously drawing into an iPhone bitmap object? What am I missing?

I am pretty sure this is a straightforward problem I must be confused about (I am a total newbie on the iPhone): I am trying to continuously draw shapes using the CG APIs into a bitmap object, and to always show an up-to-date version of that bitmap on the screen, inside a UIScrollView object so the user can pan, zoom, etc.
Using UIView with drawRect is no good because this is an ongoing thing -- I have a separate thread where all the drawing commands are issued, and I need them to be applied and accumulated on that same framebuffer (kind of like a web page rendering engine). What I keep seeing is that all the CGContext APIs related to images seem to CREATE a brand new image whenever you want it rendered to the screen. That sounds bad because it forces the creation of another copy of the image.
I guess I am looking for a way to create an off-screen bitmap, render to it as much as I want using Core Graphics, and whenever I want to, blt that image to the screen, but still retaining the ability to keep drawing to it and blt it to the screen again, later on.
Thanks in advance!
You could use a CGLayer. Create it once with CGLayerCreateWithContext() and later retrieve the layer's context with CGLayerGetContext() whenever you need to draw into it.
To draw the layer into another graphics context, call CGContextDrawLayerAtPoint() or CGContextDrawLayerInRect().
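
A minimal sketch, assuming windowContext is the context you eventually draw into (for example the one returned by UIGraphicsGetCurrentContext() inside drawRect:); the layer size here is arbitrary:

// Created once; the layer accumulates whatever you draw into it.
CGLayerRef scratchLayer = CGLayerCreateWithContext(windowContext, CGSizeMake(1024, 1024), NULL);
CGContextRef layerContext = CGLayerGetContext(scratchLayer);

// Issue drawing commands as often as you like (serialise access if they come from another thread).
CGContextSetFillColorWithColor(layerContext, [UIColor redColor].CGColor);
CGContextFillRect(layerContext, CGRectMake(10, 10, 50, 50));

// Later, inside drawRect:, composite the accumulated layer onto the screen.
CGContextDrawLayerAtPoint(windowContext, CGPointZero, scratchLayer);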

Fastest iPhone Blit Routine?

I have a UIView subclass onto which I need to blit a UIImage. There are several ways to skin this cat depending on which series of APIs you prefer to use, and I'm interested in the fastest. Would it be UIImage's drawAtPoint or drawRect? Or perhaps the C-based CoreGraphics routines, or something else? I have no qualms about altering my source image data format if it'll make the blitting that much faster.
To describe my situation my app has anywhere from ~10 to ~200 small UIViews (64x64), a subset of which will need to be redrawn based on user interaction. My current implementation is a call to drawAtPoint inside my UIView subclass' drawRect routine. If you can think of a better way to handle this kind of scenario, I'm all ears (well, eyes).
Using an OpenGL view may be fastest of all. Keep an age cache of images (or, if you know a better way to determine when certain images can be removed from the cache, by all means use that) and preload as many images as you can while the app is idle. It should be very quick, with almost no Objective-C calls involved (just -draw).
While not a "blit" at all, given the requirements of the problem (many small images with various state changes), I was able to keep the different states to redraw in their own separate UIImageView instances and just show/hide the appropriate instance when the state changed.
Since CALayer is lightweight and fast, I would give it a try.
Thierry
The fastest blit implementation you are going to find is in my AVAnimator library; it contains an ARM asm implementation of a blit for a CoreGraphics buffer, so have a look at the source. The way you could make use of it would be to create a single graphics context the size of the whole screen, blit your specific image changes into that single graphics context, then create a UIImage from it and set that as the image of a UIImageView. That would involve one GPU upload per refresh, so it will not depend on how many images you render into the buffer.
But you will likely not need to go that low-level. You should first try making each 64x64 image into a CALayer and then updating each layer with the contents of an image that is exactly the size of the layer (64x64). The only tricky thing is that you will want to decompress each of your original images if they come from PNG or JPEG files. You do that by creating another pixel buffer and rendering the original image into the new pixel buffer; that way all the PNG or JPEG decompression is done before you start setting CALayer contents.
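
A rough sketch of that decompression step: render the compressed PNG/JPEG into a fresh bitmap context once, so the decode cost is paid up front rather than when the layer is first composited (names are illustrative):

UIImage *source = [UIImage imageNamed:@"tile64.png"];    // hypothetical 64x64 asset
CGSize size = CGSizeMake(64, 64);

UIGraphicsBeginImageContextWithOptions(size, YES, 0.0);  // opaque, screen scale
[source drawInRect:CGRectMake(0, 0, size.width, size.height)];
UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

tileLayer.contents = (id)decoded.CGImage;                // one of the 64x64 CALayers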