Displaying a stream of CGImage data as fast as possible - iPhone

I am receiving a series of images over a network connection and want to display them as fast as possible, up to 30 FPS. I have subclassed UIView (so I can override drawRect:) and exposed a UIImage property called cameraImage. The frame data arrives as fast as needed, but the actual drawing to screen takes far too much time (because of the lag from setNeedsDisplay), both causing lag in the video and slowing down interaction with other controls. What is a better way to do this? I've thought of using OpenGL, but the only examples I've seen aren't about drawing static images to the screen; they're about adding a texture to some rotating polygon.
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect screenBounds = [[UIScreen mainScreen] bounds];
    [self.cameraImage drawInRect:screenBounds];
    // Drawing code
    //CGContextDrawImage(context, screenBounds, [self.cameraImage CGImage]); // Doesn't help
}

Try setting a CALayer's contents to the image. Or, if you set it up with relatively sane parameters, setting a UIImageView's image shouldn't be much slower.
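A minimal sketch of that approach, reusing the cameraImage property name from the question (the helper method name is hypothetical; under ARC the cast would need __bridge):

- (void)pushFrame:(UIImage *)frame   // call on the main thread for each received frame
{
    self.cameraImage = frame;
    self.layer.contents = (id)frame.CGImage;   // no drawRect:, no setNeedsDisplay
    // If you go this route, remove the drawRect: override so UIKit doesn't replace the contents.
}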
OpenGL is definitely an alternative, but it's much more difficult to work with, and it will still be limited by the same thing as the others: the texture upload speed.
(The way you're doing it now, drawing in drawRect:, you have to let UIKit allocate a bitmap for you, then draw the image into that bitmap, then upload the bitmap to the video hardware. It's best to skip the middleman and upload the image directly, if you can.)
It may also depend on where the CGImage came from. Is it backed by raw bitmap data (and if so, in what format?), or is it a JPEG or PNG? If the image has to be format-converted or decoded, it will take longer.
Also, how large is the CGImage? Make sure it isn't larger than it needs to be!
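For instance, a quick sanity check (assuming the cameraImage property from the question):

CGImageRef cg = self.cameraImage.CGImage;
NSLog(@"incoming frame is %zu x %zu pixels", CGImageGetWidth(cg), CGImageGetHeight(cg));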

Related

How to save an image larger than the device resolution on iPhone?

My question is related to this link.
I would like to know how we can save images larger than the device resolution using CGBitmapContextCreate.
Any sample code for guidance would be much appreciated.
Thanks!
Don't use CGBitmapContextCreate; use UIGraphicsBeginImageContextWithOptions instead, as it's much easier. Use it like this:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), YES, 1.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
//do your drawing
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//your resultant UIImage is now stored in image variable
The three parameters to UIGraphicsBeginImageContextWithOptions are:
The size of the image - this can be anything you want
Whether the image is opaque; pass YES (as in the example above) if it doesn't need transparency
The scale of pixels in the image. 0.0 is the default, so on an iPhone 3GS it will be 1.0 and on an iPhone 4 with a retina display it will be 2.0. You can pass in any scale you want though, so if you pass in 5.0, each pixel unit in your image will actually be 5x5 pixels in the bitmap, just like 1 pixel on a Retina display is really 2x2 pixels on screen.
Edit: it turns out that the question of whether UIGraphicsBeginImageContext() is thread-safe seems to be a bit controversial. If you do need to do this concurrently on a background thread, there is an alternative (rather more complex) approach using CGBitmapContextCreate(), described here: UIGraphicsBeginImageContext vs CGBitmapContextCreate
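For reference, a minimal CGBitmapContextCreate sketch of the same idea (assuming 8-bit-per-channel RGBA; the dimensions are placeholders) might look like this:

size_t width = 640, height = 960;   // placeholder dimensions, use whatever you need
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
//do your drawing into context
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(context);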

Redundant image drawing with CGContextDrawImage

I have one image that I wish to redraw repeatedly over the screen, however, there is a large number of redraws per second, and drawing the image each time makes the app take a huge performance hit. Is there a way to somehow cache the CGImageRef or something that would make CGContextDrawImage perform faster?
Try using UIImageViews and see if it's fast enough. You are allowed to have many UIImageViews. You should set all of their image properties to the same instance of UIImage.
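For example, a small sketch of that idea (the asset name is hypothetical; the release call is for pre-ARC code like the rest of this thread):

UIImage *sprite = [UIImage imageNamed:@"sprite.png"];   // hypothetical asset
for (int i = 0; i < 20; i++) {
    UIImageView *imageView = [[UIImageView alloc] initWithImage:sprite];   // every view shares one UIImage
    imageView.center = CGPointMake(arc4random_uniform(320), arc4random_uniform(480));
    [self.view addSubview:imageView];
    [imageView release];   // omit under ARC
}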
If it's for a game, you should just use a game engine (Unity, Cocos2D, etc.). They have already spent a lot of time figuring out how to make this stuff fast.
A CGLayerRef should be what you need.
From the Apple docs:
Layers are suited for the following:
High-quality offscreen rendering of drawing that you plan to reuse.
For example, you might be building a scene and plan to reuse the same background. Draw the background scene to a layer and then draw the layer whenever you need it. One added benefit is that you don’t need to know color space or device-dependent information to draw to a layer.
Repeated drawing. For example, you might want to create a pattern that consists of the same item drawn over and over. Draw the item to a layer and then repeatedly draw the layer, as shown in Figure 12-1. Any Quartz object that you draw repeatedly—including CGPath, CGShading, and CGPDFPage objects—benefits from improved performance if you draw it to a CGLayer. Note that a layer is not just for onscreen drawing; you can use it for graphics contexts that aren’t screen-oriented, such as a PDF graphics context.
https://developer.apple.com/library/mac/#documentation/graphicsimaging/Conceptual/drawingwithquartz2d/dq_layers/dq_layers.html#//apple_ref/doc/uid/TP30001066-CH219-TPXREF101
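A minimal sketch of the CGLayer pattern the docs describe (the size and the drawing are placeholders; in practice you would keep the CGLayerRef around between drawRect: calls rather than recreating it):

CGContextRef destination = UIGraphicsGetCurrentContext();   // e.g. inside drawRect:
CGLayerRef layer = CGLayerCreateWithContext(destination, CGSizeMake(64, 64), NULL);

// Draw the repeated item into the layer's own context once.
CGContextRef layerContext = CGLayerGetContext(layer);
CGContextSetRGBFillColor(layerContext, 1.0, 0.0, 0.0, 1.0);
CGContextFillEllipseInRect(layerContext, CGRectMake(0, 0, 64, 64));

// Stamp the cached layer as many times as needed; this is the cheap part.
for (int i = 0; i < 10; i++) {
    CGContextDrawLayerAtPoint(destination, CGPointMake(i * 70.0, 100.0), layer);
}
CGLayerRelease(layer);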

OpenGL ES: draw offscreen to provide the contents for a CALayer

Is it possible to use OpenGL ES to draw offscreen and create a CGImageRef to use as the contents of a CALayer?
I intend to alter the image only once. Specifically, I'm looking for an efficient way to change only the hue of an image without changing its brightness as well. The other solution might be to create a pixel buffer and change the data directly, but that seems computationally expensive.
Although it's not something I've done, it should be possible.
If you check out the current OpenGL ES template in Xcode, especially EAGLView.m, you'll see that the parts that bind the OpenGL context in there to the screen are:
line 77, [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];, which tells the CAEAGLLayer to provide sufficient details to the framebuffer object there being created so that it can be displayed on screen.
line 128, success = [context presentRenderbuffer:GL_RENDERBUFFER];, which gives the CAEAGLLayer the nod that you've drawn a whole new frame and it should present that when possible.
What you should be able to do is dump the CAEAGLLayer connection entirely (and, therefore, you don't need to create a UIView subclass), use glRenderbufferStorage or glRenderbufferStorageMultisampleAPPLE to allocate a colour buffer for your framebuffer instead (so that it has storage, but wherever OpenGL feels like putting it), do all your drawing, then use glReadPixels to get the pixel contents back.
From there you can use CGDataProviderCreateWithData and CGImageCreate to convert the raw pixel data to a suitable CGImageRef.
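A rough sketch of that read-back step (untested; width and height are assumed to match the renderbuffer, and the alpha/byte-order flags may need adjusting for your pixel format):

size_t bytes = width * height * 4;
GLubyte *pixels = (GLubyte *)malloc(bytes);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, bytes, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                 kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast,
                                 provider, NULL, NO, kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
// OpenGL's origin is bottom-left, so the rows come back upside down; also note that
// `pixels` must stay valid for the provider's lifetime (or pass a release callback).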
The GPU stuff should be a lot faster than you can manage on the CPU normally, but your main costs are likely to be the upload and the download. If you don't actually need it as a CGImageRef other than to show it on screen, you'll be better off just using a CAEAGLLayer-toting UIView subclass. They act exactly like any other view, updating if and when you push new data and compositing in exactly the same way, so there's no additional complexity. The only disadvantage, if you're new, is that most tutorials and sample code on OpenGL tend to focus on setting things up to be full screen, updating 60 times a second, etc., that being what games want.

iOS, Quartz 2D: fastest way of drawing a bitmap context into the window context?

Hello, sorry for my weak English.
I am looking hard for the fastest possible way of redrawing a bitmap context (which holds a pointer to my raw bitmap data) onto the iPhone view's window context.
In the examples I have found on the net, people do this by making a CGImage from such a bitmap context, then making a UIImage from that, and drawing it onto the view.
I am wondering whether that is really the fastest way of doing it. Do I need to create and then release the CGImage? The documentation says that making a CGImage copies the data. Is it possible to send my bitmap context data straight to the window context without allocating it, copying it, and then releasing it in a CGImage (which seems physically unnecessary)?
Well, I have done some measuring and here is what I have got:
There is no need to worry about creating the CGImage and UIImage, because that only takes about 2 milliseconds. My own image-processing routines take the most time (about 100 ms), and drawing the UIImage at a point takes 20 ms. There is also a third thing: when I receive the image buffer in my video-frame-ready delegate, I call setNeedsDisplay via performSelectorOnMainThread, and that operation sometimes takes 2 milliseconds and sometimes about 40 milliseconds. Does anybody know what is going on there? Can I speed that up? Thanks in advance.
I think I see what you are getting at. You have a pointer to the bitmap data and you just want the window to display that. On the old Mac OS (9 and earlier) you could draw directly to video memory, but you can't do that anymore. Back then video memory was part of RAM; now it's all on the OpenGL card.
At some level the bitmap data will have to be copied at least once. You can either do it directly by creating an OpenGL texture from the data and drawing that in an OpenGL context or you can use the UIImage approach. The UIImage approach will be slower and may contain two or more copies of the bitmap data, once to the UIImage and once when rendering the UIImage.
In either case, you need to create and release the CGImage.
The copy is necessary. You first have to get the bitmap into the GPU, as only the GPU can composite a layer onto the display window, and the GPU has to make a copy into its opaque (device-dependent) format. One way to do this is to create an image from your bitmap context (other alternatives include uploading an OpenGL texture, etc.).
Once you create an image you can draw it, or assign it to a visible CALayer's contents. The latter may be faster.
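A small sketch of the latter (bitmapContext and myView are assumed names for the questioner's objects; CGBitmapContextCreateImage is usually copy-on-write, so the copy is deferred until the bitmap next changes):

CGImageRef frame = CGBitmapContextCreateImage(bitmapContext);
myView.layer.contents = (id)frame;   // under ARC this needs a __bridge cast
CGImageRelease(frame);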

iPhone: rendering (or capturing) a UITableView into an image

I'd like to render a basic UITableView into an image I can save.
I found lots of code here for rendering a UIView to an image (for iPhone/iPod touch). In fact, the code below is what I pulled from other posts and am using now. The problem is that it doesn't draw anything from the UITableView that isn't visible when the method is invoked.
- (void)snapShotAction:(id)sender
{
    CGRect screenRect = [[UIScreen mainScreen] bounds];
    UIGraphicsBeginImageContext(screenRect.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    [[UIColor blackColor] set];
    CGContextFillRect(ctx, screenRect);
    //[self.view.layer renderInContext:ctx];
    [self.myTableView.layer renderInContext:ctx];
    UIImage *screenImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(screenImage, nil, nil, nil);
}
So I'd like the flexibility of being able to force the entire UITableView to render (including offscreen rows) to an image, but if no one has insight into making that happen, I guess my alternative is to create an image or graphics context and just render a facsimile of the list manually (and any references to sample code for doing that are appreciated).
Here's another idea that occurred to me. Perhaps I can use a separate UITableView with no bottom or top bars (to maximize vertical space), then calculate the row heights based on the number of items in the table, and then use the code above to capture that. Of course there will be limits to how many items the list can have before the text size is too small to be usable...
The problem lies in the way UITableView works. In order to save memory, it doesn't render a cell until just before it's going to be displayed, so any UITableViewCell that is off screen may not even be rendered. Short of digging in and injecting code into UITableView to make it render differently (probably not a good idea), you're going to have to stitch it together by scrolling down the list.
Here's an idea you can try that probably won't work: try changing the bounds of the window and all relevant views to be larger than the screen, large enough to hold the entire UITableView at once. Assuming that you're running on the simulator, this shouldn't kill the memory. However, I have a feeling that UITableView is smarter than to be fooled by that and bases how much it renders in part on the size of the screen. But it's probably worth a shot.
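If you want to try that experiment, a rough sketch (untested; it assumes the whole table fits comfortably in memory, and the helper name is hypothetical) could look like this:

- (UIImage *)imageOfEntireTableView:(UITableView *)tableView
{
    CGPoint savedOffset = tableView.contentOffset;
    CGRect savedFrame = tableView.frame;

    // Grow the table to its full content size so every row gets laid out,
    // then render the whole layer in one pass.
    tableView.contentOffset = CGPointZero;
    tableView.frame = CGRectMake(0, 0, tableView.contentSize.width, tableView.contentSize.height);

    UIGraphicsBeginImageContext(tableView.contentSize);
    [tableView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Put things back the way they were.
    tableView.frame = savedFrame;
    tableView.contentOffset = savedOffset;
    return image;
}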