I'm writing an app that offloads some heavy drawing into an EAGLView, and it does some lightweight stuff in UIKit on top. It seems that the GL render during an orientation change is cached somewhere, and I can't figure out how to get access to it. Specifically:
After an orientation change, calling glClear(GL_COLOR_BUFFER_BIT) isn't enough to clear the GL display (drawing is cached somewhere?). How can I clear this cache?
After an orientation change, glReadPixels() can no longer access pixels drawn before the orientation change. How can I get access to where this is stored?
I'm not an expert, but I was reading the Optimizing OpenGL ES for iPhone OS guide, and it states:
Avoid Mixing OpenGL ES with Native Platform Rendering
You may want to check it out to see if it helps.
After an orientation change, calling glClear(GL_COLOR_BUFFER_BIT) isn't enough to clear the GL display (drawing is cached somewhere?). How can I clear this cache?
You're drawing to an offscreen image. That image only becomes available for Core Animation compositing when you call -presentRenderbuffer:.
After an orientation change, glReadPixels() can no longer access pixels drawn before the orientation change. How can I get access to where this is stored?
I assume you're using the RetainedBacking option. Without that option, you can never read the contents of a previous frame, even outside of rotation. When you call -presentRenderbuffer:, the contents of the offscreen image are shipped off to Core Animation for compositing, and a new image takes its place. The contents of the new image are undefined.
Assuming you are using something derived from the EAGLView sample code and that you are using RetainedBacking, when the rotation occurs your framebuffer is resized by deallocating and reallocating it. Any existing contents will be lost when this occurs.
You can either:
1) save the contents yourself across the transition by calling glReadPixels
2) never reallocate the framebuffer, and instead rotate the UIView (or CALayer) using the transform property. Doing so can cause quite a performance hit during compositing, but you'll get the rotation you're looking for without having to resize your framebuffer.
Related
I would like to make an app where you can paint like in the GLPaint sample code, but also zoom in to paint in more detail within your painting.
But I get the feeling that OpenGL ES 1.0, which is used in the GLPaint app, is pretty difficult to learn and could be a little bit of overkill for my needs.
If I change the main view's frame with the setFrame: method to zoom with a gesture recognizer, the already painted lines get erased with every change of the frame's size.
So I tried to realize it with another idea: in the touchesMoved: method I add UIImageViews with an image of the brush at "many" positions. It is slower than the GLPaint app and a little bit of a memory-management mess, but I don't see another way to go.
Any suggestions? Should I learn OpenGL ES 1.0 or 2.0, or try to realize the last idea?
You can certainly achieve what you are after, but it will require some effort.
Usually zooming is quite straightforward, as most OpenGL scenes do not accumulate their drawing across frames the way the GLPaint sample code does.
If you just zoom the view in GLPaint, your new painting will be drawn at some adjusted scale over your original drawing - which is almost certainly not what you want.
A workaround is, instead of drawing directly to your presented screen buffer, to first render into a texture-backed framebuffer, then render that texture on a quad (or equivalent). That way the quad scene can be cleared and re-rendered every frame refresh (at any scale you choose) while your paint buffer retains its accumulated contents.
This has been tested and works.
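The pattern can be sketched without any GL at all. Here is a minimal software analogue (all names are invented for illustration): a persistent canvas that strokes accumulate into, and a "present" pass that clears its target and resamples the canvas at an arbitrary zoom each frame without ever touching the canvas itself.

```c
#include <string.h>

#define CANVAS_W 8
#define CANVAS_H 8

/* Persistent "paint" canvas: strokes accumulate here and are never cleared,
   playing the role of the texture attached to the paint FBO. */
static unsigned char canvas[CANVAS_H][CANVAS_W];

/* Accumulate one brush dab into the canvas. */
void dab(int x, int y) { canvas[y][x] = 255; }

/* Per-frame present pass: clear the "screen", then sample the canvas at a
   zoom factor (nearest-neighbour). The canvas is left untouched, so you can
   clear and re-present at any scale without losing strokes. */
void present(unsigned char *screen, int sw, int sh, float zoom) {
    memset(screen, 0, (size_t)sw * (size_t)sh);   /* the glClear analogue */
    for (int sy = 0; sy < sh; sy++)
        for (int sx = 0; sx < sw; sx++) {
            int cx = (int)((float)sx / zoom);
            int cy = (int)((float)sy / zoom);
            if (cx < CANVAS_W && cy < CANVAS_H)
                screen[sy * sw + sx] = canvas[cy][cx];
        }
}
```

In the real thing, the canvas is the FBO-attached texture, the present pass is your textured quad draw, and the zoom factor comes from your gesture recognizer.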
I am quite sure the image view method will be overkill after drawing for a few minutes... You can do all the zooming quite nicely with OpenGL, and I suggest you do that. The best practice would be to create a canvas as large as possible, so that when you zoom in you will not lose any resolution.
About zooming: do not try to resize the GL frame (or any frame, for that matter), because even if you manage to do that successfully you will lose resolution. You should use standard matrices to translate and scale the scene, or just play around with glOrtho (set its values to the rect you are currently seeing). Once you get that part working there are, sadly, two more things to do that require a bit of math: first, you will have to compute the new touch positions in the OpenGL scene, as locationInView: will not know about your zooming and translating; second, you probably need to scale the brush as well (make it smaller when the scene is bigger, so you can draw details).
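Both of those corrections are just the inverse of the zoom/translate you applied. A sketch in plain C (function and parameter names are my own, assuming a uniform zoom factor and a scene-space offset for the visible origin, not anything from GLPaint):

```c
/* Map a touch point from view coordinates into scene coordinates, given a
   uniform zoom factor and the scene-space position of the visible origin. */
typedef struct { float x, y; } Point2;

Point2 touch_to_scene(Point2 touch, float zoom, Point2 origin) {
    Point2 p = { touch.x / zoom + origin.x, touch.y / zoom + origin.y };
    return p;
}

/* Shrink the brush as the zoom grows, so its on-screen size stays constant
   and you can paint finer detail when zoomed in. */
float brush_size(float base_size, float zoom) {
    return base_size / zoom;
}
```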
About the canvas: I do suggest you draw to an FBO rather than your main render buffer and present its texture to your main render scene. Note that the FBO will have a texture attached whose size must be a power of two (create 2048x2048, or 4096x4096 for newer devices), but you will probably use only part of it to keep the same aspect ratio as the screen (glViewport should do the job), so you will have to compute the texture coordinates. Overall the drawing mechanism doesn't change much.
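Those texture coordinates are just the used fraction of each axis. A small helper as a sketch (assuming a quad laid out as four tightly packed (s, t) pairs; names are illustrative):

```c
#include <string.h>

/* Fill the four (s, t) pairs for a quad that samples only the
   used_w x used_h corner of a tex_w x tex_h power-of-two texture. */
void quad_tex_coords(int used_w, int used_h, int tex_w, int tex_h,
                     float out[8]) {
    float smax = (float)used_w / (float)tex_w;
    float tmax = (float)used_h / (float)tex_h;
    float coords[8] = { 0.0f, 0.0f,   smax, 0.0f,
                        0.0f, tmax,   smax, tmax };
    memcpy(out, coords, sizeof coords);
}
```

For example, a 1024x768 drawing area inside a 2048x2048 texture samples s in [0, 0.5] and t in [0, 0.375].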
So to sum this up, imagine you have a canvas (FBO) to which you apply the brush of certain size and position on touches events, then you use that canvas as a texture and draw it on your main GL view.
Is it possible to use OpenGL ES to draw offscreen and create a CGImageRef to use as the contents of a CALayer?
I intend to alter the image only once. In detail, I'm looking for an efficient way to change only the hue of an image without changing the brightness as well. The other solution might be to create a pixel buffer and change the data directly, but that seems computationally expensive.
Although it's not something I've done, it should be possible.
If you check out the current OpenGL ES template in Xcode, especially EAGLView.m, you'll see that the parts that bind the OpenGL context in there to the screen are:
line 77, [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];, which tells the CAEAGLLayer to provide sufficient details to the framebuffer object there being created so that it can be displayed on screen.
line 128, success = [context presentRenderbuffer:GL_RENDERBUFFER];, which gives the CAEAGLLayer the nod that you've drawn a whole new frame and it should present that when possible.
What you should be able to do is dump the CAEAGLLayer connection entirely (and, therefore, you don't need to create a UIView subclass), use glRenderbufferStorage or glRenderbufferStorageMultisampleAPPLE to allocate a colour buffer for your framebuffer instead (so that it has storage, but wherever OpenGL feels like putting it), do all your drawing, then use glReadPixels to get the pixel contents back.
From there you can use CGDataProviderCreateWithData and CGImageCreate to convert the raw pixel data to a suitable CGImageRef.
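One detail worth noting along the way: glReadPixels returns rows with the origin at the bottom-left, while a CGImage built from a plain bitmap buffer is read top-down, so you will likely need to flip the rows before handing the buffer to CGImageCreate. A sketch (assuming tightly packed RGBA, 4 bytes per pixel):

```c
#include <stdlib.h>
#include <string.h>

/* Flip an RGBA pixel buffer vertically, in place: glReadPixels hands back
   rows bottom-to-top, but CGImageCreate expects top-to-bottom rows. */
void flip_rows(unsigned char *pixels, int width, int height) {
    size_t stride = (size_t)width * 4;
    unsigned char *tmp = malloc(stride);
    for (int y = 0; y < height / 2; y++) {
        unsigned char *top = pixels + (size_t)y * stride;
        unsigned char *bot = pixels + (size_t)(height - 1 - y) * stride;
        memcpy(tmp, top, stride);
        memcpy(top, bot, stride);
        memcpy(bot, tmp, stride);
    }
    free(tmp);
}
```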
The GPU route should be a lot faster than you can normally manage on the CPU, but your main costs are likely to be the upload and the download. If you don't actually need it as a CGImageRef other than to show it on screen, you'd be better off just using a CAEAGLLayer-toting UIView subclass. They act exactly like any other view — updating if and when you push new data, compositing in exactly the same way — so there's no additional complexity. The only disadvantage, if you're new to this, is that most tutorials and sample code on OpenGL tend to focus on setting things up to run full screen, updating 60 times a second, etc., that being what games want.
I am pretty sure this is a straightforward problem I must be confused about (I am a total newbie on the iPhone): I am trying to continuously draw shapes using the CG APIs into a bitmap object, and to always show an up-to-date version of that bitmap on the screen, inside a UIScrollView object so the user can pan, zoom, etc.
Using UIView with drawRect: is no good because this is an ongoing thing -- I have a separate thread where all the drawing commands are issued, and I need them to be applied and accumulated on that same framebuffer (kind of like a web page rendering engine). What I keep seeing is that all the CGContext APIs related to images seem to CREATE a brand-new image whenever you want it rendered to the screen. That sounds bad because it forces the creation of another copy of the image.
I guess I am looking for a way to create an off-screen bitmap, render to it as much as I want using Core Graphics, and, whenever I want to, blit that image to the screen, while still retaining the ability to keep drawing to it and blit it to the screen again later on.
Thanks in advance!
You could use a CGLayer. Create it once with CGLayerCreateWithContext() and later retrieve the layer's context with CGLayerGetContext() whenever you need to draw into it.
To draw the layer into another graphics context, call CGContextDrawLayerAtPoint() or CGContextDrawLayerInRect().
I'm using OpenGL ES 1.1 to render a large view in an iPhone app. I have a "screenshot"/"save" function, which basically creates a new GL context offscreen, and then takes exactly the same geometry and renders it to the offscreen context. This produces the expected result.
Yet for reasons I don't understand, the amount of time (measured with CFAbsoluteTimeGetCurrent before and after) that the actual draw calls take when sending to the offscreen buffer is more than an order of magnitude longer than when drawing to the main framebuffer that backs an actual UIView. All of the GL state is the same for both, and the geometry list is the same, and the sequence of calls to draw is the same.
Note that there happens to be a LOT of geometry here -- the order of magnitude is clearly measurable and repeatable. Also note that I'm not timing the glReadPixels call, which is the thing that I believe actually pulls data back from the GPU. This is just a measure of the time spent in e.g. glDrawArrays.
I've tried:
Rendering that geometry to the screen again just after doing the offscreen render: it takes the same quick time for the screen draw.
Rendering the offscreen thing twice in a row: both times show the same slow draw speed.
Is this an inherent limitation of offscreen buffers? Or might I be missing something fundamental here?
Thanks for your insight/explanation!
Your best bet is probably to sample both your offscreen rendering and window system rendering each running in a tight loop with the CPU Sampler in Instruments and compare the results to see what differences there are.
Also, could you be a bit more clear about what exactly you mean by “render the offscreen thing twice in a row?” You mentioned at the beginning of the question that you “create a new GL context offscreen”—do you mean a new framebuffer and renderbuffer, or a completely new EAGLContext? Depending on how many new resources and objects you’re creating in order to do your offscreen rendering, the driver may need to do a lot of work to set up these resources the first time you use them in a draw call. If you’re just screenshotting the same content you were putting onscreen, you shouldn’t even need to do any of this—it should be sufficient to call glReadPixels before -[EAGLContext presentRenderbuffer:], since the backbuffer contents will still be defined at that point.
Could offscreen rendering be forcing the GPU to flush all its normal state, then do your render, flush the offscreen context, and have to reload all the normal stuff back in from CPU memory? That could take a lot longer than any rendering using data and frame buffers that stays completely on the GPU.
I'm not an expert on the issue, but from what I understand graphics accelerators are optimized for sending data to the screen, so normally the path is Code ---vertices---> Accelerator ---rendered-image---> Screen. In your case you are flushing the framebuffer back into main memory, which might be hitting some kind of bandwidth bottleneck in the memory controller.
I've got several UIViews that have a layer class of CATiledLayer because they need to be zoomed using UIScrollViews. I use Quartz to render some PDF pages inside said UIViews. Everything is fine until I try to animate the views' frames on page rotation.
The animation is fine: the contents of the view get scaled quickly and cheaply to match the new frame size. After the animation I call setNeedsDisplay to re-render the whole thing. Now, before my -(void)drawLayer:(CALayer*)layer inContext:(CGContextRef)ctx gets called (but after setNeedsDisplay), the view's contents briefly revert back to the previous state (with the remaining pixels being stretched edge pixels in cases where the contents become smaller than the view). This results in a flash of some very annoying graphical distortion before reverting back to normal thanks to the new "render pass".
After a lot of debugging I've managed to work out that this definitely happens during the actual drawing cycle (rather than in the application run cycle where I call setNeedsDisplay), but before -(void)drawLayer:(CALayer*)layer inContext:(CGContextRef)ctx even gets called.
What's the sanest way of avoiding or further debugging this?
Edit: I've thrown together a very simple project that demonstrates this effect perfectly. http://dl.dropbox.com/u/1478968/EEBug.zip
PS: There's a sleep(2) in the rendering code just so that the effect could be better observed on screen.
My guess is that PDF rendering is done by the CPU, and it takes a fair amount of time to load any PDF renderings as textures from CPU memory into GPU memory for actual display. So your new UIView rendering might be getting displayed before all your updated PDF layer pixels make it to the GPU.
Your best bet may be to pre-render the new PDF images at the new scale factor into another offscreen CALayer before calling setNeedsDisplay on the UIView, give them time to upload to the GPU (you may have to profile/tune for the bus bandwidth of the worst-case device you support), and then finish your view animation while quickly flipping between the old and new CALayers.