I am developing an iPhone app that uses OpenGL ES to draw its graphics onscreen. Each frame I redraw the entire screen, which brings me to my question: is it necessary to call glClear on the color buffer each frame, or can I just overwrite it with the new data? glClear currently accounts for 9.9% of the frame time, so getting rid of it would provide a nice boost.
Forgive me if this is a very naive question, as I am something of a noob on the topic. Thanks in advance.
Theoretically you can leave it out.
But if you don't clear the color buffer, the previously drawn data stays in it, so skipping the clear is only safe when you overwrite every pixel of the screen each frame.
If you have a static scene that doesn't need re-drawing, then you could skip the call to glClear() for the color buffer.
From the Khronos OpenGL ES docs:
glClear sets the bitplane area of the window to values previously selected by glClearColor.
glClear takes a single argument indicating which buffer is to be cleared.
GL_COLOR_BUFFER_BIT Indicates the buffers currently enabled for color writing.
Typically (when making 3D games) you don't clear just the color buffer but the depth buffer as well, and you use the depth test to make sure that the stuff closest to the camera is drawn. It is therefore important to clear the depth buffer: otherwise only pixels with a lower Z value than in the previous frame would be redrawn. Not clearing the color buffer is also problematic when drawing transparent things.
If you're making a 2D application and manage the drawing order yourself (first draw the stuff furthest from the camera, then what is closer, a.k.a. the painter's algorithm), then you can leave out the clear call. One thing to note is that this approach brings another problem: overdraw, i.e. drawing the same pixels more often than necessary.
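To make the two cases concrete, here is a minimal sketch; the draw helpers are hypothetical placeholders, not from the question's code:

// Typical 3D frame: clear color and depth together in one call.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
draw3DScene();      // hypothetical: relies on a fresh depth buffer

// Full-screen 2D frame drawn back-to-front: every pixel is
// overwritten anyway, so the color clear can be skipped.
drawBackground();   // hypothetical: covers the whole screen
drawSprites();      // hypothetical: painter's-algorithm order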
Related
I am currently reading an iPhone OpenGL ES project that draws some 3D shapes (sphere, cone, ...). I am a little bit confused about the behavior of glDrawElements.
After binding the vertex buffer to GL_ARRAY_BUFFER and the index buffer to GL_ELEMENT_ARRAY_BUFFER, the function glDrawElements is called:
glDrawElements(GL_TRIANGLES, IndexCount, GL_UNSIGNED_SHORT, 0);
At first I thought this function draws the shapes on screen, but actually the shapes are later drawn on the screen using:
[m_context presentRenderbuffer:GL_RENDERBUFFER];
So what does glDrawElements do? The manual describes it as "render primitives from array data", but I don't understand the real meaning of "render" and its difference from "draw" (my native language is not English).
The DrawElements call is really what "does" the drawing. Or rather, it tells the GPU to draw, and the GPU will do so eventually.
The present call is only needed because the GPU usually works double buffered: one buffer that you don't see but draw to, and one buffer that is currently displayed on the screen. Once you are done with all the drawing, you flip them.
If you didn't do this, you would see flickering while drawing.
It also allows for parallel operation: you typically call DrawElements many times for one frame, and only when you call present does the GPU have to be done with all of them.
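A sketch of that flow, reusing the calls from the question (the buffer names are placeholders):

// Queue up draw commands; the GPU may not have executed them yet.
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glDrawElements(GL_TRIANGLES, IndexCount, GL_UNSIGNED_SHORT, 0);
// ... more glDrawElements calls for the same frame ...

// Only now must everything be finished: the back buffer is flipped
// onto the screen (Objective-C call, as in the question):
// [m_context presentRenderbuffer:GL_RENDERBUFFER];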
It's true that glDraw commands are responsible for your drawing and that you don't see any visible results until you call the presentRenderbuffer: method, but it's not about double buffering.
All iOS devices use GPUs designed around Tile-Based Deferred Rendering (TBDR). This is an algorithm that works very well in embedded environments with less compute resources, memory, and power-usage budget than a desktop GPU. On a desktop (stream-based) GPU, every draw command immediately does work that (typically) ends with some pixels being painted into a renderbuffer. (Usually, you don't see this because, usually, desktop GL apps are set up to use double or triple buffering, drawing a complete frame into an offscreen buffer and then swapping it onto the screen.)
TBDR is different. You can think about it being sort of like a laser printer: for that, you put together a bunch of PostScript commands (set state, draw this, set a different state, draw that, and so on), but the printer doesn't do any work until you've sent all the draw commands and finished with the one that says "okay, start laying down ink!" Then the printer computes the areas that need to be black and prints them all in one pass (instead of running up and down the page printing over areas it's already printed).
The advantage of TBDR is that the GPU knows about the whole scene before it starts painting -- this lets it use tricks like hidden surface removal to avoid doing work that won't result in visible pixels being painted.
So, on an iOS device, glDraw still "does" the drawing, but that work doesn't happen until the GPU needs it to happen. In the simple case, the GPU doesn't need to start working until you call presentRenderbuffer: (or, if you're using GLKView, until you return from its drawRect: or its delegate's glkView:drawInRect: method, which implicitly presents the renderbuffer). If you're using more advanced tricks, like rendering into textures and then using those textures to render to the screen, the GPU starts working for one render target as soon as you switch to another (using glBindFramebuffer or similar).
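A rough sketch of that render-to-texture case (the framebuffer and texture names are placeholders; on iOS even the "screen" is an app-created framebuffer):

// Pass 1: draw into a texture through an offscreen framebuffer.
// On a TBDR GPU these commands are only queued at this point.
glBindFramebuffer(GL_FRAMEBUFFER, textureFBO);
// ... draw calls that fill the texture ...

// Switching render targets is what forces pass 1 to actually
// execute, because its output is about to be consumed.
glBindFramebuffer(GL_FRAMEBUFFER, screenFBO);
glBindTexture(GL_TEXTURE_2D, renderedTexture);
// ... draw calls that sample the texture, then present ...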
There's a great explanation of how TBDR works in the Advances in OpenGL ES talk from WWDC 2013.
I would like to make an app where you can paint like in the GLPaint sample code, but also zoom in to paint finer detail within your painting.
But I get the feeling that OpenGL ES 1.0, which the GLPaint app uses, is pretty difficult to learn and could be a bit of overkill for my needs.
If I change the main view's frame with the setFrame method to zoom with a gesture recognizer, the already painted lines get erased with every change of the frame's size.
So I tried to realize it with another idea: in the touchesMoved method I add UIImageViews with an image of the brush at "many" positions. It is slower than the GLPaint app and a bit of a memory-management mess, but I don't see another way to go.
Any suggestions? Should I learn OpenGL ES 1.0 or 2.0, or try to realize the last idea?
You can certainly achieve what you want, but it will require some effort.
Usually zooming is quite straightforward, as most OpenGL scenes do not rely on the accumulation buffer the way the GLPaint sample code does.
If you just zoom the view in GLPaint, your new painting will be drawn at some adjusted scale over your original drawing, which is almost certainly not what you want.
A work-around is, instead of drawing directly to the buffer you present to the screen, to first render into a texture, then render that texture on a quad (or equivalent). That way the quad scene can be cleared and re-rendered every frame refresh (at any scale you choose) while your paint texture retains its accumulated strokes.
This has been tested and works.
I am quite sure the image-view method will become overkill after drawing for a few minutes... You can do all the zooming quite nicely with OpenGL, and I suggest you do that. Best practice would be to create a canvas as large as possible, so that when you zoom in you do not lose any resolution.
About zooming: do not try to resize the GL frame, or any frame for that matter, because even if you manage to do it you will lose resolution. Instead use standard matrices to translate and scale the scene, or just play around with glOrtho (set its values to the rect you are currently seeing). Once you have that part, there are sadly two more things to do that require a bit of math: first, you have to compute the new touch positions in the OpenGL scene, since locationInView: knows nothing about your zooming and translating; second, you probably need to scale the brush as well (make it smaller when the scene is bigger, so you can draw details).
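For the touch-mapping part, the math is roughly this (a sketch; scale and translate are assumed to be whatever state your gesture recognizers have accumulated):

// Undo the pan and zoom to get from view coordinates to canvas coordinates.
float canvasX = (touchX - translateX) / scale;
float canvasY = (touchY - translateY) / scale;

// Shrink the brush as you zoom in so fine detail stays possible.
float brushSize = baseBrushSize / scale;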
About the canvas: I do suggest you draw to an FBO rather than your main render buffer and present its texture to your main render scene. Note that the FBO will have a texture attached whose dimensions must be powers of two (create 2048x2048, or 4096x4096 for newer devices), but you will probably use only part of it to keep the same aspect ratio as the screen (glViewport should do the job), so you will have to compute the texture coordinates. Overall, the drawing mechanism doesn't change much.
So to sum this up: imagine you have a canvas (the FBO) to which you apply a brush of a certain size and position in the touch events, and then you use that canvas as a texture and draw it onto your main GL view.
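A minimal sketch of such a canvas under OpenGL ES 1.1, using the OES framebuffer extension (the 2048x2048 size and the variable names are assumptions):

GLuint canvasTexture, canvasFBO;

// Power-of-two canvas texture, as discussed above.
glGenTextures(1, &canvasTexture);
glBindTexture(GL_TEXTURE_2D, canvasTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2048, 2048, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Framebuffer object that renders into that texture.
glGenFramebuffersOES(1, &canvasFBO);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, canvasFBO);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                          GL_TEXTURE_2D, canvasTexture, 0);

// Brush strokes are drawn while canvasFBO is bound; each frame the
// main framebuffer is cleared and the canvas texture is drawn on a
// quad at the current zoom and translation.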
I'm a beginner to 3D graphics in general and I'm trying to make a 3D game for the iPhone, and more specifically, to use textures that contain transparency. I am able to load a texture (an 8 bit .png file) into OpenGL and map it to a square (made from a triangle strip) but the transparent parts of the image are not transparent when I run the app in the simulator - they take on the background colour, whatever it is set to, but obscure images that are further away. I am unable to post a screenshot as I am a new user, so my apologies for that. I will try to upload and link it some other way.
Even more annoying is that when I load the image into Apple's GLSprite example code, it works exactly as I want it to. I have copied the code from GLSprite's setupView into my project and it still doesn't work properly.
I am using the blend function:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I was under the impression that this is correct for what I want to do.
Is there something very basic I am missing here? Any help would be much appreciated as I am submitting this as a coursework project in a few weeks and would very much like it to work.
Let me break this down:
First of all your transparent object is drawn.
At this point two things happen:
The pixels are drawn correctly to the back buffer
Depth values are written to the depth buffer. Note that depth is written across your entire object; transparency does not affect it.
You then draw other objects behind the transparent object.
But none of those objects' pixels will be drawn, because their depth values are farther from the camera than the values already stored, so they fail the depth test.
The solution to this problem is to draw your scene back to front (start with the things furthest away).
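One common arrangement is to draw opaque geometry first and transparent geometry last, sorted back to front, with depth writes off during the transparent pass. A sketch (the helper names are placeholders; the blend function is the one from the question):

// Opaque geometry first, with depth testing and depth writes on.
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
drawOpaqueObjects();                           // hypothetical helper

// Transparent geometry last, sorted far-to-near; depth writes are
// disabled so transparent surfaces don't occlude each other.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);   // premultiplied alpha, as in the question
glDepthMask(GL_FALSE);
drawTransparentBackToFront();                  // hypothetical helper
glDepthMask(GL_TRUE);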
Hope that helps.
Edit: I'm assuming you are using the depth buffer here. If this isn't correct I'll consider writing another answer.
I'm using OpenGL ES 1.1 to render a large view in an iPhone app. I have a "screenshot"/"save" function, which basically creates a new GL context offscreen, and then takes exactly the same geometry and renders it to the offscreen context. This produces the expected result.
Yet for reasons I don't understand, the amount of time (measured with CFAbsoluteTimeGetCurrent before and after) that the actual draw calls take when sending to the offscreen buffer is more than an order of magnitude longer than when drawing to the main framebuffer that backs an actual UIView. All of the GL state is the same for both, and the geometry list is the same, and the sequence of calls to draw is the same.
Note that there happens to be a LOT of geometry here -- the order of magnitude is clearly measurable and repeatable. Also note that I'm not timing the glReadPixels call, which is what I believe actually pulls data back from the GPU. This is just a measure of the time spent in e.g. glDrawArrays.
I've tried:
Rendering that geometry to the screen again just after doing the offscreen render: the screen draw still takes the same quick time.
Rendering the offscreen thing twice in a row: both passes show the same slow draw speed.
Is this an inherent limitation of offscreen buffers? Or might I be missing something fundamental here?
Thanks for your insight/explanation!
Your best bet is probably to sample both your offscreen rendering and window system rendering each running in a tight loop with the CPU Sampler in Instruments and compare the results to see what differences there are.
Also, could you be a bit more clear about what exactly you mean by “render the offscreen thing twice in a row?” You mentioned at the beginning of the question that you “create a new GL context offscreen”—do you mean a new framebuffer and renderbuffer, or a completely new EAGLContext? Depending on how many new resources and objects you’re creating in order to do your offscreen rendering, the driver may need to do a lot of work to set up these resources the first time you use them in a draw call. If you’re just screenshotting the same content you were putting onscreen, you shouldn’t even need to do any of this—it should be sufficient to call glReadPixels before -[EAGLContext presentRenderbuffer:], since the backbuffer contents will still be defined at that point.
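A sketch of that read-back-before-present approach (width and height are assumed to be your backbuffer dimensions):

// Draw the frame as usual, then read it back while the backbuffer
// contents are still defined, i.e. before presenting.
GLubyte *pixels = (GLubyte *)malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// [context presentRenderbuffer:GL_RENDERBUFFER_OES];
// ... build a UIImage from 'pixels', then free(pixels) ...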
Could offscreen rendering be forcing the GPU to flush all its normal state, then do your render, flush the offscreen context, and reload all the normal stuff back in from CPU memory? That could take a lot longer than any rendering that uses data and framebuffers which stay completely on the GPU.
I'm not an expert on the issue, but from what I understand, graphics accelerators are built for sending data off to the screen, so the normal path is Code ---vertices---> Accelerator ---rendered-image---> Screen. In your case you are flushing the framebuffer back into main memory, which might be hitting some kind of bandwidth bottleneck in the memory controller or something along those lines.
I am working on a 2D scrolling game for iPhone. I have a large image background, say 480×6000 pixels, of only a part is visible (exactly one screen’s worth, 480×320 pixels). What is the best way to get such a background on the screen?
Currently I have the background split into several textures (to get around the maximum texture size limit) and draw the whole background in each frame as a textured triangle strip. The scrolling is done by translating the modelview matrix. The scissor box is set to the window size, 480×320 pixels. This is not meant to be fast, I just wanted a working code before I get to optimizing.
I thought that maybe the OpenGL implementation would be smart enough to discard the invisible portion of the background, but according to some measuring code I wrote, the background takes 7 ms to draw on average and 84 ms at maximum. (This is measured in the simulator.) That is about half of the whole render loop, i.e. quite slow for me.
Drawing the background should be as easy as copying some 480×320 pixels from one part of the VRAM to another, or, in other words, blazing fast. What is the best way to get closer to such performance?
That's the fast way of doing it. Things you can do to improve performance:
Try different texture formats. Presumably the SDK docs have details on the preferred format, and presumably smaller is better.
Cull entirely offscreen tiles yourself (see the sketch at the end of this answer)
Split the image into smaller textures
I'm assuming you're drawing at a 1:1 zoom-level; is that the case?
Edit: Oops. Having read your question more carefully, I have to offer another piece of advice: Timings made on the simulator are worthless.
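As a sketch of the culling suggestion, for a vertically scrolling 480x320 view (the tile size and variable names are assumptions):

// Skip tiles that fall entirely outside the visible 320-pixel strip.
// 'scrollY' is the camera offset, 'tileHeight' the per-tile size.
for (int i = 0; i < tileCount; i++) {
    float tileTop    = i * tileHeight;
    float tileBottom = tileTop + tileHeight;
    if (tileBottom < scrollY || tileTop > scrollY + 320.0f)
        continue;          // entirely offscreen: don't issue a draw call
    drawTile(i);           // hypothetical per-tile draw
}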
The quick solution:
Create a grid of tile geometry (quads preferably) so that there is at least one row/column of off-screen tiles on all sides of the viewable area.
Map textures to all those tiles.
As soon as a tile is outside the viewable area, you can release its texture and bind a new one.
Move the tiles using a modulo of the tile width and tile height as the position (so that a tile repositions itself at its starting position once it has moved exactly one tile length). Also remember to remap the textures during that operation. This lets you keep a very small grid and very little texture memory loaded at any given time, which I guess is especially important in GL ES.
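A sketch of that modulo repositioning for a vertically scrolling grid (variable names are assumptions):

// The grid spans the viewport plus one spare row of tiles.
// Wrap each row's y position so a row that scrolls off one edge
// reappears at the other. fmodf (math.h) keeps the dividend's sign.
float gridHeight = rows * tileHeight;
for (int r = 0; r < rows; r++) {
    float y = fmodf(r * tileHeight - scrollY, gridHeight);
    if (y < 0.0f)
        y += gridHeight;
    // A row that has just wrapped needs the texture for its new
    // world position bound before it is drawn.
    drawRowAt(r, y);       // hypothetical draw helper
}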
If you have memory to spare and are still plagued by slow load speeds (although you shouldn't be, for that amount of textures), you could build a texture-streaming engine that preloads textures into faster memory (whatever that may be on your target device) when you reach a new area. The textures would then be mapped from that faster memory when needed. Just be sure you can preload without using up all memory, and remember to release textures dynamically when they are no longer needed.
Here is a link to a GL (not ES) tile engine. I haven't used it myself so I cannot vouch for its functionality but it might be able to help you: http://www.mesa3d.org/brianp/TR.html