glFramebufferTexture2D performance - iPhone

I'm doing heavy computation using the GPU, which involves a lot of render-to-texture operations. It's an iterative computation, so there's a lot of rendering to a texture, then rendering that texture to another texture, then rendering the second texture back to the first texture and so on, passing the texture through a shader each time.
My question is: is it better to have a separate FBO for each texture I want to render into, or should I rather have one FBO and bind the target texture using glFramebufferTexture2D each time I want to change render target?
My platform is OpenGL ES 2.0 on the iPhone.

On the iPhone implementation, it is inexpensive to change the attachment, assuming the old and new textures are the same dimensions/format/etc. Otherwise, the driver has to do some additional work to reconfigure the framebuffer.

AFAIK, better performance is achieved by using only one FBO, and changing the texture attachments as necessary.
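A minimal sketch of that single-FBO ping-pong loop, assuming two textures with identical dimensions/format already created, and a hypothetical drawFullscreenQuad() helper that issues the shader pass:

```c
#include <OpenGLES/ES2/gl.h>

/* Assumed elsewhere: two same-sized textures and a helper that draws
   a screen-filling quad through the current shader (hypothetical). */
extern GLuint tex[2];
extern void drawFullscreenQuad(void);

void runPasses(int numPasses) {
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    for (int i = 0; i < numPasses; i++) {
        /* Reattaching is cheap when old/new textures match in size/format. */
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex[(i + 1) % 2], 0);
        glBindTexture(GL_TEXTURE_2D, tex[i % 2]);  /* sample previous result */
        drawFullscreenQuad();
    }
    glDeleteFramebuffers(1, &fbo);
}
```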

The best way to find out is to benchmark both approaches on the target hardware.

Related

Is it faster to bind 2 textures to one FBO or to two?

I want to run several shaders one after the other (mostly image processing), the output of one being the input of the next. I wonder if there's a performance gain in using only one FBO bound to all the needed textures, or if it's the same to create one FBO for each texture?
In case it matters, the aimed platform is the iPhone, therefore with OpenGL ES 2.0.
Thanks
I am not familiar with OpenGL ES, but on the PC platform it is normally better to use only one FBO, and I don't see why this should be different on ES (however, I might be wrong). It is important, though, that all bound textures have the same size (e.g. the viewport size) for FBO completeness, otherwise it won't work.
Normally, you attach all textures to the one FBO initially and then, in each frame, just change the render-target channel in each pass, as sketched below. This saves you a lot of state changes caused by FBO binding.
Note that the maximum number of textures that can be attached is limited by GL_MAX_COLOR_ATTACHMENTS_EXT.
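A rough sketch of that setup in desktop-GL style (note that ES 2.0 only guarantees GL_COLOR_ATTACHMENT0, so on the iPhone you would reattach with glFramebufferTexture2D instead of switching draw buffers); runShaderPass() is a placeholder for your own draw code:

```c
#include <GL/gl.h>
#include <GL/glext.h>  /* FBO entry points on desktop GL */

extern void runShaderPass(void);  /* hypothetical draw helper */

/* One-time setup: attach both same-sized textures to the single FBO. */
void setupTargets(GLuint fbo, GLuint texA, GLuint texB) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, texA, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                           GL_TEXTURE_2D, texB, 0);
}

/* Per pass: flip only the render-target channel, no FBO rebind. */
void renderPass(GLenum target, GLuint sourceTex) {
    glDrawBuffer(target);                     /* pick the attachment to write */
    glBindTexture(GL_TEXTURE_2D, sourceTex);  /* sample the other texture */
    runShaderPass();
}
```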
You might also find this helpful: http://www.songho.ca/opengl/gl_fbo.html
Cheers,
Reinhold

Best way to render 2D animated sprites in OpenGL ES

I am writing a 2D game on Android, targeting phones that have at least OpenGL ES 1.1 support.
I am currently looking at creating my animated sprite class which is basically a quad that has a changing texture on it to provide animation.
I want to stick to OpenGL ES 1.1, so I am avoiding shaders, and was wondering how other people have approached the implementation of animated sprites.
My initial thoughts were to either:
1. Have a single vertex buffer object with one set of texture coordinates, then use lots of preloaded textures that would be swapped at runtime in the correct order.
2. Have just one sprite-sheet texture and modify the texture coordinates at runtime to display the correct subsection of the sprite sheet.
Is there a more clever or more efficient way to do this without shaders?
Thanks
Choose #2 if you have only the two options.
However, I recommend building and caching the quad vertex set for every sprite frame in a vertex buffer in the memory closest to the GPU. Alternatively, generate fresh quad vertices and submit them for each draw. This is a performance-versus-memory trade-off introduced by caching: think about how much memory the vertices for a single frame actually consume.
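For option #2, a minimal ES 1.1 sketch of the texture-coordinate swap, assuming a sheet of equally sized frames laid out in a grid (the 4x4 layout here is just an example):

```c
#include <OpenGLES/ES1/gl.h>
#include <string.h>

#define SHEET_COLS 4   /* assumed 4x4 grid of frames */
#define SHEET_ROWS 4

static GLfloat texCoords[8];

/* Point the quad's texture coordinates at the sub-rectangle for `frame`. */
void setFrame(int frame) {
    GLfloat w = 1.0f / SHEET_COLS, h = 1.0f / SHEET_ROWS;
    GLfloat u = (frame % SHEET_COLS) * w;
    GLfloat v = (frame / SHEET_COLS) * h;
    GLfloat tc[8] = { u, v + h,   u + w, v + h,   /* bottom-left, bottom-right */
                      u, v,       u + w, v };     /* top-left, top-right */
    memcpy(texCoords, tc, sizeof tc);
}

void drawSprite(void) {
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);  /* quad positions bound elsewhere */
}
```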
Changing GPU internal state is an expensive operation, and that of course includes swapping texture objects. Avoid it as much as possible.
This is the reason huge texture atlases are used in traditional game development.
Transferring resources (including vertices) to VRAM (the memory closest to the GPU) may be expensive because they need to be copied over a slower bus. The situation resembles a server-client setup: the GPU+VRAM is the server and the CPU+RAM is the client, connected through a PCI-bus "network". However, this can vary by system architecture and memory/bus model.

Real-time soft shadows without stencil buffers

I'm really curious how the following is done:
[screenshot] (source: kortham.net)
They seem to achieve real-time soft-ish shadows on the iPhone, which does not have a stencil buffer available. It seems to run pretty smoothly here: http://www.youtube.com/watch?v=u5OM6tPoxLU
Does anyone have an idea?
The stencil buffer allows hardware acceleration of shadow rendering, but isn't strictly needed for displaying shadow volumes. With a low count of bodies and light sources, software can emulate the behavior of the stencil buffer (but that will be very slow compared to a hardware-accelerated implementation).
Also, there are other ways to display shadows. The most frequently used is shadow mapping (a more in-depth approach can be found on GameDev.net), which doesn't require a stencil buffer. It is used for PS2 games as well as Wii games, because that hardware also lacks a stencil buffer.
And finally, under the circumstances of this particular game, the shadow algorithm could also be implemented as a simple ray-tracing system, because there is no need for floor detection and the shadows are basically calculated on simple 2D shapes (circles and squares). That might be the best approach for this particular case.
Most likely a "Shadow Mapping" variant. http://en.wikipedia.org/wiki/Shadow_mapping
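At a high level, shadow mapping takes two render passes. A very rough sketch, where every resource and helper name is a placeholder (the early iPhone GPUs lack depth textures, so the depth pass here is assumed to pack depth into a color texture):

```c
#include <OpenGLES/ES2/gl.h>

/* All of these are placeholders for your own scene and resources. */
extern GLuint shadowFBO, defaultFBO, shadowDepthTex;
extern GLuint depthProgram, shadowProgram;
extern GLfloat lightVP[16], cameraVP[16];
extern void drawScene(const GLfloat *viewProjection);

void renderWithShadows(int shadowSize, int screenW, int screenH) {
    /* Pass 1: render scene depth from the light's point of view into a
       texture (packed into color, since there is no depth-texture support). */
    glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
    glViewport(0, 0, shadowSize, shadowSize);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(depthProgram);
    drawScene(lightVP);

    /* Pass 2: render from the camera; the fragment shader transforms each
       fragment into light space and compares its depth against the stored
       value to decide whether it is in shadow. */
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFBO);
    glViewport(0, 0, screenW, screenH);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(shadowProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, shadowDepthTex);  /* the pass-1 result */
    drawScene(cameraVP);
}
```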

How to set/get a pixel on a texture in OpenGL ES on iPhone?

I have been trying to Google for what I've mentioned in the title, but somehow I can't find it. This should not be that hard, should it?
What I am looking for is a way to gain access to an OpenGL ES texture on iPhone, and a way to get/set pixel with it. What are the OpenGL ES functions I am looking for?
Before OpenGL ES is able to see your texture, you should have loaded it in memory already, generated a texture name (glGenTextures), and bound it (glBindTexture). Your texture data is just a big array in memory.
Therefore, should you wish to change a single texel, you can manipulate the data in memory and then upload it again. This approach is commonly used for procedural texture generation. There are many resources about it on the net, for instance: http://www.blumtnwerx.com/blog/2009/06/opengl-es-texture-mapping-for-iphone-oolong-powervr/
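For the "set" half, assuming an RGBA texture that already has storage, a single texel can be overwritten in place with glTexSubImage2D rather than re-uploading the whole image:

```c
#include <OpenGLES/ES2/gl.h>

/* Overwrite one texel of an existing RGBA texture at (x, y). */
void setTexel(GLuint tex, GLint x, GLint y,
              GLubyte r, GLubyte g, GLubyte b, GLubyte a) {
    GLubyte texel[4] = { r, g, b, a };
    glBindTexture(GL_TEXTURE_2D, tex);
    /* The format/type must match how the texture was created. */
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, 1, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, texel);
}
```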
While glReadPixels is available, there are very few situations where you'd need it in interactive applications (screen capture comes to mind). It absolutely destroys performance, and it still won't give you back the original textures; it returns a block of the framebuffer instead.
I have no idea what kind of effect you are looking for. However, if you are targeting a device that supports pixel shaders, perhaps a custom pixel shader can do what you want.
Of course, I am working under the assumption you didn't mean pixel as in screen coordinates.
I don't know about setting an individual pixel, but glReadPixels can read a block of pixels from the framebuffer (http://www.khronos.org/opengles/sdk/docs/man/glReadPixels.xml). Your trouble googling may be because texture pixels are usually shortened to 'texels'.
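For the "get" half on ES, which has no glGetTexImage, one workaround is to attach the texture to a temporary FBO and read it back with glReadPixels; a sketch, assuming ES 2.0:

```c
#include <OpenGLES/ES2/gl.h>

/* Read one RGBA texel back from a texture by way of a temporary FBO. */
void getTexel(GLuint tex, GLint x, GLint y, GLubyte out[4]) {
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, out);
    glDeleteFramebuffers(1, &fbo);
}
```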

Is it possible to draw multiple different textures in OpenGL for iPhone with a single draw call?

I have this game with some variety of textures. Whenever I draw a texture, I bind it and draw it.
Is it possible to draw all the elements of my game in just one draw call, using interleaved arrays? How can I do it?
Will the performance of my game increase by doing this?
I believe that you would benefit from using a texture atlas, one giant texture containing all of your smaller ones. For OpenGL ES on the iPhone, Apple recommends that you place all of the vertices that you can draw at one time in a Vertex Buffer Object (VBO), and that you interleave the vertex, normal, color, and texture information within that buffer (in that order). Note that the VBO itself doesn't give you a significant performance boost over a standard array on the original iPhones, but the 3GS has hardware VBO support.
Whether grouping data into a VBO or a plain array, I've seen significant performance improvements in my application when I reduced the number of draw calls.
Another area you might want to look at is reducing the size of your geometry. By going from GLfloat to GLshort for my vertex and normal coordinate data, I saw over a 30% improvement in OpenGL ES rendering speed on the iPhone 3G.
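A sketch of that layout under ES 1.1, with GLshort positions and normals and the interleaving order described above; the vertex data itself is assumed to come from your own loading code:

```c
#include <OpenGLES/ES1/gl.h>
#include <stddef.h>  /* offsetof */

typedef struct {
    GLshort  position[3];
    GLshort  normal[3];
    GLubyte  color[4];
    GLfloat  texCoord[2];  /* coordinates into the texture atlas */
} Vertex;

void drawBatch(const Vertex *vertices, GLsizei vertexCount) {
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex),
                 vertices, GL_STATIC_DRAW);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    /* With a VBO bound, the pointer arguments are byte offsets. */
    glVertexPointer(3, GL_SHORT, sizeof(Vertex),
                    (const GLvoid *)offsetof(Vertex, position));
    glNormalPointer(GL_SHORT, sizeof(Vertex),
                    (const GLvoid *)offsetof(Vertex, normal));
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Vertex),
                   (const GLvoid *)offsetof(Vertex, color));
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex),
                      (const GLvoid *)offsetof(Vertex, texCoord));
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
}
```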
TBH you are better off doing it with separate draw calls. Performance is unlikely to be noticeably affected.
There "may" be ways to do it by indexing into slices of a volume texture (or texture array), but you'd need the 3GS to do this, and it wouldn't work on the old iPhone. I also doubt you'd get any noticeable performance improvement by doing it.