I am writing a 2D game on Android and I am targeting phones that have a minimum of OpenGL ES 1.1 support.
I am currently looking at creating my animated sprite class, which is basically a quad that has a changing texture on it to provide animation.
I want to stick to OpenGL ES 1.1, so I am avoiding shaders, and I was wondering how other people have approached the implementation of animated sprites.
My thoughts initially were to either:
Have a single vertex buffer object with one texture coordinate set, then use lots of preloaded textures that would be swapped at runtime in the correct order.
Have just one texture sprite sheet and modify the texture coordinates at runtime to display the correct subsection of the sprite sheet.
Is there a more clever or more efficient way to do this without shaders?
Thanks
Choose #2 if those are your only two options.
However, I recommend building and caching the full set of quad vertices for each sprite frame in a vertex buffer in the memory closest to the GPU. Alternatively, just generate the quad vertices for each frame and specify them per draw. This is a trade-off between performance and memory: caching is faster, but think about how much memory the vertices for every frame will consume.
Changing GPU internal state is an expensive operation, and that includes swapping texture objects. Avoid it as much as possible.
This is the reason huge texture atlases are used in traditional game development.
Transferring resources (including vertices) to VRAM (the memory closest to the GPU) can be expensive because they need to be copied over a slower bus. It is similar to a server-client situation: the GPU+VRAM is the server and the CPU+RAM is the client, connected through a PCI-bus "network". However, this can vary with the system architecture and memory/bus model.
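To make that concrete, here is a minimal OpenGL ES 1.1 sketch of the sprite-sheet approach, computing the texture coordinates for one frame at draw time; the 4x4 atlas layout, the frame size, and the function name are made up for illustration.

    #include <GLES/gl.h>   /* OpenGL ES 1.1 */

    /* Hypothetical atlas layout: a 4x4 grid of 64x64 frames in a 256x256 sheet. */
    #define SHEET_SIZE      256.0f
    #define FRAME_SIZE       64.0f
    #define FRAMES_PER_ROW   4

    /* A unit quad; scale/translate it with the modelview matrix elsewhere. */
    static const GLfloat quadVerts[] = {
        0.0f, 0.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 1.0f,
    };

    /* Draw one frame of the atlas as a textured quad (triangle strip). */
    void drawSpriteFrame(GLuint atlasTex, int frame)
    {
        const float step = FRAME_SIZE / SHEET_SIZE;
        const float u0 = (float)(frame % FRAMES_PER_ROW) * step;
        const float v0 = (float)(frame / FRAMES_PER_ROW) * step;

        /* Only these texture coordinates change between frames; the exact
           row order depends on how the atlas image was uploaded. */
        const GLfloat texCoords[] = {
            u0,        v0 + step,
            u0 + step, v0 + step,
            u0,        v0,
            u0 + step, v0,
        };

        glBindTexture(GL_TEXTURE_2D, atlasTex);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, quadVerts);
        glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }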
I want to create an explosion particle system, but I'm not sure how I can do it. I was thinking of creating a fire particle system with the emitter shape being a sphere and then just increasing the sphere's radius, but I don't know how I can animate its size. Can anyone tell me how I can do that? Or does anyone have a better idea?
Emitter systems for particles set the initial particle directions and the rate they'll move at. That's generally how a visual representation of an explosion is created.
So rather than increasing the size of the emitter source to present an explosion, it is the dissemination of the particles in an outward direction that creates the appearance of an explosion.
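To make that idea concrete, here is a rough CPU-side sketch in plain C (not Scene Kit's API; the struct, constants, and function names are made up): particles are spawned at one point with random outward directions and simply integrated each frame, which is what produces the expanding look.

    #include <stdlib.h>
    #include <math.h>

    #define NUM_PARTICLES 256
    #define TWO_PI 6.2831853f

    typedef struct {
        float x, y, z;     /* position */
        float vx, vy, vz;  /* velocity: outward direction * speed */
        float life;        /* remaining lifetime in seconds */
    } Particle;

    static Particle particles[NUM_PARTICLES];

    static float frand(void) { return (float)rand() / (float)RAND_MAX; }

    /* Spawn every particle at the emitter origin with a random outward direction. */
    void spawnExplosion(float speed, float lifetime)
    {
        for (int i = 0; i < NUM_PARTICLES; ++i) {
            /* Crude uniform direction on a sphere: random z, random angle. */
            float theta = frand() * TWO_PI;
            float z     = frand() * 2.0f - 1.0f;
            float r     = sqrtf(1.0f - z * z);
            particles[i].x = particles[i].y = particles[i].z = 0.0f;
            particles[i].vx = r * cosf(theta) * speed;
            particles[i].vy = r * sinf(theta) * speed;
            particles[i].vz = z * speed;
            particles[i].life = lifetime;
        }
    }

    /* Move particles outward each frame; they fade/die as life runs out. */
    void updateExplosion(float dt)
    {
        for (int i = 0; i < NUM_PARTICLES; ++i) {
            if (particles[i].life <= 0.0f) continue;
            particles[i].x += particles[i].vx * dt;
            particles[i].y += particles[i].vy * dt;
            particles[i].z += particles[i].vz * dt;
            particles[i].life -= dt;
        }
    }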
You're not limited to one batch of particles, nor one type of particles, nor just one emitter. The best explosions are a highly complex layering of different particle types with different textures, coming from different emitters at differing rates, with differing rates of decay, spin rates, colour changes and falloff in both transparency and movement speed.
Making a truly great looking explosion is a real art form and will often take a good designer days to do with a GUI and constant real time playback, especially when trying to minimise the use of textures, quads, blends, fillrate and physics.
Here's a video from Unreal Engine that uses concepts and qualities similar to those available in Scene Kit to teach the terminology. It's not a 1:1 parallel with the Scene Kit particle engine, but it's probably the best combination of visuals and simple explanations to help you rapidly understand what is possible and how to do it with particles.
Caveat: Unreal Engine probably has the best real-time particle engine in the world at the moment, so it's a little more advanced than what's in Scene Kit.
But...the principles are essentially the same:
https://www.youtube.com/watch?v=OXK2Xbd7D9w
I'm doing heavy computation using the GPU, which involves a lot of render-to-texture operations. It's an iterative computation, so there's a lot of rendering to a texture, then rendering that texture to another texture, then rendering the second texture back to the first texture and so on, passing the texture through a shader each time.
My question is: is it better to have a separate FBO for each texture I want to render into, or should I rather have one FBO and bind the target texture using glFramebufferTexture2D each time I want to change render target?
My platform is OpenGL ES 2.0 on the iPhone.
On the iPhone implementation, it is inexpensive to change the attachment, assuming the old and new textures are the same dimensions/format/etc. Otherwise, the driver has to do some additional work to reconfigure the framebuffer.
AFAIK, better performance is achieved by using only one FBO, and changing the texture attachments as necessary.
The best way to know for sure is to benchmark.
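For what it's worth, a rough sketch of the single-FBO ping-pong pattern described above (OpenGL ES 2.0; the two textures, the shader program, and the drawFullScreenQuad helper are assumed to be created elsewhere and are not part of the original answer):

    #include <OpenGLES/ES2/gl.h>   /* iOS; use <GLES2/gl2.h> on other platforms */

    /* Assumed helper: draws a full-screen quad with the current shader bound. */
    extern void drawFullScreenQuad(void);

    /* One FBO, two same-sized textures; each pass reads one and writes the other. */
    void runIterations(GLuint fbo, GLuint texA, GLuint texB, int iterations)
    {
        GLuint src = texA;
        GLuint dst = texB;

        glBindFramebuffer(GL_FRAMEBUFFER, fbo);

        for (int i = 0; i < iterations; ++i) {
            /* Retarget the single FBO at the destination texture. */
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, dst, 0);

            /* Sample the source texture in this pass. */
            glBindTexture(GL_TEXTURE_2D, src);
            drawFullScreenQuad();

            /* Swap roles for the next iteration. */
            GLuint tmp = src;
            src = dst;
            dst = tmp;
        }
    }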
I have been trying to Google for what I've mentioned in the title, but somehow I couldn't find it. This should not be that hard, should it?
What I am looking for is a way to gain access to an OpenGL ES texture on iPhone, and a way to get/set pixel with it. What are the OpenGL ES functions I am looking for?
Before OpenGL ES is able to see your texture, you should have loaded it into memory already, generated a texture name (glGenTextures), and bound it (glBindTexture). Your texture data is just a big array in memory.
Therefore, should you wish to change a single texel, you can manipulate it in memory and then upload it again. This approach is usually taken for procedural texture generation. There are many resources available on the net about it, for instance: http://www.blumtnwerx.com/blog/2009/06/opengl-es-texture-mapping-for-iphone-oolong-powervr/
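As a concrete illustration of that update-and-re-upload step, here is a small sketch that overwrites a single texel of an existing RGBA texture with glTexSubImage2D (the texture name and coordinates are placeholders):

    #include <GLES/gl.h>

    /* Overwrite the texel at (x, y) of an RGBA texture with a new colour. */
    void setTexel(GLuint tex, GLint x, GLint y,
                  GLubyte r, GLubyte g, GLubyte b, GLubyte a)
    {
        GLubyte texel[4] = { r, g, b, a };

        glBindTexture(GL_TEXTURE_2D, tex);
        /* Upload a 1x1 region; the rest of the texture is left untouched. */
        glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, 1, 1,
                        GL_RGBA, GL_UNSIGNED_BYTE, texel);
    }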
While glReadPixels is available, there are very few situations where you'd need it in interactive applications (screen capture comes to mind). It absolutely destroys performance. And it still won't give you back the original textures; instead it returns a block of the framebuffer.
I have no idea what kind of effect you are looking for. However, if you are targeting a device that supports pixel shaders, perhaps a custom pixel shader can do what you want.
Of course, I am working under the assumption you didn't mean pixel as in screen coordinates.
I don't know about setting an individual pixel, but glReadPixels can read a block of pixels from the framebuffer (http://www.khronos.org/opengles/sdk/docs/man/glReadPixels.xml). Your trouble googling may be because texture pixels are usually referred to as 'texels'.
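For completeness, reading a block of pixels back from the framebuffer looks roughly like this (slow, as noted above; the width and height parameters are placeholders):

    #include <GLES/gl.h>
    #include <stdlib.h>

    /* Read the whole framebuffer back into client memory; the caller frees it. */
    GLubyte *readBackFramebuffer(GLint width, GLint height)
    {
        GLubyte *pixels = (GLubyte *)malloc((size_t)width * (size_t)height * 4);
        /* GL_RGBA / GL_UNSIGNED_BYTE is the combination OpenGL ES guarantees. */
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        return pixels;
    }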
I've seen a lot of bandying about over what's better, Quartz or OpenGL ES, for 2D gaming. Setting aside libraries like Cocos2D, I'm curious whether anyone can point to resources that teach using OpenGL ES as a 2D platform. I mean, are we really saying that learning 3D programming is worth a slight speed increase... or can it be learned from a 2D perspective?
GL is likely to give you better performance, with less CPU usage, battery drain, and so on. 2D drawing with GL is just like 3D drawing with GL; you just don't change the Z coordinate.
That being said, it's easier to write 2D drawing code with Quartz, so you have to decide the trade-off.
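As a rough sketch of what 2D-with-GL looks like in practice (fixed-function OpenGL ES 1.1; the screen dimensions are placeholders), you set up an orthographic projection so one GL unit maps to one pixel and then simply never vary Z:

    #include <GLES/gl.h>

    /* Set up a pixel-aligned 2D projection; after this, draw quads at Z = 0. */
    void setup2DProjection(int screenWidth, int screenHeight)
    {
        glViewport(0, 0, screenWidth, screenHeight);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        /* Left, right, bottom, top, near, far: a flat, screen-sized view volume. */
        glOrthof(0.0f, (GLfloat)screenWidth, 0.0f, (GLfloat)screenHeight,
                 -1.0f, 1.0f);

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }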
Cribbed from a similar answer I provided here:
You probably mean Core Animation when you say Quartz. Quartz handles static 2-D drawing within views or layers. On the iPhone, all Quartz drawing for display is done to a Core Animation layer, either directly or through a layer-backed view. Each time this drawing is performed, the layer is sent to the GPU to be cached. This re-caching is an expensive operation, so attempting to animate something by redrawing it each frame using Quartz results in terrible performance.
However, if you can split your graphics into sprites whose content doesn't change frequently, you can achieve very good performance using Core Animation. Each one of those sprites would be hosted in a Core Animation CALayer or UIKit UIView, and then animated about the screen. Because the layers are cached on the GPU, basically as textures, they can be moved around very smoothly. I've been able to move 50 translucent layers simultaneously at 60 FPS (100 at 30 FPS) on the original iPhone (not 3G S).
You can even do some rudimentary 3-D layout and animation using Core Animation, as I show in this sample application. However, you are limited to working with flat, rectangular structures (the layers).
If you need to do true 3-D work, or want to squeeze the last bit of performance out of the device, you'll want to look at OpenGL ES. However, OpenGL ES is nowhere near as easy to work with as Core Animation, so my recommendation has been to try Core Animation first and switch to OpenGL ES only if you can't do what you want. I've used both in my applications, and I greatly prefer working with Core Animation.
I have this game with a variety of textures. Whenever I draw a texture, I bind it and draw it.
Is it possible to draw all the elements of my game in just one draw call, with interleaved arrays? How can I do it?
Will the performance of my game increase by doing this?
I believe that you would benefit from using a texture atlas, one giant texture containing all of your smaller ones. For OpenGL ES on the iPhone, Apple recommends that you place all of the vertices that you can draw at one time in a Vertex Buffer Object (VBO), and that you interleave the vertex, normal, color, and texture information within that buffer (in that order). Note that the VBO itself doesn't give you a significant performance boost over a standard array on the original iPhones, but the 3GS has hardware VBO support.
Whether by grouping data into a VBO or into a plain array, I've seen significant performance improvements in my application when I reduced the number of draw calls.
Another area that you might want to look at is in reducing the size of your geometry. By going from a GLfloat to GLshort for my vertex and normal coordinate data, I saw an over 30% improvement in OpenGL ES rendering speed on the iPhone 3G.
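To illustrate the interleaved layout described above, here is a rough fixed-function GL ES 1.1 sketch (the Vertex struct and function are illustrative, not from any particular engine): one struct per vertex, a single VBO, and strided pointers into it.

    #include <GLES/gl.h>
    #include <stddef.h>   /* offsetof */

    /* Interleaved per-vertex data: position, normal, color, texcoord. */
    typedef struct {
        GLfloat pos[3];
        GLfloat normal[3];
        GLubyte color[4];
        GLfloat texcoord[2];
    } Vertex;

    /* Bind an interleaved VBO, point the fixed-function arrays into it, and draw. */
    void drawInterleavedVBO(GLuint vbo, GLsizei vertexCount)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_NORMAL_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);

        /* The stride is sizeof(Vertex); the pointers are byte offsets into the VBO. */
        glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (void *)offsetof(Vertex, pos));
        glNormalPointer(GL_FLOAT, sizeof(Vertex), (void *)offsetof(Vertex, normal));
        glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Vertex), (void *)offsetof(Vertex, color));
        glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (void *)offsetof(Vertex, texcoord));

        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    }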
TBH you are better off doing it with separate draw calls. Performance is unlikely to be noticeably affected.
There "may" be ways to do it by indexing into slices of a volume texture (or texture array) but you'd need the 3GS to do this and it wouldn't work on the old iPhone. I also doubt you'd get any noticeable performance improvement by doing it.