Positioning elements in 2D space with OpenGL ES - iPhone

In my spare time I like to play around with game development on the iPhone with OpenGL ES. I'm throwing together a small 2D side-scroller demo for fun. I'm relatively new to OpenGL, and I wanted to get some more experienced developers' input on this.
So here is my question: does it make sense to specify the vertices of each 2D element in model space, then translate each element to its final position in view space each time a frame is drawn?
For example, say I have a set of blocks (squares) that make up the ground in my side-scroller. Each square is defined as:
const GLfloat squareVertices[] = {
    -1.0,  1.0, -6.0,   // Top left
    -1.0, -1.0, -6.0,   // Bottom left
     1.0, -1.0, -6.0,   // Bottom right
     1.0,  1.0, -6.0    // Top right
};
Say I have 10 of these squares that I need to draw together as the ground for the next frame. Should I do something like this, for each square visible in the current scene?
glPushMatrix();
{
    glTranslatef(currentSquareX, currentSquareY, 0.0);
    glVertexPointer(3, GL_FLOAT, 0, squareVertices);
    glEnableClientState(GL_VERTEX_ARRAY);
    // Do the drawing
}
glPopMatrix();
It seems to me that doing this for every 2D element in the scene, on every frame, gets a bit heavy, and I imagine the people who use OpenGL much more than I do have a better way of doing it.
That said, I'm expecting to hear that I should profile the code and see where the bottlenecks are. To those people I say: I haven't written any of this code yet; I'm simply in the process of wrapping my mind around it so that when I do write it, it goes more smoothly.
On the subject of profiling and optimization, I'm really not trying to prematurely optimize here; I'm just trying to understand how one would set up a 2D scene and render it. Like I said, I'm relatively new to OpenGL and I'm just trying to get a feel for how things are done. If anyone has suggestions on a better way to do this, I'd love to hear your thoughts.
Please keep in mind that I'm not interested in 3D, just 2D for now. Thanks!

You are concerned about the overhead of transforming each model (in this case a square) from model coordinates to world coordinates when you have a lot of models. Baking static models into world coordinates seems like an obvious optimization.
If you build your squares' vertices in world coordinates, then of course it is going to be faster: each square avoids the cost of three calls (glPushMatrix, glPopMatrix, and glTranslatef), since there is no need to translate from model to world coordinates at render time. I have no idea how much faster this will be; I suspect it won't be a huge optimization, and you lose the modularity of keeping the squares in model coordinates. What if in the future you decide you want these squares to be movable? That will be a lot harder if you keep their vertices in world coordinates.
In short, it's a tradeoff:
World Coordinates
- More memory: each square needs its own set of vertices.
- Less computation: no need to call glPushMatrix, glPopMatrix, or glTranslatef for each square at render time.
- Less flexible: dynamically moving these squares is unsupported (or at least more complicated).

Model Coordinates
- Less memory: the squares can share the same vertex data.
- More computation: each square requires three extra calls at render time.
- More flexible: squares can easily be moved by changing the glTranslatef call.
I guess the only way to know the right decision is to build it and profile. I know you said you haven't written this yet, but I suspect that whether your squares are in model or world coordinates won't make much of a difference; and if it does, I can't imagine an architecture in which it would be hard to switch your squares from model to world coordinates or vice versa.
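For illustration, here is a minimal sketch of the world-coordinate route (OpenGL ES 1.1; NUM_SQUARES, buildGround, squareX, and squareY are hypothetical names, not anything from your code): bake every square into a single world-space vertex array once at load time, then draw the whole ground with one glDrawArrays call instead of a push/translate/draw/pop per square.

    // Two triangles per square so all squares can live in one flat array.
    #define NUM_SQUARES       10
    #define VERTS_PER_SQUARE  6
    static GLfloat groundVertices[NUM_SQUARES * VERTS_PER_SQUARE * 3];

    // Called once (e.g. at level-load time) with each square's world position.
    void buildGround(const GLfloat *squareX, const GLfloat *squareY)
    {
        GLfloat *v = groundVertices;
        for (int i = 0; i < NUM_SQUARES; i++) {
            GLfloat x = squareX[i], y = squareY[i];
            // Triangle 1: top-left, bottom-left, bottom-right
            *v++ = x - 1.0f; *v++ = y + 1.0f; *v++ = -6.0f;
            *v++ = x - 1.0f; *v++ = y - 1.0f; *v++ = -6.0f;
            *v++ = x + 1.0f; *v++ = y - 1.0f; *v++ = -6.0f;
            // Triangle 2: top-left, bottom-right, top-right
            *v++ = x - 1.0f; *v++ = y + 1.0f; *v++ = -6.0f;
            *v++ = x + 1.0f; *v++ = y - 1.0f; *v++ = -6.0f;
            *v++ = x + 1.0f; *v++ = y + 1.0f; *v++ = -6.0f;
        }
    }

    // Called every frame: the entire ground is one draw call.
    void drawGround(void)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, groundVertices);
        glDrawArrays(GL_TRIANGLES, 0, NUM_SQUARES * VERTS_PER_SQUARE);
    }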
Good luck to you and your adventures in iPhone game development!

If you are only drawing screen-aligned quads, it might be easier to use the OES draw texture extension. You can then use a single texture to hold all of your game "sprites". First specify the crop rectangle by setting the GL_TEXTURE_CROP_RECT_OES texture parameter; this is the boundary of the sprite within the larger texture. To render, call glDrawTexiOES, passing in the desired position and size in viewport coordinates.
int rect[4] = {0, 0, 16, 16};
glBindTexture(GL_TEXTURE_2D, sprites);
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, rect);
glDrawTexiOES(x, y, z, width, height);
This extension isn't available on all devices, but it works great on the iPhone.
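Since the extension is optional, it is worth checking for it at startup before relying on it. A minimal sketch of such a check (the hasDrawTexOES name is just for illustration; it must be called with a current GL context):

    #include <string.h>

    // Quick check: returns non-zero if GL_OES_draw_texture is advertised.
    static int hasDrawTexOES(void)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        return ext != NULL && strstr(ext, "GL_OES_draw_texture") != NULL;
    }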

You might also consider using a static image and just scrolling that instead of drawing each individual block of the floor, and translating its position, etc.
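A minimal sketch of that idea, under a few assumptions (a single repeating power-of-two ground texture created with GL_REPEAT wrap mode, an orthographic projection covering roughly [-1, 1], and placeholder names groundTexture and scrollX): draw one quad for the ground and slide its texture coordinates by the camera offset each frame.

    // Draw a single quad for the ground and scroll the texture across it.
    void drawScrollingGround(GLuint groundTexture, GLfloat scrollX)
    {
        const GLfloat quad[8] = {          // triangle strip: BL, BR, TL, TR
            -1.0f, -1.0f,
             1.0f, -1.0f,
            -1.0f,  0.0f,
             1.0f,  0.0f,
        };
        const GLfloat texcoords[8] = {     // shifted horizontally by scrollX
            scrollX,        0.0f,
            scrollX + 1.0f, 0.0f,
            scrollX,        1.0f,
            scrollX + 1.0f, 1.0f,
        };

        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, groundTexture);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, quad);
        glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }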

Related

Why does merging geometries improve rendering speed?

In my web application I only need to add static objects to my scene. It ran slowly, so I started searching and found that merging geometries and merging vertices were the solution. When I implemented it, it indeed worked a lot better. All the articles said that the reason for this improvement is the decrease in the number of WebGL calls. As I am not very familiar with things like OpenGL and WebGL (I use Three.js to avoid their complexity), I would like to know why exactly this reduces the WebGL calls.
Because you send one large object instead of many little ones, the overhead is reduced, so I understand that loading one big mesh into the scene is faster than loading many small ones.
BUT I do not understand why merging geometries also has a positive influence on the rendering itself. I would also like to know the difference between merging geometries and merging vertices.
Thanks in advance!
three.js is a framework that helps you work with the WebGL API.
What a "mesh" is to three.js, to webgl, it's a series of low level calls that set up state and issue calls to the GPU.
Let's take a sphere for example. With three.js you would create it with a few lines:
var sphereGeometry = new THREE.SphereGeometry(10);
var sphereMaterial = new THREE.MeshBasicMaterial({color:'red'});
var sphereMesh = new THREE.Mesh( sphereGeometry, sphereMaterial);
myScene.add( sphereMesh );
You have your renderer.render() call, and poof, a sphere appears on screen.
A lot of stuff happens under the hood though.
The first line creates the sphere "geometry": the CPU does a bunch of math and logic to describe a sphere with points and triangles. Points are vectors, three floats grouped together; triangles are structures that group these points by indices (groups of integers).
Somewhere there is a loop that calculates the vectors based on trigonometry (sin, cos), and another that weaves the resulting array of vectors into triangles (take every N, N + M, N + 2M, create a triangle, and so on).
Now these numbers exist in JavaScript land: just a bunch of floats and ints, grouped together in a specific way to describe shapes such as cubes, spheres, and aliens.
You need a way to draw this construct on a screen - a two dimensional array of pixels.
WebGL does not actually know much about 3D. It knows how to manage memory on the GPU and how to compute things in parallel (or rather, it gives you the tools to do so). It knows how to do the mathematical operations that are crucial for 3D graphics, but the same math could be used to mine bitcoins without ever drawing anything.
In order for WebGL to draw something on screen, it first needs the data placed into the appropriate buffers and it needs the shader programs. It needs to be set up for that specific call (will there be blending (transparency in three.js land), depth testing, stencil testing, and so on). Then it needs to know what it is actually drawing (you provide strides, attribute sizes, etc. so it knows where a "mesh" actually sits in memory), how it is drawing it (triangle strips, fans, points...), and what to draw it with: which shaders it will apply to the data you provided.
So, you need a way to 'teach' WebGL to do 3d.
I think the best way to get familiar with this concept is to look at this tutorial, re-reading it if necessary, because it explains what happens to pretty much every 3D object ever drawn in perspective.
To sum up the tutorial:
A perspective camera is basically two 4x4 matrices: a perspective matrix that puts things into perspective, and a view matrix that moves the entire world into camera space. Every camera you make consists of these two matrices.
Every object exists in its own object space. A TRS matrix (the world matrix, in three.js terms) is used to transform the object into world space.
So this stuff, a concept such as the projection matrix, is what teaches WebGL how to draw in perspective.
Three.js abstracts this further and gives you things like "field of view" and "aspect ratio" instead of left/right, top/bottom.
Three.js also abstracts the transformation matrices (view matrix on the camera, and world matrices on every object) because it allows you to set "position" and "rotation" and computes the matrix based on this under the hood.
Since every mesh has to be processed by the vertex shader and the pixel shader in order to appear on the screen, every mesh needs to have all this information available.
When a draw call is issued for a specific mesh, that mesh will have the same perspective matrix and view matrix as any other object being rendered with the same camera. They will each have their own world matrices: the numbers that move them around your scene.
This is transformation alone, happening in the vertex shader. These results are then rasterized, and go to the pixel shader for processing.
Let's consider two materials: black plastic and red plastic. They will have the same shader, perhaps one you wrote using THREE.ShaderMaterial, or maybe one from three's library. It's the same shader, but it has one uniform value exposed: color. This allows you to have many instances of a plastic material (green, blue, pink), but it means that each of these requires a separate draw call.
WebGL will have to issue specific calls to change that uniform from red to black, and only then is it ready to draw stuff using that "material".
So now imagine a particle system, displaying a thousand cubes each with a unique color. You have to issue a thousand draw calls to draw them all, if you treat them as separate meshes and change colors via a uniform.
If on the other hand, you assign vertex colors to each cube, you don't rely on the uniform any more, but on an attribute. Now if you merge all the cubes together, you can issue a single draw call, processing all the cubes with the same shader.
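The same contrast can be sketched in fixed-function OpenGL ES 1.1 terms (the language used elsewhere on this page) rather than three.js; think of glColor4f as the "uniform" and the color array as the per-vertex attribute. The function names, the 36-vertices-per-cube layout, and the data arrays are all hypothetical:

    // (a) One draw call per cube, with a per-object color (analogous to a uniform).
    void drawCubesSeparately(int cubeCount, const GLfloat *const *cubeVerts,
                             const GLfloat *cubeColors /* one RGBA per cube */)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        for (int i = 0; i < cubeCount; i++) {
            const GLfloat *c = &cubeColors[i * 4];
            glColor4f(c[0], c[1], c[2], c[3]);            // state change per cube
            glVertexPointer(3, GL_FLOAT, 0, cubeVerts[i]);
            glDrawArrays(GL_TRIANGLES, 0, 36);            // one call per cube
        }
    }

    // (b) All cubes merged: colors become a per-vertex attribute array,
    //     so a single draw call covers everything.
    void drawCubesMerged(int cubeCount, const GLfloat *mergedVerts,
                         const GLfloat *mergedColors /* one RGBA per vertex */)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, mergedVerts);     // all cubes, pre-transformed
        glColorPointer(4, GL_FLOAT, 0, mergedColors);
        glDrawArrays(GL_TRIANGLES, 0, cubeCount * 36);    // one call for all cubes
    }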
You can see why this is more efficient simply by taking a glance at webglrenderer from three.js, and all the stuff it has to do in order to translate your 3d calls to webgl. Better done once than a thousand times.
Back to those three lines: sphereMaterial can take a color argument. If you look at the source, this translates to a uniform vec3 in the shader. However, you can also achieve the same thing by rendering vertex colors and assigning the color you want beforehand.
sphereMesh will wrap that computed geometry into an object that three's webglrenderer understands, which in turn sets up webgl accordingly.

OpenGL: optimizing render of quad particles

I'm rendering particles in a 2D game. Each particle is a quad (2 triangles). How can I make the drawing as fast as possible? All the particles share the same texture; I'm only changing their positions.
Now I'm using a call to glVertexPointer and glDrawArrays for each particle. So I'm sending 4 vertices each time to the GPU.
Is there any other approach that could be faster?
I'm using OpenGL ES 1.1 (iPhone)
Thanks!
Every draw call you make (glDrawArrays) is expensive. Doing this once per particle is DEFINITELY way too often. All your particles can be drawn with a single draw call; just set up a big array of all the triangle verts and another big array with the texture coords, and call glVertexPointer/glDrawArrays once-- that's the power of glVertexPointer: arbitrary geometry of the same type in one call. :)
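Here is a minimal sketch of that batching idea (OpenGL ES 1.1; MAX_PARTICLES, drawParticles, and the position arrays are hypothetical names): rebuild one big vertex/texcoord array from the particle positions each frame, then issue a single glDrawArrays.

    #include <string.h>   // memcpy

    #define MAX_PARTICLES 512

    // 6 vertices per particle (2 triangles), 2 floats per vertex.
    static GLfloat particleVerts[MAX_PARTICLES * 12];
    static GLfloat particleTexCoords[MAX_PARTICLES * 12];

    // count must be <= MAX_PARTICLES.
    void drawParticles(int count, const GLfloat *posX, const GLfloat *posY, GLfloat halfSize)
    {
        for (int i = 0; i < count; i++) {
            GLfloat x = posX[i], y = posY[i];
            GLfloat quad[12] = {
                x - halfSize, y - halfSize,   x + halfSize, y - halfSize,   x + halfSize, y + halfSize,
                x - halfSize, y - halfSize,   x + halfSize, y + halfSize,   x - halfSize, y + halfSize,
            };
            // The whole texture is mapped onto every particle.
            GLfloat uv[12] = { 0,0, 1,0, 1,1,  0,0, 1,1, 0,1 };
            memcpy(&particleVerts[i * 12],     quad, sizeof quad);
            memcpy(&particleTexCoords[i * 12], uv,   sizeof uv);
        }

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, particleVerts);
        glTexCoordPointer(2, GL_FLOAT, 0, particleTexCoords);
        glDrawArrays(GL_TRIANGLES, 0, count * 6);   // one draw call for all particles
    }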
For what you're doing, you should also look into point sprites (GL_POINTS), which also function as tiny textured quads. They're 2D only, so you can't map your texture into the Z axis, but if your particles are just 2D quads of the same texture over and over, point sprites will likely do exactly what you want.
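A rough sketch of the point-sprite route, assuming the device exposes the OES point sprite functionality (the function and parameter names here are placeholders; maximum point size is implementation-dependent):

    // Draw each particle as a single point; the texture is stretched across it.
    void drawPointSprites(int count, const GLfloat *positions /* x,y pairs */, GLfloat size)
    {
        glEnable(GL_POINT_SPRITE_OES);
        // Generate texture coordinates across each point automatically.
        glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
        glPointSize(size);                         // on-screen size in pixels

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, positions);
        glDrawArrays(GL_POINTS, 0, count);         // one vertex per particle

        glDisable(GL_POINT_SPRITE_OES);
    }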
There's a way to do that all in one draw routine. I THINK it's by adding an extra vertex after each quad, which is the same as the previous vertex, but I could be wrong.
EDIT: After looking into it a bit, it looks like you need two in between: essentially one after and one before. It does add up to quite a few extra vertices, but I know from experience that it makes a HUGE positive difference on the iPhone to do it all in one draw operation (we were drawing text from a texture, so essentially the same thing).
EDIT2: Also note, I'm referring to using GL_TRIANGLE_STRIP; if you were using GL_TRIANGLES instead, you wouldn't need the extra vertices... except then you'd be adding about the same number anyway, since each quad repeats 2 vertices for its second triangle.
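To make the strip idea concrete, here is a minimal sketch (OpenGL ES 1.1; appendQuadToStrip is a hypothetical helper, and the caller must size dst generously enough) of appending quads to one GL_TRIANGLE_STRIP with the two degenerate "bridge" vertices described above:

    // Appends one quad as 4 strip vertices (BL, BR, TL, TR) centred at (x, y)
    // with half-size s. n is the number of 2D vertices already in dst;
    // returns the new count. Later: glDrawArrays(GL_TRIANGLE_STRIP, 0, n).
    static int appendQuadToStrip(GLfloat *dst, int n, GLfloat x, GLfloat y, GLfloat s)
    {
        if (n > 0) {
            // Degenerate bridge: repeat the previous quad's last vertex...
            dst[n * 2]     = dst[(n - 1) * 2];
            dst[n * 2 + 1] = dst[(n - 1) * 2 + 1];
            n++;
            // ...then this quad's first vertex (bottom-left).
            dst[n * 2] = x - s;  dst[n * 2 + 1] = y - s;  n++;
        }
        dst[n * 2] = x - s;  dst[n * 2 + 1] = y - s;  n++;   // bottom-left
        dst[n * 2] = x + s;  dst[n * 2 + 1] = y - s;  n++;   // bottom-right
        dst[n * 2] = x - s;  dst[n * 2 + 1] = y + s;  n++;   // top-left
        dst[n * 2] = x + s;  dst[n * 2 + 1] = y + s;  n++;   // top-right
        return n;
    }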

Rotating an object in OpenGL ES for iPhone [translate to origin --> rotate --> translate back is not working]

I recently started working with OpenGL ES for the iPhone, and I am having a bit of trouble with it. I want to be able to rotate an object with your fingers. My problem is that I have my object placed at (0, 0, -3), and I would like to rotate it about its center. I know that I need to translate back to the origin, rotate, and then bring it back to its original place. I think I am facing a problem because I am using a matrix to keep track (?) of all of my rotations/translations/scaling etc., and I think it may be combining the operations in a way that order is not even considered (so the two translations would cancel each other). I just started learning OpenGL a day ago and am a complete newbie, so my assumption may be wrong.
Here is the part of drawView that I am having trouble with:
GLfloat matrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, matrix);
glLoadIdentity();
glTranslatef(0, 0, 3); // bring to origin
glRotatef(self.angle, self.dy, self.dx, 0); // rotate
glTranslatef(0, 0, -3); // put it back in place
glMultMatrixf(matrix); // save the transformations performed
Help would be much appreciated, thank you!
Retrieving the modelview matrix and then multiplying it back on after your rotation seems fishy, but it depends on what your other transforms are and what coordinate space things are supposed to be in. Your comment on the glMultMatrix line doesn't correspond with what you're doing.
Normally you would just do the translate+rotate+translate as the most local actions on the object, just before you render it. Also note that this only applies if your object is at (0, 0, -3) in object space. If it's at that location in world space, then the rotation will already rotate the object around its own center, provided you have previously made a series of transform calls (translate, rotate, etc.) to move the object to its intended position in the world.
Transform order is one of the tricky parts of learning OpenGL. As a general rule of thumb, your operations start with the outer-most and progress to the inner-most. So a typical simple series of transforms would be: the inverse of the camera transform to move the world to match up with the camera, then the object's translation to move it to its world-space position, then the rotation to set its intended orientation. The glPushMatrix/glPopMatrix stack functions let you save and undo part of that series of transforms so that groups of objects can share portions of the chain.
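A minimal sketch of that ordering (all names here - cameraX/Y/Z, objectX/Y/Z, angle, the axis variables, and drawObject - are placeholders, not your code), with the translate and rotate applied as the most local operations right before drawing:

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // Outer-most first: the inverse camera transform (here just a simple offset).
    glTranslatef(-cameraX, -cameraY, -cameraZ);

    // Then the object's world-space position...
    glTranslatef(objectX, objectY, objectZ);      // e.g. (0, 0, -3)

    // ...then the rotation about its own center, as the most local operation.
    glRotatef(angle, axisX, axisY, axisZ);

    // Draw the object, whose vertices are defined around its own origin.
    drawObject();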

Why is my object disappearing after using gluLookAt for OpenGL ES 2.0?

I'm putting together a simple game for the iPhone, and am trying to implement the effect of moving the camera around the GLView.
I'm drawing about a hundred objects using glDrawArrays with vertex and color pointers. After this, I want to move the camera to the right by 1 unit. This is the snippet of code I have in my drawView method. I change the matrix mode to the projection stack, and then change back to modelview mode after the projection manipulation is complete (I may be getting this wrong; I am a newbie to OpenGL).
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glTranslatef(1.0, 0.0, 0.0);
glMatrixMode(GL_MODELVIEW);
In any case, the result is definitely not expected. What happens is that I see my objects very briefly (for perhaps a frame), and then they disappear. The same thing happens if I take away the glTranslatef in the block above.
What am I doing wrong?
Thanks in advance!
[Before and after screenshots omitted.]
One of the confusing aspects of OpenGL for beginners is the distinction between what happens in the projection matrix versus what happens on the modelview matrix (and why would you even need both of them?)
The projection matrix is in charge of transforming the coordinates of your vertices to points in a 2D coordinate system (it splats the world onto your virtual film - the viewport). The projection matrix only specifies the behavior of your camera (for example: should it be a wide-angle lens, or telephoto, or a completely orthogonal one, like architectural oblique drawings?).
The modelview matrix, on the other hand, is in charge of specifying where in 3D space your vertices go. So for example, to specify where a character's arm is with respect to the character's body, or where this character is with respect to the world, you will want to change the modelview matrix. It is important to notice, in particular, that changes in the position and orientation of the camera belong on the modelview matrix (it is the "view" part of it).
The reason it gets confusing is that at the end of the day, the vertices you give OpenGL are multiplied by the modelview matrix and then by the projection matrix. That is, given a modelview matrix M, a projection matrix P, and a vertex v, the final coordinate of the vertex is given by PMv. This means that some transformations seem to work regardless of which matrix you use. You should be careful about this - when you get to fancier OpenGL techniques, you will run into situations in which using the correct matrices makes a difference.
Until you get used to the distinction between the two matrices, here is a good rule of thumb: only use glOrtho or glFrustum (or gluPerspective and friends) on the projection matrix. All other calls (glTranslate, glScale, glRotate, and so on) belong to the class of things you should be doing to the modelview matrix.
You probably want to read http://www.opengl.org/resources/faq/technical/viewing.htm (especially section 8.080).
Use something along the following lines:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// setup your projection matrix here, e.g. with glFrustum or gluPerspective
glFrustum(...)
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(...) // with your gluLookAt parameters, of course
glTranslatef(1.0, 0.0, 0.0);
// your drawing code here
Just for clarification: gluLookAt creates a matrix which is meant to be multiplied with the modelview matrix. If you use the order of function calls suggested above, things should work as expected.
I hope that helps.

Glitch when moving camera in OpenGL

I am writing a tile-based game engine for the iPhone and it works in general apart from the following glitch. Basically, the camera will always keep the player in the centre of the screen, and it moves to follow the player correctly and draws everything correctly when stationary. However whilst the player is moving, the tiles of the surface the player is walking on glitch as shown:
http://img41.imageshack.us/img41/9422/movingy.png
Compared to the stationary (correct):
http://img689.imageshack.us/img689/7026/still.png
Does anyone have any idea why this could be?
Thanks for the responses so far. Floating point error was my first thought also and I tried slightly increasing the size of the tiles but this did not help. Changing glClearColor to red still leaves black gaps so maybe it isn't floating point error. Since the tiles in general will use different textures, I don't know if vertex arrays can be used (I always thought that the same texture had to be applied to everything in the array, correct me if I'm wrong), and I don't think VBO is available in OpenGL ES. Setting the filtering to nearest neighbour improved things but the glitch still happens every ten frames or so, and the pixelly result means that this solution is not viable anyway.
The main difference between what I'm doing now and what I've done in the past is that this time I am moving the camera rather than the stationary objects in the world (i.e. the tiles, the player is still being moved). The code I'm using to move the camera is:
void Camera::CentreAtPoint( GLfloat x, GLfloat y )
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(x - size.x / 2.0f, x + size.x / 2.0f, y + size.y / 2.0f, y - size.y / 2.0f, 0.01f, 5.0f);
    glMatrixMode(GL_MODELVIEW);
}
Is there a problem with doing things this way and if so is there a solution?
My first guess would be floating-point rounding error. This could cause the coordinates of your quads to be just a little bit off, resulting in the gaps you see. To verify this, you might want to try changing glClearColor() and seeing whether the gaps change colour with it.
One solution to this would be to make the tiles slightly larger. Only a very small increment is needed (like 0.0001f) to cover over this kind of error.
Alternatively, you could try using a Vertex Array or a VBO to store your ground mesh (ensuring that adjoining squares share vertices). I'd expect this to fix the issue, but I'm not 100% sure - and it should also render faster.
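Setting aside the question of per-tile textures, here is a rough sketch (OpenGL ES 1.1; COLS, ROWS, TILE, and the function names are hypothetical) of what a shared-vertex ground mesh could look like: adjoining tiles reference the exact same vertices through an index buffer, so rounding cannot open a gap between them.

    #define COLS 16
    #define ROWS 2
    #define TILE 32.0f

    static GLfloat  gridVerts[(COLS + 1) * (ROWS + 1) * 2];
    static GLushort gridIndices[COLS * ROWS * 6];

    static void buildGrid(void)
    {
        // One shared vertex per grid corner.
        for (int y = 0; y <= ROWS; y++)
            for (int x = 0; x <= COLS; x++) {
                int i = (y * (COLS + 1) + x) * 2;
                gridVerts[i]     = x * TILE;
                gridVerts[i + 1] = y * TILE;
            }

        // Two triangles per cell, referencing the shared corners by index.
        int n = 0;
        for (int y = 0; y < ROWS; y++)
            for (int x = 0; x < COLS; x++) {
                GLushort i0 = (GLushort)(y * (COLS + 1) + x);  // this row
                GLushort i1 = i0 + 1;
                GLushort i2 = i0 + (COLS + 1);                 // next row
                GLushort i3 = i2 + 1;
                gridIndices[n++] = i0; gridIndices[n++] = i2; gridIndices[n++] = i3;
                gridIndices[n++] = i0; gridIndices[n++] = i3; gridIndices[n++] = i1;
            }
    }

    static void drawGrid(void)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, gridVerts);
        glDrawElements(GL_TRIANGLES, COLS * ROWS * 6, GL_UNSIGNED_SHORT, gridIndices);
    }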
Sometimes this is caused by filtering issues on border texels. You could try using GL_CLAMP_TO_EDGE in your texture parameters.
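For reference, the clamp-to-edge setup is just two texture parameters set while the tile texture is bound (tilesTexture is a placeholder name):

    glBindTexture(GL_TEXTURE_2D, tilesTexture);
    // Stop the sampler from wrapping around to the opposite edge of the texture.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);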
It's due to filtering. Use clamp-to-edge AND leave a 1- or 2-pixel border around each tile in the texture; this is why there is a border parameter in the glTexImage call (change it from 0 to 1).