iPhone OpenGL: Editing the MODELVIEW_MATRIX

I'm working on a spinning 3D cube (glFrustumf setup) that multiplies the current matrix by the previous one so that the cube continues to spin. See below:
/* save current rotation state */
GLfloat matrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, matrix);
/* re-center cube, apply new rotation */
glLoadIdentity();
glRotatef(self.angle, self.dy,self.dx,0);
glMultMatrixf(matrix);
The problem is I need to step back from this (as if I had a camera).
I tried editing the matrix directly, and that kind of works, but it picks up noise and the cube jumps around.
matrix[14] = -5.0;
matrix[13] = 0;
matrix[12] = 0;
Is there a way to edit the current modelview matrix so that I can set the position of the cube without multiplying it by another matrix?

You should not mistreat OpenGL as a scene graph or a math library. That means: don't read back the matrix and multiply it back in arbitrarily. Instead, rebuild the whole matrix stack anew every time you do a render pass. I should also point out that in OpenGL 4 all the matrix functions have been removed; instead you're expected to supply the matrices as uniforms.
EDIT due to comment by @Burf2000:
Your typical render handler will look something like this (pseudocode):
draw_object(object):
    # bind VBO or plain vertex arrays (you might even use immediate mode, but that's deprecated)
    # draw the stuff using glDrawArrays, or better yet glDrawElements

render_subobject(object, parent_transform):
    modelview = parent_transform * object.transform
    if OPENGL3_CORE:
        glUniformMatrix4fv(object.shader.uniform_location[modelview], 1, 0, modelview)
    else:
        glLoadMatrixf(modelview)
    draw_object(object)
    for subobject in object.subobjects:
        render_subobject(subobject, modelview)

render(deltaT, window, scene):
    if use_physics:
        PhysicsSimulateTimeStep(deltaT, scene.objects)
    else:
        for o in scene.objects:
            o.animate(deltaT)

    glClearColor(...)
    glClearDepth(...)
    glViewport(0, 0, window.width, window.height)
    glDisable(GL_SCISSOR_TEST)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT)

    # ...
    # now _some_ objects' render pass - others may precede or follow,
    # e.g. for creating reflection cubemaps or water refractions
    glViewport(0, 0, window.width, window.height)
    glEnable(GL_DEPTH_TEST)
    glDepthMask(1)
    glColorMask(1, 1, 1, 1)
    if not OPENGL3_CORE:
        glMatrixMode(GL_PROJECTION)
        glLoadMatrixf(scene.projection.matrix)
    for object in scene.objects:
        bind_shader(object.shader)
        if OPENGL3_CORE:
            glUniformMatrix4fv(scene.projection_uniform, 1, 0, scene.projection.matrix)
        render_subobject(object, identity_matrix)   # render the object and its children

    # other render passes
    glViewport(window.HUD.x, window.HUD.y, window.HUD.width, window.HUD.height)
    glScissor(window.HUD.x, window.HUD.y, window.HUD.width, window.HUD.height)
    glEnable(GL_STENCIL_TEST)
    glDisable(GL_DEPTH_TEST)
    if not OPENGL3_CORE:
        glMatrixMode(GL_PROJECTION)
        glLoadMatrixf(scene.HUD.projection.matrix)
    render_HUD(...)
and so on. I hope you get the general idea. OpenGL is neither a scene graph, nor a matrix manipulation library.
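To make that advice concrete for the cube in the question, here is a minimal sketch (my own, not from the original answer): keep the accumulated spin in an application-side matrix, never read it back from GL, and rebuild the modelview from scratch each frame, so the -5 camera step-back is never baked into the saved rotation. The mat4_mul helper and the deltaRotation parameter are assumptions for illustration.

#include <string.h> /* memcpy */

/* the cube's accumulated rotation, owned by the application */
static GLfloat spin[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };

/* tiny column-major 4x4 multiply (r = a * b), standing in for a math library */
static void mat4_mul(GLfloat r[16], const GLfloat a[16], const GLfloat b[16])
{
    for (int c = 0; c < 4; ++c)
        for (int i = 0; i < 4; ++i)
            r[c*4+i] = a[0*4+i]*b[c*4+0] + a[1*4+i]*b[c*4+1]
                     + a[2*4+i]*b[c*4+2] + a[3*4+i]*b[c*4+3];
}

/* deltaRotation: this frame's incremental rotation, built on the CPU */
void drawFrame(const GLfloat deltaRotation[16])
{
    GLfloat tmp[16];
    mat4_mul(tmp, deltaRotation, spin); /* accumulate rotation only */
    memcpy(spin, tmp, sizeof(tmp));

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -5.0f);    /* "camera" step-back, applied fresh */
    glMultMatrixf(spin);                /* then the cube's accumulated spin */
    /* ... draw the cube ... */
}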

Related

Why do vertices of a quad and the localScale of the quad not match in Unity?

I have a Quad whose vertices I'm printing like this:
public MeshFilter quadMeshFilter;
foreach (var vertex in quadMeshFilter.mesh.vertices)
{
    print(vertex);
}
And, the localScale like this:
public GameObject quad;
print(quad.transform.localScale);
Vertices are like this:
(-0.5, -0.5), (0.5, 0.5), (0.5, -0.5), (-0.5, 0.5)
while the localScale is:
(6.4, 4.8, 0)
How is this possible, given that the vertices form a square but the localScale does not?
How do I use the vertices to draw another square in front of the quad?
I am not well versed in the matters of meshes, but I believe I know the answer to this question.
Answer
How is this possible
Scale is a per-axis multiplier applied to your mesh's size in the given directions (x, y, z). A scale of 1 is the default size, a scale of 2 is double size, and so on. Your localSpace coordinates are then multiplied by this scale.
Say a localSpace coordinate is (1, 0, 2) and the scale is (3, 1, 3); the result is (1*3, 0*1, 2*3) = (3, 0, 6).
How do I use vertices and draw another square in front of the quad?
I'd personally just create the object and then move it via Unity's Transform system, since that lets you change the worldSpace coordinates using transform.position = new Vector3(1f, 5.4f, 3f);
You might be able to move each individual vertex in WorldSpace too, but I haven't tried that before.
I imagine it is related to this bit of code though: vertices[i] = transform.TransformPoint(vertices[i]); since TransformPoint converts from localSpace to worldSpace based on the Transform using it.
Elaboration
Why do I get lots of 0's and 5's in my space coordinates despite them having other positions in the world?
If I print the vertices of a quad using the script below, I get results that have 3 coordinates and can therefore be multiplied by localScale. (The original post shows a screenshot of the printed result here.)
Script:
Mesh mesh = GetComponent<MeshFilter>().mesh;
var vertices = mesh.vertices;

Debug.Log("Local Space.");
foreach (var v in vertices)
{
    Debug.Log(v);
}
This first result is what we call local space.
There is also something called worldSpace; you can convert between localSpace and worldSpace.
localSpace is the object's mesh vertices relative to the object itself, while worldSpace is the object's location in the Unity scene.
You then get the results shown in the original post's screenshots: first the localSpace coordinates as in the first image, then the worldSpace coordinates converted from those local coordinates.
Here is the script I used to print the above result.
Mesh mesh = GetComponent<MeshFilter>().mesh;
var vertices = mesh.vertices;

Debug.Log("Local Space.");
foreach (var v in vertices)
{
    Debug.Log(v);
}

Debug.Log("World Space");
for (int i = 0; i < vertices.Length; ++i)
{
    vertices[i] = transform.TransformPoint(vertices[i]);
    Debug.Log(vertices[i]);
}
Good luck with your future learning process.
This becomes clear once you understand how Transform hierarchies work. It's a tree in which each parent's transform matrix (position, rotation, scale; rotation is actually a quaternion, but let's assume it's Euler angles for simplicity so the math works) is applied to its children. By extension of this philosophy, the mesh itself can be seen as a child of the GameObject that holds it.
If you imagine a 1x1 quad (which is what your vertices describe) parented to a GameObject whose Transform has a non-one localScale, all the vertices in the mesh get multiplied by that value, and all the positions are added.
Now if you parent that object to another GameObject and give it another localScale, this will again multiply all the vertex positions by that scale, translate them by its position, and so on.
So, to answer your question: the global positions of your vertices differ from those stored in the source mesh because they are fed through a chain of Transforms all the way up to the scene root. For example, a vertex at local (0.5, 0.5, 0) under a Transform with localScale (6.4, 4.8, 1) ends up at (3.2, 2.4, 0) before any parent transforms are applied.
This is both the reason we only have localScale and not a world-space scale, and the reason why non-uniform scaling of objects that contain rotated children can sometimes give very strange results. Transforms stack.

Drawing multiple moving objects

I'm currently working on an iOS game where, long story short, I need to draw a lot of moving cubes - an approximate maximum of 200 per frame. Emphasis on moving because yes, I have indeed Googled away for hours on this topic and have yet to find a suitable solution for fast, efficient drawing of multiple objects whose positions update every frame.
Through my endless research on this subject, most sources mention VBOs; however, I'm not sure they suit my case, where the position of every object changes every frame.
I'm using OpenGL ES 1 at the moment - I have working code, and on generation 3/4+ devices (the ones which support OpenGL ES 2, ha) it runs at a reasonable framerate - however, when testing on my (old, yes) 2nd-gen iPod touch, it is very sluggish and essentially unplayable.
My code comprises a static array of vertices for a 'cube' and an array containing the position and colour of every cube. My game logic loop updates the position of every cube in the array. At the moment I'm looping through the cube array, calling glTranslatef and glDrawArrays for every cube. From what I've read this is very inefficient, but I'm completely confused as to how to optimise it. Any ideas?
(Maybe I shouldn't be aiming at old, discontinued iOS devices, but given my belief that my code is incredibly inefficient, I figure fixing it will help my future endeavours regardless.)
For such simple objects I would make one big VBO of, say, 200 objects * NrVerticesPerCube, and put all the data in interleaved: Vertex, Normal, UV, Vertex, Normal, UV, etc.
I do something similar in a keyframe animation of a beaver in my game. I start with something like this:
glGenBuffers(1, &vboObjects[vboGroupBeaver]);
glBindBuffer(GL_ARRAY_BUFFER, vboObjects[vboGroupBeaver]);
glBufferData(GL_ARRAY_BUFFER, beaverVerts*8*sizeof(GLfloat), 0, GL_STATIC_DRAW);
GLubyte *vbo_buffer = (GLubyte *)glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES);

NSString *path = [[NSBundle mainBundle] pathForResource:@"beaver01" ofType:@"bin"];
NSFileHandle *model = [NSFileHandle fileHandleForReadingAtPath:path];

float vertice[8];
int counter = 0;

while (read([model fileDescriptor], vertice, 8*sizeof(float))) {
    memcpy(vbo_buffer, vertice, 8*sizeof(GLfloat));
    vbo_buffer += 8*sizeof(GLfloat);
    counter++;
}

glUnmapBufferOES(GL_ARRAY_BUFFER);
glBindBuffer(GL_ARRAY_BUFFER, 0);
This creates my VBO buffer with the correct size (in this case 8 * sizeof(GLfloat), which is 3 verts, 3 normals and 2 UVs) and copies the first keyframe into the buffer. You could do the same with your initial object positions, or just leave that and compute it later...
Then in each frame I interpolate between 2 keyframes for each vertex of my beaver and make just one draw call. This is very fast for the 4029 vertices my beaver has, and runs at 60 FPS on my iPhone 3G.
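For reference, the per-vertex interpolation described above boils down to a simple lerp over the interleaved floats. A hedged sketch with assumed names (keyA, keyB, out, t), not the actual game code:

/* blend keyframes A and B into the mapped VBO; t runs from 0 to 1,
   and each vertex is 8 interleaved floats (3 pos, 3 normal, 2 UV) */
for (int i = 0; i < vertCount * 8; ++i)
    out[i] = keyA[i] + (keyB[i] - keyA[i]) * t;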
For you, doing only glTranslates, it would be even simpler: just add the x, y, z values to each vertex of each cube.
You would update it like this:
glBindBuffer(GL_ARRAY_BUFFER, vboObjects[vboGroupBeaver]);
GLubyte *vbo_buffer = (GLubyte *)glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES);
Bind the VBO and map it to a buffer variable.
Calculate the values you want in a temporary variable.
memcpy(vbo_buffer, currentVert, 6*sizeof(GLfloat)); // copy only what you need
vbo_buffer += 8*sizeof(GLfloat);
Copy it, advance the buffer pointer to the next object, and repeat until all objects are updated...
You could also do all the updates in a separate array and copy the whole array at once, but then you would be copying extra info that usually doesn't change (normals and UV). Or you could use non-interleaved data and copy that...
glUnmapBufferOES(GL_ARRAY_BUFFER);
Unmap the VBO buffer
glVertexPointer(3, GL_FLOAT, 8*sizeof(GLfloat), (GLvoid*)((char*)NULL));
glNormalPointer(GL_FLOAT, 8*sizeof(GLfloat), (GLvoid*)((char*)NULL+3*sizeof(GLfloat)));
glTexCoordPointer(2, GL_FLOAT,8*sizeof(GLfloat), (GLvoid*)((char*)NULL+6*sizeof(GLfloat)));
glDrawArrays(GL_TRIANGLES, 0, beaverVerts);
Set up your draw call, and draw it all...
If you need to rotate objects and not just glTranslate them, you will need to add some matrix multiplications along the way...
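For what it's worth, a hand-rolled version of that might look like the following sketch (mine, with assumed variable names): rotate each vertex about Z by angle a, then add the translation, before copying the result into the mapped buffer.

float c = cosf(a), s = sinf(a); /* a = rotation angle in radians */
float rx = c * vx - s * vy;     /* rotate vertex (vx, vy, vz) about Z */
float ry = s * vx + c * vy;
dst[0] = rx + x;                /* then translate, as in the glTranslate case */
dst[1] = ry + y;
dst[2] = vz + z;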
EDIT:
OK, doing a glTranslate by hand is actually very easy (rotation etc. is a bit trickier).
I'm using an interleaved plane drawn with GL_TRIANGLE_STRIP instead of triangles, but the principle is the same.
float beltInter[] = {
    0.0,   0.0, 0.0,   // vertices[0]
    0.0,   0.0, 1.0,   // Normals [0]
    6.0,   1.0,        // UV [0]

    0.0,   480, 0.0,   // vertices[1]
    0.0,   0.0, 1.0,   // Normals [1]
    0.0,   1.0,        // UV [1]

    320.0, 0.0, 0.0,   // vertices[2]
    0.0,   0.0, 1.0,   // Normals [2]
    6.0,   0.0,        // UV [2]

    320.0, 480, 0.0,   // vertices[3]
    0.0,   0.0, 1.0,   // Normals [3]
    0.0,   0.0         // UV [3]
};
So this is an interleaved vertex array: you've got the vertex, then normals, then UV. If you're not using textures, substitute color for the UV.
The easiest way is to have an array with all the objects inside (easy if all your objects are the same size) and do the position updates after drawing (instead of in the middle of the OpenGL frame). Better still, make a separate thread and create 2 VBOs, updating one while drawing from the other, something like this:
Thread 1: OpenGL draws from VBO0.
Thread 2: game update; update positions in the internal array and copy them to VBO1, then set a flag saying VBO1 is ready (so thread 1 only switches to drawing from VBO1 when all the updates are done).
Thread 1: OpenGL draws from VBO1.
Thread 2: game update; same thing, but updating VBO0.
Continue with the same logic.
This is called double buffering, and you use it to guarantee stability; without it, your game logic will sometimes be updating the VBO while the graphics card needs it, and the graphics card will have to wait, resulting in lower FPS.
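In outline, the flip might look like this conceptual sketch (hypothetical names; it also glosses over the fact that a GL context belongs to one thread, so in practice the update thread usually fills a plain memory buffer that the GL thread then copies into the VBO):

GLuint vbos[2];
volatile int readyIndex = 0;         /* which VBO holds the finished frame */

void renderFrame(void) {             /* thread 1 (OpenGL) */
    glBindBuffer(GL_ARRAY_BUFFER, vbos[readyIndex]);
    /* ... set pointers, glDrawArrays ... */
}

void gameUpdate(void) {              /* thread 2 (game logic) */
    int writeIndex = 1 - readyIndex; /* fill the buffer not being drawn */
    /* ... write the new positions into vbos[writeIndex] ... */
    readyIndex = writeIndex;         /* publish once fully written */
}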
Anyway, back on topic.
To make the equivalent of glTranslatef(10, 20, 30), just do:
int maxVertices = 4;
float x = 10;
float y = 20;
float z = 30;
int counter = 0;
int stride = 8; // stride is 8 = 3 x vertex + 3 x normal + 2 x UV; change to 3 x color or 4 x color depending on your needs

glBindBuffer(GL_ARRAY_BUFFER, vboObjects[myObjects]);
GLubyte *vbo_buffer = (GLubyte *)glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES);

while (counter < (maxVertices*stride)) {
    beltInter[counter]   += x; // just add the corresponding value to each vertex
    beltInter[counter+1] += y;
    beltInter[counter+2] += z;
    memcpy(vbo_buffer, &beltInter[counter], 3*sizeof(GLfloat)); // again, only copy what you need - here just the vertices; if you're updating all the data, you can do a single memcpy at the end instead of these partial ones
    vbo_buffer += stride*sizeof(GLfloat); // advance the buffer
    counter += stride; // only the vertex is updated here, but you could update everything
}

glUnmapBufferOES(GL_ARRAY_BUFFER);

glVertexPointer(3, GL_FLOAT, stride*sizeof(GLfloat), (GLvoid*)((char*)NULL));
glNormalPointer(GL_FLOAT, stride*sizeof(GLfloat), (GLvoid*)((char*)NULL+3*sizeof(GLfloat)));
glTexCoordPointer(2, GL_FLOAT, stride*sizeof(GLfloat), (GLvoid*)((char*)NULL+6*sizeof(GLfloat)));
glDrawArrays(GL_TRIANGLE_STRIP, 0, maxVertices);
Of course the update values don't have to be the same for all the objects; in fact, using a base array like this you can update all the info as you go along and just have a routine copy it to the VBO when needed.
All this was written from memory on the fly, so there may be dragons :-)
Hope that helps.
You could optimise quite a bit by sticking all the coords for all your cubes in a single array, and drawing it with a single glDrawArrays call.
I'm not sure why you'd want to split the cubes into separate arrays, except maybe because it makes your data structure more elegant/object-oriented, but that's the first place I'd look at making an improvement.
Dump the cube coordinates into one big array, and give each cube object an index into that array so that you can still keep your update logic fairly compartmentalised (as in, cube n owns the coordinates in the range x to y and is responsible for updating them; but when you actually draw, you run glDrawArrays directly on the centralised coord array instead of looping through the cube objects and rendering them individually). A sketch of this layout follows.
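Here is a minimal sketch of that layout (my own illustration, with assumed names, using GL ES 1.1 client-side arrays rather than a VBO): each cube owns a fixed slice of one shared position array, updates only its slice, and everything is drawn with a single call.

#define NUM_CUBES      200
#define VERTS_PER_CUBE 36                  /* 12 triangles x 3 vertices */

typedef struct {
    float x, y, z;                         /* centre, updated by game logic */
    int   firstVertex;                     /* start of this cube's slice */
} Cube;

static GLfloat positions[NUM_CUBES * VERTS_PER_CUBE * 3]; /* the one big array */
static GLfloat unitCube[VERTS_PER_CUBE * 3];              /* template cube at the origin */

static void updateCube(const Cube *c)      /* cube n updates only its own range */
{
    GLfloat *dst = &positions[c->firstVertex * 3];
    for (int v = 0; v < VERTS_PER_CUBE; ++v) {
        dst[v*3 + 0] = unitCube[v*3 + 0] + c->x;  /* translate on the CPU */
        dst[v*3 + 1] = unitCube[v*3 + 1] + c->y;
        dst[v*3 + 2] = unitCube[v*3 + 2] + c->z;
    }
}

static void drawAllCubes(void)             /* one draw call for everything */
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, positions);
    glDrawArrays(GL_TRIANGLES, 0, NUM_CUBES * VERTS_PER_CUBE);
}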

Rendering a huge number of points

I would like to render an image in OpenGL ES, pixel by pixel. I want to do it this way because I plan to move those pixels over time to create various effects.
For performance and design reasons I decided to use only every other pixel in both directions (thus reducing their number to one quarter).
I have only a very basic understanding of OpenGL, so I am probably missing some key knowledge to achieve this.
What is the best way to achieve this? Do I really have to render it pixel by pixel? Or can I somehow create a texture out of an array of pixels?
I would like to make this work on as many devices as possible (so an OpenGL ES 1.1 solution is preferred, but if that is not possible, or would be really inconvenient or slow, 2.0 can be used).
I tried to do this using a VBO, with mixed results. I am not sure I have done it properly, because there are some problems (and it is very slow). Here is my code:
Initialization:
glGenBuffers(1, &pointsVBO);
glBindBuffer(GL_ARRAY_BUFFER, pointsVBO);
glBufferData(GL_ARRAY_BUFFER, 160*240*sizeof(Vertex), 0, GL_DYNAMIC_DRAW);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
Rendering:
- (void)renderPoints:(ImagePixel**)imagePixels {
    int count = 160 * 240;
    for (int i = 0; i < count; ++i) {
        vertices[i].v[0] = imagePixels[i]->positionX;
        vertices[i].v[1] = imagePixels[i]->positionY;
        vertices[i].color[0] = imagePixels[i]->red;
        vertices[i].color[1] = imagePixels[i]->green;
        vertices[i].color[2] = imagePixels[i]->blue;
        vertices[i].color[3] = 1;
    }

    glVertexPointer(2, GL_FLOAT, sizeof(Vertex), vertices[0].v);
    glColorPointer(4, GL_FLOAT, sizeof(Vertex), vertices[0].color);

    // update vbo
    GLvoid *vbo_buffer = glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES);
    memcpy(vbo_buffer, vertices, count * sizeof(Vertex));
    glUnmapBufferOES(GL_ARRAY_BUFFER);

    // draw contents of vbo
    glDrawArrays(GL_POINTS, 0, count);
}
Vertex struct:
typedef struct Vertex
{
    float v[2];
    float color[4];
} Vertex;
The imagePixels array is filled with data from the image.
When I do this, I get most of my image, but I am missing the last few rows and I can see some random pixels around the screen. Is it possible that I've hit some limit in glDrawArrays so that only a portion of the data is used?
The second problem is that the points in the second half of the columns aren't aligned properly. I guess this is caused by rounding errors in float math when computing the position during rendering itself (the supplied coordinates are all multiples of 2). Is there any way to prevent this? I need all the points aligned in a proper grid.
I will provide a screenshot as soon as I get my iPhone back.
If you really want to manipulate every pixel, you should probably just use a single full-screen quad in OpenGL and update its texture each frame.
You can create a texture out of a bitmap array of pixels using glTexImage2D.
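A hedged sketch of that approach (sizes and names are mine; ES 1.1 requires power-of-two texture dimensions): allocate the texture once with glTexImage2D, then push the CPU-side pixel array up each frame with glTexSubImage2D and draw it on one textured full-screen quad.

#define TEX_W 256                          /* assumed power-of-two size */
#define TEX_H 256

static GLuint tex;
static GLubyte pixels[TEX_W * TEX_H * 4];  /* RGBA, written by your effect */

void initTexture(void) {
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, TEX_W, TEX_H, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);    /* allocate once */
}

void uploadFrame(void) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, TEX_W, TEX_H,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels); /* update, no realloc */
    /* ... then draw one textured full-screen quad (two triangles) ... */
}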

Box2D + OpenGL ES 2.0 (Xcode, iPhone): Efficiently updating body vertex locations

If you've used Box2D, you're familiar with setting the b2Body->userData property, which is then used to update the rendered shape's x, y coordinates:
// Physics update:
int32 velocityIterations = 6;
int32 positionIterations = 2;
world->Step(timeDelta, velocityIterations, positionIterations);

for (b2Body* b = world->GetBodyList(); b; b = b->GetNext()) {
    id obj = (id)b->GetUserData();
    if (obj != NULL) {
        Shape *s = (Shape *)obj;
        CGPoint newCentre = CGPointMake(b->GetPosition().x * PTM_RATIO,
                                        b->GetPosition().y * PTM_RATIO);
        [s drawAt:newCentre];
    }
}
Conceptually, the rendering procedure for this data flow is straightforward: create a Shape class to represent each body, add a drawAt method that uses OpenGL ES 2.0 to render all desired vertices (with attributes) for the body, and have that class redraw upon its coordinates being changed.
However, I want to optimise rendering by sticking the vertices for all bodies into one single vertex buffer. I therefore intend to modify the Shape class to include offsets into the buffer for where its vertices are located; it can then simply update these in the buffer upon drawAt.
I realise the management of these offsets could get messy for additions/removals to the buffer; however, these are infrequent in my application, and I want to squeeze every drop of rendering performance out of the pipeline that I can.
My questions are:
1. Does OpenGL ES 2.0 allow me to specify a set of vertices as a 'shape' which can then be translated with one matrix operation, or must I explicitly update the vertices one by one in this way?
2. Must I create a unique updatable object to assign to each b2Body->userData, or is there some more efficient practice?
3. Is it better to update graphical objects on their own timeline, reading positions from the associated b2Body instances, or to update graphical objects immediately in the b2Body update loop listed above?
I'd appreciate some advice, thank you.
1. If you set the vertices in the buffer to the positions you gave when creating fixtures for the bodies, you should be able to do this:
b2Vec2 pos = m_body->GetPosition();
float angle = m_body->GetAngle();
glPushMatrix();
glTranslatef( pos.x, pos.y, 0 );
glRotatef( angle * RADTODEG, 0, 0, 1 );//OpenGL uses degrees here
glDrawArrays(GL_WHATEVER, startIndex, vertexCount);
glPopMatrix();
See http://www.iforce2d.net/b2dtut/drawing-objects for a more in-depth discussion.
2. Typically you will only want to draw what's visible. The above method could be done only after you've determined that this particular object should be drawn at all. The code in your question above may needlessly update positions for objects that never get drawn.
3. Uh... I think I just answered that. I think it's better to have a list of your game objects, each of them holding a pointer to the b2Body* that represents it in the physics world, rather than referring to Box2D's body list.
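On question 1 under ES 2.0 specifically: yes - instead of rewriting vertices, you can build a per-body model matrix on the CPU and upload it as a uniform, so the shared vertex buffer never changes. A sketch (the u_model uniform name and the surrounding setup are my assumptions, not from the answer):

void drawBody(b2Body *body, GLuint program, int startIndex, int vertexCount)
{
    b2Vec2 pos   = body->GetPosition();
    float  angle = body->GetAngle();           /* radians */
    float  c = cosf(angle), s = sinf(angle);

    /* column-major 4x4: rotation about Z plus translation */
    GLfloat model[16] = {
        c,     s,     0, 0,
       -s,     c,     0, 0,
        0,     0,     1, 0,
        pos.x, pos.y, 0, 1,
    };

    glUniformMatrix4fv(glGetUniformLocation(program, "u_model"),
                       1, GL_FALSE, model);
    glDrawArrays(GL_TRIANGLES, startIndex, vertexCount);
}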

How to move incrementally in a 3D world using glRotatef() and glTranslatef()

I have some 3D models that I render with OpenGL in a 3D space, and I'm experiencing some headaches moving the 'character' (that is, the camera) with rotations and translations inside this world.
I receive the input (i.e. the coordinates to move to / the degrees to turn) from some external event (imagine user input, or data from a GPS + compass device), and each event is either a rotation OR a translation.
I've written this method to manage these events:
- (void)moveThePlayerPositionTranslatingLat:(double)translatedLat Long:(double)translatedLong andRotating:(double)degrees {
    [super startDrawingFrame];
    if (degrees != 0)
    {
        glRotatef(degrees, 0, 0, 1);
    }
    if (translatedLat != 0)
    {
        glTranslatef(translatedLat, -translatedLong, 0);
    }
    [self redrawView];
}
Then in redrawView I'm actually drawing the scene and my models. It is something like:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
NSInteger nModels = [models count];
for (NSInteger i = 0; i < nModels; i++)
{
    MD2Object *mdobj = [models objectAtIndex:i];
    glPushMatrix();
    double deltas[2];
    deltas[0] = currentCoords[0] - mdobj.modelPosition[0];
    deltas[1] = currentCoords[1] - mdobj.modelPosition[1];
    glTranslatef(deltas[0], -deltas[1], 0);
    [mdobj setupForRenderGL];
    [mdobj renderGL];
    [mdobj cleanupAfterRenderGL];
    glPopMatrix();
}
[super drawView];
The problem appears when translation and rotation events are called one after the other. For example, when I rotate incrementally for some iterations (still around the origin), then translate, and finally rotate again, the last rotation does not occur around the current (translated) position but around the old one (the old origin). I'm well aware that this happens when the order of transformations is inverted, but I believed that after drawing, the new center of the world was given by the translated system.
What am I missing? How can I fix this? (any reference to OpenGL will be appreciated too)
I would recommend not doing cumulative transformations in the event handler, but instead internally storing the current values for your transformation and then transforming only once - though I don't know if this is the behaviour that you want.
Pseudocode:
someEvent(lat, long, deg)
{
    currentLat += lat;
    currentLong += long;
    currentDeg += deg;
}

redraw()
{
    glClear()
    glRotatef(currentDeg, 0, 0, 1);
    glTranslatef(currentLat, -currentLong, 0);
    ... // draw stuff
}
It sounds like you have a couple of things that are happening here:
The first is that you need to be aware that rotations occur about the origin. So when you translate and then rotate, you are not rotating about what you think is the origin, but about the new origin, which is T⁻¹·0 (the origin transformed by the inverse of your translation).
Second, you're making things quite a bit harder than you really need to. What you might want to consider instead is using gluLookAt. You essentially give it a position within your scene, a point in your scene to look at, and an 'up' vector, and it will set up the scene properly. To use it properly, keep track of where your camera is located - call that vector p - along with a vector n (for normal; it indicates the direction you're looking) and u (your up vector). It will make things easier for more advanced features if n and u are orthonormal vectors (i.e. they are orthogonal to each other and have unit length). If you do this, you can compute r = u × n (your 'right' vector, consistent with the canonical values below), which will be a unit vector orthogonal to the other two. You then 'look at' p + n and provide u as the up vector.
Ideally, your n, u and r have some canonical form, for instance:
n = <0, 0, 1>
u = <0, 1, 0>
r = <1, 0, 0>
You then incrementally accumulate your rotations and apply them to the canonical form of your orientation vectors. You can use either Euler rotations or quaternion rotations to accumulate them (I've come to really appreciate the quaternion approach for a variety of reasons).
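As a closing sketch (mine, not from the answer): GLU isn't available on iOS, but GLKit's GLKMatrix4MakeLookAt takes exactly the p, p + n, u arguments described above.

#import <GLKit/GLKMath.h>

/* p = camera position, n = unit look direction, u = unit up vector */
GLKVector3 p = GLKVector3Make(0.0f, 0.0f,  5.0f);
GLKVector3 n = GLKVector3Make(0.0f, 0.0f, -1.0f);
GLKVector3 u = GLKVector3Make(0.0f, 1.0f,  0.0f);

GLKMatrix4 view = GLKMatrix4MakeLookAt(p.x, p.y, p.z,                   /* eye   */
                                       p.x + n.x, p.y + n.y, p.z + n.z, /* p + n */
                                       u.x, u.y, u.z);                  /* up    */
/* then load `view` as your modelview (on ES 1.1: glLoadMatrixf(view.m)) */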