I've been trying to switch from two distinct VBOs to just one with interleaved attributes. I can do it in C++, but in Scala it proves quite difficult.
Here is my implementation:
class Mesh(positions: Array[Float], textureCoordinates: Array[Float], indices: Array[Int])
{
    // Create VAO, VBO and a buffer for the indices
    val vao: Int = glGenVertexArrays
    val vbo: Int = glGenBuffers
    val ibo: Int = glGenBuffers

    setup()

    private def setup(): Unit =
    {
        val interleavedBuffer: FloatBuffer = prepareFloatBuffer(positions ++ textureCoordinates)
        val indicesBuffer: IntBuffer = prepareIntBuffer(indices)

        // One VAO to bind them all!
        glBindVertexArray(vao)

        glBindBuffer(GL_ARRAY_BUFFER, vbo)
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo)

        // Fill buffers with data
        glBufferData(GL_ARRAY_BUFFER, interleavedBuffer, GL_STATIC_DRAW)
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indicesBuffer, GL_STATIC_DRAW)

        // Set vertex attribute pointers
        // 0 = Position = Vector3(x,y,z) -> 3 (coordinates) * 4 (byte-size of float)
        // 1 = Texture Coordinates = Vector2(x,y) -> 2 (coordinates) * 4 (byte-size of float)
        // stride = 3 (coordinates) + 2 (texture coordinates) = 5 * 4 (byte-size of float); offset = 3 (coordinates) * 4 (byte-size of float)
        glVertexAttribPointer(0, 3, GL_FLOAT, false, 4*5, 0)
        glVertexAttribPointer(1, 2, GL_FLOAT, false, 4*5, 4*3)

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0)
        glBindBuffer(GL_ARRAY_BUFFER, 0)
        glBindVertexArray(0)
    }

    private def prepareIntBuffer(data: Array[Int]): IntBuffer =
    {
        val buffer: IntBuffer = BufferUtils.createIntBuffer(data.length)
        buffer.put(data)
        buffer.flip // Make the buffer readable
        buffer
    }

    private def prepareFloatBuffer(data: Array[Float]): FloatBuffer =
    {
        val buffer: FloatBuffer = BufferUtils.createFloatBuffer(data.length)
        buffer.put(data)
        buffer.flip // Make the buffer readable
        buffer
    }

    def render(): Unit =
    {
        glBindVertexArray(vao)
        glBindBuffer(GL_ARRAY_BUFFER, vbo)
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo)

        glEnableVertexAttribArray(0) // Vertices are in zero
        glEnableVertexAttribArray(1) // Texture Coords are in one

        glDrawElements(GL_TRIANGLES, this.indices.length, GL_UNSIGNED_INT, 0)

        glDisableVertexAttribArray(1)
        glDisableVertexAttribArray(0)

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0)
        glBindBuffer(GL_ARRAY_BUFFER, 0)
        glBindVertexArray(0)
    }
}
The data (positions, textureCoordinates) is the same data I used before with two distinct VBOs. Now:
glVertexAttribPointer(0, 3, GL_FLOAT, false, 4*5, 0)
glVertexAttribPointer(1, 2, GL_FLOAT, false, 4*5, 4*3)
How do I calculate these strides and offsets, you ask?
Well, a position is a Vector3(x, y, z), so 3 floats. Texture coordinates are 2 floats.
3 + 2 = 5
The size of a float is... well, I thought it was 4 bytes (according to http://wiki.lwjgl.org/wiki/The_Quad_interleaved it is in Java).
That would give 20, or 4*5.
The offset for the texture coordinates is calculated the same way: 3 (position coordinates) * 4 (bytes per float) = 12.
Now, the outcome doesn't look too good...
Can you guess what it actually should be? (Spoiler: a cube.)
So I figure that either my maths is totally broken, or a Float has a different size in Scala?
In Java I could use Float.SIZE, but Scala doesn't seem to have anything like it.
In C++ I'd define a struct and do:
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex, textureCoordinates));
Your problem is not the size of a float, but the data layout in your buffer. The statement
val interleavedBuffer: FloatBuffer = prepareFloatBuffer(positions ++ textureCoordinates)
creates a buffer with the layout
xyz[0],xyz[1],…,xyz[n],st[0],st[1],…,st[m]
However, what you configure OpenGL to expect is
xyz[0],st[0],xyz[1],st[1],…,xyz[n],st[n]
You can either properly interleave the attributes in the buffer, or you tell OpenGL that each attribute's elements are contiguous (0 stride, or the size of exactly one element of that attribute, i.e. 3*4 for xyz and 2*4 for st) and pass offsets to where each sub-buffer starts.
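If you go the interleaving route, a minimal Scala sketch of such a step (using LWJGL's BufferUtils like the code above; prepareInterleavedBuffer is an illustrative name, not an existing API) could look like this:

private def prepareInterleavedBuffer(positions: Array[Float],
                                     textureCoordinates: Array[Float]): FloatBuffer =
{
    // Weave x,y,z,s,t per vertex, matching the stride/offsets given to glVertexAttribPointer
    val vertexCount = positions.length / 3
    val buffer = BufferUtils.createFloatBuffer(positions.length + textureCoordinates.length)
    for (i <- 0 until vertexCount)
    {
        buffer.put(positions, i * 3, 3)          // x, y, z of vertex i
        buffer.put(textureCoordinates, i * 2, 2) // s, t of vertex i
    }
    buffer.flip // Make the buffer readable
    buffer
}

With a buffer built this way, the stride of 4*5 and the texture coordinate offset of 4*3 from the question line up with what is actually stored.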
I have been diving into OpenGL ES 2.0 for the last couple of days, but I still get really faulty results. One thing I do not quite understand is how I am supposed to set up my buffers correctly.
I would like to create a shape like this: a kind of tent, if you like, without the left and right sides.
3_______________________2
|\                     /|
| \_ _ _ _ _ _ _ _ _ _/ |
| /4                 5\ |
|/_____________________\|
0                       1
So let's start with my texture/indices/vertices arrays. This is what I set up:
#define RECT_TOP_R {1, 1, 0}
#define RECT_TOP_L {-1, 1, 0}
#define RECT_BOTTOM_R {1, -1, 0}
#define RECT_BOTTOM_L {-1, -1, 0}
#define BACK_RIGHT {1, 0, -1.73}
#define BACK_LEFT {-1, 0, -1.73}

const GLKVector3 Vertices[] = {
    RECT_BOTTOM_L, //0
    RECT_BOTTOM_R, //1
    RECT_TOP_R,    //2
    RECT_TOP_L,    //3
    BACK_LEFT,     //4
    BACK_RIGHT     //5
};

const GLKVector4 Color[] = {
    {1,0,0,1},
    {0,1,0,1},
    {0,0,1,1},
    {0,1,0,1},
    {1,0,0,1},
    {0,1,0,1},
    {0,0,1,1},
    {0,1,0,1}
};

const GLubyte Indices[] = {
    0,1,3,
    2,4,5,
    0,1
};

const GLfloat texCoords[] = {
    0,0,
    1,0,
    0,1,
    1,1,
    1,1,
    0,0,
    0,0,
    1,0
};
Here I generate/bind the buffers.
glGenBuffers(1, &vertexArray);
glBindBuffer(GL_ARRAY_BUFFER, vertexArray);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(Vertices), 0);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);

glGenBuffers(1, &indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW);

glGenBuffers(1, &colArray);
glEnableVertexAttribArray(GLKVertexAttribColor);
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_FALSE, sizeof(Color), 0);
glBufferData(GL_ARRAY_BUFFER, sizeof(Color), Color, GL_STATIC_DRAW);

glGenBuffers(1, &texArray);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, sizeof(texCoords), 0);
glBufferData(GL_ARRAY_BUFFER, sizeof(texCoords), texCoords, GL_STATIC_DRAW);
So I have a question regarding buffers:
What is the difference between GL_ARRAY_BUFFER and GL_ELEMENT_ARRAY_BUFFER?
Here is the delegate method, which is called whenever the view redraws:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    self.contentScaleFactor = 2.0;
    self.opaque = NO;
    glClearColor(1.0f, 1.0f, 1.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    [self.effect prepareToDraw];

    glDrawElements(GL_TRIANGLE_STRIP, sizeof(Indices), GL_UNSIGNED_BYTE, 0);
}
So, the code obviously does not work as intended. Could you please help me? I have been trying to get it to work, but I am losing my nerve.
OK, so I definitely did something wrong there. I reused code from a website that stored all the vertex data in one struct. I, however, changed that code by separating the individual attribute arrays (colors, texture coordinates) into individual arrays. Before, the struct was buffered as a whole, so the GPU processed the texture array and the color array along with it. Now, after my changes, I need to generate and bind those buffers individually, as sketched below.
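A minimal sketch of the corrected setup for one attribute array (the same pattern applies to the vertex and texture coordinate buffers; note the glBindBuffer call that was missing for colArray in the question):

glGenBuffers(1, &colArray);
glBindBuffer(GL_ARRAY_BUFFER, colArray); // bind BEFORE glBufferData/glVertexAttribPointer
glBufferData(GL_ARRAY_BUFFER, sizeof(Color), Color, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribColor);
// Stride 0: this attribute's elements are tightly packed in their own buffer
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_FALSE, 0, 0);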
Another problem I could partly resolve concerned the indices and texture mapping. If I understood it correctly: each index carries exactly one set of attributes, so if I assign texture coordinates (x, y) to a certain index and then reuse that index, expecting a different texture coordinate at that exact place, I should not wonder why everything is messed up.
What I ended up doing did not exactly solve my problem, but it got me a whole lot nearer to my goal, and I am quite proud of my learning curve so far where OpenGL is concerned.
This answer is intended for others who might face the same problems and I hope that I do not spread any wrong information here. Please feel free to edit/point out any mistakes.
In response to your own answer: the vertex data held in a single struct per vertex, as you mentioned, is an interleaved layout, commonly called an array of structs. Apple recommends you use this layout.
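For illustration, a hedged sketch of that interleaved layout with GLKit's vector types (the struct and its field names are illustrative, not taken from the question):

#include <stddef.h> // for offsetof

typedef struct {
    GLKVector3 position;
    GLKVector4 color;
    GLKVector2 texCoord;
} Vertex;

// One VBO holds whole Vertex records: the stride is sizeof(Vertex)
// and each attribute's offset comes from offsetof().
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (GLvoid *)offsetof(Vertex, position));
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (GLvoid *)offsetof(Vertex, color));
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (GLvoid *)offsetof(Vertex, texCoord));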
I'd like to use Vertex Buffer Objects (VBOs) to improve the rendering of somewhat complicated models in my OpenGL ES 1.1 game for iPhone. After reading several posts on SO and this tutorial (http://playcontrol.net/ewing/jibberjabber/opengl_vertex_buffer_object.html), I'm still having trouble understanding VBOs and how to implement them given my Cheetah 3D export model format. Could someone please give me an example of implementing a VBO and using it to draw my vertices with the given data structure, and explain the syntax? I greatly appreciate any help!
#define body_vertexcount 434
#define body_polygoncount 780

// The vertex data is saved in the following format:
// u0,v0,normalx0,normaly0,normalz0,x0,y0,z0
float body_vertex[body_vertexcount][8] = {
    {0.03333, 0.00000, -0.68652, -0.51763, 0.51063, 0.40972, -0.25028, -1.31418},
    {...},
    {...}
};

GLushort body_index[body_polygoncount][3] = {
    {0, 1, 2},
    {2, 3, 0}
};
I've written the following code with the help of Chapter 9 of Pro OpenGL ES (Apress). I'm getting EXC_BAD_ACCESS with the DrawElements command and I'm not sure why. Could someone please shed some light? Thanks -
// First thing we do is create / setup the index buffer
glGenBuffers(1, &bodyIBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bodyIBO);
// For contrast, instead of glBufferSubData and glMapBuffer,
// we can directly supply the data in one shot
glBufferData(GL_ELEMENT_ARRAY_BUFFER, body_polygoncount*sizeof(GLubyte), body_index, GL_STATIC_DRAW);
// Define our data structure
int numXYZElements = 3;
int numNormalElements = 3;
int numTextureCoordElements = 2;
long totalXYZBytes;
long totalNormalBytes;
long totalTexCoordinateBytes;
int numBytesPerVertex;
// Allocate a new buffer
glGenBuffers(1, &bodyVBO);
// Bind the buffer object to use
glBindBuffer(GL_ARRAY_BUFFER, bodyVBO);
// Tally up the size of the data components
numBytesPerVertex = numXYZElements;
numBytesPerVertex += numNormalElements;
numBytesPerVertex += numTextureCoordElements;
numBytesPerVertex *= sizeof(GLfloat);
// Actually allocate memory on the GPU ( Data is static here )
glBufferData(GL_ARRAY_BUFFER, numBytesPerVertex * body_vertexcount, 0, GL_STATIC_DRAW);
// Upload data to the cache ( memory mapping )
GLubyte *vboBuffer = (GLubyte *)glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES);
// Calculate the total number of bytes for each data type
totalXYZBytes = numXYZElements * body_vertexcount * sizeof(GLfloat);
totalNormalBytes = numNormalElements * body_vertexcount * sizeof(GLfloat);
totalTexCoordinateBytes = numTextureCoordElements * body_vertexcount * sizeof(GLfloat);
// Set the total bytes property for the body
self.bodyTotalBytes = totalXYZBytes + totalNormalBytes + totalTexCoordinateBytes;
// Setup the copy of the buffer(s) using memcpy()
memcpy(vboBuffer, body_vertex, self.bodyTotalBytes);
// Perform the actual copy
glUnmapBufferOES(GL_ARRAY_BUFFER);
Here are the drawing commands where I'm getting the exception:
// Activate the VBOs to draw
glBindBuffer(GL_ARRAY_BUFFER, bodyVBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bodyIBO);
// Setup drawing
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glClientActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D,lightGreyInt);
// Setup pointers
glVertexPointer(3, GL_FLOAT, sizeof(vertexStruct), (char *)NULL + 0 );
glTexCoordPointer(2, GL_FLOAT, sizeof(vertexStruct), (char *)NULL + 12 );
glNormalPointer(GL_FLOAT, sizeof(vertexStruct), (char *)NULL + 24 );
// Now draw the body
glDrawElements(GL_TRIANGLES, body_polygoncount,GL_UNSIGNED_SHORT, (GLvoid*)((char*)NULL));
//glDrawElements(GL_TRIANGLES, body_polygoncount, GL_UNSIGNED_SHORT, nil);
//glDrawElements(GL_TRIANGLES,body_polygoncount*3,GL_UNSIGNED_SHORT,body_index);
Well, first of all, your index buffer is too small: you don't just have body_polygoncount indices but body_polygoncount * 3. You also got the type wrong: since the indices are shorts, you need GLushort and not GLubyte. So it should be
glBufferData(GL_ELEMENT_ARRAY_BUFFER, body_polygoncount*3*sizeof(GLushort),
             body_index, GL_STATIC_DRAW);
And then, you mixed up the offsets of your attributes. Since your data contains first the texture coordinates, then the normal, and then the position for each vertex, it should be
glVertexPointer(3, GL_FLOAT, sizeof(vertexStruct), (char *)NULL + 20); // 3rd, after 5*4 bytes
glTexCoordPointer(2, GL_FLOAT, sizeof(vertexStruct), (char *)NULL + 0); // 1st
glNormalPointer(GL_FLOAT, sizeof(vertexStruct), (char *)NULL + 8);      // 2nd, after 2*4 bytes
And finally, in a glDrawElements call you don't give the number of triangles, but the number of elements (indices), so it should be
glDrawElements(GL_TRIANGLES, body_polygoncount*3,
GL_UNSIGNED_SHORT, (GLvoid*)((char*)NULL));
Otherwise your code looks reasonable (of course the mapping was unnecessary and you could have just used glBufferData again, but I guess you did it for learning), and if you understood everything it does, there is nothing more to it.
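For reference, a sketch of that one-shot glBufferData alternative (same data, no mapping):

// Supply the vertex data directly instead of glMapBufferOES/memcpy/glUnmapBufferOES
glBufferData(GL_ARRAY_BUFFER, numBytesPerVertex * body_vertexcount,
             body_vertex, GL_STATIC_DRAW);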
All these errors, though, would also have occurred if you had just used client-side vertex arrays without VBOs, and OpenGL ES 1.1 doesn't have an immediate mode glBegin/glEnd. So I wonder how your game worked previously, without VBOs, if you weren't aware of these errors.
I am porting an Android app I made to iPhone and running into problems with the syntax (I think).
I'm basing the project off of the example from
http://iphonedevelopment.blogspot.com/2009/04/opengl-es-from-ground-up-part-2-look-at.html
I would like to pull the geometry out of the rendering process to keep the code modular, but I can't seem to get it to work. I've created a class called Icosahedron (the comments are my understanding of what's going on).
Icosahedron.h
#import <Foundation/Foundation.h>
#import "OpenGLCommon.h"

@interface Icosahedron : NSObject {
    Vertex3D *vertices[12]; // allocate a group of 12 Vertex3D pointers
    Vertex3D *normals[12];  // ditto
    GLubyte *faces[60];     // ditto, but with 60 face values
}

// Declare methods
- (Vertex3D *) vertices;
- (void) setVertices:(Vector3D *)setVerts;
- (Vertex3D *) normals;
- (void) setNormals:(Vector3D *)setNorms;
- (GLubyte *) faces;
- (void) setFaces:(GLubyte *)setFaces;
@end
Icosahedron.m
#import "Icosahedron.h"

@implementation Icosahedron

// Returns the pointer to the vertices instance variable
- (Vector3D *) vertices {
    return *vertices;
}
- (void) setVertices:(Vector3D *)setVerts {
    //vertices=setVerts[0];
}
- (Vector3D *) normals {
    return *normals;
}
- (void) setNormals:(Vector3D *)setNorms {
    //normals=setNorms;
}
- (GLubyte *) faces {
    return *faces;
}
- (void) setFaces:(GLubyte *)setFaces {
    //faces=setFaces;
}

- (id)init
{
    // super method
    self = [super init];

    // create 12 Vector3D objects and populate them...
    Vector3D tempVert[12] = {
        {0, -0.525731, 0.850651},  // vertices[0]
        {0.850651, 0, 0.525731},   // vertices[1]
        {0.850651, 0, -0.525731},  // vertices[2]
        {-0.850651, 0, -0.525731}, // vertices[3]
        {-0.850651, 0, 0.525731},  // vertices[4]
        {-0.525731, 0.850651, 0},  // vertices[5]
        {0.525731, 0.850651, 0},   // vertices[6]
        {0.525731, -0.850651, 0},  // vertices[7]
        {-0.525731, -0.850651, 0}, // vertices[8]
        {0, -0.525731, -0.850651}, // vertices[9]
        {0, 0.525731, -0.850651},  // vertices[10]
        {0, 0.525731, 0.850651}    // vertices[11]
    };

    // same...
    Vector3D tempNorm[12] = {
        {0.000000, -0.417775, 0.675974},
        {0.675973, 0.000000, 0.417775},
        {0.675973, -0.000000, -0.417775},
        {-0.675973, 0.000000, -0.417775},
        {-0.675973, -0.000000, 0.417775},
        {-0.417775, 0.675974, 0.000000},
        {0.417775, 0.675973, -0.000000},
        {0.417775, -0.675974, 0.000000},
        {-0.417775, -0.675974, 0.000000},
        {0.000000, -0.417775, -0.675973},
        {0.000000, 0.417775, -0.675974},
        {0.000000, 0.417775, 0.675973},
    };

    // face values
    GLubyte tempFaces[60] = {
        1, 2, 6,
        1, 7, 2,
        3, 4, 5,
        4, 3, 8,
        6, 5, 11,
        5, 6, 10,
        9, 10, 2,
        10, 9, 3,
        7, 8, 9,
        8, 7, 0,
        11, 0, 1,
        0, 11, 4,
        6, 2, 10,
        1, 6, 11,
        3, 5, 10,
        5, 4, 11,
        2, 7, 9,
        7, 1, 0,
        3, 9, 8,
        4, 8, 0,
    };

    // set the instance pointers to the temp values
    *vertices = tempVert;
    *normals = tempNorm;
    *faces = tempFaces;

    // At this point the values are NOT properly populated; only the first value is correct.

    return self;
}
@end
All I want to do is to be able to call something like
...
ico = [[Icosahedron alloc] init];
glVertexPointer(3, GL_FLOAT, 0, [ico vertices]);
...
in the rendering section, but the farthest I've gotten is setting the first value of the Vertex3Ds inside the Icosahedron class, and I get 'out of scope' in the debugger for any of the Icosahedron's values in the rendering class.
I suspect that this is just me learning Objective-C's quirks, but I've tried many different approaches over a few days and nothing seems to get me anywhere.
Please help me, Overflow-Wan, you're my only hope.
You’re getting your pointers and arrays mixed up. You’d want something like this:
Vertex3D vertices[12];

- (void) setVertices:(Vertex3D *)newVertices
{
    memcpy(vertices, newVertices, 12 * sizeof(Vertex3D));
}

- (Vertex3D *) vertices
{
    return vertices;
}
To copy arrays in C you have to use memcpy (or a hand-written loop), you cannot do this with the assignment operator.
Firstly, you seem to be using "Vertex3D" and "Vector3D" interchangeably. I don't know if that's actually a problem...
I think the arrays should be declared as arrays of values, not arrays of pointers. So...
Vertex3D vertices[12]; // room for 12 Vertex3D values
Vertex3D normals[12];  // ditto
GLubyte faces[60];     // ditto, but with 60 face values
That way you have room for all the values, not just pointers to other vectors (which you haven't allocated).
As an aside, the typical init sequence is:
- (id)init
{
    if (self = [super init])
    {
        // do initialization stuff
    }
    return self;
}
It handles errors more gracefully.
Your declarations of vertices, normals and faces are arrays of pointers to the entities in question, whereas it seems like what you really want is arrays of the structs/values themselves. So your interface should say:
Vertex3D vertices[12];
Vertex3D normals[12];
GLubyte faces[60];
Your object will then own a chunk of memory containing 12 vertices, 12 normals and 60 faces (um, whatever those are -- GLubyte values, anyway).
Pointers and arrays are sort of interchangeable in C/Objective-C. So vertices and normals can also be thought of as Vertex3D* values, and faces as a GLubyte*, and they can be passed as such to functions that want those types.
To make it clear what's what, it would be better to have the initial values as single static const arrays at the top level of your implementation file, and then copy those values into the object-owned arrays at initialisation time. You can do this several ways, but the simplest is probably to use memcpy as in Sven's answer.
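A minimal sketch of that suggestion (kVertices is an illustrative name; the data values are the ones from the question):

// At the top level of Icosahedron.m:
static const Vertex3D kVertices[12] = {
    {0, -0.525731, 0.850651},
    // ... the remaining 11 vertices from the question
};

- (id)init
{
    if (self = [super init])
    {
        // Copy the constant data into the object-owned array (Vertex3D vertices[12];)
        memcpy(vertices, kVertices, sizeof(vertices));
    }
    return self;
}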
How much memory will a texture loaded with this method consume?
Will a 1024x1024 texture consume 4MB anyway, regardless of loading it as RGBA4444?
- (void)loadTexture:(NSString *)nombre {
    CGImageRef textureImage = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:nombre ofType:nil]].CGImage;
    if (textureImage == nil) {
        NSLog(@"Failed to load texture image");
        return;
    }

    // Dimensions of our image
    imageSizeX = CGImageGetWidth(textureImage);
    imageSizeY = CGImageGetHeight(textureImage);
    textureWidth = NextPowerOfTwo(imageSizeX);
    textureHeight = NextPowerOfTwo(imageSizeY);

    GLubyte *textureData = (GLubyte *)calloc(1, textureWidth * textureHeight * 4);
    CGContextRef textureContext = CGBitmapContextCreate(textureData, textureWidth, textureHeight, 8, textureWidth * 4, CGImageGetColorSpace(textureImage), kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)textureWidth, (float)textureHeight), textureImage);

    /**************** Convert data to RGBA4444 ******************/
    // Convert "RRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA" to "RRRRGGGGBBBBAAAA"
    void *tempData = malloc(textureWidth * textureHeight * 2);
    unsigned int *inPixel32 = (unsigned int *)textureData;
    unsigned short *outPixel16 = (unsigned short *)tempData;
    for (int i = 0; i < textureWidth * textureHeight; ++i, ++inPixel32)
        *outPixel16++ =
            ((((*inPixel32 >> 0) & 0xFF) >> 4) << 12) | // R
            ((((*inPixel32 >> 8) & 0xFF) >> 4) << 8) |  // G
            ((((*inPixel32 >> 16) & 0xFF) >> 4) << 4) | // B
            ((((*inPixel32 >> 24) & 0xFF) >> 4) << 0);  // A
    free(textureData);
    textureData = tempData;

    // We no longer need the bitmap context, so release it
    CGContextRelease(textureContext);

    glGenTextures(1, &textures[0]);
    glBindTexture(GL_TEXTURE_2D, textures[0]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0, GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, textureData);
    free(textureData);

    //glEnable(GL_BLEND);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
The GL ES 1.1.12 specification (pdf) has this to say on the subject:
"The GL stores the resulting texture with internal component resolutions of its own choosing. The allocation of internal component resolution may vary based on any TexImage2D parameter (except target), but the allocation must not be a function of any other state and cannot be changed once established."
However, according to the iPhone Dev Center, RGBA4444 is natively supported, so I would expect it to consume 2MB with your code snippet. Do you have reason to doubt it's using 2MB?
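The arithmetic behind those numbers, ignoring mipmaps and any driver-side padding:

// Bytes consumed by a 1024x1024 texture in two pixel formats:
size_t rgba8888 = 1024 * 1024 * 4; // 4,194,304 bytes = 4 MB
size_t rgba4444 = 1024 * 1024 * 2; // 2,097,152 bytes = 2 MB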
In OpenGL, there is no way to find out how much video memory a texture uses.
Also, there is no single answer to the question to begin with; such details depend on the graphics card, driver version, and platform.
From the OpenGL ES Programming Guide for iOS:
"If your application cannot use compressed textures, consider using a lower precision pixel format. A texture in RGB565, RGBA5551, or RGBA4444 format uses half the memory of a texture in RGBA8888 format. Use RGBA8888 only when your application needs that level of quality."
Above that paragraph, they also strongly recommend using PVRTC-compressed textures, because those save even more memory.
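For comparison, PVRTC for the same 1024x1024 texture (it stores 4 or 2 bits per pixel, again ignoring mipmaps):

size_t pvrtc4bpp = 1024 * 1024 / 2; // 524,288 bytes = 512 KB
size_t pvrtc2bpp = 1024 * 1024 / 4; // 262,144 bytes = 256 KB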