Issue with glDrawElements - iPhone

This shows up as red:
VertexColorSet(&colors[vertexCounter], 1.0f, 0.0f, 0.0f, 1.0f);
This shows up as black:
VertexColorSet(&colors[vertexCounter], 0.9f, 0.0f, 0.0f, 1.0f);
Why is it black? Shouldn't it just be a darker shade of red?
glEnableClientState(GL_COLOR_ARRAY);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glVertexPointer(2, GL_FLOAT, 0, vertexes);
glColorPointer(4, GL_FLOAT, 0, colors); // per-vertex colors come from the colors array
glDrawElements(GL_TRIANGLES, 3*indexesPerButton*totalButtons, GL_UNSIGNED_SHORT, indexes);
//glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glDisableClientState(GL_COLOR_ARRAY);

And yes, it is black because I declared the color components as int instead of float: 0.9 truncates to 0, while 1.0 survives the conversion.
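For anyone who hits the same thing, here is a minimal sketch of the truncation at work (plain C; the variable names are mine, purely for illustration):

#include <stdio.h>

int main(void) {
    /* Assigning a float color component to an int truncates toward zero. */
    int redFull = 1.0f; /* 1.0 -> 1, so full red still renders */
    int redDark = 0.9f; /* 0.9 -> 0, so the red channel vanishes and the vertex is black */
    printf("%d %d\n", redFull, redDark); /* prints: 1 0 */
    return 0;
}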

Related

Applying Translations to Entire OpenGL ES Scene - iPhone

I have an OpenGL ES scene which is made up of about 20 objects. In the render method for each object I have code which scales, rotates and positions (using glMultMatrix) that object in the correct place in the scene (see code below).
My question is: how can I then apply a transformation to the entire scene as a whole? E.g. scale/enlarge the entire scene by 2x?
glPushMatrix();
glLoadIdentity();
//Move some objects.
if (hasAnimations) {
    glTranslatef(kBuildingOffset);
    //scale
    glScalef(kModelScale);
    //glMultMatrixf(testAnimation);
    zRotation = kBuildingzRotation;
    xRotation = kBuildingxRotation;
    yRotation = kBuildingyRotation;
    glRotatef(yRotation, 0.0f, 1.0f, 0.0f);
    glRotatef(xRotation, 1.0f, 0.0f, 0.0f);
    glRotatef(zRotation, 0.0f, 0.0f, 1.0f);
    //NSLog(@"ANIMATION FRAME IS %d", animationFrame);
    //NSLog(@"MATRICE IS %f", animationArray[0][0]);
    glMultMatrixf(animationArray[animationFrame]);
    //glMultMatrixf(matricesArray);
    glMultMatrixf(matricePivotArray);
    //glMultMatrixf(testAnimation);
}
//First rotate our objects as required.
if ([objectName isEqualToString:@"movingobject1"]) {
    glTranslatef(kFan1Position);
    glScalef(kModelScale);
    glMultMatrixf(matricesArray);
    glTranslatef(0, 0, 0);
    zRotation += kFanRotateSpeed;
    yRotation = kyFanFlip;
    xRotation = kxFanRotation;
    glRotatef(zRotation, 0.0f, 0.0f, 1.0f);
    glRotatef(yRotation, 0.0f, 1.0f, 0.0f);
    glRotatef(xRotation, 1.0f, 0.0f, 0.0f);
    glTranslatef(0.0, 0.0, -300);
}
if ([objectName isEqualToString:@"movingobject2"]) {
    glTranslatef(kFan2Position);
    glScalef(kModelScale);
    glMultMatrixf(matricesArray);
    glTranslatef(0, 0, 0);
    zRotation += kFanRotateSpeed;
    yRotation = kyFanFlip;
    xRotation = kxFanRotation;
    glRotatef(-kFan3YOffset, 0.0f, 1.0f, 0.0f);
    glRotatef(zRotation, 0.0f, 0.0f, 1.0f);
    glRotatef(yRotation, 0.0f, 1.0f, 0.0f);
    glRotatef(xRotation, 1.0f, 0.0f, 0.0f);
    glRotatef(kFan3YOffset, 0.0f, 1.0f, 0.0f);
    glTranslatef(0.0, 0.0, -300);
}
if ([objectName isEqualToString:@"movingobject3"]) {
    glTranslatef(kFan3Position);
    glScalef(kModelScale);
    glMultMatrixf(matricesArray);
    glTranslatef(0, 0, 0);
    zRotation += kFanRotateSpeed;
    yRotation = kyFanFlip;
    xRotation = kxFanRotation;
    glRotatef(-kFan2YOffSet, 0.0f, 1.0f, 0.0f);
    glRotatef(zRotation, 0.0f, 0.0f, 1.0f);
    glRotatef(yRotation, 0.0f, 1.0f, 0.0f);
    glRotatef(xRotation, 1.0f, 0.0f, 0.0f);
    glRotatef(kFan2YOffSet, 0.0f, 1.0f, 0.0f);
    glTranslatef(0.0, 0.0, -300);
}
//Then position the rest of the scene objects.
if (![objectName isEqualToString:@"movingobject1"])
    if (![objectName isEqualToString:@"movingobject2"])
        if (![objectName isEqualToString:@"movingobject3"])
            if (!hasAnimations) {
                glLoadIdentity();
                glTranslatef(kBuildingOffset);
                //scale
                glScalef(kModelScale);
                zRotation = kBuildingzRotation;
                xRotation = kBuildingxRotation;
                yRotation = kBuildingyRotation;
                glRotatef(yRotation, 0.0f, 1.0f, 0.0f);
                glRotatef(xRotation, 1.0f, 0.0f, 0.0f);
                glRotatef(zRotation, 0.0f, 0.0f, 1.0f);
                if ([Matrices count] != 0) {
                    glMultMatrixf(matricesArray);
                }
                if (hasPivotNode) {
                    glMultMatrixf(matricePivotArray);
                }
            }
[mesh render];
glPopMatrix(); //restore the matrix
You should be able to achieve this easily enough by pushing the transform matrix you desire onto the matrix stack before you do any of your object-specific transforms, and then not loading the identity matrix each time you push another matrix onto the stack. Practically speaking, this will transform all subsequent matrix operations. This is the basic pattern:
// Push an identity matrix on the bottom of the stack...
glPushMatrix();
glLoadIdentity();
// Now scale it, so all subsequent transforms will be
// scaled up 2x.
glScalef(2.f, 2.f, 2.f);
for (Mesh *mesh in meshes) { // pseudocode: iterate over your scene objects
    glPushMatrix();
    //glLoadIdentity(); This would erase the scale set above.
    glDoABunchOfTransforms(); // placeholder for the per-object transforms
    [mesh render];
    glPopMatrix();
}

Camera frame to UIImage to OpenGL rendering gives an odd image

I'm extracting a UIImage with
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [UIImage imageWithData:imageData];
Then I create an OpenGL texture from it and render it.
Then I extract a UIImage from the framebuffer, but it comes out wrong.
I tried playing with the texture vertex array, but the result stayed the same.
These are the coordinates:
const GLfloat squareVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
const GLfloat textureVertices[] = {
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
0.0f, 0.0f,
};
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
The problem is with your texture coordinates: as written, they rotate and mirror the image. Check that each texture coordinate pairs with the screen corner you expect.
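Since the screenshot didn't survive here, treat the following as a sketch to experiment with rather than a guaranteed fix for this exact capture path. A straight one-to-one mapping for that triangle strip is the first array below; if the result then comes out upside down (UIImage data runs top-down, while GL's t axis runs bottom-up), the vertically flipped second array is the usual correction:
// One-to-one mapping: each texture corner matches its screen corner.
const GLfloat textureVertices[] = {
    0.0f, 0.0f, // bottom-left
    1.0f, 0.0f, // bottom-right
    0.0f, 1.0f, // top-left
    1.0f, 1.0f, // top-right
};
// Vertically flipped mapping, for a source image whose y-axis runs top-down.
const GLfloat textureVerticesFlipped[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    0.0f, 0.0f,
    1.0f, 0.0f,
};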

How to draw a cube without indices? OpenGL ES 2.0

I want to draw a cube using a VBO but without indices. I can't find anything about this on the internet (tutorials, examples).
What I tried:
const GLfloat Vertices[] = {
    // Front face (z = 1)
    -1.0f, -1.0f,  1.0f, //Vertex 0
     1.0f, -1.0f,  1.0f, //v1
    -1.0f,  1.0f,  1.0f, //v2
     1.0f,  1.0f,  1.0f, //v3
    // Right face (x = 1)
     1.0f, -1.0f,  1.0f, //...
     1.0f, -1.0f, -1.0f,
     1.0f,  1.0f,  1.0f,
     1.0f,  1.0f, -1.0f,
    // Back face (z = -1)
     1.0f, -1.0f, -1.0f,
    -1.0f, -1.0f, -1.0f,
     1.0f,  1.0f, -1.0f,
    -1.0f,  1.0f, -1.0f,
    // Left face (x = -1)
    -1.0f, -1.0f, -1.0f,
    -1.0f, -1.0f,  1.0f,
    -1.0f,  1.0f, -1.0f,
    -1.0f,  1.0f,  1.0f,
    // Bottom face (y = -1)
    -1.0f, -1.0f, -1.0f,
     1.0f, -1.0f, -1.0f,
    -1.0f, -1.0f,  1.0f,
     1.0f, -1.0f,  1.0f,
    // Top face (y = 1)
    -1.0f,  1.0f,  1.0f,
     1.0f,  1.0f,  1.0f,
    -1.0f,  1.0f, -1.0f,
     1.0f,  1.0f, -1.0f,
};
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 24);
But what comes out is something strange, not a cube.
First, check the draw mode in your glDrawArrays call: these vertices are laid out four per face, which is the layout GL_QUADS expects. Anything else will give weird results.
Your vertices are fine though. They're in the correct order and in the correct positions.
EDIT: Since you can't use quads (OpenGL ES has no GL_QUADS), you will have to define each triangle individually. Your first block will look like this:
-1.0f, -1.0f, 1.0f, //v0
1.0f, -1.0f, 1.0f, //v1
-1.0f, 1.0f, 1.0f, //v2
-1.0f, 1.0f, 1.0f, //v2
1.0f, -1.0f, 1.0f, //v1
1.0f, 1.0f, 1.0f, //v3
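Expanded this way for all six faces, the cube is 36 vertices (6 faces × 2 triangles × 3 vertices) and, assuming the vertex buffer and attribute pointers are bound as in your setup, would be drawn with:
glDrawArrays(GL_TRIANGLES, 0, 36);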
I would highly suggest using indices though. If you keep the same vertex buffer as you defined it, you can set up your indices like this:
GLubyte indices[6 * 6]; // 6 faces, 6 indices (2 triangles) each
int n = 0;
for (int i = 0; i < 4 * 6; i += 4) {
    indices[n++] = i;
    indices[n++] = i + 1;
    indices[n++] = i + 2;
    indices[n++] = i + 2;
    indices[n++] = i + 1;
    indices[n++] = i + 3;
}
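With those indices (all values fit in a byte), the draw call then becomes something like:
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, indices);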

OpenGL ES; rendering texture created from CGBitmapContext

I am executing the following, which I have derived from a few different tutorials (just a single render pass; initialisation code not shown, but it works fine for untextured primitives):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, xSize, 0, ySize, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glEnable(GL_TEXTURE_2D);
glBlendFunc(GL_ONE, GL_SRC_COLOR);
GLuint texture[1];
glGenTextures(1, &texture[0]);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
int width = 50;
int height = 50;
void* textureData = malloc(width * height * 4);
CGColorSpaceRef cSp = CGColorSpaceCreateDeviceRGB();
CGContextRef ct = CGBitmapContextCreate(textureData, width, height, 8, width*4, cSp, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextSetRGBFillColor(ct, 0, 1, 0, 1);
CGContextFillRect(ct, CGRectMake(0, 0, 50, 50));
CGContextRelease(ct);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
float verts[] = {
0.0f, 0.0f, 0.0f,
50.0f, 0.0f, 0.0f,
0.0f, 50.0f, 0.0f,
50.0f, 50.0f, 0.0f
};
float texCords[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f
};
glVertexPointer(3, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, texCords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisable(GL_TEXTURE_2D);
The result is a white square, not the green one intended. Can anyone spot the error(s) in my code that cause it to fail to render?
I hope to get this working and then move on to text rendering.
The problem is that width and height are not powers of two. There are two solutions:
Use the texture-rectangle extension: set the texture target to GL_TEXTURE_RECTANGLE_ARB instead of GL_TEXTURE_2D. You have to enable this extension before using it, and rectangle textures do not support mipmaps. Note, though, that this is a desktop OpenGL extension and is not available in OpenGL ES on the iPhone.
Use powers of two for the texture dimensions (see the sketch below).
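On iOS the second option is the practical one. A minimal sketch (the helper name is mine): round the dimensions up to the next power of two when allocating the bitmap, then either stretch the content to fill it or adjust the texture coordinates to cover only the used sub-rectangle:
// Round up to the next power of two (helper name is illustrative).
static int nextPowerOfTwo(int n) {
    int p = 1;
    while (p < n) p <<= 1;
    return p;
}

int texWidth  = nextPowerOfTwo(width);   // 50 -> 64
int texHeight = nextPowerOfTwo(height);  // 50 -> 64
void *textureData = calloc(1, texWidth * texHeight * 4);
CGContextRef ct = CGBitmapContextCreate(textureData, texWidth, texHeight, 8,
                                        texWidth * 4, cSp,
                                        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
// ...fill the context as before, then upload the 64x64 buffer:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, textureData);
// With only a 50x50 region of the texture used, the matching texture
// coordinates become 50/64 = 0.78125 instead of 1.0.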

glFrustumf displays only clear color, glOrthof displays as expected (OpenGL ES)

I'm new to OpenGL, so I'm sure this is a dumb mistake, but I've read every post and reviewed the sample code, and I can't find a difference that explains why glFrustum won't display as I'd like it to.
I initialize OpenGL like:
- (void)initOpenGL {
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    //glLoadIdentity();
    //glOrthof(0.0f, self.bounds.size.width, self.bounds.size.height, 0.0f, -10.0f, 10.0f);
    const GLfloat zNear = -0.1, zFar = 1000.0, fieldOfView = 60.0;
    GLfloat size;
    size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
    // This gives us the size of the iPhone display
    CGRect rect = self.bounds;
    glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size / (rect.size.width / rect.size.height), zNear, zFar);
    glViewport(0, 0, rect.size.width, rect.size.height);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
#if 0
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
#endif
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
#if TARGET_IPHONE_SIMULATOR
    glColor4f(0.0, 0.0, 0.0, 0.0f);
#else
    glColor4f(0.0, 0.0, 0.0, 0.0f);
#endif
    [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"GreenLineTex.png"] filter:GL_LINEAR];
    glInitialised = YES;
}
And my drawing is done like:
- (void)drawView {
    if (!glInitialised) {
        [self initOpenGL];
    }
    [EAGLContext setCurrentContext:context];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glEnable(GL_TEXTURE_2D);
    static const GLfloat texCoords[] = {
        0.0, 0.0,
        1.0, 0.0,
        0.0, 1.0,
        1.0, 1.0
    };
    // draw the edges
    glEnableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glBindTexture(GL_TEXTURE_2D, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);
    for (int i = 0; i < connectionNumber; i++) {
        Vertex2DSet(&vertices[0], connectionLines[i].lineVertexBeginPoint.x, connectionLines[i].lineVertexBeginPoint.y);
        Vertex2DSet(&vertices[1], connectionLines[i].lineVertexBeginPoint.x + connectionLines[i].normalVector.x, connectionLines[i].lineVertexBeginPoint.y + connectionLines[i].normalVector.y);
        Vertex2DSet(&vertices[2], connectionLines[i].lineVertexEndPoint.x, connectionLines[i].lineVertexEndPoint.y);
        Vertex2DSet(&vertices[3], connectionLines[i].lineVertexEndPoint.x + connectionLines[i].normalVector.x, connectionLines[i].lineVertexEndPoint.y + connectionLines[i].normalVector.y);
        glVertexPointer(2, GL_FLOAT, 0, vertices);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
Where the block in the for loop is a set of vertices that make up some triangle strips.
If I uncomment the glOrthof() line, I can see my display, but the projection is orthographic, and I'd like to move the camera in and out to change the scale of the whole scene.
What have I done incorrectly that causes glFrustumf() to display only the clear color?
Short answer: you are looking in the wrong direction.
Long answer:
Your frustum is symmetric while your orthographic matrix isn't, so if your model is set up to be visible in the glOrthof case, it may not be visible with your glFrustumf.
Also, you shouldn't use glOrthof AND glFrustumf together: the two matrices are multiplied and will surely yield a funny projection matrix.
You can use Nate Robins' GL tutors at http://www.xmission.com/~nate/tutors.html to experiment with glFrustum and glOrtho (in the "projection" application).
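Beyond that, two concrete things are worth checking, sketched here under the assumption that the rest of the setup matches the question (it reuses the DEGREES_TO_RADIANS macro and rect from above). First, glFrustumf generates GL_INVALID_VALUE and does nothing if zNear or zFar is not positive, so the zNear = -0.1 above fails silently. Second, with a perspective projection the geometry must lie beyond the near plane, so it has to be pushed down the -z axis on the modelview side:
const GLfloat zNear = 0.1f, zFar = 1000.0f, fieldOfView = 60.0f;
// Half-height of the near plane for the chosen vertical field of view.
GLfloat size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0f);
GLfloat aspect = rect.size.width / rect.size.height;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustumf(-size * aspect, size * aspect, -size, size, zNear, zFar);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// At z = 0 the scene sits between the camera and the near plane and is
// clipped, leaving only the clear color; move it into the viewing volume.
glTranslatef(0.0f, 0.0f, -100.0f);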