OpenGL ES: rendering a texture created from CGBitmapContext - iPhone

I am executing the following, which I have derived from a few different tutorials (just a single render pass; the initialisation code is not shown, but it works fine for untextured primitives):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, xSize, 0, ySize, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glEnable(GL_TEXTURE_2D);
glBlendFunc(GL_ONE, GL_SRC_COLOR);
GLuint texture[1];
glGenTextures(1, &texture[0]);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
int width = 50;
int height = 50;
void* textureData = malloc(width * height * 4);
CGColorSpaceRef cSp = CGColorSpaceCreateDeviceRGB();
CGContextRef ct = CGBitmapContextCreate(textureData, width, height, 8, width*4, cSp, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextSetRGBFillColor(ct, 0, 1, 0, 1);
CGContextFillRect(ct, CGRectMake(0, 0, 50, 50));
CGContextRelease(ct);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
float verts[] = {
0.0f, 0.0f, 0.0f,
50.0f, 0.0f, 0.0f,
0.0f, 50.0f, 0.0f,
50.0f, 50.0f, 0.0f
};
float texCords[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f
};
glVertexPointer(3, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, texCords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisable(GL_TEXTURE_2D);
The result is a white square, not the green one as intended. Can anyone spot the error(s) in my code that cause it to fail to render?
I hope to get this working and then move on to text rendering.

The problem is that the texture's width and height (50) are not powers of two. There are two solutions:
1. Use the rectangle-texture extension: bind to the rectangle texture target (GL_TEXTURE_RECTANGLE_ARB on desktop GL) instead of GL_TEXTURE_2D. You have to enable the extension before using it, and rectangle textures do not support mipmaps. Note that this extension is generally not available under OpenGL ES on the iPhone.
2. Use power-of-two dimensions for the texture.
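A minimal sketch of solution 2, adapted from the code in the question (the 64x64 size is just the next power of two above 50; since the whole texture is filled with solid green, the original 0-1 texture coordinates and 50x50 quad can stay unchanged):
// Round the 50x50 bitmap up to the next power of two (64x64) -- sizes here are illustrative.
int texWidth = 64;
int texHeight = 64;
void* textureData = calloc(texWidth * texHeight, 4);   // zero-filled RGBA buffer
CGColorSpaceRef cSp = CGColorSpaceCreateDeviceRGB();
CGContextRef ct = CGBitmapContextCreate(textureData, texWidth, texHeight, 8, texWidth * 4, cSp, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextSetRGBFillColor(ct, 0, 1, 0, 1);
CGContextFillRect(ct, CGRectMake(0, 0, texWidth, texHeight));   // fill the whole texture with green
CGContextRelease(ct);
CGColorSpaceRelease(cSp);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
free(textureData);   // GL has copied the pixels, so the buffer can be released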

Related

iPhone OpenGL texture not completely transparent

I tried to paint a transparent texture over a sphere, but the transparent areas are not completely transparent; a visible shade of gray remains. I tried to load a Photoshop-generated PNG and then paint it onto the sphere using the code below:
My code to load textures:
- (void) loadPNGTexture: (int)index Name: (NSString*) name{
CGImageRef imageRef = [UIImage imageNamed:[NSString stringWithFormat:@"%@.png", name]].CGImage;
GLsizei width = CGImageGetWidth(imageRef);
GLsizei height = CGImageGetHeight(imageRef);
GLubyte * data = malloc(width * 4 * height);
if (!data)
NSLog(#"error allocating memory for texture loading!");
else {
NSLog(#"Memory allocated for %#", name);
}
NSLog(#"Width : %d, Height :%d",width,height);
CGContextRef cg_context = CGBitmapContextCreate(data, width, height, 8, 4 * width, CGImageGetColorSpace(imageRef), kCGImageAlphaPremultipliedLast);//kCGImageAlphaPremultipliedLast);
CGContextTranslateCTM(cg_context, 0, height);
CGContextScaleCTM(cg_context, 1, -1);
CGContextDrawImage(cg_context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(cg_context);
CGContextSetBlendMode(cg_context, kCGBlendModeCopy); //kCGBlendModeCopy);
glGenTextures(2, m_texture[index]);
glBindTexture(GL_TEXTURE_2D, m_texture[index][0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
free(data);
}
Drawing clouds:
glPushMatrix();
glTranslatef(0, 0, 3 );
glScalef(3.1, 3.1, 3.1);
glRotatef(-1, 0, 0, 1);
glRotatef(90, -1, 0, 0);
glDisable(GL_LIGHTING);
//Load Texture for left side of globe
glBindTexture(GL_TEXTURE_2D, m_texture[CLOUD_TEXTURE][0]);
glVertexPointer(3, GL_FLOAT, sizeof(TexturedVertexData3D), &VertexData[0].vertex);
glNormalPointer(GL_FLOAT, sizeof(TexturedVertexData3D), &VertexData[0].normal);
glTexCoordPointer(2, GL_FLOAT, sizeof(TexturedVertexData3D), &VertexData[0].texCoord);
// draw the sphere
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_COPY);
glDrawArrays(GL_TRIANGLES, 0, 11520);
glEnable(GL_LIGHTING);
glPopMatrix();
The first thing that stands out in your code is this line:
glBlendFunc(GL_SRC_ALPHA, GL_COPY);
The second argument (GL_COPY) is not a valid argument for glBlendFunc.
You might want to change it to something along the lines of:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
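As a minimal sketch of how that fits into the cloud-drawing code above (keeping everything else unchanged); note that because the texture was drawn through a CGBitmapContext with kCGImageAlphaPremultipliedLast, its alpha is premultiplied, so (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) is also worth trying:
glEnable(GL_BLEND);
// Straight-alpha blending, as suggested above:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// If the texture has premultiplied alpha (typical of CGBitmapContext output), try instead:
// glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glDrawArrays(GL_TRIANGLES, 0, 11520);
glDisable(GL_BLEND);
glEnable(GL_LIGHTING);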

Camera frame to UIImage to OpenGL rendering gives an odd image

I'm extracting a UIImage with
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [UIImage imageWithData:imageData];
Then I create an OpenGL texture from it and render it.
Then I extract a UIImage from the framebuffer, but it comes out wrong, as you can see.
I tried playing with the texture coordinates array, but it stayed the same.
These are the coordinates:
const GLfloat squareVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
const GLfloat textureVertices[] = {
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
0.0f, 0.0f,
};
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
The problem is with your texture coordinates. Check them carefully.
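For comparison, a mapping that pairs each vertex with the matching corner of the texture would look like the sketch below. Depending on how the camera image was uploaded and its capture orientation, you may still need a vertical flip or a 90-degree rotation, so treat this as a starting point rather than the final values:
const GLfloat squareVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
// One-to-one mapping: the bottom-left vertex samples (0, 0), the top-right samples (1, 1).
const GLfloat textureVertices[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
};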

Draw Square with OpenGL ES for iOS

I am trying to draw a rectangle using the GLPaint example project provided by Apple. I have tried modifying the vertices but cannot get a rectangle to appear on the screen. The finger painting works perfectly. Am I missing something in my renderRect method?
- (void)renderRect {
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
// Replace the implementation of this method to do your own custom drawing.
static const GLfloat squareVertices[] = {
-0.5f, -0.33f,
0.5f, -0.33f,
-0.5f, 0.33f,
0.5f, 0.33f,
};
static float transY = 0.0f;
glTranslatef(0.0f, (GLfloat)(sinf(transY)/2.0f), 0.0f);
// Render the vertex array
glVertexPointer(2, GL_FLOAT, 0, squareVertices);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Display the buffer
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
The rest of the project is the stock setup that allows drawing on the screen, but for reference these are the GL settings that are applied:
// Set the view's scale factor
self.contentScaleFactor = 1.0;
// Setup OpenGL states
glMatrixMode(GL_PROJECTION);
CGRect frame = self.bounds;
CGFloat scale = self.contentScaleFactor;
// Setup the view port in Pixels
glOrthof(0, frame.size.width * scale, 0, frame.size.height * scale, -1, 1);
glViewport(0, 0, frame.size.width * scale, frame.size.height * scale);
glMatrixMode(GL_MODELVIEW);
glDisable(GL_DITHER);
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnable(GL_BLEND);
// Set a blending function appropriate for premultiplied alpha pixel data
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_POINT_SPRITE_OES);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(width / brushScale);
static const GLfloat squareVertices[] = {
30.0f, 300.0f,//-0.5f, -0.33f,
280.0f, 300.0f,//0.5f, -0.33f,
30.0f, 170.0f,//-0.5f, 0.33f,
280.0f, 170.0f,//0.5f, 0.33f,
};
That's definitely too much. OpenGL's normalized device coordinates are in the range [-1, 1], so you have to convert device coordinates to normalized ones.
Issues are:
(1) the following code:
glMatrixMode(GL_PROJECTION);
CGRect frame = self.bounds;
CGFloat scale = self.contentScaleFactor;
// Setup the view port in Pixels
glOrthof(0, frame.size.width * scale, 0, frame.size.height * scale, -1, 1);
glViewport(0, 0, frame.size.width * scale, frame.size.height * scale);
establishes that the on-screen coordinates range from (0, 0) at the lower left to frame.size at the upper right. In other words, one OpenGL unit is one iPhone point. So your array of:
static const GLfloat squareVertices[] = {
-0.5f, -0.33f,
0.5f, -0.33f,
-0.5f, 0.33f,
0.5f, 0.33f,
};
is less than one pixel in size.
(2) you have the following in the setup:
brushImage = [UIImage imageNamed:@"Particle.png"].CGImage;
/* ...brushImage eventually becomes the current texture... */
glEnable(GL_TEXTURE_2D);
You subsequently fail to supply texture coordinates for your quad. Probably you want to disable GL_TEXTURE_2D.
So the following:
static const GLfloat squareVertices[] = {
0.0f, 0.0f,
0.0, 10.0f,
90.0, 0.0f,
90.0f, 10.0f,
};
glDisable(GL_TEXTURE_2D);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
// Render the vertex array
glVertexPointer(2, GL_FLOAT, 0, squareVertices);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
will produce a white quad 90 points wide and 10 points tall in the lower-left corner of the screen.
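Putting both fixes into the original renderRect gives something like the following sketch (it reuses the context, viewFramebuffer, and viewRenderbuffer names from the question and assumes the stock GLPaint state otherwise):
- (void)renderRect {
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
// Vertices in point coordinates, to match the glOrthof projection set up above.
static const GLfloat squareVertices[] = {
0.0f, 0.0f,
0.0f, 10.0f,
90.0f, 0.0f,
90.0f, 10.0f,
};
glDisable(GL_TEXTURE_2D);   // no texture coordinates are supplied for this quad
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glVertexPointer(2, GL_FLOAT, 0, squareVertices);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glEnable(GL_TEXTURE_2D);    // restore state for the brush strokes
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}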

Issue with glDrawElements

This shows up with the color red:
VertexColorSet(&colors[vertexCounter], 1.0f, 0.0f, 0.0f, 1.0f);
This shows the color black:
VertexColorSet(&colors[vertexCounter], 0.9f, 0.0f, 0.0f, 1.0f);
Why is it black? Shouldn't it just be a darker shade of red?
glEnableClientState(GL_COLOR_ARRAY);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glVertexPointer(2, GL_FLOAT, 0, vertexes);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawElements(GL_TRIANGLES, 3*indexesPerButton*totalButtons, GL_UNSIGNED_SHORT, indexes);
//glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glDisableClientState(GL_COLOR_ARRAY);
And yes, it is black because I used an int instead of a float.
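The truncation is easy to see in isolation; a minimal illustration (these variables are hypothetical, not taken from the question's VertexColorSet):
int r_int = 0.9f;     // implicit conversion truncates the fraction: r_int == 0 -> black
float r_float = 0.9f; // keeps 0.9 -> a darker shade of red, as expected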

glFrustumf displays only clear color, glOrthof displays as expected (OpenGL ES)

I'm new to OpenGL, so I'm sure this is a dumb mistake, but I've read every post and reviewed sample code, and I can't find a difference that explains why glFrustum won't display as I'd like it to.
I initialize OpenGL like this:
- (void) initOpenGL{
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
//glLoadIdentity();
//glOrthof(0.0f, self.bounds.size.width, self.bounds.size.height, 0.0f, -10.0f, 10.0f);
const GLfloat zNear = -0.1, zFar = 1000.0, fieldOfView = 60.0;
GLfloat size;
size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
// This give us the size of the iPhone display
CGRect rect = self.bounds;
glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size / (rect.size.width / rect.size.height), zNear, zFar);
glViewport(0, 0, rect.size.width, rect.size.height);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
#if 0
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
#endif
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
#if TARGET_IPHONE_SIMULATOR
glColor4f(0.0, 0.0, 0.0, 0.0f);
#else
glColor4f(0.0, 0.0, 0.0, 0.0f);
#endif
[[Texture2D alloc] initWithImage:[UIImage imageNamed:@"GreenLineTex.png"] filter:GL_LINEAR];
glInitialised = YES;
}
And my drawing is done like this:
- (void)drawView {
if(!glInitialised) {
[self initOpenGL];
}
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnable(GL_TEXTURE_2D);
static const GLfloat texCoords[] = {
0.0, 0.0,
1.0, 0.0,
0.0, 1.0,
1.0, 1.0
};
// draw the edges
glEnableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glBindTexture(GL_TEXTURE_2D, 1);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);
for (int i = 0; i < connectionNumber; i++){
Vertex2DSet(&vertices[0], connectionLines[i].lineVertexBeginPoint.x, connectionLines[i].lineVertexBeginPoint.y);
Vertex2DSet(&vertices[1], connectionLines[i].lineVertexBeginPoint.x+connectionLines[i].normalVector.x, connectionLines[i].lineVertexBeginPoint.y+connectionLines[i].normalVector.y);
Vertex2DSet(&vertices[2], connectionLines[i].lineVertexEndPoint.x, connectionLines[i].lineVertexEndPoint.y);
Vertex2DSet(&vertices[3], connectionLines[i].lineVertexEndPoint.x+connectionLines[i].normalVector.x, connectionLines[i].lineVertexEndPoint.y+connectionLines[i].normalVector.y);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
Where the block in the for loop is a set of vertices that make up some triangle strips.
If I uncomment the glOrthof() line, then I can see my display; however, it's orthographic, and I'd like to move the camera in and out to change the scaling of the whole scene.
What have I done incorrectly that causes glFrustumf() to display only the clear color?
Short answer: you are looking in the wrong direction.
Long answer:
Your frustum is symmetric while your orthographic matrix isn't, so if your model is set up to be visible in the glOrtho case, it may not be visible with your glFrustum.
Also, you shouldn't apply glOrtho and glFrustum together without reloading the identity matrix in between, because the matrices are multiplied and will surely yield a strange projection matrix.
You can use Nate Robins' GL tutors at http://www.xmission.com/~nate/tutors.html to experiment with glFrustum and glOrtho (in the "projection" application).
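For experimenting, a sketch of a perspective setup that keeps the existing point-space line geometry visible might look like the following. Note that glFrustumf expects positive zNear and zFar values; the translation distances are purely illustrative and assume the geometry is built in screen-point coordinates around z = 0:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
const GLfloat zNear = 0.1f, zFar = 1000.0f, fieldOfView = 60.0f;
CGRect rect = self.bounds;
GLfloat aspect = rect.size.width / rect.size.height;
GLfloat size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0f);
// Symmetric frustum with positive near/far values (unlike the -0.1 in the question).
glFrustumf(-size, size, -size / aspect, size / aspect, zNear, zFar);
glViewport(0, 0, rect.size.width, rect.size.height);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// Push the scene away from the eye and roughly center it so it falls inside the view volume.
glTranslatef(-rect.size.width / 2.0f, -rect.size.height / 2.0f, -500.0f);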