Camera frame to UIImage to OpenGL rendering gives an odd image - iphone

I'm extracting a UIImage with
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [UIImage imageWithData:imageData];
Then I create an OpenGL texture from it and render it.
Then I extract a UIImage from the framebuffer, but it comes out wrong, as you can see.
I tried playing with the texture vertices array, but the result stayed the same.
These are the coordinates:
const GLfloat squareVertices[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};
const GLfloat textureVertices[] = {
    1.0f, 1.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    0.0f, 0.0f,
};

glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);

It is a problem with your texture coordinates. Check that they match the orientation of your vertex positions.
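For what it's worth, a camera frame usually needs a vertical flip relative to the quad; here is a sketch of coordinates that pair with the squareVertices above, assuming the texture was uploaded top-row-first (as UIImage/CGContext data typically is):

// Quad order: bottom-left, bottom-right, top-left, top-right (matches squareVertices).
// Map the top texture row (t = 0) to the top of the quad so the image is not flipped or mirrored.
const GLfloat textureVertices[] = {
    0.0f, 1.0f,   // bottom-left
    1.0f, 1.0f,   // bottom-right
    0.0f, 0.0f,   // top-left
    1.0f, 0.0f,   // top-right
};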

Related

Why my iPhone OpenGL-ES texture does not cover the viewport?

I have a square image of size 320x320, from which I create an OpenGL texture. I use the most basic vertex and fragment shaders, and I want to display the texture across the entire view. The view (an EAGLView derived from UIView, as found in many OpenGL iOS samples) is also of size 320x320.
The problem is that the image is drawn in the top-left corner, covering only around 50% of the view instead of 100% of it. I don't know why.
Here is my code:
position = glGetAttribLocation(m_shaderProgram, "position");
inputTextureCoordinate = glGetAttribLocation(m_shaderProgram, "inputTextureCoordinate");
inputImageTexture = glGetUniformLocation(m_shaderProgram, "inputImageTexture");
static const GLfloat textureCoordinates[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    0.0f, 0.0f,
    1.0f, 0.0f,
};
static const GLfloat imageVertices[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};
[EAGLContext setCurrentContext:context];
glViewport(0, 0, backingWidth, backingHeight); // These are 320, 320
glUseProgram(m_shaderProgram);
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, sourceTextureID); // The texture is also of size 320x320
glUniform1i(inputImageTexture, 2);
glVertexAttribPointer(position, 2, GL_FLOAT, 0, 0, imageVertices);
glVertexAttribPointer(inputTextureCoordinate, 2, GL_FLOAT, 0, 0, textureCoordinates);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
Vertex shader:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;

void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
}
Fragment shader:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;

void main()
{
    gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
}
It turned out the problem was that the texture's dimensions were not a power of two, so the textureCoordinates need to be scaled accordingly. Inserting the following lines solved the problem:
GLfloat textureCoordinates[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    0.0f, 0.0f,
    1.0f, 0.0f,
};

// Scale the s coordinates by width / next power of two.
float nearest2sPower = 2;
while (nearest2sPower < backingWidth) {
    nearest2sPower *= 2;
}
textureCoordinates[2] = backingWidth / nearest2sPower;
textureCoordinates[6] = backingWidth / nearest2sPower;

// Scale the t coordinates by height / next power of two.
nearest2sPower = 2;
while (nearest2sPower < backingHeight) {
    nearest2sPower *= 2;
}
textureCoordinates[1] = backingHeight / nearest2sPower;
textureCoordinates[3] = backingHeight / nearest2sPower;
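The duplicated rounding logic can also be pulled into a small helper; here is a sketch (nextPowerOfTwo is a name introduced for illustration, not from the original code):

// Round a dimension up to the next power of two, with a minimum of 2.
static float nextPowerOfTwo(float dimension) {
    float pot = 2.0f;
    while (pot < dimension) {
        pot *= 2.0f;
    }
    return pot;
}

// Usage: scale the s and t coordinates that were 1.0.
textureCoordinates[2] = backingWidth  / nextPowerOfTwo(backingWidth);
textureCoordinates[6] = backingWidth  / nextPowerOfTwo(backingWidth);
textureCoordinates[1] = backingHeight / nextPowerOfTwo(backingHeight);
textureCoordinates[3] = backingHeight / nextPowerOfTwo(backingHeight);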

Applying Translations to Entire OpenGL ES Scene - iPhone

I have an OpenGL ES scene which is made up of about 20 objects. In the render method for each object I have code which scales, rotates and positions (using glMultMatrix) that object in the correct place in the scene (see code below).
My question is: how can I then apply a transformation to the entire scene as a whole? E.g. scale/enlarge the entire scene by 2x?
glPushMatrix();
glLoadIdentity();
//Move some objects.
if (hasAnimations) {
glTranslatef(kBuildingOffset);
//scale
glScalef(kModelScale);
//glMultMatrixf(testAnimation);
zRotation = kBuildingzRotation;
xRotation = kBuildingxRotation;
yRotation = kBuildingyRotation;
glRotatef(yRotation, 0.0f, 1.0, 0.0f);
glRotatef(xRotation, 1.0f, 0.0f, 0.0f);
glRotatef(zRotation, 0.0f, 0.0f, 1.0f);
//NSLog(@"ANIMATION FRAME IS %d", animationFrame);
//NSLog(@"MATRICE IS %f", animationArray[0][0]);
glMultMatrixf(animationArray[animationFrame]);
//glMultMatrixf(matricesArray);
glMultMatrixf(matricePivotArray);
//glMultMatrixf(testAnimation);
}
//First rotate our objects as required.
if ([objectName isEqualToString:@"movingobject1"]) {
glTranslatef(kFan1Position);
glScalef(kModelScale);
glMultMatrixf(matricesArray);
glTranslatef(0, 0, 0);
zRotation +=kFanRotateSpeed;
yRotation =kyFanFlip;
xRotation = kxFanRotation;
glRotatef(zRotation, 0.0f, 0.0f, 1.0f);
glRotatef(yRotation, 0.0f, 1.0f, 0.0f);
glRotatef(xRotation, 1.0f, 0.0, 0.0f);
glTranslatef(0.0, 0.0, -300);
}
if ([objectName isEqualToString:@"movingobject2"]) {
glTranslatef(kFan2Position);
glScalef(kModelScale);
glMultMatrixf(matricesArray);
glTranslatef(0, 0, 0);
zRotation +=kFanRotateSpeed;
yRotation = kyFanFlip;
xRotation = kxFanRotation;
glRotatef(-kFan3YOffset, 0.0, 1.0, 0.0);
glRotatef(zRotation, 0.0f, 0.0f, 1.0f);
glRotatef(yRotation, 0.0f, 1.0f, 0.0f);
glRotatef(xRotation, 1.0f, 0.0, 0.0f);
glRotatef(kFan3YOffset, 0.0f, 1.0f, 0.0f);
glTranslatef(0.0, 0.0, -300);
}
if ([objectName isEqualToString:@"movingobject3"]) {
glTranslatef(kFan3Position);
glScalef(kModelScale);
glMultMatrixf(matricesArray);
glTranslatef(0, 0, 0);
zRotation +=kFanRotateSpeed;
yRotation =kyFanFlip;
xRotation =kxFanRotation;
glRotatef(-kFan2YOffSet, 0.0, 1.0, 0.0);
glRotatef(zRotation, 0.0f, 0.0f, 1.0f);
glRotatef(yRotation, 0.0f, 1.0f, 0.0f);
glRotatef(xRotation, 1.0f, 0.0f, 0.0f);
glRotatef(kFan2YOffSet, 0.0, 1.0, 0.0);
glTranslatef(0.0, 0.0, -300);
}
//Then position the rest of the scene objects.
if (![objectName isEqualToString:@"movingobject1"])
if (![objectName isEqualToString:@"movingobject2"])
if (![objectName isEqualToString:@"movingobject3"])
if (!hasAnimations) {
glLoadIdentity();
glTranslatef(kBuildingOffset);
//scale
glScalef(kModelScale);
zRotation = kBuildingzRotation;
xRotation = kBuildingxRotation;
yRotation = kBuildingyRotation;
glRotatef(yRotation, 0.0f, 1.0, 0.0f);
glRotatef(xRotation, 1.0f, 0.0f, 0.0f);
glRotatef(zRotation, 0.0f, 0.0f, 1.0f);
if ([Matrices count]!=0) {
glMultMatrixf(matricesArray);
}
if (hasPivotNode) {
glMultMatrixf(matricePivotArray);
}
}
[mesh render];
glPopMatrix(); // restore the matrix
You should be able to achieve this easily enough by pushing the transform matrix you desire onto the matrix stack before you do any of your object-specific transforms, but then don't load the identity matrix each time you push another matrix onto the stack. Practically speaking, this will transform all subsequent matrix operations. This is the basic pattern...
// Push an identity matrix onto the bottom of the stack...
glPushMatrix();
glLoadIdentity();

// Now scale it, so all subsequent transforms will be
// scaled up 2x.
glScalef(2.0f, 2.0f, 2.0f);

for (Mesh *mesh in meshes) {      // pseudocode: iterate over your scene objects
    glPushMatrix();
    //glLoadIdentity();           // this would erase the scale set above
    glDoABunchOfTransforms();     // placeholder for the per-object transforms
    [mesh render];
    glPopMatrix();
}
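Applied to the render method in the question, the scene-wide transform would be pushed once per frame by whatever code iterates the roughly 20 objects, and the glLoadIdentity calls inside each object's render path would have to go. A sketch (the sceneObjects collection and the 2x factor are assumptions, not from the original code):

glPushMatrix();
glLoadIdentity();
glScalef(2.0f, 2.0f, 2.0f);            // scene-wide enlargement, applied once

for (id object in sceneObjects) {      // hypothetical container of the ~20 objects
    [object render];                   // each object's own transforms stack on top
}

glPopMatrix();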

OpenGL ES; rendering texture created from CGBitmapContext

I am executing the following, which I have derived from a few different tutorials (just a single render pass; initialisation code is not shown, but it works fine for untextured primitives):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, xSize, 0, ySize, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glEnable(GL_TEXTURE_2D);
glBlendFunc(GL_ONE, GL_SRC_COLOR);
GLuint texture[1];
glGenTextures(1, &texture[0]);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
int width = 50;
int height = 50;
void* textureData = malloc(width * height * 4);
CGColorSpaceRef cSp = CGColorSpaceCreateDeviceRGB();
CGContextRef ct = CGBitmapContextCreate(textureData, width, height, 8, width*4, cSp, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextSetRGBFillColor(ct, 0, 1, 0, 1);
CGContextFillRect(ct, CGRectMake(0, 0, 50, 50));
CGContextRelease(ct);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
float verts[] = {
    0.0f,  0.0f,  0.0f,
    50.0f, 0.0f,  0.0f,
    0.0f,  50.0f, 0.0f,
    50.0f, 50.0f, 0.0f
};
float texCords[] = {
    0.0f, 0.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f
};
glVertexPointer(3, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, texCords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisable(GL_TEXTURE_2D);
The result is a white square, not the green one intended. Can anyone spot the error(s) in my code that cause the texture to fail to render?
I hope to get this working and then move on to text rendering.
The problem is that width and height are not powers of two. There are two solutions:
1. Use the texture rectangle extension: set the texture target to GL_TEXTURE_RECTANGLE_ARB instead of GL_TEXTURE_2D. You will have to enable this extension before using it. Note that rectangle textures do not support mipmaps.
2. Use powers of two for the texture dimensions (see the sketch below).
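A sketch of the second option applied to the code above, assuming the same 50x50 content: allocate a 64x64 power-of-two backing store, draw into it, and shrink the texture coordinates so only the filled region is sampled.

// Round the 50x50 content up to a 64x64 power-of-two texture.
int texWidth = 64, texHeight = 64;
void *textureData = calloc(texWidth * texHeight, 4);   // zero-filled padding

CGColorSpaceRef cSp = CGColorSpaceCreateDeviceRGB();
CGContextRef ct = CGBitmapContextCreate(textureData, texWidth, texHeight, 8, texWidth * 4,
                                        cSp, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextSetRGBFillColor(ct, 0, 1, 0, 1);
CGContextFillRect(ct, CGRectMake(0, 0, 50, 50));       // only a 50x50 region of the bitmap is green
CGContextRelease(ct);
CGColorSpaceRelease(cSp);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, textureData);
free(textureData);

// Sample only the 50/64 portion of the texture that actually holds the image.
float texCords[] = {
    0.0f,        0.0f,
    50.0f/64.0f, 0.0f,
    0.0f,        50.0f/64.0f,
    50.0f/64.0f, 50.0f/64.0f
};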

How do I draw this exact gradient on the iPhone?

The gradient in question is Figure 8-5 from the Quartz 2D Programming Guide, "A radial gradient that varies between a point and a circle".
I'm trying to build a CGGradient object (not a CGShading object, which might be the problem) like so:
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGFloat colors[] =
{
    0, 0, 0, 0.9,
    0, 0, 0, 0
};
CGGradientRef gradient = CGGradientCreateWithColorComponents(rgb, colors, NULL,
                                                             sizeof(colors)/(sizeof(colors[0])*sizeof(CGFloat)));
CGContextClipToRect(context, rect);
CGContextDrawRadialGradient(context, gradient, startCenter, startRadius,
                            endCenter, endRadius, gradientDrawingOptions);
CGGradientRelease(gradient);
CGColorSpaceRelease(rgb);
Of course, that isn't exactly right -- the centre points and radii are correct, but the actual gradient doesn't look the same. I just wish Apple had provided the source code for each example! >:(
UPDATE: These color values add the shading on top of other content (drawing from a point out to a circle):
CGFloat colors[] =
{
    0.0f, 0.0f, 0.0f, 0.0f,
    0.0f, 0.0f, 0.0f, 0.75f
};
Using these color values is pretty close (drawing from a point out to a circle):
CGFloat colors[] =
{
    0.0f, 1.0f, 1.0f, 1.0f,
    0.0f, 0.0f, 0.0f, 0.75f
};
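For reference, a sketch that pushes those values through the same drawing path as the question's code (context, centres, and radii are the placeholders from the original snippet; the drawing option is an assumption for the point-to-circle look):

CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGFloat colors[] =
{
    0.0f, 1.0f, 1.0f, 1.0f,
    0.0f, 0.0f, 0.0f, 0.75f
};
CGGradientRef gradient = CGGradientCreateWithColorComponents(rgb, colors, NULL, 2);

// Draw from a point (start radius 0) out to the end circle, and keep
// painting the end color past the end circle, as in Figure 8-5.
CGContextDrawRadialGradient(context, gradient, startCenter, 0.0f,
                            endCenter, endRadius, kCGGradientDrawsAfterEndLocation);

CGGradientRelease(gradient);
CGColorSpaceRelease(rgb);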

issue with glDrawElements

This shows up as red:
VertexColorSet(&colors[vertexCounter], 1.0f, 0.0f, 0.0f, 1.0f);
This shows up as black:
VertexColorSet(&colors[vertexCounter], 0.9f, 0.0f, 0.0f, 1.0f);
Why is it black? Shouldn't it just be a darker shade of red?
glEnableClientState(GL_COLOR_ARRAY);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glVertexPointer(2, GL_FLOAT, 0, vertexes);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawElements(GL_TRIANGLES, 3*indexesPerButton*totalButtons, GL_UNSIGNED_SHORT, indexes);
//glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glDisableClientState(GL_COLOR_ARRAY);
And yes, it is black because I used an int instead of a float.
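That mistake is easy to reproduce; here is a minimal sketch of the assumed bug, where an integer variable truncates the component value before it reaches the color array:

int   redAsInt   = 0.9f;   // truncates to 0, so the channel is effectively 0.0
float redAsFloat = 0.9f;   // keeps 0.9

VertexColorSet(&colors[vertexCounter], redAsInt,   0.0f, 0.0f, 1.0f); // comes out black
VertexColorSet(&colors[vertexCounter], redAsFloat, 0.0f, 0.0f, 1.0f); // darker shade of red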