I downloaded the GLGravity project from Apple's site and tried loading a new model to display instead of the teapot. The model loads, but without using the defined textures.
I am trying to display the model using the following code, but the texture never appears.
// in setupView method
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, camaro_objVerts);
glNormalPointer(GL_FLOAT, 0, camaro_objNormals);
glTexCoordPointer(2, GL_FLOAT, 0, camaro_objTexCoords);
// in drawView method
// draw data
glDrawArrays(GL_TRIANGLES, 0, camaro_objNumVerts);
I have also tried disabling lighting, but the model still renders plain white, without the texture.
Have you enabled GL_TEXTURE_2D? It should look like this (+ texture binding):
glBindTexture(GL_TEXTURE_2D, textureHandle);
glEnable(GL_TEXTURE_2D);
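If the texture still comes out white after that, another thing worth checking on OpenGL ES 1.1 is texture completeness: the default minification filter expects a full mipmap chain, and a texture without one is ignored at draw time, leaving the geometry untextured. A sketch, reusing the textureHandle name from above:
glBindTexture(GL_TEXTURE_2D, textureHandle);
// the default min filter (GL_NEAREST_MIPMAP_LINEAR) requires mipmaps;
// without them, switch to a non-mipmap filter so the texture is complete
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glEnable(GL_TEXTURE_2D);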
I have a serious problem with OpenGL ES 1.1 and I can't find the solution, though I've looked over the code like a thousand times, and I haven't found a comparable problem here. I hope you guys can help me.
I watched tutorial 2 at 71squared.com (you can find it here) and downloaded the code example from the same page. It runs fine.
Then I tried to write my own code in order to adapt it to my project. Anyway, when it comes to OpenGL calls, I paid attention to using the same code as the example.
The problem is the following: my call to glClear() affects the color of the screen, but my textures are not displayed. The problem can't be caused by the UIView subclass (since the glClear() result is visible), nor by the texture-loading code, since all the ivars of the corresponding instances are computed correctly. Even the texture coordinates and vertices take normal values, the same as in the code example. So the problem must be some tiny mistake in my OpenGL usage.
These are all of my OpenGL calls:
Initializing:
CGRect rect = [[UIScreen mainScreen] bounds];
// Set up OpenGL projection matrix
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, rect.size.width, 0, rect.size.height, -1, 1);
glMatrixMode(GL_MODELVIEW);
glViewport(0, 0, rect.size.width, rect.size.height);
// Initialize OpenGL states
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDisable(GL_DEPTH_TEST);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_BLEND_SRC); // note: GL_BLEND_SRC is not a valid env mode (GL_BLEND is), so this call raises GL_INVALID_ENUM
glEnableClientState(GL_VERTEX_ARRAY);
Drawing the Texture:
(t is a pointer to a struct containing information about how to draw the graphic.)
glPushMatrix();
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glTranslatef(t->posX, t->posY, 0.0f);
glRotatef(-(t->rotation), 0.0f, 0.0f, 1.0f);
glColor4f(t->filter[0], t->filter[1], t->filter[2], t->filter[3]);
glBindTexture(GL_TEXTURE_2D, _texture.name);
glEnable(GL_BLEND);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, _texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glPopMatrix();
I think this is where the problem lies. If you want to see the code that creates my framebuffer, I can show it to you; it is, again, nearly the same as in the example.
I'm quite desperate to find the solution, as I have single-stepped through the whole code like a thousand times, but I can't find the place where I do anything differently from the code example.
Thank you in advance.
Dominik
Each time you generate a texture image, you should check that the texture is valid with this function. If the texture is not valid, you need to regenerate it after some delay:
GLboolean glIsTexture(GLuint texture);
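A minimal sketch of that check, assuming a hypothetical textureName variable holding the generated texture name:
if (!glIsTexture(textureName)) {
    // the name doesn't refer to a usable texture yet; generate and upload it again later
}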
I've modified the OpenGL ES 2.0 template in Xcode to render that little box to an offscreen texture (50x50), then reset the viewport and render the texture to the screen using a fullscreen quad. But the FPS dropped so much that there was obvious lag (down to about 10).
I know the iPad has fill-rate limitations, but this just doesn't seem right. I used only one FBO and switched its color attachment between the texture and the renderbuffer inside the loop. Does this have any influence?
Besides, I'm writing an audio visualizer (like the one in Windows Media Player) that edits pixel values in OpenGL. Any suggestions?
Here is the code:
// create the offscreen texture in -(id)init
glGenTextures(1, &ScreenTex);
glBindTexture(GL_TEXTURE_2D, ScreenTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texSize, texSize, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL); // allocate storage only; no initial data
// and in the render loop
// draw to the texture
glViewport(0, 0, texSize, texSize);
glBindTexture(GL_TEXTURE_2D, ScreenTex);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ScreenTex, 0);
glClear(GL_COLOR_BUFFER_BIT);
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glUniform1i(Htunnel, 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// switch to rendering into the renderbuffer here
glViewport(0, 0, backingWidth, backingHeight);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER,colorRenderbuffer);
glClear(GL_COLOR_BUFFER_BIT);
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, texVertices);
glUniform1i(Htunnel, 1);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// vertex shader
// (attribute/uniform/varying declarations reconstructed from the usage below)
attribute vec4 position;
attribute vec4 color;
attribute vec2 v_texCoord;
uniform float translate;
uniform int tunnel;
varying vec4 colorVarying;
varying vec2 f_texCoord;

void main()
{
    if (tunnel == 0) {
        gl_Position = position;
        gl_Position.y += sin(translate) / 2.0;
        colorVarying = color;
    } else {
        f_texCoord = v_texCoord;
        gl_Position = position;
    }
}
// fragment shader
// (declarations reconstructed from the usage below)
precision mediump float;
uniform sampler2D s_texture;
uniform int tunnel;
varying vec4 colorVarying;
varying vec2 f_texCoord;

void main()
{
    if (tunnel == 0) {
        gl_FragColor = colorVarying;
    } else {
        gl_FragColor = texture2D(s_texture, f_texCoord);
    }
}
Without actual code, it will be difficult to pick out where the bottleneck is. However, you can get an idea of where the problem is by using Instruments to localize the causes.
Create a new Instruments document using both the OpenGL ES instrument and the new Time Profiler one. In the OpenGL ES instrument, hit the little inspector button on its right side, then click on the Configure button. Make sure pretty much every logging option is checked on the resulting page, particularly the Tiler Utilization % and Renderer Utilization %. Click Done and make sure that both of those statistics are checked in the Select statistics to list page.
Run this set of instruments against your application on the iPad for a little while during rendering. Stop it and look at the numbers. As explained in Pivot's answer to my question, if you are seeing the Tiler Utilization % in the OpenGL ES instrument hitting 100%, you are being limited by your geometry (unlikely here). Likewise, if the Renderer Utilization % is near 100%, you are fill-rate limited. You can also look to the other statistics you've logged to pull out what might be happening.
You can then turn to the Time Profiler results to see if you can narrow down the hotspots in your code where things might be getting slowed down. Find the items near the top of the list there. If they are in your code, double-click on them to see what's going on. If they are in system libraries, filter the results until you see something more relevant by right-clicking on the symbol name and choosing either Charge Library to Callers or Charge Symbol to Caller.
At some point, you'll start seeing OpenGL-related symbols up there, which should clue you in to what the GPU is doing. Also, you may be surprised to find some of your own code slowing things down.
There's another OpenGL ES instrument that you might try, but it's part of the Xcode 4 beta and is currently under NDA. Check out the WWDC 2010 session videos for more about that one.
I can comfortably render a scene to a texture and map that texture back onto a framebuffer for screen display. But what if I wanted to map the texture back onto itself in order to blur it (say, at a quarter opacity in a new location)? Is that possible?
The way I've done it is simply to enable the texture:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, color_tex);
And then draw to it:
glVertexPointer(2, GL_FLOAT, 0, sv);        // screen-space quad vertices
glTexCoordPointer(2, GL_FLOAT, 0, tcb1);    // texture coordinates for the quad
glColor4f(1.0f, 1.0f, 1.0f, 0.25f);         // quarter opacity
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
(some code omitted, obviously)
Is there anything obviously wrong with that idea? Am I being an idiot?
No, you can't sample a texture while rendering into that same texture; doing so triggers undefined behaviour.
But you can use a technique called ping-pong rendering: you draw the result of the operation into a second texture, and if you need to do more processing, you write the next result back into the first texture, alternating between the two.
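A minimal sketch of the idea, assuming two framebuffer/texture pairs set up elsewhere (each tex[i] attached to fbo[i]) and hypothetical drawFullscreenQuad() and numPasses names; on OpenGL ES 1.1 the framebuffer calls carry the OES suffix:
GLuint fbo[2], tex[2];  // assumed: each tex[i] is the color attachment of fbo[i]
int src = 0, dst = 1;
for (int pass = 0; pass < numPasses; pass++) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);  // render into the destination texture
    glBindTexture(GL_TEXTURE_2D, tex[src]);       // sample from the source texture
    drawFullscreenQuad();                         // one blur/feedback pass
    int tmp = src; src = dst; dst = tmp;          // swap roles for the next pass
}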
I'm using OpenGL ES to draw things in my iPhone game. Sometimes I like to change the alpha of the textures I'm drawing. Here is the (working) code I use. What, if anything, in this code sample is unnecessary? Thanks!
// draw texture with alpha "myAlpha"
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvf(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
glTexEnvf(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PRIMARY_COLOR);
glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvf(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_TEXTURE);
glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(myAlpha, myAlpha, myAlpha, myAlpha);
glPushMatrix();
glTranslatef(xLoc, yLoc, 0);
[MyTexture drawAtPoint:CGPointZero];
glPopMatrix();
glColor4f(1.0, 1.0, 1.0, 1.0);
Edit:
The above code sample is for drawing with a modified alpha value (so I can fade things in and out). When I just want to draw without modifying the alpha value, I use the last 5 lines of the above sample, just without the last call to glColor4f.
My drawing looks like:
glBindTexture(GL_TEXTURE_2D, tex->name);        // bind the texture by its name
glVertexPointer(3, GL_FLOAT, 0, vertices);      // vertices holds the quad's triangle-strip positions
glTexCoordPointer(2, GL_FLOAT, 0, coordinates); // coordinates describes the texture rectangle
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);          // draw
It's hard to know what you can remove without knowing exactly what you're trying to do.
edit:
Unless you are disabling blend at some point, you can get rid of all of the blend calls:
glEnable(GL_BLEND);
glBlendFunc...
Put them in the initialization of your GL state if you only need to set them once. If you do need to enable and disable blending, it is better to set the state once, draw everything that needs blending, then set the state again (once) and draw everything that doesn't.
OpenGL is a state machine, so this general idea applies anywhere you need to change GL state (such as setting the texture environment with glTexEnvf).
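For example, a minimal sketch of that batching idea (the two draw helpers are hypothetical):
// set shared state once, then group draws by the state they need
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawAllBlendedSprites();    // hypothetical: everything that needs blending
glDisable(GL_BLEND);
drawAllOpaqueGeometry();    // hypothetical: everything that doesn't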
About state machines:
A current state is determined by past states of the system. As such, it can be said to record information about the past, i.e., it reflects the input changes from the system start to the present moment. A transition indicates a state change and is described by a condition that would need to be fulfilled to enable the transition.
I'm not an expert, but I was reading Optimizing OpenGL ES for iPhone OS, and it has a section on "Optimizing Texturing" which may help you out.
You have calls to configure both the Texture Environment and Framebuffer Blending, which are entirely different features. You are also calling glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND1_RGB, x) twice before drawing, and so only the second call matters.
If all you really want to do is multiply the primary color by the texture color, then it is as simple as glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE). The rest of the calls to glTexEnv are unnecessary.
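A minimal sketch of the reduced version, keeping the premultiplied-style blend function from the original code (whether GL_ONE or GL_SRC_ALPHA is the right source factor depends on whether your texture data is premultiplied):
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); // texture color * primary color
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);                 // premultiplied-alpha blending
glColor4f(myAlpha, myAlpha, myAlpha, myAlpha);               // fade all channels together
[MyTexture drawAtPoint:CGPointZero];                         // as in the question's code
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);                           // restore the default color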
I took the GLPaint example and I'm trying to put a background into the PaintingView, so you can draw over the background and finally save the image as a file. I'm lost.
I'm loading a 512x512 PNG and trying to "paint with it" at the very beginning of the program, but it's painted at 64x64 instead of 512x512.
I tried earlier to load it as a subview of the painting view, but then glReadPixels doesn't work as expected (it only takes the PaintingView into account, not the subview). Also, PaintingView doesn't have a method like initWithImage. I NEED glReadPixels to work on the image (and on the modifications), but I really don't know why the texture ends up 64x64 when I load it.
The GLPaint example project uses GL_POINT_SPRITE to draw copies of the brush texture as you move the brush. On the iPhone, glPointSize is limited to 64 pixels, so a point sprite tops out at 64x64. This is a hardware limitation; in the simulator, I think you can make it larger.
It sounds like you're trying to use a GL_POINT_SPRITE method to draw your background image, and that's really not what you want. Instead, try drawing a flat, textured box that fills the screen.
Here's a bit of OpenGL code that sets up vertices and texcoords for a 2D box and then draws it:
const GLfloat vertices[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
};
const GLfloat texcoords[] = {
0, 0,
1, 0,
0, 1,
1, 1,
};
glVertexPointer(2, GL_FLOAT, 0, vertices);
glEnableClientState(GL_VERTEX_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Hope that helps! Note that you need to specify the vertices differently depending on how your camera projection is set up. In my case, I set up my GL_MODELVIEW using the code below - I'm not sure how the GLPaint example does it.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glOrthof(0, 1.0, 0, 1.0, -1, 1);
First, glReadPixels() is only going to see whatever framebuffer is associated with your current OpenGL context. That might explain why you're not getting the pixels you expect.
Second, what do you mean by the texture being rendered at a specific pixel size? I assume the texture is rendered as a quad, and then the size of that quad ought to be under your control, code-wise.
Also, check that loading the texture doesn't generate an OpenGL error; I'm not sure what the iPhone's texture size limits are, and it's quite conceivable that 512x512 is out of range. You could of course investigate this yourself by calling glGetIntegerv() with the GL_MAX_TEXTURE_SIZE constant.
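A minimal sketch of both checks (the NSLog output is just for illustration):
GLenum err = glGetError();                     // any error pending from the texture upload?
if (err != GL_NO_ERROR)
    NSLog(@"GL error after texture upload: 0x%04X", err);

GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);  // largest supported texture dimension
NSLog(@"GL_MAX_TEXTURE_SIZE = %d", maxSize);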