iPhone: Texture bigger than 64x64?

I took the GLPaint example... I'm trying to put a background into the "PaintingView" so you can draw over the background and finally save the image as a file... but I'm lost.
I'm loading a 512x512 PNG and trying to "paint with it" at the very beginning of the program, but it's painted at 64x64 instead of 512x512...
I tried loading it as a subview of the painting view before, but then glReadPixels doesn't work as expected (it only takes the PaintingView into account, not the subview). Also, the PaintingView doesn't have a method like initWithImage. I NEED glReadPixels to work on the image (including the modifications), but I really don't know why the texture comes out at 64x64 when I load it.

The GLPaint example project uses GL_POINT_SPRITE to draw copies of the brush texture as you move the brush. On the iPhone the point size is limited to 64 pixels, so point sprites max out at 64x64; this is a hardware limitation, though in the simulator I think you can make it larger.
It sounds like you're trying to use a GL_POINT_SPRITE method to draw your background image, and that's really not what you want. Instead, try drawing a flat, textured box that fills the screen.
Here's a bit of OpenGL code that sets up vertices and texcoords for a 2D box and then draws it:
const GLfloat vertices[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
};
const GLfloat texcoords[] = {
0, 0,
1, 0,
0, 1,
1, 1,
};
glVertexPointer(2, GL_FLOAT, 0, vertices);
glEnableClientState(GL_VERTEX_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Hope that helps! Note that you need to specify the vertices differently depending on how your camera projection is set up. In my case, I set up my GL_MODELVIEW using the code below - I'm not sure how the GLPaint example does it.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glOrthof(0, 1.0, 0, 1.0, -1, 1);

First, glReadPixels() is only going to see whatever framebuffer is associated with your current OpenGL context. That might explain why you're not getting the pixels you expect.
Second, what do you mean by the texture being rendered at a specific pixel size? I assume the texture is rendered as a quad, and then the size of that quad ought to be under your control, code-wise.
Also, check that loading the texture doesn't generate an OpenGL error. I'm not sure what the iPhone's limits on texture sizes are; it's quite conceivable that 512x512 is out of range. You can investigate this yourself by calling glGetIntegerv() with the GL_MAX_TEXTURE_SIZE constant.
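As an aside, OpenGL ES 1.1 on the iPhone also restricts textures to power-of-two dimensions, so it's worth sanity-checking the image size before upload. Here is an illustrative sketch in plain C; the helper names are my own, and the `max_size` argument stands in for whatever glGetIntegerv(GL_MAX_TEXTURE_SIZE, ...) reports at runtime:

```c
#include <assert.h>
#include <stdbool.h>

/* OpenGL ES 1.1 requires power-of-two texture dimensions. */
static bool is_power_of_two(unsigned int n) {
    return n != 0 && (n & (n - 1)) == 0;
}

/* Validate a candidate texture size against the power-of-two rule and a
 * maximum size queried elsewhere via glGetIntegerv(GL_MAX_TEXTURE_SIZE, ...). */
static bool texture_size_ok(unsigned int w, unsigned int h, unsigned int max_size) {
    return is_power_of_two(w) && is_power_of_two(h)
        && w <= max_size && h <= max_size;
}
```

A 512x512 image passes the power-of-two test, so if it fails to load the limit reported at runtime is the next thing to check.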

Related

Set UIImage as a background for 2D OpenGL ES on iPhone

Once again I'm asking for help after quite a bit of research.
I need to create a view where the user can place an image in the background and draw lines/dots (touch events) on top of it, then save the "sketch" by pressing a save button.
So after that research I decided to pick up this code and build on top of it, because it already does half of what I want (the drawing).
The sample uses OpenGL for the drawing, and I basically don't care whether it's OpenGL or Core Graphics as long as it works.
The problem is how to put an image in the background of the EAGLView in this sample code. My research turned up only suggestions aimed at experienced OpenGL developers, not a working code snippet/solution.
If somebody can help me with this I would really appreciate it.
What I need is just a sample of how to put a UIImage into the EAGLView background, so the user can then draw on top of it (I already have that code) and save the result.
One usually doesn't mix OpenGL with ordinary UIKit views. Besides, drawing a background image using OpenGL is trivial:
First you need to load the image into a texture. In GLPaint an image file is already loaded as the brush texture; see the initWithCoder function in
https://github.com/omeryavuz/glpaint/blob/master/Classes/PaintingView.m
To draw a background, the first thing you draw after clearing the framebuffer is a fullscreen quad with that texture. If you build upon GLPaint, the projection and modelview matrices and the vertex array state are already set up properly, so it boils down to:
GLfloat vert[] = {0,0, frame.size.width,0, frame.size.width,frame.size.height, 0,frame.size.height};
GLfloat tex[] = {0,0, 1,0, 1,1, 0,1};
GLushort indexes[] = {0, 1, 2, 2, 3, 0};
glBindTexture(GL_TEXTURE_2D, backgroundTexture);
glEnable(GL_TEXTURE_2D);
glVertexPointer(2, GL_FLOAT, 0, vert);
glTexCoordPointer(2, GL_FLOAT, 0, tex);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indexes);
In PaintingView.m, on line 89, set eaglLayer.opaque = NO;.
In your viewController, put a UIImageView or whatever behind the paintingView.
Note: This will probably decrease performance.
Note: It might not work at first; the OpenGL layer may overwrite itself with some default background color before rendering a frame. EDIT: On line 304 in PaintingView.m, try setting the clear color to glClearColor(1.0, 1.0, 1.0, 0.0);. I'm not sure this works, and I don't have time to test it. If it doesn't, wait till Brad Larson comes around, sees your question, and answers it perfectly ;)

OpenGL textures are not displayed, though code is almost identical to a code example

I have a serious problem with OpenGL ES 1.1 and I can't find the solution though I've looked over it like a thousand times, and I haven't found anything like a comparable problem here. I hope you guys can help me.
I watched tutorial 2 at 71squared.com (you can find it here) and downloaded the code example from the same page. It runs fine.
Then I tried to write my own code in order to adapt it to my project. Anyway, when it comes to OpenGL calls, I paid attention to using the same code as the example.
The problem is the following: my call to glClear() affects the color of the screen, but my textures are not displayed. The problem can't be caused by the UIView subclass (since the glClear() result is displayed), nor by the texture-loading code, as all ivars of the corresponding instances are calculated correctly. Even the texture coordinates and vertices take on normal values, the same as in the code example. So the problem must be some tiny mistake I made using OpenGL.
These are all of my OpenGL calls:
Initializing:
CGRect rect = [[UIScreen mainScreen] bounds];
// Set up OpenGL projection matrix
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, rect.size.width, 0, rect.size.height, -1, 1);
glMatrixMode(GL_MODELVIEW);
glViewport(0, 0, rect.size.width, rect.size.height);
// Initialize OpenGL states
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDisable(GL_DEPTH_TEST);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_BLEND_SRC);
glEnableClientState(GL_VERTEX_ARRAY);
Drawing the Texture:
(t is a pointer to a struct containing information about how to draw the graphic.)
glPushMatrix();
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glTranslatef(t->posX, t->posY, 0.0f);
glRotatef(-(t->rotation), 0.0f, 0.0f, 1.0f);
glColor4f(t->filter[0], t->filter[1], t->filter[2], t->filter[3]);
glBindTexture(GL_TEXTURE_2D, _texture.name);
glEnable(GL_BLEND);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, _texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glPopMatrix();
I think this is where the problem is caused. If you want to see my code creating a frame buffer, I can show it to you. It is, again, nearly the same as in the example, though.
I'm quite desperate to find the solution, as I have single-stepped through the whole code like a thousand times, but I can't find the place where I do anything different from the code example.
Thank you in advance.
Dominik
Each time you generate a texture image, check that the texture is valid with:
GLboolean glIsTexture(GLuint texture);
If the texture is not valid, you need to regenerate it after a short delay.

Rendering A Texture Onto Itself In OpenGL ES

I can comfortably render a scene to a texture and map that texture back onto a framebuffer for screen display. But what if I wanted to map the texture back onto itself in order to blur it (say, at a quarter opacity in a new location). Is that possible?
The way I've done it is simply to enable the texture:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, color_tex);
And then draw to it:
glVertexPointer(2, GL_FLOAT, 0, sv);
glTexCoordPointer(2, GL_FLOAT, 0, tcb1);
glColor4f (1.0f,1.0f,1.0f,0.25f);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
(some code omitted, obviously)
Is there anything obviously wrong with that idea? Am I being an idiot?
No, you can't render a texture onto itself; that triggers undefined behaviour.
But you can use a technique called ping-pong rendering: draw the result of the operation into a second texture, and if you need more processing passes, write the next result back into the first texture.
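A minimal sketch of the bookkeeping behind ping-pong rendering, assuming two color textures each attached to its own framebuffer object (the struct and function names here are hypothetical, not from any GL API):

```c
#include <assert.h>

/* Ping-pong rendering: two texture/FBO slots; each pass reads from `src`
 * and writes to the other slot, then the roles swap. The texture IDs are
 * placeholders for whatever glGenTextures returned. */
typedef struct {
    unsigned int tex[2]; /* two color textures, each attached to its own FBO */
    int src;             /* index of the texture currently being read */
} PingPong;

static unsigned int pingpong_read_tex(const PingPong *p)  { return p->tex[p->src]; }
static unsigned int pingpong_write_tex(const PingPong *p) { return p->tex[1 - p->src]; }

/* Call after each pass: the texture just written becomes the next source. */
static void pingpong_swap(PingPong *p) { p->src = 1 - p->src; }
```

Each blur pass would bind `pingpong_read_tex()` as the sampled texture, render into the FBO holding `pingpong_write_tex()`, then call `pingpong_swap()`.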

Parallax backgrounds in OpenGL ES on the iPhone

I've got basically a 2d game on the iPhone and I'm trying to set up multiple backgrounds that scroll at different speeds (known as parallax backgrounds).
So my thought was to just stick the backgrounds BEHIND the foreground using different z-coordinate planes, and just make them bigger than the foreground (in size) to accommodate, so that the whole thing can be scrolled (just at a different speed).
And (as far as I know) I've basically implemented that. The only problem is that it seems to entirely ignore whatever z-value I give it, or rather it zeroes all of them. I see the background (I've only tested ONE background so far, to keep it simple, so for now I just have a foreground and one background scrolling at a different speed), but it scrolls 1:1 with my foreground, so it obviously doesn't look right, and most of it is cut off (because it's bigger). I've tried various z-values for the background and various near/far clipping planes; it's always the same. I'm probably just doing one simple thing wrong, but I can't figure it out. I'm wondering if it has to do with me using only 2 coordinates in glVertexPointer for the foreground? (For the background I AM passing in 3.)
I'll post some code:
This is some initial setup:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -10.0f, 10.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnableClientState(GL_VERTEX_ARRAY);
//glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
//transparency
glEnable (GL_BLEND);
glBlendFunc (GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
A little bit about my foreground's float array: it's interleaved. For my foreground it goes vertex x, vertex y, texture x, texture y, repeat. This all works just fine.
This is my FOREGROUND rendering:
glVertexPointer(2, GL_FLOAT, 4*sizeof(GLfloat), texes);
glTexCoordPointer(2, GL_FLOAT, 4*sizeof(GLfloat), (GLvoid*)texes + 2*sizeof(GLfloat));
glDrawArrays(GL_TRIANGLES, 0, indexCount / 4);
BACKGROUND rendering:
Same drill here, except this time it goes vertex x, vertex y, vertex z, texture x, texture y, repeat. Note the z value this time. I made sure the data in this array was correct while debugging (the z values are right). And again, it shows up; it's just not receding into the distance like it should.
glVertexPointer(3, GL_FLOAT, 5*sizeof(GLfloat), b1Texes);
glTexCoordPointer(2, GL_FLOAT, 5*sizeof(GLfloat), (GLvoid*)b1Texes + 3*sizeof(GLfloat));
glDrawArrays(GL_TRIANGLES, 0, b1IndexCount / 5);
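As a sanity check on the interleaved layout above, the stride and byte offset those two pointer calls need can be made explicit with a struct. This is just illustrative C (the `BgVertex` name is mine), but the arithmetic must match what's passed to glVertexPointer/glTexCoordPointer:

```c
#include <assert.h>
#include <stddef.h>

/* Interleaved background vertex: x, y, z, then texture s, t - matching the
 * 5-float-per-vertex layout above. A struct makes the stride and the
 * texcoord byte offset explicit. */
typedef struct {
    float x, y, z;  /* position: 3 floats */
    float s, t;     /* texcoord: 2 floats */
} BgVertex;

/* Stride passed to both pointer calls. */
enum { BG_STRIDE = sizeof(BgVertex) };
/* Byte offset of the texcoords inside one vertex. */
enum { BG_TEX_OFFSET = offsetof(BgVertex, s) };
```

With this layout the calls become glVertexPointer(3, GL_FLOAT, BG_STRIDE, verts) and glTexCoordPointer(2, GL_FLOAT, BG_STRIDE, (GLubyte *)verts + BG_TEX_OFFSET).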
And to move my camera, I just do a simple glTranslatef(x, y, 0.0f);
I don't understand what I'm doing wrong, because this seems like the most basic 3D behaviour imaginable: things further away are smaller and don't move as fast when the camera moves. That's not the case for me. It seems like it should be pretty basic and not even really be affected by my projection and all that (though I've even tried glFrustum just for fun, with no success). Please help, I feel like it's just one dumb thing. I will post more code if necessary.
Shot in the dark...
You may have forgotten to set up depth buffering within the framebuffer initializer.
Copy&Paste from Apple's older EAGLView templates:
glGenRenderbuffersOES(1, &depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
If you depend on blending, you must draw in depth order, meaning draw the furthest (deepest) layer first. Otherwise the far layers will be masked by the layers drawn before them, because the z-buffer value is written even where the area is 100% transparent.
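One way to guarantee that depth order is to sort the blended layers back-to-front before issuing draw calls. A sketch under stated assumptions (the `Layer` struct is hypothetical; the convention assumed is a camera looking down -z, so more negative z means further away):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical layer record: z position plus an identifier. */
typedef struct { float z; int id; } Layer;

static int layer_cmp(const void *a, const void *b) {
    float za = ((const Layer *)a)->z, zb = ((const Layer *)b)->z;
    return (za > zb) - (za < zb); /* ascending z: most negative (furthest) first */
}

/* Sort so the furthest layer is drawn first, nearest last. */
static void sort_back_to_front(Layer *layers, size_t n) {
    qsort(layers, n, sizeof(Layer), layer_cmp);
}
```

After sorting, draw the layers in array order and blending composites them correctly regardless of z-buffer writes.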
I've figured out that I was using an orthographic projection, which applies no perspective, so objects further away are not drawn smaller and produce no parallax (please correct me if I'm wrong on this). When I tried glFrustum earlier (as I stated in my question), I had set it up wrong: I was using a negative value for the near-clipping plane, and I basically got the same 1:1 scrolling problem as with orthographic. I changed it to 0.01, and it finally started displaying correctly (backgrounds rendered further away).
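The underlying math makes the difference clear: under a perspective projection, screen-space x is scaled by near/-z, so layers at greater depth shift less as the camera translates; under an orthographic projection, z never enters the mapping. A simplified sketch (these helper functions are illustrative only, ignoring viewport scaling):

```c
#include <assert.h>

/* Screen-space x of a point after the camera has translated by `cam_x`.
 * Convention: camera looks down -z, so visible points have z < 0. */

/* Orthographic: x maps straight through; z plays no role, hence no parallax. */
static float ortho_screen_x(float x, float z, float cam_x) {
    (void)z;
    return x - cam_x;
}

/* Perspective: x is scaled by near/-z, so deeper layers shift less. */
static float persp_screen_x(float x, float z, float cam_x, float near_z) {
    return (x - cam_x) * near_z / -z;
}
```

Moving the camera one unit shifts a layer at z = -2 by half a unit on screen, and a layer at z = -4 by only a quarter, which is exactly the parallax effect the orthographic path can't produce.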
My issue is resolved but just as a side idea, I'm now wondering if I can mix orthographic and perspective within the same frame, and what that would require. Because I'd rather keep the foreground very simple and orthographic (2d), but I want my backgrounds to display with the perspective depth.
My idea was something like:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -10.0f, 10.0f);
//render foreground
glLoadIdentity();
glFrustumf(-1.0f, 1.0f, -1.5f, 1.5f, 0.01f, 1000.0f);
//render backgrounds
I will play around with this and comment with my results, in case anyone is curious. Feedback on this would be appreciated, though technically I have no pressing need on this issue anymore (from here on out it would just be idea discussion).

What am I doing that's unnecessary? (iPhone, OpenGL ES)

I'm using OpenGL ES to draw things in my iPhone game. Sometimes I like to change the alpha of the textures I'm drawing. Here is the (working) code I use. What, if anything, in this code sample is unnecessary? Thanks!
// draw texture with alpha "myAlpha"
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,GL_COMBINE );
glTexEnvf(GL_TEXTURE_ENV, GL_COMBINE_RGB,GL_MODULATE);
glTexEnvf(GL_TEXTURE_ENV, GL_SRC0_RGB,GL_PRIMARY_COLOR);
glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND0_RGB,GL_SRC_COLOR);
glTexEnvf(GL_TEXTURE_ENV, GL_SRC1_RGB,GL_TEXTURE);
glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND1_RGB,GL_SRC_COLOR);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(myAlpha, myAlpha, myAlpha, myAlpha);
glPushMatrix();
glTranslatef(xLoc, yLoc, 0);
[MyTexture drawAtPoint:CGPointZero];
glPopMatrix();
glColor4f(1.0, 1.0, 1.0, 1.0);
Edit:
The above code sample is for drawing with a modified alpha value (so I can fade things in and out). When I just want to draw without modifying the alpha value, I use the last 5 lines of the above sample, minus the final glColor4f call.
My drawing looks like:
glBindTexture(GL_TEXTURE_2D, tex->name); // bind the texture by name
glVertexPointer(3, GL_FLOAT, 0, vertices); // vertices lays out the rectangle as a triangle strip
glTexCoordPointer(2, GL_FLOAT, 0, coordinates); // coordinates describes the rectangle
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); //draw
It's hard to know what you can remove without knowing exactly what you're trying to do.
edit:
Unless you are disabling blend at some point, you can get rid of all of the blend calls:
glEnable(GL_BLEND);
glBlendFunc...
Put them in the initialization of your GL state if you only need to set them once. If you do need to enable and disable, it is better to set the state once for everything you need to draw blended, and then set the state again (once) for everything unblended.
OpenGL is a state machine so this general idea applies anywhere you need to change the GL state (like setting texture environment with glTexEnvf).
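One common way to exploit that state-machine nature is a small state cache that skips redundant changes entirely. This is an illustrative sketch, not real GL code; the counter stands in for the actual glEnable/glDisable(GL_BLEND) calls:

```c
#include <assert.h>
#include <stdbool.h>

/* Track the current blend state and only issue a (hypothetical) GL call
 * when it actually changes. */
static bool blend_enabled = false;
static int gl_calls_issued = 0;

static void set_blend(bool enable) {
    if (enable == blend_enabled)
        return;                 /* redundant change: skip the GL call */
    blend_enabled = enable;
    gl_calls_issued++;          /* here you'd call glEnable/glDisable(GL_BLEND) */
}
```

The same caching idea applies to any state you toggle often, such as the texture environment set with glTexEnvf.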
About state machines:
"A current state is determined by past states of the system. As such, it can be said to record information about the past, i.e., it reflects the input changes from the system start to the present moment. A transition indicates a state change and is described by a condition that would need to be fulfilled to enable the transition."
I'm not an expert, but I was reading Optimizing OpenGL ES for iPhone OS, and it has a section on "Optimizing Texturing" which may help you out.
You have calls to configure both the Texture Environment and Framebuffer Blending, which are entirely different features. You are also calling glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND1_RGB, x) twice before drawing, and so only the second call matters.
If all you really want to do is multiply the primary color by the texture color, then it is as simple as glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE). The rest of the calls to glTexEnv are unnecessary.
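For the fading itself, the reason glColor4f(myAlpha, myAlpha, myAlpha, myAlpha) with GL_MODULATE and glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) works is that textures loaded through Core Graphics are typically premultiplied by alpha, so scaling every component by the fade factor keeps the premultiplication consistent. The per-channel math, as an illustrative sketch (function name is mine):

```c
#include <assert.h>

/* Per-channel blend math for a premultiplied-alpha source modulated by a
 * fade factor. GL_MODULATE with color (fade, fade, fade, fade) scales both
 * color and alpha; glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) composites:
 *   out = src * fade + dst * (1 - srcA * fade)
 */
static float blend_premul(float src, float src_a, float dst, float fade) {
    float s = src * fade;               /* modulated, premultiplied source */
    float a = src_a * fade;             /* alpha modulated the same way */
    return s * 1.0f + dst * (1.0f - a); /* GL_ONE, GL_ONE_MINUS_SRC_ALPHA */
}
```

At fade = 1 the source fully replaces the destination where it is opaque; at fade = 0 the destination is untouched, giving a clean fade in between.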