So I just started working on something in OpenGL ES 2.0. I get the general impression that the switch from the 1.1 template to the 2.0 template in Xcode has caused some confusion for everyone, so as a result there isn't much helpful material on 2.0. (If there is anything really good and informative out there for the 2.0 template, like 71squared's videos for the 1.1 template, you're welcome to post a link to it.)
My problem is displaying an image on the screen.
Right now, I've got this in my drawFrame method.
[(EAGLView *)self.view setFramebuffer];

// Replace the implementation of this method to do your own custom drawing.
static float transY = 0.0f;

glClearColor(0.0f, 1.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

[playerCube drawAtPoint:CGPointMake(160.0f, 240.0f)];

if ([context API] == kEAGLRenderingAPIOpenGLES2)
{
    // Use shader program.
    glUseProgram(program);

    // Update uniform value.
    glUniform1f(uniforms[UNIFORM_TRANSLATE], (GLfloat)transY);
    transY += 0.075f;

    // Validate program before drawing. This is a good check, but only really
    // necessary in a debug build. The DEBUG macro must be defined in your
    // debug configurations if that's not already the case.
    if (![self validateProgram:program])
    {
        NSLog(@"Failed to validate program: %d", program);
        return;
    }
}
else
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, (GLfloat)(sinf(transY) / 2.0f), 0.0f);
    transY += 0.075f;
}

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

[(EAGLView *)self.view presentFramebuffer];
Essentially the template without the square bouncing around. This is in my ViewController.m, by the way. That works fine on its own. I've not had much luck with the code for displaying the image, however. I'm using the Texture2D class from Apple's Crash Landing example, but I haven't had much luck with it no matter where the image-related code is put.
I've used playerCube = [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"Cubix.png"]]; to allocate it, which is probably right even if I've not been putting it in the right location, and [playerCube drawAtPoint:CGPointMake(160.0f, 240.0f)]; to draw it.
Wait, that wouldn't show anything at the start because I've not set the height and width of the image, right? I guess I sort of answered that part of my question. Would it work if I used something like image.frame = CGRectMake(); or would that not work with Texture2D? Also, where should I be putting this code for it to work as it should?
Hope I made some sense in this. I've never posted a question on StackOverflow before.
I show how to display an image as a texture in OpenGL ES 2.0 on the iPhone in this example, where the image is a frame of video from the camera, and this example, where I pull in a PVR-compressed texture as an image. The former example is described in the writeup here.
I describe how the vertex and fragment shaders work, along with all the supporting code for them, in the video for the OpenGL ES 2.0 class which is part of my freely available iOS development course on iTunes U. Additionally, I highly recommend you read Jeff LaMarche's series of posted chapters from his unpublished OpenGL ES 2.0 book.
In short, once you have the texture loaded in, you'll need to attach it as a uniform to your shader program using code like the following:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, myTexture);
glUniform1i(myTextureUniform, 0);
Within your fragment shader, you'll need to define the texture uniform:
uniform sampler2D myTexture;
and then sample the color from the texture at the appropriate point:
gl_FragColor = texture2D(myTexture, textureCoordinate);
Again, I go into this in more detail within my class, and you can use these examples as starting points to work from.
Texture2D, as presented in Crash Landing, isn't going to work with ES 2.x. It should correctly upload your texture, but the two drawing methods, drawAtPoint: and drawInRect:, won't work. They rely on the GL calls glVertexPointer and glTexCoordPointer; neither of those survives into ES 2.x.
The logic behind that is that ES 1.x has a fixed pipeline, so it has a defined slot for vertex positions (which you supply with glVertexPointer), a defined slot for texture coordinates (via glTexCoordPointer), and a series of predefined ways in which that data can end up being processed. ES 2.x is fully programmable, so it has no need for one way of providing vertices, another for providing texture coordinates, a third for colours, and so on. It has just one way to supply an attribute for a vertex; it's up to you to specify how that relates to your vertex program, and then up to your vertex program to figure out what needs to be processed and passed on to your fragment shader.
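For example, supplying a quad's positions and texture coordinates through that single attribute mechanism might look like the sketch below. The attribute names and the interleaved layout are illustrative assumptions, not anything taken from Crash Landing or the template.

// After glLinkProgram(program): look up the generic attribute slots.
GLint positionAttribute = glGetAttribLocation(program, "position");
GLint texCoordAttribute = glGetAttribLocation(program, "inputTextureCoordinate");

// Interleaved client-side data: x, y, s, t for each corner of a quad.
static const GLfloat quad[] = {
    -1.0f, -1.0f,    0.0f, 0.0f,
     1.0f, -1.0f,    1.0f, 0.0f,
    -1.0f,  1.0f,    0.0f, 1.0f,
     1.0f,  1.0f,    1.0f, 1.0f,
};

glEnableVertexAttribArray(positionAttribute);
glEnableVertexAttribArray(texCoordAttribute);

// Tell GL how each attribute maps onto the interleaved array.
glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, GL_FALSE,
                      4 * sizeof(GLfloat), quad);
glVertexAttribPointer(texCoordAttribute, 2, GL_FLOAT, GL_FALSE,
                      4 * sizeof(GLfloat), quad + 2);

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);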
Really you shouldn't be conflating providing textures and providing geometry anyway. Crash Landing was withdrawn because it suggests poor programming patterns — my personal suspicion is that this was one of them.
In addition to not having fixed attributes of a vertex, ES 2.x doesn't supply any fragment shaders. If you want to texture map, you have to write suitable vertex and fragment shaders. The ones in the new GL template only do Gouraud shading, if I recall correctly.
An additional issue hinted at in the code is that ES 2.x doesn't have a matrix stack. You need to do something yourself about communicating to your vertex shader how it should map from the original vertices to screen locations. I suspect that if you just want to draw a texture as though in 2D, that isn't so much of a problem.
Probably the correct route forward is:
- modify Texture2D so that it can bind the texture it has loaded, and remove all attempts to draw it
- modify the vertex and fragment shaders supplied in the GL template to do texturing (a sketch follows this list), and modify the buffer you submit so that it has texture coordinates in it
- then modify the vertex shader so that you can do something other than a sine animation
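A minimal textured pair, loosely adapted from the template's shaders, might look like the following. The attribute, varying, and uniform names here are assumptions for illustration, not the template's exact code.

// Vertex shader: pass the position through, hand the UV to the fragment shader.
attribute vec4 position;
attribute vec2 inputTextureCoordinate;

varying vec2 textureCoordinate;

uniform float translate; // the template's sine-animation uniform; drop if unused

void main()
{
    gl_Position = position;
    gl_Position.y += sin(translate) / 2.0;
    textureCoordinate = inputTextureCoordinate;
}

// Fragment shader: sample the bound texture at the interpolated coordinate.
precision mediump float;

varying vec2 textureCoordinate;
uniform sampler2D textureSampler;

void main()
{
    gl_FragColor = texture2D(textureSampler, textureCoordinate);
}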
If you're already happy with ES 1.x, probably a good place to start is the ES 2.x reference sheet and maybe the lighthouse3d GLSL tutorial — though that's for desktop OpenGL 2.0. On desktop GL 2.0 there's a lot of support built into GLSL for emulating the old fixed pipeline. None of that is in ES 2.x. But it should at least explain things about attributes, uniforms, samplers, etc. I'm afraid I've yet to come across an ES 2.x focussed tutorial.
Related
I'm nervous to ask this, because I've seen several posts alluding to the answer, but none have worked for me. Apologies if this is repetitive.
I'm trying to access more than one texture (2 at the moment) in my fragment shader on iPhone 4 (OS 4.3). My code is properly setting up the first texture and the shader can read and use it on my model. Once I try to add a second texture, though, it overwrites the first instead of giving me two textures.
Is there a good tutorial on passing multiple textures to a pixel shader in ES 2.0 for iPhone? I think I'm improperly overwriting the same texture slot or something like that.
My code is messy and lengthy. Much of it is modified online samples. Will clean up later.
Thanks!
link:
My code excerpts pertaining to texture loading and shader usage
This one is simple: swap the "// Bind to textures" and "// Get uniform locations" blocks and it should work. The problem is that you are setting the values of some uniforms without knowing their locations, which you only get in the immediately following step.
So the effect that is happening is not the one texture being overwritten; rather, both samplers contain the default value of 0, so both read from the same texture unit.
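In code, the working order looks something like the sketch below; the uniform and texture names are placeholders, not the poster's actual identifiers.

// 1. Get the sampler locations first (the program must already be linked).
glUseProgram(program);
GLint firstTextureUniform  = glGetUniformLocation(program, "firstTexture");
GLint secondTextureUniform = glGetUniformLocation(program, "secondTexture");

// 2. Only then bind each texture to its own unit and point each sampler at it.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, firstTexture);
glUniform1i(firstTextureUniform, 0);   // sampler reads from unit 0

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, secondTexture);
glUniform1i(secondTextureUniform, 1);  // sampler reads from unit 1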
Hope this helps.
I need help setting up multi-pass rendering with OpenGL ES 2.0 on the iPhone. I haven't been able to find an example which implements both rendering to a texture and multi-pass shading.
I'm looking for some instructions and sample code which implement:
First stage: Render to a texture
Second stage: Input that texture and render to screen
I have referenced Apple's OpenGL ES Programming Guide, OpenGL Shading Language (Orange Book), and O'Reilly's iPhone 3D Programming Book.
The Orange Book discusses deferred shading and provides two shader programs for first-pass and second-pass rendering, but doesn't provide example code to setup that application or show how to communicate data between both shaders.
Questions:
- How to render to a texture? (Using glDrawElements.)
- How to input that texture to the next pass?
- How to implement two shading programs?
- How to alternate first- and second-pass shading programs? Do I need to attach, detach, and call 'use' for each pass?
- How to implement multi-pass shading?
I wrote a short example of doing just this (multiple render-to-texture passes on the iPhone using OpenGL ES 2.0) a few weeks ago: http://www.mat.ucsb.edu/a.forbes/blog/?p=245
Edit: this post is a bit old, and it has moved here:
http://blog.angusforbes.com/openglglsl-render-to-texture/
Ok, first of all: I'm no expert on OpenGL ES 2.0. I was kind of in the same situation where I wanted to do a multipass render setup, in one of my first OpenGL ES applications.
I also used the Orange Book. Check chapter 12. Framebuffer Objects > Examples. The first example demonstrates how to use a framebuffer to render to a texture, and then draws that texture to screen.
Basically using that example I created an application that renders some geometry to a texture using an effect shader, then renders that texture to screen, layered with some other content all using a different shader.
I'm not sure if this is the best approach, but it works for my purposes. My setup:
- I create two framebuffers, the default one and an offscreen one. Same for the renderbuffers.
- I create a texture which the app will render to.
- I bind the offscreen framebuffer and attach the texture to it using glFramebufferTexture2D (see the sketch below).
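A rough sketch of that setup, assuming placeholder names (offscreenFramebuffer, renderTexture) and an arbitrary 512x512 size; the default framebuffer is whatever the template already created:

// Create the offscreen framebuffer and the texture it will render into.
GLuint offscreenFramebuffer, renderTexture;

glGenFramebuffers(1, &offscreenFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer);

glGenTextures(1, &renderTexture);
glBindTexture(GL_TEXTURE_2D, renderTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);  // no initial data

// Attach the texture as the color target of the offscreen framebuffer.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, renderTexture, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"Offscreen framebuffer is incomplete");
}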
My rendering:
- bind the offscreen framebuffer
- use my first shader program
- draw my geometry
- bind the default framebuffer
- use my second shader program
- draw a fullscreen quad with the texture attached to it (see the sketch below)
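Per frame, that flow might look like this; firstProgram, secondProgram, defaultFramebuffer, backingWidth/backingHeight, and textureUniform are all placeholders:

// Pass 1: render the geometry into the texture via the offscreen framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer);
glViewport(0, 0, 512, 512);              // match the texture's size
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(firstProgram);
// ... set firstProgram's uniforms and attributes, then draw the geometry ...

// Pass 2: render to screen, sampling the texture produced by pass 1.
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
glViewport(0, 0, backingWidth, backingHeight);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(secondProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, renderTexture);
glUniform1i(textureUniform, 0);          // secondProgram's sampler, on unit 0
// ... draw a fullscreen quad whose UVs cover [0, 1] ...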
Does anyone know a tutorial that explains how to shade an object to look like silver metal (on iPhone)?
Maybe starting with a sphere like in this:
http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-part-5-living.html
Or can this not be accomplished without the new shaders in 2.0?
Thanks
Sebastian
What you're looking for is called environment mapping. This can be done using sphere mapping (which works on very simple hardware) or cube mapping.
Cube mapping could be done long before pixel shaders became popular, but it seems cube maps are an extension to OpenGL ES 1.1, so the iPhone may or may not implement it (quick googling suggests not, but I didn't try).
Sphere mapping should be supported in ES. It has been in OpenGL since the beginning, I believe.
Anyway, to clarify: these methods only transform texture coordinates, so they need not work at the pixel level. Hence a pixel shader is unnecessary. However, using a pixel shader you could do more advanced stuff like bump mapping, which would give your object more of a "surface".
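As a sketch of what that coordinate transform amounts to, the classic sphere-map UVs can be computed per vertex on the CPU from an eye-space normal and view direction; this is illustrative only and assumes normalized inputs:

#include <math.h>

// Sphere-map texture coordinates: reflect the view vector u about the
// normal n, then project the reflection onto the sphere map.
static void sphereMapUV(const float n[3], const float u[3], float *s, float *t)
{
    // r = u - 2 n (n . u)
    float d = n[0]*u[0] + n[1]*u[1] + n[2]*u[2];
    float r[3] = { u[0] - 2.0f*n[0]*d,
                   u[1] - 2.0f*n[1]*d,
                   u[2] - 2.0f*n[2]*d };

    // Standard sphere-map projection (m is zero only for r = (0, 0, -1)).
    float m = 2.0f * sqrtf(r[0]*r[0] + r[1]*r[1] + (r[2] + 1.0f)*(r[2] + 1.0f));
    *s = r[0]/m + 0.5f;
    *t = r[1]/m + 0.5f;
}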
Try something like this, transliterated to ES.
How big is the difference between the description language of Quartz 2D and that of OpenGL ES?
It seems they are similar in descriptive power... except that Quartz is mostly 2D and OpenGL is 3D out of the box (but can be made 2D-focused).
Are the mappings from 2D Quartz to 2D OpenGL ES that different? I'm sure there must be differences in some specific features that might be handled differently on one versus the other... but enough to rule out writing a translator?
Does anyone with experience in both OpenGL and Quartz 2D have some insights?
Quartz and OpenGL ES are two completely different animals. While they both have a C-based API that deals with a state machine and that draws into a context, their purposes are dissimilar. In Quartz you specify lines, Bezier and quadratic curves, arcs, or rectangles, as well as fills, gradients, and shadows / glows. In OpenGL ES, you provide vertices, raster textures, and lighting information, from which a scene is generated.
They are both useful in particular cases. You might draw a 2-D static element using Quartz, into a view, layer, or texture, and then place and move that view or layer in 3-D space using Core Animation or do the same for a texture using OpenGL ES.
Rather than try to overlay one API on the other, use whichever is more appropriate for what you are doing, or look to a framework like cocos2d, which lets you build and animate 2-D scenes, or Core Animation, where you can do Quartz drawing into a layer but still use a nicely abstracted API for moving these layers around.
I've been trying to Google for what I mentioned in the title, but somehow I can't find it. This should not be that hard, should it?
What I am looking for is a way to gain access to an OpenGL ES texture on the iPhone, and a way to get and set pixels within it. What are the OpenGL ES functions I am looking for?
Before OpenGL ES is able to see your texture, you should have loaded it into memory already, generated a texture name (glGenTextures), and bound it (glBindTexture). Your texture data is just a big array in memory.
Therefore, should you wish to change a single texel, you can manipulate it in memory and then upload it again (glTexSubImage2D can re-upload just the changed region). This approach is usually taken for procedural texture generation. There are many available resources on the net about it, for instance: http://www.blumtnwerx.com/blog/2009/06/opengl-es-texture-mapping-for-iphone-oolong-powervr/
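For instance, changing one texel of an RGBA texture and pushing just that texel back to the GPU copy might look like this; textureData, textureWidth, and myTexture are placeholders for your own data:

// Modify the in-memory copy: paint texel (x, y) opaque red.
int x = 10, y = 20;
unsigned char *texel = textureData + (y * textureWidth + x) * 4;
texel[0] = 255; texel[1] = 0; texel[2] = 0; texel[3] = 255;

// Re-upload only the 1x1 region that changed.
glBindTexture(GL_TEXTURE_2D, myTexture);
glTexSubImage2D(GL_TEXTURE_2D, 0,    // mipmap level 0
                x, y, 1, 1,          // a 1x1 region at (x, y)
                GL_RGBA, GL_UNSIGNED_BYTE, texel);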
While glReadPixels is available, there are very few situations where you'd need to use it in interactive applications (screen capture comes to mind). It absolutely destroys performance, and it still won't give you back the original textures; instead it returns a block of the framebuffer.
I have no idea what kind of effect you are looking for. However, if you are targeting a device that supports pixel shaders, perhaps a custom pixel shader can do what you want.
Of course, I am working under the assumption you didn't mean pixel as in screen coordinates.
I don't know about setting an individual pixel, but glReadPixels can read a block of pixels from the framebuffer (http://www.khronos.org/opengles/sdk/docs/man/glReadPixels.xml). Your trouble googling may be because texture pixels are often shortened to 'texels'.
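For completeness, reading back a block of the framebuffer looks roughly like this; GL_RGBA with GL_UNSIGNED_BYTE is the combination ES 2.0 always supports, and the call stalls the pipeline, so use it sparingly:

// Read a w x h block of RGBA pixels starting at (x, y), in window coordinates.
GLubyte *pixels = (GLubyte *)malloc(w * h * 4);
glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// ... inspect or save the pixel data ...
free(pixels);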