In OpenGL ES 2.0 for iOS, how can I use a CVPixelBufferRef to update a cubemap texture?

I have managed to get a CVPixelBufferRef from an AVPlayer to feed pixel data that I can use to texture a 2D object. When my pixelbuffer has data in it I do:
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(
    kCFAllocatorDefault,
    videoTextureCache_,
    pixelBuffer,        // this is a CVPixelBufferRef
    NULL,
    GL_TEXTURE_2D,
    GL_RGBA,
    frameWidth,
    frameHeight,
    GL_BGRA,
    GL_UNSIGNED_BYTE,
    0,
    &texture);
I would like to use this buffer to create a GL_TEXTURE_CUBE_MAP. My video frame data is actually six sections in one image (e.g. a cubestrip) that together make up the sides of a cube. Any thoughts on a way to do this?
I had thought to just pretend my GL_TEXTURE_2D was a GL_TEXTURE_CUBE_MAP and replace the texture on my skybox with the texture generated by the code above, but this creates a distorted mess (as I suppose should be expected when trying to force a skybox to be textured with a GL_TEXTURE_2D).
The other idea was to set up unpacking with glPixelStorei and then read from the pixel buffer:
glPixelStorei(GL_UNPACK_ROW_LENGTH, width);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, X);
glPixelStorei(GL_UNPACK_SKIP_ROWS, Y);
glTexImage2D(..., CVPixelBufferGetBaseAddress(pixelBuffer));
But unbelievably, GL_UNPACK_ROW_LENGTH is not supported in OpenGL ES 2.0 for iOS.
So, is there:
- Any way to split up the pixel data in my CVPixelBufferRef by indexing the buffer to some pixel subset before using it to make a texture? (A rough sketch of that idea follows this list.)
- Any way to make six new GL_TEXTURE_2D textures as indexed subsets of the GL_TEXTURE_2D created by the code above?
- Any way to convert a GL_TEXTURE_2D to a valid GL_TEXTURE_CUBE_MAP? (For example, GLKit has a skybox effect that loads a GL_TEXTURE_CUBE_MAP from a single cubestrip file. It doesn't have a method to load a texture from memory, though, or I would be sorted.)
- Any other ideas?
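For reference, here is a rough sketch of that first idea (untested; it assumes BGRA data, a horizontal strip of six equally sized faces, and a cubeMapTexture already created with glGenTextures): lock the buffer, copy each face's rows into a contiguous scratch buffer on the CPU, and upload it to the matching cube-map face. Note that this copies every frame through the CPU, so it gives up the zero-copy benefit of the texture cache.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
uint8_t *base      = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t faceWidth   = CVPixelBufferGetWidth(pixelBuffer) / 6;
size_t faceHeight  = CVPixelBufferGetHeight(pixelBuffer);
uint8_t *scratch   = (uint8_t *)malloc(faceWidth * faceHeight * 4);

glBindTexture(GL_TEXTURE_CUBE_MAP, cubeMapTexture);  // hypothetical cube-map texture name
for (int face = 0; face < 6; face++) {
    // Copy this face row by row, since GL_UNPACK_ROW_LENGTH isn't available
    // to skip over the rest of the strip.
    for (size_t row = 0; row < faceHeight; row++) {
        memcpy(scratch + row * faceWidth * 4,
               base + row * bytesPerRow + face * faceWidth * 4,
               faceWidth * 4);
    }
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGBA,
                 (GLsizei)faceWidth, (GLsizei)faceHeight, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, scratch);
}
free(scratch);
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);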

If it were impossible any other way (which is unlikely; there is probably an alternate way, so this is probably not the best answer and involves more work than necessary), here is a hack I'd try:
A cube map works by projecting the texture for each face from a point at the center of the geometry out toward that face. So you could reproduce that behavior yourself: use projective texturing to make six draw calls, one for each face of your cube. Each time, you'd first draw the face you're interested in to the stencil buffer, then calculate the projection matrix for your texture (this technique is used a lot for 'spotlight' effects in games), then figure out the transform required in the fragment shader's texture read so that, for each face, only the portion of the texture that corresponds to that face winds up within the (0..1) texture lookup range. If everything goes right, anything outside the 0..1 range is discarded by the stencil buffer, and you're left with a DIY cube map built out of a TEXTURE_2D.
The above method is actually really similar to what I'm doing for an app right now, except I'm only using projective texturing to mask off and replace a small portion of the cube map. I need to pixel-match the edges of the small square I'm projecting so that it's applied seamlessly to the skybox, which is why I'm confident this method really does reproduce the cube-map behavior -- otherwise the pixel-matching wouldn't be possible.
Anyway, I hope you find a way to simply convert your 2D texture to a cube map, because that would probably be much easier and cleaner.
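One more possibility, sketched here under the assumption that you already have a plain textured-quad shader program and a cube-map texture with storage allocated for each face: in ES 2.0 you can attach each cube-map face to a framebuffer object and draw a quad that samples just that face's sixth of the 2D video texture, so the split stays on the GPU. Here videoTex would be CVOpenGLESTextureGetName(texture) from the code in the question; cubeTex, faceSize, defaultFramebuffer, and drawQuad() are placeholders, not real API.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

for (int face = 0; face < 6; face++) {
    // Render into this cube-map face (check glCheckFramebufferStatus in real code).
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, cubeTex, 0);
    glViewport(0, 0, faceSize, faceSize);
    glBindTexture(GL_TEXTURE_2D, videoTex);
    // Texture coordinates selecting this face's sixth of the strip.
    GLfloat s0 = face / 6.0f;
    GLfloat s1 = (face + 1) / 6.0f;
    drawQuad(s0, 0.0f, s1, 1.0f);   // hypothetical helper: quad with a pass-through shader
}
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);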

Related

Build a big tiled texture from other textures

I am making a Unity 2D RTS game and I thought of using one big texture for the tiled map (instead of a lot of separate textures, for memory reasons).
The tiled map is supposed to be generated randomly at runtime, so I don't want to save a texture and load it. I want the map to be generated and then built from a set of texture resources.
So, I have little tile textures of grass/forest/hills etc., and after I generate the map randomly I need to draw those little textures onto my big map texture so I can use it as my map.
How can I draw a texture from my resources onto another texture? I saw there are only the Get/SetPixel functions... I could use them to copy all the pixels one by one to the big texture, but is there something easier?
Is my approach to the map OK? (Is it better than just creating a lot of texture tiles side by side? Is there a better solution?)
The correct way to create a large tiled map would be to compose it from smaller, approximately-screen-sized chunks. Unity will correctly not draw the chunks that are off the screen.
As for your question about copying to a texture: I have not done this before in Unity, but this process is called Blitting, and there just happens to be a method in Unity called Graphics.Blit(). It takes a source texture and copies it into a destination texture, which sounds like exactly what you're looking for. However, it requires Unity Pro :(
There is also SetPixels(), but it sounds like this function does the processing on the CPU rather than the GPU, so it's going to be extremely slow/resource-intensive.
Well, after more searching I discovered the GetPixels/SetPixels methods:
Texture2D sourceTex = ...; // get it from somewhere
var pix = sourceTex.GetPixels(x, y, width, height); // get the block of pixels
var destTex = new Texture2D(width, height); // create a new texture to copy the pixels into
destTex.SetPixels(pix);
destTex.Apply(); // important: saves the changes

OpenGL: optimizing render of quad particles

I'm rendering particles in a 2D game. Each particle is a quad (two triangles). How can I make the drawing as fast as possible? All the particles share the same texture; I'm only changing their positions.
Right now I'm making a call to glVertexPointer and glDrawArrays for each particle, so I'm sending 4 vertices to the GPU at a time.
Is there any other approach that could be faster?
I'm using OpenGL ES 1.1 (iPhone)
Thanks!
Every draw call you make (glDrawArrays) is expensive. Doing this once per particle is DEFINITELY way too often. All your particles can be drawn with a single draw call; just set up a big array of all the triangle verts and another big array with the texture coords, and call glVertexPointer/glDrawArrays once-- that's the power of glVertexPointer: arbitrary geometry of the same type in one call. :)
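A minimal sketch of that batching (ES 1.1, using GL_TRIANGLES for simplicity; MAX_PARTICLES, particles[], and particleCount stand in for your own particle data):
// Build one big position array and one big texture-coordinate array,
// then draw every particle with a single call.
GLfloat verts[MAX_PARTICLES * 12];      // 6 vertices * 2 floats per particle
GLfloat texCoords[MAX_PARTICLES * 12];

int v = 0;
for (int i = 0; i < particleCount; i++) {
    float x = particles[i].x, y = particles[i].y, s = particles[i].size * 0.5f;
    // Two triangles per quad: (TL, BL, BR) and (TL, BR, TR).
    GLfloat quad[12] = { x - s, y + s,   x - s, y - s,   x + s, y - s,
                         x - s, y + s,   x + s, y - s,   x + s, y + s };
    GLfloat uv[12]   = { 0, 1,   0, 0,   1, 0,   0, 1,   1, 0,   1, 1 };
    memcpy(&verts[v], quad, sizeof(quad));
    memcpy(&texCoords[v], uv, sizeof(uv));
    v += 12;
}

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLES, 0, particleCount * 6);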
For what you're doing, you should also look into point sprites (GL_POINTS), which also function as tiny textured quads. They're 2D only, so you can't map your texture into the Z axis, but if your particles are just 2D quads of the same texture over and over, point sprites will likely do exactly what you want.
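For reference, a point-sprite setup on ES 1.1 might look roughly like this (positions and particleCount are placeholders; the texture is bound as usual):
glEnable(GL_POINT_SPRITE_OES);
glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE); // generate texcoords across each point
glPointSize(16.0f);                                            // sprite size in pixels

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, positions); // one x,y pair per particle
glDrawArrays(GL_POINTS, 0, particleCount);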
There's a way to do that all in one draw routine. I THINK it's by adding an extra vertex after each quad, which is the same as the previous vertex, but I could be wrong.
EDIT: After looking into it a bit, it looks like you need two in between; essentially one after, and one before. It does add up to quite a few extra vertexes, but I know from experience that it makes a HUGE positive difference on the iPhone to do it all in one draw operation (we were drawing text from a texture, so essentially the same thing).
EDIT2: Also note, I'm referring to using GL_TRIANGLE_STRIP - if you were using GL_TRIANGLES instead, you wouldn't need the extra vertices... except then you'd be adding about the same number of extra vertices anyway, since each quad's second triangle repeats two of them.
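To make the strip-stitching concrete, here's a sketch of the degenerate-vertex pattern (appendVertex and the quads array are placeholders for however you build your vertex data):
// Join per-quad strips into one big GL_TRIANGLE_STRIP by repeating the last
// vertex of one quad and the first vertex of the next; the repeats produce
// zero-area (degenerate) triangles that the GPU skips.
for (int i = 0; i < quadCount; i++) {
    if (i > 0)
        appendVertex(quads[i].topLeft);     // duplicate of this quad's first vertex
    appendVertex(quads[i].topLeft);
    appendVertex(quads[i].bottomLeft);
    appendVertex(quads[i].topRight);
    appendVertex(quads[i].bottomRight);
    if (i < quadCount - 1)
        appendVertex(quads[i].bottomRight); // duplicate of this quad's last vertex
}
GLsizei vertexCount = quadCount * 4 + (quadCount - 1) * 2;
glDrawArrays(GL_TRIANGLE_STRIP, 0, vertexCount);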

Testing point in the alpha channel

Is there a way to detect whether the alpha of a pixel after drawing is not 0 when using OpenGL ES on the iPhone?
I would like to test multiple points to see if they are inside the area of a random polygon drawn by the user. If you know Flash, something equivalent to BitmapData::getPixel32 is what I'm looking for.
The framebuffer is kept by the GPU and is not immediately CPU accessible. I think the thing you'd most likely want from full OpenGL is the occlusion query; you can request geometry be drawn and be told how many pixels were actually plotted. Sadly that isn't available on the iPhone.
I think what you probably want is glReadPixels, which can be used to read a single pixel if you prefer, e.g. (written here, as I type, not tested)
GLubyte pixelValue[4];
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixelValue);
NSLog(@"alpha was %d", pixelValue[3]);
Using glReadPixels causes a pipeline flush, so it's generally a bad idea from a GL performance point of view, but it'll do what you want. Note that, unlike UIKit, OpenGL uses graph-paper order for pixel coordinates, so (0, 0) is the lower-left corner.
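So if the point you want to test comes from UIKit coordinates, flip the y value before reading; a sketch, assuming the view and the GL framebuffer have the same pixel dimensions:
GLint glY = framebufferHeight - 1 - uikitY;   // convert top-left origin to bottom-left origin
GLubyte pixelValue[4];
glReadPixels(x, glY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixelValue);
BOOL inside = (pixelValue[3] != 0);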

Positioning elements in 2D space with OpenGL ES

In my spare time I like to play around with game development on the iPhone with OpenGL ES. I'm throwing together a small 2D side-scroller demo for fun, and I'm relatively new to OpenGL, and I wanted to get some more experienced developers' input on this.
So here is my question: does it make sense to specify the vertices of each 2D element in model space, then translate each element to its final place in view space each time a frame is drawn?
For example, say I have a set of blocks (squares) that make up the ground in my side-scroller. Each square is defined as:
const GLfloat squareVertices[] = {
    -1.0,  1.0, -6.0,   // Top left
    -1.0, -1.0, -6.0,   // Bottom left
     1.0, -1.0, -6.0,   // Bottom right
     1.0,  1.0, -6.0    // Top right
};
Say I have 10 of these squares that I need to draw together as the ground for the next frame. Should I do something like this, for each square visible in the current scene?
glPushMatrix();
{
    glTranslatef(currentSquareX, currentSquareY, 0.0);
    glVertexPointer(3, GL_FLOAT, 0, squareVertices);
    glEnableClientState(GL_VERTEX_ARRAY);
    // Do the drawing
}
glPopMatrix();
It seems to me that doing this for every 2D element in the scene, for every frame, gets a bit intense and I would imagine the smarter people who use OpenGL much more than I do may have a better way of doing this.
That all being said, I expect to hear that I should profile the code and see where any bottlenecks may be. To those people I say: I haven't written any of this code yet; I'm simply in the process of wrapping my mind around it so that when I do go to write it, it goes more smoothly.
On the subject of profiling and optimization, I'm really not trying to prematurely optimize here, I'm just trying to wrap my mind around how one would set up a 2D scene and render it. Like I said, I'm relatively new to OpenGL and I'm just trying to get a feel for how things are done. If anyone has any suggestions on a better way to do this, I'd love to hear your thoughts.
Please keep in mind that I'm not interested in 3D, just 2D for now. Thanks!
You are concerned with the overhead it takes to transform a model (in this case a square) from model coordinates to world coordinates when you have a lot of models. This seems like an obvious optimization for static models.
If you build your squares' vertices in world coordinates, then of course it is going to be faster, as each square avoids the extra cost of those three calls (glPushMatrix, glPopMatrix, and glTranslatef) because there is no need to go from model to world coordinates at render time. I have no idea how much faster this will be; I suspect it won't be a huge optimization, and you lose the modularity of keeping the squares in model coordinates: what if in the future you decide you want these squares to be movable? That will be a lot harder if you keep their vertices in world coordinates.
In short, it's a tradeoff:
World Coordinates
- More memory: each square needs its own set of vertices.
- Less computation: no need to call glPushMatrix, glPopMatrix, or glTranslatef for each square at render time.
- Less flexible: lacks support for (or complicates) moving these squares dynamically.
Model Coordinates
- Less memory: the squares can share the same vertex data.
- More computation: each square requires three extra calls at render time.
- More flexible: squares can easily be moved by changing the glTranslatef call.
I guess the only way to know what the right decision is, is to try it and profile. I know you said you haven't written this yet, but I suspect that whether your squares are in model or world coordinates won't make much of a difference -- and if it does, I can't imagine an architecture where it would be hard to switch your squares from model to world coordinates or vice versa.
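As a concrete sketch of the world-coordinate option (NUM_SQUARES and groundSquares are placeholders): bake each square's position into one big vertex array once, up front, expand each square into two triangles, and then draw the whole ground with a single call and no per-square matrix work.
GLfloat groundVerts[NUM_SQUARES * 18];  // 6 vertices * 3 floats per square

int v = 0;
for (int i = 0; i < NUM_SQUARES; i++) {
    float cx = groundSquares[i].x, cy = groundSquares[i].y;
    // The same +/-1 square as above, pre-translated into world space.
    GLfloat quad[18] = {
        cx - 1, cy + 1, -6,   cx - 1, cy - 1, -6,   cx + 1, cy - 1, -6,
        cx - 1, cy + 1, -6,   cx + 1, cy - 1, -6,   cx + 1, cy + 1, -6
    };
    memcpy(&groundVerts[v], quad, sizeof(quad));
    v += 18;
}

// At render time, one call draws every square:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, groundVerts);
glDrawArrays(GL_TRIANGLES, 0, NUM_SQUARES * 6);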
Good luck to you and your adventures in iPhone game development!
If you are only using screen-aligned quads, it might be easier to use the OES draw texture extension. Then you can use a single texture to hold all your game "sprites". First specify the crop rectangle by setting the GL_TEXTURE_CROP_RECT_OES texture parameter; this is the boundary of the sprite within the larger texture. To render, call glDrawTexiOES, passing in the desired position and size in viewport coordinates.
int rect[4] = {0, 0, 16, 16};
glBindTexture(GL_TEXTURE_2D, sprites);
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, rect);
glDrawTexiOES(x, y, z, width, height);
This extension isn't available on all devices, but it works great on the iPhone.
You might also consider using a static image and just scrolling that instead of drawing each individual block of the floor, and translating its position, etc.

How can I render reflections in OpenGL ES on the iPhone without a stencil buffer?

I'm looking for an alternative technique for rendering reflections in OpenGL ES on the iPhone. Usually I would do this by using the stencil buffer to mark where the reflection can be seen (the reflective surface) and then render the reversed image only in those pixels. Thus when the reflected object moves off the surface its reflection is no longer seen. However, since the iPhone's implementation doesn't support the stencil buffer I can't determine how to hide the portions of the reflection that fall outside of the surface.
To clarify, the issue isn't rendering the reflections themselves, but hiding them when they wouldn't be visible.
Any ideas?
Render the reflected scene first; copy out to a texture using glCopyTexImage2D; clear the framebuffer; draw the scene proper, applying the copied texture to the reflective surface.
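Very roughly, that sequence could look like the following sketch (drawReflectedScene, drawScene, drawReflectiveSurface, reflectionTex, and the texture dimensions are placeholders; on these GPUs the copied region generally needs power-of-two dimensions unless a non-power-of-two extension is available):
drawReflectedScene();    // placeholder: the scene mirrored about the reflective plane

// Grab what was just rendered into a texture.
glBindTexture(GL_TEXTURE_2D, reflectionTex);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, texWidth, texHeight, 0);

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

drawScene();             // placeholder: the normal scene
glBindTexture(GL_TEXTURE_2D, reflectionTex);
drawReflectiveSurface(); // placeholder: the surface drawn with reflectionTex applied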
I don't have an answer for reflections, but here's how I'm doing shadows without the stencil buffer, perhaps it will give you an idea:
I perform basic front-face/back-face determination of the mesh from the point of view of the light source. I then get a list of all edges that connect a front triangle to a back triangle. I treat this edge list as a line "loop". I project the vertices of this loop along the light-to-vertex ray until it intersects the ground. These intersection points are then used to calculate a 2D polygon on the same plane as the ground. I then use a tessellation algorithm to turn that polygon into triangles. (This works fine as long as your light sources or objects don't move too often.)
Once I have the triangles, I render them with a slight offset such that the depth buffer will allow the shadow to pass. Alternatively you can use a decaling algorithm such as the one in the Red Book.
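For reference, projecting a silhouette vertex onto the ground is just a ray/plane intersection; a minimal sketch, assuming the ground is the plane y = 0 and a point light at lightPos above it:
typedef struct { float x, y, z; } Vec3;

// Follow the ray from the light through p until it hits y = 0.
Vec3 projectToGround(Vec3 p, Vec3 lightPos) {
    Vec3 dir = { p.x - lightPos.x, p.y - lightPos.y, p.z - lightPos.z };
    float t = -p.y / dir.y;   // parameter where the ray reaches y = 0
    Vec3 out = { p.x + t * dir.x, 0.0f, p.z + t * dir.z };
    return out;
}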