I am using an API which only gives me the integer id of the texture object, and I need to pass that texture's data to AVAssetWriter to create the video.
I know how to create a CVOpenGLESTexture object from a pixel buffer (CVPixelBufferRef), but in my case I have to somehow copy the data of a texture for which only the ID is available.
In other words, I need to copy an OpenGL texture into my pixel-buffer-backed texture object. Is it possible? If yes, then how?
In my sample code I have something like:
void encodeFrame(GLuint textureOb)
{
    CVPixelBufferPoolCreatePixelBuffer(NULL, [assetWriterPixelBufferAdaptor pixelBufferPool], &pixelBuffer[0]);

    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, pixelBuffer[0],
                                                 NULL, // texture attributes
                                                 GL_TEXTURE_2D,
                                                 GL_RGBA, // opengl format
                                                 (int)FRAME_WIDTH,
                                                 (int)FRAME_HEIGHT,
                                                 GL_BGRA, // native iOS format
                                                 GL_UNSIGNED_BYTE,
                                                 0,
                                                 &renderTexture[0]);

    CVPixelBufferLockBaseAddress(pixelBuffer[0], 0);

    // Creation of textureOb is not under my control.
    // All I have is the id of the texture.
    // Here I need the data of textureOb to somehow be appended as a video frame,
    // either by copying the data into pixelBuffer or by somehow passing textureOb to the adaptor.

    CVPixelBufferUnlockBaseAddress(pixelBuffer[0], 0);

    [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer[0] withPresentationTime:presentationTime];
}
Thanks for any tips and answers.
P.S. glGetTexImage isn't available on iOS.
Update:
@Dr. Larson, I can't set the texture ID for the API. Actually, I can't dictate that the 3rd-party API use a texture object I created myself.
After going through the answers, what I understood is that I need to:
1- Attach the pixel-buffer-associated texture object as the color attachment of a texFBO.
For each frame:
2- Bind the texture obtained from the API
3- Bind texFBO and call drawElements
What am I doing wrong in this code?
P.S. I'm not familiar with shaders yet, so it is difficult for me to make use of them right now.
Update 2:
With the help of Brad Larson's answer and the correct shaders, I solved the problem. I had to use shaders, which are an essential requirement of OpenGL ES 2.0.
For reading back data from OpenGL ES on iOS, you basically have two routes: using glReadPixels(), or using the texture caches (iOS 5.0+ only).
The fact that you just have a texture ID and access to nothing else is a little odd, and limits your choices here. If you have no way of setting what texture to use in this third-party API, you're going to need to re-render that texture to an offscreen framebuffer to extract the pixels for it either using glReadPixels() or the texture caches. To do this, you'd use an FBO sized to the same dimensions as your texture, a simple quad (two triangles making up a rectangle), and a passthrough shader that will just display each texel of your texture in the output framebuffer.
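As a rough illustration of that re-rendering step, the draw itself can be as small as the following. This is only a sketch: it assumes you've already created an offscreen FBO sized to your frame and compiled a simple passthrough program, and the names offscreenFramebuffer, passthroughProgram, videoFrameUniform, positionAttribute, and textureCoordinateAttribute are placeholders, not anything from the original code.

    static const GLfloat squareVertices[] = {
        -1.0f, -1.0f,
         1.0f, -1.0f,
        -1.0f,  1.0f,
         1.0f,  1.0f,
    };

    static const GLfloat textureCoordinates[] = {
        0.0f, 0.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 1.0f,
    };

    glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer); // FBO sized to FRAME_WIDTH x FRAME_HEIGHT
    glViewport(0, 0, FRAME_WIDTH, FRAME_HEIGHT);

    glUseProgram(passthroughProgram);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textureOb); // the texture ID handed to you by the third-party API
    glUniform1i(videoFrameUniform, 0);

    glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, GL_FALSE, 0, squareVertices);
    glEnableVertexAttribArray(positionAttribute);
    glVertexAttribPointer(textureCoordinateAttribute, 2, GL_FLOAT, GL_FALSE, 0, textureCoordinates);
    glEnableVertexAttribArray(textureCoordinateAttribute);

    // Draws the quad, filling the offscreen framebuffer with the texture's pixels.
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);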
At that point, you can just use glReadPixels() to pull your bytes back into the internal byte array of your CVPixelBufferRef, or preferably use the texture caches to eliminate the need for that read. I describe how to set up the caching for that approach in this answer, as well as how to feed that into an AVAssetWriter. You'll need to set your offscreen FBO to use the CVPixelBufferRef's associated texture as a render target for this to work.
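Connecting the pixel buffer's texture to that offscreen FBO looks roughly like the following sketch. It assumes coreVideoTextureCache, pixelBuffer[0], and renderTexture[0] were created as in the question's code, and offscreenFramebuffer is a placeholder for your own FBO:

    // Bind the cache-backed texture and attach it as the FBO's color target,
    // so the passthrough render lands directly in the CVPixelBufferRef's memory.
    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture[0]), CVOpenGLESTextureGetName(renderTexture[0]));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                           CVOpenGLESTextureGetName(renderTexture[0]), 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"Offscreen framebuffer is incomplete");
    }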
However, if you have the means of setting what ID to use for this rendered texture, you can avoid having to re-render it to grab its pixel values. Set up the texture caching like I describe in the above-linked answer and pass the texture ID for that pixel buffer into the third-party API you're using. It will then render into the texture that's associated with the pixel buffer, and you can record from that directly. This is what I use to accelerate the recording of video from OpenGL ES in my GPUImage framework (with the glReadPixels() approach as a fallback for iOS 4.x).
Yeah, it's rather unfortunate that glGetTexImage isn't available on iOS. I struggled with that when I implemented my CCMutableTexture2D class for cocos2d.
Caching the image before pushing to the GPU
If you take a look into the source, you'll notice that in the end I kept the pixel buffer of the image cached in my CCMutableTexture2D class, instead of taking the normal route of discarding it after it's pushed to the GPU.
http://www.cocos2d-iphone.org/forum/topic/2449
Using FBOs and glReadPixels
Sadly, I think this first approach might not be appropriate for you, since you're creating a video from the texture data, and holding onto every cached pixel buffer eats up a lot of memory. Another approach could be to create an FBO on the fly and use glReadPixels to populate your pixel buffer. I'm not too sure how successful that approach will be, but a good example was posted here:
Read texture bytes with glReadPixels?
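For reference, the readback itself is short once the texture has been rendered into an FBO. A minimal sketch, assuming a pixel buffer whose bytes-per-row matches FRAME_WIDTH * 4 and an FBO color attachment of the same dimensions (glReadPixels on iOS returns RGBA, so the channel-order caveat discussed elsewhere on this page still applies):

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixelBuffer);

    // Read the rendered frame straight into the pixel buffer's backing memory.
    glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer);
    glReadPixels(0, 0, FRAME_WIDTH, FRAME_HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, pixelBufferData);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);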
Related
Do I have to generate and bind a framebuffer for every renderbuffer I create?
Or is there a way to create only a renderbuffer (and map it to a texture, or somehow submit it to the shaders)?
I just want to render to a one-channel buffer to create a mask for later use. I think setting up a complete framebuffer would be overkill for this task.
Thanks.
A renderbuffer is just an image. You cannot bind one as a texture; if you want to create an image to use as a texture, then you need to create a texture. That's why we have renderbuffers and textures: one of them is for things that you don't intend to read from.
Framebuffers are collections of images. You can't render to a renderbuffer or texture directly; you render to the framebuffer, which itself must have renderbuffers and/or textures attached to it.
You can either render to the default framebuffer or to a framebuffer object. The images in the default framebuffer can't be used as textures. So if you want to render to a texture, you have to use a framebuffer object. That's how OpenGL works.
"setting up a complete framebuffer" may involve overhead, but you're going to have to do it if you want to render to a texture.
You could use a stencil buffer instead, and just disable the stencil test until you are ready to mask your output.
edit:
have a look at the following calls in the OpenGL docs:
glClearStencil
glClear(GL_STENCIL_BUFFER_BIT)
glEnable(GL_STENCIL_TEST)
glDisable(GL_STENCIL_TEST)
glStencilFunc
glStencilOp
http://www.opengl.org/sdk/docs/man/xhtml/glStencilFunc.xml
http://www.opengl.org/sdk/docs/man/xhtml/glStencilOp.xml
http://developer.nvidia.com/system/files/akamai/gamedev/docs/stencil.pdf?download=1
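As a rough sketch of how those calls fit together for masking (illustrative only; drawMaskGeometry and drawMaskedScene are placeholder functions standing in for your own draw calls):

    // Pass 1: write the mask shape into the stencil buffer only.
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);

    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);                   // every fragment of the mask passes
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);           // and writes 1 into the stencil buffer
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // don't touch the color buffer
    drawMaskGeometry();                                  // placeholder for your mask draw call

    // Pass 2: draw the scene, letting fragments through only where the stencil is 1.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawMaskedScene();                                   // placeholder for your normal rendering

    glDisable(GL_STENCIL_TEST);                          // leave the test disabled until you need the mask again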
I am using a lot of fruit and vegetable images in an iPhone game. How can I make textures from these images? From what I've read, every image in OpenGL ES has to be a texture, so what should I do?
I have developed 6 iOS applications, but this is my first game, so please point me in the right direction so I can get the idea.
You use glTexImage2D to upload raw pixel data to OpenGL in order to populate a texture. You can use Core Graphics, and particularly CGBitmapContextCreate, to get the raw pixel data of (or convert to raw pixel data) anything else Core Graphics can draw, which for you probably means a CGImageRef, either through a C API load of a PNG or JPG, or just using the result of [someUIImage CGImage].
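A sketch of that path, assuming a UIImage loaded from your bundle (the asset name and variable names are made up for illustration):

    UIImage *fruitImage = [UIImage imageNamed:@"apple.png"]; // hypothetical asset name
    CGImageRef cgImage = fruitImage.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Draw the CGImage into a bitmap context with a known RGBA byte layout.
    GLubyte *imageData = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmapContext = CGBitmapContextCreate(imageData, width, height, 8, width * 4,
                                                       colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(bitmapContext, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(bitmapContext);
    CGColorSpaceRelease(colorSpace);

    // Hand the raw pixels to OpenGL ES.
    GLuint fruitTexture;
    glGenTextures(1, &fruitTexture);
    glBindTexture(GL_TEXTURE_2D, fruitTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, imageData);

    free(imageData);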
Apple's GLSprite sample (you'll need to be logged in, and I'm not sure those links work externally, but do a search in the Developer Library if necessary) is probably a good starting point. I'm not 100% behind the class structure, but if you look into EAGLView.m, lines 272 to 305, the code there loads a PNG from disk then does the necessary steps to post it off to OpenGL, with a decent amount of commenting.
I am building an iphone app that needs to record grayscale video and save it to the camera roll. I'm stumped at how best to approach this.
I am thinking along the following lines:
Use a shader and OpenGL ES to transform the video to grayscale
Use AVFoundation (AVAssetWriter with an AVAssetWriterInputPixelBufferAdaptor) to write the video to the file.
My questions are:
Is this the right approach (simplest, best performance)?
If so, what would be the best way to go from OpenGL ES output to a CVPixelBufferRef input for the AVAssetWriterInputPixelBufferAdaptor?
If not, what would be a better approach?
Any nudge in the right direction is much appreciated!
In general, I'd agree with this approach. Doing your processing in an OpenGL ES 2.0 shader should be the most performant way of doing video frame alteration like this, but it won't be very simple. Fortunately, you can start from a pre-existing template that already does this.
You can use the sample application I wrote here (and explained here) as a base. I use custom shaders in this example to track colors in an image, but you could easily alter this to convert the video frames to grayscale (I even saw someone do this once). The code for feeding camera video into a texture and processing it could be used verbatim from that sample.
In one of the display options within that application, I render the processed image first to a framebuffer object, then use glReadPixels() to pull the resulting image back into bytes that I can work with on the CPU. You could use this to get the raw image data back after the GPU has processed a frame, then feed those bytes into CVPixelBufferCreateWithBytes() to generate your CVPixelBufferRef for writing to disk.
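In outline, that readback-and-wrap step could look like the following sketch (the buffer and size names are placeholders, and it ignores the BGRA/RGBA channel-order issue discussed below):

    // Pull the processed frame off the GPU.
    GLubyte *frameBytes = (GLubyte *)malloc(videoWidth * videoHeight * 4);
    glReadPixels(0, 0, videoWidth, videoHeight, GL_RGBA, GL_UNSIGNED_BYTE, frameBytes);

    // Wrap those bytes in a CVPixelBufferRef for the asset writer.
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                   videoWidth, videoHeight,
                                                   kCVPixelFormatType_32BGRA,
                                                   frameBytes,
                                                   videoWidth * 4,
                                                   NULL, NULL, // release callback omitted in this sketch
                                                   NULL,
                                                   &pixelBuffer);

    if (status == kCVReturnSuccess) {
        [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
        CVPixelBufferRelease(pixelBuffer);
    }
    // Note: with no release callback, you are responsible for freeing frameBytes
    // once the writer is finished with the buffer.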
(Edit: 2/29/2012) As an update to this, I just implemented this kind of video recording in my open source GPUImage framework, so I can comment on the specific performance for the encoding part of this. It turns out that you can capture video from the camera, perform live filtering on it, grab it from OpenGL ES using glReadPixels(), and write that out as live H.264 video in 640x480 frames on an iPhone 4 at 30 FPS (the maximum camera framerate).
There were a few things that I needed to do in order to get this recording speed. You need to make sure that you set your AVAssetWriterInputPixelBufferAdaptor to use kCVPixelFormatType_32BGRA as its color format for input pixel buffers. Then, you'll need to re-render your RGBA scene using a color-swizzling shader to provide BGRA output when using glReadPixels(). Without this color setting, your video recording framerates will drop to 5-8 FPS on an iPhone 4, where with it they are easily hitting 30 FPS. You can look at the GPUImageMovieWriter class source code to see more about how I did this.
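The adaptor side of that is just a matter of the sourcePixelBufferAttributes you pass when creating it. A minimal sketch, assuming assetWriterVideoInput and the size variables already exist in your code:

    NSDictionary *sourcePixelBufferAttributes =
        [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
            [NSNumber numberWithInt:videoWidth], (id)kCVPixelBufferWidthKey,
            [NSNumber numberWithInt:videoHeight], (id)kCVPixelBufferHeightKey,
            nil];

    AVAssetWriterInputPixelBufferAdaptor *adaptor =
        [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput
                                                                         sourcePixelBufferAttributes:sourcePixelBufferAttributes];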
Using the GPUImage framework, your above filtering and encoding task can be handled by simply creating a GPUImageVideoCamera, attaching a target of a GPUImageSaturationFilter with the saturation set to 0, and then attaching a GPUImageMovieWriter as a target of that. The framework will handle the OpenGL ES interactions for you. I've done this, and it works well on all iOS devices I've tested.
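That chain, in code, is roughly the following (sketched from memory of the GPUImage API, so treat the initializer names and presets as approximate rather than definitive; outputURL is a placeholder):

    // Camera -> saturation filter (set to 0 for grayscale) -> movie writer.
    GPUImageVideoCamera *videoCamera =
        [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                            cameraPosition:AVCaptureDevicePositionBack];

    GPUImageSaturationFilter *saturationFilter = [[GPUImageSaturationFilter alloc] init];
    saturationFilter.saturation = 0.0; // fully desaturated = grayscale

    GPUImageMovieWriter *movieWriter =
        [[GPUImageMovieWriter alloc] initWithMovieURL:outputURL size:CGSizeMake(640.0, 480.0)];

    [videoCamera addTarget:saturationFilter];
    [saturationFilter addTarget:movieWriter];

    [videoCamera startCameraCapture];
    [movieWriter startRecording];
    // ...and later: [movieWriter finishRecording]; [videoCamera stopCameraCapture];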
This is another take on a question which has no responses. The iPhone has the extension GL_EXT_texture_format_BGRA8888, which is supposed to allow me to use BGRA as the internal format for glTexImage2D.
I have only BGRA data, as that's the only thing I can get from the camera (other than YUV which I'm not ready to deal with).
How can I use BGRA with glReadPixels? Anything I try gives me a black screen!
Did you check for OpenGL errors after creating and loading the texture?
Did you bind the texture?
Did you enable texture mapping with glEnable?
Did you specify texture coordinates for each vertex of your polygon?
According to the OpenGL ES 2.0 specification, glReadPixels() only supports RGBA as the format to read back from your FBO. I believe the extension you quote there only allows for BGRA pixel format data to be provided to your texture, not that you can read back an FBO in BGRA format.
As Ben suggests, you can just swizzle the color components in your fragment shader if you need to have the end result be BGRA.
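For illustration, the swizzle amounts to one line in the fragment shader. A minimal sketch of such a shader, written as an Objective-C string constant (the varying, uniform, and constant names are placeholders):

    // Passthrough fragment shader that swaps red and blue, so an RGBA read from
    // glReadPixels() comes out in BGRA byte order.
    static NSString *const kColorSwizzlingFragmentShader =
        @"varying highp vec2 textureCoordinate;\n"
         "uniform sampler2D inputImageTexture;\n"
         "\n"
         "void main()\n"
         "{\n"
         "    gl_FragColor = texture2D(inputImageTexture, textureCoordinate).bgra;\n"
         "}\n";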
I'm trying to work out how to draw from a TexturePage using CoreGraphics.
Given a texture page (CGImageRef) which contains multiple 64x64 packed textures, how do I render sub-areas from that page onto the device context?
CGContextDrawImage seems to take only a destination rect. I noticed CGImageCreateWithImageInRect; however, this creates a new image. I don't want a new image, I simply want to draw from the original image.
I'm sure this is possible, however I'm new to iPhone development.
Any help much appreciated.
Thanks
What's wrong with CGImageCreateWithImageInRect?
CGImageRef subImage = CGImageCreateWithImageInRect(image, srcRect);
if (subImage) {
CGContextDrawImage(context, destRect, subImage);
CFRelease(subImage);
}
Edit: Wait a minute. Use CGImageCreateWithImageInRect. That is what it's for.
Here are the ideas I wrote up initially; I will leave them in case they're useful.
See if you can create a sub-image of some kind from another image, such that it borrows the original image's buffer (much like some substring implementations). Then you could draw using the sub-image.
It might be that Core Graphics is intended more for compositing than for image manipulation, so you may have to use separate image files in your application bundle. If the SDK docs don't particularly recommend what you're doing, then I suggest you go that route since it seems the most simple and natural way to do it.
You could use OpenGL ES instead, in which case you can specify the texture coordinates of polygon vertices to select just that section of your big texture.
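As a small sketch of that idea (purely illustrative; the question only specifies 64x64 tiles, so the 512x512 page size, and col and row, are assumptions):

    // Texture coordinates for one 64x64 tile inside a 512x512 texture page.
    // (col, row) select which tile; coordinates are normalized to the 0..1 range.
    const float pageSize = 512.0f;
    const float tileSize = 64.0f;

    float s0 = (col * tileSize) / pageSize;
    float t0 = (row * tileSize) / pageSize;
    float s1 = ((col + 1) * tileSize) / pageSize;
    float t1 = ((row + 1) * tileSize) / pageSize;

    // One texture coordinate pair per vertex, matching the vertex order you draw with.
    GLfloat tileTexCoords[] = {
        s0, t0,
        s1, t0,
        s0, t1,
        s1, t1,
    };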