How to create textures from images? - iPhone

I am using a lot of fruit and vegetable images in an iPhone game, so how can I make textures from those images? From what I have read, every image has to become a texture in OpenGL ES, so what should I do?
I have developed six iOS applications, but this is my first game, so please point me in the right direction so I can get the idea.

You use glTexImage2D to upload raw pixel data to OpenGL in order to populate a texture. You can use Core Graphics, and particularly CGBitmapContextCreate, to get the raw pixel data of (or convert to raw pixel data) anything else Core Graphics can draw, which for you probably means a CGImageRef obtained either through a C API load of a PNG or JPG, or just from [someUIImage CGImage].
Apple's GLSprite sample (you'll need to be logged in, and I'm not sure those links work externally, but do a search in the Developer Library if necessary) is probably a good starting point. I'm not 100% behind the class structure, but if you look into EAGLView.m, lines 272 to 305, the code there loads a PNG from disk then does the necessary steps to post it off to OpenGL, with a decent amount of commenting.
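As a rough, minimal sketch of that path (not the GLSprite code itself, and with error handling omitted): draw the CGImage into a bitmap context to get raw RGBA bytes, then hand those bytes to glTexImage2D.

#import <UIKit/UIKit.h>
#import <OpenGLES/ES2/gl.h>

// Minimal sketch: UIImage -> raw RGBA bytes via Core Graphics -> OpenGL ES texture.
// Assumes an EAGLContext is already current on this thread.
static GLuint TextureFromUIImage(UIImage *image)
{
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Draw the image into an RGBA bitmap context to get at the raw pixel data.
    GLubyte *pixels = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                                 colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);

    // Upload the pixel data to OpenGL ES.
    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  // no mipmaps generated here
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    free(pixels);
    return texture;
}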

Related

How can I load a Gigapixel image as a material in SceneKit?

I’m trying to create an AR image to project on a wall from a Gigapixel image. Obviously Xcode crashes if I try to load the image as a material. Is there an efficient way to load only parts of the image that the user is looking at?
I'm using Swift 4.
This may not do exactly what you want, and you might need to roll your own way of parsing and passing data between Core Animation and SceneKit, but it is native, is designed to handle large images and texture data sources, and can feed them out asynchronously and on demand:
https://developer.apple.com/documentation/quartzcore/catiledlayer
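As a rough, untested sketch of how that might be wired up (the layer and tile sizes are placeholders, and tileDrawingDelegate / wallNode stand in for objects you would supply): SceneKit accepts a CALayer as a material's contents, so a CATiledLayer can in principle drive the wall's texture and draw only the tiles that are actually requested.

// Rough sketch only: a CATiledLayer used as the contents of a SceneKit material.
// "tileDrawingDelegate" is a placeholder for your object implementing
// drawLayer:inContext:, which renders one tile of the gigapixel image.
CATiledLayer *tiledLayer = [CATiledLayer layer];
tiledLayer.frame = CGRectMake(0.0, 0.0, 4096.0, 4096.0);   // backing size, far smaller than the source
tiledLayer.tileSize = CGSizeMake(256.0, 256.0);
tiledLayer.levelsOfDetail = 4;
tiledLayer.delegate = tileDrawingDelegate;

SCNMaterial *material = [SCNMaterial material];
material.diffuse.contents = tiledLayer;                    // material contents can be a CALayer
wallNode.geometry.firstMaterial = material;
[tiledLayer setNeedsDisplay];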

iPhone OpenGL ES save/load

I'm working from the GLPaint example and have no idea how to implement "save to" and "load from" a file. I don't want to save the drawing points, but the actual buffer, so I can load it later, like a Photoshop document or any other popular paint app. How is this possible?
Saving to an image doesn't seem like it would work unless it's possible to render it back into OpenGL once loaded, and even then it seems some of the quality would be lost to compression and the conversion process.
I thought about saving the drawing points, but loading that seems difficult, because the colors would somehow have to be saved too and aligned once loaded.
Note: GLPaint uses CAEAGLLayer.
Thanks,
austin

Copy A Texture to PixelBuffer (CVPixelBufferRef)

I am using an API which only gives me the integer id of the texture object, and I need to pass that texture's data to AVAssetWriter to create the video.
I know how to create CVOpenGLESTexture object from pixel buffer (CVPixelBufferRef), but in my case I have to somehow copy the data of a texture of which only the id is available.
In other words, I need to copy an OpenGL texture into my pixel-buffer-based texture object. Is it possible? If yes, then how?
In my sample code I have something like:
void encodeFrame(GLuint textureOb)
{
    CVPixelBufferPoolCreatePixelBuffer(NULL, [assetWriterPixelBufferAdaptor pixelBufferPool], &pixelBuffer[0]);
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, pixelBuffer[0],
                                                 NULL,             // texture attributes
                                                 GL_TEXTURE_2D,
                                                 GL_RGBA,          // opengl format
                                                 (int)FRAME_WIDTH,
                                                 (int)FRAME_HEIGHT,
                                                 GL_BGRA,          // native iOS format
                                                 GL_UNSIGNED_BYTE,
                                                 0,
                                                 &renderTexture[0]);

    CVPixelBufferLockBaseAddress(pixelBuffer[0], 0);

    // Creation of textureOb is not under my control.
    // All I have is the id of the texture.
    // Here I need the data of textureOb to somehow be appended as a video frame,
    // either by copying the data into pixelBuffer or by somehow passing textureOb to the adaptor.

    [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer[0] withPresentationTime:presentationTime];
}
Thanks for tips and answers.
P.S. glGetTexImage isn't available on iOS.
Update:
@Dr. Larson, I can't set the texture ID for the API. In fact, I can't dictate that the third-party API use a texture object I created myself.
After going through the answers what I understood is that I need to:
1- Attach pixelbuffer-associated texture object as color attachment to a texFBO.
For each frame:
2- Bind the texture obtained from API
3- Bind texFBO and call glDrawElements
What am I doing wrong in this code?
P.S. I'm not familiar with shaders yet, so it is difficult for me to make use of them right now.
Update 2:
With the help of Brad Larson's answer and using the correct shaders, I solved the problem. I had to use shaders, which are an essential requirement of OpenGL ES 2.0.
For reading back data from OpenGL ES on iOS, you basically have two routes: using glReadPixels(), or using the texture caches (iOS 5.0+ only).
The fact that you just have a texture ID and access to nothing else is a little odd, and limits your choices here. If you have no way of setting what texture to use in this third-party API, you're going to need to re-render that texture to an offscreen framebuffer to extract the pixels for it either using glReadPixels() or the texture caches. To do this, you'd use an FBO sized to the same dimensions as your texture, a simple quad (two triangles making up a rectangle), and a passthrough shader that will just display each texel of your texture in the output framebuffer.
At that point, you can just use glReadPixels() to pull your bytes back into the internal byte array of your CVPixelBufferRef, or preferably use the texture caches to eliminate the need for that read. I describe how to set up the caching for that approach in this answer, as well as how to feed that into an AVAssetWriter. You'll need to set your offscreen FBO to use the CVPixelBufferRef's associated texture as a render target for this to work.
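For concreteness, a rough per-frame sketch of that glReadPixels() route might look like the following (the passthrough shader program and quad attribute setup are omitted; names like pixelBuffer, textureOb, and assetWriterPixelBufferAdaptor come from the question's snippet). Note that glReadPixels() returns RGBA-ordered bytes, so a channel swizzle or a matching pixel buffer format may still be needed:

// One-time setup: an offscreen FBO with a texture color attachment.
GLuint offscreenFBO;
glGenFramebuffers(1, &offscreenFBO);
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFBO);

GLuint targetTexture;
glGenTextures(1, &targetTexture);
glBindTexture(GL_TEXTURE_2D, targetTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, FRAME_WIDTH, FRAME_HEIGHT, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, targetTexture, 0);

// Per frame: draw the third-party texture over a full-screen quad with a passthrough shader.
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFBO);
glViewport(0, 0, FRAME_WIDTH, FRAME_HEIGHT);
glBindTexture(GL_TEXTURE_2D, textureOb);
// ... bind the passthrough program and set up the quad's position/texcoord attributes here ...
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// Read the rendered pixels straight into the CVPixelBufferRef's backing store.
CVPixelBufferLockBaseAddress(pixelBuffer[0], 0);
glReadPixels(0, 0, FRAME_WIDTH, FRAME_HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE,
             CVPixelBufferGetBaseAddress(pixelBuffer[0]));
CVPixelBufferUnlockBaseAddress(pixelBuffer[0], 0);
[assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer[0]
                            withPresentationTime:presentationTime];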
However, if you have the means of setting what ID to use for this rendered texture, you can avoid having to re-render it to grab its pixel values. Set up the texture caching like I describe in the above-linked answer and pass the texture ID for that pixel buffer into the third-party API you're using. It will then render into the texture that's associated with the pixel buffer, and you can record from that directly. This is what I use to accelerate the recording of video from OpenGL ES in my GPUImage framework (with the glReadPixels() approach as a fallback for iOS 4.x).
Yeah, it's rather unfortunate that glGetTexImage isn't available on iOS. I struggled with that when I implemented my CCMutableTexture2D class for cocos2d.
Caching the image before pushing to the GPU
If you take a look at the source, you'll notice that in the end I kept the pixel buffer of the image cached in my CCMutableTexture2D class instead of taking the normal route of discarding it after it's pushed to the GPU.
http://www.cocos2d-iphone.org/forum/topic/2449
Using FBOs and glReadPixels
Sadly, I think this approach might not be appropriate for you, since you're creating some kind of video with the texture data, and holding onto every pixel buffer we've cached eats up a lot of memory. Another approach could be to create an FBO on the fly and use glReadPixels to populate your pixel buffer. I'm not too sure how successful that approach will be, but a good example was posted here:
Read texture bytes with glReadPixels?

Approach for recording grayscale video on iPhone?

I am building an iPhone app that needs to record grayscale video and save it to the camera roll. I'm stumped as to how best to approach this.
I am thinking along the following lines:
Use a shader and OpenGL to transform the video to grayscale
Use AVFoundation (AVAssetWriter with an AVAssetWriterInputPixelBufferAdaptor) to write the video to the file.
My questions are:
Is this the right approach (simplest, best performance)?
If so, what would be the best way to go from the OpenGL output to a CVPixelBufferRef input for the AVAssetWriterInputPixelBufferAdaptor?
If not, what would be a better approach?
Any nudge in the right direction is much appreciated!
In general, I'd agree with this approach. Doing your processing in an OpenGL ES 2.0 shader should be the most performant way of doing video frame alteration like this, but it won't be very simple. Fortunately, you can start from a pre-existing template that already does this.
You can use the sample application I wrote here (and explained here) as a base. I use custom shaders in this example to track colors in an image, but you could easily alter this to convert the video frames to grayscale (I even saw someone do this once). The code for feeding camera video into a texture and processing it could be used verbatim from that sample.
In one of the display options within that application, I render the processed image first to a framebuffer object, then use glReadPixels() to pull the resulting image back into bytes that I can work with on the CPU. You could use this to get the raw image data back after the GPU has processed a frame, then feed those bytes into CVPixelBufferCreateWithBytes() to generate your CVPixelBufferRef for writing to disk.
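As a rough sketch of that last step (rawBytes, pixelBufferAdaptor, and frameTime are placeholders for your own variables), the bytes read back with glReadPixels() can be wrapped into a pixel buffer along these lines:

// Rough sketch only; "rawBytes" is assumed to hold width * height * 4 bytes
// read back from the FBO with glReadPixels().
size_t width = 640, height = 480;                          // frame dimensions
CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                               width, height,
                                               kCVPixelFormatType_32BGRA, // matches the BGRA note below
                                               rawBytes,
                                               width * 4,                 // bytes per row
                                               NULL, NULL,                // release callback and its context
                                               NULL,                      // pixel buffer attributes
                                               &pixelBuffer);
if (status == kCVReturnSuccess) {
    [pixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:frameTime];
    CVPixelBufferRelease(pixelBuffer);
}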
(Edit: 2/29/2012) As an update to this, I just implemented this kind of video recording in my open source GPUImage framework, so I can comment on the specific performance for the encoding part of this. It turns out that you can capture video from the camera, perform live filtering on it, grab it from OpenGL ES using glReadPixels(), and write that out as live H.264 video in 640x480 frames on an iPhone 4 at 30 FPS (the maximum camera framerate).
There were a few things that I needed to do in order to get this recording speed. You need to make sure that you set your AVAssetWriterInputPixelBufferAdaptor to use kCVPixelFormatType_32BGRA as its color format for input pixel buffers. Then, you'll need to re-render your RGBA scene using a color-swizzling shader to provide BGRA output when using glReadPixels(). Without this color setting, your video recording framerates will drop to 5-8 FPS on an iPhone 4, where with it they are easily hitting 30 FPS. You can look at the GPUImageMovieWriter class source code to see more about how I did this.
Using the GPUImage framework, your above filtering and encoding task can be handled by simply creating a GPUImageVideoCamera, attaching a target of a GPUImageSaturationFilter with the saturation set to 0, and then attaching a GPUImageMovieWriter as a target of that. The framework will handle the OpenGL ES interactions for you. I've done this, and it works well on all iOS devices I've tested.
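For reference, that GPUImage pipeline looks roughly like this (the movie URL, session preset, and output size are placeholders, and exact initializers may differ slightly between framework versions):

// Camera -> saturation filter (saturation = 0 gives grayscale) -> movie writer.
// Keep these objects in strong properties in real code so ARC doesn't release them mid-recording.
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageSaturationFilter *grayscaleFilter = [[GPUImageSaturationFilter alloc] init];
grayscaleFilter.saturation = 0.0;

NSURL *movieURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"grayscale.m4v"]];
GPUImageMovieWriter *movieWriter =
    [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];

[videoCamera addTarget:grayscaleFilter];
[grayscaleFilter addTarget:movieWriter];

[videoCamera startCameraCapture];
[movieWriter startRecording];
// ... later, when done:
// [movieWriter finishRecording];
// [videoCamera stopCameraCapture];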

How does Quartz handle texture compression?

I'm developing on the iPhone and the majority of our game is using OpenGL ES, but there are also menus that use CGImage and Quartz in order to be displayed. In OpenGL ES, I know that no matter what image compression goes in (JPG, PNG, etc.), the data stored in memory as a texture is an 8-bit texture, unless I use PVRTC in which case I can get it to 2 or 4 bits. We've been having memory issues due to large CGImages, so my question is... what sort of optimizations and compressions do Quartz and CGImage use? I can't find the details in Apple's docs, when really I want to know if it would make a difference to put a 256-color image in, or a JPG vs a PNG, if having the dimensions at a power of 2 help, etc. Speed is unimportant, memory is the bottleneck here.
Thanks.
Quartz is uncompressed. It is for quickly compositing and rendering pixel-accurate content. Once your images have been drawn into a context, it doesn't matter where they came from; they take whatever that context takes per pixel for however many pixels they have (generally 4 bytes per pixel on a device, if I recall correctly). The one big thing it does is premultiply the alpha to avoid blending.
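To put a rough number on that, here is the back-of-the-envelope calculation:

// Once drawn into a Quartz context, memory cost is set by pixel dimensions alone:
// a 1024 x 1024 image takes 1024 * 1024 * 4 bytes, about 4 MB, whether the file
// on disk was a JPG, a PNG, or a 256-color image.
static size_t DecodedSizeInBytes(CGImageRef image)
{
    return CGImageGetWidth(image) * CGImageGetHeight(image) * 4;
}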
Now, some views under memory pressure can evict their contents if not displayed, and reconstitute them as needed. In those cases a CGImage from a compressed source generally ends up taking less memory, but I suspect that is not relevant in the case you described.