I did some testing on GLPaint and managed to save the image. But now my problem is that I have no idea how to load the saved image back into the render buffer so I can continue editing. I would be glad if someone could show the code and the idea behind it; I am very new to OpenGL ES.
If you've saved the image, then to load it back and display it you have to create a texture object and map it onto some polygons. Check this tutorial.
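A minimal sketch of that idea in C against OpenGL ES 1.1 (the API GLPaint uses). It assumes you already have the saved image back as raw RGBA bytes (for example by drawing its CGImage into a bitmap context, as sketched near the end of this page), that GLPaint's framebuffer and orthographic projection are already set up, and that the dimensions are powers of two; the helper name restoreSavedDrawing is made up for illustration:

    #include <OpenGLES/ES1/gl.h>
    #include <OpenGLES/ES1/glext.h>

    /* Hypothetical helper: draws previously saved RGBA pixels back into the
       currently bound framebuffer by mapping them onto a full-screen quad. */
    static void restoreSavedDrawing(const GLubyte *pixels, GLsizei width, GLsizei height)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        /* Two triangles covering the whole view; flip the t coordinates if
           your saved pixels turn out upside down. */
        const GLfloat verts[] = {
            0,              0,
            (GLfloat)width, 0,
            0,              (GLfloat)height,
            (GLfloat)width, (GLfloat)height,
        };
        const GLfloat texCoords[] = { 0, 0,  1, 0,  0, 1,  1, 1 };

        glEnable(GL_TEXTURE_2D);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, verts);
        glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDeleteTextures(1, &tex);
        /* Then present the renderbuffer as GLPaint normally does, e.g.
           [context presentRenderbuffer:GL_RENDERBUFFER_OES]; */
    }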
Related
I'm working from the GLPaint example and have no idea how to implement "save to" and "load from" a file. I don't want to save the drawing points, but the actual buffer, so I can load it later, like a Photoshop document or any other popular paint app. How is this possible?
Saving to an image doesn't seem like it would work unless it's possible to render it back into OpenGL once loaded, and even then it seems some of the quality would be lost to compression and the conversion process.
I thought about saving the drawing points, but loading that seems difficult, because somehow the colors would have to be saved too and aligned with the points once loaded.
Note: GLPaint uses CAEAGLLayer.
Thanks,
austin
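One way to save the actual buffer, sketched below, is to read the raw framebuffer contents back with glReadPixels: nothing is compressed, and the same bytes can later be re-uploaded with glTexImage2D. This is only a sketch; sizes and buffer ownership are up to you, and the function name is made up.

    #include <OpenGLES/ES1/gl.h>
    #include <stdlib.h>

    /* Sketch: capture the currently bound framebuffer as raw RGBA8888 bytes.
       The caller owns the returned buffer and can write it to disk verbatim,
       avoiding any lossy conversion. width/height must match the buffer. */
    static GLubyte *captureFramebuffer(GLsizei width, GLsizei height)
    {
        GLubyte *pixels = (GLubyte *)malloc((size_t)width * (size_t)height * 4);
        if (!pixels)
            return NULL;
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        return pixels;  /* note: rows come back bottom-up in GL's convention */
    }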
Greetings.
My goal is to implement a pipeline for processing video frames that has a "current" and "previous" frame. It needs to copy a subregion from the "previous" frame into the "current" frame on some occasions. This is a kind of decoder which is part of a larger application.
This is what I have running so far on the iPhone, using OpenGL ES 1.1:
                             glCopyTexSubImage2D
                           .---------------------.
       glTexSubImage2D     v    glDrawArrays     |
image ----------------> Texture ----------> FBO/Texture ------> render buffer
The Texture is updated with each new frame, or partial frame, as usual.
The Texture is drawn into the frame buffer object and eventually rendered.
These parts perform very nicely.
The problem is glCopyTexSubImage2D: profiling with Instruments shows it takes about 50% of the CPU, and it looks like it's doing the copy on the CPU. Yuck.
Before I post the code (and I will happily do that), I wanted to ask if this architecture is sound?
The next phase of the project is to share the final FBO/Texture with another GL context to render to an external screen. I've read other posts here about the delicate nature of shared resources.
Thanks for any guidance.
Cheers, Chris
P.S. I had some trouble getting the diagram to look right. The back-flow line should go from the FBO/Texture node to the Texture node.
glCopyTexImage*D and glGetTexImage*D are known to be slow as hell, no matter what platform you're on.
To replace glCopyTexSubImage2D you could just add another FBO/Texture and render the texture you want to copy into it; then you can use the texture attached to that FBO. I'm not sure it'll be faster, but it should be.
                                   Render to FBO
                             .-----------------------.
       glTexSubImage2D       v      glDrawArrays     |
image ----------------> FBO/Texture ----------> FBO/Texture ------> render buffer
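A hedged sketch of that copy-by-rendering idea using the OES framebuffer extension in OpenGL ES 1.1. It assumes dstFBO was created with glGenFramebuffersOES and has the destination ("previous") texture attached via glFramebufferTexture2DOES, that the modelview and projection matrices are identity, and that state save/restore and error checking happen elsewhere; the function name is a placeholder:

    #include <OpenGLES/ES1/gl.h>
    #include <OpenGLES/ES1/glext.h>

    /* Sketch: instead of glCopyTexSubImage2D, bind an FBO whose color
       attachment is the destination texture and draw the source texture
       into it. The GPU does the copy; nothing is read back to the CPU. */
    static void copyByRendering(GLuint dstFBO, GLuint srcTexture,
                                GLsizei width, GLsizei height)
    {
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, dstFBO);
        glViewport(0, 0, width, height);

        /* With identity matrices this clip-space quad covers the whole
           destination; shrink the quad (or viewport) for a subregion. */
        const GLfloat verts[]     = { -1, -1,   1, -1,   -1, 1,   1, 1 };
        const GLfloat texCoords[] = {  0,  0,   1,  0,    0, 1,   1, 1 };

        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, srcTexture);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, verts);
        glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        /* Re-bind your on-screen framebuffer before the final draw, e.g.
           glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFBO); */
    }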
I'm a beginner to 3D graphics in general and I'm trying to make a 3D game for the iPhone, specifically one that uses textures containing transparency. I am able to load a texture (an 8-bit .png file) into OpenGL and map it to a square (made from a triangle strip), but the transparent parts of the image are not transparent when I run the app in the simulator: they take on the background colour, whatever it is set to, yet they still obscure images that are further away. I am unable to post a screenshot as I am a new user, so my apologies for that. I will try to upload and link it some other way.
Even more annoying is that when I load the image into Apple's GLSprite example code, it works exactly as I want it to. I have copied the code from GLSprite's setupView into my project and it still doesn't work properly.
I am using the blend function:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I was under the impression that this is correct for what I want to do.
Is there something very basic I am missing here? Any help would be much appreciated as I am submitting this as a coursework project in a few weeks and would very much like it to work.
Let me break this down:
1) First your transparent object is drawn. At this point two things happen:
   - Its pixels are drawn correctly to the back buffer.
   - Its depth values are written to the depth buffer. Note that the depth buffer is written across your whole object; transparency does not affect it.
2) You then draw other objects behind the transparent object, but none of their pixels are drawn, because they are further away and so fail the depth test against the values already stored.
The solution to this problem is to draw your scene back-to-front (start with the things that are furthest away).
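For example, a hedged sketch of a typical draw order in OpenGL ES 1.1. The two draw* functions are placeholders for your own drawing code, and the glDepthMask step goes slightly beyond the back-to-front advice above: it is a common refinement so that overlapping transparent objects don't mask each other.

    #include <OpenGLES/ES1/gl.h>

    void drawOpaqueObjects(void);                  /* placeholder: your opaque draw calls */
    void drawTransparentObjectsBackToFront(void);  /* placeholder: sorted transparent draws */

    static void renderScene(void)
    {
        /* 1. Opaque geometry first, with depth testing and depth writes on. */
        glEnable(GL_DEPTH_TEST);
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
        drawOpaqueObjects();

        /* 2. Transparent geometry last, sorted back-to-front, with blending on
           and depth writes off. */
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);  /* premultiplied alpha, as in the question */
        glDepthMask(GL_FALSE);
        drawTransparentObjectsBackToFront();
        glDepthMask(GL_TRUE);
    }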
Hope that helps.
Edit: I'm assuming you are using the depth buffer here. If this isn't correct I'll consider writing another answer.
So I'd like to create a class that accepts a CGImage from an image file I've just read from disk, does some work on that image as a texture (color transformations), then returns the result as a CGImage, and does all of this in the background without drawing to the screen. I've looked at Apple's GLImageProcessing demo app, but it draws all its processing to the screen, and I've only seen bits and pieces of how to do parts of what I want and can't assemble them.
Any suggestions would be greatly appreciated.
Thanks
You will have to use Framebuffer Objects (FBOs) to draw offscreen, but you still need a GL rendering context.
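A hedged sketch of that offscreen round trip in OpenGL ES 1.1 with the OES framebuffer extension. It assumes an EAGLContext has already been created and made current on this thread, that width/height are powers of two, and it leaves the actual color-transformation drawing as a placeholder; the function name is made up:

    #include <OpenGLES/ES1/gl.h>
    #include <OpenGLES/ES1/glext.h>
    #include <CoreGraphics/CoreGraphics.h>
    #include <stdlib.h>

    /* Sketch: render offscreen into a texture-backed FBO, then read the result
       back and wrap it in a CGImage. Requires a current GL context. */
    static CGImageRef processOffscreen(CGImageRef source, size_t width, size_t height)
    {
        /* 1. Create a texture to render into and attach it to an FBO. */
        GLuint tex, fbo;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        glGenFramebuffersOES(1, &fbo);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo);
        glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                  GL_TEXTURE_2D, tex, 0);
        if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
            return NULL;

        glViewport(0, 0, (GLsizei)width, (GLsizei)height);
        /* ... upload `source` as a second texture and draw it here with your
           color transformation applied (placeholder) ... */

        /* 2. Read the pixels back and build a CGImage from them. */
        void *pixels = malloc(width * height * 4);
        glReadPixels(0, 0, (GLsizei)width, (GLsizei)height,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
        CGContextRef bitmap = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                                    rgb, kCGImageAlphaPremultipliedLast);
        CGImageRef result = CGBitmapContextCreateImage(bitmap);

        CGContextRelease(bitmap);
        CGColorSpaceRelease(rgb);
        free(pixels);
        glDeleteFramebuffersOES(1, &fbo);
        glDeleteTextures(1, &tex);
        return result;  /* caller releases it; note GL rows come back bottom-up */
    }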
I'm working on an iPhone app that lets the user draw using GL. I used the GLPaint sample code project as a firm foundation, but now I want to add the ability for the user to load one of their previous drawings and continue working on it.
I know how to get the framebuffer contents and save it as a UIImage. Is there a way for me to take the UIImage and tell GL to draw that?
Any help is much appreciated.
Typically you would either:
1) Use glDrawPixels() (note that glDrawPixels is not part of OpenGL ES, so this option is out on the iPhone), or
2) Load the image into a texture and then render a textured quad.
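For option 2, a hedged sketch of getting the UIImage's pixels into a form glTexImage2D accepts, by drawing its CGImage into a CoreGraphics bitmap context; the function name is made up, and power-of-two dimensions are assumed:

    #include <CoreGraphics/CoreGraphics.h>
    #include <stdlib.h>

    /* Sketch: turn a CGImage (e.g. savedDrawing.CGImage from your UIImage) into
       tightly packed RGBA bytes. The caller frees the returned buffer. */
    static void *rgbaBytesFromCGImage(CGImageRef image)
    {
        size_t width  = CGImageGetWidth(image);
        size_t height = CGImageGetHeight(image);

        void *pixels = calloc(width * height, 4);
        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                                 rgb, kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(ctx, CGRectMake(0, 0, (CGFloat)width, (CGFloat)height), image);

        CGContextRelease(ctx);
        CGColorSpaceRelease(rgb);
        return pixels;   /* pass to glTexImage2D(..., GL_RGBA, GL_UNSIGNED_BYTE, pixels) */
    }

From there it is the same texture-plus-quad drawing sketched near the top of this page.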