I'm working on an iPhone app that lets the user draw using GL. I used the GLPaint sample code project as a firm foundation, but now I want to add the ability for the user to load one of their previous drawings and continue working on it.
I know how to get the framebuffer contents and save it as a UIImage. Is there a way for me to take the UIImage and tell GL to draw that?
Any help is much appreciated.
Typically you would either:
1) Use glDrawPixels()
2) Load the image into a texture and then render a quad (a sketch of this follows).
Note that glDrawPixels() isn't available in OpenGL ES, so on the iPhone option 2 is really your only choice.
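A minimal sketch of option 2, assuming OpenGL ES 1.1 with an EAGLContext already current; the function name textureFromImage is just for illustration, and on ES 1.1 the texture dimensions must be powers of two, so pad or scale the saved image accordingly:

#import <OpenGLES/ES1/gl.h>
#import <UIKit/UIKit.h>

// Illustrative helper: copy a UIImage's pixels into an RGBA8888 buffer and
// upload them as an OpenGL ES texture. Assumes an EAGLContext is current.
GLuint textureFromImage(UIImage *image)
{
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Redraw the image into a bitmap context so GL receives predictable RGBA data.
    GLubyte *pixels = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(ctx);

    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    free(pixels);
    return texture;
}

Once the texture exists, draw a quad that covers the view and the saved drawing is back in the framebuffer.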
Related
I'm working from the GLPaint example and have no idea how to implement saving to and loading from a file. I don't want the drawing points, but to save the actual buffer so I can load it later, like a Photoshop document or any other popular paint app. How is this possible?
Saving to an image doesn't seem like it would work unless it's possible to render it into OpenGL once loaded, and even then it seems some of the quality would be lost to compression and the conversion process.
I thought about saving the drawing points, but loading that seems difficult, because somehow the colors would have to be saved too and aligned once loaded.
Note: GLPaint uses CAEAGLLayer.
Thanks,
austin
I did some testing on GLPaint and managed to save the image. But now my problem is that I have no idea how to load the saved image back into the render buffer so I can continue editing. I would be glad if someone could show the code and the idea behind it; I am very new to OpenGL ES.
If you've saved the image, then to load it back and display it you have to create a texture object and map it onto some polygons. Check this tutorial.
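As a rough sketch (assuming OpenGL ES 1.1 fixed-function, an orthographic projection matching the view in points as GLPaint sets up with glOrthof(), and a GLuint texture you've already uploaded), the quad draw looks something like this:

// Full-screen quad; the 320x480 values are placeholders for the view's size,
// and "texture" is the GLuint you uploaded the saved image into.
const GLfloat vertices[] = {
    0.0f,   0.0f,
    320.0f, 0.0f,
    0.0f,   480.0f,
    320.0f, 480.0f,
};
const GLfloat texCoords[] = {
    0.0f, 0.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f,
};

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glDisable(GL_TEXTURE_2D);

If the restored drawing comes out upside down, flip the V texture coordinates; Core Graphics and OpenGL disagree on where the vertical origin is.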
So I'd like to create a class that accepts a CGImage from an image file I just read from disk, does work on that image texture (color transformations), then returns the texture as a CGImage, and does all this in the background without drawing to the screen. I've looked at Apple's GLImageProcessing demo app, but it draws all the processing to the screen, and I've only seen bits and pieces of how to do parts of what I want and can't assemble them.
Any suggestions would be greatly appreciated.
Thanks
You will have to use Framebuffer Objects (FBOs) to draw offscreen, but you still need a GL rendering context.
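On the iPhone with OpenGL ES 1.1, FBOs come from the GL_OES_framebuffer_object extension (declared in <OpenGLES/ES1/glext.h>). A minimal sketch of an offscreen render target, assuming an EAGLContext is already current and using placeholder power-of-two dimensions:

GLsizei texWidth = 256, texHeight = 256;   // placeholder sizes
GLuint fbo, targetTexture;

// Empty texture that will receive the offscreen rendering.
glGenTextures(1, &targetTexture);
glBindTexture(GL_TEXTURE_2D, targetTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Attach it to a framebuffer object so drawing goes into the texture.
glGenFramebuffersOES(1, &fbo);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                          GL_TEXTURE_2D, targetTexture, 0);

if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) == GL_FRAMEBUFFER_COMPLETE_OES) {
    glViewport(0, 0, texWidth, texHeight);
    // ... draw the source image with your color transformation here ...
    // then glReadPixels() the result and wrap it back up as a CGImage.
}
glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);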
I'm trying to create dynamic graphics for my game, which I'm building with Cocos2D. The graphics generation will occur at predictable, finite points, such as level loading. I'm having a hard time figuring out how to actually draw this at runtime. From what I can tell, the easiest way would be to draw into a PNG file at runtime and then load an AtlasSprite based on the PNG file, but I can't seem to figure out if this is indeed the best way or how to go about doing it. Any suggestions?
I'm not sure how Cocos2D loads Sprites or Atlases so this is a more general answer.
It might be worth taking a look at the Texture2D class that comes with the old CrashLanding example app. It uses a bitmap graphics context to generate a texture of a string for drawing with OpenGL. The code uses the CGBitmapContextCreate function to create a context. You can draw whatever you want onto it.
Then once you've finished drawing, you can either save the contents out as a PNG or call glTexImage2D on the data to use it with OpenGL; a sketch of both follows.
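Roughly, and with placeholder sizes and a placeholder file location (option B assumes a texture object is already generated and bound):

size_t side = 256;                         // placeholder texture size
void *data = calloc(side * side * 4, 1);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(data, side, side, 8, side * 4,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Draw the dynamic graphics with ordinary Core Graphics calls.
CGContextSetRGBFillColor(ctx, 1.0f, 0.0f, 0.0f, 1.0f);
CGContextFillEllipseInRect(ctx, CGRectMake(32.0f, 32.0f, 192.0f, 192.0f));

// Option A: write a PNG and load it as a sprite later.
NSString *pngPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"generated.png"]; // placeholder location
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
NSData *png = UIImagePNGRepresentation([UIImage imageWithCGImage:cgImage]);
[png writeToFile:pngPath atomically:YES];
CGImageRelease(cgImage);

// Option B: hand the pixel data straight to OpenGL (texture already bound).
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)side, (GLsizei)side, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data);

CGContextRelease(ctx);
free(data);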
There's more information about it in the Graphics and Drawing documentation, specifically the section Creating and Drawing Images.
Edit: It looks like Cocos2D comes with Texture2D so you should be in good shape. Check out the initWithString method here.
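From memory, the string initializer in that old Texture2D class was used roughly like the call below; check the header in your copy of Cocos2D, since the exact signature may differ.

// Rough usage of the old Texture2D string initializer (signature recalled from
// the sample code; verify against your Texture2D.h before relying on it).
Texture2D *label = [[Texture2D alloc] initWithString:@"Level 1"
                                          dimensions:CGSizeMake(256, 64)
                                           alignment:UITextAlignmentCenter
                                            fontName:@"Helvetica"
                                            fontSize:24];
[label drawAtPoint:CGPointMake(160, 240)];
[label release];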
I'm trying to work out how to draw from a TexturePage using CoreGraphics.
Given a texture page (CGImageRef) which contains multiple 64x64 packed textures, how do I render sub-areas from that page into the current graphics context?
CGContextDrawImage seems to only take a destination rect. I noticed CGImageCreateWithImageInRect; however, this creates a new image. I don't want a new image, I simply want to draw from the original image.
I'm sure this is possible, however I'm new to iPhone development.
Any help much appreciated.
Thanks
What's wrong with CGImageCreateWithImageInRect?
// srcRect is the 64x64 cell you want, in the texture page's coordinates.
CGImageRef subImage = CGImageCreateWithImageInRect(image, srcRect);
if (subImage) {
    // Draw the cell into destRect in the current context, then release it.
    CGContextDrawImage(context, destRect, subImage);
    CFRelease(subImage);
}
Edit: Wait a minute. Use CGImageCreateWithImageInRect. That is what it's for.
Here are the ideas I wrote up initially; I will leave them in case they're useful.
See if you can create a sub-image of some kind from another image, such that it borrows the original image's buffer (much like some substring implementations). Then you could draw using the sub-image.
It might be that Core Graphics is intended more for compositing than for image manipulation, so you may have to use separate image files in your application bundle. If the SDK docs don't particularly recommend what you're doing, then I suggest you go that route, since it seems the simplest and most natural way to do it.
You could use OpenGL ES instead, in which case you can specify the texture coordinates of the polygon vertices to select just that section of your big texture.
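For example (a sketch; the 256x256 atlas size and the cell position are placeholders), the texture coordinates for one 64x64 cell work out like this:

// Pick one 64x64 cell out of a 256x256 texture page by computing its
// normalized texture coordinates. Cell position and atlas size are placeholders.
const GLfloat atlasSize = 256.0f, cellSize = 64.0f;
const GLfloat cellX = 2.0f, cellY = 1.0f;      // third column, second row

GLfloat u0 = (cellX * cellSize) / atlasSize;
GLfloat v0 = (cellY * cellSize) / atlasSize;
GLfloat u1 = u0 + cellSize / atlasSize;
GLfloat v1 = v0 + cellSize / atlasSize;

const GLfloat texCoords[] = {
    u0, v0,
    u1, v0,
    u0, v1,
    u1, v1,
};
// Pass texCoords to glTexCoordPointer() and draw the quad as usual;
// only that 64x64 region of the page gets sampled.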