I'm trying to create dynamic graphics for my game, which I'm building with Cocos2D. The graphics generation will occur at predictable, finite points, such as level loading. I'm having a hard time figuring out how to actually draw this at runtime. From what I can tell, the easiest way would be to draw into a PNG file at runtime and then load an AtlasSprite based on the PNG file, but I can't seem to figure out if this is indeed the best way or how to go about doing it. Any suggestions?
I'm not sure how Cocos2D loads Sprites or Atlases so this is a more general answer.
It might be worth taking a look at the Texture2D class that comes with the old CrashLanding example app. It uses a bitmap graphics context to generate a texture of a string for drawing with OpenGL. The code uses the CGBitmapContextCreate function to create a context. You can draw whatever you want onto it.
Then once you've finished drawing, you can either save the file as a PNG or you can call glTexImage2D on the data to use it with OpenGL.
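For illustration, here's a minimal sketch of that approach; the 256x256 size and the circle drawing are just assumptions, and OpenGL ES 1.1 wants power-of-two texture sizes:

size_t size = 256;
void *data = calloc(size * size, 4); // RGBA, 4 bytes per pixel
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(data, size, size, 8, size * 4,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Draw whatever you want into the context, e.g. a filled circle.
CGContextSetRGBFillColor(ctx, 1.0, 0.0, 0.0, 1.0);
CGContextFillEllipseInRect(ctx, CGRectMake(32, 32, 192, 192));

// Upload the raw pixels as an OpenGL texture.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, size, size, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data);

CGContextRelease(ctx);
free(data);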
There's more information about it in the Graphics and Drawing documentation, specifically the section Creating and Drawing Images.
Edit: It looks like Cocos2D comes with Texture2D so you should be in good shape. Check out the initWithString method here.
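For example, a minimal usage sketch, assuming the Texture2D class bundled with older Cocos2D releases (check your version for the exact signature):

Texture2D *label = [[Texture2D alloc] initWithString:@"Level 1"
                                          dimensions:CGSizeMake(256, 64)
                                           alignment:UITextAlignmentCenter
                                            fontName:@"Helvetica"
                                            fontSize:24];
[label drawAtPoint:CGPointMake(160, 240)];
[label release]; // pre-ARC era: release when done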
My platform is iPhone - OpenGL ES 1.1
I'm looking for a tutorial about modifying or drawing to a texture.
For example:
I have a background texture (just a blank blue-white gradient image) and an object texture.
I need to draw the object onto the background many times, so to optimize performance I want to bake it into the background texture.
Does anyone know the fastest way to do this?
Thanks a lot!
Do you want to draw it into the background texture and then keep that, or just overlay it, or what? I'm not entirely sure what the question is.
To draw onto the background and then reuse that, you'll want to create another texture, or a pbuffer/fbo, and bind that. Draw a full-screen quad with your background image, then draw additional quads with the overlays as needed. The bound texture should then have the results, composited as necessary, and can be used as a texture or copied into a file. This is typically known as render-to-texture, and is commonly used to combine images or other dynamic image effects.
To optimize the performance here, you'll want to reuse the texture containing the final result. This reduces the render cost from whatever it was before (one background draw plus a draw per overlaid object) to a single draw of the composited texture.
Edit: This article seems to have a rather good breakdown of OpenGL ES RTT. Some good information in this one as well, though not ES-specific.
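A rough sketch of the render-to-texture setup on OpenGL ES 1.1, assuming the OES framebuffer extension that the iPhone supports; width, height, defaultFramebuffer and the two draw helpers are placeholders:

GLuint fbo, composedTex;
glGenTextures(1, &composedTex);
glBindTexture(GL_TEXTURE_2D, composedTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmaps, keeps the FBO complete
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL); // empty, power-of-two sized

glGenFramebuffersOES(1, &fbo);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                          GL_TEXTURE_2D, composedTex, 0);

drawBackgroundQuad(); // hypothetical: full-screen quad with the background texture
drawObjectQuads();    // hypothetical: the repeated object overlays

glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer); // back to the screen
// composedTex now holds background + objects and can be drawn as a single quad.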
To overlay the decals, you simply need to draw them over the background. This is the same drawing method as in RTT, but without binding a texture as the render target. This will not persist (it exists only in the backbuffer), but it will give the same visual effect.
To optimize this method, you'll want to batch the decal drawing as much as possible. Assuming they all share the same properties and source texture, this is pretty easy: bind the texture and set state as needed, fill a vertex array with the quad corners, and draw all the quads in one call. You can also draw them one at a time, but issuing a separate draw call per quad is somewhat more expensive.
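Something like this, as a sketch; MAX_DECALS, decalCount, decals, decalTexture and fillDecalQuad are placeholder assumptions:

#define MAX_DECALS 64

GLfloat verts[MAX_DECALS * 12];     // 2 triangles * 3 vertices * 2 floats per decal
GLfloat texCoords[MAX_DECALS * 12];

for (int i = 0; i < decalCount; i++) {
    // Hypothetical helper: writes the 6 corner positions and UVs of one decal.
    fillDecalQuad(&verts[i * 12], &texCoords[i * 12], decals[i]);
}

glBindTexture(GL_TEXTURE_2D, decalTexture);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLES, 0, decalCount * 6); // one call for every decal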
I am trying to draw individual pixels in Xcode.
I already know Objective-C but not the Quartz/graphics stuff, and I'm not interested in it at the moment. I simply want a basic app that lets me have an X*Y map and be able to show the pixel at (x, y) with an RGB color.
I haven't been able to find a tutorial for this, and I think it must be pretty quick to do. Do you guys have a file like this, or could you point me to a tutorial?
Any help is greatly appreciated.
Simply use NSBitmapImageRep with setColor:atX:y:
Create an empty NSBitmapImageRep.
Every time you need to update the view, do something like this:
- Set the specific pixel to a certain color with setColor:atX:y:
- Convert the NSBitmapImageRep to an NSImage
- Show the result in an NSImageView
This works perfectly if you don't need to update the view too many times per second.
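A minimal sketch of the idea; width, height, x, y and the imageView outlet are assumptions:

NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:width
                  pixelsHigh:height
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:0];

// Plot a pixel, wrap the rep in an image, and display it.
[rep setColor:[NSColor redColor] atX:x y:y];

NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[image addRepresentation:rep];
[imageView setImage:image];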
I already know Objective-C but not the Quartz/graphics stuff, and I'm not interested in it at the moment. I simply want a basic app that lets me have an X*Y map and be able to show the pixel at (x, y) with an RGB color.
If you're not interested in learning Core Graphics, then tough luck. You get two choices for graphics: OpenGL, or UIKit/Core Graphics. Your choice, but Core Graphics is considerably easier. You can use OpenGL to paint on a per-pixel basis, but I'm assuming that if you have no interest in learning Core Graphics, you're probably not going to be keen on OpenGL either. For high-performance applications, though, OpenGL is the only realistic option.
So, if you don't want to learn OpenGL your best bet is the Quartz programming guide:
http://developer.apple.com/library/ios/#DOCUMENTATION/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html#//apple_ref/doc/uid/TP40007533-SW1
Out of interest, why wouldn't you want to look into Quartz?
You could either use Core Graphics to draw a filled rect in drawRect: after a setNeedsDisplay on the view, or you could generate a bitmap context and assign it to the layer contents of your view at a 1.0 scale if you want actual pixels instead of 1x1-point rectangles.
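A sketch of the drawRect: variant, assuming a UIView subclass backed by a hypothetical pixels buffer of normalized RGB values (call setNeedsDisplay after changing the buffer; kMapWidth and kMapHeight are assumptions):

typedef struct { CGFloat r, g, b; } RGB;

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    for (int y = 0; y < kMapHeight; y++) {
        for (int x = 0; x < kMapWidth; x++) {
            RGB c = pixels[y * kMapWidth + x]; // hypothetical backing store
            CGContextSetRGBFillColor(ctx, c.r, c.g, c.b, 1.0);
            // 1x1 in points; scale by contentScaleFactor for true pixels
            CGContextFillRect(ctx, CGRectMake(x, y, 1, 1));
        }
    }
}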
I'm developing a cute puzzle app - http://gotoandplay.freeblog.hu/categories/compactTangram/ - and for performance reasons I decided to render the view with OpenGL. I started learning it, and I'm OK with buffers, vertices and textures in a really basic way.
The situation:
In the game the user manipulates 7 puzzle pieces, each of which has 5 sublayers to get a pretty lighting feel. Most of the textures are 256x256. The user manipulates only one piece at a time, so the rest are unchanged during play. A skeleton of the app without any graphics is here: http://gotoandplay.freeblog.hu/archives/2009/11/11/compactTangram_v10_-_puzzle_completement_test/
The question:
How should I organize them? Is it a good idea to "predraw" the current piece states into separate framebuffers(?)/textures(?), or can I simply redraw every piece/layer (1 + 7*5 = 36 sprites) each timestep? If I use "predraw", what should I do? Draw into a puzzlePiece framebuffer? Then how can I draw that into the scene framebuffer? Or is there a simpler way to "merge" textures?
Hope you can understand my question; if it seems too vague, please take a look at my idea of how to render an actual piece on my blog (there is a simple Flash implementation of what I'm gonna do) here: http://gotoandplay.freeblog.hu/archives/2010/01/07/compactTangram_072_-_tan_rendering_labs/
A common way of handling textures is to pack all your images into a 'texture atlas' at the start of the game/level.
Your maximum texture size is 1024x1024 and you can have about three of them in memory on the iPhone.
When you have all the images in these 'super textures' you can just draw the relevant area of the large texture. This has the advantage that you bind textures less often and gain better performance, as well as cutting out the wasted space that comes from padding small images up to power-of-two texture sizes.
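For example, drawing one 64x64 region out of a 1024x1024 atlas with OpenGL ES 1.1 might look roughly like this; atlasTexture, the quad position (x, y) and the region's pixel origin (sx, sy) are assumptions:

const GLfloat atlasSize = 1024.0f;
GLfloat u0 = sx / atlasSize,           v0 = sy / atlasSize;
GLfloat u1 = (sx + 64.0f) / atlasSize, v1 = (sy + 64.0f) / atlasSize;

GLfloat verts[]     = { x, y,   x + 64, y,   x, y + 64,   x + 64, y + 64 };
GLfloat texCoords[] = { u0, v0,  u1, v0,  u0, v1,  u1, v1 };

glBindTexture(GL_TEXTURE_2D, atlasTexture); // bound once for many sub-images
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);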
I'm writing a game that displays 56 hexagonal pieces filling the screen in the shape of a board. I'm currently drawing each piece using a singleton rendering class: when called to draw a piece, it creates a path from 6 points based on the coordinate passed in. This path is filled with a solid color, and then a 59x59 PNG with an alpha-to-white gradient is overlaid on the drawing to give the piece a shiny look. Note that I'm currently doing this in Core Graphics.
My first thought is that creating a path every time I draw is costly, and it seems like I could do this once and then reuse it, but I'm not sure of the best approach. When I look at the bottlenecks with Shark, the drawing of the PNG is the most taxing part of the process. I've tried rendering just the PNG overlay and just the path without the overlay, and both give me some frame gains, although removing the PNG overlay yields the most frames.
My current thought is that at startup I should render 6 paths (one for each piece color), overlay them with the PNG, store an image of each finished piece, and then just redraw those images each time I need them. Is there an efficient mechanism for storing something you've drawn once and redrawing it? It kinda sounds like I'd be running into the whole drawing-PNGs-too-often thing again, but maybe there's a less taxing method that does a similar thing...
Any suggestions are much appreciated.
Thanks!
You might try CGLayer or CALayer.
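A CGLayer lets you render a piece once and then stamp the cached result each frame. A rough sketch, where hexagonPath, overlayImage, pieceSize and piecePosition are assumptions:

// At startup: draw the path fill and the shiny PNG overlay once into a layer.
CGLayerRef pieceLayer = CGLayerCreateWithContext(context, pieceSize, NULL);
CGContextRef layerCtx = CGLayerGetContext(pieceLayer);

CGContextAddPath(layerCtx, hexagonPath); // hypothetical pre-built CGPathRef
CGContextSetRGBFillColor(layerCtx, 0.2, 0.4, 0.8, 1.0);
CGContextFillPath(layerCtx);
CGContextDrawImage(layerCtx, CGRectMake(0, 0, 59, 59), overlayImage);

// Per frame: a cheap stamp instead of re-rendering path + PNG.
CGContextDrawLayerAtPoint(context, piecePosition, pieceLayer);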
General thoughts:
Game programming on iPhone usually necessitates OpenGL. Core Graphics is a bit easier to work with, but OpenGL is optimized for speed.
Prerender this "shiny look" into the textures as much as possible (as in: do it in Photoshop before you even add them to your project). Alpha blending is hell on performance.
Maybe try PVRTC (also this tutorial), as it's a compressed texture format from the manufacturer of the iPhone's GPU. Then again, this could make things worse depending on where your bottleneck is.
If you really need speed you have to go the OpenGL route. Be careful if you want to mix OpenGL and Core Animation, they can conflict.
OpenGL is a pain if you haven't done much with it. It sounds like you could use Core Animation instead and make each tile a layer. CA doesn't redraw a layer unless you change something, so you should be able to just move a layer around without taking a big hit. Also note that CA keeps layer contents in texture memory, so it should be much faster.
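A sketch of the layer-per-tile idea (QuartzCore framework; pieceImage is assumed to be a prerendered UIImage, and self.view a view controller's view):

CALayer *tile = [CALayer layer];
tile.bounds = CGRectMake(0, 0, 59, 59);
tile.contents = (id)pieceImage.CGImage; // cached in texture memory by CA
[self.view.layer addSublayer:tile];

// Later: moving the tile is cheap; no redraw is triggered.
tile.position = CGPointMake(160, 240);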
Some others have mentioned that you should use OpenGL. Here's a nice introduction specifically for the iPhone: OpenGL ES from the Ground Up: Table of Contents
You might also want to look at cocos2d. It seems significantly faster than Core Animation in my tests, and it provides lots of useful stuff for games.
I'm trying to work out how to draw from a TexturePage using CoreGraphics.
Given a texture page (a CGImageRef) which contains multiple 64x64 packed textures, how do I render sub-areas from that page onto the device context?
CGContextDrawImage seems to take only a destination rect. I noticed CGImageCreateWithImageInRect; however, this creates a new image. I don't want a new image, I simply want to draw from the original image.
I'm sure this is possible, however I'm new to iPhone development.
Any help much appreciated.
Thanks
What's wrong with CGImageCreateWithImageInRect?
CGImageRef subImage = CGImageCreateWithImageInRect(image, srcRect);
if (subImage) {
    // Draw the cropped tile into the destination rect, then release the sub-image.
    CGContextDrawImage(context, destRect, subImage);
    CFRelease(subImage);
}
Edit: Wait a minute. Use CGImageCreateWithImageInRect. That is what it's for.
Here are the ideas I wrote up initially; I will leave them in case they're useful.
See if you can create a sub-image of some kind from another image, such that it borrows the original image's buffer (much like some substring implementations). Then you could draw using the sub-image.
It might be that Core Graphics is intended more for compositing than for image manipulation, so you may have to use separate image files in your application bundle. If the SDK docs don't particularly recommend what you're doing, then I suggest you go that route, since it seems the simplest and most natural way to do it.
You could use OpenGL ES instead, in which case you can specify the texture coordinates of the polygon vertices to select just that section of your big texture.