Alright, I have a UIView which displays a CGPath which is rather wide. It can be scrolled within a UIScrollView horizontally. Right now, I have the view using a CATiledLayer, because it's more than 1024 pixels wide.
So my question is: is this efficient?
- (void)drawRect:(CGRect)rect {
    CGContextRef g = UIGraphicsGetCurrentContext();
    // The entire path is added and stroked, regardless of which
    // tile is currently being drawn.
    CGContextAddPath(g, path);
    CGContextSetStrokeColor(g, color);
    CGContextDrawPath(g, kCGPathStroke);
}
Essentially, I'm drawing the whole path every time a tile in the layer is drawn. Does anybody know if this is a bad idea, or is CGContext relatively smart about only drawing the parts of the path that are within the clipping rect?
The path is mostly set up in such a way that I could break it up into blocks that are similar in size and shape to the tiles, but it would require more work on my part, would require some redundancy amongst the paths (for shapes that cross tile boundaries), and would also take some calculating to find which path or paths to draw.
Is it worth it to move in this direction, or is CGPath already drawing relatively quickly?
By necessity Quartz must clip everything it draws against the dimensions of the CGContext it's drawing into. But it will still save CPU if you only send it geometry that is visible within that tile. If you do this by preparing multiple paths, one per tile, you're talking about doing that clipping once (when you create your multiple paths) versus Quartz doing it every time you draw a tile. The efficiency gain will come down to how complex your path is: if it's a few simple shapes, no big deal; if it's a vector drawn map of the national road network, it could be huge! Of course you have to trade this speedup off against the increased complexity of your code.
Before you go crazy optimizing, you could use Instruments to see how much time is actually being spent in Quartz.
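To make the one-path-per-tile idea concrete, here is a minimal sketch. The names tilePaths (an NSArray of pre-split UIBezierPath slices, one per fixed-width column) and kTileWidth are made up for illustration, and color is assumed here to be a CGColorRef rather than a component array:

- (void)drawRect:(CGRect)rect {
    CGContextRef g = UIGraphicsGetCurrentContext();
    // Work out which pre-split slices the clip rect touches.
    NSUInteger first = (NSUInteger)floor(CGRectGetMinX(rect) / kTileWidth);
    NSUInteger last = (NSUInteger)floor((CGRectGetMaxX(rect) - 1) / kTileWidth);
    for (NSUInteger i = first; i <= last && i < tilePaths.count; i++) {
        UIBezierPath *slice = tilePaths[i]; // hypothetical pre-split slice
        CGContextAddPath(g, slice.CGPath);
    }
    CGContextSetStrokeColorWithColor(g, color);
    CGContextDrawPath(g, kCGPathStroke);
}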
Related
I'm creating a puzzle game that generates random-sized pieces with 2D meshes. The images contain transparent portions, and sometimes a piece is completely transparent. I need to detect what percentage of a piece is transparent. One way I found to do this is to go pixel by pixel. I posted my solution to this HERE. However, this process adds a few seconds during loading, which I'd like to avoid, so I'm looking for other ideas.
I've considered using the selection outline of a MeshCollider to somehow get a surface area I could compare to the surface area of the mesh, but everything I find is about rendering outlines with specialized shaders. Does anyone have any ideas on how to solve this?
1) I guess you could add a PolygonCollider2D to your sprite and use its Path for the outline and the calculation of the surface area. I'm not sure, however, whether this will be faster.
PolygonCollider2D.GetPath:
A path is a cyclic sequence of line segments between points that define the outline of the Collider
Checking PolygonCollider2D.GetTotalPointCount or path length may be good enough to determine if the sprite is 'empty'.
Sprite.vertices, Sprite.triangles may also be helpful.
2) You could also improve the performance of your first approach:
- Instead of calling GetPixel as you do now, use GetPixels or GetPixels32 and loop through the returned array in a single for loop. From the Unity documentation:
Using GetPixels can be faster than calling GetPixel repeatedly, especially for large textures. In addition, GetPixels can access individual mipmap levels. For most textures, even faster is to use GetPixels32 which returns low precision color data without costly integer-to-float conversions.
- Check only every 2nd or nth pixel; that should be good enough for an approximation.
- Limit the number of type casts.
How would I efficiently draw a CGPath on a CATiledLayer? I'm currently checking if the bounding box of the tile intersects the bounding box of the path like this:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context {
    CGRect boundingBox = CGPathGetPathBoundingBox(drawPath);
    CGRect rect = CGContextGetClipBoundingBox(context);
    // Skip tiles whose clip rect doesn't touch the path's bounding box.
    if (!CGRectIntersectsRect(boundingBox, rect))
        return;
    // Draw path...
}
This is not very efficient, since drawLayer:inContext: is called multiple times from multiple threads, and the whole path ends up being drawn many times.
Is there a better, more efficient way to do this?
The simplest option is to draw your curve into a large image and then tile the image. But if you're tiling, it probably means the image would be too large, or you would have just drawn the path in the first place, right?
So you probably need to split your path up. The simplest approach is to split it up element by element using CGPathApply. For each element, you can check its bounding box and determine if that element falls in your bounds. If not, just keep track of the last end point. If so, then move to the last end point you saw and add the element to a new path for this tile. When you're done, each tile will draw its own path.
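Here is a sketch of what that applier could look like, handling only move and line elements for brevity (quad and cubic elements would be handled the same way, with their control points widening the bounds test); fullPath and tileRect are placeholder names:

typedef struct {
    CGRect tileRect;           // bounds of the tile being built
    CGMutablePathRef tilePath; // output sub-path for this tile
    CGPoint lastPoint;         // end point of the previous element
} TileSplitInfo;

static CGRect SegmentBounds(CGPoint a, CGPoint b) {
    // Padded by a point so purely horizontal or vertical segments
    // still form a non-empty rect for the intersection test.
    return CGRectInset(CGRectMake(MIN(a.x, b.x), MIN(a.y, b.y),
                                  fabs(b.x - a.x), fabs(b.y - a.y)), -1, -1);
}

static void SplitApplier(void *info, const CGPathElement *el) {
    TileSplitInfo *s = (TileSplitInfo *)info;
    switch (el->type) {
        case kCGPathElementMoveToPoint:
            // Nothing visible by itself; just track the point.
            s->lastPoint = el->points[0];
            break;
        case kCGPathElementAddLineToPoint: {
            CGPoint end = el->points[0];
            if (CGRectIntersectsRect(SegmentBounds(s->lastPoint, end), s->tileRect)) {
                // Move to the last end point we saw, then add the element.
                CGPathMoveToPoint(s->tilePath, NULL, s->lastPoint.x, s->lastPoint.y);
                CGPathAddLineToPoint(s->tilePath, NULL, end.x, end.y);
            }
            s->lastPoint = end; // keep tracking even when the element is skipped
            break;
        }
        default:
            // Curve elements omitted here for brevity.
            break;
    }
}

// Usage: build the sub-path for one tile.
// CGMutablePathRef tilePath = CGPathCreateMutable();
// TileSplitInfo info = { tileRect, tilePath, CGPointZero };
// CGPathApply(fullPath, &info, SplitApplier);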
Technically you will "draw" things that go outside your bounds here (such as a line that extends beyond the tile), but this is much cheaper than it sounds. Core Graphics is going to clip single elements very easily. The goal is to avoid calculating elements that are not in your bounding box at all.
Be sure to cache the resulting path. You don't need to calculate the path for every tile; just the ones you're drawing. But avoid recalculating it every time the tile draws. Whenever the data changes, dump your cache. If there are a very large number of tiles, you can also use NSCache to optimize this even better.
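One possible shape for that cache, purely illustrative: buildPathForTileRect: is a hypothetical helper standing in for the element-splitting above, and UIBezierPath is used because NSCache stores objects, not CGPathRefs:

NSCache *tileCache = self.tileCache; // created once; emptied whenever the data changes
NSString *key = NSStringFromCGRect(tileRect);
UIBezierPath *tilePath = [tileCache objectForKey:key];
if (!tilePath) {
    tilePath = [self buildPathForTileRect:tileRect]; // hypothetical helper
    [tileCache setObject:tilePath forKey:key];
}
CGContextAddPath(context, tilePath.CGPath);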
You don't show where the path gets created. If possible, you might try building the path up in the -drawLayer:inContext: method, only creating the portion of it needed for the tile being drawn.
As with all performance problems, you should use Instruments to profile your code and find out exactly where the bottlenecks are. Have you tried that already, and if so, what did you find?
As a side note, is there a reason you're using CGPath instead of UIBezierPath? From Apple's documentation:
For creating paths in iOS, it is recommended that you use UIBezierPath instead of CGPath functions unless you need some of the capabilities that only Core Graphics provides, such as adding ellipses to paths. For more on creating and rendering paths in UIKit, see “Drawing Shapes Using Bezier Paths.”
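For instance, the drawRect: from the first question could be restated like this (just a sketch; it assumes the path has been wrapped once via +bezierPathWithCGPath: and that strokeColor is a UIColor):

- (void)drawRect:(CGRect)rect {
    [strokeColor setStroke];  // UIColor, assumed
    [self.bezierPath stroke]; // e.g. [UIBezierPath bezierPathWithCGPath:path]
}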
I want to use bitmap images as a "map" for levels in an iPhone game. Basically, it's all about the location of obstacles in the rectangular world. The obstacles would be color-coded: where a pixel is white, there's no obstacle; black means there is one at that point.
Now I need to use this data to do two things: (a) display the level map, and (b) use it for in-game calculations. So, in general, I need a way to read the data from the bitmap and create some matrix-like data structure with that information, both to overlay the bitmap onto the level graphics and to calculate collisions and such.
How should I do it? Is there an easy way to read the data from an image? And what's the best format to keep the images in for this?
Have you looked at how Texture2D translates an image file into an OpenGL texture?
Tip: take a look at this method in Texture2D.m:
- (id)initWithCGImage:(CGImageRef)image orientation:(UIImageOrientation)orientation sizeToFit:(BOOL)sizeToFit pixelFormat:(Texture2DPixelFormat)pixelFormat filter:(GLenum)filter
In 3D apps, it's quite common to use this kind of representation for height maps: a texture with colors ranging from black to white, where white represents the maximum altitude.
(The example images here showed a grayscale height-map texture being turned into rendered 3D terrain.)
That was just to tell you that your representation is not that crazy :).
About reading the bitmap, I would also recommend reading this (just in case you want to go deeper).
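As a rough sketch of the idea (not taken from Texture2D, just a common Core Graphics pattern): draw the image into a bitmap context you own, then walk the pixel buffer to build an obstacle grid. The asset name level1.png is made up, and "obstacle" here simply means a dark pixel in an opaque RGBA image:

CGImageRef image = [UIImage imageNamed:@"level1.png"].CGImage; // hypothetical asset
size_t w = CGImageGetWidth(image), h = CGImageGetHeight(image);
uint8_t *pixels = calloc(w * h * 4, 1);
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, w, h, 8, w * 4, space,
                                         kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), image);

// Matrix-like structure: one BOOL per pixel, YES where there's an obstacle.
BOOL *obstacles = malloc(w * h * sizeof(BOOL));
for (size_t y = 0; y < h; y++) {
    for (size_t x = 0; x < w; x++) {
        uint8_t red = pixels[(y * w + x) * 4]; // R of an RGBA pixel
        obstacles[y * w + x] = (red < 128);    // black-ish pixel = obstacle
    }
}

CGContextRelease(ctx);
CGColorSpaceRelease(space);
free(pixels);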
Hope I helped a bit!
I'm new to game programming, and I have a question. I want a dotted circle to be drawn on the screen. I can use one big sprite (for example, 256x256 pixels) which contains the whole circle, or I can use many small sprites representing the dots.
I use the cocos2d libs and I'm able to render using batching. So what is the best way to perform such a task?
In my opinion your best bet (if all the dots are the same) is to have one sprite of the dot, and repeat it in the shape you are looking for.
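Something along these lines, as a sketch (assuming a cocos2d 1.x-style API, a dot.png asset, and a center point defined elsewhere); because every dot shares one texture, they all batch into a single draw call:

CCSpriteBatchNode *batch = [CCSpriteBatchNode batchNodeWithFile:@"dot.png"];
int dotCount = 32;
float radius = 100.0f;
for (int i = 0; i < dotCount; i++) {
    // Space the dots evenly around the circle.
    float angle = 2.0f * M_PI * i / dotCount;
    CCSprite *dot = [CCSprite spriteWithTexture:batch.texture];
    dot.position = ccp(center.x + radius * cosf(angle),
                       center.y + radius * sinf(angle));
    [batch addChild:dot];
}
[self addChild:batch];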
Generally you'll want a single asset for each unique graphic. You can combine those assets into a single sprite sheet and reuse them. This allows for more flexibility as well as speed.
Most of today's graphics hardware is optimized for texture dimensions that are a power of two, and your sprites are likely to have other dimensions. By packing small sprites together, you can minimize the padding needed to fill out that space (and thus the CPU/GPU cycles spent correcting for it internally). Besides that, the file size will be smaller, since you need less overhead and compression is likely to be more effective.
Go with one large sprite. It's fewer calls into the rendering engine, and adds flexibility to change the look (for example, if you decide to have the circle made of dashed lines rather than dots).
In my spare time I like to play around with game development on the iPhone with OpenGL ES. I'm throwing together a small 2D side-scroller demo for fun, and I'm relatively new to OpenGL, and I wanted to get some more experienced developers' input on this.
So here is my question: does it make sense to specify the vertices of each 2D element in model space, then translate each element to its final place in view space each time a frame is drawn?
For example, say I have a set of blocks (squares) that make up the ground in my side-scroller. Each square is defined as:
const GLfloat squareVertices[] = {
    -1.0,  1.0, -6.0, // Top left
    -1.0, -1.0, -6.0, // Bottom left
     1.0, -1.0, -6.0, // Bottom right
     1.0,  1.0, -6.0  // Top right
};
Say I have 10 of these squares that I need to draw together as the ground for the next frame. Should I do something like this, for each square visible in the current scene?
glPushMatrix();
{
    // Move the shared model-space square to this square's position.
    glTranslatef(currentSquareX, currentSquareY, 0.0);
    glVertexPointer(3, GL_FLOAT, 0, squareVertices);
    glEnableClientState(GL_VERTEX_ARRAY);
    // Do the drawing
}
glPopMatrix();
It seems to me that doing this for every 2D element in the scene, for every frame, gets a bit intense and I would imagine the smarter people who use OpenGL much more than I do may have a better way of doing this.
That all being said, I'm expecting to hear that I should profile the code and see where any bottlenecks are. To those people I say: I haven't written any of this code yet; I'm simply wrapping my mind around it so that when I do write it, things go more smoothly.
On the subject of profiling and optimization, I'm really not trying to prematurely optimize here; I'm just trying to wrap my mind around how one would set up a 2D scene and render it. Like I said, I'm relatively new to OpenGL and just trying to get a feel for how things are done. If anyone has suggestions on a better way to do this, I'd love to hear your thoughts.
Please keep in mind that I'm not interested in 3D, just 2D for now. Thanks!
You are concerned with the overhead it takes to transform a model (in this case a square) from model coordinates to world coordinates when you have a lot of models. This seems like an obvious optimization for static models.
If you build your squares' vertices in world coordinates, then of course it is going to be faster, since each square avoids the cost of three calls (glPushMatrix, glPopMatrix, and glTranslatef) at render time. I have no idea how much faster this will be; I suspect it won't be a humongous optimization. And you lose the modularity of keeping the squares in model coordinates: what if in the future you decide you want these squares to be movable? That will be a lot harder if you're keeping their vertices in world coordinates.
In short, it's a tradeoff:
World Coordinates
- More memory: each square needs its own set of vertices.
- Less computation: no glPushMatrix, glPopMatrix, or glTranslatef per square at render time.
- Less flexible: dynamically moving these squares becomes harder (or at least more complicated).
Model Coordinates
- Less memory: the squares can share the same vertex data.
- More computation: each square costs three extra calls at render time.
- More flexible: squares can easily be moved by changing the glTranslatef call.
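To make the world-coordinate column concrete, here is a sketch of baking every square's position into one vertex array up front and drawing the lot with a single call (GL ES 1.1 fixed-function; squareOffsets and squareIndices are made-up names for per-square positions and a 6-indices-per-square triangle list filled in elsewhere):

#define NUM_SQUARES 10
GLfloat worldVerts[NUM_SQUARES * 4 * 3]; // 4 corners per square, xyz each

// Done once (or whenever the level changes): give each square its own
// copy of the shared vertices, offset to its world position.
for (int i = 0; i < NUM_SQUARES; i++) {
    for (int v = 0; v < 4; v++) {
        worldVerts[(i * 4 + v) * 3 + 0] = squareVertices[v * 3 + 0] + squareOffsets[i].x;
        worldVerts[(i * 4 + v) * 3 + 1] = squareVertices[v * 3 + 1] + squareOffsets[i].y;
        worldVerts[(i * 4 + v) * 3 + 2] = squareVertices[v * 3 + 2];
    }
}

// Each frame: no per-square matrix work, one draw call.
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, worldVerts);
glDrawElements(GL_TRIANGLES, NUM_SQUARES * 6, GL_UNSIGNED_SHORT, squareIndices);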
I guess the only way to know what the right decision is, is by doing it and profiling. I know you said you haven't written this yet, but I suspect that whether your squares are in model or world coordinates won't make much of a difference; and if it does, I can't imagine an architecture you could create where it would be hard to switch your squares from model to world coordinates or vice versa.
Good luck to you and your adventures in iPhone game development!
If you are only drawing screen-aligned quads, it might be easier to use the OES draw texture extension. Then you can use a single texture to hold all your game "sprites". First specify the crop rectangle by setting the GL_TEXTURE_CROP_RECT_OES texture parameter; this is the boundary of the sprite within the larger texture. To render, call glDrawTexiOES, passing in the desired position and size in viewport coordinates.
// Crop rect: the sprite's origin and size within the larger texture.
int rect[4] = {0, 0, 16, 16};
glBindTexture(GL_TEXTURE_2D, sprites);
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, rect);
// x, y, z and width, height are in viewport coordinates.
glDrawTexiOES(x, y, z, width, height);
This extension isn't available on all devices, but it works great on the iPhone.
You might also consider using a static image and just scrolling that instead of drawing each individual block of the floor, and translating its position, etc.