SpriteKit: reducing SKLabelNode draw calls

So I have a scene in my game which displays the levels. Like any other game with levels, I subclass SKSpriteNode to make a custom level button, and within this subclass I add an SKLabelNode to display the level title (Level 1, Level 2, ...). The problem now is that I have a lot of draw calls, because each SKLabelNode renders as its own texture instead of being combined into an atlas. I would like to know if someone can help me reduce these draw calls. I don't want to use Glyph Designer, because this game is going to be localized into many languages, such as Japanese, Chinese and more.
Any advice?
- (void)setText:(NSString *)text {
    _label = [SKLabelNode labelNodeWithFontNamed:@"CooperBlack"];
    _label.text = text;
    _label.fontColor = [UIColor blackColor];
    _label.fontSize = 11;
    _label.zPosition = 2;
    _label.verticalAlignmentMode = SKLabelVerticalAlignmentModeCenter;
    _label.position = CGPointMake(0, 0);
    [self addChild:_label];
}

Depending on what you're doing and when, you could render out the contents of the labels into textures at runtime (pre-loading / caching), and then manipulate them in any way you'd like.
SKLabelNode *theThingToBecomeATexture;
// OR
SKSpriteNode *theThingToBecomeATexture;

// theView is the SKView presenting your scene
SKTexture *theTexture = [theView textureFromNode:theThingToBecomeATexture];
But my follow-up question or comment would be: I have a hard time believing that you are running into performance problems by showing a few dozen label nodes on the screen. I can understand you hitting a load spike if you are trying to alloc and init a number of them all at the same time, in which case I would preload them, or alloc them off the main thread.
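For example, here is a minimal sketch of the caching idea, assuming it lives in an SKScene subclass so that self.view is the presenting SKView; the cache dictionary and the helper name are placeholders, not part of the original answer:

// Renders each distinct title once into a texture and reuses it afterwards.
- (SKSpriteNode *)cachedLabelSpriteWithText:(NSString *)text
{
    static NSMutableDictionary *textureCache = nil;
    if (!textureCache) {
        textureCache = [NSMutableDictionary dictionary];
    }

    SKTexture *texture = textureCache[text];
    if (!texture) {
        SKLabelNode *label = [SKLabelNode labelNodeWithFontNamed:@"CooperBlack"];
        label.text = text;
        label.fontSize = 11;
        label.fontColor = [SKColor blackColor];
        texture = [self.view textureFromNode:label];   // render the label once
        textureCache[text] = texture;
    }
    return [SKSpriteNode spriteNodeWithTexture:texture];
}

Note this mainly avoids re-rendering the same title over and over; each distinct texture can still be its own draw, so keep an eye on the number reported by the SKView's showsDrawCount.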

Related

How to avoid FPS drop when drawing lines in SpriteKit?

My current project contains a gravity simulator where sprites move in accordance with the forces they experience in the game scene.
One of my features involves allowing moving sprites to draw a line behind them so you can see what paths they take.
Shown here:
However, as the sprite continues its movement around the screen, the FPS begins to dive. This can be seen in the second image, taken some time after the sprite first started moving.
When researching, I found other people had posted with similar problems:
Multiple skshapenode in one draw?
However, the poster of that answer noted that it was meant for a static image, which isn't what I want, because this line changes in real time depending on what influences the sprite's path. This was confirmed when I tried implementing a function to add a new line to the old one, which didn't work. That code is here.
I'm asking if anyone can assist me in finding a way to properly stop this constant FPS drop that comes from all the draw operations. My current draw code consists of two functions.
- (void)updateDrawPath:(CGPoint)a B:(CGPoint)b
{
    CGPathAddLineToPoint(_lineToDraw, NULL, b.x, b.y);
    _lineNode.path = _lineToDraw;
}

- (void)traceObject:(SKPlanetNode *)p
{
    _lineToDraw = CGPathCreateMutable();
    CGPathMoveToPoint(_lineToDraw, NULL, p.position.x, p.position.y);
    _lineNode = [SKShapeNode node];
    _lineNode.path = _lineToDraw;
    _lineNode.strokeColor = [SKColor whiteColor];
    _lineNode.antialiased = YES;
    _lineNode.lineWidth = 3;
    [self addChild:_lineNode];
}
updateDrawPath: draws a line to the latest position of the sprite.
traceObject: takes an SKPlanetNode (a subclass of SKSpriteNode) and sets it up to have a line drawn after it.
If anyone can suggest a way to do this, and also reduce the terrible overhead I keep accumulating, it would be fantastic!
A couple suggestions:
Consider that SKShapeNode is more or less a tool for debug drawing. Because it doesn't draw in batches, it's really not suitable to build a game around or to use extensively (whether many shapes or a few complex ones).
You could draw lines using a custom shader, which would likely be a faster and more elegant solution, though of course you may have to learn how to write shader programs first.
Be sure to measure performance only on a device, never in the simulator.
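If writing a shader is more than you want to take on, one common mitigation (not from the answer above, just a sketch) is to cap how many points the trail keeps, so the SKShapeNode path never grows without bound. This assumes the points are kept in an NSMutableArray ivar, here called _trailPoints:

// Keeps only the most recent kMaxTrailPoints positions and rebuilds the path.
static const NSUInteger kMaxTrailPoints = 200;   // tune for your scene

- (void)appendTrailPoint:(CGPoint)b
{
    [_trailPoints addObject:[NSValue valueWithCGPoint:b]];
    if (_trailPoints.count > kMaxTrailPoints) {
        [_trailPoints removeObjectAtIndex:0];
    }

    CGMutablePathRef path = CGPathCreateMutable();
    CGPoint first = [_trailPoints[0] CGPointValue];
    CGPathMoveToPoint(path, NULL, first.x, first.y);
    for (NSUInteger i = 1; i < _trailPoints.count; i++) {
        CGPoint pt = [_trailPoints[i] CGPointValue];
        CGPathAddLineToPoint(path, NULL, pt.x, pt.y);
    }

    _lineNode.path = path;   // SKShapeNode retains the assigned path
    CGPathRelease(path);     // release our local reference
}

This doesn't remove SKShapeNode's per-shape draw cost, but it does bound how much geometry is rebuilt and re-tessellated each frame.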

Most efficient way to create CCSprites and particles in Cocos2D

Right now, in my game, I am spawning a sprite every second or so at the top of the screen (using a scheduler) with this code:
The init method:
[self schedule:@selector(addMeteor:) interval:1];
The scheduler method:
- (void)addMeteor:(ccTime)dt
{
    CCTexture2D *meteor = [[CCTextureCache sharedTextureCache] addImage:@"Frame3.png"];
    target = [CCSprite spriteWithTexture:meteor rect:CGRectMake(0, 0, 53, 56)];
    // Rest of positioning code was here
}
Doing it this way causes a stutter in the frame rate every second or so (Whenever another sprite is spawned). Is there a way to eliminate that?
Thanks in advance!
Tate
I'm guessing the stutter is more likely coming from other parts of the code. Do you explicitly call removeChild on meteors? That might cause a hiccup, especially with many meteors.
My advice: create N meteor sprites up front. When you need one, make it visible and change its position. When you're done with it, set it to visible = NO to make it disappear.
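A minimal sketch of that pooling idea; the pool size, the _meteorPool ivar, and the positioning values are placeholders:

// Build the pool once, e.g. in init.
- (void)buildMeteorPool
{
    CCTexture2D *meteorTexture = [[CCTextureCache sharedTextureCache] addImage:@"Frame3.png"];
    _meteorPool = [[NSMutableArray alloc] init];
    for (int i = 0; i < 20; i++) {   // 20 is an arbitrary pool size
        CCSprite *meteor = [CCSprite spriteWithTexture:meteorTexture rect:CGRectMake(0, 0, 53, 56)];
        meteor.visible = NO;
        [self addChild:meteor];
        [_meteorPool addObject:meteor];
    }
}

// The scheduler method now reuses an idle sprite instead of allocating a new one.
- (void)addMeteor:(ccTime)dt
{
    for (CCSprite *meteor in _meteorPool) {
        if (!meteor.visible) {
            meteor.visible = YES;
            meteor.position = ccp(arc4random_uniform(320), 480);   // reposition at the top
            break;
        }
    }
}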

Combining two textures in one

I'm developing a game for iOS using the cocos2d libs. I want to have an object that has 3 parts: a beginning, an ending and a middle. I've got the images for these components. The object can be stretched when created, but only the middle part should be stretched; the beginning and ending should not be scaled. Because this operation is done only once, I decided it would be a good idea to create a single CCSprite for this object rather than keep three (to improve performance).
I'm using CCSpriteBatchNode for rendering, and I don't know if I really need to combine the object's parts (maybe rendering the 3 parts with the batch will be as fast as rendering one pre-combined object).
So there are two questions:
Do I need to combine the parts into one object?
If yes, how can I do that?
Instead of combining the textures you could create a node and add the three sprites as children to it. You can then work with the parent node as a single entity.
Something along the lines of:
CCNode *sprites = [CCNode node];

CCSprite *spriteA = [CCSprite spriteWithSpriteFrameName:@"spriteA.png"];
spriteA.position = ccp(-10, 0);
[sprites addChild:spriteA];

CCSprite *spriteB = [CCSprite spriteWithSpriteFrameName:@"spriteB.png"];
spriteB.position = ccp(0, 0);
[sprites addChild:spriteB];

CCSprite *spriteC = [CCSprite spriteWithSpriteFrameName:@"spriteC.png"];
spriteC.position = ccp(10, 0);
[sprites addChild:spriteC];
You can scale and position each individual sprite depending on your parameters then work with the sprites object to position/scale them as a whole.
There might be a small performance hit, so I would think twice before using this for a large number of sprites, but I've been using this method in a few situations and in my case I didn't notice any performance issues.
Look at the RenderTexture demo.
Instead of using the brush, you can draw your 3 parts onto the render texture using those images.
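A rough sketch of that render-texture approach, with placeholder sizes, positions and frame names:

// Draw the three parts into a single texture, then treat the result as one node.
CCRenderTexture *rt = [CCRenderTexture renderTextureWithWidth:200 height:50];
[rt begin];

CCSprite *left   = [CCSprite spriteWithSpriteFrameName:@"spriteA.png"];
CCSprite *middle = [CCSprite spriteWithSpriteFrameName:@"spriteB.png"];
CCSprite *right  = [CCSprite spriteWithSpriteFrameName:@"spriteC.png"];

// Position (and stretch only the middle) in the render texture's local coordinates.
left.position   = ccp(25, 25);
middle.position = ccp(100, 25);
middle.scaleX   = 2.0f;
right.position  = ccp(175, 25);

[left visit];
[middle visit];
[right visit];

[rt end];

// Add the render texture node itself; it draws the combined result as a single quad.
rt.position = ccp(240, 160);
[self addChild:rt];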

App with expanding animated line in iOS ... howto

The basic idea
is very simple. Simplified, you could say: a snake-like line, say 3 px thick, expands across the screen, collecting and interacting with different stuff, and can be steered through user input. Like a continuous line you would draw with a pen.
I've already started reading apple's documentation on Quartz/CG.
As I understand it now, I need to put my rendering code into a UIView's drawRect:.
Then I would need to set up a timer (I found some answers/posts here and there, nothing concrete though) that fires x times per second and calls setNeedsDisplay on the UIView.
Question:
How to realize the following:
Have the whole snake on UIView 1 (or a layer?), draw the new part on UIView 2, then merge them so that the new part gets appended to UIView 1 (or use CALayers instead of views?). I ask about this explicitly because I've read that one shouldn't redraw the same content over and over again, but only the new/moving part.
I hope you can provide me some sample code or some details which classes I should use and the strategy on how to use them / which calls to make.
Edit
OK, I see that my idea, or what I read before about realizing this with Quartz and different views, is not so wise... (Daniel Bleisteiner)
So I switched to OpenGL now. Well, I'm looking into it, reading examples, Jeff LaMarche's OpenGL blog entries, etc.
I guess I would be able to draw my line. I think I would create classes for curves, straight lines / direction changes, etc., and then on user input I would create the related objects (depending on the steering input), store them in an array, and then recreate and redraw the whole line each frame by reading the properties of the objects stored in the array. A simple line I would draw like this (code from Beginning iPhone Development):
glDisable(GL_TEXTURE_2D);

GLfloat vertices[4];
// Convert coordinates
vertices[0] = start.x;
vertices[1] = start.y;
vertices[2] = end.x;
vertices[3] = end.y;

glEnableClientState(GL_VERTEX_ARRAY);   // make sure the vertex array is enabled
glLineWidth(3.0);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glDrawArrays(GL_LINES, 0, 2);
Maybe I will even find a way to antialias it, but
for now I'm more curious whether my idea is good enough or if there are better established strategies for this,
and maybe someone could tell me how to separate the code for HUDs, the line drawing itself, and some menus I will have to display, e.g. at the beginning, so that it's easy to transition from one "view" to another (like with a fade)? Any hints?
Maybe you have also read a book that explains how to solve this kind of problem?
Or you can point me to another good answer / example code, as I have huge problems finding "animated drawing" examples.
This got me a little further, but it is still a little vague for me.
I don't know how to realize "update the path that you draw (only with the needed points + one last point that moves)".
Don't try to merge different views. setNeedsDisplayInRect: takes a rect parameter that tells the Core Graphics machinery that only a certain part of the screen needs to be rendered again. Respect this rect in your drawRect: method and it should be enough for standard 2D games and tools.
If you intend to use intense graphics, there is no other option than to use OpenGL. The performance is many times better, and you won't have to care as much about optimizations; OpenGL does a lot in the background.
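To make the setNeedsDisplayInRect: suggestion above concrete, here is a rough Quartz sketch; the _points array and the 3 px line width are placeholders for however you store and style the snake:

// Append the newest point and invalidate only the small rect that changed.
- (void)addPoint:(CGPoint)newPoint
{
    CGPoint lastPoint = _points.count > 0 ? [[_points lastObject] CGPointValue] : newPoint;
    [_points addObject:[NSValue valueWithCGPoint:newPoint]];

    // Union of the previous and new point, padded a little beyond the line width.
    CGRect dirty = CGRectUnion(CGRectMake(lastPoint.x, lastPoint.y, 1, 1),
                               CGRectMake(newPoint.x, newPoint.y, 1, 1));
    [self setNeedsDisplayInRect:CGRectInset(dirty, -4, -4)];
}

- (void)drawRect:(CGRect)rect
{
    if (_points.count < 2) return;

    // For simplicity this strokes the whole path; a further optimization
    // would skip segments that don't intersect rect.
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(ctx, 3.0);
    CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);

    CGPoint first = [[_points objectAtIndex:0] CGPointValue];
    CGContextMoveToPoint(ctx, first.x, first.y);
    for (NSUInteger i = 1; i < _points.count; i++) {
        CGPoint p = [[_points objectAtIndex:i] CGPointValue];
        CGContextAddLineToPoint(ctx, p.x, p.y);
    }
    CGContextStrokePath(ctx);
}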
Use OpenGL ES. Then what you will want to do is create a run-loop function or method that gets called by a CADisplayLink 30 to 60 times per second (60 is the maximum).
So how does this work?
1) Create an assign property for the CADisplayLink. Do NOT retain your CADisplayLink, because display links and timers retain their target. Otherwise you would create a retain cycle, which may cause abandoned-memory issues (which are even worse than a leak and much harder to discover).
@property (nonatomic, assign) CADisplayLink *displayLink;
2) Create and schedule the CADisplayLink:
- (void)startRunloop {
    if (!animating) {
        CADisplayLink *dl = [[UIScreen mainScreen] displayLinkWithTarget:self
                                                                selector:@selector(drawFrame)];
        [dl setFrameInterval:1];
        [dl addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
        self.displayLink = dl;
        animating = YES;
    }
}
Some things to note here: setFrameInterval:1 tells the CADisplayLink not to skip any frames. This means you get the maximum fps. BUT: this can be bad if your code needs longer than 1/60 of a second, in which case it's better to set this to 2, for example. That makes your animation more fluid.
3) In your -drawFrame method, do your OpenGL ES drawing as usual. The only difference is that this code gets called multiple times per second. Just keep track of time and determine in your code what you want to draw and how you want it to be drawn. If you were to animate a rectangle moving from the bottom left to the top right with an animation duration of 1 second, you would simply interpolate the animation frames between start and end by applying a function that takes the time t as an argument. There are thousands of ways to do it; this is one of them.
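For instance, a rough sketch of that time-based interpolation; the start/end points, the 1-second duration, the _animationStart ivar and the drawRectangleAtPoint: call are hypothetical placeholders:

// Called by the CADisplayLink; interpolates a position over a 1-second animation.
- (void)drawFrame
{
    CFTimeInterval now = CACurrentMediaTime();
    if (_animationStart == 0) {
        _animationStart = now;   // remember when the animation began
    }

    CGFloat t = (CGFloat)((now - _animationStart) / 1.0);   // 1.0 s duration
    t = MIN(MAX(t, 0.0f), 1.0f);                            // clamp to [0, 1]

    // Linear interpolation between a start and an end point.
    CGPoint from = CGPointMake(0, 0);
    CGPoint to   = CGPointMake(320, 480);
    CGPoint current = CGPointMake(from.x + (to.x - from.x) * t,
                                  from.y + (to.y - from.y) * t);

    [self drawRectangleAtPoint:current];   // hypothetical OpenGL ES drawing routine
}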
4) When you're done or want to halt OpenGL ES drawing, just invalidate or pause your CADisplayLink. Something like this:
- (void)stopRunloop {
    if (animating) {
        [self.displayLink invalidate];
        self.displayLink = nil;
        animating = NO;
    }
}

UIView transparency shows how the sausages are made!

I have a UIView container that has two UIImageViews inside it, one partially obscuring the other (they're being composed like this to allow for occasional animation of one "layer" or another).
Sometimes I want to make this container 50% alpha, so that what the user sees fades. Here's the problem: setting my container view to 50% alpha makes all my subviews inherit this as well, and now you can see through the first subview into the second, which in my application creates a weird X-ray effect that I'm not looking for.
What I'm after, of course, is for what the user currently sees to become 50% transparent: the equivalent of flattening the visible view into one bitmap and then making that 50% alpha.
What are my best bets for accomplishing this? Ideally I would like to avoid actually, dynamically flattening the views if I can help it, but best practices on that are welcome as well. Am I missing something obvious? Since most views have subviews and would run into this issue, I feel like there's some obvious solution here.
Thanks!
EDIT: Thanks for the thoughts, folks. I'm just moving one image around on top of another image, which it only partially obscures. This pair of images has to move together sometimes as well. And sometimes I want to fade the whole thing out, wherever it is and whatever the state of the image pair is at that moment. Later, I want to bring it back and continue animating it.
Taking a snapshot of the container, either by rendering its layer (?) or by doing some other offscreen compositing on the fly before alpha'ing out the whole thing, is definitely possible, and I know there are a couple of ways to do it. But what if the animation should continue to happen while the whole thing is at 50% alpha, for example?
It sounds like there's no obvious solution to what I'm trying to do, which seems odd to me, but thank you all for the input.
Recently I had this same problem, where I needed to animate layers of content with a global transparency. Since my animation was quite complex, I discovered that flattening the UIView hierarchy made for a choppy animation.
The solution I found was using CALayers instead of UIViews, and setting the .shouldRasterize property to YES in the container layer, so that any sublayers would be flattened automatically prior to applying the opacity.
Here's what a UIView could look like:
#import <QuartzCore/QuartzCore.h> //< Needed to use CALayers
...
@interface MyView : UIView {
    CALayer *layer1;
    CALayer *layer2;
    CALayer *compositingLayer; //< Layer where compositing happens.
}
...
- (void)initialization
{
    UIImage *im1 = [UIImage imageNamed:@"image1.png"];
    UIImage *im2 = [UIImage imageNamed:@"image2.png"];

    /***** Setup the layers *****/
    layer1 = [CALayer layer];
    layer1.contents = im1.CGImage;
    layer1.bounds = CGRectMake(0, 0, im1.size.width, im1.size.height);
    layer1.position = CGPointMake(100, 100);

    layer2 = [CALayer layer];
    layer2.contents = im2.CGImage;
    layer2.bounds = CGRectMake(0, 0, im2.size.width, im2.size.height);
    layer2.position = CGPointMake(300, 300);

    compositingLayer = [CALayer layer];
    compositingLayer.shouldRasterize = YES; //< Here we turn this into a compositing layer.
    compositingLayer.frame = self.bounds;

    /***** Create the layer tree *****/
    [compositingLayer addSublayer:layer1]; //< Add first, so it's in back.
    [compositingLayer addSublayer:layer2]; //< Add second, so it's in front.

    // Don't mess with the UIView's own layer, it's picky; just add sublayers to it.
    [self.layer addSublayer:compositingLayer];
}
- (IBAction)animate:(id)sender
{
    /* Since we're using CALayers, we can use implicit animation
     * to move and change the opacity.
     * Layer2 is over Layer1, and the composite is partially transparent.
     */
    layer1.position = CGPointMake(200, 200);
    layer2.position = CGPointMake(200, 200);
    compositingLayer.opacity = 0.5;
}
I think that flattening the UIView into a UIImageView is your best bet if you have your heart set on providing this feature. Also, I don't think that flattening the image is going to be as complicated as you might think. Take a look at the answer provided in this question.
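A minimal sketch of that flattening step, assuming the container view is what you want to capture (the method name is just illustrative):

#import <QuartzCore/QuartzCore.h>

// Renders the container view's current contents into a single UIImage.
- (UIImage *)flattenedImageOfView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return flattened;
}

You could then show that image in a UIImageView at 50% alpha while hiding the live container, and swap back when the fade is over.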
Set the bottom UIImageView to .hidden = YES, then set .hidden = NO when you set up a cross-fade animation between the top and bottom UIImageViews.
When you need to fade the whole thing, you can either set .alpha = 0.5 on the container view or the top image view - it shouldn't matter. It may be computationally more efficient to set .alpha = 0.5 on the image view itself, but I don't know enough about the graphics pipeline on the iPhone to be sure about that.
The only downside to this approach is that you can't do a cross-fade when your top image is set to 50% opacity.
A way to do this would be to add the ImageViews to the UIWindow (the container would be a fake one)