How to avoid FPS drop when drawing lines in SpriteKit?

My current project contains a gravity simulator where sprites move in accordance with the forces they experience in the game scene.
One of my features involves allowing moving sprites to draw a line behind them so you can see the paths they take.
Shown here:
However, as the sprite continues its movement around the screen, the FPS begins to dive. This can be seen in this second image, where some time has passed since the sprite first started its movement.
While researching, I found that other people have posted about similar problems:
Multiple skshapenode in one draw?
However, the poster of the answer in the question above explained that it was meant for a static image, which isn't something I want, because this line will change in real time depending on what influences the sprite's path. That was reflected when I tried implementing a function to append a new line to the old one, which didn't work. That code here
I'm asking if anyone can assist me in finding a way to properly stop this constant FPS drop that comes from all the draw operations. My current draw code consists of two functions.
// Appends the sprite's latest position to the existing path and
// hands the updated path back to the shape node.
- (void)updateDrawPath:(CGPoint)a B:(CGPoint)b
{
    CGPathAddLineToPoint(_lineToDraw, NULL, b.x, b.y);
    _lineNode.path = _lineToDraw;
}

// Takes an SKPlanetNode and creates the path and shape node that
// will trace the line behind it.
- (void)traceObject:(SKPlanetNode *)p
{
    _lineToDraw = CGPathCreateMutable();
    CGPathMoveToPoint(_lineToDraw, NULL, p.position.x, p.position.y);

    _lineNode = [SKShapeNode node];
    _lineNode.path = _lineToDraw;
    _lineNode.strokeColor = [SKColor whiteColor];
    _lineNode.antialiased = YES;
    _lineNode.lineWidth = 3;
    [self addChild:_lineNode];
}
updateDrawPath: draws a line to the latest position of the sprite.
traceObject: takes an SKPlanetNode (a subclass of SKSpriteNode) and sets it up to have a line drawn after it.
If anyone can suggest a way to do this and also reduce the terrible overhead I keep accumulating, it would be fantastic!

A couple suggestions:
Consider that SKShapeNode is more or less just a tool for debug drawing. Because it doesn't draw in batches, it really isn't suitable to build a game around or to use extensively (whether that means many shapes or a few complex ones).
You could draw the lines using a custom shader, which will likely be a faster and more elegant solution, though of course you may have to learn how to write shader programs first.
Be sure to measure performance only on a device, never the simulator.
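Another common workaround, not mentioned in the answer above and sketched here with illustrative names, is to cap how many points the trail keeps, so the path the shape node strokes never grows without bound:

// A minimal sketch, assuming an NSMutableArray *_trailPoints ivar alongside
// the existing _lineNode. kMaxTrailPoints is an assumption; tune it per scene.
static const NSUInteger kMaxTrailPoints = 200;

- (void)appendTrailPoint:(CGPoint)p
{
    [_trailPoints addObject:[NSValue valueWithCGPoint:p]];
    if (_trailPoints.count > kMaxTrailPoints) {
        [_trailPoints removeObjectAtIndex:0];   // drop the oldest point
    }

    // Rebuild a bounded path instead of letting one path grow forever.
    CGMutablePathRef path = CGPathCreateMutable();
    CGPoint first = [_trailPoints.firstObject CGPointValue];
    CGPathMoveToPoint(path, NULL, first.x, first.y);
    for (NSUInteger i = 1; i < _trailPoints.count; i++) {
        CGPoint pt = [_trailPoints[i] CGPointValue];
        CGPathAddLineToPoint(path, NULL, pt.x, pt.y);
    }
    _lineNode.path = path;
    CGPathRelease(path);
}

If the older part of the trail needs to persist, one option is to flatten it into a texture occasionally with -[SKView textureFromNode:] and display that as a plain SKSpriteNode, so the live SKShapeNode only ever strokes a short path.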

Related

Cocos2d. Diffuse image (60 fps)

The game was built with cocos2d 0.99.5 and Box2d, on iPhone SDK 4.3.
We have a character. When the character moves quickly, it looks blurred (fuzzy/unfocused), both in the simulator and on a device (iPhone 3G).
The character is moved with a mouseJoint (dampingRatio = 0, frequencyHz = -1).
In a screenshot the image is clear (link); the character is in focus, so a screenshot does not show the problem.
The frame rate is 60 fps the whole time.
Parameters I have tried:
use kCCDirectorProjection2D // 3D
aliased // antialiased texture params
CC_COCOSNODE_RENDER_SUBPIXEL 1 and 0
Video sample: link
How can I get a clear image of the character while it moves?
I also had a problem like this and fixed it by changing this line in ccConfig.h:
#define CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL 0
to
#define CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL 1
This is the comment for this define, maybe it helps someone.
/** @def CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL
If enabled, the texture coordinates will be calculated by using this formula:
- texCoord.left = (rect.origin.x*2+1) / (texture.wide*2);
- texCoord.right = texCoord.left + (rect.size.width*2-2)/(texture.wide*2);
The same for bottom and top.
This formula prevents artifacts by using 99% of the texture.
The "correct" way to prevent artifacts is by using the spritesheet-artifact-fixer.py or a similar tool.
Affected nodes:
- CCSprite / CCSpriteBatchNode and subclasses: CCLabelBMFont, CCTMXTiledMap
- CCLabelAtlas
- CCQuadParticleSystem
- CCTileMap
To enabled set it to 1. Disabled by default.
@since v0.99.5
*/
I am pretty sure that what you are describing is an optical illusion. LCDs, especially lower-quality LCDs, have a finite response time. If this response time is too slow, it can cause ghosting, i.e. the moving object looks smeared. Basically what's happening is the previous frame's (or several frames') pixels take a long time to actually "turn off" and you see fainter versions of your sprite left behind as it moves.
With regard to your comment:
For the experiment, I took a pencil, put it to a sheet of paper, and began to move it quickly. My eyes see the pencil in focus, so the problem is not an optical effect but a problem in the code.
Looking at a moving object in the real world is not the same as looking at a moving object on the screen, with or without a poor display response time. The real-world object moves continuously, but the screen object moves in discrete steps. Your eye can follow the pencil exactly and keep the image sharp on your retina. If you follow a screen image, however, your eye moves smoothly, while the screen image "jumps" from place to place. This can cause a "juddering" effect for sufficiently fast-moving objects, even at high framerates. If 60fps is still juddery, there is basically no way around this; it is a limitation of current technology.

How to determine intersection of CGPaths

My question is similar to this one.
I have two CGPathRefs, and one of them will be moved by finger touch. I want to find out whether the two CGPathRefs intersect. That question was asked almost two years ago, and I want to know whether anything has been found in the meantime.
This is fairly old, but I found it while looking for a similar solution; in my problem I wanted to find when a circle overlapped with a path (a special case of your question).
I solved this by using CGPathCreateCopyByStrokingPath to create a stroked version of the original path using the radius of the circle as the stroke width. If the center point of the circle overlaps the stroked path then the original path overlaps the circle.
// Returns YES if the circle (center, radius) overlaps the path: stroke the
// path with the circle's radius as the line width, then test whether the
// circle's center falls inside that stroked region.
BOOL CGPathIntersectsCircle(CGPathRef path, CGPoint center, CGFloat radius)
{
    CGPathRef fuzzyPath;
    fuzzyPath = CGPathCreateCopyByStrokingPath(path, NULL, radius,
                                               kCGLineCapRound,
                                               kCGLineJoinRound, 0.0);
    if (CGPathContainsPoint(fuzzyPath, NULL, center, NO))
    {
        CGPathRelease(fuzzyPath);
        return YES;
    }
    CGPathRelease(fuzzyPath);
    return NO;
}
Edit: fixed a minor bug where the fuzzyPath was not released.
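A hypothetical call site might look like this (dragPath, circleCenter, and circleRadius are illustrative names for state you track yourself):

// Illustrative only: test the dragged path against a circle you manage.
if (CGPathIntersectsCircle(dragPath, circleCenter, circleRadius)) {
    // the circle overlaps the path; react to the collision here
}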
I have written a small pixel based path collision detection API for CGPathRefs. It requires that you add a few source directories to your project, and it only works with ARC, but it should at least show you how one might do something like this. It basically draws the two paths on two separate contexts, and then does pixel-by-pixel checks to see if any pixels are on both paths. Obviously this would be slow to run every time the user drags their finger, but it certainly could be done once every half second or so, maybe not even on the main thread.
This is the easiest way I've found of doing something like this, and it may easily be that there's no better way, besides using lots of math.
The source on Github
A quick Youtube demo.
Generally speaking, finding the intersection of two arbitrary CGPaths is going to be very complex.
There are ways to do approximations. Checking the intersections of the bounding boxes is a good first step. You can also subdivide the curve and repeat the process to get better approximations. Another option is to flatten the paths and see if any of the line segments of the flattened paths intersect.
For the general case, however, things get very nasty very fast. Consider, for example, the fact that two cubic bezier segments (never mind an entire path... just one segment) can intersect with another segment at up to 6 points. The more segments in your path, the more potential intersections. There is also the problem of degenerate bezier curves where a segment has a cusp that just touches one point of another segment. Does that count as an intersection? (sometimes yes, sometimes no)
It's not clear from your question, but you might also want to consider the intersections of the strokes that are applied to the curves, and correctly account for line joins and miters. That gets even harder. Macromedia FreeHand (a drawing program similar to Adobe Illustrator) had a very large, complex, intensely mathematical library for discovering arbitrary bezier curve intersections. The problem is not easily solved.
To check whether two CAShapeLayers intersect, you can use the test below. A CAShapeLayer won't give you a frame for its path, but you can get the path's bounds with CGPathGetBoundingBox. Note that this compares bounding rectangles only, so it is just an approximation:
if (CGRectIntersectsRect(CGPathGetBoundingBox(layer1.path), CGPathGetBoundingBox(layer2.path)))

iPhone OpenGL ES: Firing a bullet and detecting if it hit an object

I have worked out how to detect a touch on an object using glReadPixels, but how would I detect if an object hits another object (a bullet, for example)?
I can't do that by detecting colours.
As others have said, do this in the object model, not in the graphics.
For one simple model, give each object other than a bullet a size. Then check if a bullet's location is within that object's radius every tick. In pseudocode:
// Each tick, test every bullet against every hittable object in the world.
for (Bullet *bullet in bullets) {
    for (HittableObject *hittableObject in hittableObjectsInWorld) {
        if ([hittableObject isTouchedBy:bullet]) { /* handle collision */ }
    }
}

// On the hittable object: treat it as a circle whose radius is its 'size'.
- (BOOL)isTouchedBy:(Sprite *)otherObject
{
    CGFloat xDistance = self.x - otherObject.x;
    CGFloat yDistance = self.y - otherObject.y;
    CGFloat totalDistance = sqrt(xDistance * xDistance + yDistance * yDistance);
    return totalDistance <= self.size;
}
Now you've got a simple collision detection system. There are some abstractions here: We treat every hittable object as if it were shaped like a sphere with its 'size' as its radius. Bullets are pinpoint small, but you can correct for that by adding a bullet's radius to the radius of each of the hittable objects and it makes the math run a wee bit faster this way.
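For instance, if the bullet also exposes a size of its own, the distance check in the sketch above could become (illustrative, not part of the original answer):

return totalDistance <= self.size + otherObject.size;   // sum the two radii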
This might be the simplest possible collision detection system. There's a lot of room for improvement here. The big thing is that you're doing (number of bullets) times (number of hittable objects) checks each tick. If you have many bullets and many objects in the world, that can be a lot of processor time. There are all sorts of hacks to cut down on the number of checks you have to do. If you run into speed problems with this version, that's the next thing to start tuning.
Good luck!
You do it in your object model, not in your graphics code. OpenGL is only tangentially related to collision detection.
OpenGL only deals with the graphical display of your game's objects. Any logic about how the objects in your game behave should be done in the code that manages the state of the objects not in the OpenGL graphics code.
What you are looking for is collision detection which can be a fairly deep topic. Just to be clear, once you detect a collision (e.g. a bullet hitting an object) you will more than likely run some OpenGL code to display the reaction of the collision to the user but the actual detection of the collision should not occur within the OpenGL realm.
Lastly, if you find all of this a bit overwhelming I would recommend the use of a game engine like cocos2d or Unity.

App with expanding animated line in iOS ... howto

The basic idea is very simple. Simplified, you could say: a snake-like line (say 3px wide) expands across the screen, collecting and interacting with different stuff, and can be steered through user input. Like a continuous line you would draw with a pen.
I've already started reading apple's documentation on Quartz/CG.
As I understand now, I need to put my rendering code into a UIView's drawRect.
Then I would need to set a timer (found some answers/posts here and there, nothing concrete though) that fires x times per second and calls setNeedsDisplay on the UIView.
Question:
How to realize the following:
Have the whole snake on UIView 1 (or a layer?), draw the new part on UIView 2, and merge them so the new part gets appended to UIView 1 (or use CALayers instead of views?). I ask explicitly about this because I read that one shouldn't redraw the same content over and over again, but only the new/moving part.
I hope you can provide some sample code or some detail on which classes I should use, and the strategy for how to use them / which calls to make.
Edit
OK, I see that my idea, and what I've read before about realizing this with Quartz and different views, is not so wise... (Daniel Bleisteiner)
So I switched to OpenGL now, well I'm looking into it, reading examples, Jeff LaMarche's OpenGL blog entries, etc..
I guess I would be able to draw my line. I think I would create classes for curves, straight lines / direction changes, etc., and then on user input I would create the related objects (depending on the steering input), store them in an array, and then recreate and redraw the whole line each frame by reading the properties of the objects stored in the array. A simple line I would draw like this (code from Beginning iPhone Development):
glDisable(GL_TEXTURE_2D);                   // untextured, solid-colored line
GLfloat vertices[4];
// Convert coordinates: pack the two endpoints as x,y pairs
vertices[0] = start.x;
vertices[1] = start.y;
vertices[2] = end.x;
vertices[3] = end.y;
glLineWidth(3.0);
glVertexPointer(2, GL_FLOAT, 0, vertices);  // 2 floats per vertex
glDrawArrays(GL_LINES, 0, 2);               // one segment from two vertices
Maybe I will even find a way to antialias it, but
right now I'm more curious whether my idea is good enough, or whether there are better established strategies for this,
and maybe someone could tell me how to separate the code for HUDs, the line drawing itself, and some menus that I will have to display, e.g. at the beginning... so that it's easy to transition from one "view" to another (like with a fade)? Any hints?
Maybe you have read a book that explains how to solve this kind of problem?
Or can you point me to another good answer / example code, as I'm having a hard time finding "animated drawing" examples.
This got me a little further, but it is still a little vague for me.
I don't know how to realize "update the path that you draw (only with the needed points + one last point that moves)".
Don't try to merge different views... setNeedsDisplayInRect: takes a rect parameter that tells the Core Graphics side that only a certain part of the screen needs to be rendered again. Respect this rect in your drawRect: method and it should be enough for standard 2D games and tools.
If you intend to use intense graphics, there is no other option than OpenGL. The performance is many times better, and you won't have to care about as many optimizations; OpenGL does much in the background.
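A rough sketch of that first suggestion (the method name, previousPoint, currentPoint, and strokeWidth are assumptions, not from the answer):

// Mark only the area around the newest segment as needing redraw.
- (void)appendSegmentFrom:(CGPoint)previousPoint to:(CGPoint)currentPoint
{
    CGFloat strokeWidth = 3.0;   // assumption: whatever line width you stroke with
    CGRect dirty = CGRectUnion(CGRectMake(previousPoint.x, previousPoint.y, 1, 1),
                               CGRectMake(currentPoint.x, currentPoint.y, 1, 1));
    dirty = CGRectInset(dirty, -strokeWidth, -strokeWidth);
    [self setNeedsDisplayInRect:dirty];
}

- (void)drawRect:(CGRect)rect
{
    // Drawing is clipped to the invalidated region, so only redraw content
    // that intersects 'rect' rather than the whole snake.
}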
Use OpenGL ES. Then what you will want to do is create a run loop function or method which gets called by a CADisplayLink 30 to 60 times per second. 60 is the maximum.
So how does this work?
1) Create an assign-property for CADisplayLink. Do NOT retain your CADisplayLink, because display links and timers retain their target. Otherwise you would create a retain-cycle which may cause abandoned memory issues (which is even worse than a leak and much harder to discover).
@property (nonatomic, assign) CADisplayLink *displayLink;
2) Create and schedule the CADisplayLink:
- (void)startRunloop {
    if (!animating) {
        CADisplayLink *dl = [[UIScreen mainScreen] displayLinkWithTarget:self
                                                                 selector:@selector(drawFrame)];
        [dl setFrameInterval:1];
        [dl addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
        self.displayLink = dl;
        animating = YES;
    }
}
Some things to note here: setFrameInterval:1 tells the CADisplayLink not to skip any frames, which means you get the maximum fps. But this can be bad if your code needs longer than 1/60 of a second per frame; in that case it's better to set the interval to 2, for example, which makes your animation more fluid.
3) In your -drawFrame method do your OpenGL ES drawing as usual. The only difference is that this code gets called multiple times per second. Just keep track of time and determine in your code what you want to draw, and how you want it to be drawn. If you were to animate a rectangle moving from bottom left to top right with an animation duration of 1 second, you would simply interpolate the animation frames between start and end by applying a function which takes time t as argument. There are thousands of ways to do it. This is one of them.
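A purely illustrative example of that time-based approach (startTime, startPos, and endPos are assumed ivars, not part of the answer):

- (void)drawFrame
{
    // Interpolate a position over a 1-second animation based on elapsed time.
    CFTimeInterval elapsed = CACurrentMediaTime() - startTime;
    CGFloat t = MIN(elapsed / 1.0, 1.0);   // normalized time, clamped to the end
    CGFloat x = startPos.x + (endPos.x - startPos.x) * t;
    CGFloat y = startPos.y + (endPos.y - startPos.y) * t;

    // ... issue the usual OpenGL ES draw calls using (x, y) ...
}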
4) When you're done or want to halt OpenGL ES drawing, just invalidate or pause your CADisplayLink. Something like this:
- (void)stopRunloop {
    if (animating) {
        [self.displayLink invalidate];
        self.displayLink = nil;
        animating = NO;
    }
}

Simple iPhone drawing app with Quartz 2D

I am making a simple iPhone drawing program as a personal side-project.
I capture touch events in a subclassed UIView and render the actual stuff to a separate CGLayer. After each render, I call [self setNeedsLayout] and in the drawRect: method I draw the CGLayer into the screen context.
This all works great and performs decently for drawing rectangles. However, I just want a simple "freehand" mode like a lot of other iPhone applications have.
The way I thought to do this was to create a CGMutablePath, and simply:
CGMutablePathRef path;

- (void)touchBegan {
    path = CGPathCreateMutable();
}

- (void)touchMoved {
    CGPathMoveToPoint(path, NULL, x, y);
    CGPathAddLineToPoint(path, NULL, x, y);
}

- (void)drawRect:(CGContextRef)context {
    CGContextBeginPath(context);
    CGContextAddPath(context, path);
    CGContextStrokePath(context);
}
However, after drawing for more than 1 second, performance degrades miserably.
I would just draw each line into the off-screen CGLayer, if it were not for variable opacity! The less-than-100% opacity causes darker dots to be left on the screen where the lines connect. I have looked at CGContextSetBlendMode() but alas I cannot find an answer.
Can anyone point me in the right direction? Other iPhone apps are able to do this with very good efficiency.
The problem is that with CGContextStrokePath() the current mutable path gets closed and drawn, and a new path is created when you move your finger. So you probably end up with a lot of paths for one touch "session"; at least that's what your pseudocode seems to do.
You can try to begin a new mutable path when touches begin, use CGPathAddLineToPoint() when the touches move, and end the path when the touches end (much like your pseudocode shows). But in the draw method, draw a copy of the current mutable path; the actual mutable path keeps being elongated until the touches end, so you only get one path for the whole touch session. After the touches end you can add the path permanently; for example, you can put all finished paths into an array and iterate over them in the draw method.
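A minimal sketch of that idea, assuming ARC and illustrative ivars _finishedPaths (an NSMutableArray) and _currentPath (the stroke in progress):

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Hand the finished stroke to the array, which keeps it alive under ARC.
    [_finishedPaths addObject:CFBridgingRelease(CGPathCreateCopy(_currentPath))];
    CGPathRelease(_currentPath);
    _currentPath = NULL;
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    for (id pathObj in _finishedPaths) {
        CGContextAddPath(ctx, (__bridge CGPathRef)pathObj);   // completed strokes
    }
    if (_currentPath) {
        CGContextAddPath(ctx, _currentPath);                  // stroke in progress
    }
    CGContextStrokePath(ctx);
}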
What SanHolo said - plus you may want to throttle the adding of points, so it only adds a new point no more often than every 10ms, say (you'd need to play with the interval). You can do that with a simple timer.
Also, how are you instructing the view that it needs to redraw itself? You might want to throttle that too - and it could be on a longer interval than the point capturing (e.g. capture points no more than every 10ms, and redraw no more often than every 200ms - again you'd need to play with the numbers).
In both cases you'd need to make sure that, if nothing happens for longer than the interval, the last point still gets captured or the redraw still gets requested. That's where the timer comes in.
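One way to do that throttling, sketched with assumed ivars _lastCaptureTime and _lastRedrawTime (both CFTimeInterval) and the _currentPath from above:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CFTimeInterval now = CACurrentMediaTime();
    if (now - _lastCaptureTime < 0.010) return;   // capture at most every ~10 ms
    _lastCaptureTime = now;

    CGPoint p = [[touches anyObject] locationInView:self];
    CGPathAddLineToPoint(_currentPath, NULL, p.x, p.y);

    if (now - _lastRedrawTime >= 0.200) {          // redraw at most every ~200 ms
        _lastRedrawTime = now;
        [self setNeedsDisplay];
    }
    // A short timer (or a delayed setNeedsDisplay) would cover the case where
    // the finger stops moving before the next interval elapses.
}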