Cocos2d update method efficiency - iphone

In a fairly small game, I have everything updating (sprites, velocities, backgrounds, etc.) in one large scheduled update method. I was wondering if there is a performance difference between having one large scheduled update, or several smaller ones that each update only a couple of sprites.
I was also wondering if there is a performance difference between:
sprite.position = ccpAdd(sprite.position, ccp(delta*10, delta*5));
and
sprite.position = ccp(sprite.position.x + delta*10, sprite.position.y + delta*5);
Is there a performance difference between assigning positions via ccp vs CGPointMake?

None that matters.
If you really, really want to know, measure it.
Those are minutiae. It's like asking if your car goes faster after waxing it. It might, it might not. In 99.99999% of cases it simply doesn't matter, because the difference is negligible and other contributing factors carry far more weight (car: traffic and road conditions / game: drawing stuff on the screen).

ccpAdd resolves to ccp, which in turn resolves to CGPointMake, so they are identical in your compiled code. ccp is a #define and ccpAdd is a tiny static inline helper, so everything is resolved at compile time with no runtime cost.
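For reference, this is roughly how those helpers are declared in cocos2d's CGPointExtension.h (paraphrased, so treat it as a sketch):

// ccp is a thin macro over CGPointMake...
#define ccp(__X__,__Y__) CGPointMake(__X__,__Y__)

// ...and ccpAdd is a static inline helper built on ccp, so after
// compilation all three spellings produce the same code.
static inline CGPoint ccpAdd(const CGPoint v1, const CGPoint v2)
{
    return ccp(v1.x + v2.x, v1.y + v2.y);
}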

Indeed, ccpAdd & ccp are identical in your compiled code.
As for your performance problem, if you have a lot of sprites to update, you may want to spawn a background thread and do part of your updating there.
Use performSelectorInBackground:withObject:, and don't forget to wrap the background code in an autorelease pool.
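The rough shape of that suggestion (method names invented for the example; ccTime is cocos2d's delta type) would be something like:

- (void)update:(ccTime)delta
{
    // Hand the heavy, non-rendering work to a background thread.
    [self performSelectorInBackground:@selector(updateHeavyStuff) withObject:nil];
    // Keep anything that touches cocos2d nodes on the main thread.
}

- (void)updateHeavyStuff
{
    @autoreleasepool {
        // Pure computation only: velocities, AI, collision math, etc.
        // Hand the results back with performSelectorOnMainThread: rather
        // than touching nodes from here.
    }
}

Note that spawning a thread every frame has its own cost, so measure before committing to this.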

UE4 get all players in FoV

I'm trying to build an array of all player pawns that are in the player's FoV cone. I'd prefer not to have to loop through GetAllActorsOfClass for obvious performance reasons. This will be done every tick.
GetAllActorsOfClass iterates over a hash table of things of that class. Even with 100 players it is unlikely to be very costly. I would imagine that a "get actors in frustum" would just do that under the hood.
If you are okay with using it, from there you would use ConvertWorldLocationToScreenLocation and compare that to the screen bounds coordinates with GetViewportSize.
The only method I can think of offhand that wouldn't use GetAllActorsOfClass is to calculate the size of the rectangle at the "end" of the frustum, use a giant multi box trace, and filter based on the dot product. Traces are cheap, and the dot product is cheap. Whether or not it's cheaper than GetAllActorsOfClass is going to be specific to your game.
If performance is really a problem the best solution is to use code. Depending on your implementation you might be able to use Blueprint nativization to get an extra boost without digging into code.
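For illustration, the dot-product filter mentioned above boils down to this (a plain C sketch of the math with a hypothetical helper; in UE4 you would use FVector::DotProduct or the Blueprint Dot Product node):

#include <math.h>

typedef struct { float x, y, z; } Vec3;

// Returns 1 when targetPos lies inside the viewer's view cone.
// Assumes viewerForward is already normalized.
static int IsInViewCone(Vec3 viewerPos, Vec3 viewerForward, Vec3 targetPos, float fovDegrees)
{
    Vec3 toTarget = { targetPos.x - viewerPos.x,
                      targetPos.y - viewerPos.y,
                      targetPos.z - viewerPos.z };
    float len = sqrtf(toTarget.x * toTarget.x + toTarget.y * toTarget.y + toTarget.z * toTarget.z);
    if (len < 0.0001f) return 1;   // target is on top of the viewer

    // cos(angle) between the forward vector and the direction to the target.
    float cosAngle = (viewerForward.x * toTarget.x +
                      viewerForward.y * toTarget.y +
                      viewerForward.z * toTarget.z) / len;

    // Inside the cone when the angle is less than half the FOV.
    return cosAngle >= cosf(fovDegrees * 0.5f * (float)M_PI / 180.0f);
}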
Use a multi sphere trace from your player in the FOV direction and loop through the hit results.
Make sure you set the collision layer correctly so the trace only interacts with the target players.
I do this on my mobile game with around 10-20 actors per frame, and it works fine.

How to Troubleshoot Jitter in Unity - Vive VR

Update:
Shadows! It looks like the biggest problem I had was with shadows. I had absolutely every mesh casting and receiving shadows. Turning off shadows seems to reduce my issues by 99%. Not sure what the remaining 1% is, but this is way better! So on that note, my question still remains: how the heck could I have known it was shadows? Surely the graph below should have somehow told me this, yes?
Original Question:
The main question is how can I troubleshoot this further? I know there's a profiler thing but I'm not sure what to do with it. Is that what I want and need? Does this tell anyone anything? It should be representative of when the jitter is occurring.
More Background: I use a Vive controller to move by pulling the trigger. I translate based on Time.deltaTime, using the controller's forward vector with the y component zeroed out to know which way to move and how fast. There are tons of enemies and tons of spherical trees. The game seems to run quite well except when I look in a particular direction.

In trying to isolate what it is in that direction, I started turning stuff off, such as the enemies and the trees. Oddly enough, that makes it WORSE. The only way I can imagine this being possible is if something is eating up resources all the more now that it's no longer fighting for them with other objects. But it would have to be doing that outside the regular frame loop, or else the movement wouldn't appear impaired at all, right? That, or there's something wrong with how I move, which now has more processing power to make it worse? I'm so at a loss. Anything wrong with this strategy?
public override void OnTriggerPulled(float triggerPullPercent)
{
    // Only the off-hand controller moves, and only past a small trigger dead zone.
    if (!mainHand && triggerPullPercent > 0.05f)
    {
        // Move in the controller's forward direction, flattened onto the horizontal plane.
        Vector3 goForward = transform.forward;
        goForward.y = 0;

        // Scale by trigger pull and frame time so movement speed is framerate-independent.
        transform.parent.transform.Translate(goForward * triggerPullPercent * Time.deltaTime * 5f);
    }
}
Even worse, the issue is inconsistent. After removing all the trees and seeing zero signs of improvement, I removed a single object just cuz it was ugly and not ready for the game anyway. Suddenly, things seemed quite a bit better than the "Worse" state it was in but it was more akin to the original jittery state which had me doing this to begin with. I added the object back in and oddly enough, it did not revert back to its worse state. So clearly it must not have been the object that made it any better/worse. I added my trees back in and it was suddenly back to the horrendous state. Now suspecting the trees, I removed them again and found no difference. Still crappy. So... not the trees or the object? I removed the one object and it was suddenly much better again. I can do this forever very repeatably. What gives? It's neither and both simultaneously? Any ideas? What more can I do to troubleshoot this? My current method is severely lacking.
Thanks!

To use GKAgents or not

I am developing (or at least trying to develop) a decently big real-time tactics game (something similar to an RTS) using SpriteKit.
I am using GameplayKit for pathfinding.
Initially I used SKActions to move the sprites along the path, but I quickly realized that was a big mistake.
Then I tried to implement it with GKAgents (this is my current state).
I feel that GKAgents are very raw and premature. They also seem to follow some strange version of Newton's first law that makes them move forever (I can't think of any scenario where that would be useful - maybe for presentations at WWDC).
I also see that they have an angular speed used to perform rotations, which I don't need at all and can't really find out how to disable.
GKBehaviors given GKGoals also seem to do some weird things:
Setting a behavior to avoid obstacles makes my units joggle around them.
Setting a behavior with a follow-path goal completely ignores everything unless maxPredictionTime is low enough.
I am not even willing to describe what happens when I combine the two.
I feel broken...
I feel like I have 2 options now:
1) Struggle more with those agents and try to make them behave as I wish.
2) Roll all the movement on my own with help from GKObstacleGraph and pathfinding (which is buggy as well, I have to say: at some points it will generate the most awful path, like "go touch that obstacle, then reverse and touch that one, then go to the actual point", when a straight line would have worked from the beginning).
The question is:
Which of those options would be best?
One of the best ways (in SpriteKit/GameplayKit) to get the kind of behavior you're after is to recognize that path planning and path following need not be the same operation. GameplayKit provides tools for both — GKObstacleGraph is good for planning and GKAgent is good for following a planned path — and they work best when you combine the strengths of each.
(It can be a bit misleading that GKAgent provides obstacle avoidance; don't think of this in the same way as finding a route around obstacles, more like reacting to sudden obstacles in your way.)
To put it another way, GKObstacleGraph and GKAgent are like the difference between navigating with a map and safely driving a car. The former is where you decide to take CA-85 and US-101 instead of I-280. (And maybe reevaluate your decision once in a while — say, to pick a different set of roads around a traffic jam.) The latter is where you, continuously moment-to-moment, change lanes, avoid potholes, pass slower vehicles, slow down for heavy traffic, etc.
In Apple's DemoBots sample code, they break this out into two steps:
1) Use GKObstacleGraph to do high-level path planning. That is, when the bad guys are "here" and the hero is "way over there", and there are some walls in between, select a series of waypoints that roughly approximates a route from here to there.
2) Use GKAgent behaviors to make the character roughly follow that path while also reacting to other factors (like making the bad guys not step on each other and giving them vaguely realistic movement curves instead of simply following the lines between waypoints).
You can find most of the relevant stuff behind this in TaskBotBehavior.swift in that sample code — start from addGoalsToFollowPath and look at both the places where it gets called and the calls it makes.
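As a rough sketch of that two-step split (Objective-C here, while DemoBots itself is Swift; the obstacles, start, goal and agent variables are assumed to already exist):

// 1) Plan: find waypoints around the obstacles (done occasionally, not every frame).
GKObstacleGraph *graph = [GKObstacleGraph graphWithObstacles:obstacles bufferRadius:30.0f];
GKGraphNode2D *startNode = [GKGraphNode2D nodeWithPoint:(vector_float2){ (float)start.x, (float)start.y }];
GKGraphNode2D *goalNode  = [GKGraphNode2D nodeWithPoint:(vector_float2){ (float)goal.x, (float)goal.y }];
[graph connectNodeUsingObstacles:startNode];
[graph connectNodeUsingObstacles:goalNode];
NSArray *waypoints = [graph findPathFromNode:startNode toNode:goalNode];

// 2) Follow: let the agent steer along those waypoints every frame,
//    while also dodging whatever gets in its way locally.
GKPath *path = [GKPath pathWithGraphNodes:waypoints radius:20.0f];
agent.behavior = [GKBehavior behaviorWithGoals:@[
        [GKGoal goalToFollowPath:path maxPredictionTime:0.5 forward:YES],
        [GKGoal goalToAvoidObstacles:obstacles maxPredictionTime:0.5]]
    andWeights:@[@1.0, @0.5]];

// Clean up the temporary nodes once the route is planned.
[graph removeNodes:@[startNode, goalNode]];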
As for the "moving forever" and "angular speed" issues...
The agent simulation is a weird mix of a motivation analogy (i.e. the agent does what's needed to move it toward where it "wants" within constraints) and a physics system (i.e. those movements are modeled like forces/impulses). If you take away an agent's goals, it doesn't know that it needs to stop — instead, you need to give it a goal of stopping. (That is, a movement speed goal of zero.) There might be a better model than what Apple's chosen here — file bugs if you have suggestions for design improvements.
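For example (a minimal sketch, assuming a GKAgent2D named agent), "stop" is itself expressed as a goal:

// Tell the agent that what it "wants" is to be standing still.
GKGoal *stopGoal = [GKGoal goalToReachTargetSpeed:0.0f];
agent.behavior = [GKBehavior behaviorWithGoal:stopGoal weight:1.0f];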
Angular speed is trickier. The notion of agents' intrinsic physical constraints being sort of analogous to, say, vehicles on land or boats at sea is pretty well baked into the system. It can't really handle things like space fighters that have to reorient to vector their thrust, or walking creatures that can just as happily walk sideways or backwards as forward — at least, not on its own. You can get some mileage toward changing the "feel" of agent movement with the maxAcceleration property, but you're limited by the fact that said property covers both linear and angular acceleration.
Remember, though, that the interface between what the agent system "wants" and what "actually happens" in your game world is under your control. The easiest way to implement GKAgentDelegate is to just sync the velocity and position properties of the agent and the sprite that it represents. However, you don't have to do it that way — you could calculate a different force/impulse and apply it to your sprite.
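A minimal sketch of that sync (assuming the delegate owns a sprite property for the SKSpriteNode the agent represents) looks something like this; the second method is the spot where you could compute your own force/impulse instead of copying values verbatim:

// Before the agent simulates: feed it the sprite's current state.
- (void)agentWillUpdate:(GKAgent *)agent
{
    GKAgent2D *agent2D = (GKAgent2D *)agent;
    agent2D.position = (vector_float2){ (float)self.sprite.position.x, (float)self.sprite.position.y };
    agent2D.rotation = (float)self.sprite.zRotation;
}

// After the agent simulates: push the result back onto the sprite
// (or apply an impulse to the sprite's physics body here instead).
- (void)agentDidUpdate:(GKAgent *)agent
{
    GKAgent2D *agent2D = (GKAgent2D *)agent;
    self.sprite.position = CGPointMake(agent2D.position.x, agent2D.position.y);
    self.sprite.zRotation = agent2D.rotation;
}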
I can't comment yet, so I'm posting this as an answer. I faced the same problem recently: an agent wiggling around the target, or an agent that keeps moving even after you remove the behavior. Then I realized that the behavior is just the algorithm controlling the movement, but you can still access and set the agent's speed, position and angle by hand.
In my case, I have a critter entity that chases food in the scene. When it makes contact with the food agent, the food entity is removed. I tried many things to make the critter stop after eating the food (it would keep going in a straight line), and all I had to do was set its speed to 0. That is because the behavior influences not the position directly, but the speed/angle combination instead (from what I understand). When there is no goal for the entity, it doesn't "want" to change its state, so whatever speed and direction it has reached, it will keep; it simply won't update or change them. So unless you create a goal to make it want to stop, it will wiggle or keep going. The easy way is to set the behavior to nil and set the speed to 0 yourself.
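In code, that "easy way" is just a couple of lines (critter.agent is a made-up accessor for the example):

// Drop the steering behavior and zero the speed so the critter stands still.
critter.agent.behavior = nil;
critter.agent.speed = 0.0f;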
If the behavior/goal system doesn't do it for the type of animation you are looking for, you can still use the agent system, customize the movement through the GKAgentDelegate protocol and the update method, and make it interact with other agents later on. You can even synchronize the agent with a node that is moved by the physics engine or by actions (or any other way).
I think the agent system is nice to keep around since you can use it later, even if it's only for special effects. But just as mixing actions and physics can give some weird results, mixing goals/behaviors with any other "automated" tool will probably result in erratic behavior.
You can also use the agent system for things other than moving an actual sprite around. For example, you could use an agent as a "target seeker" to simulate reaction time for your enemies: the agent moves around the scene and finds other agents, and when it makes contact with a suitable target, the enemy entity attacks it (random idea).
It's not a "one size fits all" solution, but it's a very nice tool to have.

Separating OpenGL Calls from Updating on the iPhone

I'm a bit of a newb when it comes to threading, so any pointers in the right direction would be a great help. I've got a game with both a fairly heavy update function, and a fairly heavy draw function. I'd assume that the majority of the weight in the draw function is going to happen on the GPU. Because of this, I'd like to start calculating the update on the next frame while the drawing is happening. Right now, my game loop is quite simple:
Game->Update1();
Game->Update2();
Game->Draw();
Update1() updates variables that do not change game state, so it can run independently from Draw. That is to say, there should be no fights over data between the two. It is also the bulk of the CPU processing.
Update2() updates variables that Draw needs, and it is quite fast, so it seems right to have it running serially with Draw(). Additionally, I believe that the Draw() function is light on CPU and heavy on GPU.
What I would like to happen is that while the GPU is busy processing all the Draw functionality, the next frame's Update1() can use the CPU to get the next frame's update ready. It doesn't seem like I'm automatically getting this functionality -- the Draw cycle seems to take a little while and block everything until it's done, which is less than ideal.
What's the proper way to do this? Is this already happening, and I'm just not observing it properly?
That depends on what Draw() contains. You should get the CPU-GPU parallelism automatically, unless some call inside Draw() forces a synchronization between the CPU and GPU; one simple example is glReadPixels.
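If you also want to overlap the next frame's Update1() with the current frame's Draw() on the CPU side, a rough GCD sketch (Objective-C++, with the queue name and gameIsRunning flag invented for the example) could look like this, assuming Update1() truly shares no mutable data with Update2() or Draw():

dispatch_queue_t updateQueue = dispatch_queue_create("com.example.update1", DISPATCH_QUEUE_SERIAL);
dispatch_group_t group = dispatch_group_create();

Game->Update1();                       // prime the pipeline for the first frame
while (gameIsRunning) {
    Game->Update2();                   // fast; consumes last iteration's Update1() results

    dispatch_group_async(group, updateQueue, ^{
        Game->Update1();               // heavy, independent work for the *next* frame
    });

    Game->Draw();                      // GL calls stay on this thread; the GPU already works asynchronously

    // Join before the next iteration reads Update1()'s output again.
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
}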

Is there a faster way to draw text?

Shark complains about a big performance hit with this line, which takes like 80% of CPU time. I have a counter that is updated very frequently and performance seriously sucks.
It's a custom UILabel subclass with -drawRect: implemented. Every time the counter value changes, this is used to draw the new text:
[self.text drawInRect:textRect withFont:correctedFont lineBreakMode:self.lineBreakMode alignment:self.textAlignment];
When I comment this line out, performance rocks. It's smooth and fast. So Shark isn't wrong about this. But what could I do to improve this? Maybe go a level deeper? Does that make any sense?
Is drawing text really so incredibly heavy...?
There's no reason the drawing of a single label should cause such a massive performance hit. If you're updating it more than 30-60 times per second, though, the system may have trouble keeping up. In that case, you could use an NSTimer to only perform the drawing at fixed intervals. There's no doubt that drawing text is expensive, but you've pretty much found the optimal way of doing the drawing itself. If the label is only a single line, you can use the slightly cheaper drawAtPoint:withAttributes: instead.
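A rough sketch of the NSTimer idea (the redrawTimer and pendingValue properties are made up for the example): remember the newest counter value immediately, but only push it into the label a fixed number of times per second, so the expensive drawInRect: runs at most that often.

- (void)startThrottledUpdates
{
    self.redrawTimer = [NSTimer scheduledTimerWithTimeInterval:1.0 / 20.0   // ~20 redraws per second
                                                        target:self
                                                      selector:@selector(pushPendingValue)
                                                      userInfo:nil
                                                       repeats:YES];
}

- (void)counterDidChange:(NSInteger)newValue
{
    self.pendingValue = newValue;   // cheap: no drawing happens here
}

- (void)pushPendingValue
{
    NSString *newText = [NSString stringWithFormat:@"%ld", (long)self.pendingValue];
    if (![newText isEqualToString:self.text]) {
        self.text = newText;        // setting the text marks the label for redraw
    }
}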
Underneath, the text is being drawn with Quartz2D. You might see some improvement if you use it directly.
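A sketch of dropping down to that level with Core Text, one way to draw with Quartz directly (ARC assumed; the attributed string and CTLine could also be cached and rebuilt only when the text changes):

#import <CoreText/CoreText.h>

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    CTFontRef font = CTFontCreateWithName((__bridge CFStringRef)self.font.fontName,
                                          self.font.pointSize, NULL);
    NSAttributedString *string =
        [[NSAttributedString alloc] initWithString:self.text
                                        attributes:@{ (id)kCTFontAttributeName : (__bridge id)font }];
    CTLineRef line = CTLineCreateWithAttributedString((__bridge CFAttributedStringRef)string);

    // Core Text uses a flipped coordinate system relative to UIKit.
    CGContextSetTextMatrix(ctx, CGAffineTransformIdentity);
    CGContextTranslateCTM(ctx, 0, rect.size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);

    // Rough baseline a few points above the bottom of the view; adjust to taste.
    CGContextSetTextPosition(ctx, 0, 5);
    CTLineDraw(line, ctx);

    CFRelease(line);
    CFRelease(font);
}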