pyglet: synchronise event with frame drawing

The default method of animation is to have an independent timer set to execute at the same frequency as the frame rate. This is not what I'm looking for because it provides no guarantee the event is actually executed at the right time. The lack of synchronisation with the actual frame drawing leads to occasional animation jitters. The obvious solution is to have a function run once for every frame, but I can't find a way to do that. on_draw() only runs when I press a key on the keyboard. Can I get on_draw() to run once per frame, synchronised with the frame drawing?

The way to do this is to call pyglet.clock.schedule(update), making sure vsync is enabled (which it is by default). This ensures on_draw() runs once per frame. The update function passed to pyglet.clock.schedule() doesn't even need to do anything; it can be empty. Since on_draw() is now executed once per frame draw, there's no point in having two separate functions that both run once per frame. It would have been a lot nicer if there were just an option somewhere in the Window class saying on_draw() should run once per frame, but at least it's working now.
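A minimal sketch of that setup, assuming a standard pyglet window (the actual drawing is left as a placeholder):

import pyglet

window = pyglet.window.Window(vsync=True)  # vsync is on by default anyway

@window.event
def on_draw():
    window.clear()
    # per-frame drawing goes here

def update(dt):
    pass  # no-op: scheduling it is enough to force a redraw every frame

pyglet.clock.schedule(update)
pyglet.app.run()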
I'm still getting bad tearing, but that's a bug in pyglet on my platform or system; with vsync enabled there shouldn't be any tearing.

Related

Drawable presented late, causes steady state delay

I have a little Swift playground that uses a Metal compute kernel to draw into a texture each time the mouse moves. The compute kernel runs very fast, but for some reason, as I start dragging the mouse, some unknown delays build up in the system and eventually the result of each mouse move event is displayed as much as 4 frames after the event is received.
All my code is here: https://github.com/jtbandes/metalbrot-playground
I copied this code into a sample app and added some os_signposts around the mouse event handler so I could analyze it in Instruments. What I see is that the first mouse drag event completes its compute work quickly, but the "surface queued" event doesn't happen until more than a frame later. Then once the surface is queued, it doesn't actually get displayed at the next vsync, but the one after that.
The second mouse drag event's surface gets queued immediately after the compute finishes, but it's now stuck waiting for another vsync because the previous frame was late. After a few frames, the delay builds and later frames have to wait a long time for a drawable to be available before they can do any work. In the steady state, I see about 4 frames of delay between the event handler and when the drawable is finally presented.
What causes these initial delays and can I do something to reduce them?
Is there an easy way to prevent the delays from compounding, for example by telling the system to automatically drop frames?
I still don't know where the initial delay came from, but I found a solution to prevent the delays from compounding.
It turns out I was making an incorrect assumption about mouse events. Mouse events can be delivered more frequently than the screen updates — in my testing, often there is less than 8ms between mouse drag events and sometimes even less than 3ms, while the screen updates at ~16.67ms intervals. So the idea of rendering the scene on each mouse drag event is fundamentally flawed.
A simple way to work around this is to keep track of queued draws and simply not begin drawing again if too many drawables are still queued. For example, something like:
var queuedDraws = 0

// Mouse event handler:
if queuedDraws > 1 {
    return // two drawables are already in flight; skip this frame
}
queuedDraws += 1

// While drawing:
drawable.addPresentedHandler { _ in
    queuedDraws -= 1 // note: runs on a background thread, so guard this counter in real code
}

MTKView updating frame buffer without clearing previous contents

I am working on a painting program where I draw interactive strokes via an MTKView. If I set the renderPassDescriptor loadAction to 'clear':
renderPassDescriptor?.colorAttachments[0].loadAction = .clear
The frame buffer, as expected, shows the latest contents of renderCommandEncoder?.drawPrimitives, which in this case is the leading edge of the brushstroke.
If I set loadAction to 'load':
renderPassDescriptor?.colorAttachments[0].loadAction = .load
The frame buffer flashes like crazy and shows a patchy trail of what I've just drawn. I now understand that the flashing is likely caused by MTKView's default triple buffering. Thus, each time I write to the currentDrawable, I'm likely writing to one of three cycling buffers. Please correct me if I'm wrong.
My question is, what do I need to do to draw a clean brushstroke without the frame buffer flashing as it does now? In other words, is there a way to have a master buffer that gets updated with the latest contents of commandEncoder?
You can use a texture of your own as the color attachment of a render pass. You don't have to use the texture of a drawable. In that way, you can use the .load action without getting garbage or weird flashing or whatever. You will have full control over which texture you're rendering to and what its contents are.
After rendering to that texture for a render pass, you then need to blit that to the drawable's texture for display.
The main complication here is that you won't have the benefits of double- or triple-buffering. You'll lose a certain amount of performance, since everything will have to be synced to that one texture's state. I suspect, though, that you don't need that much performance, since this is interactive and only has to keep up with the speed of a human.
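A minimal sketch of that approach in Swift, assuming an MTKView named view plus the usual device and commandQueue (all names here are illustrative):

let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: view.colorPixelFormat,
                                                    width: Int(view.drawableSize.width),
                                                    height: Int(view.drawableSize.height),
                                                    mipmapped: false)
desc.usage = [.renderTarget, .shaderRead]
let canvas = device.makeTexture(descriptor: desc)!

// Each frame: render the new stroke segment on top of the existing canvas
let commandBuffer = commandQueue.makeCommandBuffer()!
let rpd = MTLRenderPassDescriptor()
rpd.colorAttachments[0].texture = canvas
rpd.colorAttachments[0].loadAction = .load      // keep earlier strokes
rpd.colorAttachments[0].storeAction = .store
let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: rpd)!
// ... encode drawPrimitives for the new stroke segment here ...
encoder.endEncoding()

// Blit the accumulated canvas into whichever drawable is current
if let drawable = view.currentDrawable {
    let blit = commandBuffer.makeBlitCommandEncoder()!
    blit.copy(from: canvas, to: drawable.texture)  // formats match since both use view.colorPixelFormat
    blit.endEncoding()
    commandBuffer.present(drawable)
}
commandBuffer.commit()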

Are there any obvious performance issues of mass-using SKActions?

I have a game where I use a lot of SKActions to deliver the desired game logic.
For example, instead of setting the zRotation of a sprite to a value directly, I would use runAction(SKAction.rotateTo(/* angle here */, duration: 0.0)).
I make calls like this in update, and in touchesMoved. Thus this could mean hundreds of these calls, sometimes nested with groups of other actions.
Am I incurring significant overhead relative to directly setting the zRotation property?
Never use SKActions for real-time motion. Instead, either set the zRotation directly or set the necessary angular velocity each frame. In your case, the update method and the touchesMoved method are both poor places to create SKActions, because they can run 30-60 times a second.
SKActions do generate significant overhead. Here is a quote from Apple's documentation:
When You Shouldn’t Use Actions
Although actions are efficient, there is a cost to creating and executing them. If you are making changes to a node’s properties in every frame of animation and those changes need to be recomputed in each frame, you are better off making the changes to the node directly and not using actions to do so. For more information on where you might do this in your game, see Advanced Scene Processing.
Source
You can see an example of using real-time motion instead of SKActions in my answer here
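A minimal sketch of that real-time approach (the scene records the latest touch angle and applies it directly in update; the sprite and asset names are illustrative):

import SpriteKit

class GameScene: SKScene {
    let sprite = SKSpriteNode(imageNamed: "player")  // illustrative asset name
    var targetAngle: CGFloat = 0

    override func didMove(to view: SKView) {
        sprite.position = CGPoint(x: frame.midX, y: frame.midY)
        addChild(sprite)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let location = touches.first?.location(in: self) else { return }
        // Just record the desired angle; don't build an action for it
        targetAngle = atan2(location.y - sprite.position.y,
                            location.x - sprite.position.x)
    }

    override func update(_ currentTime: TimeInterval) {
        // Direct assignment: no per-frame SKAction allocation or scheduling
        sprite.zRotation = targetAngle
    }
}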
You should not run an SKAction in the update method, as it is called up to 60 times a second. Instead, use an event-based trigger which in turn runs the SKAction; touchesMoved is a good example of that. You can also use a completion block to start a new SKAction when the current one finishes.
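A brief sketch of that completion-block chaining (the action and node names are illustrative):

let rotate = SKAction.rotate(toAngle: .pi / 2, duration: 0.3)
sprite.run(rotate) {
    // Called once the rotation finishes; a safe place to start the next action
    sprite.run(SKAction.fadeOut(withDuration: 0.2))
}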

Separating OpenGL Calls from Updating on the iPhone

I'm a bit of a newb when it comes to threading, so any pointers in the right direction would be a great help. I've got a game with both a fairly heavy update function, and a fairly heavy draw function. I'd assume that the majority of the weight in the draw function is going to happen on the GPU. Because of this, I'd like to start calculating the update on the next frame while the drawing is happening. Right now, my game loop is quite simple:
Game->Update1();
Game->Update2();
Game->Draw();
Update1() updates variables that do not change game state, so it can run independently from Draw. That is to say, there should be no fights over data between the two. It is also the bulk of the CPU processing.
Update2() updates variables that Draw needs, and it is quite fast, so it seems right to have it running serially with Draw(). Additionally, I believe that the Draw() function is light on CPU and heavy on GPU.
What I would like to happen is that while the GPU is busy processing all the Draw functionality, the next frame's Update1() can use the CPU to get the next frame's update ready. It doesn't seem like I'm automatically getting this functionality -- the Draw cycle seems to take a little while and block everything until it's done, which is less than ideal.
What's the proper way to do this? Is this already happening, and I'm just not observing it properly?
That depends on what Draw() contains. You should get the CPU-GPU parallelism automatically, unless some call inside Draw() synchronizes between the CPU and GPU; one simple example is glReadPixels.
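A minimal sketch of explicitly pipelining the loop with a worker thread (C++; this assumes, as described in the question, that Update1() touches no state that Update2() or Draw() read):

#include <thread>

game.Update1();                     // prime the first frame's independent work
while (running) {
    game.Update2();                 // fast; produces the data Draw() reads
    std::thread nextUpdate([&] {
        game.Update1();             // next frame's heavy CPU work, overlapped with Draw()
    });
    game.Draw();                    // submits GPU work; only blocks if something syncs (e.g. glReadPixels)
    nextUpdate.join();              // next frame's Update1() is done before we loop
}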

How can I chain animations in iPhone-OS?

I want to do some sophisticated animations, but I only know how to animate a single block of changes. Is there a way to chain animations, so that I could, for example, make changes A -> B -> C -> D -> E -> F (and so on), while Core Animation waits until each animation has finished and then proceeds to the next one?
In particular, I want to do this: I have a UIImageView that shows a cube.
Phase 1) Cube is flat on the floor
Phase 2) Cube rotates slightly to left, while the rotation origin is in bottom left.
Phase 3) Cube rotates slightly to right, while the rotation origin is in the bottom right.
These phases repeat 10 times, and stop at Phase 1). Also, the wiggling diminishes with time.
I know how to animate ONE change in a block, but how could I do such a repeating sequence with some logic in between? It's not the same thing every time; it changes, since the wiggling becomes lesser and lesser until it stops.
Assuming you're using UIView animation...
You can provide an animation 'stopped' method that gets called when an animation actually finishes (check out setAnimationDelegate). So you can have an array of animations queued up and ready to go: set the delegate, then kick off the first animation. When it ends, the selector you registered with setAnimationDidStopSelector is called (make sure to check the finished flag, so you know it's really done and not just paused). You can also pass along an animation context, which could be the index into your queue of animations, or a data structure containing counters so you can adjust the animation values on each cycle.
So then it's just a matter of waiting for one animation to finish, then kicking the next one off the queue.
The same method applies if you're using other types of animation. You just need to set up your data structures so it knows what to do after each one ends.
You'll need to setup an animation delegate inside of your animation blocks. See setAnimationDelegate in the UIView documentation for all the details.
In essence your delegate can be notified whenever an animation ends. Use the context parameter to determine what step in your animation is currently ending (Phase 1, Phase 2, etc)
This strategy should allow you to chain together as many animation blocks as you want.
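On newer SDKs the same chaining falls out of the block-based API, where the completion handler replaces the delegate/selector dance. A sketch of the damped wiggle using UIView.animate (Swift; names are illustrative, and the bottom-corner rotation origins from the question are omitted for brevity):

import UIKit

func wiggle(_ view: UIView, times: Int, angle: CGFloat) {
    guard times > 0 else {
        view.transform = .identity   // settle back to Phase 1
        return
    }
    UIView.animate(withDuration: 0.1, animations: {
        view.transform = CGAffineTransform(rotationAngle: angle)      // lean left
    }, completion: { _ in
        UIView.animate(withDuration: 0.1, animations: {
            view.transform = CGAffineTransform(rotationAngle: -angle) // lean right
        }, completion: { _ in
            wiggle(view, times: times - 1, angle: angle * 0.8)        // damp each cycle
        })
    })
}

wiggle(cubeView, times: 10, angle: .pi / 16)  // cubeView: the UIImageView from the question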
I think the answer to your question is that you need to specify more precisely what you want to do. You clearly have an idea in mind, but pinning down the details will help you answer your own question.