Drawable presented late causes steady-state delay - swift

I have a little Swift playground that uses a Metal compute kernel to draw into a texture each time the mouse moves. The compute kernel runs very fast, but for some reason, as I start dragging the mouse, some unknown delays build up in the system and eventually the result of each mouse move event is displayed as much as 4 frames after the event is received.
All my code is here: https://github.com/jtbandes/metalbrot-playground
I copied this code into a sample app and added some os_signposts around the mouse event handler so I could analyze it in Instruments. What I see is that the first mouse drag event completes its compute work quickly, but the "surface queued" event doesn't happen until more than a frame later. Then once the surface is queued, it doesn't actually get displayed at the next vsync, but the one after that.
The second mouse drag event's surface gets queued immediately after the compute finishes, but it's now stuck waiting for another vsync because the previous frame was late. After a few frames, the delay builds and later frames have to wait a long time for a drawable to be available before they can do any work. In the steady state, I see about 4 frames of delay between the event handler and when the drawable is finally presented.
What causes these initial delays and can I do something to reduce them?
Is there an easy way to prevent the delays from compounding, for example by telling the system to automatically drop frames?

I still don't know where the initial delay came from, but I found a solution to prevent the delays from compounding.
It turns out I was making an incorrect assumption about mouse events. Mouse events can be delivered more frequently than the screen updates — in my testing, often there is less than 8ms between mouse drag events and sometimes even less than 3ms, while the screen updates at ~16.67ms intervals. So the idea of rendering the scene on each mouse drag event is fundamentally flawed.
A simple way to work around this is to keep track of queued draws and simply not begin drawing again if another drawable is still queued. For example, something like:
var queuedDraws = 0

// Mouse event handler:
if queuedDraws > 1 {
    return // skip this frame
}
queuedDraws += 1

// While drawing:
drawable.addPresentedHandler { _ in
    queuedDraws -= 1
}
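To make the workaround more concrete, here is a minimal sketch of how that counter might be wired into a mouse handler driving a CAMetalLayer. The FractalView class, its metalLayer/commandQueue properties, and encodeCompute(into:using:) are made-up names, not the playground's actual code, and the sketch keeps the counter on the main queue because addPresentedHandler invokes its callback off the main thread.

import Cocoa
import Metal
import QuartzCore

final class FractalView: NSView {
    // Hypothetical properties; the real playground sets these up differently.
    var metalLayer: CAMetalLayer!
    var commandQueue: MTLCommandQueue!
    private var queuedDraws = 0   // only touched on the main queue

    override func mouseDragged(with event: NSEvent) {
        // Mouse drags can arrive faster than vsync; skip the draw if another
        // drawable is already queued for presentation.
        if queuedDraws > 1 {
            return // skip this frame
        }
        queuedDraws += 1

        guard let drawable = metalLayer.nextDrawable(),
              let commandBuffer = commandQueue.makeCommandBuffer() else {
            queuedDraws -= 1
            return
        }

        encodeCompute(into: drawable.texture, using: commandBuffer)

        drawable.addPresentedHandler { [weak self] _ in
            // Presented handlers fire off the main thread; hop back before
            // touching the counter.
            DispatchQueue.main.async { self?.queuedDraws -= 1 }
        }
        commandBuffer.present(drawable)
        commandBuffer.commit()
    }

    private func encodeCompute(into texture: MTLTexture, using commandBuffer: MTLCommandBuffer) {
        // Compute-kernel dispatch omitted; stand-in for the playground's kernel.
    }
}

With the queuedDraws > 1 check, at most one extra drawable ever waits behind the one on screen, so late frames are dropped instead of piling up into the multi-frame delay described above.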

Related

Psychopy: delayed picture display in an EEG experiment

I'm running an EEG experiment with pictures and I send triggers over the parallel port. I added triggers to my code via the PsychoPy Builder and synchronized them to the screen refresh. I used a photodiode to test whether pictures are displayed at exactly the same time as the trigger is sent, and I found irregular delays: a trigger is sent between 5 ms and 26 ms earlier than the image is actually displayed.
I don't think image size is the issue, as I observed the delays even when I replaced the pictures with a small white image. Moreover, there is an ISI period of half a second before each picture is displayed, which should help. I was told by the technicians that the graphics card or a cable should not be an issue. Does anyone have an idea why I get these delays and how they could be solved?
Due to the comments, I'm adding a piece of code that sends a trigger:
# *image_training_port* updates
if t >= 4.0 and image_training_port.status == NOT_STARTED:
    # keep track of start time/frame for later
    image_training_port.tStart = t  # underestimates by a little under one frame
    image_training_port.frameNStart = frameN  # exact frame index
    image_training_port.status = STARTED
    win.callOnFlip(image_training_port.setData, int(triggers_image_training))
if image_training_port.status == STARTED and t >= (4.0 + (0.5 - win.monitorFramePeriod * 0.75)):  # most of one frame period left
    image_training_port.status = STOPPED
    win.callOnFlip(image_training_port.setData, int(0))
Actually, this is most likely due to the monitor itself. Try swapping in a different monitor.
Explanation: Flat panel displays often do some "post-processing" on the frame pixels to make them look prettier (almost all flat panel TVs do this). The post-processing is unwanted not only because it alters your carefully calibrated stimulus, but also because it can introduce delays if it takes longer than a few ms to perform. PsychoPy (or any software) can't detect this - it can only know the time the frame was flipped at the level of the graphics card, not what happens after that.

Restart SKEmitterNode without removing particles

I have a particle effect for a muzzle flare set up. What I'm currently using is a low numParticlesToEmit to limit the emitter to a short burst, and doing resetSimulation() on the emitter whenever I want to start a new burst of particles.
The problem I'm having is that resetSimulation() removes all particles onscreen, and I often need to create a new burst of particles before the previous particles disappear normally so they get erased early.
Is there a clean way to start up the emitter again without erasing the particles already onscreen?
Normally particle systems have a feature missing from SKEmitterNode: a duration, which controls how long the system emits. I don't see this in SKEmitterNode, despite it being present in SCNParticleSystem.
Never mind, here's a workaround:
SKEmitters have a numParticlesToEmit property and a particleBirthRate. Combined, these determine how long the particle system emits before shutting down.
Using these to control emission, it's possible to create pulses of particles for something like a muzzle flash or explosion.
I'm not sure if it removes itself when it reaches this limit. If not, you'll have to create a removal function of some sort, because the way to get your desired effect (multiple muzzle flashes on screen) is to copy() the SKEmitterNode. This is quite efficient, so don't worry about overhead.
There is a targetNode property on SKEmitterNode that is supposed to move the particles to another node, so that when you reset the emitter the previous particles still stay. Unfortunately, this is still bugged from what I can tell, unless somebody else has figured out how to get it working and I just missed it. Keep this in mind, though, in case they do ever fix it.
To help future readers, the code that I use to calculate the duration of the emitter is this:
let duration = Double(emitter.numParticlesToEmit) / Double(emitter.particleBirthRate) + Double(emitter.particleLifetime + emitter.particleLifetimeRange/2)
It works perfectly for me
Extension:
extension SKEmitterNode {
    var emitterDuration: Double {
        return Double(numParticlesToEmit) / Double(particleBirthRate) + Double(particleLifetime + particleLifetimeRange / 2)
    }
}
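Putting the pieces of this thread together, here is a small sketch of the copy-per-burst approach combined with the emitterDuration extension above. The "MuzzleFlash" template name and the fireMuzzleFlash(at:in:) helper are made up for illustration; the idea is just that each burst is its own cheap copy that removes itself once it has finished.

import SpriteKit

// "MuzzleFlash" and fireMuzzleFlash(at:in:) are made-up names.
let muzzleFlashTemplate = SKEmitterNode(fileNamed: "MuzzleFlash")!

func fireMuzzleFlash(at position: CGPoint, in scene: SKScene) {
    // Each burst is its own cheap copy, so earlier bursts keep fading out
    // instead of being wiped by resetSimulation().
    let burst = muzzleFlashTemplate.copy() as! SKEmitterNode
    burst.position = position
    scene.addChild(burst)

    // Remove the node once it has emitted everything and the particles have
    // died, using the emitterDuration extension above.
    burst.run(.sequence([
        .wait(forDuration: burst.emitterDuration),
        .removeFromParent()
    ]))
}

Copying an already-loaded emitter avoids re-unarchiving the .sks file for every burst, which is the overhead the answer above calls "quite efficient" to skip.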

pyglet: synchronise event with frame drawing

The default method of animation is to have an independent timer set to execute at the same frequency as the frame rate. This is not what I'm looking for because it provides no guarantee the event is actually executed at the right time. The lack of synchronisation with the actual frame drawing leads to occasional animation jitters. The obvious solution is to have a function run once for every frame, but I can't find a way to do that. on_draw() only runs when I press a key on the keyboard. Can I get on_draw() to run once per frame, synchronised with the frame drawing?
The way to do this is to use pyglet.clock.schedule(update), making sure vsync is enabled (which it is by default). This ensures on_draw() is run once per frame. The update function passed to pyglet.clock.schedule() doesn't even need to do anything; it can be blank. Since on_draw() is now executed once per frame draw, there's no point in having two separate functions that both run once per frame. It would have been a lot nicer if there were just an option somewhere in the Window class saying on_draw() should be run once per frame, but at least it's working now.
I'm still getting bad tearing, but that's a bug in pyglet on my platform or system; there isn't supposed to be any tearing.

SKEmitterNode not acting as expected

In my SK game, I have a helicopter that launches rockets. The heli can only have one rocket on the screen at a time; the reason is that I didn't want to create a new SKEmitterNode every time a rocket is launched, because it seems this would be taxing on the CPU, having to unarchive the emitter every time.
Therefore, the rocket is a single instance. When someone launches a rocket, it is added as a child of the scene, launched, and when it hits something, removed from its parent.
In my update: method, here's some pseudo-code:
if ([rocket parent]) {
    rocketSmoke.position = rocket.position;
    rocketSmoke.numParticlesToEmit = 0;
} else {
    rocketSmoke.numParticlesToEmit = 1;
}
So basically, the emitter follows the rocket if it exists and turns itself off if it doesn't. No, I cannot just add the emitter as a child of the rocket, because then when the rocket hits an object and I call removeFromParent, the smoke trail behind the rocket will instantly disappear, and that's not the effect I'm going for.
The problem is, when a new rocket is launched and numParticlesToEmit is set back to zero, the particle emitter acts like it's been emitting particles the entire time! I want the particle emitter to simply turn back on, but a bunch of smoke instantaneously appears behind the helicopter as well. You can observe this effect by playing with an emitter file in SK and setting the "max" on the particles to emit to 1, and then setting it back to zero: your screen will instantaneously fill back up as if it had been emitting particles the whole time. Obviously that's not what I want.
I also can't call resetSimulation, because that will remove all existing particles, and so any lingering smoke from the last launch will instantaneously disappear, which would look pretty unnatural if two rockets were launched one after another.
Is my only option to write my own emitter?
Create new instances of the emitter for each different rocket. That will resolve your problem.
If it becomes "Too Taxing on the CPU" then create two emitters and alternate which one is used.
Ex. Fire rocket 1, use emitter 1.
Fire rocket 2, use emitter 2.
Fire rocket 3, use emitter 1.
This will give you time for the emitter to finish the effect you want when a rocket hits something while also allowing your next rocket's emitter to behave as desired.
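As a rough sketch (in Swift, since that's what the other SpriteKit answer in this page uses), alternating between two pre-loaded emitters might look something like this. The "RocketSmoke" file name, the birth-rate values, and the launchRocket(from:) helper are all assumptions, not code from the question.

import SpriteKit

final class RocketScene: SKScene {
    // Two pre-loaded smoke emitters that the scene alternates between.
    private lazy var smokeEmitters: [SKEmitterNode] = (0..<2).map { _ in
        let emitter = SKEmitterNode(fileNamed: "RocketSmoke")!
        emitter.particleBirthRate = 0   // start switched off
        self.addChild(emitter)
        return emitter
    }
    private var nextEmitterIndex = 0

    func launchRocket(from position: CGPoint) {
        // Alternate emitters so the previous rocket's smoke keeps fading
        // while the new rocket gets a fresh trail.
        let smoke = smokeEmitters[nextEmitterIndex]
        nextEmitterIndex = (nextEmitterIndex + 1) % smokeEmitters.count

        smoke.position = position
        smoke.particleBirthRate = 200   // arbitrary rate; switches the trail on
        // ... create and launch the rocket node here, move `smoke` along with
        // it from update(_:), and set its particleBirthRate back to 0 on impact.
    }
}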

How can I chain animations in iPhone-OS?

I want to do some sophisticated animations, but I only know how to animate a block of changes. Is there a way to chain animations, so that I could for example make changes A -> B -> C -> D -> E -> F (and so on), while Core Animation waits until each animation has finished and then proceeds to the next one?
In particular, I want to do this: I have a UIImageView that shows a cube.
Phase 1) Cube is flat on the floor
Phase 2) Cube rotates slightly to left, while the rotation origin is in bottom left.
Phase 3) Cube rotates slightly to right, while the rotation origin is in the bottom right.
These phases repeat 10 times and stop at Phase 1). Also, the wiggling becomes smaller with time.
I know how to animate ONE change in a block, but how could I do such a repeating sequence with some logic in between? It's not the same thing each time; the wiggling becomes smaller and smaller until it stops.
Assuming you're using UIView animation...
You can provide an animation 'stopped' method that gets called when an animation is actually finished (check out setAnimationDelegate). So you can have an array of animations queued up and ready to go. Set the delegate, then kick off the first animation. When it ends, the selector you registered with setAnimationDidStopSelector is called (make sure to check the finished flag so you know it really completed and wasn't just paused). You can also pass along an animation context, which could be the index into your queue of animations. It can also be a data structure that contains counters so you can adjust the animation values through each cycle.
So then it's just a matter of waiting for one to be done then kicking off the next one taken off the queue.
The same method applies if you're using other types of animation. You just need to set up your data structures so it knows what to do after each one ends.
You'll need to set up an animation delegate inside your animation blocks. See setAnimationDelegate in the UIView documentation for all the details.
In essence, your delegate can be notified whenever an animation ends. Use the context parameter to determine which step in your animation is currently ending (Phase 1, Phase 2, etc.).
This strategy should allow you to chain together as many animation blocks as you want.
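For readers on newer SDKs, the same chaining idea can be expressed with the block-based UIView.animate completion handlers instead of setAnimationDelegate. This is only a sketch: the wiggle(_:remaining:angle:) helper, the durations, and the 0.8 decay factor are invented, and it rotates around the view's center rather than the bottom corners, which the original question would additionally need to handle via the layer's anchor point.

import UIKit

// Made-up helper; durations, angles, and the decay factor are arbitrary.
func wiggle(_ cubeView: UIImageView, remaining: Int, angle: CGFloat) {
    guard remaining > 0, angle > 0.001 else {
        // Back to Phase 1: settle flat and stop.
        UIView.animate(withDuration: 0.1) { cubeView.transform = .identity }
        return
    }

    // Phase 2: tilt left, then Phase 3: tilt right, then recurse with a
    // smaller angle so the wiggle dies down over time.
    UIView.animate(withDuration: 0.12, animations: {
        cubeView.transform = CGAffineTransform(rotationAngle: -angle)
    }, completion: { _ in
        UIView.animate(withDuration: 0.12, animations: {
            cubeView.transform = CGAffineTransform(rotationAngle: angle)
        }, completion: { _ in
            wiggle(cubeView, remaining: remaining - 1, angle: angle * 0.8)
        })
    })
}

// e.g. wiggle(cubeImageView, remaining: 10, angle: .pi / 16)

Each completion closure plays the role of the delegate callback in the answers above: it fires only when the previous block has finished, so the phases run strictly in sequence.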
I think the answer to your question is that you need to specify more precisely what you want to do. You clearly have an idea in mind, but pinning down the details will help you answer your own question.