What happens to Flutter Ticker if the UI misses frames due to bad performance? - flutter

I read in the Ticker class documentation that:
"Ticker class calls its callback once per animation frame."
I am using createTicker(TickerCallback onTick) to implement a stopwatch, so I need the elapsed variable passed to the TickerCallback to be extremely precise (i.e., after 5 seconds, the value of elapsed should be exactly 5 seconds).
Now my question is: what happens if I have a sluggish, badly coded UI that misses a lot of frames due to poor optimization? I can think of two cases:
The time of the stopwatch gets updated not at 60fps (because of my bad coding) but once it gets updated, the time being displayed is correct
The time displayed is wrong
Other?
Which is the case? And why (most importantly)? Also, considering the above, is it advisable to use a ticker for a stopwatch? Thanks

To answer your question(s):
1. The time of the stopwatch gets updated at less than 60 fps (because of my bad coding), but once it does get updated, the time being displayed is correct.
(If the phone ran at 120 fps, would that fast-forward time? Of course not; the frame rate only affects how often the display refreshes, not how fast time passes.)
Flutter aims to provide 60 frames per second (fps), or 120 fps on devices capable of 120Hz updates. For 60 fps, each frame needs to render in approximately 16ms. Jank occurs when the UI doesn't render smoothly.
So you can use a ticker: even if the animation is sluggish, it will still display the right time. The elapsed value passed to the callback is derived from the current frame's timestamp relative to the moment the ticker started, not from counting ticks, so dropped frames delay the animation, not the time that has passed. If, say, the UI stalls for a second around the 3-second mark, the next frame that does render simply shows the later, correct time; the display jumps, but the clock does not.
Also, considering the above, is it advisable to use a ticker for a stopwatch?
It is. In the worst case you will get dropped frames and the displayed seconds will jump, but the elapsed time will remain exact.

Related

How do you schedule a fade-out using setTargetAtTime

I am having problems with the 'windowing', as it's called, of pitch-shifted grains of sounds using the Web Audio API. When I play the sound, I successfully set the initial gain to 0, using setValueAtTime, and successfully fade the sound in using linearRampToValueAtTime. However, I am not successful in scheduling a fade-out to occur slightly before the sound ends. It may be because the sound is pitch-shifted, although in the code below, I believe I have set the parameters correctly. Though maybe the final parameter in setTargetAtTime is not suitable because I don't fully understand that parameter, despite having read Chris's remarks on the matter. How does one successfully schedule a fade-out when the sound is pitch-shifted, using setTargetAtTime? Below is my code. You can see the piece itself at https://vispo.com/animisms/enigman2/2022
source.buffer = audioBuffer;
source.connect(this.GrainObjectGain);
this.GrainObjectGain.connect(app.mainGain);
source.addEventListener('ended', this.soundEnded);
//------------------------------------------------------------------------
// Now we do the 'windowing', i.e., when we play it, we fade it in,
// and we set it to fade out at the end of play. The fade duration
// in seconds is the smaller of 10 milliseconds and 10% of the
// duration of the sound. This helps eliminate pops.
source.playbackRate.value = this.playbackRate;
var fadeDurationInSeconds = Math.min(0.01,0.1*duration*this.playbackRate);
this.GrainObjectGain.gain.setValueAtTime(0, app.audioContext.currentTime);
this.GrainObjectGain.gain.linearRampToValueAtTime(app.constantGrainGain, app.audioContext.currentTime+fadeDurationInSeconds);
this.GrainObjectGain.gain.setTargetAtTime(0, app.audioContext.currentTime+duration*this.playbackRate-fadeDurationInSeconds, fadeDurationInSeconds);
source.start(when, offset, duration);
Given your comment below, I guess you want to schedule the fade-out fadeDurationInSeconds before the sound ends.
Since you change playbackRate, you need to divide the original duration by that playbackRate to get the actual duration.
Changing your setTargetAtTime() call as follows should schedule the fade out at the desired point in time.
this.GrainObjectGain.gain.setTargetAtTime(
    0,
    app.audioContext.currentTime + duration / this.playbackRate - fadeDurationInSeconds,
    fadeDurationInSeconds
);
Please note that setTargetAtTime() never actually reaches the target, at least in theory. It does come reasonably close after some time, but that time is several times longer than the timeConstant.
The relevant text from the spec can be found here: https://webaudio.github.io/web-audio-api/#dom-audioparam-settargetattime
Start exponentially approaching the target value at the given time with a rate having the given time constant. Among other uses, this is useful for implementing the "decay" and "release" portions of an ADSR envelope. Please note that the parameter value does not immediately change to the target value at the given time, but instead gradually changes to the target value.
The timeConstant parameter roughly defines the time it takes to reach 2/3 of the desired signal attenuation.
It's mentioned here: https://webaudio.github.io/web-audio-api/#dom-audioparam-settargetattime-timeconstant
More precisely, timeConstant is the time it takes a first-order linear continuous time-invariant system to reach the value 1−1/𝑒 (around 63.2%) given a step input response (transition from 0 to 1 value).
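As a rule of thumb (this is just the standard first-order exponential-decay formula, not something specific to this code), after the scheduled start time the parameter follows
value(t) = target + (startValue − target) · e^(−(t − startTime) / timeConstant)
so it is within about 5% of the target after 3 · timeConstant and within about 1% after roughly 4.6 · timeConstant. With fadeDurationInSeconds used as the timeConstant above, the gain is therefore effectively (though never exactly) zero a few fade durations after the scheduled time.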

Millisecond timer?

Does anyone know how to make a timer with a time interval of 0.001 seconds? I read somewhere that a Timer can't have a time interval lower than about 0.02 seconds, and my timer is behaving that way.
My timer:
timer = Timer.scheduledTimer(timeInterval: 0.001, target: self, selector: #selector(self.updateTimer), userInfo: nil, repeats: true)
@objc func updateTimer() {
    miliseconds += 0.001
}
Yes, you can have time intervals of less than 0.02 seconds; a 0.01-second interval is easily accomplished, for example. But at 0.001 or 0.0001 seconds, you start to be constrained by practical considerations of how much you can accomplish in that period of time.
But if your goal is to represent something in the UI, there's rarely any point in exceeding the device's maximum frame rate (usually 60 fps). If the screen can only be updated every 60th of a second, what's the point in calculating more frequently than that? That's just wasted CPU cycles. By the way, if we wanted to match the screen's refresh rate, we'd use CADisplayLink, not Timer. It works much like a timer, but its callbacks are synchronized with the display's refresh.
For what it's worth, the idea of incrementing miliseconds to capture elapsed time is a bit of a non-starter. We never update an "elapsed time counter" like this, because even in the best-case scenarios the firing frequency is not guaranteed. (And because of limitations in binary representations of fractional decimal values, we rarely want to be adding up floating-point values like this, either.)
Instead, if we want the fastest possible updates in our UI, we'd save the start time, start a CADisplayLink, and every time the display link fires, calculate the elapsed time as the difference between the current time and the start time.
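A minimal sketch of that approach (assuming UIKit; the class structure and the print in place of a label update are just illustrative):
import UIKit

final class Stopwatch: NSObject {
    private var displayLink: CADisplayLink?
    private var startTime: CFTimeInterval = 0

    func start() {
        startTime = CACurrentMediaTime()
        let link = CADisplayLink(target: self, selector: #selector(step(_:)))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }

    // Fired once per screen refresh. elapsed comes from the clock, not from
    // counting ticks, so late or dropped frames don't skew it.
    @objc private func step(_ link: CADisplayLink) {
        let elapsed = CACurrentMediaTime() - startTime
        print(String(format: "%.3f s", elapsed))   // e.g. update a label here instead
    }
}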
Now, if you really need a very precise timer, it can be done, but you should see Technical Note TN2169, which shows how to create a high-precision timer. But this is more of an edge-case scenario.
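For reference, the core of the technique in that tech note looks roughly like this (a sketch only; the real-time thread-policy setup the note also covers is omitted):
import Darwin

// Convert a delay in seconds to mach absolute-time ticks and block the
// current thread until that deadline.
func preciseWait(seconds: Double) {
    var timebase = mach_timebase_info_data_t()
    _ = mach_timebase_info(&timebase)
    let nanos = seconds * 1_000_000_000
    let ticks = UInt64(nanos * Double(timebase.denom) / Double(timebase.numer))
    _ = mach_wait_until(mach_absolute_time() + ticks)
}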
Short answer: you can't do that. On iOS and macOS, Timers are invoked by your app's run loop, and only fire when your app visits the main event loop. How often a short-interval timer actually fires will depend on how much time your app spends servicing the event loop. As you say, the docs recommend a minimum interval of about 1/50th of a second.
You might look at using a CADisplayLink timer, which is tied to the display refresh. The handlers for those timers need to run fast so they don't interfere with the operation of your app.
Why do you think you need a timer that fires exactly every millisecond?

AudioKit AKMetronome callback timing seems imprecise or quantized

I'm new to AudioKit and digital audio in general, so I'm sure there must be something I'm missing.
I'm trying to get precise timing from AKMetronome by getting the timestamp of each callback. The timing seems to be quantized in some way though, and I don't know what it is.
Example: if my metronome is set to 120 BPM, each callback should be exactly 0.5 seconds apart. But if I calculate the difference from one tick to the next, I get this:
0.49145491666786256
0.49166241666534916
0.5104563333334227
0.4917322500004957
0.5104953749978449
0.49178879166720435
0.5103940000008151
0.4916401666669117
It's always one of 2 values, within a very small margin of error. I want to be able to calculate when the next tick is coming so I can trigger animation a few frames ahead, but this makes it difficult. Am I overlooking something?
edit: I came up with a solution since I originally posted this question, but I'm not sure if it's the only or best solution.
I set the buffer to the smallest size using AKSettings.BufferLength.veryShort
With the smallest buffer, the timestamp is always on time to within a millisecond or two. I'm still not sure, though, whether I'm doing this right, or whether this is the intended behavior of the AKCallback. It seems like the callback should be on time even with a longer buffer.
Are you using Timer to calculate the time difference? From my point of view, and based on my findings, the issue is related to Timer, which is not meant to be precise on iOS; see the thread (Accuracy of NSTimer).
Alternatively, you can look into AVAudioTime (https://audiokit.io/docs/Extensions/AVAudioTime.html)
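For example (a sketch only; the tick-handler wiring is illustrative, not AudioKit's exact callback API), you can timestamp each tick with the host clock and convert the difference to seconds with AVAudioTime instead of relying on Date or Timer:
import AVFoundation
import Darwin

// Hypothetical tick handler: record the host time of each callback and
// convert the delta to seconds via AVAudioTime.
var lastHostTime: UInt64 = 0

func onMetronomeTick() {
    let now = mach_absolute_time()
    if lastHostTime != 0 {
        let delta = AVAudioTime.seconds(forHostTime: now)
            - AVAudioTime.seconds(forHostTime: lastHostTime)
        print(String(format: "tick interval: %.6f s", delta))
    }
    lastHostTime = now
}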

What is the fastest I should run an NSTimer?

What is the fastest I can run an NSTimer and still get reliable results? I've read that approaching 30ms it STARTS to become useless, so where does it "start to start becoming useless"...40ms? 50ms?
The docs say:
"the effective resolution of the time interval for a timer is limited to on the order of 50-100 milliseconds"
Sounds like if you want to be safe, you shouldn't use timers below 0.1 sec. But why not try it in your own app and see how low you can go?
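If you do want to measure it, something like this quick sketch (run wherever a run loop is kept alive) prints the intervals a 1 ms repeating Timer actually achieves:
import Foundation
import QuartzCore

// Request a 1 ms repeating timer and log how far apart the firings really are.
var lastFire = CACurrentMediaTime()

let timer = Timer.scheduledTimer(withTimeInterval: 0.001, repeats: true) { _ in
    let now = CACurrentMediaTime()
    print(String(format: "actual interval: %.4f s", now - lastFire))
    lastFire = now
}

RunLoop.main.run()   // keep the run loop alive in a command-line context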
You won't find a guarantee on this. NSTimers are opportunistic by nature since they run with the event loop, and their effective finest granularity will depend on everything else going on in your app in addition to the limits of whatever the Cocoa timer dispatch mechanisms are.
What's your definition of reliable? A 16 ms error in a 1-second timer is under 2% error, but in a 30 ms timer it's over 50% error.
NSTimers will wait for whatever is happening in the current run loop to finish, and errors in time can accumulate: e.g., if you touch the display N times, all subsequent repeating NSTimer firings may be late by the cumulative time taken by up to N touch handlers (plus anything else that was running at the "wrong" time), and so on.
CADisplayLink timers will attempt to quantize time to the frame rate, assuming that no set of foreground tasks takes as long as a frame time.
Depends on what kind of results you are trying to accomplish. With NSTimer, intervals of 0.5-1.0 seconds are a good place to start for reliable results.

iOS - Speed Issues

Hey all, I've got a method of recording that writes the notes that a user plays to an array in real time. The only problem is that there is a slight delay, and each sequence is noticeably slowed down when playing back. I sped up playback by about 6 milliseconds and it sounds right, but I was wondering whether the delay would vary on other devices?
I've tested on an iPod touch 2nd gen; how would that perform on 3rd and 4th gen, as well as iPhones? Do I need to test on all of them and find the optimal delay variation?
Any Ideas?
More Info:
I use two NSThreads instead of timers, and fill an array with blank spots where no notes should play (I use integers; -1 is a blank). Every 0.03 seconds it adds a blank while recording. Every time the user hits a note, the most recent blank is replaced by a number 0-7. When playing back, the second thread is used (two threads because the playback one needs a shorter interval), with an interval of 0.024 seconds. The 6-millisecond difference compensates for the delay between recording and playback.
I assume that either the recording or playing of notes takes longer than the other, and thus creates the delay.
What I want to know is if the delay will be different on other devices, and how I should compensate for it.
Exact Solution
I may not have explained it fully, that's why this solution wasn't provided, but for anyone with a similar problem...
I played each beat similar to a midi file like so:
while playing:
    do stuff to play the beat
    targetDate = new date, xyz seconds from now
    now = new date
    while now is not > targetDate: wait
The obvious thing that I was missing was to create the two dates BEFORE playing the beat...
D'OH!
It seems more likely to me that the additional delay is caused by the playback of the note, or other compute overhead in the second thread. Grab the wall-clock time in the second thread before playing each note, and check the time difference from the last one. You will need to reduce your following delay by any excess (likely 0.006 seconds!).
The delay will be different on different generations of the iPhone, but by adapting to it dynamically like this, you will be safe as long as the processing overhead is less than 0.03 seconds.
You should do the same thing in the first thread as well.
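A sketch of that adaptive scheduling (playNote and the 0.03-second interval are placeholders taken from the question; the point is to schedule each beat against an absolute clock so per-beat overhead doesn't accumulate):
import Foundation
import QuartzCore

let beatInterval: CFTimeInterval = 0.03

func playBack(beats: [Int]) {
    var nextBeatTime = CACurrentMediaTime()
    for note in beats {
        playNote(note)                            // placeholder for the actual playback call
        nextBeatTime += beatInterval
        let remaining = nextBeatTime - CACurrentMediaTime()
        if remaining > 0 {
            Thread.sleep(forTimeInterval: remaining)  // sleep off only the time playNote didn't use
        }
        // if remaining <= 0, this beat overran its slot; the next iteration catches up
    }
}

func playNote(_ note: Int) { /* send the note to whatever plays it */ }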
Getting high-resolution timestamps: there's a discussion on the Apple forums here, or this Stack Overflow question.