How do you schedule a fade-out using setTargetAtTime?

I am having problems with the 'windowing', as it's called, of pitch-shifted grains of sounds using the Web Audio API. When I play the sound, I successfully set the initial gain to 0, using setValueAtTime, and successfully fade the sound in using linearRampToValueAtTime. However, I am not successful in scheduling a fade-out to occur slightly before the sound ends. It may be because the sound is pitch-shifted, although in the code below, I believe I have set the parameters correctly. Though maybe the final parameter in setTargetAtTime is not suitable because I don't fully understand that parameter, despite having read Chris's remarks on the matter. How does one successfully schedule a fade-out when the sound is pitch-shifted, using setTargetAtTime? Below is my code. You can see the piece itself at https://vispo.com/animisms/enigman2/2022
source.buffer = audioBuffer;
source.connect(this.GrainObjectGain);
this.GrainObjectGain.connect(app.mainGain);
source.addEventListener('ended', this.soundEnded);
//------------------------------------------------------------------------
// Now we do the 'windowing', i.e., when we play the sound we fade it
// in, and we schedule a fade-out at the end of play. The fade
// duration in seconds is the lesser of 10 milliseconds and 10% of
// the sound's duration. This helps eliminate pops.
source.playbackRate.value = this.playbackRate;
var fadeDurationInSeconds = Math.min(0.01,0.1*duration*this.playbackRate);
this.GrainObjectGain.gain.setValueAtTime(0, app.audioContext.currentTime);
this.GrainObjectGain.gain.linearRampToValueAtTime(app.constantGrainGain, app.audioContext.currentTime+fadeDurationInSeconds);
this.GrainObjectGain.gain.setTargetAtTime(0, app.audioContext.currentTime+duration*this.playbackRate-fadeDurationInSeconds, fadeDurationInSeconds);
source.start(when, offset, duration);

Given your comment below I guess you want to schedule the fade out fadeDurationInSeconds before the sound ends.
Since you change playbackRate you need to divide the original duration by that playbackRate to get the actual duration.
Changing your setTargetAtTime() call as follows should schedule the fade out at the desired point in time.
this.GrainObjectGain.gain.setTargetAtTime(
    0,
    app.audioContext.currentTime + duration / this.playbackRate - fadeDurationInSeconds,
    fadeDurationInSeconds
);
Please note that setTargetAtTime() never actually reaches the target, at least in theory. It does come reasonably close after some time, but that time is several times longer than the timeConstant.
The relevant text from the spec can be found here: https://webaudio.github.io/web-audio-api/#dom-audioparam-settargetattime
Start exponentially approaching the target value at the given time with a rate having the given time constant. Among other uses, this is useful for implementing the "decay" and "release" portions of an ADSR envelope. Please note that the parameter value does not immediately change to the target value at the given time, but instead gradually changes to the target value.
The timeConstant parameter roughly defines the time it takes to reach 2/3 of the desired signal attenuation.
It's mentioned here: https://webaudio.github.io/web-audio-api/#dom-audioparam-settargetattime-timeconstant
More precisely, timeConstant is the time it takes a first-order linear continuous time-invariant system to reach the value 1−1/𝑒 (around 63.2%) given a step input response (transition from 0 to 1 value).
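In practice you can compute how long the exponential approach takes to get usefully close to the target: the remaining distance after time t is e^(−t/timeConstant), so shrinking it to a fraction `tolerance` of the starting distance takes timeConstant · ln(1/tolerance). A small sketch of that arithmetic (plain math, independent of any particular Web Audio implementation):

```javascript
// Time for setTargetAtTime's exponential approach to get within
// `tolerance` (fraction of the initial distance) of the target.
// remaining(t) = exp(-t / timeConstant); solve for t.
function timeToReach(timeConstant, tolerance) {
  return timeConstant * Math.log(1 / tolerance);
}

// With the fade's timeConstant = 0.01 s (10 ms):
const within63pct = timeToReach(0.01, 1 - 0.632); // ≈ 0.01 s (one time constant)
const within1pct = timeToReach(0.01, 0.01);       // ≈ 46 ms
```

This is why a fade scheduled with a timeConstant equal to the intended fade duration still has an audible tail: getting within 1% of silence takes roughly 4.6 time constants.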

Related

Beckhoff - Ramp that can be interrupted/updated while ramping operation is in progress

I am looking for a way to ramp lights properly. This function block looked like a good candidate :
https://infosys.beckhoff.com/english.php?content=../content/1033/tcplclibbabasic/11640060811.html&id=
Unfortunately, it ignores any subsequent nEndLevel updates: while a ramp is in progress, new values of nEndLevel are discarded, whereas what is usually needed from this type of ramp is to stop the current ramping operation and start a new one as soon as a new nEndLevel value is received.
Is there any other ramp function block in the Beckhoff library that can do that ?
I need a ramp that can be interrupted/updated while ramping operation is in progress basically.
I don't think you need another function block...
A rising-edge at bStart starts dimming the light from the actual to the end-level (nEndLevel)
bStart: This input starts the dim-ramp from the actual value to nEndLevel within the time defined as tRampTime. This can be interrupted by bOn, bOff or bToggle at any time.
As I understand it, a rising edge on bStart starts a new ramp.
You could try to detect a change in the value of nEndLevel and generate a rising edge on bStart, which should make it start a new ramp. For example, use a temporary variable nEndLevel_old that retains the old value for comparison in the next cycle.
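As an illustration only (a real implementation would be IEC 61131-3 Structured Text running inside the PLC cycle, with bStart wired to the function block's input), the change-detection idea looks roughly like this in JavaScript:

```javascript
// Sketch of per-cycle change detection. nEndLevelOld keeps the value
// seen in the previous cycle; a change produces a one-cycle pulse on
// bStart, which would retrigger the ramp function block. Names follow
// the question; the function-block call itself is left out.
function makeEdgeDetector() {
  let nEndLevelOld = null; // value seen in the previous cycle
  return function cycle(nEndLevel) {
    const bStart = nEndLevelOld !== null && nEndLevel !== nEndLevelOld;
    nEndLevelOld = nEndLevel; // remember for the next cycle
    return bStart;            // one-cycle pulse: restart the ramp
  };
}

const cycle = makeEdgeDetector();
cycle(50); // first cycle: no edge
cycle(50); // unchanged: no edge
cycle(80); // nEndLevel changed: pulse bStart, new ramp begins
```

The same pattern in Structured Text would typically use an R_TRIG block or a manual old-value comparison at the top of the cycle.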

What happens to Flutter Ticker if the UI misses frames due to bad performances?

I read in the Ticker class documentation that:
"Ticker class calls its callback once per animation frame."
I am using createTicker(TickerCallback onTick) to implement a stopwatch, so I need the elapsed variable passed to the TickerCallback to be extremely precise (i.e., I need that after 5 seconds, the value of elapsed is exactly 5 seconds).
Now my question is: what happens if I have a sluggish, very badly coded UI that misses a lot of frames due to bad optimization? I can think of 2 cases:
The time of the stopwatch gets updated not at 60fps (because of my bad coding) but once it gets updated, the time being displayed is correct
The time displayed is wrong
Other?
Which is the case? And why (most importantly)? Also, considering the above, is it advisable to use a ticker for a stopwatch? Thanks
To answer your question(s):
1. The time of the stopwatch gets updated not at 60fps (because of my bad coding) but once it gets updated, the time being displayed is correct.
This is the case. (If the time were derived from counting frames, a phone running at 120 fps would fast-forward your stopwatch.)
Flutter aims to provide 60 frames per second (fps) performance, or 120 fps on devices capable of 120Hz updates. For 60fps, frames need to render approximately every 16ms. Jank occurs when the UI doesn't render smoothly.
So you can use a ticker: even if the animation is sluggish, it will still display the right time. A delayed frame delays the animation, not the time passed. Say there is a one-second stall at second 3: the next frame that renders will show the correct time, because the screen update was late but the timer kept running.
Also, considering the above, is it adviceable to use ticker for a stopwatch?
It is. In the worst case you will get dropped frames and the displayed seconds will jump, but the timer itself will stay exact.
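The key point is that a Ticker reports elapsed wall-clock time rather than a count of callbacks. A toy JavaScript simulation (hypothetical, not Flutter code) of the difference between the two approaches:

```javascript
// Simulates frame callbacks arriving at irregular times. An
// elapsed-based stopwatch reads the clock each frame; a
// frame-counting stopwatch assumes a fixed 16 ms per frame.
// Dropped frames only hurt the frame counter.
function simulate(frameTimesMs) {
  const frameCounted = frameTimesMs.length * 16;          // wrong under jank
  const elapsed = frameTimesMs[frameTimesMs.length - 1];  // what a Ticker reports
  return { frameCounted, elapsed };
}

// 5 frames, with a one-second stall between frames 3 and 4:
const r = simulate([16, 32, 48, 1048, 1064]);
// r.elapsed is 1064 ms (correct); r.frameCounted is only 80 ms
```

Because the elapsed value comes from the clock, the first of the asker's two cases is what actually happens: updates may be infrequent, but each one shows the correct time.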

AudioKit AKMetronome callback timing seems imprecise or quantized

I'm new to AudioKit and digital audio in general, so I'm sure there must be something I'm missing.
I'm trying to get precise timing from AKMetronome by getting the timestamp of each callback. The timing seems to be quantized in some way though, and I don't know what it is.
Example: if my metronome is set to 120, each callback should be exactly 0.5 seconds apart. But if I calculate the difference from one tick to the next, I get this:
0.49145491666786256
0.49166241666534916
0.5104563333334227
0.4917322500004957
0.5104953749978449
0.49178879166720435
0.5103940000008151
0.4916401666669117
It's always one of 2 values, within a very small margin of error. I want to be able to calculate when the next tick is coming so I can trigger animation a few frames ahead, but this makes it difficult. Am I overlooking something?
edit: I came up with a solution since I originally posted this question, but I'm not sure if it's the only or best solution.
I set the buffer to the smallest size using AKSettings.BufferLength.veryShort
With the smallest buffer, the timestamp is always on within a millisecond or two. I'm still not sure though if I'm doing this right, or whether this is the intended behavior of the AKCallback. It seems like the callback should be on time even with a longer buffer.
Are you using Timer to calculate the time difference? From my point of view, and based on my findings, the issue is related to Timer, which is not meant to be precise on iOS; see the thread (Accuracy of NSTimer).
Alternatively, you can look into AVAudioTime (https://audiokit.io/docs/Extensions/AVAudioTime.html)
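One plausible explanation for the two alternating values (an assumption on my part, not something confirmed by the AudioKit docs) is that the callback timestamps land on audio render-buffer boundaries, so measured intervals get quantized to multiples of the buffer duration, frames / sampleRate. A quick sketch of that arithmetic, assuming a 44.1 kHz sample rate:

```javascript
// Duration of one audio render buffer in seconds. Timestamps taken
// on the render thread can only land on buffer boundaries, so
// measured intervals come out as multiples of this.
function bufferDurationSec(frames, sampleRate) {
  return frames / sampleRate;
}

// Assuming a 44.1 kHz sample rate:
bufferDurationSec(64, 44100);   // ≈ 1.45 ms (a "very short" buffer)
bufferDurationSec(1024, 44100); // ≈ 23.2 ms
```

Since 0.5 s is not an integer multiple of ~23.2 ms, successive measured intervals would alternate between the two nearest multiples while staying correct on average, which would be consistent with the pattern in the question, and with why shrinking the buffer shrank the error.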

iOS - Speed Issues

Hey all, I've got a method of recording that writes the notes a user plays to an array in real time. The only problem is that there is a slight delay, and each sequence is noticeably slowed down when playing back. I sped up playback by about 6 milliseconds and it sounds right, but I was wondering whether the delay would vary on other devices.
I've tested on an iPod touch 2nd gen; how would that perform on the 3rd and 4th gen, as well as iPhones? Do I need to test on all of them and find the optimal delay variation?
Any ideas?
More Info:
I use two NSThreads instead of timers, and fill an array with blank spots where no notes should play (I use integers, -1 is a blank). Every 0.03 seconds it adds a blank when recording. Every time the user hits a note, the most recent blank is replaced by a number 0-7. When playing back, the second thread is used, (2 threads because the second one has a shorter time interval) that has a time of 0.024. The 6 millisecond difference compensates for the delay between the recording and playback.
I assume that either the recording or playing of notes takes longer than the other, and thus creates the delay.
What I want to know is if the delay will be different on other devices, and how I should compensate for it.
Exact Solution
I may not have explained it fully, that's why this solution wasn't provided, but for anyone with a similar problem...
I played each beat similar to a midi file like so:
while playing:
    do stuff to play the beat
    make a new date xyz seconds from now
    make a new date for now
    while now is not past the date xyz seconds from now: wait
The obvious thing that I was missing was to create the two dates BEFORE playing the beat...
D'OH!
It seems more likely to me that the additional delay is caused by the playback of the note, or other compute overhead in the second thread. Grab the wallclock time in the second thread before playing each note, and check the time difference from the last one. You will need to reduce your following delay by any excess (likely 0.006 seconds!).
The delay will be different on different generations of the iphone, but by adapting to it dynamically like this, you will be safe as long as the processing overhead is less than 0.03 seconds.
You should do the same thing in the first thread as well.
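Another way to make the scheme robust is to compute each beat's deadline from a fixed start time instead of sleeping a fixed interval after each beat, so per-beat overhead cannot accumulate into drift. A simulated sketch (JavaScript with a fake clock; `overheadMs` stands in for the cost of playing a note, and the names are mine, not from the question):

```javascript
// Schedules beat k at startMs + k * intervalMs rather than sleeping
// a fixed interval after each beat finishes. Per-beat processing
// overhead then cannot accumulate into cumulative drift.
function beatTimes(startMs, intervalMs, beats, overheadMs) {
  const times = [];
  let now = startMs;
  for (let k = 0; k < beats; k++) {
    const deadline = startMs + k * intervalMs; // absolute target time
    now = Math.max(now, deadline);             // "wait" until it is due
    times.push(now);                           // the beat fires here
    now += overheadMs;                         // playing the note costs time
  }
  return times;
}

// With 30 ms beats and 6 ms of per-beat overhead, beats still land on time:
beatTimes(0, 30, 4, 6); // → [0, 30, 60, 90]
```

If the overhead ever exceeds the interval, beats slip individually but the schedule re-anchors to the absolute deadlines instead of compounding the error.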
Getting high-resolution timestamps - there's a discussion on the Apple forums here, or this Stack Overflow question.

iphone BPM tempo button

I want to create a button that allows the user to tap on it and thereby set a beats-per-minute value. I will also have touches moved up and down on it to adjust faster and slower (I have already worked out this bit).
What are some appropriate ways to record the times the user has tapped the button, so as to get an average time between presses and thereby work out a tempo?
Overall
You'd best avoid allocating an NSDate for every tap; at the rate of beats, the overhead could cost you precision. But note that time() from time.h is not a good substitute either: time_t is an integer count of whole seconds, not a double, which is far too coarse for beat timing. A monotonic, high-resolution clock such as CACurrentMediaTime() is a better fit.
Use the whole screen for this, don't just give the user 1 small button.
Two ideas
Post-process
Store all times in an array.
Trim the result. Remove elements from the start and end that are more than a threshold from the average.
Get the average from the remaining values. That's your speed.
If it's close to a common value, use that.
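A sketch of that post-processing in JavaScript (the 0.25 outlier threshold is an arbitrary choice of mine, not something fixed by the approach):

```javascript
// Post-process idea: derive intervals from the stored tap times,
// discard intervals that differ from the raw mean by more than
// `threshold` (a fraction of the mean), then average what's left
// and convert to beats per minute.
function tempoFromTaps(tapTimesSec, threshold = 0.25) {
  const intervals = [];
  for (let i = 1; i < tapTimesSec.length; i++) {
    intervals.push(tapTimesSec[i] - tapTimesSec[i - 1]);
  }
  const mean = intervals.reduce((a, b) => a + b, 0) / intervals.length;
  const kept = intervals.filter(x => Math.abs(x - mean) <= threshold * mean);
  const avg = kept.reduce((a, b) => a + b, 0) / kept.length;
  return 60 / avg; // beats per minute
}

// Taps roughly every 0.5 s, with one fumbled 1.2 s gap:
tempoFromTaps([0, 0.5, 1.0, 1.5, 2.7, 3.2]); // ≈ 120 BPM (the fumble is discarded)
```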
Adaptive
Use 2 variables. One is called speed and the other error.
After the first 2 beats calculate the estimated speed, set error to speed.
After each beat
queue = Fifo(5)          # first-in, first-out queue; try different
                         # values for the length
currentBeat = now - timeOfLastBeat
currentError = |speed - currentBeat|
# adapt: experiment with how much weight currentError should have
error = (error + currentError) / 2
queue.push(currentBeat)  # push the newest interval on the queue;
                         # the oldest is removed automatically
speed = average(queue)
As soon as error gets smaller than a certain threshold you can stop and tell the user you've determined the speed.
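That adaptive loop, sketched in JavaScript (the queue length of 5 and the 50/50 error blend follow the pseudocode above; both are tunable, and `onBeat` takes the already-measured interval since the last tap):

```javascript
// Keeps the last few tap intervals in a FIFO, averages them for the
// current speed estimate, and blends each beat's deviation into a
// running error. When error drops below a threshold, the tempo is
// considered settled.
function makeTempoTracker(queueLen = 5) {
  const queue = [];
  let speed = null;
  let error = null;
  return function onBeat(intervalSec) {
    if (speed === null) {
      speed = intervalSec;  // first beat seeds the estimate
      error = intervalSec;  // and the error, per the pseudocode
    } else {
      const currentError = Math.abs(speed - intervalSec);
      error = (error + currentError) / 2; // 50/50 blend; tune the weight
    }
    queue.push(intervalSec);
    if (queue.length > queueLen) queue.shift(); // drop the oldest
    speed = queue.reduce((a, b) => a + b, 0) / queue.length;
    return { speed, error };
  };
}

const track = makeTempoTracker();
[0.5, 0.52, 0.49, 0.5, 0.5].forEach(track);
// speed settles near 0.5 s per beat and error shrinks toward 0
```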
Go crazy with the interface. Make the screen flash whenever the user taps. Extra sparks for a tap that is nearly identical to the expected time.
Make the background color correspond to the error. Make it brighter the smaller the error gets.
Each time the button is pressed, store the current date/time (with [NSDate date]). Then, the next time it's pressed, you can calculate the difference with -[previousDate timeIntervalSinceNow] (negative because it's subtracting the current date from the previous), which will give you the number of seconds.