Should I disconnect nodes that can't be used anymore? - web-audio-api

I'm experimenting with Web Audio, and I made a function to play a note.
var context = new (window.AudioContext || window.webkitAudioContext)()
var make_triangle = function(destination, frequency, start, duration) {
    var osc = context.createOscillator()
    osc.type = "triangle"
    osc.frequency.value = frequency

    var gain = context.createGain()
    osc.connect(gain)
    gain.connect(destination)

    // timing
    osc.start(start)
    osc.stop(start + 2*duration) // this line is discussed later
    gain.gain.setValueAtTime(0.5, start)
    gain.gain.linearRampToValueAtTime(0, start + duration)
}
Usage is something like this:
make_triangle(context.destination, 440, context.currentTime+1, 1)
This works just fine.
Firefox has a Web Audio tab in its developer console. When I play the sound, the Oscillator and Gain show up in the graph. Without the osc.stop(start + 2*duration) line, these linger forever. With the osc.stop(start + 2*duration) line, the Oscillator goes away, but the Gain stays connected to the AudioDestination forever.
I don't want to cause a memory leak or performance hit from having lots of old things still connected. To what extent do I need to clean up after creating nodes? Should I stop the oscillator? Disconnect everything? Both?

The Web Audio API is designed for oscillators to stop as soon as their 'note' finishes [1]. If you use the line osc.stop(start + 2*duration), then once that stop time passes the oscillator is disconnected from the gain and destroyed.
If you don't plan on reusing the gain node that your oscillator was connected to, then I would suggest disconnecting it so that it can be garbage-collected.
Simply adding this callback to the oscillator within your make_triangle function would suffice:
...
    osc.onended = function() {
        gain.disconnect();
    };
};
The callback is triggered as soon as the oscillator has ended its lifetime (i.e., when the time scheduled via the stop() method is reached).
If you try this with Firefox's Web Audio tab open, you'll see that disconnected gain nodes are eventually garbage-collected (as long as nothing else is connected to the gain node).
Tip
Also, it's not a bad idea to have a single gain node that you keep connected to the AudioContext destination so that other nodes can connect to it. This 'final gain' is useful for mixing all other connected nodes and keeping them from clipping (i.e., exceeding an amplitude of 1). You could pass this node into your make_triangle function as a parameter like so:
var context = new (window.AudioContext || window.webkitAudioContext)()

// Gain that will remain connected to the `AudioContext`
var finalGain = context.createGain();
finalGain.connect(context.destination);

var make_triangle = function(destination, frequency, start, duration) {
    var osc = context.createOscillator()
    osc.type = "triangle"
    osc.frequency.value = frequency

    var gain = context.createGain()
    osc.connect(gain)
    gain.connect(destination)

    // timing
    osc.start(start)
    osc.stop(start + 2*duration) // destroy osc and disconnect from gain
    gain.gain.setValueAtTime(0.5, start)
    gain.gain.linearRampToValueAtTime(0, start + duration)

    // Added for cleanup:
    osc.onended = function() {
        gain.disconnect();
    };
};

// Connect a new triangle osc to the finalGain
make_triangle(finalGain, 440, context.currentTime, 1);

If you don't want the oscillator to live forever, you definitely need to schedule it to stop eventually; otherwise it will play forever, consuming resources. A really smart implementation might be able to do something clever, but I wouldn't depend on that, because it's not required.
When the oscillator stops it should automatically disconnect itself from any downstream nodes. If there are no other references to the oscillator or any of the downstream nodes either, then they should all be eventually collected without you having to do anything.
It is a bug in the implementation if this doesn't happen.
It could be a bug in Firefox's Web Audio developer tab that the gain node still appears, but it could also be a bug in Firefox's implementation.

Related

Stopping a `CallbackInstrument` prior to setting `AVAudioSession.setActive(false)`

In an attempt to pause my signal chain when a user puts the app into the background, or is interrupted by a phone call, I am trying to handle the interruption by stopping all playing nodes and calling AVAudioSession.setActive(false), as per convention.
It seems fine to call stop() on all nodes except CallbackInstrument, which crashes at line 231 of DSPBase.cpp in CAudioKitEX from the AudioKitEX repo:
void DSPBase::processOrBypass(AUAudioFrameCount frameCount, AUAudioFrameCount bufferOffset) {
    if (isStarted) {
        process(FrameRange{bufferOffset, frameCount});
    } else {
        // Advance all ramps.
        stepRampsBy(frameCount);

        // Copy input to output.
        if (inputBufferLists.size() and !bCanProcessInPlace) {
            for (int channel = 0; channel < channelCount; ++channel) {
                auto input = (const float *)inputBufferLists[0]->mBuffers[channel].mData + bufferOffset;
                auto output = (float *)outputBufferList->mBuffers[channel].mData + bufferOffset;
                std::copy(input, input + frameCount, output);
            }
        }

        // Generators should be silent.
        if (inputBufferLists.empty()) {
            zeroOutput(frameCount, bufferOffset);
        }
    }
}
My CallbackInstrument is attached to a Sequencer as a discrete track. The crash occurs when the sequencer is playing and the app goes into the background, at which point I call a method that stops all current sequencers and all active nodes before calling AVAudioSession.setActive(false).
I would simply ignore this and not stop the CallbackInstrument; however, if I don't attempt to stop or reset the CallbackInstrument node, AVAudioSession catches an error:
AVAudioSession_iOS.mm:1271 Deactivating an audio session that has running I/O. All I/O should be stopped or paused prior to deactivating the audio session.
Error seting audio session false:Error Domain=NSOSStatusErrorDomain Code=560030580 "(null)"
Error code=560030580 refers to AVAudioSessionErrorCodeIsBusy as stated here
Question:
If stopping a Sequencer with a CallbackInstrument does not in fact stop rendering audio/midi from the callback, how do we safely stop a signal chain with a CallbackInstrument in order to prepare for AVAudioSession.setActive(false)?
I have an example repo of the issue which can be found here.
Nicely presented question by the way :)
It seems that doing it this way does not guarantee that the audio buffer list in processOrBypass is populated: while it does report that size() == 1, the pointer in inputBufferLists[0] is nil.
Possibly there should be an additional check for this?
However, if instead of calling stop() on every node in your AudioManager.sleep() you call engine.stop(), and conversely engine.start() in your AudioManager.wake() instead of starting all your nodes, this avoids your error and should stop/start all nodes in the process.

Adding a delay in FiddlerScript

I would like to add a uniform delay to responses in all sessions that Fiddler intercepts. The use of "response-trickle-delay" is unacceptable, since that doesn't actually introduce a uniform delay, but rather delays each 1KB of transfer (which simulates low bandwidth, rather than high latency).
The only reference I could find was here, which used the following atrocity (DO NOT USE!):
static function wait(msecs)
{
    var start = new Date().getTime();
    var cur = start;
    while (cur - start < msecs)
    {
        cur = new Date().getTime();
    }
}
and wait(5000); is inserted into OnBeforeResponse.
As expected, it locked up my computer and started overheating my CPU and I had to quit Fiddler.
I'm looking for something:
Less stupid, and
As simple as possible.
It looks like FiddlerScript is written in JScript.NET, and from what I gather there is a setTimeout() function, but I'm having trouble calling it (and I don't know JavaScript or .NET at all). Here is my OnBeforeResponse:
static function OnBeforeResponse(oSession: Session) {
    if (m_Hide304s && oSession.responseCode == 304) {
        oSession["ui-hide"] = "true";
    }
    setTimeout(function(){}, 1000);
}
It just gives a syntax error at the setTimeout line. Can setTimeout() be used from a FiddlerScript to introduce uniform delay?
FiddlerScript uses JScript.NET, which can reference .NET assemblies, and System.Threading contains Sleep.
In Tools > Fiddler Options > Extensions, add the path to System.Threading.dll. On my machine, this was located at C:\Windows\Microsoft.NET\Framework\v4.0.30319.
In FiddlerScript, add import System.Threading;.
You can now add lines like Thread.Sleep(1000).
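Putting those three steps together, the OnBeforeResponse from the question becomes something like this (a sketch; note that Thread.Sleep blocks the thread handling that session, which is exactly what introduces the uniform delay):

```
import System.Threading;

static function OnBeforeResponse(oSession: Session) {
    if (m_Hide304s && oSession.responseCode == 304) {
        oSession["ui-hide"] = "true";
    }
    // Uniform 1-second delay on every response
    Thread.Sleep(1000);
}
```

The import line goes at the top of the FiddlerScript file alongside the existing imports, not inside the handler itself.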

Gstreamer 1.0 Pause signal

I need to detect when the currently playing audio/video is paused, but I cannot find anything for 1.0. My app is a bit complex, but here is the condensed code:
/* This function is called when the pipeline changes states. We use it to
 * keep track of the current state. */
static void state_changed_cb(GstBus *bus, GstMessage *msg, CustomData *data)
{
    GstState old_state, new_state, pending_state;
    gst_message_parse_state_changed(msg, &old_state, &new_state, &pending_state);
    if (GST_MESSAGE_SRC(msg) == GST_OBJECT(data->playbin))
    {
        g_print("State set to %s\n", gst_element_state_get_name(new_state));
    }
}

gst_init(&wxTheApp->argc, &argv);
m_playbin = gst_element_factory_make("playbin", "playbin");
if (!m_playbin)
{
    g_printerr("Not all elements could be created.\n");
    exit(1);
}
CustomData* data = new CustomData(xid, m_playbin);
GstBus *bus = gst_element_get_bus(m_playbin);
gst_bus_set_sync_handler(bus, (GstBusSyncHandler) create_window, data, NULL); // here I do video overlay stuff
g_signal_connect(G_OBJECT(bus), "message::state-changed", (GCallback) state_changed_cb, &data);
What am I doing wrong? I cannot find a working example of connecting such events in GStreamer 1.0, and 0.x seems different enough from 1.0 that the many examples for it don't help.
UPDATE
I have found a way to get messages. I run a wxWidgets timer with a 500ms interval, and each time the timer fires I call:
GstMessage* msg = gst_bus_pop(m_bus);
if (msg != NULL)
{
    g_print("New Message -- %s\n", gst_message_type_get_name(msg->type));
}
Now I get a lot of state-changed messages. Still, I want to know whether a given message means Pause, Stop, Play, or End of Media (I mean, a way to differentiate which message is which) so that I can notify the UI.
So while I get signals now, the basic problem, to get specific signals, remains unsolved.
You have to call gst_bus_add_signal_watch() (like in 0.10) to enable emission of the bus signals; without it you can only use the other ways of getting notified about GstMessages on that bus.
Also, just to be sure: you need a running GLib main loop on the default main context for this to work. Otherwise you need to do things a bit differently.
For the updated question:
Check the documentation: gst_message_parse_state_changed() parses the old, new and pending state from the message; comparing the new state against GST_STATE_PAUSED and GST_STATE_PLAYING tells you whether this was a pause or a play (end of media is reported as a separate GST_MESSAGE_EOS message, not as a state change). This is still the same as in 0.10; from the application's point of view, conceptually not much has changed between 0.10 and 1.0.
Also, you shouldn't do this timeout-based polling, as it will block your wxWidgets main loop. The easiest solution is to use a sync bus handler (which you already have) and dispatch all messages from there to a callback on the wxWidgets main loop.

How to ignore the 60fps limit in javafx?

I need to create a 100fps animation that displays 3D data from a file containing 100 frames per second, but the AnimationTimer in JavaFX only gives me 60fps. How can I get past that limit?
Removing the JavaFX Frame Rate Cap
You can remove the 60fps JavaFX frame rate cap by setting a system property, e.g.:
java -Djavafx.animation.fullspeed=true MyApp
This is an undocumented and unsupported setting.
Removing the JavaFX frame rate cap may make your application considerably less efficient in terms of resource usage (e.g. a JavaFX application without a frame rate cap will consume more CPU than an application with the frame rate cap in place).
Configuring the JavaFX Frame Rate Cap
Additionally, there is another undocumented system property you could try:
javafx.animation.framerate
I have not tried it.
Debugging JavaFX Frames (Pulses)
There are other settings like -Djavafx.pulseLogger=true which you could enable to help you debug the JavaFX architecture and validate that your application is actually running at the framerate you expect.
JavaFX 8 has a Pulse Logger (-Djavafx.pulseLogger=true system property) that "prints out a lot of crap" (in a good way) about the JavaFX engine's execution. There is a lot of provided information on a per-pulse basis including pulse number (auto-incremented integer), pulse duration, and time since last pulse. The information also includes thread details and events details. This data allows a developer to see what is taking most of the time.
Warning
Normal warnings for using undocumented features apply, as Richard Bair from the JavaFX team notes:
Just a word of caution, if we haven't documented the command line switches, they're fair game for removal / modification in subsequent releases :-)
fullspeed=true will give you a high frame rate without control (and thereby decrease the performance of your app, as it renders too often); javafx.animation.framerate indeed doesn't work.
Use:
-Djavafx.animation.pulse=value
You can check your frame rate with the following code. I verified that it actually works by setting the pulse rate to 2, 60 and 120 (I have a 240Hz monitor), and you can see a difference in how fast the random number changes.
private final long[] frameTimes = new long[100];
private int frameTimeIndex = 0;
private boolean arrayFilled = false;

Label label = new Label();
root.getChildren().add(label);

AnimationTimer frameRateMeter = new AnimationTimer() {
    @Override
    public void handle(long now) {
        long oldFrameTime = frameTimes[frameTimeIndex];
        frameTimes[frameTimeIndex] = now;
        frameTimeIndex = (frameTimeIndex + 1) % frameTimes.length;
        if (frameTimeIndex == 0) {
            arrayFilled = true;
        }
        if (arrayFilled) {
            long elapsedNanos = now - oldFrameTime;
            long elapsedNanosPerFrame = elapsedNanos / frameTimes.length;
            double frameRate = 1_000_000_000.0 / elapsedNanosPerFrame;
            label.setText(String.format("Current frame rate: %.3f", frameRate) + ", Random number: " + Math.random());
        }
    }
};
frameRateMeter.start();

Web Audio API: How do I play a mono source in only left or right channel?

Is there an (easy) way to take a mono input and play it in only the left or right channel? I'm thinking I can do it with a ScriptProcessor node, but if there is a node meant to handle this situation I'd really like to know. The API has a section on mixing, but I don't see any code on how to manipulate channels yourself in this fashion.
Note: I have tried the panner node, but it doesn't seem to really cut the left channel off from the right; I don't want any sound bleeding from one channel to the other.
You do want to use the ChannelSplitter, although there is a bug when a channel is simply not connected. See this issue: Play an Oscillation In a Single Channel.
Take a look at the splitter node: https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#ChannelSplitterNode-section
One application for ChannelSplitterNode is for doing "matrix mixing" where individual gain control of each channel is desired.
(I haven't yet tried it, let me know : )
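Conceptually, routing through a splitter/merger pair just builds stereo output where one channel carries the mono signal and the other stays at exactly zero, so there is no bleed at all. Here is a plain-JavaScript sketch of that idea on raw sample arrays (monoToStereo is a hypothetical helper for illustration, not part of the Web Audio API):

```javascript
// Hypothetical helper: copy a mono sample buffer into one channel of a
// stereo pair, leaving the other channel completely silent (no bleed).
function monoToStereo(mono, side) {
  var left = new Float32Array(mono.length);  // zero-filled by default
  var right = new Float32Array(mono.length);
  var target = (side === "left") ? left : right;
  for (var i = 0; i < mono.length; i++) {
    target[i] = mono[i];
  }
  return { left: left, right: right }; // the other channel stays all zeros
}

var stereo = monoToStereo(new Float32Array([0.5, -0.25, 1.0]), "left");
// stereo.left carries the signal, stereo.right is silent
```

In an actual audio graph you get the same effect by connecting the source to the desired input of a ChannelMergerNode, e.g. source.connect(merger, 0, 0) for left or source.connect(merger, 0, 1) for right, then merger.connect(context.destination).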
You can try to use createPanner() and then setPosition() toward the desired channel. Don't forget to connect your previous node to the panner and the panner to context.destination.
For example:
// Let's create a simple oscillator just to have some audio in our context
var oscillator = context.createOscillator();

// Now let's create the panner node
var pannerNode = context.createPanner();

// Connecting the nodes
oscillator.connect(pannerNode); // oscillator output to the panner input
pannerNode.connect(context.destination); // panner output to our sound output

// Setting the position of the sound
pannerNode.setPosition(-1, 0, 0); // if you want it to play on the left channel
pannerNode.setPosition(1, 0, 0);  // if you want it to play on the right channel

// Playing the sound
oscillator.noteOn(0); // (start(0) in newer implementations)
Is that what you need?