Web Audio API: How do I play a mono source in only the left or right channel?

Is there an (easy) way to take a mono input and play it only in the left or right channel? I'm thinking I can do it through a ScriptProcessorNode, but if there is a node meant to handle this situation I'd really like to know. The API has a section on mixing, but I don't see any code on how to manipulate channels yourself in this fashion.
Note: I have tried the panner node, but it doesn't seem to really cut off the left from the right channel; I don't want any sound bleeding from one channel to the other.

You do want to use the ChannelSplitter, although there is a bug when a channel is simply not connected. See this issue: Play an Oscillation In a Single Channel.

Take a look at the splitter node: https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#ChannelSplitterNode-section
One application for ChannelSplitterNode is for doing "matrix mixing" where individual gain control of each channel is desired.
(I haven't yet tried it, let me know : )
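For reference, here is a minimal sketch of that idea (untested): because the source is mono there is nothing to split, so you can feed a ChannelMergerNode directly and simply choose which merger input (and therefore which output channel) it drives. The oscillator below is only a stand-in for whatever mono source you actually have, and the unconnected-channel bug mentioned above may affect older implementations:
var context = new AudioContext();
var monoSource = context.createOscillator(); // stand-in for any mono source
var merger = context.createChannelMerger(2); // 2 inputs -> one stereo output
// Input 0 of the merger becomes the left channel, input 1 the right channel.
monoSource.connect(merger, 0, 0); // left only
// monoSource.connect(merger, 0, 1); // ...or right only instead
merger.connect(context.destination);
monoSource.start(0);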

You can try using context.createPanner() and then setPosition() to place the sound in the desired channel. Don't forget to connect your previous node to the panner node and the panner to context.destination.
For example:
var context = new AudioContext();
// Let's create a simple oscillator just to have some audio in our context
var oscillator = context.createOscillator();
// Now let's create the panner node
var pannerNode = context.createPanner();
// Connecting the nodes
oscillator.connect(pannerNode); // connect the oscillator output to the panner input
pannerNode.connect(context.destination); // connect the panner output to our sound output
// Setting the position of the sound
pannerNode.setPosition(-1, 0, 0); // if you want it to play in the left channel
// pannerNode.setPosition(1, 0, 0); // ...or in the right channel instead
// Playing the sound
oscillator.start(0); // noteOn(0) in older implementations
Is that what you need?
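If the goal is simply hard-panning a mono source and the browser supports StereoPannerNode (it comes up again further down this page), a pan value of -1 or 1 should give complete separation for a mono input, with no bleed. A rough sketch, assuming a plain AudioContext and an oscillator as the stand-in source:
var context = new AudioContext();
var oscillator = context.createOscillator(); // stand-in for any mono source
var stereoPanner = context.createStereoPanner();
stereoPanner.pan.value = -1; // -1 = left channel only, 1 = right channel only
oscillator.connect(stereoPanner);
stereoPanner.connect(context.destination);
oscillator.start(0);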

Related

Gapless playback in pyglet

I understood this page to mean that queuing in pyglet provides a gapless transition between audio tracks. But when I test it out, there is a noticeable gap. Has anyone here worked with gapless audio in pyglet?
Example:
player = pyglet.media.Player()
source1 = pyglet.media.load([file1]) # adding streaming=False doesn't fix the issue
source2 = pyglet.media.load([file2])
player.queue(source1)
player.queue(source2)
player.play()
player.seek([time]) # to avoid having to wait until the end of the track. removing this doesn't fix the gap issue
pyglet.app.run()
I would suggest that you either cache url1 and url2 locally first if they're external sources, then use Player().time to identify when you're about to reach the end of the current track and call player.next_source.
Or, if they're local files and you don't want to solve the problem programmatically, you could trim the audio files in something like Audacity to make them seamless on start/stop.
You could also experiment with having multiple players and layering them on top of each other. But if you're only interested in audio playback, there are other alternatives.
It turns out that there were 2 problems.
The first one: I should have used
source_group = pyglet.media.SourceGroup()
source_group.add(source1)
source_group.add(source2)
player.queue(source_group)
The second one: mp3 files are apparently slightly padded at the beginning and at the end, so that is where the gap is coming from. However, this does not seem to be an issue with any other file type.

Is it possible to change MediaRecorder's stream?

getUserMedia(constraints).then(stream => {
  const recorder = new MediaRecorder(stream);
  recorder.start();
  recorder.pause();
  // get a new stream: getUserMedia(constraints_new)
  // how do I update the recorder's stream here?
  recorder.resume();
});
Is it possible? I've tried to create a MediaStream and use the addTrack and removeTrack methods to change the stream's tracks, but with no success (the recorder stops when I try to resume it with the updated stream).
Any ideas?
The short answer is no, it's not possible. The MediaStream recording spec explicitly describes this behavior: https://w3c.github.io/mediacapture-record/#dom-mediarecorder-start. It's bullet point 15.3 of that algorithm which says "If at any point, a track is added to or removed from stream’s track set, the UA MUST immediately stop gathering data ...".
But in case you only want to record audio you can probably use an AudioContext to proxy your streams. Create a MediaStreamAudioDestinationNode and use the stream that it provides for recording. Then you can feed your streams with MediaStreamAudioSourceNodes and/or MediaStreamTrackAudioSourceNodes into the audio graph and mix them in any way you desire.
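A rough sketch of that audio-only workaround: the recorder only ever sees the destination node's stream, so source nodes can be swapped in and out of the graph without touching the recorder. The switchTo() helper and the constraint objects below are just placeholders for illustration:
// Record the stream of a MediaStreamAudioDestinationNode and swap sources behind it.
const audioContext = new AudioContext();
const destination = audioContext.createMediaStreamDestination();
const recorder = new MediaRecorder(destination.stream);
recorder.start();

let currentSource = null;

async function switchTo(constraints) {
  const stream = await navigator.mediaDevices.getUserMedia(constraints);
  if (currentSource !== null) {
    currentSource.disconnect(); // detach the previous microphone from the graph
  }
  currentSource = audioContext.createMediaStreamSource(stream);
  currentSource.connect(destination); // the recorder keeps running unchanged
}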
Last but not least there are currently plans to add the functionality you are looking for to the spec. Maybe you just have to wait a bit. Or maybe a bit longer depending on the browser you are using. :-)
https://github.com/w3c/mediacapture-record/issues/167
https://github.com/w3c/mediacapture-record/pull/186

When connecting to a merger node, is there any reason to use a number other than 0 as the second argument if the input is not a channel splitter?

I understand that when you connect a splitter to a merger you can do something like this:
splitter.connect(merger, 1, 0);
But when connecting an input source, such as a stereo buffer source, directly to a merger, is there ever any reason to set the second argument of the connect method to something other than zero? I assume the answer is no, but I'm not sure and am looking for validation.
var stereoSoundSource = audioContext.createBufferSource();
stereoSoundSource.buffer = whatever;
stereoSoundSource.connect(merger, 0, 1);
In short, no.
Splitter is currently the only node that has multiple outputs, so it's the only node for which you would ever need to specify an output other than 0.
There are scenarios where you would do this with a splitter. For example, imagine how to create a graph that flips stereo channels:
var merger = context.createChannelMerger(2);
var splitter = context.createChannelSplitter(2);
splitter.connect(merger, 0, 1);
splitter.connect(merger, 1, 0);
In the future, some other nodes might acquire additional outputs (for example, I've proposed using a separate output for the envelope in a noise gate/expander node), and then there might be other cases (and this answer would change).
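Putting that together with an actual source, a sketch of the full flipped-stereo graph might look like this (someStereoBuffer is a placeholder for an already-decoded stereo AudioBuffer):
// Play a stereo buffer with its left and right channels swapped.
var context = new AudioContext();
var source = context.createBufferSource();
source.buffer = someStereoBuffer; // placeholder for an already-decoded AudioBuffer
var splitter = context.createChannelSplitter(2);
var merger = context.createChannelMerger(2);

source.connect(splitter);       // one stereo output, so output index 0 is implied
splitter.connect(merger, 0, 1); // left channel -> right input
splitter.connect(merger, 1, 0); // right channel -> left input
merger.connect(context.destination);
source.start(0);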

What is the "proper" way to sum multi channel audio buffers to mono

I am using the channel splitter and merger to attempt to split a stereo file into two discrete channels and then funnel them back into the node graph as a "mono" input source that plays in both the left and right monitors. I figured out a way to do it, but it uses the stereoPanner node set to 0.5, and it feels a bit "hacky". How would I do this without using the stereoPanner node?
//____________________________________________BEGIN Setup
var merger = audioContext.createChannelMerger();
var stereoPanner = audioContext.createStereoPanner();
var stereoInputSource = audioContext.createBufferSource();
stereoInputSource.buffer = soundObj.soundToPlay;
//____________________________________________END Setup
stereoInputSource.connect(merger, 0, 0);
merger.connect(stereoPanner);
stereoPanner.pan.value = 0.5;
stereoPanner.connect(audioContext.destination);
Create a ChannelMerger with only one channel and use it to force downmixing?
Just take the mean (average) of the left and right sample.
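If you'd rather do that averaging by hand instead of relying on the merger's down-mix, a sketch over an already-decoded AudioBuffer (stereoBuffer and audioContext are placeholders here) could look like this:
// Build a mono AudioBuffer by averaging the two channels of a stereo one.
var left = stereoBuffer.getChannelData(0);
var right = stereoBuffer.getChannelData(1);
var monoBuffer = audioContext.createBuffer(1, stereoBuffer.length, stereoBuffer.sampleRate);
var mono = monoBuffer.getChannelData(0);
for (var i = 0; i < stereoBuffer.length; i++) {
  mono[i] = 0.5 * (left[i] + right[i]); // mean of the left and right sample
}
var monoSource = audioContext.createBufferSource();
monoSource.buffer = monoBuffer;
monoSource.connect(audioContext.destination);
monoSource.start(0);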
I guess I was overthinking this. The following seems to work. The names of the nodes are what's a bit confusing; I would have thought I would need a merger node for this:
stereoInputSource.connect(splitter);
splitter.connect(monoGain, 0); // left output
splitter.connect(monoGain, 1); // right output
monoGain.connect(audioContext.destination);
EDIT
The "correct" way is what Chris mentioned. Explicitly setting the output channel on the merge node invocation is what confused me.
var stereoInputSource = audioContext.createBufferSource();
var merger = audioContext.createChannelMerger(1); // Set number of channels
stereoInputSource.buffer = soundObj.soundToPlay;
stereoInputSource.connect(merger);
merger.connect(audioContext.destination)

Adding videos to Unity3d

We are developing a game about driving awareness.
The problem is that we need to show the user videos of any mistakes he made once he has finished driving. For example, if he makes two mistakes, we need to show two videos at the end of the game.
Can you help with this? I don't have any idea how to do it.
#solus already gave you an answer regarding "how to play a (pre-recorded) video from your application". However, from what I've understood, you are asking about saving (and visualizing) a kind of replay of the "wrong" actions performed by the player. This is not an easy task, and I don't think you can get an exhaustive answer here, only some advice. I'll offer mine.
First of all, you should "capture" the position of the player's car at regular time intervals.
As an example, you could read the player's car position every 0.2 seconds and save it into a structure (for example, a List).
Then, you would implement some logic to detect the "wrong" actions (crashes, speeding... they obviously depend on your game) and save a reference to the pair ["mistake", "relevant portion of the list containing the car's positions for that event"].
Now you have everything you need to recreate a replay of the action: that is, make the car "drive itself" by reading back the previously saved positions (which act as waypoints for regenerating the route).
Obviously, you also have to deal with the camera's position and rotation: either leave it attached to the car (as in the normal in-game action), or move it over time to catch the most interesting angles, as AAA racing games do (this will make the overall task more difficult, of course).
Unity will import a video as a MovieTexture. It will be converted to the native Theora/Vorbis (Ogg) format. (Use ffmpeg2theora if import fails.)
Simply apply it as you would any texture. You could use a plane or a flat cube. You should adjust its localScale to the aspect ratio of your video (movie.width/(float)movie.height).
Put the attached audioclip in an AudioSource. Then call movie.Play() and audio.Play().
You could also load the video from a local file path or the web (in the correct format).
var movie = new WWW(@"file://C:\videos\myvideo.ogv").movie;
...
if (movie.isReadyToPlay)
{
    renderer.material.mainTexture = movie;
    audio.clip = movie.audioClip;
    movie.Play();
    audio.Play();
}
Use MovieTexture, but do not forget to install QuickTime; you need it to import the movie clip (a .mov file, for example).