I have created an audio worklet that performs pitch detection. Everything works fine, but I want to release the microphone once I am done.
I get the stream and wire everything up like this:
const AudioContextConstructor =
    window.AudioContext || window.webkitAudioContext;
this.audioContext = new AudioContextConstructor();
await this.audioContext.audioWorklet.addModule('js/worklet_pitcher.js');
this.stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const mediaStreamSource = this.audioContext.createMediaStreamSource(this.stream);
this.pitchWorklet = new AudioWorkletNode(this.audioContext, 'pitch-processor');
mediaStreamSource.connect(this.pitchWorklet);
When I am done, I simply do this:
stop = (): void => {
    if (this.running) {
        this.audioContext.close();
        this.running = false;
    }
}
This stops the worklet pipeline, but the red dot still shows in the browser tab, meaning that I still own the mic.
I looked for a stream.close() so I could explicitly close the MediaStream returned by getUserMedia, but there isn't one.
Closing the AudioContext is not enough: you also need to call stop() on each MediaStreamTrack of the MediaStream obtained from the mic.
this.stream.getTracks().forEach((track) => track.stop());
https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrack/stop
https://developer.mozilla.org/en-US/docs/Web/API/MediaStream/getTracks
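Putting it together, the stop method from the question could look something like this (a minimal sketch reusing the stream, audioContext, and running fields from the question; stopping the tracks before closing the context is a reasonable choice, not something the API mandates):
stop = (): void => {
    if (this.running) {
        // Release the microphone: stop every track of the captured stream.
        this.stream.getTracks().forEach((track) => track.stop());
        // Then tear down the audio graph.
        this.audioContext.close();
        this.running = false;
    }
}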
Related
I am trying to build a MIDI player using the Web Audio API. I used Tone.js to parse the MIDI file into JSON. I am using mp3 files to play the notes. Following are the relevant parts of the code:
//create audio samples
static async setupSample(audioContext, filepath) {
    const response = await fetch(filepath);
    const arrayBuffer = await response.arrayBuffer();
    const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
    return audioBuffer;
}

//play a single sample
static playSample(audioContext, audioBuffer, time) {
    const sampleSource = new AudioBufferSourceNode(audioContext, {
        buffer: audioBuffer,
        playbackRate: 1,
    });
    sampleSource.connect(audioContext.destination);
    sampleSource.start(time);
    return sampleSource;
}
Scheduling samples:
async start() {
    this.startTime = this.audioCtx.currentTime;
    this.play();
}

play() {
    let nextNote = this.notes[this.noteIndex];
    //schedule samples
    while ((nextNote.time + this.startTime) - this.audioCtx.currentTime <= 0.250) {
        let s = Audio.playSample(this.audioCtx, this.samples[nextNote.midi], this.startTime + nextNote.time);
        s.stop(this.startTime + nextNote.time + nextNote.duration);
        this.noteIndex++;
        if (this.noteIndex == this.notes.length) {
            break;
        }
        nextNote = this.notes[this.noteIndex];
    }
    if (this.noteIndex == this.notes.length) {
        return;
    }
    requestAnimationFrame(() => {
        this.play();
    });
}
I am testing the code with a MIDI file that contains a C major scale. I have tested the MIDI file using timidity and it is fine.
The code does play the MIDI file correctly, except for a small problem: I hear some clicking sounds during playback. The clicking increases with increasing tempo, but does not completely go away even with a tempo as low as 50 bpm. Any ideas what could be going wrong?
Full code can be viewed at: https://test.meedee.in/
Nothing is "wrong". You are observing a phenomenon intrinsic to the physics of audio.
Chopping audio samples arbitrarily like this creates clicks at the transitions. Any instantaneous change in level is heard as a click. To get rid of the clicks, apply an envelope to the sample, blend adjacent notes, or apply a low-pass filter.
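For example, a minimal sketch of the envelope approach, adapted from the playSample function in the question (the 15 ms fade length and the extra duration parameter are illustrative assumptions, and the fades must be shorter than half the note duration):
//play a single sample with a short fade-in/fade-out to avoid clicks
static playSample(audioContext, audioBuffer, time, duration) {
    const sampleSource = new AudioBufferSourceNode(audioContext, {
        buffer: audioBuffer,
        playbackRate: 1,
    });
    const envelope = audioContext.createGain();
    const fade = 0.015; // 15 ms ramps; tune by ear
    envelope.gain.setValueAtTime(0, time);
    envelope.gain.linearRampToValueAtTime(1, time + fade);
    envelope.gain.setValueAtTime(1, time + duration - fade);
    envelope.gain.linearRampToValueAtTime(0, time + duration);
    sampleSource.connect(envelope);
    envelope.connect(audioContext.destination);
    sampleSource.start(time);
    // Stop is scheduled here, so the caller no longer needs s.stop(...)
    sampleSource.stop(time + duration);
    return sampleSource;
}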
Essentially the app is like Snapchat. I take pics and reset back to camera mode; the issue comes when I record a video and reset. It goes back to camera mode, but the audio from the video keeps playing in the background. The functions are pretty much exactly like the camera docs, with a few additions to reset the camera.
I added this:
_reset() {
    if (mounted)
        setState(() {
            if (this._didCapture) {
                this._didCapture = false;
                this._isRecording = false;
                this._isPosting = false;
                this._file = File('');
                this._fileType = null;
                this._captions.clear();
                this._textEditingControllers.clear();
                this._videoController = null;
                this._videoPlayerListener = null;
            }
        });
}
It works just fine, but the audio in the background is still on. I am also wondering if the video/picture is saved on the phone, which I don't want...
I had been looking for a similar answer, but I didn't find one. You could try to stop it by adding this to your function:
this._controller.setVolume(0.0);
That's what I did in my app.
I'm trying to dynamically load and play a video file. No matter what I do, I cannot seem to figure out why the audio does not play.
var www = new WWW("http://unity3d.com/files/docs/sample.ogg");
var movieTexture = www.movie;
var movieAudio = www.movie.audioClip;
while (!movieTexture.isReadyToPlay) yield return 0;
// Assign movie texture and audio
var videoAnimation = videoAnimationPrefab.GetComponent<VideoAnimation>();
var videoRenderer = videoAnimation.GetVideoRenderer();
var audioSource = videoAnimation.GetAudioSource();
videoRenderer.material.mainTexture = movieTexture;
audioSource.clip = movieAudio;
// Play the movie and sound
movieTexture.Play();
audioSource.Play();
// Double check audio is playing...
Debug.Log("Audio playing: " + audioSource.isPlaying);
Every time, I receive "Audio playing: False".
I've also tried using a GUITexture, following this as a guide, but no dice. There are no errors displayed in the console.
What am I doing wrong that makes the audio never work?
Thanks in advance for any help!
Changed to:
while (!movieTexture.isReadyToPlay) yield return 0;
var movieAudio = movieTexture.audioClip;
Even though AudioClip inherits from Object, a call to movieTexture.audioClip seems to return a copy rather than a reference to the underlying object. So at the time I was assigning it, the clip had not been created yet; I had to wait until the movie was "Ready to Play" before fetching the audioClip.
I decode AMR-NB to PCM, then enqueue the resulting PCM buffer (I'm sure the PCM data is right), but no sound is heard. While feeding buffers, the log outputs:
/AudioTrack(14857): obtainBuffer timed out (is the CPU pegged?)
My code is below, and my questions are:
Is there something wrong with how I use OpenSL ES?
Is it true that OpenSL ES only works on a real device?
Sample code:
void AudioTest()
{
    StartAudioPlay();
    while(1)
    {
        //decode AMR to PCM
        /* Convert to little endian and write to wav */
        //write buffer to buffer queue
        AudioBufferWrite(littleendian, 320);
    }
}
void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
{
    //do nothing
}

void AudioBufferWrite(const void* buffer, int size)
{
    (*gBQBufferQueue)->Enqueue(gBQBufferQueue, buffer, size);
}
// create buffer queue audio player
void SlesCreateBQPlayer(/*AudioCallBackSL funCallback, void *soundMix,*/ int rate, int nChannel, int bitsPerSample)
{
    SLresult result;

    // configure audio source
    SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
    SLDataFormat_PCM format_pcm = {SL_DATAFORMAT_PCM, 1, SL_SAMPLINGRATE_8,
                                   SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
                                   SL_SPEAKER_FRONT_CENTER, SL_BYTEORDER_LITTLEENDIAN};
    SLDataSource audioSrc = {&loc_bufq, &format_pcm};

    // configure audio sink
    SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, gOutputMixObject};
    SLDataSink audioSnk = {&loc_outmix, NULL};

    // create audio player
    const SLInterfaceID ids[3] = {SL_IID_BUFFERQUEUE, SL_IID_EFFECTSEND, SL_IID_VOLUME};
    const SLboolean req[3] = {SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE};
    result = (*gEngineEngine)->CreateAudioPlayer(gEngineEngine, &gBQObject, &audioSrc, &audioSnk,
                                                 3, ids, req);

    // realize the player
    result = (*gBQObject)->Realize(gBQObject, SL_BOOLEAN_FALSE);

    // get the play interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_PLAY, &gBQPlay);

    // get the buffer queue interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_BUFFERQUEUE, &gBQBufferQueue);

    // register callback on the buffer queue
    result = (*gBQBufferQueue)->RegisterCallback(gBQBufferQueue, bqPlayerCallback, NULL/*soundMix*/);

    // get the effect send interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_EFFECTSEND, &gBQEffectSend);

    // set the player's state to playing
    result = (*gBQPlay)->SetPlayState(gBQPlay, SL_PLAYSTATE_PLAYING);
}
I'm not entirely sure, but I think you're correct in that the emulator's OpenSL ES support doesn't actually work. I've never gotten it to work in practice, while it works on any device I've tried.
In my application I have to support Android 2.2 as well, so I have a fallback to use JNI to access the Java AudioTrack APIs. I added a special case to my app to always use the AudioTrack interface when the emulator is detected.
For mostly security reasons, I'm not allowed to store a WAV file on the server to be accessed by a browser. What I have on the server is a byte array containing audio data (the data portion of a WAV file, I believe), and I want it to be played in a browser through JavaScript (or an applet, but JS is preferred). I can use JSON-RPC to send the whole byte[] over, or I can open a socket to stream it over, but in either case I don't know how to play the byte[] within the browser.
The following code will play the sine wave at 0.5 and 2.0 seconds. Call the function play_buffersource() from your button handler or anywhere you want.
Tested using Chrome with the Web Audio flag enabled. For your case, all you need to do is shuffle your audio bytes into buf.
<script type="text/javascript">
const kSampleRate = 44100; // Other sample rates might not work depending on your browser's AudioContext
const kNumSamples = 16834;
const kFrequency = 440;
const kPI_2 = Math.PI * 2;
function play_buffersource() {
    if (!window.AudioContext) {
        if (!window.webkitAudioContext) {
            alert("Your browser sucks because it does NOT support any AudioContext!");
            return;
        }
        window.AudioContext = window.webkitAudioContext;
    }
    var ctx = new AudioContext();
    var buffer = ctx.createBuffer(1, kNumSamples, kSampleRate);
    var buf = buffer.getChannelData(0);
    for (var i = 0; i < kNumSamples; ++i) {
        buf[i] = Math.sin(kFrequency * kPI_2 * i / kSampleRate);
    }
    var node = ctx.createBufferSource();
    node.buffer = buffer;
    node.connect(ctx.destination);
    node.start(ctx.currentTime + 0.5); // start() replaces the deprecated noteOn()
    node = ctx.createBufferSource();
    node.buffer = buffer;
    node.connect(ctx.destination);
    node.start(ctx.currentTime + 2.0);
}
</script>
References:
http://epx.com.br/artigos/audioapi.php
https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html
If you need to resample the audio, you can use a JavaScript resampler: https://github.com/grantgalitz/XAudioJS
If you need to decode the base64 data, there are a lot of JavaScript base64 decoder: https://github.com/carlo/jquery-base64
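If your bytes are raw PCM rather than float samples, they have to be converted before being written into buf. A minimal sketch, assuming 16-bit little-endian mono PCM and a channel buffer sized to half the byte count (the helper name fillFromPcm16 is made up for illustration):
function fillFromPcm16(buf, byteArray) {
    // View the bytes as 16-bit little-endian integers and scale to [-1, 1].
    var view = new DataView(new Uint8Array(byteArray).buffer);
    for (var i = 0; i < buf.length; i++) {
        buf[i] = view.getInt16(i * 2, true) / 32768; // true = little-endian
    }
}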
I accomplished this via the following code. I pass a byte array containing the data from the WAV file to the function playByteArray. My solution is similar to Peter Lee's, but I could not get his to work in my case (the output was garbled), whereas this solution works well for me. I verified that it works in Firefox and Chrome.
window.onload = init;
var context; // Audio context
var buf; // Audio buffer
function init() {
    if (!window.AudioContext) {
        if (!window.webkitAudioContext) {
            alert("Your browser does not support any AudioContext and cannot play back this audio.");
            return;
        }
        window.AudioContext = window.webkitAudioContext;
    }
    context = new AudioContext();
}
function playByteArray(byteArray) {
    var arrayBuffer = new ArrayBuffer(byteArray.length);
    var bufferView = new Uint8Array(arrayBuffer);
    for (var i = 0; i < byteArray.length; i++) {
        bufferView[i] = byteArray[i];
    }
    context.decodeAudioData(arrayBuffer, function(buffer) {
        buf = buffer;
        play();
    });
}
// Play the loaded file
function play() {
    // Create a source node from the buffer
    var source = context.createBufferSource();
    source.buffer = buf;
    // Connect to the final output node (the speakers)
    source.connect(context.destination);
    // Play immediately
    source.start(0);
}
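Note that decodeAudioData needs a complete file, headers included. Since the question mentions having only the data portion of a WAV file, one option is to prepend a minimal RIFF/WAVE header before decoding. A sketch under stated assumptions (16-bit PCM; the sample rate and channel count must match how the data was recorded; wrapPcmInWav is a hypothetical helper, not part of any API):
// Prepend a minimal 44-byte WAV header to raw 16-bit PCM bytes.
function wrapPcmInWav(pcmBytes, sampleRate, numChannels) {
    var bytesPerSample = 2; // assuming 16-bit PCM
    var dataSize = pcmBytes.length;
    var header = new ArrayBuffer(44);
    var v = new DataView(header);
    function writeStr(off, s) {
        for (var i = 0; i < s.length; i++) v.setUint8(off + i, s.charCodeAt(i));
    }
    writeStr(0, 'RIFF');
    v.setUint32(4, 36 + dataSize, true);  // RIFF chunk size
    writeStr(8, 'WAVE');
    writeStr(12, 'fmt ');
    v.setUint32(16, 16, true);            // fmt chunk size
    v.setUint16(20, 1, true);             // format 1 = PCM
    v.setUint16(22, numChannels, true);
    v.setUint32(24, sampleRate, true);
    v.setUint32(28, sampleRate * numChannels * bytesPerSample, true); // byte rate
    v.setUint16(32, numChannels * bytesPerSample, true);              // block align
    v.setUint16(34, 16, true);            // bits per sample
    writeStr(36, 'data');
    v.setUint32(40, dataSize, true);      // data chunk size
    var wav = new Uint8Array(44 + dataSize);
    wav.set(new Uint8Array(header), 0);
    wav.set(new Uint8Array(pcmBytes), 44);
    return wav.buffer;
}
With that in place, playByteArray could pass wrapPcmInWav(byteArray, 44100, 1) to context.decodeAudioData instead of copying the bytes verbatim.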
If you have the bytes on the server then I would suggest that you create some kind of handler on the server that will stream the bytes to the response as a wav file. This "file" would only be in memory on the server and not on disk. Then the browser can just handle it like a normal wav file.
More details on the server stack would be needed to give more information on how this could be done in your environment.
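If such a streaming handler exists, the client side becomes trivial. A sketch, assuming a hypothetical /api/audio endpoint that responds with audio/wav:
// Hypothetical endpoint; the browser treats the streamed response as a normal wav file.
var audio = new Audio("/api/audio/clip123");
audio.play();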
I suspect you can achieve this with the HTML5 Audio API easily enough:
https://developer.mozilla.org/en/Introducing_the_Audio_API_Extension
This library might come in handy too, though I'm not sure if it reflects the latest browser behaviours:
https://github.com/jussi-kalliokoski/audiolib.js