How to get separate analysers in wavesurfer to process stereo channels

I'm using wavesurfer.js in a React app to play audio files, and I want to get the frequency data for each stereo channel independently so I can implement stereo VU meters.
I'm using the audioprocess event of a wavesurfer instance as shown below.
const player = WaveSurfer.create({
    container: '#sound',
    waveColor: 'blue',
    progressColor: 'white'
});
const analyser = player.backend.analyser;
player.on('audioprocess', (data) => {
    let frequencyData = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(frequencyData);
    console.log("PLAYER FREQ DATA [0]...[n]", frequencyData.length, getAverageVolume(frequencyData));
});
This works OK, but it appears to be the frequency data for both channels combined. How can I get two analysers, each processing just one of the stereo channels?
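For reference, the plain Web Audio API provides a ChannelSplitterNode for exactly this. The sketch below assumes direct access to the AudioContext and to a stereo source node; sourceNode is a placeholder, and wavesurfer's internal backend graph may expose these nodes differently:

const audioCtx = new AudioContext();
const splitter = audioCtx.createChannelSplitter(2);
const analyserL = audioCtx.createAnalyser();
const analyserR = audioCtx.createAnalyser();

// sourceNode stands in for whatever node carries the stereo signal
sourceNode.connect(splitter);
splitter.connect(analyserL, 0); // output 0 = left channel
splitter.connect(analyserR, 1); // output 1 = right channel
sourceNode.connect(audioCtx.destination); // keep normal playback

// read each channel independently, e.g. inside the audioprocess handler
const left = new Uint8Array(analyserL.frequencyBinCount);
const right = new Uint8Array(analyserR.frequencyBinCount);
analyserL.getByteFrequencyData(left);
analyserR.getByteFrequencyData(right);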

Related

Web audio playback contains clicks

I am trying to build a MIDI player using the Web Audio API. I used Tone.js to parse a MIDI file into JSON. I am using mp3 files to play the notes. Following are the relevant parts of the code:
// create audio samples
static async setupSample(audioContext, filepath) {
    const response = await fetch(filepath);
    const arrayBuffer = await response.arrayBuffer();
    const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
    return audioBuffer;
}

// play a single sample
static playSample(audioContext, audioBuffer, time) {
    const sampleSource = new AudioBufferSourceNode(audioContext, {
        buffer: audioBuffer,
        playbackRate: 1,
    });
    sampleSource.connect(audioContext.destination);
    sampleSource.start(time);
    return sampleSource;
}
Scheduling samples:
async start() {
    this.startTime = this.audioCtx.currentTime;
    this.play();
}

play() {
    let nextNote = this.notes[this.noteIndex];
    // schedule samples
    while ((nextNote.time + this.startTime) - this.audioCtx.currentTime <= 0.250) {
        let s = Audio.playSample(this.audioCtx, this.samples[nextNote.midi], this.startTime + nextNote.time);
        s.stop(this.startTime + nextNote.time + nextNote.duration);
        this.noteIndex++;
        if (this.noteIndex == this.notes.length) {
            break;
        }
        nextNote = this.notes[this.noteIndex];
    }
    if (this.noteIndex == this.notes.length) {
        return;
    }
    requestAnimationFrame(() => {
        this.play();
    });
}
I am testing the code with a MIDI file that contains a C major scale. I have tested the MIDI file using timidity and it is fine.
The code does play the MIDI file correctly, except for a small problem: I hear some clicking sounds during playback. The clicking increases with increasing tempo but does not completely go away even with a tempo as low as 50 bpm. Any ideas what could be going wrong?
Full code can be viewed at: https://test.meedee.in/
Nothing is "wrong". You are observing a phenomenon intrinsic to the physics of audio.
Chopping audio samples off arbitrarily like this creates clicks at the transitions, because any instantaneous change in level is heard as a click. To get rid of the clicks, apply an envelope to each sample, blend adjacent notes, or apply a low-pass filter.
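As an illustration of the envelope approach, here is a minimal sketch of how playSample could fade each note in and out through a GainNode. The 10 ms fade time and the extra duration parameter are assumptions, not part of the original code:

static playSample(audioContext, audioBuffer, time, duration) {
    const sampleSource = new AudioBufferSourceNode(audioContext, { buffer: audioBuffer });
    const envelope = new GainNode(audioContext, { gain: 0 });

    const fade = 0.01; // 10 ms attack/release, enough to remove the click
    envelope.gain.setValueAtTime(0, time);
    envelope.gain.linearRampToValueAtTime(1, time + fade);     // fade in
    envelope.gain.setValueAtTime(1, time + duration - fade);
    envelope.gain.linearRampToValueAtTime(0, time + duration); // fade out

    // route the sample through the envelope instead of stopping it abruptly
    sampleSource.connect(envelope).connect(audioContext.destination);
    sampleSource.start(time);
    sampleSource.stop(time + duration); // duration assumed passed in (e.g. nextNote.duration)
    return sampleSource;
}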

Stream synthesized audio in real time in Flutter

I'm trying to create an app that generates a continuous sine wave of varying frequency (controlled by the user), and I'm trying to play the data as it's generated in real time.
I'm using just_audio right now to play bytes generated using wave_generator, as follows (snippet from an issue):
class BufferAudioSource extends StreamAudioSource {
  final Uint8List _buffer;

  BufferAudioSource(this._buffer) : super(tag: "Bla");

  @override
  Future<StreamAudioResponse> request([int? start, int? end]) {
    start = start ?? 0;
    end = end ?? _buffer.length;
    return Future.value(
      StreamAudioResponse(
        sourceLength: _buffer.length,
        contentLength: end - start,
        offset: start,
        contentType: 'audio/wav',
        stream: Stream.value(List<int>.from(_buffer.skip(start).take(end - start))),
      ),
    );
  }
}
And I'm using the audio source like this:
StreamAudioSource _source = BufferAudioSource(_data!);
_player.setAudioSource(_source);
_player.play();
Is there a way I could feed the data to the player as soon as I generate it on the fly, using a sine wave generator, so that if the user changes the frequency the playback reflects the change as soon as it happens?
I tried looking online and in the repository on GitHub, but I couldn't find anything.
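For comparison, the live-controllable tone the question describes is built into the Web Audio API used elsewhere on this page: an OscillatorNode keeps playing while its frequency AudioParam is changed. A minimal browser-side sketch (the 440/880 Hz values are illustrative, and this is not a Flutter solution):

const audioCtx = new AudioContext();
const osc = new OscillatorNode(audioCtx, { type: 'sine', frequency: 440 });
osc.connect(audioCtx.destination);
osc.start();

// later, e.g. from a slider handler: playback reflects the change immediately
osc.frequency.setValueAtTime(880, audioCtx.currentTime);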

Using WebAudio AnalyserNode.getFloatFrequencyData() to shift the pitch of a BufferSource

I have a BufferSource, which I create thusly:
const proxyUrl = location.origin == 'file://' ? 'https://cors-anywhere.herokuapp.com/' : '';
const request = new XMLHttpRequest();
request.open('GET', proxyUrl + 'http://heliosophiclabs.com/~mad/projects/mad-music/non.mp3', true);
// request.open('GET', 'non.mp3', true);
request.responseType = 'arraybuffer';
request.onload = () => {
    audioCtx.decodeAudioData(request.response, buffer => {
        buff = buffer;
    }, err => {
        console.error(err);
    });
};
request.send();
Yes, the CORS workaround is pathetic, but this is the way I found to be able to work locally without needing to run an HTTP server. Anyway...
I would like to shift the pitch of this buffer. I've tried various forms of this:
const source = audioCtx.createBufferSource();
source.buffer = buff;

const analyser = audioCtx.createAnalyser();
analyser.connect(audioCtx.destination);
analyser.minDecibels = -140;
analyser.maxDecibels = 0;
analyser.smoothingTimeConstant = 0.8;
analyser.fftSize = 2048;

const dataArray = new Float32Array(analyser.frequencyBinCount);
source.connect(analyser);
analyser.connect(audioCtx.destination);
source.start(0);
analyser.getFloatFrequencyData(dataArray);
console.log('dataArray', dataArray);
All to no avail. dataArray is always filled with -Infinity values, no matter what I try.
My idea is to get this frequency-domain data, then move all the frequencies up/down by some amount and create a new Oscillator node out of them, like this:
const wave = audioCtx.createPeriodicWave(real, waveComponents);
oscillator.setPeriodicWave(wave);
Anyway. If anyone has a better idea of how to shift pitch, I'd love to hear it. Sadly, detune and playbackRate both seem to do basically the same thing (why are there two ways of doing the same thing?), namely just to speed up or slow down the playback, so that's not it.
First, there's a small issue with the code: you connect the analyser to the destination twice. You don't actually need to connect it at all.
Second, I think the reason you're getting all -Infinity values is that you call getFloatFrequencyData right after you start the source. There's a good chance that no samples have been played yet, so the analyser only has buffers of all zeros.
You need to call getFloatFrequencyData after a bit of time to see non-zero values; see the sketch after this answer.
Third, I don't think this will work at all, even for shifting the pitch of an oscillator. getFloatFrequencyData only returns the magnitude information. You will need the phase information for the harmonics to get everything shifted correctly. Currently there's no way to get the phase information.
Fourth, if you have an AudioBuffer with the data you need, consider using the playbackRate to change the pitch. Not sure if this will produce the shift you want.
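A minimal sketch of that timing fix, polling the analyser on animation frames instead of reading immediately after start() (the polling strategy is an assumption; any deferred callback would do):

source.start(0);

function poll() {
    analyser.getFloatFrequencyData(dataArray);
    console.log('dataArray', dataArray); // no longer all -Infinity once samples have flowed through
    requestAnimationFrame(poll);
}
requestAnimationFrame(poll);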

Webaudio :: Play Recorded Audio

I want to play back audio recorded from the microphone.
After recording it as 32-bit float arrays:
let left = e.inputBuffer.getChannelData(0);
let tempLeftChannel = this.state.leftChannel;
tempLeftChannel.push(new Float32Array(left));
this.setState({ leftChannel: tempLeftChannel });
Now the leftChannel array holds chunks of audio data, and I want to play them in the browser. How can I do that?
You leave quite a bit out of your snippet, but perhaps the following will give you an idea of one way to play out the float array that you have. Let context be the AudioContext that you probably have.
let buffer = new AudioBuffer({ length: leftChannel.length,
                               sampleRate: context.sampleRate });
buffer.copyToChannel(leftChannel, 0);

let source = new AudioBufferSourceNode(context, { buffer: buffer });
source.connect(context.destination);
source.start();
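Note that copyToChannel expects one flat Float32Array, while the recording snippet above pushes a list of Float32Array chunks. A minimal sketch of flattening them first (leftChannel here is the merged result; how the pieces fit together is an assumption):

const chunks = this.state.leftChannel; // array of Float32Array chunks
const totalLength = chunks.reduce((sum, c) => sum + c.length, 0);

const leftChannel = new Float32Array(totalLength);
let offset = 0;
for (const chunk of chunks) {
    leftChannel.set(chunk, offset);
    offset += chunk.length;
}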

How to use multiple USB webcams in Matlab simultaneously?

I would like to capture live video with two USB webcams (Philips SPC 900NC), but I found that they cannot work simultaneously on my laptop. Either of the two USB webcams works alone, or together with the webcam built into my laptop.
When I use the Simulink block 'From video device', Matlab gives the error message 'Multiple VIDEOINPUT objects cannot access the same device simultaneously.' When I check the video input devices with the command 'imaqhwinfo', only one of the Philips USB webcams is detected.
I would like to know:
What is the reason for this? Is it a hardware limitation (USB bus bandwidth), or do Matlab video input objects simply not support multiple identical devices?
What is the solution? Could anyone give me some suggestions?
You may be interested in this link:
http://opencv.willowgarage.com/wiki/faq#How_to_use_2_cameras_.28multiple_cameras.29_with_cvCam_library
Which contains:
First, init the cvcam library and get the number of cams by:
int ncams = cvcamGetCamerasCount( ); // returns the number of available cameras in the system
Show a dialog to choose which cameras to use:
int* out;
int nselected = cvcamSelectCamera(&out);
Get the selected cams and enable them.
int cam1 = out[0];
int cam2 = out[1];
cvcamSetProperty(cam1, CVCAM_PROP_ENABLE, CVCAMTRUE);
cvcamSetProperty(cam1, CVCAM_PROP_RENDER, CVCAMTRUE); //We'll render stream from this source
cvNamedWindow("Cam1", 1);
cvcamWindow MyWin1 = (cvcamWindow)cvGetWindowHandle("Cam1");
cvcamSetProperty(cam1, CVCAM_PROP_WINDOW, &MyWin1); // Selects a window for video rendering
//Same code for camera 2
cvcamSetProperty(cam2, CVCAM_PROP_ENABLE, CVCAMTRUE);
cvcamSetProperty(cam2, CVCAM_PROP_RENDER, CVCAMTRUE);
cvNamedWindow("Cam2", 1);
cvcamWindow MyWin2 = (cvcamWindow)cvGetWindowHandle("Cam2");
cvcamSetProperty(cam2, CVCAM_PROP_WINDOW, &MyWin2);
//If you want to open the property dialog for setting the video format parameters, uncomment this line
//cvcamGetProperty(cam1, CVCAM_VIDEOFORMAT, NULL);
//cvcamGetProperty(cam2, CVCAM_VIDEOFORMAT, NULL);
Enable the stereo mode (2 cameras working at the same time)
cvcamSetProperty(cam1, CVCAM_STEREO_CALLBACK, stereocallback); // stereocallback is the function that runs to process every frame
cvcamInit();
cvcamStart();
// Your app is working
while (1)
{
    int key = cvWaitKey(5);
    if (key == 27) break;
}
cvcamStop( );
cvcamExit( );
Define the stereocallback function outside of the function above.
void stereocallback(IplImage* image1, IplImage* image2) {
    // Process the 2 images here
}