How can I handle audio files? - flutter

What is the best option to:
Record audio from the microphone,
Store the audio as files in memory,
Play those files back?
Is there one package that can conveniently both record and play? Does it work on all platforms (is it web compatible)? What is the best strategy for storing the files in memory?

Here is a package you can use: audio_recorder.
For the recording and storing part, here is a sample (read the package documentation):
// Import package
import 'package:audio_recorder/audio_recorder.dart';
// Check permissions before starting
bool hasPermissions = await AudioRecorder.hasPermissions;
// Get the state of the recorder
bool isRecording = await AudioRecorder.isRecording;
// Start recording
await AudioRecorder.start(path: _controller.text, audioOutputFormat: AudioOutputFormat.AAC);
// Stop recording
Recording recording = await AudioRecorder.stop();
print("Path : ${recording.path}, Format : ${recording.audioOutputFormat}, Duration : ${recording.duration}, Extension : ${recording.extension},");
To play audio you need another package; I suggest audioplayers:
// To pause
int result = await audioPlayer.pause();
//To Stop
int result = await audioPlayer.stop();
// To Jump through
int result = await audioPlayer.seek(Duration(milliseconds: 1200));
// To Resume
int result = await audioPlayer.resume();
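Note the snippet above never actually starts playback. With the pre-1.0 audioplayers API used here (the methods returning an int result), playing from a URL looks roughly like this; the URL is a placeholder:
import 'package:audioplayers/audioplayers.dart';

final audioPlayer = AudioPlayer();
// To play from a remote URL (pass isLocal: true for a file on disk)
int result = await audioPlayer.play('https://example.com/song.mp3');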

Related

Web audio playback contains clicks

I am trying to build a MIDI player using the Web Audio API. I used Tone.js to parse the MIDI file into JSON. I am using mp3 files to play notes. Following are the relevant parts of the code:
//create audio samples
static async setupSample(audioContext, filepath) {
  const response = await fetch(filepath);
  const arrayBuffer = await response.arrayBuffer();
  const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
  return audioBuffer;
}

//play a single sample
static playSample(audioContext, audioBuffer, time) {
  const sampleSource = new AudioBufferSourceNode(audioContext, {
    buffer: audioBuffer,
    playbackRate: 1,
  });
  sampleSource.connect(audioContext.destination);
  sampleSource.start(time);
  return sampleSource;
}
Scheduling samples:
async start() {
  this.startTime = this.audioCtx.currentTime;
  this.play();
}

play() {
  let nextNote = this.notes[this.noteIndex];
  //schedule samples
  while ((nextNote.time + this.startTime) - this.audioCtx.currentTime <= 0.250) {
    let s = Audio.playSample(this.audioCtx, this.samples[nextNote.midi], this.startTime + nextNote.time);
    s.stop(this.startTime + nextNote.time + nextNote.duration);
    this.noteIndex++;
    if (this.noteIndex == this.notes.length) {
      break;
    }
    nextNote = this.notes[this.noteIndex];
  }
  if (this.noteIndex == this.notes.length) {
    return;
  }
  requestAnimationFrame(() => {
    this.play();
  });
}
I am testing the code with a MIDI file that contains a C major scale. I have tested the MIDI file using timidity and it is fine.
The code does play the MIDI file correctly, except for a small problem: I hear some clicking sounds during playback. The clicking increases with increasing tempo, but does not completely go away even at a tempo as low as 50 bpm. Any ideas what could be going wrong?
Full code can be viewed at: https://test.meedee.in/
Nothing is "wrong". You are observing a phenomenon intrinsic to the physics of audio.
Chopping audio samples arbitrarily like this creates clicks at the transitions. Any instantaneous change in level is heard as a click. To get rid of the clicks, apply an envelope to the sample, blend adjacent notes, or apply a low-pass filter.
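One way to apply an envelope in the code above is to route each AudioBufferSourceNode through a GainNode and ramp the gain at the note boundaries. A minimal sketch of a reworked playSample, assuming the caller passes in the note duration it already uses for s.stop(); the 10 ms ramp times are an arbitrary choice:
//play a single sample, faded in and out to avoid clicks
static playSample(audioContext, audioBuffer, time, duration) {
  const sampleSource = new AudioBufferSourceNode(audioContext, {
    buffer: audioBuffer,
    playbackRate: 1,
  });
  const envelope = new GainNode(audioContext);
  // Fade in over 10 ms, hold, then fade out over the last 10 ms of the note.
  envelope.gain.setValueAtTime(0, time);
  envelope.gain.linearRampToValueAtTime(1, time + 0.01);
  envelope.gain.setValueAtTime(1, time + duration - 0.01);
  envelope.gain.linearRampToValueAtTime(0, time + duration);
  sampleSource.connect(envelope);
  envelope.connect(audioContext.destination);
  sampleSource.start(time);
  return sampleSource;
}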

How to get duration of an audio file given only its url

I have an mp3 audio file located under this link:
https://www.soundhelix.com/examples/mp3/SoundHelix-Song-1.mp3
How can I get its duration in my Flutter app without fetching the whole audio file?
Just its length (duration)? I want the quickest possible way to get it.
You could go with the just_audio package.
import 'package:just_audio/just_audio.dart';
final player = AudioPlayer(); // Create a player
final duration = await player.setUrl( // Load a URL
    'https://example.com/bar.mp3'); // Schemes: (https: | file: | asset: )
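setUrl returns a Future<Duration?>, and the underlying platform player generally only needs to fetch enough of the file's metadata to report a duration, not the whole track. A hypothetical usage with the URL from the question:
final player = AudioPlayer();
final duration = await player.setUrl(
    'https://www.soundhelix.com/examples/mp3/SoundHelix-Song-1.mp3');
print('Duration: ${duration?.inSeconds} s'); // null if the duration is unknown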

How to fully shut down an audio input stream

I have created an audio worklet that performs pitch detection. All works fine, but I want to free the microphone once I am done.
I get the stream and wire everything up like this:
const AudioContextConstructor =
  window.AudioContext || window.webkitAudioContext;
this.audioContext = new AudioContextConstructor();
await this.audioContext.audioWorklet.addModule('js/worklet_pitcher.js');
this.stream = await navigator.mediaDevices.getUserMedia({ audio: true });
var mediaStreamSource = this.audioContext.createMediaStreamSource(this.stream);
this.pitchWorklet = new AudioWorkletNode(this.audioContext, 'pitch-processor');
mediaStreamSource.connect(this.pitchWorklet);
When I am done, I simply do this:
stop = (): void => {
  if (this.running) {
    this.audioContext.close();
    this.running = false;
  }
}
This stops the worklet pipeline, but the red dot still shows in the browser tab, meaning that I still own the mic.
I looked for a stream.close() so I could explicitly close the MediaStream returned by getUserMedia, but there isn't one.
You also need to call stop() on each MediaStreamTrack of the MediaStream obtained from the mic.
this.stream.getTracks().forEach((track) => track.stop());
https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrack/stop
https://developer.mozilla.org/en-US/docs/Web/API/MediaStream/getTracks
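Putting it together with the stop method from the question, the teardown could look like this (same fields and assumptions as the question's code):
stop = (): void => {
  if (this.running) {
    // Stop every track so the browser releases the mic (and the red dot goes away).
    this.stream.getTracks().forEach((track) => track.stop());
    this.audioContext.close();
    this.running = false;
  }
}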

Stream synthesized audio in real time in Flutter

I'm trying to create an app that generates a continuous sine wave of varying frequency (controlled by the user), and I'm trying to play the data as it's generated, in real time.
I'm using just_audio right now to play bytes generated using wave_generator, as follows (snippet from an issue):
class BufferAudioSource extends StreamAudioSource {
  final Uint8List _buffer;

  BufferAudioSource(this._buffer) : super(tag: "Bla");

  @override
  Future<StreamAudioResponse> request([int? start, int? end]) {
    start = start ?? 0;
    end = end ?? _buffer.length;
    return Future.value(
      StreamAudioResponse(
        sourceLength: _buffer.length,
        contentLength: end - start,
        offset: start,
        contentType: 'audio/wav',
        stream: Stream.value(List<int>.from(_buffer.skip(start).take(end - start))),
      ),
    );
  }
}
And I'm using the audio source like this:
StreamAudioSource _source = BufferAudioSource(_data!);
_player.setAudioSource(_source);
_player.play();
Is there a way I could feed the data to the player as soon as I generate it on the fly, using a sine wave generator, so that if the user changes the frequency, the playback reflects the change as soon as it happens?
I tried looking online and in the package's GitHub repository, but I couldn't find anything.

OpenSL ES cannot play audio on Android emulator

I decode AMR-NB to PCM, then pass the PCM buffer to Enqueue (I'm sure the PCM data is right), but no sound is heard. While feeding the buffer, the log outputs:
/AudioTrack(14857): obtainBuffer timed out (is the CPU pegged?)
My code is below, and my questions are:
Is there something wrong with how I use OpenSL ES?
Is it true that OpenSL ES only works on a real device?
Sample code:
void AudioTest()
{
    StartAudioPlay();
    while (1)
    {
        //decode AMR to PCM
        /* Convert to little endian and write to wav */
        //write buffer to buffer queue
        AudioBufferWrite(littleendian, 320);
    }
}

void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
{
    //do nothing
}

void AudioBufferWrite(const void* buffer, int size)
{
    (*gBQBufferQueue)->Enqueue(gBQBufferQueue, buffer, size);
}

// create buffer queue audio player
void SlesCreateBQPlayer(/*AudioCallBackSL funCallback, void *soundMix,*/ int rate, int nChannel, int bitsPerSample)
{
    SLresult result;

    // configure audio source
    SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
    SLDataFormat_PCM format_pcm = {SL_DATAFORMAT_PCM, 1, SL_SAMPLINGRATE_8,
                                   SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
                                   SL_SPEAKER_FRONT_CENTER, SL_BYTEORDER_LITTLEENDIAN};
    SLDataSource audioSrc = {&loc_bufq, &format_pcm};

    // configure audio sink
    SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, gOutputMixObject};
    SLDataSink audioSnk = {&loc_outmix, NULL};

    // create audio player
    const SLInterfaceID ids[3] = {SL_IID_BUFFERQUEUE, SL_IID_EFFECTSEND, SL_IID_VOLUME};
    const SLboolean req[3] = {SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE};
    result = (*gEngineEngine)->CreateAudioPlayer(gEngineEngine, &gBQObject, &audioSrc, &audioSnk,
                                                 3, ids, req);

    // realize the player
    result = (*gBQObject)->Realize(gBQObject, SL_BOOLEAN_FALSE);

    // get the play interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_PLAY, &gBQPlay);

    // get the buffer queue interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_BUFFERQUEUE, &gBQBufferQueue);

    // register callback on the buffer queue
    result = (*gBQBufferQueue)->RegisterCallback(gBQBufferQueue, bqPlayerCallback, NULL /*soundMix*/);

    // get the effect send interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_EFFECTSEND, &gBQEffectSend);

    // set the player's state to playing
    result = (*gBQPlay)->SetPlayState(gBQPlay, SL_PLAYSTATE_PLAYING);
}
I'm not entirely sure, but I think you're correct in that the emulator's OpenSL ES support doesn't actually work. I've never gotten it to work in practice, while it works on any device I've tried.
In my application I have to support Android 2.2 as well, so I have a fallback to use JNI to access the Java AudioTrack APIs. I added a special case to my app to always use the AudioTrack interface when the emulator is detected.
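If you need the same kind of fallback, one common way to detect the emulator from native code is to check the ro.kernel.qemu system property, which the classic QEMU-based emulator images set to "1". This is a general technique, not the exact check from my app; a minimal sketch:
#include <string.h>
#include <sys/system_properties.h>

// Returns 1 when running under the classic QEMU-based Android emulator.
static int running_on_emulator(void)
{
    char value[PROP_VALUE_MAX] = {0};
    int len = __system_property_get("ro.kernel.qemu", value);
    return len > 0 && strcmp(value, "1") == 0;
}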