Web Audio: Play Recorded Audio

I want to play back audio recorded from the microphone. I record it as 32-bit float arrays:
let left = e.inputBuffer.getChannelData(0);
let tempLeftChannel = this.state.leftChannel;
tempLeftChannel.push(new Float32Array(left));
this.setState({ leftChannel: tempLeftChannel });
The leftChannel array now holds chunks of audio data. How can I play them back in the browser?

You leave quite a bit out of your snippet, but perhaps the following will give you an idea of one way to play out the float data that you have. Let context be the AudioContext you presumably already have, and note that the chunks in your leftChannel array must first be concatenated into a single Float32Array before they can be copied into an AudioBuffer.
// leftChannel here is a single Float32Array (the recorded chunks concatenated)
let buffer = new AudioBuffer({length: leftChannel.length,
                              sampleRate: context.sampleRate});
buffer.copyToChannel(leftChannel, 0);
let source = new AudioBufferSourceNode(context, {buffer: buffer});
source.connect(context.destination);
source.start();

Related

MPMusicPlayerController being extremely slow (lagging 1+ second)

I am trying to make a music player, and the app became really slow when I added the player. I can't switch more than a couple of songs within five seconds or the app will freeze.
let myCollection = MPMediaItemCollection(items: mySongs)
myController.setQueue(with: myCollection)

// Setting the album artwork, title, artist, and genre. It is much faster when I
// take the metadata out of the array that I made; when I pluck it from the
// music player, it is very slow.
let mySong = myController.nowPlayingItem
songTitle.text = mySong?.title
artistName.text = mySong?.artist
genreTitle.text = mySong?.genre
albumArt.image = mySong?.artwork?.image(at: size)

// Skipping to the next song. This is embedded within a gesture; I don't know
// if that's what's making it slow, but it has about a one-second lag.
myController.skipToNextItem()
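No answer is included here, but the question's own observation (reading metadata from the array you built is fast, reading it from nowPlayingItem is slow) suggests a workaround worth sketching. This is illustrative only, assuming the mySongs array, myController, and the UI outlets from the snippet above; it tracks the queue position by hand so the metadata never has to round-trip through the system player:

import MediaPlayer
import UIKit

// Hypothetical workaround: keep our own pointer into the queue and read
// metadata from the local array instead of querying nowPlayingItem.
var queueIndex = 0

func showMetadata(for song: MPMediaItem) {
    songTitle.text = song.title
    artistName.text = song.artist
    genreTitle.text = song.genre
    albumArt.image = song.artwork?.image(at: size)
}

func skipForward() {
    myController.skipToNextItem()
    queueIndex = (queueIndex + 1) % mySongs.count
    showMetadata(for: mySongs[queueIndex]) // fast: no query to the music player
}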

AudioKit tap skips over time intervals

I am building an app that uses microphone input to detect sounds and trigger events. I based my code on AKAmplitudeTap, but when I ran it, I found that I was only obtaining sample data intermittently, with sections missing.
The tap code looks like this (with the guts ripped out and simply keeping track of how many samples would have been processed):
open class MyTap {
    // internal let bufferSize: UInt32 = 1_024  // 8-9 kSamples/sec
    internal let bufferSize: UInt32 = 4_096     // 39.6 kSamples/sec
    // internal let bufferSize: UInt32 = 16_536 // 43.3 kSamples/sec

    public init(_ input: AKNode?) {
        input?.avAudioNode.installTap(onBus: 0, bufferSize: bufferSize, format: nil) { buffer, _ in
            sampleCount += self.bufferSize
        }
    }
}
I initialize the tap with:
func afterLoad() {
    assert(!loaded)
    AKSettings.audioInputEnabled = true
    do {
        try AKSettings.setSession(category: .playAndRecord, with: .allowBluetoothA2DP)
    } catch {
        print("Could not set session category.")
    }
    mic = AKMicrophone()
    myTap = MyTap(mic) // seriously, can it be that easy?
    loaded = true
}
The original tap code was capturing samples to a buffer, but I saw that big chunks of time were missing with a buffer size of 1024. I suspected that the processing time for the sample buffer might be excessive, so...
I simplified the code to just keep track of how many samples were being passed to the tap. Elsewhere in the code I print out sampleCount/elapsedTime and, as noted in the comments after 'bufferSize', I get different numbers of samples per second.
The sample rate converges on 43.1 kSamples/sec with a 16K buffer, but only about 20% of the samples are collected with a 1K buffer. I would prefer to use the small buffer size to obtain near-real-time response to detected sounds. As I've been writing this, the 4K buffer version has been running and has stabilized at 39,678 samples/sec.
Am I missing something? Can a tap with a small buffer size actually capture 44.1 Khz sample data?
Problem resolved... the tap requires this line of code:
buffer.frameLength = self.bufferSize
... and suddenly all the samples appear. Evidently I had stripped a bit too much out of code I didn't fully understand.
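For reference, a minimal sketch of what the corrected tap might look like with that line in place, assuming the same AudioKit AKNode input and buffer size as above (the sample counter is folded into the class here so the sketch is self-contained):

import AudioKit
import AVFoundation

open class MyTap {
    internal let bufferSize: UInt32 = 4_096
    private(set) var sampleCount: UInt64 = 0

    public init(_ input: AKNode?) {
        input?.avAudioNode.installTap(onBus: 0, bufferSize: bufferSize, format: nil) { buffer, _ in
            // The fix: declare how many frames the buffer actually holds;
            // without this, most of the incoming samples never show up.
            buffer.frameLength = self.bufferSize
            self.sampleCount += UInt64(self.bufferSize)
        }
    }
}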

MovieTexture won't play audio

I'm trying to dynamically load and play a video file. No matter what I do, I cannot seem to figure out why the audio does not play.
var www = new WWW("http://unity3d.com/files/docs/sample.ogg");
var movieTexture = www.movie;
var movieAudio = www.movie.audioClip;
while (!movieTexture.isReadyToPlay) yield return 0;
// Assign movie texture and audio
var videoAnimation = videoAnimationPrefab.GetComponent<VideoAnimation>();
var videoRenderer = videoAnimation.GetVideoRenderer();
var audioSource = videoAnimation.GetAudioSource();
videoRenderer.material.mainTexture = movieTexture;
audioSource.clip = movieAudio;
// Play the movie and sound
movieTexture.Play();
audioSource.Play();
// Double check audio is playing...
Debug.Log("Audio playing: " + audioSource.isPlaying);
Every time I receive Audio playing: False
I've also tried using a GUITexture using this as a guide, but no dice. There are no errors displayed in the console.
What am I doing wrong that makes the audio never work?
Thanks in advance for any help!
Changed to:
while (!movieTexture.isReadyToPlay) yield return 0;
var movieAudio = movieTexture.audioClip;
Even though AudioClip inherits from Object, a call to movieTexture.audioClip seems to return a copy rather than a reference to the underlying object. So at the time I was assigning it, the clip had not been created yet; I had to wait until the movie was "Ready to Play" before fetching the audioClip.

Using Novocaine in an audio app

I'm building an iPhone app that generates random guitar music by playing back individual recorded guitar notes in "caf" format. These notes vary in duration from 3 to 11 seconds, depending on the amount of sustain.
I originally used AVAudioPlayer for playback, and in the simulator at 120 bpm playing 16th notes it sang beautifully; but on my handset, as soon as I upped the tempo a little over 60 bpm playing just quarter notes, it ran like a dog and wouldn't keep time. My elation was very short-lived.
To reduce latency, I tried to implement playback via Audio Units using the Apple MixerHost project as a template for an audio engine, but kept getting a bad access error after I bolted it on and connected everything up.
After many hours of it doing my head in, I gave up on that avenue of thought and I bolted on the Novocaine audio engine instead.
I have now run into a brick wall trying to connect it up to my model.
On the most basic level, my model is a Neck object containing an NSDictionary of Note objects.
Each Note object knows what string and fret of the guitar neck it's on and contains its own AVAudioPlayer.
I build a chromatic guitar neck containing either 132 notes (6 strings by 22 frets) or 144 notes (6 strings by 24 frets), depending on the neck size selected in the user preferences.
I use these Notes as my single point of truth so all scalar Notes generated by the music engine are pointers to this chromatic note bucket.
@interface Note : NSObject <NSCopying>
{
    NSString *name;
    AVAudioPlayer *soundFilePlayer;
    int stringNumber;
    int fretNumber;
}
I always start off playback with the root Note or Chord of the selected scale and then generate the note to play next so I am always playing one note behind the generated note. This way, the next Note to play is always queued up ready to go.
Playback control of these Notes is achieved with the following code:
- (void)runMusicGenerator:(NSNumber *)counter
{
    if (self.isRunning) {
        Note *NoteToPlay;
        // pulseRate is the time interval between beats;
        // staticNoteLength = 1/4 notes, 1/8th notes, 16th notes, etc.
        float delay = self.pulseRate / [self grabStaticNoteLength];
        // user setting to play single, double or triplet notes.
        if (self.beatCounter == CONST_BEAT_COUNTER_INIT_VAL) {
            NoteToPlay = [self.GuitarNeck generateNoteToPlayNext];
        } else {
            NoteToPlay = [self.GuitarNeck cloneNote:self.GuitarNeck.NoteToPlayNow];
        }
        self.GuitarNeck.NoteToPlayNow = NoteToPlay;
        [self callOutNoteToPlay];
        [self performSelector:@selector(runDrill:) withObject:NoteToPlay afterDelay:delay];
    }
}
- (Note *)generateNoteToPlayNext
{
    if ((self.musicPaused) || (self.musicStopped)) {
        // grab the root note on the string to resume
        self.NoteToPlayNow = [self grabRootNoteForString];
        // reset the flags
        self.musicPaused = NO;
        self.musicStopped = NO;
    } else {
        // Set NoteRingingOut to NoteToPlayNow
        self.NoteRingingOut = self.NoteToPlayNow;
        // Set NoteToPlayNow to NoteToPlayNext
        self.NoteToPlayNow = self.NoteToPlayNext;
        if (!self.NoteToPlayNow) {
            self.NoteToPlayNow = [self grabRootNoteForString];
            // now prep the note's audio player for playback
            [self.NoteToPlayNow.soundFilePlayer prepareToPlay];
        }
    }
    // Load NoteToPlayNext
    self.NoteToPlayNext = [self generateRandomNote];
    return self.NoteToPlayNow; // the declared return value was missing
}
- (void)callOutNoteToPlay
{
    self.GuitarNeck.NoteToPlayNow.soundFilePlayer.delegate = (id)self;
    [self.GuitarNeck.NoteToPlayNow.soundFilePlayer setVolume:1.0];
    [self.GuitarNeck.NoteToPlayNow.soundFilePlayer setCurrentTime:0];
    [self.GuitarNeck.NoteToPlayNow.soundFilePlayer play];
}
Each Note's AVAudioPlayer is loaded as follows:
- (AVAudioPlayer *)buildStringNotePlayer:(NSString *)nameOfNote
{
    NSString *soundFileName = @"S";
    soundFileName = [soundFileName stringByAppendingString:[NSString stringWithFormat:@"%d", stringNumber]];
    soundFileName = [soundFileName stringByAppendingString:@"F"];
    if (fretNumber < 10) {
        soundFileName = [soundFileName stringByAppendingString:@"0"];
    }
    soundFileName = [soundFileName stringByAppendingString:[NSString stringWithFormat:@"%d", fretNumber]];
    NSString *soundPath = [[NSBundle mainBundle] pathForResource:soundFileName ofType:@"caf"];
    NSURL *fileURL = [NSURL fileURLWithPath:soundPath];
    AVAudioPlayer *audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL error:nil];
    return audioPlayer; // was: return notePlayer; (an undeclared variable)
}
Here is where I come a cropper.
According to the Novocaine Github page ...
Playing Audio
Novocaine *audioManager = [Novocaine audioManager];
[audioManager setOutputBlock:^(float *audioToPlay, UInt32 numSamples, UInt32 numChannels) {
    // All you have to do is put your audio into "audioToPlay".
}];
But in the downloaded project, you use the following code to load the audio ...
// AUDIO FILE READING OHHH YEAHHHH
// ========================================
NSURL *inputFileURL = [[NSBundle mainBundle] URLForResource:@"TLC" withExtension:@"mp3"];
fileReader = [[AudioFileReader alloc]
              initWithAudioFileURL:inputFileURL
              samplingRate:audioManager.samplingRate
              numChannels:audioManager.numOutputChannels];
[fileReader play];
fileReader.currentTime = 30.0;
[audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels)
{
    [fileReader retrieveFreshAudio:data numFrames:numFrames numChannels:numChannels];
    NSLog(@"Time: %f", fileReader.currentTime);
}];
Here is where I really start to get confused, because the first method uses a float buffer and the second one uses a URL.
How do you get a "caf" file into a float buffer? I am not sure how to implement Novocaine; it is still fuzzy in my head.
My questions that I hope someone can help me with are as follows ...
Are Novocaine objects similar to AVAudioPlayer objects, just more versatile and tweaked to the max for minimum latency? i.e. self contained audio playing (/recording/generating) units?
Can I use Novocaine in my model as it is? i.e. 1 Novocaine object per chromatic note, or should I have 1 Novocaine object that contains all the chromatic Notes? Or do I just store the URL in the note instead and pass that to a Novocaine player?
How can I put my audio into "audioToPlay" when my audio is a "caf" file and "audioToPlay" takes floats?
If I include and declare a Novocaine property in Note.m do I then have to rename the class to Note.mm in order to use the Novocaine object?
How do I play multiple Novocaine objects concurrently in order to reproduce chords and intervals?
Can I loop a Novocaine object's playback?
Can I set the playback length of a note? i.e. play a 10 sec note for only 1 sec?
Can I modify the above code to use Novocaine?
Is the method I am using for runMusicGenerator the correct one to use in order to maintain a tempo that is up to professional standards?
Novocaine makes your life easier by eliminating the need to set up the RemoteIO AudioUnit manually. This includes having to painstakingly fill a bunch of CoreAudio structs and provide a bunch of callbacks, such as this audio-process callback:
static OSStatus PerformThru(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData);
Instead Novocaine handles that in its implementation and then calls your block, which you set by doing this.
[audioManager setOutputBlock: ^(float *audioToPlay, UInt32 numSamples, UInt32 numChannels){} ];
Whatever you write to audioToPlay gets played.
Novocaine sets up the RemoteIO AudioUnit for you. This is a low-level CoreAudio API, different from the high-level AVFoundation, and very low-latency as expected. You are right in that Novocaine is self-contained. You can record, generate, and process audio in realtime.
Novocaine is a singleton; you cannot have multiple Novocaine instances. One way to handle this is to store your guitar sound(s) in a separate class or array, and then write a bunch of methods that use Novocaine to play them.
You have a bunch of options. You can use Novocaine's AudioFileReader to play your .caf file for you: allocate an AudioFileReader, pass it the URL of the .caf file you want to play, and then stick [fileReader retrieveFreshAudio:data numFrames:numFrames numChannels:numChannels] in your block, as in the example code. Each time your block is called, AudioFileReader grabs and buffers a chunk of audio from disk and puts it in audioToPlay, which subsequently gets played. There are disadvantages to this: for short sounds (such as your guitar notes, I'm assuming), repeatedly calling retrieveFreshAudio is a performance hit. It is generally a better idea for short sounds to perform a synchronous, sequential read of the entire file into memory. Novocaine does not provide a way to do this (yet); you will have to use ExtAudioFileServices, and the Apple example project MixerHost shows how.
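The answer names ExtAudioFileServices; if you are working in Swift, AVAudioFile gives you an equivalent whole-file read with much less ceremony. A sketch under that assumption (the helper name is mine, not Novocaine's, and it is a stand-in for the ExtAudioFile approach):

import AVFoundation

// Hypothetical helper: synchronously read an entire short .caf into memory as
// Float32 samples, so a render block can copy from RAM instead of the disk.
func loadSamples(from url: URL) throws -> [Float] {
    let file = try AVAudioFile(forReading: url)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else {
        throw NSError(domain: "loadSamples", code: -1)
    }
    try file.read(into: buffer)
    // Channel 0 only; a stereo file would need both channel pointers.
    let samples = buffer.floatChannelData![0]
    return Array(UnsafeBufferPointer(start: samples, count: Int(buffer.frameLength)))
}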
If you are using AudioFileReader, yes. You only rename to .mm when you are #import-ing Obj-C++ headers or #include-ing C++ headers.
As mentioned earlier, only one Novocaine instance is allowed. You can achieve polyphony by mixing multiple audio sources, which simply means adding buffers together. If you have made multiple versions of the same guitar sound at different pitches, just read them all into memory and mix away. If you only want to have one guitar sound, then you have to change the playback rate, in realtime, of however many notes you are playing, and then mix down.
Novocaine is agnostic to what you are actually playing and does not care how long you are playing a sample for. In order to loop a sound, you have to maintain a count of how many samples have elapsed, check if you are at the end of your sound, and then set that count back to 0.
Yes. Assuming a 44.1k sample rate, 1 sec of audio = 44100 samples. You would then reset your count when it reaches 44100.
Yes. It looks something like this. Assume you have four guitar sounds which are mono and longer than one second, you have read them into memory as float *guitarC, *guitarE, *guitarG, *guitarB; (jazzy Cmaj7 chord, w00t), and you want to mix them down for one second and loop that back in mono:
[audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
    static int count = 0;
    for (int i = 0; i < numFrames; ++i) {
        // Mono-mix one sample of each sound. Since the result can be 4x louder,
        // divide the total amplitude by 4. (Use vDSP_vadd from the Accelerate
        // framework for better performance.)
        data[i] = (guitarC[count] + guitarE[count] + guitarG[count] + guitarB[count]) * 0.25;
        if (++count >= 44100) count = 0; // loops the 1-sec mix
    }
}];
Not exactly. Using performSelector or any mechanism scheduled on a run loop or thread is not guaranteed to be precise; you might experience timing irregularities when the CPU load fluctuates, for example. Use the audio block if you want sample-accurate timing.
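To make that last point concrete, here is a conceptual sketch (in Swift, with hypothetical helpers, not Novocaine API) of sample-accurate scheduling: the beat grid is derived from a frame counter inside the render callback rather than from timers, so note onsets land exactly on the beat.

// Conceptual sketch only. triggerNextNote() and nextMixedSample() are
// hypothetical stand-ins for your note queue and your mixer.
let sampleRate = 44_100.0
let bpm = 120.0
let framesPerBeat = Int(sampleRate * 60.0 / bpm) // 22_050 frames at 120 bpm
var frameCounter = 0

func triggerNextNote() { /* hypothetical: start the next queued note */ }
func nextMixedSample() -> Float { return 0 /* hypothetical: one sample of the mix */ }

func render(into data: UnsafeMutablePointer<Float>, numFrames: Int) {
    for i in 0..<numFrames {
        if frameCounter % framesPerBeat == 0 {
            triggerNextNote() // fires exactly on the beat, to the sample
        }
        data[i] = nextMixedSample()
        frameCounter += 1
    }
}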

AudioQueue screws up output after modification

I am currently working on an audio DSP app. The project requires direct access to, and modification of, audio data. Right now I can successfully access and modify the raw audio data using AudioQueue, but I encounter errors during playback: the output audio after any modification turns out to be noise.
In short, the code is something like this:
(Modified from Speakhere sample code. The rest remains unchanged.)
void AQPlayer::AQBufferCallback(void *inUserData,
                                AudioQueueRef inAQ,
                                AudioQueueBufferRef inCompleteAQBuffer)
{
    AQPlayer *THIS = (AQPlayer *)inUserData;
    if (THIS->mIsDone) return;

    UInt32 numBytes;
    UInt32 nPackets = THIS->GetNumPacketsToRead();
    OSStatus result = AudioFileReadPackets(THIS->GetAudioFileID(),
                                           false,
                                           &numBytes,
                                           inCompleteAQBuffer->mPacketDescriptions,
                                           THIS->GetCurrentPacket(),
                                           &nPackets,
                                           inCompleteAQBuffer->mAudioData);
    if (result)
        printf("AudioFileReadPackets failed: %d", (int)result);

    if (nPackets > 0) {
        inCompleteAQBuffer->mAudioDataByteSize = numBytes;
        inCompleteAQBuffer->mPacketDescriptionCount = nPackets;

        // My modification starts from here: modifying the audio data
        SInt16 *testBuffer = (SInt16 *)inCompleteAQBuffer->mAudioData;
        for (int i = 0; i < (inCompleteAQBuffer->mAudioDataByteSize) / sizeof(SInt16); i++)
        {
            //printf("before modification %d", (int)*testBuffer);
            *testBuffer = (SInt16)(*testBuffer / 2); // say, some simple modification
            //printf("after modification %d", (int)*testBuffer);
            testBuffer++;
        }
        AudioQueueEnqueueBuffer(inAQ, inCompleteAQBuffer, 0, NULL);
    }
}
During debugging, the data in the buffer is displayed as expected, but the actual output is nothing but noise.
Here are some other strange behaviors of the code that are driving the whole team crazy:
If there is no change to the data (add/subtract 0, multiply by 1), or if the whole buffer is assigned to a constant (say 0, which simply mutes the audio), playback behaves normally (of course!). But if I perform anything more than that, the output still turns out to be noise.
When I hardcode a single tone as test audio, the output noise spreads into the other channel as well.
So where is the bug in this code? Or if I am on the wrong track, what is the correct approach to modify the audio data and perform playback CORRECTLY? Any insight will be sincerely appreciated.
Thank you very much :-)
Cheers,
Manca
Are you SURE the sample format is SInt16? And how many channels are there? You seem to treat the audio as a single-channel stream of shorts, but if the format is actually dual-channel Float32 or similar and you do the modifications on that, then the effect would be exactly as you describe, including the noise in the other channel.
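If you want to check rather than guess, the file's actual format can be read straight off the AudioFileID that the SpeakHere-derived code already has open. A sketch in Swift, since the same C API imports there (the function name is mine):

import AudioToolbox

// Print the data format of an open audio file before casting its buffers.
func printDataFormat(of fileID: AudioFileID) {
    var asbd = AudioStreamBasicDescription()
    var size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
    let status = AudioFileGetProperty(fileID, kAudioFilePropertyDataFormat, &size, &asbd)
    guard status == noErr else {
        print("AudioFileGetProperty failed: \(status)")
        return
    }
    let isFloat = (asbd.mFormatFlags & kAudioFormatFlagIsFloat) != 0
    print("bits/channel: \(asbd.mBitsPerChannel), channels: \(asbd.mChannelsPerFrame), float: \(isFloat)")
}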