Audiokit crashes when changing AKPlayer file - swift

I have recently migrated from AudioKit 3.7 to 4.2 (using CocoaPods), which is needed for Xcode 9.3. I followed the migration guide and changed AKAudioPlayer to AKPlayer.
The issue
When AKPlayer plays an audio file, AudioKit crashes with this error:
2018-04-17 09:32:43.042658+0200 hearfit[3509:2521326] [avae] AVAEInternal.h:103:_AVAE_CheckNoErr: [AVAudioEngineGraph.mm:3632:UpdateGraphAfterReconfig: (AUGraphParser::InitializeActiveNodesInOutputChain(ThisGraph, kOutputChainFullTraversal, *conn.srcNode, isChainActive)): error -10875
2018-04-17 09:32:43.049372+0200 hearfit[3509:2521326] *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'error -10875'
*** First throw call stack:
(0x1847d6d8c 0x1839905ec 0x1847d6bf8 0x18a0ff1a0 0x18a11bf58 0x18a12aab0 0x18a128cdc 0x18a1a1738 0x18a1a160c 0x10519192c 0x10519d2f4 0x10519d64c 0x10503afdc 0x10507c4a0 0x10507c01c 0x104f6d9cc 0x1852233d4 0x18477faa8 0x18477f76c 0x18477f010 0x18477cb60 0x18469cda8 0x18667f020 0x18e67d78c 0x10504dfd4 0x18412dfc0)
libc++abi.dylib: terminating with uncaught exception of type NSException
Sometimes it happens on the first play; other times the first play works correctly but the second one does not.
Everything worked great before the migration. I also tried keeping AKAudioPlayer: sounds play correctly, but AKFrequencyTracker no longer works.
Context
This is my setup:
Quick explanation:
AKPlayer 1 plays short audio files (between 1 and 5 seconds)
AKFrequencyTracker is used to display a plot
AKPlayer 2 plays background sound (volume must be configurable)
AKWhiteNoise allows doing some manual volume measurements (using the volume property of AKMixer 2)
Use case example
The user starts an exercise. A background sound plays continuously (looping) through AKPlayer 2, the user listens to a word (played with AKPlayer 1), and the plot is displayed. Next, several words are displayed on screen and the user must pick the right one. Then a new word is played... and so on.
So I have to dynamically change the file played by AKPlayer 1. All the code lives in a dedicated class, a singleton, and all the nodes are set up in the init() function.
// singleton
static let main = AudioPlayer()

private init() {
    let silenceUrl = Bundle.main.url(forResource: "silence", withExtension: "m4a", subdirectory: "audio")
    self.silenceFile = silenceUrl!
    self.mainPlayer = AKPlayer(url: self.silenceFile)!
    self.mainPlayer.volume = 1.0
    self.freqTracker = AKFrequencyTracker(self.mainPlayer, hopSize: 256, peakCount: 10)

    let noiseUrl = Bundle.main.url(forResource: "cocktail-party", withExtension: "m4a", subdirectory: "audio")
    self.noiseFile = noiseUrl!
    self.noisePlayer = AKPlayer(url: self.noiseFile)!
    self.noisePlayer.volume = 1.0
    self.noisePlayer.isLooping = true

    let mixer = AKMixer(self.freqTracker, self.noisePlayer)

    self.whiteNoise = AKWhiteNoise(amplitude: 1.0)
    self.whiteNoiseMixer = AKMixer(self.whiteNoise)
    self.whiteNoiseMixer.volume = 0

    self.mixer = AKMixer(mixer, self.whiteNoiseMixer)
    AudioKit.output = self.mixer

    do {
        try AudioKit.start()
    } catch {
        print(error)
    }

    // stop the white noise mixer right away
    self.whiteNoise.stop()
    self.whiteNoiseMixer.volume = self.whiteNoiseVolume

    self.mainPlayer.completionHandler = {
        DispatchQueue.main.async {
            if let timer = self.timer {
                timer.invalidate()
                self.timer = nil
            }
            if let completion = self.completionHandler {
                Timer.scheduledTimer(withTimeInterval: 0.5, repeats: false) { _ in
                    completion()
                    self.completionHandler = nil
                }
            }
        }
    }
}
To change the audio file of AKPlayer 1, I use this function, in the same class:
func play(fileUrl: URL, tracker: @escaping TrackerCallback, completion: (() -> Void)?) throws {
    self.completionHandler = completion
    let file = try AKAudioFile(forReading: fileUrl)
    self.mainPlayer.load(audioFile: file)
    self.mainPlayer.preroll()
    self.timer = Timer.scheduledTimer(withTimeInterval: self.trackerRefreshRate, repeats: true) { _ in
        tracker(self.freqTracker.frequency, self.freqTracker.amplitude)
    }
    self.mainPlayer.play()
}
Thank you.

I'm not sure what you are replacing into the player, but if the format of the file is different from what you had before (channels, sample rate, etc.), you should create a new AKPlayer instance rather than load into the same one. If your files are all the same format, then it should work OK.
That said, I haven't seen the crash you show.
Another thing that is dangerous in your code is force-unwrapping those optionals: you should guard against things being nil. Also, AKPlayer actually uses AVAudioFile, so there's no need for AKAudioFile.
guard let akfile = try? AVAudioFile(forReading: url) else { return }

if akfile.processingFormat.channelCount != player?.audioFile?.processingFormat.channelCount ||
    akfile.processingFormat.sampleRate != player?.audioFile?.processingFormat.sampleRate {
    AKLog("Need to create new player as formats have changed.")
}
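If the formats do differ, a minimal sketch of swapping in a fresh player could look like the following (it reuses the node names from the question's code; the exact rewiring of AKFrequencyTracker back into the mixer is an assumption, not something I've tested against this graph):

// Hypothetical sketch: recreate mainPlayer when the incoming file's format
// differs from the currently loaded one; otherwise load in place.
guard let newFile = try? AVAudioFile(forReading: fileUrl) else { return }

let currentFormat = self.mainPlayer.audioFile?.processingFormat
if newFile.processingFormat.channelCount != currentFormat?.channelCount ||
    newFile.processingFormat.sampleRate != currentFormat?.sampleRate {
    AKLog("Need to create new player as formats have changed.")
    self.mainPlayer.detach()                        // drop the old node from the graph
    self.mainPlayer = AKPlayer(audioFile: newFile)  // fresh player for the new format
    // re-attach the tracker to the new player, mirroring init()
    self.freqTracker = AKFrequencyTracker(self.mainPlayer, hopSize: 256, peakCount: 10)
} else {
    self.mainPlayer.load(audioFile: newFile)
}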

Related

stopContinuousRecognition() blocks the app for 5-7 seconds

I am trying to implement speech recognition using the Azure Speech SDK in an iOS project using Swift, and I ran into the problem that the recognition completion function (stopContinuousRecognition()) blocks the app UI for a few seconds, although there is no memory or processor load and no leak. I tried moving this call into DispatchQueue.main.async {}, but it made no difference. Has anyone faced this problem? Does it need to be put on a separate thread, and why does the function take so long to finish?
Edit:
It is very hard to provide a working example, but basically I am calling this function on a button press:
private func startListenAzureRecognition(lang: String) {
    let audioFormat = SPXAudioStreamFormat(usingPCMWithSampleRate: 8000, bitsPerSample: 16, channels: 1)
    azurePushAudioStream = SPXPushAudioInputStream(audioFormat: audioFormat!)
    let audioConfig = SPXAudioConfiguration(streamInput: azurePushAudioStream!)!

    var speechConfig: SPXSpeechConfiguration?
    do {
        let sub = "enter your code here"
        let region = "enter your region here"
        try speechConfig = SPXSpeechConfiguration(subscription: sub, region: region)
        speechConfig!.enableDictation()
        speechConfig?.speechRecognitionLanguage = lang
    } catch {
        print("error \(error) happened")
        speechConfig = nil
    }

    self.azureRecognition = try! SPXSpeechRecognizer(speechConfiguration: speechConfig!, audioConfiguration: audioConfig)

    self.azureRecognition!.addRecognizingEventHandler { reco, evt in
        if evt.result.text != nil && evt.result.text != "" {
            print(evt.result.text ?? "no result")
        }
    }
    self.azureRecognition!.addRecognizedEventHandler { reco, evt in
        if evt.result.text != nil && evt.result.text != "" {
            print(evt.result.text ?? "no result")
        }
    }

    do {
        try self.azureRecognition?.startContinuousRecognition()
    } catch {
        print("error \(error) happened")
    }
}
And when I press the button again to stop recognition, I am calling this function:
private func stopListenAzureRecognition() {
    DispatchQueue.main.async {
        print("start")
        // app blocks here
        try! self.azureRecognition?.stopContinuousRecognition()
        self.azurePushAudioStream!.close()
        self.azureRecognition = nil
        self.azurePushAudioStream = nil
        print("stop")
    }
}
Also, I am using raw audio data from the mic (recognizeOnce works perfectly for the first phrase, so the audio data is fine).
Try closing the stream first and then stopping the continuous recognition:
azurePushAudioStream!.close()
try! azureRecognition?.stopContinuousRecognition()
azureRecognition = nil
azurePushAudioStream = nil
You don't even need to do it asynchronously.
At least this worked for me.
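Putting that together with the question's function, the revised stop routine might look like this sketch (same property names as the question; the do/catch in place of the force-try is illustrative):

private func stopListenAzureRecognition() {
    // Close the push stream first so the recognizer is not left waiting
    // for more audio, then stop recognition and release both objects.
    azurePushAudioStream?.close()
    do {
        try azureRecognition?.stopContinuousRecognition()
    } catch {
        print("error \(error) happened")
    }
    azureRecognition = nil
    azurePushAudioStream = nil
}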

AKAmplitudeTracker amplitude getting 0.0 using audioKit

I want to get the volume from AKAmplitudeTracker but am getting -inf. What am I doing wrong? Please help me out.
AKAudioFile.cleanTempDirectory()
AKSettings.audioInputEnabled = true
AKSettings.bufferLength = .medium
AKSettings.defaultToSpeaker = true
AKSettings.playbackWhileMuted = true
AKSettings.enableRouteChangeHandling = true
AKSettings.enableCategoryChangeHandling = true
AKSettings.enableLogging = true
do {
    try AKSettings.setSession(category: .playAndRecord, with: .allowBluetoothA2DP)
} catch {
    print("error \(error.localizedDescription)")
}
microphone = AKMicrophone()!
tracker = AKAmplitudeTracker(microphone)
booster = AKBooster(tracker, gain: 0)
AudioKit.output = booster
try AudioKit.start()
=================
extension AKAmplitudeTracker {
    var volume: Decibel {
        return 20.0 * log10(amplitude)
    }
}
=================
Output of print(tracker.amplitude):
0.0
Had a quick look; it seems you've followed the basic setup, but you're not tracing the data over time correctly. Amplitude data is computed continuously from the microphone input, so to see how it evolves on a timeline you can use a timer, like so:
func reset() {
    // invalidate and clear the timer reference
    self.timer?.invalidate()
    self.timer = nil
}
func microphoneTracker() {
    guard self.timer == nil else { return }
    self.watcher()
    let timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
        log.info(self.akMicrophoneAmplitudeTracker.amplitude)
    }
    self.timer = timer
}
Change the withTimeInterval to how frequently you want to check the amplitude.
I think what I've put there is quite readable, but I'll break it down in a few words:
Keep a reference to the AKAmplitudeTracker in a property; here I've named it akMicrophoneAmplitudeTracker
Keep a reference to your timed event, which will check the amplitude value periodically
Compute the data in the closure body; the property holding the value is .amplitude
The computation in the example is a logger that prints .amplitude
When required, use the .invalidate() method to stop the timer
A few other things to double-check in your code: make sure the tracker is part of the signal chain, as that's an AVAudioEngine requirement. I've also noticed in some other people's code a call to the .start method on the AKAmplitudeTracker, as follows:
akMicrophoneAmplitudeTracker.start()
To finish, keep in mind that if you are testing through the Simulator, check the microphone settings of your host machine and expect amplitudes that may differ from those on actual hardware.
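One more note on the -inf from the question's extension: log10(0) is negative infinity, so any decibel conversion needs a guard for silence. A guarded sketch (the -160 floor is an arbitrary choice, not an AudioKit constant):

extension AKAmplitudeTracker {
    // Hypothetical guarded conversion: return a finite silence floor
    // instead of -inf when the amplitude is zero.
    var safeVolume: Decibel {
        guard amplitude > 0 else { return -160.0 }
        return 20.0 * log10(amplitude)
    }
}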

AudioKit Creating Sinewave Tone When Returning from Background

I'm using AudioKit to run an AKSequencer() that plays both mp3 and wav files using AKMIDISampler(). Everything works great, except when the app has been in the background for 30+ minutes and is then brought back up for use. It then seems to lose all of its audio connections and plays the "missing file" sine-wave tone mentioned in other threads. The app can happily enter the background momentarily, the user can quit, etc., without the tone. It only seems to happen when the app is left in the background for a long period and then brought up again.
I've tried changing the order of AudioKit.start() and file loading, but nothing seems to completely eliminate the issue.
My current workaround is simply to prevent the user's display from timing out, but that does not address many of the use cases where the issue occurs.
Is there a way to handle whatever error I'm creating that produces this tone? Here is a representative example of what I'm doing with ~40 audio files.
//viewController
override func viewDidLoad() {
    super.viewDidLoad()
    sequencer.setupSequencer()
}

class SamplerWav {
    let audioWav = AKMIDISampler()

    func loadWavFile() {
        try? audioWav.loadWav("some_wav_audio_file")
    }
}

class SamplerMp3 {
    let audioMp3 = AKMIDISampler()
    let audioMp3_akAudioFile = try! AKAudioFile(readFileName: "some_other_audio_file.mp3")

    func loadMp3File() {
        try? audioMp3.loadAudioFile(audioMp3_akAudioFile)
    }
}

class Sequencer {
    let sequencer = AKSequencer()
    let mixer = AKMixer()
    let subMix = AKMixer()
    let samplerWav = SamplerWav()
    let samplerMp3 = SamplerMp3()
    var callbackTrack: AKMusicTrack!
    let callbackInstr = AKMIDICallbackInstrument()

    func setupSequencer() {
        AudioKit.output = mixer
        try! AudioKit.start()

        callbackTrack = sequencer.newTrack()
        callbackTrack?.setMIDIOutput(callbackInstr.midiIn)

        samplerWav.loadWavFile()
        samplerMp3.loadMp3File()

        samplerWav.audioWav >>> subMix
        samplerMp3.audioMp3 >>> subMix
        subMix >>> mixer
    }

    //Typically run from a callback track
    func playbackSomeSound() {
        try? samplerWav.audioWav.play(noteNumber: 60, velocity: 100, channel: 1)
    }
}
Thanks! I'm a big fan of AudioKit.
After some trial and error, here's a workflow that seems to address the issue in my circumstances:
-create my callback track(s) -once- from viewDidLoad
-stop AudioKit, and call .detach() on all my AKMIDISampler tracks and any routing, in willResignActive
-start AudioKit (again), and reload and re-route all of the audio files/tracks, in didBecomeActive (a rough sketch of this wiring is below)
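For anyone wiring this up, a sketch of the lifecycle hooks might look like the following (the notification names are standard UIKit; the commented steps stand in for the detach/reload work described above, and whether AudioKit.start()/stop() throw depends on the 4.x version):

// Hypothetical wiring for the workflow above, registered once (e.g. in viewDidLoad).
NotificationCenter.default.addObserver(forName: UIApplication.willResignActiveNotification,
                                       object: nil, queue: .main) { _ in
    try? AudioKit.stop()
    // call .detach() on the AKMIDISampler nodes and tear down routing here
}
NotificationCenter.default.addObserver(forName: UIApplication.didBecomeActiveNotification,
                                       object: nil, queue: .main) { _ in
    try? AudioKit.start()
    // reload and re-route the audio files/tracks here
}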

AKOfflineRenderNode - Scheduling AKAudioPlayers, only the last player renders

Getting some weird results when trying to render a sequence of AKAudioPlayers with AudioKit 4.0, Swift 4 on iOS 11.1
I'm aware of the AudioKit.renderToFile alternative on the development branch (https://github.com/AudioKit/AudioKit/commit/09aedf7c119a399ab00026ddfb91ae6778570176) but would like to cover iOS 9+ if possible.
Expected result:
A long audio file with each file (URL) rendered in sequence
Actual result:
Only the last scheduled file is rendered (at the correct offset in the resulting wav file)
Weirdly, if I schedule them all at the 0 offset, they all get rendered. Also, if I play things back without rendering, it sounds correct (though I have to adjust the AVAudioTime to use mach_absolute_time)
It almost seems like scheduling an AKAudioPlayer cancels the previous one.
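For reference, the live-playback scheduling mentioned above converts the offset into an absolute host time roughly like this (a sketch, not my exact code):

// Sketch only: for live playback, anchor the schedule offset at "now" via
// mach_absolute_time(), as opposed to hostTime 0 when rendering offline.
let now = mach_absolute_time()
let avTime = AVAudioTime(hostTime: now + AVAudioTime.hostTime(forSeconds: scheduleTime))
player.play(at: avTime)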
Setup:
class func initialize() {
    // ....
    do {
        try AKSettings.setSession(category: .playAndRecord, with: .allowBluetoothA2DP)
    } catch {
        AKLog("Could not set session category.")
    }
    //AKSettings.playbackWhileMuted = true
    AKSettings.defaultToSpeaker = true

    mainMixer = AKMixer()
    offlineRender = AKOfflineRenderNode()
    mainMixer! >>> offlineRender!
    AudioKit.output = offlineRender!
    AudioKit.start()
    // ....
Rendering:
class func testRender(urls: [URL], dest: URL, offset: TimeInterval = 2) {
    // Stop / Start AudioKit when switching internalRenderEnabled, otherwise I get the following error:
    // *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'player started when engine not running'
    AudioKit.stop()

    var players = [AKAudioPlayer]()
    var scheduleTime: TimeInterval = 0

    // create players
    for url in urls {
        do {
            let file = try AKAudioFile(forReading: url)
            let player = try AKAudioPlayer(file: file)
            players.append(player)
            player.connect(to: mainMixer!)
            print("Connecting player")
        } catch {
            print("error reading")
        }
    }

    offlineRender!.internalRenderEnabled = false
    AudioKit.start()

    for player in players {
        // 0 instead of mach_absolute_time(), otherwise the result is silent
        let avTime = AKAudioPlayer.secondsToAVAudioTime(hostTime: 0, time: scheduleTime)
        // schedule and play according to:
        // https://stackoverflow.com/questions/45799686/how-to-apply-audio-effect-to-a-file-and-write-to-filesystem-ios/45807586#45807586
        player.schedule(from: 0, to: player.duration, avTime: nil)
        player.play(at: avTime)
        scheduleTime += offset
    }

    // add some padding
    scheduleTime += 3
    let duration = scheduleTime
    do {
        try offlineRender!.renderToURL(dest, seconds: duration)
    } catch {
        print("error rendering")
    }

    // cleanup
    players.forEach { $0.schedule(from: 0, to: $0.duration, avTime: nil) }
    players.forEach { $0.stop() }
    players.forEach { $0.disconnectOutput() }
    offlineRender!.internalRenderEnabled = true
}
Appreciate any help!
AKOfflineRenderNode has been deprecated as of iOS 11.0. Version 4.0.4 has an AudioKit.renderToFile method to replace it. It was updated recently (in late 2017).
So it looks like AKOfflineRenderNode is indeed deprecated in the coming versions of AudioKit and is not working on iOS 11. Reading the comments discussing the issue on GitHub, it sounds like the plan is to encapsulate both the new (iOS 11+) offline rendering and the old (iOS 9-10) under a common interface (AudioKit.renderToFile). However, for now it seems to be iOS 11 only.
After some testing with the dev version (install instructions here: https://github.com/audiokit/AudioKit/blob/master/Frameworks/README.md) I got the following code to work as intended:
try AudioKit.renderToFile(outputFile, seconds: duration, prerender: {
    var scheduleTime: TimeInterval = 0
    for player in players {
        let dspTime = AVAudioTime(sampleTime: AVAudioFramePosition(scheduleTime * AKSettings.sampleRate),
                                  atRate: AKSettings.sampleRate)
        player.play(at: dspTime)
        scheduleTime += offset
    }
})
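For context, outputFile above is an AVAudioFile opened for writing; I created it along these lines (a sketch, and the choice of settings is an assumption):

// Hypothetical setup for outputFile: open an AVAudioFile for writing at the
// destination URL, reusing the engine's current format settings.
let outputFile = try AVAudioFile(forWriting: dest, settings: AudioKit.format.settings)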
Unless someone can provide a workaround that gets AKOfflineRenderNode working on iOS 11, this is the best answer I could find until the official release of AudioKit with renderToFile implemented.

How to create files with different names using the least amount of processing power in Swift?

I am writing a video app that records video only when triggered and, at the end of all the recordings, merges them together into one.
I was just wondering if there is a process in Swift to make sure the name of the next recording's file is different from the previous one? I know ways of doing this that are fine, but I am a bit of a memory freak and was wondering if Swift has a built-in answer to this problem.
Edit:
This works with the variable filenamechanger. I was just wondering if there is an even better way.
var filenamechanger = 0

if motiondetected == true {
    do {
        let documentsDir = try FileManager.default.url(for: .documentDirectory,
                                                       in: .userDomainMask,
                                                       appropriateFor: nil,
                                                       create: true)
        filenamechanger += 1 // name changer
        let fileURL = URL(string: "test\(filenamechanger).mp4", relativeTo: documentsDir)!
        do {
            try FileManager.default.removeItem(at: fileURL)
        } catch {
        }

        self.movieOutput = try MovieOutput(URL: fileURL, size: Size(width: 480, height: 640), liveVideo: true)
        self.camera.audioEncodingTarget = self.movieOutput
        self.camera --> self.movieOutput!
        self.movieOutput!.startRecording()
        sleep(3)
        self.camera.audioEncodingTarget = nil
        self.movieOutput = nil
        motiondetected = false
    } catch {
        print("recording didn't work")
    }
}
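For what it's worth, if the goal is simply a name that can never collide without keeping a counter around, Foundation's UUID can generate one. A sketch (the "test-" prefix is carried over from the edit above; the GPUImage recording setup stays the same):

// Hypothetical alternative: derive a collision-free file name from a UUID,
// so no counter variable has to be stored between recordings.
let documentsDir = try FileManager.default.url(for: .documentDirectory,
                                               in: .userDomainMask,
                                               appropriateFor: nil,
                                               create: true)
let fileURL = documentsDir.appendingPathComponent("test-\(UUID().uuidString).mp4")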