Crossfade Loop in AudioKit - Swift

Does AudioKit provide any options to crossfade a loop? I've experimented with using an AKBooster to fade an AKSequencer in and out, but I vary the tempo/rate on the fly, which complicates working out when to start the fades. AKWaveTable provides a great looping option, but I'm not sure whether there's any way to create a "soft" loop that crossfades at the loop point. I'm looking to soft-loop the following example:
import AudioKit
class ViewController: UIViewController {
let mixer = AKMixer()
let wavePlayer = AKWaveTable(file: (try! AKAudioFile(readFileName: "sample.mp3")), startPoint: Sample(44100), endPoint: Sample(44100), rate: 1, volume: 1, maximumSamples: 0, completionHandler: {}, loadCompletionHandler: {})
func play(){
wavePlayer.play()
}
override func viewDidLoad() {
wavePlayer >>> mixer
AudioKit.output = mixer
wavePlayer.loopEnabled = true
wavePlayer.play(from: Sample(44100))
do {
try AudioKit.start()
} catch {
AKLog("AudioKit did not start!")
}
play()
}
}
Thanks!
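One workaround that sidesteps the tempo bookkeeping entirely is to bake the crossfade into the buffer itself: read the file into an AVAudioPCMBuffer, mix the last stretch of frames into the first with an equal-power curve, drop the tail, and loop the shortened buffer. The sketch below is only a hedged illustration using plain AVFoundation (so it sits alongside whatever AudioKit node plays it); makeCrossfadedLoop and the fade length are my own names and choices, not AudioKit API.
import AVFoundation
// Builds a buffer whose tail is crossfaded into its head, so looping it plays back seamlessly.
// crossfadeFrames is the overlap length, e.g. 4410 frames ≈ 100 ms at 44.1 kHz.
func makeCrossfadedLoop(from url: URL, crossfadeFrames: AVAudioFrameCount) throws -> AVAudioPCMBuffer {
    let file = try AVAudioFile(forReading: url)
    let format = file.processingFormat
    let totalFrames = AVAudioFrameCount(file.length)
    guard let source = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: totalFrames) else {
        throw NSError(domain: "CrossfadeLoop", code: -1)
    }
    try file.read(into: source)
    let fade = min(crossfadeFrames, totalFrames / 2)
    let loopFrames = totalFrames - fade // the shortened, loopable length
    guard let loop = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: loopFrames),
          let src = source.floatChannelData,
          let dst = loop.floatChannelData else {
        throw NSError(domain: "CrossfadeLoop", code: -2)
    }
    loop.frameLength = loopFrames
    for channel in 0..<Int(format.channelCount) {
        // Copy the body unchanged.
        for frame in 0..<Int(loopFrames) {
            dst[channel][frame] = src[channel][frame]
        }
        // Equal-power crossfade: mix the discarded tail into the head.
        for i in 0..<Int(fade) {
            let t = Float(i) / Float(fade)
            let tailFrame = Int(loopFrames) + i // frames past the loop point in the source
            dst[channel][i] = src[channel][i] * sinf(t * .pi / 2) + src[channel][tailFrame] * cosf(t * .pi / 2)
        }
    }
    return loop
}
The returned buffer can then be looped by anything that accepts AVAudioPCMBuffers, for example AVAudioPlayerNode's scheduleBuffer(_:at:options:completionHandler:) with the .loops option, and the loop stays "soft" regardless of what rate you play it at.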

Related

Is there a way to create a spectrogram of an audio file using Swift and AudioKit?

I am trying to create a spectrogram, like the one in the image, from an audio file using Swift for a macOS app. I am using AppKit but could implement SwiftUI as well. I came across AudioKit and it seems like the perfect library to use for this type of thing, but I have not been able to find any examples of what I am looking for in any of the AudioKit repositories, AudioKit UI, nor the Cookbook. Is this something that is possible with AudioKit? If so, can anyone help me with this?
Thanks so much!
I have previously tried using Apple's example project and changed the code in the AudioSpectrogram + AVCaptureAudioDataOutputSampleBufferDelegate file. The original code is as follows:
extension AudioSpectrogram: AVCaptureAudioDataOutputSampleBufferDelegate {
public func captureOutput(_ output: AVCaptureOutput,
didOutput sampleBuffer: CMSampleBuffer,
from connection: AVCaptureConnection) {
var audioBufferList = AudioBufferList()
var blockBuffer: CMBlockBuffer?
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
sampleBuffer,
bufferListSizeNeededOut: nil,
bufferListOut: &audioBufferList,
bufferListSize: MemoryLayout.stride(ofValue: audioBufferList),
blockBufferAllocator: nil,
blockBufferMemoryAllocator: nil,
flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
blockBufferOut: &blockBuffer)
guard let data = audioBufferList.mBuffers.mData else {
return
}
/// The _Nyquist frequency_ is the highest frequency that a sampled system can properly
/// reproduce and is half the sampling rate of such a system. Although this app doesn't use
/// `nyquistFrequency` you may find this code useful to add an overlay to the user interface.
if nyquistFrequency == nil {
let duration = Float(CMSampleBufferGetDuration(sampleBuffer).value)
let timescale = Float(CMSampleBufferGetDuration(sampleBuffer).timescale)
let numsamples = Float(CMSampleBufferGetNumSamples(sampleBuffer))
nyquistFrequency = 0.5 / (duration / timescale / numsamples)
}
if self.rawAudioData.count < AudioSpectrogram.sampleCount * 2 {
let actualSampleCount = CMSampleBufferGetNumSamples(sampleBuffer)
let ptr = data.bindMemory(to: Int16.self, capacity: actualSampleCount)
let buf = UnsafeBufferPointer(start: ptr, count: actualSampleCount)
rawAudioData.append(contentsOf: Array(buf))
}
while self.rawAudioData.count >= AudioSpectrogram.sampleCount {
let dataToProcess = Array(self.rawAudioData[0 ..< AudioSpectrogram.sampleCount])
self.rawAudioData.removeFirst(AudioSpectrogram.hopCount)
self.processData(values: dataToProcess)
}
createAudioSpectrogram()
}
func configureCaptureSession() {
// Also note that:
//
// When running in iOS, you must add a "Privacy - Microphone Usage
// Description" entry.
//
// When running in macOS, you must add a "Privacy - Microphone Usage
// Description" entry to `Info.plist`, and check "audio input" and
// "camera access" under the "Resource Access" category of "Hardened
// Runtime".
switch AVCaptureDevice.authorizationStatus(for: .audio) {
case .authorized:
break
case .notDetermined:
sessionQueue.suspend()
AVCaptureDevice.requestAccess(for: .audio,
completionHandler: { granted in
if !granted {
fatalError("App requires microphone access.")
} else {
self.configureCaptureSession()
self.sessionQueue.resume()
}
})
return
default:
// Users can add authorization in "Settings > Privacy > Microphone"
// on an iOS device, or "System Preferences > Security & Privacy >
// Microphone" on a macOS device.
fatalError("App requires microphone access.")
}
captureSession.beginConfiguration()
#if os(macOS)
// Note that in macOS, you can change the sample rate, for example to
// `AVSampleRateKey: 22050`. This reduces the Nyquist frequency and
// increases the resolution at lower frequencies.
audioOutput.audioSettings = [
AVFormatIDKey: kAudioFormatLinearPCM,
AVLinearPCMIsFloatKey: false,
AVLinearPCMBitDepthKey: 16,
AVNumberOfChannelsKey: 1]
#endif
if captureSession.canAddOutput(audioOutput) {
captureSession.addOutput(audioOutput)
} else {
fatalError("Can't add `audioOutput`.")
}
guard
let microphone = AVCaptureDevice.default(.builtInMicrophone,
for: .audio,
position: .unspecified),
let microphoneInput = try? AVCaptureDeviceInput(device: microphone) else {
fatalError("Can't create microphone.")
}
if captureSession.canAddInput(microphoneInput) {
captureSession.addInput(microphoneInput)
}
captureSession.commitConfiguration()
}
/// Starts the audio spectrogram.
func startRunning() {
sessionQueue.async {
if AVCaptureDevice.authorizationStatus(for: .audio) == .authorized {
self.captureSession.startRunning()
}
}
}
}
I got rid of the configureCaptureSession function and replaced the rest of the code, ending up with the following:
public func captureBuffer() {
var samplesArray:[Int16] = []
let asset = AVAsset(url: audioFileUrl)
let reader = try! AVAssetReader(asset: asset)
let track = asset.tracks(withMediaType: AVMediaType.audio)[0]
let settings = [
AVFormatIDKey : kAudioFormatLinearPCM
]
let readerOutput = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
reader.add(readerOutput)
reader.startReading()
while let buffer = readerOutput.copyNextSampleBuffer() {
var audioBufferList = AudioBufferList(mNumberBuffers: 1, mBuffers: AudioBuffer(mNumberChannels: 1, mDataByteSize: 0, mData: nil))
var blockBuffer: CMBlockBuffer?
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
buffer,
bufferListSizeNeededOut: nil,
bufferListOut: &audioBufferList,
bufferListSize: MemoryLayout<AudioBufferList>.size,
blockBufferAllocator: nil,
blockBufferMemoryAllocator: nil,
flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
blockBufferOut: &blockBuffer
);
let buffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))
for buffer in buffers {
let samplesCount = Int(buffer.mDataByteSize) / MemoryLayout<Int16>.size
let samplesPointer = audioBufferList.mBuffers.mData!.bindMemory(to: Int16.self, capacity: samplesCount)
let samples = UnsafeMutableBufferPointer<Int16>(start: samplesPointer, count: samplesCount)
for sample in samples {
// do something with your sample (which is an Int16 amplitude value)
samplesArray.append(sample)
}
}
guard let data = audioBufferList.mBuffers.mData else {
return
}
/// The _Nyquist frequency_ is the highest frequency that a sampled system can properly
/// reproduce and is half the sampling rate of such a system. Although this app doesn't use
/// `nyquistFrequency` you may find this code useful to add an overlay to the user interface.
if nyquistFrequency == nil {
let duration = Float(CMSampleBufferGetDuration(buffer).value)
let timescale = Float(CMSampleBufferGetDuration(buffer).timescale)
let numsamples = Float(CMSampleBufferGetNumSamples(buffer))
nyquistFrequency = 0.5 / (duration / timescale / numsamples)
}
if self.rawAudioData.count < AudioSpectrogram.sampleCount * 2 {
let actualSampleCount = CMSampleBufferGetNumSamples(buffer)
let ptr = data.bindMemory(to: Int16.self, capacity: actualSampleCount)
let buf = UnsafeBufferPointer(start: ptr, count: actualSampleCount)
rawAudioData.append(contentsOf: Array(buf))
}
while self.rawAudioData.count >= AudioSpectrogram.sampleCount {
let dataToProcess = Array(self.rawAudioData[0 ..< AudioSpectrogram.sampleCount])
self.rawAudioData.removeFirst(AudioSpectrogram.hopCount)
self.processData(values: dataToProcess)
}
createAudioSpectrogram()
}
}
In the AudioSpectrogram: CALayer file, I changed the original lines 10-30 from
public class AudioSpectrogram: CALayer {
// MARK: Initialization
override init() {
super.init()
contentsGravity = .resize
configureCaptureSession()
audioOutput.setSampleBufferDelegate(self,
queue: captureQueue)
}
required init?(coder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
override public init(layer: Any) {
super.init(layer: layer)
}
to the following:
public class AudioSpectrogram: CALayer {
@objc var audioFileUrl: URL
// MARK: Initialization
override init() {
self.audioFileUrl = selectedTrackUrl!
super.init()
contentsGravity = .resize
captureBuffer()
}
required init?(coder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
override public init(layer: Any) {
self.audioFileUrl = selectedTrackUrl!
super.init(layer: layer)
}
The changed code allows me to specify the audio file to use when the Spectrogram is called from another area in my app.
The following is an example of what I am trying to achieve; it was generated with FFmpeg.
[Example spectrogram]
This is the output I get from my code:
[Output image]
AudioKit is not the tool you want for this. You want AVFoundation. Apple has an example project of exactly what you're describing.
The tool at the heart of this is a DCT (discrete cosine transform) to convert windows of samples into a collection of component frequencies you can visualize. AVFoundation is the tool you use to turn your audio file or live recording into a buffer of audio samples so you can apply the DCT.
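For the "file into samples" half of that, AVAudioFile can read directly into a deinterleaved Float32 buffer, which avoids the AVAssetReader/Int16 detour in the question. This is only a minimal sketch (loadSamples is a made-up helper name); the windowing and DCT stages would then follow Apple's sample.
import AVFoundation
// Reads an audio file into Float samples (first channel), ready to be windowed and fed to a DCT/FFT.
func loadSamples(from url: URL) throws -> (samples: [Float], sampleRate: Double) {
    let file = try AVAudioFile(forReading: url)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else {
        throw NSError(domain: "Spectrogram", code: -1)
    }
    try file.read(into: buffer) // processingFormat is deinterleaved Float32
    guard let channelData = buffer.floatChannelData else {
        throw NSError(domain: "Spectrogram", code: -2)
    }
    // First channel only; average the channels here if you need true mono.
    let samples = Array(UnsafeBufferPointer(start: channelData[0], count: Int(buffer.frameLength)))
    return (samples, file.processingFormat.sampleRate)
}
From there you slide a window over the samples (AudioSpectrogram.sampleCount samples with a hop of AudioSpectrogram.hopCount, as in the code above), transform each window with Accelerate, and draw one column of the spectrogram per window.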
There actually is a Spectrogram in the AudioKitUI Swift package: https://github.com/AudioKit/AudioKitUI/blob/main/Sources/AudioKitUI/Visualizations/SpectrogramView.swift
You would need to pass it an AudioKit Node but it should be interchangeable with the other visualizers in the Cookbook.
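If you go the AudioKitUI route instead, usage would look roughly like the sketch below (AudioKit 5 / SwiftUI). Treat it as an assumption to verify against the linked source: I'm assuming SpectrogramView is initialized with a node, like the other visualizers, and the AudioPlayer/AudioEngine wiring here is only illustrative.
import AudioKit
import AudioKitUI
import SwiftUI
struct SpectrogramDemoView: View {
    // Hypothetical wiring: an engine whose output is a file player, visualized by SpectrogramView.
    let engine = AudioEngine()
    let player = AudioPlayer()
    var body: some View {
        SpectrogramView(node: player) // assumed initializer; check the AudioKitUI source
            .onAppear {
                engine.output = player
                // Load your own file into the player here, then:
                try? engine.start()
                player.play()
            }
    }
}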

Timing issues: Metronome using AVAudioEngine scheduleBuffer's completion handler

I want to build a simple metronome app using AVAudioEngine with these features:
Solid timing (I know, I know, I should be using Audio Units, but I'm still struggling with Core Audio stuff / Obj-C wrappers etc.)
Two different sounds on the "1" and on beats "2"/"3"/"4" of the bar.
Some kind of visual feedback (at least a display of the current beat) which needs to be in sync with audio.
So I have created two short click sounds (26 ms / 1150 samples @ 16 bit / 44.1 kHz / stereo WAV files) and load them into 2 buffers. Their lengths will be set to represent one period.
My UI setup is simple: A button to toggle start / pause and a label to display the current beat (my "counter" variable).
When using scheduleBuffer's loop property the timing is okay, but as I need to have 2 different sounds and a way to sync/update my UI while looping the clicks, I cannot use this. I figured I could use the completionHandler instead, which then restarts my playClickLoop() function - see my code attached below.
Unfortunately, while implementing this I didn't really measure the accuracy of the timing. As it now turns out, when setting bpm to 120 it plays the loop at only about 117.5 bpm - quite steadily, but still way too slow. When bpm is set to 180, my app plays at about 172.3 bpm.
What's going on here? Is this delay introduced by using the completionHandler? Is there any way to improve the timing? Or is my whole approach wrong?
Thanks in advance!
Alex
import UIKit
import AVFoundation
class ViewController: UIViewController {
private let engine = AVAudioEngine()
private let player = AVAudioPlayerNode()
private let fileName1 = "sound1.wav"
private let fileName2 = "sound2.wav"
private var file1: AVAudioFile! = nil
private var file2: AVAudioFile! = nil
private var buffer1: AVAudioPCMBuffer! = nil
private var buffer2: AVAudioPCMBuffer! = nil
private let sampleRate: Double = 44100
private var bpm: Double = 180.0
private var periodLengthInSamples: Double { 60.0 / bpm * sampleRate }
private var counter: Int = 0
private enum MetronomeState {case run; case stop}
private var state: MetronomeState = .stop
@IBOutlet weak var label: UILabel!
override func viewDidLoad() {
super.viewDidLoad()
//
// MARK: Loading buffer1
//
let path1 = Bundle.main.path(forResource: fileName1, ofType: nil)!
let url1 = URL(fileURLWithPath: path1)
do {file1 = try AVAudioFile(forReading: url1)
buffer1 = AVAudioPCMBuffer(
pcmFormat: file1.processingFormat,
frameCapacity: AVAudioFrameCount(periodLengthInSamples))
try file1.read(into: buffer1!)
buffer1.frameLength = AVAudioFrameCount(periodLengthInSamples)
} catch { print("Error loading buffer1 \(error)") }
//
// MARK: Loading buffer2
//
let path2 = Bundle.main.path(forResource: fileName2, ofType: nil)!
let url2 = URL(fileURLWithPath: path2)
do {file2 = try AVAudioFile(forReading: url2)
buffer2 = AVAudioPCMBuffer(
pcmFormat: file2.processingFormat,
frameCapacity: AVAudioFrameCount(periodLengthInSamples))
try file2.read(into: buffer2!)
buffer2.frameLength = AVAudioFrameCount(periodLengthInSamples)
} catch { print("Error loading buffer2 \(error)") }
//
// MARK: Configure + start engine
//
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: file1.processingFormat)
engine.prepare()
do { try engine.start() } catch { print(error) }
}
//
// MARK: Play / Pause toggle action
//
@IBAction func buttonPresed(_ sender: UIButton) {
sender.isSelected = !sender.isSelected
if player.isPlaying {
state = .stop
} else {
state = .run
try! engine.start()
player.play()
playClickLoop()
}
}
private func playClickLoop() {
//
// MARK: Completion handler
//
let scheduleBufferCompletionHandler = { [unowned self] /*(_: AVAudioPlayerNodeCompletionCallbackType)*/ in
DispatchQueue.main.async {
switch state {
case .run:
self.playClickLoop()
case .stop:
engine.stop()
player.stop()
counter = 0
}
}
}
//
// MARK: Schedule buffer + play
//
if engine.isRunning {
counter += 1; if counter > 4 {counter = 1} // Counting from 1 to 4 only
if counter == 1 {
//
// MARK: Playing sound1 on beat 1
//
player.scheduleBuffer(buffer1,
at: nil,
options: [.interruptsAtLoop],
//completionCallbackType: .dataPlayedBack,
completionHandler: scheduleBufferCompletionHandler)
} else {
//
// MARK: Playing sound2 on beats 2, 3 & 4
//
player.scheduleBuffer(buffer2,
at: nil,
options: [.interruptsAtLoop],
//completionCallbackType: .dataRendered,
completionHandler: scheduleBufferCompletionHandler)
}
//
// MARK: Display current beat on UILabel + to console
//
DispatchQueue.main.async {
self.label.text = String(self.counter)
print(self.counter)
}
}
}
}
As Phil Freihofner suggested in his answer below, here's the solution to my own problem:
The most important lesson I learned: the completionHandler callback provided by the scheduleBuffer command is not called early enough to trigger re-scheduling of another buffer while the first one is still playing. This results in (inaudible) gaps between the sounds and messes up the timing. There must already be another buffer "in reserve", i.e. one that was scheduled before the current one finishes playing.
Using the completionCallbackType parameter of scheduleBuffer didn't change the timing of the completion callback much: when setting it to .dataRendered or .dataConsumed the callback was already too late to re-schedule another buffer. Using .dataPlayedBack made things only worse :-)
So, to achieve seamless playback (with correct timing!) I simply activated a timer that triggers twice per period. All odd-numbered timer events re-schedule another buffer.
Sometimes the solution is so easy it's embarrassing... But sometimes you have to try almost every wrong approach first to find it ;-)
My complete working solution (including the two sound files and the UI) can be found here on GitHub:
https://github.com/Alexander-Nagel/Metronome-using-AVAudioEngine
import UIKit
import AVFoundation
private let DEBUGGING_OUTPUT = true
class ViewController: UIViewController{
private var engine = AVAudioEngine()
private var player = AVAudioPlayerNode()
private var mixer = AVAudioMixerNode()
private let fileName1 = "sound1.wav"
private let fileName2 = "sound2.wav"
private var file1: AVAudioFile! = nil
private var file2: AVAudioFile! = nil
private var buffer1: AVAudioPCMBuffer! = nil
private var buffer2: AVAudioPCMBuffer! = nil
private let sampleRate: Double = 44100
private var bpm: Double = 133.33
private var periodLengthInSamples: Double {
60.0 / bpm * sampleRate
}
private var timerEventCounter: Int = 1
private var currentBeat: Int = 1
private var timer: Timer! = nil
private enum MetronomeState {case running; case stopped}
private var state: MetronomeState = .stopped
@IBOutlet weak var beatLabel: UILabel!
@IBOutlet weak var bpmLabel: UILabel!
@IBOutlet weak var playPauseButton: UIButton!
override func viewDidLoad() {
super.viewDidLoad()
bpmLabel.text = "\(bpm) BPM"
setupAudio()
}
private func setupAudio() {
//
// MARK: Loading buffer1
//
let path1 = Bundle.main.path(forResource: fileName1, ofType: nil)!
let url1 = URL(fileURLWithPath: path1)
do {file1 = try AVAudioFile(forReading: url1)
buffer1 = AVAudioPCMBuffer(
pcmFormat: file1.processingFormat,
frameCapacity: AVAudioFrameCount(periodLengthInSamples))
try file1.read(into: buffer1!)
buffer1.frameLength = AVAudioFrameCount(periodLengthInSamples)
} catch { print("Error loading buffer1 \(error)") }
//
// MARK: Loading buffer2
//
let path2 = Bundle.main.path(forResource: fileName2, ofType: nil)!
let url2 = URL(fileURLWithPath: path2)
do {file2 = try AVAudioFile(forReading: url2)
buffer2 = AVAudioPCMBuffer(
pcmFormat: file2.processingFormat,
frameCapacity: AVAudioFrameCount(periodLengthInSamples))
try file2.read(into: buffer2!)
buffer2.frameLength = AVAudioFrameCount(periodLengthInSamples)
} catch { print("Error loading buffer2 \(error)") }
//
// MARK: Configure + start engine
//
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: file1.processingFormat)
engine.prepare()
do { try engine.start() } catch { print(error) }
}
//
// MARK: Play / Pause toggle action
//
@IBAction func buttonPresed(_ sender: UIButton) {
sender.isSelected = !sender.isSelected
if state == .running {
//
// PAUSE: Stop timer and reset counters
//
state = .stopped
timer.invalidate()
timerEventCounter = 1
currentBeat = 1
} else {
//
// START: Pre-load first sound and start timer
//
state = .running
scheduleFirstBuffer()
startTimer()
}
}
private func startTimer() {
if DEBUGGING_OUTPUT {
print("# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ")
print()
}
//
// Compute interval for 2 events per period and set up timer
//
let timerIntervalInSeconds = 0.5 * self.periodLengthInSamples / sampleRate
timer = Timer.scheduledTimer(withTimeInterval: timerIntervalInSeconds, repeats: true) { timer in
//
// Only for debugging: Print counter values at start of timer event
//
// Values at begin of timer event
if DEBUGGING_OUTPUT {
print("timerEvent #\(self.timerEventCounter) at \(self.bpm) BPM")
print("Entering \ttimerEventCounter: \(self.timerEventCounter) \tcurrentBeat: \(self.currentBeat) ")
}
//
// Schedule next buffer at 1st, 3rd, 5th & 7th timerEvent
//
var bufferScheduled: String = "" // only needed for debugging / console output
switch self.timerEventCounter {
case 7:
//
// Schedule main sound
//
self.player.scheduleBuffer(self.buffer1, at:nil, options: [], completionHandler: nil)
bufferScheduled = "buffer1"
case 1, 3, 5:
//
// Schedule subdivision sound
//
self.player.scheduleBuffer(self.buffer2, at:nil, options: [], completionHandler: nil)
bufferScheduled = "buffer2"
default:
bufferScheduled = ""
}
//
// Display current beat & increase currentBeat (1...4) at 2nd, 4th, 6th & 8th timerEvent
//
if self.timerEventCounter % 2 == 0 {
DispatchQueue.main.async {
self.beatLabel.text = String(self.currentBeat)
}
self.currentBeat += 1; if self.currentBeat > 4 {self.currentBeat = 1}
}
//
// Increase timerEventCounter, two events per beat.
//
self.timerEventCounter += 1; if self.timerEventCounter > 8 {self.timerEventCounter = 1}
//
// Only for debugging: Print counter values at end of timer event
//
if DEBUGGING_OUTPUT {
print("Exiting \ttimerEventCounter: \(self.timerEventCounter) \tcurrentBeat: \(self.currentBeat) \tscheduling: \(bufferScheduled)")
print()
}
}
}
private func scheduleFirstBuffer() {
player.stop()
//
// pre-load accented main sound (for beat "1") before trigger starts
//
player.scheduleBuffer(buffer1, at: nil, options: [], completionHandler: nil)
player.play()
beatLabel.text = String(currentBeat)
}
}
Thanks so much for your help everyone! This is a wonderful community.
Alex
How accurate is the tool or process you are using to take your measurements?
I can't tell for sure that your files have the correct number of PCM frames as I am not a C programmer. It looks like data from the wav header is included when you load the files. This makes me wonder if maybe there is some latency incurred with the playbacks while the header information is processed repeatedly at the start of each play or loop.
I had good luck building a metronome in Java by using a plan of continuously outputting an endless stream derived from reading PCM frames. Timing is achieved by counting PCM frames and routing in either silence (PCM datapoint = 0) or the click's PCM data, based on the period of the chosen metronome setting and the length of the click in PCM frames.
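The same "count PCM frames" idea translates directly to AVAudioEngine: rather than reacting to completion handlers or wall-clock timers, schedule every click at an absolute sample position with AVAudioTime, so no latency can accumulate between beats. A rough sketch (the class and method names are mine, not from the question):
import AVFoundation
final class SampleAccurateMetronome {
    private let player: AVAudioPlayerNode
    private let accentBuffer: AVAudioPCMBuffer // beat "1"
    private let clickBuffer: AVAudioPCMBuffer  // beats 2, 3 & 4
    private let sampleRate: Double
    private var nextBeatSampleTime: AVAudioFramePosition = 0
    var bpm: Double = 120
    init(player: AVAudioPlayerNode, accent: AVAudioPCMBuffer, click: AVAudioPCMBuffer) {
        self.player = player
        self.accentBuffer = accent
        self.clickBuffer = click
        self.sampleRate = accent.format.sampleRate
    }
    // Schedules the next bar (4 beats) at exact sample positions on the player's timeline.
    // Call once before player.play(), then again periodically to keep the queue topped up;
    // the refill timer no longer needs to be precise because the audio timestamps are.
    func scheduleNextBar() {
        let framesPerBeat = AVAudioFramePosition(60.0 / bpm * sampleRate)
        for beat in 0..<4 {
            let buffer = (beat == 0) ? accentBuffer : clickBuffer
            let when = AVAudioTime(sampleTime: nextBeatSampleTime, atRate: sampleRate)
            player.scheduleBuffer(buffer, at: when, options: [], completionHandler: nil)
            nextBeatSampleTime += framesPerBeat
        }
    }
}
With explicit start times the click buffers also no longer need to be padded out to a full period, since the gap between beats is defined by the timestamps rather than by the buffer lengths.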

Deactivate AudioSession in AudioKit

I'm using AudioKit and trying to set the audio session inactive when I don't need it. So I wrote some simple code to understand the mechanics of this process, and ran into one unexpected problem. When I attempt to deactivate the session, I get this error:
[avas] AVAudioSession.mm:1079:-[AVAudioSession setActive:withOptions:error:]: Deactivating an audio session that has running I/O. All I/O should be stopped or paused prior to deactivating the audio session.
ViewController.swift:deactivateAudioSession():73:Error Domain=NSOSStatusErrorDomain Code=560030580 "(null)"
This is my sample code:
import UIKit
import AudioKit
class ViewController: UIViewController {
var reverb: AKReverb?
var delay: AKDelay?
var chorus: AKChorus?
var mic: AKMicrophone!
override func viewDidLoad() {
NotificationCenter.default.addObserver(self, selector: #selector(applicationWillResignActive),
name: UIApplication.willResignActiveNotification,
object: nil)
NotificationCenter.default.addObserver(self, selector: #selector(applicationDidBecomeActive),
name: UIApplication.didBecomeActiveNotification,
object: nil)
super.viewDidLoad()
AKSettings.useBluetooth = true
guard let mic = AKMicrophone() else { return }
self.mic = mic
chorus = AKChorus(mic)
chorus?.depth = 1
chorus?.frequency = 44000
let delay = AKDelay(chorus)
delay.feedback = 0.2
reverb = AKReverb(delay)
let mix = AKMixer(reverb)
AudioKit.output = mix
}
@IBAction func launchEngine() {
do {
try AudioKit.start()
} catch {
AKLog(error)
}
}
@IBAction func deactivateAudioSession() {
// mic.stop()
// reverb?.stop()
// delay?.stop()
// chorus?.stop()
do {
try AKSettings.session.setActive(false)
} catch {
AKLog(error)
}
}
@objc func applicationDidBecomeActive() {
launchEngine()
}
@objc func applicationWillResignActive() {
deactivateAudioSession()
}
}
As you can see, I basically wanted to handle app state changes, but this error happens even if I call the deactivateAudioSession() method from the UI. I've tried to stop() all node objects (see the commented code) before deactivating, but the error remains. What am I doing wrong?
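For what it's worth, the error text itself describes the fix: all running I/O has to be stopped before the session can be deactivated, and stopping individual nodes isn't enough because the AudioKit engine keeps its I/O running. A minimal sketch of the ordering, assuming AudioKit 4's AudioKit.stop():
@IBAction func deactivateAudioSession() {
    do {
        // Stop the engine (and with it all running I/O) first...
        try AudioKit.stop()
        // ...then the session can be deactivated without the "running I/O" error.
        try AKSettings.session.setActive(false)
    } catch {
        AKLog(error)
    }
}
On the way back to the foreground you would then restart the engine, e.g. from applicationDidBecomeActive, as the code above already does.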

AudioKit 4.2.3 Crash Microphone Frequency Analysis Swift 4.1

I just updated to the latest AudioKit version 4.2.3 and Swift 4.1, and I'm getting a crash at AudioKit.start() that I can't decipher. Please let me know if you need more of the error output.
AURemoteIO::IOThread (21): EXC_BAD_ACCESS (code=1, address=0x100900000)
FYI, I am also using AVAudioRecorder to record the microphone input to a file and playing it back with AVKit's AVAudioPlayer later on in the ViewController. However, since I did not get this crash before updating, I do not believe those factors are responsible - it seems to be something with the tracker input.
import UIKit
import Speech
import AudioKit
class RecordVoiceViewController: UIViewController {
var tracker: AKFrequencyTracker!
var silence: AKBooster!
var mic: AKMicrophone!
let noteFrequencies = [16.35, 17.32, 18.35, 19.45, 20.6, 21.83, 23.12, 24.5, 25.96, 27.5, 29.14, 30.87]
let noteNamesWithSharps = ["C", "C♯","D","D♯","E","F","F♯","G","G♯","A","A♯","B"]
let noteNamesWithFlats = ["C", "D♭","D","E♭","E","F","G♭","G","A♭","A","B♭","B"]
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(animated)
AKSettings.audioInputEnabled = true
mic = AKMicrophone()
tracker = AKFrequencyTracker.init(mic, hopSize: 200, peakCount: 2000)
silence = AKBooster(tracker, gain: 0)
}
func startAudioKit(){
AudioKit.output = self.silence
do {
try AudioKit.start()
} catch {
AKLog("Something went wrong.")
}
}
}
What's interesting is when I initialize the tracker without the hopSize and peakCount, like:
tracker = AKFrequencyTracker.init(mic)
it does not crash; however, it also doesn't return the correct frequency. I'm super thankful for any help. Thanks!!!
I've faced exactly the same issue, but finally found a temporary solution.
All you need to do is add an additional node between AKMicrophone and AKFrequencyTracker; in my case it was an AKHighPassFilter.
Here's the code that works properly:
let microphone = AKMicrophone()
let filter = AKHighPassFilter(microphone, cutoffFrequency: 200, resonance: 0)
let tracker = AKFrequencyTracker(filter)
let silence = AKBooster(tracker, gain: 0)
AKSettings.audioInputEnabled = true
AudioKit.output = silence
try! AudioKit.start()
Hope this helps, good luck!

Little freeze at the very start of plane animation

I have the following code, which works pretty much as expected aside from a little freeze just before the first airplane animation begins. Here is my code:
enum TurnDirection: String {
case left
case right
case none
}
extension GameScene {
fileprivate func turnPlayerPlane(direction: TurnDirection) {
let forwardAction = SKAction.animate(with: textureArray.reversed(), timePerFrame: 0.05, resize: true, restore: false)
player.run(forwardAction) { [unowned self] in
self.stillTurning = false
}
}
}
class GameScene: SKScene {
let textureArray = [SKTexture(imageNamed: "01"),
SKTexture(imageNamed: "02")]
let motionManager = CMMotionManager()
var xAcceleration: CGFloat = 0
var player: SKSpriteNode!
var stillTurning = false
override func didMove(to view: SKView) {
performPlaneFly()
}
fileprivate func performPlaneFly() {
let planeWaitAction = SKAction.wait(forDuration: 1)
let planeDirectionAction = SKAction.run {
self.playerDirectionCheck()
}
let planeSequence = SKAction.sequence([planeWaitAction, planeDirectionAction])
let planeSequenceForever = SKAction.repeatForever(planeSequence)
player.run(planeSequenceForever)
}
fileprivate func playerDirectionCheck() {
turnPlayerPlane(direction: .right)
}
I commented out different parts of the code, and it seems to be the turnPlayerPlane function, but I really don't understand what is going on in there that is so heavy to perform before the first animation begins.
Can someone explain what I did wrong there?
Thank you!
After sitting with these freezes at the beginning for something like 10 hours, I've found a solution. And, as usual, the solution is not as difficult as it seemed. The freezes came from a performance drop caused by the first render of all the textures; once the textures are rendered, as I said, the freezes are gone.
So I rewrote my code a bit and made a separate fileprivate method, which I call first in didMove(to:), to render all the textures before they are used:
fileprivate func planeAnimationFillArrays() {
SKTextureAtlas.preloadTextureAtlases([SKTextureAtlas(named: "PlayerPlane")]) { [unowned self] in
self.leftTextureArrayAnimation = {
var array = [SKTexture]()
for i in stride(from: 10, through: 1, by: -1) {
let number = String(format: "%02d", i)
let texture = SKTexture(imageNamed: "\(number)")
array.append(texture)
}
SKTexture.preload(array, withCompletionHandler: {
print("preload is done")
})
return array
}()
self.rightTextureArrayAnimation = {
var array = [SKTexture]()
for i in stride(from: 10, through: 20, by: 1) {
let number = String(format: "%02d", i)
let texture = SKTexture(imageNamed: "\(number)")
array.append(texture)
}
SKTexture.preload(array, withCompletionHandler: {
print("preload is done")
})
return array
}()
}
}
Before everything else I call the method that preloads my sprite atlas, SKTextureAtlas.preloadTextureAtlases([SKTextureAtlas(named: "XXX")]), and in its completion handler I use SKTexture.preload(_:withCompletionHandler:) before each animation array is returned. After all this, the little freezes are completely gone.
Now this code has some duplication, but it seems to me it is easier to understand what is being done here. I hope someone finds this solution helpful.
If you know how to reduce this code, please feel free to edit it or post your own solution.
Thank you!
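On the duplication point at the end: the two closures differ only in the stride bounds, so a single helper that builds and preloads a texture array for any frame range removes the repetition. A possible sketch (makeTextureAnimation is a made-up name) of methods on GameScene:
fileprivate func makeTextureAnimation(from start: Int, through end: Int) -> [SKTexture] {
    // Walk the frame numbers in either direction and build the textures.
    let step = start <= end ? 1 : -1
    let textures = stride(from: start, through: end, by: step).map {
        SKTexture(imageNamed: String(format: "%02d", $0))
    }
    SKTexture.preload(textures) {
        print("preload is done")
    }
    return textures
}
fileprivate func planeAnimationFillArrays() {
    SKTextureAtlas.preloadTextureAtlases([SKTextureAtlas(named: "PlayerPlane")]) { [unowned self] in
        self.leftTextureArrayAnimation = self.makeTextureAnimation(from: 10, through: 1)
        self.rightTextureArrayAnimation = self.makeTextureAnimation(from: 10, through: 20)
    }
}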