AKOfflineRenderNode - Scheduling AKAudioPlayers, only the last player renders - swift

I'm getting some weird results when trying to render a sequence of AKAudioPlayers with AudioKit 4.0 and Swift 4 on iOS 11.1.
I'm aware of the AudioKit.renderToFile alternative on the development branch (https://github.com/AudioKit/AudioKit/commit/09aedf7c119a399ab00026ddfb91ae6778570176) but would like to cover iOS 9+ if possible.
Expected result:
A long audio file with each input file (URL) rendered in sequence.
Actual result:
Only the last scheduled file is rendered (at the correct offset in the resulting WAV file).
Weirdly, if I schedule them all at offset 0, they all get rendered. Also, if I play things back without rendering, it sounds correct (though I have to adjust the AVAudioTime to use mach_absolute_time).
It almost seems like scheduling an AKAudioPlayer cancels the previous one.
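For reference, the live-playback scheduling that sounds correct is roughly this (the same calls as below, just anchored at the current host time instead of 0):

let avTime = AKAudioPlayer.secondsToAVAudioTime(hostTime: mach_absolute_time(), time: scheduleTime)
player.schedule(from: 0, to: player.duration, avTime: nil)
player.play(at: avTime)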
Setup:
class func initialize() {
    // ....
    do {
        try AKSettings.setSession(category: .playAndRecord, with: .allowBluetoothA2DP)
    } catch {
        AKLog("Could not set session category.")
    }
    //AKSettings.playbackWhileMuted = true
    AKSettings.defaultToSpeaker = true

    mainMixer = AKMixer()
    offlineRender = AKOfflineRenderNode()
    mainMixer! >>> offlineRender!
    AudioKit.output = offlineRender!
    AudioKit.start()
    // ....
Rendering:
class func testRender(urls: [URL], dest: URL, offset: TimeInterval = 2) {
    // Stop / Start AudioKit when switching internalRenderEnabled, otherwise I get the following error:
    // *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'player started when engine not running'
    AudioKit.stop()

    var players = [AKAudioPlayer]()
    var scheduleTime: TimeInterval = 0

    // create players
    for url in urls {
        do {
            let file = try AKAudioFile(forReading: url)
            let player = try AKAudioPlayer(file: file)
            players.append(player)
            player.connect(to: mainMixer!)
            print("Connecting player")
        } catch {
            print("error reading")
        }
    }

    offlineRender!.internalRenderEnabled = false
    AudioKit.start()

    for player in players {
        do {
            // 0 instead of mach_absolute_time(), otherwise the result is silent
            let avTime = AKAudioPlayer.secondsToAVAudioTime(hostTime: 0, time: scheduleTime)
            // schedule and play according to:
            // https://stackoverflow.com/questions/45799686/how-to-apply-audio-effect-to-a-file-and-write-to-filesystem-ios/45807586#45807586
            player.schedule(from: 0, to: player.duration, avTime: nil)
            player.play(at: avTime)
            scheduleTime += offset
        } catch {
            print("error reading")
        }
    }

    // add some padding
    scheduleTime += 3
    let duration = scheduleTime

    do {
        try offlineRender!.renderToURL(dest, seconds: duration)
    } catch {
        print("error rendering")
    }

    // cleanup
    players.forEach { $0.schedule(from: 0, to: $0.duration, avTime: nil) }
    players.forEach { $0.stop() }
    players.forEach { $0.disconnectOutput() }
    offlineRender!.internalRenderEnabled = true
}
Appreciate any help!

AKOfflineRenderNode has been deprecated as of iOS 11.0. Version 4.0.4 has an AudioKit.renderToFile method to replace it. It was updated recently (in late 2017).

So it looks like AKOfflineRenderNode is indeed deprecated in upcoming versions of AudioKit and does not work on iOS 11. Reading the comments discussing the issue on GitHub, it sounds like the plan is to encapsulate both the new (iOS 11+) offline rendering and the old (iOS 9-10) approach under a common interface (AudioKit.renderToFile). However, for now it appears to be iOS 11 only.
After some testing with the dev version (install instructions here: https://github.com/audiokit/AudioKit/blob/master/Frameworks/README.md) I got the following code to work as intended:
try AudioKit.renderToFile(outputFile, seconds: duration, prerender: {
    var scheduleTime: TimeInterval = 0
    for player in players {
        let dspTime = AVAudioTime(sampleTime: AVAudioFramePosition(scheduleTime * AKSettings.sampleRate),
                                  atRate: AKSettings.sampleRate)
        player.play(at: dspTime)
        scheduleTime += offset
    }
})
Unless someone can provide a workaround that gets AKOfflineRenderNode working on iOS 11, and until the official AudioKit release with renderToFile is out, this is the best answer I could find.
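If you need to cover both OS ranges in the meantime, one option is to branch on availability. This is only a sketch: it assumes the players, offset and duration values plus the outputFile (AVAudioFile), the dest URL and the offlineRender node from the question and the snippet above.

if #available(iOS 11, *) {
    try AudioKit.renderToFile(outputFile, seconds: duration, prerender: {
        var scheduleTime: TimeInterval = 0
        for player in players {
            let dspTime = AVAudioTime(sampleTime: AVAudioFramePosition(scheduleTime * AKSettings.sampleRate),
                                      atRate: AKSettings.sampleRate)
            player.play(at: dspTime)
            scheduleTime += offset
        }
    })
} else {
    // iOS 9-10: fall back to the AKOfflineRenderNode path from the question.
    try offlineRender.renderToURL(dest, seconds: duration)
}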

Related

Noisiness when playing sound using an AVAudioSourceNode

I'm using TinySoundFont to use SF2 files on watchOS. I want to play the raw audio generated by the framework in real time (which means calling tsf_note_on as soon as the corresponding button is pressed and calling tsf_render_short as soon as new data is needed). I'm using an AVAudioSourceNode to achieve that.
Despite the sound rendering fine when I render it into a file, it's really noisy when played using the AVAudioSourceNode. (Based on the answer from Rob Napier, this might be because I ignore the timestamp property - I'm looking for a solution that addresses that concern.) What causes this issue and how can I fix it?
I'm looking for a solution that renders audio in real time rather than precalculating it, since I want to handle looping sounds correctly as well.
You can download a sample GitHub project here.
ContentView.swift
import SwiftUI
import AVFoundation

struct ContentView: View {
    @ObservedObject var settings = Settings.shared

    init() {
        settings.prepare()
    }

    var body: some View {
        Button("Play Sound") {
            Settings.shared.playSound()
            if !settings.engine.isRunning {
                do {
                    try settings.engine.start()
                } catch {
                    print(error)
                }
            }
        }
    }
}
Settings.swift
import SwiftUI
import AVFoundation

class Settings: ObservableObject {
    static let shared = Settings()
    var engine: AVAudioEngine!
    var sourceNode: AVAudioSourceNode!
    var tinySoundFont: OpaquePointer!

    func prepare() {
        let soundFontPath = Bundle.main.path(forResource: "GMGSx", ofType: "sf2")
        tinySoundFont = tsf_load_filename(soundFontPath)
        tsf_set_output(tinySoundFont, TSF_MONO, 44100, 0)
        setUpSound()
    }

    func setUpSound() {
        if let engine = engine,
           let sourceNode = sourceNode {
            engine.detach(sourceNode)
        }
        engine = .init()
        let mixerNode = engine.mainMixerNode
        let audioFormat = AVAudioFormat(
            commonFormat: .pcmFormatInt16,
            sampleRate: 44100,
            channels: 1,
            interleaved: false
        )
        guard let audioFormat = audioFormat else {
            return
        }
        sourceNode = AVAudioSourceNode(format: audioFormat) { silence, timeStamp, frameCount, audioBufferList in
            guard let data = self.getSound(length: Int(frameCount)) else {
                return 1
            }
            let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
            data.withUnsafeBytes { (intPointer: UnsafePointer<Int16>) in
                for index in 0 ..< Int(frameCount) {
                    let value = intPointer[index]
                    // Set the same value on all channels (due to the inputFormat, there's only one channel though).
                    for buffer in ablPointer {
                        let buf: UnsafeMutableBufferPointer<Int16> = UnsafeMutableBufferPointer(buffer)
                        buf[index] = value
                    }
                }
            }
            return noErr
        }
        engine.attach(sourceNode)
        engine.connect(sourceNode, to: mixerNode, format: audioFormat)
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback)
        } catch {
            print(error)
        }
    }

    func playSound() {
        tsf_note_on(tinySoundFont, 0, 60, 1)
    }

    func getSound(length: Int) -> Data? {
        let array = [Int16]()
        var storage = UnsafeMutablePointer<Int16>.allocate(capacity: length)
        storage.initialize(from: array, count: length)
        tsf_render_short(tinySoundFont, storage, Int32(length), 0)
        let data = Data(bytes: storage, count: length)
        storage.deallocate()
        return data
    }
}
The AVAudioSourceNode initializer takes a render block. In the mode you're using (live playback), this is a real-time callback, so you have a very tight deadline to fill the block with the requested data and return it so it can be played. You don't have a ton of time to do calculations. You definitely don't have time to access the filesystem.
In your block, you're re-computing an entire WAV every render cycle, then writing it to disk, then reading it from disk, then filling in the block that was requested. You ignore the timestamp requested, and always fill the buffer starting at sample zero. The mismatch is what's causing the buzzing. The fact that you're so slow about it is probably what's causing the pitch-drop.
Depending on the size of your files, the simplest way to implement this is to first decode everything into memory, and fill in the buffers for the timestamps and lengths requested. It looks like your C code already generates PCM data, so there's no need to convert it into a WAV file. It seems to already be in the right format.
Apple provides a good sample project for a Signal Generator that you should use as a starting point. Download that and make sure it works as expected. Then work to swap in your SF2 code. You may also find the video on this helpful: What’s New in AVAudioEngine.
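As a rough illustration of that approach (the identifiers here are illustrative, not from the question): decode the samples into memory once, then have the render block serve the slice that matches the engine's sample clock, padding with silence past the end.

import AVFoundation

// Sketch only: `samples` is assumed to be pre-rendered Float32 PCM (e.g. from tsf_render_float),
// and the node is assumed to be connected using the default deinterleaved Float32 format.
func makeSourceNode(samples: [Float]) -> AVAudioSourceNode {
    final class Anchor { var base: Double? }   // remembers the first sample time the engine reports
    let anchor = Anchor()
    return AVAudioSourceNode { _, timeStamp, frameCount, audioBufferList -> OSStatus in
        let sampleTime = timeStamp.pointee.mSampleTime
        let base = anchor.base ?? sampleTime
        anchor.base = base
        let start = Int(sampleTime - base)     // offset into the pre-decoded samples
        let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
        for buffer in buffers {
            let out = UnsafeMutableBufferPointer<Float>(buffer)
            for frame in 0 ..< Int(frameCount) {
                let index = start + frame
                out[frame] = index < samples.count ? samples[index] : 0   // silence past the end
            }
        }
        return noErr
    }
}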
The easiest tool to use here is probably an AVAudioPlayerNode. Your SoundFontHelper is making things much more complicated, so I've removed it and just call TSF directly from Swift. To do this, create a file called tsf.c as follows:
#define TSF_IMPLEMENTATION
#include "tsf.h"
And add it to BridgingHeader.h:
#import "tsf.h"
Simplify ContentView to this:
import SwiftUI

struct ContentView: View {
    @ObservedObject var settings = Settings.shared

    init() {
        // You'll want error handling here.
        try! settings.prepare()
    }

    var body: some View {
        Button("Play Sound") {
            settings.play()
        }
    }
}
And that leaves the new version of Settings, which is the meat of it:
import SwiftUI
import AVFoundation

class Settings: ObservableObject {
    static let shared = Settings()
    var engine = AVAudioEngine()
    let playerNode = AVAudioPlayerNode()
    var tsf: OpaquePointer
    var outputFormat = AVAudioFormat()

    init() {
        let soundFontPath = Bundle.main.path(forResource: "GMGSx", ofType: "sf2")
        tsf = tsf_load_filename(soundFontPath)
        engine.attach(playerNode)
        engine.connect(playerNode, to: engine.mainMixerNode, format: nil)
        updateOutputFormat()
    }

    // For simplicity, this object assumes the outputFormat does not change during its lifetime.
    // It's important to watch for route changes, and recreate this object if they occur. For details, see:
    // https://developer.apple.com/documentation/avfaudio/avaudiosession/responding_to_audio_session_route_changes
    func updateOutputFormat() {
        outputFormat = engine.mainMixerNode.outputFormat(forBus: 0)
    }

    func prepare() throws {
        // Start the engine
        try AVAudioSession.sharedInstance().setCategory(.playback)
        try engine.start()
        playerNode.play()
        updateOutputFormat()

        // Configure TSF. The only important thing here is the sample rate, which can be different on different hardware.
        // Core Audio has a defined format of "deinterleaved 32-bit floating point."
        tsf_set_output(tsf,
                       TSF_STEREO_UNWEAVED,            // mode
                       Int32(outputFormat.sampleRate), // sampleRate
                       0)                              // gain
    }

    func play() {
        tsf_note_on(tsf,
                    0,    // preset_index
                    60,   // key (middle C)
                    1.0)  // velocity

        // These tones have a long falloff, so you want a lot of source data. This is 10s.
        let frameCount = 10 * Int(outputFormat.sampleRate)

        // Create a buffer for the samples
        let buffer = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: AVAudioFrameCount(frameCount))!
        buffer.frameLength = buffer.frameCapacity

        // Render the samples. Do not mix. This buffer has been extended to
        // the needed size by the assignment to `frameLength` above. The call to
        // `assumingMemoryBound` is known to be correct because the format is Float32.
        let ptr = buffer.audioBufferList.pointee.mBuffers.mData?.assumingMemoryBound(to: Float.self)
        tsf_render_float(tsf,
                         ptr,               // buffer
                         Int32(frameCount), // samples
                         0)                 // mixing (do not mix)

        // All done. Play the buffer, interrupting whatever is currently playing
        playerNode.scheduleBuffer(buffer, at: nil, options: .interrupts)
    }
}
You can find the full version at my fork. You can also see the first commit, which is another approach that maintains your SoundFontHelper and does conversions to deal with it, but it's much simpler to just render the audio correctly in the first place.

AKAmplitudeTracker amplitude getting 0.0 using audioKit

I want to get the volume from AKAmplitudeTracker but I'm getting -inf. What is wrong with my code? Please help out.
AKAudioFile.cleanTempDirectory()
AKSettings.audioInputEnabled = true
AKSettings.bufferLength = .medium
AKSettings.defaultToSpeaker = true
AKSettings.playbackWhileMuted = true
AKSettings.enableRouteChangeHandling = true
AKSettings.enableCategoryChangeHandling = true
AKSettings.enableLogging = true
do {
    try AKSettings.setSession(category: .playAndRecord, with: .allowBluetoothA2DP)
} catch {
    print("error \(error.localizedDescription)")
}
microphone = AKMicrophone()!
tracker = AKAmplitudeTracker(microphone)
booster = AKBooster(tracker, gain: 0)
AudioKit.output = booster
try AudioKit.start()
=================
extension AKAmplitudeTracker {
    var volume: Decibel {
        return 20.0 * log10(amplitude)
    }
}
=================
Output of print(tracker.amplitude):
0.0
I had a quick look; it seems you followed the basic setup, but you aren't sampling the generated data over time correctly. Amplitude data is computed continuously from the microphone input, so to see how it evolves over time you can poll it with a timer, like this:
func reset() {
    self.timer?.invalidate()
    self.timer = nil
}

func microphoneTracker() {
    guard self.timer == nil else { return }
    self.watcher()
    let timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
        log.info(self.akMicrophoneAmplitudeTracker.amplitude)
    }
    self.timer = timer
}
Change the withTimeInterval to how frequently you want to check the amplitude.
I think it's quite readable what I put there for you, but I'll break it down in a few words:
Keep a reference to the AKAmplitudeTracker in a property; here I've named it akMicrophoneAmplitudeTracker
Keep a reference to the timer that checks the amplitude value periodically
Read the data in the closure body; the property holding the value is .amplitude
The computation in the example is a logger that prints .amplitude
When you're done, call the .invalidate method to stop the timer
One other thing you may want to double-check in your code is that the tracker is part of the signal chain, as that's an AVAudioEngine requirement; I've also noticed in some other people's code a call to the AKAmplitudeTracker's .start method, as follows:
akMicrophoneAmplitudeTracker.start()
Finally, keep in mind that if you are testing through the Simulator, the microphone input comes from your host machine's settings, so the amplitudes may differ from those on actual hardware.
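Putting those pieces together, a minimal sketch looks like the following (the property and timer names are illustrative, and the -inf guard covers the log10(0) case from your Decibel extension):

let microphone = AKMicrophone()!
let akMicrophoneAmplitudeTracker = AKAmplitudeTracker(microphone)
AudioKit.output = AKBooster(akMicrophoneAmplitudeTracker, gain: 0)   // keeps the tracker in the signal chain
try AudioKit.start()
akMicrophoneAmplitudeTracker.start()

let timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
    let amplitude = akMicrophoneAmplitudeTracker.amplitude
    // Guard against log10(0) == -inf when converting to decibels.
    let volume = amplitude > 0 ? 20.0 * log10(amplitude) : -160.0
    print(amplitude, volume)
}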

Swift - Unable to retrieve CMSensorDataList records

I'm making a Watch app that will record user acceleration. I've used CMSensorRecorder from the CoreMotion Framework to do this.
The flow of the program right now is that the user presses a button on the watch, which triggers acceleration to be recorded for 30 seconds. After this there is a 6-minute delay (per the answer here: watchOS2 - CMSensorRecorder, a delay is needed to read the data), and then the acceleration and timestamp data is printed to the console.
Right now I'm getting a "response invalid" and "Error occurred" when running the app. I've added a motion usage description to the info.plist file.
I'm fairly new to Swift and app development, and I fear something's wrong with the way I'm trying to access the data. I've attached the console logs and code below.
Can anybody provide some insight into the messages and how to resolve this? I've searched around but haven't found any cases of this issue before. Cheers.
func recordAcceleration() {
    if CMSensorRecorder.isAccelerometerRecordingAvailable() {
        print("recorder started")
        recorder.recordAccelerometer(forDuration: 30) // forDuration controls how many seconds data is recorded for.
        print("recording done")
    }
}

func getData() {
    if let list = recorder.accelerometerData(from: Date(timeIntervalSinceNow: -400), to: Date()) {
        print("listing data")
        for data in list {
            if let accData = data as? CMRecordedAccelerometerData {
                let accX = accData.acceleration.x
                let timestamp = accData.startDate
                //Do something here.
                print(accX)
                print(timestamp)
            }
        }
    }
}

//Send data to iphone after time period.
func sendData(dataBlock: CMSensorDataList) {
    WCSession.default.transferUserInfo(["Data": dataBlock])
}

//UI Elements
@IBAction func recordButtonPressed() {
    print("button pressed")
    recordAcceleration()
    //A delay is needed to read the data properly.
    print("delaying 6 mins")
    perform(#selector(callback), with: nil, afterDelay: 6 * 60)
}

@objc func callback() {
    getData()
}

extension CMSensorDataList: Sequence {
    public func makeIterator() -> NSFastEnumerationIterator {
        return NSFastEnumerationIterator(self)
    }
}
Console output:
button pressed
recorder started
2019-03-12 12:12:12.568962+1100 app_name WatchKit Extension[233:5614] [Motion] Warning - invoking recordDataType:forDuration: on main may lead to deadlock.
2019-03-12 12:12:13.102712+1100 app_name WatchKit Extension[233:5614] [SensorRecorder] Response invalid.
recording done
delaying 6 mins
2019-03-12 12:18:13.115955+1100 app_name WatchKit Extension[233:5614] [Motion] Warning - invoking sensorDataFromDate:toDate:forType: on main may lead to deadlock.
2019-03-12 12:18:13.162476+1100 app_name WatchKit Extension[233:5753] [SensorRecorder] Error occurred while trying to retrieve accelerometer records!
I ran your code and did not get the "Response invalid" or "Error occurred". I did get the main thread warnings. So I changed to a background thread and it works fine.
Also, I don't think you need to wait six minutes. I changed it to one minute.
I hope this helps.
let recorder = CMSensorRecorder()

@IBAction func recordAcceleration() {
    if CMSensorRecorder.isAccelerometerRecordingAvailable() {
        print("recorder started")
        DispatchQueue.global(qos: .background).async {
            self.recorder.recordAccelerometer(forDuration: 30)
        }
        perform(#selector(callback), with: nil, afterDelay: 1 * 60)
    }
}

@objc func callback() {
    DispatchQueue.global(qos: .background).async { self.getData() }
}

func getData() {
    print("getData started")
    if let list = recorder.accelerometerData(from: Date(timeIntervalSinceNow: -60), to: Date()) {
        print("listing data")
        for data in list {
            if let accData = data as? CMRecordedAccelerometerData {
                let accX = accData.acceleration.x
                let timestamp = accData.startDate
                //Do something here.
                print(accX)
                print(timestamp)
            }
        }
    }
}

AudioKit AKMicrophone not outputting any data

I am trying to capture FFT data from a microphone. I've managed to get this working before with a similar codebase, but since macOS Mojave it's broken: the FFT data constantly stays at 0.
Relevant Code:
var fft: AKFFTTap?

var inputDevice: AKDevice? {
    didSet {
        inputNode = nil
        updateAudioNode()
    }
}

var inputNode: AKNode? {
    didSet {
        if fft != nil {
            // According to AKFFTTap class reference, it will always be on tap 0
            oldValue?.avAudioNode.removeTap(onBus: 0)
        }
        fft = inputNode.map { AKFFTTap($0) }
    }
}

[...]

guard let device = inputDevice else {
    inputNode = ViewController.shared.player.mixer
    return
}

do {
    try AudioKit.setInputDevice(device)
} catch {
    print("Error setting input device: \(error)")
    return
}

let microphoneNode = AKMicrophone()
do {
    try microphoneNode.setDevice(device)
} catch {
    print("Failed setting node input device: \(error)")
    return
}

microphoneNode.start()
microphoneNode.volume = 3
print("Switched Node: \(microphoneNode), started: \(microphoneNode.isStarted)")

inputNode = microphoneNode
try! AudioKit.start()
All the code is called and no errors are output, but the FFT simply stays blank. With some code reordering I get varying errors.
A full version of the class, for completeness, is here.
Finally, I also tried implementing the examples from the playground one-to-one. Since Xcode playgrounds seem to crash with AudioKit, I tried it in my own codebase, but there's no difference there either. AKFrequencyTracker, for example, gets 0s for both amplitude and frequency.
I am not 100% positive of this, but I'd like you to try AudioKit v4.5.1 out. We definitely fixed a bug in AKMicrophone, and that could have downstream consequences. I'll withdraw this answer and keep looking if it is not fixed. Let me know.

Audiokit crashes when changing AKPlayer file

I have recently done the migration from AudioKit 3.7 to 4.2 (using CocoaPods), needed for Xcode 9.3. I followed the migration guide and changed AKAudioPlayer to AKPlayer.
The issue
When AKPlayer plays an audio file, AudioKit is crashing with this error:
2018-04-17 09:32:43.042658+0200 hearfit[3509:2521326] [avae] AVAEInternal.h:103:_AVAE_CheckNoErr: [AVAudioEngineGraph.mm:3632:UpdateGraphAfterReconfig: (AUGraphParser::InitializeActiveNodesInOutputChain(ThisGraph, kOutputChainFullTraversal, *conn.srcNode, isChainActive)): error -10875
2018-04-17 09:32:43.049372+0200 hearfit[3509:2521326] *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'error -10875'
*** First throw call stack:
(0x1847d6d8c 0x1839905ec 0x1847d6bf8 0x18a0ff1a0 0x18a11bf58 0x18a12aab0 0x18a128cdc 0x18a1a1738 0x18a1a160c 0x10519192c 0x10519d2f4 0x10519d64c 0x10503afdc 0x10507c4a0 0x10507c01c 0x104f6d9cc 0x1852233d4 0x18477faa8 0x18477f76c 0x18477f010 0x18477cb60 0x18469cda8 0x18667f020 0x18e67d78c 0x10504dfd4 0x18412dfc0)
libc++abi.dylib: terminating with uncaught exception of type NSException
Sometimes it happens on the first play, and sometimes the first play is done correctly, but not the second one.
Everything was working great before the migration. I also tried to keep AKAudioPlayer: sounds are played correctly but AKFrequencyTracker does not work anymore.
Context
This is my setup:
Quick explanation:
AKPlayer 1 plays short audio files (between 1 and 5 seconds)
AKFrequencyTracker is used to display a plot
AKPlayer 2 plays background sound (volume must be configurable)
AKWhiteNoise allows to do some manual volume measurements (using AKMixer 2 volume property)
Use case example
The user starts an exercise. A background sound is played continuously (looping) using AKPlayer 2, the user listens to a word (played with AKPlayer 1), and the plot is displayed. Next, several words are displayed on screen and the user must pick the right one. Then a new word is played... and so on.
So I have to dynamically change the file played by AKPlayer 1. All the code is written in a dedicated class, a singleton, and all the nodes are set up in the init() function.
// singleton
static let main = AudioPlayer()

private init() {
    let silenceUrl = Bundle.main.url(forResource: "silence", withExtension: "m4a", subdirectory: "audio")
    self.silenceFile = silenceUrl!
    self.mainPlayer = AKPlayer(url: self.silenceFile)!
    self.mainPlayer.volume = 1.0
    self.freqTracker = AKFrequencyTracker(self.mainPlayer, hopSize: 256, peakCount: 10)

    let noiseUrl = Bundle.main.url(forResource: "cocktail-party", withExtension: "m4a", subdirectory: "audio")
    self.noiseFile = noiseUrl!
    self.noisePlayer = AKPlayer(url: self.noiseFile)!
    self.noisePlayer.volume = 1.0
    self.noisePlayer.isLooping = true

    let mixer = AKMixer(self.freqTracker, self.noisePlayer)

    self.whiteNoise = AKWhiteNoise(amplitude: 1.0)
    self.whiteNoiseMixer = AKMixer(self.whiteNoise)
    self.whiteNoiseMixer.volume = 0

    self.mixer = AKMixer(mixer, self.whiteNoiseMixer)
    AudioKit.output = self.mixer

    do {
        try AudioKit.start()
    } catch (let error) {
        print(error)
    }

    // stop directly the white noise mixer
    self.whiteNoise.stop()
    self.whiteNoiseMixer.volume = self.whiteNoiseVolume

    self.mainPlayer.completionHandler = {
        DispatchQueue.main.async {
            if let timer = self.timer {
                timer.invalidate()
                self.timer = nil
            }
            if let completion = self.completionHandler {
                Timer.scheduledTimer(withTimeInterval: 0.5, repeats: false, block: { (_) in
                    completion()
                    self.completionHandler = nil
                })
            }
        }
    }
}
To change the AKPlayer 1 audio file, I use this function, on the same class:
func play(fileUrl: URL, tracker: @escaping TrackerCallback, completion: (() -> Void)?) throws {
    self.completionHandler = completion
    let file = try AKAudioFile(forReading: fileUrl)
    self.mainPlayer.load(audioFile: file)
    self.mainPlayer.preroll()
    self.timer = Timer.scheduledTimer(withTimeInterval: self.trackerRefreshRate, repeats: true) { (timer) in
        tracker(self.freqTracker.frequency, self.freqTracker.amplitude)
    }
    self.mainPlayer.play()
}
Thank you.
I'm not sure what you are loading into the player, but if the format of the new file is different from the previous one (channel count, sample rate, etc.), you should create a new AKPlayer instance rather than load into the same one. If your files are all the same format, it should work OK.
That said, I haven't seen the crash you show.
Another thing that is risky in your code is force-unwrapping those optionals; you should guard against values being nil. AKPlayer actually uses AVAudioFile, so there's no need for AKAudioFile:
guard let akfile = try? AVAudioFile(forReading: url) else { return }

if akfile.processingFormat.channelCount != player?.audioFile?.processingFormat.channelCount ||
    akfile.processingFormat.sampleRate != player?.audioFile?.processingFormat.sampleRate {
    AKLog("Need to create new player as formats have changed.")
}
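If the formats do differ, a rough sketch of rebuilding the player (rather than loading into it) could look like the following. It assumes mainPlayer and freqTracker are var properties on your singleton, and playerMixer is a hypothetical property holding the mixer the tracker feeds in your init().

func load(fileUrl: URL) throws {
    let file = try AVAudioFile(forReading: fileUrl)
    let current = mainPlayer.audioFile?.processingFormat
    if current?.channelCount == file.processingFormat.channelCount,
       current?.sampleRate == file.processingFormat.sampleRate {
        // Same format: reusing the existing player is fine.
        mainPlayer.load(audioFile: file)
    } else {
        // Formats differ: build a fresh player and tracker and wire them back in.
        mainPlayer = AKPlayer(audioFile: file)
        freqTracker = AKFrequencyTracker(mainPlayer, hopSize: 256, peakCount: 10)
        freqTracker >>> playerMixer   // hypothetical reference to the mixer from init()
    }
    mainPlayer.play()
}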