Synchronizing Playback of Multiple Audio Files in AudioKit - swift

I am developing a small audio sequencer application using AudioKit. I only need to play back 4 channels of audio, but they need to be perfectly synchronized down to the sample level. When I run a test using just two audio files, I can hear that they are not synchronized. The difference is only a few samples, but even a one-sample discrepancy would be a problem. I am currently using multiple AKClipPlayer objects routed to an AKMixer object, and I start them with a basic for loop like this:
private var clipPlayers: [AKClipPlayer] = []

func play() {
    for player in clipPlayers {
        player.play()
    }
}
Is sample accurate playback timing of multiple audio files possible using AudioKit?

Yes, you need to schedule playback to start in the future with play(at:).
// This can take longer than expected, so do this before choosing a future time
clipPlayers.forEach { $0.prepare(withFrameCount: 10_000) }
let nearFuture = AVAudioTime.now() + 0.2
clipPlayers.forEach { $0.play(at: nearFuture) }
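If the AVAudioTime "+" convenience used above isn't available in your AudioKit version, a minimal sketch of the same idea is to build one shared start time from the current host time and hand it to every player (playSynchronized and the 0.2-second margin are just illustrative names and values):

import AVFoundation
import AudioKit

func playSynchronized(_ clipPlayers: [AKClipPlayer], after delay: TimeInterval = 0.2) {
    // Preparing buffers can take a while, so do it before choosing the start time.
    clipPlayers.forEach { $0.prepare(withFrameCount: 10_000) }

    // One shared timestamp: the current host time plus a small safety margin.
    let startHostTime = mach_absolute_time() + AVAudioTime.hostTime(forSeconds: delay)
    let startTime = AVAudioTime(hostTime: startHostTime)

    // Every player is scheduled against the exact same timestamp.
    clipPlayers.forEach { $0.play(at: startTime) }
}

Because all players receive the same AVAudioTime, they are scheduled against the audio clock at one moment rather than started on successive passes of a Swift for loop.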

Related

Get the duration of .m3u audio

I want to play audio from the URL
https://wortcast01.wortfm.org/appfiles/wort_210715_080006buzzthu.m3u
Its body contains these tracks:
https://wortcast01.wortfm.org/pitch/preroll-buzzthu.mp3
https://wortcast01.wortfm.org/mp3/wort_210715_080006buzzthu.mp3
When I set https://wortcast01.wortfm.org/appfiles/wort_210715_080006buzzthu.m3u as the AVPlayer's item, the duration is NaN, but each track in the list has a duration.
Do you have an idea how to extract the duration via AVPlayer?
This returns NaN:
var itemDuration: Double? {
    return currentItem?.duration.seconds
}
AVFoundation (AVPlayer, AVAsset, ...) automatically treats any .m3u as a live broadcast (my assumption: it looks like Apple applies its M3U8 handling to M3U files).
Solution:
Add your own functionality that loads the m3u (or pls) content, builds an internal playlist, and plays those individual entries with AVPlayer.
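A rough sketch of that idea, assuming the .m3u is a plain newline-separated list of track URLs (playM3U is just an illustrative name):

import AVFoundation

func playM3U(at playlistURL: URL, completion: @escaping (AVQueuePlayer?) -> Void) {
    URLSession.shared.dataTask(with: playlistURL) { data, _, _ in
        guard let data = data, let text = String(data: data, encoding: .utf8) else {
            completion(nil)
            return
        }
        // Keep only the track URLs; skip comments (#EXTM3U, #EXTINF, ...) and blank lines.
        let trackURLs = text
            .components(separatedBy: .newlines)
            .map { $0.trimmingCharacters(in: .whitespaces) }
            .filter { !$0.isEmpty && !$0.hasPrefix("#") }
            .compactMap(URL.init(string:))

        // Queue the individual mp3s; each AVPlayerItem then reports a real duration.
        let items = trackURLs.map { AVPlayerItem(url: $0) }
        DispatchQueue.main.async { completion(AVQueuePlayer(items: items)) }
    }.resume()
}

Once the current item's status becomes .readyToPlay, currentItem?.duration.seconds is a finite value instead of NaN.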

Flutter record front facing camera at exact same time as playing video

I've been playing around with Flutter and trying to get it so I can record the front facing camera (using the camera plugin [https://pub.dev/packages/camera]) as well as playing a video to the user (using the video_player plugin [https://pub.dev/packages/video_player]).
Next I use ffmpeg to horizontally stack the two videos together. This all works fine, but when I play back the final output the audio from the two clips is slightly out of sync. I'm calling Future.wait([cameraController.startVideoRecording(filePath), videoController.play()]); but there is a slight delay before these tasks actually start. I don't even need them to fire at the exact same time (which I'm realising is probably impossible); instead, if I knew exactly when each of the tasks began then I could use the time difference to sync the audio using ffmpeg or similar.
I've tried adding a listener on the videoController to see when isPlaying first returns true, and also watching the output directory for when the recorded video appears on the filesystem:
var listener;
listener = () {
  if (videoController.value.isPlaying) {
    isPlaying = DateTime.now().microsecondsSinceEpoch;
    log('isPlaying ' + isPlaying.toString());
  }
  videoController.removeListener(listener);
};
videoController.addListener(listener);
var watcher = DirectoryWatcher('${extDir.path}/');
watcher.events.listen((event) {
  if (event.type == ChangeType.ADD) {
    fileAdded = DateTime.now().microsecondsSinceEpoch;
    log('added ' + fileAdded.toString());
  }
});
Then likewise for checking if the camera is recording:
var listener;
listener = () {
  if (cameraController.value.isRecordingVideo) {
    log('isRecordingVideo ' + DateTime.now().microsecondsSinceEpoch.toString());
    //cameraController.removeListener(listener);
  }
};
cameraController.addListener(listener);
This results in (for example) the following order and microseconds for each event:
is playing: 1606478994797247
is recording: 1606478995492889 (695642 microseconds later)
added: 1606478995839676 (346787 microseconds later)
However, when I play back the combined video the syncing is off by approx 0.152 seconds, which doesn't match the time differences reported above.
Does anyone have any idea how I could accomplish near perfect syncing when combining 2 videos? Thanks.

Looping AVPlayer with custom Start/End Times

I am using an AVPlayer to play video. I would like to loop sections of the video based on the user's input: while the video is playing, the user can press a button to start a loop and press it again a few seconds later to end it; playback should then jump back to the start time and keep looping whenever the current time reaches the specified end time.
I can get these start/end loop times just by reading the player's currentTime:
var startLoop: CMTime = player.currentTime()
// seconds pass by ....
var endLoop: CMTime = player.currentTime()
I know there is a way to cleanly loop the video back to the beginning once it has finished playing like so:
NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime,
                                       object: self.player.currentItem,
                                       queue: .main) { [weak self] _ in
    self?.player?.seek(to: CMTime.zero)
    self?.player?.rate = self?.rate ?? 1.0
}
I was wondering if there is a way to do this with my custom startLoop and endLoop times?
There are a few ways to do looping in AVFoundation. The simple way is, as you described, to listen for the notification and then call seek(to: startLoop). For the custom end time, you can use addBoundaryTimeObserver to be called back when playback reaches a specific time:
https://developer.apple.com/documentation/avfoundation/avplayer/1388027-addboundarytimeobserver
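A minimal sketch of that boundary-observer approach, assuming startLoop and endLoop were captured from player.currentTime() as above (SectionLooper is just an illustrative name):

import AVFoundation

final class SectionLooper {
    private let player: AVPlayer
    private var boundaryObserver: Any?

    init(player: AVPlayer) {
        self.player = player
    }

    func beginLooping(from startLoop: CMTime, to endLoop: CMTime) {
        // Fire when playback reaches the custom end time...
        let times = [NSValue(time: endLoop)]
        boundaryObserver = player.addBoundaryTimeObserver(forTimes: times, queue: .main) { [weak self] in
            // ...then jump back to the custom start time and keep playing.
            self?.player.seek(to: startLoop, toleranceBefore: .zero, toleranceAfter: .zero)
        }
    }

    func endLooping() {
        if let observer = boundaryObserver {
            player.removeTimeObserver(observer)
            boundaryObserver = nil
        }
    }
}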
However, a more advanced way to do looping is to add a key-value observer to an AVQueuePlayer and insert 2 copies of an AVPlayerItem covering the specific time range of the asset you want to loop. Then you can use looping techniques such as the treadmill technique or AVPlayerLooper. The key to creating custom time-ranged assets is to use an AVMutableComposition and insertTimeRange.
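A rough sketch of building such a time-ranged item with AVMutableComposition and handing it to AVPlayerLooper, assuming asset, startLoop, and endLoop come from your existing code (error handling omitted; makeLooper is just an illustrative name):

import AVFoundation

func makeLooper(asset: AVAsset, startLoop: CMTime, endLoop: CMTime) throws -> (AVQueuePlayer, AVPlayerLooper) {
    // Copy just the looped section of every track into a fresh composition.
    let composition = AVMutableComposition()
    let range = CMTimeRange(start: startLoop, end: endLoop)
    try composition.insertTimeRange(range, of: asset, at: .zero)

    // AVPlayerLooper keeps the queue player topped up with copies of the item.
    let item = AVPlayerItem(asset: composition)
    let queuePlayer = AVQueuePlayer()
    let looper = AVPlayerLooper(player: queuePlayer, templateItem: item)
    queuePlayer.play()
    return (queuePlayer, looper)
}

Keep a strong reference to the returned AVPlayerLooper; looping stops if it is deallocated.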
Also, if you really want to be experimental, a third option is to use an AVMutableComposition as your original source asset and mutate its underlying tracks on the fly while it is playing. I've tried something similar and it can work as long as you are not manipulating time ranges that are currently being played. However, there are no docs for this approach and Apple may change things in future versions that break it.
References:
https://developer.apple.com/videos/play/wwdc2016/503/
https://developer.apple.com/library/archive/samplecode/avloopplayer/Introduction/Intro.html
https://developer.apple.com/library/archive/samplecode/AVCustomEdit/Introduction/Intro.html
Recommended way of updating timeRange property on AVPlayerLooper
@spitchay, reach out to me if you have other AVFoundation questions.

Using multiple audio devices simultaneously on osx

My aim is to write an audio app for low latency realtime audio analysis on OSX. This will involve connecting to one or more USB interfaces and taking specific channels from these devices.
I started with the Learning Core Audio book, writing this in C. As I went down this path it came to light that a lot of the old frameworks have been deprecated. It appears that the majority of what I would like to achieve can be written using AVAudioEngine and connected AVAudioUnits, digging down to the Core Audio level only for lower-level things like configuring the hardware devices.
I am confused here as to how to access two devices simultaneously. I do not want to create an aggregate device as I would like to treat the devices individually.
Using Core Audio I can list the audio device IDs for all devices and change the default system output device, as below (the input device works with similar methods). However this only gives me one physical device, and it will always track whichever device is selected in System Preferences.
static func setOutputDevice(newDeviceID: AudioDeviceID) {
    let propertySize = UInt32(MemoryLayout<UInt32>.size)
    var deviceID = newDeviceID
    var propertyAddress = AudioObjectPropertyAddress(
        mSelector: AudioObjectPropertySelector(kAudioHardwarePropertyDefaultOutputDevice),
        mScope: AudioObjectPropertyScope(kAudioObjectPropertyScopeGlobal),
        mElement: AudioObjectPropertyElement(kAudioObjectPropertyElementMaster))
    AudioObjectSetPropertyData(AudioObjectID(kAudioObjectSystemObject), &propertyAddress, 0, nil, propertySize, &deviceID)
}
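For reference, the device-listing part mentioned above looks roughly like this, using kAudioHardwarePropertyDevices on the system object (a sketch; allDeviceIDs is just an illustrative name):

import CoreAudio

func allDeviceIDs() -> [AudioDeviceID] {
    var propertyAddress = AudioObjectPropertyAddress(
        mSelector: AudioObjectPropertySelector(kAudioHardwarePropertyDevices),
        mScope: AudioObjectPropertyScope(kAudioObjectPropertyScopeGlobal),
        mElement: AudioObjectPropertyElement(kAudioObjectPropertyElementMaster))

    // Ask how many bytes of AudioDeviceID data are available.
    var dataSize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject),
                                         &propertyAddress, 0, nil, &dataSize) == noErr else { return [] }

    // Fetch the device IDs themselves.
    var deviceIDs = [AudioDeviceID](repeating: 0, count: Int(dataSize) / MemoryLayout<AudioDeviceID>.size)
    guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                     &propertyAddress, 0, nil, &dataSize, &deviceIDs) == noErr else { return [] }
    return deviceIDs
}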
I then found that kAudioUnitSubType_HALOutput is the way to go for pinning a specific device rather than following the system default. I can create a component of this type using:
var outputHAL = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                          componentSubType: kAudioUnitSubType_HALOutput,
                                          componentManufacturer: kAudioUnitManufacturer_Apple,
                                          componentFlags: 0,
                                          componentFlagsMask: 0)
let component = AudioComponentFindNext(nil, &outputHAL)
guard component != nil else {
    print("Can't get input unit")
    exit(-1)
}
However I am confused about how you create a description of this component and then find the next device that matches the description. Is there a property where I can select the audio device ID and link the AUHAL to this?
I also cannot figure out how to assign an AUHAL to an AVAudioEngine. I can create a node for the HAL but cannot attach this to the engine. Finally is it possible to create multiple kAudioUnitSubType_HALOutput components and feed these into the mixer?
I have been trying to research this for the last week but am no closer to an answer. I have read up on channel mapping and everything I need to know further down the line, but working with the audio at this lower level seems pretty undocumented, especially when using Swift.

AudioKit error message: Too Many Frames to Process

I'm using the (very cool) AudioKit framework to process audio for a macOS music visualizer app. My audio source ("mic") is iTunes 12 via Rogue Amoeba Loopback.
In the Xcode debug window, I'm seeing the following error message each time I launch my app:
kAudioUnitErr_TooManyFramesToProcess : inFramesToProcess=513, mMaxFramesPerSlice=512
I've gathered from searches that this is probably related to sample rate, but I haven't found a clear description of what this error indicates (or if it even matters). My app is functioning normally, but I'm wondering if this could be affecting efficiency.
EDIT: The error message does not appear if I use Audio MIDI Setup to set the Loopback device output to 44.1kHz. (I set it initially to 48.0kHz to match my other audio devices, which I keep configured to the video standard.)
Keeping Loopback at 44.1kHz is an acceptable solution, but now my question would be: Is it possible to avoid this error even with a 48.0kHz input? (I tried AKSettings.sampleRate = 48000 but that made no difference.) Or can I just safely ignore the error in any case?
AudioKit is initialized thusly:
AKSettings.audioInputEnabled = true
mic = AKMicrophone()
do {
    try mic.setDevice(AudioKit.inputDevices![inputDeviceNumber])
} catch {
    AKLog("Device not set")
}
amplitudeTracker = AKAmplitudeTracker(mic)
AudioKit.output = AKBooster(amplitudeTracker, gain: 0)
do {
    try AudioKit.start()
} catch {
    AKLog("AudioKit did not start")
}
mic.start()
amplitudeTracker?.start()
This line saved my app:
try? AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.02)