Playing multiple WAV files out multiple channels with AVAudioEngine - Swift

I have 15 WAV files that I need to play back in sequence, each on its own channel. I'm starting out by trying to get two files working with a left/right stereo separation.
I'm creating an audio engine, a mixer, and two AVAudioPlayerNodes. The audio files are mono, and I'm trying to get the file from playerA to come out the left channel and the file from playerB to come out the right channel. What I'm having trouble understanding is how AudioUnitSetProperty works. It seems to relate to a single file only, and an audio unit seems to only be able to have one channel map. Is there a way to associate a file with its own audio unit? I can't seem to get at the audioUnit object associated with each track.
func testCode() {
    // get output hardware format
    let output = engine.outputNode
    let outputHWFormat = output.outputFormat(forBus: 0)
    // connect mixer to output
    let mixer = engine.mainMixerNode
    engine.connect(mixer, to: output, format: outputHWFormat)

    // then work on the player end by first attaching the players to the engine
    engine.attach(playerA)
    engine.attach(playerB)

    // find the audio files
    guard let audioFileURLA = Bundle.main.url(forResource: "test", withExtension: "wav") else {
        fatalError("audio file is not in bundle.")
    }
    guard let audioFileURLB = Bundle.main.url(forResource: "test2", withExtension: "wav") else {
        fatalError("audio file is not in bundle.")
    }

    var songFileA: AVAudioFile?
    do {
        songFileA = try AVAudioFile(forReading: audioFileURLA)
        print(songFileA!.processingFormat)
        // connect player to mixer
        engine.connect(playerA, to: mixer, format: songFileA!.processingFormat)
    } catch {
        fatalError("cannot create AVAudioFile \(error)")
    }

    let channelMap: [Int32] = [0, -1] // play channel in left
    let propSize: UInt32 = UInt32(channelMap.count) * UInt32(MemoryLayout<Int32>.size)
    print(propSize)

    let code: OSStatus = AudioUnitSetProperty((engine.inputNode?.audioUnit)!,
                                              kAudioOutputUnitProperty_ChannelMap,
                                              kAudioUnitScope_Global,
                                              1,
                                              channelMap,
                                              propSize)
    print(code)

    let channelMapB: [Int32] = [-1, 0] // play channel in right
    var songFileB: AVAudioFile?
    do {
        songFileB = try AVAudioFile(forReading: audioFileURLB)
        print(songFileB!.processingFormat)
        // connect player to mixer
        engine.connect(playerB, to: mixer, format: songFileB!.processingFormat)
    } catch {
        fatalError("cannot create AVAudioFile \(error)")
    }

    let codeB: OSStatus = AudioUnitSetProperty((engine.inputNode?.audioUnit)!,
                                               kAudioOutputUnitProperty_ChannelMap,
                                               kAudioUnitScope_Global,
                                               1,
                                               channelMapB,
                                               propSize)
    print(codeB)

    do {
        try engine.start()
    } catch {
        fatalError("Could not start engine. error: \(error).")
    }

    playerA.scheduleFile(songFileA!, at: nil) {
        print("done")
        self.playerA.play()
    }
    playerB.scheduleFile(songFileB!, at: nil) {
        print("done")
        self.playerB.play()
    }

    playerA.play()
    playerB.play()
    print(playerA.isPlaying)
}

engine.connect(mixer, to: output, format: outputHWFormat)
This isn't necessary; the main mixer is implicitly connected to the output node when it is first accessed.
As for panning, AudioUnitSetProperty isn't necessary either. AVAudioPlayerNode conforms to AVAudioMixing, so because there is a mixer node downstream from each player, all you have to do is this:
playerA.pan = -1
playerB.pan = 1
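Putting it together, a minimal sketch of that approach, reusing the engine, playerA, and playerB properties and the same two mono files ("test.wav" and "test2.wav") from the question:
func playPannedPair() throws {
    let fileA = try AVAudioFile(forReading: Bundle.main.url(forResource: "test", withExtension: "wav")!)
    let fileB = try AVAudioFile(forReading: Bundle.main.url(forResource: "test2", withExtension: "wav")!)

    engine.attach(playerA)
    engine.attach(playerB)
    // Connecting each player to the main mixer is enough; no channel map is needed.
    engine.connect(playerA, to: engine.mainMixerNode, format: fileA.processingFormat)
    engine.connect(playerB, to: engine.mainMixerNode, format: fileB.processingFormat)

    playerA.pan = -1   // mono source hard left
    playerB.pan = 1    // mono source hard right

    try engine.start()
    playerA.scheduleFile(fileA, at: nil)
    playerB.scheduleFile(fileB, at: nil)
    playerA.play()
    playerB.play()
}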

Related

How to play and listen to audio at the same time using AVAudioSession

I'm using standard code to start listening to the microphone and detect the song with ShazamKit via the SHSession delegate.
let audioSession = AVAudioSession.sharedInstance()
audioSession.requestRecordPermission { isGranted in
    guard isGranted else { return }
    try? audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    let inputNode = self.audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) {
        (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        self.session.matchStreamingBuffer(buffer, at: nil)
    }
    self.audioEngine.prepare()
    do {
        try self.audioEngine.start()
    } catch {
        assertionFailure(error.localizedDescription)
    }
}
Everything works fine when song detection is listening to an external sound source, like a speaker. But I also need to let the user start a song in another app (SoundCloud, for example), open my app, and detect it. When this block of code is executed, the other app's playback stops. I tried changing the bus value and the buffer size, and adding categories via the setCategory method, but nothing helped. My guess is that the issue is caused by using the same resources, like the bus, but as I already mentioned, I tried changing that value.
Use the .mixWithOthers option to allow background audio at full volume.
Or use the .duckOthers option to allow background audio at reduced volume.
let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(.playAndRecord, mode: .default, options: [.mixWithOthers])
    try audioSession.setActive(true)
} catch {
    print("Unable to activate audio session: \(String(describing: error))")
}

// ...later, when ready to deactivate:
do {
    try AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
} catch {
    print("Unable to deactivate audio session: \(String(describing: error))")
}
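If you'd rather have the other app's audio lowered instead of mixed in at full volume, swap the .duckOthers option mentioned above into the same setCategory call:
try audioSession.setCategory(.playAndRecord, mode: .default, options: [.duckOthers])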

Change BPM in real time with AVAudioEngine using Swift

Hello, I am trying to implement a simple audio app using AVAudioEngine that plays a short WAV file in a loop at some bpm, which can be changed in real time (by a slider or something similar).
Current solution logic:
set bpm=60
create audioFile from sample.wav
calculate bufferSize: AVAudioFrameCount(audioFile.processingFormat.sampleRate * 60 / Double(bpm))
set bufferSize to audioBuffer
load file audioFile into audioBuffer.
schedule audioBuffer to play
This solution works, but the issue is that if I want to change the bpm, I need to recreate the buffer with a different bufferSize, so the change is not in real time: I have to stop the player and reschedule a buffer with a different size.
Any thoughts on how this can be done?
Thanks in advance!
Code (main part):
var bpm: Float = 30
let engine = AVAudioEngine()
var player = AVAudioPlayerNode()
var audioBuffer: AVAudioPCMBuffer?
var audioFile: AVAudioFile?

override func viewDidLoad() {
    super.viewDidLoad()
    audioFile = loadfile(from: "sound.wav")
    audioBuffer = tickBuffer(audioFile: audioFile!)
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: audioFile?.processingFormat)
    do {
        engine.prepare()
        try engine.start()
    } catch {
        print(error)
    }
}

private func loadfile(from fileName: String) -> AVAudioFile? {
    let path = Bundle.main.path(forResource: fileName, ofType: nil)!
    let url = URL(fileURLWithPath: path)
    do {
        let audioFile = try AVAudioFile(forReading: url)
        return audioFile
    } catch {
        print("Error loading buffer1 \(error)")
    }
    return nil
}

func tickBuffer(audioFile: AVAudioFile) -> AVAudioPCMBuffer {
    // one buffer per beat: the file's audio followed by silence up to the beat length
    let periodLength = AVAudioFrameCount(audioFile.processingFormat.sampleRate * 60 / Double(bpm))
    let buffer = AVAudioPCMBuffer(pcmFormat: audioFile.processingFormat, frameCapacity: periodLength)!
    try! audioFile.read(into: buffer)
    buffer.frameLength = periodLength
    return buffer
}

func play() {
    guard let audioBuffer = audioBuffer else { return }
    player.scheduleBuffer(audioBuffer, at: nil, options: .loops, completionHandler: nil)
    player.play()
}

func stop() {
    player.stop()
}
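One possible direction (a hedged sketch, not the original poster's code): rather than baking the bpm into a looping buffer's length, schedule each tick individually at a sample-accurate AVAudioTime and re-read the current bpm when computing the next beat, so slider changes take effect on the following beat without stopping the player. The nextBeatSampleTime/scheduleNextTick/startMetronome names and the one-beat-ahead queueing are assumptions for illustration:
var nextBeatSampleTime: AVAudioFramePosition = 0

func scheduleNextTick() {
    guard let file = audioFile else { return }
    let sampleRate = file.processingFormat.sampleRate
    // Re-read bpm on every beat so slider changes are picked up for the next tick.
    let framesPerBeat = AVAudioFramePosition(sampleRate * 60.0 / Double(bpm))
    let when = AVAudioTime(sampleTime: nextBeatSampleTime, atRate: sampleRate)
    player.scheduleFile(file, at: when) { [weak self] in
        self?.scheduleNextTick()
    }
    nextBeatSampleTime += framesPerBeat
}

func startMetronome() {
    nextBeatSampleTime = 0
    scheduleNextTick()
    scheduleNextTick() // keep one beat queued ahead so something is always scheduled
    player.play()
}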

AVAudioEngine MIDI file play (Current progress + MIDI end callback) Swift

I'm playing MIDI using AVAudioEngine, AVAudioSequencer, and AVAudioUnitSampler.
The AVAudioUnitSampler loads a SoundFont and the AVAudioSequencer loads a MIDI file.
My initial configuration is:
engine = AVAudioEngine()
sampler = AVAudioUnitSampler()
speedControl = AVAudioUnitVarispeed()
pitchControl = AVAudioUnitTimePitch()
engine.attach(sampler)
engine.attach(pitchControl)
engine.attach(speedControl)
engine.connect(sampler, to: speedControl, format: nil)
engine.connect(speedControl, to: pitchControl, format: nil)
engine.connect(pitchControl, to: engine.mainMixerNode, format: nil)
Here is how my sequencer loads the MIDI file:
func setupSequencer() {
    self.sequencer = AVAudioSequencer(audioEngine: self.engine)
    let options = AVMusicSequenceLoadOptions()
    let documentsDirectoryURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
    let introurl = URL(string: songDetails!.intro!)
    let midiFileURL = documentsDirectoryURL.appendingPathComponent(introurl!.lastPathComponent)
    do {
        try sequencer.load(from: midiFileURL, options: options)
        print("loaded \(midiFileURL)")
    } catch {
        print("something screwed up \(error)")
        return
    }
    sequencer.prepareToPlay()
    if sequencer.isPlaying == false {
    }
}
And here is how the sampler loads the SoundFont:
func loadSF2PresetIntoSampler(_ preset: UInt8, bankURL: URL) {
    do {
        try self.sampler.loadSoundBankInstrument(at: bankURL,
                                                 program: preset,
                                                 bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                                 bankLSB: UInt8(kAUSampler_DefaultBankLSB))
    } catch {
        print("error loading sound bank instrument")
    }
}
And it plays fine, no issue with this. I have two other requirements, and I'm having problems with those:
1. I have to play another MIDI file after the first one ends. For that I need a complete/finish callback for the MIDI file from either the engine or the sequencer, or a way to load multiple MIDI files into the sequencer. I have tried many approaches, but nothing helped.
2. I need to show the progress of the MIDI playback, i.e. the current time and the total time. For this I tried a method found in a Stack Overflow answer:
var currentPositionInSeconds: TimeInterval {
    get {
        guard let offsetTime = offsetTime else { return 0 }
        guard let lastRenderTime = engine.outputNode.lastRenderTime else { return 0 }
        let frames = lastRenderTime.sampleTime - offsetTime.sampleTime
        return Double(frames) / offsetTime.sampleRate
    }
}
Here offsetTime is
offsetTime = engine.outputNode.lastRenderTime
And it always returns nil.
Glad to see someone using my example code.
It's a missing feature. Please file a Radar. Nothing will happen without a Radar.
They do pay attention to them when scheduling what to work on.
I fake it by getting the length of the sequence.
if let ft = sequencer.tracks.first {
    self.lengthInSeconds = ft.lengthInSeconds
} else {
    self.lengthInSeconds = 0
}
Then in my play function,
do {
    try sequencer.start()
    Timer.scheduledTimer(withTimeInterval: self.lengthInSeconds, repeats: false) {
        [weak self] (t: Timer) in
        guard let self = self else { return }
        t.invalidate()
        self.logger.debug("sequencer finished")
        self.sequencer.stop()
    }
    ...
You can use a DispatchSourceTimer if you want more accuracy.
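A sketch of that DispatchSourceTimer idea, also using AVAudioSequencer's currentPositionInSeconds for the progress display; updateProgressUI is a hypothetical UI hook, and lengthInSeconds is the value computed above:
var progressTimer: DispatchSourceTimer?

func startProgressTimer() {
    let timer = DispatchSource.makeTimerSource(queue: .main)
    timer.schedule(deadline: .now(), repeating: .milliseconds(100))
    timer.setEventHandler { [weak self] in
        guard let self = self else { return }
        let current = self.sequencer.currentPositionInSeconds
        self.updateProgressUI(current: current, total: self.lengthInSeconds) // hypothetical UI hook
        if current >= self.lengthInSeconds {
            self.sequencer.stop()
            self.progressTimer?.cancel()
            // load and start the next MIDI file here
        }
    }
    timer.resume()
    progressTimer = timer
}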

Change Sample rate with AudioConverter

I am trying to resample the input audio from 44.1 kHz to 48 kHz by:
1. using AudioToolbox's AUAudioUnit.inputHandler
2. writing the 44.1 kHz input out to a WAV file (this is working perfectly)
3. converting the 44.1 kHz audio to 48 kHz and writing the converted bytes out to a file: https://developer.apple.com/documentation/audiotoolbox/1503098-audioconverterfillcomplexbuffer
The problem is in the 3rd step. After writing out to the file, the voice is very noisy.
Here is my code:
// convert to 48kHz
var audioConverterRef: AudioConverterRef?
CheckError(AudioConverterNew(&self.hardwareFormat,
                             &self.convertingFormat,
                             &audioConverterRef), "AudioConverterNew failed")

let outputBufferSize = inNumBytes
let outputBuffer = UnsafeMutablePointer<Int16>.allocate(capacity: MemoryLayout<Int16>.size * Int(outputBufferSize))
let convertedData = AudioBufferList.allocate(maximumBuffers: 1)
convertedData[0].mNumberChannels = self.hardwareFormat.mChannelsPerFrame
convertedData[0].mDataByteSize = outputBufferSize
convertedData[0].mData = UnsafeMutableRawPointer(outputBuffer)

var ioOutputDataPackets = UInt32(inNumPackets)
CheckError(AudioConverterFillComplexBuffer(audioConverterRef!,
                                           self.coverterCallback,
                                           &bufferList,
                                           &ioOutputDataPackets,
                                           convertedData.unsafeMutablePointer,
                                           nil), "AudioConverterFillComplexBuffer error")

let convertedmData = convertedData[0].mData!
let convertedmDataByteSize = convertedData[0].mDataByteSize

// Write converted packets to file -> audio_unit_int16_48.wav
CheckError(AudioFileWritePackets(self.outputFile48000!,
                                 false,
                                 convertedmDataByteSize,
                                 nil,
                                 recordPacket,
                                 &ioOutputDataPackets,
                                 convertedmData), "AudioFileWritePackets error")
and the conversion callback body is here:
let buffers = UnsafeMutableBufferPointer<AudioBuffer>(start: &bufferList.mBuffers, count: Int(bufferList.mNumberBuffers))
let dataPtr = UnsafeMutableAudioBufferListPointer(ioData)
dataPtr[0].mNumberChannels = 1
dataPtr[0].mData = buffers[0].mData
dataPtr[0].mDataByteSize = buffers[0].mDataByteSize
ioDataPacketCount.pointee = buffers[0].mDataByteSize / UInt32(MemoryLayout<Int16>.size)
the sample project is here: https://drive.google.com/file/d/1GvCJ5hEqf7PsBANwUpVTRE1L7S_zQxnL/view?usp=sharing
If part of your chain is still AVAudioEngine, there's sample code from Apple for offline processing of AVAudioFiles.
Here's a modified version that includes the sampleRate change:
import Cocoa
import AVFoundation
import PlaygroundSupport

let outputSampleRate = 48_000.0
let outputAudioFormat = AVAudioFormat(standardFormatWithSampleRate: outputSampleRate, channels: 2)!

// file needs to be in ~/Documents/Shared Playground Data
let localURL = playgroundSharedDataDirectory.appendingPathComponent("inputFile_44.aiff")
let outputURL = playgroundSharedDataDirectory.appendingPathComponent("outputFile_48.aiff")

let sourceFile: AVAudioFile
let format: AVAudioFormat
do {
    sourceFile = try AVAudioFile(forReading: localURL)
    format = sourceFile.processingFormat
} catch {
    fatalError("Unable to load the source audio file: \(error.localizedDescription).")
}

let sourceSettings = sourceFile.fileFormat.settings
var outputSettings = sourceSettings
outputSettings[AVSampleRateKey] = outputSampleRate

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)

// Connect the nodes.
engine.connect(player, to: engine.mainMixerNode, format: format)

// Schedule the source file.
player.scheduleFile(sourceFile, at: nil)

do {
    // The maximum number of frames the engine renders in any single render call.
    let maxFrames: AVAudioFrameCount = 4096
    try engine.enableManualRenderingMode(.offline, format: outputAudioFormat,
                                         maximumFrameCount: maxFrames)
} catch {
    fatalError("Enabling manual rendering mode failed: \(error).")
}

do {
    try engine.start()
    player.play()
} catch {
    fatalError("Unable to start audio engine: \(error).")
}

let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat, frameCapacity: engine.manualRenderingMaximumFrameCount)!

var outputFile: AVAudioFile?
do {
    outputFile = try AVAudioFile(forWriting: outputURL, settings: outputSettings)
} catch {
    fatalError("Unable to open output audio file: \(error).")
}

let outputLengthD = Double(sourceFile.length) * outputSampleRate / sourceFile.fileFormat.sampleRate
let outputLength = Int64(ceil(outputLengthD)) // no sample left behind

while engine.manualRenderingSampleTime < outputLength {
    do {
        let frameCount = outputLength - engine.manualRenderingSampleTime
        let framesToRender = min(AVAudioFrameCount(frameCount), buffer.frameCapacity)
        let status = try engine.renderOffline(framesToRender, to: buffer)
        switch status {
        case .success:
            // The data rendered successfully. Write it to the output file.
            try outputFile?.write(from: buffer)
        case .insufficientDataFromInputNode:
            // Applicable only when using the input node as one of the sources.
            break
        case .cannotDoInCurrentContext:
            // The engine couldn't render in the current render call.
            // Retry in the next iteration.
            break
        case .error:
            // An error occurred while rendering the audio.
            fatalError("The manual rendering failed.")
        }
    } catch {
        fatalError("The manual rendering failed: \(error).")
    }
}

// Stop the player node and engine.
player.stop()
engine.stop()

outputFile = nil // AVAudioFile won't close until it goes out of scope, so we set the output file back to nil here

CI Face Detector stopping audio from recording with AVAssetWriter

I want to detect a face in real-time video using Apple's CIDetector face detector, and then record the video to a file using AVAssetWriter.
I thought I had it working, but the audio is temperamental. Sometimes it records properly with the video, other times it starts recording but then goes mute, other times it's out of sync with the video, and sometimes it doesn't work at all.
With a print statement I can see that the audio sample buffer is there. It must have something to do with the face detection, because when I comment out that code the recording works fine.
Here's my code:
// MARK: AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    let writable = canWrite()
    if writable {
        print("Writable")
    }

    if writable,
        sessionAtSourceTime == nil {
        // Start writing
        sessionAtSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        videoWriter.startSession(atSourceTime: sessionAtSourceTime!)
        print("session started")
    }

    // processing on the images, not audio
    if output == videoDataOutput {
        connection.videoOrientation = .portrait
        if connection.isVideoMirroringSupported {
            connection.isVideoMirrored = true
        }

        // convert current frame to CIImage
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        let attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, pixelBuffer!, CMAttachmentMode(kCMAttachmentMode_ShouldPropagate)) as? [String: Any]
        let ciImage = CIImage(cvImageBuffer: pixelBuffer!, options: attachments)

        // Detect faces in the CIImage
        let features = faceDetector?.features(in: ciImage, options: [CIDetectorSmile: true,
                                                                     CIDetectorEyeBlink: true])
            .compactMap({ $0 as? CIFaceFeature })

        // Retrieve the frame of the buffer
        let desc = CMSampleBufferGetFormatDescription(sampleBuffer)
        let bufferFrame = CMVideoFormatDescriptionGetCleanAperture(desc!, false)

        // Draw face masks
        DispatchQueue.main.async { [weak self] in
            UIView.animate(withDuration: 0.2) {
                self?.drawFaceMasksFor(features: features!, bufferFrame: bufferFrame)
            }
        }
    }

    if writable,
        output == videoDataOutput,
        videoWriterInput.isReadyForMoreMediaData {
        // write video buffer
        videoWriterInput.append(sampleBuffer)
        print("video buffering")
    } else if writable,
        output == audioDataOutput,
        audioWriterInput.isReadyForMoreMediaData {
        // write audio buffer
        audioWriterInput?.append(sampleBuffer)
        print("audio buffering")
    }
}
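One possible direction (a sketch under the assumption that the CIDetector work is what stalls the capture callbacks, not a confirmed fix): keep captureOutput fast by appending sample buffers immediately and pushing the face detection onto its own queue, dropping frames while a detection is in flight. The detectionQueue, detectionGate, and detectFacesAsync names are hypothetical:
private let detectionQueue = DispatchQueue(label: "face.detection.queue")
private let detectionGate = DispatchSemaphore(value: 1)

func detectFacesAsync(in pixelBuffer: CVPixelBuffer, bufferFrame: CGRect) {
    // Drop this frame if a detection is already running, so the capture callback never blocks.
    guard detectionGate.wait(timeout: .now()) == .success else { return }
    let ciImage = CIImage(cvImageBuffer: pixelBuffer)
    detectionQueue.async { [weak self, gate = detectionGate] in
        defer { gate.signal() }
        let features = self?.faceDetector?.features(in: ciImage,
                                                    options: [CIDetectorSmile: true,
                                                              CIDetectorEyeBlink: true])
            .compactMap { $0 as? CIFaceFeature } ?? []
        DispatchQueue.main.async {
            self?.drawFaceMasksFor(features: features, bufferFrame: bufferFrame)
        }
    }
}
Giving videoDataOutput and audioDataOutput separate serial delegate queues may also help, so a slow video frame doesn't delay audio delivery.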