How to keep AVMIDIPlayer playing? - swift

I'm trying to use Apple's AVMIDIPlayer object for playing a MIDI file. It seems easy enough in Swift, using the following code:
import AVFoundation

let midiFile = URL(fileURLWithPath: "/path/to/midifile.mid")
var midiPlayer: AVMIDIPlayer?
do {
    try midiPlayer = AVMIDIPlayer(contentsOf: midiFile, soundBankURL: nil)
    midiPlayer?.prepareToPlay()
} catch {
    print("could not create MIDI player")
}
midiPlayer?.play {
    print("finished playing")
}
And it plays for about 0.05 seconds. I presume I need to frame it in some kind of loop. I've tried a simple solution:
while stillGoing {
    midiPlayer?.play {
        let stillGoing = false
    }
}
which works, but ramps up the CPU massively. Is there a better way?
Further to the first comment, I've tried making a class, and while it doesn't flag any errors, it doesn't work either.
class midiPlayer {
    var player: AVMIDIPlayer?

    func play(file: String) {
        let myURL = URL(string: file)
        do {
            try self.player = AVMIDIPlayer.init(contentsOf: myURL!, soundBankURL: nil)
            self.player?.prepareToPlay()
        } catch {
            print("could not create MIDI player")
        }
        self.player?.play()
    }

    func stop() {
        self.player?.stop()
    }
}

// main
let myPlayer = midiPlayer()
let midiFile = "/path/to/midifile.mid"
myPlayer.play(file: midiFile)

You were close with your loop. You just need to give the CPU time to go off and do other things instead of constantly checking to see if midiPlayer is finished yet. Add a call to usleep() in your loop. This one checks every tenth of a second:
let midiFile = URL(fileURLWithPath: "/Users/steve/Desktop/Untitled.mid")
var midiPlayer: AVMIDIPlayer?
do {
    try midiPlayer = AVMIDIPlayer(contentsOf: midiFile, soundBankURL: nil)
    midiPlayer?.prepareToPlay()
} catch {
    print("could not create MIDI player")
}
var stillGoing = true
while stillGoing {
    midiPlayer?.play {
        print("finished playing")
        stillGoing = false
    }
    usleep(100000)
}

You need to ensure that the midiPlayer object exists until it's done playing. If the above code is just in a single function, midiPlayer will be destroyed when the function returns because there are no remaining references to it. Typically you would declare midiPlayer as a property of a longer-lived object, such as a view controller.
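For instance, a minimal sketch (class and method names are illustrative) that holds the player as a property so it survives past the function call:

import AVFoundation

class MIDIController {
    // Stored property keeps the player alive until playback finishes
    var midiPlayer: AVMIDIPlayer?

    func play(path: String) {
        let midiFile = URL(fileURLWithPath: path)
        do {
            midiPlayer = try AVMIDIPlayer(contentsOf: midiFile, soundBankURL: nil)
            midiPlayer?.prepareToPlay()
            midiPlayer?.play {
                print("finished playing")
            }
        } catch {
            print("could not create MIDI player: \(error)")
        }
    }
}

In an app this is enough; in a command-line tool you would still need to keep the run loop alive (or sleep) until the completion handler fires.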

Combining Brendan and Steve's answers, the key is sleep or usleep, plus keeping the play call outside the loop to avoid revving the CPU.
player?.play({ return })
while player!.isPlaying {
    sleep(1) // or usleep(10000)
}
The original stillGoing value works, but there is also an isPlaying property.
play needs something between its braces (even just return) to avoid hanging forever after completion.
Many thanks.

Related

SNAudioStreamAnalyzer not stopping sound classification request

I'm a student studying iOS development currently working on a simple AI project that utilizes SNAudioStreamAnalyzer to classify an incoming audio stream from the device's microphone. I can start the stream and analyze audio no problem, but I've noticed I can't seem to get my app to stop analyzing and close the audio input stream when I'm done. At the beginning, I initialize the audio engine and create the classification request like so:
private func startAudioEngine() {
    do {
        // start the stream of audio data
        try audioEngine.start()
        let snoreClassifier = try? SnoringClassifier2_0().model
        let classifySoundRequest = try audioAnalyzer.makeRequest(snoreClassifier)
        try streamAnalyzer.add(classifySoundRequest,
                               withObserver: self.audioAnalyzer)
    } catch {
        print("Unable to start AVAudioEngine: \(error.localizedDescription)")
    }
}
After I'm done classifying my audio stream, I attempt to stop the audio engine and close the stream like so:
private func terminateNight() {
    streamAnalyzer.removeAllRequests()
    audioEngine.stop()
    stopAndSaveNight()
    do {
        let session = AVAudioSession.sharedInstance()
        try session.setActive(false)
    } catch {
        print("unable to terminate audio session")
    }
    nightSummary = true
}
However, after I call the terminateNight() function my app will continue using the microphone and classifying the incoming audio. Here's my SNResultsObserving implementation:
class AudioAnalyzer: NSObject, SNResultsObserving {
    var prediction: String?
    var confidence: Double?
    let snoringEventManager: SnoringEventManager

    internal init(prediction: String? = nil, confidence: Double? = nil, snoringEventManager: SnoringEventManager) {
        self.prediction = prediction
        self.confidence = confidence
        self.snoringEventManager = snoringEventManager
    }

    func makeRequest(_ customModel: MLModel? = nil) throws -> SNClassifySoundRequest {
        if let model = customModel {
            let customRequest = try SNClassifySoundRequest(mlModel: model)
            return customRequest
        } else {
            throw AudioAnalysisErrors.ModelInterpretationError
        }
    }

    func request(_ request: SNRequest, didProduce: SNResult) {
        guard let classificationResult = didProduce as? SNClassificationResult else { return }
        let topClassification = classificationResult.classifications.first
        let timeRange = classificationResult.timeRange
        self.prediction = topClassification?.identifier
        self.confidence = topClassification?.confidence
        if self.prediction! == "snoring" {
            self.snoringEventManager.snoringDetected()
        } else {
            self.snoringEventManager.nonSnoringDetected()
        }
    }

    func request(_ request: SNRequest, didFailWithError: Error) {
        print("ended with error \(didFailWithError)")
    }

    func requestDidComplete(_ request: SNRequest) {
        print("request finished")
    }
}
It was my understanding that upon calling streamAnalyzer.removeAllRequests() and audioEngine.stop() the app would stop streaming from the microphone and call the requestDidComplete function, but this isn't the behavior I'm getting. Any help is appreciated!
From the OP's edit:
So I've realized it was a SwiftUI problem. I was calling the startAudioEngine() function in the initializer of the view it was declared on. I thought this would be fine, but since this view was embedded in a parent view, SwiftUI was re-initializing my view whenever the parent updated, and as such calling startAudioEngine() again. The solution was to call this function in an onAppear block, so that it activates the audio engine only when the view appears, and not when SwiftUI initializes it.
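A minimal sketch of that fix (the view body is illustrative; startAudioEngine() is the function from the question):

import SwiftUI

struct NightView: View {
    var body: some View {
        Text("Recording…")
            .onAppear {
                // Fires when the view actually appears on screen,
                // not every time SwiftUI re-initializes the struct
                // during a parent update
                startAudioEngine()
            }
    }
}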
I don't believe you should expect to receive requestDidComplete due to removing a request. You'd expect to receive that when you call completeAnalysis.
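If you do want requestDidComplete to fire, a sketch of the teardown with completeAnalysis() added first (assuming streamAnalyzer is the SNAudioStreamAnalyzer from the question):

private func terminateNight() {
    // Signals that no more buffers are coming; observers then
    // receive requestDidComplete
    streamAnalyzer.completeAnalysis()
    streamAnalyzer.removeAllRequests()
    audioEngine.stop()
    // ... rest of the teardown as in the question
}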

How to set NowPlaying properties with an AVQueuePlayer in Swift?

I have an AVQueuePlayer that gets songs from Firebase Storage via their URLs and plays them in sequence.
static func playQueue() {
    for song in songs {
        guard let url = song.url else { return }
        lofiSongs.append(AVPlayerItem(url: url))
    }
    if queuePlayer == nil {
        queuePlayer = AVQueuePlayer(items: lofiSongs)
    } else {
        queuePlayer?.removeAllItems()
        lofiSongs.forEach { queuePlayer?.insert($0, after: nil) }
    }
    queuePlayer?.seek(to: .zero) // In case we added items back in
    queuePlayer?.play()
}
And this works great.
I can also make the lock screen controls appear and use the play/pause button like this:
private static func setRemoteControlActions() {
    let commandCenter = MPRemoteCommandCenter.shared()
    // Add handler for Play Command
    commandCenter.playCommand.addTarget { [self] event in
        queuePlayer?.play()
        return .success
    }
    // Add handler for Pause Command
    commandCenter.pauseCommand.addTarget { [self] event in
        if queuePlayer?.rate == 1.0 {
            queuePlayer?.pause()
            return .success
        }
        return .commandFailed
    }
}
The problem comes with setting the metadata of the player (name, image, etc.).
I know it can be done once by setting MPMediaItemPropertyTitle and MPMediaItemArtwork, but how would I change it when the next track loads?
I'm not sure if my approach works for AVQueuePlayer, but for playing live streams with AVPlayer you can "listen" for metadata as it arrives.
extension ViewController: AVPlayerItemMetadataOutputPushDelegate {
    func metadataOutput(_ output: AVPlayerItemMetadataOutput, didOutputTimedMetadataGroups groups: [AVTimedMetadataGroup], from track: AVPlayerItemTrack?) {
        // look for metadata in groups
    }
}
I added the AVPlayerItemMetadataOutputPushDelegate via an extension to my ViewController.
I also found this post.
I hope this gives you a lead toward a solution. As said, I'm not sure how this works with AVQueuePlayer.
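Another option (only a sketch, separate from the metadata-output approach above) is to key-value observe the queue player's currentItem and refresh MPNowPlayingInfoCenter whenever the track changes; titleForItem(_:) stands in for however you look up your own song metadata:

import AVFoundation
import MediaPlayer

var currentItemObservation: NSKeyValueObservation?

func observeTrackChanges(on player: AVQueuePlayer) {
    currentItemObservation = player.observe(\.currentItem, options: [.new]) { _, change in
        // KVO hands back a double optional; flatten it
        guard let item = change.newValue ?? nil else { return }
        var info = [String: Any]()
        info[MPMediaItemPropertyTitle] = titleForItem(item)
        MPNowPlayingInfoCenter.default().nowPlayingInfo = info
    }
}

// Hypothetical lookup; replace with your own metadata source
func titleForItem(_ item: AVPlayerItem) -> String {
    (item.asset as? AVURLAsset)?.url.lastPathComponent ?? "Unknown"
}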

AVAudioPlayerNode skips a lot of sounds when stopped and started again

I'm scheduling a buffer periodically using the scheduleBuffer function. The completion handler schedules the next sound when the current one is done, like so:
func scheduleSounds() {
    if !isPlaying { return }
    while beatsScheduled < beatsToScheduleAhead {
        // Returns AVAudioTime of the next sound, based on sample time.
        // The first sound is always sample time 0, then it increments.
        let nextTime = getNextTime()
        player.scheduleBuffer(
            soundBuffer,
            at: nextTime,
            options: [],
            completionHandler: {
                self.queue!.sync {
                    print("Finished playing a sound!")
                    self.scheduleSounds()
                }
            }
        )
        beatsScheduled += 1
        if !playerStarted {
            player.play(at: nil)
            playerStarted = true
        }
    }
}
When I stop and then start the player/engine, there is a delay before the sound starts playing, and after logging which sounds play when, I noticed there are a couple of sounds the engine never plays. My suspicion is that the player node thinks those sample times have already passed relative to the player node's own time. I think I might not be resetting or stopping something properly; here's how I start and stop the sounds:
func start() {
    do {
        try engine.start()
        isPlaying = true
        queue!.sync {
            self.scheduleSounds()
        }
    } catch {
        print("\(error)")
    }
}

func stop() {
    isPlaying = false
    player.stop()
    engine.stop()
    playerStarted = false
}
So the problem is: how do I make sure that AVAudioPlayerNode doesn't skip any sounds after being stopped and started again?
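One likely culprit (an assumption, since getNextTime() isn't shown): AVAudioPlayerNode's sample time restarts from zero after stop(), so any counter that keeps incrementing across restarts will hand scheduleBuffer times on a stale timeline. A sketch of resetting the scheduling state in start():

func start() {
    do {
        try engine.start()
        // player.stop() reset the node's sample time to 0, so restart
        // the schedule from the beginning of the node's new timeline
        // (assumes getNextTime() reads these two counters)
        nextSampleTime = 0
        beatsScheduled = 0
        isPlaying = true
        queue!.sync {
            self.scheduleSounds()
        }
    } catch {
        print("\(error)")
    }
}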

DispatchGroup On Do, Try, Catch?

I'm working with the AudioKit framework and looking to use DispatchGroup to make a method work async. I'd like for the player.load method to run only after the audioFile has been created; right now it's throwing an error ~50% of the time and I suspect it's due to timing. I've used DispatchGroup with success in other circumstances, but never in a do/try/catch. Is there a way to make this part of the function work with it? If not, is there a way to set up a closure? Thanks!
func createPlayer(fileName: String) -> AKPlayer {
    let player = AKPlayer()
    let audioFile: AKAudioFile
    player.mixer >>> mixer
    do {
        audioFile = try AKAudioFile(readFileName: "\(fileName).mp3")
        player.load(audioFile: audioFile)
        print("AudioFile \(fileName), \(audioFile) loaded")
    } catch {
        print("No audio file read, looking for \(fileName).mp3")
    }
    player.isLooping = false
    player.fade.inTime = 2 // in seconds
    player.fade.outTime = 2
    player.stopEnvelopeTime = 2
    player.completionHandler = {
        print("Completion")
        self.player.detach()
    }
    player.play()
    return player
}
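One way to sequence it (a sketch using only Dispatch APIs plus the AudioKit calls already in the question) is to do the file read inside a DispatchGroup and run the load in the group's notify block, so load can only happen after the read has finished:

func createPlayerAsync(fileName: String, completion: @escaping (AKPlayer?) -> Void) {
    let group = DispatchGroup()
    var audioFile: AKAudioFile?

    group.enter()
    DispatchQueue.global(qos: .userInitiated).async {
        defer { group.leave() }   // leave() runs whether the read throws or not
        do {
            audioFile = try AKAudioFile(readFileName: "\(fileName).mp3")
        } catch {
            print("No audio file read, looking for \(fileName).mp3")
        }
    }

    // Runs only after the block above has called leave()
    group.notify(queue: .main) {
        guard let file = audioFile else {
            completion(nil)
            return
        }
        let player = AKPlayer()
        player.load(audioFile: file)
        completion(player)
    }
}

The caller would then do the remaining setup (fades, completionHandler, play()) inside the completion closure.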

How to play the same sound overlapping with AVAudioPlayer?

This code plays the sound when the button is tapped, but cancels the previous sound if the button is pressed again. I don't want this to happen; I want the same sound to overlap when the button is pressed repeatedly. I believe this is because I'm reusing the same AVAudioPlayer. I've looked on the internet, but I'm new to Swift and want to know how to create a new AVAudioPlayer every time the method runs so the sounds overlap.
func playSound(sound: String) {
    // Set the sound file name & extension
    let soundPath = NSURL(fileURLWithPath: NSBundle.mainBundle().pathForResource(sound, ofType: "mp3")!)
    do {
        // Preparation
        try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
    } catch _ {
    }
    do {
        try AVAudioSession.sharedInstance().setActive(true)
    } catch _ {
    }
    // Play the sound
    var error: NSError?
    do {
        audioPlayer = try AVAudioPlayer(contentsOfURL: soundPath)
    } catch let error1 as NSError {
        error = error1
    }
    audioPlayer.prepareToPlay()
    audioPlayer.play()
}
To play two sounds simultaneously with AVAudioPlayer, you just have to use a different player for each sound.
In my example I've declared two players, playerBoom and playerCrash, in the ViewController. I populate each with a sound to play via a function, then trigger both plays at once:
import UIKit
import AVFoundation

class ViewController: UIViewController {
    var playerBoom: AVAudioPlayer?
    var playerCrash: AVAudioPlayer?

    override func viewDidLoad() {
        super.viewDidLoad()
        playerBoom = preparePlayerForSound(named: "sound1")
        playerCrash = preparePlayerForSound(named: "sound2")
        playerBoom?.prepareToPlay()
        playerCrash?.prepareToPlay()
        playerBoom?.play()
        playerCrash?.play()
    }

    func preparePlayerForSound(named sound: String) -> AVAudioPlayer? {
        do {
            if let soundPath = NSBundle.mainBundle().pathForResource(sound, ofType: "mp3") {
                try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
                try AVAudioSession.sharedInstance().setActive(true)
                return try AVAudioPlayer(contentsOfURL: NSURL(fileURLWithPath: soundPath))
            } else {
                print("The file '\(sound).mp3' is not available")
            }
        } catch let error as NSError {
            print(error)
        }
        return nil
    }
}
It works very well, but in my opinion it's not suitable if you have many sounds to play. It's a perfectly valid solution for just a few, though.
This example uses two different sounds, but of course the idea is exactly the same for two identical sounds.
I could not find a solution using just AVAudioPlayer.
Instead, I found a solution with a library that is built on top of AVAudioPlayer.
The library allows the same sound to be played overlapping with itself.
https://github.com/adamcichy/SwiftySound
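For completeness, here's a sketch in current Swift of the new-player-per-tap approach the question asks about (names are illustrative). A fresh AVAudioPlayer is created for every tap and retained until its sound finishes, so repeated taps overlap:

import AVFoundation

class OverlappingSoundPlayer: NSObject, AVAudioPlayerDelegate {
    // Strong references keep each player alive while its sound plays
    private var activePlayers = [AVAudioPlayer]()

    func playSound(named sound: String) {
        guard let url = Bundle.main.url(forResource: sound, withExtension: "mp3") else { return }
        do {
            let player = try AVAudioPlayer(contentsOf: url)
            player.delegate = self
            activePlayers.append(player)
            player.play()
        } catch {
            print("could not play \(sound).mp3: \(error)")
        }
    }

    // Release the player once its sound has finished
    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
        activePlayers.removeAll { $0 === player }
    }
}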