I'm working with the AudioKit framework and looking to use DispatchGroup to make a method work async. I'd like for the player.load method to run only after the audioFile has been created; right now it's throwing an error ~50% of the time and I suspect it's due to timing. I've used DispatchGroup with success in other circumstances, but never in a do/try/catch. Is there a way to make this part of the function work with it? If not, is there a way to set up a closure? Thanks!
func createPlayer(fileName: String) -> AKPlayer {
    let player = AKPlayer()
    let audioFile: AKAudioFile
    player.mixer >>> mixer
    do {
        audioFile = try AKAudioFile(readFileName: "\(fileName).mp3")
        player.load(audioFile: audioFile)
        print("AudioFile \(fileName), \(audioFile) loaded")
    } catch {
        print("No audio file read, looking for \(fileName).mp3")
    }
    player.isLooping = false
    player.fade.inTime = 2 // in seconds
    player.fade.outTime = 2
    player.stopEnvelopeTime = 2
    player.completionHandler = {
        print("Completion")
        self.player.detach()
    }
    player.play()
    return player
}
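To sketch the closure half of the question (a hypothetical helper, not tested against AudioKit, assuming the same AKPlayer/AKAudioFile API as above): the file read can move to a background queue, with load(audioFile:) deferred into a completion closure so it only runs once the AKAudioFile exists. Note this makes the flow asynchronous, so the player can no longer be returned synchronously from createPlayer.

    // Hypothetical helper: read the file off the main queue, then hand the
    // result back on the main queue so load(audioFile:) runs only after
    // the AKAudioFile has been created (or not at all, on failure).
    func createAudioFile(named fileName: String,
                         completion: @escaping (AKAudioFile?) -> Void) {
        DispatchQueue.global(qos: .userInitiated).async {
            let file = try? AKAudioFile(readFileName: "\(fileName).mp3")
            DispatchQueue.main.async { completion(file) }
        }
    }

    // Usage sketch inside a setup method:
    createAudioFile(named: fileName) { audioFile in
        guard let audioFile = audioFile else {
            print("No audio file read, looking for \(fileName).mp3")
            return
        }
        player.load(audioFile: audioFile)
        player.play()
    }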
Playing 2 sounds in succession ... but before 2nd sound finishes, trying to stop the entire 2 sound sequence.
Note that it does not matter where I choose to stop the sequence - even, e.g., while the 1st sound is playing.
Here are the playSound and stopSound + 1 helper func code snippets:
func playSound(theSoundName: String) {
    guard let url = Bundle.main.url(forResource: "audio/" + theSoundName,
                                    withExtension: "mp3") else {
        print("sound not found")
        return
    }
    itsSoundPlayer = setupAudioPlayer(theURL: url)
    if theSoundName == "roar" {
        itsSoundPlayer?.numberOfLoops = -1 // forever
    }
    itsSoundPlayer?.play()
} // playSound

func stopSound() {
    itsSoundPlayer?.stop() // stops whatever is playing via playSound(...)
} // stopSound

func setupAudioPlayer(theURL: URL) -> AVAudioPlayer? {
    do {
        // Make this app ready to take over the device audio
        try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
        try AVAudioSession.sharedInstance().setActive(true)
        let soundPlayer = try AVAudioPlayer(contentsOf: theURL)
        return soundPlayer
    }
    catch let error as NSError {
        print("error: \(error.localizedDescription)")
        return nil
    }
} // setupAudioPlayer
Okay, so the above are the basic building blocks ... now for a specific example of their use:
func attaBoy() {
    stopSound() // stop whatever is playing via playSound(...)
    playSound(theSoundName: "attaboy") // then play the new sound
    // give "attaboy" time to finish before returning to "roar"
    let theDelay = Double(2.0)
    DispatchQueue.main.asyncAfter(deadline: .now() + theDelay) {
        self.playSound(theSoundName: "roar") // replay
    }
} // attaBoy
At an undetermined point when either attaboy or roar is playing, I want to call stopSound(). This stoppage could occur while attaboy is playing or while roar is playing.
I've tried to use the above code as is ... but when I try to stop the 2-sound sequence while attaboy is playing, attaboy stops as it should, but roar still plays.
Is there some other approach I should try ?
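One approach (a sketch, not from the question; pendingReplay is a hypothetical property alongside itsSoundPlayer): schedule the delayed replay as a cancellable DispatchWorkItem, so stopSound() can also cancel the pending return to roar.

    var pendingReplay: DispatchWorkItem?

    func attaBoy() {
        stopSound() // also cancels any previously scheduled replay
        playSound(theSoundName: "attaboy")
        let replay = DispatchWorkItem { [weak self] in
            self?.playSound(theSoundName: "roar")
        }
        pendingReplay = replay
        DispatchQueue.main.asyncAfter(deadline: .now() + 2.0, execute: replay)
    }

    func stopSound() {
        pendingReplay?.cancel() // prevents "roar" from coming back later
        pendingReplay = nil
        itsSoundPlayer?.stop()
    }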
I'm a student studying iOS development currently working on a simple AI project that utilizes SNAudioStreamAnalyzer to classify an incoming audio stream from the device's microphone. I can start the stream and analyze audio no problem, but I've noticed I can't seem to get my app to stop analyzing and close the audio input stream when I'm done. At the beginning, I initialize the audio engine and create the classification request like so:
private func startAudioEngine() {
    do {
        // start the stream of audio data
        try audioEngine.start()
        let snoreClassifier = try? SnoringClassifier2_0().model
        let classifySoundRequest = try audioAnalyzer.makeRequest(snoreClassifier)
        try streamAnalyzer.add(classifySoundRequest,
                               withObserver: self.audioAnalyzer)
    } catch {
        print("Unable to start AVAudioEngine: \(error.localizedDescription)")
    }
}
After I'm done classifying my audio stream, I attempt to stop the audio engine and close the stream like so:
private func terminateNight() {
    streamAnalyzer.removeAllRequests()
    audioEngine.stop()
    stopAndSaveNight()
    do {
        let session = AVAudioSession.sharedInstance()
        try session.setActive(false)
    } catch {
        print("unable to terminate audio session")
    }
    nightSummary = true
}
However, after I call the terminateNight() function my app will continue using the microphone and classifying the incoming audio. Here's my SNResultsObserving implementation:
class AudioAnalyzer: NSObject, SNResultsObserving {
    var prediction: String?
    var confidence: Double?
    let snoringEventManager: SnoringEventManager

    internal init(prediction: String? = nil, confidence: Double? = nil, snoringEventManager: SnoringEventManager) {
        self.prediction = prediction
        self.confidence = confidence
        self.snoringEventManager = snoringEventManager
    }

    func makeRequest(_ customModel: MLModel? = nil) throws -> SNClassifySoundRequest {
        if let model = customModel {
            let customRequest = try SNClassifySoundRequest(mlModel: model)
            return customRequest
        } else {
            throw AudioAnalysisErrors.ModelInterpretationError
        }
    }

    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let classificationResult = result as? SNClassificationResult else { return }
        let topClassification = classificationResult.classifications.first
        let timeRange = classificationResult.timeRange
        self.prediction = topClassification?.identifier
        self.confidence = topClassification?.confidence
        if self.prediction == "snoring" {
            self.snoringEventManager.snoringDetected()
        } else {
            self.snoringEventManager.nonSnoringDetected()
        }
    }

    func request(_ request: SNRequest, didFailWithError error: Error) {
        print("ended with error \(error)")
    }

    func requestDidComplete(_ request: SNRequest) {
        print("request finished")
    }
}
It was my understanding that upon calling streamAnalyzer.removeAllRequests() and audioEngine.stop() the app would stop streaming from the microphone and call the requestDidComplete function, but this isn't the behavior I'm getting. Any help is appreciated!
From OP's edit:
So I've realized it was a SwiftUI problem. I was calling the startAudioEngine() function in the initializer of the view it was declared on. I thought this would be fine, but since this view was embedded in a parent view, when SwiftUI updated the parent it re-initialized my view and, as such, called startAudioEngine() again. The solution was to call this function in an onAppear block, so that it activates the audio engine only when the view appears, and not when SwiftUI initializes it.
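A minimal sketch of that fix (the view name is hypothetical; startAudioEngine() is the method from the question):

    import SwiftUI

    struct AnalyzerView: View { // hypothetical view name
        var body: some View {
            Text("Analyzing…")
                .onAppear {
                    // runs when the view actually appears on screen,
                    // not every time SwiftUI re-initializes the struct
                    startAudioEngine()
                }
        }

        private func startAudioEngine() { /* as defined above */ }
    }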
I don't believe you should expect to receive requestDidComplete due to removing a request. You'd expect to receive that when you call completeAnalysis.
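A sketch of what that teardown could look like, assuming the streamAnalyzer and audioEngine from the question (completeAnalysis() is the SNAudioStreamAnalyzer call the comment above refers to):

    // Signal that no more buffers are coming; this is what should drive
    // requestDidComplete on the observer, then tear the rest down.
    streamAnalyzer.completeAnalysis()
    streamAnalyzer.removeAllRequests()
    audioEngine.stop()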
I'm trying to use Apple's AVMIDIPlayer object for playing a MIDI file. It seems easy enough in Swift, using the following code:
let midiFile: NSURL = NSURL(fileURLWithPath: "/path/to/midifile.mid")
var midiPlayer: AVMIDIPlayer?
do {
    midiPlayer = try AVMIDIPlayer(contentsOf: midiFile as URL, soundBankURL: nil)
    midiPlayer?.prepareToPlay()
} catch {
    print("could not create MIDI player")
}
midiPlayer?.play {
    print("finished playing")
}
And it plays for about 0.05 seconds. I presume I need to frame it in some kind of loop. I've tried a simple solution:
while stillGoing {
    midiPlayer?.play {
        let stillGoing = false
    }
}
which works, but ramps up the CPU massively. Is there a better way?
Further to the first comment, I've tried making a class, and while it doesn't flag any errors, it doesn't work either.
class midiPlayer {
    var player: AVMIDIPlayer?

    func play(file: String) {
        let myURL = URL(string: file)
        do {
            self.player = try AVMIDIPlayer(contentsOf: myURL!, soundBankURL: nil)
            self.player?.prepareToPlay()
        } catch {
            print("could not create MIDI player")
        }
        self.player?.play()
    }

    func stop() {
        self.player?.stop()
    }
}

// main
let myPlayer = midiPlayer()
let midiFile = "/path/to/midifile.mid"
myPlayer.play(file: midiFile)
You were close with your loop. You just need to give the CPU time to go off and do other things instead of constantly checking to see if midiPlayer is finished yet. Add a call to usleep() in your loop. This one checks every tenth of a second:
let midiFile: NSURL = NSURL(fileURLWithPath: "/Users/steve/Desktop/Untitled.mid")
var midiPlayer: AVMIDIPlayer?
do {
    midiPlayer = try AVMIDIPlayer(contentsOf: midiFile as URL, soundBankURL: nil)
    midiPlayer?.prepareToPlay()
} catch {
    print("could not create MIDI player")
}
var stillGoing = true
while stillGoing {
    midiPlayer?.play {
        print("finished playing")
        stillGoing = false
    }
    usleep(100000)
}
You need to ensure that the midiPlayer object exists until it's done playing. If the above code is just in a single function, midiPlayer will be destroyed when the function returns because there are no remaining references to it. Typically you would declare midiPlayer as a property of an object, like a subclassed controller.
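To illustrate that point, a minimal sketch (not from the answer; the MIDIController name is hypothetical) that keeps the player alive as a property and relies on the completion handler instead of polling:

    import AVFoundation

    class MIDIController { // hypothetical owner object
        private var midiPlayer: AVMIDIPlayer? // keeps the player alive while playing

        func play(url: URL) throws {
            midiPlayer = try AVMIDIPlayer(contentsOf: url, soundBankURL: nil)
            midiPlayer?.prepareToPlay()
            midiPlayer?.play { [weak self] in
                print("finished playing")
                self?.midiPlayer = nil // release only after playback completes
            }
        }
    }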
Combining Brendan's and Steve's answers, the key is sleep or usleep, and keeping the play call outside the loop to avoid revving the CPU.

player?.play({ return })
while player!.isPlaying {
    sleep(1) // or usleep(10000)
}
The original stillGoing value works, but there is also an isPlaying property.
.play needs something between its parentheses, even an empty completion handler, to avoid hanging forever after completion.
Many thanks.
I have an application that is constantly receiving integer data from a bluetooth sensor and I made it so that if the integer is less than 50, then it should play the MP3.
The problem is that the sensor checks and sends integers very rapidly, which results in too many overlapping audio instances: the mp3 file ends up playing many times simultaneously. How can I make it finish the audio before starting it again?
This is the main code:
var player: AVAudioPlayer?
if let unwrappedString = Reading {
    let optionalInt = Int(unwrappedString)
    if let unwrappedInt = optionalInt {
        if unwrappedInt < 50 {
            DispatchQueue.global(qos: .background).async {
                self.playSound()
            }
        }
    }
}
Sound function:
func playSound() {
    guard let url = Bundle.main.url(forResource: "beep1", withExtension: "mp3") else {
        print("url not found")
        return
    }
    do {
        // this makes the app ready to take over the device audio
        try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
        try AVAudioSession.sharedInstance().setActive(true)
        // change fileTypeHint according to the type of your audio file (you can omit this)
        player = try AVAudioPlayer(contentsOf: url, fileTypeHint: AVFileTypeMPEGLayer3)
        // no need for prepareToPlay(); it happens automatically when calling play()
        player!.play()
    } catch let error as NSError {
        print("error: \(error.localizedDescription)")
    }
}
If the audio player is already playing (isPlaying), don't start playing!
https://developer.apple.com/reference/avfoundation/avaudioplayer/1390139-isplaying
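In Swift, with the player property from the question, that guard might look like this (a sketch; it assumes player stays alive between calls):

    if let unwrappedInt = optionalInt, unwrappedInt < 50,
       player?.isPlaying != true { // skip while the previous beep is still audible
        playSound()
    }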
I believe AVAudioPlayer has a delegate method to check if the audio has finished playing:
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag
{
    // set your custom boolean flag 'isPlayingAudio'
    // to false so you can play another audio again
}

...

- (void)monitorBluetoothNumber
{
    if (bluetoothNumber < 50 && !self.isPlayingAudio)
    {
        [self playMusic];
        self.isPlayingAudio = YES;
    }
}
You'll need to set up your audio player and set its delegate, obviously.
The code is Objective-C, but you can easily adapt it to Swift.
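A rough Swift adaptation of the sketch above (names like isPlayingAudio, monitorBluetoothNumber, and BeepController are carried over from that sketch or hypothetical, not a real API):

    import AVFoundation

    class BeepController: NSObject, AVAudioPlayerDelegate {
        var player: AVAudioPlayer?
        var isPlayingAudio = false

        func monitorBluetoothNumber(_ bluetoothNumber: Int) {
            guard bluetoothNumber < 50, !isPlayingAudio else { return }
            isPlayingAudio = true
            player?.delegate = self
            player?.play()
        }

        // AVAudioPlayerDelegate: clear the flag so another beep can play
        func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
            isPlayingAudio = false
        }
    }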
This code plays the sound when the button is tapped, but cancels the previous sound if the button is pressed again. I don't want that; I want the same sound to overlap when the button is pressed repeatedly. From what I've read, it might be because I'm reusing the same AVAudioPlayer. I'm new to Swift and want to know how to create a new AVAudioPlayer each time the method runs, so the sounds overlap.
func playSound(sound: String) {
    // Set the sound file name & extension
    let soundPath = NSURL(fileURLWithPath: NSBundle.mainBundle().pathForResource(sound, ofType: "mp3")!)
    do {
        // Preparation
        try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
    } catch _ {
    }
    do {
        try AVAudioSession.sharedInstance().setActive(true)
    } catch _ {
    }
    // Play the sound
    var error: NSError?
    do {
        audioPlayer = try AVAudioPlayer(contentsOfURL: soundPath)
    } catch let error1 as NSError {
        error = error1
    }
    audioPlayer.prepareToPlay()
    audioPlayer.play()
}
To play two sounds simultaneously with AVAudioPlayer you just have to use a different player for each sound.
In my example I've declared two players, playerBoom and playerCrash, in the ViewController, populated them each with a sound to play via a function, then triggered both at once:
import UIKit
import AVFoundation

class ViewController: UIViewController {

    var playerBoom: AVAudioPlayer?
    var playerCrash: AVAudioPlayer?

    override func viewDidLoad() {
        super.viewDidLoad()
        playerBoom = preparePlayerForSound(named: "sound1")
        playerCrash = preparePlayerForSound(named: "sound2")
        playerBoom?.prepareToPlay()
        playerCrash?.prepareToPlay()
        playerBoom?.play()
        playerCrash?.play()
    }

    func preparePlayerForSound(named sound: String) -> AVAudioPlayer? {
        do {
            if let soundPath = NSBundle.mainBundle().pathForResource(sound, ofType: "mp3") {
                try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
                try AVAudioSession.sharedInstance().setActive(true)
                return try AVAudioPlayer(contentsOfURL: NSURL(fileURLWithPath: soundPath))
            } else {
                print("The file '\(sound).mp3' is not available")
            }
        } catch let error as NSError {
            print(error)
        }
        return nil
    }
}
It works very well, but IMO it is not suitable if you have many sounds to play; it's a perfectly valid solution for just a few, though.
This example uses two different sounds, but of course the idea is exactly the same for two identical sounds.
I could not find a solution using AVAudioPlayer alone.
Instead, I found a solution with SwiftySound, a library built on top of AVAudioPlayer that allows the same sound to be played overlapping itself:
https://github.com/adamcichy/SwiftySound
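A hypothetical usage sketch based on SwiftySound's README (each call creates its own underlying player, so repeated plays overlap instead of cutting each other off):

    import SwiftySound

    // Fire-and-forget: call this on every tap and the sounds will overlap.
    Sound.play(file: "sound1", fileExtension: "mp3", numberOfLoops: 0)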