Radio Streaming AVPlayer latency (delay) is too high (Swift 3) - swift

In my app I play live streaming audio, and latency is critical. I'm using AVPlayer, but it takes 5-6 seconds to start, and I need at most 3 seconds of delay. How can I make it start playing faster and reduce that delay?
Would setting a small buffer do the job? How do I set that up with AVPlayer?
This is my RadioPlayer class:
import Foundation
import AVFoundation

class RadioPlayer {

    static let sharedInstance = RadioPlayer()

    private var player = AVPlayer()
    private var isPlaying = false
    private var language: LanguageDOM?

    func play() {
        player.play()
        isPlaying = true
    }

    func pause() {
        player.pause()
        isPlaying = false
    }

    func toggle() {
        if isPlaying {
            pause()
        } else {
            play()
        }
    }

    func currentTimePlaying() -> CMTime {
        return player.currentTime()
    }

    func changeLanguage(nlanguage: LanguageDOM) {
        pause()
        language = nlanguage
        player = AVPlayer(url: URL(string: nlanguage.url)!)
        play()
    }

    func currentlyPlaying() -> Bool {
        return isPlaying
    }

    func currentLanguage() -> LanguageDOM {
        return language!
    }

    func currentLanguageId() -> Int {
        return language?.id ?? -1
    }
}

I'm assuming your network is fast enough to load the necessary buffer for a 3-second delay.
What you want to look at is AVPlayer's preroll(atRate:completionHandler:). Used properly, it allows minimal latency between when you press play and when you hear the sound. It requires, however, that part of the stream already be downloaded for processing.
As for AVAudioSession, it is not what you're looking for; AVPlayer is the correct class here.
If AVPlayer isn't fast enough, I suggest looking into BASS, a low-level C audio library built on top of the Audio Units framework, which allows precise and fast control over your stream.
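For reference, a minimal sketch of the buffer-related knobs (assuming iOS 10+; the stream URL is a placeholder): AVPlayerItem's preferredForwardBufferDuration hints at a smaller buffer, automaticallyWaitsToMinimizeStalling trades stall protection for startup speed, and preroll(atRate:) pre-buffers before you call play().
import AVFoundation

// Sketch only: the URL is a placeholder for your live stream.
let streamURL = URL(string: "https://example.com/live/stream.m3u8")!
let item = AVPlayerItem(url: streamURL)

// Hint that a ~2-second forward buffer is enough to begin playback.
// The system treats this as a hint, not a guarantee.
item.preferredForwardBufferDuration = 2

let player = AVPlayer(playerItem: item)

// Start as soon as possible instead of buffering against stalls.
// preroll(atRate:) below requires this to be false.
player.automaticallyWaitsToMinimizeStalling = false

// Pre-buffer, then start immediately. In real code, wait until the
// item's status is .readyToPlay before calling preroll.
player.preroll(atRate: 1.0) { finished in
    if finished {
        player.play()
    }
}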

Related

SNAudioStreamAnalyzer not stopping sound classification request

I'm a student studying iOS development currently working on a simple AI project that utilizes SNAudioStreamAnalyzer to classify an incoming audio stream from the device's microphone. I can start the stream and analyze audio no problem, but I've noticed I can't seem to get my app to stop analyzing and close the audio input stream when I'm done. At the beginning, I initialize the audio engine and create the classification request like so:
private func startAudioEngine() {
    do {
        // start the stream of audio data
        try audioEngine.start()
        let snoreClassifier = try? SnoringClassifier2_0().model
        let classifySoundRequest = try audioAnalyzer.makeRequest(snoreClassifier)
        try streamAnalyzer.add(classifySoundRequest,
                               withObserver: self.audioAnalyzer)
    } catch {
        print("Unable to start AVAudioEngine: \(error.localizedDescription)")
    }
}
After I'm done classifying my audio stream, I attempt to stop the audio engine and close the stream like so:
private func terminateNight() {
    streamAnalyzer.removeAllRequests()
    audioEngine.stop()
    stopAndSaveNight()
    do {
        let session = AVAudioSession.sharedInstance()
        try session.setActive(false)
    } catch {
        print("unable to terminate audio session")
    }
    nightSummary = true
}
However, after I call the terminateNight() function my app will continue using the microphone and classifying the incoming audio. Here's my SNResultsObserving implementation:
class AudioAnalyzer: NSObject, SNResultsObserving {
    var prediction: String?
    var confidence: Double?
    let snoringEventManager: SnoringEventManager

    internal init(prediction: String? = nil, confidence: Double? = nil, snoringEventManager: SnoringEventManager) {
        self.prediction = prediction
        self.confidence = confidence
        self.snoringEventManager = snoringEventManager
    }

    func makeRequest(_ customModel: MLModel? = nil) throws -> SNClassifySoundRequest {
        if let model = customModel {
            let customRequest = try SNClassifySoundRequest(mlModel: model)
            return customRequest
        } else {
            throw AudioAnalysisErrors.ModelInterpretationError
        }
    }

    func request(_ request: SNRequest, didProduce: SNResult) {
        guard let classificationResult = didProduce as? SNClassificationResult else { return }
        let topClassification = classificationResult.classifications.first
        let timeRange = classificationResult.timeRange
        self.prediction = topClassification?.identifier
        self.confidence = topClassification?.confidence
        if self.prediction! == "snoring" {
            self.snoringEventManager.snoringDetected()
        } else {
            self.snoringEventManager.nonSnoringDetected()
        }
    }

    func request(_ request: SNRequest, didFailWithError: Error) {
        print("ended with error \(didFailWithError)")
    }

    func requestDidComplete(_ request: SNRequest) {
        print("request finished")
    }
}
It was my understanding that upon calling streamAnalyzer.removeAllRequests() and audioEngine.stop() the app would stop streaming from the microphone and call the requestDidComplete function, but this isn't the behavior I'm getting. Any help is appreciated!
From the OP's edit:
So I've realized it was a SwiftUI problem. I was calling the startAudioEngine() function in the initializer of the view it was declared on. I thought this would be fine, but since this view was embedded in a parent view, SwiftUI was re-initializing my view whenever it updated the parent, and as such calling startAudioEngine() again. The solution was to call this function in an onAppear block, so that it activates the audio engine only when the view appears, and not when SwiftUI initializes it.
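A minimal sketch of that fix, with the engine-owning object reduced to an injected closure (the names here are illustrative, not from the original post):
import SwiftUI

struct RecordingView: View {
    // Injected from whatever object owns the audio engine.
    let startAudioEngine: () -> Void

    var body: some View {
        Text("Recording…")
            // Runs when the view appears on screen, not every time
            // SwiftUI re-initializes the struct during a parent update.
            .onAppear {
                startAudioEngine()
            }
    }
}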
I don't believe you should expect to receive requestDidComplete just because you removed a request; you'd expect to receive it when you call completeAnalysis().
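A hedged sketch of a fuller teardown, assuming the analyzer is fed from a tap on the engine's input node (as in Apple's Sound Analysis samples; the bus number is an assumption):
private func terminateNight() {
    // Remove the tap first so no new buffers reach the analyzer.
    audioEngine.inputNode.removeTap(onBus: 0)
    audioEngine.stop()
    // Tell the analyzer no more buffers are coming; this is what
    // causes requestDidComplete to be delivered to observers.
    streamAnalyzer.completeAnalysis()
    streamAnalyzer.removeAllRequests()
    stopAndSaveNight()
    do {
        try AVAudioSession.sharedInstance().setActive(false)
    } catch {
        print("unable to terminate audio session")
    }
    nightSummary = true
}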

How to set NowPlaying properties with a AVQueuePlayer in Swift?

I have an AVQueuePlayer that gets songs from a Firebase Storage via their URL and plays them in sequence.
static func playQueue() {
    for song in songs {
        guard let url = song.url else { return }
        lofiSongs.append(AVPlayerItem(url: url))
    }
    if queuePlayer == nil {
        queuePlayer = AVQueuePlayer(items: lofiSongs)
    } else {
        queuePlayer?.removeAllItems()
        lofiSongs.forEach { queuePlayer?.insert($0, after: nil) }
    }
    queuePlayer?.seek(to: .zero) // In case we added items back in
    queuePlayer?.play()
}
And this works great.
I can also make the lock screen controls appear and use the play pause button like this:
private static func setRemoteControlActions() {
    let commandCenter = MPRemoteCommandCenter.shared()
    // Add handler for Play Command
    commandCenter.playCommand.addTarget { [self] event in
        queuePlayer?.play()
        return .success
    }
    // Add handler for Pause Command
    commandCenter.pauseCommand.addTarget { [self] event in
        if queuePlayer?.rate == 1.0 {
            queuePlayer?.pause()
            return .success
        }
        return .commandFailed
    }
}
The problem comes with setting the metadata of the player (name, image, etc).
I know it can be done once by setting MPMediaItemPropertyTitle and MPMediaItemArtwork, but how would I change it when the next track loads?
I'm not sure if my approach works for AVQueuePlayer, but for playing live streams with AVPlayer you can "listen" for metadata as it arrives.
extension ViewController: AVPlayerItemMetadataOutputPushDelegate {
    func metadataOutput(_ output: AVPlayerItemMetadataOutput, didOutputTimedMetadataGroups groups: [AVTimedMetadataGroup], from track: AVPlayerItemTrack?) {
        // look for metadata in groups
    }
}
I added the AVPlayerItemMetadataOutputPushDelegate via an extension to my ViewController.
I also found this post.
I hope this gives you a lead to a solution. As said I'm not sure how this works with AVQueuePlayer.
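For AVQueuePlayer specifically, one common pattern (an assumption on my part, not from the post above) is to observe the player's currentItem and refresh MPNowPlayingInfoCenter each time it changes. This sketch ignores the static context of the question's code, and the Song model's title/artist/artworkImage/playerItem properties are hypothetical:
import MediaPlayer

var itemObservation: NSKeyValueObservation?

func observeCurrentItem() {
    itemObservation = queuePlayer?.observe(\.currentItem, options: [.new]) { player, _ in
        guard let item = player.currentItem else { return }
        // Hypothetical: map the AVPlayerItem back to your own Song model.
        guard let song = songs.first(where: { $0.playerItem == item }) else { return }
        var info: [String: Any] = [
            MPMediaItemPropertyTitle: song.title,
            MPMediaItemPropertyArtist: song.artist,
        ]
        if let image = song.artworkImage {
            info[MPMediaItemPropertyArtwork] =
                MPMediaItemArtwork(boundsSize: image.size) { _ in image }
        }
        MPNowPlayingInfoCenter.default().nowPlayingInfo = info
    }
}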

AVAudioPlayerNode skips a lot of sounds when stopped and started again

I'm scheduling a buffer periodically using scheduleBuffer function. The completion handler schedules the next sound when current one is done, like so:
func scheduleSounds() {
if !isPlaying { return }
while beatsScheduled < beatsToScheduleAhead {
// Returns AVAudioTime of the next sound, based on sample time
// first sound is always sample time 0, then increments.
let nextTime = getNextTime()
player.scheduleBuffer(
soundBuffer,
at: nextTime,
options: AVAudioPlayerNodeBufferOptions(rawValue: 0),
completionHandler: {
self.queue!.sync {
print("Finished playing a sound!")
scheduleSounds()
}
}
)
beatsScheduled += 1
if !playerStarted {
player.play(at: nil)
playerStarted = true
}
}
}
When I stop and then start the player/engine, there is a delay before the sound starts playing, and after logging which sounds play when, I noticed that there are a couple of sounds the engine does not play. My suspicion is that the player node thinks those sample times have already passed relative to the player node's time. I think I might not be resetting or stopping something properly; here's how I start and stop the sounds:
func start() {
    do {
        try engine.start()
        isPlaying = true
        queue!.sync {
            self.scheduleSounds()
        }
    } catch {
        print("\(error)")
    }
}

func stop() {
    isPlaying = false
    player.stop()
    engine.stop()
    playerStarted = false
}
So the problem is: how do I make sure AVAudioPlayerNode doesn't skip any scheduled sounds after playback is stopped and started again?
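Following that suspicion, a hedged sketch of one possible fix: player.stop() resets the node's sample time to 0, so any scheduling bookkeeping has to be rewound to match. This assumes getNextTime() derives its AVAudioTime from a running counter such as nextBeatSampleTime (a hypothetical name):
func stop() {
    isPlaying = false
    player.stop()
    engine.stop()
    playerStarted = false

    // player.stop() restarts the node's sample time at 0, so rewind
    // the scheduling state too; otherwise the next run schedules
    // buffers at times the node considers already past.
    beatsScheduled = 0
    nextBeatSampleTime = 0 // hypothetical counter read by getNextTime()
}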

How to keep AVMIDIPlayer playing?

I'm trying to use Apple's AVMIDIPlayer object for playing a MIDI file. It seems easy enough in Swift, using the following code:
let midiFile: NSURL = NSURL(fileURLWithPath: "/path/to/midifile.mid")
var midiPlayer: AVMIDIPlayer?
do {
    try midiPlayer = AVMIDIPlayer(contentsOf: midiFile as URL, soundBankURL: nil)
    midiPlayer?.prepareToPlay()
} catch {
    print("could not create MIDI player")
}
midiPlayer?.play {
    print("finished playing")
}
And it plays for about 0.05 seconds. I presume I need to frame it in some kind of loop. I've tried a simple solution:
var stillGoing = true
while stillGoing {
    midiPlayer?.play {
        stillGoing = false
    }
}
which works, but ramps up the CPU massively. Is there a better way?
Further to the first comment, I've tried making a class, and while it doesn't flag any errors, it doesn't work either.
class midiPlayer {
    var player: AVMIDIPlayer?

    func play(file: String) {
        let myURL = URL(string: file)
        do {
            try self.player = AVMIDIPlayer.init(contentsOf: myURL!, soundBankURL: nil)
            self.player?.prepareToPlay()
        } catch {
            print("could not create MIDI player")
        }
        self.player?.play()
    }

    func stop() {
        self.player?.stop()
    }
}

// main
let myPlayer = midiPlayer()
let midiFile = "/path/to/midifile.mid"
myPlayer.play(file: midiFile)
You were close with your loop. You just need to give the CPU time to go off and do other things instead of constantly checking to see if midiPlayer is finished yet. Add a call to usleep() in your loop. This one checks every tenth of a second:
let midiFile: NSURL = NSURL(fileURLWithPath: "/Users/steve/Desktop/Untitled.mid")
var midiPlayer: AVMIDIPlayer?
do {
    try midiPlayer = AVMIDIPlayer(contentsOf: midiFile as URL, soundBankURL: nil)
    midiPlayer?.prepareToPlay()
} catch {
    print("could not create MIDI player")
}
var stillGoing = true
while stillGoing {
    midiPlayer?.play {
        print("finished playing")
        stillGoing = false
    }
    usleep(100000)
}
You need to ensure that the midiPlayer object exists until it's done playing. If the above code is just in a single function, midiPlayer will be destroyed when the function returns because there are no remaining references to it. Typically you would declare midiPlayer as a property of an object, like a subclassed controller.
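A hedged sketch of that property-based approach, using the completion handler instead of polling (the class name and path are illustrative):
import AVFoundation

class MIDIController {
    // Keeping the player in a property keeps it alive while it plays.
    var midiPlayer: AVMIDIPlayer?

    func play(path: String) {
        let url = URL(fileURLWithPath: path)
        do {
            midiPlayer = try AVMIDIPlayer(contentsOf: url, soundBankURL: nil)
            midiPlayer?.prepareToPlay()
            midiPlayer?.play {
                print("finished playing")
            }
        } catch {
            print("could not create MIDI player: \(error)")
        }
    }
}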
Combining Brendan's and Steve's answers, the key is sleep or usleep, and keeping the play call outside the loop to avoid revving the CPU.
player?.play({ return })
while player!.isPlaying {
    sleep(1) // or usleep(10000)
}
The original stillGoing value works, but there is also an isPlaying property.
.play needs something between its brackets to avoid hanging forever after completion.
Many thanks.

Play NSSound more times, simultaneously

I need to insert code in a function so that, when called, it plays a sound.
The problem is that the function is called faster than the sound's duration, so the sound plays fewer times than the function is called.
func keyDownSound() {
    NSSound(named: "tennis")?.play()
}
The problem is that NSSound starts playing only if it isn't already playing. Any ideas for a fix?
The source of the problem is that init?(named name: String) returns the same NSSound instance for the same name. You can copy the NSSound instance, and then several sounds will play simultaneously:
func keyDownSound() {
    (NSSound(named: "tennis")?.copy() as? NSSound)?.play()
}
An alternative way is to start the sound again after it finishes playing. For that you need to implement the sound(_:didFinishPlaying:) delegate method. For example:
var sound: NSSound?
var playCount: UInt = 0

func playSoundIfNeeded() {
    if playCount > 0 {
        if sound == nil {
            sound = NSSound(named: "Blow")
            sound?.delegate = self // requires conforming to NSSoundDelegate
        }
        if sound?.isPlaying == false {
            playCount -= 1
            sound?.play()
        }
    }
}

func keyDownSound() {
    playCount += 1
    playSoundIfNeeded()
}

func sound(_ sound: NSSound, didFinishPlaying aBool: Bool) {
    playSoundIfNeeded()
}