Multiple AVPlayer instances work in the simulator but not on Apple TV - Swift

I'm currently trying to play multiple videos in parallel using AVPlayer and AVPlayerLayer on tvOS. In the simulator this works correctly; on the device only some players play, and for the rest the player layer simply stays blank (not even black, just blank). I've heard rumors that the internal implementation only supports 24 simultaneous instances, so I already limited the number to 24. However, on the physical device only around 15 players can play in parallel. Surprisingly that number tends to differ: sometimes it's just 13, sometimes even 16.
I'm creating the players using the following code (which is executed in a closure, hence the weak self capture and the strongSelf unwrapping):
guard let strongSelf = self else { return }
strongSelf.player = AVPlayer(URL: localURL)
strongSelf.player?.volume = 0.0
strongSelf.player?.actionAtItemEnd = .Pause
NSNotificationCenter.defaultCenter().addObserver(strongSelf, selector: "playerPlayedToEnd:", name: AVPlayerItemDidPlayToEndTimeNotification, object: strongSelf.player?.currentItem)
strongSelf.playerLayer = AVPlayerLayer(player: strongSelf.player)
strongSelf.playerLayer?.frame = strongSelf.contentView.bounds
strongSelf.playerLayer?.videoGravity = AVLayerVideoGravityResizeAspect
strongSelf.contentView.layer.addSublayer(strongSelf.playerLayer!)
strongSelf.player?.play()
strongSelf.activityIndicatorView.stopAnimating()
Would any of you have an idea what could cause this problem? I'm also open to any workarounds if any of you could suggest one :)

Related

Swift - AVAudioEngine fails to default to system output

I'm trying to use AVAudioEngine to play back audio. Here is my code, stripped down to the minimum:
import AVKit
let audioEngine = AVAudioEngine()
let audioPlayer = AVAudioPlayerNode()
let mainMixer = audioEngine.mainMixerNode
/*
let outputNode: AVAudioOutputNode = audioEngine.outputNode
let outputUnit: AudioUnit = outputNode.audioUnit!
var outputDeviceID: AudioDeviceID = 240 // Magic Numbers: 246 Line out, Built-in Speaker: 240 - these will be different on other computers
AudioUnitSetProperty(outputUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &outputDeviceID, UInt32(MemoryLayout<AudioDeviceID>.size))
*/
audioEngine.attach(audioPlayer)
audioEngine.connect(audioPlayer, to: audioEngine.mainMixerNode, format: nil)
try! audioEngine.start()
let kPlaybackFileLocation = "/System/Library/Sounds/Submarine.aiff"
let audioFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, kPlaybackFileLocation as CFString, .cfURLPOSIXPathStyle, false)
let audiofile = try! AVAudioFile(forReading: audioFileURL! as URL)
audioPlayer.scheduleFile(audiofile, at: nil)
audioPlayer.play()
The code runs in an Xcode 11 playground and should output the "Submarine" alert found in the system sounds, or you may point it at any other sound file.
On my mid-2012 MacBook Pro (non-Retina) running Mojave, the code runs as expected.
However, on my 2018 Mac mini, also running Mojave, the playground runs without errors but no sound is heard. It only works after I explicitly set the outputNode to a specific output device (as listed in System Preferences) by adding the commented-out code above.
Anyway, this would not be a viable solution, as the output would then be bound to that specific port. On my MacBook, where the code works without the commented part, the output is rerouted to whatever I select in the system panel without even having to restart Xcode.
I retrieved the outputDeviceID 240 using AudioObjectGetPropertyData; the code needed is too verbose to show here, but it is noteworthy that besides the physical outputs "BuiltInHeadphoneOutputDevice" and "BuiltInSpeakerDevice" there is also a "CADefaultDeviceAggregate-2576-0".
Also noteworthy is that this device list differs from the list I get using AudioComponentFindNext, which only gives me entries like "AudioDeviceOutput" and "DefaultOutputUnit".
I would expect that AVAudioEngine would use the AUHAL to route to the default output unit.
So why does the same code, in the same Xcode version and on the same OS, run fine on one machine and not the other?
What do I have to change in the code to guarantee that on any machine the audio goes to the output selected in System Preferences?
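For illustration, one direction would be to ask Core Audio which device is currently selected in System Preferences and hand that to the engine's output unit, instead of hard-coding the ID 240. This is only a sketch combining AudioObjectGetPropertyData and AudioUnitSetProperty, not a confirmed fix for the Mac mini behaviour:
import AVFoundation
import AudioToolbox
import CoreAudio
let engine = AVAudioEngine()
let outputUnit: AudioUnit = engine.outputNode.audioUnit!
// Ask Core Audio for the current default output device (the one selected in System Preferences).
var defaultDeviceID = AudioDeviceID(0)
var propertySize = UInt32(MemoryLayout<AudioDeviceID>.size)
var address = AudioObjectPropertyAddress(
    mSelector: kAudioHardwarePropertyDefaultOutputDevice,
    mScope: kAudioObjectPropertyScopeGlobal,
    mElement: kAudioObjectPropertyElementMaster)
let status = AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                        &address, 0, nil,
                                        &propertySize, &defaultDeviceID)
if status == noErr {
    // Point the output AUHAL at that device rather than at a hard-coded magic number.
    AudioUnitSetProperty(outputUnit, kAudioOutputUnitProperty_CurrentDevice,
                         kAudioUnitScope_Global, 0,
                         &defaultDeviceID, UInt32(MemoryLayout<AudioDeviceID>.size))
}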
Some further considerations:
The Mac mini has the T2 security chip, which places more restrictions on what Xcode may access
I have no paid developer account, so I cannot grant entitlements that require certificates
I have some virtual outputs (BlackHole, Soundflower) installed on the mini, but the system output is set to either the built-in speakers or headphones/line out

increase volume of audio file recorded with swift

I am developing an application with Swift. I would like to be able to increase the volume of a recorded file. Is there a way to do it directly inside the application?
I found AudioKit here and this question, but they didn't help me much.
Thanks!
With AudioKit
Option A:
Do you just want to import a file, then play it louder than you imported it? You can use an AKBooster for that.
import AudioKit
do {
    let file = try AKAudioFile(readFileName: "yourfile.wav")
    let player = try AKAudioPlayer(file: file)
    // Define your gain below. >1 means amplifying it to be louder
    let booster = AKBooster(player, gain: 1.3)
    AudioKit.output = booster
    try AudioKit.start()
    // And then to play your file:
    player.play()
} catch {
    // Log your error
}
Just set the gain value of booster to make it louder.
Option B: You could also try normalizing the audio file, which essentially applies a constant multiplier across the recording (relative to the highest signal level in the recording) so that it reaches a new target maximum that you define. Here, I set it to -4 dB.
let url = Bundle.main.url(forResource: "sound", withExtension: "wav")
if let url = url, let file = try? AKAudioFile(forReading: url) {
    // Set the new max level (in dB) for the gain here.
    if let normalizedFile = try? file.normalized(newMaxLevel: -4) {
        print(normalizedFile.maxLevel)
        // Play your normalizedFile...
    }
}
This method increases the amplitude of everything by the same factor until the peak reaches the new maximum level in dB - so it won't affect the dynamics (SNR) of your file, and it only increases by the amount needed to reach that new maximum (so you can safely apply it to ALL of your files to make them uniform).
With AVAudioPlayer
Option A: If you want to adjust/control volume, AVAudioPlayer has a volume member but the docs say:
The playback volume for the audio player, ranging from 0.0 through 1.0 on a linear scale.
Where 1.0 is the volume of the original file and the default. So you can only make it quieter with that. Here's the code for it, in case you're interested:
let soundFileURL = Bundle.main.url(forResource: "sound", withExtension: "mp3")!
let audioPlayer = try? AVAudioPlayer(contentsOf: soundFileURL, fileTypeHint: AVFileType.mp3.rawValue)
// Only play once
audioPlayer?.numberOfLoops = 0
// Set the volume of playback here.
audioPlayer?.volume = 1.0
audioPlayer?.play()
Option B: if your sound file is too quiet, it might be coming out of the receiver (the earpiece) of the phone. In that case, you could try overriding the output port to use the speaker instead:
do {
    try AVAudioSession.sharedInstance().overrideOutputAudioPort(AVAudioSession.PortOverride.speaker)
} catch let error {
    print("Override failed: \(error)")
}
You can also set that permanently with this code (but I can't guarantee your app will get into the App Store):
try? AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord, with: AVAudioSessionCategoryOptions.defaultToSpeaker)
Option C: If Option B doesn't do it for you, you might be out of luck on 'how to make AVAudioPlayer play louder.' You're best off editing the source file yourself with some external software - I can recommend Audacity as a good option for this.
Option D: One last option I've only heard of: you could also look into MPVolumeView, which provides UI to control the system output and volume. I'm not too familiar with it, though - it may be approaching legacy at this point.
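For completeness, here is a rough sketch of what Option D could look like if you just want to drop the system volume slider into an existing view controller (the class name and frame values are illustrative):
import MediaPlayer
import UIKit
class VolumeViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // MPVolumeView exposes the system volume slider (and a route button);
        // it only renders properly on a real device, not in the simulator.
        let volumeView = MPVolumeView(frame: CGRect(x: 20, y: 100, width: view.bounds.width - 40, height: 40))
        view.addSubview(volumeView)
    }
}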
I want to mention a few things here because I was working on a similar problem.
Contrary to what's written in the Apple docs for the AVAudioPlayer.volume property (https://developer.apple.com/documentation/avfoundation/avaudioplayer/1389330-volume), the volume can go higher than 1.0... and it actually works. I bumped the volume up to 100.0 in my application and the recorded audio is way louder and easier to hear.
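For concreteness, a small hedged sketch (the player parameter is assumed to be an already configured AVAudioPlayer, and values above 1.0 are outside the documented range, so this relies on the behaviour reported above):
import AVFoundation
// Values above 1.0 are outside the documented 0.0...1.0 range but reportedly honored.
func boostPlayback(player: AVAudioPlayer) {
    player.volume = 100.0
    // On iOS 10+ the change can also be ramped instead of jumping:
    player.setVolume(100.0, fadeDuration: 0.3)
}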
Another thing that helped me was setting the mode of AVAudioSession like so:
do {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, options: [.defaultToSpeaker, .allowBluetooth])
    try session.setMode(.videoRecording)
    try session.setActive(true)
} catch {
    debugPrint("Problem with AVAudioSession")
}
session.setMode(.videoRecording) is the key line here. It sends the audio through the phone's loud speaker rather than just the call speaker next to the front-facing camera. I was having a problem with this and posted a question that helped me here:
AVAudioPlayer NOT playing through speaker after recording with AVAudioRecorder
There are several standard AudioKit DSP components that can increase the volume.
For example, you can use a simple node like AKBooster: http://audiokit.io/docs/Classes/AKBooster.html
OR
Use the following code:
AKSettings.defaultToSpeaker = true
See more details in these posts:
https://github.com/audiokit/AudioKit/issues/599
https://github.com/AudioKit/AudioKit/issues/586

Swift/SpriteKit - Any way to pool/cache SKReferenceNodes?

Is there a way to pool/cache SKReferenceNodes in SpriteKit using Swift?
I am creating a game using Xcode's visual level editor. I create different .sks files with it and then load them in code when I need to, because I use them to create random levels or obstacles and don't need all of them added to the scene at once.
At the moment I am doing it like this:
I created a convenience init for SKReferenceNode to init them with URLs, because there is some sort of bug when creating SKReferenceNodes by file name directly (https://forums.developer.apple.com/thread/44090).
Using such an extension makes the code a bit cleaner.
extension SKReferenceNode {
    convenience init(roomName: String) {
        let path: String
        if let validPath = NSBundle.mainBundle().pathForResource(roomName, ofType: "sks") {
            path = validPath
        } else {
            path = NSBundle.mainBundle().pathForResource("RoomTemplate", ofType: "sks")! // use empty roomTemplate as backup so force unwrap
        }
        self.init(URL: NSURL(fileURLWithPath: path))
    }
}
and then in my scenes I can create and add them like so (about every 10 seconds):
let room = SKReferenceNode(roomName: "Room1") // will cause lag without GCD
room.position = ...
addChild(room)
This works OK, but I was getting some lag/stutter when creating these, so I am using GCD to reduce it to basically no stutter. That works well, but I am wondering if I can preload all the .sks files first.
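For reference, a rough sketch of that GCD approach (Swift 2 syntax; the room name, position and surrounding scene method are placeholders):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) { [weak self] in
    // Do the expensive .sks loading off the main thread...
    let room = SKReferenceNode(roomName: "Room1")
    dispatch_async(dispatch_get_main_queue()) {
        // ...and only touch the scene graph back on the main thread.
        guard let strongSelf = self else { return }
        room.position = CGPoint(x: 0, y: 0)
        strongSelf.addChild(room)
    }
}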
I tried using arrays to do this, but I am getting crashes and it just doesn't seem to work (I also get a message about adding a node that already has a parent).
I was trying to preload them like so at app launch:
var allRooms = [SKReferenceNode]()
for i in 0...3 {
    let room = SKReferenceNode(roomName: "Room\(i)")
    allRooms.append(room)
}
and then use the array when I need to. This doesn't work, however, and I am getting a crash when trying to use code like this:
let room = allRooms[0]
room.position = ...
addChild(room) // causes error/crash -> node already has parent
Has anyone done something similar? Is there another way I can pool/cache those reference nodes? Am I missing something here?
Regarding preloading SKReferenceNodes, I think the approach to follow is to load your objects, figure out what kind of content they contain, and use the official preloading methods available in SpriteKit:
SKTextureAtlas.preloadTextureAtlases(_:withCompletionHandler:)
SKTexture.preloadTextures(_:withCompletionHandler:)
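For instance, preloading a couple of atlases up front could look roughly like this (a sketch; the atlas names are assumptions, and the completion handler runs on a background queue):
import SpriteKit
let atlases = [SKTextureAtlas(named: "Rooms"), SKTextureAtlas(named: "Characters")]
SKTextureAtlas.preloadTextureAtlases(atlases) {
    // Hop back to the main queue before presenting the scene or adding nodes.
    dispatch_async(dispatch_get_main_queue()) {
        // Safe to start creating SKReferenceNodes / spawning rooms here.
    }
}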
To avoid this kind of error you should create separate instances of the nodes.
Try doing this:
let room = allRooms[0]
room.position = ...
room.removeFromParent()
addChild(room)
I just figured it out, I was just being an idiot.
Using arrays like I wanted to is fine; the problem that caused the crash was the following.
When the game scene first loads I add 3 rooms, but when testing with the arrays I kept adding the same room:
let room = allRooms[0]
instead of using a randomiser.
This obviously meant I was adding the same instance multiple times, hence the crash.
With a randomiser that doesn't repeat the same room, this does not happen.
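A rough sketch of such a randomiser, assuming allRooms is the array from above (the lastRoomIndex bookkeeping and function name are illustrative):
var lastRoomIndex = -1
func nextRoom() -> SKReferenceNode {
    var index = Int(arc4random_uniform(UInt32(allRooms.count)))
    // Re-roll if we hit the previously used room (and there is more than one room in the pool).
    while index == lastRoomIndex && allRooms.count > 1 {
        index = Int(arc4random_uniform(UInt32(allRooms.count)))
    }
    lastRoomIndex = index
    return allRooms[index]
}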
Furthermore, I make sure to remove a room from the scene when I no longer need it. I have a node in the rooms (roomRemover) which fires a method to remove/create a new room once it comes into contact with the player.
This is the code in didBeginContact:
guard let roomToRemove = secondBody?.node?.parent?.parent?.parent?.parent else { return }
// secondBody is the roomRemover node which is a child of the SKReferenceNode.
// If I want to remove the whole SKReferenceNode I need to add .parent 4 times.
// Not exactly sure why 4 times but it works
for room in allRooms {
    if room == roomToRemove {
        room.removeFromParent()
    }
}
loadRandomRoom()
Hope this helps someone trying to do the same.

CUICatalog: Invalid Request: requesting subtype without specifying idiom (Where is it coming from and how to fix it?)

When I run my SpriteKit game, I receive this error multiple times in the console. As far as I can tell (though I'm not completely sure), the game itself is unaffected, but the error might have some other implications, along with crowding the debug console.
I did some research into the error and found a few possible solutions, including setting ignoresSiblingOrder to false and specifying textures as SKTextureAtlas(named: "atlasName").textureNamed("textureName"), but none of these worked.
I think the error is coming from the use of textures and texture atlases in the asset catalogue, though I'm not completely sure. Here is how I am implementing some of these textures/images:
let Texture = SKTextureAtlas(named: "character").textureNamed("character1")
character = SKSpriteNode(texture: Texture)
also:
let Atlas = SKTextureAtlas(named: "character")
var Frames = [SKTexture]()
let numImages = Atlas.textureNames.count
for var i = 1; i <= numImages; i++ {
    let textureName = "character\(i)"
    Frames.append(Atlas.textureNamed(textureName))
}
for var i = numImages; i >= 1; i-- {
    let textureName = "character\(i)"
    Frames.append(Atlas.textureNamed(textureName))
}
let firstFrame = Frames[0]
character = SKSpriteNode(texture: firstFrame)
The above code is just used to create an array from which to animate the character, and the animation runs completely fine.
For all my other sprite nodes, I initialize with SKSpriteNode(imageNamed: "imageName"), with the image name from the asset catalogue but not within a texture atlas. All the images have @1x, @2x, and @3x versions.
I'm not sure if there are any other possible sources for the error message, or if the examples above are the sources of the error.
Is this just a bug with SpriteKit, or a legitimate error in my code or assets?
Thanks!
I have this error too. In my opinion, it's an Xcode 7.2 bug and not your fault. I updated Xcode in the middle of making an app and this message started to show up constantly in the console. According to this and that link, you have nothing to fear here.
Product > Clean
seems to do the trick.
The error seems to start popping up when you delete an item from the asset catalogue but its reference stays buried in code somewhere. (In my case it was the default spaceship asset, which I had deleted.)

Trouble with iOS 8 CoreData iCloud sync consistency

I have a universal app written in Swift using Xcode 6.3.2. It is currently very simple: when I push a button, a random number is generated and then stored using Core Data. This works perfectly until I implement iCloud. With iCloud enabled, a newly stored random number doesn't always propagate to the other devices. It does most of the time, but not always.
I am testing using an iPad Air, iPhone 6 Plus and iPhone 4s
I am using the following three notification observers:
NSNotificationCenter.defaultCenter().addObserver(self, selector: "persistentStoreDidChange", name: NSPersistentStoreCoordinatorStoresDidChangeNotification, object: nil)
NSNotificationCenter.defaultCenter().addObserver(self, selector: "persistentStoreWillChange:", name: NSPersistentStoreCoordinatorStoresWillChangeNotification, object: managedContext.persistentStoreCoordinator)
NSNotificationCenter.defaultCenter().addObserver(self, selector: "receiveiCloudChanges:", name: NSPersistentStoreDidImportUbiquitousContentChangesNotification, object: managedContext.persistentStoreCoordinator)
and here is the function for the third one:
func receiveiCloudChanges(notification: NSNotification)
{
    println("iCloud changes have occured")
    dispatch_async(dispatch_get_main_queue())
    {
        self.activityIndicator.startAnimating()
        self.updateLabel.text = "iCloud changes have occured"
        self.managedContext.performBlockAndWait
        { () -> Void in
            self.managedContext.mergeChangesFromContextDidSaveNotification(notification)
        }
        self.reloadTextViewAndTableView()
        self.activityIndicator.stopAnimating()
    }
}
I am not attempting to update the UI until the managedContext has finished the merge, and I am performing everything on the main thread. I am really at a loss as to why the changes made on one device only show up on the second or third device about 90-95% of the time.
As part of my trial and error, when I delete the app from my test devices and reinstall, there is sometimes a message that an iCloud operation is pending, but no matter how long I wait, once the devices are out of sync they stay that way. Even when they are out of sync, if I add another number or two those will still propagate to the other devices, but then I will invariably lose more data. It seems to work about 90% of the time.
I use the following to update the UI:
func reloadTextViewAndTableView()
{
    let allPeopleFetch = NSFetchRequest(entityName: "Person")
    var error: NSError?
    let result = managedContext.executeFetchRequest(allPeopleFetch, error: &error) as! [Person]?
    //Now reload the textView by grabbing every person in the DB and appending it to the textView
    textView.text = ""
    allPeople = managedContext.executeFetchRequest(allPeopleFetch, error: &error) as! [Person]
    for dude in allPeople
    {
        textView.text = textView.text.stringByAppendingString(dude.name)
    }
    tableView.reloadData()
    println("allPeople.count = \(allPeople.count)")
}
I am really at a standstill here. I am just not sure why it "usually" works...
So I am still not sure how or why Core Data is sometimes getting out of sync as described above. I have found, though, that it usually occurs when I enter one new number after another very rapidly.
As a workaround I have added a button to the UI that allows the user to force a resync with iCloud by rebuilding the NSPersistentStore using the following option:
NSPersistentStoreRebuildFromUbiquitousContentOption: true
I would much rather the store stayed in sync with all the other devices all the time, but at least this way the user will never lose data. If they notice that a record they entered on another device is missing, all they have to do is hit the resync button.
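For context, a sketch of where that option would go when (re)adding the persistent store (Swift 1.2-era Core Data; the coordinator, store URL and ubiquitous content name are placeholders):
import CoreData
func rebuildStoreFromiCloud(coordinator: NSPersistentStoreCoordinator, storeURL: NSURL) {
    let options: [NSObject: AnyObject] = [
        NSPersistentStoreUbiquitousContentNameKey: "RandomNumbersStore",
        NSPersistentStoreRebuildFromUbiquitousContentOption: true
    ]
    var error: NSError?
    let store = coordinator.addPersistentStoreWithType(NSSQLiteStoreType,
        configuration: nil,
        URL: storeURL,
        options: options,
        error: &error)
    if store == nil {
        println("Failed to rebuild the store from iCloud: \(error)")
    }
}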