AVFoundation crash when setting isEnabled on AVCaptureAudioChannel - swift

For some AVCaptureDevices, trying to set isEnabled on an AVCaptureAudioChannel results in a crash:
Assertion failed: (pSrcASBD->mChannelsPerFrame == [_internal->audioChannels count]), function -[AVCaptureConnection_Tundra copyPostSplitSummaryAudioFormatDescription], file AVCaptureConnection.m, line 817.
guard let channel = connection.audioChannels[safe: channelIndex] else { return }
if channel.isEnabled != enabled {
    channel.isEnabled = enabled
}
The same issue does not occur when setting the volume:
guard let channel = connection.audioChannels[safe: channelIndex] else { return }
if channel.volume != volume {
    channel.volume = volume
}
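For context, the [safe:] subscript used above is not part of the Swift standard library; it is assumed to be a small bounds-checking Collection extension along these lines (the question does not show it):

extension Collection {
    // Hypothetical helper: returns nil instead of trapping on an out-of-range index.
    subscript(safe index: Index) -> Element? {
        indices.contains(index) ? self[index] : nil
    }
}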
On a slightly unrelated note, I would love to know why the AVFoundation issues I run into all carry the _Tundra suffix. I believe this is an internal codename, but I don't see other crashes or log entries online that use it.

Related

Unable to connect to AppleSMC from Xcode

I have the following code to access the AppleSMC and control the fan speed. When I execute the Swift file directly from the command line (swift ./my_file.swift), the connection works properly. If I move it to Xcode and run it from there, I get the error indicated below. Am I forgetting anything?
var mainport: mach_port_t = 0
var result = IOMainPort(kIOMainPortDefault, &mainport)
guard result == kIOReturnSuccess else { throw result }
let serviceDir = IOServiceMatching("AppleSMC")
let service = IOServiceGetMatchingService(mainport, serviceDir)
result = IOServiceOpen(service, mach_task_self_, 0, &con) // ---> expression unexpectedly raised an error: -536870174
guard result == kIOReturnSuccess else { throw result }
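For reference, a self-contained version of that snippet that compiles as a standalone function might look like this; the io_connect_t variable and the IOKitError wrapper are assumptions, since the question does not show how con is declared or how the kern_return_t is thrown:

import IOKit

// Assumed wrapper so a kern_return_t can be thrown as a Swift Error (not shown in the question).
struct IOKitError: Error { let code: kern_return_t }

func openSMC() throws -> io_connect_t {
    var mainport: mach_port_t = 0
    var con: io_connect_t = 0

    var result = IOMainPort(kIOMainPortDefault, &mainport)
    guard result == kIOReturnSuccess else { throw IOKitError(code: result) }

    let serviceDir = IOServiceMatching("AppleSMC")
    let service = IOServiceGetMatchingService(mainport, serviceDir)

    // This is the call that reportedly fails with -536870174 when run from Xcode.
    result = IOServiceOpen(service, mach_task_self_, 0, &con)
    guard result == kIOReturnSuccess else { throw IOKitError(code: result) }
    return con
}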

FirebaseMLNLTranslation model files not found

I am adding a text translation feature to my app. I have the text and run the initial setup, but this error keeps coming up.
error: Error Domain=com.firebase.ml Code=13 "Translation model files not found. Make sure to call downloadModelIfNeeded and if that fails, delete the models and retry." UserInfo={NSLocalizedDescription=Translation model files not found. Make sure to call downloadModelIfNeeded and if that fails, delete the models and retry.}
This is the function that translates the text...
translator.translate(textView.text) { [self] translatedText, error in
    print("Translator Translated")
    guard error == nil, let translatedText = translatedText else {
        print(error!); return // prints the error below...
    }
    print(translatedText) // nil (when I comment out the guard statement)
}
This is the setup code run before the translation.
let options = TranslatorOptions(sourceLanguage: .en, targetLanguage: .es)
print("Options: \(options)") // Options: sourceLanguage: en, targetLanguage: es
let translator = NaturalLanguage.naturalLanguage().translator(options: options)
print("Translator: \(translator)") // Translator: <FIRTranslator: 0x280173ac0>
let conditions = ModelDownloadConditions(
    allowsCellularAccess: true,
    allowsBackgroundDownloading: false
)
print("Conditions: \(conditions)") // Conditions: allowsCellularAccess: 1, allowsBackgroundDownloading: 0
translator.downloadModelIfNeeded(with: conditions) { error in
    guard error == nil else {
        print("Model Failed To Download because: \(error!)"); return
    }
}
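For clarity, chaining the two calls above so that translate only runs after downloadModelIfNeeded completes without error looks like this (just a sketch that reorders the calls already shown):

translator.downloadModelIfNeeded(with: conditions) { error in
    guard error == nil else {
        print("Model Failed To Download because: \(error!)"); return
    }
    // Translate only once the model download has completed successfully.
    translator.translate(textView.text) { translatedText, error in
        guard error == nil, let translatedText = translatedText else {
            print(error!); return
        }
        print(translatedText)
    }
}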
textView.text is not nil. I have no idea how to solve this problem, and I have not found any solutions online. It actually used to work; I revisited the project after some time off and it suddenly started presenting this error. Any help is appreciated.

Screen Sharing using Twilio in iOS

I am using the Twilio iOS framework to connect to a group room.
Below is the code I use when the connect-room button is tapped:
let recorder = RPScreenRecorder.shared()
recorder.isMicrophoneEnabled = false
recorder.isCameraEnabled = false
// The source produces either downscaled buffers with smoother motion, or an HD screen recording.
videoSource = ReplayKitVideoSource(isScreencast: true, telecineOptions: ReplayKitVideoSource.TelecineOptions.disabled)
screenTrack = LocalVideoTrack(source: videoSource!,
                              enabled: true,
                              name: "Screen")
recorder.startCapture(handler: { (sampleBuffer, type, error) in
    if error != nil {
        print("Capture error: ", error as Any)
        return
    }
    switch type {
    case RPSampleBufferType.video:
        self.videoSource?.processFrame(sampleBuffer: sampleBuffer)
    case RPSampleBufferType.audioApp:
        break
    case RPSampleBufferType.audioMic:
        // We use `TVIDefaultAudioDevice` to capture and playback audio for conferencing.
        break
    }
}) { (error) in
    if error != nil {
        print("Screen capture error: ", error as Any)
    } else {
        print("Screen capture started.")
    }
}
if (accessToken == "TWILIO_ACCESS_TOKEN") {
    do {
        accessToken = try TokenUtils.fetchToken(url: tokenUrl)
    } catch {
        let message = "Failed to fetch access token"
        logMessage(messageText: message)
        return
    }
}
// Prepare local media which we will share with Room Participants.
self.prepareLocalMedia()
// Preparing the connect options with the access token that we fetched (or hardcoded).
let connectOptions = ConnectOptions(token: accessToken) { (builder) in
    // Use the local media that we prepared earlier.
    builder.audioTracks = self.localAudioTrack != nil ? [self.localAudioTrack!] : [LocalAudioTrack]()
    builder.videoTracks = self.localVideoTrack != nil ? [self.localVideoTrack!, self.screenTrack!] : [LocalVideoTrack]()
    // Use the preferred audio codec
    if let preferredAudioCodec = Settings.shared.audioCodec {
        builder.preferredAudioCodecs = [preferredAudioCodec]
    }
    // Use the preferred video codec
    if let preferredVideoCodec = Settings.shared.videoCodec {
        builder.preferredVideoCodecs = [preferredVideoCodec]
    }
    // Use the preferred encoding parameters
    if let encodingParameters = Settings.shared.getEncodingParameters() {
        builder.encodingParameters = encodingParameters
    }
    // Use the preferred signaling region
    if let signalingRegion = Settings.shared.signalingRegion {
        builder.region = signalingRegion
    }
    builder.roomName = self.roomTextField.text
}
// Connect to the Room using the options we provided.
room = TwilioVideoSDK.connect(options: connectOptions, delegate: self)
logMessage(messageText: "Attempting to connect to room \(String(describing: self.roomTextField.text))")
Once I am connected to the group room with a remote participant, I want to share my screen with them.
To implement this feature I referred to the "ReplayKitExample" using the in-app capture method, but I have not been able to get it working: the remote participant cannot see the screen-share content.
Nothing related to screen sharing happens with this code, and I am looking for input on how to implement it.
It's happening because you are trying to send both "cameraSource" and "videoSource" data at the same time; you have to unpublish the "cameraSource" track before sending the "videoSource".
Here's my code for reference:
//MARK: - Screen Sharing via ReplayKit
extension CallRoomViewController: RPScreenRecorderDelegate {

    func broadCastButtonTapped() {
        guard screenRecorder.isAvailable else {
            print("Not able to Broadcast")
            return
        }
        print("Can Broadcast")
        if self.videoSource != nil {
            self.stopConference()
        } else {
            self.startConference()
        }
    }

    func publishVideoTrack() {
        if let participant = self.room?.localParticipant,
           let videoTrack = self.localVideoTrack {
            participant.publishVideoTrack(videoTrack)
        }
    }

    func unpublishVideoTrack() {
        if let participant = self.room?.localParticipant,
           let videoTrack = self.localVideoTrack {
            participant.unpublishVideoTrack(videoTrack)
        }
    }

    func stopConference() {
        self.unpublishVideoTrack()
        self.localVideoTrack = nil
        self.videoSource = nil
        self.localVideoTrack = LocalVideoTrack(source: cameraSource!, enabled: true, name: "Camera")
        screenRecorder.stopCapture { (captureError) in
            if let error = captureError {
                print("Screen capture stop error: ", error as Any)
            } else {
                print("Screen capture stopped.")
                self.publishVideoTrack()
            }
        }
    }

    func startConference() {
        self.unpublishVideoTrack()
        self.localVideoTrack = nil
        // We are only using ReplayKit to capture the screen.
        // Use a LocalAudioTrack to capture the microphone for sharing audio in the room.
        screenRecorder.isMicrophoneEnabled = false
        // Use a LocalVideoTrack with a CameraSource to capture the camera for sharing camera video in the room.
        screenRecorder.isCameraEnabled = false
        // The source produces either downscaled buffers with smoother motion, or an HD screen recording.
        self.videoSource = ReplayKitVideoSource(isScreencast: true,
                                                telecineOptions: ReplayKitVideoSource.TelecineOptions.p60to24or25or30)
        self.localVideoTrack = LocalVideoTrack(source: videoSource!,
                                               enabled: true,
                                               name: "Screen")
        let videoCodec = Settings.shared.videoCodec ?? Vp8Codec()!
        let (_, outputFormat) = ReplayKitVideoSource.getParametersForUseCase(codec: videoCodec,
                                                                             isScreencast: true,
                                                                             telecineOptions: ReplayKitVideoSource.TelecineOptions.p60to24or25or30)
        self.videoSource?.requestOutputFormat(outputFormat)
        screenRecorder.startCapture(handler: { (sampleBuffer, type, error) in
            if error != nil {
                print("Capture error: ", error as Any)
                return
            }
            switch type {
            case RPSampleBufferType.video:
                self.videoSource?.processFrame(sampleBuffer: sampleBuffer)
            case RPSampleBufferType.audioApp:
                break
            case RPSampleBufferType.audioMic:
                // We use `TVIDefaultAudioDevice` to capture and playback audio for conferencing.
                break
            default:
                print(error ?? "screenRecorder error")
            }
        }) { (error) in
            if error != nil {
                print("Screen capture error: ", error as Any)
            } else {
                print("Screen capture started.")
                self.publishVideoTrack()
            }
        }
    }
}
And you can connect to the room from viewDidLoad():
func connectToChatRoom() {
    // Configure access token either from server or manually.
    // If the default wasn't changed, try fetching from server.
    accessToken = self.callRoomDetail.charRoomAccessToken
    guard accessToken != "TWILIO_ACCESS_TOKEN" else {
        let message = "Failed to fetch access token"
        print(message)
        return
    }
    // Prepare local media which we will share with Room Participants.
    self.prepareLocalMedia()
    // Preparing the connect options with the access token that we fetched (or hardcoded).
    let connectOptions = ConnectOptions(token: accessToken) { (builder) in
        // The name of the Room where the Client will attempt to connect to. Please note that if you pass an empty
        // Room `name`, the Client will create one for you. You can get the name or sid from any connected Room.
        builder.roomName = self.callRoomDetail.chatRoomName
        // Use the local media that we prepared earlier.
        if let audioTrack = self.localAudioTrack {
            builder.audioTracks = [audioTrack]
        }
        if let videoTrack = self.localVideoTrack {
            builder.videoTracks = [videoTrack]
        }
        // Use the preferred audio codec
        if let preferredAudioCodec = Settings.shared.audioCodec {
            builder.preferredAudioCodecs = [preferredAudioCodec]
        }
        // Use the preferred video codec
        if let preferredVideoCodec = Settings.shared.videoCodec {
            builder.preferredVideoCodecs = [preferredVideoCodec]
        }
        // Use the preferred encoding parameters
        let videoCodec = Settings.shared.videoCodec ?? Vp8Codec()!
        let (encodingParams, _) = ReplayKitVideoSource.getParametersForUseCase(codec: videoCodec,
                                                                               isScreencast: true,
                                                                               telecineOptions: ReplayKitVideoSource.TelecineOptions.p60to24or25or30)
        builder.encodingParameters = encodingParams
        // Use the preferred signaling region
        if let signalingRegion = Settings.shared.signalingRegion {
            builder.region = signalingRegion
        }
        builder.isAutomaticSubscriptionEnabled = true
        builder.isNetworkQualityEnabled = true
        builder.networkQualityConfiguration = NetworkQualityConfiguration(localVerbosity: .minimal,
                                                                          remoteVerbosity: .minimal)
    }
    // Connect to the Room using the options we provided.
    room = TwilioVideoSDK.connect(options: connectOptions, delegate: self)
    print("Attempting to connect to room \(self.callRoomDetail.chatRoomName ?? "")")
    self.showRoomUI(inRoom: true)
}
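For completeness, calling this from viewDidLoad (as mentioned above) is just:

override func viewDidLoad() {
    super.viewDidLoad()
    self.connectToChatRoom()
}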
You can get the ReplayKitVideoSource file and the other helper files from the Twilio repository: https://github.com/twilio/video-quickstart-ios/tree/master/ReplayKitExample
I work at Twilio and I can confirm that you should be able to publish video tracks for both camera and screen at the same time without issue.
It is difficult to identify why this is not working for you without a completely functional example app.
However, I have tested this using one of our reference apps and confirmed that it works. More details are here: https://github.com/twilio/video-quickstart-ios/issues/650#issuecomment-1178232542
Hopefully this is a useful example for how to publish both camera and screen video at the same time.
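As a rough sketch, using only the types and calls already shown in this thread (cameraSource, videoSource, and the room reference are assumed to exist), publishing both tracks side by side looks like this:

// Sketch only: camera and screen tracks published simultaneously.
let cameraTrack = LocalVideoTrack(source: cameraSource!, enabled: true, name: "Camera")
let screenTrack = LocalVideoTrack(source: videoSource!, enabled: true, name: "Screen")

if let participant = room?.localParticipant {
    participant.publishVideoTrack(cameraTrack!)
    participant.publishVideoTrack(screenTrack!)
}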

AudioKit AKMicrophone not outputting any data

I am trying to capture FFT data from a microphone. I've managed to get this to work before with a similar codebase, but since macOS Mojave it's broken: the FFT data constantly stays 0.
Relevant Code:
var fft: AKFFTTap?

var inputDevice: AKDevice? {
    didSet {
        inputNode = nil
        updateAudioNode()
    }
}

var inputNode: AKNode? {
    didSet {
        if fft != nil {
            // According to the AKFFTTap class reference, it will always be on tap 0
            oldValue?.avAudioNode.removeTap(onBus: 0)
        }
        fft = inputNode.map { AKFFTTap($0) }
    }
}
[...]
guard let device = inputDevice else {
    inputNode = ViewController.shared.player.mixer
    return
}
do {
    try AudioKit.setInputDevice(device)
} catch {
    print("Error setting input device: \(error)")
    return
}
let microphoneNode = AKMicrophone()
do {
    try microphoneNode.setDevice(device)
} catch {
    print("Failed setting node input device: \(error)")
    return
}
microphoneNode.start()
microphoneNode.volume = 3
print("Switched Node: \(microphoneNode), started: \(microphoneNode.isStarted)")
inputNode = microphoneNode
try! AudioKit.start()
All of the code is called and no errors are output, but the FFT data simply stays blank. With some reordering of the code I get varying errors.
A full version of the class, for completeness, is here.
Finally, I also tried implementing the playground examples one to one. Since Xcode playgrounds seem to crash with AudioKit, I tried them in my own codebase, but there's no difference there either. AKFrequencyTracker, for example, reports 0 for both amplitude and frequency.
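For reference, the playground-style setup I tried to reproduce looks roughly like this (AudioKit 4 API; the zero-gain AKBooster only exists so the chain has an output without audible monitoring):

let mic = AKMicrophone()
let tracker = AKFrequencyTracker(mic)
let fftTap = AKFFTTap(mic)

// Route the tracker through a silent booster so the engine has an output node.
AudioKit.output = AKBooster(tracker, gain: 0)
do {
    try AudioKit.start()
} catch {
    print("AudioKit failed to start: \(error)")
}

// tracker.amplitude / tracker.frequency and fftTap.fftData should update once audio flows.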
I am not 100% positive of this, but I'd like you to try AudioKit v4.5.1 out. We definitely fixed a bug in AKMicrophone, and that could have downstream consequences. I'll withdraw this answer and keep looking if it is not fixed. Let me know.

AVMIDIPlayer DLSBankManager::AddBank: Bank load failed

I use AVMIDIPlayer to play a MusicSequence containing only one note message. Most of the time it works fine, but sometimes there is no sound and the following is logged:
DLSBankManager::AddBank: Bank load failed
Error Domain=com.apple.coreaudio.avfaudio Code=-10871 "(null)"
It works well on iOS 9, but when I test it on iOS 10 it runs into this issue.
I'm sure that the sf2 sound bank file URL is set properly.
The code is below:
func playAVMIDIPlayerPreview(_ musicSequence: MusicSequence) {
    guard let bankURL = Bundle.main.url(forResource: "FluidR3 GM2-2", withExtension: "sf2") else {
        fatalError("soundbank file not found.")
    }
    var status = OSStatus(noErr)
    var data: Unmanaged<CFData>?
    status = MusicSequenceFileCreateData(musicSequence,
                                         MusicSequenceFileTypeID.midiType,
                                         MusicSequenceFileFlags.eraseFile,
                                         480, &data)
    if status != OSStatus(noErr) {
        print("bad status \(status)")
    }
    if let md = data {
        let midiData = md.takeUnretainedValue() as Data
        do {
            try self.midiPlayerPreview = AVMIDIPlayer(data: midiData, soundBankURL: bankURL)
        } catch let error as NSError {
            print("Error \(error)")
        }
        data?.release()
        self.midiPlayerPreview?.play({ () -> Void in
            self.midiPlayerPreview = nil
            self.musicSequencePreview = nil
        })
    }
}
The error occurs on this line:
try self.midiPlayerPreview = AVMIDIPlayer(data: midiData, soundBankURL: bankURL)
Try setting the global variable errno to 0 (errno = 0) before loading the soundfont with:
try self.midiPlayerPreview = AVMIDIPlayer(data: midiData, soundBankURL: bankURL)
We experienced the same issue and, at the same time, this one as well. So we tried applying the fix for the other issue to this one, and it just worked.
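In code, the suggested workaround amounts to resetting errno right before the player is created (a sketch dropped into the question's existing function):

errno = 0 // reset the C global errno immediately before the soundbank load, per the suggestion above
do {
    try self.midiPlayerPreview = AVMIDIPlayer(data: midiData, soundBankURL: bankURL)
} catch {
    print("Error \(error)")
}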