stopContinuousRecognition() blocks the app for 5-7 seconds - swift

I am trying to implement speech recognition with the Azure Speech SDK in an iOS project using Swift, and I ran into the problem that the function that finishes recognition, stopContinuousRecognition(), blocks the app UI for a few seconds, with no sign of memory or CPU load or a leak. I tried wrapping the call in DispatchQueue.main.async {}, but that changed nothing. Has anyone run into this? Do I need to move it to a separate thread, and why does the function take so long to finish?
Edit:
It is very hard to provide a working example, but basically I am calling this function on a button press:
private func startListenAzureRecognition(lang: String) {
    let audioFormat = SPXAudioStreamFormat.init(usingPCMWithSampleRate: 8000, bitsPerSample: 16, channels: 1)
    azurePushAudioStream = SPXPushAudioInputStream(audioFormat: audioFormat!)
    let audioConfig = SPXAudioConfiguration(streamInput: azurePushAudioStream!)!

    var speechConfig: SPXSpeechConfiguration?
    do {
        let sub = "enter your code here"
        let region = "enter your region here"
        try speechConfig = SPXSpeechConfiguration(subscription: sub, region: region)
        speechConfig!.enableDictation()
        speechConfig?.speechRecognitionLanguage = lang
    } catch {
        print("error \(error) happened")
        speechConfig = nil
    }

    self.azureRecognition = try! SPXSpeechRecognizer(speechConfiguration: speechConfig!, audioConfiguration: audioConfig)

    self.azureRecognition!.addRecognizingEventHandler() { reco, evt in
        if evt.result.text != nil && evt.result.text != "" {
            print(evt.result.text ?? "no result")
        }
    }
    self.azureRecognition!.addRecognizedEventHandler() { reco, evt in
        if evt.result.text != nil && evt.result.text != "" {
            print(evt.result.text ?? "no result")
        }
    }

    do {
        try self.azureRecognition?.startContinuousRecognition()
    } catch {
        print("error \(error) happened")
    }
}
And when I press the button again to stop recognition, I am calling this function:
private func stopListenAzureRecognition() {
    DispatchQueue.main.async {
        print("start")
        // app blocks here
        try! self.azureRecognition?.stopContinuousRecognition()
        self.azurePushAudioStream!.close()
        self.azureRecognition = nil
        self.azurePushAudioStream = nil
        print("stop")
    }
}
Also, I am pushing raw audio data from the mic into the stream (recognizeOnce works perfectly for the first phrase, so the audio data itself is fine).
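Roughly, the audio reaches the recognizer like this (a minimal sketch; audioCallback and audioCaptureDidStop are placeholder names for however the mic capture delivers PCM chunks, while write(_:) and close() are the push-stream calls):
// Placeholder capture callback: each chunk of 8 kHz, 16-bit mono PCM from the mic
// is written into the same push stream the recognizer was configured with.
func audioCallback(_ pcmChunk: Data) {
    azurePushAudioStream?.write(pcmChunk)
}

// When capture ends, closing the stream tells the SDK that no more audio is coming.
func audioCaptureDidStop() {
    azurePushAudioStream?.close()
}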

Try closing the stream first and then stopping the continuous recognition:
azurePushAudioStream!.close()
try! azureRecognition?.stopContinuousRecognition()
azureRecognition = nil
azurePushAudioStream = nil
You don't even need to do it asynchronously.
At least this worked for me.
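Applied to the stopListenAzureRecognition() function from the question, a minimal sketch of that ordering (same properties as above; try? is used instead of try! so a failure while stopping doesn't crash):
private func stopListenAzureRecognition() {
    print("start")
    // Close the push stream first, as suggested above, then stop recognition and release everything.
    azurePushAudioStream?.close()
    try? azureRecognition?.stopContinuousRecognition()
    azureRecognition = nil
    azurePushAudioStream = nil
    print("stop")
}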

Related

Screen Sharing using Twilio in iOS

I am using the Twilio iOS framework to connect to a group room.
Below is the code that runs when the connect-room button is tapped:
let recorder = RPScreenRecorder.shared()
recorder.isMicrophoneEnabled = false
recorder.isCameraEnabled = false

// The source produces either downscaled buffers with smoother motion, or an HD screen recording.
videoSource = ReplayKitVideoSource(isScreencast: true, telecineOptions: ReplayKitVideoSource.TelecineOptions.disabled)
screenTrack = LocalVideoTrack(source: videoSource!,
                              enabled: true,
                              name: "Screen")

recorder.startCapture(handler: { (sampleBuffer, type, error) in
    if error != nil {
        print("Capture error: ", error as Any)
        return
    }
    switch type {
    case RPSampleBufferType.video:
        self.videoSource?.processFrame(sampleBuffer: sampleBuffer)
    case RPSampleBufferType.audioApp:
        break
    case RPSampleBufferType.audioMic:
        // We use `TVIDefaultAudioDevice` to capture and playback audio for conferencing.
        break
    }
}) { (error) in
    if error != nil {
        print("Screen capture error: ", error as Any)
    } else {
        print("Screen capture started.")
    }
}

if (accessToken == "TWILIO_ACCESS_TOKEN") {
    do {
        accessToken = try TokenUtils.fetchToken(url: tokenUrl)
    } catch {
        let message = "Failed to fetch access token"
        logMessage(messageText: message)
        return
    }
}

// Prepare local media which we will share with Room Participants.
self.prepareLocalMedia()

// Preparing the connect options with the access token that we fetched (or hardcoded).
let connectOptions = ConnectOptions(token: accessToken) { (builder) in
    // Use the local media that we prepared earlier.
    builder.audioTracks = self.localAudioTrack != nil ? [self.localAudioTrack!] : [LocalAudioTrack]()
    builder.videoTracks = self.localVideoTrack != nil ? [self.localVideoTrack!, self.screenTrack!] : [LocalVideoTrack]()

    // Use the preferred audio codec
    if let preferredAudioCodec = Settings.shared.audioCodec {
        builder.preferredAudioCodecs = [preferredAudioCodec]
    }
    // Use the preferred video codec
    if let preferredVideoCodec = Settings.shared.videoCodec {
        builder.preferredVideoCodecs = [preferredVideoCodec]
    }
    // Use the preferred encoding parameters
    if let encodingParameters = Settings.shared.getEncodingParameters() {
        builder.encodingParameters = encodingParameters
    }
    // Use the preferred signaling region
    if let signalingRegion = Settings.shared.signalingRegion {
        builder.region = signalingRegion
    }
    builder.roomName = self.roomTextField.text
}

// Connect to the Room using the options we provided.
room = TwilioVideoSDK.connect(options: connectOptions, delegate: self)
logMessage(messageText: "Attempting to connect to room \(String(describing: self.roomTextField.text))")
Once I am connected to the room with a remote participant, I want to share my screen with them.
To implement this feature I referred to the "ReplayKitExample" using the in-app capture method, but I have not been able to get it working: the remote participant never sees the shared screen content.
Nothing related to screen sharing happens with this code, and I am looking for input on how to implement it.
It's happening because you are trying to send data from both "cameraSource" and "videoSource" at the same time; you have to unpublish the camera track before publishing the screen ("videoSource") track.
Here's my code for reference:
//MARK: - Screen Sharing via ReplayKit
extension CallRoomViewController: RPScreenRecorderDelegate {

    func broadCastButtonTapped() {
        guard screenRecorder.isAvailable else {
            print("Not able to Broadcast")
            return
        }
        print("Can Broadcast")
        if self.videoSource != nil {
            self.stopConference()
        } else {
            self.startConference()
        }
    }

    func publishVideoTrack() {
        if let participant = self.room?.localParticipant,
           let videoTrack = self.localVideoTrack {
            participant.publishVideoTrack(videoTrack)
        }
    }

    func unpublishVideoTrack() {
        if let participant = self.room?.localParticipant,
           let videoTrack = self.localVideoTrack {
            participant.unpublishVideoTrack(videoTrack)
        }
    }

    func stopConference() {
        self.unpublishVideoTrack()
        self.localVideoTrack = nil
        self.videoSource = nil
        self.localVideoTrack = LocalVideoTrack(source: cameraSource!, enabled: true, name: "Camera")

        screenRecorder.stopCapture { (captureError) in
            if let error = captureError {
                print("Screen capture stop error: ", error as Any)
            } else {
                print("Screen capture stopped.")
                self.publishVideoTrack()
            }
        }
    }

    func startConference() {
        self.unpublishVideoTrack()
        self.localVideoTrack = nil

        // We are only using ReplayKit to capture the screen.
        // Use a LocalAudioTrack to capture the microphone for sharing audio in the room.
        screenRecorder.isMicrophoneEnabled = false
        // Use a LocalVideoTrack with a CameraSource to capture the camera for sharing camera video in the room.
        screenRecorder.isCameraEnabled = false

        // The source produces either downscaled buffers with smoother motion, or an HD screen recording.
        self.videoSource = ReplayKitVideoSource(isScreencast: true,
                                                telecineOptions: ReplayKitVideoSource.TelecineOptions.p60to24or25or30)
        self.localVideoTrack = LocalVideoTrack(source: videoSource!,
                                               enabled: true,
                                               name: "Screen")

        let videoCodec = Settings.shared.videoCodec ?? Vp8Codec()!
        let (_, outputFormat) = ReplayKitVideoSource.getParametersForUseCase(codec: videoCodec,
                                                                             isScreencast: true,
                                                                             telecineOptions: ReplayKitVideoSource.TelecineOptions.p60to24or25or30)
        self.videoSource?.requestOutputFormat(outputFormat)

        screenRecorder.startCapture(handler: { (sampleBuffer, type, error) in
            if error != nil {
                print("Capture error: ", error as Any)
                return
            }
            switch type {
            case RPSampleBufferType.video:
                self.videoSource?.processFrame(sampleBuffer: sampleBuffer)
            case RPSampleBufferType.audioApp:
                break
            case RPSampleBufferType.audioMic:
                // We use `TVIDefaultAudioDevice` to capture and playback audio for conferencing.
                break
            default:
                print(error ?? "screenRecorder error")
            }
        }) { (error) in
            if error != nil {
                print("Screen capture error: ", error as Any)
            } else {
                print("Screen capture started.")
                self.publishVideoTrack()
            }
        }
    }
}
And you can connect to the room from viewDidLoad():
func connectToChatRoom() {
    // Configure access token either from server or manually.
    // If the default wasn't changed, try fetching from server.
    accessToken = self.callRoomDetail.charRoomAccessToken

    guard accessToken != "TWILIO_ACCESS_TOKEN" else {
        let message = "Failed to fetch access token"
        print(message)
        return
    }

    // Prepare local media which we will share with Room Participants.
    self.prepareLocalMedia()

    // Preparing the connect options with the access token that we fetched (or hardcoded).
    let connectOptions = ConnectOptions(token: accessToken) { (builder) in
        // The name of the Room where the Client will attempt to connect to. Please note that if you pass an empty
        // Room `name`, the Client will create one for you. You can get the name or sid from any connected Room.
        builder.roomName = self.callRoomDetail.chatRoomName

        // Use the local media that we prepared earlier.
        if let audioTrack = self.localAudioTrack {
            builder.audioTracks = [audioTrack]
        }
        if let videoTrack = self.localVideoTrack {
            builder.videoTracks = [videoTrack]
        }

        // Use the preferred audio codec
        if let preferredAudioCodec = Settings.shared.audioCodec {
            builder.preferredAudioCodecs = [preferredAudioCodec]
        }
        // Use the preferred video codec
        if let preferredVideoCodec = Settings.shared.videoCodec {
            builder.preferredVideoCodecs = [preferredVideoCodec]
        }
        // Use the preferred encoding parameters
        let videoCodec = Settings.shared.videoCodec ?? Vp8Codec()!
        let (encodingParams, _) = ReplayKitVideoSource.getParametersForUseCase(codec: videoCodec,
                                                                               isScreencast: true,
                                                                               telecineOptions: ReplayKitVideoSource.TelecineOptions.p60to24or25or30)
        builder.encodingParameters = encodingParams

        // Use the preferred signaling region
        if let signalingRegion = Settings.shared.signalingRegion {
            builder.region = signalingRegion
        }
        builder.isAutomaticSubscriptionEnabled = true
        builder.isNetworkQualityEnabled = true
        builder.networkQualityConfiguration = NetworkQualityConfiguration(localVerbosity: .minimal,
                                                                          remoteVerbosity: .minimal)
    }

    // Connect to the Room using the options we provided.
    room = TwilioVideoSDK.connect(options: connectOptions, delegate: self)
    print("Attempting to connect to room \(self.callRoomDetail.chatRoomName ?? "")")
    self.showRoomUI(inRoom: true)
}
You can get the ReplayKitVideoSource file and the other helper files from the Twilio repository: https://github.com/twilio/video-quickstart-ios/tree/master/ReplayKitExample
I work at Twilio and I can confirm that you should be able to publish video tracks for both the camera and the screen at the same time without issue.
It is difficult to identify why this is not working for you without a completely functional example app.
However, I have tested this using one of our reference apps and confirmed it is working. More details are here: https://github.com/twilio/video-quickstart-ios/issues/650#issuecomment-1178232542
Hopefully this is a useful example of how to publish both camera and screen video at the same time.
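As a rough illustration of that, a minimal sketch of publishing both tracks side by side (cameraSource, videoSource, and room follow the code above; how each source is set up is omitted):
// Sketch: create a camera track and a screen track and publish both,
// instead of unpublishing one before publishing the other.
let cameraTrack = LocalVideoTrack(source: cameraSource!, enabled: true, name: "Camera")
let screenTrack = LocalVideoTrack(source: videoSource!, enabled: true, name: "Screen")

if let participant = room?.localParticipant {
    if let cameraTrack = cameraTrack { participant.publishVideoTrack(cameraTrack) }
    if let screenTrack = screenTrack { participant.publishVideoTrack(screenTrack) }
}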

SwiftUI and Firebase - Stream error: 'Not found: No document to update:

So, I have a program that, when it opens, looks for a specific document name in a specific collection (both specified) and, when it is found, copies the document name and starts a listener. If it doesn't find the document after five 5-second intervals, the app stops. For some reason, when I run the code, after the first check completes I get about a thousand repetitions of this error:
[Firebase/Firestore][I-FST000001] WriteStream (7ffcbec0eac8) Stream error: 'Not found: No document to update:
Here's the code I'm using to call Firestore:
let capturedCode: String? = "party"

.onAppear(perform: {
    Timer.scheduledTimer(withTimeInterval: 5, repeats: true) { timer in
        print("running code check sequence")
        if let code = capturedCode {
            calcCloud.checkSessionCode(code)
            if env.doesCodeExist {
                print("code found! applying to environment!")
                env.currentSessionCode = code
                calcCloud.watchCloudDataAndUpdate()
                allClear(env: env)
                timer.invalidate()
            } else if timerCycles < 5 {
                timerCycles += 1
                print("code not found, this is cycle \(timerCycles) of 5")
            } else {
                print("could not find document on firebase, now committing suicide")
                let x = ""
                let _ = Int(x)!
            }
        }
    }
})
Here is the code I'm using to check Firebase:
func checkSessionCode(_ code: String) {
    print("checkSessionCode running")
    let docRef = self.env.db.collection(K.sessions).document(code)
    docRef.getDocument { (document, error) in
        if document!.exists {
            print("Document data: \(document!.data())")
            self.env.doesCodeExist = true
        } else {
            print("Document does not exist")
            self.env.doesCodeExist = false
        }
    }
}
And here is the code that should be executed once the code is found and applied:
func watchCloudDataAndUpdate() {
    env.db.collection(K.sessions).document(env.currentSessionCode!).addSnapshotListener { (documentSnapshot, error) in
        guard let document = documentSnapshot else {
            print("Error fetching snapshot: \(error!)")
            return
        }
        guard let data = document.data() else {
            print("Document data was empty.")
            return
        }
        // ... use `data` to update the environment ...
    }
}
Where did I go wrong, and what is this error all about? Thanks in advance :)
EDIT: For clarity, it seems that the errors begin once the onAppear finishes executing...
This is why I need to stop coding after 1 AM... On my simulator, I deleted the app and relaunched, and everything started working again. Sometimes the simplest answers are the right ones.

Function not stopping after handleComplete

I created a basic Google Places app that lets users check in to a location. When a user tries to check in, I loop through the list of place likelihoods to verify that the user is actually at the location shown in the app. However, when I try to escape the loop after confirming the location is correct, my function still ends up in my "else" situation (an error message that asks the user to please check in to the correct location).
The following function gets called in viewWillAppear:
func checkIn(handleComplete: @escaping (() -> ())) {
    guard let currentUserID = User.current?.key else { return }

    // Specify the place data types to return.
    let fields: GMSPlaceField = GMSPlaceField(rawValue: UInt(GMSPlaceField.name.rawValue) |
                                              UInt(GMSPlaceField.placeID.rawValue))!

    placesClient.findPlaceLikelihoodsFromCurrentLocation(withPlaceFields: fields, callback: {
        (placeLikelihoodList: Array<GMSPlaceLikelihood>?, error: Error?) in
        if let error = error {
            print("An error occurred: \(error.localizedDescription)")
            return
        }
        if let placeLikelihoodList = placeLikelihoodList {
            for likelihood in placeLikelihoodList {
                let place = likelihood.place
                if likelihood.likelihood >= 0.75 && place.placeID! == self.hangoutID {
                    let place = likelihood.place
                    print("Current Place name \(String(describing: place.name!)) at likelihood \(likelihood.likelihood)")
                    print("Current PlaceID \(String(describing: place.placeID!))")
                    self.delta = 0.0
                    // update checkin
                    DispatchQueue.main.async {
                        let hangoutRef = self.db.collection("users").document(currentUserID).collection("hangout").document(self.hangoutID).updateData([
                            "lastCheckin": Date()
                        ]) { err in
                            if let err = err {
                                print("Error updating document: \(err)")
                            } else {
                                print("Document successfully updated")
                            }
                        }
                    }
                    handleComplete()
                }
            }
            self.presentDismissableAlert(title: "", message: "Please check in to the hangout to join this chat", button: "OK", dismissed: { (UIAlertAction) in
                self.performSegue(withIdentifier: "unwindSegueToChats", sender: self)
            })
        }
    })
}
If the correct conditions are met, the code will land on the handleComplete() line, but then it will still execute the dismissable alert underneath and segue the user out of the room. How can I fix the flow so that the app cycles through the list of likely Places and stops the function on handleComplete if the correct condition is met, and otherwise proceeds to the error message (user is not at the correct Place)?
Thanks
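One way to restructure the likelihood loop is sketched below (same names as the checkIn function above): remember whether a match was found, call handleComplete() once, and only present the alert when nothing matched.
if let placeLikelihoodList = placeLikelihoodList {
    var checkedIn = false
    for likelihood in placeLikelihoodList {
        let place = likelihood.place
        if likelihood.likelihood >= 0.75 && place.placeID == self.hangoutID {
            checkedIn = true
            self.delta = 0.0
            // update checkin
            DispatchQueue.main.async {
                self.db.collection("users").document(currentUserID)
                    .collection("hangout").document(self.hangoutID)
                    .updateData(["lastCheckin": Date()])
            }
            handleComplete()
            break   // stop scanning once the hangout has been matched
        }
    }
    if !checkedIn {
        self.presentDismissableAlert(title: "", message: "Please check in to the hangout to join this chat", button: "OK", dismissed: { (UIAlertAction) in
            self.performSegue(withIdentifier: "unwindSegueToChats", sender: self)
        })
    }
}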

Updating UI after retrieving device settings

I want to do something simple in Swift. I have to retrieve some settings from a device and then initialize some UI controls with those settings. The retrieval may take a few seconds to complete, so I don't want the code to continue until after the retrieval finishes (it is asynchronous).
I have read countless posts on many websites including this one and read many tutorials. None seem to work for me.
Also, in the interest of encapsulation, I want to keep the details within the device object.
When I run the app, I see the print from the initializing method before I see the print from the retrieval method.
// Initializing method
brightnessLevel = 100
device.WhatIsTheBrightnessLevel(level: &brightnessLevel)
print("The brightness level is \(brightnessLevel)")
// method with the data retrieval code
func WhatIsTheBrightnessLevel(level brightness: inout Int) -> CResults {
    var brightness: Int
    var characteristic: HMCharacteristic
    var name: String
    var results: CResults
    var timeout: DispatchTime
    var timeoutResult: DispatchTimeoutResult

    // Refresh the value by querying the lightbulb
    name = m_lightBulbName
    characteristic = m_brightnessCharacteristic!
    brightness = 100
    timeout = DispatchTime.now() + .seconds(CLightBulb.READ_VALUE_TIMEOUT)
    timeoutResult = .success
    results = CResults()
    results.SetResult(code: CResults.code.success)

    let dispatchGroup = DispatchGroup()
    DispatchQueue.global(qos: .userInteractive).async {
        //let dispatchGroup = DispatchGroup()
        dispatchGroup.enter()
        characteristic.readValue(completionHandler: { (error) in
            if error != nil {
                results.SetResult(code: CResults.code.homeKitError)
                results.SetHomeKitDescription(text: error!.localizedDescription)
                print("Error in reading the brightness level for \(name): \(error!.localizedDescription)")
            } else {
                brightness = characteristic.value as! Int
                print("CLightBulb: -->Read the brightness level. It is \(brightness) at " + Date().description(with: Locale.current))
            }
            dispatchGroup.leave()
        })

        timeoutResult = dispatchGroup.wait(timeout: timeout)
        if timeoutResult == .timedOut {
            results.SetResult(code: CResults.code.timedOut)
        } else {
            print("CLightBulb: (After wait) The brightness level is \(brightness) at " + Date().description(with: Locale.current))
            self.m_brightnessLevel = brightness
        }
    }
    return results
}
Thank you!
If you're going to wrap an async function with your own function, it's generally best to give your wrapper function a completion handler as well. Notice the call to your completion handler. This is where you'd pass the resulting values (i.e. within the closure):
func getBrightness(characteristic: HMCharacteristic, completion: @escaping (Int?, Error?) -> Void) {
    characteristic.readValue { (error) in
        // Program flows here second
        if error == nil {
            completion(characteristic.value as? Int, nil)
        } else {
            completion(nil, error)
        }
    }
    // Program flows here first
}
Then when you call your function, you just need to make sure that you're handling the results within the completion handler (i.e. closure):
getBrightness(characteristic: characteristic) { (value, error) in
    // Program flows here second
    if error == nil {
        if let value = value {
            print(value)
        }
    } else {
        print("an error occurred: \(error.debugDescription)")
    }
}
// Program flows here first
Always keep in mind that the code after the call will run before the async function completes, so you have to structure your code so that anything that depends on the returned value or error doesn't execute before the completion handler fires.
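Tying this back to the original goal of initializing UI controls with the retrieved settings, a minimal sketch (brightnessSlider is a hypothetical outlet) of doing the UI update inside the completion handler, on the main queue:
getBrightness(characteristic: characteristic) { (value, error) in
    // Runs only after the brightness has actually been read (or the read failed).
    DispatchQueue.main.async {
        if let value = value {
            self.brightnessSlider.value = Float(value)   // hypothetical UI control
        } else {
            print("Could not read brightness: \(error.debugDescription)")
        }
    }
}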

AVMIDIPlayer DLSBankManager::AddBank: Bank load failed

When I use AVMIDIPlayer to play a MusicSequence with only one note message, most of the time it works fine, but sometimes there is no sound and the following is logged:
DLSBankManager::AddBank: Bank load failed
Error Domain=com.apple.coreaudio.avfaudio Code=-10871 "(null)"
It works well on iOS 9, but when I test it on iOS 10 it runs into this issue.
I'm sure that the sf2 sound bank file URL is set properly.
The code is pasted below:
func playAVMIDIPlayerPreview(_ musicSequence: MusicSequence) {
    guard let bankURL = Bundle.main.url(forResource: "FluidR3 GM2-2", withExtension: "sf2") else {
        fatalError("soundbank file not found.")
    }

    var status = OSStatus(noErr)
    var data: Unmanaged<CFData>?
    status = MusicSequenceFileCreateData(musicSequence,
                                         MusicSequenceFileTypeID.midiType,
                                         MusicSequenceFileFlags.eraseFile,
                                         480, &data)
    if status != OSStatus(noErr) {
        print("bad status \(status)")
    }

    if let md = data {
        let midiData = md.takeUnretainedValue() as Data
        do {
            try self.midiPlayerPreview = AVMIDIPlayer(data: midiData, soundBankURL: bankURL)
        } catch let error as NSError {
            print("Error \(error)")
        }
        data?.release()

        self.midiPlayerPreview?.play({ () -> Void in
            self.midiPlayerPreview = nil
            self.musicSequencePreview = nil
        })
    }
}
The error occurs on this line:
try self.midiPlayerPreview = AVMIDIPlayer(data: midiData, soundBankURL: bankURL)
Try setting the global variable errno to 0 (errno = 0) before loading the soundfont with
try self.midiPlayerPreview = AVMIDIPlayer(data: midiData, soundBankURL: bankURL)
We experienced the same issue, and another one at the same time. We tried applying the fix for the other issue to this one, and it just worked.
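In the context of the question's code, a minimal sketch of that workaround (errno is the standard C global, available in Swift through Darwin/Foundation; the rest follows the question):
// Reset errno before constructing the player, per the workaround above.
errno = 0
do {
    try self.midiPlayerPreview = AVMIDIPlayer(data: midiData, soundBankURL: bankURL)
} catch {
    print("Error \(error)")
}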