I am trying to record the screen with audio on macOS using AVFoundation. When I record video only, it works perfectly. But when I add an audio input and append its sample buffers to the AVAssetWriterInput, the asset writer's status changes to .failed.
if let sampleBuffer = sampleBuffer {
    if CMSampleBufferDataIsReady(sampleBuffer) {
        if assetWriter.status == .unknown {
            let startTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
            assetWriter.startWriting()
            assetWriter.startSession(atSourceTime: startTime)
        }
        if assetWriter.status == .failed {
            print("writer error \(String(describing: assetWriter.error?.localizedDescription))")
            return false
        }
        if isVideo {
            if videoInputWriter.isReadyForMoreMediaData {
                videoInputWriter.append(sampleBuffer)
                return true
            }
        } else {
            if audioInputWriter.isReadyForMoreMediaData {
                audioInputWriter.append(sampleBuffer)
                return true
            }
        }
    }
}
The error message is
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x600002841320 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}}
Welcome!
I'm guessing, but it seems you are using the same callback for processing the audio and video samples. The problem might be that audio and video samples will be delivered concurrently in different queues (threads), which means that assetWriter.startSession(atSourceTime: startTime) could be accidentally executed multiple times in different threads—which is not allowed.
You need to somehow (atomically) protect that call, for instance by using a separate synchronization queue. Alternatively, you could only start the session with the first video buffer that arrives and ignore any audio buffer that comes before that (which would also prevent accidental black frames at the beginning of the video).
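For illustration, here is a minimal sketch of that idea, assuming the same assetWriter, videoInputWriter and audioInputWriter from your snippet; the serial queue and the sessionStarted flag are my additions, not part of your code:

import AVFoundation

// Sketch only: a serial queue makes the one-time session start atomic,
// and audio buffers that arrive before the first video buffer are dropped.
let writerQueue = DispatchQueue(label: "com.example.assetwriter")   // assumed label
var sessionStarted = false

func write(_ sampleBuffer: CMSampleBuffer, isVideo: Bool) -> Bool {
    return writerQueue.sync { () -> Bool in
        guard CMSampleBufferDataIsReady(sampleBuffer) else { return false }

        if assetWriter.status == .unknown {
            // Start the session exactly once, and only on a video buffer,
            // so the movie timeline begins with the first video frame.
            guard isVideo else { return false }
            assetWriter.startWriting()
            assetWriter.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            sessionStarted = true
        }

        if assetWriter.status == .failed {
            print("writer error \(String(describing: assetWriter.error?.localizedDescription))")
            return false
        }

        let input = isVideo ? videoInputWriter : audioInputWriter
        guard sessionStarted, input.isReadyForMoreMediaData else { return false }
        input.append(sampleBuffer)
        return true
    }
}

Calling this from both the audio and the video capture callbacks gives you a single, consistent ordering for startWriting/startSession and the appends.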
Challenge: I have an IP camera live streaming via RTSP, and I need to consume/play the stream and, at times, record the live stream to MP4 (for saving on iOS). I can easily play the stream directly with MobileVLC, but I would like an FFmpeg session to re-stream the input signal to MobileVLC, so that I can reuse the session for recording the stream to MP4 (as described here: https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs).
However, I can't even get FFmpeg to pipe the stream from the source to MobileVLC.
Streaming the RTSP stream directly with MobileVLC with low latency works and is done via:
func startStream() {
    guard let url = URL(string: "rtsp://xxx.xxx.xxx.xxx:554") else { return }
    mediaPlayer.drawable = vlcView
    mediaPlayer.media = VLCMedia(url: url)
    mediaPlayer.media?.addOptions([
        "rtsp-tcp": 1,
        "tcp-caching": 0,
        "network-caching": 150,
        "live-caching": 150,
        "clock-jitter": 0,
        "adaptive-lowlatency": 0,
        "no-audio": 0,
        "skip-frames": 0,
    ])
    mediaPlayer.play()
}
Below is what I have tried with FFmpeg (I will link to the original Stack Overflow post when I find it again...). Of course, I then replace the URL rtsp://xxx.xxx.xxx.xxx:554 with rtsp://127.0.0.1:1234?pkt_size=1316 in startStream() above.
func asyncCommand() {
    session = FFmpegKit.executeAsync("-re -i rtsp://xxx.xxx.xxx.xxx:554 -f rtsp -rtsp_transport tcp rtsp://127.0.0.1:1234?pkt_size=1316") { session in
        guard let session = session else {
            print("!! Invalid session")
            return
        }
        print(session.getState())
        guard let returnCode = session.getReturnCode() else {
            print("!! Invalid return code")
            return
        }
        print("FFmpeg process exited with state \(FFmpegKitConfig.sessionState(toString: session.getState()) ?? "Unknown") and rc \(returnCode).\(session.getFailStackTrace() ?? "Unknown")")
    } withLogCallback: { logs in
        guard let logs = logs else { return }
        // CALLED WHEN SESSION PRINTS LOGS
    } withStatisticsCallback: { stats in
        guard let stats = stats else { return }
        // CALLED WHEN SESSION GENERATES STATISTICS
    }
}
I am using FFmpeg-kit and MobileVLC. The output I get is this
Loading ffmpeg-kit.
Loaded ffmpeg-kit-full-gpl-x86_64-4.5.1-20220114
[si_destination_compare] send failed: Invalid argument
[si_destination_compare] send failed: Undefined error: 0
What do I need to give FFmpeg as input for it to consume an RTSP stream and pipe it (without any processing) to MobileVLC? And what else am I doing wrong?
Thanks!
We have successfully integrated video DRM logic using AVAssetResourceLoaderDelegate.
It worked well. But lately, after playing the same movie several times in a row, we started getting this error when trying to get the SPC (Server Playback Context).
do {
    let spcData = try loadingRequest.streamingContentKeyRequestData(forApp: certificateData, contentIdentifier: assetIDData, options: [AVAssetResourceLoadingRequestStreamingContentKeyRequestRequiresPersistentKey: true as AnyObject])
    return spcData
} catch {
    print(error)
    return nil
}
Error:
Domain=AVFoundationErrorDomain Code=-11879 "(null)" UserInfo={NSUnderlyingError=0x2805f9980 {Error Domain=NSOSStatusErrorDomain Code=-15841 "(null)"}}
I found that this code -11879 means that the request has been canceled.
I don't know what the second code means.
Why is the device not issuing more SPCs for content?
Maybe it's cached and needs to be updated somehow.
I am trying to use SFSpeechURLRecognitionRequest to transcribe audio files in a terminal application. While I have it working in a preliminary form, I'm running into an odd issue. It appears to only output the voice recognition results (both partial and complete) after the main thread terminates in my test applications. Note I am a Swift noob, so I might be missing something obvious.
Below I have a complete Xcode Playground application which demonstrates the issue. The playground prints Playground execution complete. and only then do I begin receiving partial outputs, followed by the final output. Note that if I add a sleep(5) prior to the print, it waits 5 seconds, then outputs the print, and only after that, once the main thread has concluded, begins processing the text. I have seen similar behavior in a GUI test application, where it only begins processing the text after the method call kicking off the request completes.
I have tried repeatedly checking the state of the task that is returned, sleeping between each check with no luck.
I have also tried calling the recognition task inside a DispatchQueue, which appears to run successfully in the background based on CPU usage, but the Partial and Final prints never appear until the application completes, at which point the console fills up with Partials followed by the Final.
Does anyone know of a way to have the speech recognition begin processing without the application thread completing? Ideally I would like to be able to kick it off and sleep for brief periods repeatedly, checking if the recognition task has completed in between each.
Edited below to match version immediately prior to figuring out the solution.
import Speech

var complete = false

SFSpeechRecognizer.requestAuthorization { authStatus in
    DispatchQueue.main.async {
        if authStatus == .authorized {
            print("Good to go!")
        } else {
            print("Transcription permission was declined.")
            exit(1)
        }
    }
}

guard let myRecognizer = SFSpeechRecognizer() else {
    print("Recognizer not supported for current locale!")
    exit(1)
}

if !myRecognizer.isAvailable {
    // The recognizer is not available right now
    print("Recognizer not available right now")
    exit(1)
}

if !myRecognizer.supportsOnDeviceRecognition {
    print("On device recognition not possible!")
    exit(1)
}

let path_to_wav = NSURL.fileURL(withPath: "/tmp/harvard.wav", isDirectory: false)
let request = SFSpeechURLRecognitionRequest(url: path_to_wav)
request.requiresOnDeviceRecognition = true

print("About to create recognition task...")
myRecognizer.recognitionTask(with: request) { (result, error) in
    guard let result = result else {
        // Recognition failed, so check error for details and handle it
        print("Recognition failed!!!")
        print(error!)
        exit(1)
    }
    if result.isFinal {
        print("Final: \(result.bestTranscription.formattedString)")
        complete = true
    } else {
        print("Partial: \(result.bestTranscription.formattedString)")
    }
}
print("Playground execution complete.")
I figured it out! sleep blocks the thread without running its run loop, so the recognition callbacks never get delivered. Instead, adding the following:
let runLoop = RunLoop.current
let distantFuture = NSDate.distantFuture as NSDate
while complete == false && runLoop.run(mode: RunLoop.Mode.default, before: distantFuture as Date) {}
to the end, just before the last print, makes it work (results begin appearing immediately, and the final print appears right after the final results).
I am trying to save a video in my photo library. But sometimes I get an error: The operation couldn’t be completed. (PHPhotosErrorDomain error -1.)
This is my code:
PHPhotoLibrary.shared().performChanges({
    PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: exporter!.outputURL!)
}) { saved, error in
    if saved {
        print("video saved to camera roll")
    } else {
        print(error?.localizedDescription)
    }
}
I was able to resolve this by removing the AVVideoQualityKey within AVVideoCompressionPropertiesKey in my video output settings for AVAssetWriter.
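For anyone who wants to see roughly what that looks like, here is a minimal sketch of output settings without AVVideoQualityKey; the codec, dimensions and bit rate are placeholder values, not the settings from the question:

import AVFoundation

// Illustrative output settings for the AVAssetWriterInput; the important part
// is that AVVideoCompressionPropertiesKey no longer contains AVVideoQualityKey.
let outputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080,
    AVVideoCompressionPropertiesKey: [
        AVVideoAverageBitRateKey: 6_000_000
        // AVVideoQualityKey intentionally omitted
    ]
]

let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: outputSettings)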
When I send a message, it throws an error with a seemingly random code and no description or anything. Here is my code:
if #available(iOSApplicationExtension 11.0, *) {
    conversation?.send(message, completionHandler: { (error) in
        if error != nil {
            print(error?.localizedDescription)
        }
    })
} else {
    conversation?.insert(message, completionHandler: { (error) in
        if error != nil {
            print(error?.localizedDescription)
        }
    })
}
Error: The operation couldn’t be completed. (com.apple.messages.messagesapp-error error 9.)
It works fine when I use the insert function. Really bugging me haha, get it? Bug-ing... no? ok.
Found the error code on Apple's beta documentation:
https://developer.apple.com/documentation/messages/msmessageerrorcode/2909031-sendwithoutrecentinteraction
Apparently the app requires recent user interaction before sending (which my app had), but for some reason it still triggers the error.
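If it helps with debugging, the raw code can be compared against MSMessageErrorCode instead of the bare number, assuming that domain's codes map onto the enum (which the documentation link above suggests); the handler below is illustrative, not taken from the question:

import Messages

// Sketch: log whether a failed send corresponds to sendWithoutRecentInteraction.
conversation?.send(message) { error in
    guard let error = error else { return }
    let nsError = error as NSError
    if nsError.code == MSMessageErrorCode.sendWithoutRecentInteraction.rawValue {
        print("Send blocked: no recent user interaction with the extension")
    } else {
        print("Messages error code \(nsError.code) in domain \(nsError.domain)")
    }
}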