AVAssetWriter queue guidance (Swift 3)

Can anyone give me some guidance on using queues in AVFoundation, please?
Later on, in my app, I want to do some processing on individual frames so I need to use AVCaptureVideoDataOutput.
To get started I thought I'd capture images and then write them (unprocessed) using AVAssetWriter.
I am successfully streaming frames from the camera to image preview by setting up an AVCaptureSession as follows:
func initializeCameraAndMicrophone() {
    // Set up the capture session
    captureSession = AVCaptureSession()
    captureSession.sessionPreset = AVCaptureSessionPreset1280x720

    // Set up the camera
    let camera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
    do {
        let cameraInput = try AVCaptureDeviceInput(device: camera)
        if captureSession.canAddInput(cameraInput) {
            captureSession.addInput(cameraInput)
        }
    } catch {
        print("Error setting device camera input: \(error)")
        return
    }

    videoOutputStream.setSampleBufferDelegate(self, queue: DispatchQueue(label: "sampleBuffer", attributes: []))
    if captureSession.canAddOutput(videoOutputStream) {
        captureSession.addOutput(videoOutputStream)
    }

    captureSession.startRunning()
}
Each new frame then triggers the captureOutput delegate:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)
    let bufferImage = UIImage(ciImage: cameraImage)

    DispatchQueue.main.async {
        // Send the captured frame to the videoPreview
        self.videoPreview.image = bufferImage

        // If recording is active, append bufferImage to the video
        while (recordingNow == true) {
            print("OK we're recording!")
            // Append images to video
            while (writerInput.isReadyForMoreMediaData) {
                let lastFrameTime = CMTimeMake(Int64(frameCount), videoFPS)
                let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)
                pixelBufferAdaptor.append(pixelBuffer!, withPresentationTime: presentationTime)
                frameCount += 1
            }
        }
    }
}
So this streams frames to the image preview perfectly until I press the record button, which calls the startVideoRecording function (which sets up the AVAssetWriter). From that point on the delegate never gets called again!
AVAssetWriter is being set up like this:
func startVideoRecording() {
    guard let assetWriter = createAssetWriter(path: filePath!, size: videoSize) else {
        print("Error converting images to video: AVAssetWriter not created")
        return
    }

    // AVAssetWriter exists, so create an AVAssetWriterInputPixelBufferAdaptor
    let writerInput = assetWriter.inputs.filter { $0.mediaType == AVMediaTypeVideo }.first!
    let sourceBufferAttributes: [String : AnyObject] = [
        kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32ARGB) as AnyObject,
        kCVPixelBufferWidthKey as String : videoSize.width as AnyObject,
        kCVPixelBufferHeightKey as String : videoSize.height as AnyObject,
    ]
    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: sourceBufferAttributes)

    // Start the writing session
    assetWriter.startWriting()
    assetWriter.startSession(atSourceTime: kCMTimeZero)
    if (pixelBufferAdaptor.pixelBufferPool == nil) {
        print("Error converting images to video: pixelBufferPool nil after starting session")
        assetWriter.finishWriting {
            print("assetWriter stopped!")
        }
        recordingNow = false
        return
    }

    frameCount = 0
    print("Recording started!")
}
I'm new to AVFoundation but I suspect I'm screwing up my queues somewhere.

You have to use a separate serial queue for capturing video/audio.
Add this queue property to your class:
let captureSessionQueue: DispatchQueue = DispatchQueue(label: "sampleBuffer", attributes: [])
Start the session on captureSessionQueue, as the Apple docs recommend: "The startRunning() method is a blocking call which can take some time, therefore you should perform session setup on a serial queue so that the main queue isn't blocked (which keeps the UI responsive)."
captureSessionQueue.async {
    captureSession.startRunning()
}
Set this queue as the sample buffer delegate queue of your capture output:
videoOutputStream.setSampleBufferDelegate(self, queue: captureSessionQueue)
Call startVideoRecording inside captureSessionQueue:
captureSessionQueue.async {
    startVideoRecording()
}
In the captureOutput delegate method, put all AVFoundation method calls inside captureSessionQueue.async:
DispatchQueue.main.async {
    // Send the captured frame to the videoPreview
    self.videoPreview.image = bufferImage

    captureSessionQueue.async {
        // If recording is active, append bufferImage to the video
        while (recordingNow == true) {
            print("OK we're recording!")
            // Append images to video
            while (writerInput.isReadyForMoreMediaData) {
                let lastFrameTime = CMTimeMake(Int64(frameCount), videoFPS)
                let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)
                pixelBufferAdaptor.append(pixelBuffer!, withPresentationTime: presentationTime)
                frameCount += 1
            }
        }
    }
}
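Putting the answer's pieces together, here is a minimal sketch of the queue wiring only (names such as videoOutputStream, startVideoRecording() and the record-button action follow the question's code and are assumptions about the surrounding class):

// One serial queue owns session start-up, sample buffer delivery and writer setup.
let captureSessionQueue = DispatchQueue(label: "sampleBuffer", attributes: [])

func initializeCameraAndMicrophone() {
    captureSession = AVCaptureSession()
    captureSession.sessionPreset = AVCaptureSessionPreset1280x720
    // ... add the camera input exactly as in the question ...

    // Deliver sample buffers on the same serial queue.
    videoOutputStream.setSampleBufferDelegate(self, queue: captureSessionQueue)
    if captureSession.canAddOutput(videoOutputStream) {
        captureSession.addOutput(videoOutputStream)
    }

    // startRunning() is blocking, so keep it off the main queue.
    captureSessionQueue.async {
        self.captureSession.startRunning()
    }
}

@IBAction func recordButtonTapped() {
    // Set up the AVAssetWriter on the capture queue too,
    // so it never races the delegate callbacks.
    captureSessionQueue.async {
        self.startVideoRecording()
    }
}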

Related

Change BPM in real time with AVAudioEngine using Swift

Hello, I am trying to implement a simple audio app using AVAudioEngine that plays a short WAV file in a loop at some BPM, which can be changed in real time (with a slider or something).
Current solution logic:
set bpm=60
create audioFile from sample.wav
calculate bufferSize: AVAudioFrameCount(audioFile.processingFormat.sampleRate * 60 / Double(bpm))
set bufferSize to audioBuffer
load file audioFile into audioBuffer.
schedule audioBuffer to play
This solution works, but the issue is that if I want to change the BPM I need to recreate the buffer with a different bufferSize, so it will not happen in real time, since I have to stop the player and reschedule a buffer with the new size.
Any thoughts on how this can be done?
Thanks in advance!
Code (main part):
var bpm: Float = 30
let engine = AVAudioEngine()
var player = AVAudioPlayerNode()
var audioBuffer: AVAudioPCMBuffer?
var audioFile: AVAudioFile?

override func viewDidLoad() {
    super.viewDidLoad()
    audioFile = loadfile(from: "sound.wav")
    audioBuffer = tickBuffer(audioFile: audioFile!)
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: audioFile?.processingFormat)
    do {
        engine.prepare()
        try engine.start()
    } catch {
        print(error)
    }
}

private func loadfile(from fileName: String) -> AVAudioFile? {
    let path = Bundle.main.path(forResource: fileName, ofType: nil)!
    let url = URL(fileURLWithPath: path)
    do {
        let audioFile = try AVAudioFile(forReading: url)
        return audioFile
    } catch {
        print("Error loading buffer1 \(error)")
    }
    return nil
}

func tickBuffer(audioFile: AVAudioFile) -> AVAudioPCMBuffer {
    let periodLength = AVAudioFrameCount(audioFile.processingFormat.sampleRate * 60 / Double(bpm))
    let buffer = AVAudioPCMBuffer(pcmFormat: audioFile.processingFormat, frameCapacity: periodLength)!
    try! audioFile.read(into: buffer)
    buffer.frameLength = periodLength
    return buffer
}

func play() {
    player.scheduleBuffer(audioBuffer!, at: nil, options: .loops, completionHandler: nil)
    player.play()
}

func stop() {
    player.stop()
}
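One direction that avoids rebuilding the looped buffer (a rough sketch, not a drop-in answer: it reuses player, audioFile and bpm from the code above, while scheduleNextTick, startTicking and nextBeatSampleTime are illustrative names I'm introducing): schedule each tick as its own short buffer at an explicit sample time, and recompute the gap from the current bpm on every tick, so a slider can change the tempo between ticks.

// Sketch: one short buffer holds just the tick sound; the beat spacing comes from
// the scheduled sample times, which are recomputed from the *current* bpm each tick.
var nextBeatSampleTime: AVAudioFramePosition = 0

func scheduleNextTick() {
    guard let file = audioFile else { return }
    let sampleRate = file.processingFormat.sampleRate

    // Plain buffer of the file, no padding to the beat period.
    let tick = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                frameCapacity: AVAudioFrameCount(file.length))!
    file.framePosition = 0
    try? file.read(into: tick)

    let when = AVAudioTime(sampleTime: nextBeatSampleTime, atRate: sampleRate)
    player.scheduleBuffer(tick, at: when, options: []) { [weak self] in
        guard let self = self else { return }
        // Read bpm again here: if the slider has moved, the next gap changes.
        self.nextBeatSampleTime += AVAudioFramePosition(sampleRate * 60.0 / Double(self.bpm))
        self.scheduleNextTick()
    }
}

func startTicking() {
    nextBeatSampleTime = 0
    scheduleNextTick()
    player.play()
}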

How can I add a display-only CIFilter on live video capture in Swift?

I am working on a feature where I need to create a video recorder. I want to display a CIFilter on the live capture. Yes, it is possible to add a filter to the live capture and save it using this code:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    connection.videoOrientation = orientation
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main)

    let comicEffect = CIFilter(name: "CISepiaTone")
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(cvImageBuffer: pixelBuffer!)
    comicEffect!.setValue(cameraImage, forKey: kCIInputImageKey)
    let cgImage = self.context.createCGImage(comicEffect!.outputImage!, from: cameraImage.extent)!

    DispatchQueue.main.async {
        let filteredImage = UIImage(cgImage: cgImage)
        self.filteredImage.image = filteredImage
    }
}
But my requirement is different. I want to display the filter only during live capture: the applied filter should be shown to the user at capture time, but the video should be saved without any filter. I am able to capture video using AVFoundation like this:
let captureSession = AVCaptureSession()
let movieOutput = AVCaptureMovieFileOutput()
var previewLayer: AVCaptureVideoPreviewLayer!
var activeInput: AVCaptureDeviceInput!

// Setup
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer.frame = camPreview.bounds
previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
camPreview.layer.addSublayer(previewLayer)

// MARK: - Camera Session
func startSession() {
    if !captureSession.isRunning {
        videoQueue().async {
            self.captureSession.startRunning()
        }
    }
}

func stopSession() {
    if captureSession.isRunning {
        videoQueue().async {
            self.captureSession.stopRunning()
        }
    }
}

func videoQueue() -> DispatchQueue {
    return DispatchQueue.main
}

/// Get output after stop recording
func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
    if (error != nil) {
        print("Error recording movie: \(error!.localizedDescription)")
    } else {
        let videoRecorded = outputURL! as URL
        DispatchQueue.main.async {
            let savedVDOAsset = AVAsset(url: videoRecorded)
            let playerController = AVPlayerViewController()
            let playerItem = AVPlayerItem(asset: savedVDOAsset)
            let player = AVPlayer(playerItem: playerItem)
            playerController.player = player
            self.present(playerController, animated: true, completion: {
                playerController.player!.play()
            })
        }
    }
}
I mean, I can do both things (capture a video, and display a filter on the video) separately, but I can't find how to do both at the same time.
Again: I don't want to save the video with filters. The filter is only for display, while at the same time the video is captured and saved unfiltered.
How can I do this? Please suggest something.
I'm not an expert with video capture, but I'll try to answer anyway.
To display the live video with a filter, you should render the output of the CIFilter in an MTKView. In short, you create a CIImage with a filter applied and ask the MTKView to draw itself using that image each time captureOutput is called (every frame).
You can find how to stream the live camera feed into an MTKView in this tutorial: https://betterprogramming.pub/using-cifilters-metal-to-make-a-custom-camera-in-ios-c76134993316
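As a rough sketch of that idea (assuming an MTKView named mtkView configured with framebufferOnly = false, a CIContext and MTLCommandQueue created from the same Metal device, and the unfiltered sample buffers still going to your movie output or asset writer untouched; these names are mine, not from the tutorial):

// In the capture delegate: build the filtered CIImage and ask the view to redraw.
// (mtkView.isPaused = true and mtkView.enableSetNeedsDisplay = false, so draw()
// is driven by the camera frames.)
var currentCIImage: CIImage?

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let cameraImage = CIImage(cvImageBuffer: pixelBuffer)
    let sepia = CIFilter(name: "CISepiaTone")!
    sepia.setValue(cameraImage, forKey: kCIInputImageKey)
    currentCIImage = sepia.outputImage        // only the preview sees the filter
    mtkView.draw()
}

// MTKViewDelegate: render the filtered CIImage into the drawable's texture.
func draw(in view: MTKView) {
    guard let image = currentCIImage,
          let drawable = view.currentDrawable,
          let commandBuffer = commandQueue.makeCommandBuffer() else { return }
    // Note: no scaling is done here; see the caveat below about fitting the
    // frame into the drawable.
    ciContext.render(image,
                     to: drawable.texture,
                     commandBuffer: commandBuffer,
                     bounds: CGRect(origin: .zero, size: view.drawableSize),
                     colorSpace: CGColorSpaceCreateDeviceRGB())
    commandBuffer.present(drawable)
    commandBuffer.commit()
}

func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) { }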
Be aware that there could be some issues with scaling your frames into the MTKView drawable; in that case, refer to this thread, where you can also find some sample code taken from a project in which I do exactly what I just described.

AVFoundation PDF417 scanner doesn't always work

I am creating an app using Swift 4 and Xcode 9 that scans PDF417 barcodes using AVFoundation. The scanner works with some codes but doesn't recognize the PDF417 barcode that you would find on the front of a CA Lottery scratchers ticket for example.
Is there anything I am missing to make it work? Below is my code:
let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInDualCamera], mediaType: AVMediaType.video, position: .back)

guard let captureDevice = deviceDiscoverySession.devices.first else {
    print("Failed to get the camera device")
    return
}

do {
    captureSession = AVCaptureSession()
    let input = try AVCaptureDeviceInput(device: captureDevice)
    captureSession!.addInput(input)

    let captureMetadataOutput = AVCaptureMetadataOutput()
    captureSession!.addOutput(captureMetadataOutput)
    captureMetadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
    captureMetadataOutput.metadataObjectTypes = [AVMetadataObject.ObjectType.pdf417]

    videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
    videoPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
    videoPreviewLayer?.frame = view.layer.bounds
    view.layer.addSublayer(videoPreviewLayer!)

    captureSession?.startRunning()
} catch {
    print(error)
    return
}

func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
    // Get the metadata object
    let metadataObj = metadataObjects[0] as! AVMetadataMachineReadableCodeObject
    if scanType.contains(metadataObj.type) {
        let barCodeObj = videoPreviewLayer?.transformedMetadataObject(for: metadataObj)
        if (metadataObj.stringValue != nil) {
            callDelegate(metadataObj.stringValue)
            captureSession?.stopRunning()
            AudioServicesPlayAlertSound(SystemSoundID(kSystemSoundID_Vibrate))
            navigationController?.popViewController(animated: true)
        }
    }
}
Thanks!
Replace your initialization code for the scanner with the following, either in viewDidLoad or in whichever method you'd like it to live:
// Global vars used in the setup below
var captureSession: AVCaptureSession!
var previewLayer: AVCaptureVideoPreviewLayer!

func setupCaptureInputDevice() {
    let cameraMediaType = AVMediaType.video
    captureSession = AVCaptureSession()

    // Get the video capture device, which should be of type video
    guard let videoCaptureDevice = AVCaptureDevice.default(for: .video) else {
        // If there is an error then something is wrong, so dismiss
        dismiss(animated: true, completion: nil)
        return
    }

    let videoInput: AVCaptureDeviceInput

    // Create a capture input for the device created above
    do {
        videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
    } catch {
        return
    }

    // It is important to check whether the input can be added,
    // because adding it without checking could cause a crash
    if (captureSession.canAddInput(videoInput)) {
        captureSession.addInput(videoInput)
    } else {
        // Dismiss or display an error
        return
    }

    // Get ready to capture output somewhere
    let metadataOutput = AVCaptureMetadataOutput()

    // Again, check that the operation is possible before doing it
    if (captureSession.canAddOutput(metadataOutput)) {
        captureSession.addOutput(metadataOutput)

        // Set the metadataOutput's delegate to self and run callbacks on the main queue
        metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)

        // Specify your code type
        metadataOutput.metadataObjectTypes = [.pdf417]
    } else {
        // Dismiss or display an error
        return
    }

    // Create a preview layer for the capture session
    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)

    // Add it to the screen
    previewLayer.frame = view.layer.bounds
    previewLayer.videoGravity = .resizeAspectFill
    view.layer.addSublayer(previewLayer)

    // And begin capturing
    captureSession.startRunning()
}
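If dense PDF417 symbols like the long codes on lottery tickets are still missed with the code above, one additional thing worth experimenting with (an assumption to test, not something the answer above claims) is giving the metadata output more pixels to work with via a higher-resolution session preset:

// Worth trying for dense PDF417 codes: a higher-resolution preset.
if captureSession.canSetSessionPreset(.hd1920x1080) {
    captureSession.sessionPreset = .hd1920x1080
}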

CI Face Detector stopping audio from recording with AVAssetWriter

I want to detect a face in real-time video using Apple's CI face detector (CIDetector), and then record the video to a file using AVAssetWriter.
I thought I had it working, but the audio is temperamental. Sometimes it records properly with the video, other times it starts recording but then goes mute, other times it's out of sync with the video, and sometimes it doesn't work at all.
With a print statement I can see that the audio sample buffers are arriving. It must have something to do with the face detection, because when I comment out that code the recording works fine.
Here's my code:
// MARK: AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    let writable = canWrite()
    if writable {
        print("Writable")
    }

    if writable,
       sessionAtSourceTime == nil {
        // Start writing
        sessionAtSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        videoWriter.startSession(atSourceTime: sessionAtSourceTime!)
        print("session started")
    }

    // Processing on the images, not audio
    if output == videoDataOutput {
        connection.videoOrientation = .portrait
        if connection.isVideoMirroringSupported {
            connection.isVideoMirrored = true
        }

        // Convert the current frame to a CIImage
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        let attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, pixelBuffer!, CMAttachmentMode(kCMAttachmentMode_ShouldPropagate)) as? [String: Any]
        let ciImage = CIImage(cvImageBuffer: pixelBuffer!, options: attachments)

        // Detect faces in the CIImage
        let features = faceDetector?.features(in: ciImage, options: [CIDetectorSmile: true,
                                                                     CIDetectorEyeBlink: true]).compactMap({ $0 as? CIFaceFeature })

        // Retrieve the frame of the buffer
        let desc = CMSampleBufferGetFormatDescription(sampleBuffer)
        let bufferFrame = CMVideoFormatDescriptionGetCleanAperture(desc!, false)

        // Draw face masks
        DispatchQueue.main.async { [weak self] in
            UIView.animate(withDuration: 0.2) {
                self?.drawFaceMasksFor(features: features!, bufferFrame: bufferFrame)
            }
        }
    }

    if writable,
       output == videoDataOutput,
       videoWriterInput.isReadyForMoreMediaData {
        // Write the video buffer
        videoWriterInput.append(sampleBuffer)
        print("video buffering")
    } else if writable,
              output == audioDataOutput,
              audioWriterInput.isReadyForMoreMediaData {
        // Write the audio buffer
        audioWriterInput?.append(sampleBuffer)
        print("audio buffering")
    }
}
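The symptoms are consistent with the face-detection work making this callback too slow, so that audio buffers get delayed or dropped before they reach the writer. One way to decouple the two is sketched below; it reuses the names from the code above, while detectionQueue and isDetecting are illustrative additions of mine (the flag would need proper synchronization in real code):

// Sketch: append the buffers first, then hand the frame to a separate queue for
// CIDetector work, skipping frames while a detection is still in flight, so the
// capture callback returns quickly and audio callbacks are not starved.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    let writable = canWrite()
    if writable, sessionAtSourceTime == nil {
        sessionAtSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        videoWriter.startSession(atSourceTime: sessionAtSourceTime!)
    }

    // 1. Write first, while the buffer is fresh.
    if writable, output == videoDataOutput, videoWriterInput.isReadyForMoreMediaData {
        videoWriterInput.append(sampleBuffer)
    } else if writable, output == audioDataOutput, audioWriterInput.isReadyForMoreMediaData {
        audioWriterInput.append(sampleBuffer)
    }

    // 2. Then do the (slow) face detection off this callback, at most one at a time.
    if output == videoDataOutput, !isDetecting,
       let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        isDetecting = true
        let ciImage = CIImage(cvImageBuffer: pixelBuffer)
        detectionQueue.async { [weak self] in
            let features = self?.faceDetector?.features(in: ciImage)
            DispatchQueue.main.async {
                // Update the face overlays with `features` here.
            }
            self?.isDetecting = false
        }
    }
}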

AVAssetWriter error: Cannot append media data after ending session

This error occurs when capturing video with AVAssetWriter. However, after calling AVAssetWriter's finishWriting inside of endVideoCapture, there isn't another call to start writing again, so why is this occurring?
As you can see in the delegate function, captureOutput, we check the recording state before trying to append to the asset writer. The recording state is set to false in endVideoCapture.
Optional(Error Domain=AVFoundationErrorDomain Code=-11862 "Cannot append media data after ending session"
UserInfo={NSLocalizedFailureReason=The application encountered a programming error.,
NSLocalizedDescription=The operation is not allowed, NSDebugDesc
func startVideoCapture() {
    // Get capture resolution
    let resolution = getCaptureResolution()

    // Return if capture resolution not set
    if resolution.width == 0 || resolution.height == 0 {
        printError("Error starting capture because resolution invalid")
        return
    }

    // If here, start capture
    assetWriter = createAssetWriter(Int(resolution.width), outputHeight: Int(resolution.height))
    let recordingClock = captureSession.masterClock
    assetWriter!.startWriting()
    assetWriter!.startSession(atSourceTime: CMClockGetTime(recordingClock!))

    // Update time stamp
    startTime = CACurrentMediaTime()

    // Update <recording> flag & notify delegate
    recording = true
    delegate?.cameraDidStartVideoCapture()
}

func createAssetWriter(_ outputWidth: Int, outputHeight: Int) -> AVAssetWriter? {
    // Update <outputURL> with temp file to hold video
    let tempPath = gFile.getUniqueTempPath(gFile.MP4File)
    outputURL = URL(fileURLWithPath: tempPath)

    // Return new asset writer or nil
    do {
        // Create asset writer
        let newWriter = try AVAssetWriter(outputURL: outputURL, fileType: AVFileTypeMPEG4)

        // Define video settings
        let videoSettings: [String : AnyObject] = [
            AVVideoCodecKey  : AVVideoCodecH264 as AnyObject,
            AVVideoWidthKey  : outputWidth as AnyObject,
            AVVideoHeightKey : outputHeight as AnyObject,
        ]

        // Add video input to writer
        assetWriterVideoInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
        assetWriterVideoInput!.expectsMediaDataInRealTime = true
        newWriter.add(assetWriterVideoInput!)

        // Define audio settings
        let audioSettings: [String : AnyObject] = [
            AVFormatIDKey         : NSInteger(kAudioFormatMPEG4AAC) as AnyObject,
            AVNumberOfChannelsKey : 2 as AnyObject,
            AVSampleRateKey       : NSNumber(value: 44100.0 as Double)
        ]

        // Add audio input to writer
        assetWriterAudioInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings)
        assetWriterAudioInput!.expectsMediaDataInRealTime = true
        newWriter.add(assetWriterAudioInput!)

        // Return writer
        print("Created asset writer for \(outputWidth)x\(outputHeight) video")
        return newWriter
    } catch {
        printError("Error creating asset writer: \(error)")
        return nil
    }
}

func endVideoCapture() {
    // Update flag to stop data capture
    recording = false

    // Return if asset writer undefined
    if assetWriter == nil {
        return
    }

    // If here, end capture
    // -- Mark inputs as done
    assetWriterVideoInput!.markAsFinished()
    assetWriterAudioInput!.markAsFinished()

    // -- Finish writing
    assetWriter!.finishWriting() {
        self.assetWriterDidFinish()
    }
}

func assetWriterDidFinish() {
    print("Asset writer finished with status: \(getAssetWriterStatus())")

    // Return early on error & tell delegate
    if assetWriter!.error != nil {
        printError("Error finishing asset writer: \(assetWriter!.error)")
        delegate?.panabeeCameraDidEndVideoCapture(videoURL: nil, videoDur: 0, error: assetWriter!.error)
        logEvent("Asset Writer Finish Error", userData: ["Error" : assetWriter!.error.debugDescription])
        return
    }

    // If here, no error, so extract video properties & tell delegate
    let videoAsset = AVURLAsset(url: outputURL, options: nil)
    let videoDur = CMTimeGetSeconds(videoAsset.duration)
    let videoTrack = videoAsset.tracks(withMediaType: AVMediaTypeVideo)[0]
    print("Camera created video. Duration: \(videoDur). Size: \(videoTrack.naturalSize). Transform: \(videoTrack.preferredTransform). URL: \(outputURL).")

    // Tell delegate
    delegate?.cameraDidEndVideoCapture(videoURL: outputURL.path, videoDur: videoDur, error: assetWriter!.error)

    // Reset <assetWriter> to nil
    assetWriter = nil
}

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    // Return if not recording
    if !recording {
        return
    }

    // If here, capture data
    // Write video data?
    if captureOutput == videoOutput && assetWriterVideoInput!.isReadyForMoreMediaData {
        assetWriterVideoQueue!.async {
            self.assetWriterVideoInput!.append(sampleBuffer)
        }
    }

    // No, write audio data?
    if captureOutput == audioOutput && assetWriterAudioInput!.isReadyForMoreMediaData {
        assetWriterAudioQueue!.async {
            self.assetWriterAudioInput!.append(sampleBuffer)
        }
    }
}
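The question's own description points at a likely race: recording is checked on the capture callback's queue, but the append itself runs later on assetWriterVideoQueue / assetWriterAudioQueue, so a buffer that passed the check while recording was still true can execute its append after endVideoCapture has already called markAsFinished and finishWriting. A minimal sketch of one way to close that window, reusing the names above (an illustration of the idea, not a confirmed fix):

// Re-check state on the writer queue itself, and finish writing from those same
// queues so every already-enqueued append runs before the inputs are finished.
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    if captureOutput == videoOutput {
        assetWriterVideoQueue!.async {
            // recording may have been turned off between the callback and this block.
            guard self.recording, self.assetWriterVideoInput!.isReadyForMoreMediaData else { return }
            self.assetWriterVideoInput!.append(sampleBuffer)
        }
    } else if captureOutput == audioOutput {
        assetWriterAudioQueue!.async {
            guard self.recording, self.assetWriterAudioInput!.isReadyForMoreMediaData else { return }
            self.assetWriterAudioInput!.append(sampleBuffer)
        }
    }
}

func endVideoCapture() {
    recording = false
    guard let writer = assetWriter else { return }
    // Hop through both writer queues so any appends already enqueued have run
    // before the inputs are marked finished and the session is ended.
    assetWriterVideoQueue!.async {
        self.assetWriterAudioQueue!.async {
            self.assetWriterVideoInput!.markAsFinished()
            self.assetWriterAudioInput!.markAsFinished()
            writer.finishWriting {
                self.assetWriterDidFinish()
            }
        }
    }
}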