Refreshing AVCaptureSession...? - swift

I am having some difficulty with AVCaptureSession when popping view controllers. I have a view controller in a navigation controller where a user takes a photo. After the photo is captured, I segue to a "preview photo" view controller. If the user doesn't like the photo, they can go back and retake it. When I pop the preview photo view controller, the app crashes with the error "Multiple audio/video AVCaptureInputs are not currently supported".
I thought that maybe I could remove or refresh the input session, but it's still crashing.
Any support/advice is greatly appreciated!
The cancel action that pops back:
@IBAction func cancelPressed(_ sender: UIButton) {
    _ = self.navigationController?.popViewController(animated: true)
}
camera config (which works fine):
func setupCaptureSessionCamera() {
    // this makes sure to get full res of camera
    captureSession.sessionPreset = AVCaptureSession.Preset.photo
    // query available devices
    let devices = AVCaptureDevice.devices(for: .video)
    for device in devices {
        if device.position == .front {
            frontFacingCamera = device
        } else if device.position == .back {
            backFacingCamera = device
        }
    } // end iteration
    // set a default device
    currentDevice = backFacingCamera
    // configure session w/ output for capturing still img
    stillImageOutput = AVCaptureStillImageOutput()
    stillImageOutput?.outputSettings = [AVVideoCodecKey: AVVideoCodecType.jpeg]
    do {
        let captureDeviceInput = try AVCaptureDeviceInput(device: currentDevice!)
        captureSession.addInput(captureDeviceInput)
        captureSession.addOutput(stillImageOutput!)
        // setup camera preview layer
        cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        // add the preview to our specified view in the UI
        view.layer.addSublayer(cameraPreviewLayer!)
        cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
        cameraPreviewLayer?.frame = cameraView.frame
        captureSession.startRunning()
    } catch let error {
        print(error)
    } // end do
}
What I tried (removing the inputs in viewWillAppear when the sender is the preview photo controller):
func refreshCamera() {
    captureSession.beginConfiguration()
    for input in captureSession.inputs {
        captureSession.removeInput(input) // removeInput takes AVCaptureInput, so no force cast is needed
    }
    captureSession.commitConfiguration()
}

It was much simpler than I was imagining. All that is needed is to first check whether there is already an input before calling the setupCaptureSessionCamera method:
if captureSession.inputs.isEmpty {
    setupCaptureSessionCamera()
}
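In other words, guard the setup call wherever it runs when the screen appears. A minimal sketch, assuming setupCaptureSessionCamera() was previously called unconditionally every time the view appeared (the override below is illustrative, not from the original post):

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // Only configure once: popping back from the preview controller would
    // otherwise add a second AVCaptureDeviceInput to the running session.
    if captureSession.inputs.isEmpty {
        setupCaptureSessionCamera()
    }
}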

Related

Memory leak when displaying a modal view and dismissing it

When an AVAssetExportSession is finished exporting, I have my app display a modal view showing the video and an array of images. Dismissing the modal view and making it display again, over and over, shows a memory increase that continuously grows. I'm suspicious of a strong reference cycle that could be occurring.
I'm setting required variables on the modal view (manageCaptureVC). fileURL is a global variable that manageCaptureVC can read from to get the video. The video is removed based on that URL when the modal view is dismissed. The leak is larger depending on the size of the media that is captured and displayed in the modal view.
I have used the Leaks instrument. Unfortunately, it never points to any of my functions; it shows memory addresses that display assembly language. I am also testing on a device. The Leaks instrument flags leaks at the point where I display and dismiss the view.
Is there anything obvious that could cause a leak in my case?
Presenting the modal view (manageCaptureVC)
// video done exporting
guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) else { return }
exporter.outputURL = mainVideoURL
exporter.outputFileType = AVFileType.mov
let manageCaptureVC = self.storyboard?.instantiateViewController(withIdentifier: "ManageCaptureVC") as! ManageCaptureVC
exporter.exportAsynchronously(completionHandler: { [weak self] () -> Void in
    let fileManagement = FileManagement()
    fileManagement.checkForAndDeleteExportFile() // delete export file
    self?.myTimer.invalidate()
    fileURL = mainVideoURL
    guard let imgCaptureModeRawVal = self?.imageCaptureMode.rawValue else { return }
    manageCaptureVC.imageCaptureMode = ManageCaptureVC.imageCaptureModes(rawValue: imgCaptureModeRawVal)!
    manageCaptureVC.delegate = self
    DispatchQueue.main.async {
        manageCaptureVC.modalPresentationStyle = .fullScreen
        self?.present(manageCaptureVC, animated: true, completion: nil)
    }
})
Dismissing the view:
func goBackTask() {
    // turn off manage capture tutorial if needed
    if debug_ManageCaptureTutorialModeOn {
        debug_ManageCaptureTutorialModeOn = false
        delegate?.resetFiltersToPrime()
    }
    // no longer ignore interface orientation
    ignoreSelectedInterfaceOrientation = false
    // remove observer for the application becoming active in this view
    NotificationCenter.default.removeObserver(self,
                                              name: UIApplication.didBecomeActiveNotification,
                                              object: nil)
    if let videoEndedObs = self.videoEndedObserver {
        NotificationCenter.default.removeObserver(videoEndedObs)
    }
    // invalidate thumb timer
    thumbColorTimer.invalidate()
    // empty UIImages
    uiImages.removeAll()
    // delete video
    let fileManagement = FileManagement()
    fileManagement.checkForAndDeleteFile()
    let group = DispatchGroup()
    group.enter()
    DispatchQueue.main.async {
        self.enableButtons(enabled: false)
        if let p = self.player, let pl = self.playerLayer {
            p.pause()
            pl.removeObserver(self, forKeyPath: "videoRect")
            pl.removeFromSuperlayer()
            p.replaceCurrentItem(with: nil)
        }
        group.leave()
    }
    let group2 = DispatchGroup()
    group.notify(queue: .main) {
        group2.enter()
        DispatchQueue.main.async {
            self.enableButtons(enabled: true)
            group2.leave()
        }
    }
    group2.notify(queue: .main) {
        self.dismiss(animated: true)
    }
}
I came across this problem as well; it took me days to track down.
Setting modalPresentationStyle to .fullScreen resulted in the view controller not being released. I was able to reproduce this with a trivially simple example.
I got around it by setting modalPresentationStyle to .currentContext.
None of the Instruments identified this retain cycle, I guess because it was in low-level Apple code.
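If you want to confirm the same diagnosis in your own project, a deinit log is a quick check. This sketch is illustrative (ManageCaptureVC and the storyboard identifier come from the question; the deinit log is a debugging aid, not part of the original answer):

// Hedged sketch: a deinit log confirms whether the controller is released.
class ManageCaptureVC: UIViewController {
    deinit {
        print("ManageCaptureVC deallocated") // should print after each dismissal
    }
}

// ...and the workaround when presenting:
let manageCaptureVC = storyboard?.instantiateViewController(withIdentifier: "ManageCaptureVC") as! ManageCaptureVC
manageCaptureVC.modalPresentationStyle = .currentContext // instead of .fullScreen
present(manageCaptureVC, animated: true)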

Swift - Recorded Video is Mirrored on Front Camera - How to flip?

I'm trying to mirror the recorded video from a capture session. The video preview for the front-facing camera shows a mirrored version; however, when I go to save the file and play it back, the captured video is actually mirrored. I'm using Apple's AVCam demo as a reference and can't seem to figure this out. Please help!
I've tried creating an AVCaptureConnection and setting its .isVideoMirrored property. However, I get this error:
"cannot be added to the session because the source and destination media types are incompatible"
I would have thought mirroring the video would be much easier, so I think I may be creating my connection incorrectly. In the code below, the .canAddConnection check fails, so the connection is never actually added.
var captureSession: AVCaptureSession!

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    captureSession = AVCaptureSession()
    // Setup Camera
    if let dualCameraDevice = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .front) {
        defaultVideoDevice = dualCameraDevice
    } else if let frontCameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front) {
        // If the dual camera isn't available, default to the front wide angle camera.
        defaultVideoDevice = frontCameraDevice
    }
    guard let videoDevice = defaultVideoDevice else {
        print("Default video device is unavailable.")
        // setupResult = .configurationFailed
        captureSession.commitConfiguration()
        return
    }
    // try? instead of a bare try: viewDidAppear isn't a throwing context
    guard let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice) else { return }
    if captureSession.canAddInput(videoDeviceInput) {
        captureSession.addInput(videoDeviceInput)
    }
    let movieOutput = AVCaptureMovieFileOutput()
    // Video input ports for the AVCaptureConnection
    let videoInput: [AVCaptureInput.Port] = videoDeviceInput.ports
    if captureSession.canAddOutput(movieOutput) {
        captureSession.beginConfiguration()
        captureSession.addOutput(movieOutput)
        captureSession.sessionPreset = .medium
Then I try to set up the AVCaptureConnection and set the parameters for mirroring. Please tell me if there is an easier way to mirror the output/playback.
        avCaptureConnection = AVCaptureConnection(inputPorts: videoInput, output: movieOutput)
        avCaptureConnection.isEnabled = true
        // Mirror the capture connection?
        avCaptureConnection.automaticallyAdjustsVideoMirroring = false
        avCaptureConnection.isVideoMirrored = false
        // Check if we can add a connection
        if captureSession.canAddConnection(avCaptureConnection) {
            // Add the connection
            captureSession.addConnection(avCaptureConnection)
        }
        captureSession.commitConfiguration()
        self.movieOutput = movieOutput
        setupLivePreview()
    }
}
Somewhere else in the code, connected to an IBAction, I initialize the recording:
// Start recording video to a temporary file.
let outputFileName = NSUUID().uuidString
let outputFilePath = (NSTemporaryDirectory() as NSString).appendingPathComponent((outputFileName as NSString).appendingPathExtension("mov")!)
print("Recording in tap function")
movieOutput.startRecording(to: URL(fileURLWithPath: outputFilePath), recordingDelegate: self)
I think I'm using AVCaptureConnection incorrectly, especially given the error stating that the media types are incompatible. If there is a proper way to implement this, please let me know. I'm also open to suggestions for an easier way to mirror the playback. Thank you!
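For what it's worth, a common alternative to building the connection by hand (a hedged sketch, not something from the post above) is to let the session create the connection automatically when the output is added, then configure that connection:

// Hedged sketch: configure the connection the session creates automatically
// when the output is added, instead of constructing an AVCaptureConnection manually.
if captureSession.canAddOutput(movieOutput) {
    captureSession.beginConfiguration()
    captureSession.addOutput(movieOutput)
    if let connection = movieOutput.connection(with: .video),
       connection.isVideoMirroringSupported {
        connection.automaticallyAdjustsVideoMirroring = false
        connection.isVideoMirrored = false // false records un-mirrored; true records what the mirrored preview shows
    }
    captureSession.commitConfiguration()
}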

What is wrong with my custom camera view?

I followed this video: https://www.youtube.com/watch?v=7TqXrMnfJy8&t=45s to a T, but when I open the camera view all I see is a black screen and a white button. I get no error messages when I load the camera view. Can someone please help me figure out what I'm doing wrong?
My code is below:
import UIKit
import AVFoundation

class CameraViewController: UIViewController {

    var captureSession = AVCaptureSession()
    var backCamera: AVCaptureDevice?
    var currentCamera: AVCaptureDevice?
    var photoOutput: AVCapturePhotoOutput?
    var cameraPreviewLayer: AVCaptureVideoPreviewLayer?

    override func viewDidLoad() {
        super.viewDidLoad()
        setupCaptureSession()
        setupDevice()
        setupInputOutput()
        setupPreviewLayer()
        startRunningCaptureSession()
    }

    func setupCaptureSession() {
        captureSession.sessionPreset = AVCaptureSession.Preset.photo
    }

    func setupDevice() {
        let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: AVMediaType.video, position: AVCaptureDevice.Position.unspecified)
        let devices = deviceDiscoverySession.devices
        for device in devices {
            if device.position == AVCaptureDevice.Position.back {
                backCamera = device
            }
        }
        currentCamera = backCamera
    }

    func setupInputOutput() {
        do {
            let captureDeviceInput = try AVCaptureDeviceInput(device: currentCamera!)
            captureSession.addInput(captureDeviceInput)
            photoOutput?.setPreparedPhotoSettingsArray([AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])], completionHandler: nil)
        } catch {
            print(error)
        }
    }

    func setupPreviewLayer() {
        cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
        cameraPreviewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
        cameraPreviewLayer?.frame = self.view.frame
        self.view.layer.insertSublayer(cameraPreviewLayer!, at: 1)
    }

    func startRunningCaptureSession() {
        captureSession.startRunning()
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated
    }
}
I ran your code and it worked perfectly fine — almost! The only problem is that I had to add a Privacy — Camera Usage Description entry to the app's Info.plist. Otherwise the app crashes.
Once I did that and ran your code, I saw the live camera view on my device.
So why isn't it working for you? Let's think of some possible reasons. You didn't give enough info to know for sure (seeing as the code itself works just fine), but here are some possibilities:
You don't have the Privacy — Camera Usage Description entry in the app's Info.plist (a runtime check for this is sketched after this list).
You are testing on the Simulator. Maybe this code works only on a device.
There is something in your interface in front of the sublayer that you add when you say insertSublayer. To test this, try saying addSublayer instead; this will make the camera layer the frontmost layer (this is just for testing purposes, remember).
Maybe your code never runs at all? Perhaps we never actually go to this view controller. To test that theory, put a print statement in your viewDidLoad and see if it actually prints to the console.
Maybe your code runs too soon? To test that theory, move all those calls out of viewDidLoad and into something later, such as viewDidAppear. Remember, this is just for testing purposes.
Hopefully one of those will help you figure out what the problem is.
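For the first and fourth possibilities, a small runtime check can help narrow things down. This is a hedged sketch (checkCameraAccess is a hypothetical helper, not part of the question's code); it assumes AVFoundation is imported, as in the question:

// Hypothetical helper: log camera authorization before configuring the session.
func checkCameraAccess() {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        print("Camera authorized") // safe to run the capture session
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            print("Camera access granted: \(granted)")
        }
    default:
        print("Camera access denied or restricted")
    }
}

Note that a missing Privacy — Camera Usage Description entry still crashes the app before any of this runs, so check the Info.plist first.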

Swift AVCaptureSession for barcode scanning is not working

I am trying to build a barcode scanner. I adapted some of this tutorial. The video capture session is working, but it is not detecting any barcodes. I have gone through the code multiple times and still could not find the problem. Here is my code for detecting the barcode:
class ScanController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {

    var captureSession: AVCaptureSession?
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?
    var qrCodeFrameView: UIView?
    let supportedCodeTypes = [AVMetadataObject.ObjectType.upce,
                              AVMetadataObject.ObjectType.code39,
                              AVMetadataObject.ObjectType.qr]

    override func viewDidLoad() {
        super.viewDidLoad()
        // Get an instance of the AVCaptureDevice class for a device object, providing video as the media type parameter
        let captureDevice = AVCaptureDevice.default(for: AVMediaType.video)
        do {
            // Get an instance of the AVCaptureDeviceInput class using the previous device object.
            let input = try AVCaptureDeviceInput(device: captureDevice!)
            // Initialize the captureSession object.
            captureSession = AVCaptureSession()
            // Set the input device on the capture session.
            captureSession?.addInput(input)
            let captureMetadataOutput = AVCaptureMetadataOutput()
            captureSession?.addOutput(captureMetadataOutput)
            // Set delegate and use the default dispatch queue to execute the callback
            captureMetadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
            captureMetadataOutput.metadataObjectTypes = supportedCodeTypes
            // Initialize the video preview layer and add it as a sublayer to the viewPreview view's layer.
            videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
            videoPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
            videoPreviewLayer?.frame = view.layer.bounds
            view.layer.addSublayer(videoPreviewLayer!)
            // Start video capture.
            captureSession?.startRunning()
            // Add the message label
            self.view.addSubview(messageLabel)
            // Initialize the QR code frame to highlight the QR code
            qrCodeFrameView = UIView()
            if let qrCodeFrameView = qrCodeFrameView {
                qrCodeFrameView.layer.borderColor = UIColor.green.cgColor
                qrCodeFrameView.layer.borderWidth = 2
                view.addSubview(qrCodeFrameView)
                view.bringSubview(toFront: qrCodeFrameView)
            }
        } catch {
            // If any error occurs, simply print it out and don't continue any more.
            print("THERE IS A PROBLEM WITH THE CAPTURE SESSION *****************")
            print(error)
            return
        }
    }
}
What am I missing?
Maybe you are missing the delegate method? The tutorial implements this delegate method:
optional func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection)
under the section "Decoding the QR Code".
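For reference, a minimal sketch of what that implementation could look like in ScanController (this assumes the supportedCodeTypes, videoPreviewLayer, and qrCodeFrameView from the question; the body is illustrative, not the tutorial's exact code):

// Illustrative sketch: called on the main queue whenever the metadata output
// detects one of the supported code types.
func metadataOutput(_ output: AVCaptureMetadataOutput,
                    didOutput metadataObjects: [AVMetadataObject],
                    from connection: AVCaptureConnection) {
    guard let metadataObj = metadataObjects.first as? AVMetadataMachineReadableCodeObject,
          supportedCodeTypes.contains(metadataObj.type) else {
        qrCodeFrameView?.frame = .zero // nothing detected
        return
    }
    // Convert the metadata object's coordinates to preview-layer coordinates
    // and highlight the detected code.
    if let barCodeObject = videoPreviewLayer?.transformedMetadataObject(for: metadataObj) {
        qrCodeFrameView?.frame = barCodeObject.bounds
    }
    if let code = metadataObj.stringValue {
        print("Detected code: \(code)")
    }
}

If this method is missing (or its signature doesn't match, so it is never called), the session runs but nothing is ever detected, which matches the symptom described.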

Custom camera in Xcode, Swift 3

So I have this issue where I am trying to create a custom camera in Xcode; however, for some reason I cannot get it to use the front camera. No matter what I change in the code, it seems to only use the back camera. I was hoping someone might be generous enough to take a quick look at my code below and see whether there is something I am missing or somewhere I went wrong. Any help would be very much appreciated; thank you for your time.
func SelectInputDevice() {
    let devices = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera,
                                                mediaType: AVMediaTypeVideo, position: .front)
    if devices?.position == AVCaptureDevicePosition.front {
        print(devices?.position)
        frontCamera = devices
    }
    currentCameraDevice = frontCamera
    do {
        let captureDeviceInput = try AVCaptureDeviceInput(device: currentCameraDevice)
        captureSession.addInput(captureDeviceInput)
    } catch {
        print(error.localizedDescription)
    }
}
Here, frontCamera and currentCameraDevice are AVCaptureDevices.
It seems there are a few things missing from your code:
1) In order to change input devices you need to reconfigure the session by calling session.beginConfiguration() before adding the new device and ending with session.commitConfiguration(). All changes should also be made on the background queue (which hopefully you've created for the session) so that the UI isn't blocked while the session is configured.
2) The code would be safer if it checked that the session accepts the new device with session.canAddInput(captureDeviceInput) before adding it, and removed the previous device (the back camera) first, since a front+back configuration isn't allowed.
3) It would also be cleaner to check beforehand that the device has a working front camera (it might be broken), to prevent crashes.
Full code for switching the capture device to the front camera would look like this:
func switchCameraToFront() {
    // session & sessionQueue are references to the capture session and its dispatch queue
    sessionQueue.async { [unowned self] in
        let currentVideoInput = self.videoDeviceInput // ref to current videoInput as set up in initial session config
        let preferredPosition: AVCaptureDevicePosition = .front
        let preferredDeviceType: AVCaptureDeviceType = .builtInWideAngleCamera
        let devices = self.videoDeviceDiscoverySession.devices!
        var newVideoDevice: AVCaptureDevice? = nil
        // First, look for a device with both the preferred position and device type. Otherwise, look for a device with only the preferred position.
        if let device = devices.filter({ $0.position == preferredPosition && $0.deviceType == preferredDeviceType }).first {
            newVideoDevice = device
        } else if let device = devices.filter({ $0.position == preferredPosition }).first {
            newVideoDevice = device
        }
        if let videoDevice = newVideoDevice {
            do {
                let videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice)
                self.session.beginConfiguration()
                // Remove the existing device input first, since using the front and back camera simultaneously is not supported.
                self.session.removeInput(currentVideoInput)
                if self.session.canAddInput(videoDeviceInput) {
                    self.session.addInput(videoDeviceInput)
                    self.videoDeviceInput = videoDeviceInput
                } else {
                    // fall back to the current device
                    self.session.addInput(self.videoDeviceInput)
                }
                self.session.commitConfiguration()
            } catch {
            }
        }
    }
}