Improve barcode scanner in Swift

I've implemented barcode scanning following the standard tutorials, but I think the performance is pretty bad. I can point my camera at a barcode with perfect focus and no glare, and the code still doesn't detect the barcode.
And I'm kind of jealous of the app ScanLife: it's amazingly fast and detects codes without even being in focus.
Any ideas on how to improve scanning?
Here's a snippet of my code (the detection part):
var captureSession: AVCaptureSession!
var previewLayer: AVCaptureVideoPreviewLayer!

let videoCaptureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
captureSession = AVCaptureSession()

let videoInput: AVCaptureDeviceInput
do {
    videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
    if captureSession.canAddInput(videoInput) {
        captureSession.addInput(videoInput)

        let metadataOutput = AVCaptureMetadataOutput()
        if captureSession.canAddOutput(metadataOutput) {
            captureSession.addOutput(metadataOutput)
            metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
            metadataOutput.metadataObjectTypes = metadataOutput.availableMetadataObjectTypes // Use all metadata object types by default.
            metadataOutput.rectOfInterest = CGRect.zero
        } else {
            failed()
            return
        }

        if (videoCaptureDevice?.isFocusModeSupported(.continuousAutoFocus))! {
            do {
                if (try videoCaptureDevice?.lockForConfiguration()) != nil {
                    videoCaptureDevice?.exposureMode = .continuousAutoExposure
                    videoCaptureDevice?.focusMode = .continuousAutoFocus
                    videoCaptureDevice?.unlockForConfiguration()
                }
            } catch {
            }
        }

        videoCaptureDevice?.addObserver(self, forKeyPath: "adjustingFocus", options: NSKeyValueObservingOptions.new, context: nil)

        let tapGesture = UITapGestureRecognizer(target: self, action: #selector(ScannerViewController.focus(_:)))
        mainView.addGestureRecognizer(tapGesture)

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.frame = view.layer.bounds
        previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
        mainView.layer.addSublayer(previewLayer)

        /*
        // Initialize code frame to highlight the code
        codeFrameView.layer.borderColor = UIColor.green.cgColor
        codeFrameView.layer.borderWidth = 2
        view.addSubview(codeFrameView)
        view.bringSubview(toFront: codeFrameView)
        */

        captureSession.startRunning()
    } else {
        failed()
    }
} catch {
    failed()
}
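One thing that may help, which the inline comment above already hints at: requesting availableMetadataObjectTypes makes the output look for every symbology on every frame. Restricting the list to the symbologies you actually expect is a common performance tweak; a minimal sketch, assuming you only need EAN and QR codes and using the current AVFoundation type names:

// Scan only for the symbologies we expect; checking every available
// type costs time on every frame.
metadataOutput.metadataObjectTypes = [
    AVMetadataObject.ObjectType.ean8,
    AVMetadataObject.ObjectType.ean13,
    AVMetadataObject.ObjectType.qr
]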

For what it's worth, it seems to improve performance to define a rect to search in. As the documentation says:
Specifying a rectOfInterest may improve detection performance for certain types of metadata.
The code could be
metadataOutput.rectOfInterest = focusView.frame
where focusView is a view displayed on top of the preview layer, to signal to the user where to aim the camera.
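One caveat worth knowing: rectOfInterest is expressed in the metadata output's coordinate space, where the default is (0.0, 0.0, 1.0, 1.0), not in view coordinates. That means assigning CGRect.zero, as in the snippet above, leaves a zero-area scan region, and assigning focusView.frame directly mixes coordinate spaces. A minimal sketch of the conversion, reusing previewLayer and focusView from above (older SDKs name the method metadataOutputRectOfInterest(for:)):

// Convert the on-screen focus rect into the metadata output's normalized
// coordinate space. Do this after the session is configured and the preview
// layer has been laid out, otherwise there is no geometry to convert with.
metadataOutput.rectOfInterest = previewLayer.metadataOutputRectConverted(fromLayerRect: focusView.frame)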

Related

Having trouble with flipping camera in SwiftUI / AVFoundation / AVCaptureDeviceInput

I am coding a camera with SwiftUI using AVFoundation and was able to get the setup to work as intended. However, as I implement a flip-camera function I'm running into an error where, after flipping, the preview just goes to a black screen. I'm assuming the input gets removed but the correct flipped input doesn't get shown.
Here is my code
class CameraViewModel: NSObject, ObservableObject, AVCaptureFileOutputRecordingDelegate, AVCapturePhotoCaptureDelegate {
    ...
    @Published var session = AVCaptureSession()
    @objc dynamic var videoDeviceInput: AVCaptureDeviceInput!
    private let sessionQueue = DispatchQueue(label: "session queue")

    func setUp() {
        do {
            self.session.beginConfiguration()
            let cameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front)
            let videoInput = try AVCaptureDeviceInput(device: cameraDevice!)
            let audioDevice = AVCaptureDevice.default(for: .audio)
            let audioInput = try AVCaptureDeviceInput(device: audioDevice!)
            // MARK: Audio Input
            if self.session.canAddInput(videoInput) && self.session.canAddInput(audioInput) {
                self.session.addInput(videoInput)
                self.session.addInput(audioInput)
                self.videoDeviceInput = videoInput
            }
            if self.session.canAddOutput(self.output) {
                self.session.addOutput(self.output)
            }
            if self.session.canAddOutput(self.photoOutput) {
                self.session.addOutput(self.photoOutput)
            }
            self.session.commitConfiguration()
        } catch {
            print(error.localizedDescription)
        }
    }

    func changeCamera() {
        sessionQueue.async {
            if self.videoDeviceInput != nil {
                let currentVideoDevice = self.videoDeviceInput.device
                let currentPosition = currentVideoDevice.position
                let preferredPosition: AVCaptureDevice.Position
                switch currentPosition {
                case .unspecified, .front:
                    preferredPosition = .back
                case .back:
                    preferredPosition = .front
                @unknown default:
                    print("Unknown capture position. Defaulting to back, dual-camera.")
                    preferredPosition = .back
                }
                print("current pos is \(currentPosition.rawValue) and preferred position is \(preferredPosition.rawValue)")
                do {
                    self.session.beginConfiguration()
                    // Remove device as needed
                    self.session.removeInput(self.videoDeviceInput)
                    let newCameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: preferredPosition)
                    let newVideoInput = try AVCaptureDeviceInput(device: newCameraDevice!)
                    let newAudioDevice = AVCaptureDevice.default(for: .audio)
                    let newAudioInput = try AVCaptureDeviceInput(device: newAudioDevice!)
                    // MARK: Audio Input
                    if self.session.canAddInput(newVideoInput) && self.session.canAddInput(newAudioInput) {
                        self.session.addInput(newVideoInput)
                        self.session.addInput(newAudioInput)
                        self.videoDeviceInput = newVideoInput
                    }
                    self.session.commitConfiguration()
                } catch {
                    print(error.localizedDescription)
                }
            }
        }
    }
}
I'm not sure what I'm doing wrong; I've looked at previous Stack Overflow threads and online resources, and all they say is to get the device input position, change it, and remove the old input before committing the configuration. Any help will be greatly appreciated!
Edit: I found that the solution was to get rid of the audio-input code in changeCamera, as the audio is still captured without it.
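That fix lines up with the code above: the original audio input is never removed, so canAddInput(newAudioInput) returns false and the whole if block is skipped, including re-adding the video input, which leaves the session with no camera and a black preview. A minimal sketch of changeCamera that swaps only the video input (same names as in the question):

func changeCamera() {
    sessionQueue.async {
        guard let currentInput = self.videoDeviceInput else { return }
        let newPosition: AVCaptureDevice.Position =
            (currentInput.device.position == .back) ? .front : .back
        guard let newDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: newPosition),
              let newInput = try? AVCaptureDeviceInput(device: newDevice) else { return }
        self.session.beginConfiguration()
        // Swap only the video input; the audio input stays attached.
        self.session.removeInput(currentInput)
        if self.session.canAddInput(newInput) {
            self.session.addInput(newInput)
            self.videoDeviceInput = newInput
        } else {
            // Restore the old input if the new one can't be added.
            self.session.addInput(currentInput)
        }
        self.session.commitConfiguration()
    }
}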
The problem is in the code you are not showing.
When you display the image in your SwiftUI view, the orientation depends on which camera produced it.
For the front camera, it is .upMirrored.
If you switch to the back camera, you need to use .up.
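A minimal sketch of what that can look like when building the SwiftUI Image from a captured CGImage; cgImage and videoDeviceInput here stand in for whatever the hidden display code actually uses:

// Mirror frames from the front camera; back-camera frames are upright as-is.
let orientation: Image.Orientation =
    (videoDeviceInput.device.position == .front) ? .upMirrored : .up
let image = Image(decorative: cgImage, scale: 1.0, orientation: orientation)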

QR scanner in Swift 5

I built a QR scanner in Swift 5. It acknowledges the QR code and scans it; however, it only displays the URL embedded in the QR code. Does anyone have advice on how to make the link tappable so it opens in a browser?
This is the code I have for the scanner:
import UIKit
import AVFoundation

extension QRScannerController: AVCaptureMetadataOutputObjectsDelegate {
}

class QRScannerController: UIViewController {
    var captureSession = AVCaptureSession()
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?
    var qrcodeFrameView: UIView?

    @IBOutlet var messageLabel: UILabel!
    @IBOutlet var topBar: UIView!

    func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
        // Check that the metadataObjects array contains at least one object
        if metadataObjects.count == 0 {
            qrcodeFrameView?.frame = CGRect.zero
            messageLabel.text = "No QR Code is detected"
            return
        }
        // Get the metadata object
        let metadataObj = metadataObjects[0] as! AVMetadataMachineReadableCodeObject
        if metadataObj.type == AVMetadataObject.ObjectType.qr {
            // If the found metadata matches the QR code type, update the status label's text and set the bounds
            let barCodeObject = videoPreviewLayer?.transformedMetadataObject(for: metadataObj)
            qrcodeFrameView?.frame = barCodeObject!.bounds
            if metadataObj.stringValue != nil {
                messageLabel.text = metadataObj.stringValue
            }
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // Get the back camera for capture
        guard let captureDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else {
            print("Failed to get camera device")
            return
        }
        do {
            // Get an instance of the AVCaptureDeviceInput class using the previous device object
            let input = try AVCaptureDeviceInput(device: captureDevice)
            // Set the input device on the capture session
            captureSession.addInput(input)
            // Initialize an AVCaptureMetadataOutput object and set it as the output device of the capture session
            let captureMetadataOutput = AVCaptureMetadataOutput()
            captureSession.addOutput(captureMetadataOutput)
            // Set the delegate and use the default dispatch queue to execute the callback
            captureMetadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
            captureMetadataOutput.metadataObjectTypes = [AVMetadataObject.ObjectType.qr]
            // Initialize the video preview layer and add it as a sublayer to the view's layer
            videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            videoPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
            videoPreviewLayer?.frame = view.layer.bounds
            view.layer.addSublayer(videoPreviewLayer!)
            // Start video capture
            captureSession.startRunning()
            // Move the message label and top bar to the front
            view.bringSubviewToFront(messageLabel)
            view.bringSubviewToFront(topBar)
            // Initialize the QR code frame to highlight the QR code
            qrcodeFrameView = UIView()
            if let qrcodeFrameView = qrcodeFrameView {
                qrcodeFrameView.layer.borderColor = UIColor.yellow.cgColor
                qrcodeFrameView.layer.borderWidth = 2
                view.addSubview(qrcodeFrameView)
                view.bringSubviewToFront(qrcodeFrameView)
            }
        } catch {
            print(error)
            return
        }
    }
}
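One way to make the decoded link usable, sketched here as a suggestion rather than something from the original thread: inside metadataOutput(_:didOutput:from:), treat stringValue as a URL and hand it to UIApplication:

if let text = metadataObj.stringValue,
   let url = URL(string: text),
   UIApplication.shared.canOpenURL(url) {
    // Stop the session first so the scanner doesn't keep re-triggering
    // while the browser opens.
    captureSession.stopRunning()
    UIApplication.shared.open(url, options: [:], completionHandler: nil)
}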

Initializer error in Camera App for Xcode in Swift

I am building an app similar to a camera app in Xcode 10.1 using Swift. To do this I have imported AVFoundation and am close to finishing my code. However, on this line of code
let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
which is in this block of code
func beginSession() {
    do {
        let captureDeviceInput = try AVCaptureDeviceInput(device: captureDevice!)
        captureSession.addInput(captureDeviceInput)
    } catch {
        print(error.localizedDescription)
    }
    let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession) {
        self.previewLayer = self.previewLayer
        self.view.layer.addSublayer(self.previewLayer)
        self.previewLayer.frame = self.view.layer.frame
        captureSession.startRunning()
        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_32BGRA)] as [String: Any]
        dataOutput.alwaysDiscardsLateVideoFrames = true
        if captureSession.canAddOutput(dataOutput) {
            captureSession.addOutput(dataOutput)
        }
There appears an error that reads "Cannot invoke initializer for type 'AVCaptureVideoPreviewLayer' with an argument list of type '(session: AVCaptureSession, () -> ())'"
I don't exactly know what this means or how to fix it as I am relatively new to programming.
Where have you initialized captureSession? Also note what the error is telling you: the { right after AVCaptureVideoPreviewLayer(session:) is parsed as a trailing closure argument to the initializer, which is why the compiler sees an argument list of type (session: AVCaptureSession, () -> ()).
Try something like this in your UIViewController:
var captureSession = AVCaptureSession()
var videoPreviewLayer: AVCaptureVideoPreviewLayer?

override func viewDidLoad() {
    super.viewDidLoad()
    beginSession()
}

func beginSession() {
    // Get an instance of the AVCaptureDevice class to initialize a device object, with video as the media type parameter.
    if let captureDevice = AVCaptureDevice.default(for: AVMediaType.video) {
        do {
            // Get an instance of the AVCaptureDeviceInput class using the previous device object.
            let input = try AVCaptureDeviceInput(device: captureDevice)
            // Set the input device on the capture session.
            captureSession.addInput(input)
            // Initialize an AVCaptureVideoDataOutput object and set it as the output device of the capture session.
            let dataOutput = AVCaptureVideoDataOutput()
            dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_32BGRA)] as [String: Any]
            dataOutput.alwaysDiscardsLateVideoFrames = true
            if captureSession.canAddOutput(dataOutput) {
                captureSession.addOutput(dataOutput)
            }
            // Initialize the video preview layer and add it as a sublayer to the view's layer.
            videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            videoPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
            videoPreviewLayer?.frame = self.view.layer.bounds // It may be best to set up a UIView outlet instead of using self.view
            self.view.layer.addSublayer(videoPreviewLayer!)
            // Start video capture.
            captureSession.startRunning()
        } catch {
            // If any error occurs, simply print it out and don't continue.
            print(error)
            return
        }
    }
}
Hope it helps you!

AVFoundation PDF417 scanner doesn't always work

I am creating an app using Swift 4 and Xcode 9 that scans PDF417 barcodes using AVFoundation. The scanner works with some codes, but doesn't recognize the PDF417 barcode that you would find on the front of a CA Lottery scratchers ticket, for example.
Is there anything I am missing to make it work? Below is my code:
let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInDualCamera], mediaType: AVMediaType.video, position: .back)
guard let captureDevice = deviceDiscoverySession.devices.first else {
    print("Failed to get the camera device")
    return
}
do {
    captureSession = AVCaptureSession()
    let input = try AVCaptureDeviceInput(device: captureDevice)
    captureSession!.addInput(input)
    let captureMetadataOutput = AVCaptureMetadataOutput()
    captureSession!.addOutput(captureMetadataOutput)
    captureMetadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
    captureMetadataOutput.metadataObjectTypes = [AVMetadataObject.ObjectType.pdf417]
    videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
    videoPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
    videoPreviewLayer?.frame = view.layer.bounds
    view.layer.addSublayer(videoPreviewLayer!)
    captureSession?.startRunning()
} catch {
    print(error)
    return
}

func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
    // Get the metadata object
    let metadataObj = metadataObjects[0] as! AVMetadataMachineReadableCodeObject
    if scanType.contains(metadataObj.type) {
        let barCodeObj = videoPreviewLayer?.transformedMetadataObject(for: metadataObj)
        if metadataObj.stringValue != nil {
            callDelegate(metadataObj.stringValue)
            captureSession?.stopRunning()
            AudioServicesPlayAlertSound(SystemSoundID(kSystemSoundID_Vibrate))
            navigationController?.popViewController(animated: true)
        }
    }
}
Thanks!
Replace your initialization code for the scanner with the following, in viewDidLoad or whichever method you'd like it to live in:
// Global vars used in the setup below
var captureSession: AVCaptureSession!
var previewLayer: AVCaptureVideoPreviewLayer!

func setupCaptureInputDevice() {
    let cameraMediaType = AVMediaType.video
    captureSession = AVCaptureSession()

    // Get the default video capture device
    guard let videoCaptureDevice = AVCaptureDevice.default(for: cameraMediaType) else {
        // If there is an error then something is wrong, so dismiss
        dismiss(animated: true, completion: nil)
        return
    }

    let videoInput: AVCaptureDeviceInput
    // Create a capture input for the device obtained above
    do {
        videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
    } catch {
        return
    }

    // It is important to check whether the input can be added,
    // because adding it without this check could cause a crash
    if captureSession.canAddInput(videoInput) {
        captureSession.addInput(videoInput)
    } else {
        // Dismiss or display an error
        return
    }

    // Get ready to capture output somewhere
    let metadataOutput = AVCaptureMetadataOutput()
    // Again, check that the operation is possible before doing it
    if captureSession.canAddOutput(metadataOutput) {
        captureSession.addOutput(metadataOutput)
        // Set the metadataOutput's delegate to self, delivering callbacks on the main queue
        metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
        // Specify your code type
        metadataOutput.metadataObjectTypes = [.pdf417]
    } else {
        // Dismiss or display an error
        return
    }

    // Create a preview layer that renders the capture session
    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    // and add it to the screen
    previewLayer.frame = view.layer.bounds
    previewLayer.videoGravity = .resizeAspectFill
    view.layer.addSublayer(previewLayer)

    // And begin capturing
    captureSession.startRunning()
}
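If the setup above is in place and a dense PDF417 code (like the one on a lottery ticket) still won't read, focus is a common culprit, because these codes are usually held a few centimeters from the lens. One tweak that can help, offered as a suggestion rather than part of the original answer, is restricting autofocus to the near range before startRunning(), reusing videoCaptureDevice from the setup above:

// Bias autofocus toward near subjects before starting the session.
if videoCaptureDevice.isAutoFocusRangeRestrictionSupported {
    do {
        try videoCaptureDevice.lockForConfiguration()
        videoCaptureDevice.autoFocusRangeRestriction = .near
        videoCaptureDevice.unlockForConfiguration()
    } catch {
        // Scanning still works without the hint; ignore the failure.
    }
}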

Couple of issues with custom camera

I currently have a custom camera implemented in my application. I am running into two small issues.
1) When I switch between the views of the camera (front & back), the audio input dies and it only records video.
2) My method for deciding which camera view (front & back) is which is deprecated, and I don't know exactly how to go about resolving it. The deprecated part is how the devices are stored in variables; Xcode is telling me: "Use AVCaptureDeviceDiscoverySession instead."
let devices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) as! [AVCaptureDevice]
// Get the front and back-facing camera for taking photos
for device in devices {
    if device.position == AVCaptureDevicePosition.back {
        backFacingCamera = device
    } else if device.position == AVCaptureDevicePosition.front {
        frontFacingCamera = device
    }
}
currentDevice = backFacingCamera
guard let captureDeviceInput = try? AVCaptureDeviceInput(device: currentDevice) else {
    return
}
As for the general camera recording, here is the code:
My variables:
let captureSession = AVCaptureSession()
var currentDevice: AVCaptureDevice?
var backFacingCamera: AVCaptureDevice?
var frontFacingCamera: AVCaptureDevice?
var videoFileOutput: AVCaptureMovieFileOutput?
var cameraPreviewLayer: AVCaptureVideoPreviewLayer?

@IBOutlet weak var recordingView: UIView!
Switching cameras:
var device = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera, mediaType: AVMediaTypeVideo, position: .back)

func switchCameras() {
    captureSession.beginConfiguration()
    // Change the device based on the current camera
    let newDevice = (currentDevice?.position == AVCaptureDevicePosition.back) ? frontFacingCamera : backFacingCamera
    // Remove all inputs from the session
    for input in captureSession.inputs {
        captureSession.removeInput(input as! AVCaptureDeviceInput)
    }
    // Change to the new input
    let cameraInput: AVCaptureDeviceInput
    do {
        cameraInput = try AVCaptureDeviceInput(device: newDevice)
    } catch {
        print(error)
        return
    }
    if captureSession.canAddInput(cameraInput) {
        captureSession.addInput(cameraInput)
    }
    currentDevice = newDevice
    captureSession.commitConfiguration()
    if currentDevice?.position == .front {
        flashButton.isHidden = true
        flashButton.isEnabled = false
    } else if currentDevice?.position == .back {
        flashButton.isHidden = false
        flashButton.isEnabled = true
    }
}
And in my viewWillAppear:
mediaViewCapture.frame = CGRect(x: self.view.frame.size.width * 0, y: self.view.frame.size.height * 0, width: self.view.frame.size.width, height: self.view.frame.size.height)
self.view.addSubview(mediaViewCapture)
captureSession.sessionPreset = AVCaptureSessionPresetHigh

let devices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) as! [AVCaptureDevice]
// Get the front and back-facing camera for taking photos
for device in devices {
    if device.position == AVCaptureDevicePosition.back {
        backFacingCamera = device
    } else if device.position == AVCaptureDevicePosition.front {
        frontFacingCamera = device
    }
}
currentDevice = backFacingCamera
guard let captureDeviceInput = try? AVCaptureDeviceInput(device: currentDevice) else {
    return
}

let audioInputDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio)
do {
    let audioInput = try AVCaptureDeviceInput(device: audioInputDevice)
    // Add audio input
    if captureSession.canAddInput(audioInput) {
        captureSession.addInput(audioInput)
    } else {
        NSLog("Can't Add Audio Input")
    }
} catch let error {
    NSLog("Error Getting Input Device: \(error)")
}

videoFileOutput = AVCaptureMovieFileOutput()
captureSession.addInput(captureDeviceInput)
captureSession.addOutput(videoFileOutput)

cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
view.layer.addSublayer(cameraPreviewLayer!)
cameraPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
cameraPreviewLayer?.frame = mediaViewCapture.layer.frame
captureSession.startRunning()
And finally my capture callback:
func capture(_ captureOutput: AVCaptureFileOutput!, didFinishRecordingToOutputFileAt outputFileURL: URL!, fromConnections connections: [Any]!, error: Error!) {
    if error == nil {
        turnFlashOff()
        let videoVC = VideoPreviewVC()
        videoVC.url = outputFileURL
        self.navigationController?.pushViewController(videoVC, animated: false)
    } else {
        print("Error saving the video \(error)")
    }
}
You can use AVCaptureDeviceDiscoverySession instead of the deprecated AVCaptureDevice API. The following is the code for it:
let deviceDiscovery = AVCaptureDeviceDiscoverySession(deviceTypes: [AVCaptureDeviceType.builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: .back)
let devices = deviceDiscovery?.devices
for device in devices! {
    if device.hasMediaType(AVMediaTypeVideo) {
        captureDevice = device
    }
}
AVCaptureDeviceType has the following types: builtInMicrophone, builtInWideAngleCamera, builtInTelephotoCamera, builtInDualCamera and builtInDuoCamera.
As for the audio issue: in switchCameras, the loop removes every input from the session, including the audio input, but afterwards only the new camera input is added back, which is why the audio dies after a switch.
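A minimal sketch of that part of switchCameras, keeping the audio input attached (same variable names and Swift 3-era API as in the question):

captureSession.beginConfiguration()
// Remove only the current video input; leave the audio input in place.
for input in captureSession.inputs {
    if let deviceInput = input as? AVCaptureDeviceInput,
       deviceInput.device.hasMediaType(AVMediaTypeVideo) {
        captureSession.removeInput(deviceInput)
    }
}
// ... then create and add the new camera input exactly as before.
captureSession.commitConfiguration()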