What is wrong with my custom camera view? - swift

I followed this video: https://www.youtube.com/watch?v=7TqXrMnfJy8&t=45s to the T. But when I open the camera view, all I see is a black screen and a white button. I get no error messages when I load the camera view. Can someone please assist me with what I'm doing wrong?
My code is below:
import UIKit
import AVFoundation

class CameraViewController: UIViewController {

    var captureSession = AVCaptureSession()
    var backCamera: AVCaptureDevice?
    var currentCamera: AVCaptureDevice?
    var photoOutput: AVCapturePhotoOutput?
    var cameraPreviewLayer: AVCaptureVideoPreviewLayer?

    override func viewDidLoad() {
        super.viewDidLoad()
        setupCaptureSession()
        setupDevice()
        setupInputOutput()
        setupPreviewLayer()
        startRunningCaptureSession()
    }

    func setupCaptureSession() {
        captureSession.sessionPreset = AVCaptureSession.Preset.photo
    }

    func setupDevice() {
        let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: AVMediaType.video, position: AVCaptureDevice.Position.unspecified)
        let devices = deviceDiscoverySession.devices
        for device in devices {
            if device.position == AVCaptureDevice.Position.back {
                backCamera = device
            }
        }
        currentCamera = backCamera
    }

    func setupInputOutput() {
        do {
            let captureDeviceInput = try AVCaptureDeviceInput(device: currentCamera!)
            captureSession.addInput(captureDeviceInput)
            photoOutput?.setPreparedPhotoSettingsArray([AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])], completionHandler: nil)
        } catch {
            print(error)
        }
    }

    func setupPreviewLayer() {
        cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
        cameraPreviewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
        cameraPreviewLayer?.frame = self.view.frame
        self.view.layer.insertSublayer(cameraPreviewLayer!, at: 1)
    }

    func startRunningCaptureSession() {
        captureSession.startRunning()
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}

I ran your code and it worked perfectly fine — almost! The only problem is that I had to add a Privacy — Camera Usage Description entry to the app's Info.plist. Otherwise the app crashes.
Once I did that and ran your code, I saw the live camera view on my device.
So why isn't it working for you? Let's think of some possible reasons. You didn't give enough info to know for sure (seeing as the code itself works just fine), but here are some possibilities:
You don't have the Privacy — Camera Usage Description entry in the app's Info.plist (see the authorization-check sketch after this list).
You are testing on the Simulator. The camera works only on a real device.
There is something in your interface in front of the sublayer that you add when you say insertSublayer. To test this, try saying addSublayer instead; this will make the camera layer the frontmost layer (this is just for testing purposes, remember).
Maybe your code never runs at all? Perhaps we never actually go to this view controller. To test that theory, put a print statement in your viewDidLoad and see if it actually prints to the console.
Maybe your code runs too soon? To test that theory, move all those calls out of viewDidLoad and into something later, such as viewDidAppear. Remember, this is just for testing purposes.
Hopefully one of those will help you figure out what the problem is.
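For the first possibility in that list, you can also confirm camera permission at runtime before configuring the session. This is only a minimal sketch, assuming the standard AVFoundation authorization APIs; it is not part of the tutorial's code:

// Minimal sketch: check/request camera permission before setting up the session.
// Assumes the Privacy — Camera Usage Description (NSCameraUsageDescription) key is in Info.plist.
func checkCameraAuthorization(_ completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        completion(true)
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            DispatchQueue.main.async { completion(granted) }
        }
    default: // .denied or .restricted
        completion(false)
    }
}

You could call this at the top of viewDidLoad and only run the setup methods when the completion hands back true.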

Related

Loading Entities from a file

I created a .reality file which I exported from Reality Composer and added to the project.
The code:
class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        if let anchor = try? Entity.loadAnchor(named: "ARAnchorTestFile") {
            arView.scene.addAnchor(anchor)
        }
    }
}
On devices running iOS 13.5 or higher, the app crashes when the anchoring is triggered and the 3D model should be displayed.
The error:
Thread 1: EXC_BAD_ACCESS (code=1, address=0x40)
The entire project has been uploaded to this repo: https://github.com/evjand/ARAnchorTest
UPDATE: After filing a bug report with Apple, it seems they have fixed it in the iOS 14 beta.
It seems there's a bug when reading a .reality file. Use the .rcproject format instead; it works:
if let anchor = try? Entity.loadAnchor(named: "AR") {
    arView.scene.addAnchor(anchor)
    print(anchor)
}

Swift AVCaptureSession for barcode scanning is not working

I am trying to build a barcode scanner. I adapted some of the code from this tutorial. The video capture session is working, but it is not detecting any barcodes. I have gone through the code multiple times and still cannot find the problem. Here is my code for detecting the barcode:
class ScanController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {

    var captureSession: AVCaptureSession?
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?
    var qrCodeFrameView: UIView?

    let supportedCodeTypes = [AVMetadataObject.ObjectType.upce,
                              AVMetadataObject.ObjectType.code39,
                              AVMetadataObject.ObjectType.qr]

    override func viewDidLoad() {
        super.viewDidLoad()
        // Get an instance of the AVCaptureDevice class to initialize a device object, with video as the media type parameter.
        let captureDevice = AVCaptureDevice.default(for: AVMediaType.video)
        do {
            // Get an instance of the AVCaptureDeviceInput class using the previous device object.
            let input = try AVCaptureDeviceInput(device: captureDevice!)
            // Initialize the captureSession object.
            captureSession = AVCaptureSession()
            // Set the input device on the capture session.
            captureSession?.addInput(input)
            let captureMetadataOutput = AVCaptureMetadataOutput()
            captureSession?.addOutput(captureMetadataOutput)
            // Set the delegate and use the default dispatch queue to execute the callback.
            captureMetadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
            captureMetadataOutput.metadataObjectTypes = supportedCodeTypes
            // Initialize the video preview layer and add it as a sublayer to the view's layer.
            videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
            videoPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
            videoPreviewLayer?.frame = view.layer.bounds
            view.layer.addSublayer(videoPreviewLayer!)
            // Start video capture.
            captureSession?.startRunning()
            // Add the message label.
            self.view.addSubview(messageLabel)
            // Initialize the QR code frame to highlight the QR code.
            qrCodeFrameView = UIView()
            if let qrCodeFrameView = qrCodeFrameView {
                qrCodeFrameView.layer.borderColor = UIColor.green.cgColor
                qrCodeFrameView.layer.borderWidth = 2
                view.addSubview(qrCodeFrameView)
                view.bringSubview(toFront: qrCodeFrameView)
            }
        } catch {
            // If any error occurs, simply print it out and don't continue any more.
            print("THERE IS A PROBLEM WITH THE CAPTURE SESSION *****************")
            print(error)
            return
        }
    }
}
What am I missing?
Maybe you are missing the delegate method? The tutorial includes this delegate method:
optional func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection)
under the section "Decoding the QR Code".
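As a minimal sketch of what implementing it in the ScanController above might look like (the highlighting logic is illustrative, not copied from the tutorial):

func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
    // If nothing is detected, reset the highlight frame.
    guard let metadataObj = metadataObjects.first as? AVMetadataMachineReadableCodeObject,
          supportedCodeTypes.contains(metadataObj.type) else {
        qrCodeFrameView?.frame = .zero
        return
    }
    // Map the detected object's bounds onto the preview layer and highlight it.
    if let barCodeObject = videoPreviewLayer?.transformedMetadataObject(for: metadataObj) {
        qrCodeFrameView?.frame = barCodeObject.bounds
    }
    // The decoded payload of the barcode.
    print(metadataObj.stringValue ?? "No string value")
}

Without a metadataOutput(_:didOutput:from:) implementation in the class, the delegate callback goes nowhere, which would explain why nothing is ever detected.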

Refreshing AVCaptureSession...?

I am having some difficulty with AVCaptureSession when popping view controllers. I have a view controller in a navigation controller where a user takes a photo. After the photo is captured, I segue to a "preview photo" view controller. If the user doesn't like the photo, they can go back and retake it. When I pop the preview photo view controller, the app crashes with the error "Multiple audio/video AVCaptureInputs are not currently supported".
I thought that maybe I could remove/refresh the input, but it's still crashing.
Any support/advice is greatly appreciated!
The segue:
@IBAction func cancelPressed(_ sender: UIButton) {
    _ = self.navigationController?.popViewController(animated: true)
}
camera config (which works fine):
func setupCaptureSessionCamera() {
    // this makes sure to get full res of camera
    captureSession.sessionPreset = AVCaptureSession.Preset.photo
    // query available devices
    var devices = AVCaptureDevice.devices(for: .video)
    for device in devices {
        if device.position == .front {
            frontFacingCamera = device
        } else if device.position == .back {
            backFacingCamera = device
        }
    } // end iteration
    // set a default device
    currentDevice = backFacingCamera
    // configure session w/ output for capturing still img
    stillImageOutput = AVCaptureStillImageOutput()
    stillImageOutput?.outputSettings = [AVVideoCodecKey: AVVideoCodecType.jpeg]
    do {
        let captureDeviceInput = try AVCaptureDeviceInput(device: currentDevice!)
        captureSession.addInput(captureDeviceInput)
        captureSession.addOutput(stillImageOutput!)
        // setup camera preview layer
        cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        // add the preview to our specified view in the UI
        view.layer.addSublayer(cameraPreviewLayer!)
        cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
        cameraPreviewLayer?.frame = cameraView.frame
        captureSession.startRunning()
    } catch let error {
        print(error)
    } // end do
}
What I tried (removing the inputs in viewWillAppear if the sender is the preview photo controller):
func refreshCamera() {
    captureSession.beginConfiguration()
    for input in captureSession.inputs {
        captureSession.removeInput(input as! AVCaptureDeviceInput)
    }
    captureSession.commitConfiguration()
}
It was much simpler than I was imagining. All that is needed is to first check whether there is already an input before calling the setupCaptureSessionCamera method:
if captureSession.inputs.isEmpty {
    setupCaptureSessionCamera()
}
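For context, a minimal sketch of where that guard might live, assuming the camera view controller reconfigures itself in viewWillAppear when the preview controller is popped:

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // Configure the session only once; adding the same kind of input a second time is what
    // triggers "Multiple audio/video AVCaptureInputs are not currently supported".
    if captureSession.inputs.isEmpty {
        setupCaptureSessionCamera()
    }
}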

How to capture depth data from camera in iOS 11 and Swift 4?

I'm trying to get depth data from the camera in iOS 11 with AVDepthData, though when I set up a photoOutput with the AVCapturePhotoCaptureDelegate, photo.depthData is nil.
So I tried setting up an AVCaptureDepthDataOutput with the AVCaptureDepthDataOutputDelegate, though I don't know how to capture the depth photo.
Has anyone ever gotten an image from AVDepthData?
Edit:
Here's the code I tried:
// delegates: AVCapturePhotoCaptureDelegate & AVCaptureDepthDataOutputDelegate

@IBOutlet var image_view: UIImageView!
@IBOutlet var capture_button: UIButton!

var captureSession: AVCaptureSession?
var sessionOutput: AVCapturePhotoOutput?
var depthOutput: AVCaptureDepthDataOutput?
var previewLayer: AVCaptureVideoPreviewLayer?

@IBAction func capture(_ sender: Any) {
    self.sessionOutput?.capturePhoto(with: AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg]), delegate: self)
}

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    self.previewLayer?.removeFromSuperlayer()
    self.image_view.image = UIImage(data: photo.fileDataRepresentation()!)
    let depth_map = photo.depthData?.depthDataMap
    print("depth_map:", depth_map) // is nil
}

func depthDataOutput(_ output: AVCaptureDepthDataOutput, didOutput depthData: AVDepthData, timestamp: CMTime, connection: AVCaptureConnection) {
    print("depth data") // never called
}

override func viewDidLoad() {
    super.viewDidLoad()

    self.captureSession = AVCaptureSession()
    self.captureSession?.sessionPreset = .photo

    self.sessionOutput = AVCapturePhotoOutput()
    self.depthOutput = AVCaptureDepthDataOutput()
    self.depthOutput?.setDelegate(self, callbackQueue: DispatchQueue(label: "depth queue"))

    do {
        let device = AVCaptureDevice.default(for: .video)
        let input = try AVCaptureDeviceInput(device: device!)
        if (self.captureSession?.canAddInput(input))! {
            self.captureSession?.addInput(input)
            if (self.captureSession?.canAddOutput(self.sessionOutput!))! {
                self.captureSession?.addOutput(self.sessionOutput!)
                if (self.captureSession?.canAddOutput(self.depthOutput!))! {
                    self.captureSession?.addOutput(self.depthOutput!)
                    self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
                    self.previewLayer?.frame = self.image_view.bounds
                    self.previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
                    self.previewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
                    self.image_view.layer.addSublayer(self.previewLayer!)
                }
            }
        }
    } catch {}

    self.captureSession?.startRunning()
}
I'm trying two things: one where the depth data is nil, and one where the depth delegate method is never called.
Does anyone know what I'm missing?
First, you need to use the dual camera, otherwise you won't get any depth data.
let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back)
And keep a reference to your queue
let dataOutputQueue = DispatchQueue(label: "data queue", qos: .userInitiated, attributes: [], autoreleaseFrequency: .workItem)
You'll also probably want to synchronize the video and depth data
var outputSynchronizer: AVCaptureDataOutputSynchronizer?
Then you can synchronize the two outputs in your viewDidLoad() method like this
if sessionOutput?.isDepthDataDeliverySupported == true {
    sessionOutput?.isDepthDataDeliveryEnabled = true
    depthDataOutput?.connection(with: .depthData)!.isEnabled = true
    depthDataOutput?.isFilteringEnabled = true
    outputSynchronizer = AVCaptureDataOutputSynchronizer(dataOutputs: [sessionOutput!, depthDataOutput!])
    outputSynchronizer!.setDelegate(self, queue: self.dataOutputQueue)
}
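The synchronized data then arrives in the AVCaptureDataOutputSynchronizerDelegate callback. A minimal sketch, reusing the depthDataOutput and dataOutputQueue names from the snippets above:

func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer, didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    // Pull out the depth data that was captured in sync with the other outputs.
    if let syncedDepth = synchronizedDataCollection.synchronizedData(for: depthDataOutput!) as? AVCaptureSynchronizedDepthData,
       !syncedDepth.depthDataWasDropped {
        let depthData = syncedDepth.depthData
        print("depth map:", depthData.depthDataMap)
    }
}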
I would recommend watching WWDC session 507 - they also provide a full sample app that does exactly what you want.
https://developer.apple.com/videos/play/wwdc2017/507/
To give more details on @klinger's answer, here is what you need to do to get the depth data for each pixel. I wrote some comments; hope it helps!
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {

    // ## Convert Disparity to Depth ##
    let depthData = photo.depthData!.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let depthDataMap = depthData.depthDataMap // AVDepthData -> CVPixelBuffer

    // ## Data Analysis ##

    // Useful data
    let width = CVPixelBufferGetWidth(depthDataMap)   // 768 on an iPhone 7+
    let height = CVPixelBufferGetHeight(depthDataMap) // 576 on an iPhone 7+
    CVPixelBufferLockBaseAddress(depthDataMap, CVPixelBufferLockFlags(rawValue: 0))

    // Convert the base address to a safe pointer of the appropriate type
    let floatBuffer = unsafeBitCast(CVPixelBufferGetBaseAddress(depthDataMap), to: UnsafeMutablePointer<Float32>.self)

    // Read the data (returns a value of type Float)
    // The buffer is row-major: the index for pixel (x, y) is y * width + x,
    // with valid indices 0 ..< (width * height)
    let distanceAtXYPoint = floatBuffer[y * width + x]
}
There are two ways to do this, and you are trying to do both at once:
Capture depth data along with the image. This is done by using the photo.depthData object from photoOutput(_:didFinishProcessingPhoto:error:). I explain why this did not work for you below.
Use a AVCaptureDepthDataOutput and implement depthDataOutput(_:didOutput:timestamp:connection:). I am not sure why this did not work for you, but implementing depthDataOutput(_:didOutput:timestamp:connection:) might help you figure out why.
I think that #1 is a better option, because it pairs the depth data with the image. Here's how you would do that:
@IBAction func capture(_ sender: Any) {
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
    settings.isDepthDataDeliveryEnabled = true
    self.sessionOutput?.capturePhoto(with: settings, delegate: self)
}

// ...

override func viewDidLoad() {
    // ...
    self.sessionOutput = AVCapturePhotoOutput()
    self.sessionOutput?.isDepthDataDeliveryEnabled = true
    // ...
}
Then, depth_map shouldn't be nil. Make sure to read both of the linked documentation pages (separate but similar) for more information about obtaining depth data.
For #2, I'm not quite sure why depthDataOutput(_:didOutput:timestamp:connection:) isn't being called, but you should implement depthDataOutput(_:didDrop:timestamp:connection:reason:) to see if depth data is being dropped for some reason.
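For reference, a minimal sketch of that drop callback, matching the depthOutput delegate setup from the question:

func depthDataOutput(_ output: AVCaptureDepthDataOutput, didDrop depthData: AVDepthData, timestamp: CMTime, connection: AVCaptureConnection, reason: AVCaptureOutput.DataDroppedReason) {
    // If this fires, depth frames are arriving but being discarded;
    // the reason (e.g. .lateData) tells you why.
    print("depth data dropped, reason:", reason.rawValue)
}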
The way you initialize your capture device is not right.
You should use the dual camera.
In Objective-C, it looks like this:
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInDualCamera mediaType:AVMediaTypeVideo position:AVCaptureDevicePositionBack];

Swift - Getting UIImages from Camera (AVCaptureSession)

Intro and background:
I have been working on a project for some time that lets the user do some custom manipulations from their camera (a live feed).
At the moment, I start the capture session in the following way:
var session: AVCaptureSession?
var stillImageOutput: AVCaptureStillImageOutput?
var videoPreviewLayer: AVCaptureVideoPreviewLayer?

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    videoPreviewLayer!.frame = CameraView.bounds
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    session = AVCaptureSession()
    session!.sessionPreset = AVCaptureSessionPresetPhoto
    let backCamera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

    var error: NSError?
    var input: AVCaptureDeviceInput!
    do {
        input = try AVCaptureDeviceInput(device: backCamera)
    } catch let error1 as NSError {
        error = error1
        input = nil
    }

    if error == nil && session!.canAddInput(input) {
        session!.addInput(input)

        stillImageOutput = AVCaptureStillImageOutput()
        stillImageOutput?.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]

        if session!.canAddOutput(stillImageOutput) {
            session!.addOutput(stillImageOutput)

            videoPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
            videoPreviewLayer!.videoGravity = AVLayerVideoGravityResizeAspect
            videoPreviewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
            CameraView.layer.addSublayer(videoPreviewLayer!)

            session!.startRunning()
        }
    }
}
where CameraView is the UIView of my view controller. I now have a function called singleTapped() in which I want to get every frame of the capture, process it, then put it into the CameraView frame (perhaps I should be using a UIImageView instead?).
Research:
I have looked here and here, as well as at many other posts, for getting the frames of the camera, yet these don't necessarily lead where I need to go. What's interesting is in the first link I provided: in their answer they have:
self.stillImageOutput.captureStillImageAsynchronouslyFromConnection(self.stillImageOutput.connectionWithMediaType(AVMediaTypeVideo)) { (buffer: CMSampleBuffer!, error: NSError!) -> Void in
    var image = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(buffer)
    var data_image = UIImage(data: image) // THEY EXTRACTED A UIIMAGE HERE
    self.imageView.image = data_image
}
which does indeed get a UIImage from the camera, but is this a viable method for 30fps?
Rationale and constraints:
The reason I need a UIImage is that I am using a library someone else wrote that quickly transforms a UIImage in a custom way. I want to present this transformation to the user "live".
In conclusion
Please let me know if I am missing something, or if I should reword something. As said above, this is my first post, so I am not quite strong on SO nuances. Thanks, and cheers!
You should maybe reconsider using AVCaptureSession. For what you are doing (I assume), you should try using OpenCV. It's a great utility for image manipulation, especially if you are doing so at 30/60 fps (the actual frame rate after processing might, and I guarantee will, be less). Depending on what the manipulation is you have been given, you can easily port it over into Xcode using bridging headers, or convert everything entirely to C++ for use with OpenCV.
With OpenCV you can call the camera from built-in functions and that can save you lots of processing time and therefore runtime. For example, take a look at this.
I have used OpenCV in similar situations to the one you just described, and I think you could benefit from it. Swift is nice, but sometimes certain things are better handled through other means...
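If you would rather stay inside AVFoundation, the usual per-frame route is an AVCaptureVideoDataOutput instead of repeatedly capturing stills. A minimal sketch in current Swift, assuming the session variable from the question and a hypothetical imageView for displaying the processed frames:

// Add a video data output so every frame arrives as a CMSampleBuffer.
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "video frames"))
if session!.canAddOutput(videoOutput) {
    session!.addOutput(videoOutput)
}

// AVCaptureVideoDataOutputSampleBufferDelegate
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let uiImage = UIImage(ciImage: CIImage(cvPixelBuffer: pixelBuffer))
    DispatchQueue.main.async {
        // Run the UIImage through the transformation library here and show the result.
        self.imageView.image = uiImage
    }
}

Whether this keeps up at 30 fps depends entirely on how heavy the UIImage transformation is, which is why OpenCV (working directly on the pixel buffer) can still be the better fit.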