ARKit and AVCamera simultaneously - arkit

As there is no autofocus in ARKit, I want to load ARKit in a view that takes up half the screen, while the second half shows an AVFoundation camera feed (AVCaptureSession).
Is it possible to run an AVFoundation capture session and ARKit simultaneously in the same app?
Thanks.

Nope.
ARKit uses AVCapture internally (as explained in the WWDC talk introducing ARKit). Only one AVCaptureSession can be running at a time, so if you run your own capture session it’ll suspend ARKit’s session (and break tracking).
Update: However, in iOS 11.3 (aka "ARKit 1.5"), ARKit enables autofocus by default, and you can opt out of it with the isAutoFocusEnabled property on ARWorldTrackingConfiguration.
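For reference, a minimal sketch of that option (assuming the usual ARSCNView outlet named sceneView):
import ARKit

let configuration = ARWorldTrackingConfiguration()
configuration.isAutoFocusEnabled = false   // autofocus is on by default in iOS 11.3+; turn it off if it interferes
sceneView.session.run(configuration)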

Changing the camera focus would disrupt the tracking, so this is definitely not possible (right now, at least).
Update: See rickster's answer above.

I managed to use AVFoundation with ARKit by calling self.sceneView.session.run(self.sceneView.session.configuration!) right after taking the photo.
Call self.captureSession?.stopRunning() right after taking the photo to make the ARKit session resume faster.
self.takePhoto()
self.sceneView.session.run(self.sceneView.session.configuration!)
// Properties on the view controller (implied by the code below):
var captureSession: AVCaptureSession!
var cameraOutput: AVCapturePhotoOutput!

func startCamera() {
    captureSession = AVCaptureSession()
    captureSession.sessionPreset = AVCaptureSession.Preset.photo
    cameraOutput = AVCapturePhotoOutput()

    if let device = AVCaptureDevice.default(for: .video),
       let input = try? AVCaptureDeviceInput(device: device) {
        if captureSession.canAddInput(input) {
            captureSession.addInput(input)
            if captureSession.canAddOutput(cameraOutput) {
                captureSession.addOutput(cameraOutput)
                captureSession.startRunning()
            }
        } else {
            print("issue here: captureSession.canAddInput")
        }
    } else {
        print("some problem here")
    }
}

func takePhoto() {
    startCamera()
    let settings = AVCapturePhotoSettings()
    cameraOutput.capturePhoto(with: settings, delegate: self)
    self.captureSession?.stopRunning()
}

Related

AVFoundation Camera in view

I'm having a lot of problems trying to get a view to show my back camera feed. I looked through Apple's docs and came up with this, but all it seems to do is show a black screen. I also added the permissions in my plist and am running on a real device. I don't need it to take a photo or save anything, just show the live camera feed in a view.
import UIKit
import AVFoundation

class ViewController: UIViewController {

    @IBOutlet weak var cameraView: UIView!

    var captureSession = AVCaptureSession()
    var previewLayer = AVCaptureVideoPreviewLayer()

    override func viewDidLoad() {
        super.viewDidLoad()
        loadCamera()
    }

    func loadCamera() {
        let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
        do {
            let input = try AVCaptureDeviceInput(device: device!)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)
                previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
                previewLayer.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
                cameraView.layer.addSublayer(previewLayer)
            }
        } catch {
            print(error)
        }
    }
}
Welcome!
The problem is that there is actually no data flow (of video frames) happening with your current setup. You need to at least attach one input and one output to your capture session. The preview layer doesn't count as an output itself since it will only attach to an existing connection between input and output.
So to fix it, you can just add an AVCapturePhotoOutput to the session (probably before you add the layer) but never use it. The preview layer should start displaying the frames then.
You probably also want to set the session's sessionPreset to .photo before you add the inputs and outputs. This will cause the session to produce video frames that have an ideal size for displaying on your device's screen.
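A minimal sketch of the suggested fix, reusing the names from the question (the photo output is never used, it only exists so the session has a data flow; the layer is also given a frame so it is actually visible):

func loadCamera() {
    captureSession.sessionPreset = .photo   // set the preset before adding inputs/outputs

    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
          let input = try? AVCaptureDeviceInput(device: device),
          captureSession.canAddInput(input)
    else { return }
    captureSession.addInput(input)

    let photoOutput = AVCapturePhotoOutput()
    if captureSession.canAddOutput(photoOutput) {
        captureSession.addOutput(photoOutput)   // attached but never used
    }

    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer.videoGravity = .resizeAspectFill
    previewLayer.frame = cameraView.bounds
    cameraView.layer.addSublayer(previewLayer)

    captureSession.startRunning()
}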

ARKit 3.0 – People Occlusion with Motion Capture

I am trying to use both people occlusion and motion capture in the same app.
Since ARBodyTrackingConfiguration does not support personSegmentationWithDepth, I am creating two ARViews and giving each a different configuration (ARWorldTrackingConfiguration and ARBodyTrackingConfiguration).
The problem is that, for some reason, only one of the delegate callbacks is fired, and no depth data is available.
What am I doing wrong here?
Is it not OK to have more than one ARSession live at the same time?
In ARKit 4.0, both features can be run simultaneously in a single ARBodyTrackingConfiguration; note, however, that they are both CPU-intensive.
override func viewDidLoad() {
    super.viewDidLoad()

    guard ARBodyTrackingConfiguration.isSupported else {
        fatalError("MoCap is supported on devices with A12 and higher")
    }
    guard ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) else {
        fatalError("People occlusion is not supported on this device.")
    }

    let config = ARBodyTrackingConfiguration()
    config.frameSemantics = .personSegmentationWithDepth
    config.automaticSkeletonScaleEstimationEnabled = true
    arView.session.run(config, options: [])
}

Unsupported IOSurface format: 0x26424741 using twilio video in scenekit

I am using Twilio to send video and use that video in a SceneKit scene as a texture. The problem is that it works fine on an iPhone X, but on an iPhone XR and XS it gives the error Unsupported IOSurface format: 0x26424741.
This is what I am doing:
Get Video:
func subscribed(to videoTrack: TVIRemoteVideoTrack, publication: TVIRemoteVideoTrackPublication, for participant: TVIRemoteParticipant) {
    print("Participant \(participant.identity) added a video track.")
    let remoteView = TVIVideoView.init(frame: UIWindow().frame,
                                       delegate: self)
    videoTrack.addRenderer(remoteView!)
    delegate.participantAdded(with: remoteView!)
}
delegate:
func participantAdded(with videoView: UIView) {
    sceneView.addVideo(with: videoView)
}
and add video to plane:
func addVideo(with view: UIView) {
    videoPlane.geometry?.firstMaterial?.diffuse.contents = view
}
The problem was actually with the renderingType of the remoteView. For the older devices Metal was fine, but the newer devices needed openGLES. I don't know why, but that was the fix.
I used this solution to find out the device type.
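(The linked solution boils down to an extension on UIDevice that maps the raw hardware identifier to a model case. A trimmed, hypothetical sketch covering only the cases used below; the DeviceType enum name is assumed here for illustration:)

import UIKit

enum DeviceType {
    case iPhoneXS, iPhoneXSMax, iPhoneXR, other
}

extension UIDevice {
    // Reads the machine identifier (e.g. "iPhone11,8") and maps it to a case.
    // The real solution covers many more models than shown here.
    var type: DeviceType {
        var systemInfo = utsname()
        uname(&systemInfo)
        let identifier = withUnsafeBytes(of: &systemInfo.machine) { buffer in
            String(decoding: buffer.prefix(while: { $0 != 0 }), as: UTF8.self)
        }
        switch identifier {
        case "iPhone11,2":               return .iPhoneXS
        case "iPhone11,4", "iPhone11,6": return .iPhoneXSMax
        case "iPhone11,8":               return .iPhoneXR
        default:                         return .other
        }
    }
}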
Next, I determined which renderingType to use:
var renderingType: VideoView.RenderingType {
    get {
        let device = UIDevice()
        switch device.type {
        case .iPhoneXS:
            return .openGLES
        case .iPhoneXR:
            return .openGLES
        case .iPhoneXSMax:
            return .openGLES
        default:
            return .metal
        }
    }
}
And used it to initialize remoteView
func didSubscribeToVideoTrack(videoTrack: RemoteVideoTrack, publication: RemoteVideoTrackPublication, participant: RemoteParticipant) {
    print("Participant \(participant.identity) added a video track.")
    let remoteView = VideoView.init(frame: UIWindow().frame,
                                    delegate: self,
                                    renderingType: renderingType)
    videoTrack.addRenderer(remoteView!)
    delegate.participantAddedVideo(for: participant.identity, with: remoteView!)
}

macOS App using iPhone camera

I am trying to build a simple Swift 4 macOS app that uses an iPhone camera connected to my Mac.
I started from a blank macOS template app, turned on the sandbox entitlements for camera, mic and USB, and added the following code to my ViewController.
import Cocoa
import AVFoundation

class ViewController: NSViewController {

    @IBOutlet weak var camera: NSView!

    override func viewDidLoad() {
        super.viewDidLoad()
        camera.layer = CALayer()

        let session: AVCaptureSession = AVCaptureSession()
        session.sessionPreset = AVCaptureSession.Preset.high
        let device: AVCaptureDevice = (AVCaptureDevice.default(for: AVMediaType.video))!
        // let listdevices = (AVCaptureDevice.devices())
        do {
            try session.addInput(AVCaptureDeviceInput(device: device))
            // Preview
            let previewLayer: AVCaptureVideoPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
            let myView: NSView = self.view
            previewLayer.frame = myView.bounds
            previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
            self.camera.layer?.addSublayer(previewLayer)
            session.startRunning()
            // print(listdevices)
            // print(device)
        } catch {
            print(device)
        }
    }

    override var representedObject: Any? {
        didSet {
            // Update the view, if already loaded.
        }
    }
}
In storyboard I have dropped in a Custom View.
The app builds OK and uses the FaceTime camera no problem, but with an iPhone connected I don't see it as a device that AVFoundation can use. I'm not sure of the next step to get the previewLayer to use the USB camera, i.e. the iPhone.
P.S. It needs to work in landscape orientation for all cameras.
According to this Apple Developer Forum post, capturing the camera of a connected iOS device from a macOS app is not supported.
The closest you can get (as the post suggests) is to capture the screen of the iOS device while the camera (Camera.app) is running, effectively capturing the live camera preview (or you can roll your own companion camera app on iOS, if you want to remove the Camera app's UI from the captured screen).
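If you go the screen-capture route, the usual first step (an assumption on my part, not something the forum post spells out) is to opt in to CoreMediaIO screen-capture DAL devices; after that the connected iPhone shows up as a regular AVCaptureDevice you can feed into your session. A sketch:

import AVFoundation
import CoreMediaIO

// Allow iOS-device screen-capture devices to appear in the AVCaptureDevice list.
func enableScreenCaptureDevices() {
    var property = CMIOObjectPropertyAddress(
        mSelector: CMIOObjectPropertySelector(kCMIOHardwarePropertyAllowScreenCaptureDevices),
        mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeGlobal),
        mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementMaster))
    var allow: UInt32 = 1
    let status = CMIOObjectSetPropertyData(
        CMIOObjectID(kCMIOObjectSystemObject),
        &property, 0, nil,
        UInt32(MemoryLayout<UInt32>.size), &allow)
    assert(status == 0, "failed to enable screen-capture devices")
}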

Use Front Camera for AVCaptureDevice Preview Layer Automatically Like Snapchat or Houseparty (Swift 3) [duplicate]

This question already has answers here:
How to get the front camera in Swift?
(8 answers)
Closed 6 years ago.
Essentially what I'm trying to accomplish is having the front camera of the AVCaptureDevice be the first and only option in the application during an AVCaptureSession.
I've looked around StackOverflow, and all the methods and answers provided are deprecated as of iOS 10, Swift 3 and Xcode 8.
I know you're supposed to enumerate the devices with AVCaptureDeviceDiscoverySession and look at them to distinguish front from back, but I'm unsure of how to do so.
Could anyone help? It would be amazing if so!
Here's my code:
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    previewLayer.frame = singleViewCameraSlot.bounds
    self.singleViewCameraSlot.layer.addSublayer(previewLayer)
    captureSession.startRunning()
}

lazy var captureSession: AVCaptureSession = {
    let capture = AVCaptureSession()
    capture.sessionPreset = AVCaptureSessionPreset1920x1080
    return capture
}()

lazy var previewLayer: AVCaptureVideoPreviewLayer = {
    let preview = AVCaptureVideoPreviewLayer(session: self.captureSession)
    preview?.videoGravity = AVLayerVideoGravityResizeAspect
    preview?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
    preview?.bounds = CGRect(x: 0, y: 0, width: self.view.bounds.width, height: self.view.bounds.height)
    preview?.position = CGPoint(x: self.view.bounds.midX, y: self.view.bounds.midY)
    return preview!
}()

func setupCameraSession() {
    let frontCamera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo) as AVCaptureDevice

    do {
        let deviceInput = try AVCaptureDeviceInput(device: frontCamera)

        captureSession.beginConfiguration()
        if captureSession.canAddInput(deviceInput) == true {
            captureSession.addInput(deviceInput)
        }

        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange as UInt32)]
        dataOutput.alwaysDiscardsLateVideoFrames = true

        if captureSession.canAddOutput(dataOutput) == true {
            captureSession.addOutput(dataOutput)
        }
        captureSession.commitConfiguration()

        let queue = DispatchQueue(label: "io.goodnight.videoQueue")
        dataOutput.setSampleBufferDelegate(self, queue: queue)
    }
    catch let error as NSError {
        NSLog("\(error), \(error.localizedDescription)")
    }
}
If you just need to find a single device based on simple characteristics (like a front-facing camera that can shoot video), just use AVCaptureDevice.default(_:for:position:). For example:
guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                           for: .video,
                                           position: .front)
else { fatalError("no front camera. but don't all iOS 10 devices have them?") }

// then use the device, e.g. captureSession.addInput(try AVCaptureDeviceInput(device: device))
Really that's all there is to it for most use cases.
There's also AVCaptureDeviceDiscoverySession as a replacement for the old method of iterating through the devices array. However, most of the things you'd usually iterate through the devices array for can be found using the new default(_:for:position:) method, so you might as well use that and write less code.
The cases where AVCaptureDeviceDiscoverySession is worth using are the less common, more complicated cases: say you want to find all the devices that support a certain frame rate, or use key-value observing to see when the set of available devices changes.
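For instance, a sketch of that kind of use (written with the current Swift API spellings, which differ slightly from the Swift 3 names in the question) that finds all back cameras with a format capable of 120 fps:

let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .builtInTelephotoCamera],
    mediaType: .video,
    position: .back)

let highFrameRateDevices = discovery.devices.filter { device in
    device.formats.contains { format in
        format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 120 }
    }
}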
By the way...
I've looked around StackOverflow and all the methods and answers provided are deprecated as of iOS 10, Swift 3 and Xcode 8.
If you read Apple's docs for those methods (at least this one, this one, and this one), you'll see along with those deprecation warnings some recommendations for what to use instead. There's also a guide to the iOS 10 / Swift 3 photo capture system and some sample code that both show current best practices for these APIs.
If you explicitly need the front camera, you can use AVCaptureDeviceDiscoverySession as specified here.
https://developer.apple.com/reference/avfoundation/avcapturedevicediscoverysession/2361539-init
This allows you to specify the types of devices you want to search for. The following (untested) should give you the front facing camera.
let deviceSessions = AVCaptureDeviceDiscoverySession(deviceTypes: [AVCaptureDeviceType.builtInWideAngleCamera],
                                                     mediaType: AVMediaTypeVideo,
                                                     position: AVCaptureDevicePosition.front)
The deviceSessions object has a devices property, which is an array of AVCaptureDevice containing only the devices matching that search criteria:
deviceSessions?.devices
That array should contain either 0 or 1 devices, depending on whether the device has a front-facing camera (some iPods won't, for example).
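So, keeping the Swift 3 spellings from above (and assuming the captureSession from the question's code), using the result is just a matter of taking the first element:

if let frontCamera = deviceSessions?.devices.first,
   let input = try? AVCaptureDeviceInput(device: frontCamera) {
    captureSession.addInput(input)
}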