How to set back camera zoom level to 0.5x using Swift?

I have the zoom feature working (1x onwards) for a custom camera implemented with AVFoundation. This is fine up to the iPhone X models, but I want 0.5x zoom on the iPhone 11 and iPhone 11 Pro.
The code I wrote does not get it down to 0.5x. I have tried all possible combinations of [.builtInTripleCamera, .builtInDualWideCamera, .builtInUltraWideCamera], and the capture device with device type .builtInUltraWideCamera does not report 0.5 for minAvailableVideoZoomFactor.
While testing on an iPhone 11, I also removed [.builtInDualCamera, .builtInTelephotoCamera, .builtInWideAngleCamera, .builtInTrueDepthCamera] from the deviceTypes.
I'd appreciate any help solving this. Below is the code, which works from 1x zoom onwards.
/// Called from -handlePinchGesture
private func zoom(_ scale: CGFloat) {
    let captureDevice = cameraDevice(.back)
    do {
        try captureDevice?.lockForConfiguration()
        var minZoomFactor: CGFloat = captureDevice?.minAvailableVideoZoomFactor ?? 1.0
        let maxZoomFactor: CGFloat = captureDevice?.maxAvailableVideoZoomFactor ?? 1.0
        if #available(iOS 13.0, *) {
            if captureDevice?.deviceType == .builtInDualWideCamera || captureDevice?.deviceType == .builtInTripleCamera || captureDevice?.deviceType == .builtInUltraWideCamera {
                minZoomFactor = 0.5
            }
        }
        zoomScale = max(minZoomFactor, min(beginZoomScale * scale, maxZoomFactor))
        captureDevice?.videoZoomFactor = zoomScale
        captureDevice?.unlockForConfiguration()
    } catch {
        print("ERROR: locking configuration")
    }
}
@objc private func handlePinchGesture(_ recognizer: UIPinchGestureRecognizer) {
    var allTouchesOnPreviewLayer = true
    let numTouch = recognizer.numberOfTouches
    for i in 0 ..< numTouch {
        let location = recognizer.location(ofTouch: i, in: view)
        let convertedTouch = previewLayer.convert(location, from: previewLayer.superlayer)
        if !previewLayer.contains(convertedTouch) {
            allTouchesOnPreviewLayer = false
            break
        }
    }
    if allTouchesOnPreviewLayer {
        zoom(recognizer.scale)
    }
}
func cameraDevice(_ position: AVCaptureDevice.Position) -> AVCaptureDevice? {
    var deviceTypes = [AVCaptureDevice.DeviceType]()
    deviceTypes.append(contentsOf: [.builtInDualCamera, .builtInTelephotoCamera, .builtInWideAngleCamera, .builtInTrueDepthCamera])
    if #available(iOS 13.0, *) {
        deviceTypes.append(contentsOf: [.builtInTripleCamera, .builtInDualWideCamera, .builtInUltraWideCamera])
    }
    let availableCameraDevices = AVCaptureDevice.DiscoverySession(deviceTypes: deviceTypes, mediaType: .video, position: position).devices
    guard availableCameraDevices.isEmpty == false else {
        debugPrint("ERROR: No camera devices found!!!")
        return nil
    }
    for device in availableCameraDevices {
        if device.position == position {
            return device
        }
    }
    guard let defaultDevice = AVCaptureDevice.default(for: AVMediaType.video) else {
        debugPrint("ERROR: Can't initialize default back camera!!!")
        return nil
    }
    return defaultDevice
}

Update for people who are looking to set the optical zoom level to 0.5x
courtesy: https://github.com/NextLevel/NextLevel/issues/187
public class func primaryVideoDevice(forPosition position: AVCaptureDevice.Position) -> AVCaptureDevice? {
    // -- Changes begun
    if #available(iOS 13.0, *) {
        let hasUltraWideCamera: Bool = true // Set this variable to true if your device is one of the following - iPhone 11, iPhone 11 Pro, & iPhone 11 Pro Max
        if hasUltraWideCamera {
            // Your iPhone has UltraWideCamera.
            let deviceTypes: [AVCaptureDevice.DeviceType] = [AVCaptureDevice.DeviceType.builtInUltraWideCamera]
            let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: deviceTypes, mediaType: AVMediaType.video, position: position)
            return discoverySession.devices.first
        }
    }
    // -- Changes end
    var deviceTypes: [AVCaptureDevice.DeviceType] = [AVCaptureDevice.DeviceType.builtInWideAngleCamera] // builtInWideAngleCamera // builtInUltraWideCamera
    if #available(iOS 11.0, *) {
        deviceTypes.append(.builtInDualCamera)
    } else {
        deviceTypes.append(.builtInDuoCamera)
    }
    // prioritize duo camera systems before wide angle
    let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: deviceTypes, mediaType: AVMediaType.video, position: position)
    for device in discoverySession.devices {
        if #available(iOS 11.0, *) {
            if device.deviceType == AVCaptureDevice.DeviceType.builtInDualCamera {
                return device
            }
        } else {
            if device.deviceType == AVCaptureDevice.DeviceType.builtInDuoCamera {
                return device
            }
        }
    }
    return discoverySession.devices.first
}
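To illustrate how this might be used: once the ultra-wide device is returned, a videoZoomFactor of 1.0 already gives the widest ("0.5x") field of view. A minimal usage sketch, assuming the function above lives in a class I'll call CameraController (that enclosing type is not shown in the snippet):
// Usage sketch (CameraController is a hypothetical enclosing class for primaryVideoDevice).
if let device = CameraController.primaryVideoDevice(forPosition: .back) {
    do {
        try device.lockForConfiguration()
        // On the ultra-wide camera, 1.0 is the minimum zoom factor and
        // corresponds to what the stock Camera app labels "0.5x".
        device.videoZoomFactor = 1.0
        device.unlockForConfiguration()
    } catch {
        print("ERROR: could not lock configuration: \(error)")
    }
}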

The minimum videoZoomFactor of an AVCaptureDevice can't be less than 1.0 according to the Apple docs. It's a little confusing because, depending on which camera you've selected, a zoom factor of 1 corresponds to a different field of view or optical viewing angle. The default iPhone camera app shows a label reading "0.5", but that's just a label for the ultra-wide lens expressed relative to the standard camera's zoom factor.
You're already reading the minAvailableVideoZoomFactor from the device (which will probably be 1), so use the device's min and max to bound the value you assign to captureDevice.videoZoomFactor. Then, once you've selected the ultra-wide lens, setting the zoom factor to 1 will be as wide as you can go (a factor of 0.5 relative to the standard lens's field of view).
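For example, the question's zoom(_:) could drop the hard-coded 0.5 and simply clamp to whatever the selected device reports. A small sketch that reuses the question's cameraDevice(_:), zoomScale and beginZoomScale, so treat it as an adaptation rather than verified drop-in code:
private func zoom(_ scale: CGFloat) {
    guard let captureDevice = cameraDevice(.back) else { return }
    do {
        try captureDevice.lockForConfiguration()
        defer { captureDevice.unlockForConfiguration() }
        // Clamp to the range this specific device supports; when the ultra-wide
        // camera is selected, its minimum factor of 1.0 is already the "0.5x" view.
        let minFactor = captureDevice.minAvailableVideoZoomFactor
        let maxFactor = captureDevice.maxAvailableVideoZoomFactor
        zoomScale = max(minFactor, min(beginZoomScale * scale, maxFactor))
        captureDevice.videoZoomFactor = zoomScale
    } catch {
        print("ERROR: locking configuration: \(error)")
    }
}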

The problem is that when you ask discoverySession.devices for a device of some type, it can return a default device that does not support the ultra-wide capability you need.
That was the case for me on an iPhone 12 Pro Max: it returned only one device for the back position, reporting type builtInWideAngleCamera, but that was misleading; it was the middle (wide) camera, not ultra wide and not telephoto. I don't know why Apple's developers made it like that; it looks like an outdated legacy architecture.
The solution was not obvious: use AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) to get the real device capable of zooming from 1 (your logical 0.5).
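A minimal sketch of that lookup with fallbacks (the helper name and the fallback order are my own assumptions, not the exact code from this answer):
import AVFoundation

func backCameraPreferringWideSystems() -> AVCaptureDevice? {
    if #available(iOS 13.0, *) {
        // Ask for the composite camera systems first; on supported hardware a
        // zoom factor of 1.0 on these maps to the ultra-wide ("0.5x") view.
        if let triple = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) {
            return triple
        }
        if let dualWide = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back) {
            return dualWide
        }
    }
    // Fallback for devices without ultra-wide hardware.
    return AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
}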

We cannot set the zoom factor to less than 1.
I resolved this issue by using ".builtInDualWideCamera".
In this case, we use the ultra-wide camera with a zoom factor of 2.0 (the default value), which is equivalent to the normal zoom factor on the wide-angle camera (the minimum value will be 1.0).
If your iPhone doesn't support ".builtInDualWideCamera", fall back to ".builtInWideAngleCamera" as usual with a zoom factor of 1.0 (the minimum value). A usage sketch for applying this default factor follows the code below.
func getCameraDevices() -> [AVCaptureDevice] {
    var deviceTypes = [AVCaptureDevice.DeviceType]()
    if #available(iOS 13.0, *) {
        deviceTypes.append(contentsOf: [.builtInDualWideCamera])
        self.isUltraWideCamera = true
        self.defaultZoomFactor = 2.0
    }
    if deviceTypes.isEmpty {
        deviceTypes.append(contentsOf: [.builtInWideAngleCamera])
        self.isUltraWideCamera = false
        self.defaultZoomFactor = 1.0
    }
    return AVCaptureDevice.DiscoverySession(deviceTypes: deviceTypes, mediaType: .video, position: .unspecified).devices
}
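And, for illustration, a hedged usage sketch that applies the defaultZoomFactor stored above once the session is configured (applyDefaultZoom(to:) is a hypothetical method on the same class):
// Usage sketch: start the dual-wide camera at its "1x-equivalent" zoom.
func applyDefaultZoom(to device: AVCaptureDevice) {
    do {
        try device.lockForConfiguration()
        // On .builtInDualWideCamera a factor of 2.0 matches 1.0 on the plain wide
        // camera; setting 1.0 here would give the ultra-wide ("0.5x") field of view.
        device.videoZoomFactor = self.defaultZoomFactor
        device.unlockForConfiguration()
    } catch {
        print("Could not lock configuration: \(error)")
    }
}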

Related

Can I use zoom with ARView?

I would like my users to be able to zoom in my AR application. Is it possible to zoom using ARView?
I have written the following code and added it to a tap action.
let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes:
    [.builtInTrueDepthCamera, .builtInDualCamera, .builtInWideAngleCamera],
    mediaType: .video, position: .back)
let devices: [AVCaptureDevice] = discoverySession.devices
let zoomFactor: CGFloat = 2
for de in devices {
    print("name of camera")
    print(de.localizedName)
    do {
        try de.lockForConfiguration()
        de.videoZoomFactor = zoomFactor
        de.unlockForConfiguration()
    } catch {
        print("error")
    }
}
I run it on an iPhone X and see this result in the log:
name of camera
Back Dual Camera
name of camera
Back Camera
But it has no effect on the zoom.
Is it even possible to zoom in while using ARKit?
You can't use the camera's zoom with RealityKit. RealityKit uses the camera feed provided by ARKit, and it's fixed at a focal length of 28mm. But you can zoom the ARView itself, like Andy Jazz's answer.
Try this approach:
// Use ARView as a subview of UIView
@IBOutlet var arView: ARView!

// Set minimumValue and default value of slider to 1
@IBAction func sliderForZooming(_ sender: UISlider) {
    // CGAffineTransform is a 3x3 matrix
    arView.transform = .init(a: CGFloat(sender.value), b: 0, c: 0,
                             d: CGFloat(sender.value), tx: 0, ty: 0)
}
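If you want pinch-to-zoom instead of a slider, the same transform idea applies. A minimal sketch (the gesture wiring and the upper clamp of 5.0 are assumptions):
@objc func handlePinch(_ recognizer: UIPinchGestureRecognizer) {
    // Scale the ARView layer itself; clamp so it never shrinks below 1x.
    let scale = max(1.0, min(recognizer.scale, 5.0))
    arView.transform = CGAffineTransform(scaleX: scale, y: scale)
}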

Swift5 UIView rotation behavior different for iPhone vs iPad

I have struggled with odd differences in behavior between iPhone and iPad (both in simulators and on real devices), and despite trying different ways to diagnose this, with many visits to Stack Overflow, I am still struggling to find the root cause. Specifically, I have a simple Test view controller that performs as expected on iPad, but the same code behaves differently, and not as expected, on iPhone. I have one UIImageView centered on each device in portrait mode with 10px margins on the left, right, and top. When I rotate the device, the objective is to resize the image in landscape so it keeps those 10px margins, i.e. it gets scaled to fit the new geometry and the image always appears in its original orientation. The iPad does this perfectly without a lot of code. However, the iPhone performs the scaling correctly, but the image does not stay in its original orientation: it rotates with the device. How can the same code produce two different results?
I can solve this by detecting iPhone and writing code to rotate the image and determine the new origin for placement; in fact, I have this working. However, it doesn't seem right to have different logic for iPhone versus iPad.
Some details: I am using Swift 5, Xcode 12, macOS 10.15.6, Simulator 11.5, an iPhone 11 with iOS 14.0.1, and an iPad 7th gen with iOS 14.0.1.
I use Interface Builder to build the layout initially and link it to code with IBOutlet; however, I set translatesAutoresizingMaskIntoConstraints = false and place the UIImageView with anchor constraints programmatically. I use Notification Center to add and remove an observer that triggers rotation events, together with begin/endGeneratingDeviceOrientationNotifications(). I override shouldAutorotate as true, supportedInterfaceOrientations as all, and preferredInterfaceOrientationForPresentation as portrait in my Test VC, and I create extensions for UINavigationController and UITabBarController in SceneDelegate to propagate these values, since the Test VC is embedded in a nav controller and uses a tab bar. Info.plist lists all 4 modes for supported interface orientations, the General tab of the Xcode project selects iPhone and iPad as deployable, and all 4 orientation modes are unselected for Device Orientation.
I can add code here if helpful, as well as screenshots. If anyone has had a similar experience or any ideas about this, I would be grateful! Here is the code for the Test VC:
import UIKit
class Test: UIViewController {
@IBOutlet weak var testImage: UIImageView!
let debug = true
let program = "TestViewController"
var deviceSize = CGRect.zero
var deviceWidth: CGFloat = 0
var deviceHeight: CGFloat = 0
let imageAsset = UIImage(named: "Cera.jpg")
var aspectRatio: CGFloat = 0.0
override var shouldAutorotate: Bool { return true }
override var supportedInterfaceOrientations: UIInterfaceOrientationMask { return UIInterfaceOrientationMask.all }
override var preferredInterfaceOrientationForPresentation: UIInterfaceOrientation { return UIInterfaceOrientation.portrait }
// This routine is triggered the first time this view controller is loaded
override func viewDidLoad() {
super.viewDidLoad()
let rtn = "viewDidLoad"
top = self.view.topAnchor
lead = self.view.leadingAnchor
deviceSize = UIScreen.main.bounds
deviceWidth = deviceSize.width
deviceHeight = deviceSize.height
UIDevice.current.beginGeneratingDeviceOrientationNotifications()
NotificationCenter.default.addObserver(self, selector: #selector(deviceRotated), name: UIDevice.orientationDidChangeNotification, object: nil)
if debug { print(">>> \(program): \(rtn): device width[\(deviceWidth)] device height[\(deviceHeight)]") }
determineOrientation()
if debug { print(">>> \(program): \(rtn): rotated device width[\(rotatedDeviceWidth)] rotated device height[\(rotatedDeviceHeight)]") }
testImage.image = imageAsset
let imageWidth = testImage.image!.size.width
let imageHeight = testImage.image!.size.height
aspectRatio = imageHeight / imageWidth
calculateContraints()
}
// This routine triggered every time this view controller is presented
override func viewWillAppear(_ animated: Bool) {
let rtn = "viewWillAppear"
if debug { print(">>> \(program): \(rtn): device width[\(deviceWidth)] device height[\(deviceHeight)]") }
determineOrientation()
if debug { print(">>> \(program): \(rtn): rotated device width[\(rotatedDeviceWidth)] rotated device height[\(rotatedDeviceHeight)]") }
}
// This routine added to remove observer for rotation events
override func viewWillDisappear(_ animated: Bool) {
NotificationCenter.default.removeObserver(self, name: UIDevice.orientationDidChangeNotification, object: nil)
UIDevice.current.endGeneratingDeviceOrientationNotifications()
}
var orientation = "Portrait"
var rotatedDeviceWidth: CGFloat = 0
var rotatedDeviceHeight: CGFloat = 0
// This routine is called by "viewWillTransition" to determine the "orientation" value
func determineOrientation() {
let rtn = "determineOrientation"
if debug { print(">>> \(program): \(rtn)") }
if UIDevice.current.orientation == UIDeviceOrientation.portrait { orientation = "Portrait" }
if UIDevice.current.orientation == UIDeviceOrientation.landscapeLeft { orientation = "LandscapeLeft" }
if UIDevice.current.orientation == UIDeviceOrientation.landscapeRight { orientation = "LandscapeRight" }
if UIDevice.current.orientation == UIDeviceOrientation.portraitUpsideDown { orientation = "PortraitUpsideDown" }
if orientation == "Portrait" || orientation == "PortraitUpsideDown" {
rotatedDeviceWidth = deviceWidth
rotatedDeviceHeight = deviceHeight
} else {
rotatedDeviceWidth = deviceHeight
rotatedDeviceHeight = deviceWidth
}
}
var imageWidth: CGFloat = 0
var imageHeight: CGFloat = 0
var imageXpos: CGFloat = 0
var imageYpos: CGFloat = 0
var v: CGFloat = 0
var h: CGFloat = 0
var w: CGFloat = 0
var ht: CGFloat = 0
// This routine determines the position of the display object "testImage"
func calculateContraints() {
let rtn = "calculateContraints"
if debug { print(">>> \(program): \(rtn): orientation[\(orientation)]") }
if orientation == "Portrait" {
imageWidth = deviceWidth / 2 - 20
imageHeight = imageWidth * CGFloat(aspectRatio)
imageXpos = 10
imageYpos = 10
if debug { print(">>> \(imageWidth): \(imageHeight)") }
}
if orientation == "LandscapeLeft" {
imageWidth = rotatedDeviceWidth / 2 - 20
imageHeight = imageWidth * CGFloat(aspectRatio)
imageXpos = 10
imageYpos = 10
if debug { print(">>> \(imageWidth): \(imageHeight)") }
}
if orientation == "LandscapeRight" {
imageWidth = rotatedDeviceWidth / 2 - 20
imageHeight = imageWidth * CGFloat(aspectRatio)
imageXpos = 10
imageYpos = 10
if debug { print(">>> \(imageWidth): \(imageHeight)") }
}
if orientation == "PortraitUpsideDown" {
imageWidth = deviceWidth / 2 - 20
imageHeight = imageWidth * CGFloat(aspectRatio)
imageXpos = 10
imageYpos = 10
if debug { print(">>> \(imageWidth): \(imageHeight)") }
}
layoutConstraints(v: imageXpos, h: imageYpos, w: imageWidth, ht: imageHeight)
}
var testImageTopConstraint: NSLayoutConstraint!
var testImageLeftConstraint: NSLayoutConstraint!
var testImageWidthConstraint: NSLayoutConstraint!
var testImageHeightConstraint: NSLayoutConstraint!
var top: NSLayoutYAxisAnchor!
var lead: NSLayoutXAxisAnchor!
var trail: NSLayoutXAxisAnchor!
var bot: NSLayoutYAxisAnchor!
// This routine lays out the display object "testImage"
func layoutConstraints(v: CGFloat, h: CGFloat, w: CGFloat, ht: CGFloat) {
let rtn = "layoutConstraints"
if debug { print(">>> \(program): \(rtn)") }
testImage.translatesAutoresizingMaskIntoConstraints = false
if testImageTopConstraint != nil { testImageTopConstraint.isActive = false }
if testImageLeftConstraint != nil { testImageLeftConstraint.isActive = false }
if testImageWidthConstraint != nil { testImageWidthConstraint.isActive = false }
if testImageHeightConstraint != nil { testImageHeightConstraint.isActive = false }
testImageTopConstraint = testImage.topAnchor.constraint(equalTo: top, constant: v)
testImageLeftConstraint = testImage.leadingAnchor.constraint(equalTo: lead, constant: h)
testImageWidthConstraint = testImage.widthAnchor.constraint(equalToConstant: w)
testImageHeightConstraint = testImage.heightAnchor.constraint(equalToConstant: ht)
testImageTopConstraint.isActive = true
testImageLeftConstraint.isActive = true
testImageWidthConstraint.isActive = true
testImageHeightConstraint.isActive = true
}
}
@objc extension Test {
func deviceRotated(_ notification: NSNotification) {
let device = notification.object as! UIDevice
let deviceOrientation = device.orientation
switch deviceOrientation {
case .landscapeLeft: print("<<<Landscape Left>>>")
case .landscapeRight: print("<<<Landscape Right>>>")
case .portrait: print("<<<Portrait>>>")
case .portraitUpsideDown: print("<<<Portrait Upside Down>>>")
case .faceDown: print("<<<Face Down>>>")
case .faceUp: print("<<<Face Up>>>")
case .unknown: print("<<<Unknown>>>")
@unknown default: print("<<<Default>>>")
}
let rtn = "deviceRotated2"
determineOrientation()
if debug { print(">>> \(program): \(rtn): Device rotated to: \(orientation)") }
if debug { print(">>> \(program): \(rtn): rotated device width[\(rotatedDeviceWidth)] rotated device height[\(rotatedDeviceHeight)]") }
calculateContraints()
}
}
Here is the code in SceneDelegate.swift
extension UINavigationController {
override open var shouldAutorotate: Bool {
get {
if let visibleVC = visibleViewController { return visibleVC.shouldAutorotate }
return super.shouldAutorotate } }
override open var preferredInterfaceOrientationForPresentation: UIInterfaceOrientation {
get {
if let visibleVC = visibleViewController { return visibleVC.preferredInterfaceOrientationForPresentation }
return super.preferredInterfaceOrientationForPresentation } }
override open var supportedInterfaceOrientations: UIInterfaceOrientationMask {
get {
if let visibleVC = visibleViewController { return visibleVC.supportedInterfaceOrientations }
return super.supportedInterfaceOrientations } }
}
// ===================================================================================
// UITabBarController Extension - used to manage tab bar style
//
extension UITabBarController {
open override var childForStatusBarStyle: UIViewController? {
return selectedViewController?.childForStatusBarStyle ?? selectedViewController
}
}
// ===================================================================================
// UITabBarController Extension - used to manage rotation
//
extension UITabBarController {
override open var shouldAutorotate: Bool {
if let viewController = self.viewControllers?[self.selectedIndex] { return viewController.shouldAutorotate }
return super.shouldAutorotate }
override open var preferredInterfaceOrientationForPresentation: UIInterfaceOrientation {
if let viewController = self.viewControllers?[self.selectedIndex] { return viewController.preferredInterfaceOrientationForPresentation }
return super.preferredInterfaceOrientationForPresentation }
override open var supportedInterfaceOrientations: UIInterfaceOrientationMask {
if let viewController = self.viewControllers?[self.selectedIndex] { return viewController.supportedInterfaceOrientations }
return super.supportedInterfaceOrientations }
}
Here are the rotation results for the iPhone in the simulator:
Cera rotations for iPhone
... and iPad:
Cera rotations for iPad

Implementing AVVideoCompositing causes video rotation problems

I am using Apple's example https://developer.apple.com/library/ios/samplecode/AVCustomEdit/Introduction/Intro.html and have some issues with video transformation.
If the source assets have a preferredTransform other than identity, the output video has incorrectly rotated frames. This can be fixed if AVMutableVideoComposition has no value in its customVideoCompositorClass property and the AVMutableVideoCompositionLayerInstruction's transform is set from asset.preferredTransform. But because I'm using a custom video compositor that adopts the AVVideoCompositing protocol, I can't use the standard video compositing instructions.
How can I pre-transform the input asset tracks before their CVPixelBuffers are passed into the Metal shaders? Or is there any other way to fix this?
Fragment of original code:
func buildCompositionObjectsForPlayback(_ forPlayback: Bool, overwriteExistingObjects: Bool) {
// Proceed only if the composition objects have not already been created.
if self.composition != nil && !overwriteExistingObjects { return }
if self.videoComposition != nil && !overwriteExistingObjects { return }
guard !clips.isEmpty else { return }
// Use the naturalSize of the first video track.
let videoTracks = clips[0].tracks(withMediaType: AVMediaType.video)
let videoSize = videoTracks[0].naturalSize
let composition = AVMutableComposition()
composition.naturalSize = videoSize
/*
With transitions:
Place clips into alternating video & audio tracks in composition, overlapped by transitionDuration.
Set up the video composition to cycle between "pass through A", "transition from A to B", "pass through B".
*/
let videoComposition = AVMutableVideoComposition()
if self.transitionType == TransitionType.diagonalWipe.rawValue {
videoComposition.customVideoCompositorClass = APLDiagonalWipeCompositor.self
} else {
videoComposition.customVideoCompositorClass = APLCrossDissolveCompositor.self
}
// Every videoComposition needs these properties to be set:
videoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30) // 30 fps.
videoComposition.renderSize = videoSize
buildTransitionComposition(composition, andVideoComposition: videoComposition)
self.composition = composition
self.videoComposition = videoComposition
}
UPDATE:
I did workaround for transforming like this:
private func makeTransformedPixelBuffer(fromBuffer buffer: CVPixelBuffer, withTransform transform: CGAffineTransform) -> CVPixelBuffer? {
guard let newBuffer = renderContext?.newPixelBuffer() else {
return nil
}
// Correct transformation example I took from https://stackoverflow.com/questions/29967700/coreimage-coordinate-system
var preferredTransform = transform
preferredTransform.b *= -1
preferredTransform.c *= -1
var transformedImage = CIImage(cvPixelBuffer: buffer).transformed(by: preferredTransform)
preferredTransform = CGAffineTransform(translationX: -transformedImage.extent.origin.x, y: -transformedImage.extent.origin.y)
transformedImage = transformedImage.transformed(by: preferredTransform)
let filterContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!)
filterContext.render(transformedImage, to: newBuffer)
return newBuffer
}
But I'm wondering if there is a more memory-efficient way that avoids creating new pixel buffers.

How can I pre-transform the input asset tracks before their CVPixelBuffers are passed into the Metal shaders?

The best way to achieve maximum performance is to transform your video frame directly in the shader: just add a rotation matrix in your vertex shader.
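For example (a sketch, not the sample project's actual code), you could derive that matrix in Swift from the track's preferredTransform and hand it to the render encoder as a vertex uniform; the column layout assumes the shader multiplies matrix * vertex:
import simd
import AVFoundation
import Metal

// Build a 4x4 rotation/flip matrix from the track's preferredTransform so the
// vertex shader can rotate the quad instead of re-rendering the pixel buffer
// on the Core Image side. (tx/ty translation is intentionally ignored here;
// handle it via texture coordinates if needed.)
func makeRotationMatrix(for track: AVAssetTrack) -> simd_float4x4 {
    let t = track.preferredTransform
    return simd_float4x4(columns: (
        SIMD4<Float>(Float(t.a), Float(t.b), 0, 0),
        SIMD4<Float>(Float(t.c), Float(t.d), 0, 0),
        SIMD4<Float>(0, 0, 1, 0),
        SIMD4<Float>(0, 0, 0, 1)
    ))
}

// Bind the matrix for the vertex stage; the shader multiplies each vertex by this uniform.
func bind(rotation: simd_float4x4, to encoder: MTLRenderCommandEncoder, at index: Int) {
    var matrix = rotation
    encoder.setVertexBytes(&matrix, length: MemoryLayout<simd_float4x4>.stride, index: index)
}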

zoom in and zoom out camera on pinch gesture swift

I am using the front camera in my app. I want users to be able to zoom the camera in and out while taking photos.
I tried this code:
let device = AVCaptureDevice.default(for: .video)
print(sender.scale)
let vZoomFactor = sender.scale * prevZoomFactor
if sender.state == .ended {
    prevZoomFactor = vZoomFactor >= 1 ? vZoomFactor : 1
}
if sender.state == .changed {
    do {
        try device!.lockForConfiguration()
        if vZoomFactor <= device!.activeFormat.videoMaxZoomFactor {
            device!.videoZoomFactor = max(1.0, min(vZoomFactor, device!.activeFormat.videoMaxZoomFactor))
            device?.unlockForConfiguration()
        } else {
            print("Unable to set videoZoom: (max \(device!.activeFormat.videoMaxZoomFactor), asked \(vZoomFactor))")
        }
    } catch {
        print("\(error.localizedDescription)")
    }
}
Everything works fine with the back camera, but the zoom is not applied to the front camera.
Well, after spending hours on this code I found where I was making the mistake:
let device = AVCaptureDevice.default(for: .video)
This gets the back camera by default and works perfectly, but when I switch to the front camera it is still treated as the back camera, so I just added a condition:
if currentcam == frontcam {
    let device = frontcam
    // did other stuff for zooming
} else {
    let device = AVCaptureDevice.default(for: .video)
    // did other stuff for zooming
}
This worked fine for me.
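An alternative that avoids the if/else is to request the front device explicitly instead of relying on the .video default. A short sketch reusing sender and prevZoomFactor from the question, so treat it as an outline rather than drop-in code:
// Ask for the front wide-angle camera explicitly; the plain .video default is the back camera.
if let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front) {
    do {
        try frontCamera.lockForConfiguration()
        defer { frontCamera.unlockForConfiguration() }
        // Front cameras usually expose a smaller zoom range; clamp to what's available.
        let requested = sender.scale * prevZoomFactor
        frontCamera.videoZoomFactor = max(1.0, min(requested, frontCamera.activeFormat.videoMaxZoomFactor))
    } catch {
        print(error.localizedDescription)
    }
}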

Why is an iPhone XS getting worse CPU performance when using the camera live than an iPhone 6S Plus?

I'm using live camera output to update a CIImage on an MTKView. My main issue is a large, negative performance difference where an older iPhone gets better CPU performance than a newer one, despite every setting I've come across being the same.
This is a lengthy post, but I decided to include these details since they could be important to the cause of this problem. Please let me know what else I can include.
Below, I have my captureOutput function with two debug bools that I can turn on and off while running. I used this to try to determine the cause of my issue.
applyLiveFilter - bool whether or not to manipulate the CIImage with a CIFilter.
updateMetalView - bool whether or not to update the MTKView's CIImage.
// live output from camera
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection){
/*
Create CIImage from camera.
Here I save a few percent of CPU by using a function
to convert a sampleBuffer to a Metal texture, but
whether I use this or the commented out code
(without captureOutputMTLOptions) does not have
significant impact.
*/
guard let texture:MTLTexture = convertToMTLTexture(sampleBuffer: sampleBuffer) else{
return
}
var cameraImage:CIImage = CIImage(mtlTexture: texture, options: captureOutputMTLOptions)!
var transform: CGAffineTransform = .identity
transform = transform.scaledBy(x: 1, y: -1)
transform = transform.translatedBy(x: 0, y: -cameraImage.extent.height)
cameraImage = cameraImage.transformed(by: transform)
/*
// old non-Metal way of getting the ciimage from the cvPixelBuffer
guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else
{
return
}
var cameraImage:CIImage = CIImage(cvPixelBuffer: pixelBuffer)
*/
var orientation = UIImage.Orientation.right
if(isFrontCamera){
orientation = UIImage.Orientation.leftMirrored
}
// apply filter to camera image
if debug_applyLiveFilter {
cameraImage = self.applyFilterAndReturnImage(ciImage: cameraImage, orientation: orientation, currentCameraRes:currentCameraRes!)
}
DispatchQueue.main.async(){
if debug_updateMetalView {
self.MTLCaptureView!.image = cameraImage
}
}
}
Below is a chart of results between both phones toggling the different combinations of bools discussed above:
Even without the Metal view's CIImage updating and with no filters being applied, the iPhone XS's CPU usage is 2% higher than the iPhone 6S Plus's, which isn't a significant overhead, but it makes me suspect that the camera capture itself differs between the devices.
My AVCaptureSession's preset is set identically between both phones
(AVCaptureSession.Preset.hd1280x720)
The CIImage created from captureOutput is the same size (extent)
between both phones.
Are there any settings I need to set manually between these two phones AVCaptureDevice's settings, including activeFormat properties, to make them the same between devices?
The settings I have now are:
if let captureDevice = AVCaptureDevice.default(for:AVMediaType.video) {
do {
try captureDevice.lockForConfiguration()
captureDevice.isSubjectAreaChangeMonitoringEnabled = true
captureDevice.focusMode = AVCaptureDevice.FocusMode.continuousAutoFocus
captureDevice.exposureMode = AVCaptureDevice.ExposureMode.continuousAutoExposure
captureDevice.unlockForConfiguration()
} catch {
// Handle errors here
print("There was an error focusing the device's camera")
}
}
My MTKView is based on code written by Simon Gladman, with some edits for performance and to scale the render before it is scaled up to the width of the screen using Core Animation, as suggested by Apple.
class MetalImageView: MTKView
{
let colorSpace = CGColorSpaceCreateDeviceRGB()
var textureCache: CVMetalTextureCache?
var sourceTexture: MTLTexture!
lazy var commandQueue: MTLCommandQueue =
{
[unowned self] in
return self.device!.makeCommandQueue()
}()!
lazy var ciContext: CIContext =
{
[unowned self] in
return CIContext(mtlDevice: self.device!)
}()
override init(frame frameRect: CGRect, device: MTLDevice?)
{
super.init(frame: frameRect,
device: device ?? MTLCreateSystemDefaultDevice())
if super.device == nil
{
fatalError("Device doesn't support Metal")
}
CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, self.device!, nil, &textureCache)
framebufferOnly = false
enableSetNeedsDisplay = true
isPaused = true
preferredFramesPerSecond = 30
}
required init(coder: NSCoder)
{
fatalError("init(coder:) has not been implemented")
}
// The image to display
var image: CIImage?
{
didSet
{
setNeedsDisplay()
}
}
override func draw(_ rect: CGRect)
{
guard var
image = image,
let targetTexture:MTLTexture = currentDrawable?.texture else
{
return
}
let commandBuffer = commandQueue.makeCommandBuffer()
let customDrawableSize:CGSize = drawableSize
let bounds = CGRect(origin: CGPoint.zero, size: customDrawableSize)
let originX = image.extent.origin.x
let originY = image.extent.origin.y
let scaleX = customDrawableSize.width / image.extent.width
let scaleY = customDrawableSize.height / image.extent.height
let scale = min(scaleX*IVScaleFactor, scaleY*IVScaleFactor)
image = image
.transformed(by: CGAffineTransform(translationX: -originX, y: -originY))
.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
ciContext.render(image,
to: targetTexture,
commandBuffer: commandBuffer,
bounds: bounds,
colorSpace: colorSpace)
commandBuffer?.present(currentDrawable!)
commandBuffer?.commit()
}
}
My AVCaptureSession (captureSession) and AVCaptureVideoDataOutput (videoOutput) are setup below:
func setupCameraAndMic(){
let backCamera = AVCaptureDevice.default(for:AVMediaType.video)
var error: NSError?
var videoInput: AVCaptureDeviceInput!
do {
videoInput = try AVCaptureDeviceInput(device: backCamera!)
} catch let error1 as NSError {
error = error1
videoInput = nil
print(error!.localizedDescription)
}
if error == nil &&
captureSession!.canAddInput(videoInput) {
guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, MetalDevice, nil, &textureCache) == kCVReturnSuccess else {
print("Error: could not create a texture cache")
return
}
captureSession!.addInput(videoInput)
setDeviceFrameRateForCurrentFilter(device:backCamera)
stillImageOutput = AVCapturePhotoOutput()
if captureSession!.canAddOutput(stillImageOutput!) {
captureSession!.addOutput(stillImageOutput!)
let q = DispatchQueue(label: "sample buffer delegate", qos: .default)
videoOutput.setSampleBufferDelegate(self, queue: q)
videoOutput.videoSettings = [
kCVPixelBufferPixelFormatTypeKey as AnyHashable as! String: NSNumber(value: kCVPixelFormatType_32BGRA),
kCVPixelBufferMetalCompatibilityKey as String: true
]
videoOutput.alwaysDiscardsLateVideoFrames = true
if captureSession!.canAddOutput(videoOutput){
captureSession!.addOutput(videoOutput)
}
captureSession!.startRunning()
}
}
setDefaultFocusAndExposure()
}
The video and mic are recorded on two separate streams. Details on the microphone and recording video have been left out since my focus is performance of live camera output.
UPDATE - I have a simplified test project on GitHub that makes it a lot easier to test the problem I'm having: https://github.com/PunchyBass/Live-Filter-test-project
Off the top of my head, you are not comparing apples with apples. Even though you are running a 2.49 GHz A12 against a 1.85 GHz A9, the differences between the cameras are also huge; even if you use them with the same parameters, several features of the XS's camera require more CPU resources (dual camera, stabilization, Smart HDR, etc.).
Sorry for the lack of sources; I tried to find metrics for the CPU cost of those features, but I couldn't. Unfortunately for your needs, that information isn't relevant for marketing when they are selling it as the best camera ever on a smartphone.
They are selling it as the best processor as well; we don't know what would happen using the XS camera with an A9 processor, it would probably crash, we will never know...
P.S. Are your metrics for the whole processor or for the cores in use? For the whole processor you also need to consider the other tasks the device may be executing; per core, it is 21% of 200% against 39% of 600% (roughly 10.5% versus 6.5% of total capacity).