This is my first question here, so if it's not explained well, please let me know.
Basically I'm trying to access the front camera with this code:
captureSession = AVCaptureSession()
captureSession?.sessionPreset = AVCaptureSessionPreset1920x1080

let cameraDevice = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera,
                                                 mediaType: AVMediaTypeVideo,
                                                 position: .front)
print(cameraDevice!)

do {
    let input = try AVCaptureDeviceInput(device: cameraDevice)
    print(captureSession.canAddInput(input))
    if captureSession.canAddInput(input) {
        captureSession?.addInput(input)
        stillImageOutput = AVCaptureStillImageOutput()
        if captureSession.canAddOutput(stillImageOutput) {
            captureSession.addOutput(stillImageOutput)
            previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
            previewLayer?.connection.videoOrientation = .portrait
            cameraView.layer.addSublayer(previewLayer!)
            stillImageOutput?.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
            captureSession?.startRunning()
        }
    }
} catch {
    print(error)
}
Using that code, print(captureSession.canAddInput(input)) prints false, but when I change the position to the back camera everything works like a charm. Am I missing something?
I'm not sure what device you're using, but on an iPhone 6 / 6 Plus, that preset resolution is too high for the front camera.
https://developer.apple.com/library/content/documentation/DeviceInformation/Reference/iOSDeviceCompatibility/Cameras/Cameras.html
As per the Apple documentation, on those devices the highest still image capture resolution is 1280 x 960. Try using a lower preset and see if it works on the front camera.
This could explain why it works fine on the back camera, because that DOES support the higher resolution.
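As a concrete example, here is a hedged sketch (using the same iOS 10-era API as the question) of falling back to a preset the front camera can handle; which preset you prefer is up to you:

// Sketch: prefer 1080p, but fall back if the current inputs can't deliver it.
// canSetSessionPreset(_:) checks the preset against the session's current inputs,
// so call it after the front-camera input has been added.
if captureSession.canSetSessionPreset(AVCaptureSessionPreset1920x1080) {
    captureSession.sessionPreset = AVCaptureSessionPreset1920x1080
} else if captureSession.canSetSessionPreset(AVCaptureSessionPreset1280x720) {
    captureSession.sessionPreset = AVCaptureSessionPreset1280x720
} else {
    captureSession.sessionPreset = AVCaptureSessionPresetPhoto
}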
I downloaded Apple's sample project Recognizing Objects in Live Capture.
When I tried the app, I noticed that if the object to recognize is at the top or the bottom of the camera view, the app doesn't recognize it:
In this first image the banana is in the center of the camera view and the app is able to recognize it.
[image: object in center]
In these two images the banana is near the camera view's border and the app is not able to recognize it.
[image: object at top]
[image: object at bottom]
This is how session and previewLayer are set:
func setupAVCapture() {
    var deviceInput: AVCaptureDeviceInput!

    // Select a video device, make an input
    let videoDevice = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back).devices.first
    do {
        deviceInput = try AVCaptureDeviceInput(device: videoDevice!)
    } catch {
        print("Could not create video device input: \(error)")
        return
    }

    session.beginConfiguration()
    session.sessionPreset = .vga640x480 // Model image size is smaller.

    // Add a video input
    guard session.canAddInput(deviceInput) else {
        print("Could not add video device input to the session")
        session.commitConfiguration()
        return
    }
    session.addInput(deviceInput)

    if session.canAddOutput(videoDataOutput) {
        session.addOutput(videoDataOutput)
        // Add a video data output
        videoDataOutput.alwaysDiscardsLateVideoFrames = true
        videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)]
        videoDataOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
    } else {
        print("Could not add video data output to the session")
        session.commitConfiguration()
        return
    }

    let captureConnection = videoDataOutput.connection(with: .video)
    // Always process the frames
    captureConnection?.isEnabled = true
    do {
        try videoDevice!.lockForConfiguration()
        let dimensions = CMVideoFormatDescriptionGetDimensions((videoDevice?.activeFormat.formatDescription)!)
        bufferSize.width = CGFloat(dimensions.width)
        bufferSize.height = CGFloat(dimensions.height)
        videoDevice!.unlockForConfiguration()
    } catch {
        print(error)
    }

    session.commitConfiguration()

    previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
    rootLayer = previewView.layer
    previewLayer.frame = rootLayer.bounds
    rootLayer.addSublayer(previewLayer)
}
You can download the project here.
I am wondering whether this is normal or not.
Is there any way to fix it?
Does it use square images for Core ML, so that the top and bottom regions are not included?
Any hints? Thanks.
That's probably because imageCropAndScaleOption is set to centerCrop.
The Core ML model expects a square image, but the video frames are not square. You can change this by setting imageCropAndScaleOption on the VNCoreMLRequest (for example to .scaleFill). However, the results may not be as good as with center crop; it depends on how the model was originally trained.
See also VNImageCropAndScaleOption in the Apple docs.
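A minimal sketch of where that option might be set when the request is created; makeClassificationRequest is just an illustrative helper name, not the name used in Apple's project:

import CoreML
import Vision

func makeClassificationRequest(model: MLModel) throws -> VNCoreMLRequest {
    let visionModel = try VNCoreMLModel(for: model)
    let request = VNCoreMLRequest(model: visionModel) { request, error in
        // handle request.results here
    }
    // Default is .centerCrop, which drops the top and bottom of a non-square frame.
    // .scaleFill feeds the entire frame to the model (at the cost of some distortion).
    request.imageCropAndScaleOption = .scaleFill
    return request
}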
I'm using live camera output to update a CIImage on an MTKView. My main issue is a large, unexpected performance difference: an older iPhone gets better CPU performance than a newer one, even though every setting I've come across is the same on both.
This is a lengthy post, but I decided to include these details since they could be important to the cause of this problem. Please let me know what else I can include.
Below, I have my captureOutput function with two debug bools that I can turn on and off while running. I used this to try to determine the cause of my issue.
applyLiveFilter - bool whether or not to manipulate the CIImage with a CIFilter.
updateMetalView - bool whether or not to update the MTKView's CIImage.
// live output from camera
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

    /*
     Create CIImage from camera.
     Here I save a few percent of CPU by using a function
     to convert a sampleBuffer to a Metal texture, but
     whether I use this or the commented out code
     (without captureOutputMTLOptions) does not have
     significant impact.
     */
    guard let texture: MTLTexture = convertToMTLTexture(sampleBuffer: sampleBuffer) else {
        return
    }

    var cameraImage: CIImage = CIImage(mtlTexture: texture, options: captureOutputMTLOptions)!

    var transform: CGAffineTransform = .identity
    transform = transform.scaledBy(x: 1, y: -1)
    transform = transform.translatedBy(x: 0, y: -cameraImage.extent.height)
    cameraImage = cameraImage.transformed(by: transform)

    /*
    // old non-Metal way of getting the ciimage from the cvPixelBuffer
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    var cameraImage: CIImage = CIImage(cvPixelBuffer: pixelBuffer)
    */

    var orientation = UIImage.Orientation.right
    if isFrontCamera {
        orientation = UIImage.Orientation.leftMirrored
    }

    // apply filter to camera image
    if debug_applyLiveFilter {
        cameraImage = self.applyFilterAndReturnImage(ciImage: cameraImage, orientation: orientation, currentCameraRes: currentCameraRes!)
    }

    DispatchQueue.main.async {
        if debug_updateMetalView {
            self.MTLCaptureView!.image = cameraImage
        }
    }
}
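The convertToMTLTexture(sampleBuffer:) helper isn't shown above. For context, here is a minimal sketch of what such a function typically looks like, assuming a CVMetalTextureCache (textureCache) created earlier and 32BGRA camera output; the actual implementation in the project may differ:

func convertToMTLTexture(sampleBuffer: CMSampleBuffer) -> MTLTexture? {
    guard let cache = textureCache,
          let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }

    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)

    var cvTexture: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                           cache,
                                                           pixelBuffer,
                                                           nil,
                                                           .bgra8Unorm, // matches kCVPixelFormatType_32BGRA
                                                           width,
                                                           height,
                                                           0,
                                                           &cvTexture)
    guard status == kCVReturnSuccess, let cvTexture = cvTexture else { return nil }
    return CVMetalTextureGetTexture(cvTexture)
}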
Below is a chart of results between both phones toggling the different combinations of bools discussed above:
Even with the Metal view's CIImage not updating and no filters being applied, the iPhone XS's CPU usage is 2% higher than the iPhone 6S Plus's. That isn't significant overhead, but it makes me suspect the camera capture itself differs between the devices somehow.
My AVCaptureSession's preset is set identically between both phones
(AVCaptureSession.Preset.hd1280x720)
The CIImage created from captureOutput is the same size (extent)
between both phones.
Are there any AVCaptureDevice settings, including activeFormat properties, that I need to set manually so the two phones behave the same?
The settings I have now are:
if let captureDevice = AVCaptureDevice.default(for: AVMediaType.video) {
    do {
        try captureDevice.lockForConfiguration()
        captureDevice.isSubjectAreaChangeMonitoringEnabled = true
        captureDevice.focusMode = AVCaptureDevice.FocusMode.continuousAutoFocus
        captureDevice.exposureMode = AVCaptureDevice.ExposureMode.continuousAutoExposure
        captureDevice.unlockForConfiguration()
    } catch {
        // Handle errors here
        print("There was an error focusing the device's camera")
    }
}
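One thing worth trying (a hedged sketch, not a confirmed fix) is logging activeFormat on both phones and pinning them to the same frame rate, so the newer device can't silently pick a more expensive format or a higher frame rate:

if let captureDevice = AVCaptureDevice.default(for: AVMediaType.video) {
    do {
        try captureDevice.lockForConfiguration()

        // Log the format actually in use on each device so the two phones can be compared.
        print("activeFormat:", captureDevice.activeFormat)

        // Pin both devices to the same frame rate (30 fps here, purely as an example).
        let frameDuration = CMTimeMake(value: 1, timescale: 30)
        captureDevice.activeVideoMinFrameDuration = frameDuration
        captureDevice.activeVideoMaxFrameDuration = frameDuration

        captureDevice.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}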
My MTKView is based on code written by Simon Gladman, with some edits for performance and to scale the render before it is scaled up to the width of the screen using Core Animation, as suggested by Apple.
class MetalImageView: MTKView {

    let colorSpace = CGColorSpaceCreateDeviceRGB()

    var textureCache: CVMetalTextureCache?
    var sourceTexture: MTLTexture!

    lazy var commandQueue: MTLCommandQueue = { [unowned self] in
        return self.device!.makeCommandQueue()
    }()!

    lazy var ciContext: CIContext = { [unowned self] in
        return CIContext(mtlDevice: self.device!)
    }()

    override init(frame frameRect: CGRect, device: MTLDevice?) {
        super.init(frame: frameRect,
                   device: device ?? MTLCreateSystemDefaultDevice())

        if super.device == nil {
            fatalError("Device doesn't support Metal")
        }

        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, self.device!, nil, &textureCache)

        framebufferOnly = false
        enableSetNeedsDisplay = true
        isPaused = true
        preferredFramesPerSecond = 30
    }

    required init(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // The image to display
    var image: CIImage? {
        didSet {
            setNeedsDisplay()
        }
    }

    override func draw(_ rect: CGRect) {
        guard var image = image,
              let targetTexture: MTLTexture = currentDrawable?.texture else {
            return
        }

        let commandBuffer = commandQueue.makeCommandBuffer()

        let customDrawableSize: CGSize = drawableSize
        let bounds = CGRect(origin: CGPoint.zero, size: customDrawableSize)

        let originX = image.extent.origin.x
        let originY = image.extent.origin.y

        let scaleX = customDrawableSize.width / image.extent.width
        let scaleY = customDrawableSize.height / image.extent.height
        let scale = min(scaleX * IVScaleFactor, scaleY * IVScaleFactor)

        image = image
            .transformed(by: CGAffineTransform(translationX: -originX, y: -originY))
            .transformed(by: CGAffineTransform(scaleX: scale, y: scale))

        ciContext.render(image,
                         to: targetTexture,
                         commandBuffer: commandBuffer,
                         bounds: bounds,
                         colorSpace: colorSpace)

        commandBuffer?.present(currentDrawable!)
        commandBuffer?.commit()
    }
}
My AVCaptureSession (captureSession) and AVCaptureVideoDataOutput (videoOutput) are setup below:
func setupCameraAndMic() {
    let backCamera = AVCaptureDevice.default(for: AVMediaType.video)

    var error: NSError?
    var videoInput: AVCaptureDeviceInput!
    do {
        videoInput = try AVCaptureDeviceInput(device: backCamera!)
    } catch let error1 as NSError {
        error = error1
        videoInput = nil
        print(error!.localizedDescription)
    }

    if error == nil &&
        captureSession!.canAddInput(videoInput) {

        guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, MetalDevice, nil, &textureCache) == kCVReturnSuccess else {
            print("Error: could not create a texture cache")
            return
        }

        captureSession!.addInput(videoInput)
        setDeviceFrameRateForCurrentFilter(device: backCamera)

        stillImageOutput = AVCapturePhotoOutput()

        if captureSession!.canAddOutput(stillImageOutput!) {
            captureSession!.addOutput(stillImageOutput!)

            let q = DispatchQueue(label: "sample buffer delegate", qos: .default)
            videoOutput.setSampleBufferDelegate(self, queue: q)

            videoOutput.videoSettings = [
                kCVPixelBufferPixelFormatTypeKey as AnyHashable as! String: NSNumber(value: kCVPixelFormatType_32BGRA),
                kCVPixelBufferMetalCompatibilityKey as String: true
            ]

            videoOutput.alwaysDiscardsLateVideoFrames = true

            if captureSession!.canAddOutput(videoOutput) {
                captureSession!.addOutput(videoOutput)
            }

            captureSession!.startRunning()
        }
    }

    setDefaultFocusAndExposure()
}
The video and mic are recorded on two separate streams. Details on the microphone and recording video have been left out since my focus is performance of live camera output.
UPDATE - I have a simplified test project on GitHub that makes it a lot easier to test the problem I'm having: https://github.com/PunchyBass/Live-Filter-test-project
Off the top of my head, you are not comparing apples to apples. Even setting aside the 2.49 GHz A12 versus the 1.85 GHz A9, the differences between the cameras are also huge. Even if you use them with the same parameters, the XS's camera has several features that require more CPU resources (dual camera, stabilization, Smart HDR, etc.).
Sorry about the lack of sources; I tried to find metrics for the CPU cost of those features, but I couldn't find any. Unfortunately for your needs, that information isn't relevant for marketing when they are selling it as the best camera ever on a smartphone.
They are selling it as the best processor as well. We don't know what would happen using the XS camera with an A9 processor; it would probably crash, but we will never know...
P.S. Are your metrics for the whole processor or only for the cores in use? For the whole processor, you also need to consider other tasks the devices may be executing; for a single core, it is 21% of 200% versus 39% of 600%.
When I run my camera app from Xcode, the preview width is not full.
The camera view does not fill the UIView.
Please help me. I can't speak English well, so it's difficult for me to ask this question.
override func viewWillAppear(_ animated: Bool) {
    captureSession = AVCaptureSession()
    stillImageOutput = AVCapturePhotoOutput()
    captureSession.sessionPreset = AVCaptureSessionPreset1920x1080

    let device = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

    do {
        let input = try AVCaptureDeviceInput(device: device)
        if captureSession.canAddInput(input) {
            captureSession.addInput(input)
            if captureSession.canAddOutput(stillImageOutput) {
                captureSession.addOutput(stillImageOutput)
                captureSession.startRunning()

                previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
                previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
                cameraView.layer.addSublayer(previewLayer!)

                previewLayer?.position = CGPoint(x: self.cameraView.frame.width / 2, y: self.cameraView.frame.height / 2)
                previewLayer?.bounds = cameraView.bounds
            }
        }
    } catch {
        print(error)
    }
}
If I use your code and add the previewLayer to my main view, I get a full size camera view in portrait but a half-size camera view in landscape.
So the issue might be in how you have set up the cameraView, or it could be something else; I can't be certain. Check the size and bounds of your cameraView to see how it is set up.
Also, the videoGravity property of previewLayer controls how the preview is displayed. You have it set to AVLayerVideoGravityResizeAspect, which fits the video within the layer's bounds. If the cameraView is set up correctly, you can try a different setting like AVLayerVideoGravityResizeAspectFill to see if that gives you the result you want.
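For example, a sketch (assuming cameraView gets its final size from Auto Layout, which is just an assumption) that resizes the layer once layout is done and uses the fill gravity:

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // Resize the preview layer when cameraView has its final size,
    // instead of relying on the frame available in viewWillAppear.
    previewLayer?.frame = cameraView.bounds
}

// ...and when creating the layer:
// previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill // fills the view, cropping if needed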
Essentially what I'm trying to accomplish is having the front camera of the AVCaptureDevice be the first and only option on a application during an AVCaptureSession.
I've looked around StackOverflow and all the methods and answers provided are deprecated as of iOS 10, Swift 3 and Xcode 8.
I know you're supposed to enumerate the devices with AVCaptureDeviceDiscoverySession and look at them to distinguish front from back, but I'm unsure of how to do so.
Could anyone help? It would be amazing if so!
Here's my code:
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    previewLayer.frame = singleViewCameraSlot.bounds
    self.singleViewCameraSlot.layer.addSublayer(previewLayer)
    captureSession.startRunning()
}

lazy var captureSession: AVCaptureSession = {
    let capture = AVCaptureSession()
    capture.sessionPreset = AVCaptureSessionPreset1920x1080
    return capture
}()

lazy var previewLayer: AVCaptureVideoPreviewLayer = {
    let preview = AVCaptureVideoPreviewLayer(session: self.captureSession)
    preview?.videoGravity = AVLayerVideoGravityResizeAspect
    preview?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
    preview?.bounds = CGRect(x: 0, y: 0, width: self.view.bounds.width, height: self.view.bounds.height)
    preview?.position = CGPoint(x: self.view.bounds.midX, y: self.view.bounds.midY)
    return preview!
}()

func setupCameraSession() {
    let frontCamera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo) as AVCaptureDevice

    do {
        let deviceInput = try AVCaptureDeviceInput(device: frontCamera)

        captureSession.beginConfiguration()

        if (captureSession.canAddInput(deviceInput) == true) {
            captureSession.addInput(deviceInput)
        }

        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange as UInt32)]
        dataOutput.alwaysDiscardsLateVideoFrames = true

        if (captureSession.canAddOutput(dataOutput) == true) {
            captureSession.addOutput(dataOutput)
        }

        captureSession.commitConfiguration()

        let queue = DispatchQueue(label: "io.goodnight.videoQueue")
        dataOutput.setSampleBufferDelegate(self, queue: queue)
    } catch let error as NSError {
        NSLog("\(error), \(error.localizedDescription)")
    }
}
If you just need to find a single device based on simple characteristics (like a front-facing camera that can shoot video), just use AVCaptureDevice.default(_:for:position:). For example:
guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                           for: .video,
                                           position: .front) else {
    fatalError("no front camera. but don't all iOS 10 devices have them?")
}
// then use the device, e.g. captureSession.addInput(try AVCaptureDeviceInput(device: device))
Really that's all there is to it for most use cases.
There's also AVCaptureDeviceDiscoverySession as a replacement for the old method of iterating through the devices array. However, most of the things you'd usually iterate through the devices array for can be found using the new default(_:for:position:) method, so you might as well use that and write less code.
The cases where AVCaptureDeviceDiscoverySession is worth using are the less common, more complicated cases: say you want to find all the devices that support a certain frame rate, or use key-value observing to see when the set of available devices changes.
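For instance, here is a rough sketch of the "find all devices that support a certain frame rate" case (written with the current Swift API names, which differ slightly from the iOS 10-era names elsewhere in this question):

import AVFoundation

// All back-facing cameras whose formats can reach at least 60 fps.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .builtInTelephotoCamera],
    mediaType: .video,
    position: .back)

let highFrameRateCameras = discovery.devices.filter { device in
    device.formats.contains { format in
        format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 60 }
    }
}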
By the way...
I've looked around StackOverflow and all the methods and answers provided are deprecated as of iOS 10, Swift 3 and Xcode 8.
If you read Apple's docs for those methods (at least this one, this one, and this one), you'll see along with those deprecation warnings some recommendations for what to use instead. There's also a guide to the iOS 10 / Swift 3 photo capture system and some sample code that both show current best practices for these APIs.
If you explicitly need the front camera, you can use AVCaptureDeviceDiscoverySession as specified here.
https://developer.apple.com/reference/avfoundation/avcapturedevicediscoverysession/2361539-init
This allows you to specify the types of devices you want to search for. The following (untested) should give you the front facing camera.
let deviceSessions = AVCaptureDeviceDiscoverySession(deviceTypes: [AVCaptureDeviceType.builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: AVCaptureDevicePosition.front)
This deviceSessions object has a devices property, an array of AVCaptureDevice containing only the devices that match the search criteria:
deviceSessions?.devices
That array will contain either 0 or 1 devices, depending on whether the device has a front-facing camera (some iPods won't, for example).
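Putting it together, a hedged sketch (untested, using the same iOS 10-era API as above) of turning that result into an input for the session:

let discovery = AVCaptureDeviceDiscoverySession(deviceTypes: [AVCaptureDeviceType.builtInWideAngleCamera],
                                                mediaType: AVMediaTypeVideo,
                                                position: AVCaptureDevicePosition.front)

if let frontCamera = discovery?.devices.first {
    do {
        let input = try AVCaptureDeviceInput(device: frontCamera)
        if captureSession.canAddInput(input) {
            captureSession.addInput(input)
        }
    } catch {
        print("Could not create front camera input: \(error)")
    }
}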
At the moment, this is how I'm playing a video on the subview of my UIViewController:
override func viewDidAppear(animated: Bool) {
    let filePath = NSBundle.mainBundle().pathForResource("musicvideo", ofType: "mp4")

    self.moviePlayerController.contentURL = NSURL.fileURLWithPath(filePath)
    self.moviePlayerController.play()
    self.moviePlayerController.repeatMode = .One
    self.moviePlayerController.view.frame = self.view.bounds
    self.moviePlayerController.scalingMode = .AspectFill
    self.moviePlayerController.controlStyle = .None
    self.moviePlayerController.allowsAirPlay = false

    self.view.addSubview(self.moviePlayerController.view)
}
I've read about ways to disable the audio, listed below (none of which work at all). Keep in mind I'm trying to disable it so that it doesn't interrupt the music currently playing via the Music app, Spotify, etc.
1) MPMusicPlayerController.applicationMusicPlayer().volume = 0
(The docs note that playing media items with the applicationMusicPlayer will restore the user's Music state after the application quits, that volume is the current volume of playing music in the range of 0.0 to 1.0, and that the property is deprecated -- use MPVolumeView for volume control instead.)
2) MPVolumeView doesn't even have a setting for the actual volume? It's a control.
3) self.moviePlayerController.useApplicationAudioSession = false
So I found this answer.
This is the Swift code I ended up going with. I then added an AVPlayerLayer to the view as a sublayer, which works perfectly.
Thanks to the OP who managed to get a hold of an Apple technician and provided the original Objective-C code.
The only problems I'm facing now is that it:
1) Interrupts current music playback, whether it's from Music, Spotify, etc.
2) Video stops playing if I close the app and open it up again.
override func viewDidAppear(animated: Bool) {
    let filePath = NSBundle.mainBundle().pathForResource("musicvideo", ofType: "mp4")

    var asset: AVURLAsset?
    asset = AVURLAsset.URLAssetWithURL(NSURL.fileURLWithPath(filePath), options: nil)

    var audioTracks = NSArray()
    audioTracks = asset!.tracksWithMediaType(AVMediaTypeAudio)

    // Mute all the audio tracks
    let allAudioParams = NSMutableArray()
    for track: AnyObject in audioTracks { // AVAssetTrack
        let audioInputParams = AVMutableAudioMixInputParameters()
        audioInputParams.setVolume(0.0, atTime: kCMTimeZero)
        audioInputParams.trackID = track.trackID
        allAudioParams.addObject(audioInputParams)
    }

    let audioZeroMix = AVMutableAudioMix()
    audioZeroMix.inputParameters = allAudioParams

    // Create a player item
    let playerItem = AVPlayerItem(asset: asset!)
    playerItem.audioMix = audioZeroMix

    // Create a new Player, and set the player to use the player item
    // with the muted audio mix
    let player = AVPlayer.playerWithPlayerItem(playerItem) as AVPlayer
    player.play()

    let layer = AVPlayerLayer(player: player)
    player.actionAtItemEnd = .None

    layer.frame = self.view.bounds
    layer.videoGravity = AVLayerVideoGravityResizeAspectFill
    self.view.layer.addSublayer(layer)
}
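Regarding problem 1 (interrupting Music/Spotify playback): that behavior is usually governed by the app's audio session rather than the player itself. A hedged sketch, using today's AVAudioSession API names rather than the older ones in the code above, of asking for an ambient session before playing the muted video:

import AVFoundation

func configureSilentPlaybackSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        // .ambient mixes with other audio (Music, Spotify) instead of interrupting it,
        // and is silenced by the ring/silent switch, which suits a muted background video.
        try session.setCategory(.ambient, mode: .default, options: [])
        try session.setActive(true)
    } catch {
        print("Could not configure audio session: \(error)")
    }
}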