When I run my camera app in Xcode, the preview is not full width: the camera view does not fill its UIView.
Please help! T_T
Also, I can't speak English well, so asking questions is very difficult for me.
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    captureSession = AVCaptureSession()
    stillImageOutput = AVCapturePhotoOutput()
    captureSession.sessionPreset = AVCaptureSessionPreset1920x1080

    let device = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
    do {
        let input = try AVCaptureDeviceInput(device: device)
        if captureSession.canAddInput(input) {
            captureSession.addInput(input)
            if captureSession.canAddOutput(stillImageOutput) {
                captureSession.addOutput(stillImageOutput)
                captureSession.startRunning()

                previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
                previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
                cameraView.layer.addSublayer(previewLayer!)
                previewLayer?.position = CGPoint(x: self.cameraView.frame.width / 2,
                                                 y: self.cameraView.frame.height / 2)
                previewLayer?.bounds = cameraView.bounds
            }
        }
    } catch {
        print(error)
    }
}
If I use your code and add the previewLayer to my main view, I get a full-size camera view in portrait but a half-size camera view in landscape.
So the issue might be in how you have set up the cameraView, or it could be something else; I can't be certain. Check the size and bounds of your cameraView to see how it is set up.
Also, the videoGravity property of previewLayer controls how the preview is displayed. You have it set to AVLayerVideoGravityResizeAspect, which fits the video within the layer's bounds. If the cameraView is set up correctly, you can try a different setting, such as AVLayerVideoGravityResizeAspectFill, to see if that gives you the result you want.
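The change could look roughly like this (a sketch only, reusing the question's `previewLayer` and `cameraView` properties and the same Swift 3-era constants the question uses):

```swift
// Sketch: fill the container instead of letterboxing, and keep the layer's
// geometry in sync with the container. `previewLayer` and `cameraView` are
// the properties from the question; this assumes the layer is re-laid-out
// (e.g. in viewDidLayoutSubviews) if the view can resize.
previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill // crops edges rather than leaving bars
previewLayer?.frame = cameraView.bounds
```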
I'm having a lot of problems trying to get a view to show my back camera feed. I looked through Apple's docs and came up with this, but all it seems to do is show a black screen. I also added the permissions to my plist and am running on a real device. I don't need it to take a photo or save anything; I just want to show the live camera feed in a view.
import UIKit
import AVFoundation

class ViewController: UIViewController {
    @IBOutlet weak var cameraView: UIView!

    var captureSession = AVCaptureSession()
    var previewLayer = AVCaptureVideoPreviewLayer()

    override func viewDidLoad() {
        super.viewDidLoad()
        loadCamera()
    }

    func loadCamera() {
        let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
        do {
            let input = try AVCaptureDeviceInput(device: device!)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)
                previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
                previewLayer.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
                cameraView.layer.addSublayer(previewLayer)
            }
        } catch {
            print(error)
        }
    }
}
Welcome!
The problem is that there is actually no data flow (of video frames) happening with your current setup. You need to at least attach one input and one output to your capture session. The preview layer doesn't count as an output itself since it will only attach to an existing connection between input and output.
So to fix it, you can just add an AVCapturePhotoOutput to the session (probably before you add the layer) but never use it. The preview layer should start displaying the frames then.
You probably also want to set the session's sessionPreset to .photo before you add the inputs and outputs. This will cause the session to produce video frames that have an ideal size for displaying on your device's screen.
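Putting both suggestions together, the fix could look roughly like this (an untested sketch reusing the property names from the question):

```swift
// Sketch: give the session an input AND an output so frames actually flow,
// then attach the preview layer. `captureSession`, `previewLayer`, and
// `cameraView` are the question's properties; everything else is standard
// AVFoundation API.
func loadCamera() {
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video, position: .back) else { return }
    do {
        let input = try AVCaptureDeviceInput(device: device)
        captureSession.beginConfiguration()
        captureSession.sessionPreset = .photo // frames sized sensibly for on-screen display
        if captureSession.canAddInput(input) {
            captureSession.addInput(input)
        }
        let photoOutput = AVCapturePhotoOutput() // never used, but completes the connection
        if captureSession.canAddOutput(photoOutput) {
            captureSession.addOutput(photoOutput)
        }
        captureSession.commitConfiguration()

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.videoGravity = .resizeAspectFill
        previewLayer.frame = cameraView.bounds
        cameraView.layer.addSublayer(previewLayer)
        captureSession.startRunning()
    } catch {
        print(error)
    }
}
```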
I downloaded Apple's project about recognizing Objects in Live Capture.
When I tried the app I saw that if I put the object to recognize on the top or on the bottom of the camera view, the app doesn't recognize the object:
In the first image the banana is in the center of the camera view and the app is able to recognize it:
[image: object in center]
In these two images the banana is near the camera view's border and the app is not able to recognize it:
[image: object on top]
[image: object on bottom]
This is how session and previewLayer are set:
func setupAVCapture() {
    var deviceInput: AVCaptureDeviceInput!

    // Select a video device, make an input
    let videoDevice = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back).devices.first
    do {
        deviceInput = try AVCaptureDeviceInput(device: videoDevice!)
    } catch {
        print("Could not create video device input: \(error)")
        return
    }

    session.beginConfiguration()
    session.sessionPreset = .vga640x480 // Model image size is smaller.

    // Add a video input
    guard session.canAddInput(deviceInput) else {
        print("Could not add video device input to the session")
        session.commitConfiguration()
        return
    }
    session.addInput(deviceInput)

    // Add a video data output
    if session.canAddOutput(videoDataOutput) {
        session.addOutput(videoDataOutput)
        videoDataOutput.alwaysDiscardsLateVideoFrames = true
        videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)]
        videoDataOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
    } else {
        print("Could not add video data output to the session")
        session.commitConfiguration()
        return
    }

    let captureConnection = videoDataOutput.connection(with: .video)
    // Always process the frames
    captureConnection?.isEnabled = true
    do {
        try videoDevice!.lockForConfiguration()
        let dimensions = CMVideoFormatDescriptionGetDimensions((videoDevice?.activeFormat.formatDescription)!)
        bufferSize.width = CGFloat(dimensions.width)
        bufferSize.height = CGFloat(dimensions.height)
        videoDevice!.unlockForConfiguration()
    } catch {
        print(error)
    }
    session.commitConfiguration()

    previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
    rootLayer = previewView.layer
    previewLayer.frame = rootLayer.bounds
    rootLayer.addSublayer(previewLayer)
}
You can download the project here.
I am wondering if this is normal or not. Are there any solutions to fix it?
Does it process a square crop of the frame for Core ML, so that the top and bottom regions are not included?
Any hints? Thanks!
That's probably because the imageCropAndScaleOption is set to centerCrop.
The Core ML model expects a square image, but the video frames are not square. This can be fixed by changing the imageCropAndScaleOption on the VNCoreMLRequest. However, the results may not be as good as with center crop (it depends on how the model was originally trained).
See also VNImageCropAndScaleOption in the Apple docs.
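In the Apple sample, the crop behavior is configured where the VNCoreMLRequest is created, so the change could be sketched like this (assuming a `VNCoreMLModel` named `visionModel`, as in the sample; untested):

```swift
// Sketch: let Vision squeeze the full frame into the model's square input
// instead of center-cropping. Results near the edges become possible, but
// accuracy may drop if the model was trained on center-cropped images.
let objectRecognition = VNCoreMLRequest(model: visionModel) { request, error in
    // handle request.results here
}
objectRecognition.imageCropAndScaleOption = .scaleFill // the sample uses .centerCrop
```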
Currently, I have a view controller which modally presents a view controller containing a camera. However, whenever I transition, the preview layer has an animation: it circularly grows from the top-left corner until it fills the rest of the screen. I've tried disabling CALayer implicit animations, but with no success. Here's the code that runs when the view appears.
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    previewLayer?.frame = self.view.frame
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    capturedImageView.center = self.view.center

    captureSession = AVCaptureSession()
    if usingFrontCamera == true {
        captureSession?.sessionPreset = AVCaptureSession.Preset.hd1920x1080
    } else {
        captureSession?.sessionPreset = AVCaptureSession.Preset.hd1280x720
    }
    captureDevice = AVCaptureDevice.default(for: AVMediaType.video)
    do {
        let input = try AVCaptureDeviceInput(device: captureDevice!)
        if captureSession?.canAddInput(input) == true { // note: `!= nil` was always true; test the Bool itself
            captureSession?.addInput(input)
            stillImageOutput = AVCapturePhotoOutput()
            captureSession?.addOutput(stillImageOutput!)
            previewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
            previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspect
            self.view.layer.addSublayer(previewLayer!)
            captureSession?.startRunning()
        }
    } catch {
        print(error)
    }
}
Is there any way to remove this growing animation? Here's a gif of the problem:
When you change the layer frame, there is an implicit animation. You can use CATransaction to disable the animation.
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    CATransaction.begin()
    CATransaction.setDisableActions(true)
    previewLayer?.frame = self.view.frame
    CATransaction.commit()
}
You are doing things in two stages. In viewWillAppear, you add the preview layer without giving it any size at all, so it is a zero size layer at the zero origin:
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspect
self.view.layer.addSublayer(previewLayer!)
Then later, in viewDidAppear, you grow the preview layer by giving it an actual frame:
previewLayer?.frame = self.view.frame
The two stages happen in that order and we are able to see the jump caused by the change in the frame of the preview layer.
If you don't want to see a jump, don't do that. Don't add the preview layer until you can give it its actual frame first.
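One way to follow that advice, sketched under the assumption that the view has its final geometry by layout time:

```swift
// Sketch: size the preview layer as soon as the view's real geometry is known,
// so it never exists on screen at a zero frame. The CATransaction wrapper also
// suppresses the implicit frame animation when layout changes later (rotation).
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    CATransaction.begin()
    CATransaction.setDisableActions(true)
    previewLayer?.frame = view.bounds
    CATransaction.commit()
}
```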
I'm using live camera output to update a CIImage on an MTKView. My main issue is a large, unexpected performance difference: an older iPhone gets better CPU performance than a newer one, even though every setting I've come across is the same on both.
This is a lengthy post, but I decided to include these details since they could be important to the cause of this problem. Please let me know what else I can include.
Below, I have my captureOutput function with two debug bools that I can turn on and off while running. I used this to try to determine the cause of my issue.
applyLiveFilter - bool whether or not to manipulate the CIImage with a CIFilter.
updateMetalView - bool whether or not to update the MTKView's CIImage.
// live output from camera
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    /*
     Create CIImage from camera.
     Here I save a few percent of CPU by using a function
     to convert a sampleBuffer to a Metal texture, but
     whether I use this or the commented-out code
     (without captureOutputMTLOptions) does not have
     significant impact.
     */
    guard let texture: MTLTexture = convertToMTLTexture(sampleBuffer: sampleBuffer) else {
        return
    }
    var cameraImage: CIImage = CIImage(mtlTexture: texture, options: captureOutputMTLOptions)!

    var transform: CGAffineTransform = .identity
    transform = transform.scaledBy(x: 1, y: -1)
    transform = transform.translatedBy(x: 0, y: -cameraImage.extent.height)
    cameraImage = cameraImage.transformed(by: transform)

    /*
     // old non-Metal way of getting the ciimage from the cvPixelBuffer
     guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
         return
     }
     var cameraImage: CIImage = CIImage(cvPixelBuffer: pixelBuffer)
     */

    var orientation = UIImage.Orientation.right
    if isFrontCamera {
        orientation = UIImage.Orientation.leftMirrored
    }

    // apply filter to camera image
    if debug_applyLiveFilter {
        cameraImage = self.applyFilterAndReturnImage(ciImage: cameraImage, orientation: orientation, currentCameraRes: currentCameraRes!)
    }

    DispatchQueue.main.async {
        if debug_updateMetalView {
            self.MTLCaptureView!.image = cameraImage
        }
    }
}
Below is a chart of results between both phones toggling the different combinations of bools discussed above:
Even without the Metal view's CIImage updating and no filters being applied, the iPhone XS's CPU usage is 2% higher than the iPhone 6S Plus's. That isn't significant overhead, but it makes me suspect that the cameras capture differently on the two devices.
My AVCaptureSession's preset is set identically on both phones (AVCaptureSession.Preset.hd1280x720), and the CIImage created in captureOutput is the same size (extent) on both phones.
Are there any AVCaptureDevice settings, including activeFormat properties, that I need to set manually so the two devices behave the same?
The settings I have now are:
if let captureDevice = AVCaptureDevice.default(for: AVMediaType.video) {
    do {
        try captureDevice.lockForConfiguration()
        captureDevice.isSubjectAreaChangeMonitoringEnabled = true
        captureDevice.focusMode = AVCaptureDevice.FocusMode.continuousAutoFocus
        captureDevice.exposureMode = AVCaptureDevice.ExposureMode.continuousAutoExposure
        captureDevice.unlockForConfiguration()
    } catch {
        // Handle errors here
        print("There was an error focusing the device's camera")
    }
}
My MTKView is based on code written by Simon Gladman, with some edits for performance and to scale the render before it is scaled up to the width of the screen using Core Animation, as suggested by Apple.
class MetalImageView: MTKView {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var textureCache: CVMetalTextureCache?
    var sourceTexture: MTLTexture!

    lazy var commandQueue: MTLCommandQueue = { [unowned self] in
        return self.device!.makeCommandQueue()
    }()!

    lazy var ciContext: CIContext = { [unowned self] in
        return CIContext(mtlDevice: self.device!)
    }()

    override init(frame frameRect: CGRect, device: MTLDevice?) {
        super.init(frame: frameRect,
                   device: device ?? MTLCreateSystemDefaultDevice())
        if super.device == nil {
            fatalError("Device doesn't support Metal")
        }
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, self.device!, nil, &textureCache)
        framebufferOnly = false
        enableSetNeedsDisplay = true
        isPaused = true
        preferredFramesPerSecond = 30
    }

    required init(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // The image to display
    var image: CIImage? {
        didSet {
            setNeedsDisplay()
        }
    }

    override func draw(_ rect: CGRect) {
        guard var image = image,
              let targetTexture: MTLTexture = currentDrawable?.texture else {
            return
        }
        let commandBuffer = commandQueue.makeCommandBuffer()
        let customDrawableSize: CGSize = drawableSize
        let bounds = CGRect(origin: .zero, size: customDrawableSize)

        let originX = image.extent.origin.x
        let originY = image.extent.origin.y
        let scaleX = customDrawableSize.width / image.extent.width
        let scaleY = customDrawableSize.height / image.extent.height
        let scale = min(scaleX * IVScaleFactor, scaleY * IVScaleFactor)
        image = image
            .transformed(by: CGAffineTransform(translationX: -originX, y: -originY))
            .transformed(by: CGAffineTransform(scaleX: scale, y: scale))

        ciContext.render(image,
                         to: targetTexture,
                         commandBuffer: commandBuffer,
                         bounds: bounds,
                         colorSpace: colorSpace)
        commandBuffer?.present(currentDrawable!)
        commandBuffer?.commit()
    }
}
My AVCaptureSession (captureSession) and AVCaptureVideoDataOutput (videoOutput) are setup below:
func setupCameraAndMic() {
    let backCamera = AVCaptureDevice.default(for: AVMediaType.video)
    var error: NSError?
    var videoInput: AVCaptureDeviceInput!
    do {
        videoInput = try AVCaptureDeviceInput(device: backCamera!)
    } catch let error1 as NSError {
        error = error1
        videoInput = nil
        print(error!.localizedDescription)
    }

    if error == nil && captureSession!.canAddInput(videoInput) {
        guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, MetalDevice, nil, &textureCache) == kCVReturnSuccess else {
            print("Error: could not create a texture cache")
            return
        }
        captureSession!.addInput(videoInput)
        setDeviceFrameRateForCurrentFilter(device: backCamera)

        stillImageOutput = AVCapturePhotoOutput()
        if captureSession!.canAddOutput(stillImageOutput!) {
            captureSession!.addOutput(stillImageOutput!)

            let q = DispatchQueue(label: "sample buffer delegate", qos: .default)
            videoOutput.setSampleBufferDelegate(self, queue: q)
            videoOutput.videoSettings = [
                kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32BGRA),
                kCVPixelBufferMetalCompatibilityKey as String: true
            ]
            videoOutput.alwaysDiscardsLateVideoFrames = true
            if captureSession!.canAddOutput(videoOutput) {
                captureSession!.addOutput(videoOutput)
            }
            captureSession!.startRunning()
        }
    }
    setDefaultFocusAndExposure()
}
The video and mic are recorded on two separate streams. Details on the microphone and recording video have been left out since my focus is performance of live camera output.
UPDATE - I have a simplified test project on GitHub that makes it a lot easier to test the problem I'm having: https://github.com/PunchyBass/Live-Filter-test-project
Off the top of my head: you are not comparing like with like. Even setting aside the 2.49 GHz A12 versus the 1.85 GHz A9, the differences between the cameras are also huge. Even when used with the same parameters, the XS camera has several features that require more CPU resources (dual camera, stabilization, Smart HDR, etc.).
Sorry for the lack of sources; I tried to find metrics for the CPU cost of those features, but couldn't. Unfortunately for your needs, that information isn't relevant for marketing when they are selling it as the best camera ever in a smartphone.
They sell it as the best processor as well. We don't know what would happen using the XS camera with an A9 processor; it would probably crash, and we'll never know...
P.S. Are your metrics for the whole processor or for the core in use? For the whole processor, you also need to consider other tasks the device may be executing; for a single core, it is 21% of 200% versus 39% of 600%.
I have an AVPlayer that I'm using as a background in my UIView, purely as an aesthetic. To initialize and instantiate it, I do:
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    guard let path = Bundle.main.path(forResource: "Background", ofType: "mp4") else {
        debugPrint("Background.mp4 not found")
        return
    }
    let player = AVPlayer(url: URL(fileURLWithPath: path))
    let playerLayer = AVPlayerLayer(player: player)
    playerLayer.frame = self.view.bounds
    player.actionAtItemEnd = .none
    player.volume = 0.0
    // change content mode ** TODO **
    self.view.layer.addSublayer(playerLayer)
    player.play()
}
When I run this, the video plays just as expected; however, the AVPlayer defaults to a fitToWidth content mode. I searched through all the methods of player and playerLayer for some way to change the content mode, but to no avail.
What I want is fitToHeight instead of fitToWidth.
Any thoughts? Thanks for looking!
I can think of a really easy way to do this. Once your video content has loaded, use AVPlayerLayer's videoRect to get the frame of the video image. Then resize your view so that its height is the extent you want, and its width is scaled in proportion to the video's aspect ratio (video width over video height, times the view's height).
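The sizing math described above can be sketched as a small pure function; the function name is my own, and in practice `videoSize` would come from `playerLayer.videoRect.size` once the item is ready to play:

```swift
import Foundation

// Sketch: given the video's natural size (e.g. from AVPlayerLayer.videoRect)
// and the height the view should fill, return a view size whose width keeps
// the video's aspect ratio, so the video fits to height instead of width.
func sizeFittingHeight(videoSize: CGSize, targetHeight: CGFloat) -> CGSize {
    guard videoSize.height > 0 else { return .zero }
    let scale = targetHeight / videoSize.height
    return CGSize(width: videoSize.width * scale, height: targetHeight)
}
```

For example, a 1920x1080 video fitted to a 540-point-tall view yields a 960-point width.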