When I enable LivePhotoCapture on my AVCapturePhotoOutput and switch to builtInUltraWideCamera on my iPhone 12, I get a distorted image on the preview layer. The issue goes away if LivePhotoCapture is disabled.
This issue isn't reproducible on iPhone 13 Pro.
I tried playing with the videoGravity settings, but no luck. Any tips are appreciated!
On my AVCapturePhotoOutput:
if self.photoOutput.isLivePhotoCaptureSupported {
    self.photoOutput.isLivePhotoCaptureEnabled = true
}
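For context, the switch to the ultra-wide camera is done roughly like this (a sketch only, since the actual switching code isn't shown here; captureSession and the existing input are assumptions):
// Sketch: replace the current camera input with the ultra-wide camera.
captureSession.beginConfiguration()
if let currentInput = captureSession.inputs.first as? AVCaptureDeviceInput {
    captureSession.removeInput(currentInput)
}
if let ultraWideDevice = AVCaptureDevice.default(.builtInUltraWideCamera, for: .video, position: .back),
   let ultraWideInput = try? AVCaptureDeviceInput(device: ultraWideDevice),
   captureSession.canAddInput(ultraWideInput) {
    captureSession.addInput(ultraWideInput)
}
captureSession.commitConfiguration()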
Preview layer:
videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
videoPreviewLayer.videoGravity = .resizeAspect
videoPreviewLayer.connection?.videoOrientation = .portrait
previewView.layer.addSublayer(videoPreviewLayer)
self.captureSession.startRunning()
self.videoPreviewLayer.frame = self.previewView.bounds
Result (the picture is mirrored, but that's not the problem; the problem is on the right and bottom edges of the picture):
I have a custom video player. I'm using AVRoutePickerView to enable AirPlay. It works fine and plays on my Apple TV.
But the AVRoutePickerView is not changing color / animating when AirPlay is active.
This only happens when AirPlaying to an Apple TV / TV. When AirPods are selected it works fine, and the icon animates.
Anyone know what could cause this problem?
This is my config of AVRoutePickerView:
private func setupRoutePicker() {
    if let controlsOverlayView = controlsOverlayView {
        routerPickerView = AVRoutePickerView(frame: controlsOverlayView.airPlayButtonContainer.frame)
        controlsOverlayView.airPlayButtonContainer.addSubview(routerPickerView)
        routerPickerView.contentMode = .scaleAspectFit
        routerPickerView.backgroundColor = .clear
        routerPickerView.tintColor = settings.airPlayIconTintColor
        routerPickerView.activeTintColor = settings.airPlayIconActivTintColor
        routerPickerView.delegate = self
        if #available(iOS 13.0, *) {
            routerPickerView.prioritizesVideoDevices = true
        }
    }
}
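For completeness, AVRoutePickerViewDelegate only offers the route-presentation callbacks; a minimal conformance looks roughly like this (a sketch with an illustrative class name, not my exact code):
extension PlayerViewController: AVRoutePickerViewDelegate {
    // Called just before the route picker sheet is presented.
    func routePickerViewWillBeginPresentingRoutes(_ routePickerView: AVRoutePickerView) { }

    // Called after the route picker sheet is dismissed.
    func routePickerViewDidEndPresentingRoutes(_ routePickerView: AVRoutePickerView) { }
}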
I am currently trying to render bounding boxes inside a UIView, but I'm facing the issue that there is a misalignment on the X axis when rendering the box, as can be seen in the screenshot below.
When the object is on the left of the view, the misalignment is on the right, as seen in the image. When the object is on the right, the misalignment is to the left. The misalignment increases the further the object gets towards the edge of the screen.
Currently I use ARKit to capture the current frame as a pixel buffer.
// capturedImage is optional, so unwrap it before building the request
guard let pixelBuffer = sceneView.session.currentFrame?.capturedImage else { return }
// Capture current device orientation (exifOrientation is a custom extension, see the sketch below)
guard let orientation = CGImagePropertyOrientation(rawValue: UIDevice.current.exifOrientation) else { return }
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: orientation)
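For reference, exifOrientation is a custom UIDevice extension, not a UIKit API; something along these lines is commonly used for ARKit's back-camera buffer (a sketch, the exact mapping may differ, and a wrong mapping here will shift the boxes):
extension UIDevice {
    // Maps the device orientation to a CGImagePropertyOrientation raw value.
    // ARKit's capturedImage is delivered in the sensor's landscape orientation.
    var exifOrientation: UInt32 {
        switch orientation {
        case .portrait:           return CGImagePropertyOrientation.right.rawValue
        case .portraitUpsideDown: return CGImagePropertyOrientation.left.rawValue
        case .landscapeLeft:      return CGImagePropertyOrientation.up.rawValue
        case .landscapeRight:     return CGImagePropertyOrientation.down.rawValue
        default:                  return CGImagePropertyOrientation.right.rawValue
        }
    }
}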
Additionally, my Core ML Vision request looks as follows:
findObjectRequest = VNCoreMLRequest(model: visionModel, completionHandler: visionRequestDidComplete)
findObjectRequest?.imageCropAndScaleOption = .scaleFit
I then try to rescale the normalized bounding box to image space like this:
public func scaleImageForCameraOutput(predictionRect finderrItem: FinderrItem, viewRect: CGRect) -> FinderrItem {
    let scale = CGAffineTransform.identity.scaledBy(x: viewRect.width, y: viewRect.height)
    let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -1)
    let bgRect = finderrItem.box.applying(transform).applying(scale)
    finderrItem.box = bgRect
    return finderrItem
}
I also tried to follow the Apple developer documentation and use its API to rescale the bounding boxes, as follows:
let newBox = VNImageRectForNormalizedRect(
boundingBox,
Int(self.sceneView.bounds.width),
Int(self.sceneView.bounds.height))
However, this still has the same misalignment, with an additional issue: the y-axis is now inverted.
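For reference, the extra flip I'd expect to need after VNImageRectForNormalizedRect looks roughly like this (a sketch; boundingBox and sceneView as above):
let imageRect = VNImageRectForNormalizedRect(boundingBox,
                                             Int(sceneView.bounds.width),
                                             Int(sceneView.bounds.height))
// Vision rects use a bottom-left origin, UIKit a top-left origin, so flip y.
let flippedRect = CGRect(x: imageRect.minX,
                         y: sceneView.bounds.height - imageRect.maxY,
                         width: imageRect.width,
                         height: imageRect.height)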
Does anyone know why I'm having this problem? I've been stuck on it for quite a while now and can't seem to figure it out.
I downloaded Apple's project about recognizing Objects in Live Capture.
When I tried the app I saw that if I put the object to be recognized at the top or the bottom of the camera view, the app doesn't recognize it:
In this first image the banana is in the center of the camera view and the app is able to recognize it.
image object in center
In these two images the banana is near the camera view's border and the app is not able to recognize the object.
image object on top
image object on bottom
This is how session and previewLayer are set:
func setupAVCapture() {
    var deviceInput: AVCaptureDeviceInput!

    // Select a video device, make an input
    let videoDevice = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back).devices.first
    do {
        deviceInput = try AVCaptureDeviceInput(device: videoDevice!)
    } catch {
        print("Could not create video device input: \(error)")
        return
    }

    session.beginConfiguration()
    session.sessionPreset = .vga640x480 // Model image size is smaller.

    // Add a video input
    guard session.canAddInput(deviceInput) else {
        print("Could not add video device input to the session")
        session.commitConfiguration()
        return
    }
    session.addInput(deviceInput)

    // Add a video data output
    if session.canAddOutput(videoDataOutput) {
        session.addOutput(videoDataOutput)
        videoDataOutput.alwaysDiscardsLateVideoFrames = true
        videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)]
        videoDataOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
    } else {
        print("Could not add video data output to the session")
        session.commitConfiguration()
        return
    }

    let captureConnection = videoDataOutput.connection(with: .video)
    // Always process the frames
    captureConnection?.isEnabled = true
    do {
        try videoDevice!.lockForConfiguration()
        let dimensions = CMVideoFormatDescriptionGetDimensions((videoDevice?.activeFormat.formatDescription)!)
        bufferSize.width = CGFloat(dimensions.width)
        bufferSize.height = CGFloat(dimensions.height)
        videoDevice!.unlockForConfiguration()
    } catch {
        print(error)
    }
    session.commitConfiguration()

    previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
    rootLayer = previewView.layer
    previewLayer.frame = rootLayer.bounds
    rootLayer.addSublayer(previewLayer)
}
You can download the project here.
I am wondering if this is normal or not.
Are there any solutions to fix it?
Does it take square photos to process with Core ML, so that the top and bottom bands are not included?
Any hints? Thanks
That's probably because the imageCropAndScaleOption is set to centerCrop.
The Core ML model expects a square image but the video frames are not square. This can be fixed by changing the imageCropAndScaleOption on the VNCoreMLRequest. However, the results may not be as good as with center crop (it depends on how the model was originally trained).
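For example, a minimal change would be (a sketch; visionModel stands in for the model the sample loads):
let objectRecognition = VNCoreMLRequest(model: visionModel) { request, error in
    // handle results
}
// .scaleFill (or .scaleFit) feeds the whole frame to the model instead of
// cropping the center square, at the cost of distorting / letterboxing it.
objectRecognition.imageCropAndScaleOption = .scaleFill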
See also VNImageCropAndScaleOption in the Apple docs.
How do I maintain an 'image size' in the previewLayer similar to that of the camera app on the iPhone?
At the moment my AVCaptureVideoPreviewLayer is presenting the frames from the capture session fitted to the rectangle size of the preview layer. My preview layer's dimensions are quite small, so I'd like to maintain the same sort of scale as the iPhone's camera app and view whatever portion of the image fits within the bounds of the preview layer, effectively keeping the same image size as the built-in camera app but with a smaller field of view.
I've tried setting the preview layer's videoGravity to .resizeAspectFill, but it still just fits the video output to the frame. I've also tried changing the videoScaleAndCropFactor and the AVCaptureDevice's videoZoomFactor, with no success.
// how I've set up my device:
let availableDevices = AVCaptureDevice.DiscoverySession(
deviceTypes: [.builtInWideAngleCamera],
mediaType: AVMediaType.video,
position: .back).devices
// the preview layer
let previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
self.previewLayer = previewLayer
self.previewLayer.frame = CGRect(x: 0, y: 0,
width: self.spineView.spine.frame.width,
height: self.spineView.spine.frame.height)
self.previewLayer.cornerRadius = 5.0
self.previewLayer.videoGravity = .resizeAspectFill
self.spineView.spine.layer.addSublayer(self.previewLayer)
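The videoZoomFactor attempt looked roughly like this (a sketch; device stands for the AVCaptureDevice backing the session and the zoom value is illustrative):
if let device = availableDevices.first {
    do {
        try device.lockForConfiguration()
        // Clamp the requested zoom to what the active format supports.
        device.videoZoomFactor = min(2.0, device.activeFormat.videoMaxZoomFactor)
        device.unlockForConfiguration()
    } catch {
        print("Could not lock the capture device: \(error)")
    }
}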
I am working on this piece of Swift 3 code:
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
let videoCaptureDevice = AVCaptureDevice.defaultDevice(withDeviceType: AVCaptureDeviceType.builtInWideAngleCamera, mediaType: AVMediaTypeVideo, position: .back)
let videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
if captureSession.canAddInput(videoInput) {
    captureSession.addInput(videoInput)
}
Then, I take a picture with an AVCapturePhotoOutput object and I get the picture in an AVCapturePhotoCaptureDelegate object.
It works fine.
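Roughly, the capture call and the delegate callback look like this (a minimal Swift 3 sketch; what I do with the data afterwards is trimmed):
// Trigger the capture (photoOutput is the AVCapturePhotoOutput added to the session).
photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)

// AVCapturePhotoCaptureDelegate (iOS 10 / Swift 3 signature).
func capture(_ captureOutput: AVCapturePhotoOutput,
             didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?,
             previewPhotoSampleBuffer: CMSampleBuffer?,
             resolvedSettings: AVCaptureResolvedPhotoSettings,
             bracketSettings: AVCaptureBracketedStillImageSettings?,
             error: Error?) {
    guard let sampleBuffer = photoSampleBuffer,
          let data = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: sampleBuffer,
                                                                      previewPhotoSampleBuffer: previewPhotoSampleBuffer) else {
        return
    }
    let image = UIImage(data: data)
    // ... use the image
}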
What I want to do is take a picture with the iPhone 7 Plus dual camera. I want to get two pictures, like the official iOS camera app:
- One picture with background blur
- A second picture, without blur
Do you think it is possible?
Thanks