When I enable LivePhotoCapture on my AVCapturePhotoOutput and switch to builtInUltraWideCamera on my iPhone 12, I get a distorted image on the preview layer. The issue goes away if LivePhotoCapture is disabled.
This issue isn't reproducible on iPhone 13 Pro.
I've tried playing with the videoGravity settings, but no luck. Any tips are appreciated!
On my AVCapturePhotoOutput:
if self.photoOutput.isLivePhotoCaptureSupported {
    self.photoOutput.isLivePhotoCaptureEnabled = true
}
Preview layer:
videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
videoPreviewLayer.videoGravity = .resizeAspect
videoPreviewLayer.connection?.videoOrientation = .portrait
previewView.layer.addSublayer(videoPreviewLayer)
self.captureSession.startRunning()
self.videoPreviewLayer.frame = self.previewView.bounds
Result (the picture is mirrored, but that's not the problem; the distortion is on the right and bottom edges of the picture):
Going off this post, which describes how to get the first window (How to resolve: 'keyWindow' was deprecated in iOS 13.0), has definitely helped me learn more about this, but it didn't get me to the solution I need.
UIApplication.shared.windows.last works like a charm on iOS 13, 14 and 15. However, it's deprecated as of iOS 15, so I am adding an #available statement to make sure it keeps working properly in the future.
This is my code, and I cannot seem to get the view I am displaying to show on top of the UIKeyboard...
if #available(iOS 15.0, *) {
    let scenes = UIApplication.shared.connectedScenes.first as? UIWindowScene
    let window = scenes?.windows.last
    if let window = window {
        layer.zPosition = CGFloat(MAXFLOAT)
        frame = window.bounds
        window.addSubview(self)
    }
} else {
    if let window = UIApplication.shared.windows.last {
        layer.zPosition = CGFloat(Float.greatestFiniteMagnitude)
        frame = window.bounds
        window.addSubview(self)
    }
}
To reiterate:
This code works on iOS 13 & iOS 15:
if let window = UIApplication.shared.windows.last {
    layer.zPosition = CGFloat(Float.greatestFiniteMagnitude)
    frame = window.bounds
    window.addSubview(self)
}
This would be fine. However, as of iOS 15, UIApplication.shared.windows has been deprecated. The note in UIApplication.h states: 'Use UIWindowScene.windows on a relevant window scene instead.'
Moreover, this code does NOT work on iOS 15.0:
if #available(iOS 15.0, *) {
    let scenes = UIApplication.shared.connectedScenes.first as? UIWindowScene
    let window = scenes?.windows.last
    if let window = window {
        layer.zPosition = CGFloat(MAXFLOAT)
        frame = window.bounds
        window.addSubview(self)
    }
}
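For reference, one workaround I've seen suggested (untested here) is to filter for the foreground-active scene and use its keyWindow property, which was added in iOS 15, instead of grabbing connectedScenes.first:
if #available(iOS 15.0, *) {
    let keyWindow = UIApplication.shared.connectedScenes
        .compactMap { $0 as? UIWindowScene }
        .first { $0.activationState == .foregroundActive }?
        .keyWindow
    if let window = keyWindow {
        layer.zPosition = CGFloat(Float.greatestFiniteMagnitude)
        frame = window.bounds
        window.addSubview(self)
    }
}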
I am working on a Swift project using Xcode 12, and after the iOS 14 upgrade the play icon is missing from AVPlayer. The issue is very weird.
When I load the app for the first time I am able to see the play icon (ScreenShot1). After going back to the home screen and coming back, the play icon is not visible (ScreenShot2).
ScreenShot1
ScreenShot2
Currently, the player view controller (mediaPlayer) is embedded inside a UIImageView. Below is the code:
imageView.isUserInteractionEnabled = true
mediaPlayer.view.translatesAutoresizingMaskIntoConstraints = false
imageView.addSubview(mediaPlayer.view)
mediaPlayer.view.pin(to: imageView)
guard let itemURL = URL(string: asset.AssetURL) else { return }
let item = AVPlayerItem(url: itemURL)
let player = AVPlayer(playerItem: item)
mediaPlayer.player = player
mediaPlayer.videoGravity = .resizeAspectFill
mediaPlayer.entersFullScreenWhenPlaybackBegins = true
mediaPlayer.exitsFullScreenWhenPlaybackEnds = true
I am trying to build a simple Swift 4 macOS app to use an iPhone camera connected to my Mac.
I started with a blank macOS template app, turned on the sandbox entitlements for camera, mic and USB, and added the following code to my ViewController.
import Cocoa
import AVFoundation

class ViewController: NSViewController {

    @IBOutlet weak var camera: NSView!

    override func viewDidLoad() {
        super.viewDidLoad()
        camera.layer = CALayer()

        let session: AVCaptureSession = AVCaptureSession()
        session.sessionPreset = AVCaptureSession.Preset.high
        let device: AVCaptureDevice = AVCaptureDevice.default(for: AVMediaType.video)!
        // let listdevices = AVCaptureDevice.devices()

        do {
            try session.addInput(AVCaptureDeviceInput(device: device))

            // Preview
            let previewLayer: AVCaptureVideoPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
            let myView: NSView = self.view
            previewLayer.frame = myView.bounds
            previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
            self.camera.layer?.addSublayer(previewLayer)
            session.startRunning()
            // print(listdevices)
            // print(device)
        } catch {
            print(error)
        }
    }

    override var representedObject: Any? {
        didSet {
            // Update the view, if already loaded.
        }
    }
}
In the storyboard I have dropped in a Custom View.
The app builds OK and uses the FaceTime camera no problem; however, with the iPhone connected I don't see it as a device that AVFoundation can use. I'm not sure of the next steps to get the previewLayer to use the USB-connected camera, i.e. the iPhone.
P.S. The orientation needs to be landscape for all cameras.
According to this Apple Developer Forum post, capturing the camera of a connected iOS device from a macOS app is not supported.
The closest you can do (as the post suggests) is to capture the screen of the iOS device while the camera (Camera.app) is running, effectively capturing the live camera preview (or you can roll your own companion camera app on iOS, if you want to remove the Camera app's UI from the captured screen).
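If you go the screen-capture route, note that connected iOS devices only show up to AVFoundation after you opt in to screen-capture (DAL) devices via CoreMediaIO. A rough sketch of that opt-in (the constants are CoreMediaIO's documented ones; the wrapper function is illustrative and untested here):
import AVFoundation
import CoreMediaIO

// Allow connected iOS devices to appear as AVFoundation capture devices (screen-capture / DAL devices).
func enableDALDevices() {
    var property = CMIOObjectPropertyAddress(
        mSelector: CMIOObjectPropertySelector(kCMIOHardwarePropertyAllowScreenCaptureDevices),
        mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeGlobal),
        mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementMaster))
    var allow: UInt32 = 1
    CMIOObjectSetPropertyData(CMIOObjectID(kCMIOObjectSystemObject),
                              &property, 0, nil,
                              UInt32(MemoryLayout<UInt32>.size), &allow)
}
Once that's set, the iPhone should show up in AVCaptureDevice.devices() (typically as a muxed-media device) and can be added to the session like any other input.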
Hello Community,
I am trying to build an app with Swift 4 and the great upcoming ARKit framework, but I am stuck. I need to record a video with the framework, or at least capture a UIImage sequence, but I don't know how.
This is what I've tried:
In ARKit you have a session which tracks your world. This session has a capturedImage property where you can get the current camera image. So I created a Timer which appends the capturedImage to a list every 0.1 s, as sketched below. This would work for me, but as soon as I start the Timer by tapping a "start" button, the camera starts to lag. I don't think it's the Timer itself, because if I invalidate the Timer by tapping a "stop" button, the camera is fluent again.
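Roughly what I'm doing (a minimal sketch; frames, timer and sceneView are my own properties):
var frames: [CVPixelBuffer] = []
var timer: Timer?

func startCapturing() {
    timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
        // capturedImage lives on the session's current ARFrame
        if let pixelBuffer = self?.sceneView.session.currentFrame?.capturedImage {
            self?.frames.append(pixelBuffer)
        }
    }
}

func stopCapturing() {
    timer?.invalidate()
    timer = nil
}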
Is there a way to fix the lag, or is there an even better way to do this?
Thanks
I was able to use ReplayKit to do exactly that.
To see what ReplayKit is like
On your iOS device, go to Settings -> Control Center -> Customize Controls. Move "Screen Recording" to the "Include" section, and swipe up to bring up Control Center. You should now see the round Screen Recording icon, and you'll notice that when you press it, iOS starts to record your screen. Tapping the blue bar will end recording and save the video to Photos.
Using ReplayKit, you can make your app invoke the screen recorder and capture your ARKit content.
How-to
To start recording:
RPScreenRecorder.shared().startRecording { error in
// Handle error, if any
}
To stop recording:
RPScreenRecorder.shared().stopRecording(handler: { (previewVc, error) in
// Do things
})
After you're done recording, .stopRecording gives you an optional RPPreviewViewController, which is
An object that displays a user interface where users preview and edit a screen recording created with ReplayKit.
So in our example, you can present previewVc if it isn't nil:
RPScreenRecorder.shared().stopRecording(handler: { (previewVc, error) in
    if let previewVc = previewVc {
        previewVc.delegate = self
        self.present(previewVc, animated: true, completion: nil)
    }
})
You'll be able to edit and save the video right from the previewVc, but you might want to make self (or someone) the RPPreviewViewControllerDelegate, so you can easily dismiss the previewVc when you're finished.
extension MyViewController: RPPreviewViewControllerDelegate {
    func previewControllerDidFinish(_ previewController: RPPreviewViewController) {
        // Called when the preview vc is ready to be dismissed
        previewController.dismiss(animated: true, completion: nil)
    }
}
Caveats
You'll notice that startRecording will record "the app display", so any view you have (buttons, labels, etc.) will be recorded as well.
I found it useful to hide the controls while recording and let my users know that tapping the screen stops recording, but I've also read about others having success putting their essential controls on a separate UIWindow.
Excluding views from recording
The separate UIWindow trick works. I was able to make an overlay window where I had a record button and a timer, and these weren't recorded.
let overlayWindow = UIWindow(frame: view.frame)
let recordButton = UIButton( ... )
overlayWindow.backgroundColor = UIColor.clear
The UIWindow will be hidden by default. So when you want to show your controls, you must set isHidden to false.
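For completeness, a rough sketch of showing the overlay (assuming the button is configured elsewhere; on newer iOS versions a separate window may also need a root view controller or to be attached to a window scene):
overlayWindow.windowLevel = .alert        // keep the overlay above the main window
overlayWindow.addSubview(recordButton)
overlayWindow.isHidden = false            // a UIWindow is hidden by default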
Best of luck to you!
Use a custom renderer.
Render the scene using the custom renderer, then get the texture from the custom renderer, and finally convert that to a CVPixelBufferRef.
- (void)viewDidLoad {
    [super viewDidLoad];

    self.rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    self.bytesPerPixel = 4;
    self.bitsPerComponent = 8;
    self.bitsPerPixel = 32;
    self.textureSizeX = 640;
    self.textureSizeY = 960;

    // Set the view's delegate
    self.sceneView.delegate = self;

    // Show statistics such as fps and timing information
    self.sceneView.showsStatistics = YES;

    // Create a new scene
    SCNScene *scene = [SCNScene scene]; // [SCNScene sceneNamed:@"art.scnassets/ship.scn"];

    // Set the scene to the view
    self.sceneView.scene = scene;
    self.sceneView.preferredFramesPerSecond = 30;

    [self setupMetal];
    [self setupTexture];
    self.renderer.scene = self.sceneView.scene;
}
- (void)setupMetal
{
    if (self.sceneView.renderingAPI == SCNRenderingAPIMetal) {
        self.device = self.sceneView.device;
        self.commandQueue = [self.device newCommandQueue];
        self.renderer = [SCNRenderer rendererWithDevice:self.device options:nil];
    }
    else {
        NSAssert(NO, @"Only Metal is supported");
    }
}
- (void)setupTexture
{
    MTLTextureDescriptor *descriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm_sRGB width:self.textureSizeX height:self.textureSizeY mipmapped:NO];
    descriptor.usage = MTLTextureUsageShaderRead | MTLTextureUsageRenderTarget;
    id<MTLTexture> textureA = [self.device newTextureWithDescriptor:descriptor];
    self.offscreenTexture = textureA;
}

- (void)renderer:(id <SCNSceneRenderer>)renderer willRenderScene:(SCNScene *)scene atTime:(NSTimeInterval)time
{
    [self doRender];
}

- (void)doRender
{
    if (self.rendering) {
        return;
    }
    self.rendering = YES;

    CGRect viewport = CGRectMake(0, 0, self.textureSizeX, self.textureSizeY);
    id<MTLTexture> texture = self.offscreenTexture;

    MTLRenderPassDescriptor *renderPassDescriptor = [MTLRenderPassDescriptor new];
    renderPassDescriptor.colorAttachments[0].texture = texture;
    renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadActionClear;
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0, 1, 0, 1.0);
    renderPassDescriptor.colorAttachments[0].storeAction = MTLStoreActionStore;

    id<MTLCommandBuffer> commandBuffer = [self.commandQueue commandBuffer];

    self.renderer.pointOfView = self.sceneView.pointOfView;
    [self.renderer renderAtTime:0 viewport:viewport commandBuffer:commandBuffer passDescriptor:renderPassDescriptor];

    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> _Nonnull bf) {
        [self.recorder writeFrameForTexture:texture];
        self.rendering = NO;
    }];
    [commandBuffer commit];
}
Then, in the recorder, set up an AVAssetWriterInputPixelBufferAdaptor with an AVAssetWriter, and convert the texture to a CVPixelBufferRef:
- (void)writeFrameForTexture:(id<MTLTexture>)texture {
    CVPixelBufferPoolRef pixelBufferPool = self.assetWriterPixelBufferInput.pixelBufferPool;
    CVPixelBufferRef pixelBuffer;
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(nil, pixelBufferPool, &pixelBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *pixelBufferBytes = CVPixelBufferGetBaseAddress(pixelBuffer);

    // Copy the texture contents into the pixel buffer.
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    MTLRegion region = MTLRegionMake2D(0, 0, texture.width, texture.height);
    [texture getBytes:pixelBufferBytes bytesPerRow:bytesPerRow fromRegion:region mipmapLevel:0];

    // presentationTime is tracked elsewhere, e.g. derived from a frame counter and the target frame rate.
    [self.assetWriterPixelBufferInput appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CVPixelBufferRelease(pixelBuffer);
}
Make sure the custom renderer and the adaptor share the same pixel encoding.
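For reference, a rough sketch of that writer/adaptor setup (shown in Swift for brevity; the output URL is a parameter you supply, the 640x960 dimensions match the texture above, and BGRA matches its MTLPixelFormatBGRA8Unorm_sRGB format):
import AVFoundation

func makeWriter(outputURL: URL) throws -> (AVAssetWriter, AVAssetWriterInputPixelBufferAdaptor) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)

    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 640,
        AVVideoHeightKey: 960
    ])
    input.expectsMediaDataInRealTime = true

    // The BGRA pixel format here must match the offscreen Metal texture.
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String: 640,
            kCVPixelBufferHeightKey as String: 960
        ])

    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)
    return (writer, adaptor)
}
The writeFrameForTexture: method above then pulls buffers from the adaptor's pixelBufferPool and appends them with increasing presentation times.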
I tested this with the default ship.scn, and it consumes only about 30% CPU, compared to almost 90% when using the snapshot method for every frame. And it will not pop up a permission dialog.
I have released an open source framework taking care of this. https://github.com/svtek/SceneKitVideoRecorder
It works by getting the drawables from the scene view's Metal layer.
You can attach a display link to get your renderer called as the screen refreshes:
displayLink = CADisplayLink(target: self, selector: #selector(updateDisplayLink))
displayLink?.add(to: .main, forMode: .commonModes)
And then grab the drawable from the Metal layer:
let metalLayer = sceneView.layer as! CAMetalLayer
let nextDrawable = metalLayer.nextDrawable()
Be wary that the nextDrawable() call uses up one of the layer's drawables. You should call it as rarely as possible, and do so inside an autoreleasepool {} so the drawable gets released properly and replaced with a new one.
Then you should read the MTLTexture from the drawable into a pixel buffer, which you can append to an AVAssetWriter to create a video.
if let destinationTexture = nextDrawable?.texture {
    destinationTexture.getBytes(...)
}
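Putting those pieces together, a rough per-frame sketch (assuming sceneView, an AVAssetWriterInputPixelBufferAdaptor named adaptor, and a presentationTime value maintained elsewhere):
@objc func updateDisplayLink() {
    autoreleasepool {
        guard let metalLayer = sceneView.layer as? CAMetalLayer,
              let drawable = metalLayer.nextDrawable() else { return }

        // Pull a pixel buffer from the adaptor's pool and copy the drawable's texture into it.
        var pixelBuffer: CVPixelBuffer?
        guard let pool = adaptor.pixelBufferPool,
              CVPixelBufferPoolCreatePixelBuffer(nil, pool, &pixelBuffer) == kCVReturnSuccess,
              let buffer = pixelBuffer else { return }

        let texture = drawable.texture
        CVPixelBufferLockBaseAddress(buffer, [])
        if let bytes = CVPixelBufferGetBaseAddress(buffer) {
            texture.getBytes(bytes,
                             bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                             from: MTLRegionMake2D(0, 0, texture.width, texture.height),
                             mipmapLevel: 0)
        }
        CVPixelBufferUnlockBaseAddress(buffer, [])

        _ = adaptor.append(buffer, withPresentationTime: presentationTime)
    }
}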
With these in mind the rest is pretty straightforward video recording on iOS/Cocoa.
You can find all these implemented in the repo I've shared above.
I had a similar need and wanted to record the ARSceneView internally in the app, without ReplayKit, so that I could manipulate the video generated from the recording. I ended up using this project: https://github.com/lacyrhoades/SceneKit2Video. The project is made to render a SceneView to a video, but you can configure it to accept ARSceneViews. It works pretty well, and you can choose to get an image feed instead of the video using the delegate function if you like.