I have a straightforward CAMetalLayer setup, except that the CAMetalLayer is initialized on a global queue. It looks like this (call init, set up basic properties, and add it as a sublayer):
DispatchQueue.global().async {
    let renderLayer = CAMetalLayer()
    renderLayer.isOpaque = false
    renderLayer.frame = frame
    renderLayer.drawableSize = drawableSize
    renderLayer.framebufferOnly = false
    renderLayer.device = metalDevice
    renderLayer.pixelFormat = metalPixelFormat
    renderLayer.contentsScale = contentScale
    DispatchQueue.main.async {
        view.layer.addSublayer(renderLayer)
    }
}
This code runs without any runtime errors, but unfortunately, when rendering to the CAMetalLayer's drawable, no output is shown on screen. Tests show that the drawable does have a proper texture (with the desired size) and that the data is written to it as expected, but nothing appears on screen.
To my surprise, running the CAMetalLayer initialization on the main thread fixes the problem. So the following is a solution:
DispatchQueue.global().async {
    var renderLayer: CAMetalLayer!
    DispatchQueue.main.sync {
        renderLayer = CAMetalLayer()
    }
    renderLayer.isOpaque = false
    renderLayer.frame = frame
    renderLayer.drawableSize = drawableSize
    renderLayer.framebufferOnly = false
    renderLayer.device = metalDevice
    renderLayer.pixelFormat = metalPixelFormat
    renderLayer.contentsScale = contentScale
    DispatchQueue.main.async {
        view.layer.addSublayer(renderLayer)
    }
}
Since I want to run the CAMetalLayer creation entirely on a background thread, I would like to avoid calling CAMetalLayer() on the main thread. Also, I couldn't find any requirement in the documentation that CAMetalLayer be initialized on the main thread.
So my questions are:
Is it possible to call CAMetalLayer() on a background thread, and if so, how?
If it is not possible, why not?
Additional info: I am using Xcode 11 and running the code on iOS 13.1. I had the same issue with Xcode 10 and iOS 12.
Related
This problem is caused by user interface interactions such as showing the title bar while in fullscreen. That question's answer names a solution, but not how to implement that solution.
The solution is to render on a background thread. The issue is that Apple's sample code is written to cover a lot of scenarios, so most of it is extraneous for my purposes, and I can't really follow it anyway, so using Apple's code as-is isn't an option. How would I make a simple Swift Metal game render on a background thread, as concisely as possible?
Take this, for example:
class ViewController: NSViewController {
    var MetalView: MTKView {
        return view as! MTKView
    }
    var Device: MTLDevice = MTLCreateSystemDefaultDevice()!

    override func viewDidLoad() {
        super.viewDidLoad()
        MetalView.delegate = self
        MetalView.device = Device
        MetalView.colorPixelFormat = .bgra8Unorm_srgb
        Device = MetalView.device!
        //setup code
    }
}
extension ViewController: MTKViewDelegate {
    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
    }

    func draw(in view: MTKView) {
        //drawing code
    }
}
That is the start of a basic Metal game. What would that code look like if it were rendering on a background thread?
To fix that bug when showing the title bar in Metal, I need to render on a background thread. Well, how do I render on a background thread?
I've noticed this answer suggests manually redrawing 60 times a second, presumably using a loop on a background thread. But that seems... not a clean way to fix it. Is there a cleaner way?
The main trick in getting this to work seems to be setting up the CVDisplayLink. This is awkward in Swift, but doable. After some work I was able to modify the "Game" template in Xcode to use a custom view backed by CAMetalLayer instead of MTKView, and a CVDisplayLink to render in the background, as suggested in the sample code you linked — see below.
Edit Oct 22:
The approach mentioned in this thread seems to work just fine: still using an MTKView, but drawing it manually from the display link callback. Specifically, I was able to follow these steps:
Create a new macOS Game project in Xcode.
Modify GameViewController to add a CVDisplayLink, similar to below (see this question for more on using CVDisplayLink from Swift). Start the display link in viewWillAppear and stop it in viewWillDisappear.
Set mtkView.isPaused = true in viewDidLoad to disable automatic rendering, and instead explicitly call mtkView.draw() from the display link callback.
The full content of my modified GameViewController.swift is available here.
I didn't review the Renderer class for thread safety, so I can't be sure no more changes are required, but this should get you up and running.
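For reference, here is a minimal sketch of steps 2 and 3, assuming the stock Game template's GameViewController and Renderer types; the displayLink property and callback wiring below are my additions, and error handling is omitted:
class GameViewController: NSViewController {
    var renderer: Renderer!
    var mtkView: MTKView!
    var displayLink: CVDisplayLink?

    override func viewDidLoad() {
        super.viewDidLoad()
        mtkView = self.view as? MTKView
        mtkView.device = MTLCreateSystemDefaultDevice()
        renderer = Renderer(metalKitView: mtkView)
        mtkView.delegate = renderer
        mtkView.isPaused = true              // step 3: disable automatic rendering
        mtkView.enableSetNeedsDisplay = false
    }

    override func viewWillAppear() {
        super.viewWillAppear()
        // step 2: create the display link and drive mtkView.draw() from its callback
        CVDisplayLinkCreateWithActiveCGDisplays(&displayLink)
        guard let displayLink = displayLink else { return }
        CVDisplayLinkSetOutputCallback(displayLink, { (_, _, _, _, _, context) -> CVReturn in
            let controller = Unmanaged<GameViewController>.fromOpaque(context!).takeUnretainedValue()
            controller.mtkView.draw()        // runs on the display link's background thread
            return kCVReturnSuccess
        }, Unmanaged.passUnretained(self).toOpaque())
        CVDisplayLinkStart(displayLink)
    }

    override func viewWillDisappear() {
        super.viewWillDisappear()
        if let displayLink = displayLink {
            CVDisplayLinkStop(displayLink)
        }
    }
}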
Older implementation with CAMetalLayer instead of MTKView:
This is just a proof of concept and I can't guarantee it's the best way to do everything. You might find these articles helpful too:
I didn't try this idea, but given how much convenience MTKView generally provides over CAMetalLayer, it might be worth giving it a shot:
https://developer.apple.com/forums/thread/89241?answerId=268384022#268384022
Is drawing to an MTKView or CAMetalLayer required to take place on the main thread? and https://developer.apple.com/documentation/quartzcore/cametallayer/1478157-presentswithtransaction
class MyMetalView: NSView {
    var displayLink: CVDisplayLink?
    var metalLayer: CAMetalLayer!

    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        setupMetalLayer()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        setupMetalLayer()
    }

    override func makeBackingLayer() -> CALayer {
        return CAMetalLayer()
    }

    func setupMetalLayer() {
        wantsLayer = true
        metalLayer = layer as? CAMetalLayer
        metalLayer.device = MTLCreateSystemDefaultDevice()!
        // ...other configuration of the metalLayer...
    }

    // handle display link callback at 60fps
    static let _outputCallback: CVDisplayLinkOutputCallback = { (displayLink, inNow, inOutputTime, flagsIn, flagsOut, context) -> CVReturn in
        // convert opaque context pointer back into a reference to our view
        let view = Unmanaged<MyMetalView>.fromOpaque(context!).takeUnretainedValue()
        /*** render something into view.metalLayer here! ***/
        return kCVReturnSuccess
    }

    override func viewDidMoveToWindow() {
        super.viewDidMoveToWindow()

        guard CVDisplayLinkCreateWithActiveCGDisplays(&displayLink) == kCVReturnSuccess,
              let displayLink = displayLink
        else {
            fatalError("unable to create display link")
        }

        // pass a reference to this view as an opaque pointer
        guard CVDisplayLinkSetOutputCallback(displayLink, MyMetalView._outputCallback, Unmanaged<MyMetalView>.passUnretained(self).toOpaque()) == kCVReturnSuccess else {
            fatalError("unable to configure output callback")
        }

        guard CVDisplayLinkStart(displayLink) == kCVReturnSuccess else {
            fatalError("unable to start display link")
        }
    }

    deinit {
        if let displayLink = displayLink {
            CVDisplayLinkStop(displayLink)
        }
    }
}
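In case it helps, the render step inside that callback might look roughly like this. This is only a sketch of a clear pass; the commandQueue property is an assumed addition you would create once from the layer's device:
func renderFrame() {
    autoreleasepool {
        guard let drawable = metalLayer.nextDrawable(),
              let commandBuffer = commandQueue.makeCommandBuffer() else { return }

        // A minimal pass that just clears the drawable; real draw calls go here.
        let passDescriptor = MTLRenderPassDescriptor()
        passDescriptor.colorAttachments[0].texture = drawable.texture
        passDescriptor.colorAttachments[0].loadAction = .clear
        passDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0.5, alpha: 1)
        passDescriptor.colorAttachments[0].storeAction = .store

        let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDescriptor)
        encoder?.endEncoding()

        commandBuffer.present(drawable)
        commandBuffer.commit()
    }
}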
I have an MTKView whose contents I draw into a UIView. I want to swap the display from the MTKView to the UIView without any perceptible change. How can I achieve this?
Currently, I have
let strokeCIImage = CIImage(mtlTexture: metalTextureComposite...) // get MTLTexture
let imageCropCG = cicontext.createCGImage(strokeCIImage...) // convert to CGImage
let layerStroke = CALayer() // create layer
layerStroke.contents = imageCropCG // populate with CGImage
strokeUIView.layer.addSublayer(layerStroke) // add to view
strokeUIView.layerWillDraw(layerStroke) //heads up to strokeUIView
and a delegate method within layerWillDraw() that clears the MTKView.
strokeViewMetal.metalClearDisplay()
The result is that I'll see a frame drop every so often in which nothing is displayed.
In the hopes of cleanly separating the two tasks, I also tried the following:
let dispatchWorkItem = DispatchWorkItem {
    print("lyr add start")
    self.pageCanvasImage.layer.addSublayer(sublayer)
    print("lyr add end")
}

let dg = DispatchGroup()
DispatchQueue.main.async(group: dg, execute: dispatchWorkItem)

// print message when all blocks in the group finish
dg.notify(queue: DispatchQueue.main) {
    print("dispatch mtl clear")
    self.strokeCanvasMetal.setNeedsDisplay() // clear MTKView
}
The idea being to add the new CALayer to the UIImageView, and THEN clear the MTKView.
Over many screen draws, I think this results in fewer frame drops during the view swap, but I'd like a foolproof solution with NO drops. Basically, what I'm after is to clear strokeViewMetal only once strokeUIView is ready to display. Any pointers would be appreciated.
Synchronicity issues between MTKView and UIView are resolved for 99% of my tests when I set MTKView's presentsWithTransaction property to true. According to Apple's documentation:
Setting this value to true changes this default behavior so that your MTKView displays its drawable content synchronously, using whichever Core Animation transaction is current at the time the drawable's present() method is called.
Once that is done, the draw loop has to be modified from:
commandEncoder.endEncoding()
commandBuffer.present(drawable)
commandBuffer.commit()
to:
commandEncoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilScheduled() // synchronously wait until the drawable is ready
drawable.present() // call the drawable’s present() method directly
This is done to prevent Core Animation activity from completing before we're ready to present the MTKView's drawable.
With all of this set up, I can simply:
let strokeCIImage = CIImage(mtlTexture: metalTextureComposite...) // get MTLTexture
let imageCropCG = cicontext.createCGImage(strokeCIImage...) // convert to CGImage
let layerStroke = CALayer() // create layer
layerStroke.contents = imageCropCG // populate with CGImage
// the last two events will happen synchronously
strokeUIView.layer.addSublayer(layerStroke) // add to view
strokeViewMetal.metalClearDisplay() // empty out MTKView
With all of this said, I do see overlapping of the views every now and then, but at a much, much lower frequency.
Is it possible to change the background color of an AVPlayerView when used in a macOS application? I want to do this to remove the black bars when playing a video.
I've tried the following:
videoView.contentOverlayView?.wantsLayer = true
videoView.contentOverlayView?.layer?.backgroundColor = NSColor.blue.cgColor
I also tried adding these:
view.wantsLayer = true
videoView.wantsLayer = true
but the background is still black.
AVPlayerView does not have a layer right after initialization or after setting the wantsLayer property; it creates one later at some point. I was able to change the background with the following code in my AVPlayerView subclass:
override var layer: CALayer? {
    get { super.layer }
    set {
        newValue?.backgroundColor = CGColor.clear
        super.layer = newValue
    }
}
videoView.contentOverlayView?.layer?.setNeedsDisplay()
Try this; maybe you just need to update the view. However, if you post more of your code, I can try to help more.
Hello Community,
I'm trying to build an app with Swift 4 and the great upcoming ARKit framework, but I am stuck. I need to record a video with the framework, or at least capture a UIImage sequence, but I don't know how.
This is what I've tried:
In ARKit you have a session that tracks your world. This session has a capturedImage property where you can get the current image. So I created a Timer that appends the capturedImage to a list every 0.1 s. This would work for me, but as soon as I start the Timer by tapping a "start" button, the camera starts to lag. I don't think it's the Timer itself, because if I invalidate the Timer by tapping a "stop" button, the camera is fluent again.
Is there a way to solve the lag, or is there an even better approach?
Thanks
I was able to use ReplayKit to do exactly that.
To see what ReplayKit is like
On your iOS device, go to Settings -> Control Center -> Customize Controls. Move "Screen Recording" to the "Include" section, and swipe up to bring up Control Center. You should now see the round Screen Recording icon, and you'll notice that when you press it, iOS starts to record your screen. Tapping the blue bar will end recording and save the video to Photos.
Using ReplayKit, you can make your app invoke the screen recorder and capture your ARKit content.
How-to
To start recording:
RPScreenRecorder.shared().startRecording { error in
    // Handle error, if any
}
To stop recording:
RPScreenRecorder.shared().stopRecording(handler: { (previewVc, error) in
    // Do things
})
After you're done recording, .stopRecording gives you an optional RPPreviewViewController, which is
An object that displays a user interface where users preview and edit a screen recording created with ReplayKit.
So in our example, you can present previewVc if it isn't nil:
RPScreenRecorder.shared().stopRecording(handler: { (previewVc, error) in
    if let previewVc = previewVc {
        previewVc.delegate = self
        self.present(previewVc, animated: true, completion: nil)
    }
})
You'll be able to edit and save the video right from the previewVc, but you might want to make self (or someone) the RPPreviewViewControllerDelegate, so you can easily dismiss the previewVc when you're finished.
extension MyViewController: RPPreviewViewControllerDelegate {
    func previewControllerDidFinish(_ previewController: RPPreviewViewController) {
        // Called when the preview vc is ready to be dismissed
        previewController.dismiss(animated: true, completion: nil)
    }
}
Caveats
You'll notice that startRecording will record "the app display", so any views you have (buttons, labels, etc.) will be recorded as well.
I found it useful to hide the controls while recording and let my users know that tapping the screen stops recording, but I've also read about others having success putting their essential controls on a separate UIWindow.
Excluding views from recording
The separate UIWindow trick works. I was able to make an overlay window where I had a record button and a timer, and these weren't recorded.
let overlayWindow = UIWindow(frame: view.frame)
let recordButton = UIButton( ... )
overlayWindow.backgroundColor = UIColor.clear
The UIWindow will be hidden by default. So when you want to show your controls, you must set isHidden to false.
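Putting those pieces together, something like this should work (the window level and button wiring here are illustrative, not exactly what I used):
overlayWindow.addSubview(recordButton)   // controls in this window are not captured
overlayWindow.windowLevel = .alert       // keep the overlay above the app's main window
overlayWindow.isHidden = false           // UIWindow is hidden by default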
Best of luck to you!
Use a custom renderer.
Render the scene with the custom renderer, then get the texture from the custom renderer, and finally convert that to a CVPixelBufferRef.
- (void)viewDidLoad {
    [super viewDidLoad];

    self.rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    self.bytesPerPixel = 4;
    self.bitsPerComponent = 8;
    self.bitsPerPixel = 32;
    self.textureSizeX = 640;
    self.textureSizeY = 960;

    // Set the view's delegate
    self.sceneView.delegate = self;

    // Show statistics such as fps and timing information
    self.sceneView.showsStatistics = YES;

    // Create a new scene
    SCNScene *scene = [SCNScene scene]; // [SCNScene sceneNamed:@"art.scnassets/ship.scn"];

    // Set the scene to the view
    self.sceneView.scene = scene;

    self.sceneView.preferredFramesPerSecond = 30;

    [self setupMetal];
    [self setupTexture];
    self.renderer.scene = self.sceneView.scene;
}
- (void)setupMetal
{
    if (self.sceneView.renderingAPI == SCNRenderingAPIMetal) {
        self.device = self.sceneView.device;
        self.commandQueue = [self.device newCommandQueue];
        self.renderer = [SCNRenderer rendererWithDevice:self.device options:nil];
    }
    else {
        NSAssert(nil, @"Only Support Metal");
    }
}
- (void)setupTexture
{
    MTLTextureDescriptor *descriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm_sRGB width:self.textureSizeX height:self.textureSizeY mipmapped:NO];
    descriptor.usage = MTLTextureUsageShaderRead | MTLTextureUsageRenderTarget;
    id<MTLTexture> textureA = [self.device newTextureWithDescriptor:descriptor];
    self.offscreenTexture = textureA;
}
- (void)renderer:(id <SCNSceneRenderer>)renderer willRenderScene:(SCNScene *)scene atTime:(NSTimeInterval)time
{
    [self doRender];
}

- (void)doRender
{
    if (self.rendering) {
        return;
    }
    self.rendering = YES;

    CGRect viewport = CGRectMake(0, 0, self.textureSizeX, self.textureSizeY);
    id<MTLTexture> texture = self.offscreenTexture;

    MTLRenderPassDescriptor *renderPassDescriptor = [MTLRenderPassDescriptor new];
    renderPassDescriptor.colorAttachments[0].texture = texture;
    renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadActionClear;
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0, 1, 0, 1.0);
    renderPassDescriptor.colorAttachments[0].storeAction = MTLStoreActionStore;

    id<MTLCommandBuffer> commandBuffer = [self.commandQueue commandBuffer];

    self.renderer.pointOfView = self.sceneView.pointOfView;
    [self.renderer renderAtTime:0 viewport:viewport commandBuffer:commandBuffer passDescriptor:renderPassDescriptor];

    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> _Nonnull bf) {
        [self.recorder writeFrameForTexture:texture];
        self.rendering = NO;
    }];

    [commandBuffer commit];
}
Then in the recorder, set up the AVAssetWriterInputPixelBufferAdaptor with AVAssetWriter. And convert the texture to CVPixelBufferRef:
- (void)writeFrameForTexture:(id<MTLTexture>)texture {
    CVPixelBufferPoolRef pixelBufferPool = self.assetWriterPixelBufferInput.pixelBufferPool;
    CVPixelBufferRef pixelBuffer;
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(nil, pixelBufferPool, &pixelBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *pixelBufferBytes = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    MTLRegion region = MTLRegionMake2D(0, 0, texture.width, texture.height);
    [texture getBytes:pixelBufferBytes bytesPerRow:bytesPerRow fromRegion:region mipmapLevel:0];

    // presentationTime is assumed to be computed elsewhere (e.g. from the frame index and frame rate)
    [self.assetWriterPixelBufferInput appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CVPixelBufferRelease(pixelBuffer);
}
Make sure the custom renderer and the adaptor share the same pixel encoding.
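For reference, the writer and adaptor that writeFrameForTexture: relies on might be set up roughly like this. I'm sketching it in Swift to match the asker's project, with the same 640x960 BGRA assumptions; outputURL and session timing are yours to provide, and this would need to run in a throwing context:
import AVFoundation

let assetWriter = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)

let videoSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 640,
    AVVideoHeightKey: 960
]
let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
writerInput.expectsMediaDataInRealTime = true

// The pixel format here must match the offscreen texture (BGRA).
let sourceAttributes: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
    kCVPixelBufferWidthKey as String: 640,
    kCVPixelBufferHeightKey as String: 960
]
let assetWriterPixelBufferInput = AVAssetWriterInputPixelBufferAdaptor(
    assetWriterInput: writerInput,
    sourcePixelBufferAttributes: sourceAttributes)

assetWriter.add(writerInput)
assetWriter.startWriting()
assetWriter.startSession(atSourceTime: .zero)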
I tested this with the default ship.scn, and it only consumes about 30% CPU, compared to almost 90% when using the snapshot method for every frame. And this will not pop up a permission dialog.
I have released an open-source framework that takes care of this: https://github.com/svtek/SceneKitVideoRecorder
It works by getting the drawables from the scene view's Metal layer.
You can attach a display link to get your renderer called as the screen refreshes:
displayLink = CADisplayLink(target: self, selector: #selector(updateDisplayLink))
displayLink?.add(to: .main, forMode: .commonModes)
And then grab the drawable from the Metal layer:
let metalLayer = sceneView.layer as! CAMetalLayer
let nextDrawable = metalLayer.nextDrawable()
Be aware that the nextDrawable() call consumes drawables. You should call it as rarely as possible, and do so inside an autoreleasepool {} so the drawable gets released properly and replaced with a new one.
Then you should read the MTLTexture from the drawable into a pixel buffer, which you can append to an AVAssetWriter to create a video.
let destinationTexture = currentDrawable.texture
destinationTexture.getBytes(...)
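Putting those pieces together, one capture step might look roughly like this. This is only a sketch; the adaptor and the presentation time are assumed to come from your AVAssetWriter setup:
// `adaptor` is an assumed AVAssetWriterInputPixelBufferAdaptor property set up elsewhere.
func captureFrame(from metalLayer: CAMetalLayer, at time: CMTime) {
    autoreleasepool {
        guard let drawable = metalLayer.nextDrawable(),
              let pool = adaptor.pixelBufferPool else { return }

        var pixelBufferOut: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(nil, pool, &pixelBufferOut)
        guard let pixelBuffer = pixelBufferOut else { return }

        // Copy the drawable's texture into the pixel buffer.
        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        let texture = drawable.texture
        texture.getBytes(CVPixelBufferGetBaseAddress(pixelBuffer)!,
                         bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                         from: MTLRegionMake2D(0, 0, texture.width, texture.height),
                         mipmapLevel: 0)
        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

        // Append to the writer via the pixel buffer adaptor.
        if !adaptor.append(pixelBuffer, withPresentationTime: time) {
            print("Failed to append pixel buffer")
        }
    }
}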
With these in mind, the rest is pretty straightforward video recording on iOS/Cocoa.
You can find all these implemented in the repo I've shared above.
I had a similar need and wanted to record the ARSceneView in the app internally, without ReplayKit, so that I could manipulate the video generated from the recording. I ended up using this project: https://github.com/lacyrhoades/SceneKit2Video. The project is made to render a SceneView to a video, but you can configure it to accept ARSceneViews. It works pretty well, and you can choose to get an image feed instead of the video via the delegate function if you like.
I am trying to take a picture every 2 seconds by using a while loop, but when I try this the screen freezes.
This is the function that takes the photo:
func didPressTakePhoto() {
    if let videoConnection = stillImageOutput?.connectionWithMediaType(AVMediaTypeVideo) {
        videoConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
        stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: {
            (sampleBuffer, error) in
            if sampleBuffer != nil {
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
                let dataProvider = CGDataProviderCreateWithCFData(imageData)
                let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, .RenderingIntentDefault)
                let image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Right)

                UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)

                // Adds every image taken to an array each time the while loop loops,
                // which will then be used to create a timelapse.
                self.images.append(image)
            }
        })
    }
}
To take the pictures, I have a button that calls this function in a while loop while a variable called count is equal to 0; when the end button is pressed, the variable is set to 1, so the while loop ends.
This is what the startPictureButton action looks like:
@IBAction func TakeScreanshotClick(sender: AnyObject) {
    TipsView.hidden = true
    XBtnTips.hidden = true
    self.takePictureBtn.hidden = true
    self.stopBtn.hidden = false
    controls.hidden = true
    ExitBtn.hidden = true
    PressedLbl.text = "Started"
    print("started")

    while count == 0 {
        didPressTakePhoto()
        print(images)
        pressed = pressed + 1
        PressedLbl.text = "\(pressed)"
        print(pressed)
        sleep(2)
    }
}
But when I run this and start the timelapse the screen looks frozen.
Does anyone know how to stop the freeze from happening, while still adding each image taken to an array, so that I can turn it into a video?
The problem is that the method that processes clicks on the button (TakeScreanshotClick method) is run on the UI thread. So, if this method never exits, the UI thread gets stuck in it, and the UI freezes.
To avoid this, you can run your loop on a background thread (read about NSOperation and NSOperationQueue). Occasionally you will need to dispatch something from the background thread back to the UI thread (for instance, commands for UI updates).
UPDATE: Apple has really great documentation (the best I've seen so far). Have a look at the Apple Concurrency Programming Guide.
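For example, the capture loop could look roughly like this (a sketch in current Swift syntax, using GCD rather than NSOperationQueue for brevity; the UI work and the photo call are dispatched back to the main thread):
DispatchQueue.global(qos: .userInitiated).async {
    while self.count == 0 {    // note: count should also be read/written in a thread-safe way
        DispatchQueue.main.async {
            self.didPressTakePhoto()          // kicks off an asynchronous capture
            self.pressed = self.pressed + 1
            self.PressedLbl.text = "\(self.pressed)"
        }
        Thread.sleep(forTimeInterval: 2)      // only blocks this background thread
    }
}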
You are calling the sleep command on the main UI thread, thus freezing all other activity.
Also, I can't see where you set count = 1? Wouldn't the while loop continue forever?