I have an MTKView whose contents I draw into a UIView. I want to swap the display from the MTKView to the UIView without any perceptible change. How can I achieve this?
Currently, I have:
let strokeCIImage = CIImage(mtlTexture: metalTextureComposite...) // get MTLTexture
let imageCropCG = cicontext.createCGImage(strokeCIImage...) // convert to CGImage
let layerStroke = CALayer() // create layer
layerStroke.contents = imageCropCG // populate with CGImage
strokeUIView.layer.addSublayer(layerStroke) // add to view
strokeUIView.layerWillDraw(layerStroke) //heads up to strokeUIView
and a delegate method within layerWillDraw() that clears the MTKView.
strokeViewMetal.metalClearDisplay()
The result is that, every so often, I see a dropped frame in which nothing is displayed.
In the hopes of cleanly separating the two tasks, I also tried the following:
let dispatchWorkItem = DispatchWorkItem {
    print("lyr add start")
    self.pageCanvasImage.layer.addSublayer(sublayer)
    print("lyr add end")
}

let dg = DispatchGroup()
DispatchQueue.main.async(group: dg, execute: dispatchWorkItem)

// print message when all blocks in the group finish
dg.notify(queue: DispatchQueue.main) {
    print("dispatch mtl clear")
    self.strokeCanvasMetal.setNeedsDisplay() // clear MTKView
}
The idea being to add the new CALayer to the UIImageView first, and THEN clear the MTKView. Over many screen draws, I think this results in fewer frame drops during the view swap, but I'd like a foolproof solution with NO drops. Basically, what I'm after is to clear strokeViewMetal only once strokeUIView is ready to display. Any pointers would be appreciated.
Synchronicity issues between MTKView and UIView are resolved for 99% of my tests when I set MTKView's presentsWithTransaction property to true. According to Apple's documentation:
Setting this value to true changes this default behavior so that your MTKView displays its drawable content synchronously, using whichever Core Animation transaction is current at the time the drawable's present() method is called.
Once that is done, the draw loop has to be modified from:
commandEncoder.endEncoding()
commandBuffer.present(drawable)
commandBuffer.commit()
to:
commandEncoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilScheduled() // synchronously wait until the drawable is ready
drawable.present() // call the drawable’s present() method directly
This is done to prevent the current Core Animation transaction from ending before we're ready to present the MTKView's drawable.
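For context, here is a minimal sketch of what the full draw callback might look like with presentsWithTransaction enabled. This is an assumption about the surrounding code, not the original poster's draw loop; metalCommandQueue and the encoding step are hypothetical placeholders:
func draw(in view: MTKView) {
    // metalCommandQueue is a hypothetical MTLCommandQueue created at setup
    guard let drawable = view.currentDrawable,
          let passDescriptor = view.currentRenderPassDescriptor,
          let commandBuffer = metalCommandQueue.makeCommandBuffer(),
          let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDescriptor) else { return }

    // ... encode draw calls here ...
    encoder.endEncoding()

    commandBuffer.commit()
    commandBuffer.waitUntilScheduled() // block until the work is scheduled on the GPU
    drawable.present()                 // present within the current CA transaction
}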
With all of this set up, I can simply:
let strokeCIImage = CIImage(mtlTexture: metalTextureComposite...) // get MTLTexture
let imageCropCG = cicontext.createCGImage(strokeCIImage...) // convert to CGImage
let layerStroke = CALayer() // create layer
layerStroke.contents = imageCropCG // populate with CGImage
// the last two events will happen synchronously
strokeUIView.layer.addSublayer(layerStroke) // add to view
strokeViewMetal.metalClearDisplay() // empty out MTKView
With all of this said, I do still see the views overlap every now and then, but at a much, much lower frequency.
So I've got a background view with a gradient sublayer, animating continuously to change the colors slowly. I'm doing it with a CATransaction, because I need to animate other properties as well:
CATransaction.begin()
gradientLayer.add(colorAnimation, forKey: "colors")
// other animations
CATransaction.setCompletionBlock({
    // start animation again, loop forever
})
CATransaction.commit()
Now I want to replicate this gradient animation on, say, the title of a button.
Note 1: I can't just "make a hole" in the button, if such a thing is possible, because I might have other opaque views between the button and the background.
Note 2: The gradient position on the button is not important. I don't want the text gradient to replicate the exact colors underneath, but rather to mimic the "mood" of the background.
So when the button is created, I add its gradient sublayer to a list of registered layers, that the background manager will update as well:
func register(layer: CAGradientLayer) {
    let pointer = Unmanaged.passUnretained(layer).toOpaque()
    registeredLayers.addPointer(pointer)
}
So while it's easy to animate the text gradient at the next iteration of the animation, I would prefer that the button start animating as soon as it's added, since the animation usually takes a few seconds. How can I copy the background animation, i.e. set the text gradient to the current state of the background animation, and animate it with the right remaining duration and timing function?
The solution was indeed to use the beginTime property, as suggested by @Shivam Gaur's comment. I implemented it as follows:
// The background layer, with the original animation
var backgroundLayer: CAGradientLayer!

// The animation
var colorAnimation: CABasicAnimation!

// Variable to store animation begin time
var animationBeginTime: CFTimeInterval!

// Registered layers replicating the animation
private var registeredLayers: NSPointerArray = NSPointerArray.weakObjects()

...

// Somewhere in our code, the setup function
func setup() {
    colorAnimation = CABasicAnimation(keyPath: "colors")
    // do the animation setup here
    ...
}

...

// Called by an external class when we add a view that should replicate the background animation
func register(layer: CAGradientLayer) {
    // Store a pointer to the layer in our array
    let pointer = Unmanaged.passUnretained(layer).toOpaque()
    registeredLayers.addPointer(pointer)
    layer.colors = colorAnimation.toValue as! [Any]?

    // HERE'S THE KEY: compute the time elapsed since the beginning of the
    // animation, and start the animation at that offset, using 'beginTime'
    let timeElapsed = CACurrentMediaTime() - animationBeginTime
    colorAnimation.beginTime = -timeElapsed
    layer.add(colorAnimation, forKey: "colors")
    colorAnimation.beginTime = 0
}

// The function called recursively for an endless animation
func animate() {
    // Destination layer
    let toLayer = newGradient() // some function to create a new color gradient
    toLayer.frame = UIScreen.main.bounds

    // Set up the animation
    colorAnimation.fromValue = backgroundLayer.colors
    colorAnimation.toValue = toLayer.colors

    // Update the background layer
    backgroundLayer.colors = toLayer.colors

    // Update registered layers (iterate is a custom function I declared as an extension of NSPointerArray)
    registeredLayers.iterate() { obj in
        guard let layer = obj as? CAGradientLayer else { return }
        layer.colors = toLayer.colors
    }

    CATransaction.begin()
    CATransaction.setCompletionBlock({
        animate()
    })
    // Add the animation to the background
    backgroundLayer.add(colorAnimation, forKey: "colors")
    // Store the starting time
    animationBeginTime = CACurrentMediaTime()
    // Add the animation to registered layers
    registeredLayers.iterate() { obj in
        guard let layer = obj as? CAGradientLayer else { return }
        layer.add(colorAnimation, forKey: "colors")
    }
    CATransaction.commit()
}
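As a usage sketch, a newly created button could hand its gradient layer to the background manager like this (backgroundManager and titleGradient are hypothetical names, not from the code above):
// Hypothetical usage: replicate the in-flight background animation on a
// button's title gradient as soon as the button is created.
let titleGradient = CAGradientLayer()
titleGradient.frame = button.bounds
button.layer.addSublayer(titleGradient)
// mask titleGradient to the title text as needed...

backgroundManager.register(layer: titleGradient)
// register() starts the animation at the correct offset via beginTime.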
Hello Community,
I'm trying to build an app with Swift 4 and the great upcoming ARKit framework, but I am stuck. I need to capture a video with the framework, or at least a sequence of UIImages, but I don't know how.
This is what I've tried:
In ARKit you have a session which tracks your world. This session has a capturedImage property where you can get the current image. So I created a Timer which appends the capturedImage to a list every 0.1 s. This would work for me, but if I start the Timer by tapping a "start" button, the camera starts to lag. It's not about the Timer, I guess, because if I invalidate the Timer by tapping a "stop" button, the camera is fluid again.
Is there a way to solve the lag, or even a better approach?
Thanks
I was able to use ReplayKit to do exactly that.
To see what ReplayKit is like
On your iOS device, go to Settings -> Control Center -> Customize Controls. Move "Screen Recording" to the "Include" section, and swipe up to bring up Control Center. You should now see the round Screen Recording icon, and you'll notice that when you press it, iOS starts to record your screen. Tapping the blue bar will end recording and save the video to Photos.
Using ReplayKit, you can make your app invoke the screen recorder and capture your ARKit content.
How-to
To start recording:
RPScreenRecorder.shared().startRecording { error in
    // Handle error, if any
}
To stop recording:
RPScreenRecorder.shared().stopRecording(handler: { (previewVc, error) in
    // Do things
})
After you're done recording, .stopRecording gives you an optional RPPreviewViewController, which is
An object that displays a user interface where users preview and edit a screen recording created with ReplayKit.
So in our example, you can present previewVc if it isn't nil:
RPScreenRecorder.shared().stopRecording(handler: { (previewVc, error) in
    if let previewVc = previewVc {
        previewVc.delegate = self
        self.present(previewVc, animated: true, completion: nil)
    }
})
You'll be able to edit and save the video right from the previewVc, but you might want to make self (or someone) the RPPreviewViewControllerDelegate, so you can easily dismiss the previewVc when you're finished.
extension MyViewController: RPPreviewViewControllerDelegate {
    func previewControllerDidFinish(_ previewController: RPPreviewViewController) {
        // Called when the preview vc is ready to be dismissed
        previewController.dismiss(animated: true, completion: nil)
    }
}
Caveats
You'll notice that startRecording records "the app display", so any views you have (buttons, labels, etc.) will be recorded as well.
I found it useful to hide the controls while recording and let my users know that tapping the screen stops recording, but I've also read about others having success putting their essential controls on a separate UIWindow.
Excluding views from recording
The separate UIWindow trick works. I was able to make an overlay window where I had a record button and a timer, and these weren't recorded.
let overlayWindow = UIWindow(frame: view.frame)
let recordButton = UIButton( ... )
overlayWindow.backgroundColor = UIColor.clear
The UIWindow will be hidden by default. So when you want to show your controls, you must set isHidden to false.
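Putting that together, a minimal sketch of the overlay setup (the button's layout and wiring are placeholder assumptions, not the original code):
// Sketch: an overlay window whose contents are excluded from the recording.
let overlayWindow = UIWindow(frame: view.frame)
overlayWindow.backgroundColor = UIColor.clear

let recordButton = UIButton(type: .system) // configure title/target as needed
recordButton.frame = CGRect(x: 20, y: 40, width: 120, height: 44)

let overlayVC = UIViewController()
overlayVC.view.backgroundColor = UIColor.clear
overlayVC.view.addSubview(recordButton)
overlayWindow.rootViewController = overlayVC

overlayWindow.isHidden = false // UIWindows are hidden by default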
Best of luck to you!
Use a custom renderer. Render the scene with the custom renderer, then get the texture from it, and finally convert that to a CVPixelBufferRef:
- (void)viewDidLoad {
    [super viewDidLoad];

    self.rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    self.bytesPerPixel = 4;
    self.bitsPerComponent = 8;
    self.bitsPerPixel = 32;
    self.textureSizeX = 640;
    self.textureSizeY = 960;

    // Set the view's delegate
    self.sceneView.delegate = self;

    // Show statistics such as fps and timing information
    self.sceneView.showsStatistics = YES;

    // Create a new scene
    SCNScene *scene = [SCNScene scene]; // [SCNScene sceneNamed:@"art.scnassets/ship.scn"];

    // Set the scene to the view
    self.sceneView.scene = scene;
    self.sceneView.preferredFramesPerSecond = 30;

    [self setupMetal];
    [self setupTexture];

    self.renderer.scene = self.sceneView.scene;
}
- (void)setupMetal
{
    if (self.sceneView.renderingAPI == SCNRenderingAPIMetal) {
        self.device = self.sceneView.device;
        self.commandQueue = [self.device newCommandQueue];
        self.renderer = [SCNRenderer rendererWithDevice:self.device options:nil];
    }
    else {
        NSAssert(NO, @"Only Metal is supported");
    }
}
- (void)setupTexture
{
    MTLTextureDescriptor *descriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm_sRGB width:self.textureSizeX height:self.textureSizeY mipmapped:NO];
    descriptor.usage = MTLTextureUsageShaderRead | MTLTextureUsageRenderTarget;

    id<MTLTexture> textureA = [self.device newTextureWithDescriptor:descriptor];
    self.offscreenTexture = textureA;
}
- (void)renderer:(id <SCNSceneRenderer>)renderer willRenderScene:(SCNScene *)scene atTime:(NSTimeInterval)time
{
    [self doRender];
}
- (void)doRender
{
    if (self.rendering) {
        return;
    }
    self.rendering = YES;

    CGRect viewport = CGRectMake(0, 0, self.textureSizeX, self.textureSizeY);
    id<MTLTexture> texture = self.offscreenTexture;

    MTLRenderPassDescriptor *renderPassDescriptor = [MTLRenderPassDescriptor new];
    renderPassDescriptor.colorAttachments[0].texture = texture;
    renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadActionClear;
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0, 1, 0, 1.0);
    renderPassDescriptor.colorAttachments[0].storeAction = MTLStoreActionStore;

    id<MTLCommandBuffer> commandBuffer = [self.commandQueue commandBuffer];
    self.renderer.pointOfView = self.sceneView.pointOfView;
    [self.renderer renderAtTime:0 viewport:viewport commandBuffer:commandBuffer passDescriptor:renderPassDescriptor];

    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> _Nonnull bf) {
        [self.recorder writeFrameForTexture:texture];
        self.rendering = NO;
    }];
    [commandBuffer commit];
}
Then, in the recorder, set up an AVAssetWriterInputPixelBufferAdaptor with an AVAssetWriter, and convert the texture to a CVPixelBufferRef:
- (void)writeFrameForTexture:(id<MTLTexture>)texture {
    CVPixelBufferPoolRef pixelBufferPool = self.assetWriterPixelBufferInput.pixelBufferPool;
    CVPixelBufferRef pixelBuffer;
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(nil, pixelBufferPool, &pixelBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *pixelBufferBytes = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    MTLRegion region = MTLRegionMake2D(0, 0, texture.width, texture.height);
    [texture getBytes:pixelBufferBytes bytesPerRow:bytesPerRow fromRegion:region mipmapLevel:0];

    // presentationTime is assumed to be tracked elsewhere (e.g. from the frame count)
    [self.assetWriterPixelBufferInput appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CVPixelBufferRelease(pixelBuffer);
}
Make sure the custom renderer and the adaptor share the same pixel encoding.
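For reference, a sketch of what that writer and adaptor setup might look like, written in Swift. Everything here is an assumption (videoURL, the 640x960 size mirroring the texture above, the H.264 settings), not the original recorder code:
import AVFoundation

// Sketch: an AVAssetWriter whose pixel buffer adaptor matches the
// BGRA8 offscreen texture used by the renderer above.
let writer = try AVAssetWriter(outputURL: videoURL, fileType: .mp4) // videoURL is hypothetical
let settings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 640,
    AVVideoHeightKey: 960
]
let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
input.expectsMediaDataInRealTime = true

let adaptor = AVAssetWriterInputPixelBufferAdaptor(
    assetWriterInput: input,
    sourcePixelBufferAttributes: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
        kCVPixelBufferWidthKey as String: 640,
        kCVPixelBufferHeightKey as String: 960
    ]
)
writer.add(input)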
I tested this with the default ship.scn: it consumed only about 30% CPU, compared to almost 90% when using the snapshot method for every frame. And it will not pop up a permission dialog.
I have released an open-source framework that takes care of this: https://github.com/svtek/SceneKitVideoRecorder
It works by getting the drawables from the scene view's Metal layer.
You can attach a display link to get your renderer called as the screen refreshes:
displayLink = CADisplayLink(target: self, selector: #selector(updateDisplayLink))
displayLink?.add(to: .main, forMode: .commonModes)
And then grab the drawable from the Metal layer:
let metalLayer = sceneView.layer as! CAMetalLayer
let nextDrawable = metalLayer.nextDrawable()
Be aware that calling nextDrawable() consumes one of the layer's drawables. You should call it as rarely as possible, and do so inside an autoreleasepool {} so the drawable gets released properly and replaced with a new one.
Then you should read the MTLTexture from the drawable into a pixel buffer, which you can append to an AVAssetWriter to create a video.
let destinationTexture = currentDrawable.texture
destinationTexture.getBytes(...)
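A minimal sketch of that capture step, under the assumption that an AVAssetWriterInputPixelBufferAdaptor named adaptor has already been set up (the real implementation lives in the repo linked above):
// Sketch: copy the current drawable's texture into a CVPixelBuffer.
@objc func updateDisplayLink() {
    autoreleasepool {
        guard let metalLayer = sceneView.layer as? CAMetalLayer,
              let drawable = metalLayer.nextDrawable(),
              let pool = adaptor.pixelBufferPool else { return }

        let texture = drawable.texture
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(nil, pool, &pixelBuffer)
        guard let buffer = pixelBuffer else { return }

        CVPixelBufferLockBaseAddress(buffer, [])
        if let bytes = CVPixelBufferGetBaseAddress(buffer) {
            texture.getBytes(bytes,
                             bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                             from: MTLRegionMake2D(0, 0, texture.width, texture.height),
                             mipmapLevel: 0)
        }
        CVPixelBufferUnlockBaseAddress(buffer, [])
        // append buffer to the adaptor with a presentation time here...
    }
}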
With these in mind, the rest is pretty straightforward video recording on iOS/Cocoa.
You can find all these implemented in the repo I've shared above.
I had a similar need and wanted to record the ARSceneView internally in the app, without ReplayKit, so that I could manipulate the video generated from the recording. I ended up using this project: https://github.com/lacyrhoades/SceneKit2Video . The project is made to render a SceneView to a video, but you can configure it to accept ARSceneViews. It works pretty well, and you can choose to get an image feed instead of the video using the delegate function if you like.
Here, I'm creating a typical graphic (it's full-screen size, on all devices) on the fly...
func buildImage() -> UIImage {
    let wrapperA: UIView = ... // say, a picture
    let wrapperB: UIView = ... // say, some text to go on top
    let mainSize = basicImage.bounds.size

    UIGraphicsBeginImageContextWithOptions(mainSize, false, 0.0)
    basicImage.drawHierarchy(in: basicImage.bounds, afterScreenUpdates: true)
    wrapperA.drawHierarchy(in: wrapperA.bounds, afterScreenUpdates: true)
    wrapperB.drawHierarchy(in: wrapperB.bounds, afterScreenUpdates: true)

    let result: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    // so we've just created a nice big image for some reason,
    // no problem so far
    print(result?.scale)

    // I want to change that image to have a scale of 1.
    // I don't know how to do that, so I actually just
    // make a new identical one, with scale of 1
    let resultFixed: UIImage = UIImage(cgImage: result!.cgImage!,
                                       scale: 1.0,
                                       orientation: result!.imageOrientation)

    print(resultFixed.scale)
    print("Let's use only '1-scale' images to upload to things like Instagram")

    // return result
    return resultFixed

    // be sure to ask on SO if there's a way to
    // just "change the scale" rather than make new.
}
I need the final image to have a .scale of 1, but .scale is a read-only property.
The only thing I know how to do is make a whole new image copy ... but set the scale to 1 as it's being created.
Is there a better way?
Handy tip:
This was motivated by: say you're saving a large image to the user's album, and also allowing a UIActivityViewController so as to post to (for example) Instagram. As a general rule, it seems best to make the scale 1 before sending to Instagram; if the scale is, say, 3, you actually just get the top-left 1/3 of the image in your Instagram post. In terms of saving it to the iOS photo album, it seems to be harmless (perhaps even better in some ways) to set the scale to 1. (I only say "better" because, if the image is ultimately, say, emailed to a friend on a PC, it can cause less confusion if the scale is 1.) Interestingly, though, if you take a scale 2 or 3 image from the iOS Photos album and share it to Instagram, it does in fact appear properly on Instagram! (Perhaps Apple's Photos knows it is best to make the scale 1 before sending it somewhere like Instagram.)
As you say, the scale property of UIImage is read-only – therefore you cannot change it directly.
However, using UIImage's init(cgImage:scale:orientation) initialiser doesn't really copy the image – the underlying CGImage that it's wrapping (which contains the actual bitmap data) is still the same instance. It's only a new UIImage wrapper that is created.
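You can verify this yourself: CGImage is a class, so identity can be checked with ===. A quick hypothetical check, reusing result from the question's code, shows that no bitmap copy is made:
// The UIImage wrapper is new, but the underlying CGImage is shared.
let fixed = UIImage(cgImage: result!.cgImage!,
                    scale: 1.0,
                    orientation: result!.imageOrientation)
print(result!.cgImage! === fixed.cgImage!) // prints: true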
That being said, you could cut out the intermediate UIImage wrapper in this case by getting the CGImage from the context directly through CGContext's makeImage() method. For example:
func buildImage() -> UIImage? {
    // ...
    let mainSize = basicImage.bounds.size
    UIGraphicsBeginImageContextWithOptions(mainSize, false, 0.0)
    defer {
        UIGraphicsEndImageContext()
    }

    // get the current context
    guard let context = UIGraphicsGetCurrentContext() else { return nil }

    // -- do drawing here --

    // get the CGImage from the context by calling makeImage(), then wrap it in a UIImage
    // using Optional's map(_:) (as makeImage() can return nil)
    // by default, the scale of the UIImage is 1.
    return context.makeImage().map(UIImage.init(cgImage:))
}
By the way, you can change the scale of the resulting image by creating a new image:
let newScaleImage = UIImage(cgImage: oldScaleImage.cgImage!, scale: 1.0, orientation: oldScaleImage.imageOrientation)
I am trying to blur my entire GameScene when my pause button is pressed. I have a method called blurScreen(), but it doesn't seem to add the effect to the scene. Is there a way I can accomplish this, or am I doing something wrong? I have viewed other posts about this topic but haven't been able to achieve the effect.
func blurScreen() {
    let effectsNode = SKEffectNode()
    let filter = CIFilter(name: "CIGaussianBlur")
    let blurAmount = 10.0
    filter!.setValue(blurAmount, forKey: kCIInputRadiusKey)

    effectsNode.filter = filter
    effectsNode.position = self.view!.center
    effectsNode.blendMode = .Alpha

    // Add the effects node to the scene
    self.addChild(effectsNode)
}
From the SKEffectNode docs:
An SKEffectNode object renders its children into a buffer and optionally applies a Core Image filter to this rendered output.
The effect node applies a filter only to its child nodes. Your effect node has no children, so there's nothing to apply a filter to.
Probably what you want is to add an effect node to your scene early on--but don't set the filter on it yet--and put all the nodes that you'll later want to blur in as its children. When it comes time to apply a blur, set the filter on the (already existing, already with children) effect node.
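For instance, a minimal sketch of that structure (hypothetical code, reusing the question's blur parameters; note the shouldEnableEffects flag, which the next answer identifies as easy to miss):
import SpriteKit

// Sketch: keep an effect node in the scene from the start, parent the
// gameplay content to it, and only attach the filter when pausing.
class GameScene: SKScene {
    let effectsNode = SKEffectNode()

    override func didMove(to view: SKView) {
        addChild(effectsNode)
        // Add gameplay nodes as children of the effect node, not the scene:
        // effectsNode.addChild(player), etc.
    }

    func blurScreen() {
        let filter = CIFilter(name: "CIGaussianBlur")
        filter?.setValue(10.0, forKey: kCIInputRadiusKey)
        effectsNode.filter = filter
        effectsNode.shouldEnableEffects = true // see the note in the next answer
    }
}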
I had the same issue trying to blur the whole SKScene and it just wasn't working. The missing piece of the puzzle was this line:
shouldEnableEffects = true
Swift 4, from the game scene:
let blur = CIFilter(name: "CIGaussianBlur", withInputParameters: ["inputRadius": 10.0])
self.filter = blur
self.shouldRasterize = true
self.shouldEnableEffects = true
I have a main screen with a button; when the button is pressed, it should transition immediately to another scene, but it doesn't. It actually takes a few seconds. Is there a way I could load all the nodes in that scene beforehand? (For example, in the game's load screen.)
This is my code:
let pressButton = SKAction.setTexture(SKTexture(imageNamed: "playButtonP.png"))
let buttonPressed = SKAction.waitForDuration(0.15)
let buttonNormal = SKAction.setTexture(SKTexture(imageNamed: "playButton.png"))
let gameTrans = SKAction.runBlock() {
    let doors = SKTransition.doorsOpenHorizontalWithDuration(0)
    let levelerScene = LevelerScene(fileNamed: "LevelerScene")
    self.view?.presentScene(levelerScene, transition: doors)
}
playButton.runAction(SKAction.sequence([pressButton, buttonPressed, buttonNormal, gameTrans]))
You could preload the SKTextures used in LevelerScene before presenting it; once loading has finished, you present the scene. Here's an example from Apple's documentation, translated to Swift:
SKTexture.preloadTextures(arrayOfYourTextures) {
    if let scene = GameScene(fileNamed: "GameScene") {
        let skView = self.view as! SKView
        skView.presentScene(scene)
    }
}
In your case you have a couple of options:
1. Keep an array of the textures you need in LevelerScene, and preload them from GameScene:
class LevelerScene: SKScene {
    // You need to keep a strong reference to your textures to keep
    // them in memory after they've been loaded.
    let textures = [SKTexture(imageNamed: "Tex1"), SKTexture(imageNamed: "Tex2")]

    // You could now reference the texture you want using the array.
    // ...
}
Now in GameScene, when the user presses the button:
if let view = self.view {
    let leveler = LevelerScene(fileNamed: "LevelerScene")
    SKTexture.preloadTextures(leveler.textures) {
        // Done loading!
        view.presentScene(leveler)
    }
}
There's no way you can get around having to wait a little bit, but taking this approach the main thread won't get blocked and you'll be able to interact with GameScene whilst LevelerScene is loading.
You could also use this approach to make a loading SKScene for LevelerScene. GameScene would take you to the loading scene, which would load the textures and then move you to LevelerScene once it was complete.
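A rough sketch of such a loading scene (hypothetical; it assumes LevelerScene exposes its textures array as above, and uses the current SKTexture.preload(_:withCompletionHandler:) naming):
import SpriteKit

// Sketch: an intermediate scene that preloads LevelerScene's textures,
// then presents it once loading finishes.
class LoadingScene: SKScene {
    override func didMove(to view: SKView) {
        // Show a spinner or "Loading..." label here if desired.
        guard let leveler = LevelerScene(fileNamed: "LevelerScene") else { return }
        SKTexture.preload(leveler.textures) {
            // Hop to the main queue before presenting the scene.
            DispatchQueue.main.async { [weak view] in
                view?.presentScene(leveler)
            }
        }
    }
}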
It's important to note that because the reference to the textures is in LevelerScene, once LevelerScene is deinit-ed the textures will be removed from memory. Therefore, if you want to go back to LevelerScene you'll need to load the textures again.
2. You could use SKTexture.preloadTextures in GameViewController before any SKScenes have been presented. You'd need to keep a strong reference to these textures (perhaps in a singleton) which you could then reference in LevelerScene (or anywhere else you needed them in the app).
With this approach, because the SKTextures are stored outside of a scene, they won't be removed from memory when you transition to the next scene. This means you won't have to load the textures again if you leave and then go back to a scene. However, if you've got a lot of textures taking up a lot of memory you could run into some memory issues.
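A sketch of what that shared store might look like (hypothetical names throughout):
import SpriteKit

// Sketch: a shared texture store that outlives any single scene.
final class TextureStore {
    static let shared = TextureStore()
    let levelerTextures = [SKTexture(imageNamed: "Tex1"), SKTexture(imageNamed: "Tex2")]
    private init() {}

    func preload(completion: @escaping () -> Void) {
        SKTexture.preload(levelerTextures, withCompletionHandler: completion)
    }
}

// In GameViewController, before presenting any scene:
// TextureStore.shared.preload { /* textures are now resident in memory */ }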
For more information see Preloading Textures Into Memory from Working with Sprites.
Hope that helps!